So we've looked at a lot of ways to make light work in all kinds of situations, but up until now we've only been using single coefficients and values across the entire object, which doesn't look all that great. In the real world every material has some kind of variation to it: a reflective metal could have fingerprints which raise the roughness for those fragments, a wall has loads of minuscule bumps and dents, and if an object has both metallic parts and plastic parts, separate shininess values need to be used. Texture maps can be used to store this information on a per-fragment level. The full source code can be found here.

Ambient Occlusion Map

Starting off with something simple, the ambient occlusion (AO) map is a 1-channel greyscale image consisting of values 0.0 - 1.0. AO maps, as the name suggests, occlude the ambient light where a fragment isn't likely to be lit by the diffuse or specular light. This benefits two scenarios. First, if light isn't able to reach certain fragments, these dark areas will now be darker, no longer limited by the hard ambient light minimum and instead able to reach pure blackness. Secondly, in areas where no direct light affects a fragment (where the ambient light fakes global illumination), the ambient light will still be occluded by the map, allowing us to somewhat fake geometry where other maps may not be able to.

uniform sampler2D AOMap;

[...]

float AO = texture(AOMap, texCoord.xy).r;

[...]

ambience = AO * (0.01 * ambientColour);

The code is simple. We first read in the map and sample it at the current fragment, just like the colour map. We then multiply our ambient light contribution by this value. A value of 0 will "turn off" the ambient light for that specific fragment, whilst a value of 1 will act as though no AO map is being used. Note that we only sample the red channel of the AO map: because the map is greyscale, all channels hold the same value, so a vec3 is not needed.

Ambient occlusion maps are typically generated from a high-polygon mesh, a normal map or a height map (for displacement), which we will go into later on. The object is lit from all angles and the map is generated based on how much light reaches each part of the final object. Any fragments that are always occluded, partially or completely, receive values lower than 1.

Ambient occlusion doesn't only happen per texture either. Most modern games also compute AO for the entire scene buffer as a post-process (screen-space ambient occlusion, or SSAO) to obtain occlusion information between objects. For example, the corners and edges of a room will typically be slightly darker.

Roughness/Glossiness Map

Since going through our PBR shaders we've seen just how important the roughness value is and how much it's used. Once again, a flat value shouldn't be applied to the entire object, as certain areas may be rougher than others, even due to the smallest of details such as dirt, dust or grime. Similar to the AO map, roughness is stored in a single channel using values 0.0 - 1.0. Glossiness is the inverted value of roughness.

uniform sampler2D roughMap;

[...]

float roughness = texture(roughMap, texCoord.xy).r;

Before the map was applied, our roughness value was either set in main() as a single value or passed in as a uniform from the CPU (so that key binds could change it). Now we sample the roughness map to obtain this value at a per-fragment level. Of course, the level of detail is reliant on the resolution of the texture map, but bear in mind that higher resolutions consume more texture memory and bandwidth, which can lower the FPS. In the case of Blinn-Phong, the shininess value should use the inverse of the roughness (the glossiness) multiplied by a constant value. The coefficient of the diffuse should also use the inverse of the roughness.
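
As a minimal sketch of that Blinn-Phong conversion (the exponent scale of 128.0 is an assumed constant, and N and H are the usual normal and half vectors from the earlier lighting code):

float roughness = texture(roughMap, texCoord.xy).r;
float glossiness = 1.0 - roughness;                             // glossiness is the inverted roughness
float shininess = max(glossiness * 128.0, 1.0);                 // assumed constant scale; clamped to avoid a zero exponent
float specularHighlight = pow(max(dot(N, H), 0.0), shininess);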

While roughness maps can be automatically generated, it is highly recommended that you create your own. Resources exist which provide recommended values for a variety of materials, and a noise map and some overlays can be layered on top for a level of randomness. Fine details are the most important!

Specular/Metalness Map

When talking about PBR there are two options when it comes to the intensity of specular reflection: specular and metalness. A specular map is simply a greyscale image that corresponds directly to the coefficient of the specular contribution; the higher the value, the more intense and bright the specular highlight. In Blinn-Phong it is the Ks value applied to the specular calculation. This value can also be applied to PBR workflows such as Cook-Torrance to control the energy of the light. The alternative option, metalness, is mostly either a value of 0.0 or 1.0, with intermediate values only used on transitions between non-metallic and metallic materials, or where the metal is partially occluded by dirt or grime. The idea of metalness is that all metals have a defined balance of diffuse contribution and reflective contribution, and it is typically the preferred workflow because it constrains materials to physically plausible reflective properties.

uniform sampler2D specMap;

[...]

float spec = texture(specMap, texCoord.xy).r;

[...]

vec3 S = spec * (lightColour * specularHighlight);

Before mapping we used a value of 1.0 for the specular multiplier. Where roughness defined the microsurface, and therefore the glossiness of an object, the specular map defines its reflectivity. A less reflective surface will have less specular contribution.

Generating specular maps is a little trickier than generating metalness maps. While the latter merely needs a white value on all metallic parts of an object and black everywhere else (bar where intermediate values are necessary), the former needs the intensity of specular explicitly defined per fragment. Again, automatic generation of this map is possible, but it is highly recommended you do it manually for more reliable accuracy.
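
As a rough sketch of how a metalness workflow typically derives those two contributions (the metalMap uniform and albedo variable are assumed names here, and 0.04 is the common dielectric base reflectivity rather than a value from this article's code):

float metalness = texture(metalMap, texCoord.xy).r;    // assumed metalness map uniform
vec3 F0 = mix(vec3(0.04), albedo, metalness);          // base reflectivity: dielectric default blended towards the albedo
vec3 kD = albedo * (1.0 - metalness);                  // pure metals contribute no diffuse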

Height/Depth Map

A height map is a greyscale texture that defines the height of the current fragment. Before normal mapping, height maps were used to fake geometry in a cheaper yet less accurate method called bump mapping, where instead of redefining the surface normals, the height map would trick the lighting into treating a fragment as higher or lower than its neighbours, causing self occlusion. In modern use, height maps are primarily used for displacement mapping, a method in which the vertices of an object are displaced using the height map, creating genuine geometric surface detail from just the texture.

For our lighting code a height map is not needed, so there is no snippet from the accompanying source.
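
Purely as an illustration, displacing vertices with a height map might look something like this in the vertex shader (the heightMap and heightScale uniforms are hypothetical and not part of this article's code):

uniform sampler2D heightMap;
uniform float heightScale;

[...]

float height = textureLod(heightMap, inTexCoord, 0.0).r;            // explicit LOD, as the vertex shader has no derivatives
vec3 displaced = inPosition + inNormal * (height * heightScale);    // push the vertex out along its normal
gl_Position = (projectionMatrix * (viewMatrix * modelMatrix)) * vec4(displaced, 1.0);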

Height maps are very easily created by hand: lighter values are higher and darker values are lower, and with gradients one can create sharp or shallow, smooth bumps and dips.

Normal Map

Normal maps are one of the most important texture maps of all in real-time applications. They replace the surface normals of an object on a per-pixel level, tricking the lighting calculations into thinking a flat surface in fact has a lot of fine surface detail. This results in the light reflecting at various angles to create self occlusion, and it increases the realism of an object tenfold at only a small computational cost. It should be noted, however, that since normal maps only change the surface normals and not the geometry of a surface, larger details such as entire bricks, while occluding correctly, will still look flat at shallow angles. Normal maps use all three colour channels, where R, G and B correspond to the X, Y and Z values of the surface normal respectively.

uniform sampler2D normalMap;

[...]

vec3 normals = normalize(texture(normalMap, texCoord).rgb * 2.0 - 1.0);

Above, we read the normal map in as with all other texture maps, except this time we multiply the resulting values by 2 then subtract 1. This converts the values from the colour range (0.0 - 1.0) to the normals range (-1.0 - 1.0), so 0 now corresponds to -1 and 1 still corresponds to 1. The result is also normalized to ensure there are no errors due to the length of the vector. This normal map replaces what used to be vec3 normals = normalize(uNormals);.

The normal map will typically be in tangent space so that it can be used on any surface regardless of the orientation of the vertex or its normals. Tangent space is a local space aligned to the surface: positive Z in tangent space always points outwards along the surface normal, no matter the orientation of the surface in world space.

There is a problem now. The light, camera and vertex positions are all in camera space, so when we do important calculations such as (N.L) or (N.H) the normals are assumed to be in camera space too, yet our sampled normals are in tangent space. This would result in all bumps facing the same positive Z direction regardless of the surface, which isn't the desired effect. We therefore have to transform the light, camera and vertex positions into tangent space as well, so that everything is in the same space and the dot products are correct. This requires a TBN matrix.

void main()
{
    // Vertex position in camera space.
    position = vec3((viewMatrix * modelMatrix) * vec4(inPosition, 1.0));
    texCoord = inTexCoord;

    // Build the tangent-space basis in camera space.
    vec3 N = normalize(normalMatrix * inNormal);
    vec3 T = normalize(normalMatrix * inTangent);
         T = normalize(T - dot(T, N) * N);        // Gram-Schmidt re-orthogonalisation
    vec3 B = normalize(normalMatrix * inBitangent);
    mat3 TBNMatrix = transpose(mat3(T, B, N));    // orthogonal, so the transpose equals the inverse

    // Bring everything the lighting needs into tangent space.
    camPos = TBNMatrix * camPosIn;
    lightPos = TBNMatrix * lightPosIn;
    tangentPos = TBNMatrix * position;

    gl_Position = (projectionMatrix * (viewMatrix * modelMatrix)) * vec4(inPosition, 1.0);
}

Here we are moving back into our vertex shader. In addition to calculating the position in camera space and passing the texture coordinates through to the fragment shader, we are now calculating the TBN matrix. TBN stands for Tangent, Bitangent, Normal: a 3x3 matrix whose columns are these three vectors.

Since we are now using a normal map in place of the surface normals, we no longer pass uNormals up to the fragment shader. If a normal map is not needed, it is now simpler to bind a texture with the colour value (128, 128, 255), a light blue, which unpacks to a normal of (0, 0, 1). This simulates all normals facing straight out in the positive Z direction.

The normal contribution is obtained in the same way as uNormals was: we multiply the surface normals, obtained from the .obj file, by the normal matrix to bring them into camera space and then normalize the result to set the length of the vector to 1.

The tangent contribution is obtained similarly to the normal contribution: we multiply our tangents, calculated as we set the object up to be written into GPU memory, by the normal matrix and normalize. We then do some additional maths known as the Gram-Schmidt process. Because the TBN vectors are calculated per vertex and interpolated on their way to the fragment shader, the interpolation can de-orthogonalise them, making the tangent-space transform subtly incorrect. Gram-Schmidt re-orthogonalises the tangent against the normal to prevent this from happening.

Finally, the bitangent contribution is obtained in much the same way as the normal contribution, with the values calculated as we set the object up to be written into GPU memory. While this is quicker, as we are doing the maths on the CPU as opposed to the GPU, it should be mentioned that the bitangent can also be calculated via the cross product of the normal and tangent contributions, vec3 B = cross(N, T);, since the basis is orthogonal.

The purpose of the TBN matrix is to convert a vector from tangent space to world space, or in our case to camera space, as we have multiplied by the normal matrix. We want to do the opposite here: bring the light, camera and vertex positions into tangent space. Usually this would require us to invert the matrix, which is an expensive operation, especially on the GPU; however, since the matrix is orthogonal, its inverse is equal to its transpose, so we transpose it instead.

Now that we have the inverted TBN matrix, we simply multiply the positions by it, and the resulting values are the positions in tangent space relative to the current vertex. These values are sent up to the fragment shader to be used as before.
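
In the fragment shader the existing lighting code then carries on unchanged, since every vector it builds is now in tangent space. A minimal sketch, assuming the variable names used throughout this article:

vec3 N = normalize(texture(normalMap, texCoord).rgb * 2.0 - 1.0);
vec3 L = normalize(lightPos - tangentPos);    // light direction in tangent space
vec3 V = normalize(camPos - tangentPos);      // view direction in tangent space
vec3 H = normalize(L + V);                    // half vector for (N.H)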

            // Two edges of the triangle in position space.
            float edge1[3];
            edge1[0] = obj[j - 1].vertices[0] - obj[j - 2].vertices[0];
            edge1[1] = obj[j - 1].vertices[1] - obj[j - 2].vertices[1];
            edge1[2] = obj[j - 1].vertices[2] - obj[j - 2].vertices[2];

            float edge2[3];
            edge2[0] = obj[j].vertices[0] - obj[j - 2].vertices[0];
            edge2[1] = obj[j].vertices[1] - obj[j - 2].vertices[1];
            edge2[2] = obj[j].vertices[2] - obj[j - 2].vertices[2];

            // The same two edges in UV space.
            float texEdge1[2];
            texEdge1[0] = obj[j - 1].texcoords[0] - obj[j - 2].texcoords[0];
            texEdge1[1] = obj[j - 1].texcoords[1] - obj[j - 2].texcoords[1];

            float texEdge2[2];
            texEdge2[0] = obj[j].texcoords[0] - obj[j - 2].texcoords[0];
            texEdge2[1] = obj[j].texcoords[1] - obj[j - 2].texcoords[1];

            // Reciprocal of the determinant of the 2x2 UV-edge matrix.
            float determinant = 1.0 / ((texEdge1[0] * texEdge2[1]) - (texEdge2[0] * texEdge1[1]));

            // Tangent: the direction in which U increases across the surface.
            obj[j - 2].tangents[0] = obj[j - 1].tangents[0] = obj[j].tangents[0] = determinant * ((texEdge2[1] * edge1[0]) - (texEdge1[1] * edge2[0]));
            obj[j - 2].tangents[1] = obj[j - 1].tangents[1] = obj[j].tangents[1] = determinant * ((texEdge2[1] * edge1[1]) - (texEdge1[1] * edge2[1]));
            obj[j - 2].tangents[2] = obj[j - 1].tangents[2] = obj[j].tangents[2] = determinant * ((texEdge2[1] * edge1[2]) - (texEdge1[1] * edge2[2]));

            mathLib.normalize(obj[j - 2].tangents);
            mathLib.normalize(obj[j - 1].tangents);
            mathLib.normalize(obj[j].tangents);

            // Bitangent: the direction in which V increases across the surface.
            obj[j - 2].bitangents[0] = obj[j - 1].bitangents[0] = obj[j].bitangents[0] = determinant * ((-texEdge2[0] * edge1[0]) + (texEdge1[0] * edge2[0]));
            obj[j - 2].bitangents[1] = obj[j - 1].bitangents[1] = obj[j].bitangents[1] = determinant * ((-texEdge2[0] * edge1[1]) + (texEdge1[0] * edge2[1]));
            obj[j - 2].bitangents[2] = obj[j - 1].bitangents[2] = obj[j].bitangents[2] = determinant * ((-texEdge2[0] * edge1[2]) + (texEdge1[0] * edge2[2]));

            mathLib.normalize(obj[j - 2].bitangents);
            mathLib.normalize(obj[j - 1].bitangents);
            mathLib.normalize(obj[j].bitangents);

So, moving back even further, we go into the CPU side of the code and look at our object loader. Instead of looping over the vertex information one vertex at a time, we now process three vertices at once, so the loop works on every triangle instead of every vertex. This is necessary because calculating the tangents and bitangents requires the texture UVs and the edges of each triangle.

Using the three vertices of the current triangle, we obtain the edge between the first and second vertex and the edge between the first and third vertex, named edge1 and edge2 respectively. We then do the same with the texture coordinates, finding the deltas between the first and second and the first and third vertex.

The reciprocal determinant is obtained via 1.0 / ((texEdge1[0] * texEdge2[1]) - (texEdge2[0] * texEdge1[1])), where [0] is the x axis and [1] is the y axis, and this determinant is multiplied into the tangent and bitangent equations to get the final results. These values are then normalized before being sent to GPU memory for use in our shaders.

With this we now have our tangents and bitangents ready to be used in the TBN matrix to bring all necessary vectors into tangent space! These calculations are a solution for custom .obj loaders, but some third-party alternatives, such as ASSIMP, can do this automatically. Additionally, this specific example does not take vertex indexing into account, which may need to be handled depending on the implementation.
