This is the sixth part of a tutorial series about rendering. The previous part added support for more complex lighting. This time, we'll create the illusion of more complex surfaces.

This tutorial was made with Unity 5.4.0f3.

Bump Mapping

We can use albedo textures to create materials with complex color patterns. We can use normals to adjust the apparent surface curvature. With these tools, we can produce all kinds of surfaces. However, the surface of a single triangle will always be smooth. It can only interpolate between three normal vectors, so it cannot represent a rough or varied surface. This becomes obvious when forsaking an albedo texture and using only a solid color.

A good example of this flatness is a simple quad. Add one to the scene and make it point upwards, by rotating it 90° around the X axis. Give it our Lighting material, without textures and with a fully white tint.

Perfectly flat quad.

Because the default skybox is very bright, it is hard to see the contribution of the other lights. So let's turn it off for this tutorial. You can do so by decreasing the Ambient Intensity to zero in the lighting settings. Then only enable the main directional light. Find a good point of view in the scene view so you can see some lighting differences on the quad.

No ambient, only the main directional light.

How could we make this quad appear non-flat? We could fake roughness by baking shading into the albedo texture. However, that would be completely static. If the lights change, or the objects move, so should the shading. If it doesn't, the illusion will be broken. And in the case of specular reflections, even the camera isn't allowed to move.

We can change the normals to create the illusion of a curving surface. But there are only four normals per quad, one for each vertex. This can only produce smooth transitions. If we want a varied and rough surface, we need more normals.

We could subdivide our quad into smaller quads. This gives us more normals to work with. In fact, once we have more vertices, we can also move them around. Then we don't need the illusion of roughness, we can make an actual rough surface! But the sub-quads still have the same problem. Are we going to subdivide those too? That will lead to huge meshes with an enormous number of triangles. That is fine when creating 3D models, but isn't feasible for real-time use in games.

Height Maps

A rough surface has a non-uniform elevation, compared to a flat surface. If we store this elevation data in a texture, we might be able to use it to generate normal vectors per fragment, instead of per vertex. This idea is known as bump mapping, and was first formulated by James Blinn.

Here is a height map to accompany our marble texture. It is an RGB texture with each channel set to the same value. Import it into your project, with the default import settings.

Height map for marble.

Add a _HeightMap texture property to My First Lighting Shader. As it'll use the same UV as our albedo texture, it doesn't need its own scale and offset parameters. The default texture doesn't really matter, as long as it's uniform. Gray will do.

	Properties {
		_Tint ("Tint", Color) = (1, 1, 1, 1)
		_MainTex ("Albedo", 2D) = "white" {}
		[NoScaleOffset] _HeightMap ("Heights", 2D) = "gray" {}
		[Gamma] _Metallic ("Metallic", Range(0, 1)) = 0
		_Smoothness ("Smoothness", Range(0, 1)) = 0.1
	}

Material with height map.

Add the matching variable to the My Lighting include file, so we can access the texture. Let's see how it looks, by factoring it into the albedo.

	float4 _Tint;
	sampler2D _MainTex;
	float4 _MainTex_ST;
	sampler2D _HeightMap;

	…

	float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
		i.normal = normalize(i.normal);
		float3 viewDir = normalize(_WorldSpaceCameraPos - i.worldPos);

		float3 albedo = tex2D(_MainTex, i.uv).rgb * _Tint.rgb;
		albedo *= tex2D(_HeightMap, i.uv);
		…
	}

Using heights as colors.

Adjusting Normals

Because our fragment normals are going to become more complex, let's move their initialization to a separate function. Also, get rid of the height map test code.

	void InitializeFragmentNormal(inout Interpolators i) {
		i.normal = normalize(i.normal);
	}

	float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
		InitializeFragmentNormal(i);

		float3 viewDir = normalize(_WorldSpaceCameraPos - i.worldPos);

		float3 albedo = tex2D(_MainTex, i.uv).rgb * _Tint.rgb;
//		albedo *= tex2D(_HeightMap, i.uv);
		…
	}

Because we're currently working with a quad that lies in the XZ plane, its normal vector is always (0, 1, 0). So we can use a constant normal, ignoring the vertex data. Let's do that for now, and worry about different orientations later.

	void InitializeFragmentNormal(inout Interpolators i) {
		i.normal = float3(0, 1, 0);
		i.normal = normalize(i.normal);
	}

How do we include the height data in this? A naive approach is to use the height as the normal's Y component, before normalizing.

	void InitializeFragmentNormal(inout Interpolators i) {
		float h = tex2D(_HeightMap, i.uv);
		i.normal = float3(0, h, 0);
		i.normal = normalize(i.normal);
	}

Using heights as normals.

This doesn't work, because normalization converts every vector back to (0, 1, 0). The black lines appear where the heights are zero, because normalization fails in those cases. We need a different method.
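To see the failure numerically, here is a small Python sketch (purely illustrative, not part of the shader) showing that normalizing (0, h, 0) discards the height entirely:

```python
import math

def normalize(v):
    # Divide a vector by its Euclidean length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# Every positive height collapses to the exact same unit vector.
print(normalize((0.0, 0.25, 0.0)))  # (0.0, 1.0, 0.0)
print(normalize((0.0, 3.0, 0.0)))   # (0.0, 1.0, 0.0)

# A height of zero gives a zero vector, whose length is zero,
# so normalization would divide by zero and fail.
```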

Finite Difference

Because we're working with texture data, we have two-dimensional data: the U and V dimensions. The heights can be thought of as going in a third dimension, upwards. We could say that the texture represents a function, `f(u, v) = h`.

Let's begin by limiting ourselves to only the U dimension. So the function is reduced to `f(u) = h`. Can we derive normal vectors from this function? If we knew the slope of the function, then we could use it to compute its normal at any point. The slope is defined by the rate of change of `h`. This is its derivative, `h'`. Because `h` is the result of a function, `h'` is the result of a function as well. So we have the derivative function `f'(u) = h'`. Unfortunately, we do not know what these functions are. But we can approximate them.

We can compare the heights at two different points in our texture. For example, at the extreme ends, using U coordinates 0 and 1. The difference between those two samples is the rate of change between those coordinates. Expressed as a function, that's `f(1) - f(0)`. We can use this to construct a tangent vector, `(1, f(1) - f(0), 0)`.

Tangent vector from `(0, f(0))` to `(1, f(1))`.

That's of course a very crude approximation of the real tangent vector. It treats the entire texture as a linear slope. We can do better by sampling two points that are closer together. For example, U coordinates 0 and ½. The rate of change between those two points is `f(1/2) - f(0)`, per half a unit of U. Because it is easier to deal with rate of change per whole units, we divide it by the distance between the points, so we get `(f(1/2) - f(0)) / (1/2) = 2(f(1/2) - f(0))`. That gives us the tangent vector `(1, 2(f(1/2) - f(0)), 0)`.

In general, we have to do this relative to the U coordinate of every fragment that we render. The distance to the next point is defined by a constant delta, δ. So the derivative function is approximated by `f'(u) ≈ (f(u + δ) - f(u)) / δ`.

The smaller δ becomes, the better we approximate the true derivative function. Of course it cannot become zero, but when taken to its theoretical limit, you get `f'(u) = lim(δ→0) (f(u + δ) - f(u)) / δ`. This method of approximating a derivative is known as the finite difference method. With that, we can construct tangent vectors at any point, `(1, f'(u), 0)`.
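To get a feel for this convergence, here is a quick Python sketch (illustrative only; `math.sin` stands in for the unknown height function) showing the forward difference approaching the true derivative as δ shrinks:

```python
import math

def forward_difference(f, u, delta):
    # f'(u) ≈ (f(u + δ) - f(u)) / δ
    return (f(u + delta) - f(u)) / delta

f = math.sin           # stand-in height function
exact = math.cos(1.0)  # its true derivative at u = 1

for delta in (0.5, 0.1, 1.0 / 256.0):
    approx = forward_difference(f, 1.0, delta)
    print(delta, approx, abs(approx - exact))
```

The error shrinks roughly in proportion to δ, which is why a texel-sized step, as used later, works well.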

From Tangent to Normal

What value could we use for δ in our shader? The smallest sensible difference would cover a single texel of our texture. We can retrieve this information in the shader via a float4 variable with the _TexelSize suffix. Unity sets those variables, similar to _ST variables.

	sampler2D _HeightMap;
	float4 _HeightMap_TexelSize;

What is stored in _TexelSize variables?

The first two components contain the texel sizes, as fractions of U and V. The other two components contain the number of pixels. For example, in the case of a 256×128 texture, it will contain (0.00390625, 0.0078125, 256, 128).

Now we can sample the texture twice, compute the height derivative, and construct a tangent vector. Let's directly use that as our normal vector.

	float2 delta = float2(_HeightMap_TexelSize.x, 0);
	float h1 = tex2D(_HeightMap, i.uv);
	float h2 = tex2D(_HeightMap, i.uv + delta);
	i.normal = float3(1, (h2 - h1) / delta.x, 0);

	i.normal = normalize(i.normal);

Actually, because we're normalizing anyway, we can scale our tangent vector by δ. This eliminates a division and improves precision.

	i.normal = float3(delta.x, h2 - h1, 0);

Using tangents as normals.

We get a very pronounced result. That's because the heights have a range of one unit, which produces very steep slopes. As the perturbed normals don't actually change the surface, we don't want such huge differences. We can scale the heights by an arbitrary factor. Let's reduce the range to a single texel. We can do that by multiplying the height difference by δ, or by simply replacing δ with 1 in the tangent.

	i.normal = float3(1, h2 - h1, 0);

Scaled heights.

This is starting to look good, but the lighting is wrong. It is far too dark. That's because we're directly using the tangent as a normal. To turn it into an upward-pointing normal vector, we have to rotate the tangent 90° around the Z axis.

	i.normal = float3(h1 - h2, 1, 0);

Using actual normals.

How does that vector rotation work?
You can rotate a 2D vector 90° counter-clockwise by swapping its X and Y components, then flipping the sign of the new X component. So the tangent `(1, f'(u), 0)` becomes the normal `(-f'(u), 1, 0)`.

Rotating a 2D vector 90°.
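A tiny Python sketch (illustrative only) of this swap-and-negate rotation:

```python
def rotate_90_ccw(x, y):
    # Swap the components, then negate the new X: (x, y) -> (-y, x).
    return (-y, x)

# The tangent (1, f'(u)) becomes the normal (-f'(u), 1).
slope = 0.5
tangent = (1.0, slope)
normal = rotate_90_ccw(*tangent)
print(normal)  # (-0.5, 1.0)
```

Applying the rotation twice flips the vector by 180°, which is a quick sanity check that it really is a 90° rotation.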

Central Difference

We've used finite difference approximations to create normal vectors. Specifically, by using the forward difference method. We take a point, and then look in one direction to determine the slope. As a result, the normal is biased in that direction. To get a better approximation of the normal, we can instead offset the sample points in both directions. This centers the linear approximation on the current point, and is known as the central difference method. This changes the derivative function to `f'(u) = lim(δ→0) (f(u + δ/2) - f(u - δ/2)) / δ`.

	float2 delta = float2(_HeightMap_TexelSize.x * 0.5, 0);
	float h1 = tex2D(_HeightMap, i.uv - delta);
	float h2 = tex2D(_HeightMap, i.uv + delta);
	i.normal = float3(h1 - h2, 1, 0);

This shifts the bumps slightly, so they are better aligned with the height field. Besides that, their shape doesn't change.
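The accuracy gain is easy to verify with a short Python sketch (illustrative only; `math.sin` again stands in for the height function):

```python
import math

def forward(f, u, d):
    # Forward difference: samples at u and u + d, biased toward +u.
    return (f(u + d) - f(u)) / d

def central(f, u, d):
    # Central difference: samples offset half a step in both directions.
    return (f(u + d / 2) - f(u - d / 2)) / d

f, u, d = math.sin, 1.0, 0.1
exact = math.cos(u)
print(abs(forward(f, u, d) - exact))  # roughly 4e-2
print(abs(central(f, u, d) - exact))  # roughly 2e-4, far more accurate
```

For the same step size, the central difference error shrinks with δ², versus δ for the forward difference, which is why it aligns so much better with the height field.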