Hair Rendering Part 1: The Shading Model

Given my proclivity to pore over material from recent GDCs and GPU Pro articles, I’ve been meaning to write up a more accessible concept as a way to segue from the early to mid 2000s techniques to the state-of-the-art ideas that find their way into modern AAA titles. I think it’s also important to demonstrate that even the simplest solutions have the potential to change a rendering in significant ways.

With that in mind, I think that Scheuermann’s ad hoc bidirectional scattering distribution function model (BSDF, i.e., a BRDF generalized to scattering over the full sphere of directions, aggregating the light interactions within hair fibers) is a great place to start because we can focus our attention on a relatively easy-to-implement pixel shader. The model also lends itself to real-time rendering and works well for various hair models.

That being said, it’s not easy to flesh out what I would consider an aesthetically pleasing result because so much of the workflow depends on proper modeling and texture authoring capabilities, which I don’t quite possess at the moment.

But that won’t stop us from demonstrating the power of a concept and how it treats light in a more realistic (i.e., as realistic as possible) manner or from getting a sense of how the reconfigured diffuse and specular calculations manifest and differ from the usual Blinn–Phong reflection model.

Most materials tend to fit nicely into a PBR workflow, with uniform diffuse responses and specular calculations that don’t directly assume anything about the possible light interactions that occur underneath the surfaces (i.e., subsurface scattering). Hair requires a bit more thought regarding not only the underlying material but also the interactions with surrounding strands and potentially more.

To understand the fundamental drivers of hair color, we need to understand a few geometric and theoretical concepts. Scheuermann’s shader is a mix of Kajiya and Kay’s BRDF model, which considers hair as volumes consisting of organized and infinitesimal cylindrical fibers, and the model by Marschner, which is based on his measurements of light scattering in human hair fibers. The former reconfigures specular lighting calculations to use the tangent instead of the normal, whereas the latter depicts different components of scattering in a single hair strand, like so:

Picture courtesy of Real-Time Rendering.

To keep things brief, R represents a specular peak shifted toward the root and appears as a white specular reflection on the hair. TT isn’t represented in our shader, but it typically adds brightness to light (e.g., blond) hair when backlit. TRT is the crux addition to what would normally be just a single specular response: it manifests as a secondary specular highlight that is colored due to the absorption of light as it travels through the fiber. In my demonstration, I kept it the same color as the diffuse response, whereas some implementations recommend a middle-gray hue. Most notably, it appears as glints on strands with eccentricity, producing non-uniform, realistic illumination.

To reiterate, we have a particular definition of specular contributions via the Kajiya–Kay model followed by a real-time approximation of two (as opposed to just one) specular highlights as determined by Marschner. The key thing to note is that the individual components—the diffuse, R-specular, and TRT-specular reflections—map to concrete functions or calculations in our shader. However, to render both specular effects, we need a few textures to help us perform our calculations.

Scheuermann calls for a base texture to represent the fine structure of our hair model. For our purposes, the diffuse map of a diffuse/bump/specular/opacity material workflow serves as an excellent starting point to execute our shader. Technically, the base texture is supposed to encapsulate stretched noise and defer hair color to a constant that we’d have to set in our shader. However, I found it sensible and workable to have both the hair color and structure distilled into a familiar representation (i.e., the diffuse texture). In the shader, I opted for gray hues for the diffuse and secondary specular colors, following documentation that recommends working with neutral rendering values rather than colors that closely resemble the hair itself. This choice worked well for my use case, and it made more sense to me to separate lighting and material variables instead of using one to define the other (though in the context of subsurface scattering, that isn’t completely unreasonable).

I also made use of the bump, specular, and opacity textures, converting the bump map into a tangent-space normal representation that more easily lends itself to calculating world-space normals and tangents with respect to those normals. In addition, I generated displacement and ambient occlusion maps (also from the bump map), the former serving as the specular shift texture that our implementation requires. We also need a specular noise texture, for which I chose the following glitter pattern I found in a discussion thread:

This pattern was used to modulate the secondary specular response and produce the glints that I mentioned earlier.

The diffuse lighting calculation centers on the usual Lambertian factor (i.e., N dot L), except this time, we scale and bias the term to brighten up areas facing away from the light for a softer overall look:

vec3 diffuse = clamp(mix(0.25f, 1.0f, dot(N, L)), 0.0f, 1.0f) * DIFFUSE_COLOR * gLightColor;

This approach results in the following:

The flatness of the response lies in stark contrast to the specular contributions, which add significant detail due to a more complete extrapolation of lighting behavior onto the underlying strand volumes. We calculate the specular colors in a similar fashion by first shifting tangents along the normal of the surface of the hair to reposition the specular highlights along the hair strand:

const float PRIMARY_SHIFT = -1.0f;
const float SECONDARY_SHIFT = 1.5f;

// ...

// Note: T, N, and L are computed per fragment (at the top of main());
// they're listed here alongside the helpers for brevity.
vec3 T = normalize(dFdx(fs_in.WorldPos) * dFdy(fs_in.TexCoords).t - dFdy(fs_in.WorldPos) * dFdx(fs_in.TexCoords).t);

vec3 calcWorldSpaceNormal(vec3 tangentSpaceNormal)
{
    vec3 N = normalize(fs_in.WorldNormal);
    vec3 B = -normalize(cross(N, T));
    mat3 TBN = mat3(T, B, N);
    return normalize(TBN * tangentSpaceNormal);
}

vec3 TANGENT_SPACE_N = texture(gNormalMap, fs_in.TexCoords).xyz * 2.0f - 1.0f;
vec3 N = calcWorldSpaceNormal(TANGENT_SPACE_N);
vec3 L = normalize(gLightPos - fs_in.WorldPos);

vec3 shiftTangent(float shift)
{
    vec3 shiftedT = T + shift * N;
    return normalize(shiftedT);
}

// ...

vec4 calcHairColor()
{
    // ...

    float baseShiftAmount = texture(gShiftTexture, fs_in.TexCoords).r - 0.5f;
    vec3 t1 = shiftTangent(PRIMARY_SHIFT + baseShiftAmount);
    vec3 t2 = shiftTangent(SECONDARY_SHIFT + baseShiftAmount);

    // ...
}

The reason behind adjusting our desired shift amounts with a texture lookup is to add a bit of randomness to what would otherwise be a fairly uniform look for our specular highlights over the hair. It’s possible to use a different map that better resonates with the artist’s vision, so to speak, but for this particular demo, the displacement texture yields an explicit enough rendering of the streaks commonly observed in hair.

Basically, we want to shift the tangent in opposite directions so that each of the specular contributions adds something different to our result. However, the final specular calculations are mainly going to center around the reconstituted tangents, the colors characterizing each of our specular components (i.e., white for the first and middle-gray for the second), different exponents that control the emphasis of each contribution, and a half-angle vector.

// ...
const float PRIMARY_SPECULAR_EXP = 0.98f;
const float SECONDARY_SPECULAR_EXP = 0.49f;
const vec3 DIFFUSE_COLOR = vec3(0.416f, 0.424f, 0.431f);
const vec3 SECONDARY_SPECULAR_COLOR = DIFFUSE_COLOR; // kept identical to the diffuse gray

// ...

float calcStrandSpecularLighting(vec3 T, float exponent)
{
    vec3 V = normalize(gViewPos - fs_in.WorldPos);
    vec3 H = normalize(L + V);
    float ToH = dot(T, H);
    // sin(T, H) = sqrt(1 - cos^2(T, H)), faded out as H dips behind the strand.
    return smoothstep(-1.0f, 0.0f, ToH) * pow(sqrt(1.0f - ToH * ToH), exponent);
}

vec4 calcHairColor()
{
    // ...

    vec3 specular1 = texture(gSpecularMap, fs_in.TexCoords).rgb * calcStrandSpecularLighting(t1, PRIMARY_SPECULAR_EXP) * gLightColor;

    float mask = texture(gNoiseTexture, fs_in.TexCoords).r;
    vec3 specular2 = SECONDARY_SPECULAR_COLOR * calcStrandSpecularLighting(t2, SECONDARY_SPECULAR_EXP) * gLightColor;

    vec4 hairColor;
    hairColor.a = texture(gAlphaTexture, fs_in.TexCoords).r;
    // ...
    hairColor.rgb = (diffuse + specular1 + mask * specular2) * color;
    hairColor.rgb *= texture(gAOMap, fs_in.TexCoords).r;
    return hairColor;
}

There is one caveat I should point out, and that’s the modulation of the R-specular calculation (i.e., specular1) using the specular map itself. This pattern is commonly used in more canonical lighting calculations (e.g., Blinn–Phong reflectance); therefore, I figured the most prominent specular effect was a good opportunity to utilize what I had from the asset package. The results were fairly pleasing:

The specular component characterized by R or by light reflecting off the surface without SSS.
The specular component characterized by TRT and unmodulated by the noise texture.

Note that the second highlight is modulated by the noise texture to give us this additional sparkling effect that’s reminiscent of actual hair. Combining each of the individual lighting contributions or results, we get this:

For the sake of comparison, here’s the result under the canonical Blinn–Phong shading calculations:

For reference, here’s the generated ambient occlusion map for our general approximation of self-shadowing (not that it’s super crucial for black hair) used during the final calibration of all our lighting calculations:

Clearly, from a side-by-side comparison, we can tell that treating diffuse and specular terms in a particular manner that obeys the physics of light interactions with hair can add quite a bit of pizzazz to our results. The straight dope behind the Kajiya–Kay model is precisely the anisotropic treatment of light; that is, we center our specular lighting calculations around the tangent with respect to each strand of hair to better simulate how our highlights manifest. Given a normal defined by the map with respect to our mesh, we can assume that it lies in a plane spanned by the associated tangent and the view vector, and we adjust our calculations accordingly. In particular, defining the specular effects, each in terms of sin(T, H) raised to a specular exponent, leaves us with a more thorough rendering of light on hair.

One obvious thing to explore from this point forward would be more impactful treatments of self-shadowing on hair, such as the vaunted deep opacity maps that I keep hearing everyone talk about. That technique slices the light’s view frustum so that a relatively low number of slices fits tightly onto the intersecting models, with each slice serving as a “canvas” for accumulating self-shadowing evaluations, instead of indiscriminately partitioning the frustum and spending calculations on slices where they aren’t needed as much. The GPU Pro 360 book on shadows has an article on deep opacity maps, so a redefinition of my treatment of self-shadowing is definitely on the table for a future post.


Thorsten Scheuermann’s SIGGRAPH Talk on Practical Real-Time Hair Rendering and Shading
Thorsten Scheuermann’s GDC Talk on Hair Rendering and Shading
Practical Real-Time Hair Rendering and Shading Paper
