Percentage-Closer Soft Shadows

I’ve implemented a few shadow mappers, and so far, percentage-closer soft shadows (PCSS) was my favorite one to get up and running. It is a great illustration of Poisson disk sampling, and the contact hardening that is characteristic of the technique provides a compelling cue for spatial relationships. For comparison’s sake, here’s what I did for cascaded shadow mapping (CSM):

It doesn’t look too bad, because the transitions between the closest and furthest shadows don’t really display any particularly egregious artifacts.

CSM uses different shadow maps for different clip-space depth ranges, both for performance and to focus detail where it matters most. The cascades nearest the camera cover a small world-space area, concentrating shadow map resolution on close-up detail, whereas the cascades furthest out cover a much larger area at a coarser effective resolution. Generally speaking, though, the artifacts that typically occur when shadows transition from one map to another can be very distracting. On top of that, there isn’t any meaningful perceptual distinction being portrayed between the softer shadows and the harder ones. But of course, nothing is stopping us from implementing PCSS on top of CSM. It’s just that as a standalone technique, CSM isn’t going to give us the most realistic shadows.
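
To make the split selection concrete, here’s a minimal sketch of how a fragment might pick its cascade in GLSL; the uniform name, the split count, and the comparison against view-space depth are placeholder assumptions rather than code from my renderer:

uniform float gCascadeSplits[3]; // far boundary of each cascade in view-space depth

int selectCascade(float viewDepth)
{
    // Use the first cascade whose far boundary still contains the fragment;
    // anything beyond the last boundary falls into the coarsest cascade.
    for (int i = 0; i < 3; ++i)
    {
        if (viewDepth < gCascadeSplits[i])
        {
            return i;
        }
    }
    return 2;
}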

Variance shadow mapping (VSM) avoids the performance penalty of heavy sampling in the render pass and instead focuses the implementation on filtering the shadow map itself. VSM estimates the amount of shadowing using variance (computed from the two moments, depth and depth squared, written out during the shadow map pass) and Chebyshev’s inequality.
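
For reference, here’s a minimal sketch of that Chebyshev step, assuming the blurred shadow map stores the two moments in a two-channel texture; the function name and the variance floor are placeholders:

float chebyshevUpperBound(vec2 moments, float receiverDepth)
{
    // If the receiver is in front of the mean occluder depth, it is fully lit.
    if (receiverDepth <= moments.x)
    {
        return 1.0f;
    }

    // Variance is E[x^2] - E[x]^2; the floor guards against numerical issues.
    float variance = max(moments.y - moments.x * moments.x, 0.00002f);

    // Chebyshev's inequality bounds the fraction of the filtered region
    // that can lie at or beyond the receiver's depth.
    float d = receiverDepth - moments.x;
    return variance / (variance + d * d);
}

The value returned is an upper bound on the fraction of the filter region that is lit, which is what gives VSM its soft, prefiltered edges.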

For the curious, here is the corresponding shadow map that depicts the depth information from the light’s perspective:

The result after applying a horizontal and vertical Gaussian blur.

Compare this to the shadow map that PCSS would draw from:

Clearly, with VSM, there is a difference not only in the way shadow maps are generated but also in the way we retrieve information from them. VSM’s advantage in minimizing the run-time cost of the screen-space calculations is worth weighing, and any opportunity to optimize an implementation deserves consideration, but the lack of spatial relationship information just doesn’t do it for me. The Gaussian smoothing softens the shadow edges, but there’s no visual cue to demonstrate whether an object is resting on the ground or floating above it.

Percentage-Closer Filtering

We can think of PCSS as an extension of percentage-closer filtering (PCF) in which we essentially calibrate the size of the PCF kernel (i.e., the sample area) so that it correlates with the softness of the shadow. What PCF does well is ameliorate the aliasing problems inherent in traditional shadow mapping. These occur because shadow maps typically have lower resolutions than the default render target, and because depth information gets distorted and magnified when light sources are at near-grazing angles with respect to occluders. The idea behind PCF is to compute, over a small region of the shadow map, the percentage of samples for which the surface is closer to the light than the stored occluder depth, and therefore not in shadow.

The quality of the penumbra (i.e., the partially shadowed region typically occurring at the edges of a shadow) really depends on the sampling method that we choose to quantify the amount of shadowing for each of our pixels. I do two things to increase the shadow quality of the rendering: employ stochastic sampling (i.e., Poisson disk sampling) and apply a random rotation to each sample kernel before sampling the shadow map.

Canonical PCF sampling in a 4 × 4 grid suffers from blocky aliasing because the same regular sampling locations are used for every pixel. In contrast, Poisson disk samples are chosen uniformly at random in such a way that they are no closer to one another than a specified minimum distance, resulting in a more natural pattern. It should be noted that Poisson disk sampling still draws from a consistent set of points and, as such, cannot completely eliminate patterns along the edges. But it does address the aforementioned shadow artifacts to a large extent.
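
In GLSL, the disk is just a constant array of offsets on the unit disk. The values below are a truncated, illustrative set; a real implementation precomputes 16 to 64 offsets offline via dart throwing, rejecting any candidate that lands within the minimum distance of a point that’s already been accepted:

// Truncated, illustrative Poisson disk; precomputed offline.
const vec2 POISSON_DISK[4] = vec2[](
    vec2(-0.94201624f, -0.39906216f),
    vec2( 0.94558609f, -0.76890725f),
    vec2(-0.09418410f, -0.92938870f),
    vec2( 0.34495938f,  0.29387760f));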

Randomly rotating the Poisson disk sample distribution around its center takes this improvement the rest of the way by turning the remaining structured aliasing into noise. There is one thing to keep in mind, though: we want the random rotations to be stable from frame to frame. If the angles change as the camera moves, or simply between successive frames (which affects even a still camera), the shadow edges will shimmer and flicker.

To achieve this stability, we map any given world position to a specific random angle from a set that we precompute at the start of the application. This step ensures random rotations in our implementation but maintains frame-by-frame consistency so that our calculations don’t output different values on the same inputs.

void generateRandom3DTexture()
{
    constexpr int RESOLUTION = 32;
    std::array<std::array<std::array<glm::vec2, RESOLUTION>, RESOLUTION>, RESOLUTION> randomAngles;
    srand(static_cast<unsigned>(time(nullptr)));
    for (size_t i = 0; i < RESOLUTION; ++i)
    {
        for (size_t j = 0; j < RESOLUTION; ++j)
        {
            for (size_t k = 0; k < RESOLUTION; ++k)
            {
                // Random angle in [0, 2*pi), stored as (cos, sin) remapped from
                // [-1, 1] to [0, 1]; the shader undoes this remapping before use.
                float randomAngle = static_cast<float>(rand()) / RAND_MAX * 2 * glm::pi<float>();
                randomAngles[i][j][k] = glm::vec2(glm::cos(randomAngle) * 0.5f + 0.5f, glm::sin(randomAngle) * 0.5f + 0.5f);
            }
        }
    }

    glGenTextures(1, &anglesTexture);
    glBindTexture(GL_TEXTURE_3D, anglesTexture);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RG16F, RESOLUTION, RESOLUTION, RESOLUTION, 0, GL_RG, GL_FLOAT, randomAngles.data());

    // Repeat wrapping tiles the angles across space; nearest filtering avoids
    // interpolating between unrelated angles.
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}
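
On the shader side, the corresponding lookup might look something like this sketch; the uniform name and the tiling scale are assumptions, but the key parts are the repeat-wrapped fetch keyed on world position and the remapping from [0, 1] back to [-1, 1]:

uniform sampler3D gAnglesTexture;

vec2 getRotation(vec3 worldPos)
{
    // GL_REPEAT wrapping tiles the 32^3 texture across world space, so any
    // given world position always maps to the same random angle. The scale
    // factor controls how quickly the pattern repeats and is a tunable
    // assumption here.
    vec2 rotation = texture(gAnglesTexture, worldPos * 32.0f).rg;

    // Undo the [0, 1] packing from texture generation to recover (cos, sin).
    return rotation * 2.0f - 1.0f;
}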

This is how I implemented PCF to calculate the shadow at each shaded point:

float calcShadow()
{
    // Anything beyond the light's far plane is treated as lit.
    if (LIGHT_SPACE_POS_POST_W.z > 1.0f)
    {
        return 0.0f;
    }

    float shadow = 0.0f;

    // Slope-scaled bias: the more the surface tilts away from the light
    // (N dot L approaching 0), the larger the bias.
    float bias = max(0.05f * (1.0f - NoL), 0.005f);
    float pcfKernelSize = calcPCFKernelSize(bias);
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        // Rotate the Poisson disk sample by the per-position random angle;
        // ROTATION holds that angle's (cos, sin).
        vec2 offset = vec2(
            ROTATION.x * POISSON_DISK[i].x - ROTATION.y * POISSON_DISK[i].y,
            ROTATION.y * POISSON_DISK[i].x + ROTATION.x * POISSON_DISK[i].y);

        // Scale the offset by the PCSS-derived kernel size and compare depths.
        float pcfDepth = texture(gShadowMap, LIGHT_SPACE_POS_POST_W.xy + offset * TEXEL_SIZE * pcfKernelSize).r;
        shadow += LIGHT_SPACE_POS_POST_W.z - bias > pcfDepth ? 1.0f : 0.0f;
    }

    // Soften the N dot L terminator; t biases the averaged result.
    float l = clamp(smoothstep(0.0f, 0.2f, NoL), 0.0f, 1.0f);
    float t = smoothstep(RANDOM_VALUES.x * 0.5f, 1.0f, l);

    shadow /= (NUM_SAMPLES * t);

    return shadow;
}

Here, I defined 1 to be in shadow and 0 to be not in shadow. I will get into the calcPCFKernelSize function in just a bit, but that essentially just varies the filter size as I mentioned earlier.

Notably, there are a few correction factors: one to deal with the visual discrepancy of coordinates beyond the depth buffer’s range being in shadow (i.e., if LIGHT_SPACE_POS_POST_W.z is greater than 1) and a slope-scale bias to prevent self-shadow aliasing (i.e., shadow acne). Technically, these issues can be addressed in the hardware (glPolygonOffset can apply a slope-scaled depth offset during the shadow pass, for instance), but for this exercise I chose to resolve them in code. Shadow acne occurs due to the limits of numerical precision. Values generated from the light’s perspective are almost never exactly the same as what’s sampled in screen space; the light’s stored depth value may be slightly less than the surface’s when viewed from our perspective. The fix is to use a bias proportional to the angle of the receiver to the light (i.e., N dot L) so that the more the surface tilts away from the light, the greater we increase the bias.

There’s also a third correction factor, t, that biases the shadow factor towards 0 or 1, thereby controlling the falloff or transition from shadowy to lit regions. The idea is that we want to bias the N dot L factor (which typically creates hard edges and sharp transitions from shadowy to lit) to match more desirable smooth transitions. Concretely, if the angle of the receiver to the light falls between 0 and 0.2, we perform a smooth interpolation of it between the values 0 and 1. If the result of that interpolation falls between half of the cosine extracted from our texture of random cosines and 1, we smoothly interpolate that as well.
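
For reference, GLSL’s smoothstep(a, b, x) is a clamped Hermite interpolation:

smoothstep(a, b, x) = 3u^2 - 2u^3, where u = clamp((x - a) / (b - a), 0, 1)

It rises smoothly from 0 at x = a to 1 at x = b, which is what gives us the gradual falloff.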

Notice that the value of t depends greatly on l to the extent that the closer l is to 1, the closer t will be to 1. Conversely, the closer l is to 0, the closer t will be to 0.

The rest is pretty simple: extract a random angle, use it to rotate a given Poisson disk sample, then take the computed offsets and calibrate them further with the calculated PCF kernel size before putting it all together to determine the texture coordinates from which to sample the shadow map. At the end of our routine, we average the samples and apply the correction mentioned earlier, arriving at a shadow factor we can use to attenuate the lighting at the pixel.

All that remains is an explanation of the soft shadow control functions, all encapsulated in our kernel size calculation. In other words, we’re ready to discuss the actual PCSS part of the algorithm.

Extending Percentage-Closer Filtering

What the kernel size function essentially does is estimate the width of the penumbra. To do that, it evaluates a formula based on the size of our light source, the average distance of the occluders sampled from the nearby area on the shadow map, and the receiver distance (i.e., the depth value indicating the distance from the light source to the point at which we want to compute an illumination value or color):
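
Treating the light, the blockers, and the receiver as parallel planes, Fernando’s paper estimates the penumbra width via similar triangles:

penumbraWidth = (receiverDistance - avgBlockerDistance) * lightSize / avgBlockerDistance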

In code, this translates to the following:

float calcPCFKernelSize(float bias)
{
    float receiverDepth = LIGHT_SPACE_POS_POST_W.z;
    float blockerDistance = calcBlockerDistance(bias);
    // No blockers were found, so the point is fully lit; fall back to the
    // smallest kernel.
    if (blockerDistance == -1)
    {
        return 1;
    }

    // Penumbra width estimate, scaled by the light size and the
    // near-plane/receiver perspective correction.
    float penumbraWidth = (receiverDepth - blockerDistance) / blockerDistance;
    return penumbraWidth * gCalibratedLightSize * NEAR / receiverDepth;
}

Note that all our calculations so far have been done from the light’s perspective because it simplifies the code, and we’ve also applied an additional correction in the form of the near plane distance (of the light’s frustum) divided by the receiver distance.

There’s a notable correlation between the average blocker distance and the size of the penumbra. The closer the occluders are to the receiving pixel, the smaller and more contact-hardened the result, and the more the shadow defers to the umbra. The farther away they are, the larger the penumbra width, and the more easily discernible the soft shadow. By controlling the size of the sample region in PCF, the formula listed above does the heavy lifting of translating that vertical depth information into our final result.

To calculate the average blocker distance, I employed the following shader code:

float calcSearchWidth(float receiverDepth)
{
    // The blocker search region grows with the light size and with the
    // receiver's depth beyond the light's near plane.
    return gCalibratedLightSize * (receiverDepth - NEAR) / gViewPos.z;
}

float calcBlockerDistance(float bias)
{
    float sumBlockerDistances = 0.0f;
    int numBlockerDistances = 0;
    float receiverDepth = LIGHT_SPACE_POS_POST_W.z;

    int sw = int(calcSearchWidth(receiverDepth));
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        // Same rotated Poisson disk pattern as in the PCF loop.
        vec2 offset = vec2(
            ROTATION.x * POISSON_DISK[i].x - ROTATION.y * POISSON_DISK[i].y,
            ROTATION.y * POISSON_DISK[i].x + ROTATION.x * POISSON_DISK[i].y);

        float depth = texture(gShadowMap, LIGHT_SPACE_POS_POST_W.xy + offset * TEXEL_SIZE * sw).r;
        // Only samples nearer to the light than the receiver count as blockers.
        if (depth < receiverDepth - bias)
        {
            ++numBlockerDistances;
            sumBlockerDistances += depth;
        }
    }

    // Average the blocker depths, or return -1 to signal that no blockers
    // were found.
    if (numBlockerDistances > 0)
    {
        return sumBlockerDistances / numBlockerDistances;
    }
    else
    {
        return -1;
    }
}

The most important thing to note is that the size of the search region depends on the size of our light source and the receiver’s distance from the light.
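
Putting it all together, the shadow factor feeds into the final shading along these lines. This is only a sketch, with the lighting terms standing in for whatever your pipeline computes; recall that 1 means fully in shadow:

vec3 shadePixel(vec3 ambient, vec3 diffuse, vec3 specular)
{
    // The t correction can push the averaged factor past 1, so clamp it.
    float shadow = clamp(calcShadow(), 0.0f, 1.0f);

    // Attenuate only the direct terms; ambient still reaches shadowed surfaces.
    return ambient + (1.0f - shadow) * (diffuse + specular);
}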

Resources

Real-Time Rendering
“Summed-Area Variance Shadow Maps” by Andrew Lauritzen, in GPU Gems 3
“Variance Shadow Mapping” by Kevin Myers (NVIDIA white paper)
“Percentage-Closer Soft Shadows” by Randima Fernando
