The rendering equation consists of two major components, the diffuse and specular lighting integrals, each of which can be approximated in a number of ways. Physically based rendering is about modeling the interactions of light as accurately as possible, with the goal of presenting scenes in a way that most resembles what we see in real life. To this end, many techniques focus on evaluating these integrals, which are continuous functions that we need to discretize because computers can only operate on discrete numbers.

There’s another aspect to all of this: the real-time component, which takes these discretizations a step further so that we can experience physically accurate colors in real time in, say, a video game. Even discretized integrals are too expensive to compute per frame, so what we normally do is precompute environment maps for the diffuse and specular reflections (such precomputations typically manifest as loading times to the user) and then sample these maps in our shaders at run-time. Diffuse maps are indexed with the world space normal of any given geometry, and they contain irradiance values, hence the term “irradiance environment map.” The idea is that the normal contains all the information we need to determine the diffuse reflectance value, and while irradiance isn’t quite the radiance we ultimately need, it leads directly to it.

To go a step further, we can precompute function coefficients that map to our desired diffuse map, saving space even though irradiance maps aren’t normally stored at high resolution anyway. Spherical harmonics (SHs) are a popular irradiance environment map representation and lend a game engine a number of options that conventionally baked lighting (usually determined by Monte Carlo integration at each normal) might struggle to produce. Spherical harmonics open up the scene to dynamic objects when embedded as light probes, which we can think of as environment captures at various points in world space. We can merge the SH coefficients of dynamic light irradiance contributions with existing coefficients to avoid recomputing entire irradiance maps, and use zonal harmonics to greatly simplify rotations (i.e., from matrix transformations to dot products), among other options that make SHs truly amenable to real-time situations.

Spherical harmonics are also excellent for triaging light sources: they can compress aggregate lighting information for the less prominent lights into light probes while the most important lights are rendered normally. This, again, lends itself to lights changing in real time and the simulation reflecting those changes back into the scene. We can also precompute the light probes before run-time and simply interpolate between them. Precomputing spherical harmonic coefficients is far cheaper to store and evaluate than running full Monte Carlo integration per probe.

So, that’s the motivation for spherical harmonics; however, the method is quite a bit simpler than it first appears. Beyond the mathematical foundations, it’s really about generating a set of coefficients that will be utilized in the render pass to calculate the diffuse lighting contribution from the environment. That being said, there are a couple of things to keep in mind so that the reasoning behind the method is clear.

We compute irradiance by taking the incoming radiance, represented as a computed set of SH coefficients, and convolving it with a cosine lobe (also represented as a set of SH coefficients) oriented about the surface normal. This reduces to a dot product between two sets of spherical harmonic coefficients, where the cosine lobe admits a much simpler representation, as derived in detail in Ravi Ramamoorthi’s paper. So, given a function whose coefficients collectively describe the scene’s diffuse lighting, the input is any given normal and the output is the irradiance (and, from it, the radiance) at that normal, which is what you’d expect from any diffuse lighting equation. One nice property of convolution with the cosine function is that it removes nearly all the high-frequency components from the environment map, which works well for irradiance from diffuse lighting.

Given the world space normal at the pixel and the spherical harmonic coefficients that map to an irradiance representation of the environment, we can compute the irradiance and, subsequently, the radiance value that corresponds to the diffuse lighting.

**SH Projection**

Spherical harmonics have an analog in the Fourier transform, which takes a function in the time domain and decomposes it into a frequency domain representation. The difference, of course, is that spherical harmonics are defined across the surface of a sphere rather than over a 1D domain. The idea is that because directly evaluating the integral representing diffuse lighting can be rather expensive, it’s beneficial to use spherical harmonics instead: a truncated SH expansion is a good low-frequency approximation of the underlying function and is far faster to evaluate.

There’s quite a bit of theory that’s worth going over (take a look at the resources listed down below), but the most important thing that we need to do is sample an input environment cubemap for a set of coefficients. In this case, let’s keep things simple and concern ourselves with the diffuse lighting contribution from our environment map.
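The code below relies on SH9 and SH9Color container types that aren’t shown in this post. Here’s a minimal, self-contained sketch of what they might look like (the names, the `_coefficients` member, and the operators are assumptions inferred from how the functions below use them; `vec3` stands in for `glm::vec3` to keep the sketch dependency-free):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Stand-in for glm::vec3 so this sketch is self-contained.
struct vec3 {
    float x = 0.0f, y = 0.0f, z = 0.0f;
    vec3& operator+=(const vec3& r) { x += r.x; y += r.y; z += r.z; return *this; }
    vec3& operator*=(float s) { x *= s; y *= s; z *= s; return *this; }
};

// Nine scalar basis-function evaluations, one per SH coefficient (bands 0-2).
struct SH9 {
    std::array<float, 9> _coefficients{};
    float& operator[](std::size_t i) { return _coefficients[i]; }
};

// Nine RGB coefficients: one color per basis function.
struct SH9Color {
    std::array<vec3, 9> _coefficients{};
    vec3& operator[](std::size_t i) { return _coefficients[i]; }

    SH9Color& operator+=(const SH9Color& r) {
        for (std::size_t i = 0; i < _coefficients.size(); ++i)
            _coefficients[i] += r._coefficients[i];
        return *this;
    }
    SH9Color& operator*=(float s) {
        for (auto& c : _coefficients) c *= s;
        return *this;
    }
};

// Needed for expressions like genLightingCoefficientsForNormal(N, L) * weight.
inline SH9Color operator*(SH9Color lhs, float s) { lhs *= s; return lhs; }
```

Any array-of-nine representation with accumulate and scale operations works here; the per-channel layout just mirrors the fact that we project each RGB channel of the radiance independently.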

```cpp
SH9Color genLightingCoefficients(unsigned int texture, int resolution) {
    SH9Color result;
    for (size_t i = 0; i < result._coefficients.size(); ++i) {
        result[i] = glm::vec3(0.0f);
    }

    float u, v, temp, weight, weightSum = 0.0f;
    size_t row, col;
    glm::vec3 L;

    float* img = new float[3 * resolution * resolution];
    glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
    for (size_t s = 0; s < NUM_CUBEMAP_FACES; ++s) {
        glGetTexImage(GL_TEXTURE_CUBE_MAP_POSITIVE_X + s, 0, GL_RGB, GL_FLOAT, img);
        for (int y = 0; y < resolution; ++y) {
            for (int x = 0; x < resolution; ++x) {
                row = 3 * resolution * y;
                col = 3 * x;
                L = glm::vec3(img[row + col], img[row + col + 1], img[row + col + 2]);

                // Map the texel center from [0, 1] to [-1, 1] face coordinates.
                u = (x + 0.5f) / resolution;
                v = (y + 0.5f) / resolution;
                u = u * 2.0f - 1.0f;
                v = v * 2.0f - 1.0f;

                // Differential solid angle subtended by this texel.
                temp = 1.0f + u * u + v * v;
                weight = 4.0f / (sqrt(temp) * temp);

                glm::vec3 N = mapUVSToN(u, v, s, resolution);
                result += genLightingCoefficientsForNormal(N, L) * weight;
                weightSum += weight;
            }
        }
    }
    delete[] img; // delete[], not delete: img was allocated with new[]

    // Normalize so the solid angle weights sum to 4*pi over the sphere.
    result *= 4.0f * glm::pi<float>() / weightSum;
    return result;
}
```

A basic sketch of this per-texel processing of the environment map can be found in the talk *Stupid Spherical Harmonics (SH) Tricks*. The texture coordinate calculations deserve a bit of explanation: we offset each *x* and *y* index by 0.5 so that we sample the exact center of the texel. The subsequent multiply-by-2, subtract-1 maps the value from [0, 1] texture coordinate space to the [−1, 1] range of the two face-local coordinates (the third coordinate is fixed at +/−1 for any given face), which lets us readily determine a corresponding normal and weight for each spherical harmonic lighting sample. We need to weight the result by the differential solid angle of the texel because our situation is equivalent to sampling points off of a sphere: texels near a face’s corners subtend less solid angle than texels near its center.
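As a sanity check on that weighting scheme, here’s a standalone sketch (independent of the OpenGL code, with a hypothetical `faceSolidAngle` helper): folding the texel area (2/res)² into the weight turns it into an actual differential solid angle, and the sum over one face should approach 2π/3, one sixth of the sphere’s 4π steradians.

```cpp
#include <cassert>
#include <cmath>

// Sum the per-texel solid angles over one cube face at the given resolution,
// using the same weight formula as the projection code. Dividing by res^2
// folds in the (2/res)^2 texel area (the extra factor of 4 is already in
// the weight), yielding a true differential solid angle per texel.
double faceSolidAngle(int res) {
    double total = 0.0;
    for (int y = 0; y < res; ++y) {
        for (int x = 0; x < res; ++x) {
            double u = ((x + 0.5) / res) * 2.0 - 1.0;
            double v = ((y + 0.5) / res) * 2.0 - 1.0;
            double t = 1.0 + u * u + v * v;
            total += (4.0 / (std::sqrt(t) * t)) / (res * res);
        }
    }
    return total; // approaches 4*pi / 6 = 2*pi / 3 as res grows
}
```

This is why the normalization by `weightSum` in the projection code is nearly a no-op at high resolutions: the discrete sum is already very close to 4π.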

As for the normalization step, the differential solid angles should sum to 4π over the whole sphere; the discrete sum (i.e., weightSum) generally won’t hit that exactly, so we scale the result by 4π / weightSum as a correction factor.

Essentially, the Monte Carlo integration that we just described serves to project a spherical function (e.g., a function describing the incoming light contribution with respect to a given direction) onto spherical harmonic coefficients. For context, recall the equation for exitant light that we’re trying to evaluate:
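In standard notation (following Ramamoorthi’s paper, with **n** the surface normal and Ω(**n**) the upper hemisphere), the reflectance equation reads:

```latex
L_o(\mathbf{x}, \omega_o) = \int_{\Omega(\mathbf{n})} f_r(\mathbf{x}, \omega_i, \omega_o) \, L(\mathbf{x}, \omega_i) \, (\mathbf{n} \cdot \omega_i) \, d\omega_i
```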

In the case of Lambertian diffuse lighting, we have the following:
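The Lambertian BRDF is the constant ρ/π (where ρ is the albedo), so it factors out of the integral, leaving the irradiance E:

```latex
L_o(\mathbf{x}) = \frac{\rho}{\pi} \int_{\Omega(\mathbf{n})} L(\mathbf{x}, \omega_i) \, (\mathbf{n} \cdot \omega_i) \, d\omega_i = \frac{\rho}{\pi} \, E(\mathbf{n})
```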

What we really need to do is project *both* the function describing the incoming light radiance *and* the cosine correction factor to their respective sets of spherical harmonic coefficients. However, as I mentioned briefly, there’s no need to explicitly precompute the coefficients for the cosine factor; it’s trivial to represent the cosine correction in this context as a set of SH coefficients generated with just the normal as an input, as you will see below in the shader code.

The only thing we need to worry about is generating coefficients for the incoming diffuse light from the environment. We can calculate a single coefficient of order **m** within a specific band **l** using the following formula (I highly recommend getting a sense of concepts such as bands by reading Robin Green’s document listed below, so you can visualize how spherical harmonics manifest):
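In continuous form, the projection is an integral over the sphere, which the code approximates as the weighted sum over cubemap texels described above:

```latex
L_l^m = \int_{S^2} L(\omega) \, y_l^m(\omega) \, d\omega \;\approx\; \frac{4\pi}{\sum_j w_j} \sum_j w_j \, L(\omega_j) \, y_l^m(\omega_j)
```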

In code and across all lighting coefficients (we choose nine, i.e., bands 0 through 2, because diffuse irradiance is so low-frequency that three bands approximate it with very little error), this manifests as the following:

```cpp
SH9 genSHCoefficients(const glm::vec3& N) {
    SH9 result;

    // Band 0
    result[0] = 0.282095f;

    // Band 1
    result[1] = 0.488603f * N.y;
    result[2] = 0.488603f * N.z;
    result[3] = 0.488603f * N.x;

    // Band 2
    result[4] = 1.092548f * N.x * N.y;
    result[5] = 1.092548f * N.y * N.z;
    result[6] = 0.315392f * (3.0f * N.z * N.z - 1.0f);
    result[7] = 1.092548f * N.x * N.z;
    result[8] = 0.546274f * (N.x * N.x - N.y * N.y);

    return result;
}

// c_l_m = Sum of All L(theta, phi) * Y_l_m(theta, phi) = Sum of All L(N) * Y_l_m(N)
SH9Color genLightingCoefficientsForNormal(const glm::vec3& N, const glm::vec3& L) {
    SH9 SHCoefficients = genSHCoefficients(N);
    SH9Color result;
    for (size_t i = 0; i < result._coefficients.size(); ++i) {
        result[i] = L * SHCoefficients[i];
    }
    return result;
}
```

In this context, the SH basis functions *y* are evaluated in **genSHCoefficients**, and the incoming radiance *L* at the given normal plays the role of the function *f* in the outlined equation. **genLightingCoefficientsForNormal** returns a single sample for each coefficient we’re looking to compute, which the aforementioned **genLightingCoefficients** function aggregates and normalizes to produce our desired set of coefficients.

There’s quite a bit of theory behind how the spherical harmonic functions *y* are derived. As a programmer, I always recommend taking a deeper look into some of the theory behind the numbers!

**Signal Reconstruction**

We’re not quite done yet. Assuming that we went through and determined our lighting coefficients for the radiance coming from the environment, we still need to find a way to utilize these values at every pixel that calls for a calculation of the diffuse environment lighting contribution. To reconstruct the function *f*, we take the sum of the SH functions *y*, each scaled by the corresponding coefficient:
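With *n* bands, the truncated reconstruction is:

```latex
\tilde{f}(\omega) = \sum_{l=0}^{n-1} \sum_{m=-l}^{l} c_l^m \, y_l^m(\omega)
```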

However, we don’t actually do that, because there’s still the caveat of having to incorporate the cosine correction factor. Let’s take a closer look at the paper by Ramamoorthi and suppose that we have the following related signal reconstruction sums: one referring to the incoming radiance that we’re familiar with, and the other for the irradiance *E* that actually takes the cosine correction factor into consideration:
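In Ramamoorthi’s notation, the two expansions are:

```latex
L(\omega) = \sum_{l, m} L_{lm} \, Y_{lm}(\omega), \qquad E(\mathbf{n}) = \sum_{l, m} E_{lm} \, Y_{lm}(\mathbf{n})
```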

Noting, of course, that
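the irradiance is the cosine-weighted integral of the incoming radiance over the upper hemisphere:

```latex
E(\mathbf{n}) = \int_{\Omega(\mathbf{n})} L(\omega) \, (\mathbf{n} \cdot \omega) \, d\omega
```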

This equation is pretty much the reflectance equation from earlier sans the Lambertian term.

Now, consider the SH coefficients corresponding to our cosine factor, for which we only need the band index **l** because **m** is always 0 (we always orient the cosine lobe about the z-axis). What we want to do next is essentially rotate the lobe so that it becomes aligned with the normal direction and then take a dot product between the two sets of coefficients to obtain the irradiance. It can be shown that
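with A_l denoting the SH coefficients of the clamped cosine lobe, as derived in Ramamoorthi’s paper:

```latex
E_{lm} = \sqrt{\frac{4\pi}{2l + 1}} \, A_l \, L_{lm}
```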

Letting
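the normalization collapse into a single constant per band (the numerical values follow from Ramamoorthi’s analytic formula for the clamped cosine lobe):

```latex
\hat{A}_l = \sqrt{\frac{4\pi}{2l + 1}} \, A_l, \qquad \hat{A}_0 = \pi, \quad \hat{A}_1 = \frac{2\pi}{3}, \quad \hat{A}_2 = \frac{\pi}{4}
```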

The irradiance can now be calculated precisely with
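the three-band sum over the nine coefficients:

```latex
E(\mathbf{n}) = \sum_{l=0}^{2} \sum_{m=-l}^{l} \hat{A}_l \, L_{lm} \, Y_{lm}(\mathbf{n})
```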

where the *A hats* are just the aforementioned zonal harmonic coefficients (that are constants because the cosine lobe never changes) that we need to rotate using the normal information encapsulated in the SH functions.

With all of that out of the way, we are ready to calculate the diffuse lighting contribution from the environment for a given pixel (in cases when the material requires that calculation):

```glsl
vec3 calcSHIrradiance(vec3 N) {
    vec3 irradiance =
          gSH9Color[0] * 0.282095f * A0
        + gSH9Color[1] * 0.488603f * N.y * A1
        + gSH9Color[2] * 0.488603f * N.z * A1
        + gSH9Color[3] * 0.488603f * N.x * A1
        + gSH9Color[4] * 1.092548f * N.x * N.y * A2
        + gSH9Color[5] * 1.092548f * N.y * N.z * A2
        + gSH9Color[6] * 0.315392f * (3.0f * N.z * N.z - 1.0f) * A2
        + gSH9Color[7] * 1.092548f * N.x * N.z * A2
        + gSH9Color[8] * 0.546274f * (N.x * N.x - N.y * N.y) * A2;

    return irradiance;
}
```
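A quick CPU-side sanity check of this reconstruction is worth running once (a self-contained scalar sketch; `shIrradiance` is a hypothetical helper mirroring the shader, with the constants A0 = π, A1 = 2π/3, A2 = π/4 from Ramamoorthi’s paper): projecting a constant, unit-radiance environment produces only the band-0 coefficient L_00 = 0.282095 · 4π, and reconstruction should return an irradiance of exactly π for any normal.

```cpp
#include <cassert>
#include <cmath>

// Scalar version of the shader reconstruction: c[9] are the L_lm radiance
// coefficients, (nx, ny, nz) is the unit surface normal.
double shIrradiance(const double c[9], double nx, double ny, double nz) {
    const double PI = std::acos(-1.0);
    const double A0 = PI;            // A-hat for band 0
    const double A1 = 2.0 * PI / 3.0; // A-hat for band 1
    const double A2 = PI / 4.0;      // A-hat for band 2
    return c[0] * 0.282095 * A0
         + c[1] * 0.488603 * ny * A1
         + c[2] * 0.488603 * nz * A1
         + c[3] * 0.488603 * nx * A1
         + c[4] * 1.092548 * nx * ny * A2
         + c[5] * 1.092548 * ny * nz * A2
         + c[6] * 0.315392 * (3.0 * nz * nz - 1.0) * A2
         + c[7] * 1.092548 * nx * nz * A2
         + c[8] * 0.546274 * (nx * nx - ny * ny) * A2;
}
```

If this check doesn’t hold in your implementation, the usual culprits are a missing 4π/weightSum normalization in the projection or the A-hat constants being dropped from the shader.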

Don’t forget to apply the Lambertian BRDF constant for the final radiance as well:

diffuseColor = calcSHIrradiance(N) * albedo / PI;

**Results**

A lot has been written regarding concepts and implementation details, but I think some comparisons with ground truth (i.e., irradiance maps derived from standard prefiltering) results should demonstrate the viability of our approach.

**Future Work**

The next thing I’d like to do is take this spherical harmonic concept and use it to implement light probes distributed throughout my scene. I’m curious to see how it all looks because, for now, the comparisons in the last section were made using a single light probe at the center of the scene (i.e., we generated maps with respect to world space location [0, 0, 0]). It would be nice to use spherical harmonics where they really shine: distributed probes that incorporate dynamic light sources.

**Resources**

*Real-Time Rendering*

Robin Green’s *Spherical Harmonic Lighting: The Gritty Details*

Ravi Ramamoorthi’s *An Efficient Representation for Irradiance Environment Maps*

*Stupid Spherical Harmonics (SH) Tricks*