
I intend for this post to be the first in a series of blog posts on techniques that are greatly facilitated by the concept of deferred rendering. The second post in the series will deal with tiled deferred shading, which is one of several possible solutions to the problem of light culling. The third will be about volumetric lighting, which sounds difficult, but makes use of familiar concepts that will be touched on in the posts leading up to it.
Today, I want to focus on my implementation of screen space reflections, which uses information gathered in the geometry buffer (G-buffer) and the rendered scene to display effects such as water reflections. The basic idea is to execute full-screen post-processing that samples the depth to calculate the reflection for each pixel we want to apply the effect to.
Screen space reflections can offer a level of immersion that surpasses what static cubemaps give us with image-based lighting, and the technique typically involves ray marching. With screen space reflections, any other pixel in the rendered scene, even one that lies outside the bounds of the reflecting surface itself, is a potential reflector for any given pixel. Because the effect is a screen-space pass that runs after the geometry pass, we get all the benefits of deferred rendering, such as not having to deal with per-object costs on the CPU or GPU. And because we're sampling the rendered environment rather than a static cubemap, the algorithm lends itself well to animated and dynamic objects in the scene, and to glossy or approximate reflections, since we can choose to do whatever we want with the intersection data.
With that out of the way, let’s walk through the render pass fragment shader in greater detail.
Ray Marching from the Fragment
The idea is to start by calculating the reflection vector from the fragment's position and normal in the geometry buffer. Then, starting from our point in view space, we travel in the direction of the reflection vector to find the point where the ray collides with a surface. In practice, we march along the ray in steps until we reach a point that has potentially passed behind a surface. At that point, we can apply a binary search to determine the precise position of the intersection and, as a result, the location of the fragment to map onto the pixel in question.
If it turns out that we never find such a collision, for example because the actual intersection lies beyond the bounds of possible depths, we might run into some issues. There are ways to deal with this situation; I simply limit the difference between the starting and sampled depths, essentially putting a bound on the range of fragments that our pixel can draw from. I also limit the march to a fixed number of steps, so that in the worst case the pixel in question draws its color from whatever the routine considers the farthest sample.
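Before looking at the marching routine itself, here's a minimal sketch of the setup that produces the ray we feed into it. The texCoords input, the gNormalMap sampler, and the hitInfo name are assumptions for illustration and will depend on how your own G-buffer and lighting pass are laid out:
// Sketch of the setup in the fragment shader's main(), assuming the G-buffer
// stores view-space positions and normals.
vec3 position = texture(gPositionMap, texCoords).xyz;
vec3 normal = normalize(texture(gNormalMap, texCoords).xyz);

// In view space the camera sits at the origin, so the normalized position is
// also the incident view direction; reflect it about the surface normal.
vec3 reflected = normalize(reflect(normalize(position), normal));

// March along the reflected ray; hitInfo.xy holds the screen coordinates of the
// hit, and hitInfo.w tells us whether the march actually found one.
float delta;
vec3 hitPosition = position;
vec4 hitInfo = rayMarch(hitPosition, reflected, delta);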
vec4 rayMarch(inout vec3 position, vec3 direction, out float delta)
{
    direction *= STEP;
    vec4 projectedCoords;
    float depth;

    for (int i = 0; i < NUM_STEPS; ++i)
    {
        position += direction;

        // Project the current view-space position into screen space so we can sample the G-buffer.
        projectedCoords = gProjectionMatrix * vec4(position, 1.0f);
        projectedCoords.xy /= projectedCoords.w;
        projectedCoords.xy = projectedCoords.xy * 0.5f + 0.5f;

        // View-space depth of the closest surface stored at that screen location.
        depth = texture(gPositionMap, projectedCoords.xy).z;
        delta = position.z - depth;

        // if (depth - (position.z - direction.z) < 1.2f)
        // Is the difference between the starting and sampled depths smaller than the width of the unit cube?
        // We don't want to sample too far from the starting position.
        if (direction.z - delta < 1.2f)
        {
            // We're at least past the point where the ray intersects a surface.
            // Now, determine the values at the precise location of intersection.
            if (delta < 0.0f)
            {
                return vec4(binarySearch(position, direction, delta), 1.0f);
            }
        }
    }

    // No hit: fall back to the last sample and flag the miss in the w component.
    return vec4(projectedCoords.xy, depth, 0.0f);
}
In my particular case, I store my positions in view space, so determining the depth at a fragment is just a single projection away. STEP is a calibration factor that controls the size of a ray march step. I define delta as the difference between the ray's depth (the z component of the current view-space position in the marching routine) and the depth sampled from the position map in the geometry buffer at the screen location we get by projecting that same view-space position. If the stored depth value, which belongs to the object closest to the camera at that pixel, is greater than the ray's depth (i.e., closer to the camera in OpenGL's right-handed coordinate system), then the ray has passed behind a surface: for example, with a surface stored at z = -10.0 and the ray currently at z = -10.5, delta is -0.5, which is negative. In that case we can log an intersection and calculate the hit info more precisely with a subsequent binary search on our reflected ray:
vec3 binarySearch(inout vec3 position, inout vec3 direction, inout float delta)
{
    vec4 projectedCoords;
    float depth;

    for (int i = 0; i < NUM_ITERATIONS; ++i)
    {
        // Shrink the step each iteration and move toward the surface from whichever side we're on.
        direction *= BINARY_SEARCH_STEP;

        projectedCoords = gProjectionMatrix * vec4(position, 1.0f);
        projectedCoords.xy /= projectedCoords.w;
        projectedCoords.xy = projectedCoords.xy * 0.5f + 0.5f;

        depth = texture(gPositionMap, projectedCoords.xy).z;
        delta = position.z - depth;

        if (delta > 0.0f)
        {
            // The ray is still in front of the surface; step forward.
            position += direction;
        }
        else
        {
            // The ray is behind the surface; step back.
            position -= direction;
        }
    }

    // Project the refined position one last time to get the final sample coordinates.
    projectedCoords = gProjectionMatrix * vec4(position, 1.0f);
    projectedCoords.xy /= projectedCoords.w;
    projectedCoords.xy = projectedCoords.xy * 0.5f + 0.5f;

    return vec3(projectedCoords.xy, depth);
}
That’s pretty much the meat of my shader code. The rest is just calibration to fit the hit results nicely onto our scene. I went ahead and added a correction factor to calibrate the result for our pixel depending on how close the intersection is to the center of the screen; however, you may have a different idea of what kind of effect you would like to achieve. I also applied a Fresnel factor, which makes reflections at near-grazing angles more apparent. Whatever you do, just make sure you’re operating with a reasonably correct reflection vector.
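To illustrate, here's roughly what those two calibration factors might look like, building on the hitInfo, normal, and position names from the sketch above. The gFinalImage sampler holding the lit scene, the litColor variable, the F0 constant, and the way the factors are combined are all assumptions to tune for your own scene rather than a definitive formula:
// Fade out samples whose hit coordinates land near the screen edges, since those
// are the most likely to reference data that isn't actually on screen.
vec2 edgeDistance = abs(hitInfo.xy - vec2(0.5f)) * 2.0f;
float screenEdgeFade = clamp(1.0f - max(edgeDistance.x, edgeDistance.y), 0.0f, 1.0f);

// Schlick's approximation of the Fresnel term: reflections grow stronger at
// near-grazing angles. F0 is an assumed base reflectivity.
const float F0 = 0.04f;
float cosTheta = max(dot(normal, normalize(-position)), 0.0f);
float fresnel = F0 + (1.0f - F0) * pow(1.0f - cosTheta, 5.0f);

// Blend the reflected color over the lit color, using the hit flag to kill
// the effect entirely when the march found nothing.
vec3 reflectedColor = texture(gFinalImage, hitInfo.xy).rgb;
vec3 color = mix(litColor, reflectedColor, screenEdgeFade * fresnel * hitInfo.w);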
I should also note that it may be useful to incorporate a reflection mask into your shader code so that surfaces that don't reflect anything won't incur the cost of the more expensive ray marching routine. I do so with the assistance of a metalness factor, which I also extract from the material data and pass through my deferred rendering pipeline. When the metalness is 0, I simply fall back to the color determined by the lighting pass calculations. Of course, as with any "if" statement, there's a possibility of divergent execution within a GPU warp, which can stretch out shader execution times. Tread lightly, and don't prematurely optimize if you don't have to!
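In shader terms, that early out might look something like this; the gMaterialMap sampler, its channel layout, and the fragColor and litColor names are assumptions standing in for wherever your pipeline stores metalness and the lighting result:
// Skip the expensive ray march entirely for surfaces that shouldn't reflect.
float metalness = texture(gMaterialMap, texCoords).r;
if (metalness < 0.01f)
{
    // Fall back to the color determined by the lighting pass.
    fragColor = vec4(litColor, 1.0f);
    return;
}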
Potential Issues
It’s easy to get caught up in all these nice, glossy colors that get superimposed on otherwise boring materials, but at some point, we do have to address the potential artifacts that manifest due to the limitations of the G-buffer. Wronski suggests applying a separable filter (e.g., a Gaussian blur) based on the material’s glossiness, as well as a “push-pull” filter to eliminate holes. His blog post contains more details on those methods.
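I won't reproduce those techniques here, but to give a flavor of the first one, here's a rough sketch of one direction of a glossiness-driven separable blur over the reflection buffer (a second pass would handle the vertical direction). The gReflectionMap and gGlossinessMap samplers, the Gaussian weights, and the MAX_BLUR_RADIUS constant are all assumptions to adapt to your own pipeline:
// One direction of a separable blur whose width grows as the surface gets rougher.
vec3 blurReflectionHorizontal(vec2 uv, vec2 texelSize)
{
    const float weights[5] = float[5](0.227027f, 0.194594f, 0.121621f, 0.054054f, 0.016216f);

    // Rough surfaces (low glossiness) get a wider kernel.
    float roughness = 1.0f - texture(gGlossinessMap, uv).r;
    float radius = roughness * MAX_BLUR_RADIUS;

    vec3 result = texture(gReflectionMap, uv).rgb * weights[0];
    for (int i = 1; i < 5; ++i)
    {
        vec2 offset = vec2(texelSize.x * float(i) * radius, 0.0f);
        result += texture(gReflectionMap, uv + offset).rgb * weights[i];
        result += texture(gReflectionMap, uv - offset).rgb * weights[i];
    }
    return result;
}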
As for problems with the depth information, the biggest issue is that the G-buffer can only store the depth of the closest object at each pixel, so occluding objects hide everything behind them and the depth alone can't communicate the full structure of the scene. As an illustration, suppose the ray ends up behind a surface stored in the G-buffer. There is inherent information loss here: you don't actually know how thick the object sampled at that location is, so you can only guess whether the ray really collided with it or passed behind it. How you make that guess is an implementation detail.
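One common guess is to assume every surface has the same fixed thickness and only accept hits that fall within it. A minimal sketch of that check, with an assumed THICKNESS constant, replacing the acceptance test inside the marching loop:
// Accept the hit only if the ray is behind the surface but not deeper than an
// assumed constant thickness, to avoid "hitting" objects the ray passed behind.
if (delta < 0.0f && delta > -THICKNESS)
{
    return vec4(binarySearch(position, direction, delta), 1.0f);
}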
Another major problem with screen space reflections, and with the calibration steps that usually accompany them, is just how scene-dependent an implementation can be. Depending on which surfaces reflect and how much of the scene can reflect rays, we may have to spend a lot of time figuring out how to calibrate or cull data so that reflection amounts differ sensibly between surfaces. I've found that even small things, such as offsetting the reflection vector by a small tunable amount, can have a major impact on the result, so there are plenty of knobs to turn to achieve the desired effect. Once you move past the physically grounded parts of the calculation, all of this becomes more of an art than a science.
Resources
Bartlomiej Wronski on The future of screenspace reflections
Assassin’s Creed IV: Black Flag – Road to Next-Gen Graphics
GDC follow-up: Screenspace reflections filtering and up-sampling