
Thursday, 25 April 2013

Distant Shadowing "brainfart"

Recently, I've been having a little bit of a think about how to handle realtime distant shadowing in games, in order to avoid shadow baking.

Clearly, it's entirely possible to use some kind of cascaded shadow map extending all the way out, but that soon gets expensive in both texture memory and rendering time. Alternatively, you might consider something like stencil shadow volumes, using the compute capabilities of modern GPUs to clip the shadow volumes down so they don't consume enormous amounts of rasterisation or clipping time.

Instead, I've been thinking a lot about reprojection. Let's say you interrogate your cascaded shadow map and use it to build a screen-space mask of the shadow response. You want soft(ish) shadows, so that means taking a lot of PCF samples.

How about reprojecting last frame's shadow mask into this frame's, then filling in the holes with some simple filtering? Distant shadows are typically pretty low-frequency, so you could well get away with some blurring and imperfection in the mask. Then you only need to render the geometry that became newly visible this frame into the shadow map: subtract one frame's light frustum from the other to get the newly covered volume, and render only the objects intersecting it.
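Here's a minimal sketch of the reprojection step, assuming a D3D-style [0,1] depth range. The texture-access helpers (sampleDepth, samplePrevMask) and the resolveShadowThisFrame fallback are hypothetical placeholders, and the matrix maths uses GLM:

    #include <glm/glm.hpp>

    // Hypothetical helpers standing in for texture fetches and the full PCF evaluation.
    float sampleDepth(glm::vec2 uv);              // current frame's depth buffer
    float samplePrevMask(glm::vec2 uv);           // last frame's shadow mask
    float resolveShadowThisFrame(glm::vec2 uv);   // expensive PCF fallback for holes

    float reprojectShadowMask(glm::vec2 uv,
                              const glm::mat4& invCurrViewProj,
                              const glm::mat4& prevViewProj)
    {
        // Reconstruct this pixel's world-space position from the depth buffer.
        const float depth = sampleDepth(uv);
        glm::vec4 world = invCurrViewProj * glm::vec4(uv * 2.0f - 1.0f, depth, 1.0f);
        world /= world.w;

        // Project that position with last frame's view-projection matrix.
        glm::vec4 prevClip = prevViewProj * world;
        prevClip /= prevClip.w;
        const glm::vec2 prevUv = glm::vec2(prevClip) * 0.5f + 0.5f;

        // If the pixel was on screen last frame, reuse its shadow value;
        // otherwise it's a hole that must be resolved (or filtered) this frame.
        const bool onScreen = prevUv.x >= 0.0f && prevUv.x <= 1.0f &&
                              prevUv.y >= 0.0f && prevUv.y <= 1.0f;
        return onScreen ? samplePrevMask(prevUv) : resolveShadowThisFrame(uv);
    }

This only covers reusing the mask; the "render just the newly visible geometry" part would sit in the shadow map pass itself.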

Just a brainfart...

Thursday, 28 March 2013

Thoughts on VPL Generation Using Cube Maps

Recently, I've been doing a lot of research and thinking about realtime GI and its related ideas. A common theme in realtime GI is the use of VPLs (virtual point lights), which approximate the radiance bouncing off a surface after the light interacts with that surface's material. A VPL is only formed where there is a definite light-to-surface interaction, so a set of VPLs can be used to simulate light bouncing off geometry.

One technique in use today for generating VPLs is the reflective shadow map (RSM) algorithm. Here, we render the scene from the light's point of view, and every pixel in the output image represents a surface receiving primary illumination from that light source. This is a fairly efficient way of finding all the surfaces that receive radiance from a given light without much wastage. You can then construct VPLs from that image, reduce the working set, and inject them into some lighting algorithm to bounce light around the scene.
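As a rough sketch of that extraction step, assuming the RSM stores world-space position, normal and reflected flux per texel (the buffer layout and names here are illustrative, not a particular engine's API):

    #include <glm/glm.hpp>
    #include <vector>
    #include <cstddef>

    // Per-texel data assumed to be stored in the RSM: world-space position,
    // normal, and the flux reflected off the surface (light colour * albedo).
    struct RsmTexel {
        glm::vec3 worldPos;
        glm::vec3 normal;
        glm::vec3 flux;
    };

    struct Vpl {
        glm::vec3 position;
        glm::vec3 normal;
        glm::vec3 intensity;
    };

    // Turn every 'stride'-th RSM texel into a VPL. The fixed stride is a crude
    // working-set reduction; real implementations tend to importance-sample instead.
    std::vector<Vpl> extractVpls(const std::vector<RsmTexel>& rsm, std::size_t stride)
    {
        std::vector<Vpl> vpls;
        for (std::size_t i = 0; i < rsm.size(); i += stride) {
            const RsmTexel& t = rsm[i];
            vpls.push_back({ t.worldPos, t.normal, t.flux });
        }
        return vpls;
    }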

The difficulty is that this may not scale well, as each new light source requires a new RSM render.

One alternative I considered is using a cube map to sample the primary illumination from multiple lights simultaneously. You place the cube map at the camera position, or some other meaningful point of interest, and render the scene into each cube map face, lighting it from all of your light sources simultaneously, with shadow maps too if desired. Obviously this would be an expensive render, but it only needs to be done once, you can vary its resolution, and you can always use techniques like tiled lighting to accelerate it if you have many lights.
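A minimal sketch of what that cube render setup might look like, using GLM for the per-face view and projection matrices; renderSceneAllLights is a hypothetical stand-in for whatever forward/tiled lighting pass the engine already has:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Hypothetical stand-in for the engine's "light everything at once" pass
    // (forward, tiled/clustered, etc.), rendering into the given cube map face.
    void renderSceneAllLights(const glm::mat4& view, const glm::mat4& proj, int face);

    void renderLitCubeMap(const glm::vec3& poi, float nearZ, float farZ)
    {
        // 90-degree FOV with a square aspect ratio: each face covers one sixth of the view sphere.
        const glm::mat4 proj = glm::perspective(glm::radians(90.0f), 1.0f, nearZ, farZ);

        // Face directions and up vectors in the usual +X,-X,+Y,-Y,+Z,-Z order.
        const glm::vec3 dirs[6] = { { 1,0,0 }, { -1,0,0 }, { 0,1,0 }, { 0,-1,0 }, { 0,0,1 }, { 0,0,-1 } };
        const glm::vec3 ups[6]  = { { 0,-1,0 }, { 0,-1,0 }, { 0,0,1 }, { 0,0,-1 }, { 0,-1,0 }, { 0,-1,0 } };

        for (int face = 0; face < 6; ++face) {
            const glm::mat4 view = glm::lookAt(poi, poi + dirs[face], ups[face]);
            // The single expensive render described above: all lights (and their
            // shadow maps, if desired) applied while rasterising this face.
            renderSceneAllLights(view, proj, face);
        }
    }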

When complete, this cube map holds the radiance at the first light bounce for many surfaces in the scene near the point of interest, i.e. the camera. It contains samples for surfaces all around, both in front of and behind the camera, which is important for GI. You can then use this cube map to generate VPLs for further bounce lighting.
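And, mirroring the RSM case above, a rough sketch of pulling VPLs back out of the lit cube map, assuming position and normal data are also available per texel (for example from auxiliary cube faces); the names are illustrative:

    #include <glm/glm.hpp>
    #include <vector>
    #include <cstddef>

    // Per-texel data assumed alongside the lit cube map: world-space position,
    // normal, and the direct radiance captured by the lighting pass above.
    struct CubeTexel {
        glm::vec3 worldPos;
        glm::vec3 normal;
        glm::vec3 radiance;
    };

    struct Vpl {
        glm::vec3 position;
        glm::vec3 normal;
        glm::vec3 intensity;
    };

    using CubeFace = std::vector<CubeTexel>;

    // Walk all six faces; surfaces behind the camera contribute VPLs just as
    // readily as those in front of it.
    std::vector<Vpl> vplsFromCubeMap(const CubeFace (&faces)[6], std::size_t stride)
    {
        std::vector<Vpl> vpls;
        for (const CubeFace& face : faces) {
            for (std::size_t i = 0; i < face.size(); i += stride) {
                const CubeTexel& t = face[i];
                // Texels that received no direct light form no VPL.
                if (glm::dot(t.radiance, t.radiance) > 0.0f)
                    vpls.push_back({ t.worldPos, t.normal, t.radiance });
            }
        }
        return vpls;
    }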

Now, this is an as-yet completely untested idea; I will get round to it, though. I thought I'd throw it out there as a way of using a commonly accelerated rendering structure to help the scalability of realtime GI solutions.