Showing posts with label Lighting. Show all posts

Wednesday, 5 June 2013

Polygonal Area Light Sources on DX11-Class Hardware

The surge of interest in Physically-Based Rendering has brought with it renewed interest in area light sources. There is, however, something of a dearth of information available for the aspiring implementer.


Introduction

The objective of polygonal area lighting is to define an arbitrary polygon in space that may emit light. I'm going to make two simplifications here: firstly, that the polygon emits a constant colour across its surface, and secondly, that we are not considering shadowing.

The earliest source I was able to find on this subject is James Arvo's 1994 paper on polyhedral light sources. John Snyder's work is another excellent reference. The approach taken here is to compute a vector field such that, at any point in space r, we may obtain the direction of the light. This vector field is a function of the position r and of the polygon's vertices. This direction can then be substituted into your BRDF for a particular surface.

Arvo's algorithm is simple: for each edge of the polygon he computes a normal named gamma, and a weighting value named theta. The normal is of the triangle formed by the polygon edge and the point r. The weight theta is the angle formed by the pair of vectors from the point r to each edge vertex. He forms a weighted sum of these normals, divides it by 2*PI (to normalise the weights returned by acos) and scales it by the light's intensity. This is all you need to perform basic area lighting.
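To make the construction concrete, here is a minimal CPU-side sketch of that weighted sum in Python. The function and helper names are my own for illustration; it mirrors the per-edge theta/gamma computation described above (I've also clamped the acos input, which the maths doesn't need but floating point does):

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def normalize(a):
    l = math.sqrt(dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

def light_field_direction(r, polygon):
    # Weighted sum of per-edge triangle normals, as in Arvo's construction.
    # Each edge runs from polygon[i-1] to polygon[i].
    field = [0.0, 0.0, 0.0]
    for i in range(len(polygon)):
        d0 = sub(polygon[i], r)      # point r to current edge vertex
        d1 = sub(polygon[i - 1], r)  # point r to previous edge vertex
        # theta: the angle subtended at r by this edge (the weight)
        cos_theta = max(-1.0, min(1.0, dot(normalize(d0), normalize(d1))))
        theta = math.acos(cos_theta)
        gamma = normalize(cross(d1, d0))  # normal of the triangle (edge, r)
        field = [f + theta * g for f, g in zip(field, gamma)]
    # Normalising at the end makes the division by 2*PI unnecessary.
    return normalize(tuple(field))
```

For a unit quad in the plane z = 1 and a point at the origin, with the winding used in the HLSL below, this returns approximately (0, 0, 1): the field points straight at the quad, before any sign flip to match a particular shading convention.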

There are a couple of twists. Firstly, you typically need to negate the computed vector to get it into a form ready to dot against your normal. Secondly, you must clip the polygon against the plane of the point being shaded: it is perfectly possible for a valid light source to straddle that plane, and only the portion above it should contribute light.


Basic HLSL code

Here is the HLSL code to perform the basic calculation to obtain the light vector for a polygon at a given point. I process quads here; this is not a restriction of the algorithm, but purely a way to minimise buffer traffic.

// Derive quad vertices in world-space
float3 quadCentre = g_quadLights[lightIndex].pos;
float3 quadAxisX = g_quadLights[lightIndex].axisX * 0.5f;
float3 quadAxisY = g_quadLights[lightIndex].axisY * 0.5f;
float4 quadVertices[4] = 
{
    float4(quadCentre - quadAxisX - quadAxisY, 1.0f),
    float4(quadCentre + quadAxisX - quadAxisY, 1.0f),
    float4(quadCentre + quadAxisX + quadAxisY, 1.0f),
    float4(quadCentre - quadAxisX + quadAxisY, 1.0f),
};

float3 lightField = 0.0f;
uint polyVertex1 = 3;
for(uint polyVertex0 = 0; polyVertex0 < 4; ++polyVertex0)
{
    float3 delta0 = quadVertices[polyVertex0].xyz - worldSpacePos;
    float3 delta1 = quadVertices[polyVertex1].xyz - worldSpacePos;
    float3 edge0 = normalize(delta0);
    float3 edge1 = normalize(delta1);
    float theta = acos(clamp(dot(edge0, edge1), -1.0f, 1.0f)); // Clamp guards against NaNs from rounding error

    float3 gamma = normalize(cross(delta1, delta0)); // Note you can't just cross edge0 and edge1 to save the normalize - that gets you the wrong result!

    lightField += theta * gamma;
    polyVertex1 = polyVertex0;
}

// Note that we don't divide by 2*PI as in the paper. That is just there to account for the sum of the theta values
// We don't need it since we're doing a normalise - it's just a uniform scale

// Summing normalised vectors doesn't necessarily give a normalised result. Also flip the vector ready for light calculations
lightField = -normalize(lightField);


Clipping

We must now deal with the problem of clipping. This is a particularly awkward operation, especially when it may need to be done per-pixel. Firstly, we must establish whether we actually need to clip. Since the quads are formed using a quad centre and two axes, this may be handled using something very similar to an OBB vs plane test:

float criticalDistance = abs(dot(worldSpaceNormal, quadAxisX)) + abs(dot(worldSpaceNormal, quadAxisY));
float planeToQuadCentreDistance = dot(worldSpacePlane, float4(quadCentre, 1.0f));
if(planeToQuadCentreDistance < -criticalDistance)
{
    // Fully behind the plane - the light is invisible
}
else if(planeToQuadCentreDistance > criticalDistance)
{
    // Fully in front of the plane - fully visible, unclipped
}
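The same classification can be sketched on the CPU. In this Python illustration (the names are my own), the plane is (nx, ny, nz, d) with the convention n·p + d = 0, and the axes are the half-extent vectors of the quad:

```python
def classify_quad_vs_plane(plane, centre, half_axis_x, half_axis_y):
    # Project the quad's half-extents onto the plane normal; if the centre is
    # further from the plane than that total, the quad is entirely on one side.
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    n = plane[:3]
    critical = abs(dot(n, half_axis_x)) + abs(dot(n, half_axis_y))
    distance = dot(n, centre) + plane[3]
    if distance < -critical:
        return "invisible"   # entirely behind the shading plane
    if distance > critical:
        return "unclipped"   # entirely in front - no clipping required
    return "straddles"       # must be clipped
```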

We then use standard Sutherland-Hodgman clipping to trim the polygon against the plane:

float4 clippedPolygon[5]; // Clipping against 1 plane can add one extra vertex to the quad
uint numClippedVertices = 0;

// Clip the polygon against the plane of the point being shaded
float vertexPlaneSide[4] =
{
    sign(dot(quadVertices[0], worldSpacePlane)),
    sign(dot(quadVertices[1], worldSpacePlane)),
    sign(dot(quadVertices[2], worldSpacePlane)),
    sign(dot(quadVertices[3], worldSpacePlane)),
};

uint previous = 3;
for(uint current = 0; current < 4; ++current)
{
    float currentSide = vertexPlaneSide[current];
    if(currentSide < 0.0f)
    {
        // Current vertex is behind the plane; emit the intersection if the edge crosses it
        if(vertexPlaneSide[previous] > 0.0f)
        {
            float4 v1 = quadVertices[previous];
            float4 v2 = quadVertices[current];
            float3 delta = v2.xyz - v1.xyz;
            float t = dot(worldSpacePlane, v2) / dot(worldSpacePlane.xyz, delta);
            clippedPolygon[numClippedVertices++] = float4(v2.xyz - delta * t, 1.0f);
        }
    }
    else
    {
        float4 v1 = quadVertices[current];

        // Crossing from behind the plane to in front: emit the intersection first
        if(currentSide > 0.0f && vertexPlaneSide[previous] < 0.0f)
        {
            float4 v2 = quadVertices[previous];
            float3 delta = v2.xyz - v1.xyz;
            float t = dot(worldSpacePlane, v2) / dot(worldSpacePlane.xyz, delta);
            clippedPolygon[numClippedVertices++] = float4(v2.xyz - delta * t, 1.0f);
        }

        clippedPolygon[numClippedVertices++] = v1;
    }

    previous = current;
}

The resulting polygon may then be used in the original light field calculation code, with the loop running over numClippedVertices vertices rather than a fixed four.
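For reference, here is the same one-plane Sutherland-Hodgman clip written as a Python sketch (names are my own; it keeps the half-space where n·p + d >= 0):

```python
def clip_polygon_to_plane(polygon, plane):
    # polygon: list of (x, y, z) vertices; plane: (nx, ny, nz, d)
    def dot3(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def side(p): return dot3(plane[:3], p) + plane[3]
    def intersect(a, b, sa, sb):
        # Point on segment a->b where the plane is crossed
        t = sa / (sa - sb)
        return tuple(a[k] + t * (b[k] - a[k]) for k in range(3))

    out = []
    previous = polygon[-1]
    for current in polygon:
        s_cur, s_prev = side(current), side(previous)
        if s_cur >= 0.0:
            if s_prev < 0.0:
                out.append(intersect(previous, current, s_prev, s_cur))
            out.append(current)  # keep vertices in front of the plane
        elif s_prev > 0.0:
            out.append(intersect(previous, current, s_prev, s_cur))
        previous = current
    return out
```

Clipping a quad that straddles the plane yields a polygon whose vertices all lie on or in front of it; clipping a convex polygon against a single plane can add at most one vertex, as noted above.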

Thursday, 28 March 2013

Thoughts on VPL Generation Using Cube Maps

Recently, I've been doing a lot of research and thinking about realtime GI and related ideas. A common theme in realtime GI is the use of Virtual Point Lights (VPLs), a way to approximate the radiance bouncing off a surface after interacting with its material. A VPL is only formed where there is a definite light-to-surface interaction, so it can be used to simulate light bouncing off geometry.

One technique in use today for generating VPLs is the Reflective Shadow Map algorithm. Here, we render the scene from the light's point of view, and every pixel in the output image represents a surface receiving primary illumination from the light source. This is obviously a fairly efficient way of finding all the surfaces that receive radiance from a given light source without too much wastage. You can then construct VPLs using that image, reduce the working set and inject them into some lighting algorithm to bounce light around your scene.
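As a sketch of that last step (in Python; the buffer layout and names here are my own assumption, not a fixed format), reducing the working set can be as crude as strided subsampling of the RSM texels:

```python
def rsm_to_vpls(rsm_texels, stride=4):
    # rsm_texels: list of (position, normal, reflected_flux) tuples, one per
    # RSM pixel. Keeping every stride-th texel is the simplest way to reduce
    # the working set; importance sampling by flux is the usual refinement.
    vpls = []
    for i in range(0, len(rsm_texels), stride):
        position, normal, flux = rsm_texels[i]
        vpls.append({"position": position, "normal": normal, "flux": flux})
    return vpls
```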

The difficulty is that this may not scale well, as each new light source requires a new RSM render.

One alternative I considered was the possibility of using a cube map to sample the primary illumination from multiple lights simultaneously. You would place your cube map at your camera position, or some other meaningful point of interest, then render the scene into each cube map face. As you render it, you light it from all of your light sources simultaneously, with shadow maps too if desired. Obviously this would be an expensive render, but it would only be done once; you can vary its resolution, and you can always use techniques like tiled lighting to accelerate it if you have many lights.

When complete, this cube map holds the radiance at the first light bounce for many surfaces in the scene near the point of interest, i.e. the camera. It contains samples for surfaces all around - both in front of and behind the camera - which is important for GI. You may then use this cube map to generate VPLs for further bounce lighting.
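One detail worth noting if such texels are turned into VPLs: texels near a cube face's corners subtend less solid angle than those at its centre, so each sample should be weighted accordingly. A Python sketch of the standard per-texel solid angle approximation (my own illustration, not part of any tested implementation):

```python
import math

def texel_solid_angle(u, v, texel_size):
    # Solid angle subtended by a cube-map texel centred at face coordinates
    # (u, v) in [-1, 1], approximated as area * cos(theta) / r^2 evaluated at
    # the texel centre; each face sits at distance 1 from the cube's centre.
    return (texel_size * texel_size) / (1.0 + u*u + v*v) ** 1.5

def total_solid_angle(face_resolution):
    # Summed over all six faces this should approach 4*pi (the full sphere),
    # which is a handy sanity check on the weighting.
    step = 2.0 / face_resolution
    total = 0.0
    for i in range(face_resolution):
        for j in range(face_resolution):
            u = -1.0 + (i + 0.5) * step
            v = -1.0 + (j + 0.5) * step
            total += texel_solid_angle(u, v, step)
    return 6.0 * total
```

At a face resolution of 64, the six-face total comes out very close to 4*pi, confirming the per-texel weights are sensible.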

Now, this is as-yet a completely untested idea. I will get round to it, though. I thought I'd throw it out there as an idea for using a commonly-accelerated rendering structure to try to help the scalability of realtime GI solutions.