[GeeXLab] How to Visualize the Depth Buffer in GLSL

GeeXLab - Display the Depth Buffer



This post can be seen as a reply to this thread. Here is a demo that shows how to visualize the depth buffer in GLSL with GeeXLab.

You can grab the demo here:
Download GXL - Post FX - Depth Buffer
You need GeeXLab 0.1.14 to play the demo. Just unzip the archive somewhere and drop the Display_DepthBuffer.xml file in GeeXLab.

Depth buffer (or z buffer) visualization is implemented with a post processing filter (see details in the demo source code).

On the left part, you see the linearized depth buffer values (see the function LinearizeDepth()), while on the right part you see the values from a direct depth buffer read (these values are non-linear – more explanation below).

Here is the GLSL source code of this post processing shader:

[Vertex_Shader]
void main(void)
{
  gl_Position = ftransform();
  gl_TexCoord[0] = gl_MultiTexCoord0;
}
[Pixel_Shader]
uniform sampler2D sceneSampler; // 0
uniform sampler2D depthSampler; // 1

float LinearizeDepth(vec2 uv)
{
  float n = 1.0; // camera z near
  float f = 100.0; // camera z far
  float z = texture2D(depthSampler, uv).x;
  z = z * 2.0 - 1.0; // depth buffer value [0,1] back to NDC [-1,1]
  return (2.0 * n) / (f + n - z * (f - n));
}
void main() 
{ 
  vec2 uv = gl_TexCoord[0].xy;
  //vec4 sceneTexel = texture2D(sceneSampler, uv);
  float d;
  if (uv.x < 0.5) // left part
    d = LinearizeDepth(uv);
  else // right part
    d = texture2D(depthSampler, uv).x;
  gl_FragColor = vec4(d, d, d, 1.0);
}
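As a sanity check of LinearizeDepth(), the mapping from eye-space distance to depth buffer value and back can be verified numerically. The sketch below assumes a standard OpenGL perspective projection and the default glDepthRange(0, 1); note that the linearization formula expects NDC depth in [-1, 1], so the stored [0, 1] value is remapped first. The helper names are illustrative, not part of the demo:

```python
# Numerical check of the depth mapping used by the shader above.
# Assumes a standard OpenGL perspective projection and glDepthRange(0, 1).
n, f = 1.0, 100.0  # camera z near / z far (same values as the shader)

def depth_buffer_value(d):
    """Depth buffer value in [0,1] for an eye-space distance d in [n,f]."""
    z_ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d)  # NDC z in [-1,1]
    return 0.5 * z_ndc + 0.5

def linearize(z_buf):
    """Recover d/f from the stored depth value (mirrors LinearizeDepth)."""
    z_ndc = 2.0 * z_buf - 1.0  # back to NDC
    return (2.0 * n) / (f + n - z_ndc * (f - n))

for d in (1.0, 2.0, 10.0, 50.0, 100.0):
    z = depth_buffer_value(d)
    print(d, round(z, 4), round(linearize(z), 4))  # last column equals d/f
```

Running this shows that linearize() recovers exactly d/f for every distance, i.e. a value that grows linearly with eye-space distance, which is what makes the left half of the demo readable as a grayscale depth image.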

Some notes:
In a perspective projection, the z buffer value is a non-linear function of eye-space depth: in short, stored depth values are proportional to the reciprocal of the z value in eye space (see here). There is more precision close to the camera (or eye) and less precision far from the eye. But z values are linear in screen space (see here).
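This precision skew is easy to see with numbers. With the demo's near and far planes (n = 1, f = 100) and a standard OpenGL projection, half of the entire [0, 1] depth range is consumed by eye-space distances between 1 and 2, while the far half of the scene squeezes into the last percent. A small sketch (the helper name is illustrative):

```python
# Illustrates the non-linear distribution of depth buffer values for
# n = 1, f = 100 with a standard OpenGL perspective projection.
n, f = 1.0, 100.0

def depth_buffer_value(d):
    """Depth buffer value in [0,1] for an eye-space distance d in [n,f]."""
    z_ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d)
    return 0.5 * z_ndc + 0.5

print(depth_buffer_value(2.0))   # ~0.505: 1% of the distance range uses 50% of the depth values
print(depth_buffer_value(50.0))  # ~0.990: the far half of the scene uses the last ~1%
```

This is also why the right half of the demo (raw depth values) looks almost uniformly white except very close to the camera.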

In this thread, there is a nice explanation about the non-linearity of the depth buffer:

Z is nonlinear because perspective-correct rasterization requires linear interpolation of 1/z -- linear interpolation of z itself does not produce the correct results. The hardware must calculate 1/z at each vertex and interpolate it across a triangle, so it's convenient to just write that value to the depth buffer instead of performing an expensive division at every pixel to recover z.

The fact that you get more z precision closer to the near plane is just a side effect and has nothing to do with the motivation behind 1/z interpolation.
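The interpolation argument from the quote can also be checked numerically: for a segment spanning eye depths z0 and z1, the point halfway across its screen-space projection has a true depth equal to the harmonic mean of z0 and z1 (obtained by interpolating 1/z), not the arithmetic mean (obtained by interpolating z directly). A minimal sketch:

```python
# Perspective-correct interpolation: halfway across the screen-space
# projection of a segment, the correct depth is the harmonic mean of the
# endpoint depths, which linear interpolation of 1/z produces naturally.
z0, z1 = 1.0, 100.0
lerp_z   = 0.5 * (z0 + z1)                  # naive lerp of z: 50.5 (wrong)
lerp_inv = 1.0 / (0.5 * (1.0/z0 + 1.0/z1))  # lerp of 1/z, inverted: ~1.98 (correct)
print(lerp_z, lerp_inv)
```

The huge gap between the two results (50.5 vs. about 1.98) shows why the rasterizer must interpolate 1/z, and hence why that reciprocal value ends up in the depth buffer.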

And here is a collection of links related to the depth buffer and its linearization:



