Accessing the depth buffer in GLSL

Started by oli3012, December 16, 2009, 07:43:00 AM



Hi! I am currently trying to implement DOF (depth of field) in GeeXLab 0.1.14, but I am unable to access the depth buffer. I started from a shader in the shader library, and this is what my code looks like:

   <render_texture name="sceneBuffer" type="FRAMEBUFFER" >
      <attach_texture_depth pixel_format="DEPTH" />
      <attach_render_buffer type="DEPTH" />
   </render_texture>

   <post_processing name="MainPostProcessing">
      <effect name="Dream_Effect">
         <phase name="Dream_Effect_Phase">
            <step name="_1" target="FRAMEBUFFER" gpu_shader="DreamShader" >
               <texture render_texture="sceneBuffer" render_texture_type="DEPTH" render_texture_index="0" />
            </step>
         </phase>
      </effect>
   </post_processing>

   <shader_program name="DreamShader" >
      <constant_1i name="sceneTex" value="0" />
   </shader_program>

   // Vertex shader
   void main(void)
   {
      gl_Position = ftransform();
      gl_TexCoord[0] = gl_MultiTexCoord0;
   }

   // Fragment shader
   uniform sampler2D sceneTex; // unit 0
   void main()
   {
      vec2 uv = gl_TexCoord[0].xy;
      float c = texture2D(sceneTex, uv).x; // depth is stored in the first component
      gl_FragColor = (c == 0.0) ? vec4(1.0) : vec4(0.0);
   }

My screen is completely white, which means the depth of every pixel reads as 0... Could someone check whether I made an error here, or send me a piece of code that can access the depth buffer? Thank you for your help!
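For reference, a depth-texture sample returns the depth in its first component (not as a plain float), and the stored value is non-linear: with a perspective projection most of the visible scene maps to values close to 1.0, not 0.0. A minimal fragment-shader sketch that reads the depth and linearizes it for display; the `zNear`/`zFar` constants are placeholders you would match to your own camera:

```glsl
uniform sampler2D sceneTex; // depth attachment bound to unit 0

// Placeholder clip planes -- match these to your camera setup.
const float zNear = 1.0;
const float zFar  = 1000.0;

// Convert a non-linear depth-buffer value in [0,1] back to an eye-space distance.
float linearizeDepth(float d)
{
    return (2.0 * zNear * zFar) / (zFar + zNear - (2.0 * d - 1.0) * (zFar - zNear));
}

void main()
{
    float d = texture2D(sceneTex, gl_TexCoord[0].xy).x; // depth is in .x
    float gray = linearizeDepth(d) / zFar;              // remap to [0,1] for display
    gl_FragColor = vec4(vec3(gray), 1.0);
}
```

If the whole screen comes out a single shade with this shader, the depth attachment is most likely not bound to the sampler at all.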


I'm preparing a small demo to display the depth buffer...


Wow, thank you! This really helped me out! I finally achieved the effect I wanted: a dynamic DOF using a 5x5 Gaussian blur. Here is a screenshot of the result:

There are still some optimizations to do: I must find a way to reduce the bleeding around near objects, and I will try manual downsampling to get better performance, but it's a start!
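A 5x5 Gaussian blur like the one mentioned above is normally done as two separable 1D passes (horizontal then vertical). A sketch of the horizontal pass, assuming a `texWidth` uniform like the one used later in this thread; the weights are the standard binomial coefficients 1-4-6-4-1 normalized by 16, not taken from the original demo:

```glsl
uniform sampler2D sceneTex;  // color buffer on unit 0
uniform float texWidth;      // window width in pixels

void main()
{
    vec2 uv  = gl_TexCoord[0].xy;
    float px = 1.0 / texWidth; // one texel step along x

    // 1-4-6-4-1 binomial weights, normalized by 16.
    vec4 c = texture2D(sceneTex, uv - vec2(2.0 * px, 0.0)) * (1.0 / 16.0)
           + texture2D(sceneTex, uv - vec2(px, 0.0))       * (4.0 / 16.0)
           + texture2D(sceneTex, uv)                       * (6.0 / 16.0)
           + texture2D(sceneTex, uv + vec2(px, 0.0))       * (4.0 / 16.0)
           + texture2D(sceneTex, uv + vec2(2.0 * px, 0.0)) * (1.0 / 16.0);

    gl_FragColor = c;
}
```

The vertical pass is identical with the offsets applied to y and a `texHeight` uniform instead.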

If someone wants to try it, here is the code:


Absolutely nice work man!
I think I'll publish your demo officially on Geeks3D front page.

I recommend managing window resizing with a SIZE script like this:

<script name="resize" run_mode="SIZE" >
  local w, h = HYP_Scene.GetWindowSize()
  local id = HYP_GPUShader.GetId("HorizontalBlur")
  HYP_GPUShader.SetConstant_1f(id, "texWidth", w)
  id = HYP_GPUShader.GetId("VerticalBlur")
  HYP_GPUShader.SetConstant_1f(id, "texHeight", h)
</script>



I finally managed to eliminate most of the visual artifacts in my DOF shader! My biggest problem was the bleeding effect caused by merging the blur buffer with the scene. I played a bit with the kernel and sampler values and finally found a way to attenuate it.

Still, I wanted to try other approaches, so I made another version of the DOF shader. In this one, instead of blurring the whole scene and merging it with the initial image based on the depth buffer, I do the blur passes with the kernel size interpolated as a function of the depth buffer. This technique is far more GPU-intensive, but it yields better results: it completely eliminates the bleeding artifact, gives a smoother blur, and gives the impression of a smoother focal transition when moving the camera.
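A sketch of that second approach: scale the sample offsets by a blur amount derived from the depth buffer, so out-of-focus pixels use a wider kernel while in-focus pixels are left nearly untouched. The `focalDepth` and `blurScale` uniform names are illustrative assumptions, not taken from the original demo:

```glsl
uniform sampler2D sceneTex;  // color buffer
uniform sampler2D depthTex;  // depth buffer
uniform float texWidth;      // window width in pixels
uniform float focalDepth;    // depth value that stays in focus (assumed uniform)
uniform float blurScale;     // maximum kernel radius in texels (assumed uniform)

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    float d = texture2D(depthTex, uv).x;

    // Kernel radius interpolated from the distance to the focal plane.
    float amount = clamp(abs(d - focalDepth) * blurScale, 0.0, blurScale);
    float px = amount / texWidth;

    // Same 1-4-6-4-1 Gaussian weights, but with a per-pixel step size.
    vec4 c = texture2D(sceneTex, uv - vec2(2.0 * px, 0.0)) * (1.0 / 16.0)
           + texture2D(sceneTex, uv - vec2(px, 0.0))       * (4.0 / 16.0)
           + texture2D(sceneTex, uv)                       * (6.0 / 16.0)
           + texture2D(sceneTex, uv + vec2(px, 0.0))       * (4.0 / 16.0)
           + texture2D(sceneTex, uv + vec2(2.0 * px, 0.0)) * (1.0 / 16.0);

    gl_FragColor = c;
}
```

Because the depth is sampled per pixel inside the blur pass, this costs an extra texture fetch (and the depth buffer must be bound alongside the color buffer), which is where the extra GPU cost comes from.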

Here is a picture that shows the two techniques:

The fast DOF is probably better for games, though: the visual difference isn't very noticeable, but performance is far better (2x faster: 400 fps vs 200 fps on my 8800 GT).

Here is the code if you want to check it out!


Oh yesss! I finally found a way to optimize my DOF shader with manual downsampling :P Now it is as fast as the fast DOF, but without losing quality!

Here is the link:


If the link is broken, you can use this one:

Crysis-Like post-effect, here it is :P



Maybe a little remark: I don't know if you can use a texture at the same time as both source and destination, like you do here with blurBuffer:

<step name="_2" target="blurBuffer" gpu_shader="VerticalBlur" >
  <texture render_texture="blurBuffer" render_texture_type="COLOR" />
  <texture render_texture="sceneBuffer" render_texture_type="DEPTH" />
</step>

I think you should use another render texture and do some ping-pong between two render textures:

<render_texture name="blurBuffer1" type="COMPOSITE" />
<render_texture name="blurBuffer2" type="COMPOSITE" />

<step name="_1" target="blurBuffer2" gpu_shader="VerticalBlur" >
  <texture render_texture="blurBuffer1" render_texture_type="COLOR" />
  <texture render_texture="sceneBuffer" render_texture_type="DEPTH" />
</step>

<step name="_2" target="blurBuffer1" gpu_shader="HorizontalBlur" >
  <texture render_texture="blurBuffer2" render_texture_type="COLOR" />
  <texture render_texture="sceneBuffer" render_texture_type="DEPTH" />
</step>


Mmmmm, this will probably change the visual appearance of the blur; my guess is that it will sharpen it a bit. I will try it, though; that's a really good point.

And by the way, if you're still interested in posting this demo on Geeks3D, there's absolutely no problem. In fact, I'd really be flattered!


I tested what you suggested and did some ping-pong between two blur buffers. It was a worthwhile experiment, and it confirmed what I was thinking: it produces a sharper, more aliased blur. The reason is that when using the same buffer for input and output, once a pixel is blurred it is written to the output buffer, so other pixels do their blur computations with the values of already-blurred pixels. This creates a very smooth blur and helps reduce aliasing. Performance is not affected by using two buffers.

I must admit, though, that I don't fully understand how it works, because modern GPUs are supposed to do their computations in parallel, so it cannot be guaranteed that all the neighbors of the currently processed pixel are done processing. My guess is that the computing time per pixel varies each frame, and thus so does the resulting image, but the difference is so small that we don't see it.

In my opinion, using the same buffer for input and output produces a better-looking blur, so dof_demo3.rar (see my previous post) will most likely be the final version of my DOF shader.


OK, if the same texture can be used for input and output at the same time, then your technique works fine. I don't know why, but I thought we couldn't use a texture that way in a render texture.

I will publish a post with your demo and add it to the GeeXLab code samples repository.

Do you have a blog or something like this?

PS: are you french?


Yes, I am French :P Did I make some English mistakes? :P Well, I should say that I *speak* French; I live in Quebec. And no, I don't have a blog.

I tried GeeXLab a few weeks ago, so I am quite a beginner with this tool, but I must say it is so easy to learn and use that it's now hard for me to go back to the tools/engines I used before for shaders and demos. I've seen and used a lot of engines, but really, GeeXLab (and Demoniak3D) is unique. It is a good compromise between tool and engine, offering the control that only a programming language can provide together with the ease of use of RAD applications. I can only admire the work you've done creating this tool, and I will probably use it extensively in my future projects.


Oh no, your English is perfectly fine, better than mine.
I googled your nickname and found some messages in French; that explains my question ;)


So welcome to the little GeeXLab community. There is also a French-language blog about GeeXLab and similar tools:
It is updated less often than Geeks3D, but it's just getting started (things should pick up from January), so come by and have a look from time to time. And if you'd like to become a contributor on the HackLAB, that's something that could be arranged...


Thank you for your nice feedback about GeeXLab

I'm preparing a big update of GeeXLab with new features, I hope to release it in early January...

