How to compute average of 2 depth textures by blending?

I need to compute the average of two depth textures at each pixel. I expect I could do that with a GLSL fragment shader, but I'd prefer a solution that works on the dumbest possible hardware, so I tried blending. Here's the main code:

// Initialize color buffer to black
glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
glClear( GL_COLOR_BUFFER_BIT );

// Turn off unnecessary operations
glDisable( GL_DEPTH_TEST );
glDisable( GL_LIGHTING );
glDisable( GL_CULL_FACE );
glDisable( GL_BLEND );
glDisable( GL_STENCIL_TEST );
glDisable( GL_DITHER );

// Set all matrices to identity
glMatrixMode( GL_TEXTURE );
glLoadIdentity();
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
glMatrixMode( GL_PROJECTION );
glLoadIdentity();

glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE );
RenderAQuadTexturedWith( texture1 );

glEnable( GL_BLEND );
glBlendEquation( GL_FUNC_ADD );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
glColor4f( 1.0f, 1.0f, 1.0f, 0.5f );

RenderAQuadTexturedWith( texture2 );
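
For context, RenderAQuadTexturedWith is roughly the following; its real body isn't shown here, so this is just a sketch of a fixed-function helper that binds the depth texture and draws a full-screen quad:

static void RenderAQuadTexturedWith( GLuint texture )
{
    glEnable( GL_TEXTURE_2D );
    glBindTexture( GL_TEXTURE_2D, texture );

    // Sample the depth texture as luminance rather than doing a shadow compare
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE );
    glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE );

    // Full-screen quad in clip space (all matrices are identity)
    glBegin( GL_QUADS );
    glTexCoord2f( 0.0f, 0.0f ); glVertex2f( -1.0f, -1.0f );
    glTexCoord2f( 1.0f, 0.0f ); glVertex2f(  1.0f, -1.0f );
    glTexCoord2f( 1.0f, 1.0f ); glVertex2f(  1.0f,  1.0f );
    glTexCoord2f( 0.0f, 1.0f ); glVertex2f( -1.0f,  1.0f );
    glEnd();
}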

The problem is that the values I'm getting are off by around 0.002, compared with what I get by reading back the pixels of the two textures and computing the average on the CPU. When I set a breakpoint in OpenGL Profiler (this is on Mac OS X 10.6.8) and eyeball the color buffer, it looks about like what I'd expect. Is there some inherent inaccuracy in blend mode?

I also tried setting the current color to 0.5, 0.5, 0.5 and using glBlendFunc( GL_ONE, GL_ONE ), and the errors were in the opposite direction but about the same magnitude.
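
For reference, that variant would look roughly like this (a sketch reconstructed from the description; it relies on GL_MODULATE so the 0.5 color scales each texture before the additive blend):

// Pre-scale each pass by 0.5 via the current color, then add the two passes
glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glColor4f( 0.5f, 0.5f, 0.5f, 1.0f );

glDisable( GL_BLEND );
RenderAQuadTexturedWith( texture1 );

glEnable( GL_BLEND );
glBlendEquation( GL_FUNC_ADD );
glBlendFunc( GL_ONE, GL_ONE );
RenderAQuadTexturedWith( texture2 );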

EDIT TO ADD: In retrospect, I see my mistake clearly: if I render into a render buffer with 8 bits per color component and then read pixels from one of those components, I only have 8 bits of accuracy.
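
(For scale: one step of an 8-bit channel is 1/255 ≈ 0.0039, so rounding the true average to the nearest representable 8-bit value gives errors of up to about 0.002, which matches the discrepancy observed above.)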

So now I need to figure out a way to extract the results without losing accuracy. Maybe a fragment shader that sets gl_FragDepth?
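
Something like the following, perhaps; a minimal sketch assuming legacy GLSL 1.20, with the two depth textures bound to texture units 0 and 1 (the uniform names are made up):

// Hypothetical fragment shader: average two depth textures into gl_FragDepth,
// so the result lands in the depth buffer instead of an 8-bit color channel.
static const char* kAverageDepthFS =
    "uniform sampler2D depthTex0;\n"
    "uniform sampler2D depthTex1;\n"
    "void main()\n"
    "{\n"
    "    float d0 = texture2D( depthTex0, gl_TexCoord[0].st ).r;\n"
    "    float d1 = texture2D( depthTex1, gl_TexCoord[0].st ).r;\n"
    "    gl_FragDepth = 0.5 * (d0 + d1);\n"
    "    gl_FragColor = vec4( 0.0 );   // color output is unused\n"
    "}\n";

Note that for gl_FragDepth to actually reach the depth buffer, depth writes have to be on, e.g. glEnable( GL_DEPTH_TEST ) with glDepthFunc( GL_ALWAYS ) and glDepthMask( GL_TRUE ), since OpenGL skips depth buffer updates entirely while the depth test is disabled.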


The GPU does all calculations in float. Check that your CPU reference is also using float and not double.
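
In code terms, something like this (a sketch; depth1, depth2, and i are placeholders for however the CPU-side comparison is done):

// Keep the CPU reference in single precision so it matches the GPU's arithmetic
float  reference  = 0.5f * ( depth1[i] + depth2[i] );
// Accumulating in double rounds differently and can disagree in the last bits
double reference2 = 0.5  * ( (double) depth1[i] + (double) depth2[i] );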


As I added to my question, the root of the problem was not with depth textures or blend mode, but just that I was rendering into a buffer with 8 bits per color component. It probably would work if I went out of my way to get a higher-precision color buffer. But I went for the alternative of using a fragment program that puts the output in gl_FragDepth, and that seems to work as desired, since I already had a 24-bit depth buffer.
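
Reading the averaged result back out of the depth buffer then preserves the precision, roughly like this (a sketch; width and height are assumed to match the render target):

// GL expands the fixed-point depth values to float on readback,
// so nothing gets squeezed through an 8-bit color channel.
float* averaged = (float*) malloc( width * height * sizeof(float) );
glReadPixels( 0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, averaged );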
