I'm currently programming shadow mapping (cascaded shadow mapping, to be precise) into my C++ OpenGL engine. I therefore want a texture containing the distance between my light source and every pixel in my shadow map. Which texture type should I use?
I saw that there is a GL_DEPTH_COMPONENT texture internal format, but it scales the data I give the texture to [0,1]. Should I invert my length once when I create the shadow map and then a second time during my final rendering to get back the real length? That seems quite useless!
Is there a way to use textures to store lengths without inverting them twice (once at texture creation, once during use)?
I'm not sure what you mean by "invert" (you surely can't mean inverting the distance, as that won't work). What you do is transform the distance to the light source into the [0,1] range.
This can be done by constructing a usual projection matrix for the light source's view and applying it to the vertices in the shadow map construction pass. This way their distance to the light source is written into the depth buffer (to which you can attach a texture with GL_DEPTH_COMPONENT format, either via glCopyTexSubImage or FBOs). In the final pass you of course use the same projection matrix to compute the texture coordinates for the shadow map using projective texturing (with a sampler2DShadow sampler when using GLSL).
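For illustration, here is a minimal sketch of that setup (the texture size, variable names and GL 3.x-style calls are my assumptions, not taken from your code):

    // Sketch: depth texture attached to an FBO for the shadow pass.
    GLuint shadowTex, shadowFbo;
    const GLsizei size = 1024;   // arbitrary shadow map resolution

    glGenTextures(1, &shadowTex);
    glBindTexture(GL_TEXTURE_2D, shadowTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, size, size, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Enable hardware depth comparison so a sampler2DShadow can be used later.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

    glGenFramebuffers(1, &shadowFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, shadowTex, 0);
    glDrawBuffer(GL_NONE);   // depth-only pass, no color output
    glReadBuffer(GL_NONE);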
But this transformation is not linear, as the depth buffer has higher precision near the viewer (or the light source, in this case). Another disadvantage is that you have to know the valid range of the distance values (the farthest point your light source affects). Using shaders (which I assume you do), you can make this transformation linear by just dividing the distance to the light source by this maximum distance and manually assigning the result to the fragment's depth value (gl_FragDepth in GLSL), which is probably what you meant by "invert".
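A rough sketch of such a shadow-pass fragment shader, embedded as a GLSL string in the C++ source (uMaxDistance and vLightSpacePos are names I made up for this example):

    // Writes a linearized depth: distance to the light divided by the maximum
    // distance the light affects. Note that writing gl_FragDepth disables
    // early depth testing for this pass.
    const char* shadowFragSrc = R"(
        #version 330 core
        in vec3 vLightSpacePos;     // fragment position relative to the light
        uniform float uMaxDistance; // farthest distance the light affects
        void main()
        {
            gl_FragDepth = clamp(length(vLightSpacePos) / uMaxDistance, 0.0, 1.0);
        }
    )";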
The division (and the need to know the maximum distance) can be avoided by using a floating-point texture for the light distance: just write the distance out as a color channel and then perform the depth comparison in the final pass yourself (using a normal sampler2D). But linear filtering of floating-point textures is only supported on newer hardware, and I'm not sure this will be faster than a single division per fragment. The advantage of this approach is that it opens the path for things like variance shadow maps, which won't work that well with normal ubyte textures (because of their low precision), nor with depth textures.
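A hedged sketch of that floating-point variant (GL_R32F requires GL 3.0 / ARB_texture_rg; the uniform and varying names are placeholders I made up):

    // Single-channel float texture that will hold the raw light distance.
    GLuint distTex;
    glGenTextures(1, &distTex);
    glBindTexture(GL_TEXTURE_2D, distTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 1024, 1024, 0, GL_RED, GL_FLOAT, nullptr);

    // Shadow pass: write the distance as a color value, no [0,1] mapping needed.
    const char* storeDistanceSrc = R"(
        #version 330 core
        in vec3 vLightSpacePos;
        out float oDistance;
        void main() { oDistance = length(vLightSpacePos); }
    )";

    // Final pass: sample with a plain sampler2D and compare manually.
    const char* finalFragSrc = R"(
        #version 330 core
        uniform sampler2D uShadowMap;  // the float distance texture
        in vec4  vShadowCoord;         // projective shadow-map coordinate
        in float vDistanceToLight;     // this fragment's distance to the light
        out vec4 oColor;
        void main()
        {
            float stored = textureProj(uShadowMap, vShadowCoord).r;
            float lit = (vDistanceToLight - 0.05 <= stored) ? 1.0 : 0.0; // small bias
            oColor = vec4(vec3(lit), 1.0);
        }
    )";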
So to sum up, GL_DEPTH_COMPONENT is just a good compromise between ubyte textures (which lack the necessary precision, as GL_DEPTH_COMPONENT should have at least 16-bit precision) and float textures (which are not that fast, or not fully supported, on older hardware). But due to its fixed-point format you won't get around a transformation into the [0,1] range (be it linear or projective). I'm not sure if floating-point textures would be faster, as you only save one division, but if you are on the newest hardware that supports linear (or even trilinear) filtering of float textures, as well as one- or two-component float textures and render targets, it might be worth a try.
Of course, if you are using the fixed-function pipeline, GL_DEPTH_COMPONENT is your only option, but judging from your question I assume you are using shaders.