So, I've gotten to basic lighting in my OpenGL learning quest.
Imagine the simplest lighting model. Each vertex has a position, color, and normal. The shader receives the ModelViewProjection matrix (MVP), the Modelview matrix (MV), and the Normal matrix (N), calculated as (MV^-1)^T, as well as LightColor and LightDirection as uniforms. The vertex shader performs the lighting calculation; the fragment shader just outputs the interpolated color.
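For concreteness, a minimal sketch of such a vertex shader, assuming the names above (attribute/varying syntax as in GLSL 1.20; this is not taken from any particular tutorial):

    uniform mat4 MVP;            // ModelViewProjection matrix
    uniform mat4 MV;             // Modelview matrix
    uniform mat3 N;              // Normal matrix, (MV^-1)^T
    uniform vec3 LightColor;
    uniform vec3 LightDirection; // assumed to be in eye coordinates already

    attribute vec3 position;
    attribute vec3 color;
    attribute vec3 normal;

    varying vec3 vColor;         // interpolated, then output by the fragment shader

    void main() {
        vec3 nEye = normalize(N * normal);
        // the tutorials' formula, verbatim
        vColor = max(0.0, dot(LightDirection, nEye)) * LightColor * color;
        gl_Position = MVP * vec4(position, 1.0);
    }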
Now, in every tutorial on this subject that I have come across, I see two things that puzzle me. First, LightDirection is assumed to already be in eye coordinates. Second, the output color is calculated as
max(0, dot(LightDirection, N * normal)) * LightColor * Color;
I would expect LightDirection to be negated first; that is, I would think the correct formula is
max(0, dot(-LightDirection, N * normal)) * LightColor * Color;
It seems to be assumed that LightDirection is actually the reverse of the direction in which the light travels.
Q1: Is it an established convention in this model that LightDirection is the vector towards the infinitely far light source rather than the direction of the light flow, or is this not a matter of principle, and it just so happened that the tutorials I came across assumed it?
Q2: If LightDirection is in world coordinates rather than eye coordinates, should it be transformed to eye coordinates with the normal matrix or with the modelview matrix?
Thanks for clarifying these things!
Q1: Is it an established convention in this model that LightDirection is the vector towards the infinitely far light source rather than the direction of the light flow, or is this not a matter of principle, and it just so happened that the tutorials I came across assumed it?
In fixed-function OpenGL, when supplying the position of a light, the 4th component determined whether the light was directional or positional (w = 0 for directional, w = 1 for positional). In the case of a directional light, the supplied vector points towards the (infinitely far away) light source.
In the case of a positional light source, LightDirection is computed per vertex as LightDirection = LightPosition - VertexPosition; for the sake of simplicity, this calculation is done in eye coordinates.
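In shader terms, that per-vertex step might look like this (a sketch; LightPositionEye is an illustrative name for the light position already transformed to eye coordinates on the CPU):

    vec3 vertexEye = (MV * vec4(position, 1.0)).xyz;
    vec3 lightDir  = normalize(LightPositionEye - vertexEye);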
Q2: If LightDirection is in world coordinates rather than eye coordinates, should it be transformed to eye coordinates with the normal matrix or with the modelview matrix?
In fixed-function OpenGL, the light position supplied through glLightfv was transformed by the modelview matrix that was current at call time. If the light was positional, it was transformed by the usual modelview matrix; in the case of a directional light, the normal matrix was used.
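A sketch of the equivalent logic in a shader, following the convention described above (all names are illustrative; as with glLight, w = 0 marks a directional light):

    uniform mat4 viewMatrix;    // world -> eye
    uniform mat3 normalMatrix;  // inverse-transpose of viewMatrix's upper 3x3
    uniform vec4 lightPosWorld; // w = 1: positional, w = 0: directional

    vec3 lightVectorEye(vec3 vertexEye) {
        if (lightPosWorld.w == 0.0)
            return normalize(normalMatrix * lightPosWorld.xyz); // directional
        vec3 lightEye = (viewMatrix * lightPosWorld).xyz;       // positional
        return normalize(lightEye - vertexEye);
    }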
Is it an established convention in this model that LightDirection is the vector towards the infinitely far light source rather than the direction of the light flow, or is this not a matter of principle, and it just so happened that the tutorials I came across assumed it?
Neither. There are simply multiple kinds of lights.
A directional light represents a light source that, from the perspective of the scene, is infinitely far away. It is represented by a vector direction.
A positional light represents a light source that has a position in the space of the scene. It is represented by a vector position.
It's up to your shader's lighting model which kinds it uses; indeed, you could have a directional light and several point lights all affecting the same model.
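For instance, a diffuse term combining one directional and one point light, with everything in eye space, might be sketched as follows (all names illustrative):

    uniform vec3 dirLightDirEye;   // points towards the directional light
    uniform vec3 dirLightColor;
    uniform vec3 pointLightPosEye;
    uniform vec3 pointLightColor;

    vec3 diffuse(vec3 posEye, vec3 nEye, vec3 albedo) {
        vec3 c = max(0.0, dot(normalize(dirLightDirEye), nEye)) * dirLightColor;
        vec3 toLight = normalize(pointLightPosEye - posEye);
        c += max(0.0, dot(toLight, nEye)) * pointLightColor;
        return c * albedo;
    }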
The tutorials you saw simply used directional lights. Though they probably should have at least mentioned the directional light approximation.
If LightDirection is in world coordinates rather than eye coordinates, should it be transformed to eye coordinates with the normal matrix or with the modelview matrix?
Neither. If the light's direction is in world coordinates, then you need to get the normal into world coordinates as well. It doesn't matter what space you do lighting in (though doing it in clip-space or other non-linear post-projective spaces is rather hard); what matters is that everything is in the same space.
The default OpenGL modelview matrix does what it says: it goes from model space to view space (eye space). It passes through world space, but it doesn't stop there. And the default OpenGL normal matrix is just the inverse-transpose of the modelview matrix. So neither of them will get you to world space.
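So if you did want world-space lighting, you would need the model matrix on its own, for example (a sketch only, reusing the attribute names from the question; the uniform names are illustrative):

    uniform mat4 modelMatrix;       // model -> world
    uniform mat3 modelNormalMatrix; // inverse-transpose of modelMatrix's upper 3x3
    uniform vec3 lightDirWorld;     // towards the light, in world space

    // in the vertex shader:
    vec3 nWorld = normalize(modelNormalMatrix * normal);
    vColor = max(0.0, dot(lightDirWorld, nWorld)) * LightColor * color;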
In general, you should not do lighting (or anything else on the GPU) in world space, for reasons best explained elsewhere. In order to do lighting in world space, you need to have matrices that transform into world space. My suggestion would be to do it right: put the light direction in eye space, and leave world space to CPU code.