I'm currently working with OpenGL ES 1.1, drawing via glDrawElements with Vertex, Normal, Texture Coordinate, and Index arrays.
I recently came across this while researching Normal/Bump mapping, which I previously thought impossible with OpenGL ES: http://iphone-3d-programming.labs.oreilly.com/ch08.html
I can already generate an object-space normal map from my 3D modeler. What I'm not completely clear on is whether the Normal array will still be necessary once I add a second texture unit for normal mapping, or whether Lighting + Color Texture combined with a Normal map via the DOT3_RGB option will be all that's required.
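To make the question concrete, here's roughly the state setup I'm picturing (normalMapTex and colorTex are placeholder texture handles, so this sketch may well be part of what I'm getting wrong):

    /* Texture unit 0: object-space normal map, DOT3'd against the
     * light vector that arrives in the primary color. */
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, normalMapTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);

    /* Texture unit 1: color texture, modulated by the DOT3 result. */
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);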
EDIT - After researching DOT3 Lighting a bit further, I'm not sure if the answer given by ognian is correct. This page, http://www.3dkingdoms.com/tutorial.htm, gives an example of its usage, and if you look at the code in the "Rendering & Final Result" section, there is no normal array: the client state for normal arrays is never enabled.
I also found this post, What is DOT3 lighting?, which explains it well... but it leads me to another question. In the comments, it's stated that instead of transforming the normals, you transform the light direction. This confuses me: if I have a game with a stationary wall, why would I move the light around just for one model? Hoping someone can give a good explanation of all of this.
Whereas tangent-space normal maps perturb the normals interpolated from the per-vertex normals, an object-space normal map already contains all the surface-orientation information in the map itself. Therefore, if you're just doing DOT3 lighting in OpenGL ES 1.1, you don't need to supply the normal array at all.
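As a minimal sketch of what the draw path then looks like (array names are placeholders, and I'm assuming a combiner setup like the one in your question):

    /* The DOT3 combiner replaces fixed-function lighting, so neither
     * GL_LIGHTING nor the normal array is enabled. */
    glDisable(GL_LIGHTING);
    glDisableClientState(GL_NORMAL_ARRAY);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);

    /* Both units can share one set of UVs if the normal map was baked
     * with the same parameterization as the color texture. */
    glClientActiveTexture(GL_TEXTURE0);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

    glClientActiveTexture(GL_TEXTURE1);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);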
The reason the other post mentioned transforming the light direction rather than the normals is that both arguments to the dot product (the per-pixel normal and the light vector) need to be in the same coordinate space for the dot product to make any sense. Because you have an object-space normal map, your per-pixel normal will always be in your object's local coordinate space, and the texture environment doesn't provide any means of applying further transformations. Chances are that your light vectors are in some other space, so the transformation that was mentioned is there to convert from that other space back into your object's local space.
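In practice, "moving the light" is just a per-frame CPU-side calculation, not moving anything in the scene. A sketch, assuming a rigid model transform (rotation plus translation, no non-uniform scale) stored as a row-major 3x3 rotation:

    #include <math.h>
    #include <GLES/gl.h>   /* <OpenGLES/ES1/gl.h> on iOS */

    /* Re-express a world-space light direction (pointing toward the
     * light) in the model's object space. For a pure rotation R the
     * inverse is its transpose, so objDir = R^T * worldDir. */
    static void lightDirToObjectSpace(const float rot[9],   /* row-major */
                                      const float worldDir[3],
                                      float objDir[3])
    {
        objDir[0] = rot[0]*worldDir[0] + rot[3]*worldDir[1] + rot[6]*worldDir[2];
        objDir[1] = rot[1]*worldDir[0] + rot[4]*worldDir[1] + rot[7]*worldDir[2];
        objDir[2] = rot[2]*worldDir[0] + rot[5]*worldDir[1] + rot[8]*worldDir[2];
    }

    /* Normalize, remap from [-1, 1] to [0, 1] (the same encoding the
     * normal map uses), and feed the result in as the primary color
     * that the DOT3 combiner dots with the per-pixel normal. */
    static void setDot3LightColor(const float objDir[3])
    {
        float len = sqrtf(objDir[0]*objDir[0] + objDir[1]*objDir[1]
                          + objDir[2]*objDir[2]);
        glColor4f(0.5f * objDir[0] / len + 0.5f,
                  0.5f * objDir[1] / len + 0.5f,
                  0.5f * objDir[2] / len + 0.5f,
                  1.0f);
    }

So for your stationary wall, nothing moves at all: if the wall's model matrix is the identity, object space and world space coincide and the conversion is a no-op. Only models that rotate need their light direction re-expressed each frame.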