I am trying to load MD2 files in OpenGL, but I noticed that most example programs just use a
precompiled list of normals, something like this:
//table of precalculated normals
{ -0.525731f, 0.000000f, 0.850651f },
{ -0.442863f, 0.238856f, 0.864188f },
{ -0.295242f, 0.000000f, 0.955423f },
{ -0.309017f, 0.500000f, 0.809017f },
...
...
OK, this may sound a bit dumb, but I thought each model is made of different triangles, so how is it possible to use one set of precompiled normals to render all models? It seems a bit strange, and any ideas will be appreciated.
You could use a precompiled table of normals together with a lookup scheme that selects one that is 'good enough' for a particular case. Each triangle lies in a distinct plane, and it's that plane that has the normal, not the triangle itself.
For instance, let's imagine we have a point. Expand that point into a sphere for the purposes of this discussion; it makes the idea a little easier to grasp. If you draw a perfect circle around that sphere on the y axis, then rotate that circle about the x axis 1 degree at a time, you'll end up with 360 circles. If you then take a point at 1-degree intervals along each of those circles, you'll end up with 360 × 360 points. For each one, the normal is the vector from the center of the sphere to that point on the surface, and it is the normal of the plane tangent to the sphere at that point. Calculate this for every point and you end up with a precalculated table of normals, which will almost certainly be good enough for most situations. Now you just need to design a lookup scheme for that data (plane -> normal).
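To make that concrete, here is a minimal sketch in C of one way to build such a table and look entries up. The table resolution, the function names, and the direct angle-to-index lookup are all assumptions for illustration, not what Quake2 or any MD2 tool actually does:

#include <math.h>

#define LAT_STEPS 180            /* illustrative: steps in the polar angle   */
#define LON_STEPS 360            /* illustrative: steps in the azimuth angle */

static float normal_table[LAT_STEPS][LON_STEPS][3];

/* Fill the table: each entry is the unit vector from the sphere's center
 * to a point on its surface, which is also the normal of the plane
 * tangent to the sphere at that point. */
void build_normal_table(void)
{
    const float PI = 3.14159265358979f;
    for (int i = 0; i < LAT_STEPS; ++i) {
        float theta = PI * (i + 0.5f) / LAT_STEPS;
        for (int j = 0; j < LON_STEPS; ++j) {
            float phi = 2.0f * PI * j / LON_STEPS;
            normal_table[i][j][0] = sinf(theta) * cosf(phi);
            normal_table[i][j][1] = sinf(theta) * sinf(phi);
            normal_table[i][j][2] = cosf(theta);
        }
    }
}

/* Simple lookup scheme: convert a unit direction back to approximate
 * (polar, azimuth) indices and return the table entry they select.
 * Assumes (x, y, z) is already normalized. */
const float *lookup_normal(float x, float y, float z)
{
    const float PI = 3.14159265358979f;
    float theta = acosf(z);                   /* 0 .. PI   */
    float phi = atan2f(y, x);                 /* -PI .. PI */
    if (phi < 0.0f) phi += 2.0f * PI;         /* 0 .. 2*PI */
    int i = (int)(theta / PI * LAT_STEPS);
    int j = (int)(phi / (2.0f * PI) * LON_STEPS);
    if (i >= LAT_STEPS) i = LAT_STEPS - 1;
    if (j >= LON_STEPS) j = LON_STEPS - 1;
    return normal_table[i][j];
}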
It's been answered already but I want to shed some more light on it.
The table contains vectors that cover the unit sphere's surface fairly uniformly; the set of 162 vectors appears to be the corners of a subdivided icosahedron. This is done to lossily compress unit-length 3D vectors to an index (8 bits); see vector quantization. To store an arbitrary normal vector, you search the table for the closest match and store the index of that match instead. With this table of 162 well-distributed vectors, the angle between the original vector and the approximated one is expected to be below 11°, which seems to be good enough for the Quake2 engine.
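As a rough sketch of that quantization step (the name anorms_table is assumed here; the real 162 directions live in Quake2's anorms.h), an exporter can pick the byte index whose table vector has the largest dot product with the original normal, since for unit vectors the largest dot product means the smallest angle:

/* Compress a unit normal to a byte index into the precalculated table.
 * anorms_table stands in for the 162 directions from anorms.h. */
extern const float anorms_table[162][3];

unsigned char compress_normal(const float n[3])
{
    int best = 0;
    float best_dot = -2.0f;
    for (int i = 0; i < 162; ++i) {
        float d = n[0] * anorms_table[i][0]
                + n[1] * anorms_table[i][1]
                + n[2] * anorms_table[i][2];
        if (d > best_dot) {
            best_dot = d;
            best = i;
        }
    }
    return (unsigned char)best;   /* fits in 8 bits because 162 < 256 */
}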
The MD2 file format specifies that each vertex has a "normal index", and this is a lookup into a well-known table of normals. I would assume that these normals are distributed around a sphere. Presumably, the tool that built the model chose the most appropriate of these normals for each vertex.
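To illustrate, here is a hedged sketch of how a loader typically uses that index, assuming the common 4-byte compressed MD2 vertex layout (three position bytes plus one normal-index byte) and the 162-entry table; the names are illustrative:

/* Compressed MD2 vertex as stored in each frame. */
typedef struct {
    unsigned char v[3];              /* compressed vertex position         */
    unsigned char lightnormalindex;  /* index into the precalculated table */
} md2_vertex_t;

extern const float anorms_table[162][3];

/* Decode one vertex: rescale the position with the frame's scale and
 * translate values, and look the normal up by its index. */
void decode_vertex(const md2_vertex_t *in, const float scale[3],
                   const float translate[3], float pos[3], float normal[3])
{
    for (int i = 0; i < 3; ++i) {
        pos[i] = in->v[i] * scale[i] + translate[i];
        normal[i] = anorms_table[in->lightnormalindex][i];
    }
}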
With regard to the first answer: if you want a very faceted model (like a cube), then each polygon does indeed have its own normal, and each of the vertices that makes up that polygon should use the same normal vector. However, if you want smooth shading (such as a torso), it's common for each vertex in a polygon to have a different normal vector. This allows the lighting to vary across the polygon, which is useful in both per-vertex and per-pixel lighting scenarios.
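To see the difference in code, here is a small sketch using classic immediate-mode OpenGL (the data layout is hypothetical, not taken from an MD2 loader): flat shading reuses one face normal for all three vertices, while smooth shading supplies a different normal at each vertex so lighting varies across the face.

#include <GL/gl.h>

/* Faceted look: one normal shared by every vertex of the triangle. */
void draw_triangle_flat(const float v[3][3], const float face_normal[3])
{
    glBegin(GL_TRIANGLES);
    glNormal3fv(face_normal);
    glVertex3fv(v[0]);
    glVertex3fv(v[1]);
    glVertex3fv(v[2]);
    glEnd();
}

/* Smooth look: a per-vertex normal, so lighting varies across the face. */
void draw_triangle_smooth(const float v[3][3], const float n[3][3])
{
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < 3; ++i) {
        glNormal3fv(n[i]);
        glVertex3fv(v[i]);
    }
    glEnd();
}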