How do you represent a normal or texture coordinate using GLshorts?


A lot of suggestions on improving the performance in iPhone games revolve around sending less data to the GPU. The obvious suggestion is to use GLshorts instead of GLfloat wherever possible, such as for vertices, normals, or texture coordinates.

What are the specifics when using a GLshort for a normal or a texture coordinate? Is it possible to represent a GLfloat texture coordinate of 0.5 when using a GLshort? If so, how do you do that? Would it just be SHRT_MAX/2? That is, does the range of 0 to 1 for a GLfloat map to 0 to SHRT_MAX when using a GLshort texture coordinate?

What about normals? I've always created normals with GLfloats and normalized them to unit length. When using a GLshort for a normal, are you sending a non-normalized vector to the GPU? If so, when and how is it normalized? By dividing all components by SHRT_MAX?


The OpenGL ES 1.1 specification says that normals are automatically brought back into the [-1, 1] (or [0, 1]) range when using integer types. (See Table 2.7 of the specification for the full list of formulae.)

(for shorts)   n_x = (2c + 1) / (2^16 − 1), where c is the GLshort component value

So you don't need to rely on GL_NORMALIZE for normals (and can use whatever trick you want).
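To go the other way when building the VBO, you can invert that mapping on the CPU. A minimal sketch, assuming ES 1.1 on iOS; the helper names float_to_short_normal and pack_normal are made up for illustration:

#include <math.h>
#include <OpenGLES/ES1/gl.h>

/* Inverse of f = (2c + 1) / (2^16 - 1): c = (f * 65535 - 1) / 2, clamped to the GLshort range. */
static GLshort float_to_short_normal(float f)
{
    float c = (f * 65535.0f - 1.0f) * 0.5f;
    if (c >  32767.0f) c =  32767.0f;
    if (c < -32768.0f) c = -32768.0f;
    return (GLshort)lroundf(c);
}

/* Pack one unit-length float normal into three GLshorts for the vertex array. */
static void pack_normal(const float n[3], GLshort out[3])
{
    out[0] = float_to_short_normal(n[0]);
    out[1] = float_to_short_normal(n[1]);
    out[2] = float_to_short_normal(n[2]);
}

At draw time you would then point OpenGL at the packed data with glNormalPointer(GL_SHORT, 0, packedNormals).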

Texture coordinates, however, do not get scaled (values outside the [0, 1] range are perfectly valid). If you want to apply such a scaling, your best bet is to use the texture matrix, at a somewhat significant cost.

glMatrixMode(GL_TEXTURE);
glLoadMatrixf(matrix_that_does_conversion_based_on_type);
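For example, if the texture coordinates were stored as GLshorts premultiplied by 32767 (so 0..32767 maps back to 0.0..1.0), a sketch of the setup might look like this; the array name shortTexCoords is illustrative:

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(1.0f / 32767.0f, 1.0f / 32767.0f, 1.0f);  /* undo the premultiplication */
glMatrixMode(GL_MODELVIEW);

glTexCoordPointer(2, GL_SHORT, 0, shortTexCoords); /* each value is coordinate * 32767 */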


I have had success using GLshorts as texture coordinates by multiplying the float values by 1000 when creating the VBO, then dividing by 1000 in the shader. Of course, you'll have to weigh the extra computation against the memory savings on your device.
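A rough sketch of that approach under ES 2.0; the attribute names, texCoordLoc, and the factor of 1000 (from above) are all illustrative:

/* CPU side: float coordinates premultiplied by 1000 and stored as GLshorts
   (0.5 becomes 500, 1.0 becomes 1000). */
static const GLshort texCoords[] = {
       0,    0,
    1000,    0,
       0, 1000,
    1000, 1000,
};
glVertexAttribPointer(texCoordLoc, 2, GL_SHORT, GL_FALSE, 0, texCoords);
glEnableVertexAttribArray(texCoordLoc);

/* Vertex shader: divide back down to the 0..1 range. */
static const char *vertexShaderSrc =
    "attribute vec4 a_position;            \n"
    "attribute vec2 a_texcoord;            \n"
    "varying vec2 v_texcoord;              \n"
    "void main() {                         \n"
    "    v_texcoord = a_texcoord / 1000.0; \n"
    "    gl_Position = a_position;         \n"
    "}                                     \n";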

This comes up pretty high when Googling how to use GLshort to increase performance, so apologies for posting on such an old thread.


Normals

If you glEnable(GL_NORMALIZE), then you can submit normals as GL_BYTE (or GL_SHORT). A normal submitted as the signed bytes 127, 102, 25 is normalized by the GPU to roughly (0.77, 0.62, 0.15).

Note that there is a small performance penalty from GL_NORMALIZE because the GPU has to normalize each normal.
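A minimal sketch of that setup under ES 1.1; the array name byteNormals is illustrative:

/* One signed byte triple per vertex; 127, 102, 25 comes out as roughly (0.77, 0.62, 0.15). */
static const GLbyte byteNormals[] = {
    127, 102, 25,
};

glEnable(GL_NORMALIZE);                   /* have the GPU renormalize each normal */
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_BYTE, 0, byteNormals); /* stride 0 = tightly packed triples */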

Another caveat with GL_NORMALIZE is that you can't do lighting trickery by using un-normalized normals.

Edit: By 'trickery', I mean adjusting the length of a normal in the source data (to a value other than 1.0) to make a vert brighter or darker.

Texture Coordinates

As far as I can tell, integers (bytes or shorts) are less useful for texture coordinates. There's no easy call to instruct OpenGL to 'normalize' your texture coordinates: 0 means 0.0, 1 means 1.0, and 255 means 255.0 (for tiling). There's no way to specify fractional values in between.

However, don't forget about the texture matrix. You might be able to use it to transform the integers into useful texture coordinates. (I haven't tried.)
