Left-handed calculations in OpenGL

I'm jumping back into the world of game programming after a two-year hiatus. Unfortunately, most of my knowledge pertaining to 3D math is rather rusty, so bear with me.

My engine and game were originally designed for DirectX, which is a left-handed system that uses a row-major Matrix structure. My math code is all home-brew and works perfectly within the confines of that system. I'm at the point where I want to give my game an OpenGL renderer. Since all my math uses a left-handed, row-major Matrix system (for example, to create a projection matrix), how hard would it be to port my math to OpenGL's right-handed, column-major conventions?

Is it a matter of transposing the matrix and sticking the values into a column-major struct? Or am I oversimplifying this?


It depends. Are we talking shader-based OpenGL or fixed-function (FF)?

In FF land, what you need to do is use gluPerspective (or glFrustum) to generate your perspective matrix, using parameters similar to those you would give to your code under D3D. Then you need to transpose the matrices you would compute for D3D (leaving out the projection component of the computation) to make them column-major, which is the layout glLoadMatrix/glMultMatrix expect.
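As a rough sketch of what that transpose looks like, assuming a home-brew row-major matrix stored as m[row][col] (the struct name and layout here are placeholders for whatever your own math code uses): glLoadMatrixf/glMultMatrixf take a flat array of 16 floats in column-major order, so element (row, col) ends up at index col * 4 + row.

struct Matrix
{
    float m[4][4]; // row-major: m[row][col]
};

// Repack a row-major matrix into the column-major float[16] layout
// that glLoadMatrixf/glMultMatrixf expect.
void ToColumnMajor(const Matrix& in, float out[16])
{
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[col * 4 + row] = in.m[row][col];
}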

And then, you need to generate a matrix to flip your scene, which you put at the very bottom of the GL_MODELVIEW stack. The easiest way to figure out what to do is to just render everything and see how the world is inverted. Then stick a matrix there which negates along an axis; if that fixes it, you're done.

In pseudo-code, what you do is this:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(/*Projection parameters here*/);

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(/*Your flip matrix here*/);
glPushMatrix();

//Render your stuff here.
//When rendering an object:
Matrix mat = ComputeD3DModelToCameraMatrixForObject();
mat = Transpose(mat);
glPushMatrix();
glMultMatrixf(GetMatrixAsFloatArray(mat));
//Draw the object.
glPopMatrix();

//When finished rendering stuff:
glPopMatrix();
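As for the flip matrix itself, here's a minimal sketch assuming the inversion turns out to be along the Z axis (the usual case when converting between left- and right-handed conventions). It's just a scale of -1 on Z, written out in the column-major layout glLoadMatrixf expects (a diagonal matrix is the same either way).

// Flip matrix: negate Z to switch handedness. Column-major, ready for glLoadMatrixf.
static const float kFlipZ[16] =
{
    1.0f, 0.0f,  0.0f, 0.0f,
    0.0f, 1.0f,  0.0f, 0.0f,
    0.0f, 0.0f, -1.0f, 0.0f,
    0.0f, 0.0f,  0.0f, 1.0f
};

// Equivalent fixed-function calls:
// glLoadIdentity();
// glScalef(1.0f, 1.0f, -1.0f);

Keep in mind that negating a single axis reverses triangle winding, so you may also need to adjust glFrontFace (or your culling settings) to match.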

In shaders, things are simpler. This assumes that you're using your own uniforms to pass matrices to GLSL.

Really, all you need to do is look at the differences between the clip space that OpenGL uses and the clip space that D3D uses. Clip space is the space of the vertex positions output from the vertex shader. You can pass your matrices to GLSL as normal, since the glUniformMatrix functions have a parameter that lets you specify whether the matrix is transposed (i.e. row-major). Once you have computed the D3D clip-space positions just as you would have for D3D, simply modify the results based on what OpenGL expects.
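For the upload itself, here's a minimal sketch assuming an already-linked GLSL program object and a mat4 uniform named modelToClipMatrix (both placeholders), plus the row-major Matrix struct from above. Passing GL_TRUE for the transpose parameter tells OpenGL the data is supplied row-major (note that OpenGL ES 2.0 requires this parameter to be GL_FALSE, so there you would transpose on the CPU instead).

// 'program' is an already-linked GLSL program; 'mat' is a row-major Matrix.
GLint loc = glGetUniformLocation(program, "modelToClipMatrix");
glUniformMatrix4fv(loc, 1, GL_TRUE, &mat.m[0][0]);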

I don't recall the differences off-hand, but the OpenGL specification section 2.13 (in version 3.3, it may be a different section for other versions) very explicitly details the coordinate system expected, as well as the subsequent transformations to window-space.
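The difference most people run into is the depth range of clip space: D3D expects z in [0, w] while OpenGL expects z in [-w, w]. Here's a minimal sketch of a vertex shader that applies a D3D-style projection and then remaps the depth (the uniform and attribute names are placeholders):

// GLSL 3.30 vertex shader, given as a C++ raw string literal.
const char* vertexShaderSrc = R"(
    #version 330
    uniform mat4 modelToClipMatrix;
    in vec4 position;
    void main()
    {
        gl_Position = modelToClipMatrix * position;
        // Remap D3D-style depth [0, w] to OpenGL-style [-w, w].
        gl_Position.z = gl_Position.z * 2.0 - gl_Position.w;
    }
)";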


You can check this out:

http://www.opengl.org/wiki/Viewing_and_Transformations#Can_I_make_OpenGL_use_a_left-handed_coordinate_space.3F

