I started using the GLM library for math operations with OpenGL 3 and GLSL. I need an orthographic projection to draw 2D graphics, so I wrote this simple code:
glm::mat4 projection(1.0);
projection = glm::ortho( 0.0f, 640.0f, 480.0f, 0.0f, 0.0f, 500.0f);
Printing the values that glm::ortho produced, I get:
0.00313 0.00000 0.00000 0.00000
0.00000 -0.00417 0.00000 0.00000
0.00000 0.00000 -0.00200 0.00000
-1.00000 1.00000 -1.00000 1.00000
As far as I know, this is not the correct order for the values in OpenGL, because multiplying this matrix by a position vector would ignore all the translation values.
I tested that matrix with my shader and some primitives and I only get a blank screen. But if I modify the matrix by hand as follows, it works fine:
0.00313 0.00000 0.00000 -1.00000
0.00000 -0.00417 0.00000 1.00000
0.00000 0.00000 -0.00200 -1.00000
0.00000 0.00000 0.00000 1.00000
Moreover, looking at the function "ortho" in the "glm/gtc/matrix_transform.inl" file:
template <typename valType>
inline detail::tmat4x4<valType> ortho(
    valType const & left,
    valType const & right,
    valType const & bottom,
    valType const & top,
    valType const & zNear,
    valType const & zFar)
{
    detail::tmat4x4<valType> Result(1);
    Result[0][0] = valType(2) / (right - left);
    Result[1][1] = valType(2) / (top - bottom);
    Result[2][2] = - valType(2) / (zFar - zNear);
    Result[3][0] = - (right + left) / (right - left);
    Result[3][1] = - (top + bottom) / (top - bottom);
    Result[3][2] = - (zFar + zNear) / (zFar - zNear);
    return Result;
}
I replaced the last 3 initialization lines with the following code and it also worked fine:
Result[0][3] = - (right + left) / (right - left);
Result[1][3] = - (top + bottom) / (top - bottom);
Result[2][3] = - (zFar + zNear) / (zFar - zNear);
This is the minimal vertex shader that I'm using for testing (note that at this moment uni_MVP is only the projection matrix explained above):
uniform mat4 uni_MVP;
in vec2 in_Position;
void main(void)
{
gl_Position = uni_MVP * vec4(in_Position.xy,0.0, 1.0);
}
I think this is not a bug, because all of the functions work the same way. Maybe it's an issue with my C++ compiler that swaps the order of multidimensional arrays? How can I solve this without modifying all of the GLM source code?
I'm using the latest version of the GLM library (0.9.1) with Code::Blocks and MinGW, running on Windows Vista.
First, it's called transposition, not inversion. Inversion means something completely different. Second, this is exactly how it's supposed to be. OpenGL accesses matrices in column-major order, i.e. the matrix elements have the following indices:
0 4 8 12
1 5 9 13
2 6 10 14
3 7 11 15
However, your usual C/C++ multidimensional arrays are normally numbered like this:
0 1 2 3
4 5 6 7
8 9 10 11
12 13 14 15
i.e. the row and column indices are transposed. Older versions of OpenGL sported an extension that allows supplying matrices in transposed form, to spare people from rewriting their code: GL_ARB_transpose_matrix (http://www.opengl.org/registry/specs/ARB/transpose_matrix.txt).
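To make the mapping concrete, here is a small sketch (it assumes glm::value_ptr from glm/gtc/type_ptr.hpp and reuses the glm::ortho call from the question) that dumps the matrix memory exactly as OpenGL will read it. Each printed group of four values is really one column, which is why the translation shows up in elements 12, 13 and 14:

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

int main()
{
    glm::mat4 proj = glm::ortho(0.0f, 640.0f, 480.0f, 0.0f, 0.0f, 500.0f);

    // Flat, column-major memory: p[0..3] is column 0, p[4..7] is column 1, ...
    const float *p = glm::value_ptr(proj);
    for (int i = 0; i < 16; ++i)
        std::printf("%8.5f%s", p[i], (i % 4 == 3) ? "\n" : " ");

    // proj[3] is the fourth *column*, so proj[3][0] is the same value as p[12]:
    // the x translation, exactly where OpenGL expects it.
    std::printf("translation: %f %f %f\n", proj[3][0], proj[3][1], proj[3][2]);
    return 0;
}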
With shaders it's even easier, since no new functions are needed: glUniformMatrix has a GLboolean transpose parameter, and you've got three guesses what it does.
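As a sketch of what that looks like in practice (the program handle and the uniform lookup are placeholders, and an OpenGL loader such as GLEW is assumed to be included already), uploading the unmodified GLM matrix with transpose set to GL_FALSE is all it takes:

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadMVP(GLuint program)   // placeholder: your linked shader program
{
    glm::mat4 projection = glm::ortho(0.0f, 640.0f, 480.0f, 0.0f, 0.0f, 500.0f);

    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "uni_MVP");

    // GLM already stores matrices column-major, so transpose stays GL_FALSE.
    // Passing GL_TRUE instead tells GL to transpose row-major data for you.
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(projection));
}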