
How do I translate single objects in OpenGL 3.x?

I have a bit of experience writing OpenGL 2 applications and want to learn to use OpenGL 3. For this I've bought the Addison-Wesley "Red Book" and "Orange Book" (GLSL), which describe the deprecation of the fixed functionality and the new programmable pipeline (shaders). But what I can't get a grasp of is how to construct a scene with multiple objects without using the deprecated translate*, rotate* and scale* functions.

What I used to do in OGL2 was to "move about" in 3D space using the translate and rotate functions, and create the objects in local coordinates where I wanted them using glBegin ... glEnd. In OGL3 these functions are all deprecated and, as I understand it, replaced by shaders. But I can't call a shader program for each and every object I make, can I? Wouldn't this affect all the other objects too?

I'm not sure if I've explained my problem satisfactorily, but the core of it is how to program a scene with multiple objects defined in local coordinates in OpenGL 3.1. All the beginner tutorials I've found use only a single object and don't have/solve this problem.

Edit: Imagine you want two spinning cubes. It would be a pain to manually modify each vertex coordinate, and you can't simply modify the modelview matrix, because that would rather spin the camera around two static cubes...


Let's start with the basics.

Usually, you want to transform your local triangle vertices through the following steps:

local-space coords -> world-space coords -> view-space coords -> clip-space coords

In standard GL, the first two transforms are done through GL_MODELVIEW_MATRIX, and the third is done through GL_PROJECTION_MATRIX.

These model-view transformations, for the many interesting transforms we usually want to apply (translate, scale and rotate, for example), happen to be expressible as vector-matrix multiplication when we represent vertices in homogeneous coordinates. Typically, the vertex V = (x, y, z) is represented in this system as (x, y, z, 1).
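
To make that concrete, here is a tiny, purely illustrative check (it assumes the GLM math library, which this answer does not mention) showing a translation applied to a vertex written in homogeneous coordinates:

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::translate

int main() {
    glm::vec4 V(1.0f, 2.0f, 3.0f, 1.0f);                        // V = (x, y, z) stored as (x, y, z, 1)
    glm::mat4 T = glm::translate(glm::mat4(1.0f),
                                 glm::vec3(5.0f, 0.0f, 0.0f));  // move 5 units along x
    glm::vec4 moved = T * V;
    std::printf("%.1f %.1f %.1f %.1f\n", moved.x, moved.y, moved.z, moved.w);  // 6.0 2.0 3.0 1.0
    return 0;
}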

Ok. Say we want to transform a vertex V_local through a translation, then a rotation, then a translation. Each transform can be represented as a matrix*, let's call them T1, R1, T2. We want to apply the transform to each vertex: V_view = V_local * T1 * R1 * T2. Matrix multiplication being associative, we can compute once and for all M = T1 * R1 * T2.
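
As a concrete sketch (again assuming GLM; the specific transforms and numbers are my own examples), the composition might look like this. Note that GLM and GLSL use column vectors, i.e. the vertex sits on the right of the matrix, so the transform applied first is written right-most; it is the same composition as the row-vector notation above, only in the opposite written order:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::translate, glm::rotate

glm::mat4 buildModelMatrix(float angleRadians) {
    glm::mat4 T1 = glm::translate(glm::mat4(1.0f), glm::vec3(-0.5f, 0.0f, 0.0f));
    glm::mat4 R1 = glm::rotate(glm::mat4(1.0f), angleRadians, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 T2 = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, 0.0f));
    return T2 * R1 * T1;   // computed once per object per frame, not once per vertex
}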

That way, we only need to pass down M to the vertex program, and compute V_view = V_local * M. In the end, a typical vertex shader multiplies the vertex position by a single matrix. All the work to compute that one matrix is how you move your object from local space to the clip space.
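
Passing that matrix down is a single uniform upload per object. A minimal sketch, assuming a linked program whose vertex shader declares a mat4 uniform named "MVP" (both names are my choice, not from the original answer):

#include <glad/glad.h>                    // or whichever OpenGL loader the project uses
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

void setObjectMatrix(GLuint program, const glm::mat4& m) {
    GLint loc = glGetUniformLocation(program, "MVP");
    glUseProgram(program);
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(m));
    // ...followed by the draw call (glDrawArrays / glDrawElements) for this object
}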

Ok... I glossed over a number of important details.

First, what I described so far only really covers the transformation we usually want to do up to the view space, not the clip space. However, the hardware expects the output position of the vertex shader to be represented in that special clip-space. It's hard to explain clip-space coordinates without significant math, so I will leave that out, but the important bit is that the transformation that brings the vertices to that clip-space can usually be expressed as the same type of matrix multiplication. This is what the old gluPerspective, glFrustum and glOrtho compute.
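
For completeness, a sketch of what replaces those calls on the application side (GLM assumed; the field of view and clip planes are arbitrary example values):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::perspective, glm::ortho

glm::mat4 makeProjection(float aspect) {
    return glm::perspective(glm::radians(60.0f),   // vertical field of view
                            aspect,                 // window width / height
                            0.1f, 100.0f);          // near and far clip planes
}
// glm::ortho(left, right, bottom, top, near, far) covers the glOrtho-style case.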

Second, this is what you apply to vertex positions. The math to transform normals is somewhat different. That's because you want the normal to stay perpendicular to the surface after transformation (for reference, it requires a multiplication by the inverse-transpose of the model-view matrix in the general case, but that can be simplified in many cases).
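
A sketch of that normal-matrix computation, assuming GLM (this is just the general inverse-transpose case mentioned above):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>     // glm::inverseTranspose

glm::mat3 makeNormalMatrix(const glm::mat4& modelView) {
    return glm::mat3(glm::inverseTranspose(modelView));
    // If modelView contains only rotations and translations (no non-uniform scale),
    // the upper-left 3x3 of modelView itself, glm::mat3(modelView), is already correct.
}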

Third, you never send 4-D coordinates to the vertex shader. In general you pass 3-D ones (or 2-D, for that matter). OpenGL expands those to 4-D for you, adding the 1 as the w coordinate, so that the vertex shader does not have to add the extra coordinate itself.

So... to put all that back together: for each object, you need to compute those magic M matrices based on all the transforms that you want to apply to the object. Inside the shader, you then multiply each vertex position by that matrix and write the result to the shader's position output. Typical code is more or less (this is using old nomenclature):

uniform mat4 MVP;   // the combined matrix, computed once per object on the CPU
gl_Position = MVP * gl_Vertex;

* the actual matrices can be found on the web, notably on the man pages for each of those functions: rotate, translate, scale, perspective, ortho
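
Since the snippet above leans on the removed built-in gl_Vertex, here is a sketch of the same shader in GL 3.1 / GLSL 1.40 nomenclature, written as the C string you would hand to glShaderSource (the attribute name "position" and the uniform name "MVP" are my own choices):

const char* vertexShaderSrc = R"glsl(
    #version 140
    uniform mat4 MVP;         // combined model-view-projection matrix, computed on the CPU
    in vec3 position;         // generic vertex attribute, replaces gl_Vertex
    void main() {
        gl_Position = MVP * vec4(position, 1.0);   // promote to homogeneous coordinates here
    }
)glsl";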


Those functions are apparently deprecated, but are technically still perfectly functional and indeed will compile. So you can certainly still use glTranslatef(...) etc.

HOWEVER, this tutorial has a good explanation of how the new shaders and so on work, AND for multiple objects in space.

You can create x arrays of vertices, bind them into x VAOs, and render the scene from there with shaders and so on... meh, it's easier for you to just read it - it is a really good read to grasp the new concepts.
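
To tie this back to the two-spinning-cubes edit in the question, here is a minimal per-frame sketch (illustrative only, not taken from the tutorial: the program with its "MVP" uniform, the cube VAO and the vertex count are assumed to have been set up beforehand). The same cube VAO is simply drawn twice with two different model matrices; distinct meshes would each get their own VAO as described above:

#include <glad/glad.h>                     // or whichever OpenGL loader the project uses
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void drawScene(GLuint program, GLuint cubeVAO, GLsizei vertexCount, float timeSeconds) {
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 800.0f / 600.0f, 0.1f, 100.0f);   // example aspect
    glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -8.0f));          // back the camera up

    // Two model matrices: same mesh, two positions, two different spins.
    glm::mat4 modelA = glm::translate(glm::mat4(1.0f), glm::vec3(-2.0f, 0.0f, 0.0f))
                     * glm::rotate(glm::mat4(1.0f), timeSeconds, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 modelB = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, 0.0f))
                     * glm::rotate(glm::mat4(1.0f), -timeSeconds, glm::vec3(1.0f, 0.0f, 0.0f));
    const glm::mat4 models[] = { modelA, modelB };

    glUseProgram(program);
    GLint mvpLoc = glGetUniformLocation(program, "MVP");
    glBindVertexArray(cubeVAO);

    for (const glm::mat4& model : models) {
        glm::mat4 mvp = proj * view * model;               // per-object matrix, built on the CPU
        glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);        // same geometry, different transform
    }
}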

Also, the OpenGL 'Red Book' as it is called has a new release - The Official Guide to Learning OpenGL, Versions 3.0 and 3.1. It includes 'Discussion of OpenGL’s deprecation mechanism and how to verify your programs for future versions of OpenGL'.

I hope that's of some assistance!
