I am trying to draw a triangle using GL_POLYGON, but for some reason it is taking up the whole window.
...
typedef struct myTriangle {
    float tx;
    float ty;
} myTriangle;

std::vector<myTriangle> container;

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    for(int i = 0; i < (int)container.size(); ++i) {
        glBegin(GL_POLYGON);
        glColor3f(0.35, 0.0, 1.0);
        glVertex2f(container.at(i).tx, container.at(i).ty + 20);
        glVertex2f(container.at(i).tx - 20, container.at(i).ty - 20);
        glVertex2f(container.at(i).tx + 20, container.at(i).ty - 20);
        glEnd();
    }
    glutSwapBuffers();
}
...
int main(int argc, char** argv) {
    myTriangle t1;
    container.push_back(t1);
    container.back().tx = (float)0.;
    container.back().ty = (float)0.;

    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE);

    // initializations
    glutInitWindowSize(400, 400);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("Transformer");
    glClearColor(1.0, 1.0, 1.0, 1.0);

    // global variable initialization
    GW = GH = 200;

    // callback functions
    glutDisplayFunc(display);
    glutMouseFunc(mouse);
    glutMotionFunc(mouseMove);
    glutKeyboardFunc(keyboard);

    glutMainLoop();
}
It should be drawing an equilateral 40x40 triangle at the origin (0,0) in a 400x400 window. Is there something I did wrong?
You seem to be confusing object (world) coordinates with window (screen) coordinates. The coordinates you pass to glVertex2f are object coordinates that get transformed by the modelview and projection matrices before they are mapped to your window. The size of your window is immaterial: you can always set up your projection matrix to show as much of the coordinate space as you want in any window.
You haven't set up any transformations after initializing OpenGL, so you're using the default (identity) matrices. Your coordinates therefore go straight to clip space, which only covers -1 to 1 on each axis; a triangle whose vertices sit 20 units from the origin extends far beyond that, so after clipping it fills the entire window.
Here's a quick tutorial that shows you how to set up the transformation matrices so that you appear to view the triangle from a distance.
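If you just want flat 2D drawing rather than a perspective view, an orthographic projection sized to the GW/GH globals from the question is enough. Here is a minimal sketch, assuming GW = GH = 200 as set in main() and that the callback is registered with glutReshapeFunc(reshape):

    // Hypothetical reshape callback (not in the original code).
    // Maps world coordinates -GW..GW and -GH..GH onto the whole window.
    void reshape(int width, int height) {
        glViewport(0, 0, width, height);   // render into the full window
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(-GW, GW, -GH, GH);      // left, right, bottom, top
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }

With that in place, the 40-unit triangle at the origin occupies roughly a tenth of the 400x400 window instead of all of it.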
In OpenGL, the visible part of the screen is defined from (-1, -1) to (1, 1) by default. That's how the clip space of most rendering systems works.
Try doing
glTranslatef(-1.0f, -1.0f, 0.0f);
glScalef(2.0f / 400, 2.0f / 400, 1.0f);
The graphics card now takes your vertices, which are defined in pixels, and transforms them so that they sit correctly inside the [-1, -1] to [1, 1] boundary.
It first scales them from the [0, 0]-[400, 400] range down to [0, 0]-[2, 2], then translates that to the final [-1, -1]-[1, 1]. (The calls appear in the opposite order in code because OpenGL applies the most recently specified transformation to the vertices first.)
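Here is a sketch of where those calls could go in the question's display(). The placement and the pixel-centered tx/ty values are assumptions: with this mapping, (0, 0) is the lower-left corner of the window rather than its center, so the triangle would need something like tx = ty = 200 to appear in the middle.

    // Sketch: pixel-to-clip-space mapping applied inside display().
    // Assumes a 400x400 window and triangle coordinates given in pixels.
    void display() {
        glClear(GL_COLOR_BUFFER_BIT);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();                        // reset the matrix every frame
        glTranslatef(-1.0f, -1.0f, 0.0f);        // then shift [0, 2] to [-1, 1]
        glScalef(2.0f / 400, 2.0f / 400, 1.0f);  // scale [0, 400] down to [0, 2] first

        for (int i = 0; i < (int)container.size(); ++i) {
            glBegin(GL_POLYGON);
            glColor3f(0.35f, 0.0f, 1.0f);
            glVertex2f(container.at(i).tx,      container.at(i).ty + 20);
            glVertex2f(container.at(i).tx - 20, container.at(i).ty - 20);
            glVertex2f(container.at(i).tx + 20, container.at(i).ty - 20);
            glEnd();
        }

        glutSwapBuffers();
    }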