I have simplified my problem to this example:
#include <GL/glut.h>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(600, 600);
    glutInitWindowPosition(0, 0);
    int win = glutCreateWindow("Recon");
    return 0;
}
When glutCreateWindow executes, it takes about a minute and the screens flicker several times. This is ridiculously long; it can't be normal.
Environment:
- Fedora 10
- Dual NVIDIA GTX 280 cards driving 3 monitors.
- NVIDIA driver version 190.53, CUDA 2.3 installed
- gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC)
Any ideas as to what could be wrong?
Edit: I have no display function because my ultimate goal is to create a rendering context so that I can create a Pixel Buffer Object from some CUDA code (which, for the moment, is not going to display its output). I have also tried creating a context with a series of GLX calls; the same delay and flickering happen when glXMakeCurrent is called.
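For reference, here is a minimal sketch of the kind of GLX-only path described above (a standalone illustration, not the poster's actual code; the visual attributes and window size are assumptions):

/* Minimal GLX context creation sketch -- illustrative only.
 * Build with: gcc glxctx.c -lGL -lX11 */
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);              /* connect to the X server */
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    /* Request a double-buffered RGBA visual with a depth buffer */
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { fprintf(stderr, "no suitable visual\n"); return 1; }

    /* A small (unmapped) window to bind the context to */
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    swa.event_mask = StructureNotifyMask;
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen),
                               0, 0, 64, 64, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap | CWEventMask, &swa);

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);                  /* the call that stalls */

    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}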
Do you have a display function? I'm not sure it will help, but registering a display callback that clears the buffers might be worth trying, e.g.
glutDisplayFunc(myDisplay);
void myDisplay()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the color and depth buffers
    glutSwapBuffers();
}
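For completeness, here is a minimal sketch of how such a callback would slot into the program from the question (myDisplay is just a placeholder name; whether this affects the startup delay is untested):

#include <GL/glut.h>

/* Clear both buffers and swap, so GLUT has something to draw each frame */
void myDisplay(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(600, 600);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("Recon");
    glutDisplayFunc(myDisplay);  /* register the callback before entering the loop */
    glutMainLoop();              /* never returns */
    return 0;
}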
What compiler are you using? And have you looked into any possible performance issues associated with Fedora 10 and OpenGL? (I'm looking into the second part right now.)
Edit: There are definitely some anecdotal reports of a performance hit on Fedora 10, here and here. The second one seems to describe at least one of your symptoms. Are you able to try your code on another OS?