Error when calling glGetTexImage (atioglxx.dll)

I'm experiencing a difficult problem on certain ATI cards (Radeon X1650, X1550 + and others).

The message is: "Access violation at address 6959DD46 in module 'atioglxx.dll'. Read of address 00000000"

It happens on this line:

glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, P);

Note:

  • Latest graphics drivers are installed.
  • It works perfectly on other cards.

Here is what I've tried so far (with assertions in the code; a sketch of these checks follows the list):

  • That the pointer P is valid and allocated enough memory to hold the image
  • Texturing is enabled: glIsEnabled(GL_TEXTURE_2D)
  • Test that the currently bound texture is the one I expect: glGetIntegerv(GL_TEXTURE_BINDING_2D)
  • Test that the currently bound texture has the dimensions I expect: glGetTexLevelParameteriv( GL_TEXTURE_WIDTH / HEIGHT )
  • Test that no errors have been reported: glGetError
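
For reference, here is roughly what those checks look like gathered in one place. This is only a sketch: CheckTextureState is an illustrative name, ExpectedTex, PixelWidth and PixelHeight stand in for this project's own variables, and the exact boolean type returned by glIsEnabled depends on which Delphi OpenGL binding is used.

procedure CheckTextureState(ExpectedTex, PixelWidth, PixelHeight: Integer);
var
  BoundTex, TexW, TexH: Integer;
begin
  Assert(Boolean(glIsEnabled(GL_TEXTURE_2D)));              // texturing is enabled
  glGetIntegerv(GL_TEXTURE_BINDING_2D, @BoundTex);          // the expected texture is bound
  Assert(BoundTex = ExpectedTex);
  glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, @TexW);
  glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, @TexH);
  Assert((TexW = PixelWidth) and (TexH = PixelHeight));     // dimensions are as expected
  Assert(glGetError = GL_NO_ERROR);                         // no pending GL error
end;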

It passes all those tests and then still fails with the message.

I feel I've tried everything and have no more ideas. I really hope some GL-guru here can help!

EDIT:

After concluding it is probably a driver bug, I posted about it here too: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=295137#Post295137

I also tried GL_PACK_ALIGNMENT and it didn't help.

After some more investigation I found that it only happens on textures that I have previously filled with pixels using a call to glCopyTexSubImage2D. So I could produce a workaround by replacing the glCopyTexSubImage2D call with calls to glReadPixels followed by glTexImage2D instead.

Here is my updated code:

{
  glCopyTexSubImage2D cannot be used here because the combination of calling
  glCopyTexSubImage2D and then later glGetTexImage on the same texture causes
  a crash in atioglxx.dll on ATI Radeon X1650 and X1550.
  Instead we copy to the main memory first and then update.
}
//    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, PixelWidth, PixelHeight);  //**
GetMem(P, PixelWidth * PixelHeight * 4);
glReadPixels(0, 0, PixelWidth, PixelHeight, GL_RGBA, GL_UNSIGNED_BYTE, P);
SetMemory(P,GL_RGBA,GL_UNSIGNED_BYTE);
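
SetMemory is this project's own helper and its body is not shown in the post; judging from the description above, it re-uploads the buffer with glTexImage2D, along the lines of this hypothetical equivalent:

procedure UploadFromMainMemory(P: Pointer; PixelWidth, PixelHeight: Integer);
begin
  { Hypothetical stand-in for SetMemory: push the pixels read back with
    glReadPixels into the currently bound texture. }
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, PixelWidth, PixelHeight, 0,
    GL_RGBA, GL_UNSIGNED_BYTE, P);
end;

The price is an extra GPU-to-CPU-to-GPU round trip, but it avoids the glCopyTexSubImage2D path that triggers the crash.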


You might want to take care of GL_PACK_ALIGNMENT. This parameter tells GL how each pixel row is padded when the image is packed into your buffer. For example, with an image 645 pixels wide in a one-byte-per-pixel format:

  • With GL_PACK_ALIGNMENT set to 4 (the default value), each row is padded to 648 pixels' worth of bytes.
  • With GL_PACK_ALIGNMENT set to 1, each row is exactly 645 pixels.

So make sure the pack alignment matches your buffer by calling:

glPixelStorei(GL_PACK_ALIGNMENT, 1);

before your glGetTexImage(), or align the buffer you read into according to GL_PACK_ALIGNMENT.
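
A minimal sketch of that idea (Delphi-style, assuming the standard OpenGL unit; ReadBackTightlyPacked and its parameters are illustrative names, and GL_RGB is used here because its rows are not naturally 4-byte aligned):

procedure ReadBackTightlyPacked(PixelWidth, PixelHeight: Integer);
var
  P: Pointer;
  Align, RowBytes: Integer;
begin
  glPixelStorei(GL_PACK_ALIGNMENT, 1);       // either force tightly packed rows...
  glGetIntegerv(GL_PACK_ALIGNMENT, @Align);  // ...or query the alignment and pad the buffer
  RowBytes := ((PixelWidth * 3 + Align - 1) div Align) * Align;  // 3 bytes per GL_RGB/GL_UNSIGNED_BYTE pixel
  GetMem(P, RowBytes * PixelHeight);
  try
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, P);
  finally
    FreeMem(P);
  end;
end;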


This is most likely a driver bug. Having written 3D APIs myself, it is easy to see how: you are doing something weird and rare enough that it is unlikely to be covered by tests, namely converting between float data and an 8-bit texture during the transfer. Nobody is going to optimize that path, so the generic CPU conversion function probably kicks in, and somebody messed up a table that drives the allocation of temporary buffers for it.

You should reconsider what you are doing in the first place. Using an external float format with an internal 8-bit format is the kind of conversion in the GL API that usually points to a programming error. If your data is float and you want to keep it as such, you should use a float texture rather than a plain 8-bit RGBA one. If you want 8 bits, why is your input float?
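
A hedged sketch of the two consistent combinations this advice implies (Delphi-style; ReadBackWithoutConversion and its parameters are illustrative names, and GL_RGBA32F_ARB comes from the ARB_texture_float extension, so it is not in the GL 1.1 headers and must be supported by the driver):

procedure ReadBackWithoutConversion(PixelWidth, PixelHeight: Integer;
  FloatPixels, BytePixels: Pointer);
const
  GL_RGBA32F_ARB = $8814;  // from ARB_texture_float
begin
  { Option A: the data really is float, so keep it float end to end. }
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, PixelWidth, PixelHeight, 0,
    GL_RGBA, GL_FLOAT, FloatPixels);
  glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, FloatPixels);

  { Option B: the texture stays 8-bit internally, so read it back as bytes
    and convert on the CPU if floats are needed later. }
  glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, BytePixels);
end;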
