I'm using the glgrab code to try and grab a full-screen screenshot of the Mac screen. However, I want the bitmap data to be in the GL_RGB format. That is, each pixel should be in the format:
0x00RRGGBB
The original code specified the GL_BGRA format; however, changing that to GL_RGB gives me a completely blank result. The complete source code I'm using is:
CGImageRef grabViaOpenGL(CGDirectDisplayID display, CGRect srcRect)
{
    CGContextRef bitmap;
    CGImageRef image;
    void * data;
    long bytewidth;
    GLint width, height;
    long bytes;
    CGColorSpaceRef cSpace = CGColorSpaceCreateWithName (kCGColorSpaceGenericRGB);
    CGLContextObj glContextObj;
    CGLPixelFormatObj pixelFormatObj ;
    GLint numPixelFormats ;
    //CGLPixelFormatAttribute
    int attribs[] =
    {
        // kCGLPFAClosestPolicy,
        kCGLPFAFullScreen,
        kCGLPFADisplayMask,
        NULL, /* Display mask bit goes here */
        kCGLPFAColorSize, 24,
        kCGLPFAAlphaSize, 0,
        kCGLPFADepthSize, 32,
        kCGLPFASupersample,
        NULL
    } ;
    if ( display == kCGNullDirectDisplay )
        display = CGMainDisplayID();
    attribs[2] = CGDisplayIDToOpenGLDisplayMask(display);
    /* Build a full-screen GL context */
    CGLChoosePixelFormat( (CGLPixelFormatAttribute*) attribs, &pixelFormatObj, &numPixelFormats );
    if ( pixelFormatObj == NULL ) // No full screen context support
    {
        // GL didn't find any suitable pixel formats. Try again without the supersample bit:
        attribs[9] = NULL; // kCGLPFASupersample sits at index 9 here (kCGLPFAClosestPolicy is commented out), so terminate the list at that slot
        CGLChoosePixelFormat( (CGLPixelFormatAttribute*) attribs, &pixelFormatObj, &numPixelFormats );
        if (pixelFormatObj == NULL)
        {
            qDebug("Unable to find an openGL pixel format that meets constraints");
            return NULL;
        }
    }
    CGLCreateContext( pixelFormatObj, NULL, &glContextObj ) ;
    CGLDestroyPixelFormat( pixelFormatObj ) ;
    if ( glContextObj == NULL )
    {
        qDebug("Unable to create OpenGL context");
        return NULL;
    }
    CGLSetCurrentContext( glContextObj ) ;
    CGLSetFullScreen( glContextObj ) ;
    glReadBuffer(GL_FRONT);
    width = srcRect.size.width;
    height = srcRect.size.height;
    bytewidth = width * 4; // Assume 4 bytes/pixel for now
    bytewidth = (bytewidth + 3) & ~3; // Align to 4 bytes
    bytes = bytewidth * height; // bytes per row * number of rows
    /* Build bitmap context */
    data = malloc(height * bytewidth);
    if ( data == NULL )
    {
        CGLSetCurrentContext( NULL );
        CGLClearDrawable( glContextObj ); // disassociate from full screen
        CGLDestroyContext( glContextObj ); // and destroy the context
        qDebug("OpenGL drawable clear failed");
        return NULL;
    }
    bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth,
                                   cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);
    CFRelease(cSpace);
    /* Read framebuffer into our bitmap */
    glFinish(); /* Finish all OpenGL commands */
    glPixelStorei(GL_PACK_ALIGNMENT, 4); /* Force 4-byte alignment */
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);
    glPixelStorei(GL_PACK_SKIP_ROWS, 0);
    glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
    /*
     * Fetch the data in XRGB format, matching the bitmap context.
     */
    glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
                 GL_RGB,
#ifdef __BIG_ENDIAN__
                 GL_UNSIGNED_INT_8_8_8_8_REV, // for PPC
#else
                 GL_UNSIGNED_INT_8_8_8_8, // for Intel! http://lists.apple.com/archives/quartz-dev/2006/May/msg00100.html
#endif
                 data);
    /*
     * glReadPixels generates a quadrant I raster, with origin in the lower left.
     * This isn't a problem for signal processing routines such as compressors,
     * as they can simply use a negative 'advance' to move between scanlines.
     * CGImageRef and CGBitmapContext assume a quadrant III raster, though, so we need to
     * invert it. Pixel reformatting can also be done here.
     */
    swizzleBitmap(data, bytewidth, height);
    /* Make an image out of our bitmap; does a cheap vm_copy of the bitmap */
    image = CGBitmapContextCreateImage(bitmap);
    /* Get rid of bitmap */
    CFRelease(bitmap);
    free(data);
    /* Get rid of GL context */
    CGLSetCurrentContext( NULL );
    CGLClearDrawable( glContextObj ); // disassociate from full screen
    CGLDestroyContext( glContextObj ); // and destroy the context
    /* Returned image has a reference count of 1 */
    return image;
}
I'm completely new to OpenGL, so I'd appreciate some pointers in the right direction. Cheers!
Update:
After some experimentation, I have managed to narrow my problem down. While I don't want the alpha component, I do want each pixel packed to a 4-byte boundary. When I specify the GL_RGB or GL_BGR format in the glReadPixels call, I get the bitmap data packed in 3-byte blocks. When I specify GL_RGBA or GL_BGRA, I get 4-byte blocks, but always with the alpha component last.
I then tried changing the value passed to
bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth, cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);
however, no variation of AlphaNoneSkipFirst or AlphaNoneSkipLast puts the alpha (padding) byte at the start of each pixel's byte block.
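To illustrate, these are roughly the two variants I've been switching between (a sketch of what I described above, with GL_UNSIGNED_BYTE standing in for the type argument):
/* Packs 3 bytes per pixel -- no room for the padding byte I want */
glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
             GL_RGB, GL_UNSIGNED_BYTE, data);
/* Packs 4 bytes per pixel, but the alpha component always ends up last */
glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
             GL_RGBA, GL_UNSIGNED_BYTE, data);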
Any ideas?
I'm not a Mac guy, but if you can get RGBA data and want XRGB, can't you just bitshift each pixel down eight bits?
/* Treat the buffer as 32-bit pixels and shift each one down 8 bits,
   assuming each word reads 0xRRGGBBAA so it becomes 0x00RRGGBB. */
unsigned int *pixel = (unsigned int *)pixbuf;
for (long i = 0; i < width * height; ++i)
    pixel[i] >>= 8;
Try GL_UNSIGNED_BYTE instead of GL_UNSIGNED_INT_8_8_8_8_REV / GL_UNSIGNED_INT_8_8_8_8.
Although it seems you actually want GL_RGBA instead -- with that format it should work with either 8_8_8_8_REV or 8_8_8_8.
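For concreteness, a minimal sketch of that suggestion dropped into the question's glReadPixels call (same buffer and variables as in the question; only the format/type pair changes):
/* GL_RGBA + GL_UNSIGNED_BYTE: 4 bytes per pixel, in R,G,B,A memory order */
glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
             GL_RGBA, GL_UNSIGNED_BYTE, data);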
When I use GL_BGRA, the data is returned pre-swizzled, which is confirmed because the colors look correct when I display the result in a window.
Contact me if you want the project I created. Hope this helps.
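A minimal sketch of that read, using the question's buffer and variables; pairing GL_BGRA with GL_UNSIGNED_INT_8_8_8_8_REV is my assumption here, not something stated above:
/* With GL_BGRA and GL_UNSIGNED_INT_8_8_8_8_REV, each pixel is packed into a
 * 32-bit word laid out as 0xAARRGGBB, i.e. red/green/blue already in the
 * positions the question asks for (mask off the top byte if alpha is unwanted). */
glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, data);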