win32 - How do I capture a screen as an 8-bit or 16-bit bitmap?

I am just starting out with win32 GDI programming and finding good references hard to come by. I have a simple application that captures the screen by doing the following:

UINT32 x, y;
x = GetSystemMetrics(SM_CXSCREEN);
y = GetSystemMetrics(SM_CYSCREEN);

HDC hdc = GetDC(NULL);                    // screen DC
HDC hdcScreen = CreateCompatibleDC(hdc);  // memory DC to hold the capture

HBITMAP hbmp = CreateCompatibleBitmap(hdc, x, y);

SelectObject(hdcScreen, hbmp);

BitBlt(hdcScreen, 0, 0, x, y, hdc, 0, 0, SRCCOPY);

ReleaseDC(NULL, hdc);

I am capturing a compatible bitmap, which on my machine is 32-bit. Using the same or similar calls, how would I capture the screen in 8-bit? What about 16-bit?


Use CreateDIBSection to create an 8bpp bitmap, and BitBlt onto that.


Filling in the BITMAPINFO structure is going to be the interesting part. You can't use a plain BITMAPINFO struct, as it only allocates space for a single palette entry, and with an 8bpp image you will need the full 256 entries.

If you want to cheat a bit, you can use an anonymous union to declare a BITMAPINFO with enough space for its palette.

union
{
  BITMAPINFO bmi;
  struct {
    BITMAPINFOHEADER bmih;
    RGBQUAD extra[256];
  } dummy;
};

bmi.bmiHeader.biSize = sizeof (bmi.bmiHeader);
bmi.bmiHeader.biBitCount = 8;
// and so on.

As to the values to initialize in the color table... I can't think of an easy way to get a default 8bpp palette from GDI when it's not in 8bpp mode. I suspect CreateHalftonePalette isn't going to do anything useful on a non-palette device.
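
Putting the pieces together, here is a minimal sketch of the whole flow: a BITMAPINFO with room for the full color table (a named variant of the union trick above, which keeps it standard C++), CreateDIBSection, and a BitBlt from the screen DC. The grayscale ramp used for the color table is purely an illustrative assumption; substitute whatever 256-entry palette you actually want.

#include <windows.h>

// Sketch only: capture the screen into an 8bpp DIB section.
// The grayscale palette below is a placeholder, not a recommendation.
HBITMAP CaptureScreen8bpp(int width, int height)
{
    // BITMAPINFO with space for a full 256-entry color table.
    union
    {
        BITMAPINFO bmi;
        struct {
            BITMAPINFOHEADER bmih;
            RGBQUAD palette[256];
        } layout;
    } u;

    ZeroMemory(&u, sizeof(u));
    u.bmi.bmiHeader.biSize        = sizeof(u.bmi.bmiHeader);
    u.bmi.bmiHeader.biWidth       = width;
    u.bmi.bmiHeader.biHeight      = -height;   // negative height = top-down DIB
    u.bmi.bmiHeader.biPlanes      = 1;
    u.bmi.bmiHeader.biBitCount    = 8;
    u.bmi.bmiHeader.biCompression = BI_RGB;
    u.bmi.bmiHeader.biClrUsed     = 256;

    // Placeholder color table: a simple grayscale ramp.
    for (int i = 0; i < 256; ++i)
    {
        u.layout.palette[i].rgbRed      = (BYTE)i;
        u.layout.palette[i].rgbGreen    = (BYTE)i;
        u.layout.palette[i].rgbBlue     = (BYTE)i;
        u.layout.palette[i].rgbReserved = 0;
    }

    void* pBits = NULL;
    HDC hdcScreen = GetDC(NULL);
    HBITMAP hbmp = CreateDIBSection(hdcScreen, &u.bmi, DIB_RGB_COLORS, &pBits, NULL, 0);

    if (hbmp)
    {
        HDC hdcMem = CreateCompatibleDC(hdcScreen);
        HGDIOBJ hOld = SelectObject(hdcMem, hbmp);
        // GDI converts each 32-bit source pixel to the nearest color-table entry.
        BitBlt(hdcMem, 0, 0, width, height, hdcScreen, 0, 0, SRCCOPY);
        SelectObject(hdcMem, hOld);
        DeleteDC(hdcMem);
    }

    ReleaseDC(NULL, hdcScreen);
    return hbmp;
}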


I'm pretty sure you'll have to capture a 32-bit bitmap, then convert it to 8 bits on your own. The 8-bit conversion will normally lose a fair amount of data, and there are quite a few different algorithms for doing it. Unless you really have no choice, I'd reconsider doing this at all. It's been a long time since most people had much reason to mess with 8-bit bitmaps -- and they are a mess.

An 8-bit bitmap (at least a typical one) has a "palette", which specifies the 24-bit values of each of the (up to) 256 colors used in the bitmap file. Typically, you want to pick colors that are closest to those in the original bitmap. Lots of algorithms have been invented to do this. Googling for something like "color reduction algorithm" should yield plenty of hits, with many variations trading off execution speed, memory usage, etc. I can't even begin to guess which will be best suited to your particular purpose.
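
Just to make the "closest color" idea concrete (this is the naive version, not any particular published algorithm), a hypothetical helper would pick the palette index with the smallest squared RGB distance to the pixel:

#include <windows.h>

// Hypothetical helper: index of the palette entry nearest (by squared RGB
// distance) to the given pixel. Real quantizers also build the palette
// itself and often dither; this only does the matching step.
int NearestPaletteIndex(RGBQUAD pixel, const RGBQUAD* palette, int count)
{
    int  best     = 0;
    long bestDist = -1;
    for (int i = 0; i < count; ++i)
    {
        long dr = (long)pixel.rgbRed   - palette[i].rgbRed;
        long dg = (long)pixel.rgbGreen - palette[i].rgbGreen;
        long db = (long)pixel.rgbBlue  - palette[i].rgbBlue;
        long dist = dr * dr + dg * dg + db * db;
        if (bestDist < 0 || dist < bestDist)
        {
            bestDist = dist;
            best     = i;
        }
    }
    return best;
}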

As I said up-front, my first inclination would be to put some time and effort into simply eliminating this requirement. Reducing from 32 to 24 or even 16 bits is pretty easy, and preserves a lot of the original quality. Going to 8 bits is considerably more difficult and loses a lot of quality.
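
For what it's worth, the 16-bit case really is the easy one: a 16bpp DIB section needs no color table at all, so the same CreateDIBSection/BitBlt flow works with a plain BITMAPINFO. A sketch, with the reference DC, width, and height passed in as assumptions:

#include <windows.h>

// Sketch: the 16bpp variant needs no palette, so a plain BITMAPINFO suffices.
HBITMAP Create16bppDib(HDC hdcRef, int width, int height, void** ppBits)
{
    BITMAPINFO bmi;
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // top-down
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 16;        // 5-5-5 layout when biCompression is BI_RGB
    bmi.bmiHeader.biCompression = BI_RGB;

    // Select the returned bitmap into a memory DC and BitBlt from the screen DC as before.
    return CreateDIBSection(hdcRef, &bmi, DIB_RGB_COLORS, ppBits, NULL, 0);
}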
