I'm looking for a way to speed up the drawing in my game engine, which is currently the main bottleneck and is causing slowdowns. I'm on the verge of converting it over to XNA, but I just noticed something.
Say I have a small image that I've loaded.
Image image = Image.FromFile("mypict.png");
We have a picturebox on the screen we want to draw on. So we have a handler.
pictureBox1.Paint += new PaintEventHandler(pictureBox1_Paint);
I want our loaded image to be tiled on the picturebox (this is for a game, after all). Why on earth is this code:
void pictureBox1_Paint(object sender, PaintEventArgs e)
{
for (int y = 0; y < 16; y++)
for (int x = 0; x < 16; x++)
e.Graphics.DrawImage(image, x * 16, y * 16, 16, 16);
}
over 25 TIMES FASTER than this code:
Image buff = new Bitmap(256, 256, PixelFormat.Format32bppPArgb); // actually a form member
void pictureBox1_Paint(object sender, PaintEventArgs e)
{
using (Graphics g = Graphics.FromImage(buff))
{
for (int y = 0; y < 16; y++)
for (int x = 0; x < 16; x++)
g.DrawImage(image, x * 16, y * 16, 16, 16);
}
e.Graphics.DrawImage(buff, 0, 0, 256, 256);
}
To eliminate the obvious, I've tried commenting out the last e.Graphics.DrawImage (which means I don't see anything, but it gets rid of a call that isn't in the first example). I've also left the using block in (needlessly) in the first example, and it's still just as blazingly fast. I've set properties of g to match e.Graphics - things like InterpolationMode, CompositingQuality, etc. - but nothing I do bridges this incredible gap in performance. I can't find any difference between the two Graphics objects. What gives?
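The property matching looked roughly like this (a sketch; the exact set of properties copied is just illustrative):
// Sketch: mirror the quality-related settings of the Paint event's Graphics
// onto the Graphics created from the back buffer before drawing.
using (Graphics g = Graphics.FromImage(buff))
{
    g.InterpolationMode  = e.Graphics.InterpolationMode;
    g.CompositingQuality = e.Graphics.CompositingQuality;
    g.CompositingMode    = e.Graphics.CompositingMode;
    g.SmoothingMode      = e.Graphics.SmoothingMode;
    g.PixelOffsetMode    = e.Graphics.PixelOffsetMode;

    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            g.DrawImage(image, x * 16, y * 16, 16, 16);
}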
My test with a System.Diagnostics.Stopwatch says that the first code snippet runs at about 7100 fps, while the second runs at a measly 280 fps. My reference image is VS2010ImageLibrary\Objects\png_format\WinVista\SecurityLock.png, which is 48x48 px, and which I modified to be 72 dpi instead of 96, but that made no difference either.
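The timing loop is roughly this (a sketch; the frame count and the Invalidate/Update pattern are assumptions):
// Sketch: force repeated repaints and time them with a Stopwatch.
var sw = System.Diagnostics.Stopwatch.StartNew();
const int frames = 1000;
for (int i = 0; i < frames; i++)
{
    pictureBox1.Invalidate();
    pictureBox1.Update();   // forces the Paint event to run synchronously
}
sw.Stop();
double fps = frames / sw.Elapsed.TotalSeconds;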
When you're drawing to the screen, the OS is able to take advantage of special hardware in the graphics adapter to do simple operations such as copying an image around.
I'm getting ~5 msec for both. 7100 fps is way too fast for the software rendering done by GDI+. Video drivers notoriously cheat to win benchmarks; they can detect that a BitBlt doesn't have to be performed because the image didn't change. Try passing random values to e.Graphics.TranslateTransform to eliminate the cheat.
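Something like this in the paint handler (a sketch; the Random member and the offset range are just illustrative):
Random rnd = new Random();   // a form member in practice

void pictureBox1_Paint(object sender, PaintEventArgs e)
{
    // Jitter the transform so the driver can't skip an "identical" blit.
    e.Graphics.TranslateTransform(rnd.Next(0, 2), rnd.Next(0, 2));
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            e.Graphics.DrawImage(image, x * 16, y * 16, 16, 16);
}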
Are you sure the difference isn't from the using block, i.e. setting up the try-finally block and creating the Graphics instance from the image buffer?
I could easily see the latter being an expensive operation, unlike the paint event, where you simply get a reference to an already created Graphics instance.
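One way to test that (a sketch, assuming it's acceptable to keep a Graphics for the buffer alive as long as the buffer itself) is to create the Graphics from the buffer once and reuse it in the handler:
// Sketch: hoist Graphics.FromImage out of the paint handler so the handler
// only does the drawing and the final blit.
Image buff = new Bitmap(256, 256, PixelFormat.Format32bppPArgb); // form member
Graphics buffG;   // form member, initialized once, e.g. in the form constructor:
// buffG = Graphics.FromImage(buff);

void pictureBox1_Paint(object sender, PaintEventArgs e)
{
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            buffG.DrawImage(image, x * 16, y * 16, 16, 16);
    e.Graphics.DrawImage(buff, 0, 0, 256, 256);
}
If the gap shrinks, the cost was in creating and disposing the Graphics each frame rather than in the drawing itself.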