
Efficiently composing/rendering multiple layers for Photoshop-like image editor

I'm making a Photoshop-like application that has 5 or so 1600x1200 layers. Each layer can have a different blending mode, such as normal blending, XOR blending, additive blending, etc., where normal blending is the most common. Blending all the layers together takes about 0.3s on my target platform (hardware acceleration is an option here).

My problem: How can I efficiently update the screen to show all the layers flattened/blended together when the user performs editing operations on a layer?

For example, a simple operation might be to convert one layer to grayscale. A more complex operation is to paint a brush image in real time onto one of the layers in several places. Specifically, my interface will be too unresponsive if I try to reblend all the layers every time the user uses the brush.

The only optimisations I can think of are:

  1. cache the flattened image and, when changes occur, only update the rectangle of the flattened image that has changed. This will be speedy for small brush images, for instance (see the sketch after this list).
  2. when editing a layer, cache the flattened image of all the layers below the active layer. When updating the full flattened image, we then only have to blend the active layer and the layers above onto this cached image.

(2) doesn't help if you're editing the bottom layer, though, and I cannot see how I could pre-flatten all the layers above the active layer.
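Roughly what I have in mind for combining (1) and (2), as a sketch (the Document/Layer types, the blend modes and the helper code are placeholders, not my real code): keep a cached composite of the layers below the active one, and when a brush dab dirties a rectangle, re-blend only that rectangle by seeding it from the cache and then blending the active layer and the layers above it.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Placeholder types -- names are illustrative only.
enum class BlendMode { Normal, Additive, Xor };

struct Rect { int x, y, w, h; };

struct Layer {
    std::vector<uint8_t> rgba;            // width * height * 4 bytes
    BlendMode mode = BlendMode::Normal;
};

struct Document {
    int width = 1600, height = 1200;
    std::vector<Layer> layers;            // ordered bottom .. top
    std::vector<uint8_t> flattened;       // idea (1): cached full composite
    std::vector<uint8_t> belowActive;     // idea (2): composite of layers below the active one
    int activeLayer = 0;
};

// Blend one source pixel onto a destination pixel (8bpp per channel);
// the composite's alpha channel is left untouched for simplicity.
static void blendPixel(uint8_t* dst, const uint8_t* src, BlendMode mode) {
    switch (mode) {
    case BlendMode::Additive:
        for (int c = 0; c < 3; ++c)
            dst[c] = static_cast<uint8_t>(std::min(255, dst[c] + src[c]));
        break;
    case BlendMode::Xor:
        for (int c = 0; c < 3; ++c)
            dst[c] = static_cast<uint8_t>(dst[c] ^ src[c]);
        break;
    case BlendMode::Normal: {
        const int a = src[3];
        for (int c = 0; c < 3; ++c)
            dst[c] = static_cast<uint8_t>((src[c] * a + dst[c] * (255 - a)) / 255);
        break;
    }
    }
}

// Idea (1): after an edit touches only `dirty`, re-blend just that rectangle.
// Idea (2): start from the cached composite of all layers *below* the active
// layer, then blend the active layer and everything above it on top.
void updateDirtyRect(Document& doc, const Rect& dirty) {
    for (int y = dirty.y; y < dirty.y + dirty.h; ++y) {
        for (int x = dirty.x; x < dirty.x + dirty.w; ++x) {
            const size_t i = (static_cast<size_t>(y) * doc.width + x) * 4;
            // Seed from the below-active cache instead of the bottom layer.
            for (int c = 0; c < 4; ++c)
                doc.flattened[i + c] = doc.belowActive[i + c];
            // Blend the active layer and the layers above it.
            for (size_t l = static_cast<size_t>(doc.activeLayer); l < doc.layers.size(); ++l)
                blendPixel(&doc.flattened[i], &doc.layers[l].rgba[i], doc.layers[l].mode);
        }
    }
}
```

For small brush dabs the work per refresh is then proportional to the dab area and the number of layers at or above the active one, not to the full 1600x1200 image.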


Without knowing your target platform, some sort of hardware acceleration is highly recommended. OpenGL 2.0+ (or ES 2.0+) is the most likely thing to help: using GLSL you get a C-style language in which the interesting part for you is the fragment program, i.e. what the GPU should do per pixel, based on the input textures, to produce an output colour. Where the output goes is implicit, but you can render to an image and then use that image as an input to a later pass, which hooks right into your idea (2). Depending on the exact hardware you're targeting, it may be relevant that Direct3D has a very similar construct in HLSL, and NVIDIA supplies a more proprietary equivalent called Cg that I think nowadays can compile to either GLSL or HLSL.
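To make that concrete, here is a minimal sketch of the sort of fragment program in question, kept as a GLSL source string inside a C++ file. The uniform names are made up, and the GL plumbing (compiling the shader, drawing a full-screen quad, and ping-ponging between two framebuffer objects so that the output image becomes the next pass's uDst input) is omitted.

```cpp
// Minimal sketch only: uniform names are illustrative, not from any real code.
static const char* kBlendFragmentShader = R"(
    #version 120
    uniform sampler2D uDst;   // composite of the layers flattened so far
    uniform sampler2D uSrc;   // the layer being blended on top of it
    uniform int       uMode;  // 0 = normal, 1 = additive
    varying vec2      vTexCoord;

    void main() {
        vec4 dst = texture2D(uDst, vTexCoord);
        vec4 src = texture2D(uSrc, vTexCoord);
        if (uMode == 1) {
            // Additive blend, clamped to white.
            gl_FragColor = vec4(min(dst.rgb + src.rgb, vec3(1.0)), 1.0);
        } else {
            // Normal (source-over) blend using the source alpha.
            gl_FragColor = vec4(mix(dst.rgb, src.rgb, src.a), 1.0);
        }
        // Bitwise modes such as XOR would need integer texture formats and
        // the GLSL 1.30+ bitwise operators rather than float arithmetic.
    }
)";
```

Each pass blends one layer onto the running composite; rendering into a texture-backed FBO and binding that texture as uDst for the next pass is exactly the output-becomes-input arrangement described above.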

Otherwise: idea (1) is a smart move, especially if the user is allowed to open images of arbitrary size. It causes the time spent to be a function of the size of the brush, not the image. You need to be reasonably precise in your thinking if you're doing things at subpixel accuracy.

Idea (2) potentially has precision ramifications (especially if coupled to hardware). To maintain the exact same results, obviously your intermediate buffers need to be of the same precision as your intermediate variables, which in a typical consumer-oriented drawing application often means that file inputs are 8bpp/channel but intermediate storage needs to be at least 16bpp/channel if you don't want errors to accumulate. This is likely to be the biggest potential barrier to hardware acceleration, since older hardware tends to limit you to 8bpp/channel intermediate buffers. Modern hardware can do floating point buffers of decent precision.
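As a toy numeric illustration (the numbers are arbitrary, not from the question): halve a channel value twice, rounding to the intermediate precision after each step the way a cached buffer forces you to, and the 8bpp path has already drifted while a 16bpp intermediate still round-trips correctly.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const double exact = 201.0 * 0.5 * 0.5;            // 50.25 in full precision

    // 8 bits per channel: every intermediate write rounds to an integer 0..255.
    uint8_t v8 = 201;
    v8 = static_cast<uint8_t>(v8 * 0.5 + 0.5);          // 101 (100.5 rounded up)
    v8 = static_cast<uint8_t>(v8 * 0.5 + 0.5);          // 51  -- already off by 0.75

    // 16 bits per channel: scale 0..255 up to 0..65535 for the intermediate.
    uint16_t v16 = 201 * 257;                           // 51657
    v16 = static_cast<uint16_t>(v16 * 0.5 + 0.5);       // 25829
    v16 = static_cast<uint16_t>(v16 * 0.5 + 0.5);       // 12915
    const double back = v16 / 257.0;                    // ~50.25 when scaled back

    std::printf("exact %.2f   8bpp %d   16bpp %.2f\n", exact, v8, back);
    return 0;
}
```

The same per-step rounding happens every time a partially blended result is written into a fixed-point buffer, which is why the intermediate storage needs the extra bits even when the files themselves are 8bpp/channel.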

It is possible to gain an advantage from precomputation on the layers to be applied afterwards, but the per-pixel information generally needs to be more complicated than a single colour, and may end up no simpler than just storing the original buffers. Probably the smart thing to do is peephole optimisation: if you have two additive layers on top of each other, you can easily replace them with a single additive layer, and likewise for two multiplicative or two XOR layers. So you'd implement a loop that looks at the effect queue, finds any pattern it knows how to turn into a simpler form, makes the substitution, and repeats until no further substitutions can be found. For optimisation purposes you may even want to implement some compound operations that aren't directly offered to the user. Though, again, you need to consider precision.

In the common case, with normal blending applied to all layers, you'd end up with a single operation for arbitrarily many layers above.
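A sketch of that peephole pass, assuming a simple effect queue of (layer, mode) entries (all of the types here are hypothetical). Only additive and XOR merges are shown, because saturating addition and XOR are associative, so replacing two stacked layers with one pre-combined layer is exact; collapsing a run of normal/source-over layers also works, but needs the alpha kept in premultiplied form and is left out of the sketch.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical effect-queue entry: one layer plus the mode used to blend it
// onto whatever lies underneath it.
enum class BlendMode { Normal, Additive, Xor };

struct LayerOp {
    std::vector<uint8_t> pixels;   // channel data, same size for every layer
    BlendMode mode;
};

// Pre-combine two adjacent layers that share an associative mode into one
// layer: (base OP a) OP b == base OP (a OP b) for saturating add and XOR.
static LayerOp combine(const LayerOp& a, const LayerOp& b) {
    LayerOp merged{std::vector<uint8_t>(a.pixels.size()), a.mode};
    for (size_t i = 0; i < a.pixels.size(); ++i) {
        const int x = a.pixels[i], y = b.pixels[i];
        merged.pixels[i] = (a.mode == BlendMode::Additive)
                               ? static_cast<uint8_t>(std::min(255, x + y))
                               : static_cast<uint8_t>(x ^ y);
    }
    return merged;
}

// Peephole pass: find an adjacent same-mode additive or XOR pair, substitute
// the pre-combined layer, and repeat until no substitution can be found.
void peephole(std::vector<LayerOp>& queue) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (size_t i = 0; i + 1 < queue.size(); ++i) {
            const bool mergeable =
                queue[i].mode == queue[i + 1].mode &&
                (queue[i].mode == BlendMode::Additive ||
                 queue[i].mode == BlendMode::Xor);
            if (mergeable) {
                queue[i] = combine(queue[i], queue[i + 1]);
                queue.erase(queue.begin() + static_cast<std::ptrdiff_t>(i + 1));
                changed = true;
                break;   // rescan from the start after each substitution
            }
        }
    }
}
```

The queue only ever shrinks, so the loop terminates; in an editor you would rerun it whenever a layer's blend mode changes and then blend against the shortened queue.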
