
GLSL Convolution with Large Kernel in Texture Memory


I'm very new to GLSL, but I'm trying to write a convolution kernel in a fragment shader for image processing. I was able to do this just fine when my kernel was small (3x3) using a constant matrix. Now, however, I'd like to use a kernel of size 9x9, or for that matter of arbitrary size. My initial thought was to set up a texture containing the convolution kernel. Then, using a sampler2D, I'd read the kernel texture and convolve it with the image texture (also a sampler2D). Is this the right way to go about this?
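Roughly what I have in mind, as a sketch (assuming GLSL 3.30, the 9x9 coefficients uploaded into a single-channel float texture such as GL_R32F, and illustrative uniform/varying names that aren't from any real code yet):

    #version 330 core

    uniform sampler2D uImage;      // source image
    uniform sampler2D uKernel;     // kernel coefficients, one per texel (R channel)
    uniform int       uKernelSize; // e.g. 9 for a 9x9 kernel
    uniform vec2      uTexelSize;  // 1.0 / image resolution, in texture coordinates

    in  vec2 vTexCoord;
    out vec4 fragColor;

    void main() {
        int  radius = uKernelSize / 2;
        vec4 sum    = vec4(0.0);
        for (int y = 0; y < uKernelSize; ++y) {
            for (int x = 0; x < uKernelSize; ++x) {
                // Read one coefficient by texel index (no filtering, no normalization).
                float w = texelFetch(uKernel, ivec2(x, y), 0).r;
                // Sample the image at the matching offset around the current fragment.
                vec2 offset = vec2(x - radius, y - radius) * uTexelSize;
                sum += w * texture(uImage, vTexCoord + offset);
            }
        }
        fragColor = sum;
    }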

I suppose you could also make an array of arbitrary size that contains the coefficients. This might work for 81 coefficients, but what happens if you want something larger, say 20x20?
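For the array variant, I imagine something like this sketch (again GLSL 3.30, with an assumed fixed 9x9 size, since a uniform array needs a compile-time-constant size and the total is bounded by GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, which is at least 1024 in GL 3.3, so big kernels start to crowd out other uniforms):

    #version 330 core

    const int KERNEL_SIZE = 9;     // must be a compile-time constant

    uniform sampler2D uImage;
    uniform vec2      uTexelSize;  // 1.0 / image resolution
    uniform float     uKernel[KERNEL_SIZE * KERNEL_SIZE]; // uploaded with glUniform1fv

    in  vec2 vTexCoord;
    out vec4 fragColor;

    void main() {
        int  radius = KERNEL_SIZE / 2;
        vec4 sum    = vec4(0.0);
        for (int y = 0; y < KERNEL_SIZE; ++y) {
            for (int x = 0; x < KERNEL_SIZE; ++x) {
                float w = uKernel[y * KERNEL_SIZE + x];
                sum += w * texture(uImage, vTexCoord + vec2(x - radius, y - radius) * uTexelSize);
            }
        }
        fragColor = sum;
    }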

In general, if you need to access multiple large objects in GLSL, what's the proper strategy? Thanks,

D


Sequential access:

  1. Vertex Attributes

Random access:

  1. Texture Buffers / Uniform blocks if the source is a buffer
  2. Uniforms if the source is small
  3. Textures otherwise


Yes, since uniform and constant space is limited, using a texture as a replacement is a good strategy.
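To illustrate the "Uniform blocks if the source is a buffer" option from the list above, here is a hedged sketch using a std140 uniform block backed by a buffer object (block and variable names are illustrative, not from the question). Under std140 a bare float array is padded to 16 bytes per element, so the coefficients are packed four per vec4:

    #version 330 core

    const int KERNEL_SIZE  = 9;
    const int KERNEL_VEC4S = (KERNEL_SIZE * KERNEL_SIZE + 3) / 4; // 21 vec4s hold 81 floats

    // Coefficients come from a buffer object bound to this uniform block
    // (glBindBufferBase on the application side).
    layout(std140) uniform KernelBlock {
        vec4 coeffs[KERNEL_VEC4S]; // four coefficients packed per vec4
    };

    uniform sampler2D uImage;
    uniform vec2      uTexelSize;

    in  vec2 vTexCoord;
    out vec4 fragColor;

    float kernelAt(int i) {
        return coeffs[i / 4][i % 4]; // unpack one coefficient
    }

    void main() {
        int  radius = KERNEL_SIZE / 2;
        vec4 sum    = vec4(0.0);
        for (int i = 0; i < KERNEL_SIZE * KERNEL_SIZE; ++i) {
            int x = i % KERNEL_SIZE;
            int y = i / KERNEL_SIZE;
            sum += kernelAt(i) * texture(uImage, vTexCoord + vec2(x - radius, y - radius) * uTexelSize);
        }
        fragColor = sum;
    }

On the application side the block is attached with glUniformBlockBinding and glBindBufferBase; for coefficient sets larger than GL_MAX_UNIFORM_BLOCK_SIZE (at least 16 KB), a texture buffer (samplerBuffer read with texelFetch) is the usual fallback.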
