I understand generally the concepts behind bilateral filtering when using only grayscale images. I've read this website on bilateral filtering and the paper it discusses.
My main problem is this: how can you determine the similarity of colors? Is the similarity of two RGB values the sum/product/some other operation of the similarities of their R values, G values, and B values? If this is the case, would it then be reasonable to determine the similarities separately and filter over each channel?
Thank you
I checked out your link, and the answer is in there. It talks about first converting to the CIE-Lab color space. Then you calculate the Euclidean distance between the two points, i.e.
distance = sqrt((L1-L2)^2 + (a1-a2)^2 + (b1-b2)^2)
Here is a site that has several conversion formulae, for XYZ, RGB and LAB color spaces.
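As a rough illustration of the conversion chain those sites describe, here is a minimal pure-Python sketch (sRGB with a D65 white point assumed; the constants are the standard sRGB matrix and CIE-Lab thresholds):

```python
import math

def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB triple to CIE-Lab (D65 white point assumed)."""
    def lin(c):
        # normalize 0..255 -> 0..1, then undo the sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (standard sRGB matrix)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # XYZ -> Lab, with the usual piecewise cube-root function
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_distance(c1, c2):
    """Euclidean distance between two sRGB colors, measured in Lab space."""
    L1, a1, b1 = srgb_to_lab(*c1)
    L2, a2, b2 = srgb_to_lab(*c2)
    return math.sqrt((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
```

For example, `lab_distance((0, 0, 0), (255, 255, 255))` comes out very close to 100, the full lightness range of the L channel.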
The Euclidean distance between raw RGB values is not a good estimate of perceived color similarity.
The page you linked says the following about that issue:
In fact, a bilateral filter allows combining the three color bands appropriately, and measuring photometric distances between pixels in the combined space. Moreover, this combined distance can be made to correspond closely to perceived dissimilarity by using Euclidean distance in the CIE-Lab color space.
So, try using the Euclidean distance in the CIE-Lab color space.
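To make the "combined space" idea concrete, here is a brute-force NumPy sketch (my own illustration, not the paper's implementation) of a bilateral filter whose range kernel measures one Euclidean distance over all channels of a guide image, which could be the Lab version of the input:

```python
import numpy as np

def bilateral_filter(img, guide, sigma_s=2.0, sigma_r=10.0, radius=2):
    """Brute-force bilateral filter.

    img   : H x W x C array to be smoothed.
    guide : H x W x C array in which photometric distances are measured
            (e.g. the same image converted to CIE-Lab).
    The range weight combines all channels of `guide` through a single
    Euclidean distance, rather than filtering each channel separately.
    """
    h, w = img.shape[:2]
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1].astype(float)
            gpatch = guide[y0:y1, x0:x1].astype(float)
            # spatial Gaussian on pixel coordinates
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # combined photometric distance: Euclidean over all channels at once
            dist2 = np.sum((gpatch - guide[y, x].astype(float)) ** 2, axis=-1)
            rng = np.exp(-dist2 / (2 * sigma_r ** 2))
            wgt = (spatial * rng)[..., None]
            out[y, x] = (wgt * patch).sum(axis=(0, 1)) / wgt.sum()
    return out
```

A constant image passes through unchanged, since the normalized weights average identical values; the interesting behavior appears at color edges, where the range term suppresses contributions from dissimilar pixels.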
best Lars
Measuring the distance between colors in the CIE-Lab color space works fine. But converting RGB data to such a color space takes time, which is undesirable for real-time applications (mobile apps especially).
I have checked the OpenCV and GPUImage code.
The OpenCV code uses the difference of the sums of the RGB channels:
distance = (R1+G1+B1) - (R2+G2+B2)
GPUImage, which runs well on the GPU, uses the exact Euclidean distance across channels:
distance = sqrt((R1-R2)^2 + (G1-G2)^2 + (B1-B2)^2)
In a practical scenario (I am developing a real-time bilateral filter app for mobile phones), these two methods work similarly. I believe that, instead of transforming RGB to the CIE-Lab color space, the methods above are good enough.
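Both cheap distances can be sketched in a few lines (Python here for illustration; the real implementations are C++ and GLSL). One caveat worth knowing: the sum-based version assigns a distance of zero to quite different colors, which is the price paid for its speed:

```python
import math

def sum_diff(c1, c2):
    # OpenCV-style: absolute difference of the channel sums.
    # Note: (10, 0, 0) and (0, 10, 0) compare as identical.
    return abs(sum(c1) - sum(c2))

def euclid(c1, c2):
    # GPUImage-style: exact Euclidean distance over the three channels.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))
```

For instance, `sum_diff((10, 0, 0), (0, 10, 0))` is 0, while `euclid` still separates those two colors.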