iPhone colour image analysis

I am looking for some ideas about an approach that will let me analyze an image and determine how greenISH or brownISH or whiteISH it is... I am emphasizing ISH here because I am interested in capturing ALL the shades of these colours. So far, I have done the following:

I have my UIImage, I have a CGImageRef, and I actually have the colour of each pixel itself (its RGB and alpha values). What I don't know is how to quantify this and determine all the green shades, blues, browns, yellows, purples, etc. So, I can process each and every pixel and determine its basic RGB, but I need some help in quantifying the colours over a whole image.

Thanks for your ideas... Alex.


One fairly good solution is to switch from the RGB colour space to one of the Y-based colour spaces, such as YUV or YCbCr. In all of these, the Y channel represents brightness and the other two channels together represent colour relative to brightness. You probably want to factor brightness out, possibly with the caveat that all colours below a certain darkness are excluded, so getting Y separately is a helpful first step in itself.

Converting from RGB to YUV is achieved with a simple linear combination. Straight from Wikipedia and a thousand other sources:

y = 0.299*r + 0.587*g + 0.114*b;
u = -0.14713*r - 0.28886*g + 0.436*b;
v = 0.615*r - 0.51499*g - 0.10001*b;
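
As a minimal sketch in plain C (the struct and function name here are illustrative, not part of the original answer), assuming r, g and b are already normalised to [0, 1]:

typedef struct { float y, u, v; } YUV;

static YUV rgbToYUV(float r, float g, float b)
{
    YUV out;
    out.y =  0.299f*r   + 0.587f*g   + 0.114f*b;   // brightness
    out.u = -0.14713f*r - 0.28886f*g + 0.436f*b;   // blue-difference chroma
    out.v =  0.615f*r   - 0.51499f*g - 0.10001f*b; // red-difference chroma
    return out;
}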

Assuming you're keeping r, g and b in the range [0, 1], your first test might be:

if(y < 0.05)
{
    // this colour is very dark, so it's considered to be as
    // far as we allow from any colour we're interested in
}

To decide how close your colour then is to, say, green, work out the u and v components of the green you're interested in, as a proportion of the y:

r = b = 0;
g = 1;

y = 0.299*r + 0.587*g + 0.114*b = 0.587;

u = -0.14713*r - 0.28886*g + 0.436*b = -0.28886;
v = 0.615*r - 0.51499*g - 0.10001*b = -0.51499;

proportionOfU = u / y = -0.4921;
proportionOfV = v / y = -0.8773;

Subsequently, work out the proportions of U and V for each incoming colour and compare them (e.g. with 2D planar distance) to those you've computed for the reference colour. Closer values are more similar. How you scale and use that metric depends on your application; a sketch of the whole test follows below.
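
Putting the pieces together, a minimal sketch in plain C (again with illustrative names, reusing the rgbToYUV helper above; the 0.05 brightness cutoff and the green reference proportions are the example values worked out in this answer):

#include <math.h>

// Reference chroma proportions (u/y and v/y) for pure green, as computed above.
static const float kGreenU = -0.4921f;
static const float kGreenV = -0.8773f;

// Returns the planar distance from a pixel's chroma proportions to the
// reference green, or -1 if the pixel is too dark to classify reliably.
static float distanceFromGreen(float r, float g, float b)
{
    YUV c = rgbToYUV(r, g, b);
    if (c.y < 0.05f)
        return -1.0f; // too dark: the proportions would be meaningless

    float du = c.u / c.y - kGreenU;
    float dv = c.v / c.y - kGreenV;
    return sqrtf(du*du + dv*dv); // smaller means more green-ish
}

To score a whole image, you could then, for example, average this distance over all sufficiently bright pixels, or count the fraction of pixels whose distance falls below some threshold.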

Notice that as y goes toward 0, the computed proportions become less and less precise because of the limited range of the input data, and are undefined when y is 0. Conceptually, that's because all colours look exactly the same when there's no light on them. Checking that y is above a certain minimum value is the pragmatic way of working around this issue. It also means that you're not going to get sensible results if you try to ask "how black is this picture?", though again that's because of the ambiguity between a surface that doesn't reflect any light and a surface that doesn't have any light falling upon it.

