Color space. Well, everybody knows about RGB: three values normalized to the range [0.0, 1.0], representing the intensities of the Red, Green and Blue color components; this intensity is meant to be linear, isn't it?
Gamma. As far as I can understand, gamma is a function which maps RGB color components to other values. Googling this, I've seen both linear and non-linear functions... Linear functions seem to scale the RGB components, so they seem to tune image brightness; non-linear functions seem to "decompress" darker/lighter components.
Now, I'm starting to implement an image viewer, which shall display different image formats as textures. I'd like to modify the gamma of these images, so I should build a fragment shader and run it over the textured quad. Fine, but how do I determine the right gamma correction?
OpenGL works in a linear RGB color space, using floating point components. Indeed, I could compute gamma-corrected values starting from those linear values (at floating point precision), and display them after the gamma-corrected value has been clamped.
First, I shall determine the gamma ramp. How could I determine it? (analytically or using lookup tables)
Then, I came to investigate the OpenGL extension EXT_framebuffer_sRGB, which seems closely related to the extension EXT_texture_sRGB.
EXT_texture_sRGB introduces a new texture format which is used to linearize texel values into the linear RGB space. (footnote 1) In this way, I can be aware of the sRGB color space and have its values converted into linear RGB.
Instead, the EXT_framebuffer_sRGB extension allows me to encode linear RGB values into an sRGB framebuffer, without worrying about the conversion.
...
Wait, what is all this information for? If I can use an sRGB framebuffer and load sRGB textures, and process those textures without explicit sRGB conversions... why should I correct gamma?
Can I still correct gamma anyway, even on an sRGB buffer? Or had I better not? And brightness and contrast: shall they be applied before or after gamma correction?
That's a lot of information, and I'm getting confused now. I hope someone can explain all these concepts to me! Thank you.
...
There's another question. In case the device gamma is different from the "standard" 2.2, how do I "accumulate" different gamma corrections? I don't know if this is clear: if the image RGB values are already corrected for a monitor with a gamma of 2.2, but the monitor actually has a gamma of 2.8, how do I correct the gamma?
(1) Here are some extracts to highlight what I mean:
The sRGB color space is based on typical (non-linear) monitor characteristics expected in a dimly lit office. It has been standardized by the International Electrotechnical Commission (IEC) as IEC 61966-2-1. The sRGB color space roughly corresponds to 2.2 gamma correction.
Does this extension provide any sort of sRGB framebuffer formats or guarantee images rendered with sRGB textures will "look good" when output to a device supporting an sRGB color space?
RESOLVED: No. Whether the displayed framebuffer is displayed to a monitor that faithfully reproduces the sRGB color space is beyond the scope of this extension. This involves the gamma correction and color calibration of the physical display device. With this extension, artists can author content in an sRGB color space and provide that sRGB content for use as texture imagery that can be properly converted to linear RGB and filtered as part of texturing in a way that preserves the sRGB distribution of precision, but that does NOT mean sRGB pixels are output to the framebuffer. Indeed, this extension provides texture formats that convert sRGB to linear RGB as part of filtering. With programmable shading, an application could perform a linear RGB to sRGB conversion just prior to emitting color values from the shader. Even so, OpenGL blending (other than simple modulation) will perform linear math operations on values stored in a non-linear space which is technically incorrect for sRGB-encoded colors. One way to think about these sRGB texture formats is that they simply provide color components with a distribution of values distributed to favor precision towards 0 rather than evenly distributing the precision with conventional non-sRGB formats such as GL_RGB8.
Unfortunately OpenGL by itself doesn't define a colour space. It's just defined that the RGB values passed to OpenGL form a linear vector space. The values of the rendered framebuffer are then sent to the display device as they are. OpenGL just passes through the value.
Gamma serves two purposes:
- Sensory perception is nonlinear
- In the old days, display devices had a nonlinear response
The gamma correction is used to compensate for both.
The transformation is just "linear value V to some power Gamma", i.e. y(v) = v^gamma
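A minimal sketch of that pure power-law transform in C (the function names here are mine, just for illustration):

    #include <math.h>

    /* Decode: gamma-encoded value -> linear light, both in [0, 1]. */
    double gamma_decode(double v, double gamma) { return pow(v, gamma); }

    /* Encode: linear light -> gamma-encoded value. */
    double gamma_encode(double v, double gamma) { return pow(v, 1.0 / gamma); }

For example, with gamma = 2.2 an encoded value of 0.5 decodes to pow(0.5, 2.2) ≈ 0.218 in linear light.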
Colorspace transformations involve the complete chain from the input values to what's sent to the display, so this includes the gamma correction. This also implies that you should not manipulate the gamma ramp yourself.
For a long time the typical gamma value used to be 2.2. However this caused some undesirable quantisation of low values, so HP and Microsoft introduced a new colour space, called sRGB, which has a linear part for low values and a power function with an exponent of about 2.4 for the higher values. Most display devices these days use sRGB. Also most image files these days are in sRGB.
So if you have an sRGB image and display it as-is on an sRGB display device with a linear gamma ramp configured on the device (i.e. video driver gamma = 1), you're fine simply using sRGB texturing and an sRGB framebuffer and not doing anything else.
EDIT due to comments
Just to summarize:
- Use an ARB_framebuffer_sRGB framebuffer, so that the results of the linear OpenGL processing are properly colour transformed by the driver: http://www.opengl.org/registry/specs/ARB/framebuffer_sRGB.txt
- Linearize all colour inputs to OpenGL.
- Textures in sRGB colour space should be passed through EXT_texture_sRGB: http://www.opengl.org/registry/specs/EXT/texture_sRGB.txt
- Don't gamma correct the output values; the sRGB format framebuffer will take care of this (a minimal setup sketch follows this list).
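A minimal setup sketch in C, assuming a GL 3.0+ context (where both extensions are core), a window created with an sRGB-capable default framebuffer, and image data already loaded into width, height and pixels (those names are placeholders, not anything the extensions define):

    /* Upload sRGB-encoded image data; GL linearizes it when the texture is sampled. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,   /* sRGB internal format */
                 width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);  /* raw sRGB bytes */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Ask GL to re-encode the linear shader output to sRGB when writing to the framebuffer. */
    glEnable(GL_FRAMEBUFFER_SRGB);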
If your system does not support sRGB framebuffers:
- Set a linear colour ramp on your display device.
  - Windows: http://msdn.microsoft.com/en-us/library/ms536529(v=vs.85).aspx
  - X11: can be done through xgamma http://www.xfree86.org/current/xgamma.1.html
- Create (linear) framebuffer objects and do the linear rendering into the framebuffer object. The point of using an FBO is to do blending properly, since blending only works correctly in a linear colour space.
- Draw the final render result from the FBO to the window using a fragment shader that applies the desired colour corrections (gamma and others); see the shader sketch below.
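A sketch of such a final-pass fragment shader, written here as a GLSL source string embedded in C; the uniform names and the simplified pow(x, 1.0/gamma) encoding are my own choices, not something the extensions mandate:

    /* Final pass: sample the linear FBO texture and re-encode it for the display. */
    static const char *final_pass_fs =
        "#version 120\n"
        "uniform sampler2D u_linear_image; /* colour rendered into the linear FBO */\n"
        "uniform float     u_gamma;        /* e.g. 2.2 for a typical display      */\n"
        "varying vec2      v_texcoord;\n"
        "void main() {\n"
        "    vec3 lin = texture2D(u_linear_image, v_texcoord).rgb;\n"
        "    /* brightness/contrast adjustments belong here, while still linear */\n"
        "    gl_FragColor = vec4(pow(lin, vec3(1.0 / u_gamma)), 1.0);\n"
        "}\n";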
Wait, what is all this information for? If I can use an sRGB framebuffer and load sRGB textures, and process those textures without explicit sRGB conversions... why should I correct gamma?
Generally, you don't. The purpose of the sRGB texturing and framebuffers is so that you don't have to manually do gamma correction. Reads from sRGB textures are converted to a linear colorspace, and writes to sRGB framebuffers take linear RGB values and convert them to sRGB values. This is all automatic, and more to the point free, performance-wise.
The only time you will need to do gamma correction yourself is if the monitor's gamma does not match the sRGB approximation of 2.2. Monitors that deviate from that are rare.
Your textures do not have to be in the sRGB colorspace. However, most image creation applications save images in sRGB and work with colors in sRGB, so odds are most of your textures are already in sRGB whether you want them to be or not. The sRGB texture feature simply allows you to actually get the correct color values, rather than the color values you've been getting up until now.
And brightness and contrast: shall they be applied before or after gamma correction?
I don't know what you mean by brightness and contrast. That's something that should be set by the monitor, not your application. But virtually all math operations you will want to do on image data should be done in a linear colorspace. Therefore, if you are given an image in the sRGB colorspace, you need to linearize it before you can do any math on it. The sRGB texture feature makes this free, rather than having to do complex shader math.
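A small illustration in C of why linearizing first matters, using the simplified 2.2 power curve for brevity (the exact sRGB curve differs slightly; the function names are mine):

    #include <math.h>
    #include <stdio.h>

    static double srgb_to_linear_approx(double v) { return pow(v, 2.2); }
    static double linear_to_srgb_approx(double v) { return pow(v, 1.0 / 2.2); }

    int main(void) {
        double a = 0.0, b = 1.0;              /* sRGB-encoded black and white */
        double in_encoded = (a + b) / 2.0;    /* averaged while still encoded: 0.5 */
        double in_linear  = linear_to_srgb_approx(
            (srgb_to_linear_approx(a) + srgb_to_linear_approx(b)) / 2.0); /* ~0.73 */
        printf("average in encoded space: %.3f, average in linear space: %.3f\n",
               in_encoded, in_linear);
        return 0;
    }

The linear-space result (about 0.73 once re-encoded) is the physically correct 50% mix of the two lights; the encoded-space result (0.5) is visibly too dark.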
RGB
RGB: three values normalized in the range [0.0, 1.0], representing the intensities of the Red, Green and Blue color components; this intensity is meant to be linear, isn't it?
No. RGB values are meaningless numbers unless their relevance to a particular space/encoding is defined. They may be linear, gamma encoded, or log encoded, or use a compound transfer curve like those in the Rec709 and sRGB specs.
Also, they are relative to their primaries and whitepoint as defined in the colorspace, so for instance, #00FF00 in sRGB is a different color than #00FF00 in DCI-P3.
To define how an RGB pixel value should be displayed, you need not only the RGB triplet, but you need to know the colorspace it is intended for, which needs to include the primary coordinates, whitepoint, and transfer curve.
sRGB is the default "standard" RGB colorspace for the Web and general purpose computing. It is related to Rec709, the standard colorspace for HDTV.
GAMMA aka TRANSFER CURVE
Gamma. As far as I can understand, gamma is a function which maps RGB color components to other values.
Image gamma takes advantage of the non-linearity of human perception to make the best use of the limited data size of 8-bit-per-channel images. The human eye is more sensitive to changes in darker colors, so more bits are used to define the darker colors in a gamma encoded image.
Before digital, gamma was also used in the NTSC broadcast system which suppressed the apparent noise in the signal, in a way similar to how image gamma prevents an 8-bit per channel image from having "banding" artifacts.
First, I shall determine the gamma ramp. How could I determine it? (analytically or using lookup tables)
Gamma CURVE. The sRGB gamma curve is easily accessed. Here is the Wikipedia link for going from sRGB to linear. You can also use the "simplified" method, which simply uses a 2.2 exponent curve:
linearVideo = sRGBvideo^2.2
and the simplified inverse, to go back to sRGB:
sRGBvideo = linearVideo^0.4545
Using the simplified version will introduce some minor gamma errors; it is advised to use the "correct" curve for critical operations or where an image will be "round tripped" multiple times.
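For completeness, here is the exact IEC 61966-2-1 curve (the same formulas the Wikipedia article gives) as a small C sketch; the function names are mine:

    #include <math.h>

    /* Exact sRGB decode: encoded value in [0,1] -> linear light in [0,1]. */
    double srgb_to_linear(double s) {
        return (s <= 0.04045) ? s / 12.92
                              : pow((s + 0.055) / 1.055, 2.4);
    }

    /* Exact sRGB encode: linear light in [0,1] -> encoded value in [0,1]. */
    double linear_to_srgb(double l) {
        return (l <= 0.0031308) ? l * 12.92
                                : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
    }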
There's another question. In case the device gamma is different from the "standard" 2.2, how do I "accumulate" different gamma corrections? I don't know if this is clear: if the image RGB values are already corrected for a monitor with a gamma of 2.2, but the monitor actually has a gamma of 2.8, how do I correct the gamma?
2.8 ??? What monitor is that? PAL? This is unusual: while the PAL spec says that, 2.8 isn't "practical". Monitors are typically around 2.3 to 2.5 depending on how they are set up. When you adjust black level and contrast (white level) you are in essence adjusting the perceived gamma to match the viewing environment (room lighting).
Just FYI: while the sRGB "signal" is encoded with a gamma of roughly 1/2.2, the monitor's response is normally somewhat steeper than 2.2, so the chain adds a net exponent of about 1.1.
For Rec709, the encoded signal has an effective gamma of roughly 1/1.9, but the monitor in the reference viewing environment is about 2.4.
In both cases there is an intentional system gamma gain.
If you wanted to encode an image with a gamma for a 2.8 display and you wanted no system gamma gain, then the exponent is 1/2.8.
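As a sketch of that re-targeting in C, under the simplifying assumption that both the source encoding (2.2) and the target display (2.8) behave as pure power laws:

    #include <math.h>

    /* Re-encode a value prepared for a gamma-2.2 display so that a gamma-2.8
       display reproduces the same linear light (no extra system gamma gain). */
    double retarget_2_2_to_2_8(double v) {
        double lin = pow(v, 2.2);       /* undo the source encoding      */
        return pow(lin, 1.0 / 2.8);     /* re-encode for the 2.8 display */
        /* equivalent single step: pow(v, 2.2 / 2.8), an exponent of ~0.79 */
    }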
The "highest" gamma in common use is for digital cinema (and also Rec2020), at 2.6 For those of you thinking PAL & 2.8, I encourage you to read Poynton on that subject:
HIGHLY RECOMMENDED READING
Charles Poynton's Gamma FAQ is an easy read and completely describes these issues and why they are important in an image pipeline. Also read his Color FAQ at the same link.
A FEW WORDS ON LINEAR vs sRGB
Working on images in a linear workspace is typically ideal, as it not only simplifies the math, but emulates light in the real world. Light in the world works in a linear manner (additive). But if working in linear, you need adequate bit depth, and 8 bits is not enough.
Human perception is NON linear. Image gamma encoding takes advantage of the non-linearity to make the most use of 8-bit image containers. When you convert to linear YOU NEED MORE BITS. 12 bits per channel is considered a minimum, but 16-bit float is the minimum "recommended best practice" for linear workspaces.
If using textures in a linear rendering environment, those textures need to be transformed to a linear space (and often a deeper bit depth). While the added bits increase data bandwidth, the simplified math often allows faster computation.
sRGB is a DISPLAY REFERRED space; it is intended for DISPLAY PURPOSES, and for storing images in a compact "display ready" state. Black is 0 and white is 255, and the transfer curve is close to 1/2.2.
sRGB is based on Rec709 (HDTV), and uses identical primaries and whitepoint. But the transfer curve and data encoding are different. Rec709 is intended for display on a higher gamma monitor in a darkened living room, and encodes black at 16 and white at 235.
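As a small illustration of that encoding difference in the 8-bit case (the scale factors simply follow from the 16-235 range mentioned above):

    #include <math.h>

    /* Full-range encoding (sRGB style): 0.0 -> 0, 1.0 -> 255. */
    int full_range_8bit(double v)   { return (int)lround(v * 255.0); }

    /* Rec709 "legal" video range: 0.0 -> 16, 1.0 -> 235. */
    int rec709_range_8bit(double v) { return (int)lround(16.0 + v * 219.0); }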