
How do H.264, or video encoders in general, compute the residual image of two frames?


I have been trying to understand how video encoding works for modern encoders, in particular H.264. It is very often mentioned in documentation that residual frames are created from the differences between the current P-frame and the last I-frame (assuming the following frames are not used in the prediction). I understand that a YUV color space is used (maybe YV12), and that one image is "subtracted" from the other to form the residual. What I don't understand is how exactly this subtraction works. I don't think it is an absolute value of the difference, because that would be ambiguous. What is the per-pixel formula to obtain this difference?


Subtraction is just one small step in video encoding; the core principle behind most modern video encoding is motion estimation, followed by motion compensation. Basically, the process of motion estimation generates vectors that show offsets between macroblocks in successive frames. However, there's always a bit of error in these vectors.
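To make the estimation step concrete, here is a minimal brute-force sketch in Python/NumPy. It is only illustrative: real H.264 encoders use much faster search strategies, and the function name, block coordinates and search range here are assumptions, not anything from the standard. For one block of the current frame, it picks the offset into the reference frame with the lowest sum of absolute differences (SAD):

    import numpy as np

    def estimate_motion(cur_block, ref_frame, bx, by, search=8):
        # Hypothetical full-search motion estimation for one block:
        # try every (dx, dy) offset within +/-search pixels and keep
        # the one that minimizes the sum of absolute differences (SAD).
        h, w = cur_block.shape
        best_mv, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                # Skip candidates that fall outside the reference frame.
                if (y < 0 or x < 0 or
                        y + h > ref_frame.shape[0] or
                        x + w > ref_frame.shape[1]):
                    continue
                cand = ref_frame[y:y + h, x:x + w].astype(np.int16)
                sad = np.abs(cur_block.astype(np.int16) - cand).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dx, dy)
        return best_mv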

So what happens is the encoder outputs both the vector offsets and the "residual", which is what's left over. The residual is not simply the difference between two frames; it's the difference between the two frames after motion estimation is taken into account. See the "Motion compensated difference" image in the Wikipedia article on motion compensation for a clear illustration of this; note that the motion-compensated difference is drastically smaller than the "dumb" residual.
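In other words, the per-pixel formula the question asks about is plain signed subtraction: current pixel minus the motion-compensated prediction. It is not an absolute difference, so nothing is ambiguous, and the decoder can reconstruct the block by adding the residual back onto the prediction. A hedged sketch, assuming 8-bit planes stored as NumPy arrays, a motion vector that keeps the block inside the frame, and the hypothetical names from the sketch above:

    import numpy as np

    def residual_block(cur_frame, ref_frame, bx, by, size, mv):
        # Residual = current block minus the prediction fetched from
        # the reference frame at the motion vector mv = (dx, dy).
        # For 8-bit input the result lies in [-255, 255], so a wider
        # signed type is required.
        dx, dy = mv
        cur = cur_frame[by:by + size, bx:bx + size].astype(np.int16)
        pred = ref_frame[by + dy:by + dy + size,
                         bx + dx:bx + dx + size].astype(np.int16)
        return cur - pred  # decoder reconstructs: cur = pred + residual

The signed residual is what then gets transformed, quantized and entropy-coded; because motion compensation removes most of the energy from well-predicted blocks, those coefficients are small and compress well.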

Here's a decent PDF that goes over some of the basics.

A few other notes:

  • Yes, YUV is always used, and most encoders typically work in YV12 or some other chroma-subsampled format
  • Subtraction will have to happen on the Y, U and V planes separately (think of them as three separate channels, all of which need to be encoded; then it becomes pretty clear how the subtraction has to happen; a per-plane sketch follows this list). Motion estimation may or may not happen on all of the Y, U and V planes; sometimes encoders only do it on the Y (luma) values to save a bit of CPU at the expense of quality.
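As a sketch of that last point, the same signed subtraction simply runs once per plane; in a YV12-style layout the U and V planes are each a quarter the area of the Y plane. Representing a frame as a (Y, U, V) tuple of NumPy arrays is an assumption for illustration only:

    import numpy as np

    def plane_residuals(cur_yuv, pred_yuv):
        # cur_yuv and pred_yuv are (Y, U, V) tuples of uint8 planes.
        # Each plane is differenced independently, exactly like three
        # separate single-channel images.
        return tuple(c.astype(np.int16) - p.astype(np.int16)
                     for c, p in zip(cur_yuv, pred_yuv))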