C# predictive coding for image compression

I've been playing with Huffman Compression on images to reduce size while maintaining a lossless image, but I've also read that you can use predictive coding to further compress image data by reducing entropy.

From what I understand, in the lossless JPEG standard, each pixel is predicted as the average of the 4 adjacent pixels already encountered in raster order (three above and one to the left), e.g. trying to predict the value of a pixel a based on the preceding pixels x above and to the left of a:

x x x
x a 

Then calculate and encode the residual (the difference between the predicted and the actual value).
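For example, with illustrative numbers (not from any real image):

// Predict one pixel from its 4 causal neighbors and form the residual.
int a = 100, b = 102, c = 101; // the three pixels above
int d = 99;                    // the pixel to the left
int actual = 98;               // the pixel being predicted

int prediction = (a + b + c + d) / 4;        // 402 / 4 == 100 with integer division
byte residual = (byte)(prediction - actual); // 100 - 98 == 2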

But what I don't get is: if the sum of the 4 neighboring pixels isn't a multiple of 4, you'd get a fraction, right? Should that fraction be ignored? If so, would the proper encoding of an 8-bit image (stored in a byte[]) be something like:

public static void Encode(byte[] buffer, int width, int height)
{
    // Predict from a snapshot of the original pixels, not from
    // values that have already been overwritten with residuals.
    var tempBuff = new byte[buffer.Length];
    Array.Copy(buffer, tempBuff, buffer.Length);

    // The first row, first column and last column keep their raw
    // values: they lack a full causal neighborhood.
    for (int i = 1; i < height; i++)
    {
        for (int j = 1; j < width - 1; j++)
        {
            int offsetUp = ((i - 1) * width) + (j - 1);
            int offset = (i * width) + (j - 1);

            int a = tempBuff[offsetUp];     // above-left
            int b = tempBuff[offsetUp + 1]; // above
            int c = tempBuff[offsetUp + 2]; // above-right
            int d = tempBuff[offset];       // left
            int pixel = tempBuff[offset + 1];

            var ave = (a + b + c + d) / 4;  // integer division truncates
            var val = (byte)(ave - pixel);  // residual, wraps mod 256
            buffer[offset + 1] = val;
        }
    }
}

public static void Decode(byte[] buffer, int width, int height)
{
    // Same raster order as Encode. By the time a residual is reached,
    // all four of its neighbors hold already-restored pixel values, so
    // the identical prediction can be recomputed. Because the residual
    // is ave - pixel, taking ave - residual returns pixel (mod 256):
    // the transform is its own inverse.
    for (int i = 1; i < height; i++)
    {
        for (int j = 1; j < width - 1; j++)
        {
            int offsetUp = ((i - 1) * width) + (j - 1);
            int offset = (i * width) + (j - 1);

            int a = buffer[offsetUp];     // above-left (already decoded)
            int b = buffer[offsetUp + 1]; // above
            int c = buffer[offsetUp + 2]; // above-right
            int d = buffer[offset];       // left
            int residual = buffer[offset + 1];

            var ave = (a + b + c + d) / 4;
            buffer[offset + 1] = (byte)(ave - residual); // original pixel
        }
    }
}
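
As a sanity check, a quick round trip should reproduce the input exactly, since ave - (ave - pixel) gives back pixel modulo 256. A hypothetical test harness, assuming the Encode and Decode methods above are in scope:

using System;
using System.Linq;

// Round-trip check: random image in, encode, decode, compare.
const int width = 16, height = 16;
var rng = new Random(42);
var original = new byte[width * height];
rng.NextBytes(original);

var working = (byte[])original.Clone();
Encode(working, width, height);
Decode(working, width, height);

Console.WriteLine(working.SequenceEqual(original)
    ? "Round trip is lossless"
    : "Round trip lost data");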

I don't see how this will really reduce entropy. How will this help compress my images further while still being lossless?

Thanks for any enlightenment

EDIT:

So after playing with the predictive-coded images, I noticed that the histogram data shows a lot of ±1's for the various pixels. This reduces entropy quite a bit in some cases. Here is a screenshot:

[Screenshot: histogram of the residual image]
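
To quantify the effect, one can compare the Shannon entropy of the byte histogram before and after prediction. A minimal sketch (EntropyBits is an illustrative helper, not from the original post):

using System;

static double EntropyBits(byte[] buffer)
{
    // Shannon entropy in bits per symbol of the buffer's value histogram:
    // a lower bound on the average code length a Huffman coder can approach.
    var counts = new long[256];
    foreach (var b in buffer) counts[b]++;

    double entropy = 0.0;
    foreach (var count in counts)
    {
        if (count == 0) continue;
        double p = (double)count / buffer.Length;
        entropy -= p * Math.Log(p, 2);
    }
    return entropy;
}

Printing EntropyBits(buffer) before and after Encode shows the drop: the raw image spreads probability mass over many values, while the residual image concentrates it near zero.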


Yes, just truncate. It doesn't matter, because you store the difference. It reduces entropy because you only store small values; a lot of them will be -1, 0 or 1. There are a couple of off-by-one bugs in your snippet, by the way.
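
One detail when reading that histogram: the residuals are stored in a byte[], so negative differences wrap modulo 256 and a -1 appears as 255. Reinterpreting each byte as signed makes the peak around zero explicit (a small illustrative sketch; encoded is a hypothetical buffer that has been through Encode):

// Histogram of residuals by signed value: (sbyte)255 == -1.
var histogram = new int[256];
foreach (var b in encoded)
{
    sbyte signed = (sbyte)b;     // maps 128..255 to -128..-1
    histogram[signed + 128]++;   // index 128 holds the count of residual 0
}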
