I am currently working on an Android application that processes camera frames retrieved from Camera.PreviewCallback.onPreviewFrame(). These frames are encoded in YUV420SP format and provided as a byte array.
I need to downsize the full frame and its contents, say by a factor of 2, from 640x480 px to 320x240. I guess that for downsizing the luminance part I could just run a loop copying every second value from the byte[] frame to a new, smaller array, but what about the chrominance part? Does anyone know more about the structure of a YUV420SP frame?
Many thanks in advance!
In YUV420SP (NV21, the Android camera default), the full-resolution Y plane of width * height bytes is followed by a half-resolution chroma plane of interleaved V/U byte pairs, one pair per 2x2 block of luma samples. Here is code to get a half-size RGBA image from the yuv420sp bytes:
// byte[] data holds the NV21 frame from onPreviewFrame()
int frameSize = getFrameWidth() * getFrameHeight();
int[] rgba = new int[frameSize / 4];
for (int i = 0; i < getFrameHeight() / 2; i++) {
    for (int j = 0; j < getFrameWidth() / 2; j++) {
        // Average the four luma samples covered by this output pixel.
        int y1 = 0xff & data[2 * i * getFrameWidth() + 2 * j];
        int y2 = 0xff & data[2 * i * getFrameWidth() + 2 * j + 1];
        int y3 = 0xff & data[(2 * i + 1) * getFrameWidth() + 2 * j];
        int y4 = 0xff & data[(2 * i + 1) * getFrameWidth() + 2 * j + 1];
        int y = (y1 + y2 + y3 + y4) / 4;
        // NV21 stores chroma after the Y plane as interleaved V/U pairs.
        int v = 0xff & data[frameSize + i * getFrameWidth() + 2 * j];
        int u = 0xff & data[frameSize + i * getFrameWidth() + 2 * j + 1];
        // BT.601 "video range" YUV -> RGB conversion.
        y = Math.max(y, 16);
        int r = Math.round(1.164f * (y - 16) + 1.596f * (v - 128));
        int g = Math.round(1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
        int b = Math.round(1.164f * (y - 16) + 2.018f * (u - 128));
        r = Math.max(0, Math.min(255, r));
        g = Math.max(0, Math.min(255, g));
        b = Math.max(0, Math.min(255, b));
        // Bitmap.setPixels() expects ARGB-packed ints.
        rgba[i * (getFrameWidth() / 2) + j] = 0xff000000 | (r << 16) | (g << 8) | b;
    }
}
Bitmap bmp = Bitmap.createBitmap(getFrameWidth()/2, getFrameHeight()/2, Bitmap.Config.ARGB_8888);
bmp.setPixels(rgba, 0/* offset */, getFrameWidth()/2 /* stride */, 0, 0, getFrameWidth()/2, getFrameHeight()/2);
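If you want to stay in YUV rather than convert to RGB (for example, to feed the smaller frame to an encoder or detector), you can downscale NV21 directly: take every second Y sample in both directions, and copy every second V/U pair from every second chroma row. A minimal sketch of this, assuming even width and height; the class and method names here are my own, not from any Android API:

```java
// Downscale an NV21 (YUV420SP) frame by 2 in each dimension.
// Output is again NV21, sized (w/2) x (h/2).
class Nv21Downscale {
    public static byte[] halveNv21(byte[] src, int w, int h) {
        int ow = w / 2, oh = h / 2;
        byte[] dst = new byte[ow * oh * 3 / 2];
        // Luma plane: keep every second sample in both directions.
        for (int y = 0; y < oh; y++) {
            for (int x = 0; x < ow; x++) {
                dst[y * ow + x] = src[(2 * y) * w + 2 * x];
            }
        }
        // Chroma plane: interleaved V/U pairs, one pair per 2x2 luma block.
        // Keep every second pair in every second chroma row.
        int srcOff = w * h, dstOff = ow * oh;
        for (int y = 0; y < oh / 2; y++) {
            for (int x = 0; x < ow / 2; x++) {
                dst[dstOff + y * ow + 2 * x]     = src[srcOff + (2 * y) * w + 4 * x];     // V
                dst[dstOff + y * ow + 2 * x + 1] = src[srcOff + (2 * y) * w + 4 * x + 1]; // U
            }
        }
        return dst;
    }
}
```

Simple subsampling like this can alias; for better quality you could average each 2x2 block of Y values (as the RGBA code above does) and average the chroma pairs, at some extra cost per frame.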