Convert bitmap array to YUV (YCbCr NV21)


How to convert a Bitmap returned by BitmapFactory.decodeFile() to YUV format (similar to what the camera's onPreviewFrame() returns in a byte array)?


Here is some code that actually works:

    // Converts a Bitmap (already scaled to inputWidth x inputHeight) to an NV21 byte array
    byte [] getNV21(int inputWidth, int inputHeight, Bitmap scaled) {

        int [] argb = new int[inputWidth * inputHeight];

        scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);

        byte [] yuv = new byte[inputWidth*inputHeight*3/2];
        encodeYUV420SP(yuv, argb, inputWidth, inputHeight);

        scaled.recycle();

        return yuv;
    }

    void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
        final int frameSize = width * height;

        int yIndex = 0;
        int uvIndex = frameSize;

        int a, R, G, B, Y, U, V;
        int index = 0;
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {

                a = (argb[index] & 0xff000000) >>> 24; // alpha is not used
                R = (argb[index] & 0xff0000) >> 16;
                G = (argb[index] & 0xff00) >> 8;
                B = (argb[index] & 0xff);

                // well known RGB to YUV algorithm
                Y = ( (  66 * R + 129 * G +  25 * B + 128) >> 8) +  16;
                U = ( ( -38 * R -  74 * G + 112 * B + 128) >> 8) + 128;
                V = ( ( 112 * R -  94 * G -  18 * B + 128) >> 8) + 128;

                // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
                //    meaning for every 4 Y pixels there are 1 V and 1 U.  Note the sampling is every other
                //    pixel AND every other scanline.
                yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
                if (j % 2 == 0 && index % 2 == 0) { 
                    yuv420sp[uvIndex++] = (byte)((V<0) ? 0 : ((V > 255) ? 255 : V));
                    yuv420sp[uvIndex++] = (byte)((U<0) ? 0 : ((U > 255) ? 255 : U));
                }

                index ++;
            }
        }
    }
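For reference, a possible call site (not part of the original answer) could look like the sketch below; the file path and target size are made-up examples, and the target width and height are assumed to be even, since NV21 subsamples chroma 2x2.

    // Hypothetical usage: decode a file, scale to even dimensions, convert to NV21
    Bitmap source = BitmapFactory.decodeFile("/sdcard/input.jpg"); // example path
    int targetWidth = 640;   // must be even for NV21
    int targetHeight = 480;  // must be even for NV21
    Bitmap scaled = Bitmap.createScaledBitmap(source, targetWidth, targetHeight, true);
    byte[] nv21 = getNV21(targetWidth, targetHeight, scaled); // length == targetWidth * targetHeight * 3 / 2

Note that getNV21() calls scaled.recycle() internally, so don't use the scaled bitmap afterwards.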


The following is code for converting a Bitmap to YUV (NV21) format.

void yourFunction() {

    // mBitmap is your bitmap

    int mWidth = mBitmap.getWidth();
    int mHeight = mBitmap.getHeight();

    int[] mIntArray = new int[mWidth * mHeight];

    // Copy pixel data from the Bitmap into the 'mIntArray' array
    mBitmap.getPixels(mIntArray, 0, mWidth, 0, 0, mWidth, mHeight);

    // Allocate the NV21 output buffer and convert mIntArray to YUV binary data
    byte[] yuvData = new byte[mWidth * mHeight * 3 / 2];
    encodeYUV420SP(yuvData, mIntArray, mWidth, mHeight);

}

static public void encodeYUV420SP(byte[] yuv420sp, int[] rgba,
        int width, int height) {
    final int frameSize = width * height;

    int[] U = new int[frameSize];
    int[] V = new int[frameSize];

    int r, g, b, y, u, v;
    for (int j = 0; j < height; j++) {
        int index = width * j;
        for (int i = 0; i < width; i++) {

            r = Color.red(rgba[index]);
            g = Color.green(rgba[index]);
            b = Color.blue(rgba[index]);

            // rgb to yuv (note the parentheses: '>>' binds more loosely than '+')
            y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
            u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
            v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;

            // clip and store Y; keep U and V for the subsampling pass below
            yuv420sp[index] = (byte) ((y < 0) ? 0 : ((y > 255) ? 255 : y));
            U[index] = u;
            V[index++] = v;
        }
    }

    // NV21: after the full-resolution Y plane, V and U are interleaved,
    // subsampled by 2 in both directions
    int uvIndex = frameSize;
    for (int j = 0; j < height; j += 2) {
        for (int i = 0; i < width; i += 2) {
            int index = j * width + i;
            u = U[index];
            v = V[index];
            yuv420sp[uvIndex++] = (byte) ((v < 0) ? 0 : ((v > 255) ? 255 : v));
            yuv420sp[uvIndex++] = (byte) ((u < 0) ? 0 : ((u > 255) ? 255 : u));
        }
    }
}
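A quick way to sanity-check either converter (my addition, not part of the original answers) is to wrap the resulting bytes in Android's YuvImage (android.graphics) and compress them to JPEG; if the decoded JPEG looks right, the NV21 layout is correct. A minimal sketch, assuming yuvData, mWidth and mHeight from the function above:

// Sanity check: wrap the NV21 bytes in a YuvImage and write them out as JPEG
YuvImage yuvImage = new YuvImage(yuvData, ImageFormat.NV21, mWidth, mHeight, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, mWidth, mHeight), 90, out);
byte[] jpegBytes = out.toByteArray(); // decode with BitmapFactory.decodeByteArray() to inspect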


If using Java to convert a Bitmap to a YUV byte[] is too slow for you, you can try libyuv by Google.


With the OpenCV library you can replace the encodeYUV420SP Java function with a single native OpenCV line, and it is roughly 4x faster:

Mat mFrame = Mat(height,width,CV_8UC4,pFrameData).clone();

Complete example:

Java side:

    Bitmap bitmap = mTextureView.getBitmap(mWidth, mHeight);
    int[] argb = new int[mWidth * mHeight];
    // get ARGB pixels, then process them natively as an 8UC4 OpenCV Mat
    bitmap.getPixels(argb, 0, mWidth, 0, 0, mWidth, mHeight);
    // native method (NDK or CMake)
    processFrame8UC4(argb, mWidth, mHeight);

Native side (NDK):

extern "C" JNIEXPORT jint JNICALL Java_com_native_detector_Utils_processFrame8UC4
    (JNIEnv *env, jobject object, jintArray frame, jint width, jint height) {

    jint *pFrameData = env->GetIntArrayElements(frame, 0);
    // this is the line:
    Mat mFrame = Mat(height, width, CV_8UC4, pFrameData).clone();
    // the rest is just an extra example of a grayscale conversion:
    Mat mout;
    cvtColor(mFrame, mout, CV_RGBA2GRAY); // the source Mat has 4 channels
    int objects = face_detection(env, mout);
    env->ReleaseIntArrayElements(frame, pFrameData, 0);
    return objects;
}


The bmp file will be in RGB888 format, so you will need to convert it to YUV yourself. I have not come across any API in Android that will do this for you.

But you can do this yourself; see this link on how to do it.


First you extract the RGB components of the packed pixel p and compute y, u and v (the coefficients below are the analog BT.709 ones):

    r = (p >> 16) & 0xff;
    g = (p >> 8) & 0xff;
    b = p & 0xff;
    y = 0.2126f * r + 0.7152f * g + 0.0722f * b;
    u = -0.09991f * r - 0.33609f * g + 0.436f * b;
    v = 0.615f * r - 0.55861f * g - 0.05639f * b;

y, u and v are the components of the pixel's YUV representation.
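To actually store those values as bytes (my own addition, following the same offset-and-clamp step used in the encodeYUV420SP answers above), u and v are shifted by +128 and all three components are clamped to 0..255. The helper names below are made up for the example:

    // Illustrative helper (name is made up): packed ARGB pixel -> Y, U, V bytes
    static byte[] rgbToYuvBytes(int p) {
        int r = (p >> 16) & 0xff;
        int g = (p >> 8) & 0xff;
        int b = p & 0xff;
        float y = 0.2126f * r + 0.7152f * g + 0.0722f * b;          // luma, 0..255
        float u = -0.09991f * r - 0.33609f * g + 0.436f * b + 128;  // shift chroma to 0..255
        float v = 0.615f * r - 0.55861f * g - 0.05639f * b + 128;
        return new byte[] { clamp(y), clamp(u), clamp(v) };
    }

    static byte clamp(float c) {
        return (byte) Math.max(0, Math.min(255, Math.round(c)));
    }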
