Efficiently Implementing Java Native Interface Webcam Feed

I'm working on a project that takes video input from a webcam and displays regions of motion to the user. My "beta" attempt at this project was to use the Java Media Framework to retrieve the webcam feed. Through some utility functions, JMF conveniently returns webcam frames as BufferedImages, which I built a significant amount of framework around to process. However, I soon realized that JMF isn't well supported by Sun/Oracle anymore, and some of the higher webcam resolutions (720p) are not accessible through the JMF interface.

I'd like to continue processing frames as BufferedImages, and use OpenCV (C++) to source the video feed. Using OpenCV's framework alone, I've found that OpenCV does a good job of efficiently returning high-def webcam frames and painting them to screen.

I figured it would be pretty straightforward to feed this data into Java and achieve the same efficiency. I just finished writing the JNI DLL to copy this data into a BufferedImage and return it to Java. However, I'm finding that the amount of data copying I'm doing is really hindering performance. I'm targeting 30 FPS, but it takes roughly 100 msec alone to even copy the data from the char array returned by OpenCV into a Java BufferedImage. Instead, I'm seeing about 2-5 FPS.

When returning a frame capture, OpenCV provides a pointer to a 1D char array. This data needs to be provided to Java, and it appears I can't afford the time to copy it.

I need a better solution to get these frame captures into a BufferedImage. A few solutions I'm considering, none of which I think are very good (fairly certain they would also perform poorly):

(1) Override BufferedImage, and return pixel data from various BufferedImage methods by making native calls to the DLL. (Instead of doing the array copying at once, I return individual pixels as requested by the calling code). Note that calling code typically needs all pixels in the image to paint the image or process it, so this individual pixel-grab operation would be implemented in a 2D for-loop.

(2) Instruct the BufferedImage to use a java.nio.ByteBuffer to somehow directly access data in the char array returned by OpenCV. Would appreciate any tips as to how this is done.

(3) Do everything in C++ and forget Java. Well, yes, this does sound like the most logical solution; however, I won't have time to start this many-month project over from scratch.

As of now, my JNI code has been written to return the BufferedImage; however, at this point I'm willing to accept the return of a 1D char array and then put it into a BufferedImage.
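Something like this is what I have in mind; the method name getFrameBytes is just a placeholder, and the Camera field access mirrors the getFrame function shown below:

JNIEXPORT jbyteArray JNICALL Java_graphicanalyzer_ImageFeedOpenCV_getFrameBytes
  (JNIEnv * env, jobject jThis, jobject camera)
{
    //recover the CvCapture pointer exactly as in getFrame below
    jclass cameraClass = env->FindClass("graphicanalyzer/Camera");
    jfieldID fid = env->GetFieldID(cameraClass, "pCvCapture", "I");
    CvCapture *capture = (CvCapture*)env->GetIntField(camera, fid);

    IplImage *frame = cvQueryFrame(capture);
    if (frame == NULL)
    {
        return NULL;
    }

    jbyteArray out = env->NewByteArray(frame->imageSize);
    if (out == NULL)
    {
        return NULL;
    }

    //one bulk copy of the whole frame instead of per-pixel JNI calls
    env->SetByteArrayRegion(out, 0, frame->imageSize, (const jbyte*)frame->imageData);
    return out;
}

On the Java side, those bytes match the layout of a TYPE_3BYTE_BGR BufferedImage (assuming no row padding, i.e. widthStep == width * nChannels), so they could be dropped straight into that image's backing DataBufferByte array.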

By the way... the question here is: What is the most efficient way to copy a 1D char array of image data into a BufferedImage?

Provided below is the (inefficient) code that I use to source an image from OpenCV and copy it into a BufferedImage:

JNIEXPORT jobject JNICALL Java_graphicanalyzer_ImageFeedOpenCV_getFrame
  (JNIEnv * env, jobject jThis, jobject camera)
{
    //get the memory address of the CvCapture device, the value of which is encapsulated
    //in the camera jobject (storing a native pointer in an int field assumes a 32-bit process)
    jclass cameraClass = env->FindClass("graphicanalyzer/Camera");
    jfieldID fid = env->GetFieldID(cameraClass, "pCvCapture", "I");

    //get the address of the CvCapture device
    int a_pCvCapture = (int)env->GetIntField(camera, fid);

    //get a pointer to the CvCapture device
    CvCapture *capture = (CvCapture*)a_pCvCapture;

    //get a frame from the CvCapture device
    IplImage *frame = cvQueryFrame( capture );

    //get a handle on the BufferedImage class
    jclass bufferedImageClass = env->FindClass("java/awt/image/BufferedImage");
    if (bufferedImageClass == NULL)
    {
        return NULL;
    }

    //get a handle on the BufferedImage(int width, int height, int imageType) constructor
    jmethodID bufferedImageConstructor = env->GetMethodID(bufferedImageClass, "<init>", "(III)V");

    //get the field ID of BufferedImage.TYPE_INT_RGB
    jfieldID imageTypeFieldID = env->GetStaticFieldID(bufferedImageClass, "TYPE_INT_RGB", "I");

    //get the int value from the BufferedImage.TYPE_INT_RGB field
    jint imageTypeIntRGB = env->GetStaticIntField(bufferedImageClass, imageTypeFieldID);

    //create a new BufferedImage
    jobject ret = env->NewObject(bufferedImageClass, bufferedImageConstructor, (jint)frame->width, (jint)frame->height, imageTypeIntRGB);

    //get a handle on the method BufferedImage.getRaster()
    jmethodID getWritableRasterID = env->GetMethodID(bufferedImageClass, "getRaster", "()Ljava/awt/image/WritableRaster;");

    //call the BufferedImage.getRaster() method
    jobject writableRaster = env->CallObjectMethod(ret, getWritableRasterID);

    //get a handle on the WritableRaster class
    jclass writableRasterClass = env->FindClass("java/awt/image/WritableRaster");

    //get a handle on the WritableRaster.setPixel(int x, int y, int[] rgb) method
    jmethodID setPixelID = env->GetMethodID(writableRasterClass, "setPixel", "(II[I)V"); //void setPixel(int, int, int[])

    //iterate through the frame we got above and set each pixel within the WritableRaster
    jintArray rgbArray = env->NewIntArray(3);
    jint rgb[3];
    unsigned char *px;
    for (jint x = 0; x < frame->width; x++)
    {
        for (jint y = 0; y < frame->height; y++)
        {
            px = (unsigned char*)frame->imageData + (frame->widthStep * y + x * frame->nChannels);
            rgb[0] = px[2]; //OpenCV stores pixels in BGR channel order
            rgb[1] = px[1];
            rgb[2] = px[0];
            //copy the jint array into the jintArray
            env->SetIntArrayRegion(rgbArray, 0, 3, rgb); //take values in rgb and move to rgbArray
            //call setPixel() -- this is a copy operation
            env->CallVoidMethod(writableRaster, setPixelID, x, y, rgbArray);
        }
    }

    return ret; //return the BufferedImage
}


There is another option if you wish to make your code really fast and still use Java. The AWT windowing toolkit has a direct native interface (the AWT Native Interface, JAWT) you can use to draw to an AWT surface from C or C++. Thus, there would be no need to copy anything to Java, as you could render directly from the buffer in C or C++. I am not sure of the specifics of how to do this because I have not looked at it in a while, but I know that it is included in the standard JRE distribution. Using this method, you could probably approach the FPS limit of the camera if you wished, rather than struggling to reach 30 FPS.

If you want to research this further, I would start with the jawt.h header shipped in the JDK's include directory and Oracle's AWT Native Interface documentation.
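As a minimal, untested sketch of the lock/draw/unlock cycle (the class and method names here are hypothetical, and the actual blit is platform specific and elided):

#include <jni.h>
#include <jawt.h>
#include <jawt_md.h> //platform-specific drawing surface structures

//hypothetical native paint method on an AWT Canvas subclass
JNIEXPORT void JNICALL Java_graphicanalyzer_FeedCanvas_paintNative
  (JNIEnv * env, jobject canvas)
{
    JAWT awt;
    awt.version = JAWT_VERSION_1_4;
    if (JAWT_GetAWT(env, &awt) == JNI_FALSE)
        return;

    JAWT_DrawingSurface *ds = awt.GetDrawingSurface(env, canvas);
    if (ds == NULL)
        return;

    jint lock = ds->Lock(ds);
    if ((lock & JAWT_LOCK_ERROR) == 0)
    {
        JAWT_DrawingSurfaceInfo *dsi = ds->GetDrawingSurfaceInfo(ds);
        //dsi->platformInfo casts to JAWT_Win32DrawingSurfaceInfo* on Windows,
        //JAWT_X11DrawingSurfaceInfo* on X11, etc.; blit the OpenCV frame here
        ds->FreeDrawingSurfaceInfo(dsi);
        ds->Unlock(ds);
    }
    awt.FreeDrawingSurface(ds);
}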

Happy Programming!


I would construct the RGB int array required by BufferedImage and then use a single call to

 void setRGB(int startX, int startY, int w, int h, int[] rgbArray, int offset, int scansize) 

to set the entire image data array at once. Or at least, large portions of it.

Without having timed it, I would suspect that it's the per-pixel calls to

env->SetIntArrayRegion(rgbArray,0,3,rgb);
env->CallVoidMethod(writableRaster,setPixelID,x,y,rgbArray);

which are taking the lion's share of the time.

EDIT: It is likely the method invocations, rather than the manipulation of memory per se, that are taking the time. So build the data in your JNI code and copy it to the Java image in blocks or a single hit. Once you create and pin a Java int[], you can access it via native pointers. Then one call to setRGB will copy the array into your image.

Note: You do still have to copy the data at least once, but doing all pixels in one hit via 1 function call will be vastly more efficient than doing them individually via 2 x N function calls.

EDIT 2:

Reviewing my JNI code, I have only ever used byte arrays, but the principles are the same for int arrays. Use NewIntArray to create an int array, GetIntArrayElements to pin it and get a pointer, and, when you are done, ReleaseIntArrayElements to release it, remembering to use the flag to copy data back to Java's memory heap.

Then, you should be able to use your Java int array handle to invoke the setRGB function.

Remember also that this is actually setting RGBA pixels, so 4 channels, including alpha, not just three (the RGB names in Java seem to predate the alpha channel, but most of the so-named methods are compatible with a 32-bit ARGB value).
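As a minimal sketch of that flow (assuming the frame, bufferedImageClass, and ret variables from the question's getFrame function):

//build every pixel natively, then push them to Java in one setRGB call
jintArray pixels = env->NewIntArray(frame->width * frame->height);
jint *p = env->GetIntArrayElements(pixels, NULL); //pin (the JVM may hand back a copy)

for (jint y = 0; y < frame->height; y++)
{
    unsigned char *row = (unsigned char*)frame->imageData + y * frame->widthStep;
    for (jint x = 0; x < frame->width; x++)
    {
        unsigned char *px = row + x * frame->nChannels;
        p[y * frame->width + x] = (px[2] << 16) | (px[1] << 8) | px[0]; //BGR -> 0x00RRGGBB
    }
}

env->ReleaseIntArrayElements(pixels, p, 0); //mode 0 copies the data back to the Java heap

//a single call to BufferedImage.setRGB(int, int, int, int, int[], int, int)
jmethodID setRGBID = env->GetMethodID(bufferedImageClass, "setRGB", "(IIII[III)V");
env->CallVoidMethod(ret, setRGBID, (jint)0, (jint)0, frame->width, frame->height, pixels, (jint)0, frame->width);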


As a secondary consideration, if the only difference between the image data array returned by OpenCV and what is required by Java is the BGR vs. RGB channel order, then

px = (unsigned char*)frame->imageData + (frame->widthStep * y + x * frame->nChannels);
rgb[0] = px[2]; //OpenCV stores pixels in BGR channel order
rgb[1] = px[1];
rgb[2] = px[0];

is a relatively inefficient way to convert them. Instead you could do something like:

unsigned char *px = (unsigned char*)frame->imageData + (frame->widthStep * y + x * frame->nChannels);
javaArray[ofs] = (px[2] << 16) | (px[1] << 8) | px[0]; //pack the B, G, R bytes into a 0x00RRGGBB int

(my C is rusty, so double-check the details, but it shows what is needed: pack each pixel with shifts instead of making per-channel calls).


Managed to speed up the process using an NIO ByteBuffer.

On the C++ JNI side...

JNIEXPORT jobject JNICALL Java_graphicanalyzer_ImageFeedOpenCV_getFrame
  (JNIEnv * env, jobject jThis, jobject camera)
{
    //...

    IplImage *frame = cvQueryFrame(pCaptureDevice);

    //wrap OpenCV's internal frame buffer directly -- no copy is made here; note that
    //cvQueryFrame reuses this buffer, so the ByteBuffer is only valid until the next call
    jobject byteBuf = env->NewDirectByteBuffer(frame->imageData, frame->imageSize);

    return byteBuf;
}

and on the Java side...

void getFrame(Camera cam)
{
    ByteBuffer frameData = cam.getFrame();   //NATIVE call

    byte[] imgArray = new byte[frameData.capacity()];
    frameData.get(imgArray); //although it seems like an array copy, this call returns very quickly
    DataBufferByte frameDataBuf = new DataBufferByte(imgArray, imgArray.length);

    //determine image sample model characteristics
    int dataType = DataBuffer.TYPE_BYTE;
    int width = cam.getFrameWidth();
    int height = cam.getFrameHeight();
    int pixelStride = cam.getPixelStride();
    int scanlineStride = cam.getScanlineStride();
    int[] bandOffsets = new int[] {2, 1, 0};  //OpenCV data is in BGR order

    //create a WritableRaster with the DataBufferByte
    PixelInterleavedSampleModel pism = new PixelInterleavedSampleModel
    (
        dataType,
        width,
        height,
        pixelStride,
        scanlineStride,
        bandOffsets
    );
    //ImgFeedWritableRaster is a trivial custom subclass; WritableRaster's constructors
    //are protected, so a subclass (or Raster.createWritableRaster) is needed to get one
    WritableRaster raster = new ImgFeedWritableRaster( pism, frameDataBuf, new Point(0,0) );

    //create the BufferedImage
    ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_sRGB);
    ComponentColorModel cm = new ComponentColorModel(cs, false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
    BufferedImage newImg = new BufferedImage(cm, raster, false, null);

    handleNewImage(newImg);
}

Using the java.nio.ByteBuffer, I can quickly address the char array returned by the OpenCV code without (apparently) doing much gruesome array copying.
