How can I do fast image processing from the iPhone camera?

I am trying to write an iPhone application which will do some real-time camera image processing. I used the example presented in the AVFoundation docs as a starting point: setting up a capture session, making a UIImage from the sample buffer data, then drawing the image at a point via -setNeedsDisplay, which I call on the main thread.
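A minimal sketch of that kind of per-frame path, assuming the 32BGRA pixel format and a hypothetical -displayImage: helper that stores the image and triggers -setNeedsDisplay (this is an illustration, not the question's actual code):

// Delegate callback for AVCaptureVideoDataOutput: build a UIImage from the
// BGRA sample buffer, then redraw on the main thread.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the raw bytes in a CGImage; this copy-and-draw step is a large
    // part of what makes the Quartz route slow.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(cgImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // -displayImage: is a hypothetical helper that stores the image in the
    // view and calls -setNeedsDisplay.
    [self performSelectorOnMainThread:@selector(displayImage:)
                           withObject:image
                        waitUntilDone:NO];
}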

This works, but it is fairly slow (50 ms per frame, measured between -drawRect: calls, for a 192 x 144 preset) and I've seen applications on the App Store which work faster than this.

About half of my time is spent in -setNeedsDisplay.

How can I speed up this image processing?


As Steve points out, in my answer here I encourage people to look at OpenGL ES for the best performance when processing and rendering images to the screen from the iPhone's camera. The reason for this is that using Quartz to continually update a UIImage onto the screen is a fairly slow way to send raw pixel data to the display.

If possible, I encourage you to look to OpenGL ES to do your actual processing, because of how well-tuned GPUs are for this kind of work. If you need to maintain OpenGL ES 1.1 compatibility, your processing options are much more limited than with 2.0's programmable shaders, but you can still do some basic image adjustment.

Even if you're doing all of your image processing using the raw data on the CPU, you'll still be much better off by using an OpenGL ES texture for the image data, updating that with each frame. You'll see a jump in performance just by switching to that rendering route.
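As a rough illustration, the per-frame texture upload can look like the following sketch, assuming a current OpenGL ES context, a texture created earlier as videoFrameTexture, and the 32BGRA capture format (GL_BGRA comes from Apple's BGRA texture-format extension); the code that draws the textured quad is omitted:

// Inside the sample buffer delegate: upload the BGRA camera frame into an
// existing OpenGL ES texture instead of going through UIImage/Quartz.
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
int bufferWidth = (int)CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = (int)CVPixelBufferGetHeight(cameraFrame);

glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
// GL_BGRA matches the kCVPixelFormatType_32BGRA capture format, so no
// CPU-side swizzling is needed before the upload.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
// ... then draw a textured quad (optionally through an ES 2.0 fragment shader).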

(Update: 2/18/2012) As I describe in my update to the above-linked answer, I've made this process much easier with my new open source GPUImage framework. This handles all of the OpenGL ES interaction for you, so you can just focus on applying the filters and other effects that you'd like to on your incoming video. It's anywhere from 5-70X faster than doing this processing using CPU-bound routines and manual display updates.
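As a usage sketch based on GPUImage's documented filtered-camera example (exact API details may differ between framework versions; the sepia filter and the view cast are just placeholders):

// Capture from the rear camera, run frames through a GPU-based sepia filter,
// and render the result to a GPUImageView.
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc]
    initWithSessionPreset:AVCaptureSessionPreset640x480
           cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageView *filteredVideoView = (GPUImageView *)self.view;

[videoCamera addTarget:sepiaFilter];
[sepiaFilter addTarget:filteredVideoView];

[videoCamera startCameraCapture];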


Set the sessionPreset of the capture session to AVCaptureSessionPresetLow, as shown in the sample code below. This will increase the processing speed, but the images from the buffer will be of lower quality.



- (void)initCapture {
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput 
                                          deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] 
                                          error:nil];
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init] ;
    captureOutput.alwaysDiscardsLateVideoFrames = YES; 
    captureOutput.minFrameDuration = CMTimeMake(1, 25);
    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
    [captureOutput setVideoSettings:videoSettings]; 
    self.captureSession = [[AVCaptureSession alloc] init] ;
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];
    self.captureSession.sessionPreset = AVCaptureSessionPresetLow;
    /* sessionPreset: choose whichever preset gives the desired speed/quality trade-off */

    if (!self.prevLayer) {
        self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    }
    self.prevLayer.frame = self.view.bounds;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer: self.prevLayer];

}
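Note that the snippet above only configures the session and the preview layer; the session still has to be started somewhere, for example after -initCapture returns:

// Begin delivering frames to the sample buffer delegate.
[self.captureSession startRunning];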


