Optimizing iPhone photo-taking speed, UIImagePickerController

I'm using the UIImagePickerController in my app. Has anyone used any optimization tricks for picture-taking latency? I don't need to store them to the library. I simply want to capture the pixel data and destroy the image object after making a calculation.

Also, is there a way to hide the lens when it loads?

(Worst case, I could mask the lens on camera startup and mask the frozen image upon saving/calculating.)


EDIT: I've already set showsCameraControls = NO. This hides the lens effect between snapshots, but it does not remove the lens animation on camera startup.
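For reference, the setup being described is roughly the following (a minimal sketch; the presenting view controller and delegate wiring are assumptions for illustration):

// Minimal picker setup with the stock shutter UI suppressed; the
// presentation and delegate details here are illustrative.
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;   // removes the default controls and the
                                   // lens effect between snapshots
picker.delegate = self;            // self: UIImagePickerControllerDelegate,
                                   // UINavigationControllerDelegate
[self presentModalViewController:picker animated:NO];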


Are you wedded to the UIImagePickerController? As of iOS 4, AVFoundation lets you receive a live stream of images from the camera at any supported video resolution, with no prescribed user interface, so there's no lens effect. On an iPhone 4 you can grab frames up to 720p with no latency; on earlier devices you can grab up to 480p.

Session 409 of the WWDC 2010 videos is a good place to start. You'll want to create an AVCaptureSession, attach a suitable AVCaptureDevice via an AVCaptureDeviceInput, add an AVCaptureVideoDataOutput, and give that output a dispatch queue on which to pipe data back to you. You'll end up with a CVImageBufferRef, which directly exposes the raw pixel data.

EDIT: Since Apple's example code seems to have gone missing, I tend to use approximately the following:

AVCaptureSession *session;
AVCaptureDevice *device;
AVCaptureVideoDataOutput *output;

// create a capture session
session = [[AVCaptureSession alloc] init];
session.sessionPreset = ...frame quality you want...;

// grab the default video device (which will be the back camera on a device
// with two), create an input for it to the capture session
NSError *error = nil;
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput 
                                    deviceInputWithDevice:device error:&error];

// connect the two
[session addInput:input];

// create an object to route output back here
output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];

// create a suitable dispatch queue, GCD style, and hook 
// self up as the delegate
dispatch_queue_t queue = dispatch_queue_create(NULL, NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

// ask the output for 32bpp BGRA pixel buffers
output.videoSettings =
    [NSDictionary dictionaryWithObject:
                      [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                  forKey:(id)kCVPixelBufferPixelFormatTypeKey];

[session startRunning];

That'll then start delivering CMSampleBuffers to your captureOutput:didOutputSampleBuffer:fromConnection: on the dispatch queue you created (i.e., on a separate thread). Obviously, production code would have far more sanity and result checks than the above.
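For reference, the delegate callback itself looks roughly like this (a minimal sketch, assuming the receiving class declares conformance to AVCaptureVideoDataOutputSampleBufferDelegate):

// Called on the dispatch queue supplied above, not on the main thread.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // pull the pixel data out of sampleBuffer here, e.g. as in the
    // conversion code below
}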

The following example code takes an incoming CMSampleBuffer containing a video frame and converts it into a CGImage, then sends that off to the main thread where, in my test code, it's converted into a UIImage and set as the contents of a UIImageView, proving that the whole pipeline works:

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

// lock the buffer so the base address stays valid while we read from it
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

// create a CGImageRef from the BGRA pixel data
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef contextRef =
    CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colourSpace,
                          kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colourSpace);

// unlock only after the pixel data has been captured into the CGImage
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

[self performSelectorOnMainThread:@selector(postCGImage:) withObject:[NSValue valueWithPointer:imageRef] waitUntilDone:YES];
CGImageRelease(imageRef);

For the sake of the example I've conflated the object I normally use to receive video frames with some code from a view controller; hopefully I haven't introduced any errors.
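The postCGImage: method on the main thread isn't shown above; a minimal sketch of what it might do, assuming the class has a UIImageView to display into (both the method body and the imageView property are assumptions for illustration):

// Hypothetical main-thread handler matching the performSelectorOnMainThread:
// call above; it unwraps the CGImageRef and displays it.
- (void)postCGImage:(NSValue *)imageValue
{
    CGImageRef imageRef = (CGImageRef)[imageValue pointerValue];
    self.imageView.image = [UIImage imageWithCGImage:imageRef];  // imageView is
                                                                 // an assumed outlet
}

Because the capture thread waits (waitUntilDone:YES) before calling CGImageRelease, the UIImage created here already holds its own reference to the image data by the time that release happens.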
