I'm writing an iPhone app which uses AVFoundation to take a photo and crop it. The app is similar to a QR code reader: it uses an AVCaptureVideoPreviewLayer with an overlay. The overlay contains a square. I want to crop the image so that the cropped image is exactly what the user has placed inside the square.
The preview layer has gravity AVLayerVideoGravityResizeAspectFill.
It looks like what the camera actually captures is not exactly what the user sees in the preview layer. This means that I need to move from the preview coordinate system to the captured-image coordinate system so I can crop the image. For this I think I need the following parameters: 1. the ratio between the view size and the captured image size; 2. information about which part of the captured image matches what is displayed in the preview layer.
Does anybody know how I can obtain this info, or whether there is a different approach to cropping the image?
(P.S. Capturing a screenshot of the preview is not an option, as I understand it might result in the app being rejected.)
Thank you in advance
Hope this meets your requirements:
- (UIImage *)cropImage:(UIImage *)image to:(CGRect)cropRect andScaleTo:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Extract the region of interest from the source image.
    CGImageRef subImage = CGImageCreateWithImageInRect([image CGImage], cropRect);

    // Flip the coordinate system: CGContextDrawImage draws in Core Graphics
    // coordinates (origin at the bottom-left), while the UIKit image context
    // has its origin at the top-left.
    CGRect myRect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -size.height);
    CGContextDrawImage(context, myRect, subImage);

    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(subImage);
    return croppedImage;
}
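To obtain a cropRect that corresponds to the overlay square, AVCaptureVideoPreviewLayer (iOS 6 and later) can convert a rect from its own coordinate space to the normalized (0..1) coordinate space of the capture output. A minimal sketch, where previewLayer, overlaySquare (the square's frame in layer coordinates) and capturedImage are placeholder names for your own objects:

// Convert the overlay square from layer coordinates to the normalized
// coordinate space of the capture output.
CGRect metadataRect = [previewLayer metadataOutputRectOfInterestForRect:overlaySquare];

// Scale the normalized rect up to the pixel dimensions of the captured CGImage.
// This assumes the CGImage is in the output's native (unrotated) orientation,
// which is the usual case for stills.
CGImageRef cgImage = capturedImage.CGImage;
CGRect cropRect = CGRectMake(metadataRect.origin.x * CGImageGetWidth(cgImage),
                             metadataRect.origin.y * CGImageGetHeight(cgImage),
                             metadataRect.size.width * CGImageGetWidth(cgImage),
                             metadataRect.size.height * CGImageGetHeight(cgImage));

UIImage *cropped = [self cropImage:capturedImage to:cropRect andScaleTo:cropRect.size];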
You can use this API from AVFoundation: AVMakeRectWithAspectRatioInsideRect. It returns the largest rectangle that preserves a given aspect ratio and fits within a bounding rectangle. The Apple doc is here: https://developer.apple.com/library/ios/Documentation/AVFoundation/Reference/AVFoundation_Functions/Reference/reference.html
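For example (a minimal sketch; imageSize and previewLayer are placeholder names for your captured image's pixel size and your preview layer):

// Largest rect with the captured image's aspect ratio that fits inside
// the preview layer's bounds (the aspect-fit mapping).
CGRect fitRect = AVMakeRectWithAspectRatioInsideRect(imageSize, previewLayer.bounds);

Note that with AVLayerVideoGravityResizeAspectFill the image overflows the layer, so you would apply the function the other way around, fitting the layer's bounds inside the image rect to find the visible region.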
I think it is as simple as this:
- (CGRect)computeCropRect:(CGImageRef)cgImageRef
{
    static CGFloat cgWidth = 0;
    static CGFloat cgHeight = 0;
    static CGFloat viewWidth = 320; // width of the preview view, in points

    if (cgWidth == 0)
        cgWidth = CGImageGetWidth(cgImageRef);
    if (cgHeight == 0)
        cgHeight = CGImageGetHeight(cgImageRef);

    CGRect cropRect;
    // Scale from view points to image pixels, based on width only
    // (assumes the captured image fills the view's width).
    cropRect.origin.x = cropRect.origin.y = kMargin * cgWidth / viewWidth;
    cropRect.size.width = cropRect.size.height = kSquareSize * cgWidth / viewWidth;
    return cropRect;
}
where kMargin and kSquareSize (20 points and 280 points in my case) are the margin and the scanning-area size, respectively.
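For completeness, the constants could be defined like this (the values are the ones quoted above; adjust them to your own overlay):

static const CGFloat kMargin = 20.0f;      // margin around the scanning square, in points
static const CGFloat kSquareSize = 280.0f; // side length of the scanning square, in points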
Then perform the crop:
CGRect cropRect = [self computeCropRect:cgCapturedImageRef];
CGImageRef croppedImageRef = CGImageCreateWithImageInRect(cgCapturedImageRef, cropRect);
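CGImageCreateWithImageInRect follows the Create rule, so remember to release the image once you have wrapped it in a UIImage; for example:

UIImage *croppedImage = [UIImage imageWithCGImage:croppedImageRef];
CGImageRelease(croppedImageRef);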