I have a UIView with many UIImageViews as subviews. The app runs on iOS 4 and I use images at retina display resolution (i.e. the images load with scale = 2).
I want to save the contents of the UIView ... BUT ... keep the real size of the images inside. I.e. the view is 200x200 points and contains images with scale = 2, so I'd like to save a resulting image of 400x400 with all the images at their real size.
What comes to mind first is to create a new image context and load all the images again with scale = 1, which should do it, but I was wondering if there is a more elegant way? It seems like a waste of memory and processor time to reload everything when it's already loaded ...
p.s. if anyone has an answer - including code would be nice
Here is an implementation for rendering any UIView to an image (it also works for retina displays).
helper.h file:
#import <UIKit/UIKit.h>

@interface UIView (Ext)
- (UIImage*) renderToImage;
@end
and the corresponding implementation in the helper.m file:
#import <QuartzCore/QuartzCore.h>

@implementation UIView (Ext)

- (UIImage*) renderToImage
{
    // IMPORTANT: UIKit is weak-linked, so this symbol check is safe on pre-iOS 4 systems
    if (UIGraphicsBeginImageContextWithOptions != NULL) {
        // Scale factor 0.0 means "use the main screen's scale" (2.0 on retina)
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }

    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
The 0.0 is the scale factor to apply to the bitmap. If you pass 0.0, the scale factor is set to the scale of the device's main screen, so on a retina device the resulting bitmap has the full pixel dimensions.
QuartzCore.framework also needs to be added to the project, because we call renderInContext: on the layer object.
To weak-link UIKit, click the project item in the left navigator, select the project target -> Build Phases -> Link Binary With Libraries, and set the UIKit framework to "Optional" (weak).
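A minimal usage sketch (someView and the file name are just illustrative, not part of the category above): render a view and write the result out as a PNG.
// Hypothetical caller for the renderToImage category above;
// "snapshot.png" and the Documents directory are arbitrary choices.
UIImage *snapshot = [someView renderToImage];
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                      NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [docs stringByAppendingPathComponent:@"snapshot.png"];
[UIImagePNGRepresentation(snapshot) writeToFile:path atomically:YES];
On a retina device the PNG comes out at double the view's point dimensions, e.g. 400x400 pixels for a 200x200-point view.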
Here is a library with similar extensions for UIColor, UIImage, NSArray, NSDictionary, ...
I've done a similar thing to save the pins from an MKMapView as a PNG file (at retina resolution): MKPinAnnotationView: Are there more than three colors available?
Here's an extract of the crucial part that saves a UIView (theView) using its retina definition:
-(void) saveMyView:(UIView*)theView {
    //The image where the view content is going to be saved.
    UIImage* image = nil;

    // Scale factor 2.0 forces a retina-sized (double resolution) bitmap.
    UIGraphicsBeginImageContextWithOptions(theView.frame.size, NO, 2.0);
    [theView.layer renderInContext: UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData* imgData = UIImagePNGRepresentation(image);
    NSString* targetPath = [NSString stringWithFormat:@"%@/%@", [self writablePath], @"thisismyview.png"];
    [imgData writeToFile:targetPath atomically:YES];
}

-(NSString*) writablePath {
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    return documentsDirectory;
}
The key is that the third parameter to UIGraphicsBeginImageContextWithOptions is the scale, which determines how the image will ultimately be written out.
If you always want the real pixel dimensions, use [[UIScreen mainScreen] scale] to get the current scale of the screen:
UIGraphicsBeginImageContextWithOptions(viewSizeInPoints, YES, [[UIScreen mainScreen] scale]);
If you use scale = 1.0 on an iPhone 4, you get an image with its dimensions in points, i.e. half the true pixel count in each direction. If you then manually write the image out at 640x960 (e.g. by passing pixel dimensions as the first parameter), it is actually the scaled-down image being scaled back up, which looks about as terrible as you'd imagine.
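To double-check what you actually got, you can compare the image's point size and scale against its backing CGImage. This is just an illustrative check, assuming image is the UIImage produced above:
// image.size is in points, image.scale maps points to pixels,
// and the CGImage reports the true pixel dimensions.
NSLog(@"points: %@  scale: %.1f  pixels: %zu x %zu",
      NSStringFromCGSize(image.size),
      image.scale,
      CGImageGetWidth(image.CGImage),
      CGImageGetHeight(image.CGImage));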
Couldn't you just create a new graphics context at the desired size, scale its CTM to match, render the root UIView's layer into it, restore the context, and grab the resulting image? I haven't tried this with retina content, but it seems to work well for large images that have been scaled down in UIImageViews...
something like:
CGSize originalSize = myOriginalImage.size; //or whatever
//create context
UIGraphicsBeginImageContext(originalSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context); //1 original context
// translate/flip the graphics context (to transform from CG* coords to UI* coords)
CGContextTranslateCTM(context, 0, originalSize.height);
CGContextScaleCTM(context, 1.0, -1.0);
//original image
CGContextDrawImage(context, CGRectMake(0,0,originalSize.width,originalSize.height), myOriginalImage.CGImage);
CGContextRestoreGState(context);//1 restore to original for UIView render;
//scaling
CGFloat wratio = originalSize.width/self.view.frame.size.width;
CGFloat hratio = originalSize.height/self.view.frame.size.height;
//scale context to match view size
CGContextSaveGState(context); //1 pre-scaled size
CGContextScaleCTM(context, wratio, hratio);
//render
[self.view.layer renderInContext:context];
CGContextRestoreGState(context);//1 restore to pre-scaled size;
UIImage *exportImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
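If what you want is for the output bitmap itself to be retina-sized, a simplified, hedged variant (untested here, and covering only the retina part, not the image drawing above) is to open the context with UIGraphicsBeginImageContextWithOptions and an explicit scale:
// Sketch only: let UIKit create a scale-aware context instead of
// UIGraphicsBeginImageContext. Passing 0.0 uses the main screen's scale,
// so on a 2x device a 200x200-point view yields a 400x400-pixel bitmap.
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *retinaImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();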
Import QuartzCore (click the main project -> Build Phases -> Link Binary With Libraries) and, where you need it, add:
#import <QuartzCore/QuartzCore.h>
My image views are properties; if yours are not, drop the self. and pass the image views into the method as parameters, then call renderInContext: on the two layers in a new image context.
- (UIImage *)saveImage
{
    UIGraphicsBeginImageContextWithOptions(self.mainImage.bounds.size, NO, 0.0);

    // Render the two image views back to front into the same context.
    [self.backgroundImage.layer renderInContext:UIGraphicsGetCurrentContext()];
    [self.mainImage.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *savedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return savedImage;
}
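As a follow-up usage example (the nil completion arguments are just for brevity), the returned image can be passed to any normal UIImage API, for instance saving it straight to the photo library:
// Hypothetical caller: composite the two image views and save the result.
UIImage *composited = [self saveImage];
UIImageWriteToSavedPhotosAlbum(composited, nil, nil, nil);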