I'm trying to adapt this question to work with a Retina display. Here's how I figured out how to work with Retina graphics using UIGraphicsBeginImageContext:
if (UIGraphicsBeginImageContextWithOptions != NULL)
{
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0f);
}
else
{
    UIGraphicsBeginImageContext(self.view.bounds.size);
}
However, when I use it the image looks huge and doesn't scale down to fit the display. Any ideas on how I can tell it to resize the captured image to fit the boxes? (Please refer to the other question to understand what I mean.)
I've written this code to perform exactly the same thing described in the other post. It works on any iOS device with at least iOS 3.1.
_launcherView is the view that needs to be photographed.
CGFloat scale = 1.0;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    CGFloat tmp = [[UIScreen mainScreen] scale];
    if (tmp > 1.5) {
        scale = 2.0;
    }
}
if (scale > 1.5) {
    UIGraphicsBeginImageContextWithOptions(_launcherView.frame.size, NO, scale);
} else {
    UIGraphicsBeginImageContext(_launcherView.frame.size);
}
[_launcherView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// CGImageCreateWithImageInRect works in pixels, hence the multiplication by scale
CGRect upRect = CGRectMake(0, 0, _launcherView.frame.size.width * scale, (diff - offset) * scale);
CGImageRef imageRefUp = CGImageCreateWithImageInRect([screenshot CGImage], upRect);
[self.screenshot1 setFrame:CGRectMake(0, 0, screenshot1.frame.size.width, diff - offset)];
[screenshot1 setContentMode:UIViewContentModeTop];
UIImage *img1 = [UIImage imageWithCGImage:imageRefUp];
[self.screenshot1 setBackgroundImage:img1 forState:UIControlStateNormal];
CGImageRelease(imageRefUp);

CGRect downRect = CGRectMake(0, (diff - offset) * scale, _launcherView.frame.size.width * scale, (screenshot.size.height - diff + offset) * scale);
CGImageRef imageRefDown = CGImageCreateWithImageInRect([screenshot CGImage], downRect);
[self.screenshot2 setFrame:CGRectMake(0, screenshot1.frame.size.height, screenshot2.frame.size.width, _launcherView.frame.size.height - screenshot1.frame.size.height)];
[screenshot2 setContentMode:UIViewContentModeTop];
UIImage *img2 = [UIImage imageWithCGImage:imageRefDown];
[self.screenshot2 setBackgroundImage:img2 forState:UIControlStateNormal];
CGImageRelease(imageRefDown);
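The reason every coordinate in the crop rects above is multiplied by scale is that CGImageCreateWithImageInRect works in pixel coordinates, while UIKit geometry (frames, diff, offset) is in points. A minimal sketch of that point-to-pixel conversion, in plain Python with hypothetical sample values for the view size, diff, and offset:

```python
# Point-to-pixel conversion behind the upRect/downRect crop above.
# UIKit geometry is in points; CGImage cropping is in pixels.
def crop_rects(view_width_pt, view_height_pt, diff, offset, scale):
    """Return (upRect, downRect) as (x, y, w, h) tuples in pixels."""
    split_pt = diff - offset  # split line between the two halves, in points
    up = (0, 0, view_width_pt * scale, split_pt * scale)
    down = (0, split_pt * scale,
            view_width_pt * scale, (view_height_pt - split_pt) * scale)
    return up, down

# Hypothetical values: a 320 x 480 pt view on a Retina (scale 2.0) screen.
up, down = crop_rects(320, 480, diff=200, offset=40, scale=2.0)
print(up)    # (0, 0, 640.0, 320.0)
print(down)  # (0, 320.0, 640.0, 640.0)
```

Forgetting the scale factor here is exactly what produces the "huge image" symptom from the question: on a Retina screen the bitmap has twice as many pixels per point in each dimension.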
According to the WWDC session iOS Memory Deep Dive, we are better off using ImageIO to downscale images:
- Memory use is related to the dimensions of the image, not the file size.
- UIGraphicsBeginImageContextWithOptions always uses the SRGB rendering format, which takes 4 bytes per pixel.
- An image goes through three phases: load -> decode -> render.

For the following image, with UIGraphicsBeginImageContextWithOptions we only need about 590 KB to load the image, but 2048 pixels x 1536 pixels x 4 bytes per pixel ≈ 10 MB when decoding. UIGraphicsImageRenderer, introduced in iOS 10, automatically picks the best graphics format as of iOS 12. That means you can save up to 75% of memory by replacing UIGraphicsBeginImageContextWithOptions with UIGraphicsImageRenderer if you don't need SRGB.
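The decode-phase arithmetic above can be checked with a quick sketch (plain Python, not iOS code). Note the exact product for a 2048 x 1536 SRGB image is 12,582,912 bytes, about 12 MB, in the same ballpark as the figure quoted from the session:

```python
# Decoded-bitmap memory for an SRGB image (4 bytes per pixel):
# memory is driven by pixel dimensions, not by the compressed file size.
def decoded_size_bytes(width_px, height_px, bytes_per_pixel=4):
    """Size of the fully decoded bitmap in bytes."""
    return width_px * height_px * bytes_per_pixel

size = decoded_size_bytes(2048, 1536)
print(size)                   # 12582912 bytes
print(size / 1024 ** 2)       # 12.0 MiB decoded, vs ~0.6 MB on disk
```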
import ImageIO

func resize(filePath: String) -> CGImage? {
    let url = NSURL(fileURLWithPath: filePath)
    guard let imageSource = CGImageSourceCreateWithURL(url, nil) else {
        return nil
    }
    let options: [NSString: Any] = [
        kCGImageSourceThumbnailMaxPixelSize: 100,
        kCGImageSourceCreateThumbnailFromImageAlways: true
    ]
    return CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options as CFDictionary)
}
I took some notes here for this session.
- (UIImage *)newImageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Passing 0.0 as the scale makes the context match the device's screen scale
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}