
iphone - Does displaying a big image in a small-frame UIImageView still cost a lot of memory?

I have several big images, each about 1.5 MB in size. I display each of them in a UIImageView with UIViewContentModeScaleAspectFit.

The frame size of each UIImageView is just 150 × 150.

My question is: I understand that if I display the big images at full screen, memory usage will go up tremendously.

But if they are displayed in small UIImageViews, will they still cost that much memory?

Thanks


UIImages and UIImageViews are separate things. Each UIImageView knows the size at which to display its associated UIImage, but a UIImage has no concept of how it will be displayed, so the mere act of changing the size of a UIImageView has no effect on the UIImage. Hence it has no effect on total memory usage.

What you probably want to do is use Core Graphics to take a UIImage, produce a 150 × 150 version of it as a new UIImage, and then push that into the UIImageView.

To perform the scaling, code something like the following (written as I type, so not thoroughly checked) should do the job:

#import <UIKit/UIKit.h>
#include <math.h> // for ceilf

- (UIImage *)scaledImageForImage:(UIImage *)srcImage toSize:(CGSize)maxSize
{
    // pick the target dimensions, as though applying
    // UIViewContentModeScaleAspectFit; seed some values first
    CGSize sizeOfImage = [srcImage size];
    CGSize targetSize; // to store the output size

    // logic here: we're going to scale so as to apply some multiplier
    // to both the width and height of the input image. That multiplier
    // is either going to make the source width fill the output width or
    // it's going to make the source height fill the output height. Of the
    // two possibilities, we want the smaller one, since the larger will
    // make the other axis too large
    if(maxSize.width / sizeOfImage.width < maxSize.height / sizeOfImage.height)
    {
        // we'll letter box then; scaling width to fill width, since
        // that's the smaller scale of the two possibilities
        targetSize.width = maxSize.width;

        // height is the original height adjusted proportionally
        // to match the proportional adjustment in width
        targetSize.height = 
                      (maxSize.width / sizeOfImage.width) * sizeOfImage.height;
    }
    else
    {
        // basically the same as the above, except that we pillar box
        targetSize.height = maxSize.height;
        targetSize.width = 
                     (maxSize.height / sizeOfImage.height) * sizeOfImage.width;
    }

    // images can have integral sizes only, so round up
    // the target width and height, then construct a target
    // rect that centres the output image within that size;
    // this preserves sub-pixel accuracy
    CGRect targetRect;

    // store the fractional target size, then round up the
    // size used for the bitmap context
    targetRect.size = targetSize;
    targetSize.width = ceilf(targetSize.width);
    targetSize.height = ceilf(targetSize.height);

    // work out how to centre the source image within the integral-sized
    // output image
    targetRect.origin.x = (targetSize.width - targetRect.size.width) * 0.5f;
    targetRect.origin.y = (targetSize.height - targetRect.size.height) * 0.5f;

    // now create a CGContext to draw to, draw the image to it suitably
    // scaled and positioned, and turn the thing into a UIImage

    // get a suitable CoreGraphics context to draw to, in RGBA;
    // I'm assuming iOS 4 or later here, to save some manual memory
    // management.
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
           NULL,
           (size_t)targetSize.width, (size_t)targetSize.height,
           8, (size_t)targetSize.width * 4,
           colourSpace,
           kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // clear the context, since it may currently contain anything.
    CGContextClearRect(context, 
                  CGRectMake(0.0f, 0.0f, targetSize.width, targetSize.height));

    // draw the given image to the newly created context
    CGContextDrawImage(context, targetRect, [srcImage CGImage]);

    // get an image from the CG context, wrapping it as a UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGContextRelease(context);
    CGImageRelease(cgImage);

    return returnImage;
}

Obviously I've made that look complicated by commenting it very heavily, but it's actually only 23 lines.
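
As a quick usage sketch (the asset name and myImageView are placeholders of mine, not from the question):

UIImage *bigImage = [UIImage imageNamed:@"photo"]; // hypothetical asset name;
                    // note imageNamed: caches, so prefer
                    // imageWithContentsOfFile: if you want the big
                    // image to be releasable
UIImage *smallImage = [self scaledImageForImage:bigImage
                                         toSize:CGSizeMake(150.0f, 150.0f)];
myImageView.image = smallImage; // the 150 × 150 UIImageView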


They won't cost as much in terms of frame buffer memory (i.e. the memory holding the pixels for display), but holding the unscaled images in memory will still incur a sizeable cost. If 1.5 MB is the compressed size, the decompressed footprint is likely to be far higher (iOS uses approximately 4 bytes × width × height per image to store an uncompressed UIImage). Such images also interact poorly with the automatic memory management the kernel performs when your app goes into the background: view backing stores are released under memory pressure, but the backing images aren't.
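
For example, assuming purely for illustration that a 1.5 MB JPEG decodes to 2048 × 1536 pixels, the uncompressed bitmap occupies roughly 2048 × 1536 × 4 bytes ≈ 12 MB, about eight times the file size.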

The best way to work out whether this is a problem (and whether you should resize the images yourself, store the smaller versions, and release the large ones) is to run your app through Instruments with the VM Tracker. If the regions storing the images are excessive, you'll be able to diagnose the problem and pick the appropriate solution. You might like to view the WWDC 2011 session on iOS Memory Management, which goes into image memory usage in some detail, including using Instruments to find problems.

As always, profile (or, as Apple might say, Instrument) before you optimise!


An alternative is to let ImageIO produce the thumbnail directly:

#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

+ (UIImage *)resizeImage:(UIImage *)image toResolution:(int)resolution {
  // Re-encode the UIImage so ImageIO can read it; if you have the
  // original file data to hand, use that instead and skip this step.
  NSData *imageData = UIImagePNGRepresentation(image);
  CGImageSourceRef src = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
  if (!src) return nil;

  CFDictionaryRef options = (__bridge CFDictionaryRef) @{
                                                   (id) kCGImageSourceCreateThumbnailWithTransform : @YES,
                                                   (id) kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                                                   (id) kCGImageSourceThumbnailMaxPixelSize : @(resolution)
                                                   };

  // Decode a thumbnail no larger than `resolution` pixels on its
  // longest side, honouring any EXIF orientation.
  CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(src, 0, options);
  CFRelease(src);

  UIImage *img = [[UIImage alloc] initWithCGImage:thumbnail];
  CGImageRelease(thumbnail); // the UIImage retains it; release our reference
  return img;
}
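
A hypothetical call site (the ImageUtils class name and variable names are assumptions of mine, not from the answer):

UIImage *thumb = [ImageUtils resizeImage:bigImage toResolution:150];
imageView.image = thumb;

Note that going through UIImagePNGRepresentation re-encodes the bitmap; if the original file is already on disk, creating the source with CGImageSourceCreateWithURL avoids that round trip.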


Yep. You should create downsized versions of the images, cache them to disk somewhere, use the small versions in your image views and unload the big ones if you can.
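
A minimal sketch of that approach, reusing the scaledImageForImage:toSize: helper from the first answer (the method name and cache-key scheme here are my own assumptions):

- (UIImage *)cachedThumbnailForImage:(UIImage *)bigImage key:(NSString *)key
{
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(
        NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *thumbPath = [cachesDir stringByAppendingPathComponent:
        [key stringByAppendingString:@"-150.png"]];

    // Reuse a previously written thumbnail if one is already on disk.
    UIImage *thumb = [UIImage imageWithContentsOfFile:thumbPath];
    if (thumb) return thumb;

    // Otherwise downsize once and cache the result for later use.
    thumb = [self scaledImageForImage:bigImage toSize:CGSizeMake(150.0f, 150.0f)];
    [UIImagePNGRepresentation(thumb) writeToFile:thumbPath atomically:YES];
    return thumb;
}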
