
I have an app that uses UIImage objects. Up to this point, I've been using image objects initialized using something like this:

UIImage *image = [UIImage imageNamed:imageName];

using an image in my app bundle. I've been adding functionality to let users supply imagery from the camera or their photo library via UIImagePickerController. These images obviously can't be in my app bundle, so I initialize the UIImage object a different way:

UIImage *image = [UIImage imageWithContentsOfFile:pathToFile];

This is done after first resizing the image so it is similar to the other files in my app bundle in both pixel dimensions and total bytes, in both cases using JPEG format (interestingly, PNG was much slower, even at the same file size). In other words, the file pointed to by pathToFile is of a similar size to an image in the bundle: the pixel dimensions match, and the compression was chosen so the byte count was similar.
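For reference, the resizing step looks roughly like this (a simplified sketch; the target size, JPEG quality, and file name here are just placeholders):

-(NSString *)saveResizedCopyOfImage:(UIImage *)pickedImage toSize:(CGSize)targetSize
{
   // Redraw the picked image at the target pixel dimensions.
   UIGraphicsBeginImageContextWithOptions(targetSize, YES, 1.0);
   [pickedImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
   UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
   UIGraphicsEndImageContext();

   // Compression quality chosen so the byte count roughly matches the bundled images.
   NSData *jpegData = UIImageJPEGRepresentation(resized, 0.8);

   NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
   NSString *pathToFile = [docsDir stringByAppendingPathComponent:@"pickedImage.jpg"];
   [jpegData writeToFile:pathToFile atomically:YES];
   return pathToFile;
}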

The app goes through a loop making small pieces from the original image, among other things that are not relevant to this post. My issue is that going through the loop using an image created the second way takes much longer than using an image created the first way.

I realize the first method caches the image, but I don't think that's relevant, unless I'm not understanding how the caching works. If it is the relevant factor, how can I add caching to the second method?

The relevant portion of code that is causing the bottleneck is this:

[image drawInRect:self.imageSquare];

Here, self is a subclass of UIImageView. Its imageSquare property is simply a CGRect defining what gets drawn. This portion is the same for both methods. So why is the second method so much slower with a similarly sized UIImage object?

Is there something I could be doing differently to optimize this process?

EDIT: I changed access to the bundled image to imageWithContentsOfFile, and the time to perform the loop went from about 4 seconds to just over a minute. So it looks like I need to find some way to do the kind of caching imageNamed does, but for non-bundled files.
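Something along these lines is what I have in mind for caching non-bundled files (a rough sketch; the method name is made up):

+(UIImage *)cachedImageForPath:(NSString *)path
{
   // Shared cache keyed by file path; NSCache evicts entries under memory pressure.
   // Note: this only caches the UIImage object; it does not force the image to be decoded.
   static NSCache *cache = nil;
   static dispatch_once_t onceToken;
   dispatch_once(&onceToken, ^{
       cache = [[NSCache alloc] init];
   });

   UIImage *image = [cache objectForKey:path];
   if (image == nil) {
       image = [UIImage imageWithContentsOfFile:path];
       if (image != nil) {
           [cache setObject:image forKey:path];
       }
   }
   return image;
}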


1 Answer


UIImage's imageNamed: doesn't simply cache the image; it caches an uncompressed image. The extra time was spent not on reading from local storage into RAM, but on decompressing the image.

The solution was to create a new uncompressed UIImage object and use it for the time-sensitive portion of the code. The uncompressed object is discarded once that section of code is complete. For completeness, here is a copy of the class method that returns an uncompressed UIImage object from a compressed one, thanks to another thread. Note that it assumes the image data is backed by a CGImage, which is not always true for UIImage objects.

+(UIImage *)decompressedImage:(UIImage *)compressedImage
{
   CGImageRef originalImage = compressedImage.CGImage;

   // Copying the data out of the original image's data provider forces the
   // image to be decoded into bitmap data.
   CFDataRef imageData = CGDataProviderCopyData(
                         CGImageGetDataProvider(originalImage));
   CGDataProviderRef imageDataProvider = CGDataProviderCreateWithCFData(imageData);
   CFRelease(imageData);

   // Rebuild a CGImage around the decoded data, keeping the original image's
   // geometry and pixel format.
   CGImageRef image = CGImageCreate(
                                CGImageGetWidth(originalImage),
                                CGImageGetHeight(originalImage),
                                CGImageGetBitsPerComponent(originalImage),
                                CGImageGetBitsPerPixel(originalImage),
                                CGImageGetBytesPerRow(originalImage),
                                CGImageGetColorSpace(originalImage),
                                CGImageGetBitmapInfo(originalImage),
                                imageDataProvider,
                                CGImageGetDecode(originalImage),
                                CGImageGetShouldInterpolate(originalImage),
                                CGImageGetRenderingIntent(originalImage));
   CGDataProviderRelease(imageDataProvider);

   UIImage *decompressedImage = [UIImage imageWithCGImage:image];
   CGImageRelease(image);
   return decompressedImage;
}
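
A minimal usage sketch follows (the class name, loop, and rect list are placeholders for whatever the real code does; the point is simply to decode once before the hot loop and discard the result afterwards):

// Decode once up front, reuse the uncompressed copy for every piece, then discard it.
UIImage *compressed = [UIImage imageWithContentsOfFile:pathToFile];
UIImage *uncompressed = [MyImageClass decompressedImage:compressed];

for (NSValue *rectValue in pieceRects) {   // pieceRects: NSArray of CGRects wrapped in NSValue
   CGRect pieceRect = [rectValue CGRectValue];
   UIGraphicsBeginImageContextWithOptions(pieceRect.size, NO, 0.0);
   // Offset the draw so the desired region of the source lands at the context origin.
   [uncompressed drawInRect:CGRectMake(-pieceRect.origin.x, -pieceRect.origin.y,
                                       uncompressed.size.width, uncompressed.size.height)];
   UIImage *piece = UIGraphicsGetImageFromCurrentImageContext();
   UIGraphicsEndImageContext();
   // ... use piece ...
}
uncompressed = nil;   // let the large decoded bitmap go once the loop is done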
answered 2013-09-12 18:25:52