
I'm attempting to create a viewer that displays an image and lets the user modify the pixel data with gestures and re-render it - performance is the most important objective. Right now I have an object that stores a byte array (~25 MB) of pixel data, plus methods that render a UIImage from the array and display it in a UIImageView. According to Instruments, keeping the byte array around all the time is killing my memory and performance. Given that this is fairly low-level work, should this be attempted with more of an assembly-level approach, or can it be done this way and I'm just missing something? Haha

    @interface ImageInfo : NSObject

    @property (strong, nonatomic) NSMutableArray *byteArray;

    @property (nonatomic) int imageHeight;
    @property (nonatomic) int imageWidth;

    @property (nonatomic) int wLevel1;
    @property (nonatomic) int wWidth1;
    @property (nonatomic) int maxPixelValue;
    @property (nonatomic) int minPixelValue;

    @end
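For context, here is roughly what I mean by the pixel manipulation - a minimal plain-C sketch (function and variable names are my own, not from my actual project) of storing the pixels in one contiguous buffer and applying a window level/width mapping like the `wLevel1`/`wWidth1` properties above suggest:

```c
#include <stdint.h>
#include <stddef.h>

/* Map a raw pixel value to 0..255 for display, given a window
 * level (center) and width - hypothetical names for illustration. */
static uint8_t apply_window(int pixel, int level, int width) {
    int lo = level - width / 2;
    int hi = level + width / 2;
    if (pixel <= lo) return 0;
    if (pixel >= hi) return 255;
    return (uint8_t)(((pixel - lo) * 255) / (hi - lo));
}

/* Fill an 8-bit display buffer from a 16-bit source buffer.
 * Both are plain C arrays, so there is no per-pixel object
 * overhead the way there is with NSNumber boxing in an NSMutableArray. */
void render_window(const uint16_t *src, uint8_t *dst, size_t count,
                   int level, int width) {
    for (size_t i = 0; i < count; i++)
        dst[i] = apply_window(src[i], level, width);
}
```

My understanding is that an `NSMutableArray` holds object pointers, so every byte ends up boxed in an `NSNumber` - which would explain the memory blowup versus a raw buffer like the one above.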
