
I can't believe my eyes. The two versions are essentially the same code, just the Objective-C translated into Swift, yet the Objective-C version always produces the correct output, while the Swift version sometimes gets the right answer and sometimes a wrong one.

Swift version:

class ImageProcessor1 {
    class func processImage(image: UIImage) {
        guard let cgImage = image.cgImage else {
            return
        }
        let width = Int(image.size.width)
        let height = Int(image.size.height)
        let bytesPerRow = width * 4

        // Allocate a raw buffer, one UInt32 per pixel
        let imageData = UnsafeMutablePointer<UInt32>.allocate(capacity: width * height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        // RGBA layout: 8 bits per component, big-endian, premultiplied alpha last
        let bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue
        guard let imageContext = CGContext(data: imageData, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
            return
        }

        // Draw the image into the buffer, then dump every pixel value
        imageContext.draw(cgImage, in: CGRect(origin: .zero, size: image.size))
        print("---------data from Swift version----------")
        for i in 0..<width * height {
            print(imageData[i])
        }
    }
}

Objective-C version:

- (UIImage *)processUsingPixels:(UIImage*)inputImage {

  // 1. Get the raw pixels of the image
  UInt32 * inputPixels;

  CGImageRef inputCGImage = [inputImage CGImage];
  NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
  NSUInteger inputHeight = CGImageGetHeight(inputCGImage);

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

  NSUInteger bytesPerPixel = 4;
  NSUInteger bitsPerComponent = 8;

  NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;

  inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));

  // 2. Create a bitmap context backed by the buffer and draw the image into it
  CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight,
                                               bitsPerComponent, inputBytesPerRow, colorSpace,
                                               kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

  CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);

  // 3. Log every pixel value
  NSLog(@"---------data from Object-c version----------");
  UInt32 * currentPixel = inputPixels;
  for (NSUInteger j = 0; j < inputHeight; j++) {
    for (NSUInteger i = 0; i < inputWidth; i++) {
      UInt32 color = *currentPixel;
      NSLog(@"%u", color);
      currentPixel++;
    }
  }
  return inputImage;
}

The code is available at https://github.com/tuchangwei/Pixel

If you get the same output from both, run it a few more times.


1 Answer


Both your Objective-C and Swift code leak. Also, your Swift code doesn't initialize the allocated memory. When I initialize that memory, I see no differences between the two:

imageData.initialize(repeating: 0, count: width * height)
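
To address the leak as well, that allocation needs a matching deallocation before the method returns. A minimal sketch of the corrected block in the original method (only these lines change; the defer placement is my suggestion):

let imageData = UnsafeMutablePointer<UInt32>.allocate(capacity: width * height)
imageData.initialize(repeating: 0, count: width * height)  // zero-fill, like calloc
defer {
    imageData.deinitialize(count: width * height)  // balances the initialize above
    imageData.deallocate()                         // fixes the leak
}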

FWIW, while allocate does not initialize the memory buffer, calloc does:

...the allocated memory is filled with bytes of value zero.
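
In other words, the Objective-C buffer always starts out at zero, while the Swift allocate(capacity:) buffer holds whatever bytes happened to be in that memory, which varies from run to run. A small self-contained sketch of the contrast (illustrative only, not part of the original code):

import Foundation

// calloc zero-fills: every byte of the returned block is 0
let rawZeroed = calloc(4, MemoryLayout<UInt32>.size)!
let zeroed = rawZeroed.bindMemory(to: UInt32.self, capacity: 4)
print(zeroed[0])  // always 0
free(rawZeroed)

// allocate(capacity:) does NOT initialize: the contents are undefined
// until written, so reading them first yields unpredictable values
let raw = UnsafeMutablePointer<UInt32>.allocate(capacity: 4)
raw.initialize(repeating: 0, count: 4)  // must initialize before reading
print(raw[0])  // 0, now that it has been initialized
raw.deinitialize(count: 4)
raw.deallocate()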

But personally, I would suggest you get out of the business of allocating memory entirely: pass nil for the data parameter and use bindMemory to access the buffer the context creates for you. If you do that, as the documentation says:

Pass NULL if you want this function to allocate memory for the bitmap. This frees you from managing your own memory, which reduces memory leak issues.

So, perhaps:

class func processImage(image: UIImage) {
    guard let cgImage = image.cgImage else {
        return
    }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4

    let colorSpace = CGColorSpaceCreateDeviceRGB()

    let bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue

    // Pass nil for `data`: the context allocates and owns the pixel buffer,
    // so there is nothing for the caller to free
    guard
        let imageContext = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
        let rawPointer = imageContext.data
    else {
        return
    }

    // View the context's buffer as one UInt32 per pixel
    let pixelBuffer = rawPointer.bindMemory(to: UInt32.self, capacity: width * height)

    imageContext.draw(cgImage, in: CGRect(origin: .zero, size: CGSize(width: width, height: height)))
    print("---------data from Swift version----------")
    for i in 0..<width * height {
        print(pixelBuffer[i])
    }
}
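
For reference, a hypothetical call site (the asset name "sample" is an assumption, and the method is assumed to still live on ImageProcessor1):

// Hypothetical usage — "sample" must exist in your asset catalog
if let photo = UIImage(named: "sample") {
    ImageProcessor1.processImage(image: photo)
}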
answered 2019-06-28T06:19:08.920