Note: I didn't notice that the original question also asked for scaling. Either way, for those who simply need to crop a CMSampleBuffer, here is the solution.

The buffer is just an array of pixels, so you can actually process it directly without using vImage. The code is written in Swift, but I think it's easy to find the Objective-C equivalent.

First, make sure your CMSampleBuffer is in BGRA format. If not, the preset you are using is probably YUV, which will break the bytes-per-row value used later.
dataOutput = AVCaptureVideoDataOutput()
dataOutput.videoSettings = [
    String(kCVPixelBufferPixelFormatTypeKey):
        NSNumber(value: kCVPixelFormatType_32BGRA)
]
Then, when you get the sample buffer:
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let cropWidth = 640
let cropHeight = 640
let colorSpace = CGColorSpaceCreateDeviceRGB()

// By keeping the source buffer's bytesPerRow while shrinking width/height,
// the context reads only the top-left cropWidth x cropHeight region.
let context = CGContext(data: baseAddress,
                        width: cropWidth,
                        height: cropHeight,
                        bitsPerComponent: 8,
                        bytesPerRow: bytesPerRow,
                        space: colorSpace,
                        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
                            | CGBitmapInfo.byteOrder32Little.rawValue)
// Now the cropped image is inside the context.
// You can convert it back to a CVPixelBuffer
// using CVPixelBufferCreateWithBytes if you want.

// Create the image while the base address is still locked,
// since the context reads the pixel data in place.
let cgImage: CGImage = context!.makeImage()!
let image = UIImage(cgImage: cgImage)

CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)
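The comment above mentions converting the cropped bitmap back to a CVPixelBuffer with CVPixelBufferCreateWithBytes; a minimal sketch of that step, reusing `context`, `cropWidth`, `cropHeight`, and `bytesPerRow` from the snippet above, might look like this:

```swift
// Sketch: wrap the context's pixel data in a new CVPixelBuffer.
// Note: CVPixelBufferCreateWithBytes does NOT copy the bytes, so the
// source buffer must stay locked (and alive) for as long as this
// new buffer is used, or you should copy the data first.
var croppedBuffer: CVPixelBuffer?
let status = CVPixelBufferCreateWithBytes(
    kCFAllocatorDefault,
    cropWidth,
    cropHeight,
    kCVPixelFormatType_32BGRA,
    context!.data!,   // raw pointer to the cropped pixels
    bytesPerRow,      // still the source buffer's stride
    nil,              // release callback (none in this sketch)
    nil,              // release refCon
    nil,              // pixel buffer attributes
    &croppedBuffer)
if status != kCVReturnSuccess {
    // handle the error appropriately
}
```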
If you want to crop from a specific position, add the following code:
// calculate the start position of the crop region
let bytesPerPixel = 4
let startPoint = [ "x": 10, "y": 10 ]
let startAddress = baseAddress! + startPoint["y"]! * bytesPerRow + startPoint["x"]! * bytesPerPixel
Then change `baseAddress` to `startAddress` in `CGContext()`. Make sure not to exceed the width and height of the original image.
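A hedged sketch of that bounds check, reusing `imageBuffer`, `baseAddress`, `startPoint`, `cropWidth`, `cropHeight`, `bytesPerRow`, and `bytesPerPixel` from the snippets above, might look like:

```swift
// Sketch: guard that the crop rectangle stays inside the source image.
let imageWidth = CVPixelBufferGetWidth(imageBuffer)
let imageHeight = CVPixelBufferGetHeight(imageBuffer)

guard startPoint["x"]! + cropWidth <= imageWidth,
      startPoint["y"]! + cropHeight <= imageHeight else {
    // crop rectangle exceeds the source bounds; bail out
    return
}

let startAddress = baseAddress! +
    startPoint["y"]! * bytesPerRow +
    startPoint["x"]! * bytesPerPixel
// pass startAddress (instead of baseAddress) to CGContext(data:...)
```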