I'm trying to resize a CMSampleBufferRef as quickly as possible on an iOS 8 device for use in image processing. From what I've found online, the way to do this seems to be the vImage API in the Accelerate framework. However, I haven't done much with the Accelerate framework, and I can't quite figure out how to do this. Here's what I have so far to scale an image to 200x200:
#import &lt;Accelerate/Accelerate.h&gt;

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cvimgRef = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cvimgRef, 0);
    void *imageData = CVPixelBufferGetBaseAddress(cvimgRef);
    size_t width = CVPixelBufferGetWidth(cvimgRef);
    size_t height = CVPixelBufferGetHeight(cvimgRef);
    // The source rows may be padded, so use the real stride rather than 4 * width.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cvimgRef);

    // My guess: the destination has to be allocated by the caller, 200 x 200 pixels at 4 bytes each.
    unsigned char *newData = malloc(200 * 200 * 4);

    vImage_Buffer inBuff = { imageData, height, width, bytesPerRow };
    vImage_Buffer outBuff = { newData, 200, 200, 4 * 200 };

    // NOT SURE IF THIS IS THE CORRECT FUNCTION... the video output's
    // kCVPixelBufferPixelFormatTypeKey is set to kCVPixelFormatType_32BGRA, and the
    // function is named ARGB, not BGRA. From what I've read, the scale only cares that
    // there are four interleaved 8-bit channels, so BGRA should be fine, but I'm not certain.
    vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, kvImageNoFlags);
    if (err != kvImageNoError) {
        NSLog(@"vImageScale_ARGB8888 failed: %ld", err);
    }

    CVPixelBufferUnlockBaseAddress(cvimgRef, 0);
    // ... hand outBuff off for processing; newData must eventually be free'd ...
}
where outBuff holds the result. After that, I'm also not sure how to convert outBuff back into a CVImageBufferRef for further image processing. Any suggestions would be appreciated!
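In case it helps, here is the direction I've been considering for that conversion, based on the CVPixelBufferCreateWithBytes documentation. ReleaseBytes is just a helper I made up so that newData gets freed when the buffer is released; I have no idea whether this is actually the right approach:

static void ReleaseBytes(void *releaseRefCon, const void *baseAddress)
{
    // Called when the pixel buffer is released; frees the malloc'd scale destination.
    free((void *)baseAddress);
}

// ... after the scale succeeds:
CVPixelBufferRef scaledBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                            200, 200,                    // width, height
                                            kCVPixelFormatType_32BGRA,   // same format as the source
                                            newData,                     // bytes are wrapped, not copied
                                            4 * 200,                     // bytesPerRow of the destination
                                            ReleaseBytes, NULL,          // frees newData on release
                                            NULL, &scaledBuffer);
if (ret != kCVReturnSuccess) {
    NSLog(@"CVPixelBufferCreateWithBytes failed: %d", ret);
}

Since a CVPixelBufferRef is a CVImageBufferRef, I assume scaledBuffer could then slot back into the rest of my pipeline, but I'd welcome corrections if wrapping the bytes like this is wrong.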