
I'm building a simple pipeline that grabs frames from an AVCaptureSession, processes them in OpenCV, and then renders them in OpenGL. It's based on RosyWriter, but without the audio and recording features. The OpenCV processing looks like this:

- (void)processPixelBuffer:(CVImageBufferRef)pixelBuffer
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    int bufferWidth  = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Wrap the buffer's memory without copying; cv::Mat is (rows, cols) = (height, width)
    cv::Mat image = cv::Mat(bufferHeight, bufferWidth, CV_8UC4, pixel, bytesPerRow);

    // do any processing
    [self setDisplay_matrix:image];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

So far this function doesn't copy any memory, and I'd like to keep it that way. The problem is that the pixelBuffer may still own the memory backing display_matrix. The processing code may or may not allocate new memory and store it in image; if it doesn't, I have to pass the pixelBuffer along with display_matrix to keep the data from being freed out from under it. Is there a way for me to take ownership of the memory? I want to destroy the pixelBuffer without destroying the memory it points to.

On a related note, what exactly does LockBaseAddress do? If I pass around a cv::Mat / CVImageBufferRef pair, do I have to lock the base address every time I want to modify or use the data through the cv::Mat?
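
To make concrete what I mean by passing the pixelBuffer along with the matrix, here is a rough sketch of the pairing I have in mind (the ivar names and the extra setter are just illustrative, not my real code):

- (void)setDisplay_matrix:(cv::Mat)matrix fromPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    // Drop whatever backed the previous frame
    if (_display_pixelBuffer) {
        CVPixelBufferUnlockBaseAddress(_display_pixelBuffer, 0);
        CVPixelBufferRelease(_display_pixelBuffer);
        _display_pixelBuffer = NULL;
    }

    _display_matrix = matrix;

    // Retain the buffer and keep its base address locked so the memory
    // the Mat wraps stays valid until the next frame replaces it
    if (pixelBuffer) {
        _display_pixelBuffer = CVPixelBufferRetain(pixelBuffer);
        CVPixelBufferLockBaseAddress(_display_pixelBuffer, 0);
    }
}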


1 Answer


You can create a data provider from the base-address data without copying, and then create a UIImage from that provider. To keep the buffer from being reused while the image is referenced, you need to retain the sample buffer and lock its base address; they should be unlocked and released automatically once you forget the image object:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Retain sample buffer and lock base address
    CFRetain(sampleBuffer);
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    UIImage *image = imageFromData(baseAddress, width, height, bytesPerRow, sampleBuffer);

    // Now you can store this UIImage as long as you want
}

imageFromData comes from this project, https://github.com/k06a/UIImage-DecompressAndMap/blob/master/UIImage%2BDecompressAndMap.m, adapted slightly:

// Forward declaration; the release callback is defined below
void unlock_function(void *info, const void *data, size_t size);

UIImage *imageFromData(void *data, size_t width, size_t height, size_t bytesPerRow, CMSampleBufferRef sampleBuffer)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // The sample buffer is passed as the provider's info pointer so the
    // release callback can unlock and release it later
    CGDataProviderRef provider = CGDataProviderCreateWithData((void *)sampleBuffer, data, bytesPerRow * height, unlock_function);
    CGImageRef inflatedImage = CGImageCreate(width, height, 8, 4*8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, provider, NULL, NO, kCGRenderingIntentDefault);

    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);

    UIImage *img = [UIImage imageWithCGImage:inflatedImage scale:1.0f orientation:UIImageOrientationUp];
    CGImageRelease(inflatedImage);
    return img;
}

You also need to provide the unlock_function, which is used as the data provider's release callback:

void unlock_function(void *info, const void *data, size_t size)
{
    // Unlock the base address and release the sample buffer
    CMSampleBufferRef sampleBuffer = (CMSampleBufferRef)info;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CFRelease(sampleBuffer);
}
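
For illustration, a minimal usage sketch of how the lifetime plays out (the lastFrame property is hypothetical, not part of the answer above): as long as something holds the UIImage, the sample buffer stays retained and its base address stays locked; once the last reference to the image goes away, the data provider's release callback should fire and unlock_function cleans up.

@property (nonatomic, strong) UIImage *lastFrame;   // hypothetical property

// At the end of captureOutput:didOutputSampleBuffer:fromConnection:
UIImage *image = imageFromData(baseAddress, width, height, bytesPerRow, sampleBuffer);
self.lastFrame = image;   // buffer stays locked and retained while this lives

// Later, when the frame is no longer needed:
self.lastFrame = nil;     // once nothing else references the underlying CGImage,
                          // unlock_function should unlock and release the buffer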