I want to perform a few operations on a CVPixelBufferRef and end up with a cv::Mat:

  • Crop to a region of interest
  • Scale to a fixed size
  • Equalize the histogram
  • Convert to grayscale - 8 bits per pixel (CV_8UC1)

I'm not sure what the most efficient order is; however, I know that all of these operations are available on an OpenCV matrix, so I'd like to know how to convert to one.

- (void) captureOutput:(AVCaptureOutput *)captureOutput 
         didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
         fromConnection:(AVCaptureConnection *)connection
{
     CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

     cv::Mat frame = f(pixelBuffer); // how do I implement f()?
}

2 Answers

I found the answer in some excellent GitHub source code and adapted it here for simplicity. It also does the grayscale conversion for me.

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);

// Set the following dict on AVCaptureVideoDataOutput's videoSettings to get YUV output
// @{ kCVPixelBufferPixelFormatTypeKey : kCVPixelFormatType_420YpCbCr8BiPlanarFullRange }

NSAssert(format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"Only YUV is supported");

// The first plane / channel (at index 0) is the grayscale plane
// See more information about the YUV format
// http://en.wikipedia.org/wiki/YUV
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

int width = (int) CVPixelBufferGetWidth(pixelBuffer);
int height = (int) CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

// Pass the plane's stride explicitly; bytes per row can be wider than
// the width due to row padding.
cv::Mat mat(height, width, CV_8UC1, baseaddress, bytesPerRow);

// Use the mat here

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
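
The comment in the snippet above refers to the capture configuration. A minimal sketch of that setup, assuming videoOutput is your AVCaptureVideoDataOutput instance:

// Ask the capture pipeline for bi-planar YUV frames, so that plane 0
// can be used directly as a grayscale image.
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                               @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };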

I think the best order is (a short sketch follows the list):

  1. Convert to grayscale (since with this pixel format it comes almost for free)
  2. Crop (this should be a fast operation, and it reduces the number of pixels to work with)
  3. Scale down
  4. Equalize the histogram
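
A minimal sketch of that pipeline on the grayscale mat from above; the ROI rectangle and the 64x64 target size are hypothetical placeholders:

// Crop to a (hypothetical) region of interest; this creates a view, not a copy.
cv::Rect roi(0, 0, 200, 200);
cv::Mat cropped = mat(roi);

// Scale down to a fixed size.
cv::Mat resized;
cv::resize(cropped, resized, cv::Size(64, 64));

// Equalize the histogram in place.
cv::equalizeHist(resized, resized);
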
answered 2013-10-14T13:34:18.230

I'm using this. My cv::Mat is configured for the BGR (8UC3) color format.

CVImageBufferRef -> cv::Mat

- (cv::Mat) matFromImageBuffer: (CVImageBufferRef) buffer {

    CVPixelBufferLockBaseAddress(buffer, 0);

    void *address = CVPixelBufferGetBaseAddress(buffer);
    int width = (int) CVPixelBufferGetWidth(buffer);
    int height = (int) CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

    // Wrap the buffer's BGRA pixels without copying, passing the real stride
    // (bytes per row can be wider than width * 4 due to padding).
    cv::Mat wrapped(height, width, CV_8UC4, address, bytesPerRow);

    // Converting to BGR also copies the pixels, so the returned mat stays
    // valid after the buffer is unlocked (returning `wrapped` would dangle).
    cv::Mat mat;
    cv::cvtColor(wrapped, mat, CV_BGRA2BGR);

    CVPixelBufferUnlockBaseAddress(buffer, 0);

    return mat;
}
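
A hedged usage sketch from the capture delegate, assuming this method lives on the class implementing the delegate callback:

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
cv::Mat frame = [self matFromImageBuffer:imageBuffer];
// The cvtColor copy above means `frame` owns its pixels and can safely
// outlive the sample buffer.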

cv::Mat -> CVImageBufferRef (CVPixelBufferRef)

- (CVImageBufferRef) getImageBufferFromMat: (cv::Mat) mat {

    cv::cvtColor(mat, mat, CV_BGR2BGRA);

    int width = mat.cols;
    int height = mat.rows;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             // [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             // [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:width], (id)kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:height], (id)kCVPixelBufferHeightKey,
                             nil];

    CVPixelBufferRef imageBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef) options, &imageBuffer);

    NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);

    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *base = (uint8_t *) CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Copy row by row: the buffer's stride can be wider than width * 4.
    for (int row = 0; row < height; row++) {
        memcpy(base + row * bytesPerRow, mat.ptr(row), width * 4);
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return imageBuffer;
}
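
A hedged usage sketch; the buffer comes from CVPixelBufferCreate, so by the Create rule the caller owns it and must release it (bgrMat here stands for any CV_8UC3 matrix):

CVPixelBufferRef pixelBuffer = [self getImageBufferFromMat:bgrMat];
// ... hand the buffer to AVFoundation, Core Video, etc. ...
CVPixelBufferRelease(pixelBuffer);
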
answered 2017-04-18T06:54:33.290