
I'm recording live video in my iOS app. On another Stack Overflow page, I found that you can use vImage_Buffer to work on the frames.

The problem is that I have no idea how to get back to a CVPixelBufferRef from the resulting vImage_Buffer.

Here is the code given in the other answer:

NSInteger cropX0 = 100,
          cropY0 = 100,
          cropHeight = 100,
          cropWidth = 100,
          outWidth = 480,
          outHeight = 480;

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);                   
CVPixelBufferLockBaseAddress(imageBuffer,0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

vImage_Buffer inBuff;                       
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;

int startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = baseAddress + startpos;

unsigned char *outImg = (unsigned char*)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};

vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@" error %ld", err);

And now I need to convert outBuff to a CVPixelBufferRef.

I assume I need to use vImageBuffer_CopyToCVPixelBuffer, but I'm not sure how.

My first attempt failed with an EXC_BAD_ACCESS:

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
    
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    
vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst,  //BGRX8888
    .colorSpace = NULL,  //sRGB
};
    
vImageBuffer_CopyToCVPixelBuffer(&outBuff,
                                 &format,
                                 pixelBuffer,
                                 NULL,
                                 NULL,
                                 kvImageNoFlags);  // Here is the crash!
    
    
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

Any ideas?


2 Answers

NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool : YES], kCVPixelBufferCGImageCompatibilityKey,
    [NSNumber numberWithBool : YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
    [NSNumber numberWithInt : 480], kCVPixelBufferWidthKey,
    [NSNumber numberWithInt : 480], kCVPixelBufferHeightKey,
    nil];

CVPixelBufferRef pixbuffer = NULL;
// Note: outImg is tightly packed, so its row stride is 4 * outWidth,
// not the source buffer's bytesPerRow.
CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                               480,
                                               480,
                                               kCVPixelFormatType_32BGRA,
                                               outImg,
                                               4 * outWidth,
                                               NULL,
                                               NULL,
                                               (__bridge CFDictionaryRef)options,
                                               &pixbuffer);

You should create a new pixelBuffer like the one above.

answered 2018-01-03T05:35:20.027
  • Just in case… if you want the cropped live video fed into your interface, use AVPlayerLayer, AVCaptureVideoPreviewLayer and/or other CALayer subclasses, and use the layer's bounds, frame, and position to map your 100x100 pixel area into a 480x480 area.

Notes on the vImage part of your question (details may vary from case to case):

  1. CVPixelBufferCreateWithBytes will not work with vImageBuffer_CopyToCVPixelBuffer(), because you need to copy the vImage_Buffer data into a "clean" or "empty" CVPixelBuffer.

  2. No locking/unlocking is necessary here, but make sure you know when to lock and when not to lock pixel buffers.

  3. The inBuff vImage_Buffer just needs to be initialized from the pixel buffer data, not manually (unless you know how to use CGContexts etc. to initialize the pixel grid).

  4. Take advantage of vImageBuffer_InitWithCVPixelBuffer().

  5. vImageScale_ARGB8888 scales the entire CVPixel data to a smaller/larger rectangle. It will not scale a portion/crop of the buffer into another buffer.

  6. When using vImageBuffer_CopyToCVPixelBuffer(), the vImageCVImageFormatRef and vImage_CGImageFormat need to be filled in correctly.

    CGColorSpaceRef dstColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);
    
    vImage_CGImageFormat format = {
        .bitsPerComponent = 16,
        .bitsPerPixel = 64,
        .bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast  |  kCGBitmapByteOrder16Big ,
        .colorSpace = dstColorSpace
    };
    vImageCVImageFormatRef vformat = vImageCVImageFormat_Create(kCVPixelFormatType_4444AYpCbCr16,
                                                                kvImage_ARGBToYpCbCrMatrix_ITU_R_709_2,
                                                                kCVImageBufferChromaLocation_Center,
                                                                format.colorSpace,
                                                                0);
    
    CVPixelBufferRef destBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          480,
                                          480,
                                          kCVPixelFormatType_4444AYpCbCr16,
                                          NULL,
                                          &destBuffer);
    
    NSParameterAssert(status == kCVReturnSuccess && destBuffer != NULL);
    
    vImage_Error err = vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &format, destBuffer, vformat, NULL, kvImagePrintDiagnosticsToConsole);
    

Note: these are the settings for 64-bit ProRes with alpha; adjust for 32-bit.

answered 2019-11-01T19:40:12.900