
Since iOS 6, Apple has provided a way to create a CIImage directly from native YUV data through this call:

initWithCVPixelBuffer:options:

In the Core Image Programming Guide, they mention this feature:

Take advantage of the support for YUV images in iOS 6.0 and later. Camera pixel buffers are natively YUV, but most image processing algorithms expect RGBA data. There is a cost to converting between the two. Core Image supports reading YUV from CVPixelBuffer objects and applying the appropriate color transform.

    options = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
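
For a camera-delivered buffer, my understanding of the intended usage is roughly the following sketch (CIImageFromCameraBuffer is just an illustrative name, not an API):

    #import <CoreImage/CoreImage.h>
    #import <CoreVideo/CoreVideo.h>

    // Sketch only: wrap a biplanar YUV pixel buffer (e.g. straight from the camera)
    // and let Core Image apply the YUV -> RGB transform itself.
    static CIImage *CIImageFromCameraBuffer(CVPixelBufferRef cameraBuffer)
    {
        NSDictionary *options = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                       @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
        return [[CIImage alloc] initWithCVPixelBuffer:cameraBuffer options:options];
    }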

But I am unable to use it properly. I have raw YUV data, so this is what I did:

    void *YUV[3] = { data[0], data[1], data[2] };   // base addresses of the Y, Cb and Cr planes
    size_t planeWidth[3]       = { width,  width / 2,  width / 2 };
    size_t planeHeight[3]      = { height, height / 2, height / 2 };
    size_t planeBytesPerRow[3] = { stride, stride / 2, stride / 2 };

    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                       width,
                       height,
                       kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                       NULL,                   // dataPtr (no contiguous buffer)
                       width * height * 1.5,   // dataSize
                       3,                      // numberOfPlanes
                       YUV,
                       planeWidth,
                       planeHeight,
                       planeBytesPerRow,
                       NULL,                   // releaseCallback
                       NULL,                   // releaseRefCon
                       NULL,                   // pixelBufferAttributes
                       &pixelBuffer);

    NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                               @(kCVPixelFormatType_420YpCbCr8PlanarFullRange) };

    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];

I am getting nil for image. Any idea what I am missing?

EDIT: I added a lock and unlock of the base address around the call. I also dumped the contents of the pixel buffer to verify that it properly holds the data, so it looks like something is wrong with the init call only. The CIImage object is still returning nil.

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

2 Answers


There should be an error message in the console: initWithCVPixelBuffer failed because the CVPixelBufferRef is not IOSurface backed. See Apple's Technical Q&A QA1781 for how to create an IOSurface-backed CVPixelBuffer.

Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface backed...

...To do this, you must specify kCVPixelBufferIOSurfacePropertiesKey in the pixelBufferAttributes dictionary when creating the pixel buffer with CVPixelBufferCreate():

    NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSDictionary dictionary], (id)kCVPixelBufferIOSurfacePropertiesKey,
        nil];
    // you may add other keys as appropriate, e.g. kCVPixelBufferPixelFormatTypeKey,
    // kCVPixelBufferWidthKey, kCVPixelBufferHeightKey, etc.

    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(... (CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);

Alternatively, CVPixelBuffers vended by CVPixelBufferPoolCreatePixelBuffer() will also be IOSurface backed, provided the pixelBufferAttributes dictionary passed to CVPixelBufferPoolCreate() contains kCVPixelBufferIOSurfacePropertiesKey.
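
Putting that together for the planar YUV data in the question, a rough, untested sketch could look like the following. It reuses data, width, height and stride from the question, and the row-by-row copy is just one way to fill the buffer while respecting its own bytes-per-row:

    NSDictionary *attributes = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };

    // Let Core Video allocate an IOSurface-backed planar buffer...
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault,
                                       width, height,
                                       kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                       (__bridge CFDictionaryRef)attributes,
                                       &pixelBuffer);

    // ...then copy the Y, Cb and Cr planes into it.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    for (size_t plane = 0; plane < 3; plane++) {
        uint8_t *dst       = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        size_t dstStride   = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
        size_t srcStride   = (plane == 0) ? stride : stride / 2;
        size_t rows        = (plane == 0) ? height : height / 2;
        const uint8_t *src = data[plane];
        for (size_t row = 0; row < rows; row++) {
            memcpy(dst + row * dstStride, src + row * srcStride, MIN(srcStride, dstStride));
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:nil];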

answered 2014-10-28T07:00:25.763

I was working on a similar problem and kept finding that same quote from Apple, without any further information about how to work in a YUV color space. Then I came across the following:

By default, Core Image assumes that processing nodes are 128 bits per pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space. You can specify a different working color space by providing a Quartz 2D CGColorSpace object. Note that the working color space must be RGB-based. If you have YUV data as input (or other data that is not RGB-based), you can use ColorSync functions to convert to the working color space. (See the Quartz 2D Programming Guide for information on creating and using CGColorSpace objects.) With 8-bit YUV 4:2:2 sources, Core Image can process 240 HD layers per gigabyte. Eight-bit YUV is the native color format for video sources such as DV, MPEG, uncompressed D1, and JPEG. You need to convert YUV color spaces to an RGB color space for Core Image.

I noticed that there are no YUV color spaces, only Gray and RGB, and their calibrated cousins. I am not sure yet how to convert the color space, but if I find out I will certainly report back here.
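
For what it is worth, the only lever I have found so far is tagging the input with an explicit RGB color space via the kCIImageColorSpace option and letting Core Image convert when rendering. A rough sketch of what I have been experimenting with (pixelBuffer stands for whatever CVPixelBufferRef you already have, and the DeviceRGB choice is just my guess, not a recommended conversion path):

    // Core Image's working space has to be RGB-based; tag the input with an RGB space
    // and let Core Image convert into it when rendering.
    CGColorSpaceRef rgbSpace = CGColorSpaceCreateDeviceRGB();

    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer
                                             options:@{ kCIImageColorSpace : (__bridge id)rgbSpace }];

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef rendered = [context createCGImage:image fromRect:image.extent];
    // ... draw or save the CGImage, then CGImageRelease(rendered) ...
    CGColorSpaceRelease(rgbSpace);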

answered 2014-12-04T00:05:21.110