
I am trying to convert a YUV image to a CIImage and ultimately a UIImage. I am fairly new to this and trying to figure out an easy way to do it. From what I have learned, from iOS 6 onward YUV can be used directly to create a CIImage, but when I try to create one the CIImage only holds a nil value. My code is this ->

NSLog(@"Started DrawVideoFrame\n");

CVPixelBufferRef pixelBuffer = NULL;

CVReturn ret = CVPixelBufferCreateWithBytes(
                                            kCFAllocatorDefault, iWidth, iHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                            lpData, bytesPerRow, 0, 0, 0, &pixelBuffer
                                            );

if(ret != kCVReturnSuccess)
{
    NSLog(@"CVPixelBufferRelease Failed");
    CVPixelBufferRelease(pixelBuffer);
}

NSDictionary *opt =  @{ (id)kCVPixelBufferPixelFormatTypeKey :
                      @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };

CIImage *cimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:opt];
NSLog(@"CURRENT CIImage -> %p\n", cimage);

UIImage *image = [UIImage imageWithCIImage:cimage scale:1.0 orientation:UIImageOrientationUp];
NSLog(@"CURRENT UIImage -> %p\n", image);

Here lpData is the YUV data, which is an array of unsigned char.

This also looks interesting: vImageMatrixMultiply, but I cannot find any example of it. Can anyone help me with this?


2 Answers


If you have a video frame object that looks like this:

int width, 
int height, 
unsigned long long time_stamp,
unsigned char *yData, 
unsigned char *uData, 
unsigned char *vData,
int yStride,
int uStride,
int vStride
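
Collected into a plain C struct (the type and helper names here are hypothetical, mirroring the fields above), such a frame might look like:

```c
#include <stdlib.h>

// Hypothetical container for one planar YUV (I420) frame,
// mirroring the fields listed above.
typedef struct {
    int width;
    int height;
    unsigned long long time_stamp;
    unsigned char *yData;   // width x height luma samples
    unsigned char *uData;   // (width/2) x (height/2) chroma samples
    unsigned char *vData;   // (width/2) x (height/2) chroma samples
    int yStride;            // bytes per row of yData (>= width)
    int uStride;            // bytes per row of uData (>= width/2)
    int vStride;            // bytes per row of vData (>= width/2)
} VideoFrame;

// Allocate the three planes for a frame with no row padding.
static VideoFrame frame_alloc(int width, int height) {
    VideoFrame f = {0};
    f.width   = width;
    f.height  = height;
    f.yStride = width;
    f.uStride = width / 2;
    f.vStride = width / 2;
    f.yData = calloc((size_t)f.yStride * height, 1);
    f.uData = calloc((size_t)f.uStride * (height / 2), 1);
    f.vData = calloc((size_t)f.vStride * (height / 2), 1);
    return f;
}
```

The strides are kept separate from the width because many decoders pad each row for alignment; the copy code below relies on that distinction.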

You can use the following to fill a pixel buffer:

NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,   //  NV12
                                      (__bridge CFDictionaryRef)pixelAttributes,
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    NSLog(@"Unable to create CVPixelBuffer: %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Copy the Y plane row by row. The destination rows may be padded,
// so step by the buffer's own bytes-per-row, not by width.
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (int i = 0; i < height; i++) {
    memcpy(yDestPlane + i * yDestStride, yData + i * yStride, width);
}

// Interleave the U and V planes into the single NV12 CbCr plane.
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (int i = 0; i < height / 2; i++) {
    unsigned char *uvRow = uvDestPlane + i * uvDestStride;
    for (int j = 0; j < width / 2; j++) {
        uvRow[2 * j]     = uData[j + i * uStride];   // Cb
        uvRow[2 * j + 1] = vData[j + i * vStride];   // Cr
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
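
The copy logic itself is plain C, so it can be exercised without CoreVideo. A minimal sketch (the function name and test buffers are hypothetical) of the U/V interleaving into an NV12-style CbCr plane:

```c
#include <stddef.h>

// Interleave separate U and V planes (each (width/2) x (height/2),
// with their own row strides) into one NV12 CbCr plane whose rows
// are uvDestStride bytes apart, producing Cb0 Cr0 Cb1 Cr1 ...
static void interleave_uv_nv12(unsigned char *uvDest, size_t uvDestStride,
                               const unsigned char *uData, int uStride,
                               const unsigned char *vData, int vStride,
                               int width, int height) {
    for (int i = 0; i < height / 2; i++) {
        unsigned char *row = uvDest + (size_t)i * uvDestStride;
        for (int j = 0; j < width / 2; j++) {
            row[2 * j]     = uData[j + i * uStride]; // Cb
            row[2 * j + 1] = vData[j + i * vStride]; // Cr
        }
    }
}
```

Keeping the destination stride as a parameter is what lets the same routine write into a padded CVPixelBuffer plane or a tightly packed test buffer.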

Now you can convert this to a CIImage:

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *tempContext = [CIContext contextWithOptions:nil];
CGImageRef coreImageRef = [tempContext createCGImage:coreImage
                                        fromRect:CGRectMake(0, 0, width, height)];

And to a UIImage, if that is what you need (the image orientation may vary depending on your input):

UIImage *myUIImage = [[UIImage alloc] initWithCGImage:coreImageRef
                                    scale:1.0
                                    orientation:UIImageOrientationUp];

Don't forget to release the variables:

CVPixelBufferRelease(pixelBuffer);
CGImageRelease(coreImageRef);
Answered 2020-05-27T07:00:29.187