I capture from the iPhone camera using kCVPixelFormatType_420YpCbCr8BiPlanarFullRange so I can process the grayscale plane (plane 0) faster. I keep a number of frames in memory to create a video later. Until now I was creating a grayscale video, so I only stored the plane containing the luma (plane 0).

Now I have to store both planes and create a color video from them. To store the frames I use something like this:

bytesInFrame = width * height * 2; // 2 bytes per pixel, is that correct?
frameData = (unsigned char *) malloc(bytesInFrame * numberOfFrames);
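For reference, this is a minimal sketch of how I could copy both planes out of the capture buffer into that storage (assuming the frame arrives as a CVPixelBufferRef called pixelBuffer and frameIndex is the slot being filled; both names are just for illustration). Since the CbCr plane is only half the height, the two planes together occupy 1.5 × width × height bytes of each slot:

#import <CoreVideo/CoreVideo.h>

// Copy one captured frame (both planes) into the pre-allocated storage.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

size_t width  = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
unsigned char *dest = frameData + frameIndex * bytesInFrame;

// Plane 0: full-resolution luma (Y), one byte per pixel.
unsigned char *yBase = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yBytesPerRow  = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (size_t row = 0; row < height; row++) {
    memcpy(dest + row * width, yBase + row * yBytesPerRow, width);
}

// Plane 1: interleaved CbCr, half the height, width bytes of samples per row.
unsigned char *cbcrBase = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t cbcrBytesPerRow  = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
unsigned char *cbcrDest = dest + width * height;
for (size_t row = 0; row < height / 2; row++) {
    memcpy(cbcrDest + row * width, cbcrBase + row * cbcrBytesPerRow, width);
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);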

The function I use to create a UIImage from the grayscale buffer:

- (UIImage *) convertBitmapGrayScaleToUIImage:(unsigned char *) buffer 
                                withWidth:(int) width
                               withHeight:(int) height {


size_t bufferLength = width * height * 1;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 8;
size_t bytesPerRow = 1 * width;

CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
if(colorSpaceRef == NULL) {
    DLog(@"Error allocating color space");
    CGDataProviderRelease(provider);
    return nil;
}

CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;

CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

CGImageRef iref = CGImageCreate(width, 
                                height, 
                                bitsPerComponent, 
                                bitsPerPixel, 
                                bytesPerRow, 
                                colorSpaceRef, 
                                bitmapInfo, 
                                provider,   // data provider
                                NULL,       // decode
                                YES,            // should interpolate
                                renderingIntent);

// 8-bit grayscale destination bitmap, one byte per pixel
unsigned char *pixels = (unsigned char *)malloc(bufferLength);

if(pixels == NULL) {
    DLog(@"Error: Memory not allocated for bitmap");
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);       
    return nil;
}

CGContextRef context = CGBitmapContextCreate(pixels, 
                                             width, 
                                             height, 
                                             bitsPerComponent, 
                                             bytesPerRow, 
                                             colorSpaceRef, 
                                             bitmapInfo); 

if(context == NULL) {
    DLog(@"Error context not created");
    free(pixels);
    pixels = NULL; // prevent the double free at the end of the method
}

UIImage *image = nil;
if(context) {

    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);

    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    image = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);   
    CGContextRelease(context);  
}

CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(iref);
CGDataProviderRelease(provider);

if(pixels) {
    free(pixels);
}   
return image;
}

I have seen this question, which is similar to what I want to achieve: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange frame to UIImage conversion, but I think it adds an extra step to my conversion.
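For context, that approach boils down to a per-pixel YCbCr → RGB pass over the stored planes, roughly like the sketch below (full-range BT.601 coefficients assumed; yPlane and cbcrPlane would point into one of my stored frames). That per-pixel loop is the extra step I would like to avoid:

#import <UIKit/UIKit.h>

// Sketch: convert stored biplanar 4:2:0 planes to RGBX and wrap the result in a UIImage.
- (UIImage *)imageFromYPlane:(const unsigned char *)yPlane
                   cbcrPlane:(const unsigned char *)cbcrPlane
                       width:(int)width
                      height:(int)height {
    size_t bytesPerRow = width * 4;
    unsigned char *rgba = (unsigned char *)malloc(bytesPerRow * height);
    if (rgba == NULL) return nil;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int Y  = yPlane[y * width + x];
            // One CbCr pair covers a 2x2 block of luma samples.
            int cb = cbcrPlane[(y / 2) * width + (x & ~1)]     - 128;
            int cr = cbcrPlane[(y / 2) * width + (x & ~1) + 1] - 128;

            int r = Y + (int)(1.402f * cr);
            int g = Y - (int)(0.344f * cb + 0.714f * cr);
            int b = Y + (int)(1.772f * cb);

            unsigned char *p = rgba + y * bytesPerRow + x * 4;
            p[0] = (unsigned char)MAX(0, MIN(255, r));
            p[1] = (unsigned char)MAX(0, MIN(255, g));
            p[2] = (unsigned char)MAX(0, MIN(255, b));
            p[3] = 255; // padding byte, ignored (alpha skipped)
        }
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgba, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(rgba);
    return image;
}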

Is there a way to create a color UIImage directly from the buffers? I would appreciate some pointers.
