
I'm working on a project where I generate a video from UIImages, using code I found here, and I've been trying to optimize it for a few days now (for about 300 images it takes around 5 minutes on the simulator, and it simply crashes on the device because of memory).

I'll start with the code that works today (I'm using ARC):

-(void) writeImageAsMovie:(NSArray *)array toPath:(NSString*)path size:(CGSize)size duration:(int)duration 
{
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie error:&error];
    NSParameterAssert(videoWriter);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                               [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                               nil];
    AVAssetWriterInput* writerInput = [AVAssetWriterInput
                                   assetWriterInputWithMediaType:AVMediaTypeVideo
                                   outputSettings:videoSettings];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:nil];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];


    //Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    CVPixelBufferRef buffer = NULL;
    buffer = [self newPixelBufferFromCGImage:[[self.frames objectAtIndex:0] CGImage]];

    CVPixelBufferPoolCreatePixelBuffer (NULL, adaptor.pixelBufferPool, &buffer);

    [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];

    dispatch_queue_t mediaInputQueue =  dispatch_queue_create("mediaInputQueue", NULL);
    int frameNumber = [self.frames count];

    [writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
        NSLog(@"Entering block with frames: %i", [self.frames count]);
        if(!self.frames || [self.frames count] == 0)
        {
            return;
        }
        int i = 1;
        while (1)
        {
            if (i == frameNumber) 
            {
                break;
            }
            if ([writerInput isReadyForMoreMediaData]) 
            {
                freeMemory();
                NSLog(@"inside for loop %d (%i)",i, [self.frames count]);
                UIImage *image = [self.frames objectAtIndex:i];
                CGImageRef imageRef = [image CGImage];
                CVPixelBufferRef sampleBuffer = [self newPixelBufferFromCGImage:imageRef];
                CMTime frameTime = CMTimeMake(1, TIME_STEP);

                CMTime lastTime=CMTimeMake(i, TIME_STEP); 

                CMTime presentTime=CMTimeAdd(lastTime, frameTime);       

                if (sampleBuffer) 
                {
                    [adaptor appendPixelBuffer:sampleBuffer withPresentationTime:presentTime];
                    i++;
                    CVPixelBufferRelease(sampleBuffer);
                } 
                else 
                {
                    break;
                }
            }
        }

        [writerInput markAsFinished];
        [videoWriter finishWriting];
        self.frames = nil;

        CVPixelBufferPoolRelease(adaptor.pixelBufferPool);

    }];
}

Now, the function that gets the pixel buffer, which I'm struggling with:

- (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
{
    CVPixelBufferRef pxbuffer = NULL;

    int width = CGImageGetWidth(image)*2;
    int height = CGImageGetHeight(image)*2;

    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                   [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey,
                                   [NSNumber numberWithInt:width], (__bridge NSString *)kCVPixelBufferWidthKey,
                                   [NSNumber numberWithInt:height], (__bridge NSString *)kCVPixelBufferHeightKey,
                                   nil];
    CVPixelBufferPoolRef pixelBufferPool; 
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef) attributes, &pixelBufferPool);
    NSParameterAssert(theError == kCVReturnSuccess);
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                             height, 8, width*4, rgbColorSpace, 
                                             kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, width, 
                                       height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

First strange thing: as you can see in this function, I have to multiply the width and height by 2, otherwise the resulting video is completely messed up, and I can't understand why (I can post screenshots if that helps; the pixels seem to come from my image, but the width is wrong, and there's a big black square on the bottom half of the video).

The other problem is that it takes a huge amount of memory. I think the pixel buffers aren't being released properly, but I can't see why.
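
For reference, here is a minimal sketch of the lifecycle I believe is intended, assuming every pixel buffer create has to be balanced by a release, and that the adaptor's existing pool is reused instead of a new pool being created per frame (the helper name is hypothetical; the CoreVideo/AVFoundation calls are real):

    #import <AVFoundation/AVFoundation.h>

    // Hypothetical helper: appends one frame using the adaptor's own pool
    // and balances the create with a release.
    static BOOL appendFrameUsingAdaptorPool(AVAssetWriterInputPixelBufferAdaptor *adaptor,
                                            CMTime presentTime)
    {
        // The adaptor's pool is only non-NULL once startWriting and
        // startSessionAtSourceTime: have been called on the writer.
        CVPixelBufferRef pixelBuffer = NULL;
        CVReturn result = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                             adaptor.pixelBufferPool,
                                                             &pixelBuffer);
        if (result != kCVReturnSuccess || pixelBuffer == NULL)
            return NO;

        // ... lock the buffer, draw the frame into it, unlock ...

        BOOL appended = [adaptor appendPixelBuffer:pixelBuffer
                              withPresentationTime:presentTime];
        CVPixelBufferRelease(pixelBuffer);   // balances the create above
        return appended;
    }

Measured against that, newPixelBufferFromCGImage above creates a brand-new pool on every call and never releases it, and in writeImageAsMovie the first buffer it returns is immediately overwritten by the pool call without being released, which may be where the memory is going.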

Finally, it's slow, and I have two ideas that could improve it, which I haven't been able to put to use:

  • The first is to avoid going through UIImage to create my pixel buffers, since I generate the UIImage myself from (uint8_t *) data anyway. I tried to use `CVPixelBufferCreateWithBytes`, but it doesn't work. Here is how I tried:

    OSType pixFmt = CVPixelBufferGetPixelFormatType(pxbuffer);
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height, pixFmt, self.composition.srcImage.resultImageData, width*2, NULL, NULL, (__bridge CFDictionaryRef) attributes, &pxbuffer);
    

(The parameters are the same as in the function above; my image data is encoded at 16 bits per pixel, and I couldn't find a good OSType value to pass to the function.) If anyone knows how to use it (or can tell me whether 16-bit/pixel data is simply not possible here), it would help me avoid a really useless conversion; see the first sketch after this list.

  • The second is that I'd like to avoid kCVPixelFormatType_32ARGB for my video. I figured something with fewer bits per pixel would be faster, but when I tried (I went through all the kCVPixelFormatType_16XXXXX formats, with a context created with 5 bits per component and kCGImageAlphaNoneSkipFirst), either it crashes or the resulting video contains nothing (with kCVPixelFormatType_16BE555); the second sketch below shows the combination I mean.
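
For the first idea, here is a minimal sketch of how I understand `CVPixelBufferCreateWithBytes` is meant to wrap 16-bit data, assuming the pixels are little-endian RGB555 (`kCVPixelFormatType_16LE555`); the helper name and the RGB555 assumption are mine, the CoreVideo calls are real:

    #import <CoreVideo/CoreVideo.h>

    // Hypothetical helper: wraps existing 16 bits-per-pixel data in a
    // CVPixelBuffer without copying. With a NULL release callback the buffer
    // does not own the bytes, so `pixels` must stay valid until the buffer
    // is released.
    static CVPixelBufferRef pixelBufferWrapping16bppData(void *pixels,
                                                         size_t width,
                                                         size_t height)
    {
        CVPixelBufferRef pxbuffer = NULL;
        CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                       width,
                                                       height,
                                                       kCVPixelFormatType_16LE555, // assumes RGB555, little-endian
                                                       pixels,
                                                       width * 2,  // bytesPerRow: 2 bytes per pixel
                                                       NULL,       // release callback: none, caller owns the bytes
                                                       NULL,       // release refcon
                                                       NULL,       // no extra attributes
                                                       &pxbuffer);
        return (status == kCVReturnSuccess) ? pxbuffer : NULL;
    }

Even if this succeeds, appendPixelBuffer: can still reject pixel formats the H.264 encoder does not accept, which could be why my attempts appeared to do nothing.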
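And for the second idea, a sketch of the matching Core Graphics context, assuming the combination Quartz documents as supported for 16 bpp (5 bits per component with kCGImageAlphaNoneSkipFirst); the kCGBitmapByteOrder16Little flag is my addition, to keep the context's byte order consistent with a kCVPixelFormatType_16LE555 buffer rather than the big-endian 16BE555 I tried:

    // Sketch: a 16 bits-per-pixel drawing context over the pixel buffer's
    // memory. pxdata, width, height and image are as in
    // newPixelBufferFromCGImage above.
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 width,
                                                 height,
                                                 5,          // bits per component
                                                 width * 2,  // bytesPerRow: 2 bytes per pixel
                                                 rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder16Little);
    // Quartz returns NULL when it rejects a bitsPerComponent/bitmapInfo
    // combination, so a NULL check here separates "unsupported format"
    // from a broken video pipeline.
    if (context != NULL)
    {
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(rgbColorSpace);

A byte-order mismatch between what the context writes and the buffer's declared OSType (little-endian pixels in a buffer tagged kCVPixelFormatType_16BE555, for example) would also garble or blank the frames, which might explain the empty video.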

I know I'm asking a lot of questions in a single post, but I'm somewhat lost in this code; I've tried so many combinations, and none of them have worked...


1 Answer


I have to multiply the width and height by 2, otherwise, the result video is all messed up, and I can't understand why

Points versus pixels? High-DPI Retina screens have twice as many pixels per point.
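
A minimal sketch of the distinction, assuming your frames are @2x images (the numbers are only illustrative):

    UIImage *image = [self.frames objectAtIndex:0];

    CGFloat pointWidth = image.size.width;                // points, e.g. 160
    size_t  pixelWidth = CGImageGetWidth(image.CGImage);  // pixels, e.g. 320 on a @2x image

    // pixelWidth == pointWidth * image.scale
    // If the writer's `size` comes from image.size (points) while the frames
    // are drawn from the CGImage (pixels), each drawn row is twice as wide as
    // the writer expects and only part of the buffer gets filled -- consistent
    // with the wrong width and the black lower half described in the question.

Using CGImageGetWidth/CGImageGetHeight (pixels) consistently, for both the videoSettings size and the pixel buffers, should make the hard-coded *2 unnecessary.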

answered 2016-02-11T16:05:28.247