
In the Cocoa application I'm currently coding, I take snapshot images (NSImage objects) from a Quartz Composer renderer, and I want to encode them into a QTMovie at 720×480, 25 fps, with the H.264 codec, using the addImage: method. Here is the corresponding piece of code:

qRenderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(720,480) colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:[QCComposition compositionWithFile:qcPatchPath]]; // define an "offscreen" Quartz composition renderer with the right image size


imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys:
              @"avc1", QTAddImageCodecType, // use the H.264 codec
              nil];

qtMovie = [[QTMovie alloc] initToWritableFile: outputVideoFile error:NULL]; // initialize the output QT movie object

long fps = 25;
frameNum = 0;

NSTimeInterval renderingTime = 0;
NSTimeInterval frameInc = (1./fps);
NSTimeInterval myMovieDuration = 70;
NSImage * myImage;
while (renderingTime <= myMovieDuration){
    if(![qRenderer renderAtTime: renderingTime arguments:NULL])
        NSLog(@"Rendering failed at time %.3fs", renderingTime);
    myImage = [qRenderer snapshotImage];
    [qtMovie addImage:myImage forDuration: QTMakeTimeWithTimeInterval(frameInc) withAttributes:imageAttrs];
    [myImage release];
    frameNum ++;
    renderingTime = frameNum * frameInc;
}
[qtMovie updateMovieFile];
[qRenderer release];
[qtMovie release]; 

It works, but my application isn't able to do this in real time on my new MacBook Pro, even though I know QuickTime Broadcaster can encode images in real time in H.264, at even higher quality than mine, on the same computer.
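(An aside on the quality gap: the attributes dictionary above only sets the codec type. QTKit's addImage:forDuration:withAttributes: also accepts a QTAddImageCodecQuality entry; the variant below is only a sketch of that, assuming the codecHighQuality constant from QuickTime's ImageCompression.h.)

imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys:
              @"avc1", QTAddImageCodecType,                                       // H.264 codec
              [NSNumber numberWithLong:codecHighQuality], QTAddImageCodecQuality, // quality hint
              nil];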

So why? What's the problem here? Is this a hardware-management issue (multi-core threading, GPU, ...) or am I missing something? Let me say up front that I'm new to the Apple development world (two weeks of practice), including Objective-C, Cocoa, Xcode, QuickTime, the Quartz Composer libraries, and so on.

Thanks for your help.


1 Answer


AVFoundation is a more efficient way to render a Quartz Composer animation into an H.264 video stream.


#import <AVFoundation/AVFoundation.h> // AVAssetWriter and friends
#import <Quartz/Quartz.h>             // QCComposition / QCRenderer
#import <unistd.h>                    // unlink()

size_t width = 640;
size_t height = 480;

const char *outputFile = "/tmp/Arabesque.mp4";

QCComposition *composition = [QCComposition compositionWithFile:@"/System/Library/Screen Savers/Arabesque.qtz"];
QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(width, height)
                                                      colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:composition];

unlink(outputFile);
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)] fileType:AVFileTypeMPEG4 error:NULL];

NSDictionary *videoSettings = @{ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : @(width), AVVideoHeightKey : @(height) };
AVAssetWriterInput* writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

[videoWriter addInput:writerInput];
[writerInput release];

AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:NULL];

int framesPerSecond = 30;
int totalDuration = 30;
int totalFrameCount = framesPerSecond * totalDuration;

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

__block long frameNumber = 0;

dispatch_queue_t workQueue = dispatch_queue_create("com.example.work-queue", DISPATCH_QUEUE_SERIAL);

NSLog(@"Starting.");
// Pull frames on the work queue as fast as the writer input can accept them.
[writerInput requestMediaDataWhenReadyOnQueue:workQueue usingBlock:^{
    while ([writerInput isReadyForMoreMediaData]) {
        NSTimeInterval frameTime = (float)frameNumber / framesPerSecond;
        if (![renderer renderAtTime:frameTime arguments:NULL]) {
            NSLog(@"Rendering failed at time %.3fs", frameTime);
            break;
        }

        // -createSnapshotImageOfType: follows the Core Foundation create rule,
        // so we own the returned buffer and must release it after appending.
        CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
        [pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
        CFRelease(frame);

        frameNumber++;
        if (frameNumber >= totalFrameCount) {
            [writerInput markAsFinished];
            [videoWriter finishWriting];
            [videoWriter release];
            [renderer release];
            NSLog(@"Rendered %ld frames.", frameNumber);
            break;
        }

    }
}];
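A side note on the teardown: the synchronous -finishWriting used above was the current API when this was written, but it has since been deprecated (as of OS X 10.9) in favor of an asynchronous variant. A sketch of the replacement, assuming the cleanup can move into the completion handler:

[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
    // Runs once the file is completely written; cleanup would go here.
    NSLog(@"Finished writing, status = %ld", (long)videoWriter.status);
}];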

In my testing, this was around twice as fast as the QTKit code you posted. The biggest improvement appears to come from the H.264 encoding being handed off to the GPU rather than being done in software. From a quick look at a profile, the remaining bottlenecks seem to be the rendering of the composition itself and reading the rendered data back from the GPU into a pixel buffer. Obviously the complexity of your composition will have some impact on this.
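If you want to see how that split looks for your own composition, a crude way to measure it (my own sketch, using CACurrentMediaTime() from QuartzCore, not part of the original profiling) is to timestamp the two stages inside the loop:

CFTimeInterval t0 = CACurrentMediaTime();
[renderer renderAtTime:frameTime arguments:NULL];   // stage 1: composition rendering
CFTimeInterval t1 = CACurrentMediaTime();
CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
CFTimeInterval t2 = CACurrentMediaTime();           // stage 2: GPU-to-CPU readback
NSLog(@"render: %.1f ms, readback: %.1f ms", (t1 - t0) * 1000.0, (t2 - t1) * 1000.0);
CFRelease(frame);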

It may be possible to optimize this further by using QCRenderer's ability to provide snapshots as CVOpenGLBufferRefs, which could keep each frame's data on the GPU rather than reading it back to hand to the encoder. I didn't look too far into that, though.
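I haven't tried it, but the starting point would presumably be asking the renderer for a GL-backed snapshot instead (the "CVOpenGLBuffer" type string below is my assumption based on the snapshot API). Note that AVAssetWriterInputPixelBufferAdaptor only accepts CVPixelBufferRefs, so feeding the encoder from a GL-backed buffer would need extra plumbing:

// Untested sketch: keep the rendered frame on the GPU as a CVOpenGLBuffer.
CVOpenGLBufferRef glFrame = (CVOpenGLBufferRef)[renderer createSnapshotImageOfType:@"CVOpenGLBuffer"];
if (glFrame) {
    // ... hand-off to the encoder would go here ...
    CVOpenGLBufferRelease(glFrame);
}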

Answered 2013-01-22T11:55:03.540