I'm trying to get a better understanding of the AVFoundation framework and the various Core xxxx frameworks, so I decided to attempt a simple video capture and see if I could output the frames to the UI as images. I looked at the RosyWriter code and the docs, but no answers. So:

I have the standard capture session code to add the inputs and outputs. The following is relevant to the question:

// Move the buffer processing off the main queue
dispatch_queue_t bufferProcessingQueue = dispatch_queue_create("theBufferQueue", NULL);
[self.theOutput setSampleBufferDelegate:self queue:bufferProcessingQueue];
dispatch_release(bufferProcessingQueue);

Then the delegate:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pb = CMSampleBufferGetImageBuffer(sampleBuffer);

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pb];
    CGImageRef ref = [self.theContext createCGImage:ciImage fromRect:ciImage.extent];

    dispatch_async(dispatch_get_main_queue(), ^{
        self.testBufferImage.image = [UIImage imageWithCGImage:ref scale:1.0 orientation:UIImageOrientationRight];
    });
}

Questions:

1 - I'm guessing that, as I did above, we should always set the delegate to run on a separate queue rather than the main queue. Correct?

2 - Relatedly, inside the delegate method, any call that touches the UI must be dispatched back onto the main queue, as I did. Correct?

3 - When I run this code, after about 5-10 seconds I get a "Received memory warning" message and the app shuts down. What could cause this?


1 Answer


1) Generally, yes, you should. You could run it on the main queue, but that can cause issues with UI responsiveness, among other things. (Note that the queue you pass to setSampleBufferDelegate:queue: must be a serial queue; the one you create with dispatch_queue_create(..., NULL) is serial, so you're fine.)

2) Correct.

3) You are creating a series of CGImageRefs. Where are you releasing them?
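
A minimal sketch of one way to plug the leak, reusing the names from your question (self.theContext, self.testBufferImage): release the CGImageRef inside the main-queue block once the UIImage has been created, since imageWithCGImage: keeps its own reference.

    CGImageRef ref = [self.theContext createCGImage:ciImage fromRect:ciImage.extent];

    dispatch_async(dispatch_get_main_queue(), ^{
        self.testBufferImage.image = [UIImage imageWithCGImage:ref scale:1.0 orientation:UIImageOrientationRight];
        // Balance the createCGImage:fromRect: above; without this, one CGImage leaks per frame
        CGImageRelease(ref);
    });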

For performance reasons you should probably use OpenGL if you need fine control over the rendering of the video. Otherwise you can use AVCaptureVideoPreviewLayer for an easy way to get a preview.
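
For example, a minimal preview-layer sketch (self.captureSession and self.previewView are assumed names for your existing session and a host view):

    // Attach a preview layer to an existing capture session
    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    previewLayer.frame = self.previewView.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.previewView.layer addSublayer:previewLayer];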

answered 2012-09-28T19:47:41.620