
I am new to iOS programming and multimedia, and I was going through the sample project named RosyWriter provided by Apple at this link. There I saw that the code contains a method named captureOutput:didOutputSampleBuffer:fromConnection, given below:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);

    if ( connection == videoConnection ) {

        // Get framerate
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
        [self calculateFramerateAtTimestamp:timestamp];

        // Get frame dimensions (for onscreen display)
        if (self.videoDimensions.width == 0 && self.videoDimensions.height == 0)
            self.videoDimensions = CMVideoFormatDescriptionGetDimensions( formatDescription );

        // Get buffer type
        if ( self.videoType == 0 )
            self.videoType = CMFormatDescriptionGetMediaSubType( formatDescription );

        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Synchronously process the pixel buffer to de-green it.
        [self processPixelBuffer:pixelBuffer];

        // Enqueue it for preview.  This is a shallow queue, so if image processing is taking too long,
        // we'll drop this frame for preview (this keeps preview latency low).
        OSStatus err = CMBufferQueueEnqueue(previewBufferQueue, sampleBuffer);
        if ( !err ) {
            dispatch_async(dispatch_get_main_queue(), ^{
                CMSampleBufferRef sbuf = (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(previewBufferQueue);
                if (sbuf) {
                    CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(sbuf);
                    [self.delegate pixelBufferReadyForDisplay:pixBuf];
                    CFRelease(sbuf);
                }
            });
        }
    }

    CFRetain(sampleBuffer);
    CFRetain(formatDescription);
    dispatch_async(movieWritingQueue, ^{

        if ( assetWriter ) {

            BOOL wasReadyToRecord = (readyToRecordAudio && readyToRecordVideo);

            if (connection == videoConnection) {

                // Initialize the video input if this is not done yet
                if (!readyToRecordVideo)
                    readyToRecordVideo = [self setupAssetWriterVideoInput:formatDescription];

                // Write video data to file
                if (readyToRecordVideo && readyToRecordAudio)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
            }
            else if (connection == audioConnection) {

                // Initialize the audio input if this is not done yet
                if (!readyToRecordAudio)
                    readyToRecordAudio = [self setupAssetWriterAudioInput:formatDescription];

                // Write audio data to file
                if (readyToRecordAudio && readyToRecordVideo)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeAudio];
            }

            BOOL isReadyToRecord = (readyToRecordAudio && readyToRecordVideo);
            if ( !wasReadyToRecord && isReadyToRecord ) {
                recordingWillBeStarted = NO;
                self.recording = YES;
                [self.delegate recordingDidStart];
            }
        }
        CFRelease(sampleBuffer);
        CFRelease(formatDescription);
    });
}

Here a function named pixelBufferReadyForDisplay is called, and it expects a parameter of type CVPixelBufferRef.

Prototype of pixelBufferReadyForDisplay:

- (void)pixelBufferReadyForDisplay:(CVPixelBufferRef)pixelBuffer; 

But in the code above, when this function is called, it is passed the variable pixBuf, which is of type CVImageBufferRef.

So my question is: do we need any function or typecast to convert a CVImageBufferRef to a CVPixelBufferRef, or is this done implicitly by the compiler?

Thanks.


1 Answer


If you search the Xcode documentation for CVPixelBufferRef, you will find the following:

typedef CVImageBufferRef CVPixelBufferRef;

So CVPixelBufferRef is simply a synonym for CVImageBufferRef. They are interchangeable, and no cast or conversion function is needed.
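
As a minimal sketch (not part of RosyWriter; handleSampleBuffer is a hypothetical helper), this shows the result of CMSampleBufferGetImageBuffer being used directly wherever a CVPixelBufferRef is expected. The CFGetTypeID check is purely defensive: a video data output delivers pixel buffers in practice.

#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>

// Sketch only: because CVPixelBufferRef is a typedef of CVImageBufferRef,
// the assignment below compiles without any cast.
static void handleSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (imageBuffer == NULL)
        return;

    // Direct assignment; the compiler sees one and the same underlying type.
    CVPixelBufferRef pixelBuffer = imageBuffer;

    // Optional runtime sanity check that this image buffer really is a
    // pixel buffer (as opposed to some other CVImageBuffer variant).
    if (CFGetTypeID(imageBuffer) == CVPixelBufferGetTypeID()) {
        NSLog(@"Pixel buffer is %zux%zu",
              CVPixelBufferGetWidth(pixelBuffer),
              CVPixelBufferGetHeight(pixelBuffer));
    }
}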

You are looking at some pretty hairy code. RosyWriter and another sample app called "Chromakey" do some fairly low-level processing of pixel buffers. If you are new to iOS development and new to multimedia, you may not want to dig this deep, this fast. It's a bit like a first-year medical student trying to perform a heart-lung transplant.
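
To give a flavor of that low-level processing, here is a hedged sketch of the kind of "de-green" pass the sample's processPixelBuffer: comment describes. It assumes a kCVPixelFormatType_32BGRA buffer and is illustrative only, not RosyWriter's actual implementation:

#import <CoreVideo/CoreVideo.h>

// Illustrative sketch (assumes a 32BGRA pixel buffer); not RosyWriter's
// actual processPixelBuffer: method.
static void zeroGreenChannel(CVPixelBufferRef pixelBuffer)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    unsigned char *base = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    for (size_t row = 0; row < height; row++) {
        unsigned char *pixel = base + row * bytesPerRow;
        for (size_t col = 0; col < width; col++) {
            pixel[1] = 0;   // zero the G component of a BGRA pixel
            pixel += 4;
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

Note that the lock/unlock pair around CVPixelBufferGetBaseAddress is required: the base address is only valid while the buffer is locked.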
