11

Hey, I'm trying to access the raw data from the iPhone camera using AVCaptureSession. I'm following the guide provided by Apple (linked here).

The raw data from the sample buffer is in YUV format (am I correct here about the raw video frame format?). How can I get the data for the Y component directly from the raw data stored in the sample buffer?


4 Answers

22

When setting up the AVCaptureVideoDataOutput that returns the raw camera frames, you can set the format of the frames using code like the following:

[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

In this case a BGRA pixel format is specified (I used this to match a color format for an OpenGL ES texture). Each pixel in that format has one byte for blue, green, red, and alpha, in that order. Going with this makes it easy to pull out the color components, but you do sacrifice a little performance by needing the conversion from the camera-native YUV colorspace.
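As a quick illustration (not part of the original answer), once you have a locked pixel buffer inside the delegate callback shown further below, the individual components of a BGRA frame can be read out like this; x and y here are whatever pixel coordinates you're interested in:

// Sketch for a kCVPixelFormatType_32BGRA buffer; assumes the buffer is locked
// and that x/y are valid coordinates.
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
unsigned char *base = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
unsigned char *pixel = base + y * bytesPerRow + x * 4;  // 4 bytes per pixel
unsigned char blue  = pixel[0];
unsigned char green = pixel[1];
unsigned char red   = pixel[2];
unsigned char alpha = pixel[3];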

Other supported colorspaces are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange and kCVPixelFormatType_420YpCbCr8BiPlanarFullRange on newer devices, and kCVPixelFormatType_422YpCbCr8 on the iPhone 3G. The VideoRange or FullRange suffix simply indicates whether the bytes are returned between 16 - 235 for Y and 16 - 240 for UV, or the full 0 - 255 for each component.
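As a side note (my own sketch, not from the answer), if you receive video-range luma and want values spread over the full 0 - 255 range, a simple rescale along these lines does the job:

// Illustrative helper: stretch a video-range Y value (16 - 235) to full range (0 - 255).
static inline unsigned char FullRangeLuma(unsigned char videoRangeY) {
    int y = ((int)videoRangeY - 16) * 255 / 219;
    if (y < 0)   y = 0;
    if (y > 255) y = 255;
    return (unsigned char)y;
}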

I believe the default colorspace used by an AVCaptureVideoDataOutput instance is the YUV 4:2:0 planar colorspace (except on the iPhone 3G, where it's YUV 4:2:2 interleaved). This means that there are two planes of image data contained within the video frame, with the Y plane coming first. For every pixel in your resulting image, there is one byte for the Y value at that pixel.

You can get at this raw Y data by implementing something like this in your delegate callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Do something with the raw pixels here

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

You could then figure out the location in the frame data for each X, Y coordinate on the image and pull out the byte that corresponds to the Y component at that coordinate.
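As a sketch of that lookup (assuming one of the bi-planar formats and that the buffer is still locked; x and y are the coordinates you want):

// Plane 0 is the Y plane; rows may be padded, so index with bytes-per-row, not the width.
unsigned char *yPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
unsigned char luma = yPlane[y * yBytesPerRow + x];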

Apple's FindMyiCone sample from WWDC 2010 (accessible along with the session videos) shows how to process raw BGRA data from each frame. I also created a sample application, whose code you can download here, that performs color-based object tracking using live video from the iPhone's camera. Both show how to process raw pixel data, but neither works in the YUV colorspace.

Answered 2010-11-03T16:01:23.747
19

In addition to Brad's answer and your own code, you also want to consider the following:

Since your image has two separate planes, the function CVPixelBufferGetBaseAddress will not return the base address of a plane but rather the base address of an additional data structure. It's probably due to the current implementation that you get an address close enough to the first plane that you can see the image, but it's the reason the image is shifted and has garbage at the top left. The right way to receive the first plane is:

unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

A row in the image might be longer than the width of the image (due to rounding). That's why there are separate functions for getting the width and the number of bytes per row. You don't have this problem at the moment, but that might change with the next version of iOS. So your code should be:

size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
size_t size = bufferHeight * bytesPerRow;

unsigned char *pixel = (unsigned char *)malloc(size);

unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(pixel, rowBase, size);

Also note that your code will fail miserably on the iPhone 3G.
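A minimal guard for that case could check the pixel format before touching any planes (a sketch only; the constants are the ones from Apple's CoreVideo headers):

OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
if (format == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ||
    format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    // bi-planar: plane 0 holds the Y channel
} else if (format == kCVPixelFormatType_422YpCbCr8) {
    // interleaved '2vuy' (iPhone 3G): luma is every second byte
}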

Answered 2010-11-05T18:59:27.037
8

If you only need the luminance channel, I recommend against using the BGRA format, since it comes with a conversion overhead. Apple suggests using BGRA if you're doing rendering stuff, but you don't need it for extracting the luminance information. As Brad already mentioned, the most efficient format is the camera-native YUV format.
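If you'd rather request that native format explicitly instead of relying on the default, something along these lines should work (mirroring the BGRA example above; picking the full-range variant here is just my assumption, the video-range constant works the same way):

[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];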

However, extracting the right bytes from the sample buffer is a bit tricky, especially on the iPhone 3G with its interleaved YUV 422 format. So here is my code, which works fine with the iPhone 3G, 3GS, iPod touch 4 and iPhone 4S.

#pragma mark -
#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate Methods
#if !(TARGET_IPHONE_SIMULATOR)
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
{
    // get image buffer reference
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // extract needed informations from image buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    CGSize resolution = CGSizeMake(CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer));

    // variables for grayscaleBuffer 
    void *grayscaleBuffer = 0;
    size_t grayscaleBufferSize = 0;

    // the pixelFormat differs between iPhone 3G and later models
    OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);

    if (pixelFormat == '2vuy') { // iPhone 3G
        // kCVPixelFormatType_422YpCbCr8     = '2vuy',    
        /* Component Y'CbCr 8-bit 4:2:2, ordered Cb Y'0 Cr Y'1 */

        // copy every second byte (luminance bytes form Y-channel) to new buffer
        grayscaleBufferSize = bufferSize/2;
        grayscaleBuffer = malloc(grayscaleBufferSize);
        if (grayscaleBuffer == NULL) {
            NSLog(@"ERROR in %@:%@:%d: couldn't allocate memory for grayscaleBuffer!", NSStringFromClass([self class]), NSStringFromSelector(_cmd), __LINE__);
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return;
        }
        memset(grayscaleBuffer, 0, grayscaleBufferSize);
        unsigned char *sourceMemPos = (unsigned char *)baseAddress + 1;
        unsigned char *destinationMemPos = (unsigned char *)grayscaleBuffer;
        unsigned char *destinationEnd = (unsigned char *)grayscaleBuffer + grayscaleBufferSize;
        while (destinationMemPos < destinationEnd) {
            *destinationMemPos = *sourceMemPos;
            destinationMemPos += 1;
            sourceMemPos += 2;
        }
    }

    if (pixelFormat == '420v' || pixelFormat == '420f') {
        // kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', 
        // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange  = '420f',
        // Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).  
        // Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range (luma=[0,255] chroma=[1,255]).
        // baseAddress points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct
        // i.e.: Y-channel in this format is in the first third of the buffer!
        size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
        baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        grayscaleBufferSize = resolution.height * bytesPerRow;
        grayscaleBuffer = malloc(grayscaleBufferSize);
        if (grayscaleBuffer == NULL) {
            NSLog(@"ERROR in %@:%@:%d: couldn't allocate memory for grayscaleBuffer!", NSStringFromClass([self class]), NSStringFromSelector(_cmd), __LINE__);
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return;
        }
        memset(grayscaleBuffer, 0, grayscaleBufferSize);
        memcpy(grayscaleBuffer, baseAddress, grayscaleBufferSize);
    }

    // do whatever you want with the grayscale buffer
    ...

    // clean-up
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    free(grayscaleBuffer);
}
#endif
Answered 2011-11-02T15:24:37.630
4

This is simply the culmination of everyone else's hard work, above and in other threads, converted to Swift 3 for anyone who finds it useful.

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags.readOnly)

        let pixelFormatType = CVPixelBufferGetPixelFormatType(pixelBuffer)
        if pixelFormatType == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
           || pixelFormatType == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange {

            let bufferHeight = CVPixelBufferGetHeight(pixelBuffer)
            let bufferWidth = CVPixelBufferGetWidth(pixelBuffer)

            let lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
            let size = bufferHeight * lumaBytesPerRow
            let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
            let lumaByteBuffer = unsafeBitCast(lumaBaseAddress, to:UnsafeMutablePointer<UInt8>.self)

            let releaseDataCallback: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
                // https://developer.apple.com/reference/coregraphics/cgdataproviderreleasedatacallback
                // N.B. 'CGDataProviderRelease' is unavailable: Core Foundation objects are automatically memory managed
                return
            }

            if let dataProvider = CGDataProvider(dataInfo: nil, data: lumaByteBuffer, size: size, releaseData: releaseDataCallback) {
                let colorSpace = CGColorSpaceCreateDeviceGray()
                let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue)

                let cgImage = CGImage(width: bufferWidth, height: bufferHeight, bitsPerComponent: 8, bitsPerPixel: 8, bytesPerRow: lumaBytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo, provider: dataProvider, decode: nil, shouldInterpolate: false, intent: CGColorRenderingIntent.defaultIntent)

                let greyscaleImage = UIImage(cgImage: cgImage!)
                // do what you want with the greyscale image.
            }
        }

        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags.readOnly)
    }
}
Answered 2017-04-27T01:09:35.483