
I am trying to append frame.capturedDepthData.depthDataMap to an AVAssetWriterInputPixelBufferAdaptor, but the append never succeeds.

My adaptor is configured as follows:

NSError* error;
videoWriter = [AVAssetWriter.alloc initWithURL:outputURL fileType:AVFileTypeMPEG4 error:&error];
if (error)
{
    NSLog(@"Error creating video writer: %@", error);
    return;
}

NSDictionary* videoSettings = @{
        AVVideoCodecKey: AVVideoCodecTypeH264,
        AVVideoWidthKey: @640,
        AVVideoHeightKey: @360
};

writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
writerInput.transform = CGAffineTransformMakeRotation(M_PI_2);

NSDictionary* sourcePixelBufferAttributesDictionary = @{
        (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_DepthFloat32)
};

adaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

if ([videoWriter canAddInput:writerInput])
{
    [videoWriter addInput:writerInput];
}
else
{
    NSLog(@"Error: cannot add writerInput to videoWriter.");
}

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

Then, in each session:(ARSession*)session didUpdateFrame:(ARFrame*)frame callback, I try to append the depth pixel buffer like this:

if (!adaptor.assetWriterInput.readyForMoreMediaData)
{
    NSLog(@"Asset input writer is not ready for more media data!");
}
else
{
    if (frame.capturedDepthData.depthDataMap != NULL)
    {
        frameCount++;
        CVPixelBufferRef pixelRef = frame.capturedDepthData.depthDataMap;
        BOOL result = [adaptor appendPixelBuffer:frame.capturedDepthData.depthDataMap withPresentationTime:CMTimeMake(frameCount, 15)];
    }
}

But the result of appending the pixel buffer is always FALSE.

Now, if I append frame.capturedImage to a properly configured adaptor, that always succeeds, and it is how I currently produce a video file from the front-facing camera.
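For reference, this is roughly what that working setup looks like (a sketch only; imageWriterInput and imageAdaptor are placeholder names, and the 420YpCbCr8BiPlanarFullRange format is what ARKit commonly delivers for the camera image):

// Sketch of the adaptor that successfully accepts frame.capturedImage.
// imageWriterInput / imageAdaptor are hypothetical names for illustration.
NSDictionary* imageBufferAttributes = @{
        (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};

imageAdaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:imageWriterInput
                                   sourcePixelBufferAttributes:imageBufferAttributes];

// In the ARSession callback, appending the camera image works as expected:
[imageAdaptor appendPixelBuffer:frame.capturedImage
           withPresentationTime:CMTimeMake(frameCount, 15)];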

But how can I make a video from the depth pixel buffers?


1 Answer


Here is an example of how to convert the depthDataMap pixel buffer into a valid pixel buffer that can be appended to the adaptor:

- (void) session:(ARSession*)session didUpdateFrame:(ARFrame*)frame
{
    CVPixelBufferRef depthDataMap = frame.capturedDepthData.depthDataMap;

    if (!depthDataMap)
    {
        // No depth data available for this frame.
        return;
    }

    // Wrap the Float32 depth buffer in a CIImage and re-render it into a
    // pixel buffer format the adaptor can accept.
    CIImage* image = [CIImage imageWithCVPixelBuffer:depthDataMap];
    CVPixelBufferRef buffer = NULL;
    CVReturn err = PixelBufferCreateFromImage(image, &buffer);

    if (err != kCVReturnSuccess || buffer == NULL)
    {
        return;
    }

    frameDepthCount++;
    [adaptorDepth appendPixelBuffer:buffer
               withPresentationTime:CMTimeMake(frameDepthCount, 15)]; // 15 is the frame rate
    CVPixelBufferRelease(buffer);
}


CVReturn PixelBufferCreateFromImage(CIImage* ciImage, CVPixelBufferRef* outBuffer) {
    // Note: creating a CIContext is expensive; in production you would keep
    // a single context around rather than creating one per frame.
    CIContext* context = [CIContext context];

    NSDictionary* attributes = @{ (NSString*) kCVPixelBufferCGBitmapContextCompatibilityKey: @YES,
                                  (NSString*) kCVPixelBufferCGImageCompatibilityKey: @YES
    };

    // Allocate a 32ARGB pixel buffer with the same dimensions as the depth image.
    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault,
                                       (size_t) ciImage.extent.size.width, (size_t) ciImage.extent.size.height,
                                       kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef _Nullable) (attributes),
                                       outBuffer);
    if (err)
    {
        return err;
    }

    if (*outBuffer)
    {
        // Render the CIImage into the newly created pixel buffer.
        [context render:ciImage toCVPixelBuffer:*outBuffer];
    }

    return kCVReturnSuccess;
}

The key is the PixelBufferCreateFromImage function, which creates a valid pixel buffer from a CIImage backed by the original depth pixel buffer.
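Since the converted buffers are kCVPixelFormatType_32ARGB rather than kCVPixelFormatType_DepthFloat32, the adaptor receiving them should declare a matching pixel format. A minimal sketch (adaptorDepth and depthWriterInput are placeholder names):

// Sketch of the adaptor that accepts the converted 32ARGB buffers.
// depthWriterInput / adaptorDepth are hypothetical names for illustration.
NSDictionary* depthBufferAttributes = @{
        (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32ARGB)
};

adaptorDepth = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:depthWriterInput
                                   sourcePixelBufferAttributes:depthBufferAttributes];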

Answered 2020-03-24T23:11:30.397