
I am trying to create a copy of the CMSampleBuffer returned by captureOutput in an AVCaptureAudioDataOutputSampleBufferDelegate.

The problem I am having is that frames coming from the delegate method captureOutput:didOutputSampleBuffer:fromConnection: get dropped after I keep them retained in a CFArray for a long time.

Obviously, I need to create deep copies of the incoming buffers for further processing. I also know that CMSampleBufferCreateCopy only creates a shallow copy.
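To illustrate the failure mode, here is a minimal Swift sketch of the naive approach (the `retainedBuffers` array is hypothetical, for illustration only): a shallow copy still references the capture pipeline's memory, so holding on to it starves the pipeline.

import AVFoundation
import CoreMedia

final class AudioCaptureHandler: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    // Hypothetical storage, for illustration only.
    private var retainedBuffers: [CMSampleBuffer] = []

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        var shallowCopy: CMSampleBuffer?
        // CMSampleBufferCreateCopy is a shallow copy: the new sample buffer
        // still references the same underlying block buffer memory.
        CMSampleBufferCreateCopy(allocator: kCFAllocatorDefault,
                                 sampleBuffer: sampleBuffer,
                                 sampleBufferOut: &shallowCopy)
        if let copy = shallowCopy {
            // Retaining many of these keeps the capture pipeline's buffers
            // alive, and the pipeline starts dropping frames.
            retainedBuffers.append(copy)
        }
    }
}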

There are a few related questions on SO, but none of them helps me use the 12-parameter CMSampleBufferCreate function correctly:

  CMSampleBufferRef copyBuffer;

  CMBlockBufferRef data = CMSampleBufferGetDataBuffer(sampleBuffer);
  CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
  CMItemCount itemCount = CMSampleBufferGetNumSamples(sampleBuffer);

  CMTime duration = CMSampleBufferGetDuration(sampleBuffer);
  CMTime presentationStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
  CMSampleTimingInfo timingInfo;
  timingInfo.duration = duration;
  timingInfo.presentationTimeStamp = presentationStamp;
  timingInfo.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer);

  size_t sampleSize = CMBlockBufferGetDataLength(data);
  CMBlockBufferRef sampleData;

  // Note: CMBlockBufferCopyDataBytes copies bytes into a raw destination
  // buffer; it does not create a new CMBlockBufferRef, so passing
  // &sampleData here is a misuse of the API.
  if (CMBlockBufferCopyDataBytes(data, 0, sampleSize, &sampleData) != kCMBlockBufferNoErr) {
    VLog(@"error during copying sample buffer");
  }

  // Here I tried both the data and sampleData CMBlockBuffer instances, but no success
  OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault, data, isDataReady, nil, nil, formatDescription, itemCount, 1, &timingInfo, 1, &sampleSize, &copyBuffer);

  if (!self.sampleBufferArray)  {
    self.sampleBufferArray = CFArrayCreateMutable(NULL, 0, &kCFTypeArrayCallBacks);
    // EXC_BAD_ACCESS crash when trying to add sampleBuffer to the array
    CFArrayAppendValue(self.sampleBufferArray, copyBuffer);
  } else  {
    CFArrayAppendValue(self.sampleBufferArray, copyBuffer);
  }

How do you deep copy an audio CMSampleBuffer? Feel free to use any language (Swift/Objective-C) in your answer.


2 Answers


Here is the working solution I eventually implemented. I sent this snippet to Apple Developer Technical Support and asked them to check whether it is a correct way to copy an incoming sample buffer. The basic idea is to copy the AudioBufferList, create a new CMSampleBuffer, and then attach the AudioBufferList to it; CMSampleBufferSetDataBufferFromAudioBufferList copies the audio bytes into a new block buffer owned by the new sample buffer, which is what makes the copy deep.

AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer = NULL;

//Create an AudioBufferList containing the data from the CMSampleBuffer,
//and a CMBlockBuffer which references the data in that AudioBufferList.
OSStatus status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
if (status != noErr) {
  NSLog(@"Error in CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer");
  return;
}

//Copy the AudioBufferList struct itself. The struct only holds pointers into
//the retained block buffer; the audio bytes are copied later by
//CMSampleBufferSetDataBufferFromAudioBufferList.
NSData *bufferData = [NSData dataWithBytes:&audioBufferList length:sizeof(audioBufferList)];
const void *copyBufferData = [bufferData bytes];

CMSampleBufferRef copyBuffer = NULL;

/* Format Description */

AudioStreamBasicDescription audioFormat = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef) CMSampleBufferGetFormatDescription(sampleBuffer));

CMFormatDescriptionRef format = NULL;
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format);
if (status != noErr)
{
  NSLog(@"Error in CMAudioFormatDescriptionCreate");
  CFRelease(blockBuffer);
  return;
}

/* Create sample buffer */
CMItemCount framesCount = CMSampleBufferGetNumSamples(sampleBuffer);
//Note: the per-sample duration of CMTimeMake(1, 44100) assumes 44.1 kHz audio.
CMSampleTimingInfo timing = {.duration = CMTimeMake(1, 44100), .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer), .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)};

status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, NO, NULL, NULL, format, framesCount, 1, &timing, 0, NULL, &copyBuffer);

if (status != noErr) {
  NSLog(@"Error in CMSampleBufferCreate");
  CFRelease(format);
  CFRelease(blockBuffer);
  return;
}

/* Copy BufferList to Sample Buffer */
AudioBufferList receivedAudioBufferList;
memcpy(&receivedAudioBufferList, copyBufferData, sizeof(receivedAudioBufferList));

//Creates a CMBlockBuffer containing a copy of the data from the
//AudioBufferList.
status = CMSampleBufferSetDataBufferFromAudioBufferList(copyBuffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, &receivedAudioBufferList);
if (status != noErr) {
  NSLog(@"Error in CMSampleBufferSetDataBufferFromAudioBufferList");
  CFRelease(format);
  CFRelease(blockBuffer);
  return;
}

//Release the temporaries; copyBuffer now owns a deep copy of the audio data.
CFRelease(format);
CFRelease(blockBuffer);

The reply from Code Level Support:

This code looks fine (though you will want to add some additional error checking). I have successfully tested it in an app that implements the AVCaptureAudioDataOutput delegate captureOutput:didOutputSampleBuffer:fromConnection: method to capture and record audio. The captured audio I get when using this deep-copy code appears to be identical to what I get when using the provided sample buffer directly (without the deep copy).

Apple Developer Technical Support

Answered 2017-10-31T11:05:54.577

I could not find a decent answer for doing this in Swift. Here is an extension:

extension CMSampleBuffer {
    func deepCopy() -> CMSampleBuffer? {
        // CMSampleBuffer has no `data` property; pull the raw audio bytes out
        // of the sample buffer's block buffer instead (dataBytes() copies the
        // block buffer's contents into a Data value).
        guard let formatDesc = CMSampleBufferGetFormatDescription(self),
              let srcBlockBuffer = CMSampleBufferGetDataBuffer(self),
              let data = try? srcBlockBuffer.dataBytes() else {
                  return nil
              }
        let nFrames = CMSampleBufferGetNumSamples(self)
        let pts = CMSampleBufferGetPresentationTimeStamp(self)
        let dataBuffer = data.withUnsafeBytes { (buffer) -> CMBlockBuffer? in
            var blockBuffer: CMBlockBuffer?
            let length: Int = data.count
            guard CMBlockBufferCreateWithMemoryBlock(
                allocator: kCFAllocatorDefault,
                memoryBlock: nil,
                blockLength: length,
                blockAllocator: nil,
                customBlockSource: nil,
                offsetToData: 0,
                dataLength: length,
                flags: 0,
                blockBufferOut: &blockBuffer) == noErr else {
                    print("Failed to create block")
                    return nil
                }
            guard CMBlockBufferReplaceDataBytes(
                with: buffer.baseAddress!,
                blockBuffer: blockBuffer!,
                offsetIntoDestination: 0,
                dataLength: length) == noErr else {
                    print("Failed to move bytes for block")
                    return nil
                }
            return blockBuffer
        }
        guard let dataBuffer = dataBuffer else {
            return nil
        }
        var newSampleBuffer: CMSampleBuffer?
        CMAudioSampleBufferCreateReadyWithPacketDescriptions(
            allocator: kCFAllocatorDefault,
            dataBuffer: dataBuffer,
            formatDescription: formatDesc,
            sampleCount: nFrames,
            presentationTimeStamp: pts,
            packetDescriptions: nil,
            sampleBufferOut: &newSampleBuffer
        )
        return newSampleBuffer
    }
}
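A possible call site for the extension, assuming a delegate setup like the one in the question (the `copies` array is hypothetical):

import AVFoundation
import CoreMedia

final class AudioRecorder: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    // Hypothetical storage for the deep copies.
    private var copies: [CMSampleBuffer] = []

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Keep only the deep copy; the delegate's buffer returns to the pool.
        if let copy = sampleBuffer.deepCopy() {
            copies.append(copy)
        }
    }
}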

Answered 2021-10-22T19:24:05.733