I'm using AudioConverter to convert the uncompressed CMSampleBuffers captured via AVCaptureSession into an AudioBufferList:
let packetDescriptionsPtr = UnsafeMutablePointer<AudioStreamPacketDescription>.allocate(capacity: 1)
AudioConverterFillComplexBuffer(
    converter,
    inputDataProc,
    Unmanaged.passUnretained(self).toOpaque(),
    &ioOutputDataPacketSize,
    outOutputData.unsafeMutablePointer,
    packetDescriptionsPtr
)
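For context, the inputDataProc passed above has roughly this shape (a simplified sketch — ConverterContext and its pcmBuffer property are placeholders standing in for my actual capture state, which copies PCM out of the captured sample buffer):

```swift
import AVFoundation
import AudioToolbox

// Placeholder for my actual capture state (hypothetical type).
final class ConverterContext {
    var pcmBuffer: AVAudioPCMBuffer?
}

// Simplified sketch of the AudioConverterComplexInputDataProc used above.
let inputDataProc: AudioConverterComplexInputDataProc = { _, ioNumberDataPackets, ioData, _, inUserData in
    let context = Unmanaged<ConverterContext>.fromOpaque(inUserData!).takeUnretainedValue()
    guard let pcm = context.pcmBuffer else {
        // No captured PCM waiting: tell the converter there is nothing to pull.
        ioNumberDataPackets.pointee = 0
        return noErr
    }
    // Hand the converter one buffer of uncompressed PCM frames.
    ioData.pointee.mNumberBuffers = 1
    ioData.pointee.mBuffers = pcm.audioBufferList.pointee.mBuffers
    ioNumberDataPackets.pointee = pcm.frameLength
    context.pcmBuffer = nil   // consumed
    return noErr
}
```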
Then I construct a CMSampleBuffer containing the compressed data, using packet descriptions like this:
CMAudioSampleBufferCreateWithPacketDescriptions(
    allocator: kCFAllocatorDefault,
    dataBuffer: nil,
    dataReady: false,
    makeDataReadyCallback: nil,
    refcon: nil,
    formatDescription: formatDescription!,
    sampleCount: Int(data.unsafePointer.pointee.mNumberBuffers),
    presentationTimeStamp: presentationTimeStamp,
    packetDescriptions: &packetDescriptions,
    sampleBufferOut: &sampleBuffer)
When I try to save the buffers with AVAssetWriter, I get the following error: -[AVAssetWriterInput appendSampleBuffer:] Cannot append sample buffer: First input buffer must have an appropriate kCMSampleBufferAttachmentKey_TrimDurationAtStart since the codec has encoder delay
So I decided to attach priming (trim) durations to the first three buffers, since every buffer has the same length:
if self.receivedAudioBuffers < 2 {
    let primingDuration = CMTimeMake(value: 1024, timescale: 44100)
    CMSetAttachment(sampleBuffer,
                    key: kCMSampleBufferAttachmentKey_TrimDurationAtStart,
                    value: CMTimeCopyAsDictionary(primingDuration, allocator: kCFAllocatorDefault),
                    attachmentMode: kCMAttachmentMode_ShouldNotPropagate)
    self.receivedAudioBuffers += 1
} else if self.receivedAudioBuffers == 2 {
    let primingDuration = CMTimeMake(value: 64, timescale: 44100)
    CMSetAttachment(sampleBuffer,
                    key: kCMSampleBufferAttachmentKey_TrimDurationAtStart,
                    value: CMTimeCopyAsDictionary(primingDuration, allocator: kCFAllocatorDefault),
                    attachmentMode: kCMAttachmentMode_ShouldNotPropagate)
    self.receivedAudioBuffers += 1
}
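The hard-coded split above (1024 + 1024 + 64 = 2112 samples) is the usual AAC encoder delay; as a sanity check I also considered asking the converter for its actual delay instead of hard-coding it (a sketch, assuming `converter` and the 44100 Hz sample rate from earlier):

```swift
import AudioToolbox
import CoreMedia

// Ask the converter for its actual encoder delay instead of assuming 2112.
// `converter` is the AudioConverterRef created earlier.
var primeInfo = AudioConverterPrimeInfo(leadingFrames: 0, trailingFrames: 0)
var size = UInt32(MemoryLayout<AudioConverterPrimeInfo>.size)
if AudioConverterGetProperty(converter,
                             kAudioConverterPropertyPrimeInfo,
                             &size,
                             &primeInfo) == noErr {
    // For AAC this is typically 2112 leading frames:
    // two full packets of 1024 plus 64 trimmed from the third packet.
    let trimDuration = CMTimeMake(value: Int64(primeInfo.leadingFrames),
                                  timescale: 44100)
    // ...attach via kCMSampleBufferAttachmentKey_TrimDurationAtStart as above.
}
```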
Now I no longer get that error, and appending the samples reports no errors either, but the audio doesn't play back in the recording, and it also breaks the whole video file (the timing information appears to be corrupted).
Is there something I'm missing here? How do I correctly append audio CMSampleBuffers?