
I'm capturing microphone audio during an ARSession, and I'd like to pass it to another VC and play it back after the capture has taken place, while the app is still running (and the audio is still in memory).

The audio is currently captured as individual CMSampleBuffers and accessed via the didOutputAudioSampleBuffer ARSessionDelegate method.
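For context, the capture side looks roughly like this (a minimal sketch; the class name and the recordedBuffers array are just illustrative, but providesAudioData does need to be enabled on the configuration for the callback to fire):

import UIKit
import ARKit

class CaptureViewController: UIViewController, ARSessionDelegate {

    let session = ARSession()
    var recordedBuffers: [CMSampleBuffer] = []   // kept in memory for later playback

    override func viewDidLoad() {
        super.viewDidLoad()
        session.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.providesAudioData = true   // enables the audio callback below
        session.run(configuration)
    }

    // Called repeatedly with chunks of microphone audio as CMSampleBuffers.
    func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
        recordedBuffers.append(audioSampleBuffer)
    }
}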

I've worked with audio files and AVAudioPlayer before, but CMSampleBuffer is new to me.

Is there a way to take the raw buffers as-is and play them back? If so, which classes enable that? Or do they need to be rendered/converted to some other format or file first?

This is the format description of the data in the buffer:

mediaType:'soun' 
    mediaSubType:'lpcm' 
    mediaSpecific: {
        ASBD: {
            mSampleRate: 44100.000000 
            mFormatID: 'lpcm' 
            mFormatFlags: 0xc 
            mBytesPerPacket: 2 
            mFramesPerPacket: 1 
            mBytesPerFrame: 2 
            mChannelsPerFrame: 1 
            mBitsPerChannel: 16     } 
        cookie: {(null)} 
        ACL: {Mono}
        FormatList Array: {
            Index: 0 
            ChannelLayoutTag: 0x640001 
            ASBD: {
            mSampleRate: 44100.000000 
            mFormatID: 'lpcm' 
            mFormatFlags: 0xc 
            mBytesPerPacket: 2 
            mFramesPerPacket: 1 
            mBytesPerFrame: 2 
            mChannelsPerFrame: 1 
            mBitsPerChannel: 16     }} 
    } 
    extensions: {(null)}

Any guidance is appreciated, since Apple's documentation isn't clear on this, and the related questions on SO deal more with live streaming of audio than with capture and subsequent playback.


2 Answers


It seems the answer is no: you can't simply save and play back the raw buffer audio; it needs to be converted into something more persistent first.

It looks like the main way to do this is to use AVAssetWriter to save the buffer data as an audio file, which can then be played back later with AVAudioPlayer.
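A rough sketch of how that could look (the class name, AAC output settings, and file type are assumptions for illustration, not a confirmed implementation):

import AVFoundation

final class AudioSampleWriter {

    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput

    init?(outputURL: URL) {
        guard let writer = try? AVAssetWriter(outputURL: outputURL, fileType: .m4a) else { return nil }

        // Compress the incoming LPCM to mono AAC at the capture sample rate (assumed 44.1 kHz).
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1
        ]
        let input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
        input.expectsMediaDataInRealTime = true

        guard writer.canAdd(input) else { return nil }
        writer.add(input)

        self.writer = writer
        self.input = input
    }

    // Call from the ARSessionDelegate audio callback with each CMSampleBuffer.
    func append(_ sampleBuffer: CMSampleBuffer) {
        if writer.status == .unknown {
            // Start the session at the first buffer's timestamp.
            writer.startWriting()
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
        if input.isReadyForMoreMediaData {
            _ = input.append(sampleBuffer)
        }
    }

    // Call once capture is finished; the file at outputURL can then be handed to AVAudioPlayer.
    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}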

Answered 2020-08-26T16:11:36.197

You can pass the microphone straight through to the audio engine while recording, with minimal latency:

let audioEngine = AVAudioEngine()
...
// Route the input node (the microphone) straight to the main mixer for live monitoring.
self.audioEngine.connect(self.audioEngine.inputNode,
    to: self.audioEngine.mainMixerNode, format: nil)
// start() throws, so it needs try (or try? if failures can be ignored).
try self.audioEngine.start()

If working with the sample buffers themselves is important, it can roughly be done by converting them to PCM buffers:

import AVFoundation

extension AVAudioPCMBuffer {

    /// Converts a CMSampleBuffer containing 16-bit integer LPCM (as delivered by
    /// ARSession's audio capture) into a mono, float32 AVAudioPCMBuffer.
    static func create(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {

        guard let description: CMFormatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
              let sampleRate: Float64 = description.audioStreamBasicDescription?.mSampleRate,
              let channelsPerFrame: UInt32 = description.audioStreamBasicDescription?.mChannelsPerFrame
        else { return nil }

        guard let blockBuffer: CMBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
            return nil
        }

        let samplesCount = CMSampleBufferGetNumSamples(sampleBuffer)

        // Target format: non-interleaved float32 mono at the source sample rate.
        guard let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                              sampleRate: sampleRate,
                                              channels: AVAudioChannelCount(1),
                                              interleaved: false),
              let buffer = AVAudioPCMBuffer(pcmFormat: audioFormat,
                                            frameCapacity: AVAudioFrameCount(samplesCount))
        else { return nil }
        buffer.frameLength = buffer.frameCapacity

        // Get a raw pointer to the sample data in the block buffer.
        var dataPointer: UnsafeMutablePointer<Int8>?
        CMBlockBufferGetDataPointer(blockBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                    totalLengthOut: nil, dataPointerOut: &dataPointer)

        guard var channel: UnsafeMutablePointer<Float> = buffer.floatChannelData?[0],
              let data = dataPointer else { return nil }

        // Reinterpret the bytes as 16-bit signed samples.
        var data16 = UnsafeRawPointer(data).assumingMemoryBound(to: Int16.self)

        // Copy the first channel, scaling Int16 -> Float32 in [-1, 1].
        for _ in 0..<samplesCount {
            channel.pointee = Float32(data16.pointee) / Float32(Int16.max)
            channel += 1
            // Skip over any remaining interleaved channels.
            data16 += Int(channelsPerFrame)
        }

        return buffer
    }
}


class BufferPlayer {

    let audioEngine = AVAudioEngine()
    let player = AVAudioPlayerNode()

    deinit {
        self.audioEngine.stop()
    }

    init(withBuffer: CMSampleBuffer) {

        self.audioEngine.attach(self.player)

        // Connect the player node to the mixer using the format of the converted buffer.
        self.audioEngine.connect(self.player,
                                 to: self.audioEngine.mainMixerNode,
                                 format: AVAudioPCMBuffer.create(from: withBuffer)!.format)

        _ = try? audioEngine.start()
    }

    func playEnqueue(buffer: CMSampleBuffer) {
        guard let bufferPCM = AVAudioPCMBuffer.create(from: buffer) else { return }

        // Scheduled buffers queue up on the player node and play back in order.
        self.player.scheduleBuffer(bufferPCM, completionHandler: nil)
        if !self.player.isPlaying { self.player.play() }
    }
}
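Usage could look something like this (a sketch assuming the captured CMSampleBuffers have been kept in memory, e.g. in a recordedBuffers array; PlaybackViewController is just a stand-in name for the second VC):

import AVFoundation

final class PlaybackViewController {

    private var bufferPlayer: BufferPlayer?

    func playRecording(_ recordedBuffers: [CMSampleBuffer]) {
        guard let first = recordedBuffers.first else { return }

        // The first buffer determines the format the player node is connected with.
        let player = BufferPlayer(withBuffer: first)
        recordedBuffers.forEach { player.playEnqueue(buffer: $0) }

        // Keep a strong reference so the engine keeps running during playback.
        bufferPlayer = player
    }
}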
Answered 2021-05-19T08:09:53.920