
I receive raw audio data packets in a callback function with the following properties:

  • 640 samples per packet
  • 32 kHz sample rate
  • Signed 16-bit PCM encoding
  • Single channel (mono)

In other words, each incoming audio packet is a pointer to an array of 640 raw audio samples, where each sample is 2 bytes (16 bits) of PCM-encoded data.
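For clarity, this is roughly the shape I assume the SDK's AudioRawData type has; the field names buffer, bufferLen, and sampleRate match what my callback code below uses, but the exact declaration is my guess, not the SDK's actual header:

// Hypothetical stand-in for the SDK's AudioRawData type (names assumed).
final class AudioRawData {
    var buffer: UnsafeMutablePointer<Int16>  // 640 contiguous 16-bit PCM samples
    var bufferLen: Int                       // samples per packet (640)
    var sampleRate: Int                      // 32000
    init(buffer: UnsafeMutablePointer<Int16>, bufferLen: Int, sampleRate: Int) {
        self.buffer = buffer
        self.bufferLen = bufferLen
        self.sampleRate = sampleRate
    }
}

// How I read one packet: bufferLen contiguous Int16 samples behind the pointer.
func samples(from rawData: AudioRawData) -> [Int16] {
    Array(UnsafeBufferPointer(start: rawData.buffer, count: rawData.bufferLen))
}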

I create an AVAudioFile before any raw audio data arrives. Once recording starts, I copy each incoming packet into an AVAudioPCMBuffer, use an AVAudioConverter to convert that buffer's processing format to the output format of the AVAudioEngine main mixer node, and write the converted AVAudioPCMBuffer to the AVAudioFile. The conversion is needed because the main mixer node's output format is 2 ch, 48000 Hz, Float32, non-interleaved. Finally, once recording stops, I play the AVAudioFile with AVAudioEngine.
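To make the two formats concrete, this is how I understand the two ends of the conversion (a minimal sketch; in my app, audioEngine is an existing AVAudioEngine property, and the printed output format is what I see on my device):

import AVFoundation

let audioEngine = AVAudioEngine()

// Format of the incoming packets (what I build in step 2 below).
let inputFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                sampleRate: 32000,
                                channels: 1,
                                interleaved: false)!

// Format the converter targets.
let outputFormat = audioEngine.mainMixerNode.outputFormat(forBus: 0)

print("input: ", inputFormat)   // 1 ch, 32000 Hz, Int16, non-interleaved
print("output:", outputFormat)  // 2 ch, 48000 Hz, Float32, non-interleaved (on my device)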

The problem: when I play back the AVAudioFile, I hear only white noise where I spoke into the microphone. However, the white noise lasts exactly as long as I spoke, which suggests I am close to a solution but not quite there.

My code, in Swift 5, is as follows:

1. Create the AVAudioFile

func createAudioFile() {
    let fileMgr = FileManager.default
    let dirPaths = fileMgr.urls(for: .cachesDirectory, in: .userDomainMask)
    var recordSettings: [String: Any] = [:]
    recordSettings[AVFormatIDKey] = kAudioFormatLinearPCM
    recordSettings[AVAudioFileTypeKey] = kAudioFileCAFType
    recordSettings[AVSampleRateKey] = 44100
    recordSettings[AVNumberOfChannelsKey] = 2
    self.soundFileUrl = dirPaths[0].appendingPathComponent("recording.pcm")
    do {
        audioFile = try AVAudioFile(forWriting: soundFileUrl!,
                                    settings: recordSettings,
                                    commonFormat: .pcmFormatFloat32,
                                    interleaved: false)
    } catch let error as NSError {
        print("error:", error.localizedDescription)
    }
}
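For what it is worth, here is a quick sanity check I can run right after creating the file; given the settings above, I expect the processing format to come out as 2 ch, 44100 Hz, Float32, non-interleaved:

if let file = audioFile {
    // The format AVAudioFile expects buffers to be in when writing.
    print("file processing format:", file.processingFormat)
}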

2. Process the incoming raw audio data in the callback

func onAudioRawDataReceived(_ rawData: AudioRawData) {
    // When audio recording starts
    if self.saveAudio == true {
        do {
            // Wrap the incoming samples in an Int16, mono, 32 kHz buffer
            let channels = 1
            let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                       sampleRate: Double(rawData.sampleRate),
                                       channels: AVAudioChannelCount(channels),
                                       interleaved: false)!
            let audioFileBuffer = AVAudioPCMBuffer(pcmFormat: format,
                                                   frameCapacity: AVAudioFrameCount(rawData.bufferLen / channels))
            let int16ChannelData = audioFileBuffer?.int16ChannelData!
            for i in 0..<rawData.bufferLen {
                int16ChannelData![0][i] = Int16(rawData.buffer.pointee)
                rawData.buffer += 1
            }
            audioFileBuffer!.frameLength = AVAudioFrameCount(rawData.bufferLen / channels)

            // Convert the buffer to the main mixer node's output format
            let outputFormat = audioEngine.mainMixerNode.outputFormat(forBus: 0)
            converter = AVAudioConverter(from: format, to: outputFormat)!
            let outputAudioFileBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat,
                                                         frameCapacity: AVAudioFrameCount(rawData.bufferLen / channels))
            converter.convert(to: outputAudioFileBuffer!, error: nil) { inNumPackets, outStatus in
                outStatus.pointee = .haveData
                return audioFileBuffer
            }

            // Write the converted buffer to the file
            try audioFile!.write(from: outputAudioFileBuffer!)
            print("success")
        } catch let error as NSError {
            print("Error:", error.localizedDescription)
        }
        isRecordingAudio = true
    } else {
        // When audio recording stops
        if isRecordingAudio == true {
            playPCM(soundFileUrl!.absoluteString)
            isRecordingAudio = false
        }
    }
}
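One detail I noticed while writing this up, from simple arithmetic on the packet properties above (I do not know whether it matters):

// Each packet covers 640 samples at 32 kHz, i.e. 20 ms of audio:
let packetDuration = 640.0 / 32000.0         // 0.02 s
// At the mixer's 48 kHz output rate, the same 20 ms corresponds to:
let framesAt48kHz = packetDuration * 48000.0 // 960 frames
// ...yet I allocate the converted buffer with a capacity of only 640 frames.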

3. Play the audio file after recording stops

func playPCM(_ filefullpathstr: String) {
    do {
        let audioFile = try AVAudioFile(forReading: URL(string: filefullpathstr)!)
        let audioFormat = audioFile.processingFormat
        let audioFrameCount = UInt32(audioFile.length)
        let audioFileBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat,
                                               frameCapacity: audioFrameCount)
        // Read the whole file into a single buffer
        try audioFile.read(into: audioFileBuffer!, frameCount: audioFrameCount)
        let mainMixer = audioEngine.mainMixerNode
        audioEngine.attach(audioFilePlayer)
        audioEngine.connect(audioFilePlayer, to: mainMixer, format: audioFileBuffer?.format)
        audioEngine.prepare()
        try audioEngine.start()
        audioFilePlayer.play()
        audioFilePlayer.scheduleBuffer(audioFileBuffer!, at: nil, options: [], completionHandler: {
            print("scheduled buffer")
        })
    } catch let error as NSError {
        print("Error:", error.localizedDescription)
    }
}
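For completeness, my understanding is that the player could also schedule the file directly instead of reading it into a buffer first. Here is a sketch of that variant, using the same audioEngine and audioFilePlayer properties as above (I currently use the buffer approach):

func playFileDirectly(_ url: URL) {
    do {
        let file = try AVAudioFile(forReading: url)
        audioEngine.attach(audioFilePlayer)
        audioEngine.connect(audioFilePlayer, to: audioEngine.mainMixerNode,
                            format: file.processingFormat)
        audioEngine.prepare()
        try audioEngine.start()
        // scheduleFile reads and schedules the entire file for playback;
        // the completion handler fires once the file has been consumed.
        audioFilePlayer.scheduleFile(file, at: nil) {
            print("finished playing file")
        }
        audioFilePlayer.play()
    } catch {
        print("Error:", error.localizedDescription)
    }
}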

I am very new to audio processing on iOS with Swift, and I would greatly appreciate any advice or hints on solving this problem.

Many thanks in advance for your help.
