
I've written this simple program. All it does is record into a buffer and play it back at the same time. With a sample rate of 44100 Hz everything works fine, but if I change the sample rate to 16000 or 8000 it produces no sound at all, or perhaps some inaudible white noise. Why does this happen?

How can I record at a different sample rate?

Here is the code I tried:

import UIKit
import AVFoundation

class ViewController: UIViewController {

    var engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let audioSession = AVAudioSession.sharedInstance()
    let newSrc: UnsafeMutablePointer<Float>! = nil

    override func viewDidLoad() {
        super.viewDidLoad()

        let audioSession = AVAudioSession.sharedInstance()
        print(audioSession.sampleRate) // prints 44100 Hz here, because the internal mic is still in use

        do {
            try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, with: .allowBluetooth)
            try audioSession.setMode(AVAudioSessionModeDefault)
            try audioSession.setActive(true)
        } catch {
        }

        print(audioSession.sampleRate) // prints 16000 Hz here if my Bluetooth earbuds are connected, otherwise 44100 Hz

        let input = engine.inputNode
        let bus = 0
        let mixer = AVAudioMixerNode() // a mixer is created because it is needed to set the audio format

        engine.attach(mixer)
        engine.attach(player)
        engine.connect(input, to: mixer, format: input.outputFormat(forBus: 0))

        let inputFormat = input.inputFormat(forBus: bus)

        engine.connect(player, to: engine.mainMixerNode, format: input.inputFormat(forBus: 0))

        let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100.0, channels: 1, interleaved: false)

        // Tap the mixer and feed the captured buffers straight back to the player.
        mixer.installTap(onBus: bus, bufferSize: 1024, format: fmt) { (buffer, time) -> Void in
            print(buffer.format)
            print(buffer.floatChannelData)
            print(buffer.format.streamDescription.pointee.mBytesPerFrame)
            self.player.scheduleBuffer(buffer)
            if self.player.isPlaying {
                print("true")
            }
        }

        engine.prepare()
        do {
            try engine.start()
            player.play()
        } catch {
            print(error)
        }
    }
}

2 Answers


As discussed here, neither AVAudioEngine's mixer nodes nor its taps will do the rate conversion for you. In fact, in your case, instead of throwing or logging an error, the mixer tap silently (get it?) gives you silence.

Since you can't use an AVAudioMixerNode for rate conversion, you can replace it with the handy AVAudioConverter. Make sure you set the correct output format on the AVAudioPlayerNode, because

"When playing buffers, there is an implicit assumption that the buffers are at the same sample rate as the node's output format."

If you don't, you may hear gaps and/or pitch-shifted audio (for example, 8 kHz samples interpreted at 44.1 kHz would come out roughly 5.5 times too fast and correspondingly higher in pitch).

Like this:

let input = engine.inputNode
let bus = 0
let inputFormat = input.inputFormat(forBus: bus)

engine.attach(player)

// Connect the player at the target 8 kHz format so the buffers scheduled on it match its output format.
let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 8000, channels: 1, interleaved: false)!
engine.connect(player, to: engine.mainMixerNode, format: fmt)

let converter = AVAudioConverter(from: inputFormat, to: fmt)!

// Tap the input at its native format and convert each buffer before scheduling it.
input.installTap(onBus: bus, bufferSize: 1024, format: inputFormat) { (buffer, time) -> Void in
    let inputCallback: AVAudioConverterInputBlock = { inNumPackets, outStatus in
        outStatus.pointee = AVAudioConverterInputStatus.haveData
        return buffer
    }

    // Size the output buffer by the ratio of the two sample rates.
    let convertedBuffer = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: AVAudioFrameCount(fmt.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate))!

    var error: NSError? = nil
    let status = converter.convert(to: convertedBuffer, error: &error, withInputFrom: inputCallback)
    assert(status != .error)

    print(convertedBuffer.format)
    print(convertedBuffer.floatChannelData)
    print(convertedBuffer.format.streamDescription.pointee.mBytesPerFrame)
    self.player.scheduleBuffer(convertedBuffer)
}
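
The snippet above only installs the tap; it does not show starting the engine. Below is a minimal usage sketch, not part of the original answer: it simply mirrors the start-up code from the question and assumes the same engine and player instances plus the already-activated play-and-record AVAudioSession.

// Minimal start-up sketch: engine and player are assumed to be the instances
// the tap above was installed with, and the AVAudioSession from the question
// is assumed to already be active.
engine.prepare()
do {
    try engine.start()
    player.play() // plays the converted buffers as the tap schedules them
} catch {
    print("Failed to start AVAudioEngine: \(error)")
}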
answered 2018-05-20T12:22:34.877

This solution worked for me:

let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)!
inputNode.installTap(onBus: 0, bufferSize: 1024, format: fmt) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
    self.recognitionRequest?.append(buffer)
}
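
If the input hardware is not actually running at 44.1 kHz (for example the Bluetooth route mentioned in the question), a tap requesting a 44.1 kHz format can hit the same silent-failure problem described in the first answer. Below is a sketch, not taken from either answer, that taps at the node's native format and converts to 44.1 kHz before appending; it assumes recognitionRequest is an SFSpeechAudioBufferRecognitionRequest from the Speech framework, which this answer does not state.

import AVFoundation
import Speech

// Sketch: tap the input at its native format and resample to 44.1 kHz with
// AVAudioConverter before feeding the speech recognition request.
func installRecognitionTap(on inputNode: AVAudioInputNode,
                           appendingTo recognitionRequest: SFSpeechAudioBufferRecognitionRequest) {
    let inputFormat = inputNode.inputFormat(forBus: 0)
    let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)!
    guard let converter = AVAudioConverter(from: inputFormat, to: fmt) else { return }

    inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        // Size the output buffer by the ratio of the two sample rates.
        let capacity = AVAudioFrameCount(fmt.sampleRate) * buffer.frameLength / AVAudioFrameCount(inputFormat.sampleRate)
        guard let converted = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: capacity) else { return }

        var error: NSError? = nil
        let status = converter.convert(to: converted, error: &error) { _, outStatus in
            outStatus.pointee = .haveData
            return buffer
        }
        if status != .error {
            recognitionRequest.append(converted)
        }
    }
}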
answered 2020-01-31T07:45:20.543