
I am reading an input file and using offline manual rendering mode; I want to perform amplitude modulation and write the result to an output file.

For testing I generate a pure sine wave, which works fine for frequencies below 6,000 Hz. For higher frequencies (my goal is roughly 20,000 Hz) the signal, and therefore the output file when listened to, is distorted, and the spectrum ends at 8,000 Hz: it is no longer a pure tone but shows several peaks between 0 and 8,000 Hz.

Here is my code snippet:

    let outputFile: AVAudioFile

    do {
        let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        let outputURL = documentsURL.appendingPathComponent("output.caf")
        outputFile = try AVAudioFile(forWriting: outputURL, settings: sourceFile.fileFormat.settings)
    } catch {
        fatalError("Unable to open output audio file: \(error).")
    }

    var sampleTime: Float32 = 0

    while engine.manualRenderingSampleTime < sourceFile.length {
        do {
            let frameCount = sourceFile.length - engine.manualRenderingSampleTime
            let framesToRender = min(AVAudioFrameCount(frameCount), buffer.frameCapacity)
            
            let status = try engine.renderOffline(framesToRender, to: buffer)
            
            switch status {
            
            case .success:
                // The data rendered successfully. Write it to the output file.
                let sampleRate: Float = Float(mixer.outputFormat(forBus: 0).sampleRate)

                let modulationFrequency: Float = 20000.0
                
                for i in 0..<Int(buffer.frameLength) {
                    let val = sinf(2.0 * .pi * modulationFrequency * Float(sampleTime) / Float(sampleRate))
                    // TODO: perform modulation later
                    buffer.floatChannelData?.pointee[Int(i)] = val
                    sampleTime = sampleTime + 1.0
                }

                try outputFile.write(from: buffer)
                
            case .insufficientDataFromInputNode:
                // Applicable only when using the input node as one of the sources.
                break
                
            case .cannotDoInCurrentContext:
                // The engine couldn't render in the current render call.
                // Retry in the next iteration.
                break
                
            case .error:
                // An error occurred while rendering the audio.
                fatalError("The manual rendering failed.")
            @unknown default:
                fatalError("unknown error")
            }
        } catch {
            fatalError("The manual rendering failed: \(error).")
        }
    }
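
For reference, a quick debugging sketch to compare the formats involved (`sourceFile`, `mixer`, and `outputFile` are the variables from the snippet above):

    // Debugging sketch: compare the sample rates used in the chain.
    let fileRate = sourceFile.fileFormat.sampleRate              // rate of the file on disk
    let processingRate = sourceFile.processingFormat.sampleRate  // rate AVAudioFile uses when reading into buffers
    let mixerRate = mixer.outputFormat(forBus: 0).sampleRate     // rate the engine renders at
    let outputRate = outputFile.fileFormat.sampleRate            // rate the output file is written at
    print("file: \(fileRate), processing: \(processingRate), mixer: \(mixerRate), output: \(outputRate)")
    // If these values differ, the written buffers can end up resampled or
    // interpreted at the wrong rate somewhere in the chain.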

My question: is something wrong in my code? Or does anyone know how to produce an output file containing a higher-frequency sine wave?

I suspect the manual rendering mode is not fast enough to handle the higher frequencies.


Update: In the meantime I analyzed the output file with Audacity. The upper image shows the waveform for 1,000 Hz, the lower one the waveform for 20,000 Hz: [waveform screenshots]

When I zoom in, I see the following: [zoomed-in screenshot]

Comparing the spectra of the two output files, I get the following: [spectrum screenshot]

Strangely, the amplitude approaches zero as the frequency increases. I also see more frequencies in the second spectrum.

A new question related to these results is whether the following algorithm is correct:

// Process the audio in `renderBuffer` here
for i in 0..<Int(renderBuffer.frameLength) {
    let val = sinf(1000.0 * Float(index) * 2 * .pi / Float(sampleRate))
    renderBuffer.floatChannelData?.pointee[i] = val
    index += 1
}

I did check the sample rate, which is 48,000 Hz, and I know that the original signal can be faithfully reconstructed when the sampling frequency is greater than twice the maximum frequency of the sampled signal.
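
As a rough sanity check (a standalone sketch): at 48,000 Hz a 20,000 Hz sine gets only about 2.4 samples per cycle, so the waveform Audacity draws can look jagged even when the spectrum contains a single clean peak:

// Standalone sketch: samples per cycle for each test tone at 48 kHz.
let sampleRate = 48_000.0
for frequency in [1_000.0, 6_000.0, 20_000.0] {
    print("\(Int(frequency)) Hz: \(sampleRate / frequency) samples per cycle")
}
// Prints: 1000 Hz: 48.0, 6000 Hz: 8.0, 20000 Hz: 2.4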

Update 2:

I changed the settings as follows:

    settings[AVFormatIDKey] = kAudioFormatAppleLossless
    settings[AVAudioFileTypeKey] = kAudioFileCAFType
    settings[AVSampleRateKey] = readBuffer.format.sampleRate
    settings[AVNumberOfChannelsKey] = 1
    settings[AVLinearPCMIsFloatKey] = (readBuffer.format.commonFormat == .pcmFormatInt32)
    settings[AVSampleRateConverterAudioQualityKey] = AVAudioQuality.max
    settings[AVLinearPCMBitDepthKey] = 32
    settings[AVEncoderAudioQualityKey] = AVAudioQuality.max

Now the quality of the output signal is better, but still not perfect. I get higher amplitudes, but there is always more than one frequency in the spectrum analyzer. Maybe a workaround could be to apply a high-pass filter?

In the meantime I also used a kind of SignalGenerator that streams the processed buffers (containing the sine wave) directly to the speaker, and in that case the output is perfect. I think routing the signal into a file is what causes these problems.


1 Answer


The speed of the manual rendering mode is not the issue, since speed is largely irrelevant in a manual rendering context.

Here is skeleton code for manually rendering from a source file to an output file:

// Open the input file
let file = try! AVAudioFile(forReading: URL(fileURLWithPath: "/tmp/test.wav"))

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

engine.attach(player)

engine.connect(player, to:engine.mainMixerNode, format: nil)

// Run the engine in manual rendering mode using chunks of 512 frames
let renderSize: AVAudioFrameCount = 512

// Use the file's processing format as the rendering format
let renderFormat = AVAudioFormat(commonFormat: file.processingFormat.commonFormat, sampleRate: file.processingFormat.sampleRate, channels: file.processingFormat.channelCount, interleaved: true)!
let renderBuffer = AVAudioPCMBuffer(pcmFormat: renderFormat, frameCapacity: renderSize)!

try! engine.enableManualRenderingMode(.offline, format: renderFormat, maximumFrameCount: renderBuffer.frameCapacity)

try! engine.start()
player.play()

// The render format is also the output format
let output = try! AVAudioFile(forWriting: URL(fileURLWithPath: "/tmp/foo.wav"), settings: renderFormat.settings, commonFormat: renderFormat.commonFormat, interleaved: renderFormat.isInterleaved)

// Read using a buffer sized to produce `renderSize` frames of output
let readBuffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: renderSize)!

// Process the file
while true {
    do {
        // Processing is finished if all frames have been read
        if file.framePosition == file.length {
            break
        }

        try file.read(into: readBuffer)
        player.scheduleBuffer(readBuffer, completionHandler: nil)

        let result = try engine.renderOffline(readBuffer.frameLength, to: renderBuffer)

        // Process the audio in `renderBuffer` here

        // Write the audio
        try output.write(from: renderBuffer)
        if result != .success {
            break
        }
    }
    catch {
        break
    }
}

player.stop()
engine.stop()
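
The "Process the audio in `renderBuffer` here" comment is where a processing step such as the amplitude modulation from the question would go. A minimal sketch of such a step, assuming 32-bit float samples (`amplitudeModulate` and its parameters are illustrative names):

import AVFoundation

/// Multiplies every sample in `buffer` by a sine carrier (straight amplitude modulation).
/// `phase` is passed inout so the carrier stays continuous across successive render buffers.
func amplitudeModulate(_ buffer: AVAudioPCMBuffer, frequency: Float, phase: inout Float) {
    guard let channels = buffer.floatChannelData else { return }  // 32-bit float formats only
    let sampleRate = Float(buffer.format.sampleRate)
    let channelCount = Int(buffer.format.channelCount)
    let interleaved = buffer.format.isInterleaved
    let phaseIncrement = 2.0 * Float.pi * frequency / sampleRate

    for frame in 0..<Int(buffer.frameLength) {
        // One carrier value per frame; wrap the phase to limit precision loss.
        let carrier = sinf(phase)
        phase += phaseIncrement
        if phase > 2.0 * Float.pi { phase -= 2.0 * Float.pi }

        for channel in 0..<channelCount {
            if interleaved {
                // Interleaved: all channels share channels[0]; each frame occupies `stride` consecutive samples.
                channels[0][frame * buffer.stride + channel] *= carrier
            } else {
                // Deinterleaved: one pointer per channel.
                channels[channel][frame] *= carrier
            }
        }
    }
}

With `var phase: Float = 0` declared before the while loop, it could be called in place of the processing comment as `amplitudeModulate(renderBuffer, frequency: 20_000, phase: &phase)`, keeping the carrier continuous across the 512-frame chunks.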

Here is a snippet showing how to use the same sample rate throughout the engine:

// Replace:
//engine.connect(player, to:engine.mainMixerNode, format: nil)

// With:
let busFormat = AVAudioFormat(standardFormatWithSampleRate: file.fileFormat.sampleRate, channels: file.fileFormat.channelCount)

engine.disconnectNodeInput(engine.outputNode, bus: 0)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: busFormat)

engine.connect(player, to:engine.mainMixerNode, format: busFormat)
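
Optionally, a quick programmatic check that every link now reports the file's sample rate (a sketch; adapt to your setup):

// Sanity check: player, mixer, and output node should all use the file's sample rate.
assert(player.outputFormat(forBus: 0).sampleRate == file.fileFormat.sampleRate)
assert(engine.mainMixerNode.outputFormat(forBus: 0).sampleRate == file.fileFormat.sampleRate)
assert(engine.outputNode.inputFormat(forBus: 0).sampleRate == file.fileFormat.sampleRate)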

Verify that the sample rate is the same everywhere:

NSLog("%@", engine)
________ GraphDescription ________
AVAudioEngineGraph 0x7f8194905af0: initialized = 0, running = 0, number of nodes = 3

     ******** output chain ********

     node 0x600001db9500 {'auou' 'ahal' 'appl'}, 'U'
         inputs = 1
             (bus0, en1) <- (bus0) 0x600001d80b80, {'aumx' 'mcmx' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

     node 0x600001d80b80 {'aumx' 'mcmx' 'appl'}, 'U'
         inputs = 1
             (bus0, en1) <- (bus0) 0x600000fa0200, {'augn' 'sspl' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
         outputs = 1
             (bus0, en1) -> (bus0) 0x600001db9500, {'auou' 'ahal' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

     node 0x600000fa0200 {'augn' 'sspl' 'appl'}, 'U'
         outputs = 1
             (bus0, en1) -> (bus0) 0x600001d80b80, {'aumx' 'mcmx' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
______________________________________
Answered 2020-11-21T16:36:52.013