I'm building an app that should let users apply audio filters, such as reverb or gain boost, to recorded audio.
I haven't been able to find any viable source of information on how to apply filters to the file itself, because the processed file needs to be uploaded to a server afterwards.
I'm currently using AudioKit for visualization, and I know it can do audio processing, but only for playback. Any suggestions for further research would be appreciated.
AudioKit has an offline render node that doesn't require iOS 11. Here's an example; the player.schedule(...) and player.play(at:) calls are required because AKAudioPlayer's underlying AVAudioPlayerNode will block the calling thread waiting for the next render if you start it with player.play().
import UIKit
import AudioKit

class ViewController: UIViewController {
    var player: AKAudioPlayer?
    var reverb = AKReverb()
    var boost = AKBooster()
    var offlineRender = AKOfflineRenderNode()

    override func viewDidLoad() {
        super.viewDidLoad()

        guard let url = Bundle.main.url(forResource: "theFunkiestFunkingFunk", withExtension: "mp3") else {
            return
        }

        var audioFile: AKAudioFile?
        do {
            audioFile = try AKAudioFile(forReading: url)
            player = try AKAudioPlayer(file: audioFile!)
        } catch {
            print(error)
            return
        }
        guard let player = player else {
            return
        }

        // Build the chain: player -> reverb -> boost -> offline render node
        player >>> reverb >>> boost >>> offlineRender

        AudioKit.output = offlineRender
        AudioKit.start()

        let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let dstURL = docs.appendingPathComponent("rendered.caf")

        // Switch the node from realtime pass-through to offline rendering
        offlineRender.internalRenderEnabled = false
        player.schedule(from: 0, to: player.duration, avTime: nil)
        let sampleTimeZero = AVAudioTime(sampleTime: 0, atRate: AudioKit.format.sampleRate)
        player.play(at: sampleTimeZero)
        do {
            try offlineRender.renderToURL(dstURL, seconds: player.duration)
        } catch {
            print(error)
            return
        }
        offlineRender.internalRenderEnabled = true

        print("Done! Rendered to " + dstURL.path)
    }
}
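Once the render is done, the remaining requirement from the question (uploading the processed file to a server) is independent of AudioKit; the file at dstURL can be sent with a plain URLSession upload task. A minimal sketch follows, where the endpoint URL and content type are placeholders for whatever your backend expects:

import Foundation

func upload(renderedFile dstURL: URL) {
    // Hypothetical endpoint; replace with your server's upload URL.
    guard let endpoint = URL(string: "https://example.com/upload") else { return }

    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    // The content type is an assumption; match whatever your API requires.
    request.setValue("audio/x-caf", forHTTPHeaderField: "Content-Type")

    // Streams the rendered file from disk instead of loading it into memory.
    let task = URLSession.shared.uploadTask(with: request, fromFile: dstURL) { _, response, error in
        if let error = error {
            print("Upload failed: \(error)")
        } else if let http = response as? HTTPURLResponse {
            print("Upload finished with status \(http.statusCode)")
        }
    }
    task.resume()
}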
You can use the newly introduced "manual rendering" feature of Audio Unit plugins (see the AVAudioEngine example below).
If you need to support older macOS/iOS versions, I would be surprised if you couldn't achieve the same thing with AudioKit (even though I haven't tried it myself). For instance, use an AKSamplePlayer as your first node (it will read your audio file), then build and connect your effects, and use an AKNodeRecorder as your last node, as sketched below.
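If it helps, here is a rough, untested sketch of that AudioKit-only graph. It uses AKAudioPlayer (whose initializer appears in the first answer) in place of AKSamplePlayer, and the AKNodeRecorder / AKAudioFile() calls reflect AudioKit 4's recorder API as I understand it, so treat the exact signatures as assumptions. Unlike the manual rendering example further below, this records in real time while the graph plays:

import Foundation
import AudioKit

// Rough sketch: play a file through reverb and boost, and tap the last
// node with AKNodeRecorder so the processed audio is written to disk.
func processWithAudioKit(sourceURL: URL) throws {
    let sourceFile = try AKAudioFile(forReading: sourceURL)
    let player = try AKAudioPlayer(file: sourceFile)
    let reverb = AKReverb()
    let boost = AKBooster()

    // Build the chain: player -> reverb -> boost
    player >>> reverb >>> boost

    // File the recorder writes into (AKAudioFile() creates one in the temp directory).
    let tape = try AKAudioFile()
    let recorder = try AKNodeRecorder(node: boost, file: tape)

    AudioKit.output = boost
    AudioKit.start()

    try recorder.record()
    player.play()

    // Stop once playback has finished; a real app would use the player's
    // completion handler instead of blocking the thread like this.
    Thread.sleep(forTimeInterval: player.duration)
    recorder.stop()
    player.stop()

    print("Recorded to \(tape.url)")
}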
import AVFoundation

//: ## Source File
//: Open the audio file to process
let sourceFile: AVAudioFile
let format: AVAudioFormat
do {
    let sourceFileURL = Bundle.main.url(forResource: "mixLoop", withExtension: "caf")!
    sourceFile = try AVAudioFile(forReading: sourceFileURL)
    format = sourceFile.processingFormat
} catch {
    fatalError("could not open source audio file, \(error)")
}

//: ## Engine Setup
//: player -> reverb -> mainMixer -> output
//: ### Create and configure the engine and its nodes
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let reverb = AVAudioUnitReverb()

engine.attach(player)
engine.attach(reverb)

// set desired reverb parameters
reverb.loadFactoryPreset(.mediumHall)
reverb.wetDryMix = 50

// make connections
engine.connect(player, to: reverb, format: format)
engine.connect(reverb, to: engine.mainMixerNode, format: format)

// schedule source file
player.scheduleFile(sourceFile, at: nil)

//: ### Enable offline manual rendering mode
do {
    let maxNumberOfFrames: AVAudioFrameCount = 4096 // maximum number of frames the engine will be asked to render in any single render call
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: maxNumberOfFrames)
} catch {
    fatalError("could not enable manual rendering mode, \(error)")
}

//: ### Start the engine and player
do {
    try engine.start()
    player.play()
} catch {
    fatalError("could not start engine, \(error)")
}

//: ## Offline Render
//: ### Create an output buffer and an output file
//: Output buffer format must be same as engine's manual rendering output format
let outputFile: AVAudioFile
do {
    let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let outputURL = URL(fileURLWithPath: documentsPath + "/mixLoopProcessed.caf")
    outputFile = try AVAudioFile(forWriting: outputURL, settings: sourceFile.fileFormat.settings)
} catch {
    fatalError("could not open output audio file, \(error)")
}

// buffer to which the engine will render the processed data
let buffer: AVAudioPCMBuffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat, frameCapacity: engine.manualRenderingMaximumFrameCount)!

//: ### Render loop
//: Pull the engine for the desired number of frames, write the output to the destination file
while engine.manualRenderingSampleTime < sourceFile.length {
    do {
        let framesToRender = min(buffer.frameCapacity, AVAudioFrameCount(sourceFile.length - engine.manualRenderingSampleTime))
        let status = try engine.renderOffline(framesToRender, to: buffer)
        switch status {
        case .success:
            // data rendered successfully
            try outputFile.write(from: buffer)
        case .insufficientDataFromInputNode:
            // applicable only if using the input node as one of the sources
            break
        case .cannotDoInCurrentContext:
            // engine could not render in the current render call, retry in next iteration
            break
        case .error:
            // error occurred while rendering
            fatalError("render failed")
        }
    } catch {
        fatalError("render failed, \(error)")
    }
}

player.stop()
engine.stop()

print("Output \(outputFile.url)")
print("AVAudioEngine offline rendering completed")
You can find more documentation and examples about the AudioUnit format updates here.