I'm seeing different behavior between iOS 9/10 and iOS 11 for buffers that are scheduled on an AVAudioPlayerNode when an audio route change occurs (e.g. you plug in headphones). Has anyone experienced anything similar, and how did you resolve it? Note that I reported this problem on Apple's AVFoundation support forum about two weeks ago and have received exactly zero response.

The code that exhibits the problem is shown below. First a brief outline: the code is a simple loop that repeatedly schedules buffers to play at some time in the future. The process starts by calling the 'runSequence' method, which schedules an audio buffer to play at a future time and sets the completion callback to the nested method 'audioCompleteHandler'. The completion callback calls 'runSequence' again, which schedules another buffer and keeps the process going forever. So there is always a buffer scheduled, except while the completion handler is executing. The 'trace' calls in various places are internal methods that only print when debugging, so they can be ignored.

In the audio route change notification handler (handleAudioRouteChange), when a new device becomes available (case .newDeviceAvailable), the code restarts the engine and the player, reactivates the audio session, and calls 'runSequence' to bring the loop back to life.
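For context, the handler shown further down is assumed to be registered in the usual way via NotificationCenter; this registration snippet is not from the original question, and it uses the pre-iOS 12 notification name that matches the rest of the code:

import AVFoundation

// Register for route change notifications (assumed registration,
// not shown in the original question).
NotificationCenter.default.addObserver(self,
                                       selector: #selector(handleAudioRouteChange(_:)),
                                       name: .AVAudioSessionRouteChange,
                                       object: nil)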

This all works fine on iOS 9.3.5 (iPhone 5C) and iOS 10.3.3 (iPhone 6), but fails on iOS 11.1.1 (iPad Air). The nature of the failure is that the AVAudioPlayerNode does not play the audio but instead calls the completion handler immediately, which causes a runaway condition. If I remove the line that restarts the loop (as marked in the code), it works fine on iOS 11.1.1 but fails on iOS 9.3.5 and iOS 10.3.3. That failure is different: the audio simply stops, and in the debugger I can see that the loop is no longer looping.

So a possible explanation is that under iOS 9.x and iOS 10.x, buffers scheduled for the future are unscheduled when an audio route change occurs, whereas under iOS 11.x they are not.
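One way to confirm which failure you are seeing is to time successive completion callbacks: if two arrive essentially back to back, the buffer was never actually rendered. A minimal diagnostic sketch (the type and names here are mine, not part of the original code):

import Foundation

// Hypothetical runaway detector: returns true when two successive completion
// callbacks arrive so close together that the scheduled buffer cannot have
// been rendered (the immediate-completion failure described above).
final class RunawayDetector {
    private var last = DispatchTime.now()

    func callbackLooksImmediate(thresholdMs: Double = 1.0) -> Bool {
        let now = DispatchTime.now()
        defer { last = now }
        let elapsedMs = Double(now.uptimeNanoseconds - last.uptimeNanoseconds) / 1_000_000
        return elapsedMs < thresholdMs
    }
}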

This leads to two questions:

1. Has anyone seen similar behavior? What is the workaround?
2. Can anyone point me to documentation that describes the exact state of the engine, player, and session when an audio route change (or an audio interruption) occurs?

private func runSequence() {

    // For test only
    var timeBaseInfo = mach_timebase_info_data_t()
    mach_timebase_info(&timeBaseInfo)
    // End for test only

    let audioCompleteHandler = { [unowned self] in
        DispatchQueue.main.async {
            trace(level: .skim, items: "Player: \(self.player1.isPlaying), Engine: \(self.engine.isRunning)")
            self.player1.stop()
            switch self.runStatus {
            case .Run:
                self.runSequence()
            case .Restart:
                self.runStatus = .Run
                self.tickSeq.resetSequence()
                //self.updateRenderHostTime()
                self.runSequence()
            case .Halt:
                self.stopEngine()
                self.player1.stop()
                self.activateAudioSession(activate: false)
            }
        }
    }

    // Schedule buffer...
    if self.engine.isRunning {
        if let thisElem: (buffer: AVAudioPCMBuffer, duration: Int) = tickSeq.next() {
            self.player1.scheduleBuffer(thisElem.buffer, at: nil, options: [], completionHandler: audioCompleteHandler)
            self.player1.prepare(withFrameCount: thisElem.buffer.frameLength)
            self.player1.play(at: AVAudioTime(hostTime: self.startHostTime))
            self.startHostTime += AVAudioTime.hostTime(forSeconds: TimeInterval(Double(60.0 / Double(self.model.bpm.value)) * Double(thisElem.duration)))
            trace(level: .skim, items:
                "Samples: \(thisElem.buffer.frameLength)",
                "Time: \(mach_absolute_time() * (UInt64(timeBaseInfo.numer) / UInt64(timeBaseInfo.denom))) ",
                "Sample Time: \(player1.lastRenderTime!.hostTime)",
                "Play At: \(self.startHostTime) ",
                "Player: \(self.player1.isPlaying)",
                "Engine: \(self.engine.isRunning)")
        }
        else {
            // tickSeq is exhausted; nothing left to schedule
        }
    }
}
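For concreteness, the startHostTime increment above converts the element's duration in beats into host ticks: 60.0 / bpm is seconds per beat, multiplied by the element's duration. A small illustration with assumed placeholder values (not the app's real bpm or duration):

import AVFoundation

// At 120 bpm, a one-beat element advances the next play point by 0.5 s.
let bpm = 120.0
let durationInBeats = 1
let seconds = (60.0 / bpm) * Double(durationInBeats)        // 0.5 s
let hostTicks = AVAudioTime.hostTime(forSeconds: seconds)   // same interval in host ticks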


@objc func handleAudioRouteChange(_ notification: Notification) {

    trace(level: .skim, items: "Route change: Player: \(self.player1.isPlaying) Engine: \(self.engine.isRunning)")
    guard let userInfo = notification.userInfo,
        let reasonValue = userInfo[AVAudioSessionRouteChangeReasonKey] as? UInt,
        let reason = AVAudioSessionRouteChangeReason(rawValue:reasonValue) else { return }

    trace(level: .skim, items: audioSession.currentRoute, audioSession.mode)
    trace(level: .none, items: "Reason Value: \(String(describing: userInfo[AVAudioSessionRouteChangeReasonKey] as? UInt)); Reason: \(String(describing: AVAudioSessionRouteChangeReason(rawValue:reasonValue)))")

    switch reason {
    case .newDeviceAvailable:
        trace(level: .skim, items: "In handleAudioRouteChange.newDeviceAvailable")
        for output in audioSession.currentRoute.outputs where output.portType == AVAudioSessionPortHeadphones {
            startEngine()
            player1.play()
            activateAudioSession(activate: true)
            //updateRenderHostTime()
            runSequence() // <<--- Problem: works for iOS9,10; fails on iOS11. Remove it and iOS9,10 fail, works on iOS11
        }
    case .oldDeviceUnavailable:
        trace(level: .skim, items: "In handleAudioRouteChange.oldDeviceUnavailable")
        if let previousRoute =
            userInfo[AVAudioSessionRouteChangePreviousRouteKey] as? AVAudioSessionRouteDescription {
            for output in previousRoute.outputs where output.portType == AVAudioSessionPortHeadphones {
                player1.stop()
                stopEngine()
                tickSeq.resetSequence()
                DispatchQueue.main.async {
                    if let pp = self.playPause as UIButton? { pp.isSelected = false }
                }
            }
        }
    default:
        break   // Other route change reasons are ignored here
    }
}
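(Not shown in the original post: the activateAudioSession(activate:) helper. A minimal sketch of what such a helper could look like, assuming it simply wraps the documented AVAudioSession.setActive(_:):)

// Hypothetical sketch of activateAudioSession(activate:), assuming it
// wraps AVAudioSession.setActive(_:).
func activateAudioSession(activate: Bool) {
    do {
        try audioSession.setActive(activate)
    } catch {
        trace(level: .skim, items: "Session \(activate ? "activation" : "deactivation") failed: \(error)")
    }
}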

1 Answer


So, the problem was solved with some further digging/testing:

  • The behavior differs between iOS 9/10 and iOS 11 when AVAudioSession posts the route change notification. In the notification handler, on iOS 9/10 the engine is not running about 90% of the time (engine.isRunning == false), whereas on iOS 11 the engine always reports running (engine.isRunning == true)
  • The other ~10% of the time, iOS 9/10 claims the engine is running (engine.isRunning == true), but it is not. Whatever engine.isRunning says, the engine is not running (a startEngine() helper that is safe to call unconditionally is sketched after this list)
  • Because the engine has stopped on iOS 9/10, the previously prepared audio has been released, and restarting the engine does not restart the audio; you must reschedule the file or buffer at the sample point where the engine stopped. Sadly, you cannot get the current sample time while the engine is stopped (the player returns nil), so you must:

    • Start the engine
    • Grab the sample time and accumulate it (+=) into a persistent property
    • Stop the player
    • Reschedule the audio starting from the sample time just grabbed (and prepare it)
    • Start the player
  • On iOS 9/10 the engine state is the same for the headphones-inserted case (.newDeviceAvailable) and the headphones-removed case (.oldDeviceUnavailable), so you need to do something similar to the above for the removed case as well (accumulate the sample time so you can restart the audio from the point where it stopped, because player.stop() resets the sample time to 0)

  • iOS 11 needs none of this, but the code below works on iOS 9/10 and 11 alike, so it is probably best to do the same thing for all versions
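Since engine.isRunning cannot be trusted here, a startEngine() helper along these lines can be called unconditionally. The original helper is not shown, so this is a reconstruction; AVAudioEngine.start() is the documented (throwing) call:

// Hypothetical startEngine() helper, safe to call regardless of what
// engine.isRunning reports; starting an already-running engine is a no-op.
func startEngine() {
    do {
        try engine.start()
    } catch {
        trace(level: .skim, items: "Engine failed to start: \(error)")
    }
}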

The code below works on my test devices: iOS 9.3.5 (iPhone 5C), iOS 10.3.3 (iPhone 6), and iOS 11.1.1 (iPad Air). (But I'm still bothered by the fact that I couldn't find any previous commentary on how to handle route changes correctly, and surely hundreds of people must have hit this..?? Usually when I can't find any prior discussion of a topic, I assume I'm doing something wrong or just don't get it... oh well...)

@objc func handleAudioRouteChange(_ notification: Notification) {

    guard let userInfo = notification.userInfo,
        let reasonValue = userInfo[AVAudioSessionRouteChangeReasonKey] as? UInt,
        let reason = AVAudioSessionRouteChangeReason(rawValue:reasonValue) else { return }

    switch reason {
    case .newDeviceAvailable:

        for output in audioSession.currentRoute.outputs where output.portType == AVAudioSessionPortHeadphones {
            headphonesConnected = true
        }

        startEngine()   // Do this regardless of whether engine.isRunning == true

        if let lrt = player.lastRenderTime, let st = player.playerTime(forNodeTime: lrt)?.sampleTime {
            playSampleOffset += st  // Accumulate so that multiple inserts/removals move the play point forward
            stopPlayer()
            scheduleSegment(file: playFile, at: nil, player: player, start: playSampleOffset, length: AVAudioFrameCount(playFile.length - playSampleOffset))
            startPlayer()
        }
        else {
            // Unknown problem with getting sampleTime; reset engine, player(s), restart as appropriate
        }

    case .oldDeviceUnavailable:
        if let previousRoute =
            userInfo[AVAudioSessionRouteChangePreviousRouteKey] as? AVAudioSessionRouteDescription {
            for output in previousRoute.outputs where output.portType == AVAudioSessionPortHeadphones {
                headphonesConnected = false
            }
        }

        startEngine()   // Do this regardless of whether engine.isRunning == true

        if let lrt = player.lastRenderTime, let st = player.playerTime(forNodeTime: lrt)?.sampleTime  {
            playSampleOffset += st  // Accumulate...
            stopPlayer()
            scheduleSegment(file: playFile, at: nil, player: player, start: playSampleOffset, length: AVAudioFrameCount(playFile.length - playSampleOffset))
            startPlayer()   // Test only, in reality don't restart here; set play control to allow user to start audio
        }
        else {
            // Unknown problem with getting sampleTime; reset engine, player(s), restart as appropriate
        }

...
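Note that scheduleSegment(file:at:player:start:length:) above is a wrapper helper, not an AVAudioPlayerNode method; a minimal sketch of what it might look like, assuming it just forwards to the documented AVAudioPlayerNode.scheduleSegment(_:startingFrame:frameCount:at:completionHandler:):

// Hypothetical wrapper matching the call sites above; assumes it simply
// forwards to AVAudioPlayerNode.scheduleSegment(_:startingFrame:frameCount:at:completionHandler:).
func scheduleSegment(file: AVAudioFile,
                     at when: AVAudioTime?,
                     player: AVAudioPlayerNode,
                     start: AVAudioFramePosition,
                     length: AVAudioFrameCount) {
    player.scheduleSegment(file, startingFrame: start, frameCount: length,
                           at: when, completionHandler: nil)
}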

Answered 2017-12-06T10:08:37.853