I'm trying out the new AVAudioEngine in iOS 8.

It looks like the completionHandler of player.scheduleFile() is called before the sound file has finished playing.

I'm using a sound file that is 5 seconds long, and the println() message shows up roughly 1 second before the end of the sound.

Am I doing something wrong, or am I misunderstanding the idea of a completionHandler?

Thanks!

Here's some code:

class SoundHandler {
    let engine:AVAudioEngine
    let player:AVAudioPlayerNode
    let mainMixer:AVAudioMixerNode

    init() {
        engine = AVAudioEngine()
        player = AVAudioPlayerNode()
        engine.attachNode(player)
        mainMixer = engine.mainMixerNode

        var error:NSError?
        if !engine.startAndReturnError(&error) {
            if let e = error {
                println("error \(e.localizedDescription)")
            }
        }

        engine.connect(player, to: mainMixer, format: mainMixer.outputFormatForBus(0))
    }

    func playSound() {
        var soundUrl = NSBundle.mainBundle().URLForResource("Test", withExtension: "m4a")
        var soundFile = AVAudioFile(forReading: soundUrl, error: nil)

        player.scheduleFile(soundFile, atTime: nil, completionHandler: { println("Finished!") })

        player.play()
    }
}

7 Answers


I see the same behavior.

From my experimentation, I believe the callback is called once the buffer/segment/file has been scheduled, not when it finishes playing.

Even though the docs explicitly state: "Called after the buffer has completely played or the player is stopped. May be nil."

So I think this is either a bug or incorrect documentation. No idea which.
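One quick way to check this yourself is to timestamp the schedule call and the callback, then compare the elapsed time against the file's duration. A minimal sketch in modern Swift (the `player` and `file` setup, and the `measureCallbackTiming` name, are assumed):

import AVFoundation

func measureCallbackTiming(player: AVAudioPlayerNode, file: AVAudioFile) {
    // Expected playback duration, derived from the file itself.
    let duration = Double(file.length) / file.processingFormat.sampleRate
    let scheduledAt = Date()

    player.scheduleFile(file, at: nil) {
        // Note: this runs on an internal audio queue, not the main thread.
        let elapsed = Date().timeIntervalSince(scheduledAt)
        print("callback fired after \(elapsed)s; file duration is \(duration)s")
    }
    player.play()
}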

answered 2015-04-06T19:35:53.200

You can always compute the future time at which audio playback will complete, using AVAudioTime. The current behavior is useful because it supports scheduling additional buffers/segments/files to play from the callback before the end of the current buffer/segment/file, avoiding a gap in audio playback. This lets you create a simple loop player without a lot of work. Here's an example:

class Latch {
    var value : Bool = true
}

func loopWholeFile(file : AVAudioFile, player : AVAudioPlayerNode) -> Latch {
    let looping = Latch()
    let frames = file.length

    let sampleRate = file.processingFormat.sampleRate
    var segmentTime : AVAudioFramePosition = 0
    var segmentCompletion : AVAudioNodeCompletionHandler!
    segmentCompletion = {
        if looping.value {
            // Schedule the next pass through the file, starting exactly
            // one file-length after the previous one.
            segmentTime += frames
            player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
        }
    }
    // Schedule the file twice up front, so there is always a queued
    // segment when the current segment's completion handler fires.
    player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
    segmentCompletion()
    player.play()

    return looping
}

The code above schedules the entire file twice before calling player.play(). As each segment gets close to finishing, it schedules another whole file in the future, to avoid gaps in playback. To stop looping, you use the returned Latch, like this:

let looping = loopWholeFile(file, player)
sleep(1000)
looping.value = false
player.stop()
answered 2016-03-15T02:03:38.673

The AVAudioEngine docs must simply have been wrong, back in the iOS 8 days. In the meantime, as a workaround, I noticed that if you instead use scheduleBuffer:atTime:options:completionHandler:, the callback fires as expected (after playback finishes).

Example code:

NSError *error = nil;
AVAudioFile *file = [[AVAudioFile alloc] initForReading:_fileURL commonFormat:AVAudioPCMFormatFloat32 interleaved:NO error:&error];
AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat frameCapacity:(AVAudioFrameCount)file.length];
[file readIntoBuffer:buffer error:&error];

[_player scheduleBuffer:buffer atTime:nil options:AVAudioPlayerNodeBufferInterrupts completionHandler:^{
    // reminder: we're not on the main thread in here
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"done playing, as expected!");
    });
}];
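For reference, a rough Swift equivalent of the same workaround might look like this (a sketch; the `playWholeFile` name is made up, and it assumes a running engine with the player node already attached and connected):

import AVFoundation

func playWholeFile(_ fileURL: URL, on player: AVAudioPlayerNode) throws {
    // Load the entire file into a single PCM buffer.
    let file = try AVAudioFile(forReading: fileURL, commonFormat: .pcmFormatFloat32, interleaved: false)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else { return }
    try file.read(into: buffer)

    player.scheduleBuffer(buffer, at: nil, options: .interrupts) {
        // Reminder: we're not on the main thread in here.
        DispatchQueue.main.async {
            print("done playing, as expected!")
        }
    }
    player.play()
}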
answered 2015-04-14T14:32:54.880

My bug report for this was closed as "works as intended," but Apple pointed me to new variants of the scheduleFile, scheduleSegment and scheduleBuffer methods in iOS 11. These add a completionCallbackType parameter that you can use to specify that you want the completion callback when playback is completed:

[self.audioUnitPlayer
            scheduleSegment:self.audioUnitFile
            startingFrame:sampleTime
            frameCount:(int)sampleLength
            atTime:nil
            completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
            completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // do something here
}];

There isn't anything in the documentation about how this works, but I tested it and it works for me.
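In Swift, the same iOS 11 variant looks roughly like this (a sketch reusing `sampleTime` and `sampleLength` from the surrounding code, with the same player and file assumed on the Swift side):

self.audioUnitPlayer.scheduleSegment(self.audioUnitFile,
                                     startingFrame: sampleTime,
                                     frameCount: AVAudioFrameCount(sampleLength),
                                     at: nil,
                                     completionCallbackType: .dataPlayedBack) { callbackType in
    // Fires only after the data has actually been played back.
}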

I had been using this workaround for iOS 8-10:

- (void)playRecording {
    [self.audioUnitPlayer scheduleSegment:self.audioUnitFile startingFrame:sampleTime frameCount:(int)sampleLength atTime:0 completionHandler:^() {
        float totalTime = [self recordingDuration];
        float elapsedTime = [self recordingCurrentTime];
        float remainingTime = totalTime - elapsedTime;
        [self performSelector:@selector(doSomethingHere) withObject:nil afterDelay:remainingTime];
    }];
}

- (float)recordingDuration {
    float duration = self.audioUnitFile.length / self.audioUnitFile.processingFormat.sampleRate;
    if (isnan(duration)) {
        duration = 0;
    }
    return duration;
}

- (float)recordingCurrentTime {
    AVAudioTime *nodeTime = self.audioUnitPlayer.lastRenderTime;
    AVAudioTime *playerTime = [self.audioUnitPlayer playerTimeForNodeTime:nodeTime];
    AVAudioFramePosition sampleTime = playerTime.sampleTime;
    if (sampleTime == 0) { return self.audioUnitLastKnownTime; } // this happens when the player isn't playing
    sampleTime += self.audioUnitStartingFrame; // if we trimmed from the start, or changed the location with the location slider, the time before that point won't be included in the player time, so we have to track it ourselves and add it here
    float time = sampleTime / self.audioUnitFile.processingFormat.sampleRate;
    self.audioUnitLastKnownTime = time;
    return time;
}
answered 2017-11-18T00:07:29.053

As of today, in a project with a deployment target of 12.4, on a device running 12.4.1, here's the way we found to successfully stop the nodes upon playback completion:

// audioFile and playerNode created here ...

playerNode.scheduleFile(audioFile, at: nil, completionCallbackType: .dataPlayedBack) { _ in
    os_log(.debug, log: self.log, "%@", "Completing playing sound effect: \(filePath) ...")

    DispatchQueue.main.async {
        os_log(.debug, log: self.log, "%@", "... now actually completed: \(filePath)")

        self.engine.disconnectNodeOutput(playerNode)
        self.engine.detach(playerNode)
    }
}

The main difference with respect to previous answers is the fact of postponing the node detaching to the main thread (which I guess is also the audio render thread?), instead of performing it on the callback thread.

answered 2019-09-11T10:12:24.337

Yes, it does get called slightly before the file (or buffer) has completed. If you call [myNode stop] from within the completion handler, the file (or buffer) will not fully complete. However, if you call [myEngine stop], the file (or buffer) will complete to the end.
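In Swift, the distinction this answer describes would look roughly like this (a sketch restating the claim; `player`, `engine`, and `file` are assumed to be set up elsewhere):

player.scheduleFile(file, at: nil) {
    // Called slightly before playback actually finishes.

    // player.stop()   // per this answer: cuts the file off early
    // engine.stop()   // per this answer: lets the file play to the end
}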

answered 2017-03-12T08:21:12.967
// audioFile here is our original audio

audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: {
        print("scheduleFile Complete")

        var delayInSeconds: Double = 0

        if let lastRenderTime = self.audioPlayerNode.lastRenderTime, let playerTime = self.audioPlayerNode.playerTime(forNodeTime: lastRenderTime) {

            // rate is an optional playback rate (e.g. from a time-pitch or
            // varispeed unit upstream); scale the remaining time by it if set
            if let rate = rate {
                delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / Double(audioFile.processingFormat.sampleRate) / Double(rate)
            } else {
                delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / Double(audioFile.processingFormat.sampleRate)
            }
        }

        // schedule a stop timer for when audio finishes playing
        DispatchQueue.main.asyncAfter(deadline: .now() + delayInSeconds) {
            audioEngine.mainMixerNode.removeTap(onBus: 0)
            // Playback has completed
        }

    })
answered 2018-02-08T06:21:45.607