
When scheduling files or buffers with AVAudioPlayerNode, you can supply a completion callback. You can also specify the event at which the callback should be invoked; the options are:

dataConsumed — the player has consumed the buffer or file data (the buffer can be recycled)
dataRendered — the player has rendered the buffer or file data
dataPlayedBack — the buffer or file data has finished playing through the output

In most cases this works as expected. However, when scheduling files one after the other, the callback is not invoked for any file except the last if the type is dataRendered or dataPlayedBack.

Here is a self-contained example that demonstrates the problem. (This code can be pasted over the contents of the 'ViewController.swift' file in the Xcode iOS 'App' template. For the project to work, it needs to include the audio file referenced in the code.)

import AVFoundation
import UIKit

class ViewController: UIViewController {
    private let engine = AVAudioEngine()
    private let filePlayer = AVAudioPlayerNode()
    private let bufferPlayer = AVAudioPlayerNode()
    private var file: AVAudioFile! = nil
    private var buffer: AVAudioPCMBuffer! = nil

    override func viewDidLoad() {
        super.viewDidLoad()

        let url = Bundle.main.resourceURL!.appendingPathComponent("audio.m4a")
        
        file = try! AVAudioFile(forReading: url)

        buffer = AVAudioPCMBuffer(
            pcmFormat: file.processingFormat,
            frameCapacity: AVAudioFrameCount(file.length)
        )
        
        try! file.read(into: buffer)
        
        let format = file.processingFormat

        engine.attach(filePlayer)
        engine.attach(bufferPlayer)
        engine.connect(filePlayer, to: engine.mainMixerNode, format: format)
        engine.connect(bufferPlayer, to: engine.mainMixerNode, format: format)
        
        try! engine.start()
        
        filePlayer.play()
        bufferPlayer.play()
        
        for i in 0 ..< 3 {
            filePlayer.scheduleFile(
                file, at: nil, completionCallbackType: .dataPlayedBack
            ) { type in
                print("File \(i)")
            }
        }
        for i in 0 ..< 3 {
            filePlayer.scheduleBuffer(
                buffer, at: nil, completionCallbackType: .dataPlayedBack
            ) { type in
                print("Buff \(i)")
            }
        }
    }
}

This is the output I get:

File 2
Buff 0
Buff 1
Buff 2

As you can see, the callback is invoked for all three instances of the buffer, but only for the last instance of the file.

Likewise, it is only with dataRendered and dataPlayedBack that the callback fails to be invoked. With dataConsumed, it works correctly.
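
Since dataConsumed does fire for every scheduled file, one possible stopgap (an assumption on my part, not a verified fix) is to request that callback type when a per-file notification is needed:

for i in 0 ..< 3 {
    filePlayer.scheduleFile(
        file, at: nil, completionCallbackType: .dataConsumed
    ) { type in
        // Fires when the player has consumed the file's data, which is
        // earlier than when the audio is actually heard at the output.
        print("File \(i) consumed")
    }
}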

Has anyone run into this? Can anyone confirm this behavior? It looks like a bug, but it is also possible I am doing something wrong.

Edit:

Here is another version of the code, in response to an idea raised in the comments. In this version, rather than scheduling the same file instance three times, three instances of the same file are scheduled consecutively:

import AVFoundation
import UIKit

class ViewController: UIViewController {
    private let engine = AVAudioEngine()
    private let filePlayer = AVAudioPlayerNode()
    private let bufferPlayer = AVAudioPlayerNode()
    private var files = [AVAudioFile]()
    private var buffer: AVAudioPCMBuffer! = nil

    override func viewDidLoad() {
        super.viewDidLoad()

        let url = Bundle.main.resourceURL!.appendingPathComponent("audio.m4a")
        
        for _ in 0 ..< 3 {
            files.append(try! AVAudioFile(forReading: url))
        }
        
        let file = files[0]

        buffer = AVAudioPCMBuffer(
            pcmFormat: file.processingFormat,
            frameCapacity: AVAudioFrameCount(file.length)
        )
        
        try! file.read(into: buffer)
        
        let format = file.processingFormat

        engine.attach(filePlayer)
        engine.attach(bufferPlayer)
        engine.connect(filePlayer, to: engine.mainMixerNode, format: format)
        engine.connect(bufferPlayer, to: engine.mainMixerNode, format: format)
        
        try! engine.start()
        
        filePlayer.play()
        bufferPlayer.play()
        
        for i in 0 ..< 3 {
            filePlayer.scheduleFile(
                files[i], at: nil, completionCallbackType: .dataPlayedBack
            ) { type in
                print("File \(i)")
            }
        }
        for i in 0 ..< 3 {
            filePlayer.scheduleBuffer(
                buffer, at: nil, completionCallbackType: .dataPlayedBack
            ) { type in
                print("Buff \(i)")
            }
        }
    }
}

The result is the same. Setting aside the practicality of reading the same file multiple times, this is an interesting diagnostic because it provides more information about the behavior. If this is a bug, it appears to be unrelated to whether the scheduled files are the same AVAudioFile instance.


1 Answer


Since AVAudioPlayerNode is essentially a wrapper around kAudioUnitSubType_ScheduledSoundPlayer (presumably with some of the file reading and buffering code from kAudioUnitSubType_AudioFilePlayer thrown in, but using ExtAudioFile), I ran an experiment to see whether the lower-level counterpart exhibits the same behavior.

It is not quite an apples-to-apples comparison, but kAudioUnitSubType_ScheduledSoundPlayer seems to behave as expected, so this is probably an issue with AVAudioPlayerNode itself.

The code I used for testing is below. kAudioUnitSubType_ScheduledSoundPlayer is used to schedule three slices (buffers). The slices all come from the same file, but that should not matter, because kAudioUnitSubType_ScheduledSoundPlayer only deals with buffers and knows nothing about files.

All three slices invoke the completion callback as expected. So it looks like the problem is probably in how AVAudioPlayerNode handles these callbacks internally and routes them to the non-real-time dispatch queue (the kAudioUnitSubType_ScheduledSoundPlayer callbacks are invoked on the HAL's real-time IO thread, and clients cannot be trusted not to block the IO thread).
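
For illustration only, here is a hypothetical Swift sketch of the kind of hand-off AVAudioPlayerNode presumably has to perform internally (the queue and function names are inventions for this sketch, not the actual implementation; a real implementation would also need a real-time-safe signalling mechanism rather than dispatching directly from the render thread). The actual test code follows after it.

import AVFoundation

// Hypothetical sketch: client completion handlers cannot be invoked on the
// real-time IO thread, so they are forwarded to a non-real-time serial queue.
let completionQueue = DispatchQueue(label: "player.node.completions")

func sliceDidComplete(
    _ handler: @escaping (AVAudioPlayerNodeCompletionCallbackType) -> Void
) {
    // Called when a slice finishes; defer the client callback off the
    // real-time thread.
    completionQueue.async {
        handler(.dataPlayedBack)
    }
}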

//  ViewController.m

#import "ViewController.h"

@import AudioToolbox;
@import AVFoundation;
@import os.log;

@interface ViewController ()
{
    AUGraph _graph;
    AUNode _player;
    AUNode _mixer;
    AUNode _output;
    ScheduledAudioSlice _slice [3];
    AVAudioPCMBuffer *_buf;
}
- (void)scheduledAudioSliceCompleted:(ScheduledAudioSlice *)slice;
@end

void myScheduledAudioSliceCompletionProc(void * __nullable userData, ScheduledAudioSlice *slice)
{
    // ⚠️ WARNING ⚠️
    // THIS FUNCTION IS CALLED FROM THE REAL TIME RENDERING THREAD.
    // OBJ-C USE HERE IS FOR TESTING CALLBACK FUNCTIONALITY ONLY
    // OBJ-C IS NOT REAL TIME SAFE
    // DO NOT DO THIS IN PRODUCTION CODE!!!
    [(__bridge ViewController *)userData scheduledAudioSliceCompleted:slice];
}

@implementation ViewController

- (void)dealloc {
    [self closeGraph];
}

- (void)viewDidLoad {
    [super viewDidLoad];
    [self openGraph];
    [self schedule];
    [self startPlayer];
    [self startGraph];
}

-(OSStatus)openGraph {
    OSStatus result = NewAUGraph(&_graph);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "NewAUGraph failed: %d", result);
        return result;
    }

    // The graph will look like:
    // Player -> MultiChannelMixer -> Output
    AudioComponentDescription desc;

    // Player
    desc.componentType          = kAudioUnitType_Generator;
    desc.componentSubType       = kAudioUnitSubType_ScheduledSoundPlayer;
    desc.componentManufacturer  = kAudioUnitManufacturer_Apple;
    desc.componentFlags         = kAudioComponentFlag_SandboxSafe;
    desc.componentFlagsMask     = 0;

    result = AUGraphAddNode(_graph, &desc, &_player);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphAddNode failed: %d", result);

        result = DisposeAUGraph(_graph);
        if(noErr != result)
            os_log_error(OS_LOG_DEFAULT, "DisposeAUGraph failed: %d", result);

        _graph = NULL;
        return result;
    }

    // Mixer
    desc.componentType          = kAudioUnitType_Mixer;
    desc.componentSubType       = kAudioUnitSubType_MultiChannelMixer;
    desc.componentManufacturer  = kAudioUnitManufacturer_Apple;
    desc.componentFlags         = kAudioComponentFlag_SandboxSafe;
    desc.componentFlagsMask     = 0;

    result = AUGraphAddNode(_graph, &desc, &_mixer);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphAddNode failed: %d", result);

        result = DisposeAUGraph(_graph);
        if(noErr != result)
            os_log_error(OS_LOG_DEFAULT, "DisposeAUGraph failed: %d", result);

        _graph = NULL;
        return result;
    }

    // Output
    desc.componentType          = kAudioUnitType_Output;
    desc.componentSubType       = kAudioUnitSubType_HALOutput;
    desc.componentFlags         = kAudioComponentFlag_SandboxSafe;
    desc.componentManufacturer  = kAudioUnitManufacturer_Apple;
    desc.componentFlagsMask     = 0;

    result = AUGraphAddNode(_graph, &desc, &_output);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphAddNode failed: %d", result);

        result = DisposeAUGraph(_graph);
        if(noErr != result)
            os_log_error(OS_LOG_DEFAULT, "DisposeAUGraph failed: %d", result);

        _graph = NULL;
        return result;
    }

    // Connections
    result = AUGraphConnectNodeInput(_graph, _player, 0, _mixer, 0);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphConnectNodeInput failed: %d", result);

        result = DisposeAUGraph(_graph);
        if(noErr != result)
            os_log_error(OS_LOG_DEFAULT, "DisposeAUGraph failed: %d", result);

        _graph = NULL;
        return result;
    }

    result = AUGraphConnectNodeInput(_graph, _mixer, 0, _output, 0);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphConnectNodeInput failed: %d", result);

        result = DisposeAUGraph(_graph);
        if(noErr != result)
            os_log_error(OS_LOG_DEFAULT, "DisposeAUGraph failed: %d", result);

        _graph = NULL;
        return result;
    }

    // Open the graph
    result = AUGraphOpen(_graph);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphOpen failed: %d", result);

        result = DisposeAUGraph(_graph);
        if(noErr != result)
            os_log_error(OS_LOG_DEFAULT, "DisposeAUGraph failed: %d", result);

        _graph = NULL;
        return result;
    }

    // Set the mixer's volume on the input and output
    AudioUnit au = NULL;
    result = AUGraphNodeInfo(_graph, _mixer, NULL, &au);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphNodeInfo failed: %d", result);

        result = DisposeAUGraph(_graph);
        if(noErr != result)
            os_log_error(OS_LOG_DEFAULT, "DisposeAUGraph failed: %d", result);

        _graph = NULL;
        return result;
    }

    result = AudioUnitSetParameter(au, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, 0, 1.f, 0);
    if(noErr != result)
        os_log_error(OS_LOG_DEFAULT, "AudioUnitSetParameter (kMultiChannelMixerParam_Volume, kAudioUnitScope_Input) failed: %d", result);

    result = AudioUnitSetParameter(au, kMultiChannelMixerParam_Volume, kAudioUnitScope_Output, 0, 1.f, 0);
    if(noErr != result)
        os_log_error(OS_LOG_DEFAULT, "AudioUnitSetParameter (kMultiChannelMixerParam_Volume, kAudioUnitScope_Output) failed: %d", result);

    // Initialize the graph
    result = AUGraphInitialize(_graph);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphInitialize failed: %d", result);

        result = DisposeAUGraph(_graph);
        if(noErr != result)
            os_log_error(OS_LOG_DEFAULT, "DisposeAUGraph failed: %d", result);

        _graph = NULL;
        return result;
    }

    return noErr;
}

- (OSStatus)closeGraph {
    Boolean graphIsRunning = NO;
    OSStatus result = AUGraphIsRunning(_graph, &graphIsRunning);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphIsRunning failed: %d", result);
        return result;
    }

    if(graphIsRunning) {
        result = AUGraphStop(_graph);
        if(noErr != result) {
            os_log_error(OS_LOG_DEFAULT, "AUGraphStop failed: %d", result);
            return result;
        }
    }

    Boolean graphIsInitialized = false;
    result = AUGraphIsInitialized(_graph, &graphIsInitialized);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphIsInitialized failed: %d", result);
        return result;
    }

    if(graphIsInitialized) {
        result = AUGraphUninitialize(_graph);
        if(noErr != result) {
            os_log_error(OS_LOG_DEFAULT, "AUGraphUninitialize failed: %d", result);
            return result;
        }
    }

    result = AUGraphClose(_graph);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphClose failed: %d", result);
        return result;
    }

    result = DisposeAUGraph(_graph);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "DisposeAUGraph failed: %d", result);
        return result;
    }

    _graph = NULL;
    _player = -1;
    _mixer = -1;
    _output = -1;

    return noErr;
}

- (OSStatus)startGraph {
    OSStatus result = AUGraphStart(_graph);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphStart failed: %d", result);
        return result;
    }

    return noErr;
}

- (OSStatus)stopGraph {
    OSStatus result = AUGraphStop(_graph);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphStop failed: %d", result);
        return result;
    }

    return noErr;
}

- (OSStatus)startPlayer {
    AudioUnit au;
    OSStatus result = AUGraphNodeInfo(_graph, _player, NULL, &au);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphNodeInfo failed: %d", result);
        return result;
    }

    AudioTimeStamp ts = {0};

    ts.mFlags           = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime      = 0;

    result = AudioUnitSetProperty(au, kAudioUnitProperty_ScheduleStartTimeStamp, kAudioUnitScope_Global, 0, &ts, sizeof(ts));
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AudioUnitSetProperty failed: %d", result);
        return result;
    }

    return noErr;
}

- (OSStatus)schedule {
    AudioUnit au;
    OSStatus result = AUGraphNodeInfo(_graph, _player, NULL, &au);
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AUGraphNodeInfo failed: %d", result);
        return result;
    }

    AVAudioFile *file = [[AVAudioFile alloc] initForReading:[NSURL fileURLWithPath:@"/tmp/test.wav" isDirectory:NO] commonFormat:AVAudioPCMFormatFloat32 interleaved:NO error:nil];
    if(!file)
        return paramErr;

    _buf = [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat frameCapacity:(file.processingFormat.sampleRate * 2)];

    if(![file readIntoBuffer:_buf error:nil])
        return paramErr;

    AudioTimeStamp ts = {0};

    ts.mFlags           = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime      = 0;

    _slice[0].mTimeStamp                = ts;
    _slice[0].mCompletionProc           = myScheduledAudioSliceCompletionProc;
    _slice[0].mCompletionProcUserData   = (__bridge void *)self;
    _slice[0].mNumberFrames             = _buf.frameLength;
    _slice[0].mBufferList               = _buf.mutableAudioBufferList;

    result = AudioUnitSetProperty(au, kAudioUnitProperty_ScheduleAudioSlice, kAudioUnitScope_Global, 0, &_slice[0], sizeof(_slice[0]));
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AudioUnitSetProperty failed: %d", result);
        return result;
    }

    ts.mSampleTime      += _slice[0].mNumberFrames;

    _slice[1]                           = _slice[0];
    _slice[1].mTimeStamp                = ts;

    result = AudioUnitSetProperty(au, kAudioUnitProperty_ScheduleAudioSlice, kAudioUnitScope_Global, 0, &_slice[1], sizeof(_slice[1]));
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AudioUnitSetProperty failed: %d", result);
        return result;
    }

    ts.mSampleTime      += _slice[1].mNumberFrames;

    _slice[2]                           = _slice[1];
    _slice[2].mTimeStamp                = ts;

    result = AudioUnitSetProperty(au, kAudioUnitProperty_ScheduleAudioSlice, kAudioUnitScope_Global, 0, &_slice[2], sizeof(_slice[2]));
    if(noErr != result) {
        os_log_error(OS_LOG_DEFAULT, "AudioUnitSetProperty failed: %d", result);
        return result;
    }

    return noErr;
}

- (void)scheduledAudioSliceCompleted:(ScheduledAudioSlice *)slice {
    if(slice == &_slice[0])
        NSLog(@"_slice[0] scheduledAudioSliceCompleted:%p, mFlags = 0x%.2x", slice, slice->mFlags);
    else if(slice == &_slice[1])
        NSLog(@"_slice[1] scheduledAudioSliceCompleted:%p, mFlags = 0x%.2x", slice, slice->mFlags);
    else if(slice == &_slice[2])
        NSLog(@"_slice[2] scheduledAudioSliceCompleted:%p, mFlags = 0x%.2x", slice, slice->mFlags);
    else
        NSLog(@"scheduledAudioSliceCompleted:%p, mFlags = 0x%.2x for unknown slice", slice, slice->mFlags);
}

@end

Output:

XXX _slice[0] scheduledAudioSliceCompleted:0x7f82ee41add0, mFlags = 0x03
XXX _slice[1] scheduledAudioSliceCompleted:0x7f82ee41ae40, mFlags = 0x03
XXX _slice[2] scheduledAudioSliceCompleted:0x7f82ee41aeb0, mFlags = 0x03

An mFlags value of 0x03 equals kScheduledAudioSliceFlag_Complete | kScheduledAudioSliceFlag_BeganToRender, meaning each slice both began to render and ran to completion.
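
A quick sanity check of that flag arithmetic, with the raw values of the AudioToolbox constants copied in as literals for the sketch:

let complete: UInt32 = 0x01      // kScheduledAudioSliceFlag_Complete
let beganToRender: UInt32 = 0x02 // kScheduledAudioSliceFlag_BeganToRender
assert(complete | beganToRender == 0x03) // matches the logged mFlags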

Answered 2020-12-05T13:53:40.300