
I'm trying to develop a VoIP application.

Is there a high-level Audio Queue Services library for iOS?

I'm not very good at dealing with `.mm` (Objective-C++) source files, so using something open source would be a better option.

Or could someone give me some hints on how to get the buffer data from an AudioQueueBufferRef?

The ideal approach would be something like a delegate:

- (void)audioRecorderDidReceiveBuffer:(AudioQueueBufferRef)buffer {
    // do something with the buffer for other operations
}

UPDATE:

  1. Identify the audio component (kAudioUnitType_Output / kAudioUnitSubType_RemoteIO / kAudioUnitManufacturer_Apple)
  2. Use AudioComponentFindNext(NULL, &descriptionOfAudioComponent) to obtain the AudioComponent, which is like the factory you get your audio unit from
  3. Use AudioComponentInstanceNew(ourComponent, &audioUnit) to make an instance of the audio unit
  4. Enable IO for recording and possibly playback with AudioUnitSetProperty
  5. Describe the audio format in an AudioStreamBasicDescription structure, and apply the format using AudioUnitSetProperty
  6. Provide a callback for recording, and possibly playback, again using AudioUnitSetProperty
  7. Allocate some buffers
  8. Initialise the audio unit
  9. Start the audio unit
// Enable IO for recording
// (in the atastypixel example, kOutputBus is element 0 and kInputBus is 1)
#define kInputBus 1
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));



// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_SetInputCallback, 
                              kAudioUnitScope_Global, 
                              kInputBus, 
                              &callbackStruct, 
                              sizeof(callbackStruct));



//recordingCallback
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {

    // inRefCon is whatever we registered as inputProcRefCon; cast it back
    // to reach our interface object ("AudioInterface" stands in for your class):
    AudioInterface *audioInterface = (AudioInterface *)inRefCon;

    // Use inNumberFrames to figure out how much data is available, and make
    // that much space available in the buffers of an AudioBufferList.
    // Malloc the list, as it's a dynamic-length list; for interleaved
    // 16-bit mono audio a single buffer is enough:
    AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mNumberChannels = 1;
    bufferList->mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList->mBuffers[0].mData = malloc(bufferList->mBuffers[0].mDataByteSize);

    // Then obtain the recorded samples:
    OSStatus status;
    status = AudioUnitRender([audioInterface audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);
    checkStatus(status);

    // Now the samples we just read are sitting in the buffers in bufferList
    DoStuffWithTheRecordedAudio(bufferList);

    // Free the list once you are done with the samples.
    free(bufferList->mBuffers[0].mData);
    free(bufferList);
    return noErr;
}

1 Answer


Here is how you can get the audio buffer from the recording

(Referenced from: http://atastypixel.com/blog/using-remoteio-audio-unit/)

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {

    // inRefCon is whatever we registered as inputProcRefCon; cast it back
    // to reach our interface object ("AudioInterface" stands in for your class):
    AudioInterface *audioInterface = (AudioInterface *)inRefCon;

    // Use inNumberFrames to figure out how much data is available, and make
    // that much space available in the buffers of an AudioBufferList.
    // Malloc the list, as it's a dynamic-length list; for interleaved
    // 16-bit mono audio a single buffer is enough:
    AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mNumberChannels = 1;
    bufferList->mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList->mBuffers[0].mData = malloc(bufferList->mBuffers[0].mDataByteSize);

    // Then obtain the recorded samples:
    OSStatus status;
    status = AudioUnitRender([audioInterface audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);
    checkStatus(status);

    // Now the samples we just read are sitting in the buffers in bufferList
    DoStuffWithTheRecordedAudio(bufferList);

    // Free the list once you are done with the samples.
    free(bufferList->mBuffers[0].mData);
    free(bufferList);
    return noErr;
}
answered 2012-11-26 at 16:30