
I'm using the VoiceProcessingIO (VPIO) audio unit for capture and playout on Mac OS X.

Everything works fine until I try to set the input/output format on the VPIO unit.

The format I want looks like this:

AudioStreamBasicDescription audio_format ;
audio_format.mSampleRate     = 8000.0 ;
audio_format.mBitsPerChannel     = 16 ;
audio_format.mChannelsPerFrame = 1 ;
audio_format.mBytesPerFrame      = (audio_format.mBitsPerChannel >> 3)  * audio_format.mChannelsPerFrame ;
audio_format.mFramesPerPacket    = 1 ;
audio_format.mBytesPerPacket     = audio_format.mBytesPerFrame * audio_format.mFramesPerPacket ;
audio_format.mFormatID       = kAudioFormatLinearPCM ;
audio_format.mFormatFlags    = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked ;

I can set this format on the VPIO's (input bus / output scope), but when I try to set it on the VPIO's (output bus / input scope) I get the error code kAudioUnitErr_FormatNotSupported. However, when I use an AUHAL unit instead, I can set the format on both the AUHAL's (input bus / output scope) and its (output bus / input scope).
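For context, here is roughly how I apply the format to the two sides of the unit (a simplified sketch; ApplyFormat is just a placeholder name and vpio_unit is my already-created VoiceProcessingIO unit):

#include <AudioToolbox/AudioToolbox.h>

// Sketch only: apply "fmt" to both sides of an I/O unit.
static OSStatus ApplyFormat(AudioUnit vpio_unit,
                            const AudioStreamBasicDescription *fmt)
{
    // Input bus (1), output scope: the format the unit delivers to the app
    // from the microphone. This call succeeds on the VPIO unit.
    OSStatus status = AudioUnitSetProperty(vpio_unit,
                                           kAudioUnitProperty_StreamFormat,
                                           kAudioUnitScope_Output,
                                           1,
                                           fmt, sizeof(*fmt));
    if (status != noErr) return status;

    // Output bus (0), input scope: the format the app supplies for playback.
    // This is the call that returns kAudioUnitErr_FormatNotSupported here.
    return AudioUnitSetProperty(vpio_unit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input,
                                0,
                                fmt, sizeof(*fmt));
}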

I want to know what causes this difference between the two units.

After making some attempts, I finally found one format that works on the VPIO's (output bus / input scope), like this:

AudioStreamBasicDescription audio_format ;
audio_format.mSampleRate     = 8000.0 ;
audio_format.mBitsPerChannel     = 32 ;
audio_format.mChannelsPerFrame   = 1 ;
audio_format.mBytesPerFrame      = (audio_format.mBitsPerChannel >> 3)  * audio_format.mChannelsPerFrame ;
audio_format.mFramesPerPacket    = 1 ;
audio_format.mBytesPerPacket     = audio_format.mBytesPerFrame * audio_format.mFramesPerPacket ;
audio_format.mFormatID       = kAudioFormatLinearPCM ;
audio_format.mFormatFlags    = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked ;

But what confuses me is that the formats on the VPIO's (input bus / output scope) and (output bus / input scope) don't match. I also want to know how to get the available formats of the VPIO unit. I can't find any documentation about the available formats on the Apple site.
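The only inspection I know of is reading back whatever format the unit currently reports on a given scope/bus (a simplified sketch; PrintCurrentFormat and vpio_unit are placeholder names):

#include <stdio.h>
#include <AudioToolbox/AudioToolbox.h>

// Sketch only: print the format the unit currently reports on the
// output bus (0), input scope.
static void PrintCurrentFormat(AudioUnit vpio_unit)
{
    AudioStreamBasicDescription fmt = {0};
    UInt32 size = sizeof(fmt);
    OSStatus status = AudioUnitGetProperty(vpio_unit,
                                           kAudioUnitProperty_StreamFormat,
                                           kAudioUnitScope_Input,
                                           0,
                                           &fmt, &size);
    if (status == noErr) {
        printf("rate=%.0f id=%u flags=%u bits=%u ch=%u\n",
               fmt.mSampleRate,
               (unsigned)fmt.mFormatID,
               (unsigned)fmt.mFormatFlags,
               (unsigned)fmt.mBitsPerChannel,
               (unsigned)fmt.mChannelsPerFrame);
    }
}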

Can someone answer my question?

Thanks & regards.


3 Answers


I just found a smarter engineer's answer that solves this VoiceProcessing audio unit problem:

How to use the "kAudioUnitSubType_VoiceProcessingIO" subtype of the Core Audio API on Mac OS?

The short answer is: set the format before initializing the unit. How intuitive!
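A rough sketch of what that ordering looks like (CreateVpio and desired_format are placeholder names, not from the linked answer):

#include <AudioToolbox/AudioToolbox.h>

// Sketch only: create the VPIO unit and set the stream format
// BEFORE AudioUnitInitialize.
static OSStatus CreateVpio(const AudioStreamBasicDescription *desired_format,
                           AudioUnit *out_unit)
{
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_VoiceProcessingIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    if (comp == NULL) return -1;

    AudioUnit unit;
    OSStatus status = AudioComponentInstanceNew(comp, &unit);
    if (status != noErr) return status;

    // Set the playback-side format while the unit is still uninitialized.
    status = AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input, 0,
                                  desired_format, sizeof(*desired_format));
    if (status != noErr) return status;

    // Only initialize after the formats are in place.
    status = AudioUnitInitialize(unit);
    if (status != noErr) return status;

    *out_unit = unit;
    return noErr;
}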

answered 2016-06-28T20:02:05.677

I had the same problem as you. When I tried to set the audio format on the voice-processing audio unit, I got error -10865 (kAudioUnitErr_PropertyNotWritable), which means the audio format property is not writable.

If I understand correctly, you cannot set an arbitrary format on this audio unit. If you need audio at an 8000 Hz sample rate, you will have to resample.

This can be done:

  1. by creating an AUGraph and putting a converter audio unit in front of the voice-processing unit (see the sketch after this list), or
  2. by resampling the audio with an external library (for example, swresample from ffmpeg).
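A rough sketch of option 1, assuming app_format is your 8 kHz / 16-bit mono ASBD (BuildGraph is a placeholder name and error handling is omitted):

#include <AudioToolbox/AudioToolbox.h>

// Sketch only: AUConverter node feeding the VPIO node inside an AUGraph.
static OSStatus BuildGraph(const AudioStreamBasicDescription *app_format,
                           AUGraph *out_graph)
{
    AudioComponentDescription conv_desc = {
        .componentType         = kAudioUnitType_FormatConverter,
        .componentSubType      = kAudioUnitSubType_AUConverter,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponentDescription vpio_desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_VoiceProcessingIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AUGraph graph;
    AUNode conv_node, vpio_node;
    NewAUGraph(&graph);
    AUGraphAddNode(graph, &conv_desc, &conv_node);
    AUGraphAddNode(graph, &vpio_desc, &vpio_node);
    AUGraphOpen(graph);

    AudioUnit conv_unit;
    AUGraphNodeInfo(graph, conv_node, NULL, &conv_unit);

    // The app feeds 8 kHz / 16-bit mono into the converter's input; the
    // converter's output format is negotiated when the nodes are connected.
    AudioUnitSetProperty(conv_unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0,
                         app_format, sizeof(*app_format));

    // Converter output 0 -> VPIO output bus 0 (the playback side).
    AUGraphConnectNodeInput(graph, conv_node, 0, vpio_node, 0);

    OSStatus status = AUGraphInitialize(graph);
    *out_graph = graph;
    return status;
}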

Good luck with your development.

answered 2016-06-27T18:56:46.173

I use the following settings to get the best file size and sound quality for voice recording in my project.

    // Set up the audio session
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];

    // Define the recorder settings
    NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];

    [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
    [recordSetting setValue:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
    [recordSetting setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
    [recordSetting setValue:[NSNumber numberWithInt:AVAudioQualityLow] forKey:AVEncoderAudioQualityKey];

    // Initiate and prepare the recorder
    recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:NULL];
    recorder.delegate = self;
    recorder.meteringEnabled = YES;
    [recorder prepareToRecord];

answered 2013-08-22T14:34:24.207