
I've been struggling with this since yesterday, so any help is much appreciated.

I have a multichannel mixer audio unit, and the callback assigned to each channel fills the requested audio buffers when called. I'm trying to record in the same callback by writing the data to a file.

At the moment, if I don't call AudioUnitRender nothing gets written, and if I do call it the audio is recorded as noise and I get two errors: error 10877 and error 50.

The recording code in the callback looks like this:

if (recordingOn) 
{
    AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));

    SInt16 samples[inNumberFrames * 2]; // 2 channels, interleaved
    memset (samples, 0, sizeof (samples));

    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mData = samples;
    bufferList->mBuffers[0].mNumberChannels = 2;
    bufferList->mBuffers[0].mDataByteSize = inNumberFrames * 2 * sizeof(SInt16);

    OSStatus status;
    status = AudioUnitRender(audioObject.mixerUnit,     
                             ioActionFlags, 
                             inTimeStamp, 
                             inBusNumber, 
                             inNumberFrames, 
                             bufferList);

    if (noErr != status) {
        printf("AudioUnitRender error: %ld", (long)status); 
        free(bufferList);
        return noErr;
    }

    ExtAudioFileWriteAsync(audioObject.recordingFile, inNumberFrames, bufferList);
    free(bufferList);
}

Is writing the data in each channel's callback the right approach, or should I connect the mixer to the remote I/O unit instead?

I'm using LPCM, and the ASBD of the recording file (CAF) is:

recordingFormat.mFormatID = kAudioFormatLinearPCM;
recordingFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsPacked;
recordingFormat.mSampleRate = 44100;
recordingFormat.mChannelsPerFrame = 2;
recordingFormat.mFramesPerPacket = 1;
recordingFormat.mBytesPerPacket = recordingFormat.mChannelsPerFrame * sizeof (SInt16);
recordingFormat.mBytesPerFrame = recordingFormat.mChannelsPerFrame * sizeof (SInt16);
recordingFormat.mBitsPerChannel = 16;

I'm not sure what I'm doing wrong.

How does stereo affect the way the recorded data has to be handled before writing it to the file?


2 Answers


There are a couple of problems. If you are trying to record the final "mix", you can add a callback on the I/O unit with AudioUnitAddRenderNotify(iounit, callback, file). The callback then simply takes ioData and passes it to ExtAudioFileWriteAsync(...), so you don't need to create any buffers either.

Sidenote: allocating memory in the render thread is bad. You should avoid all system calls in a render callback; there is no guarantee they will complete within the audio thread's deadline. That is exactly why ExtAudioFileWriteAsync exists: it takes this into account and writes to disk on another thread.
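A minimal sketch of that approach, assuming `ioUnit` is your remote I/O unit and `extFile` an already-configured ExtAudioFileRef (both placeholders; the tap-callback name is mine):

```c
#include <AudioToolbox/AudioToolbox.h>

/* Render-notify tap: fires once before and once after each render
 * cycle of the I/O unit; ioData holds the mixed output post-render. */
static OSStatus tapCallback(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        ExtAudioFileRef extFile = (ExtAudioFileRef)inRefCon;
        /* Non-blocking: copies the data and writes on its own thread. */
        ExtAudioFileWriteAsync(extFile, inNumberFrames, ioData);
    }
    return noErr;
}

void attachTap(AudioUnit ioUnit, ExtAudioFileRef extFile)
{
    /* Prime the async writer with 0 frames once, outside the render
     * thread, so its internal buffers get allocated up front. */
    ExtAudioFileWriteAsync(extFile, 0, NULL);
    AudioUnitAddRenderNotify(ioUnit, tapCallback, extFile);
}
```

Note that no buffers are allocated in the callback itself; everything it touches was set up ahead of time.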

answered 2012-07-18T12:26:35

I found some demo code that may be useful for you.

Demo: https://github.com/JNYJdev/AudioUnit

or

Blog: http://atastypixel.com/blog/using-remoteio-audio-unit/

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    // Because of the way our audio format (setup below) is chosen:
    // we only need 1 buffer, since it is mono
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample

    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc( inNumberFrames * 2 );

    // Put buffer in an AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Then:
    // Obtain recorded samples

    OSStatus status;

    status = AudioUnitRender([iosAudio audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}
answered 2016-12-14T07:01:17