
I am running a SIP audio streaming app on iOS 6.1.3 on an iPad 2 and a new iPad.

I start my app on the iPad (nothing plugged in).
Audio works.
I plug in the headphones.
The app crashes: malloc: error for object 0x....: pointer being freed was not allocated, or EXC_BAD_ACCESS.

Alternatively:

I start my app on the iPad (headphones plugged in).
Audio comes out of the headphones.
I unplug the headphones.
The app crashes: malloc: error for object 0x....: pointer being freed was not allocated, or EXC_BAD_ACCESS.

The app code uses the AudioUnit API and is based on the sample code from http://code.google.com/p/ios-coreaudio-example/ (see below).

I use the kAudioSessionProperty_AudioRouteChange callback to stay aware of route changes, so there are three callbacks registered with the OS sound manager:
1) process the recorded microphone samples
2) provide samples for the speaker
3) be notified when the audio hardware changes

After a lot of testing, my feeling is that the tricky code is the one that performs the microphone capture. After a plug/unplug action, most of the time the recording callback is invoked a few times before the RouteChange callback, leading to a "segmentation fault" later on, and the RouteChange callback is never called. More specifically, I think the AudioUnitRender function causes the bad memory access while no exception is thrown at all.

My feeling is that the non-atomic recording callback code races with the OS updating the structures related to the sound device, so the longer the recording callback takes, the more likely the OS hardware update and the recording callback are to run concurrently.

I modified my code to make the recording callback as thin as possible, but my feeling is that the high processing load brought by the other threads of my app keeps feeding the race described above. The malloc/free errors then show up in other parts of the code as a consequence of the bad access inside AudioUnitRender.
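Roughly, the "thin" version of the callback I was aiming for looks like this (a sketch, not the exact code; it assumes inNumberFrames never exceeds a fixed maximum, 1024 here, which the API does not guarantee):

// Sketch of a thinner recordingCallback body: no malloc/free on the render thread.
// Assumes inNumberFrames <= 1024; the other names are the callback parameters.
static SInt16 captureData[1024];

AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
bufferList.mBuffers[0].mData = captureData;

OSStatus status = AudioUnitRender([iosAudio audioUnit],
    ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
if (status != noErr) {
    return status; // don't touch the buffer if the render failed
}
[iosAudio processAudio:&bufferList];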

I tried to reduce the recording callback latency with:

UInt32 numFrames = 256;
UInt32 dataSize = sizeof(numFrames);

AudioUnitSetProperty(audioUnit,
    kAudioUnitProperty_MaximumFramesPerSlice,
    kAudioUnitScope_Global,
    0,
    &numFrames,
    dataSize);

I also tried to lift the problematic code out of the render callback:

dispatch_async(dispatch_get_main_queue(), ^{
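For reference, a fuller sketch of what that deferral could look like (a reconstruction, not the exact code; the samples have to be copied first because the buffer is not guaranteed to stay valid after the render callback returns, and processAudioData: is a hypothetical variant of processAudio: that accepts the copy):

// Sketch: copy the samples, then push the heavy work off the render thread.
NSData *samples = [NSData dataWithBytes:bufferList.mBuffers[0].mData
                                 length:bufferList.mBuffers[0].mDataByteSize];
dispatch_async(dispatch_get_main_queue(), ^{
    [iosAudio processAudioData:samples]; // hypothetical method, not in the listing below
});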

Does anyone have a hint or a solution for this? To reproduce the error, here is my audio session code:

//
//  IosAudioController.m
//  Aruts
//
//  Created by Simon Epskamp on 10/11/10.
//  Copyright 2010 __MyCompanyName__. All rights reserved.
//

#import "IosAudioController.h"
#import <AudioToolbox/AudioToolbox.h>

#define kOutputBus 0
#define kInputBus 1

IosAudioController* iosAudio;

void checkStatus(int status) {
    if (status) {
        printf("Status not 0! %d\n", status);
        // exit(1);
    }
}

/**
 * This callback is called when new audio data from the microphone is available.
 */
static OSStatus recordingCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {

    // Because of the way our audio format (setup below) is chosen:
    // we only need 1 buffer, since it is mono
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample

    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc( inNumberFrames * 2 );

    // Put buffer in a AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    NSLog(@"Recording Callback 1 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // Then:
    // Obtain recorded samples

    OSStatus status;
    status = AudioUnitRender([iosAudio audioUnit],
        ioActionFlags, 
        inTimeStamp,
        inBusNumber,
        inNumberFrames,
        &bufferList);
        checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    NSLog(@"Recording Callback 2 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}

/**
 * This callback is called when the audioUnit needs new data to play through the
 * speakers. If you don't have any, just don't write anything in the buffers
 */
static OSStatus playbackCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {
        // Notes: ioData contains buffers (may be more than one!)
        // Fill them up as much as you can.
        // Remember to set the size value in each 
        // buffer to match how much data is in the buffer.

    for (int i=0; i < ioData->mNumberBuffers; i++) {
        // in practice we will only ever have 1 buffer, since audio format is mono
        AudioBuffer buffer = ioData->mBuffers[i];

        // NSLog(@"  Buffer %d has %d channels and wants %d bytes of data.", i, 
            buffer.mNumberChannels, buffer.mDataByteSize);

        // copy temporary buffer data to output buffer
        UInt32 size = MIN(buffer.mDataByteSize,
            [iosAudio tempBuffer].mDataByteSize);

        // don't copy more data than we have, or than fits
        memcpy(buffer.mData, [iosAudio tempBuffer].mData, size);
        // indicate how much data we wrote in the buffer
        buffer.mDataByteSize = size;

        // uncomment to hear random noise
        /*
         * UInt16 *frameBuffer = buffer.mData;
         * for (int j = 0; j < inNumberFrames; j++) {
         *     frameBuffer[j] = rand();
         * }
         */
    }

    return noErr;
}

@implementation IosAudioController
@synthesize audioUnit, tempBuffer;

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID == kAudioSessionProperty_AudioRouteChange) {

        UInt32 isAudioInputAvailable;
        UInt32 size = sizeof(isAudioInputAvailable);
        CFStringRef newRoute = NULL;
        size = sizeof(CFStringRef);

        AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);

        if (newRoute) {
            CFIndex length = CFStringGetLength(newRoute);
            CFIndex maxSize = CFStringGetMaximumSizeForEncoding(length,
                kCFStringEncodingUTF8);

            char *buffer = (char *)malloc(maxSize);
            CFStringGetCString(newRoute, buffer, maxSize,
                kCFStringEncodingUTF8);

            //CFShow(newRoute);
            printf("New route is %s\n",buffer);

            if (CFStringCompare(newRoute, CFSTR("HeadsetInOut"), NULL) == 
                kCFCompareEqualTo) // headset plugged in
            {
                printf("Headset\n");
            } else {
                printf("Another device\n");

                UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
                AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                    sizeof (audioRouteOverride),&audioRouteOverride);
            }
            printf("New route is %s\n",buffer);
            free(buffer);
        }
        if (newRoute) CFRelease(newRoute); // the route string must be released by the caller
    } 
}

/**
 * Initialize the audioUnit and allocate our own temporary buffer.
 * The temporary buffer will hold the latest data coming in from the microphone,
 * and will be copied to the output when this is requested.
 */
- (id) init {
    self = [super init];
    OSStatus status;

    // Initialize and configure the audio session
    AudioSessionInitialize(NULL, NULL, NULL, self);

    UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, 
        sizeof(audioCategory), &audioCategory);
    AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, 
        propListener, self);

    Float32 preferredBufferSize = .020;
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
        sizeof(preferredBufferSize), &preferredBufferSize);

    AudioSessionSetActive(true);

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = 
        kAudioUnitSubType_VoiceProcessingIO/*kAudioUnitSubType_RemoteIO*/;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Input, 
        kInputBus,
        &flag, 
        sizeof(flag));
        checkStatus(status);

    // Enable IO for playback
    flag = 1;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Output, 
        kOutputBus,
        &flag, 
        sizeof(flag));

    checkStatus(status);

    // Describe format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 8000.00;
    //audioFormat.mSampleRate = 44100.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = 
        kAudioFormatFlagsCanonical/* kAudioFormatFlagIsSignedInteger | 
        kAudioFormatFlagIsPacked*/;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    // Apply format
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Output, 
        kInputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Input, 
        kOutputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_SetInputCallback, 
        kAudioUnitScope_Global, 
        kInputBus, 
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);
    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioUnitProperty_SetRenderCallback, 
        kAudioUnitScope_Global, 
        kOutputBus,
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);

    // Disable buffer allocation for the recorder (optional - do this if we want to 
    // pass in our own)

    flag = 0;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_ShouldAllocateBuffer,
        kAudioUnitScope_Output, 
        kInputBus,
        &flag, 
        sizeof(flag)); 


    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
    kAudioUnitProperty_ShouldAllocateBuffer, 
        kAudioUnitScope_Output,
        kOutputBus,
        &flag,
        sizeof(flag));

    // Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per 
    // frame, thus 2 bytes per frame).
    // In practice the buffers turn out to contain 512 frames;
    // if this changes it is handled in processAudio.
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = 512 * 2;
    tempBuffer.mData = malloc( 512 * 2 );

    // Initialise
    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);

    return self;
}

/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);
}

/**
 * Stop the audioUnit
 */
- (void) stop {
    OSStatus status = AudioOutputUnitStop(audioUnit);
    checkStatus(status);
}

/**
 * Change this function to decide what is done with incoming
 * audio data from the microphone.
 * Right now we copy it to our own temporary buffer.
 */
- (void) processAudio: (AudioBufferList*) bufferList {
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    // fix tempBuffer size if it's the wrong size
    if (tempBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        free(tempBuffer.mData);
        tempBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        tempBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // copy incoming audio data to temporary buffer
    memcpy(tempBuffer.mData, bufferList->mBuffers[0].mData, 
        bufferList->mBuffers[0].mDataByteSize);
    usleep(1000000); // <- TO REPRODUCE THE ERROR, CONCURRENCY MORE LIKELY

}

/**
 * Clean up.
 */
- (void) dealloc {
    AudioUnitUninitialize(audioUnit);
    free(tempBuffer.mData);
    [super dealloc];
}

@end

1 Answer


According to my tests, the line that ultimately triggers the SEGV error is

AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
    sizeof (audioRouteOverride), &audioRouteOverride);

Changing the properties of an AudioUnit chain in mid-flight is always tricky, but if you stop the AudioUnit before rerouting and restart it afterwards, it uses up all the buffers it has stored and then carries on with the new parameters.

Is that acceptable, or do you need a shorter gap between changing the route and resuming the recording?

What I did was:

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    [iosAudio stop];
    // ...

    [iosAudio start];
}
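Fleshed out with the override call from the question, the listener could look roughly like this (a sketch; the route-inspection logic is elided):

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID != kAudioSessionProperty_AudioRouteChange) return;

    // Stop the unit so it drains its queued buffers before the route changes.
    [iosAudio stop];

    // The same override that triggered the SEGV while the unit was still running.
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
        sizeof(audioRouteOverride), &audioRouteOverride);

    // Restart with the new parameters.
    [iosAudio start];
}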

My iPhone 5 no longer crashes (your mileage may vary with different hardware).

The most logical explanation for this behaviour, one that these tests support to some extent, is that the render pipeline is asynchronous. If you take a long time manipulating your buffers, they just keep queuing up. But if you change the settings of the AudioUnit, you trigger a massive reset in the render queue, with unknown side effects. The trouble is that these changes are synchronous, so they retroactively affect all the asynchronous calls patiently waiting for their turn.

If you don't mind missing a few samples, you can do something like:

static BOOL isStopped = NO;
static OSStatus recordingCallback(void *inRefCon, //...
{
  if(isStopped) {
    NSLog(@"Stopped, ignoring");
    return noErr;
  }
  // ...
}

static OSStatus playbackCallback(void *inRefCon, //...
{
  if(isStopped) {
    NSLog(@"Stopped, ignoring");
    return noErr;
  }
  // ...
}

// ...

/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(_audioUnit);
    checkStatus(status);

    isStopped = NO;
}

/**
 * Stop the audioUnit
 */
- (void) stop {

    isStopped = YES;

    OSStatus status = AudioOutputUnitStop(_audioUnit);
    checkStatus(status);
}

// ...
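One caveat: the flag is written on the main thread and read on the real-time audio threads, so a plain static BOOL is not guaranteed to be visible across threads. If your toolchain supports C11 atomics, a safer sketch is:

#include <stdatomic.h>

static atomic_bool isStopped;   // false on startup; shared between threads

static OSStatus recordingCallback(void *inRefCon, //...
{
    if (atomic_load(&isStopped)) {
        return noErr; // ignore callbacks that fire after stop was requested
    }
    // ...
}

- (void) stop {
    atomic_store(&isStopped, true);    // publish the flag before stopping the unit
    OSStatus status = AudioOutputUnitStop(_audioUnit);
    checkStatus(status);
}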
answered 2013-05-13 18:42