
OK, so I'm pulling audio from 10 different sample sources using Core Audio, and mixing them together in my callback function.

It works perfectly in the simulator, everything is fine. But I'm running into trouble when I try to run it on a 4.2 iPhone device.

If I mix 2 audio files in the callback, everything works. If I mix 5 or 6 audio files, the audio plays, but after a while it degrades and eventually no audio reaches the speaker. (The callback doesn't stop.)

If I try to mix 10 audio files, the callback runs but no audio comes out at all.

It's almost as if the callback is running out of time, which could explain the 5-or-6-file case, but not the last case, where 10 audio sources are mixed and no audio plays at all.

I'm not sure whether the following has any bearing on it, but this message is always printed to the console when I debug. Could it indicate what the problem is?

mem 0x1000 0x3fffffff cache
mem 0x40000000 0xffffffff none
mem 0x00000000 0x0fff none
run
Running…
[Switching to thread 11523]
[Switching to thread 11523]
Re-enabling shared library breakpoint 1
continue
warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/usr/lib/info/dns.so (file not found).

**Setting up my callback**

#pragma mark -
#pragma mark Callback setup & control

- (void) setupCallback

{
    OSStatus status;


    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    UInt32 flag = 1;
    // Enable IO for playback
    status = AudioUnitSetProperty(audioUnit, 
                                  kAudioOutputUnitProperty_EnableIO, 
                                  kAudioUnitScope_Output, 
                                  kOutputBus,
                                  &flag, 
                                  sizeof(flag));

    //Apply format
    status = AudioUnitSetProperty(audioUnit, 
                                  kAudioUnitProperty_StreamFormat, 
                                  kAudioUnitScope_Input, 
                                  kOutputBus, 
                                  &stereoStreamFormat, 
                                  sizeof(stereoStreamFormat));

    // Set up the playback  callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback; //!!****assignment from incompatible pointer warning here *****!!!!!!
    //set the reference to "self" this becomes *inRefCon in the playback callback
    callbackStruct.inputProcRefCon = self;

    status = AudioUnitSetProperty(audioUnit, 
                                  kAudioUnitProperty_SetRenderCallback, 
                                  kAudioUnitScope_Global, 
                                  kOutputBus,
                                  &callbackStruct, 
                                  sizeof(callbackStruct));

    // Initialise
    status = AudioUnitInitialize(audioUnit); // error check this status


}
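Every `status` result above is silently discarded (the last one even says "error check this status"). A minimal sketch of such a check might look like the following; the `checkStatus` helper name is my own, and the `OSStatus`/`noErr` typedefs are stand-ins so the snippet compiles outside of Core Audio (on iOS they come from the Audio Toolbox headers):

```c
#include <stdio.h>
#include <stdint.h>

/* Stand-in typedefs so the sketch is self-contained;
   on iOS these come from the Core Audio headers. */
typedef int32_t OSStatus;
enum { noErr = 0 };

/* Hypothetical helper: log the failing call, decoding the four-char
   code that Core Audio often packs into an OSStatus (e.g. 'fmt?').
   Returns nonzero on failure so callers can bail out early. */
static int checkStatus(OSStatus status, const char *operation)
{
    if (status == noErr) return 0;

    unsigned char c[4] = {
        (unsigned char)(status >> 24), (unsigned char)(status >> 16),
        (unsigned char)(status >> 8),  (unsigned char)(status)
    };
    /* Only print as characters if all four bytes are printable ASCII. */
    if (c[0] >= 32 && c[0] < 127 && c[1] >= 32 && c[1] < 127 &&
        c[2] >= 32 && c[2] < 127 && c[3] >= 32 && c[3] < 127)
        fprintf(stderr, "%s failed: '%c%c%c%c' (%d)\n",
                operation, c[0], c[1], c[2], c[3], (int)status);
    else
        fprintf(stderr, "%s failed: %d\n", operation, (int)status);
    return 1;
}
```

Each call in `setupCallback` could then be wrapped, e.g. `if (checkStatus(AudioUnitInitialize(audioUnit), "AudioUnitInitialize")) return;`, which would at least rule out a silently failing setup step.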

The callback

static OSStatus playbackCallback (

                                     void                        *inRefCon,      // A pointer to a struct containing the complete audio data 
                                     //    to play, as well as state information such as the  
                                     //    first sample to play on this invocation of the callback.
                                     AudioUnitRenderActionFlags  *ioActionFlags, // Unused here. When generating audio, use ioActionFlags to indicate silence 
                                     //    between sounds; for silence, also memset the ioData buffers to 0.
                                      AudioTimeStamp        *inTimeStamp,   // Unused here.
                                     UInt32                      inBusNumber,    // The mixer unit input bus that is requesting some new
                                     //        frames of audio data to play.
                                     UInt32                      inNumberFrames, // The number of frames of audio to provide to the buffer(s)
                                     //        pointed to by the ioData parameter.
                                     AudioBufferList             *ioData         // On output, the audio data to play. The callback's primary 
                                     //        responsibility is to fill the buffer(s) in the 
                                     //        AudioBufferList.
                                     ) {


    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft;
    AudioUnitSampleType *outSamplesChannelRight;

    outSamplesChannelLeft  = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
    outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

    int thetime =0;
    thetime=remoteIOplayer.sampletime;


        for (int frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
        {
            // get NextPacket returns a 32 bit value, one frame.
            AudioUnitSampleType suml = 0;
            AudioUnitSampleType sumr = 0;

            //NSLog (@"frame number -  %i", frameNumber);
            for(int j=0;j<10;j++)

            {


                AudioUnitSampleType valuetoaddl=0;
                AudioUnitSampleType valuetoaddr=0;


                //valuetoadd = [remoteIOplayer getSample:j ];
                valuetoaddl = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:0 ];
                //valuetoaddl = [remoteIOplayer getSample:j];
                valuetoaddr = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:1 ];

                suml = suml+(valuetoaddl/10);
                sumr = sumr+(valuetoaddr/10);

            }


            outSamplesChannelLeft[frameNumber]=(AudioUnitSampleType) suml;
            outSamplesChannelRight[frameNumber]=(AudioUnitSampleType) sumr;


            remoteIOplayer.sampletime +=1;


        }

    return noErr;
}

My audio fetching function

-(AudioUnitSampleType) getNonInterleavedSample:(int) index currenttime:(int)time channel:(int)ch

{

    AudioUnitSampleType returnvalue= 0;

    soundStruct snd=soundStructArray[index];    
    UInt64 sn= snd.frameCount;  
    UInt64 st=sampletime;
    UInt64 read= (UInt64)(st%sn);


    if(ch==0)
    {
        if (snd.sendvalue==1) {
            returnvalue = snd.audioDataLeft[read];

        }else {
            returnvalue=0;
        }

    }else if(ch==1)

    {
        if (snd.sendvalue==1) {
        returnvalue = snd.audioDataRight[read];
        }else {
            returnvalue=0;
        }

        soundStructArray[index].sampleNumber=read;
    }


    if(soundStructArray[index].sampleNumber >soundStructArray[index].frameCount)
    {
        soundStructArray[index].sampleNumber=0;

    }

    return returnvalue;


}

Edit 1

In response to @andre, I changed my callback to the following, but it still didn't help.

static OSStatus playbackCallback (

                                     void                        *inRefCon,      // A pointer to a struct containing the complete audio data 
                                     //    to play, as well as state information such as the  
                                     //    first sample to play on this invocation of the callback.
                                     AudioUnitRenderActionFlags  *ioActionFlags, // Unused here. When generating audio, use ioActionFlags to indicate silence 
                                     //    between sounds; for silence, also memset the ioData buffers to 0.
                                      AudioTimeStamp        *inTimeStamp,   // Unused here.
                                     UInt32                      inBusNumber,    // The mixer unit input bus that is requesting some new
                                     //        frames of audio data to play.
                                     UInt32                      inNumberFrames, // The number of frames of audio to provide to the buffer(s)
                                     //        pointed to by the ioData parameter.
                                     AudioBufferList             *ioData         // On output, the audio data to play. The callback's primary 
                                     //        responsibility is to fill the buffer(s) in the 
                                     //        AudioBufferList.
                                     ) {


    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft;
    AudioUnitSampleType *outSamplesChannelRight;

    outSamplesChannelLeft  = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
    outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

    int thetime =0;
    thetime=remoteIOplayer.sampletime;


        for (int frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
        {
            // get NextPacket returns a 32 bit value, one frame.
            AudioUnitSampleType suml=0;
            AudioUnitSampleType sumr=0;

            //NSLog (@"frame number -  %i", frameNumber);
            for(int j=0;j<16;j++)

            {



                soundStruct snd=remoteIOplayer->soundStructArray[j];
                UInt64 sn= snd.frameCount;  
                UInt64 st=remoteIOplayer.sampletime;
                UInt64 read= (UInt64)(st%sn);

                suml += snd.audioDataLeft[read];
                sumr += snd.audioDataRight[read];


            }


            outSamplesChannelLeft[frameNumber]=(AudioUnitSampleType) suml;
            outSamplesChannelRight[frameNumber]=(AudioUnitSampleType) sumr;


            remoteIOplayer.sampletime +=1;


        }

    return noErr;
}

3 Answers

  1. Like Andre said, it's best not to have any Objective-C function calls in the callback. You should also change the inputProcRefCon to a C struct instead of an Objective-C object.

  2. Also, it looks like you may be copying data into the buffers "manually", frame by frame. Instead, use memcpy to copy a large chunk of data at once.

  3. Also, I'm fairly sure you're not doing disk I/O in your callback, but if you are, you shouldn't do that either.
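Points 1 and 2 might be sketched like this; the `MixerState` struct and `copyChannel` helper are hypothetical names, and `AudioUnitSampleType` is replaced with a stand-in typedef so the snippet is self-contained:

```c
#include <stdint.h>
#include <string.h>

typedef int32_t AudioUnitSampleType;  /* stand-in for the Core Audio typedef */

/* Hypothetical plain-C state handed to the callback via inputProcRefCon
   instead of "self", so no Objective-C runs on the render thread. */
typedef struct {
    AudioUnitSampleType *audioDataLeft[10];
    AudioUnitSampleType *audioDataRight[10];
    uint64_t             frameCount[10];
    uint64_t             sampleTime;
} MixerState;

/* Copy one channel in bulk with memcpy instead of a frame-by-frame loop;
   this only applies where the source data is contiguous and no per-frame
   mixing is needed (e.g. a single pre-mixed buffer). */
static void copyChannel(AudioUnitSampleType *dst,
                        const AudioUnitSampleType *src,
                        uint32_t numFrames)
{
    memcpy(dst, src, numFrames * sizeof(AudioUnitSampleType));
}
```

With something like this, `callbackStruct.inputProcRefCon = &mixerState;` hands the callback plain C data, and no `objc_msgSend` ever runs inside the render loop.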

answered 2010-12-01 19:56

In my experience, try not to use Objective-C function calls in the RemoteIO callback. They will slow it down. Try moving the "getNonInterleavedSample" function into the callback, using a C struct to access the audio data.
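A plain-C rewrite of the asker's `getNonInterleavedSample` along these lines could look as follows; the struct mirrors the fields the question's code actually touches, and the typedef is a stand-in so the sketch compiles on its own:

```c
#include <stdint.h>

typedef int32_t AudioUnitSampleType;  /* stand-in for the Core Audio typedef */

/* Mirrors the fields of the question's soundStruct that the fetch uses. */
typedef struct {
    AudioUnitSampleType *audioDataLeft;
    AudioUnitSampleType *audioDataRight;
    uint64_t             frameCount;
    int                  sendvalue;
} soundStruct;

/* Plain-C equivalent of -getNonInterleavedSample:currenttime:channel:.
   No objc_msgSend, so it is cheap to call from the render callback. */
static inline AudioUnitSampleType
getNonInterleavedSample(const soundStruct *snd, uint64_t time, int ch)
{
    if (!snd->sendvalue || snd->frameCount == 0) return 0;
    uint64_t read = time % snd->frameCount;   /* wrap around the sample */
    return ch == 0 ? snd->audioDataLeft[read] : snd->audioDataRight[read];
}
```

The callback would then index into a plain array of these structs (reached through `inRefCon`) rather than messaging an Objective-C object per sample.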

answered 2010-12-01 16:04

I'm assuming you're CPU bound; the simulator is much more capable, processing-wise, than the various devices.

The callback may not be keeping up with how often it is being called.

Edit: Could you "pre-compute" the mix (ahead of time, or in another thread), so that it is already mixed by the time the callback fires and the callback has less work to do?
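One way to sketch that pre-computation is a minimal single-producer/single-consumer ring buffer; all names here are hypothetical, and real code would want proper atomics/memory ordering rather than bare `volatile`:

```c
#include <stdint.h>

#define RING_FRAMES 4096   /* power of two, so masking replaces modulo */

/* SPSC ring: a mixer thread pre-mixes frames into `data` ahead of time;
   the render callback only reads. One writer, one reader. */
typedef struct {
    int32_t  data[RING_FRAMES];
    volatile uint32_t writeIndex;   /* advanced only by the mixer thread */
    volatile uint32_t readIndex;    /* advanced only by the callback     */
} MixRing;

static uint32_t ringAvailable(const MixRing *r)
{
    return r->writeIndex - r->readIndex;   /* unsigned wrap is well-defined */
}

static void ringWrite(MixRing *r, int32_t frame)
{
    r->data[r->writeIndex & (RING_FRAMES - 1)] = frame;
    r->writeIndex++;
}

static int32_t ringRead(MixRing *r)
{
    int32_t frame = r->data[r->readIndex & (RING_FRAMES - 1)];
    r->readIndex++;
    return frame;
}
```

The mixer thread keeps `ringWrite`-ing pre-mixed frames ahead of playback; the callback just drains `inNumberFrames` frames with `ringRead`, which is far cheaper than mixing 10 sources per frame on the render thread.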

answered 2010-12-01 14:56