
I have modified the code provided by Tim Bolstad at http://timbolstad.com/2010/03/16/core-audio-getting-started-pt2/ (bless him) and added a small slider so I can change the output tone frequency from 40Hz to 200000Hz. I would now like to apply a low-pass filter (LPF) to the generated tone.

First of all, does anyone have a detailed guide explaining how to do this? I have tried simply adding a node between the two, but it doesn't work. Apparently I need to convert the 16-bit integer samples to the 8.24 fixed-point format, feed those samples to the filter, and then convert back to 16-bit integers. Is that the problem, or have I connected the nodes incorrectly? And where should I set the filter's cutoff frequency and its other parameters?
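For reference, the 16-bit ↔ 8.24 conversion mentioned above can be sketched in plain C. The shift amount assumes the canonical iOS 8.24 layout, where a full-scale 16-bit sample maps to roughly ±1.0; the helper names are made up for illustration:

```c
#include <stdint.h>

// 8.24 fixed point: 8 integer bits, 24 fractional bits.
// A 16-bit sample in [-32768, 32767] maps to roughly [-1.0, 1.0),
// so we shift left by 24 - 15 = 9 bits.
static int32_t int16_to_8_24(int16_t s)
{
    return (int32_t)s << 9;
}

static int16_t int8_24_to_int16(int32_t s)
{
    return (int16_t)(s >> 9);
}
```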

Also, could someone explain what AudioUnitGetProperty does? Apple's documentation on these topics is fragmented and nearly worthless :(

-(void) initializeAUGraph
{

    OSStatus result = noErr;

    result = NewAUGraph(&mGraph);

    AUNode outputNode;
    AUNode mixerNode;
    AUNode effectsNode;

    AudioComponentDescription effects_desc;
    effects_desc.componentType = kAudioUnitType_Effect;
    effects_desc.componentSubType = kAudioUnitSubType_LowPassFilter;
    effects_desc.componentFlags = 0;
    effects_desc.componentFlagsMask = 0;
    effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponentDescription mixer_desc;
    mixer_desc.componentType=kAudioUnitType_Mixer;
    mixer_desc.componentSubType=kAudioUnitSubType_MultiChannelMixer;
    mixer_desc.componentFlags=0;
    mixer_desc.componentFlagsMask=0;
    mixer_desc.componentManufacturer=kAudioUnitManufacturer_Apple;

    AudioComponentDescription output_desc;
    output_desc.componentType = kAudioUnitType_Output;
    output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
    output_desc.componentFlags = 0;
    output_desc.componentFlagsMask = 0;
    output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
    result = AUGraphAddNode(mGraph, &mixer_desc, &mixerNode);
    result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode);

    result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, effectsNode, 0);
    result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, outputNode, 0);

    result=AUGraphOpen(mGraph);

    // Get the mixer and effects units back from their nodes

    result = AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
    result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);

    UInt32 numbuses = 1;
    UInt32 size = sizeof(numbuses);
    result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &numbuses, size);


    //=====

    CAStreamBasicDescription desc;

    // Loop through and setup a callback for each source you want to send to the mixer.
    // Right now we are only doing a single bus so we could do without the loop.
    for (int i = 0; i < numbuses; ++i) 
    {

        // Setup render callback struct
        // This struct describes the function that will be called
        // to provide a buffer of audio samples for the mixer unit.
        AURenderCallbackStruct renderCallbackStruct;
        renderCallbackStruct.inputProc = &renderInput;
        renderCallbackStruct.inputProcRefCon = self;

        // Set a callback for the specified node's specified input
        result = AUGraphSetNodeInputCallback(mGraph, mixerNode, i, &renderCallbackStruct);

        // Get a CAStreamBasicDescription from the mixer bus.
        size = sizeof(desc);
        result = AudioUnitGetProperty(  mMixer,
                                      kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input,
                                      i,
                                      &desc,
                                      &size);
        // Initializes the structure to 0 to ensure there are no spurious values.
        memset (&desc, 0, sizeof (desc));                               

        // Make modifications to the CAStreamBasicDescription
        // We're going to use 16 bit Signed Ints because they're easier to deal with
        // The Mixer unit will accept either 16 bit signed integers or
        // 32 bit 8.24 fixed point integers.

        desc.mSampleRate = kGraphSampleRate; // set sample rate
        desc.mFormatID = kAudioFormatLinearPCM;
        desc.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        desc.mBitsPerChannel = sizeof(AudioSampleType) * 8; // AudioSampleType == 16 bit signed ints
        desc.mChannelsPerFrame = 1;
        desc.mFramesPerPacket = 1;
        desc.mBytesPerFrame = ( desc.mBitsPerChannel / 8 ) * desc.mChannelsPerFrame;
        desc.mBytesPerPacket = desc.mBytesPerFrame * desc.mFramesPerPacket;

        printf("Mixer file format: "); desc.Print();
        // Apply the modified CAStreamBasicDescription to the mixer input bus
        result = AudioUnitSetProperty(  mMixer,
                                      kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input,
                                      i,
                                      &desc,
                                      sizeof(desc));
    }

    // Apply the CAStreamBasicDescription to the mixer output bus
    result = AudioUnitSetProperty(   mMixer,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  0,
                                  &desc,
                                  sizeof(desc));

    //************************************************************
    //*** Setup the audio output stream ***
    //************************************************************

    // Get a CAStreamBasicDescription from the output Audio Unit
    result = AudioUnitGetProperty(  mMixer,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  0,
                                  &desc,
                                  &size);

    // Initializes the structure to 0 to ensure there are no spurious values.
    memset (&desc, 0, sizeof (desc));

    // Make modifications to the CAStreamBasicDescription
    // AUCanonical on the iPhone is the 8.24 integer format that is native to the iPhone.
    // The Mixer unit does the format shifting for you.
    desc.SetAUCanonical(1, true);
    desc.mSampleRate = kGraphSampleRate;

    // Apply the modified CAStreamBasicDescription to the output Audio Unit
    result = AudioUnitSetProperty(  mMixer,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  0,
                                  &desc,
                                  sizeof(desc));

    // Once everything is set up call initialize to validate connections
    result = AUGraphInitialize(mGraph);
}

1 Answer


> Could someone explain what AudioUnitGetProperty does?

Well, it gets the value of a property from an audio unit. A "property" is usually something you deal with as a programmer (e.g. the audio stream format, connection state), whereas a "parameter" is usually something you expose to the user (e.g. the low-pass cutoff frequency, mixer volume). Note that there are AudioUnitGetParameter and AudioUnitSetParameter functions to complement AudioUnitGetProperty and AudioUnitSetProperty.
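A minimal sketch of the distinction, assuming `filterUnit` is an already-opened AULowPass instance (the variable name is made up):

```c
// A *property* read: something the programmer cares about,
// here the stream format on input bus 0.
AudioStreamBasicDescription fmt;
UInt32 size = sizeof(fmt);
AudioUnitGetProperty(filterUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input,
                     0,          // element (bus)
                     &fmt,
                     &size);

// A *parameter* write: something you would expose to the user,
// here the low-pass cutoff frequency.
AudioUnitSetParameter(filterUnit,
                      kLowPassParam_CutoffFrequency,
                      kAudioUnitScope_Global,
                      0,          // element
                      800.0f,     // value in Hz
                      0);         // buffer offset in frames
```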

You're basically expected to "just know" what an audio unit's properties and parameters are and what values they expect. The best source of documentation on this is two headers in AudioUnit.framework, namely AudioUnitProperties.h and AudioUnitParameters.h. The next best source is Xcode's autocompletion: for example, AULowPass's parameters are kLowPassParam_CutoffFrequency and kLowPassParam_Resonance, so you can just type kLowPassParam and Xcode will show you what's available. Other AUs generally follow this scheme.

> ...but it doesn't work, apparently

I need more information here. Do you mean you can't hear any difference? AULowPass starts out with a very high cutoff frequency, so unless you set it to something lower you may not hear any difference at all.

Try setting the cutoff frequency to something rather low, for example 500Hz. You do that like this:

AudioUnitSetParameter(mEffects,
                      kLowPassParam_CutoffFrequency,
                      kAudioUnitScope_Global,
                      0,      // element
                      500,    // cutoff in Hz
                      0);     // buffer offset in frames
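To go further, the filter's resonance can be set the same way, and the current value of any parameter can be read back with AudioUnitGetParameter. A hedged sketch (the 5.0 dB resonance is just an example value; OSStatus checking is omitted for brevity):

```c
// Set the resonance (in dB) the same way as the cutoff:
AudioUnitSetParameter(mEffects,
                      kLowPassParam_Resonance,
                      kAudioUnitScope_Global,
                      0,
                      5.0f,   // resonance in dB (example value)
                      0);

// Read a parameter back to verify what the unit is actually using:
AudioUnitParameterValue cutoff = 0;
AudioUnitGetParameter(mEffects,
                      kLowPassParam_CutoffFrequency,
                      kAudioUnitScope_Global,
                      0,
                      &cutoff);
```

Wiring your slider's value into the same AudioUnitSetParameter call would let the user sweep the cutoff live.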
answered 2012-10-11T15:06:32.413