
Over the weekend I hit a stumbling block while learning how to program audio synthesis on iOS. I have been developing on iOS for several years, but I am just getting into the audio synthesis side of it. Right now I am just writing demo apps to help me learn the concepts. I am currently able to build and stack sine waves in a playback renderer for Audio Units without a problem. But I want to understand what is going on in the renderer so that I can render 2 separate sine waves, one in each of the left and right channels. Currently, I assume that in my init audio section I need to make the following change:

From:

AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = kSampleRate;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

To:

AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = kSampleRate;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 2;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 4;
    audioFormat.mBytesPerFrame = 4;

However, the renderer is somewhat Greek to me. I have worked through every tutorial and piece of example code I could find. I can make things work in the context of a given mono signal, but I cannot get the renderer to generate a stereo signal. All I want is one distinct frequency in the left channel and a different frequency in the right channel - but honestly I don't understand the renderer well enough to get it working. I have tried memcpy into mBuffers[0] and mBuffers[1], but that crashes the app. My render is below (it currently contains stacked sine waves, but for the stereo example I could just use a single wave of a set frequency in each channel).

#define kOutputBus 0
#define kSampleRate 44100
//44100.0f
#define kWaveform (M_PI * 2.0f / kSampleRate)

OSStatus playbackCallback(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData) {
    HomeViewController *me = (HomeViewController *)inRefCon;

    static int phase = 1;
    static int phase1 = 1;

    for(UInt32 i = 0; i < ioData->mNumberBuffers; i++) {

        int samples = ioData->mBuffers[i].mDataByteSize / sizeof(SInt16);

        SInt16 values[samples];

        float waves;
        float volume=.5;
        float wave1;

        for(int j = 0; j < samples; j++) {


            waves = 0;
            wave1 = 0;

            MyManager *sharedManager = [MyManager sharedManager];


            wave1 = sin(kWaveform * sharedManager.globalFr1 * phase1)*sharedManager.globalVol1;
            if (0.000001f > wave1) {
                [me setFr1:sharedManager.globalFr1];
                phase1 = 0;
                //NSLog(@"switch");
            }

            waves += wave1;
            waves += sin(kWaveform * sharedManager.globalFr2 * phase)*sharedManager.globalVol2;
            waves += sin(kWaveform * sharedManager.globalFr3 * phase)*sharedManager.globalVol3;
            waves += sin(kWaveform * sharedManager.globalFr4 * phase)*sharedManager.globalVol4;
            waves += sin(kWaveform * sharedManager.globalFr5 * phase)*sharedManager.globalVol5;
            waves += sin(kWaveform * sharedManager.globalFr6 * phase)*sharedManager.globalVol6;
            waves += sin(kWaveform * sharedManager.globalFr7 * phase)*sharedManager.globalVol7;
            waves += sin(kWaveform * sharedManager.globalFr8 * phase)*sharedManager.globalVol8;
            waves += sin(kWaveform * sharedManager.globalFr9 * phase)*sharedManager.globalVol9;
            waves *= 32767 / 9; // <--------- make sure to divide by how many waves you're stacking

            values[j] = (SInt16)waves;
            values[j] += values[j]<<16;

            phase++;
            phase1++;

        }

        memcpy(ioData->mBuffers[i].mData, values, samples * sizeof(SInt16));

    }


    return noErr;

}

Thanks in advance for your help!


1 Answer


The OP seems to have solved his problem, but I think an explicit answer will help the rest of us.

I had the same problem of wanting to direct tones independently to the left and right channels. It's easiest to describe in terms of Matt Gallagher's now-standard An iOS tone generator (an introduction to AudioUnits).

The first change to make (following @jwkerr) is to set streamFormat.mChannelsPerFrame = 2; (instead of streamFormat.mChannelsPerFrame = 1;) in createToneUnit. Once that is done and you have two channels/buffers in each frame, you need to fill the left and right buffers independently in RenderTone():

// Set the left and right buffers independently
Float32 tmp;
Float32 *buffer0 = (Float32 *)ioData->mBuffers[0].mData;
Float32 *buffer1 = (Float32 *)ioData->mBuffers[1].mData;

// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
    tmp = sin(theta) * amplitude;

    if (channelLR[0]) buffer0[frame] = tmp; else buffer0[frame] = 0;
    if (channelLR[1]) buffer1[frame] = tmp; else buffer1[frame] = 0;

    theta += theta_increment;
    if (theta > 2.0 * M_PI) theta -= 2.0 * M_PI;
}

Of course channelLR[2] is a bool array whose elements you set to indicate whether the respective channel is audible. Note that the program needs to explicitly set the frames of silent channels to zero, otherwise you get some funny tones.
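The reason this callback gets two separate mBuffers - unlike the interleaved SInt16 format in the question - is that Matt Gallagher's tone generator uses a non-interleaved Float32 stream format, where each buffer carries exactly one channel. A sketch of the corresponding stream description (flag names are from Core Audio; treat the exact field values as an assumption mirroring his createToneUnit, with only mChannelsPerFrame changed as described above):

```c
streamFormat.mSampleRate       = 44100;
streamFormat.mFormatID         = kAudioFormatLinearPCM;
streamFormat.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked |
                                 kAudioFormatFlagIsNonInterleaved;
streamFormat.mFramesPerPacket  = 1;
streamFormat.mChannelsPerFrame = 2;                  // the change described above
streamFormat.mBitsPerChannel   = 8 * sizeof(Float32);
// Non-interleaved: each buffer holds ONE channel, so the per-frame and
// per-packet byte counts describe a single channel, not both.
streamFormat.mBytesPerFrame    = sizeof(Float32);
streamFormat.mBytesPerPacket   = sizeof(Float32);
```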

answered 2013-04-27 at 17:19