
I'm implementing FFT-based pitch detection on the iPhone using Apple's Accelerate framework, as previously discussed.

I understand about phase offsets and bin frequencies, and I've studied several open-source tuners that detect pitch using FFT techniques (simple pitch detection, autocorrelation, cepstrum, etc.). Here's my problem:

My FFT results are consistently off by 5-10 Hz (+/-), even when the bins are only 1-2 Hz apart. I've tried different algorithms, and even a simple FFT sampled at high resolution shows magnitude spikes in seemingly wrong places. It's not a consistent offset; some are too high, some too low.

For instance, using a tone generator, a 440 Hz tone shows up as 445.2 Hz; 220 Hz as 214 Hz; 880 Hz as 874 Hz; 1174 Hz as 1183 Hz. A similar open-source tuner for the Mac uses almost exactly the same algorithm and detects pitch perfectly. (These differences vary between the device and the simulator, but in both cases they're off.)

I don't believe the problem is bin resolution, because there are often several bins between the actual tone and the detected magnitude spike. It's as if the input is simply hearing the wrong pitch.

I've pasted my code below. The general flow is simple:

Push one step onto the FFT buffer -> Hann window -> FFT -> Phase/Magnitude -> Max pitch is wrong.

enum {
    kOversample = 4,
    kSamples = MAX_FRAME_LENGTH,
    kSamples2 = kSamples / 2,
    kRange = kSamples * 5 / 16,
    kStep = kSamples / kOversample
};



const int PENDING_LEN = kSamples * 5;
static float pendingAudio[PENDING_LEN]; // array of PENDING_LEN floats
static int pendingAudioLength = 0;

- (void)processBuffer {
    static float window[kSamples];
    static float phase[kRange];
    static float lastPhase[kRange];
    static float phaseDeltas[kRange];
    static float frequencies[kRange];
    static float slidingFFTBuffer[kSamples];
    static float buffer[kSamples];

    static BOOL initialized = NO;
    if (!initialized) {
        memset(lastPhase, 0, kRange * sizeof(float));

        vDSP_hann_window(window, kSamples, 0);
        initialized = YES;
    }

    BOOL canProcessNewStep = YES;
    while (canProcessNewStep) {        

        @synchronized (self) {
            if (pendingAudioLength < kStep) {
                break; // not enough data
            }            
            // Rotate one step's worth of pendingAudio onto the end of slidingFFTBuffer
            memmove(slidingFFTBuffer, slidingFFTBuffer + kStep, (kSamples - kStep) * sizeof(float));
            memmove(slidingFFTBuffer + (kSamples - kStep), pendingAudio, kStep * sizeof(float));
            memmove(pendingAudio, pendingAudio + kStep, (PENDING_LEN - kStep) * sizeof(float));
            pendingAudioLength -= kStep;   
            canProcessNewStep = (pendingAudioLength >= kStep);
        }

        // Hann Windowing
        vDSP_vmul(slidingFFTBuffer, 1, window, 1, buffer, 1, kSamples);      
        vDSP_ctoz((COMPLEX *)buffer, 2, &splitComplex, 1, kSamples2);        

        // Carry out a Forward FFT transform.
        vDSP_fft_zrip(fftSetup, &splitComplex, 1, log2f(kSamples), FFT_FORWARD);        

        // magnitude to decibels
        static float magnitudes[kRange];        
        vDSP_zvmags(&splitComplex, 1, magnitudes, 1, kRange);        
        float zero = 1.0;
        vDSP_vdbcon(magnitudes, 1, &zero, magnitudes, 1, kRange, 0); // to decibels

        // phase
        vDSP_zvphas(&splitComplex, 1, phase, 1, kRange); // compute phase
        vDSP_vsub(lastPhase, 1, phase, 1, phaseDeltas, 1, kRange); // compute phase difference
        memcpy(lastPhase, phase, kRange * sizeof(float)); // save old phase

        double freqPerBin = sampleRate / (double)kSamples;
        double phaseStep = 2.0 * M_PI * (float)kStep / (float)kSamples;

        // process phase difference ( via https://stackoverflow.com/questions/4633203 )
        for (int k = 1; k < kRange; k++) {
            double delta = phaseDeltas[k];
            delta -= k * phaseStep;  // subtract expected phase difference
            delta = remainder(delta, 2.0 * M_PI);  // map delta phase into +/- M_PI interval
            delta /= phaseStep;  // calculate diff from bin center frequency
            frequencies[k] = (k + delta) * freqPerBin;  // calculate the true frequency
        }               

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        MCTunerData *tunerData = [[[MCTunerData alloc] initWithSize:MAX_FRAME_LENGTH] autorelease];        

        double maxMag = -INFINITY;
        float maxFreq = 0;
        for (int i=0; i < kRange; i++) {
            [tunerData addFrequency:frequencies[i] withMagnitude:magnitudes[i]];
            if (magnitudes[i] > maxMag) {
                maxFreq = frequencies[i];
                maxMag = magnitudes[i];
            }
        }

        NSLog(@"Max Frequency: %.1f", maxFreq);

        [tunerData calculate];

        // Update the UI with our newly acquired frequency value.
        [self.delegate frequencyChangedWithValue:[tunerData mainFrequency] data:tunerData];

        [pool drain];
    }

}

OSStatus renderCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, 
                       const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, 
                       AudioBufferList *ioData)
{
    MCTuner* tuner = (MCTuner *)inRefCon;    

    OSStatus err = AudioUnitRender(tuner->audioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, tuner->bufferList);
    if (err < 0) {
        return err;
    }

    // convert SInt16 to float because iOS doesn't support recording floats directly
    SInt16 *inputInts = (SInt16 *)tuner->bufferList->mBuffers[0].mData;

    @synchronized (tuner) {
        if (pendingAudioLength + inNumberFrames < PENDING_LEN) {

            // Append the audio that just came in onto the pending audio buffer, converting to float
            for(int i = 0; i < inNumberFrames; i++) {
                pendingAudio[pendingAudioLength + i] = (inputInts[i] + 0.5) / 32767.5;
            }
            pendingAudioLength += inNumberFrames;
        } else {
            // the buffer got too far behind. Don't give any more audio data.
            NSLog(@"Dropping frames...");
        }
        if (pendingAudioLength >= kStep) {
            [tuner performSelectorOnMainThread:@selector(processBuffer) withObject:nil waitUntilDone:NO];
        }
    }

    return noErr;
}
4 Answers


I haven't read your code in detail, but this jumped out at me:

vDSP_zvmags(&splitComplex, 1, magnitudes, 1, kRange);

It's important to remember that the result of a real-to-complex FFT is packed into a somewhat odd layout. If the real and imaginary parts of the j-th Fourier coefficient are denoted R(j) and I(j) respectively, then the real and imag components of the splitComplex object contain the following:

.real = {  R(0) , R(1), R(2), ... , R(n/2 - 1) } 
.imag = { R(n/2), I(1), I(2), ... , I(n/2 - 1) }

So your magnitude calculation is doing something odd; the first entry in your magnitude vector is sqrt(R(0)^2 + R(n/2)^2), where it should simply be |R(0)|. I haven't worked through all the constants carefully, but it seems likely that this is causing you to lose the Nyquist band (R(n/2)) or a similar error. An error of this kind can cause frequency bands to be treated as slightly wider or narrower than they really are, which would stretch or compress detected pitches by a small amount across the whole range, matching what you're seeing.

Answered 2011-03-31

It turned out not to be anything in my algorithm after all. Rather, something was wrong with my use of Apple's AUGraph. When I dropped it and just used a plain audio unit without setting up a graph, pitch was identified correctly.

Answered 2011-04-06

An FFT is a chainsaw, not a scalpel. In general, to sanity-check FFT code, (1) test with Parseval's theorem (the mean square amplitude in the time domain should equal the sum over the spectrum, to within rounding), and (2) inverse-FFT the result and listen to it. Sorry, but it seems you expect too much absolute accuracy from an FFT; you simply won't get it. That said, there is a short list of things to check in your code. Most implementations pack DC and Nyquist together to keep the memory allocation even, but you must manually move the Nyquist term to where it belongs and zero out various things:

A.realp[NOVER2] = A.imagp[0];   // move real Nyquist term to where it belongs
A.imagp[NOVER2] = 0.0;          // this is zero
A.imagp[0] = 0.0;               // make this a true zero

On audio data, DC should be zero (i.e., amplitudes have zero mean), but over short windows it may not be; I leave it alone. You are doing much more than you need to find the max bin (the comment about the phase vocoder is correct). IMHO, using a Hamming window hurts accuracy. I get much better results padding the end of the real data with lots (4x) of zeros. Good luck.

Answered 2012-02-28

It appears you're using not just an FFT, but a phase vocoder after the FFT to adjust the estimated bin frequency. Depending on the phase gain and limits, the phase-vocoder adjustment can pull the estimated frequency well outside the FFT bin's width. If that's happening, using narrower bins (a longer FFT) won't help. You may want a sanity check for whether the peak frequency is being pulled outside its FFT frequency bin. Or try removing the phase vocoder entirely and see whether the FFT alone returns more reasonable results.

Answered 2011-04-01