
I am working on the Android platform. From the reference question below I understand that with the AudioRecord class, which returns raw audio data, I can filter an audio range depending on my needs, but for that I need an algorithm. Can someone help me find an algorithm to filter the range between 14,400 bph and 16,200 bph?

I tried "JTransforms", but I don't know whether it can be used for this. At the moment I am using "jfftpack" to display the visuals, which works very well, but I have not been able to implement the audio filter with it.

Reference here.

Any help is appreciated, thanks in advance. Below is the code I mentioned above; I am using the "jfftpack" library for the display, so you may find references to that library in the code. Please don't be confused by it.

private class RecordAudio extends AsyncTask<Void, double[], Void> {

    @Override
    protected Void doInBackground(Void... params) {
        try {
            final AudioRecord audioRecord = findAudioRecord();
            if (audioRecord == null) {
                return null;
            }

            final short[] buffer = new short[blockSize];
            final double[] toTransform = new double[blockSize];

            audioRecord.startRecording();

            while (started) {
                final int bufferReadResult = audioRecord.read(buffer, 0, blockSize);

                for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
                    toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
                }

                transformer.ft(toTransform);
                publishProgress(toTransform);
            }

            audioRecord.stop();
            audioRecord.release();
        } catch (Throwable t) {
            Log.e("AudioRecord", "Recording Failed");
        }
        return null;
    }

    /**
     * @param toTransform the transformed audio data to draw
     */
    @Override
    protected void onProgressUpdate(double[]... toTransform) {
        canvas.drawColor(Color.BLACK);
        for (int i = 0; i < toTransform[0].length; i++) {
            int x = i;
            int downy = (int) (100 - (toTransform[0][i] * 10));
            int upy = 100;
            canvas.drawLine(x, downy, x, upy, paint);
        }
        imageView.invalidate();
    }
}

1 Answer


There are many small details in this process that can trip you up. This code is untested and I don't do audio filtering very often, so you should be quite skeptical here. This is the basic process for filtering audio:

  1. Capture the audio buffer
  2. Convert the audio buffer if necessary (bytes to floats)
  3. (Optional) apply a window function, e.g. a Hann window
  4. Take the FFT
  5. Filter the frequencies
  6. Take the inverse FFT

I'm assuming you have some basic knowledge of Android and how to record audio, so I'll only cover steps 4-6 here.
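Steps 2 and 3 are not shown in the answer. Purely as a rough sketch (assuming 16-bit PCM short samples like the buffer in the question, and a Hann window; the method name toFloatWithHann is just illustrative), the conversion could look something like this:

//Sketch of steps 2-3: scale 16-bit PCM samples to floats in [-1, 1] and
//apply a Hann window before taking the FFT.  Illustrative only.
float[] toFloatWithHann(short[] pcm) {
    int n = pcm.length;
    float[] out = new float[n];
    for (int i = 0; i < n; i++) {
        //Hann window: 0.5 * (1 - cos(2*pi*i / (n - 1)))
        float window = 0.5f * (1f - (float) Math.cos(2.0 * Math.PI * i / (n - 1)));
        out[i] = (pcm[i] / 32768f) * window;
    }
    return out;
}

A float buffer produced like this is what the steps 4-6 code below assumes.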

//it is assumed that a float array audioBuffer exists, with even length equal
//to the capture size of your audio buffer

//The FFT is taken over the full buffer
int FFT_SIZE = audioBuffer.length;
FloatFFT_1D mFFT = new FloatFFT_1D(FFT_SIZE); //this is a JTransforms type

//Take the FFT (in place)
mFFT.realForward(audioBuffer);

//audioBuffer now contains FFT_SIZE / 2 complex bins that represent the
//frequency content of your wave.  To get the actual frequency from the bin:
//frequency_of_bin = bin_index * sample_rate / FFT_SIZE

//Assuming the length of audioBuffer is even, the real and imaginary parts
//are stored as follows:
//audioBuffer[2*k]   = Re[k], 0 <= k < FFT_SIZE/2
//audioBuffer[2*k+1] = Im[k], 0 <  k < FFT_SIZE/2
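//For example (illustrative numbers, not from the original answer): with a
//44,100 Hz sample rate and FFT_SIZE = 4096, bin 10 is about
//10 * 44100 / 4096 = 107.7 Hz, and the 14,400-16,200 Hz band falls roughly
//in bins 1338 through 1504.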

//Define the frequencies of interest
float freqMin = 14400;
float freqMax = 16200;

//Loop through the FFT bins and filter frequencies
//(only the first FFT_SIZE / 2 entries are frequency bins)
for (int fftBin = 0; fftBin < FFT_SIZE / 2; fftBin++) {
    //Calculate the frequency of this bin assuming a sampling rate of 44,100 Hz
    float frequency = (float)fftBin * 44100F / (float)FFT_SIZE;

    //Now filter the audio, I'm assuming you wanted to keep the
    //frequencies of interest rather than discard them.
    if(frequency  < freqMin || frequency > freqMax){
        //Calculate the index where the real and imaginary parts are stored
        int real = 2 * fftBin;
        int imaginary = 2 * fftBin + 1;

        //zero out this frequency
        audioBuffer[real] = 0;
        audioBuffer[imaginary] = 0;
    }
}

//Take the inverse FFT to convert the signal from frequency back to time
//domain; pass true so the result is scaled back to the original amplitude
mFFT.realInverse(audioBuffer, true);
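
The answer stops at the inverse FFT. Purely as a sketch of one way to use the result (assuming audioBuffer held samples scaled to [-1, 1], as in the question's conversion loop), you could convert the filtered time-domain signal back to 16-bit PCM for playback or storage:

//Illustrative only: convert the filtered, time-domain floats back to shorts
short[] filtered = new short[audioBuffer.length];
for (int i = 0; i < audioBuffer.length; i++) {
    //Clamp to [-1, 1] before casting to avoid wrap-around on overflow
    float sample = Math.max(-1f, Math.min(1f, audioBuffer[i]));
    filtered[i] = (short) (sample * 32767f);
}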