
I am new to Java programming in general, and to DSP programming in particular, but I am trying to apply reverb to a .wav file. Some of the code I am using comes from: Reverb Algorithm, but it seems I don't fully understand it. When I run my code, I get noise that grows over time and eventually "clips". I think it has something to do with my buffer size, or with the fact that I play the reverberated audio stream through a Clip, but I'm not sure. I would appreciate it if someone could take a look and suggest some improvements to my code.

This is the code I am using:

public void Reverbstart() throws InterruptedException, UnsupportedAudioFileException, IOException, LineUnavailableException {

    int bufferLength = 4000_000;
    Clip clip;
    Line line;
    Line.Info linfo = new Line.Info(Clip.class);
    line = AudioSystem.getLine(linfo);
    clip = (Clip) line;

    File sourceFile = new File("some audiofile");

    AudioFileFormat fileFormat = AudioSystem.getAudioFileFormat(sourceFile);

    AudioFormat audioFormat = fileFormat.getFormat();

    System.out.println(audioFormat);

    AudioInputStream ais = AudioSystem.getAudioInputStream(sourceFile);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();

    int nBufferSize = bufferLength * audioFormat.getFrameSize();
    byte[]  byteBuffer = new byte[nBufferSize];

    int nBytesRead = ais.read(byteBuffer);
    baos.write(byteBuffer, 0, nBytesRead);

    byte[] AudioData = baos.toByteArray();

    int delayMilliseconds = 3000; 
    int delaySamples = (int)((float)delayMilliseconds * 44.1f); //44100 Hz sample rate
    float decay = 0.5f;

    for (int i = 0; i < AudioData.length - delaySamples; i++){

        AudioData[i] += (short)((float)AudioData[i]);
        AudioData[i + delaySamples] += (short)((float)AudioData[i] * decay);
    }

    ByteArrayInputStream bais = new ByteArrayInputStream(AudioData);
    AudioInputStream outputAis = new AudioInputStream(bais, audioFormat, AudioData.length / audioFormat.getFrameSize());


    clip.open(outputAis);

    clip.start();
    Thread.sleep(10000);
    System.out.println(clip.getFramePosition());


}
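For comparison, here is a minimal sketch of the echo/mixing step done on 16-bit samples rather than raw bytes. It is not the code from the question: the class and method names are made up for illustration, and it assumes the stream is 16-bit little-endian PCM, so each sample spans two bytes, the delay is counted in samples rather than bytes, and the sum is clamped to the 16-bit range instead of being allowed to wrap.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class DelaySketch {

    // Decode 16-bit little-endian PCM bytes into samples
    // (assumed format; check AudioFormat.isBigEndian() for real data).
    static short[] toSamples(byte[] pcm) {
        short[] samples = new short[pcm.length / 2];
        ByteBuffer.wrap(pcm).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(samples);
        return samples;
    }

    // Apply a single feed-forward echo in place, with the delay measured in samples.
    static void applyEcho(short[] samples, int delaySamples, float decay) {
        for (int i = 0; i + delaySamples < samples.length; i++) {
            int mixed = samples[i + delaySamples] + (int) (samples[i] * decay);
            // Clamp instead of letting the 16-bit addition overflow (audible as noise).
            samples[i + delaySamples] =
                    (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, mixed));
        }
    }

    public static void main(String[] args) {
        short[] samples = {10000, 0, 0, 0};
        applyEcho(samples, 2, 0.5f);
        System.out.println(Arrays.toString(samples)); // [10000, 0, 5000, 0]
    }
}
```

The key difference from byte-wise mixing is that operating on `byte` values overflows at ±128 and splits each 16-bit sample across two array slots, which may explain the growing noise described above.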