
I'm developing a simple beatbox application. At first I wrote everything in plain Java, and then I discovered the wonderful TarsosDSP framework. But now I've run into a problem I can't solve. Can you help me?

I set up a SilenceDetector, which works well. Then I want to fill a byte[] buffer with the data from the audioEvent inside the process method, and I'm failing at that. The variable audioBuffer is of type ByteArrayOutputStream and is reused at runtime. See the relevant code snippets:

    private void setNewMixer(Mixer mixer)
            throws LineUnavailableException, UnsupportedAudioFileException {

        if (dispatcher != null) {
            dispatcher.stop();
        }
        currentMixer = mixer;

        //final AudioFormat format = new AudioFormat(sampleRate, frameRate, channel, true, true);
        final DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
        final TargetDataLine line = (TargetDataLine) mixer.getLine(dataLineInfo);
        final int numberOfSamples = bufferSize;
        line.open(audioFormat, numberOfSamples);
        line.start();
        final AudioInputStream stream = new AudioInputStream(line);

        JVMAudioInputStream audioStream = new JVMAudioInputStream(stream);
        // create a new dispatcher
        dispatcher = new AudioDispatcher(audioStream, bufferSize, overlap);

        // add the processors; handle percussion events.
        silenceDetector = new SilenceDetector(threshold, false);

        dispatcher.addAudioProcessor(bufferFiller);
        dispatcher.addAudioProcessor(silenceDetector);
        dispatcher.addAudioProcessor(this);

        // run the dispatcher (on a new thread).
        new Thread(dispatcher, "GunNoiseDetector Thread").start();
    }

    final AudioProcessor bufferFiller = new AudioProcessor() {

        @Override
        public boolean process(AudioEvent audioEvent) {
            if (isAdjusting) {
                // copy the dispatcher's (reused) byte buffer before storing it
                byte[] bb = audioEvent.getByteBuffer().clone();
                try {
                    audioBuffer.write(bb);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                System.out.println("current buffer.size(): " + audioBuffer.size());
            } else {
                if (audioBuffer.size() > 0) {
                    try {
                        // drain the accumulated bytes into the samples list
                        byte[] ba = audioBuffer.toByteArray();
                        samples.add(ba);
                        System.out.println("stored: " + ba.length);
                        audioBuffer.flush();
                        audioBuffer.close();
                        audioBuffer = new ByteArrayOutputStream();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
            return true;
        }

        @Override
        public void processingFinished() {
            // nothing to clean up
        }
    };
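As an aside, the flush/close/new-instance sequence above can be replaced by a single `reset()` call on the ByteArrayOutputStream. A minimal, self-contained sketch of the accumulate-then-drain pattern (the sample data here is illustrative, not real audio):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class BufferDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream audioBuffer = new ByteArrayOutputStream();
        List<byte[]> samples = new ArrayList<>();

        // accumulate two "audio" chunks, as the process() callback does
        audioBuffer.write(new byte[]{1, 2, 3});
        audioBuffer.write(new byte[]{4, 5});

        // drain: snapshot the accumulated bytes, then clear for reuse
        byte[] ba = audioBuffer.toByteArray();
        samples.add(ba);
        audioBuffer.reset();  // simpler than flush()/close()/new instance

        System.out.println("stored: " + ba.length);                // stored: 5
        System.out.println("after reset: " + audioBuffer.size());  // after reset: 0
    }
}
```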

    @Override
    public boolean process(AudioEvent audioEvent) {
        if (silenceDetector.currentSPL() > threshold) {
            isAdjusting = true;
            lastAction = System.currentTimeMillis();
        } else {
            isAdjusting = false;
        }
        return true;
    }

Any suggestions?


1 Answer


I found out why it wasn't working! As mentioned here: What is the meaning of frame rate in AudioFormat?

For PCM, A-law and μ-law data, a frame consists of all the data for one sampling interval, which means the frame rate is the same as the sample rate.

So my AudioFormat was wrong!
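Concretely, the AudioFormat constructor that takes an explicit frame rate is easy to misuse for PCM data. The simpler constructor derives the frame size and frame rate from the sample rate, sample size, and channel count, so they can't disagree. A minimal sketch (44.1 kHz, 16-bit, mono values are illustrative):

```java
import javax.sound.sampled.AudioFormat;

public class FormatCheck {
    public static void main(String[] args) {
        float sampleRate = 44100f;
        // For PCM, one frame holds one sample per channel, so the frame
        // rate equals the sample rate; this constructor derives the frame
        // size and frame rate automatically instead of taking them as input.
        AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, true);

        System.out.println("frameRate = " + format.getFrameRate()); // frameRate = 44100.0
        System.out.println("frameSize = " + format.getFrameSize()); // frameSize = 2
    }
}
```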

Answered 2016-12-19T23:35:22.617