I'm running into a problem using WebRTC for echo cancellation on Android. I've mostly been following the project posted here, but I'm trying to stream directly from a remote device.
/* Prepare AEC */
MobileAEC aecm = new MobileAEC(null);
aecm.setAecmMode(MobileAEC.AggressiveMode.MILD)
    .prepare();

/* Get minimum buffer size */
int minBufSize = AudioRecord.getMinBufferSize(HBConstants.SAMPLE_RATE,
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,
        AudioFormat.ENCODING_PCM_16BIT);
int audioLength = minBufSize / 2;
byte[] buf = new byte[minBufSize];
short[] audioBuffer = new short[audioLength];
short[] aecOut = new short[audioLength];

/* Prepare AudioTrack */
AudioTrack speaker = new AudioTrack(AudioManager.STREAM_MUSIC,
        HBConstants.SAMPLE_RATE,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT, audioLength,
        AudioTrack.MODE_STREAM);
speaker.play();
isRunning = true;
/* Loop reading the incoming network buffer. playerQueue is a LinkedBlockingQueue,
   filled elsewhere with incoming network data. */
while (isRunning) {
    try {
        buf = playerQueue.take();
        /* Convert to a short buffer and send to AECM */
        ByteBuffer.wrap(buf).order(ByteOrder.nativeOrder())
                .asShortBuffer().get(audioBuffer);
        aecm.farendBuffer(audioBuffer, audioLength);
        aecm.echoCancellation(audioBuffer, null, aecOut,
                (short) audioLength, (short) 10);
        /* Send output to speaker */
        speaker.write(aecOut, 0, audioLength);
    } catch (Exception ie) {
    }
    try {
        Thread.sleep(5);
    } catch (InterruptedException e) {
    }
}
When I run this, I get the following exception:
12-23 17:31:11.290: W/System.err(8717): java.lang.Exception: setFarendBuffer() failed due to invalid arguments.
12-23 17:31:11.290: W/System.err(8717): at com.android.webrtc.audio.MobileAEC.farendBuffer(MobileAEC.java:204)
12-23 17:31:11.290: W/System.err(8717): at com.example.twodottwo.PlayerThread.run(PlayerThread.java:62)
12-23 17:31:11.290: W/System.err(8717): at java.lang.Thread.run(Thread.java:841)
Digging into the code, I found that the canceller only accepts 80 or 160 samples at a time. To compensate, I tried taking just 160 samples at a time, but that is smaller than the minimum buffer size of the AudioRecord object and produces an error.
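(For reference, and assuming the standard WebRTC AECM behavior: the 80/160-sample limit corresponds to one 10 ms frame at 8 kHz or 16 kHz respectively, so the required frame size follows directly from the sample rate. A minimal sketch of that arithmetic; `FrameSize` is an illustrative name, not part of any library:)

```java
public class FrameSize {
    /** WebRTC AECM processes audio in 10 ms frames:
     *  8000 Hz * 0.010 s = 80 samples, 16000 Hz * 0.010 s = 160 samples. */
    static int samplesPerFrame(int sampleRateHz) {
        return sampleRateHz / 100;
    }

    public static void main(String[] args) {
        System.out.println(samplesPerFrame(8000));  // 80
        System.out.println(samplesPerFrame(16000)); // 160
    }
}
```

So with a 16 kHz stream, every chunk handed to the canceller must be exactly 160 samples (320 bytes of 16-bit PCM), no more and no less.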
To work around this, I also tried the code below, with the queue set to deliver at most 320 bytes at a time (since each short takes 2 bytes):
ShortBuffer sb = ShortBuffer.allocate(audioLength);
int samples = audioLength / 160;
int i = 0;
while (i < samples) {
    buf = playerQueue.take();
    ByteBuffer.wrap(buf).order(ByteOrder.nativeOrder())
            .asShortBuffer().get(audioBuffer);
    aecm.farendBuffer(audioBuffer, 160);
    aecm.echoCancellation(audioBuffer, null, aecOut, (short) 160, (short) 10);
    sb.put(aecOut);
    i++;
}
speaker.write(sb.array(), 0, audioLength);
This should buffer each 160-element array and pass it to the WebRTC library for echo cancellation. Instead, it just seems to produce random noise. I also tried reversing the order of the resulting array, which still produces random noise.
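(One common cause of white-noise playback, offered here as a guess: a byte-order mismatch. If the sender emits little-endian 16-bit PCM but the receiver decodes with the platform's native order and they differ, every sample is scrambled, and note that reversing the *short* array afterwards cannot fix it, since endianness swaps bytes *within* each sample. A quick pure-Java check with an illustrative sample value:)

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianCheck {
    /** Decode one 16-bit PCM sample from two bytes using the given byte order. */
    static short decode(byte[] two, ByteOrder order) {
        return ByteBuffer.wrap(two).order(order).getShort();
    }

    public static void main(String[] args) {
        byte[] pcm = {0x34, 0x12}; // one sample as sent over the wire
        System.out.println(decode(pcm, ByteOrder.LITTLE_ENDIAN)); // 0x1234 = 4660
        System.out.println(decode(pcm, ByteOrder.BIG_ENDIAN));    // 0x3412 = 13330
    }
}
```

If the two decodings disagree with what the sender produced, fixing the `order(...)` call on the receiving side (rather than reordering arrays afterwards) would be the place to start.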
Is there a way to split up the sound samples the way WebRTC likes while still having them sound like the original? Or is there a way to make WebRTC accept more samples at a time? Either would work for me, but right now I'm a bit stuck.
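One direction worth sketching: decouple the network packet size from the canceller's frame size by accumulating incoming bytes and only emitting complete 160-sample frames, carrying any leftover bytes over to the next packet. This is a minimal pure-Java sketch under the assumptions above (little-endian 16-bit PCM at 16 kHz); `FrameAccumulator` is a hypothetical helper, not part of the MobileAEC wrapper:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical helper: buffers arbitrary-size byte packets and emits
 *  fixed 160-sample (320-byte) frames of 16-bit PCM. */
public class FrameAccumulator {
    private static final int FRAME_SAMPLES = 160;
    private static final int FRAME_BYTES = FRAME_SAMPLES * 2; // 16-bit PCM
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    /** Append a network packet; returns every complete frame now available. */
    public List<short[]> push(byte[] packet) {
        pending.write(packet, 0, packet.length);
        List<short[]> frames = new ArrayList<>();
        byte[] all = pending.toByteArray();
        int offset = 0;
        while (all.length - offset >= FRAME_BYTES) {
            short[] frame = new short[FRAME_SAMPLES];
            ByteBuffer.wrap(all, offset, FRAME_BYTES)
                      .order(ByteOrder.LITTLE_ENDIAN) // must match the sender
                      .asShortBuffer().get(frame);
            frames.add(frame);
            offset += FRAME_BYTES;
        }
        // Keep the incomplete tail for the next packet so no samples are dropped.
        pending.reset();
        pending.write(all, offset, all.length - offset);
        return frames;
    }
}
```

In the playback loop, each emitted frame would then go through `farendBuffer(frame, 160)` and `echoCancellation(..., (short) 160, ...)`, with each 160-sample output written to the AudioTrack immediately and in arrival order, so the played stream stays contiguous even though the canceller only ever sees one 10 ms frame at a time.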