So I'm writing an audio decoder for a pre-existing video decoder that works in libGDX. The problem is that when the audio code isn't threaded, both the audio and the video stutter: the audio plays a chunk, then the video plays a chunk.
My solution was to do some multithreading and let the video stuff do its work on the render thread (the libGDX rendering thread isn't thread safe, and messing with it makes bad things happen without fail). The natural choice, then, was to use the threaded stuff for the audio.
This fixed the video stutter, but not only does the audio still stutter, it's also full of artifacts.
This is my first attempt at serious audio programming, so bear in mind I may be missing something basic. The executor service is a SingleThreadExecutor, the idea being that the audio needs to be decoded and written out in order.
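For reference, the decode pool is just a single-thread executor. The exact declaration isn't shown below, but it amounts to roughly this (a minimal sketch, with the field name matching the code that follows):

// java.util.concurrent.Executors / ExecutorService
// One worker thread, so audio packets should be decoded and written out in submission order
private final ExecutorService decodeThreadPool = Executors.newSingleThreadExecutor();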
Here is the update method:
public boolean update(float dtSeconds) {
    if (playState != PlayState.PLAYING) return false;

    long dtMilliseconds = (long) (dtSeconds * 1000);
    playTimeMilliseconds += dtMilliseconds;

    sleepTimeoutMilliseconds = (long) Math.max(0, sleepTimeoutMilliseconds - dtMilliseconds);
    if (sleepTimeoutMilliseconds > 0) {
        // The current frame is still ahead of the playhead - do nothing
        return false;
    }

    while (true) {
        int packet_read_result = container.readNextPacket(packet);
        if (packet_read_result < 0) {
            // Got a bad packet - we've reached the end of the video stream
            stop();
            return true;
        }

        if (packet.getStreamIndex() == videoStreamId) {
            // We have a valid packet from our stream
            // Allocate a new picture to get the data out of Xuggler
            IVideoPicture picture = IVideoPicture.make(
                videoCoder.getPixelType(),
                videoCoder.getWidth(),
                videoCoder.getHeight()
            );

            // Attempt to read the entire packet
            int offset = 0;
            while (offset < packet.getSize()) {
                // Decode the video, checking for any errors
                int bytesDecoded = videoCoder.decodeVideo(picture, packet, offset);
                if (bytesDecoded < 0) {
                    throw new RuntimeException("Got error decoding video");
                }
                offset += bytesDecoded;

                /* Some decoders will consume data in a packet, but will not
                 * be able to construct a full video picture yet. Therefore
                 * you should always check if you got a complete picture
                 * from the decoder.
                 */
                if (picture.isComplete()) {
                    // We've read the entire packet
                    IVideoPicture newPic = picture;

                    // Timestamps are stored in microseconds - convert to milliseconds
                    long absoluteFrameTimestampMilliseconds = picture.getTimeStamp() / 1000;
                    long relativeFrameTimestampMilliseconds = absoluteFrameTimestampMilliseconds - firstTimestampMilliseconds;
                    long frameTimeDelta = relativeFrameTimestampMilliseconds - playTimeMilliseconds;

                    if (frameTimeDelta > 0) {
                        // The video is ahead of the playhead, don't read any more frames until it catches up
                        sleepTimeoutMilliseconds = frameTimeDelta + sleepTolleranceMilliseconds;
                        return false;
                    }

                    /* If the resampler is not null, that means we didn't get the video in
                     * BGR24 format and need to convert it into BGR24 format.
                     */
                    if (resampler != null) {
                        // Resample the frame
                        newPic = IVideoPicture.make(
                            resampler.getOutputPixelFormat(),
                            picture.getWidth(), picture.getHeight()
                        );
                        if (resampler.resample(newPic, picture) < 0) {
                            throw new RuntimeException("Could not resample video");
                        }
                    }
                    if (newPic.getPixelType() != IPixelFormat.Type.BGR24) {
                        throw new RuntimeException("Could not decode video as BGR 24 bit data");
                    }

                    // And finally, convert the BGR24 picture to a Java BufferedImage
                    BufferedImage javaImage = Utils.videoPictureToImage(newPic);

                    // Update the current texture
                    updateTexture(javaImage);

                    // Let the caller know the texture has changed
                    return true;
                }
            }
        }
        else if (packet.getStreamIndex() == this.audioStreamId) {
            // Audio packet - hand decoding off to the executor so the render thread isn't blocked
            IAudioSamples samples = IAudioSamples.make(1024, audioCoder.getChannels());
            Thread thread = new Thread(new DecodeSoundRunnable(samples));
            thread.setPriority(Thread.MAX_PRIORITY);
            this.decodeThreadPool.execute(thread);
        }
    }
}
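For context, update() is driven from the libGDX render thread; the caller looks roughly like this (a sketch, and videoPlayer is just a made-up name for this decoder instance):

@Override
public void render() {
    // Advance the decoder by the elapsed frame time (Gdx.graphics.getDeltaTime() is in seconds),
    // then draw whatever texture updateTexture() last produced
    videoPlayer.update(Gdx.graphics.getDeltaTime());
    // ... normal SpriteBatch drawing of the current texture ...
}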
Here is the audio thread:
private class DecodeSoundRunnable implements Runnable
{
    IAudioSamples samples;
    int offset = 0;
    IStreamCoder coder;

    public DecodeSoundRunnable(IAudioSamples samples)
    {
        // Take our own references so the buffers aren't released out from under us
        this.samples = samples.copyReference();
        this.coder = audioCoder.copyReference();
    }

    @Override
    public void run() {
        // Decode the packet's audio data into the samples buffer
        while (offset < packet.getSize())
        {
            int bytesDecoded = this.coder.decodeAudio(samples, packet, offset);
            if (bytesDecoded < 0)
                break; //throw new RuntimeException("got error decoding audio in: " + videoPath);
            offset += bytesDecoded;
        }
        // Write the decoded samples out to the audio device
        playJavaSound(samples, 0);
        //writeOutThreadPool.execute(new WriteOutSoundRunnable(samples, 0));
    }
}
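playJavaSound isn't shown above; it just pushes the raw decoded bytes into a javax.sound.sampled.SourceDataLine, essentially the way the Xuggler demos do it. Roughly (the mLine field and its setup are omitted here; assume the line is opened with an AudioFormat matching audioCoder's sample rate, channel count, and 16-bit samples):

private void playJavaSound(IAudioSamples samples, int offset) {
    // getData() wraps the decoded PCM; getSize() is the number of valid bytes
    byte[] rawBytes = samples.getData().getByteArray(offset, samples.getSize());
    // write() blocks until the data has been queued on the SourceDataLine
    mLine.write(rawBytes, 0, rawBytes.length);
}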