I'm trying to implement precise seeking in a video using MediaCodec and MediaExtractor. Following Grafika's MoviePlayer, I've managed to get forward seeking working, but I still have problems with backward seeking. The relevant code is here:
public void seekBackward(long position) {
    final int TIMEOUT_USEC = 10000;
    int inputChunk = 0;
    long firstInputTimeNsec = -1;
    boolean outputDone = false;
    boolean inputDone = false;

    mExtractor.seekTo(position, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
    Log.d("TEST_MEDIA", "sampleTime: " + mExtractor.getSampleTime() / 1000
            + " -- position: " + position / 1000 + " ----- BACKWARD");

    while (mExtractor.getSampleTime() < position && position >= 0) {
        if (VERBOSE) Log.d(TAG, "loop");
        if (mIsStopRequested) {
            Log.d(TAG, "Stop requested");
            return;
        }

        // Feed more data to the decoder.
        if (!inputDone) {
            int inputBufIndex = mDecoder.dequeueInputBuffer(TIMEOUT_USEC);
            if (inputBufIndex >= 0) {
                if (firstInputTimeNsec == -1) {
                    firstInputTimeNsec = System.nanoTime();
                }
                ByteBuffer inputBuf = mDecoderInputBuffers[inputBufIndex];
                // Read the sample data into the ByteBuffer. This neither respects nor
                // updates inputBuf's position, limit, etc.
                int chunkSize = mExtractor.readSampleData(inputBuf, 0);
                if (chunkSize < 0) {
                    // End of stream -- send empty frame with EOS flag set.
                    mDecoder.queueInputBuffer(inputBufIndex, 0, 0, 0L,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                    if (VERBOSE) Log.d(TAG, "sent input EOS");
                } else {
                    if (mExtractor.getSampleTrackIndex() != mTrackIndex) {
                        Log.w(TAG, "WEIRD: got sample from track " +
                                mExtractor.getSampleTrackIndex() + ", expected " + mTrackIndex);
                    }
                    long presentationTimeUs = mExtractor.getSampleTime();
                    mDecoder.queueInputBuffer(inputBufIndex, 0, chunkSize,
                            presentationTimeUs, 0 /*flags*/);
                    if (VERBOSE) {
                        Log.d(TAG, "submitted frame " + inputChunk + " to dec, size=" + chunkSize);
                    }
                    inputChunk++;
                    mExtractor.advance();
                }
            } else {
                if (VERBOSE) Log.d(TAG, "input buffer not available");
            }
        }

        if (!outputDone) {
            int decoderStatus = mDecoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
            if (decoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                // no output available yet
                if (VERBOSE) Log.d(TAG, "no output from decoder available");
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                // not important for us, since we're using Surface
                if (VERBOSE) Log.d(TAG, "decoder output buffers changed");
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                MediaFormat newFormat = mDecoder.getOutputFormat();
                if (VERBOSE) Log.d(TAG, "decoder output format changed: " + newFormat);
            } else if (decoderStatus < 0) {
                throw new RuntimeException(
                        "unexpected result from decoder.dequeueOutputBuffer: " + decoderStatus);
            } else { // decoderStatus >= 0
                if (firstInputTimeNsec != 0) {
                    // Log the delay from the first buffer of input to the first buffer
                    // of output.
                    long nowNsec = System.nanoTime();
                    Log.d(TAG, "startup lag " + ((nowNsec - firstInputTimeNsec) / 1000000.0) + " ms");
                    firstInputTimeNsec = 0;
                }
                boolean doLoop = false;
                if (VERBOSE) Log.d(TAG, "surface decoder given buffer " + decoderStatus +
                        " (size=" + mBufferInfo.size + ")");
                if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    if (VERBOSE) Log.d(TAG, "output EOS");
                    if (mLoop) {
                        doLoop = true;
                    } else {
                        outputDone = true;
                    }
                }

                boolean doRender = (mBufferInfo.size != 0);

                // As soon as we call releaseOutputBuffer, the buffer will be forwarded
                // to SurfaceTexture to convert to a texture. We can't control when it
                // appears on-screen, but we can manage the pace at which we release
                // the buffers.
                if (doRender && mFrameCallback != null) {
                    mFrameCallback.preRender(mBufferInfo.presentationTimeUs);
                }
                mDecoder.releaseOutputBuffer(decoderStatus, doRender);
                doRender = false;
                if (doRender && mFrameCallback != null) {
                    mFrameCallback.postRender();
                }

                if (doLoop) {
                    Log.d(TAG, "Reached EOS, looping");
                    mExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
                    inputDone = false;
                    mDecoder.flush(); // reset decoder state
                    mFrameCallback.loopReset();
                }
            }
        }
    }
}
This is basically the same as MoviePlayer's doExtract method. I just added some modifications so that it seeks back to the previous key frame and then decodes forward to the position I want. I also followed fadden's advice here, but with little success.
Another question: as I understand it, ExoPlayer is built on top of MediaCodec, so why can it play iOS-recorded videos just fine while a plain MediaCodec implementation like MoviePlayer's cannot?