
I'm modifying one of the Android framework samples to package the elementary AAC stream produced by MediaCodec into a standalone .mp4 file. I'm using a single MediaMuxer instance containing one AAC track generated by a single MediaCodec instance.

However, I always end up with an error when calling mMediaMuxer.writeSampleData(trackIndex, encodedData, bufferInfo):

E/MPEG4Writer: timestampUs 0 < lastTimestampUs XXXXX for Audio track

When I queue the raw input data with mCodec.queueInputBuffer(...), I provide 0 as the timestamp value, as per the framework sample (I've also tried using monotonically increasing timestamp values, with the same result. I've successfully encoded raw camera frames to h264/mp4 files with this same method).

View the full source

The most relevant snippet:

private static void testEncoder(String componentName, MediaFormat format, Context c) {
    int trackIndex = 0;
    boolean mMuxerStarted = false;
    File f = FileUtils.createTempFileInRootAppStorage(c, "aac_test_" + new Date().getTime() + ".mp4");
    MediaCodec codec = MediaCodec.createByCodecName(componentName);

    try {
        codec.configure(
                format,
                null /* surface */,
                null /* crypto */,
                MediaCodec.CONFIGURE_FLAG_ENCODE);
    } catch (IllegalStateException e) {
        Log.e(TAG, "codec '" + componentName + "' failed configuration.");

    }

    codec.start();

    try {
        mMediaMuxer = new MediaMuxer(f.getAbsolutePath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    } catch (IOException ioe) {
        throw new RuntimeException("MediaMuxer creation failed", ioe);
    }

    ByteBuffer[] codecInputBuffers = codec.getInputBuffers();
    ByteBuffer[] codecOutputBuffers = codec.getOutputBuffers();

    int numBytesSubmitted = 0;
    boolean doneSubmittingInput = false;
    int numBytesDequeued = 0;

    while (true) {
        int index;

        if (!doneSubmittingInput) {
            index = codec.dequeueInputBuffer(kTimeoutUs /* timeoutUs */);

            if (index != MediaCodec.INFO_TRY_AGAIN_LATER) {
                if (numBytesSubmitted >= kNumInputBytes) {
                    Log.i(TAG, "queueing EOS to inputBuffer");
                    codec.queueInputBuffer(
                            index,
                            0 /* offset */,
                            0 /* size */,
                            0 /* timeUs */,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);

                    if (VERBOSE) {
                        Log.d(TAG, "queued input EOS.");
                    }

                    doneSubmittingInput = true;
                } else {
                    int size = queueInputBuffer(
                            codec, codecInputBuffers, index);

                    numBytesSubmitted += size;

                    if (VERBOSE) {
                        Log.d(TAG, "queued " + size + " bytes of input data.");
                    }
                }
            }
        }

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        index = codec.dequeueOutputBuffer(info, kTimeoutUs /* timeoutUs */);

        if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
        } else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            MediaFormat newFormat = codec.getOutputFormat();
            trackIndex = mMediaMuxer.addTrack(newFormat);
            mMediaMuxer.start();
            mMuxerStarted = true;
        } else if (index == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            codecOutputBuffers = codec.getOutputBuffers();
        } else {
            // Write to muxer
            ByteBuffer encodedData = codecOutputBuffers[index];
            if (encodedData == null) {
                throw new RuntimeException("encoderOutputBuffer " + index +
                        " was null");
            }

            if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                // The codec config data was pulled out and fed to the muxer when we got
                // the INFO_OUTPUT_FORMAT_CHANGED status.  Ignore it.
                if (VERBOSE) Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
                info.size = 0;
            }

            if (info.size != 0) {
                if (!mMuxerStarted) {
                    throw new RuntimeException("muxer hasn't started");
                }

                // adjust the ByteBuffer values to match BufferInfo (not needed?)
                encodedData.position(info.offset);
                encodedData.limit(info.offset + info.size);

                mMediaMuxer.writeSampleData(trackIndex, encodedData, info);
                if (VERBOSE) Log.d(TAG, "sent " + info.size + " audio bytes to muxer with pts " + info.presentationTimeUs);
            }

            codec.releaseOutputBuffer(index, false);

            // End write to muxer
            numBytesDequeued += info.size;

            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                if (VERBOSE) {
                    Log.d(TAG, "dequeued output EOS.");
                }
                break;
            }

            if (VERBOSE) {
                Log.d(TAG, "dequeued " + info.size + " bytes of output data.");
            }
        }
    }

    if (VERBOSE) {
        Log.d(TAG, "queued a total of " + numBytesSubmitted + " bytes, "
                + "dequeued " + numBytesDequeued + " bytes.");
    }

    int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
    int channelCount = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
    int inBitrate = sampleRate * channelCount * 16;  // bit/sec
    int outBitrate = format.getInteger(MediaFormat.KEY_BIT_RATE);

    float desiredRatio = (float)outBitrate / (float)inBitrate;
    float actualRatio = (float)numBytesDequeued / (float)numBytesSubmitted;

    if (actualRatio < 0.9 * desiredRatio || actualRatio > 1.1 * desiredRatio) {
        Log.w(TAG, "desiredRatio = " + desiredRatio
                + ", actualRatio = " + actualRatio);
    }


    codec.release();
    mMediaMuxer.stop();
    mMediaMuxer.release();
    codec = null;
}

Update: I've found what I believe is a root symptom within MediaCodec:

I send presentationTimeUs=1000 to queueInputBuffer(...) but receive info.presentationTimeUs=33219 after calling MediaCodec.dequeueOutputBuffer(info, timeoutUs). fadden left a helpful comment related to this behavior.


3 Answers


Thanks to fadden's help, I have a proof-of-concept audio encoder and video+audio encoder on Github. In summary:

Send AudioRecord's samples to the MediaCodec + MediaMuxer wrapper. Using the system time at the moment of audioRecord.read(...) as the audio timestamp works sufficiently well, provided you poll often enough to avoid filling up AudioRecord's internal buffer (to avoid drift between the time you call read and the time AudioRecord actually recorded the samples). Too bad AudioRecord doesn't communicate timestamps directly...

// Setup AudioRecord
while (isRecording) {
    audioPresentationTimeNs = System.nanoTime();
    audioRecord.read(dataBuffer, 0, samplesPerFrame);
    hwEncoder.offerAudioEncoder(dataBuffer.clone(), audioPresentationTimeNs);
}

Note that AudioRecord only guarantees support for 16-bit PCM samples, even though MediaCodec.queueInputBuffer takes its input as a byte[]. Passing a byte[] to audioRecord.read(dataBuffer,...) will truncate the 16-bit samples to 8 bits for you.
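The workaround is to read into a short[] and pack the samples into a little-endian byte[] yourself before queueing. A minimal pure-Java sketch (the class and method names here are made up for illustration; little-endian is assumed to match the PCM layout MediaCodec expects):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmPack {
    // Pack 16-bit PCM samples into a little-endian byte[] of twice the
    // length, suitable for copying into a MediaCodec input buffer.
    public static byte[] shortsToLittleEndianBytes(short[] samples) {
        ByteBuffer bb = ByteBuffer.allocate(samples.length * 2)
                .order(ByteOrder.LITTLE_ENDIAN);
        for (short s : samples) {
            bb.putShort(s);
        }
        return bb.array();
    }
}
```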

I found that polling in this way still occasionally generated a timestampUs XXX < lastTimestampUs XXX for Audio track error, so I included some logic that keeps track of the bufferInfo.presentationTimeUs reported by mediaCodec.dequeueOutputBuffer(bufferInfo, timeoutMs) and adjusts it if necessary before calling mediaMuxer.writeSampleData(trackIndex, encodedData, bufferInfo).
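That adjustment can be sketched as a small helper that forces the timestamps handed to the muxer to be strictly increasing (hypothetical class name; the actual wrapper on Github carries more state than this):

```java
public class MonotonicPts {
    private long lastPtsUs = -1;

    // Return a presentation timestamp strictly greater than the previous
    // one, nudging any backwards-jumping value forward by 1 us.
    public long adjust(long ptsUs) {
        if (ptsUs <= lastPtsUs) {
            ptsUs = lastPtsUs + 1;
        }
        lastPtsUs = ptsUs;
        return ptsUs;
    }
}
```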

answered 2013-09-23T18:35:30.343

The code from the answer above (https://stackoverflow.com/a/18966374/6463821) also produces the timestampUs XXX < lastTimestampUs XXX for Audio track error, because if you read from AudioRecord's buffer faster than necessary, the duration between the generated timestamps will be smaller than the real duration between the audio samples.

So my solution for this issue is to generate the first timestamp once, then increase the timestamp for each subsequent sample by the duration of one sample buffer (which depends on the sample rate, audio format, and channel config).

BUFFER_DURATION_US = 1_000_000 * (ARR_SIZE / AUDIO_CHANNELS) / SAMPLE_AUDIO_RATE_IN_HZ;

...

long firstPresentationTimeUs = System.nanoTime() / 1000;

...

audioRecord.read(shortBuffer, OFFSET, ARR_SIZE);
long presentationTimeUs = count++ * BUFFER_DURATION_US + firstPresentationTimeUs;
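Putting those pieces together, a runnable sketch of the fixed-duration timestamp generator (the constant values below are example numbers, not taken from the answer):

```java
public class AudioTimestamps {
    static final int SAMPLE_AUDIO_RATE_IN_HZ = 44100; // example rate
    static final int AUDIO_CHANNELS = 1;              // example: mono
    static final int ARR_SIZE = 2048;                 // shorts per read()

    // Microseconds of audio contained in one ARR_SIZE buffer.
    static final long BUFFER_DURATION_US =
            1_000_000L * (ARR_SIZE / AUDIO_CHANNELS) / SAMPLE_AUDIO_RATE_IN_HZ;

    private final long firstPresentationTimeUs;
    private long count = 0;

    public AudioTimestamps(long firstPresentationTimeUs) {
        this.firstPresentationTimeUs = firstPresentationTimeUs;
    }

    // Timestamp for the next buffer: a fixed step per buffer, anchored
    // at the wall-clock time of the first one.
    public long next() {
        return count++ * BUFFER_DURATION_US + firstPresentationTimeUs;
    }
}
```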

Reading from AudioRecord should happen on a separate thread, and all read buffers should be added to a queue without waiting for encoding or any other processing, to prevent losing audio samples.

worker = new Thread() {
    @Override
    public void run() {
        try {
            AudioFrameReader reader = new AudioFrameReader(audioRecord);

            while (!isInterrupted()) {
                Thread.sleep(10);
                addToQueue(reader.read());
            }
        } catch (InterruptedException e) {
            Log.w(TAG, "run: ", e);
        }
    }
};
answered 2017-06-02T14:04:42.627

The problem occurs because you are receiving the buffers out of order. Try adding the following test:

if(lastAudioPresentationTime == -1) {
    lastAudioPresentationTime = bufferInfo.presentationTimeUs;
}
else if (lastAudioPresentationTime < bufferInfo.presentationTimeUs) {
    lastAudioPresentationTime = bufferInfo.presentationTimeUs;
}
if ((bufferInfo.size != 0) && (lastAudioPresentationTime <= bufferInfo.presentationTimeUs)) {
    if (!mMuxerStarted) {
        throw new RuntimeException("muxer hasn't started");
    }
    // adjust the ByteBuffer values to match BufferInfo (not needed?)
    encodedData.position(bufferInfo.offset);
    encodedData.limit(bufferInfo.offset + bufferInfo.size);
    mMuxer.writeSampleData(trackIndex.index, encodedData, bufferInfo);
}

encoder.releaseOutputBuffer(encoderStatus, false);
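The guard above can be exercised off-device with plain timestamps; in this sketch, a hypothetical shouldWrite stands in for the writeSampleData call:

```java
public class OrderGuard {
    private long lastAudioPresentationTime = -1;

    // Accept a sample only if its timestamp is not older than the newest
    // one seen so far; out-of-order samples are skipped, mirroring the
    // test wrapped around writeSampleData above.
    public boolean shouldWrite(long presentationTimeUs) {
        if (presentationTimeUs >= lastAudioPresentationTime) {
            lastAudioPresentationTime = presentationTimeUs;
            return true;
        }
        return false;
    }
}
```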
answered 2014-09-16T17:08:32.753