
I am currently trying to parse H.264 data out of an RTP stream and then feed it to MediaCodec for rendering on a SurfaceView on Android.

However, I am not sure how to:

  • correctly reconstruct H.264 slices from RTP packets
  • send the slices to MediaCodec once they have been assembled

I have not seen any example that implements this in a clear, concise way, and I have not found the MediaCodec documentation helpful.

Does anyone have experience in this area?

void videoCodec(ByteBuffer input, int flags) {

    bufferInfo.set(0, 0, 0, flags);

    int inputBufferId = codec.dequeueInputBuffer(10000);

    if (inputBufferId >= 0) {

        // Copy the data into the codec's input buffer.
        ByteBuffer inputData = inputBuffers[inputBufferId];
        int size = input.remaining(); // capture before put() advances the position

        inputData.clear();
        inputData.put(input);

        // Queue it up for decoding.
        codec.queueInputBuffer(inputBufferId, 0, size, 0, flags);
    }

    int outputBufferId = codec.dequeueOutputBuffer(bufferInfo, 10000);

    if (outputBufferId >= 0) {
        // A decoded frame is ready; render = true sends it to the Surface.
        Timber.e("Rendering Data with Index of: %s", outputBufferId);
        codec.releaseOutputBuffer(outputBufferId, true);

    } else if (outputBufferId == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        outputBuffers = codec.getOutputBuffers();
    } else if (outputBufferId == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        // Subsequent data will conform to the new format.
        //format = codec.getOutputFormat();
    }
}

MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
codec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
codec.configure(format, surfaceVideo.getHolder().getSurface(), null, 0);
codec.start();

inputBuffers = codec.getInputBuffers();
outputBuffers = codec.getOutputBuffers();

while (streaming) {

    // receive RTP packet
    h264Parser(rtpPacket.getPayload());

}
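The `rtpPacket` object above is assumed to be a small holder produced by the receive loop; the class below is a hypothetical sketch (not from the question's code) of parsing the 12-byte fixed RTP header from RFC 3550, ignoring header extensions and padding for brevity:

```java
import java.util.Arrays;

// Sketch: parse the fixed RTP header (RFC 3550) to get at the H.264 payload.
// All names here are illustrative; the question does not show this class.
class RtpPacket {
    final int payloadType;
    final int sequenceNumber;
    final long timestamp;
    final byte[] payload;

    RtpPacket(byte[] datagram, int length) {
        int version = (datagram[0] >> 6) & 0x03;   // should be 2 for RTP
        int csrcCount = datagram[0] & 0x0F;        // number of 4-byte CSRC entries
        payloadType = datagram[1] & 0x7F;          // low 7 bits; top bit is the marker
        sequenceNumber = ((datagram[2] & 0xFF) << 8) | (datagram[3] & 0xFF);
        timestamp = ((long) (datagram[4] & 0xFF) << 24)
                  | ((datagram[5] & 0xFF) << 16)
                  | ((datagram[6] & 0xFF) << 8)
                  | (datagram[7] & 0xFF);
        // Fixed header is 12 bytes, plus any CSRC identifiers.
        int headerLen = 12 + 4 * csrcCount;
        payload = Arrays.copyOfRange(datagram, headerLen, length);
    }

    byte[] getPayload() {
        return payload;
    }
}
```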

h264Parser looks like this:

void h264Parser(byte[] payload) {

    int packetType = payload[0] & 0x1F;
    // FU header bits; only meaningful for FU-A packets (type 28)
    boolean startBit = (payload[1] & 0x80) != 0;
    boolean endBit = (payload[1] & 0x40) != 0;
    int flags = 0;

    switch (packetType) {
        case 7: // SPS
            pps = new ByteArrayOutputStream();
            pps.write(prefix, 0, prefix.length);
            pps.write(payload, 0, payload.length);
            break;
        case 8: // PPS
            if (pps.size() > 0) {
                pps.write(prefix, 0, prefix.length); // each NAL needs its own start code
                pps.write(payload, 0, payload.length);
                hasPps = true;
                flags = MediaCodec.BUFFER_FLAG_CODEC_CONFIG;
                payload = pps.toByteArray();
                // Send the codec-config data to the decoder
                videoCodec(ByteBuffer.wrap(payload), flags);
            }
            break;
        case 28: // FU-A fragmentation unit
            if (hasPps) {
                if (startBit) {
                    baos = new ByteArrayOutputStream();
                    baos.write(prefix, 0, prefix.length);
                    baos.write(payload, 0, payload.length);
                } else if (endBit) {
                    if (baos != null) {
                        baos.write(payload, 0, payload.length);
                        flags = MediaCodec.BUFFER_FLAG_KEY_FRAME;
                        payload = baos.toByteArray();
                        // Send the assembled unit to the decoder
                        videoCodec(ByteBuffer.wrap(payload), flags);
                        hasPps = false;
                    }
                } else {
                    if (baos != null) {
                        baos.write(payload, 0, payload.length);
                    }
                }
            }
            break;
        case 1: // single non-IDR NAL unit
            break;
        default:
            break;
    }
}
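One detail worth noting: NAL type 7 is the SPS and type 8 the PPS, and MediaCodec expects both together in a single buffer queued with BUFFER_FLAG_CODEC_CONFIG, each NAL preceded by an Annex-B start code. A minimal sketch of that assembly, assuming the question's undefined `prefix` is the four-byte start code 00 00 00 01:

```java
import java.io.ByteArrayOutputStream;

// Sketch: combine SPS (NAL type 7) and PPS (NAL type 8) into the single
// Annex-B codec-config buffer queued with BUFFER_FLAG_CODEC_CONFIG.
// Class and method names are illustrative.
public class CodecConfig {
    private static final byte[] START_CODE = {0, 0, 0, 1};

    static byte[] build(byte[] sps, byte[] pps) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(START_CODE, 0, START_CODE.length); // start code before the SPS
        out.write(sps, 0, sps.length);
        out.write(START_CODE, 0, START_CODE.length); // start code before the PPS
        out.write(pps, 0, pps.length);
        return out.toByteArray();
    }
}
```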

1 Answer


As far as I remember, MediaCodec consumes complete access units, not just slices (someone correct me if I'm wrong).

So you will have to build a complete access unit from the RTP packets and feed it to the decoder (sadly I have no RTP experience and can't help you build one).
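For the RTP side, RFC 6184 defines how H.264 NAL units are carried: a large NAL unit is split into FU-A fragments, each starting with a 2-byte FU indicator and FU header that must be stripped and replaced by a reconstructed NAL header before the fragments are concatenated. A minimal sketch of that reassembly (class and method names are illustrative, not from the question's code):

```java
import java.io.ByteArrayOutputStream;

// Sketch of FU-A (type 28) reassembly per RFC 6184. Assumes only
// single-NAL-unit packets and FU-A fragments arrive, in order.
public class FuAReassembler {
    private static final byte[] START_CODE = {0, 0, 0, 1};
    private ByteArrayOutputStream nal;

    /** Feed one RTP payload; returns a complete Annex-B NAL unit when the
     *  fragment carrying the end bit arrives, otherwise null. */
    public byte[] feed(byte[] payload) {
        int nalType = payload[0] & 0x1F;
        if (nalType != 28) {
            // Single NAL unit packet: just prepend a start code.
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            out.write(START_CODE, 0, START_CODE.length);
            out.write(payload, 0, payload.length);
            return out.toByteArray();
        }
        boolean start = (payload[1] & 0x80) != 0; // FU header S bit
        boolean end = (payload[1] & 0x40) != 0;   // FU header E bit
        if (start) {
            nal = new ByteArrayOutputStream();
            nal.write(START_CODE, 0, START_CODE.length);
            // Rebuild the original NAL header: forbidden bit + NRI come
            // from the FU indicator, the NAL type from the FU header.
            int header = (payload[0] & 0xE0) | (payload[1] & 0x1F);
            nal.write(header);
        }
        if (nal != null) {
            // Skip the 2-byte FU indicator + FU header.
            nal.write(payload, 2, payload.length - 2);
            if (end) {
                byte[] unit = nal.toByteArray();
                nal = null;
                return unit;
            }
        }
        return null;
    }
}
```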

You send the access unit to the decoder like this:

Dequeue an input buffer:

int inputBufferIndex = decoder.dequeueInputBuffer(TIMEOUT_USEC);
ByteBuffer inputBuffer = videoDecoderInputBuffers[inputBufferIndex];

Fill it with your access unit:

inputBuffer.put(accessUnit);
inputBuffer.flip();

Queue the buffer for decoding:

decoder.queueInputBuffer(inputBufferIndex, 0, inputBuffer.limit(), 0, FLAGS);

Hope this helps.

answered Apr 27, 2016 at 13:48