
I have a series of mp4 files saved on the device, and they need to be merged into a single mp4 file:

video_p1.mp4 video_p2.mp4 video_p3.mp4 > video.mp4

The solutions I have researched (e.g. the mp4parser framework) rely on deprecated code.

The best solution I could find uses MediaMuxer and MediaExtractor.

The code runs, but my videos are not merged (only the content of video_p1.mp4 shows up, and it comes out in landscape instead of portrait).

Can anyone help me solve this?

public static boolean concatenateFiles(File dst, File... sources) {
    if ((sources == null) || (sources.length == 0)) {
        return false;
    }

    boolean result;
    MediaExtractor extractor = null;
    MediaMuxer muxer = null;
    try {
        // Set up MediaMuxer for the destination.
        muxer = new MediaMuxer(dst.getPath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        // Copy the samples from MediaExtractor to MediaMuxer.
        boolean sawEOS = false;
        //int bufferSize = MAX_SAMPLE_SIZE;
        int bufferSize = 1 * 1024 * 1024;
        int frameCount = 0;
        int offset = 100;

        ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
        MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();

        long timeOffsetUs = 0;
        int dstTrackIndex = -1;

        for (int fileIndex = 0; fileIndex < sources.length; fileIndex++) {
            int numberOfSamplesInSource = getNumberOfSamples(sources[fileIndex]);

            // Set up MediaExtractor to read from the source.
            extractor = new MediaExtractor();
            extractor.setDataSource(sources[fileIndex].getPath());

            // Set up the tracks.
            SparseIntArray indexMap = new SparseIntArray(extractor.getTrackCount());
            for (int i = 0; i < extractor.getTrackCount(); i++) {
                extractor.selectTrack(i);
                MediaFormat format = extractor.getTrackFormat(i);
                if (dstTrackIndex < 0) {
                    dstTrackIndex = muxer.addTrack(format);
                    muxer.start();
                }
                indexMap.put(i, dstTrackIndex);
            }

            long lastPresentationTimeUs = 0;
            int currentSample = 0;

            while (!sawEOS) {
                bufferInfo.offset = offset;
                bufferInfo.size = extractor.readSampleData(dstBuf, offset);

                if (bufferInfo.size < 0) {
                    sawEOS = true;
                    bufferInfo.size = 0;
                    timeOffsetUs += (lastPresentationTimeUs + 0);
                }
                else {
                    lastPresentationTimeUs = extractor.getSampleTime();
                    bufferInfo.presentationTimeUs = extractor.getSampleTime() + timeOffsetUs;
                    bufferInfo.flags = extractor.getSampleFlags();
                    int trackIndex = extractor.getSampleTrackIndex();

                    if ((currentSample < numberOfSamplesInSource) || (fileIndex == sources.length - 1)) {
                        muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
                    }
                    extractor.advance();

                    frameCount++;
                    currentSample++;
                    Log.d("tag2", "Frame (" + frameCount + ") " +
                                "PresentationTimeUs:" + bufferInfo.presentationTimeUs +
                                " Flags:" + bufferInfo.flags +
                                " TrackIndex:" + trackIndex +
                                " Size(KB) " + bufferInfo.size / 1024);

                }
            }
            extractor.release();
            extractor = null;
        }

        result = true;
    }
    catch (IOException e) {
        result = false;
    }
    finally {
        if (extractor != null) {
            extractor.release();
        }
        if (muxer != null) {
            muxer.stop();
            muxer.release();
        }
    }
    return result;
}

public static int getNumberOfSamples(File src) {
    MediaExtractor extractor = new MediaExtractor();
    int result;
    try {
        extractor.setDataSource(src.getPath());
        extractor.selectTrack(0);

        result = 0;
        while (extractor.advance()) {
            result ++;
        }
    }
    catch(IOException e) {
        result = -1;
    }
    finally {
        extractor.release();
    }
    return result;
}

1 Answer


I am using this library to mux videos: ffmpeg-android-java

Gradle dependency:

implementation 'com.writingminds:FFmpegAndroid:0.3.2'
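
Before any command can run, the library's native binary has to be loaded once. A minimal Kotlin sketch of that setup, assuming the classes from com.github.hiteshsondhi88.libffmpeg are imported and context is a valid Android Context (the logging is just illustrative):

    val ffmpeg = FFmpeg.getInstance(context)
    ffmpeg.loadBinary(object : LoadBinaryResponseHandler() {
        override fun onFailure() {
            // The bundled binary does not support this device's architecture.
            Log.e("FFmpeg", "Failed to load the FFmpeg binary")
        }
    })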

Here is how I used it in my project to mux video and audio in Kotlin: VideoAudioMuxer. It basically works just like ffmpeg in a terminal, except you pass the command to the method as a string array, together with a listener.

ffmpeg.execute(arrayOf("-i", videoPath, "-i", audioPath, "$targetPath.mp4"), object : ExecuteBinaryResponseHandler() {
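
That line is only the start of the call; a fuller sketch of what it could look like with the response handler callbacks filled in (videoPath, audioPath, targetPath and the log tags are placeholder names, not from the original answer):

    ffmpeg.execute(arrayOf("-i", videoPath, "-i", audioPath, "$targetPath.mp4"),
        object : ExecuteBinaryResponseHandler() {
            override fun onProgress(message: String?) {
                Log.d("FFmpeg", "progress: $message")   // raw ffmpeg output lines
            }
            override fun onSuccess(message: String?) {
                Log.d("FFmpeg", "done: $targetPath.mp4")
            }
            override fun onFailure(message: String?) {
                Log.e("FFmpeg", "failed: $message")
            }
        })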

You will have to look up how to merge videos with ffmpeg and convert that command into a string array of the required parameters.
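
For joining mp4 parts like the ones in the question, one common ffmpeg approach is the concat demuxer with stream copy. A hedged sketch of how that command could be expressed as a string array for this library (listFilePath and outputPath are placeholder variables, and it assumes all parts were recorded with the same codecs and parameters):

    // listFilePath points to a plain text file with one line per part, e.g.:
    //   file '/storage/emulated/0/video_p1.mp4'
    //   file '/storage/emulated/0/video_p2.mp4'
    //   file '/storage/emulated/0/video_p3.mp4'
    val cmd = arrayOf(
        "-f", "concat",      // use the concat demuxer
        "-safe", "0",        // allow absolute paths in the list file
        "-i", listFilePath,  // the list file above
        "-c", "copy",        // copy streams instead of re-encoding
        outputPath           // e.g. the final video.mp4
    )
    ffmpeg.execute(cmd, object : ExecuteBinaryResponseHandler() {
        // same onSuccess/onFailure callbacks as in the muxing example above
    })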

You can do almost anything with it, since ffmpeg is a very powerful tool.

Answered 2020-01-25T10:11:18.000