7

Requirement

This may sound a little unusual, but here is what I want to achieve. I want to produce a movie (.mov) file in reverse, just as if the movie were being rewound. I also want to keep the same frame rate that my video has.

Note: I don't just want to play the video file in reverse order. I want to generate a new movie file that plays in reverse order.

My exploration

I came up with the following steps to do this:

  1. Split the video file into chunks at a specific frame rate using AVAssetExportSession.
  2. Merge all these video chunks into a single movie file using AVMutableComposition and AVAssetExportSession.
  3. While merging, also merge the audio of each chunk into the new video file.

With the steps above I can produce the reversed video file, but I have the following problems:

  1. It takes a lot of time if the video is long.
  2. It also consumes a lot of CPU cycles and memory to complete the process.

Does anyone have a more optimized way to achieve this? Any suggestions would be appreciated.
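At the timing level, the split-and-merge idea above boils down to one mapping: the reversed movie keeps the original ascending presentation timestamps (so the frame rate is preserved), while output frame `i` takes its image from source frame `count - 1 - i`. Here is a minimal sketch of just that bookkeeping in plain Swift, with all AVFoundation plumbing omitted (`reversedFrameOrder` is a hypothetical helper name, not an API):

```swift
import Foundation

/// Given the presentation timestamps (in seconds) of the source frames,
/// return (timestamp, sourceIndex) pairs for the reversed movie:
/// timestamps stay ascending (so the frame rate is unchanged), while
/// the image for output frame i comes from source frame count - 1 - i.
func reversedFrameOrder(_ timestamps: [Double]) -> [(time: Double, sourceIndex: Int)] {
    let count = timestamps.count
    return timestamps.enumerated().map { (i, t) in
        (time: t, sourceIndex: count - 1 - i)
    }
}

// Example: 4 frames at 0.1 s intervals (10 fps).
for m in reversedFrameOrder([0.0, 0.1, 0.2, 0.3]) {
    print("t=\(m.time) uses source frame \(m.sourceIndex)")
}
```

In a real implementation the timestamps come from the sample buffers and the images from the mirrored buffer positions, as the accepted answers below do.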


3 Answers

4

Here is my solution; maybe it can help you. https://github.com/KayWong/VideoReverse

Answered 2015-12-21T20:01:57.767
2

Swift 5. Credit goes to Andy Hin, since I based this on http://www.andyhin.com/post/5/reverse-video-avfoundation

    class func reverseVideo(inURL: URL, outURL: URL, queue: DispatchQueue, _ completionBlock: ((Bool)->Void)?) {
        let asset = AVAsset.init(url: inURL)
        guard
            let reader = try? AVAssetReader.init(asset: asset),
            let videoTrack = asset.tracks(withMediaType: .video).first
        else {
            assert(false)
            completionBlock?(false)
            return
        }

        let width = videoTrack.naturalSize.width
        let height = videoTrack.naturalSize.height

        let readerSettings: [String : Any] = [
            String(kCVPixelBufferPixelFormatTypeKey) : kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        ]
        let readerOutput = AVAssetReaderTrackOutput.init(track: videoTrack, outputSettings: readerSettings)
        reader.add(readerOutput)
        reader.startReading()

        var buffers = [CMSampleBuffer]()
        while let nextBuffer = readerOutput.copyNextSampleBuffer() {
            buffers.append(nextBuffer)
        }
        let status = reader.status
        reader.cancelReading()
        guard status == .completed, let firstBuffer = buffers.first else {
            assert(false)
            completionBlock?(false)
            return
        }
        let sessionStartTime = CMSampleBufferGetPresentationTimeStamp(firstBuffer)

        let writerSettings: [String:Any] = [
            AVVideoCodecKey : AVVideoCodecType.h264,
            AVVideoWidthKey : width,
            AVVideoHeightKey: height,
        ]
        let writerInput: AVAssetWriterInput
        if let formatDescription = videoTrack.formatDescriptions.last {
            writerInput = AVAssetWriterInput.init(mediaType: .video, outputSettings: writerSettings, sourceFormatHint: (formatDescription as! CMFormatDescription))
        } else {
            writerInput = AVAssetWriterInput.init(mediaType: .video, outputSettings: writerSettings)
        }
        writerInput.transform = videoTrack.preferredTransform
        writerInput.expectsMediaDataInRealTime = false

        guard
            let writer = try? AVAssetWriter.init(url: outURL, fileType: .mp4),
            writer.canAdd(writerInput)
        else {
            assert(false)
            completionBlock?(false)
            return
        }

        let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor.init(assetWriterInput: writerInput, sourcePixelBufferAttributes: nil)
        let group = DispatchGroup.init()

        group.enter()
        writer.add(writerInput)
        writer.startWriting()
        writer.startSession(atSourceTime: sessionStartTime)

        var currentSample = 0
        writerInput.requestMediaDataWhenReady(on: queue) {
            for i in currentSample..<buffers.count {
                currentSample = i
                if !writerInput.isReadyForMoreMediaData {
                    return
                }
                let presentationTime = CMSampleBufferGetPresentationTimeStamp(buffers[i])
                guard let imageBuffer = CMSampleBufferGetImageBuffer(buffers[buffers.count - i - 1]) else {
                    WLog("VideoWriter reverseVideo: warning, could not get imageBuffer from SampleBuffer...")
                    continue
                }
                if !pixelBufferAdaptor.append(imageBuffer, withPresentationTime: presentationTime) {
                    WLog("VideoWriter reverseVideo: warning, could not append imageBuffer...")
                }
            }

            // finish
            writerInput.markAsFinished()
            group.leave()
        }

        group.notify(queue: queue) {
            writer.finishWriting {
                if writer.status != .completed {
                    WLog("VideoWriter reverseVideo: error - \(String(describing: writer.error))")
                    completionBlock?(false)
                } else {
                    completionBlock?(true)
                }
            }
        }
    }
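One caveat with the code above: it decodes the entire track into `buffers` before writing anything, which is exactly the memory problem the question describes. A common mitigation is to read the asset in fixed-size time chunks starting from the end (e.g. via `AVAssetReader`'s `timeRange`), reversing each chunk independently so only one chunk of frames is in memory at a time. A sketch of just the chunk bookkeeping in plain Swift follows; `reverseChunkRanges` is a hypothetical helper, and the per-chunk reader setup is omitted:

```swift
import Foundation

/// Split a clip's duration (in seconds) into fixed-size chunks and
/// return them last-chunk-first: the order a chunked reverse-export
/// reads them in, so only one chunk of frames is held in memory.
func reverseChunkRanges(duration: Double, chunkLength: Double) -> [(start: Double, end: Double)] {
    var ranges: [(start: Double, end: Double)] = []
    var start = 0.0
    while start < duration {
        let end = min(start + chunkLength, duration)
        ranges.append((start: start, end: end))
        start = end
    }
    return Array(ranges.reversed())
}

// Example: a 10-second clip in 4-second chunks is processed as
// (8,10), then (4,8), then (0,4); each chunk is reversed on its own
// and the reversed chunks are written out in this order.
for c in reverseChunkRanges(duration: 10, chunkLength: 4) {
    print("process \(c.start)...\(c.end)")
}
```

This trades a little extra seeking for a bounded working set, which also helps the CPU/time complaint for long videos since each pass touches far fewer frames.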
Answered 2019-08-27T19:54:41.893
-1

You need to look into the AVFoundation framework to accomplish your task.

I have only done editing of a 30-second video, using AVAssetExportSession & AVMutableComposition.

Here is a link you should refer to; it is very helpful:

http://www.subfurther.com/blog/category/avfoundation/

Also, it would be even better if you refer to the WWDC session PDFs on editing media.

Full source for this link: https://developer.apple.com/videos/wwdc/2010/ (it covers editing media with AVFoundation).

As for memory and CPU cycles: it also consumes more memory while exporting.

Answered 2012-09-12T06:00:57.410