
Despite the great information on StackOverflow, I'm still stuck...

I'm trying to write an OpenGL renderbuffer to a video on an iPad 2 (running iOS 4.3). More precisely, this is what I'm attempting:

A) Set up an AVAssetWriterInputPixelBufferAdaptor

  1. Create an AVAssetWriter that points at a video file

  2. Set up an AVAssetWriterInput with the appropriate settings

  3. Set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file

B) Write data to the video file using that AVAssetWriterInputPixelBufferAdaptor

  1. Render OpenGL code to the screen

  2. Grab the OpenGL buffer via glReadPixels

  3. Create a CVPixelBufferRef from the OpenGL data

  4. Append that pixel buffer to the AVAssetWriterInputPixelBufferAdaptor using the appendPixelBuffer method

However, I'm having trouble getting this to work. My current strategy is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the AVAssetWriterInputPixelBufferAdaptor is valid, I set a flag that tells the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.

Right now my code is crashing as it tries to append the second pixel buffer, giving me the following error:

-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0

Here is my AVAsset setup code (much of it is based on Rudy Aramayo's code, which does work for normal images, but not for textures):

- (void) testVideoWriter {

  //initialize global info
  MOVIE_NAME = @"Documents/Movie.mov";
  CGSize size = CGSizeMake(480, 320);
  frameLength = CMTimeMake(1, 5); 
  currentTime = kCMTimeZero;
  currentFrame = 0;

  NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSError *error = nil;

  unlink([MOVIE_PATH UTF8String]);

  videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH] fileType:AVFileTypeQuickTimeMovie error:&error];

  NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
                                 [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                 [NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
  writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

  //writerInput.expectsMediaDataInRealTime = NO;

  NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];

  adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
  [adaptor retain];

  [videoWriter addInput:writerInput];

  [videoWriter startWriting];
  [videoWriter startSessionAtSourceTime:kCMTimeZero];

  VIDEO_WRITER_IS_READY = true;
}

OK, so now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:

- (void) captureScreenVideo {

  if (!writerInput.readyForMoreMediaData) {
    return;
  }

  CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
  NSInteger myDataLength = esize.width * esize.height * 4;
  GLuint *buffer = (GLuint *) malloc(myDataLength);
  glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
  CVPixelBufferRef pixel_buffer = NULL;
  CVPixelBufferCreateWithBytes (NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA, buffer, 4 * esize.width, NULL, 0, NULL, &pixel_buffer);

  /* DON'T FREE THIS BEFORE USING pixel_buffer! */ 
  //free(buffer);

  if (![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
    NSLog(@"FAIL");
  } else {
    NSLog(@"Success:%d", currentFrame);
    currentTime = CMTimeAdd(currentTime, frameLength);
  }

  free(buffer);
  CVPixelBufferRelease(pixel_buffer);


  currentFrame++;

  if (currentFrame > MAX_FRAMES) {
    VIDEO_WRITER_IS_READY = false;
    [writerInput markAsFinished];
    [videoWriter finishWriting];
    [videoWriter release];

    [self moveVideoToSavedPhotos]; 
  }
}

Finally, I move the video to the camera roll:

- (void) moveVideoToSavedPhotos {
  ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
  NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];    
  NSURL* fileURL = [NSURL fileURLWithPath:localVid];

  [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                              completionBlock:^(NSURL *assetURL, NSError *error) {
                                if (error) {   
                                  NSLog(@"%@: Error saving context: %@", [self class], [error localizedDescription]);
                                }
                              }];
  [library release];
}

However, as I said, I'm crashing in the call to appendPixelBuffer.

Sorry for sending so much code, but I really don't know what I'm doing wrong. It seemed like it would be trivial to update a project that writes images to a video, but I'm unable to take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code example of OpenGL --> video, that would be amazing... Thanks!


7 Answers


Based on the code above, I just got something similar working in my open-source GPUImage framework, so I thought I'd provide my working solution for this. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of a manually created pixel buffer for each frame.

I first configure the movie to be recorded:

NSError *error = nil;

assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
    NSLog(@"Error: %@", error);
}


NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey];


assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

[assetWriter addInput:assetWriterVideoInput];

Then I use the following code to grab each rendered frame with glReadPixels():

CVPixelBufferRef pixel_buffer = NULL;

CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
    return;
}
else
{
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120);

if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) 
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
} 
else 
{
//        NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

CVPixelBufferRelease(pixel_buffer);

One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the timescale provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after a retrieval from the pool had failed, it would abort the recording. Hence the early bailout in the code above.
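
For what it's worth, one way to guard against that duplicate-timestamp failure is to compare the new frame time against the last one actually appended, before pulling a buffer from the pool. This is only a sketch, not the actual GPUImage code; previousFrameTime is an assumed CMTime ivar initialized to kCMTimeInvalid:

// Hypothetical guard (not part of the code above): drop the frame if its
// timestamp would not advance past the previously appended one.
CMTime frameTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);
if (CMTIME_IS_VALID(previousFrameTime) && (CMTimeCompare(frameTime, previousFrameTime) <= 0))
{
    return; // dropping a frame is cheaper than aborting the whole recording
}
previousFrameTime = frameTime;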

In addition to the above code, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.
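
For context, the color swizzle itself is tiny; the actual GPUImage shader may differ, but a minimal sketch of such a fragment shader (OpenGL ES 2.0 GLSL, embedded here as an Objective-C string constant; the varying/uniform names are assumptions) looks roughly like this:

// Minimal RGBA -> BGRA swizzling fragment shader sketch. Rendering the final
// pass through this swaps the red and blue channels, so the bytes returned by
// glReadPixels in GL_RGBA order already match the movie input's BGRA layout.
static NSString *const kColorSwizzlingFragmentShader =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D inputImageTexture;\n"
    @"void main()\n"
    @"{\n"
    @"    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n"
    @"}\n";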

Again, all of the code for this can be found in the GPUImage repository, under the GPUImageMovieWriter class.

Answered 2012-03-01T18:51:39.033

It looks like a few things need to be done here:

  1. Based on the docs, the recommended way to create a pixel buffer seems to be to use CVPixelBufferPoolCreatePixelBuffer on the adaptor.pixelBufferPool.
  2. You can then fill the buffer by getting its address with CVPixelBufferLockBaseAddress followed by CVPixelBufferGetBaseAddress, and unlocking the memory with CVPixelBufferUnlockBaseAddress before passing it to the adaptor.
  3. The pixel buffer can be passed to the input only when writerInput.readyForMoreMediaData is YES. This means "wait until it is ready". A usleep until it becomes YES works, but you can also use key-value observing (see the sketch after this list).
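
A minimal sketch of point 3, reusing the writerInput/adaptor/pixel_buffer/currentTime names from the question (the usleep variant; a key-value observer on readyForMoreMediaData avoids the busy wait):

// Simplest form of "wait until ready": poll before appending.
// (usleep is declared in <unistd.h>; KVO on readyForMoreMediaData is nicer.)
while (!writerInput.readyForMoreMediaData)
{
    usleep(10000); // sleep 10 ms, then check again
}
[adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime];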

The rest of the stuff is fine. With this much, the original code produces a playable video file.

Answered 2011-11-13T12:00:27.317

"In case anyone stumbles upon this, I finally got it to work... and understand it a bit better now than I did. I had an error in the code above where I was freeing the data buffer filled from glReadPixels before calling appendPixelBuffer. That is, I thought it was safe to free it since I had already created the CVPixelBufferRef. I've edited the code above so the pixel buffer now actually has data! – Angus Forbes Jun 28 '11 at 5:58"

This is the real cause of your crash; I ran into this problem as well. Do not free the buffer, even though you have already created the CVPixelBufferRef from it.
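
If you do want the malloc'd glReadPixels buffer to be cleaned up, a safer pattern is to hand CVPixelBufferCreateWithBytes a release callback instead of calling free() yourself, so Core Video frees the memory only once the pixel buffer is truly done with it. A sketch only, using the question's variables and an assumed callback name:

// Sketch: let Core Video free the bytes when the CVPixelBuffer is released,
// rather than calling free() around appendPixelBuffer:.
static void freePixelBufferBacking(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);
}

CVPixelBufferRef pixel_buffer = NULL;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, esize.width, esize.height,
                             kCVPixelFormatType_32BGRA, buffer, 4 * esize.width,
                             freePixelBufferBacking, NULL, NULL, &pixel_buffer);
// ... append pixel_buffer ...
CVPixelBufferRelease(pixel_buffer); // the callback frees 'buffer' once nothing else holds it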

Answered 2012-02-03T09:54:57.513

This looks like incorrect memory management. The fact that the error says the message was sent to a __NSCFDictionary rather than to an AVAssetWriterInputPixelBufferAdaptor is highly suspicious.

Why the manual retain on the adaptor? That looks hacky, given that Cocoa Touch now fully supports ARC.

That's a starting point for pinning down the memory problem.
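
(As an aside, under ARC the manual retain disappears entirely; a sketch of the relevant declarations, assuming the project is migrated to ARC, would simply hold strong references for the lifetime of the recording:)

// Under ARC, strong properties replace the manual [adaptor retain]:
@property (nonatomic, strong) AVAssetWriter *videoWriter;
@property (nonatomic, strong) AVAssetWriterInput *writerInput;
@property (nonatomic, strong) AVAssetWriterInputPixelBufferAdaptor *adaptor;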

Answered 2011-10-26T07:50:14.480

From your error message -[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0, it looks like your pixelBufferAdaptor has been released and its address now points to a dictionary.

Answered 2014-12-03T01:11:26.917

The only code I ever got to work for this is at:

https://demonicactivity.blogspot.com/2016/11/tech-serious-ios-developers-use-every.html

  // [_context presentRenderbuffer:GL_RENDERBUFFER];

dispatch_async(dispatch_get_main_queue(), ^{
    @autoreleasepool {
        // To capture the output to an OpenGL render buffer...
        NSInteger myDataLength = _backingWidth * _backingHeight * 4;
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
        glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        // To swap the pixel buffer to a CoreGraphics context (as a CGImage)
        CGDataProviderRef provider;
        CGColorSpaceRef colorSpaceRef;
        CGImageRef imageRef;
        CVPixelBufferRef pixelBuffer;
        @try {
            provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, &releaseDataCallback);
            int bitsPerComponent = 8;
            int bitsPerPixel = 32;
            int bytesPerRow = 4 * _backingWidth;
            colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
            CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
            imageRef = CGImageCreate(_backingWidth, _backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        } @catch (NSException *exception) {
            NSLog(@"Exception: %@", [exception reason]);
        } @finally {
            if (imageRef) {
                // To convert the CGImage to a pixel buffer (for writing to a file using AVAssetWriter)
                pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:imageRef];
                // To verify the integrity of the pixel buffer (by converting it back to a CGImage, and then displaying it in a layer)
                imageLayer.contents = (__bridge id)[CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:_ciContext];
            }
            CGDataProviderRelease(provider);
            CGColorSpaceRelease(colorSpaceRef);
            CGImageRelease(imageRef);
        }

    }
});

. . .

The callback for releasing the data held by the CGDataProvider instance:

static void releaseDataCallback (void *info, const void *data, size_t size) {
    free((void*)data);
}

The CVCGImageUtil class interface and implementation files, respectively:

@import Foundation;
@import CoreMedia;
@import CoreGraphics;
@import QuartzCore;
@import CoreImage;
@import UIKit;

@interface CVCGImageUtil : NSObject

+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context;

+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image;

+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image;

@end

#import "CVCGImageUtil.h"

@implementation CVCGImageUtil

+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context
{
    // CVPixelBuffer to CoreImage
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI)];
    CGPoint origin = [image extent].origin;
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];

    // CoreImage to CGImage via CoreImage context
    CGImageRef cgImage = [context createCGImage:image fromRect:[image extent]];

    // CGImage to UIImage (OPTIONAL)
    //UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    //return (CGImageRef)uiImage.CGImage;

    return cgImage;
}

+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CGSize frameSize = CGSizeMake(CGImageGetWidth(image),
                                  CGImageGetHeight(image));
    NSDictionary *options =
    [NSDictionary dictionaryWithObjectsAndKeys:
     [NSNumber numberWithBool:YES],
     kCVPixelBufferCGImageCompatibilityKey,
     [NSNumber numberWithBool:YES],
     kCVPixelBufferCGBitmapContextCompatibilityKey,
     nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status =
    CVPixelBufferCreate(
                        kCFAllocatorDefault, frameSize.width, frameSize.height,
                        kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                        &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
                                                 pxdata, frameSize.width, frameSize.height,
                                                 8, CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGBitmapByteOrder32Little |
                                                 kCGImageAlphaPremultipliedFirst);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
    CMSampleBufferRef newSampleBuffer = NULL;
    CMSampleTimingInfo timimgInfo = kCMTimingInfoInvalid;
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(
                                                 NULL, pixelBuffer, &videoInfo);
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                       pixelBuffer,
                                       true,
                                       NULL,
                                       NULL,
                                       videoInfo,
                                       &timimgInfo,
                                       &newSampleBuffer);

    return newSampleBuffer;
}

@end

That fully answers part B of your question. Part A is covered in a separate answer...

Answered 2016-11-04T00:22:58.597

With this code I have never once had a failure reading and writing a video file on the iPhone; in your implementation, you simply replace the call in the processFrame method (found near the end of the implementation) with a call to whatever equivalent method you pass your pixel buffers to as a parameter, or otherwise modify that method to return pixel buffers generated per the sample code above — it's basic, so you should be fine:

//
//  ExportVideo.h
//  ChromaFilterTest
//
//  Created by James Alan Bush on 10/30/16.
//  Copyright © 2016 James Alan Bush. All rights reserved.
//

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import "GLKitView.h"

@interface ExportVideo : NSObject
{
    AVURLAsset                           *_asset;
    AVAssetReader                        *_reader;
    AVAssetWriter                        *_writer;
    NSString                             *_outputURL;
    NSURL                                *_outURL;
    AVAssetReaderTrackOutput             *_readerAudioOutput;
    AVAssetWriterInput                   *_writerAudioInput;
    AVAssetReaderTrackOutput             *_readerVideoOutput;
    AVAssetWriterInput                   *_writerVideoInput;
    CVPixelBufferRef                      _currentBuffer;
    dispatch_queue_t                      _mainSerializationQueue;
    dispatch_queue_t                      _rwAudioSerializationQueue;
    dispatch_queue_t                      _rwVideoSerializationQueue;
    dispatch_group_t                      _dispatchGroup;
    BOOL                                  _cancelled;
    BOOL                                  _audioFinished;
    BOOL                                  _videoFinished;
    AVAssetWriterInputPixelBufferAdaptor *_pixelBufferAdaptor;
}

@property (readwrite, retain) NSURL *url;
@property (readwrite, retain) GLKitView *renderer;

- (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer;
- (void)startProcessing;
@end


//
//  ExportVideo.m
//  ChromaFilterTest
//
//  Created by James Alan Bush on 10/30/16.
//  Copyright © 2016 James Alan Bush. All rights reserved.
//

#import "ExportVideo.h"
#import "GLKitView.h"

@implementation ExportVideo

@synthesize url = _url;

- (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer {
    NSLog(@"ExportVideo");
    if (!(self = [super init])) {
        return nil;
    }

    self.url = url;
    self.renderer = renderer;

    NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
    _mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

    NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
    _rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);

    NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
    _rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

    return self;
}

- (void)startProcessing {
    NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    _asset = [[AVURLAsset alloc] initWithURL:self.url options:inputOptions];
    NSLog(@"URL: %@", self.url);
    _cancelled = NO;
    [_asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{
        dispatch_async(_mainSerializationQueue, ^{
            if (_cancelled)
                return;
            BOOL success = YES;
            NSError *localError = nil;
            success = ([_asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
            if (success)
            {
                NSFileManager *fm = [NSFileManager defaultManager];
                NSString *localOutputPath = [self.url path];
                if ([fm fileExistsAtPath:localOutputPath])
                    //success = [fm removeItemAtPath:localOutputPath error:&localError];
                    success = TRUE;
            }
            if (success)
                success = [self setupAssetReaderAndAssetWriter:&localError];
            if (success)
                success = [self startAssetReaderAndWriter:&localError];
            if (!success)
                [self readingAndWritingDidFinishSuccessfully:success withError:localError];
        });
    }];
}


- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
    // Create and initialize the asset reader.
    _reader = [[AVAssetReader alloc] initWithAsset:_asset error:outError];
    BOOL success = (_reader != nil);
    if (success)
    {
        // If the asset reader was successfully initialized, do the same for the asset writer.
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        _outputURL = paths[0];
        NSFileManager *manager = [NSFileManager defaultManager];
        [manager createDirectoryAtPath:_outputURL withIntermediateDirectories:YES attributes:nil error:nil];
        _outputURL = [_outputURL stringByAppendingPathComponent:@"output.mov"];
        [manager removeItemAtPath:_outputURL error:nil];
        _outURL = [NSURL fileURLWithPath:_outputURL];
        _writer = [[AVAssetWriter alloc] initWithURL:_outURL fileType:AVFileTypeQuickTimeMovie error:outError];
        success = (_writer != nil);
    }

    if (success)
    {
        // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
        AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
        NSArray *audioTracks = [_asset tracksWithMediaType:AVMediaTypeAudio];
        if ([audioTracks count] > 0)
            assetAudioTrack = [audioTracks objectAtIndex:0];
        NSArray *videoTracks = [_asset tracksWithMediaType:AVMediaTypeVideo];
        if ([videoTracks count] > 0)
            assetVideoTrack = [videoTracks objectAtIndex:0];

        if (assetAudioTrack)
        {
            // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
            NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
            _readerAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
            [_reader addOutput:_readerAudioOutput];
            // Then, set the compression settings to 128kbps AAC and create the asset writer input.
            AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
            };
            NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
            NSDictionary *compressionAudioSettings = @{
                                                       AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                                                       AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                                                       AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                                                       AVChannelLayoutKey    : channelLayoutAsData,
                                                       AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
                                                       };
            _writerAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
            [_writer addInput:_writerAudioInput];
        }

        if (assetVideoTrack)
        {
            // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
            NSDictionary *decompressionVideoSettings = @{
                                                         (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
                                                         (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
                                                         };
            _readerVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
            [_reader addOutput:_readerVideoOutput];
            CMFormatDescriptionRef formatDescription = NULL;
            // Grab the video format descriptions from the video track and grab the first one if it exists.
            NSArray *formatDescriptions = [assetVideoTrack formatDescriptions];
            if ([formatDescriptions count] > 0)
                formatDescription = (__bridge CMFormatDescriptionRef)[formatDescriptions objectAtIndex:0];
            CGSize trackDimensions = {
                .width = 0.0,
                .height = 0.0,
            };
            // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them direcly from the track itself.
            if (formatDescription)
                trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
            else
                trackDimensions = [assetVideoTrack naturalSize];
            NSDictionary *compressionSettings = nil;
            // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
            if (formatDescription)
            {
                NSDictionary *cleanAperture = nil;
                NSDictionary *pixelAspectRatio = nil;
                CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                if (cleanApertureFromCMFormatDescription)
                {
                    cleanAperture = @{
                                      AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                                      AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                                      AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                                      AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                                      };
                }
                CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                if (pixelAspectRatioFromCMFormatDescription)
                {
                    pixelAspectRatio = @{
                                         AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                                         AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                                         };
                }
                // Add whichever settings we could grab from the format description to the compression settings dictionary.
                if (cleanAperture || pixelAspectRatio)
                {
                    NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                    if (cleanAperture)
                        [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                    if (pixelAspectRatio)
                        [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                    compressionSettings = mutableCompressionSettings;
                }
            }
            // Create the video settings dictionary for H.264.
            NSMutableDictionary *videoSettings = (NSMutableDictionary *) @{
                                                                           AVVideoCodecKey  : AVVideoCodecH264,
                                                                           AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                                                                           AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
                                                                           };
            // Put the compression settings into the video settings dictionary if we were able to grab them.
            if (compressionSettings)
                [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
            // Create the asset writer input and add it to the asset writer.
            _writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
            NSDictionary *pixelBufferAdaptorSettings = @{
                                                         (id)kCVPixelBufferPixelFormatTypeKey     : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
                                                         (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary],
                                                         (id)kCVPixelBufferWidthKey               : [NSNumber numberWithDouble:trackDimensions.width],
                                                         (id)kCVPixelBufferHeightKey              : [NSNumber numberWithDouble:trackDimensions.height]
                                                         };

            _pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_writerVideoInput sourcePixelBufferAttributes:pixelBufferAdaptorSettings];

            [_writer addInput:_writerVideoInput];
        }
    }
    return success;
}

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
    BOOL success = YES;
    // Attempt to start the asset reader.
    success = [_reader startReading];
    if (!success) {
        *outError = [_reader error];
        NSLog(@"Reader error");
    }
    if (success)
    {
        // If the reader started successfully, attempt to start the asset writer.
        success = [_writer startWriting];
        if (!success) {
            *outError = [_writer error];
            NSLog(@"Writer error");
        }
    }

    if (success)
    {
        // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
        _dispatchGroup = dispatch_group_create();
        [_writer startSessionAtSourceTime:kCMTimeZero];
        _audioFinished = NO;
        _videoFinished = NO;

        if (_writerAudioInput)
        {
            // If there is audio to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(_dispatchGroup);
            // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
            [_writerAudioInput requestMediaDataWhenReadyOnQueue:_rwAudioSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (_audioFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([_writerAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next audio sample buffer, and append it to the output file.
                    CMSampleBufferRef sampleBuffer = [_readerAudioOutput copyNextSampleBuffer];
                    if (sampleBuffer != NULL)
                    {
                        BOOL success = [_writerAudioInput appendSampleBuffer:sampleBuffer];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                    BOOL oldFinished = _audioFinished;
                    _audioFinished = YES;
                    if (oldFinished == NO)
                    {
                        [_writerAudioInput markAsFinished];
                    }
                    dispatch_group_leave(_dispatchGroup);
                }
            }];
        }

        if (_writerVideoInput)
        {
            // If we had video to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(_dispatchGroup);
            // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
            [_writerVideoInput requestMediaDataWhenReadyOnQueue:_rwVideoSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (_videoFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([_writerVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next video sample buffer, and append it to the output file.
                    CMSampleBufferRef sampleBuffer = [_readerVideoOutput copyNextSampleBuffer];

                    // Guard against a NULL sample buffer (end of stream) before asking for its image buffer.
                    CVImageBufferRef pixelBuffer = (sampleBuffer != NULL) ? CMSampleBufferGetImageBuffer(sampleBuffer) : NULL;
                    _currentBuffer = pixelBuffer;
                    [self performSelectorOnMainThread:@selector(processFrame) withObject:nil waitUntilDone:YES];

                    if (_currentBuffer != NULL)
                    {
                        //BOOL success = [_writerVideoInput appendSampleBuffer:sampleBuffer];
                        BOOL success = [_pixelBufferAdaptor appendPixelBuffer:_currentBuffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                    BOOL oldFinished = _videoFinished;
                    _videoFinished = YES;
                    if (oldFinished == NO)
                    {
                        [_writerVideoInput markAsFinished];
                    }
                    dispatch_group_leave(_dispatchGroup);
                }
            }];
        }
        // Set up the notification that the dispatch group will send when the audio and video work have both finished.
        dispatch_group_notify(_dispatchGroup, _mainSerializationQueue, ^{
            BOOL finalSuccess = YES;
            NSError *finalError = nil;
            // Check to see if the work has finished due to cancellation.
            if (_cancelled)
            {
                // If so, cancel the reader and writer.
                [_reader cancelReading];
                [_writer cancelWriting];
            }
            else
            {
                // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                if ([_reader status] == AVAssetReaderStatusFailed)
                {
                    finalSuccess = NO;
                    finalError = [_reader error];
                    NSLog(@"_reader finalError: %@", finalError);
                }
                // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                [_writer finishWritingWithCompletionHandler:^{
                    [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:[_writer error]];
                }];
            }
            // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.

        });
    }
    // Return success here to indicate whether the asset reader and writer were started successfully.
    return success;
}

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
    if (!success)
    {
        // If the reencoding process failed, we need to cancel the asset reader and writer.
        [_reader cancelReading];
        [_writer cancelWriting];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Handle any UI tasks here related to failure.
        });
    }
    else
    {
        // Reencoding was successful, reset booleans.
        _cancelled = NO;
        _videoFinished = NO;
        _audioFinished = NO;
        dispatch_async(dispatch_get_main_queue(), ^{
            UISaveVideoAtPathToSavedPhotosAlbum(_outputURL, nil, nil, nil);
        });
    }
    NSLog(@"readingAndWritingDidFinishSuccessfully success = %@ : Error = %@", (success == 0) ? @"NO" : @"YES", error);
}

- (void)processFrame {

    if (_currentBuffer) {
        if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly))
        {
            [self.renderer processPixelBuffer:_currentBuffer];
            CVPixelBufferUnlockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly);
        } else {
            NSLog(@"processFrame END");
            return;
        }
    }
}

@end
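
For completeness, a minimal usage sketch (the bundled file name and the GLKitView outlet name are assumptions; the initializer and startProcessing come from the header above):

// Sketch: kick off the read/process/write pipeline for a bundled movie.
NSURL *sourceURL = [[NSBundle mainBundle] URLForResource:@"source" withExtension:@"mov"]; // assumed asset
ExportVideo *exporter = [[ExportVideo alloc] initWithURL:sourceURL usingRenderer:self.glkView];
[exporter startProcessing]; // writes Documents/output.mov and saves it to the photo album on success
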
Answered 2016-11-04T00:24:36.837