
Setting the scene

I am working on a video processing app that runs from the command line to read in, process, and then export video. I'm working with four tracks:

  1. Lots of clips that I append into a single track to make one video. Let's call this the ugcVideoComposition.
  2. Clips with alpha, which get positioned on a second track and, using layer instructions, are composited on export to play back over the top of the ugcVideoComposition.
  3. A music audio track.
  4. An audio track for the ugcVideoComposition containing the audio from the clips appended into the single track.

I have this all working; I can composite it and export it correctly using AVAssetExportSession. Roughly, the setup looks like the sketch below.
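
(A minimal, illustrative sketch only - the source clips, `outputURL`, and the `videoComposition` carrying the overlay layer instructions are assumed to be built elsewhere.)

    AVMutableComposition *composition = [AVMutableComposition composition];

    // Track 1: the appended UGC clips (the ugcVideoComposition).
    AVMutableCompositionTrack *ugcVideoTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];

    // Track 2: the alpha overlay clips.
    AVMutableCompositionTrack *overlayTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];

    // Tracks 3 and 4: the music and the UGC clip audio.
    AVMutableCompositionTrack *musicTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                 preferredTrackID:kCMPersistentTrackID_Invalid];
    AVMutableCompositionTrack *ugcAudioTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                 preferredTrackID:kCMPersistentTrackID_Invalid];

    // ... insert time ranges from the source assets into each track here ...

    AVAssetExportSession *exportSession =
        [[AVAssetExportSession alloc] initWithAsset:composition
                                         presetName:AVAssetExportPresetHighestQuality];
    exportSession.outputURL = outputURL;
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;
    exportSession.videoComposition = videoComposition; // overlay layer instructions
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        // Check exportSession.status here.
    }];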

The problem

What I now want to do is apply filters and gradients to the ugcVideoComposition.

My research so far suggests that this is done by using AVAssetReader and AVAssetWriter, extracting a CIImage, manipulating it with filters and then writing that out.

I haven't yet got all of the functionality above working again, but I have managed to get the ugcVideoComposition read in and written back out to disk using the AssetReader and AssetWriter.

    BOOL done = NO;
    while (!done)
    {
        while ([assetWriterVideoInput isReadyForMoreMediaData] && !done)
        {
            CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
            if (sampleBuffer)
            {
                // Let's try to create an image....
                CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer];

                // < Apply filters and transformations to the CIImage here

                // < HOW TO GET THE TRANSFORMED IMAGE BACK INTO SAMPLE BUFFER??? >

                // Write things back out.
                [assetWriterVideoInput appendSampleBuffer:sampleBuffer];

                CFRelease(sampleBuffer);
                sampleBuffer = NULL;
            }
            else
            {
                // Find out why we couldn't get another sample buffer....
                if (assetReader.status == AVAssetReaderStatusFailed)
                {
                    NSError *failureError = assetReader.error;
                    // Do something with this error.
                }
                else
                {
                    // Some kind of success....
                    done = YES;
                    [assetWriter finishWriting];

                }
            }
        }
    }

As you can see, I can even get a CIImage from the CMSampleBuffer, and I'm confident I can work out how to manipulate the image and apply any effects I need. What I don't know how to do is put the resulting manipulated image BACK into the sample buffer so I can write it out again.

The question

Given a CIImage, how can I put that into a sampleBuffer to append it with the assetWriter?

Any help appreciated - the AVFoundation documentation is terrible and either misses crucial points (like how to put an image back after you've extracted it), or is focused on rendering images to the iPhone screen, which is not what I want to do.

Much appreciated and thanks!


2 Answers


Try using: SDAVAssetExportSession

SDAVAssetExportSession on GitHub

and then implement a delegate to process the pixels:

    - (void)exportSession:(SDAVAssetExportSession *)exportSession
              renderFrame:(CVPixelBufferRef)pixelBuffer
     withPresentationTime:(CMTime)presentationTime
                 toBuffer:(CVPixelBufferRef)renderBuffer
    {
        // Do CIImage and CIFilter work in here.
    }
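
For instance, the body of that delegate method might look something like this (a sketch only; it assumes an iOS-style CIContext stored in `self.ciContext` and created once at setup time, since building one per frame is expensive, and the sepia filter is just an example):

    - (void)exportSession:(SDAVAssetExportSession *)exportSession
              renderFrame:(CVPixelBufferRef)pixelBuffer
     withPresentationTime:(CMTime)presentationTime
                 toBuffer:(CVPixelBufferRef)renderBuffer
    {
        // Wrap the incoming frame in a CIImage.
        CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

        // Apply whatever filter chain you need.
        CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
        [filter setValue:inputImage forKey:kCIInputImageKey];
        [filter setValue:@1.0 forKey:kCIInputIntensityKey];
        CIImage *outputImage = [filter valueForKey:kCIOutputImageKey];

        // render:toCVPixelBuffer: writes the filtered result into the
        // buffer the export session will encode.
        [self.ciContext render:outputImage toCVPixelBuffer:renderBuffer];
    }
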
Answered 2014-04-05T23:57:45.450

I eventually found a solution by digging through a lot of half-complete samples from Apple and the poor AVFoundation documentation.

The biggest source of confusion is that, while at a high level AVFoundation is "reasonably" consistent between iOS and OSX, the lower-level items behave differently, have different methods, and use different techniques. This solution is for OSX.

Setting up your AssetWriter

The first thing is to make sure that when you set up the asset writer, you add an adaptor that reads from a CVPixelBuffer. This buffer will contain the modified frames.

    // Create the asset writer input and add it to the asset writer.
    AVAssetWriterInput *assetWriterVideoInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:[[videoTracks objectAtIndex:0] mediaType]
                                           outputSettings:videoSettings];

    // Now create an adaptor that writes pixels too!
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                                                         sourcePixelBufferAttributes:nil];

    assetWriterVideoInput.expectsMediaDataInRealTime = NO;
    [assetWriter addInput:assetWriterVideoInput];
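
The `videoSettings` dictionary referenced above isn't shown here; a plausible version for H.264 output might look like the following (the codec choice and dimensions are my assumptions, so adjust to match your source material):

    // Assumed videoSettings for the asset writer input above.
    NSDictionary *videoSettings = @{
        AVVideoCodecKey  : AVVideoCodecH264,
        AVVideoWidthKey  : @640,
        AVVideoHeightKey : @360
    };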

Reading and writing

The challenge here is that I couldn't find directly comparable methods between iOS and OSX - iOS can render a context directly into a PixelBuffer, whereas OSX does NOT support that option. The context is also configured differently between iOS and OSX.

Note that you should also include QuartzCore.framework in your Xcode project.

Creating the context on OSX:

    // We don't want to create a context for every frame, so it lives
    // outside the loop.
    CIContext *context = [CIContext contextWithCGContext:
                             [[NSGraphicsContext currentContext] graphicsPort]
                                                 options:nil];
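
For comparison, on iOS the rough equivalent (a sketch; the OpenGL ES setup is assumed) would be a GPU-backed context that can render a CIImage straight into a CVPixelBuffer, which is exactly the option OSX lacked here:

    // iOS-side sketch: an EAGL-backed CIContext can render directly
    // into a pixel buffer via render:toCVPixelBuffer:.
    EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    CIContext *context = [CIContext contextWithEAGLContext:eaglContext];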

Now you want to loop, reading from the AssetReader and writing to the AssetWriter... but note that you write via the adaptor created earlier, not with a SampleBuffer.

    BOOL done = NO;
    while (!done)
    {
        while ([adaptor.assetWriterInput isReadyForMoreMediaData] && !done)
        {
            CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
            if (sampleBuffer)
            {
                CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

                // GRAB AN IMAGE FROM THE SAMPLE BUFFER
                CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                         [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                         [NSNumber numberWithInt:640], kCVPixelBufferWidthKey,
                                         [NSNumber numberWithInt:360], kCVPixelBufferHeightKey,
                                         nil];

                CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer options:options];

                //-----------------
                // FILTER IMAGE - APPLY ANY FILTERS IN HERE

                CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
                [filter setDefaults];
                [filter setValue:inputImage forKey:kCIInputImageKey];
                [filter setValue:@1.0f forKey:kCIInputIntensityKey];

                CIImage *outputImage = [filter valueForKey:kCIOutputImageKey];

                //-----------------
                // RENDER OUTPUT IMAGE BACK TO PIXEL BUFFER
                // 1. First, render the image.
                CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];

                // 2. Grab the size.
                CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));

                // 3. Convert the CGImage to a PixelBuffer.
                CVPixelBufferRef pxBuffer = NULL;
                // pixelBufferFromCGImage is documented below.
                pxBuffer = [self pixelBufferFromCGImage:finalImage andSize:size];

                // 4. Write things back out.
                // Calculate the frame time.
                CMTime frameTime = CMTimeMake(1, 30); // Represents 1 frame at 30 FPS.
                // Note that if you actually had a sequence of images (an
                // animation or transition perhaps), your frameTime would
                // represent the number of images / frames, not just 1 as here.
                CMTime presentTime = CMTimeAdd(currentTime, frameTime);

                // Finally write out using the adaptor.
                [adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];

                // Release the intermediates we created for this frame.
                CVPixelBufferRelease(pxBuffer);
                CGImageRelease(finalImage);

                CFRelease(sampleBuffer);
                sampleBuffer = NULL;
            }
            else
            {
                // Find out why we couldn't get another sample buffer....
                if (assetReader.status == AVAssetReaderStatusFailed)
                {
                    NSError *failureError = assetReader.error;
                    // Do something with this error.
                }
                else
                {
                    // Some kind of success....
                    done = YES;
                    [assetWriter finishWriting];
                }
            }
        }
    }

Creating the PixelBuffer

There MUST be an easier way, but for now this works, and is the only way I found to get directly from a CIImage to a PixelBuffer (via a CGImage) on OSX. The following code is cut and pasted from AVFoundation + AssetWriter: Generate Movie With Images and Audio.

    - (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image andSize:(CGSize)size
    {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;

        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, (size_t)size.width,
                                              (size_t)size.height, kCVPixelFormatType_32ARGB,
                                              (__bridge CFDictionaryRef)options,
                                              &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        // Use the buffer's own bytes-per-row; CoreVideo may pad each row.
        CGContextRef context = CGBitmapContextCreate(pxdata, (size_t)size.width,
                                                     (size_t)size.height, 8,
                                                     CVPixelBufferGetBytesPerRow(pxbuffer),
                                                     rgbColorSpace,
                                                     kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                               CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);

        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        // The caller is responsible for releasing the returned buffer.
        return pxbuffer;
    }
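
If your deployment target allows -[CIContext render:toCVPixelBuffer:] (long available on iOS, and on later OSX releases), one candidate for that easier way - sketched here, not something I've verified - is to skip the CGImage detour entirely: create the adaptor with non-nil sourcePixelBufferAttributes so it vends a pixel buffer pool, then render straight into a pooled buffer.

    // Hedged sketch, not part of the solution above. Requires the adaptor
    // to have non-nil sourcePixelBufferAttributes and the writer to have
    // started writing, otherwise pixelBufferPool is NULL.
    CVPixelBufferRef pxBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                       adaptor.pixelBufferPool,
                                       &pxBuffer);
    [context render:outputImage toCVPixelBuffer:pxBuffer];
    [adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];
    CVPixelBufferRelease(pxBuffer);
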
Answered 2014-04-03T02:55:00.970