
In my project, I need to copy a part of every frame of a video onto a single resulting image.

Capturing the video frames is not much of a problem. It would be something like this:

// duration is the movie length in seconds.
// frameDuration is 1/fps (e.g. for 24 fps, frameDuration = 1/24).
// player is a MPMoviePlayerController
for (NSTimeInterval i=0; i < duration; i += frameDuration) {
    UIImage * image = [player thumbnailImageAtTime:i timeOption:MPMovieTimeOptionExact];

    CGRect destinationRect = [self getDestinationRect:i];
    [self drawImage:image inRect:destinationRect fromRect:originRect];

    // UI feedback: i/duration gives the fraction of the movie processed so far.
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i/duration] waitUntilDone:NO];
}

The problem comes when I try to implement the drawImage:inRect:fromRect: method.
I tried this code, which:

  1. Creates a new CGImage with CGImageCreateWithImageInRect from the video frame, to extract a chunk of the image.
  2. Draws the chunk onto an image context with CGContextDrawImage (a sketch follows this list).
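
A minimal sketch of that method, assuming the composite image is backed by an instance-level CGContextRef named context (that context and its setup are my assumption and are not shown here):

// Sketch of the Quartz approach: extract a sub-rectangle of the frame and
// draw it into the composite context. The extracted CGImage is released
// right away, since this runs once per frame for the whole movie.
- (void)drawImage:(UIImage *)image inRect:(CGRect)destinationRect fromRect:(CGRect)originRect {
    CGImageRef chunk = CGImageCreateWithImageInRect(image.CGImage, originRect);
    if (chunk) {
        CGContextDrawImage(context, destinationRect, chunk);
        CGImageRelease(chunk);
    }
}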

But when the video reaches 12-14 seconds, my iPhone 4S fires its third memory warning and crashes. I have profiled the app with the Leaks instrument, and it finds no leak at all...

I am not very strong with Quartz. Is there a better, more optimized way to achieve this?


2 Answers


In the end, I kept the Quartz part of my code and changed the way I retrieve the images.

I now use AVFoundation, which is a much faster solution.

// Creating the tools : 1/ the video asset, 2/ the image generator, 3/ the composition, which helps to retrieve video properties.
AVURLAsset *asset = [[[AVURLAsset alloc] initWithURL:moviePathURL
                                             options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], AVURLAssetPreferPreciseDurationAndTimingKey, nil]] autorelease];
AVAssetImageGenerator *generator = [[[AVAssetImageGenerator alloc] initWithAsset:asset] autorelease];
generator.appliesPreferredTrackTransform = YES; // if I omit this, the frames are rotated 90° (didn't try in landscape)
AVVideoComposition * composition = [AVVideoComposition videoCompositionWithPropertiesOfAsset:asset];

// Retrieving the video properties
NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
frameDuration = CMTimeGetSeconds(composition.frameDuration);
CGSize renderSize = composition.renderSize;
CGFloat totalFrames = round(duration/frameDuration);

// Selecting each frame we want to extract : all of them.
NSMutableArray * times = [NSMutableArray arrayWithCapacity:round(duration/frameDuration)];
for (int i=0; i<totalFrames; i++) {
    NSValue *time = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(i*frameDuration, composition.frameDuration.timescale)];
    [times addObject:time];
}

__block int i = 0;
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error){
    if (result == AVAssetImageGeneratorSucceeded) {
        int x = round(CMTimeGetSeconds(requestedTime)/frameDuration);
        CGRect destinationStrip = CGRectMake(x, 0, 1, renderSize.height);
        [self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context];
    }
    else
        NSLog(@"Ouch: %@", error.description);
    i++;
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i/totalFrames] waitUntilDone:NO];
    if(i == totalFrames) {
        [self performSelectorOnMainThread:@selector(performVideoDidFinish) withObject:nil waitUntilDone:NO];
    }
};

// Launching the process...
generator.requestedTimeToleranceBefore = kCMTimeZero;
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.maximumSize = renderSize;
[generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler];
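
The context passed to drawImage:inRect:fromRect:inContext: is not shown above; a plausible setup, assuming the composite is one 1-pixel-wide strip per frame laid out horizontally, would be a plain bitmap context (illustrative, not part of the original answer):

// Illustrative only: a bitmap context wide enough for one column per frame.
// totalFrames and renderSize come from the composition above.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
                                             (size_t)totalFrames,       // width: one 1-pixel strip per frame
                                             (size_t)renderSize.height, // height: full frame height
                                             8,                         // bits per component
                                             0,                         // bytes per row, computed automatically
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// Once all frames are drawn, CGBitmapContextCreateImage(context) yields the
// final composite, and the context can be released with CGContextRelease.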

Even with very long videos it takes time, but it never crashes!

Answered 2013-02-26T20:32:29.387

In addition to Martin's answer, I would suggest shrinking the size of the images obtained through that call; that is, set the property generator.maximumSize = CGSizeMake(width, height); to keep the images as small as possible, so that they do not take up too much memory.
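
In context, this goes with the other generator settings before launching the generation (width and height are placeholders for whatever output size you actually need):

// Cap the size of every CGImage the generator produces; smaller images
// mean less transient memory per completion-handler invocation.
generator.maximumSize = CGSizeMake(width, height);
[generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler];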

Answered 2014-06-27T14:14:57.087