
So I've been working on a video-capture project that lets users capture images and video and apply filters. I'm using the AVFoundation framework; I've managed to capture still images and to grab video frames as UIImage objects... the only thing left is recording video.

Here is my code:

- (void)initCapture {

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Camera input
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"ERROR: trying to open camera: %@", error);
    }
    [session addInput:input];

    // Still image output (JPEG)
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
    [outputSettings release]; // MRC: balance the alloc/init above
    [session addOutput:stillImageOutput];

    // Video data output, delivering BGRA frames to the delegate on a serial queue
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    [session addOutput:captureOutput];

    [session startRunning];
}




- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection 
{ 
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the pixel buffer and wrap its BGRA data in a bitmap context.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    CGImageRelease(newImage);

    // Apply the custom filter on the CPU.
    UIImage *ima = [filter applyFilter:image];

    /*if(isRecording == YES)
    {
        [imageArray addObject:ima];  
    }
     NSLog(@"Count= %d",imageArray.count);*/

    // Update the preview on the main thread.
    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:ima waitUntilDone:YES];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];
}

I tried storing the UIImages in a mutable array, but that was a silly idea. Any thoughts? Any help would be greatly appreciated.


1 Answer


Are you using CIFilter? If not, you might want to look into it for fast, GPU-based transforms.
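
For reference, here is a minimal sketch of filtering a captured frame with Core Image on the GPU. The filter name (CISepiaTone) and the ciContext ivar are illustrative stand-ins for whatever your own filter object does, not your existing code; create the CIContext once and reuse it for every frame.

#import <CoreImage/CoreImage.h>

// Sketch only: wrap the capture buffer in a CIImage, run a GPU-based
// CIFilter over it, and render the result back out as a UIImage.
- (UIImage *)gpuFilteredImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // No copy is made here; the CIImage just wraps the capture buffer.
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:inputImage forKey:kCIInputImageKey];
    [sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:kCIInputIntensityKey];

    // ciContext is assumed to be an ivar created once, e.g. in initCapture:
    //     ciContext = [[CIContext contextWithOptions:nil] retain];
    CIImage *outputImage = [sepia outputImage];
    CGImageRef cgImage = [ciContext createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *result = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationRight];
    CGImageRelease(cgImage);
    return result;
}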

You probably want to record directly into an AVAssetWriter as each frame is produced. Take a look at Apple's RosyWriter sample code to see how they do this. In short, they use an AVAssetWriter to capture the frames into a temporary movie file and then, when recording finishes, save that file to the camera roll.
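
As a rough sketch of that approach (this is not the actual RosyWriter code; assetWriter, writerInput, pixelBufferAdaptor and sessionStarted are ivars you would add, and the output settings and dimensions are placeholders), the writer side could look something like this. You would call the append method from captureOutput:didOutputSampleBuffer:fromConnection: for every frame while recording, passing CMSampleBufferGetPresentationTimeStamp(sampleBuffer) as the time; if you want the filtered frames in the movie, render the filter output back into a pixel buffer rather than appending the untouched capture buffer (RosyWriter modifies the capture buffer in place).

- (void)startRecordingToURL:(NSURL *)outputURL
{
    NSError *error = nil;
    assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];

    // Placeholder dimensions; match them to your session preset.
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:480], AVVideoWidthKey,
                                   [NSNumber numberWithInt:360], AVVideoHeightKey,
                                   nil];
    writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                                 outputSettings:videoSettings];
    writerInput.expectsMediaDataInRealTime = YES;

    pixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                             initWithAssetWriterInput:writerInput
                          sourcePixelBufferAttributes:nil];

    [assetWriter addInput:writerInput];
    [assetWriter startWriting];
    sessionStarted = NO;
}

// Called from the capture delegate for every frame while recording.
- (void)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer atTime:(CMTime)time
{
    // Start the writer session at the first frame's timestamp.
    if (!sessionStarted) {
        [assetWriter startSessionAtSourceTime:time];
        sessionStarted = YES;
    }
    if (writerInput.readyForMoreMediaData) {
        [pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
    }
}

// When the user stops recording, finish the file:
//     [writerInput markAsFinished];
//     [assetWriter finishWriting];
// and then copy the movie to the camera roll, e.g. with ALAssetsLibrary's
// writeVideoAtPathToSavedPhotosAlbum:completionBlock:, as RosyWriter does.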

One caveat, though: RosyWriter got about 4 fps on my 4th-generation iPod touch, because it brute-forces the pixel changes on the CPU. Core Image runs its filters on the GPU, and I was able to reach 12 fps, which in my opinion still isn't what it should be.

Good luck!

answered 2012-07-17T15:25:36.367