
I'm trying to use AVCaptureSession to grab images from the front camera for processing. So far, whenever a new frame becomes available I simply assign it to a variable, and an NSTimer checks every tenth of a second whether there's a new frame, processing it if there is.

What I'd like is to grab a frame, freeze the camera, and then grab the next frame whenever I choose. Something like [captureSession getNextFrame], you know?

Here's part of my code, although I'm not sure how helpful it might be:

- (void)startFeed {

    loopTimerIndex = 0;

    NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

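    // NOTE: objectAtIndex:1 assumes the front camera is the second video
    // device; on single-camera hardware this will throw. Passing error:nil
    // also silently discards any setup failure.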
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput 
                                          deviceInputWithDevice:[captureDevices objectAtIndex:1] 
                                          error:nil];

    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Cap delivery at 10 fps and drop frames that arrive while an
    // earlier one is still being processed.
    captureOutput.minFrameDuration = CMTimeMake(1, 10);
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", nil);

    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetLow;
    [captureSession addInput:captureInput];
    [captureSession addOutput:captureOutput];

    imageView = [[UIImage alloc] init]; // placeholder; replaced by each captured frame

    [captureSession startRunning];

}

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection {

    loopTimerIndex++;

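    // This callback fires on the background cameraQueue, so give the
    // per-frame work its own autorelease pool.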
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

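    // Caution: we're still on the background cameraQueue here; delegate
    // methods that touch UIKit should hop back to the main thread.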
    imageView = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationLeftMirrored];
    [delegate updatePresentor:imageView];
    if(loopTimerIndex == 1) {
        [delegate feedStarted];
    }

    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];

}

1 Answer


You don't actively poll the camera for frames, because that's not how the capture process is architected. Instead, if you only want to display frames every tenth of a second rather than every 1/30th of a second or faster, you should simply ignore the frames in between.

For example, you could maintain a timestamp to compare against each time -captureOutput:didOutputSampleBuffer:fromConnection: fires. If 0.1 seconds or more have elapsed since that timestamp, process and display the camera frame and reset the timestamp to the current time. Otherwise, ignore the frame.
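A minimal sketch of that throttling idea, assuming an ivar like lastFrameTime and wall-clock timing via CFAbsoluteTimeGetCurrent() (both are illustrative choices, not part of the original answer):

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection {

    // Ivar assumed: CFAbsoluteTime lastFrameTime;
    CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();

    // Less than 0.1 s since the last processed frame: drop this one.
    if (now - lastFrameTime < 0.1) {
        return;
    }
    lastFrameTime = now;

    // ... process and display the frame exactly as before ...
}

Comparing against the buffer's presentation timestamp (CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) would work just as well, and ties the throttle to the capture clock instead of the wall clock.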

Answered 2010-11-22T20:08:14.407