I'm trying to use Apple's new AVMetadataFaceObject in my iOS 6 app, which lets you detect faces. Basically, the idea is to create an AVCaptureMetadataOutput object and add it as an output to an existing AVCaptureSession. So I grabbed Apple's SquareCam sample code from this link, and I create the metadata output like this:
CaptureObject = [[AVCaptureMetadataOutput alloc] init];
objectQueue = dispatch_queue_create("VideoDataOutputQueue", NULL);
[CaptureObject setMetadataObjectsDelegate:self queue:objectQueue];
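For reference, the class that receives the callback declares conformance to the delegate protocol; mine looks roughly like this (the class name and ivars here are my own):

@interface ViewController : UIViewController <AVCaptureMetadataOutputObjectsDelegate>
{
    AVCaptureMetadataOutput *CaptureObject;
    dispatch_queue_t objectQueue;
}
@end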
And here is where I add the input and outputs to the session:
- (void)setupAVCapture
{
    NSError *error = nil;

    AVCaptureSession *session = [AVCaptureSession new];
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone)
        [session setSessionPreset:AVCaptureSessionPreset640x480];
    else
        [session setSessionPreset:AVCaptureSessionPresetPhoto];

    // Select a video device, make an input
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    require(error == nil, bail); // AssertMacros.h macro; jumps to bail: on failure

    isUsingFrontFacingCamera = NO;
    if ([session canAddInput:deviceInput])
        [session addInput:deviceInput];

    // Make a still image output
    stillImageOutput = [AVCaptureStillImageOutput new];
    [stillImageOutput addObserver:self
                       forKeyPath:@"capturingStillImage"
                          options:NSKeyValueObservingOptionNew
                          context:AVCaptureStillImageIsCapturingStillImageContext];
    if ([session canAddOutput:stillImageOutput])
        [session addOutput:stillImageOutput];

    [session addOutput:CaptureObject]; // <<<<< HERE: adding the metadata output

    // Make a video data output
    videoDataOutput = [AVCaptureVideoDataOutput new];

    // we want BGRA, both CoreGraphics and OpenGL work well with 'BGRA'
    NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [videoDataOutput setVideoSettings:rgbOutputSettings];
    [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES]; // discard if the data output queue is blocked (as we process the still image)

    // create a serial dispatch queue used for the sample buffer delegate as well as when a still image is captured
    // a serial dispatch queue must be used to guarantee that video frames will be delivered in order
    // see the header doc for setSampleBufferDelegate:queue: for more information
    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];

    if ([session canAddOutput:videoDataOutput])
        [session addOutput:videoDataOutput];
    [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:NO];

    effectiveScale = 1.0;
    previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [previewLayer setBackgroundColor:[[UIColor blackColor] CGColor]];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    CALayer *rootLayer = [previewView layer];
    [rootLayer setMasksToBounds:YES];
    [previewLayer setFrame:[rootLayer bounds]];
    [rootLayer addSublayer:previewLayer];
    [session startRunning];

bail:
    if (error) {
        NSLog(@"Failed to set up capture session: %@", error);
    }
}
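One thing I'm unsure about is the metadataObjectTypes property, which I never set anywhere. If it has to be set explicitly, my guess is that it would look something like this, after the output has been added to the session (AVMetadataObjectTypeFace is the only face-related type I can find in the iOS 6 headers, so this is an assumption on my part):

[session addOutput:CaptureObject];
// availableMetadataObjectTypes is reportedly only populated once the
// output has been attached to a session
if ([[CaptureObject availableMetadataObjectTypes] containsObject:AVMetadataObjectTypeFace]) {
    [CaptureObject setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];
}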
So the delegate should eventually call this method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
}
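The body I eventually plan to put in that callback looks roughly like this (assuming the delivered objects are AVMetadataFaceObject instances, whose bounds are normalized to 0..1 per the docs):

for (AVMetadataObject *object in metadataObjects) {
    if ([[object type] isEqual:AVMetadataObjectTypeFace]) {
        AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
        NSLog(@"face %ld at %@", (long)[face faceID], NSStringFromCGRect([face bounds]));
    }
}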
But the callback is never invoked. Any ideas?