My problem
I am using AVFoundation to capture a frame from the video output every 5 seconds. When the user taps a button, the input camera should switch from front to back and vice versa. The problem is that every time I switch from the back camera to the front camera, it freezes (strangely, it works fine the other way around, i.e. front to back)!
The code I'm using
To switch between the cameras I use the exact code taken from Apple's AVCam sample code, with only the variable names changed slightly (so that it matches my code):
- (BOOL)toggleCamera
{
    BOOL success = NO;

    if ([self cameraCount] > 1) {
        NSError *error;
        AVCaptureDeviceInput *newVideoInput;
        AVCaptureDevicePosition position = [[self.videoInput device] position];

        if (position == AVCaptureDevicePositionBack)
            newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self frontFacingCamera] error:&error];
        else if (position == AVCaptureDevicePositionFront)
            newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:&error];
        else
            goto bail;

        if (newVideoInput != nil) {
            // Start configuring the session.
            [[self captureSession] beginConfiguration];

            // Remove the current video input device.
            [[self captureSession] removeInput:[self videoInput]];

            if ([[self captureSession] canAddInput:newVideoInput]) {
                [[self captureSession] addInput:newVideoInput];
                [self setVideoInput:newVideoInput];
            }
            else {
                [[self captureSession] addInput:[self videoInput]];
            }

            [[self captureSession] commitConfiguration];

            success = YES;
            [newVideoInput release];
        }
        else if (error) {
            NSLog(@"PICTURE TAKER: Failed to toggle cameras."
                  @"\nError: %@", error.localizedDescription);
        }
    }

bail:
    return success;
}
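For completeness, the button that switches cameras does nothing more than call this method (a simplified sketch; the action name below is illustrative, not my exact code):

- (IBAction)switchCameraButtonTapped:(id)sender
{
    // Sketch: the tap handler just forwards to the toggle above.
    [self toggleCamera];
}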
To set up the AVCaptureSession, I once again use the exact same code from the AVCam sample code, with a slight change in the output that is added (to fit my needs):
- (BOOL) setupSession
{
    BOOL success = NO;

    // Set torch and flash mode to auto
    if ([[self backFacingCamera] hasFlash]) {
        if ([[self backFacingCamera] lockForConfiguration:nil]) {
            if ([[self backFacingCamera] isFlashModeSupported:AVCaptureFlashModeAuto]) {
                [[self backFacingCamera] setFlashMode:AVCaptureFlashModeAuto];
            }
            [[self backFacingCamera] unlockForConfiguration];
        }
    }
    if ([[self backFacingCamera] hasTorch]) {
        if ([[self backFacingCamera] lockForConfiguration:nil]) {
            if ([[self backFacingCamera] isTorchModeSupported:AVCaptureTorchModeAuto]) {
                [[self backFacingCamera] setTorchMode:AVCaptureTorchModeAuto];
            }
            [[self backFacingCamera] unlockForConfiguration];
        }
    }

    // Init the device inputs
    AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self frontFacingCamera] error:nil];

    // Create a video output & configure it.
    AVCaptureVideoDataOutput *newVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    newVideoDataOutput.videoSettings = @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};

    // Set the output's delegate.
    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [newVideoDataOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Create session (use default AVCaptureSessionPresetHigh)
    AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];

    // Add inputs and output to the capture session
    if ([newCaptureSession canAddInput:newVideoInput]) {
        [newCaptureSession addInput:newVideoInput];
    }
    if ([newCaptureSession canAddOutput:newVideoDataOutput]) {
        [newCaptureSession addOutput:newVideoDataOutput];
    }

    [self setVideoOutput:newVideoDataOutput];
    [self setVideoInput:newVideoInput];
    [self setCaptureSession:newCaptureSession];

    [newVideoDataOutput release];
    [newVideoInput release];
    [newCaptureSession release];

    // Get our view's layer.
    CALayer *viewLayer = self.view.layer;

    // Add the session's preview layer.
    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    captureVideoPreviewLayer.frame = self.view.bounds;
    [viewLayer insertSublayer:captureVideoPreviewLayer below:self.imageViewOverlay.layer];
    [captureVideoPreviewLayer release];

    success = YES;
    return success;
}
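For reference, once -setupSession succeeds I simply start the session with startRunning (shown as a sketch for completeness; the exact call site is not important here):

// Sketch of how the session gets started after setup.
if ([self setupSession]) {
    [[self captureSession] startRunning];
}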
*Important note: the camera freezes immediately after the output's delegate is called.
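For reference, the delegate method itself does very little. Roughly, it looks like this (a simplified sketch; the lastCaptureTime property and the processImageBuffer: method are illustrative, not my exact code):

// Simplified sketch of the video data output's sample buffer delegate.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Throttle to one frame every 5 seconds (lastCaptureTime is illustrative).
    NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
    if (now - self.lastCaptureTime < 5.0) {
        return;
    }
    self.lastCaptureTime = now;

    // Hand the pixel buffer off for processing (processImageBuffer: is illustrative).
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self processImageBuffer:imageBuffer];
}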
Can anyone please help me solve this? It feels like I've tried almost everything!
Update #1
As requested, self refers to a view controller that presents the session's preview layer and manages everything related to the camera. In addition, I'm posting the code that handles the frontFacingCamera part, which (like all the code I've posted so far) is taken from Apple's AVCam sample code:
// Find a camera with the specified AVCaptureDevicePosition, returning nil if one is not found
- (AVCaptureDevice *) cameraWithPosition:(AVCaptureDevicePosition) position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if ([device position] == position) {
            return device;
        }
    }
    return nil;
}

// Find a front facing camera, returning nil if one is not found
- (AVCaptureDevice *) frontFacingCamera
{
    return [self cameraWithPosition:AVCaptureDevicePositionFront];
}
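The matching helpers that -toggleCamera relies on are essentially the same AVCam helpers, included here for completeness:

// Find a back facing camera, returning nil if one is not found
- (AVCaptureDevice *) backFacingCamera
{
    return [self cameraWithPosition:AVCaptureDevicePositionBack];
}

// Number of available video capture devices
- (NSUInteger) cameraCount
{
    return [[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count];
}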