I set up an AVCaptureSession with the photo preset:
self.session.sessionPreset = AVCaptureSessionPresetPhoto;
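The rest of the session setup (the camera input and self.stillImageOutput) is not shown above; it is assumed to look roughly like this sketch (device discovery and error handling simplified):

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input && [self.session canAddInput:input]) {
    [self.session addInput:input];
}

// Assumed still-image output configuration; the question only references self.stillImageOutput.
self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
self.stillImageOutput.outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG};
if ([self.session canAddOutput:self.stillImageOutput]) {
    [self.session addOutput:self.stillImageOutput];
}

[self.session startRunning];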
Then I add a preview layer to my view:
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [self.view layer];
[rootLayer setMasksToBounds:YES];
[previewLayer setFrame:[rootLayer bounds]];
[rootLayer addSublayer:previewLayer];
So far so good, but when I want to capture an image I use the code below:
AVCaptureConnection *videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    [self.session stopRunning];
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    UIImage *image = [[UIImage alloc] initWithData:imageData];
    self.imageView.image = image; // imageView has the same bounds as self.view
    image = nil;
}];
Capturing the image works fine, but the captured image differs from what the AVCaptureVideoPreviewLayer shows on screen. What I really want is to display the captured content exactly as it appeared in the AVCaptureVideoPreviewLayer. How can I achieve this? How should I resize and crop the captured image to the bounds of self.view?
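My guess is that the answer involves mapping the preview layer's visible region back onto the captured image and cropping, roughly like the sketch below. This is only an assumption on my part: it uses metadataOutputRectOfInterestForRect: and presumes the preview layer is kept around (e.g. in a property), and the orientation handling may well be incomplete.

// Inside the completion handler, after creating `image`.
// Assumes the preview layer is accessible here (e.g. stored in a property).
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];

CGImageRef cgImage = image.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);

// outputRect is normalized (0..1) relative to the unrotated captured picture,
// so scale it up to pixel coordinates of the CGImage.
CGRect cropRect = CGRectMake(outputRect.origin.x * width,
                             outputRect.origin.y * height,
                             outputRect.size.width * width,
                             outputRect.size.height * height);

CGImageRef croppedCGImage = CGImageCreateWithImageInRect(cgImage, cropRect);
// Keep the original scale and orientation so UIKit displays it upright.
UIImage *croppedImage = [UIImage imageWithCGImage:croppedCGImage
                                            scale:image.scale
                                      orientation:image.imageOrientation];
CGImageRelease(croppedCGImage);

self.imageView.image = croppedImage;

Is this the right approach, or is there a better way to make the captured image match the preview?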