My current setup is as follows (based on Brad Larson's ColorTrackingCamera project):
I'm using an AVCaptureSession set to AVCaptureSessionPreset640x480, and I run the output as a texture through an OpenGL scene. This texture is then manipulated by a fragment shader.
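For context, the frames reach the OpenGL side through the video data output's sample buffer delegate, roughly like the sketch below (the texture handle videoTexture and the drawFrame call are placeholders of mine; the BGRA glTexImage2D upload follows the approach used in ColorTrackingCamera):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);

    // Upload the BGRA camera frame into the texture that the fragment shader samples
    glBindTexture(GL_TEXTURE_2D, videoTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

    [self drawFrame]; // render the OpenGL scene with the updated texture

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}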
I need this "low quality" preset because I want to keep the frame rate high while the user is previewing. Then, when the user captures a still photo, I want to switch to a higher-quality output.
My first thought was to simply change the sessionPreset of the AVCaptureSession, but that forces the camera to refocus, which hurts usability.
[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
[captureSession commitConfiguration];
At the moment I'm trying to add a second output, an AVCaptureStillImageOutput, to the AVCaptureSession, but I get an empty pixel buffer, so I'm a bit stuck.
Here's my session setup code:
...
// Add the video frame output
[captureSession beginConfiguration];

videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}
else
{
    NSLog(@"Couldn't add video output");
}

[captureSession commitConfiguration];

// Add still output
[captureSession beginConfiguration];
stillOutput = [[AVCaptureStillImageOutput alloc] init];

if ([captureSession canAddOutput:stillOutput])
{
    [captureSession addOutput:stillOutput];
}
else
{
    NSLog(@"Couldn't add still output");
}

[captureSession commitConfiguration];

// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if (![captureSession isRunning])
{
    [captureSession startRunning];
}
...
And here's my capture method:
- (void)prepareForHighResolutionOutput
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
        ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            int width = CVPixelBufferGetWidth(pixelBuffer);
            int height = CVPixelBufferGetHeight(pixelBuffer);
            NSLog(@"%i x %i", width, height);
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        }];
}
(width and height both come out as 0)
I've read through the AVFoundation documentation, but it seems I'm missing something essential.