
I am currently working on a project that involves blink detection using AVCaptureVideoDataOutputSampleBufferDelegate.

I have the following dispatch_async block in the delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    //Initialisation of buffer and UIImage and CIDetector, etc.

    dispatch_async(dispatch_get_main_queue(), ^(void) {
        if (features.count > 0) {
            CIFaceFeature *feature = [features objectAtIndex:0];
            if ([feature leftEyeClosed] && [feature rightEyeClosed]) {
                flag = TRUE;
            } else {
                if (flag) {
                    blinkcount++;
                    //Update UILabel containing blink count. The count variable is incremented from here.
                }
                flag = FALSE;
            }
        }
    });
}

The method shown above is called continuously and processes the video input from the camera. The flag boolean tracks whether the eyes were closed or open in the previous frame, so that a blink can be detected. A large number of frames get dropped, but blinks are still detected correctly, so I assume the processing frame rate is sufficient.

My problem is that the UILabel is updated only after a considerable delay (~1 second) following a blink. This makes the app feel laggy and unintuitive. I tried writing the UI-update code without the dispatch, but that doesn't work. Is there anything I can do so that the UILabel updates immediately after a blink?


1 Answer


It's hard to know exactly what's going on here without more code, but above the dispatch code you say:

//Initialisation of buffer and UIImage and CIDetector, etc.

If you're really initializing the detector every time, that's probably suboptimal; make it long-lived. I'm not sure whether initializing a CIDetector is expensive, but it's a place to start. Also, if you're really going through a UIImage here, that's also suboptimal. Don't go through UIImage; take the more direct route:

CVImageBufferRef ib = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage* ciImage = [CIImage imageWithCVPixelBuffer: ib];
NSArray* features = [longLivedDetector featuresInImage: ciImage];
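As a reference point, here is one way the long-lived detector might be set up (a sketch; the accuracy option is an assumption about your needs, and _longLivedDetector is a hypothetical ivar). Note that, per the CIDetector documentation, leftEyeClosed/rightEyeClosed are only computed if you request eye-blink tracking via the CIDetectorEyeBlink option when asking for features:

    // Created once (e.g. lazily or in viewDidLoad), not once per frame.
    _longLivedDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                            context:nil
                                            options:@{CIDetectorAccuracy: CIDetectorAccuracyLow}];

    // Per frame: request eye-blink tracking explicitly, otherwise
    // leftEyeClosed/rightEyeClosed will not be populated.
    NSArray *features = [_longLivedDetector featuresInImage:ciImage
                                                    options:@{CIDetectorEyeBlink: @YES}];

CIDetectorAccuracyLow trades some precision for speed, which is usually the right call for live video.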

Finally, do the feature detection on a background thread, and only marshal the UILabel update back to the main thread. Something like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (!_longLivedDetector) {
        _longLivedDetector = [CIDetector detectorOfType:CIDetectorTypeFace context: ciContext options: whatever];
    }

    CVImageBufferRef ib = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage* ciImage = [CIImage imageWithCVPixelBuffer: ib];
    NSArray* features = [_longLivedDetector featuresInImage: ciImage];
    if (!features.count)
        return;

    CIFaceFeature *feature = [features objectAtIndex:0];
    const BOOL leftAndRightClosed = [feature leftEyeClosed] && [feature rightEyeClosed];

    // Only trivial work is left to do on the main thread.
    dispatch_async(dispatch_get_main_queue(), ^(void){
        if (leftAndRightClosed) {
            flag = TRUE;
        } else {
            if (flag) {
                blinkcount++;
                //Update UILabel containing blink count. The count variable is incremented from here.
            }
            flag = FALSE;
        }
    });
}
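For that to work, the delegate callback itself has to arrive on a background queue, which is configured when you attach the delegate to the output (a sketch; videoDataOutput and the queue label are illustrative names for your AVCaptureVideoDataOutput setup):

    // Deliver sample buffers on a private serial queue so the heavy
    // CIDetector work in the delegate runs off the main thread.
    dispatch_queue_t videoQueue = dispatch_queue_create("com.example.videoQueue", DISPATCH_QUEUE_SERIAL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoQueue];

    // Drop late frames rather than queueing them up; the question notes
    // that dropped frames don't hurt blink detection here.
    videoDataOutput.alwaysDiscardsLateVideoFrames = YES;

If the delegate queue were the main queue, the detection work would block the UI no matter how the dispatch inside the callback is arranged.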

Finally, you should also keep in mind that face feature detection is a non-trivial signal-processing task that takes real computation (i.e. time) to complete. I would expect there's a point past which there's no way to make it faster without faster hardware.

Answered 2013-10-14T11:51:03.483