
I want to be able to track the user's face from the camera feed. I have seen this SO post. I used the code given in the answer, but it doesn't seem to do anything. I have heard that

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)

has been changed to something else in Swift 4. Could that be the problem with the code?

While tracking the face, I also want to monitor facial landmarks with CIFaceFeature. How can I do that?


1 Answer


I found a starting point here: https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision.

Basically, you start a video capture session by declaring a lazy variable like this:

private lazy var captureSession: AVCaptureSession = {
    let session = AVCaptureSession()
    session.sessionPreset = AVCaptureSession.Preset.photo
    // Use the front camera as the video input; if the device or its
    // input can't be created, just return the empty session.
    guard
        let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
        let input = try? AVCaptureDeviceInput(device: frontCamera)
        else { return session }
    session.addInput(input)
    return session
}()
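
One thing this snippet leaves out is the output side: captureOutput(_:didOutput:from:) is only called if the session has an AVCaptureVideoDataOutput whose sample buffer delegate points back at your class. A minimal sketch of that wiring, assuming the view controller itself conforms to AVCaptureVideoDataOutputSampleBufferDelegate (none of these names come from the original answer), could look like this:

// Assumed setup, e.g. in viewDidLoad, before the session starts running.
// The view controller is assumed to adopt AVCaptureVideoDataOutputSampleBufferDelegate.
let videoOutput = AVCaptureVideoDataOutput()
// Deliver frames to captureOutput(_:didOutput:from:) on a serial background queue.
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video-frames"))
if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}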

Then, inside viewDidLoad, you start the session:

self.captureSession.startRunning()
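
Note that startRunning() is a blocking call, so Apple recommends starting the session on a background queue rather than on the main thread; a small sketch:

override func viewDidLoad() {
    super.viewDidLoad()
    // startRunning() blocks until the session is up, so don't stall the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        self.captureSession.startRunning()
    }
}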

Finally, you can perform your request inside

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
}

For example:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard
        // make sure the sample buffer can be converted to a pixel buffer
        let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        else { return }

    let faceRequest = VNDetectFaceRectanglesRequest(completionHandler: self.faceDetectedRequestUpdate)

    // perform the request
    do {
        try self.visionSequenceHandler.perform([faceRequest], on: pixelBuffer)
    } catch {
        print("Throws: \(error)")
    }
}
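
The visionSequenceHandler used above isn't shown in the snippet; presumably it is a VNSequenceRequestHandler stored as a property on the same class, something like:

// Assumed property; VNSequenceRequestHandler keeps state across frames,
// which is what makes per-frame requests behave like tracking.
private let visionSequenceHandler = VNSequenceRequestHandler()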

Then you define your faceDetectedRequestUpdate function.
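
The answer never shows that completion handler. A minimal sketch, assuming it just reads the VNFaceObservation results out of the request (the body below is not from the original answer), might be:

// Hypothetical handler matching the name passed to VNDetectFaceRectanglesRequest above.
// Vision calls it on the queue that performed the request, so hop to main before touching UI.
private func faceDetectedRequestUpdate(request: VNRequest, error: Error?) {
    guard error == nil,
        let observations = request.results as? [VNFaceObservation]
        else { return }
    DispatchQueue.main.async {
        for face in observations {
            // boundingBox is normalized to [0, 1] with the origin at the bottom left.
            print("Face at \(face.boundingBox)")
        }
    }
}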

In any case, I have to say I couldn't figure out how to build a working example from here. The best working example I found is in Apple's documentation: https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time

answered 2018-08-14T15:10:53.853