I found a starting point here: https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision.
Basically, you can start a video capture session by declaring a lazy variable like this:
// requires: import AVFoundation (and import Vision for the requests below)
private lazy var captureSession: AVCaptureSession = {
    let session = AVCaptureSession()
    session.sessionPreset = AVCaptureSession.Preset.photo
    guard
        let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
        let input = try? AVCaptureDeviceInput(device: frontCamera)
    else { return session }
    session.addInput(input)
    return session
}()
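For the delegate callback further down to ever fire, the session also needs a video data output whose sample-buffer delegate is your view controller. That wiring isn't in the snippet above; here is a minimal sketch (the videoOutput name, the queue label, and the delegate conformance on the view controller are my assumptions):

// Assumed: the view controller conforms to AVCaptureVideoDataOutputSampleBufferDelegate
private lazy var videoOutput: AVCaptureVideoDataOutput = {
    let output = AVCaptureVideoDataOutput()
    // deliver frames on a background serial queue (the label is arbitrary)
    output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.frames"))
    return output
}()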
Then, inside viewDidLoad, you start the session:
self.captureSession.startRunning()
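Putting it together, viewDidLoad might look like this (again just a sketch; it assumes the videoOutput property from above):

override func viewDidLoad() {
    super.viewDidLoad()
    // attach the output before starting, otherwise no frames are delivered
    if self.captureSession.canAddOutput(self.videoOutput) {
        self.captureSession.addOutput(self.videoOutput)
    }
    self.captureSession.startRunning()
}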
Finally, you can perform your requests inside the delegate callback:
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
}
For example:
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard
        // make sure the pixel buffer can be converted
        let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    else { return }
    let faceRequest = VNDetectFaceRectanglesRequest(completionHandler: self.faceDetectedRequestUpdate)
    // perform the request; visionSequenceHandler is assumed to be a
    // VNSequenceRequestHandler property on this class
    do {
        try self.visionSequenceHandler.perform([faceRequest], on: pixelBuffer)
    } catch {
        print("Throws: \(error)")
    }
}
Then you define your faceDetectedRequestUpdate function.
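Its signature has to match VNRequestCompletionHandler, i.e. (VNRequest, Error?) -> Void, since it's passed as the completion handler above. A minimal sketch of what it could look like (the body is my assumption; do whatever you need with the observations):

private func faceDetectedRequestUpdate(request: VNRequest, error: Error?) {
    // grab the face observations, if any; boundingBox is in normalized
    // coordinates with the origin at the bottom-left of the image
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        print("Face at \(face.boundingBox)")
    }
}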
Anyway, I have to say I couldn't figure out how to build a working example from here. The best working example I found is in Apple's documentation: https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time