
I am using the face-api.js library: https://github.com/justadudewhohacks/face-api.js

I am trying to get the position of a face in a video.

I want to build an application that records an initial position for my face and then tells me how much my face has moved from it.

For example, say my video is 600 pixels wide and 400 pixels high. I want to get my eye positions, e.g. my left eye is 200 pixels from the right edge and 300 pixels from the bottom edge. That is the initial position of my left eye. Once this initial position is set, the application should show an alert or popup if I move.


1 Answer


First, create the video element, start the stream, and load all the models. Make sure all models are loaded inside a Promise.all() call.
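A minimal sketch of that loading step, assuming the weight files are served from a '/models' folder and that startVideo() is a hypothetical helper (not shown here) that attaches the webcam stream to the video element:

// Load every model before starting the video stream
Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri('/models'),   // face detection
    faceapi.nets.faceLandmark68Net.loadFromUri('/models'),  // 68 point landmarks
    faceapi.nets.faceExpressionNet.loadFromUri('/models')   // expressions (used below)
]).then(startVideo); // startVideo() is an assumed helper that starts the webcam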

You can set up Face Detection and Face Landmarks like this:

video.addEventListener('play', () => {
    // Create a canvas from our video element
    const canvas = faceapi.createCanvasFromMedia(video);
    document.body.append(canvas);
    // Current display size of the video
    const displaySize = { width: video.width, height: video.height };
    faceapi.matchDimensions(canvas, displaySize);
    // Run the detection repeatedly --> setInterval
    // async callback because the detection calls return promises
    setInterval(async () => {
        // Every 100 ms, detect all faces in the current video frame
        const detections = await faceapi.detectAllFaces(video,
            new faceapi.TinyFaceDetectorOptions())
            .withFaceLandmarks().withFaceExpressions();
        // Resize the results so the boxes match the video element's display size
        const resizedDetections = faceapi.resizeResults(detections, displaySize);
        // Get the 2D context and clear the previous drawings
        canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
        faceapi.draw.drawDetections(canvas, resizedDetections);
        faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
        faceapi.draw.drawFaceExpressions(canvas, resizedDetections);
    }, 100);
});

Then you can retrieve the Face Landmark points and contours.

This gives you all Face Landmark positions:

const landmarkPositions = landmarks.positions

And this gives you the positions of individual contours:

// only available for 68 point face landmarks (FaceLandmarks68)
const jawOutline = landmarks.getJawOutline();
const nose = landmarks.getNose();
const mouth = landmarks.getMouth();
const leftEye = landmarks.getLeftEye();
const rightEye = landmarks.getRightEye();
const leftEyeBrow = landmarks.getLeftEyeBrow();
const rightEyeBrow = landmarks.getRightEyeBrow();
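Each of these getters returns an array of points with x and y pixel coordinates, so you can, for example, average the left-eye points into a single position. A small sketch; averageLeftEye is just an illustrative helper, not part of face-api.js:

// Average the left-eye landmark points into one (x, y) position
function averageLeftEye(landmarks) {
    const leftEye = landmarks.getLeftEye(); // array of points with .x and .y
    const x = leftEye.reduce((sum, p) => sum + p.x, 0) / leftEye.length;
    const y = leftEye.reduce((sum, p) => sum + p.y, 0) / leftEye.length;
    return { x, y };
}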

For the position of the left eye, you can create an async function inside the video.addEventListener('play', ...) handler and get the initial position of the left eye:

video.addEventListener('play', () => {
    ...
    async function leftEyePosition() {
         const landmarks = await faceapi.detectFaceLandmarks(video);
         const leftEye = landmarks.getLeftEye();
         console.log("Left eye position ===> " + JSON.stringify(leftEye));
    }
    // Call it once the video starts playing to capture the initial position
    leftEyePosition();
});
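To get the alert described in the question, one possible sketch (not from the original answer) is to store the first averaged eye position and compare every later detection against it, using an arbitrarily chosen threshold. The names below are illustrative; averageLeftEye is the helper sketched above:

// Sketch: alert when the left eye has moved more than `threshold` pixels
// away from its first recorded position
let initialEyePosition = null;
const threshold = 30; // pixels, chosen arbitrarily

setInterval(async () => {
    const landmarks = await faceapi.detectFaceLandmarks(video);
    if (!landmarks) return;
    const current = averageLeftEye(landmarks);
    if (!initialEyePosition) {
        initialEyePosition = current; // the first detection becomes the reference
        return;
    }
    const dx = current.x - initialEyePosition.x;
    const dy = current.y - initialEyePosition.y;
    if (Math.sqrt(dx * dx + dy * dy) > threshold) {
        alert('Your face has moved from its initial position!');
    }
}, 500);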
Answered 2019-11-05T22:43:13.497