
I need it to detect the eyes (whether open or closed), crop them, and save them as images. It works, but not on every photo.

I've tried everything I can think of. I tried different values for scaleFactor and minNeighbors, and I also tried adding minimum and maximum sizes for the detected eyes (it didn't make much difference).
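
For reference, the kind of call I was tuning looked like this (the concrete numbers here are placeholders, not the exact values I used; the variable names match the code below):

eyes = eyes_cascade.detectMultiScale(
    img[y:y + h, x:x + w],     # search only inside the detected face region
    scaleFactor=1.05,          # how much the image is shrunk at each scale step
    minNeighbors=8,            # how many neighboring detections a candidate needs
    minSize=(20, 20),          # smallest eye size to accept
    maxSize=(80, 80))          # largest eye size to accept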

I'm still having problems. Sometimes it detects more than 2 eyes, sometimes only 1. Sometimes it even mistakes a nostril for an eye :D. The errors are especially frequent when the eyes are closed.

What can I do to improve the accuracy? It matters a lot for the rest of my program.

import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eyes_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')

# Load the input photo (the path is just a placeholder)
img = cv2.imread('photo.jpg')

# Detect the face first, then search for eyes only inside the face region
faces_detected = face_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)

(x, y, w, h) = faces_detected[0]
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)

eyes = eyes_cascade.detectMultiScale(img[y:y + h, x:x + w], scaleFactor=1.1, minNeighbors=5)
count = 1
for (ex, ey, ew, eh) in eyes:
    # Eye coordinates are relative to the face ROI, so offset them by (x, y)
    cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (255, 255, 255), 1)
    crop_img = img[y + ey:y + ey + eh, x + ex:x + ex + ew]
    s1 = 'Images/{}.jpg'.format(count)
    count = count + 1
    cv2.imwrite(s1, crop_img)

1 Answer


For face detection, my go-to is dlib (Python API). It is more involved and slower, but it yields much higher quality results.

Step 1 is to convert from OpenCV's BGR order to the RGB order that dlib expects:

img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

Next, you can use dlib's face detector to detect the faces (the second argument means upsample the image 1 time):

detector = dlib.get_frontal_face_detector()
detections = detector(img, 1)

Then find the facial landmarks with the pre-trained 68-point predictor:

sp = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
faces = dlib.full_object_detections()
for det in detections:
    faces.append(sp(img, det))

Note: from here you can get the face chips with dlib.get_face_chip(img, faces[0]).
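
For example, a minimal sketch (reusing img and faces from above; the output filename is just a placeholder):

chip = dlib.get_face_chip(img, faces[0])                              # aligned face crop, same RGB order as img
cv2.imwrite("face_chip.jpg", cv2.cvtColor(chip, cv2.COLOR_RGB2BGR))   # convert back to BGR before saving with OpenCV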

Now you can get the bounding box and the locations of the eyes:

bb = faces[0].rect

right_eye = [faces[0].part(i) for i in range(36, 42)]
left_eye = [faces[0].part(i) for i in range(42, 48)]

Here are all the mappings according to pyimagesearch (end index exclusive, matching the range() calls above), with a small helper sketched after the list:

mouth: 48 - 68
right_eyebrow: 17 - 22
left_eyebrow: 22 - 27
right_eye: 36 - 42
left_eye: 42 - 48
nose: 27 - 36
jaw: 0 - 17
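
If it helps, those ranges can be turned into a small lookup (the dict and helper below are my own naming, not part of dlib):

LANDMARK_RANGES = {
    "mouth": (48, 68),
    "right_eyebrow": (17, 22),
    "left_eyebrow": (22, 27),
    "right_eye": (36, 42),
    "left_eye": (42, 48),
    "nose": (27, 36),
    "jaw": (0, 17),
}

def landmark_points(face, name):
    # Return the (x, y) landmark coordinates for the named region
    start, end = LANDMARK_RANGES[name]
    return [(face.part(i).x, face.part(i).y) for i in range(start, end)]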

Here are the results and the code I put together: Example 1, Example 2

import dlib
import cv2

# Load image
img = cv2.imread("monalisa.jpg")

# Convert to dlib
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# dlib face detection
detector = dlib.get_frontal_face_detector()
detections = detector(img, 1)

# Find landmarks
sp = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
faces = dlib.full_object_detections()
for det in detections:
    faces.append(sp(img, det))

# Bounding box and eyes
bb = [i.rect for i in faces]
bb = [((i.left(), i.top()),
       (i.right(), i.bottom())) for i in bb]                            # Convert out of dlib format

right_eyes = [[face.part(i) for i in range(36, 42)] for face in faces]
right_eyes = [[(i.x, i.y) for i in eye] for eye in right_eyes]          # Convert out of dlib format

left_eyes = [[face.part(i) for i in range(42, 48)] for face in faces]
left_eyes = [[(i.x, i.y) for i in eye] for eye in left_eyes]            # Convert out of dlib format

# Display
imgd = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)             # Convert back to OpenCV
for i in bb:
    cv2.rectangle(imgd, i[0], i[1], (255, 0, 0), 5)     # Bounding box

for eye in right_eyes:
    cv2.rectangle(imgd, (max(eye, key=lambda x: x[0])[0], max(eye, key=lambda x: x[1])[1]),
                        (min(eye, key=lambda x: x[0])[0], min(eye, key=lambda x: x[1])[1]),
                        (0, 0, 255), 5)
    for point in eye:
        cv2.circle(imgd, (point[0], point[1]), 2, (0, 255, 0), -1)

for eye in left_eyes:
    cv2.rectangle(imgd, (max(eye, key=lambda x: x[0])[0], max(eye, key=lambda x: x[1])[1]),
                        (min(eye, key=lambda x: x[0])[0], min(eye, key=lambda x: x[1])[1]),
                        (0, 255, 0), 5)
    for point in eye:
        cv2.circle(imgd, (point[0], point[1]), 2, (0, 0, 255), -1)

cv2.imwrite("output.jpg", imgd)

cv2.imshow("output", imgd)
cv2.waitKey(0)
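
Since your original goal was to crop the eyes and save them, here is a minimal sketch of how that could be done from the landmark points above (the padding and output filenames are arbitrary choices of mine):

clean = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)      # un-annotated copy in BGR for saving
pad = 5                                           # arbitrary margin (pixels) around each eye
for n, eye in enumerate(right_eyes + left_eyes, start=1):
    xs = [p[0] for p in eye]
    ys = [p[1] for p in eye]
    crop = clean[max(min(ys) - pad, 0):max(ys) + pad,
                 max(min(xs) - pad, 0):max(xs) + pad]
    cv2.imwrite("eye_{}.jpg".format(n), crop)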