
I want to combine two features using online boosting.

I have read several papers that explain online boosting and combining features with boosting; the papers are:

Identification of a Specific Person Using Color, Height and Gait Features for a Person Following Robot

People Tracking in RGB-D Data With On-line Boosted Target Models

On-line Boosting and Vision

Fast RGB-D People Tracking for Service Robots

Boosting with a Joint Feature Pool from different Sensors, and Convolutional Channel Features-Based Person Identification for Person following Robots (the code is here).

AdaBoostClassifier is explained here.
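
From the scikit-learn documentation, the plain (offline) usage looks roughly like this; the feature values and labels below are made-up toy numbers, just to show the fit/predict calls:

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Each row is one feature vector, each label marks the target person (1) or someone else (0).
X = np.array([[0.9, 0.8], [0.8, 0.7], [0.2, 0.1], [0.1, 0.3]])
y = np.array([1, 1, 0, 0])

clf = AdaBoostClassifier(n_estimators=50)  # the default weak learner is a decision stump
clf.fit(X, y)

print(clf.predict([[0.85, 0.75]]))        # class label for a new feature vector
print(clf.predict_proba([[0.85, 0.75]]))  # per-class confidence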

I understand it in theory, but I cannot implement it in Python, and I have zero experience with C++. Could anyone help me? Here is my simple code:

import cv2
import time
import numpy as np

person_cascade = cv2.CascadeClassifier('haarcascade_upperbody.xml')  # upper-body Haar cascade
cap = cv2.VideoCapture(0)
while True:
    r, frame = cap.read()
    if not r:  # stop when no frame can be read, otherwise cvtColor() below crashes on None
        break
    #================================color feature===================
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    green_lower = np.array([0, 0, 0], np.uint8)
    green_upper = np.array([180, 255, 30], np.uint8)  # note: this H/S/V range keeps dark (low-V) pixels
    green = cv2.inRange(hsv, green_lower, green_upper)
    # findContours() returns 3 values in OpenCV 3.x and 2 in 4.x; [-2] picks the contours in both
    contours = cv2.findContours(green, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]
    for pic, contour in enumerate(contours):
        area1 = cv2.contourArea(contour)
        if (area1 > 300):
            # print area1

            x1, y1, w1, h1 = cv2.boundingRect(contour)
            img = cv2.rectangle(frame, (x1, y1), (x1 + w1, y1 + h1), (255, 0, 0), 2)
            cv2.putText(frame, "green Colour", (x1, y1), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0))
    #====================================upperbody feature=============       
    start_time = time.time()
    frame = cv2.resize(frame, (640, 360))  # downscale to improve frame rate
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the Haar cascade needs a grayscale image
    rects = person_cascade.detectMultiScale(gray_frame)

    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("preview", frame)
    k = cv2.waitKey(1)
    if k & 0xFF == ord("q"):  # exit condition
        break

cap.release()
cv2.destroyAllWindows()
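
What I am trying to get to is something like the sketch below: build one joint feature vector per detection from the colour cue inside the detected upper-body box, collect labelled vectors, and train AdaBoostClassifier on them. This is only a rough illustration of the idea, not working tracking code: joint_features(), the synthetic frames and the boxes are made up by me, and scikit-learn's AdaBoostClassifier is plain offline AdaBoost, not the online boosting from the papers.

import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def joint_features(frame, box):
    """One joint feature vector for a detection box: colour-mask ratio, mean HSV, box area."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 0, 0], np.uint8), np.array([180, 255, 30], np.uint8))
    mask_ratio = cv2.countNonZero(mask) / float(mask.size)  # fraction of the box matching the colour
    mean_h, mean_s, mean_v, _ = cv2.mean(hsv)                # average colour inside the box
    return [mask_ratio, mean_h, mean_s, mean_v, w * h]

# Labelled training examples: 1 = the person to follow, 0 = anyone else.
# In the real loop the boxes would come from detectMultiScale(); two synthetic frames stand in here.
frame_target = np.full((360, 640, 3), 20, np.uint8)  # dark clothing -> high mask ratio
frame_other = np.zeros((360, 640, 3), np.uint8)
frame_other[:] = (60, 180, 240)                       # bright clothing -> low mask ratio
X = [joint_features(frame_target, (100, 50, 80, 120)),
     joint_features(frame_other, (300, 60, 80, 120))]
y = [1, 0]

clf = AdaBoostClassifier(n_estimators=50)
clf.fit(X, y)

# At run time, score every new detection and keep the most confident one as the target.
score = clf.predict_proba([joint_features(frame_target, (110, 55, 80, 120))])[0][1]
print("target confidence:", score)

The part I am still missing compared with the papers is the online update of the weak classifiers; AdaBoostClassifier has no partial_fit, so it can only be retrained offline.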
