
I am trying to detect a person and a ball from a video input. I can recognize both objects and draw a box around each of them, but how do I draw a continuous line tracing the path they move along? I have downloaded the detect.py file from the YOLOv5 GitHub repo and customized which objects to detect.

I want to draw a continuous line connecting the previous point to the current point, until the object leaves the frame of the video.

I need to draw a line along the ball's trajectory, like in this image:

(image: a ball's flight path traced as a continuous line across the frame)

# Apply Classifier
if classify:
    pred = apply_classifier(pred, modelc, img, im0s)

# Process detections
for i, det in enumerate(pred):  # detections per image
    if webcam:  # batch_size >= 1
        p, s, im0, frame = path[i], f'{i}: ', im0s[i].copy(), dataset.count
    else:
        p, s, im0, frame = path, '', im0s.copy(), getattr(dataset, 'frame', 0)

    p = Path(p)  # to Path
    save_path = str(save_dir / p.name)  # img.jpg
    txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # img.txt
    s += '%gx%g ' % img.shape[2:]  # print string
    gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
    imc = im0.copy() if opt.save_crop else im0  # for opt.save_crop
    if len(det):
        # Rescale boxes from img_size to im0 size
        det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

        # Print results
        for c in det[:, -1].unique():
            n = (det[:, -1] == c).sum()  # detections per class
            s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

        # Write results
        for *xyxy, conf, cls in reversed(det):
            if save_txt:  # Write to file
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh)  # label format
                with open(txt_path + '.txt', 'a') as f:
                    f.write(('%g ' * len(line)).rstrip() % line + '\n')

            if save_img or opt.save_crop or view_img:  # Add bbox to image
                c = int(cls)  # integer class
                label = None if opt.hide_labels else (names[c] if opt.hide_conf else f'{names[c]} {conf:.2f}')
                plot_one_box(xyxy, im0, label=label, color=colors(c, True), line_thickness=opt.line_thickness)
                if opt.save_crop:
                    save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

    # Print time (inference + NMS)
    print(f'{s}Done. ({t2 - t1:.3f}s)')

    view_img=True
    # Stream results
    if view_img:
        cv2.imshow(str(p), im0)
        cv2.waitKey(1)  # 1 millisecond

2 Answers


Assuming you only need to track one ball: once the ball has been detected in each frame, all you need to do is draw a translucent yellow line from the ball's center in the first frame it was detected, to its center in the next frame, and so on. Make the line's width, say, 30% of the ball's width in that frame. Just keep a list of the object's centers and sizes.

Now, if you have several balls whose paths never intersect, all you need to do is match each ball to the detection that was closest to it over the previous few frames.

Finally, if two balls do intersect, find their motion vectors (run a regression over a few frames before and after the moment the two "ball" objects merge into one, or stop being recognized as balls and then split again), and assign the trajectories based on their historical positions.

If the trajectory line is too jittery, smooth the trajectory/width with a moving median.

Answered 2021-06-20T21:12:45.607

The following structure may help.

This is for the case of one detection per frame:


import cv2
import numpy as np

# A list to store centroids of the detected object
cent_hist = []

def draw_trajectory(frame: np.ndarray, cent_hist: list, trajectory_length: int = 50) -> np.ndarray:
    # Keep only the most recent trajectory_length centroids
    if len(cent_hist) > trajectory_length:
        del cent_hist[:len(cent_hist) - trajectory_length]
    # Connect consecutive centroids with line segments
    for i in range(len(cent_hist) - 1):
        frame = cv2.line(frame, cent_hist[i], cent_hist[i + 1], (0, 0, 0))
    return frame

for i, det in enumerate(pred):  # detections per image
    if webcam:  # batch_size >= 1
        p, s, im0, frame = path[i], f'{i}: ', im0s[i].copy(), dataset.count
    else:
        p, s, im0, frame = path, '', im0s.copy(), getattr(dataset, 'frame', 0)

    p = Path(p)  # to Path
    save_path = str(save_dir / p.name)  # img.jpg
    txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # img.txt
    s += '%gx%g ' % img.shape[2:]  # print string
    gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
    imc = im0.copy() if opt.save_crop else im0  # for opt.save_crop
    if len(det):
        # Rescale boxes from img_size to im0 size
        det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

        # Print results
        for c in det[:, -1].unique():
            n = (det[:, -1] == c).sum()  # detections per class
            s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

        # Write results
        for *xyxy, conf, cls in reversed(det):
            if save_txt:  # Write to file
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh)  # label format
                with open(txt_path + '.txt', 'a') as f:
                    f.write(('%g ' * len(line)).rstrip() % line + '\n')

            if save_img or opt.save_crop or view_img:  # Add bbox to image
                c = int(cls)  # integer class
                label = None if opt.hide_labels else (names[c] if opt.hide_conf else f'{names[c]} {conf:.2f}')
                plot_one_box(xyxy, im0, label=label, color=colors(c, True), line_thickness=opt.line_thickness)
                if opt.save_crop:
                    save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

    # Print time (inference + NMS)
    print(f'{s}Done. ({t2 - t1:.3f}s)')
    
    ### Calculate centroid here
    centroid = (50,50) # Change this

    cent_hist.append(centroid)
    im0 = draw_trajectory(im0, cent_hist, 50)

    view_img=True
    # Stream results
    if view_img:
        cv2.imshow(str(p), im0)
        cv2.waitKey(1)  # 1 millisecond
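
The `centroid = (50,50)` placeholder above can be computed from the `xyxy` box of the detection you want to track; a small sketch (the helper name `box_centroid` is mine, not part of detect.py):

```python
def box_centroid(xyxy):
    """Center (x, y) of an [x1, y1, x2, y2] box, as integer pixels for cv2.line."""
    x1, y1, x2, y2 = [float(v) for v in xyxy]
    return (int((x1 + x2) / 2), int((y1 + y2) / 2))
```

Inside the `for *xyxy, conf, cls in reversed(det):` loop you would call `box_centroid(xyxy)` for the ball's class and append the result to `cent_hist`.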

If you want to use this for multiple detections, then I would suggest you use an object tracking algorithm, for example: link. That will help you solve the assignment problem better (when you have multiple points per frame).
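
Before reaching for a full tracker, a simple baseline for the assignment problem is greedy nearest-neighbor matching of new centroids to existing tracks; a sketch (the function name and the `max_dist` gating threshold in pixels are assumptions of mine, not from any library):

```python
import math

def assign_to_tracks(tracks, detections, max_dist=75):
    """Greedily match each detected centroid to the nearest track tail.

    tracks: list of lists of (x, y) points; detections: list of (x, y) points.
    Detections farther than max_dist from every free track start a new track.
    """
    used = set()
    for d in detections:
        best, best_dist = None, max_dist
        for i, tr in enumerate(tracks):
            if i in used:
                continue
            dist = math.dist(tr[-1], d)
            if dist < best_dist:
                best, best_dist = i, dist
        if best is None:
            tracks.append([d])       # start a new track
        else:
            tracks[best].append(d)   # extend the matched track
            used.add(best)
    return tracks
```

This breaks down when paths cross, which is exactly where a proper tracker (e.g. one that models motion, as the first answer suggests with motion-vector regression) earns its keep.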

Answered 2021-06-23T17:40:53.543