
I am currently obtaining the pose of an apriltag object via solvePnP() and projecting points with projectPoints().

This is called on a video stream, so to try to optimize solvePnP(), I am attempting to take the previous pose (the pose of the object in the previous frame) and pass it to solvePnP() for the current frame.

Here is the code:

# This function takes the image frame and the previous rotation and
# translation vectors as params: img, prvecs, ptvecs

# If 1 or more apriltags are detected
if num_detections > 0:
    # If the camera was calibrated and the matrix is supplied
    if mtx is not None:
        # Image points are the corners of the apriltag
        imagePoints = detection_results[0].corners.reshape(1, 4, 2)

        # objectPoints are obtained within another function

        # If either pose vector is None, call solvePnP() without an extrinsic guess
        if prvecs is None or ptvecs is None:
            success, prvecs, ptvecs = cv2.solvePnP(objectPoints, imagePoints, mtx, dist, flags=cv2.SOLVEPNP_ITERATIVE)
        else:
            # Otherwise call solvePnP() with the previous rvecs and tvecs as the initial guess
            print("Got prvecs and ptvecs")
            success, prvecs, ptvecs = cv2.solvePnP(objectPoints, imagePoints, mtx, dist, prvecs, ptvecs, useExtrinsicGuess=True, flags=cv2.SOLVEPNP_ITERATIVE)

        # If the pose is obtained successfully, project the 3D points
        if success:
            imgpts, jac = cv2.projectPoints(opointsArr, prvecs, ptvecs, mtx, dist)

            # Draw the 3D points onto the image plane
            draw_contours(img, dimg, imgpts)

In the video stream function:

# Create a cv2 window to show images
window = 'Camera'
cv2.namedWindow(window)

# Open the first camera to get the video stream and the first frame
cap = cv2.VideoCapture(0)
success, frame = cap.read()

if dist is not None:
    frame = undistort_frame(frame)

prvecs = None
ptvecs = None
# Obtain previous translation and rotation vectors (pose)
img, dimg, prvecs, ptvecs = apriltag_real_time_pose_estimation(frame, prvecs, ptvecs)

while True:

    success, frame = cap.read()
    if not success:
        break

    if dist is not None:
        frame = undistort_frame(frame)
    
    # Keep on passing the pose obtained from the previous frame
    img, dimg, prvecs, ptvecs = apriltag_real_time_pose_estimation(frame, prvecs, ptvecs)

I would now like to obtain the velocity and acceleration of the pose and pass them to solvePnP() as well.

For the pose velocity, I know I can simply subtract the previous translation vector from the current one, but I am not sure how to handle the rotation matrix (obtained via Rodrigues()), nor how to obtain the acceleration. I am thinking that perhaps I have to get the angle between the two rotation matrices, and that difference would give the change and hence the angular velocity? Or is it as simple as taking the difference between the rotation vectors returned by solvePnP()?
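As a sketch of the rotation-matrix approach described above: the relative rotation between two frames is `R_rel = R_curr @ R_prev.T`, and its axis-angle (Rodrigues) vector divided by the frame interval gives an angular velocity vector. Below is a minimal pure-NumPy version (the `rodrigues`/`rotation_log` helpers stand in for `cv2.Rodrigues`, which converts in both directions; the function name `pose_velocity` and the `dt` parameter are my own assumptions, not from the original code):

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> rotation matrix (pure-NumPy stand-in for cv2.Rodrigues)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float).ravel() / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def rotation_log(R):
    """Rotation matrix -> axis-angle vector (inverse of rodrigues())."""
    cos_theta = np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return theta * axis

def pose_velocity(rvec_prev, tvec_prev, rvec_curr, tvec_curr, dt):
    # Translational velocity: plain finite difference of the translation vectors
    v = (np.asarray(tvec_curr, dtype=float) - np.asarray(tvec_prev, dtype=float)) / dt
    # Angular velocity: axis-angle of the relative rotation, divided by dt
    R_rel = rodrigues(rvec_curr) @ rodrigues(rvec_prev).T
    omega = rotation_log(R_rel) / dt
    return v, omega
```

Note that simply subtracting the two rotation vectors is only a good approximation when the rotations are small and about a similar axis; going through the relative rotation matrix is correct in general.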

My questions are:

  1. How do I obtain the velocity between poses in terms of the rotation matrices, and for translational velocity, is simply subtracting the previous translation vector from the current one correct?
  2. How do I obtain the acceleration for the translation vectors and rotation matrices?
  3. Is the method I am using to obtain the previous pose the best approach?
  4. For acceleration, since it is just the change in velocity, is it sensible to simply track the velocities obtained between frames and take their difference to get the acceleration?
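Regarding question 4, the finite-difference idea can be sketched directly: keep the velocity from the previous frame pair and difference it against the current one (the function name and arguments here are my own assumptions for illustration, not part of the original code):

```python
import numpy as np

def pose_acceleration(v_prev, omega_prev, v_curr, omega_curr, dt):
    """Acceleration as the finite difference of two successive velocities.

    v_*     : translational velocity vectors from consecutive frame pairs
    omega_* : angular velocity vectors (axis-angle rate) from the same pairs
    dt      : time between the two velocity estimates (e.g. the frame interval)
    """
    a = (np.asarray(v_curr) - np.asarray(v_prev)) / dt          # linear acceleration
    alpha = (np.asarray(omega_curr) - np.asarray(omega_prev)) / dt  # angular acceleration
    return a, alpha
```

Be aware that double-differencing per-frame solvePnP() output amplifies pose noise considerably, so some smoothing (e.g. averaging over a few frames or a Kalman filter) is usually needed before the acceleration estimate is useful.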

Any help would be greatly appreciated!

