I'm having some trouble understanding how to use aruco.estimatePoseSingleMarkers. Currently I have an aruco tag modeled in a Gazebo sim, with my camera mounted on a drone that is flying toward the tag. The tag gets detected and I estimate the marker's pose. My assumption is that tvecs returns the X, Y, Z of the marker relative to my camera. From there I convert the coordinate frames and use the result to command my drone. However, I find that when I use the error this way, the drone just moves itself so that the aruco tag sits at 0,0 in the image.
Am I misunderstanding what the estimatePoseSingleMarkers function does? Is there something more I need to do to make this work?
Here is the snippet of code I'm working with. There is some code that takes the camera coordinates and converts them, based on the drone's yaw/pitch/roll (YPR), into a world-coordinate offset from the drone.
import cv2
import math
import numpy as np

(corners, ids, rejected) = cv2.aruco.detectMarkers(img, self.alg.ARUCO_DICT, parameters=self.alg.ARUCO_PARAMS)
if ids is not None:
    for id in ids:
        # Set the marker side length (metres) for the detected ID.
        if id[0] == 20:
            aruco_len = 0.25
        # Get the rotation vec and translation vec of the camera to the aruco,
        # I believe; can use this to control the quad.
        # NOTE: corners[0] always takes the first detected marker, not
        # necessarily the one matching this id.
        rvecs, tvecs = cv2.aruco.estimatePoseSingleMarkers(corners[0], aruco_len, self.alg.camera_matrix, self.alg.camera_dist)
        print(tvecs)
        # Reorder into the [y, x, z, 1] column that cam2Local expects.
        cam_pts = np.array([[-tvecs[0][0][1]],
                            [tvecs[0][0][0]],
                            [tvecs[0][0][2]],
                            [1]])
        print(cam_pts)
        error = self.cam2Local(cam_pts)
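As a sanity check on what estimatePoseSingleMarkers is returning, it can help to draw the estimated pose back onto the image. In OpenCV's convention the camera frame has X to the right, Y down, and Z out of the lens, so tvecs[0][0] is the marker centre expressed in that frame. A minimal sketch, assuming the rvecs/tvecs and intrinsics from the snippet above (older contrib builds expose cv2.aruco.drawAxis; newer OpenCV releases call it cv2.drawFrameAxes):

# Draw the estimated marker axes on the frame to verify the pose visually.
# Uses rvecs/tvecs, camera_matrix and camera_dist from the detection code above.
axis_len = 0.5 * aruco_len  # axis length, same units as the marker size
cv2.aruco.drawAxis(img, self.alg.camera_matrix, self.alg.camera_dist,
                   rvecs[0], tvecs[0], axis_len)
# Red/green/blue lines are the marker's X/Y/Z axes; if these look wrong on
# screen, the pose estimate itself (not the frame conversion) is the problem.
cv2.imshow("pose check", img)
cv2.waitKey(1)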
Here is the cam2Local function:

def cam2Local(self, cam_pts):
    # Build the quad's rotation matrix, take the XYZ points from the camera in
    # [y, x, z, 1] format; rot_mat * cam_pts gives the aruco world points.
    roll = self.roll
    pitch = self.pitch
    yaw = self.yaw
    rot_mat = np.array([
        [math.cos(roll)*math.cos(pitch), math.cos(roll)*math.sin(pitch)*math.sin(yaw) - math.sin(roll)*math.cos(yaw), math.cos(roll)*math.sin(pitch)*math.cos(yaw) + math.sin(roll)*math.sin(yaw), 0],
        [math.sin(roll)*math.cos(pitch), math.sin(roll)*math.sin(pitch)*math.sin(yaw) + math.cos(roll)*math.cos(yaw), math.sin(roll)*math.sin(pitch)*math.cos(yaw) - math.cos(roll)*math.sin(yaw), 0],
        [-math.sin(pitch), math.cos(pitch)*math.sin(yaw), math.cos(pitch)*math.cos(yaw), 0],
        [0, 0, 0, 1]])
    local_pts = np.matmul(rot_mat, cam_pts)
    return local_pts
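For reference, the hand-expanded matrix in cam2Local is a ZYX-style composition of three elementary rotations, with the variable named roll in the slot that is conventionally yaw (the outer rotation about z) and vice versa. Writing it as an explicit composition makes the convention visible and avoids sign slips; a small self-contained sketch with hypothetical example angles:

import math
import numpy as np

def rot_x(a):
    # Elementary rotation about the x axis.
    return np.array([[1, 0, 0],
                     [0, math.cos(a), -math.sin(a)],
                     [0, math.sin(a), math.cos(a)]])

def rot_y(a):
    # Elementary rotation about the y axis.
    return np.array([[math.cos(a), 0, math.sin(a)],
                     [0, 1, 0],
                     [-math.sin(a), 0, math.cos(a)]])

def rot_z(a):
    # Elementary rotation about the z axis.
    return np.array([[math.cos(a), -math.sin(a), 0],
                     [math.sin(a), math.cos(a), 0],
                     [0, 0, 1]])

roll, pitch, yaw = 0.1, 0.05, 1.2  # hypothetical example angles, in radians
# Same composition cam2Local expands by hand: outer z, then y, then x.
R = np.matmul(np.matmul(rot_z(roll), rot_y(pitch)), rot_x(yaw))
rot_mat = np.eye(4)
rot_mat[:3, :3] = R  # the 4x4 homogeneous form cam2Local builds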