I am trying to estimate the pose of a camera mounted on an aircraft. The camera points downward, and I match points in the image to world coordinates expressed in ECEF.
When I run solvePnP I get an rvec and a tvec. As I understand it, rvec is my camera's rotation relative to the world frame, and tvec is the translation from the world origin to my camera. So my main question is: does this mean tvec is my camera's XYZ position in the world frame? If I start at XYZ [0, 0, 0] and move by tvec to the camera origin, is that my camera's position in XYZ coordinates, or am I misunderstanding something?
The main reason I ask is that, to actually get my camera's position in world coordinates, I have to use the following equation:
cam_world_pos = np.dot(-r_mat.T,t_vec)
SolvePNP tvec output: [ 445315.92613106, 400875.77255437, 6342447.84646723]
Actual Camera location: [511964.10153676, -4879715.551228, 4063475.03766107]
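The reason that equation is needed: the [rvec, tvec] pair returned by solvePnP maps world coordinates into camera coordinates (X_cam = R·X_world + t), so tvec is the world origin expressed in the camera frame, not the camera's position in the world. A minimal numpy sketch with a synthetic, assumed pose (no OpenCV required; the Rodrigues conversion is written out by hand) illustrating why the inversion recovers the camera position:

```python
import numpy as np

def rodrigues(rvec):
    """Convert a rotation vector to a 3x3 rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = (rvec / theta).reshape(3)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Assumed (made-up) camera pose: position C_true in world coordinates,
# orientation R taking world coordinates into camera coordinates.
C_true = np.array([[100.0], [-50.0], [2000.0]])
R = rodrigues(np.array([0.1, -0.2, 0.3]))

# What solvePnP returns as tvec is t = -R @ C: the world origin
# expressed in the camera frame, NOT the camera position.
t = -R @ C_true

# Recovering the camera position therefore requires the inverse transform.
C_recovered = -R.T @ t
print(np.allclose(C_recovered, C_true))  # True
```

This is why the raw tvec ([445315.9, 400875.8, 6342447.8] above) looks nothing like the actual ECEF camera location, while -R.T @ t does.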
The full code is below.
import numpy as np
import cv2

# 3D world points in ECEF coordinates, shape (N, 3)
object_points = np.array([[511943.715162702, -4878626.96826019, 4062794.60137282],
[511940.291626567, -4878673.71821000, 4062739.26743133],
[511832.963975293, -4878650.96219925, 4062779.84268338],
[511832.736349870, -4878657.65071521, 4062771.89324488],
[511781.395685648, -4878656.50934662, 4062779.67897580],
[511681.546443719, -4878743.79853387, 4062688.05274913],
[511830.212307712, -4878660.65753048, 4062768.62262194],
[511891.605259940, -4878698.05477778, 4062716.33230839],
[511671.583398601, -4878750.47505608, 4062681.33520162],
[511829.009431822, -4878622.29096120, 4062814.53569581],
[511933.847068750, -4878671.90386977, 4062742.23819396],
[511665.756515402, -4878772.22726829, 4062656.11730890],
[511820.451260988, -4878629.31561360, 4062807.22788101],
[511784.352076247, -4878654.28205995, 4062781.96571512]])
# Corresponding 2D image points in pixels, shape (N, 2)
image_points = np.array([[457.251949864898, 278.044548653805],
[561.563021437094, 395.562922992176],
[324.136638716142, 487.040270938409],
[339.336493099835, 503.664106701581],
[249.253927264257, 570.528444117084],
[284.916025242008, 918.030704517594],
[342.023402026407, 514.466220034638],
[536.326205396700, 520.598672959005],
[283.069716399754, 947.250674705873],
[249.200363292437, 424.003883259338],
[547.194442129373, 399.446296325373],
[325.818913664043, 1008.20369490127],
[252.238475231599, 452.579518673907],
[248.976782258982, 561.867810808243]])
distCoeffs = np.array([-0.4832617131991951, 0.2909008797638061, -0.0123677302055096, -0.01059311004963829, 0.005726971649077564])
K = np.array([[2304.69, 0.0, 526.54],
[0.0, 2303.17, 504.61],
[0.0, 0.0, 1.0]])
_, r_vec, t = cv2.solvePnP(object_points, image_points, K, distCoeffs)
r_vec = r_vec.reshape((3,1))
t = t.reshape((3,1))
print('rotation vector:')
print(r_vec)
# Convert the rotation vector into a 3x3 rotation matrix
r_mat, jacobian = cv2.Rodrigues(r_vec)
print('rotation matrix')
print(r_mat)
# Camera location in world coordinates
cam_world_pos = np.dot(-r_mat.T, t)
print('Camera position in world frame (given K and distortion coefficients):')
print(cam_world_pos)