I am trying to estimate the 3D position of a point in world coordinates from 2 frames. The frames are captured with the same camera from different positions. The problem is that the estimated position is wrong.
I have:
Camera intrinsic parameters:
K = [4708.29296875, 0, 1218.51806640625;
0, 4708.8935546875, 1050.080322265625;
0, 0, 1]
Translation and Rotation data:
Frame X-Coord Y-Coord Z-Coord(altitude) Pitch Roll Yaw
1 353141.23 482097.85 38.678 0.042652439 1.172694124 16.72142499
2 353141.82 482099.69 38.684 0.097542931 1.143224387 16.79931141
Note: the GPS data uses a Cartesian coordinate system; the (X, Y, Z) coordinates are in metres and are based on the British National Grid system.
To obtain the rotation matrices I used https://stackoverflow.com/a/56666686/16432598, which is based on http://www.tobias-weis.de/triangulate-3d-points-from-3d-imagepoints-from-a-moving-camera/. Using the data above, I compute the extrinsic parameters and the projection matrices as follows.
Rt0 = [-0.5284449976982357, 0.308213375891041, -0.7910438668806931, 353141.21875;
-0.8478960766271159, -0.2384055118949635, 0.4735346398506075, 482097.84375;
-0.04263950806535898, 0.9209600028339713, 0.3873171123665929, 38.67800140380859]
Rt1 = [-0.4590975294881605, 0.3270290779984009, -0.8260032933114635, 353141.8125;
-0.8830316937622665, -0.2699087096524321, 0.3839326975722462, 482099.6875;
-0.097388326965866, 0.905649640091175, 0.4126914624432091, 38.68399810791016]
P = K * Rt;
P1 = [-2540.030877954028, 2573.365272473235, -3252.513377560185, 1662739447.059914;
-4037.427278644764, -155.5442017945203, 2636.538291686695, 2270188044.171295;
-0.04263950806535898, 0.9209600028339713, 0.3873171123665929, 38.67800140380859]
P2 = [-2280.235105924588, 2643.299156802081, -3386.193495224041, 1662742249.915956;
-4260.36781710715, -319.9665173096691, 2241.257388910372, 2270196732.490808;
-0.097388326965866, 0.905649640091175, 0.4126914624432091, 38.68399810791016]
cv::triangulatePoints(P1, P2, points2d_1, points2d_2, out); // out holds 4xN homogeneous points; divide by w
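cv::triangulatePoints implements linear (DLT) triangulation and returns homogeneous points that still need de-homogenising. A minimal numpy equivalent (my own helper, demonstrated on a synthetic example so the recovery can be verified independently of the pose data) looks like:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 projection matrices; x1, x2 : pixel coordinates (u, v).
    Returns the 3D point in the frame the projection matrices map from.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null vector of A, up to scale
    return X[:3] / X[3]     # de-homogenise

# Synthetic check: project a known point with two simple cameras, then recover it.
X_true = np.array([1.0, 2.0, 10.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # translated camera

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)
print(triangulate_dlt(P1, P2, x1, x2))  # close to [1. 2. 10.]
```

Running the same helper on the posted P1/P2 and pixel pairs is a quick way to rule out an argument-ordering mistake in the OpenCV call.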
Now I select the same point in both images for triangulation:
p2d_1(205,806) and p2d_2(116,813)
For the 3D position of this particular point I expect something like:
[353143.7, 482130.3, 40.80]
whereas I compute:
[549845.5109014747, -417294.6070425579, -201805.410744677]
I know that my intrinsic parameters and the GPS data are very accurate.
Can anybody tell me what is missing here or what I am doing wrong?
Thanks