I am trying to estimate the camera pose from two images taken with the camera: detect features in both images, match them, estimate the fundamental matrix, compute the essential matrix from it using the camera intrinsics, and then decompose the essential matrix to find the rotation and translation.
Here is the MATLAB code:
I1 = rgb2gray(imread('1.png'));
I2 = rgb2gray(imread('2.png'));
points1 = detectSURFFeatures(I1);
points2 = detectSURFFeatures(I2);
points1 = points1.selectStrongest(40);
points2 = points2.selectStrongest(40);
[features1, valid_points1] = extractFeatures(I1, points1);
[features2, valid_points2] = extractFeatures(I2, points2);
indexPairs = matchFeatures(features1, features2);
matchedPoints1 = valid_points1(indexPairs(:, 1), :);
matchedPoints2 = valid_points2(indexPairs(:, 2), :);
F = estimateFundamentalMatrix(matchedPoints1, matchedPoints2);
K = [2755.30930612600,0,0;0,2757.82356074384,0;1652.43432833339,1234.09417974414,1];
%figure; showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
% K above is stored in MATLAB's transposed (row-vector) convention, i.e. it is
% the transpose of the usual upper-triangular intrinsic matrix, so the
% essential matrix is E = K*F*K.' (equivalently K0'*F*K0 with K0 = K.').
E = K*F*K.';
W = [0,-1,0;1,0,0;0,0,1];
Z = [0,1,0;-1,0,0;0,0,0]; % unused below; in Hartley-Zisserman it gives [T]x = U*Z*U'
[U,S,V] = svd(E);
R = U*W'*transpose(V); % W is orthogonal, so W' equals inv(W)
T = U(:,3);
thetaX = rad2deg(atan2(R(3,2), R(3,3)));
thetaY = rad2deg(atan2(-R(3,1), sqrt(R(3,2)^2 + R(3,3)^2)));
thetaZ = rad2deg(atan2(R(2,1), R(1,1)));
The problem I am facing is that R and T always come out wrong. thetaZ is ~90° most of the time; if I rerun the computation many times I sometimes get the expected angles, but only in some cases.
I can't see why. Could it be that the fundamental matrix I compute is wrong, or am I going wrong somewhere else?
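From what I've read, the SVD-based decomposition of E actually yields four candidate (R, T) pairs (two rotations times two translation signs), and the signs of U and V are not unique either, so always taking U*W'*V' and U(:,3) would only be correct on some runs, which seems to match what I'm seeing. Below is a rough sketch of my understanding (untested) of how the four candidates could be enumerated and the right one picked with a cheirality check; it reuses E, K, matchedPoints1, and matchedPoints2 from the code above.
% Rough sketch (untested): enumerate all four candidate (R, T) pairs from E
% and keep the one that places a triangulated test point in front of both
% cameras. K0 is the conventional upper-triangular intrinsic matrix.
K0 = K.';
x1 = double(matchedPoints1.Location(1,:)); % one correspondence, [x y] in image 1
x2 = double(matchedPoints2.Location(1,:)); % the same correspondence in image 2
[U,~,V] = svd(E);
if det(U) < 0, U = -U; end % enforce det = +1 so the candidates are rotations
if det(V) < 0, V = -V; end
W = [0,-1,0;1,0,0;0,0,1];
candR = cat(3, U*W*V.', U*W.'*V.'); % two candidate rotations
candT = [U(:,3), -U(:,3)];          % two candidate translations (unit norm)
P1 = K0*[eye(3), zeros(3,1)];       % first camera at the world origin
bestR = []; bestT = [];
for i = 1:2
    for j = 1:2
        Rc = candR(:,:,i); Tc = candT(:,j);
        P2 = K0*[Rc, Tc];
        % linear (DLT) triangulation of the single test point
        A = [x1(1)*P1(3,:) - P1(1,:);
             x1(2)*P1(3,:) - P1(2,:);
             x2(1)*P2(3,:) - P2(1,:);
             x2(2)*P2(3,:) - P2(2,:)];
        [~,~,Vx] = svd(A);
        X = Vx(:,end); X = X/X(4); % homogeneous 3-D point
        X2 = [Rc, Tc]*X;           % the same point in the second camera frame
        if X(3) > 0 && X2(3) > 0   % in front of both cameras?
            bestR = Rc; bestT = Tc;
        end
    end
end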
Also, what is the scale/unit of T (the translation vector)? Or is it inferred differently?
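My current guess, which may be wrong: the epipolar constraint x2'*E*x1 = 0 is unchanged if E is multiplied by any nonzero scalar, so the scale of the translation cannot be recovered from two views alone; svd(E) gives T = U(:,3) with unit norm, and the true translation would be some unknown multiple of it.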
P.S. I'm new to computer vision...