I want to do 3D reconstruction using a Structure-from-Motion algorithm. I am doing this in Python with OpenCV, but somehow the point cloud I obtain is split into two halves. My input images are: Image 1, Image 2, Image 3. I match every 2 images, e.g. image1 with image2 and image2 with image3. I have tried different feature detectors such as SIFT, KAZE and SURF. From the obtained points I compute the fundamental matrix. I obtained the camera intrinsics from OpenCV's camera calibration; they are stored in the variables "mtx" and "dist" in the code below.
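
For context, the intrinsics mtx and dist used in the code below would typically come from an OpenCV calibration run along these lines (a minimal sketch, not part of the original code; the checkerboard pattern size and the calibration image folder are assumptions):

```
import glob
import cv2
import numpy as np

# Assumed 9x6 inner-corner checkerboard; replace with the actual calibration target.
pattern_size = (9, 6)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []               # 3D object points, 2D image points
for fname in glob.glob('Path_to_calibration_images/*.jpg'):   # hypothetical folder
    gray = cv2.imread(fname, 0)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# mtx (3x3 camera matrix) and dist (distortion coefficients) are what the code below uses
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```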

```
import os
import sys
import cv2
import numpy as np

# 'mtx' and 'dist' are the camera matrix and distortion coefficients
# obtained beforehand from OpenCV camera calibration.

file = os.listdir('Path_to_images')
file.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
path = os.path.join(os.getcwd(), 'Path_to_images/')

for i in range(0, len(file)-1):

    if(i == len(file) - 1):
        break

    path1 = cv2.imread(path + file[i], 0)
    path1 = cv2.equalizeHist(path1)

    path2 = cv2.imread(path + file[i+1], 0)
    path2 = cv2.equalizeHist(path2)

# Feature Detection #
    sift = cv2.xfeatures2d.SIFT_create()
    kp1, des1 = sift.detectAndCompute(path1,None)
    kp2, des2 = sift.detectAndCompute(path2,None)

# Feature Matching #
    FLANN_INDEX_KDTREE = 0              
    index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
    search_params = dict(checks=50)   
    flann = cv2.FlannBasedMatcher(index_params,search_params)
    matches = flann.knnMatch(des1,des2,k=2)

    good = []
    pts1 = []
    pts2 = []

    for j, (m,n) in enumerate(matches):
        if m.distance < 0.8*n.distance:
            good.append(m)
            pts2.append(kp2[m.trainIdx].pt)
            pts1.append(kp1[m.queryIdx].pt)

    pts1 = np.int32(pts1)
    pts2 = np.int32(pts2)

    pts1 = np.array([pts1],dtype=np.float32)
    pts2 = np.array([pts2],dtype=np.float32)

# UNDISTORTING POINTS #

    pts1_norm = cv2.undistortPoints(pts1, mtx, dist)
    pts2_norm = cv2.undistortPoints(pts2, mtx, dist)

# COMPUTE FUNDAMENTAL MATRIX #

    F, mask = cv2.findFundamentalMat(pts1_norm,pts2_norm,cv2.FM_LMEDS)

# COMPUTE ESSENTIAL MATRIX #

    E, mask = cv2.findEssentialMat(pts1_norm, pts2_norm, focal=55.474, pp=(33.516, 16.630), method=cv2.FM_LMEDS, prob=0.999, threshold=3.0)


# POSE RECOVERY #
    points, R, t, mask = cv2.recoverPose(E, pts1_norm, pts2_norm)
    anglesBetweenImages = rotationMatrixToEulerAngles(R)   # helper function defined elsewhere in the script

    sys.stdout = open('path_to_folder/angles.txt', 'a')
    print(anglesBetweenImages)

#  COMPOSE PROJECTION MATRIX OF R, t #
    matrix_1 = np.hstack((R, t))
    matrix_2 = np.hstack((np.eye(3, 3), np.zeros((3, 1))))

    projMat_1 = np.dot(mtx, matrix_1)
    projMat_2 = np.dot(mtx, matrix_2)

# TRIANGULATE POINTS #
    point_4d_hom = cv2.triangulatePoints(projMat_1[:3], projMat_2[:3], pts1[:2].T, pts2[:2].T)


# HOMOGENIZE THE 4D RESULT TO 3D #

    point_4d = point_4d_hom

    point_3d = point_4d[:3, :].T                # Obtains 3D points
    np.savetxt('/path_to_folder/' + file[i] + '.txt', point_3d)
```
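
For reference, the homogenization step named in the last comment of the snippet usually divides the 4xN homogeneous output of cv2.triangulatePoints by its fourth row before taking the 3D coordinates; a minimal sketch using the variable names from the code above:

```
# point_4d_hom is the 4xN homogeneous result of cv2.triangulatePoints
point_4d = point_4d_hom / point_4d_hom[3]   # divide every column by its w component
point_3d = point_4d[:3, :].T                # N x 3 Euclidean points
```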

After cv2.triangulatePoints, I was hoping to obtain a point cloud. But the result I get has 2 surfaces, as shown in the image below.

Result 1. I would really appreciate it if someone could tell me what is going wrong in my algorithm. Thanks!

1 Answer

You need to do this iteratively, like this:

```
cv::Mat pointsMat1(2, 1, CV_64F);
cv::Mat pointsMat2(2, 1, CV_64F);

int size0 = m_history.getHistorySize();

for(int i = 0; i < size0; i++){
  cv::Point pt1 = m_history.getOriginalPoint(0, i);
  cv::Point pt2 = m_history.getOriginalPoint(1, i);

  pointsMat1.at<double>(0,0) = pt1.x;
  pointsMat1.at<double>(1,0) = pt1.y;
  pointsMat2.at<double>(0,0) = pt2.x;
  pointsMat2.at<double>(1,0) = pt2.y;

  cv::Mat pnts3D(4, 1, CV_64F);

  cv::triangulatePoints(m_projectionMat1, m_projectionMat2, pointsMat1, pointsMat2, pnts3D);
}
```
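
For readers working in Python like the asker, a rough equivalent of this per-point loop could look like the sketch below (projMat_1, projMat_2, pts1 and pts2 are assumed to be the projection matrices and matched points from the question's code):

```
import cv2
import numpy as np

# Triangulate one correspondence at a time, mirroring the C++ loop above.
points_3d = []
for p1, p2 in zip(pts1.reshape(-1, 2), pts2.reshape(-1, 2)):
    pt1 = np.asarray(p1, dtype=np.float64).reshape(2, 1)
    pt2 = np.asarray(p2, dtype=np.float64).reshape(2, 1)

    pnts4d = cv2.triangulatePoints(projMat_1, projMat_2, pt1, pt2)   # 4x1 homogeneous
    points_3d.append((pnts4d[:3] / pnts4d[3]).ravel())               # convert to 3D

points_3d = np.array(points_3d)
```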
Answered 2020-06-20T05:37:22.487