
Having not gotten an answer to this question, I eventually came across a couple of potentially interesting solutions:

the Robust Matcher from this article, and the Canny Detector from this one.

After setting up a Canny Edge Detector (following its documentation) and implementing the Robust Matcher shown on the first page I linked, I fetched some logo/clothing images and combined the two with decent success:

Logo image matched against a clothing image bearing that logo

But in other, very similar cases, it is way off:

A different logo image with the "exact same" design, against the same clothing image as above.

So this got me wondering: is there a way to match several specific points on an image, so that they define certain regions of a given image?

So instead of reading in an image, matching all the keypoints, discarding the "bad" keypoints, and so on: is it possible to let the system know where one keypoint lies relative to another, and then discard matches that sit right next to each other in one image but land in completely different places in the other?

(As shown by the light-blue and royal-blue "matches" that are adjacent in the left image but match to entirely separate parts of the right image.)
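That relative-position idea can be prototyped without any library support: for each pair of matches, compare the distance between the two keypoints in the first image with the distance between their partners in the second, and keep only matches that agree with the majority. A self-contained sketch of the idea (plain structs rather than the OpenCV types, an arbitrary tolerance, and it assumes the two images are at a similar scale; for scale changes one would compare against the median distance ratio instead):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { float x, y; };

static float dist(const Pt& a, const Pt& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Keep match i only if, for a majority of the other matches j, the
// keypoint distance in image 1 agrees (within 'tol', as a relative
// factor) with the distance between the corresponding points in image 2.
std::vector<size_t> consistentMatches(const std::vector<Pt>& pts1,
                                      const std::vector<Pt>& pts2,
                                      float tol = 0.3f) {
    std::vector<size_t> keep;
    const size_t n = pts1.size();
    for (size_t i = 0; i < n; ++i) {
        size_t votes = 0;
        for (size_t j = 0; j < n; ++j) {
            if (i == j) continue;
            float d1 = dist(pts1[i], pts1[j]);
            float d2 = dist(pts2[i], pts2[j]);
            // Distances agree if their ratio lies in [1 - tol, 1 + tol].
            if (d1 > 0 && std::fabs(d2 / d1 - 1.f) < tol) ++votes;
        }
        if (votes * 2 >= n - 1)  // majority of the other matches agree
            keep.push_back(i);
    }
    return keep;
}
```

A match like the light-blue/royal-blue pair above would get no votes (its neighbors in image 1 are far away from it in image 2) and be dropped. This is essentially what RANSAC on a fundamental matrix or homography does more rigorously.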

EDIT

For Micka:

(image)

The "rectangle" (added in Paint) is drawn at the center of the white box.

// RANSAC-based match filtering, then homography estimation.
// distance / confidence / refineF were class members in the original
// Robust Matcher; here they are passed in with default values.
cv::Mat ransacTest(const std::vector<cv::DMatch>& matches,
                   const std::vector<cv::KeyPoint>& trainKeypoints,
                   const std::vector<cv::KeyPoint>& testKeypoints,
                   std::vector<cv::DMatch>& outMatches,
                   double distance = 3.0, double confidence = 0.98,
                   bool refineF = true)
{
    // Convert keypoints into Point2f
    std::vector<cv::Point2f> points1, points2;
    cv::Mat fundamental;
    for (std::vector<cv::DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it) {
        // Position of the left (train) keypoint
        points1.push_back(trainKeypoints[it->queryIdx].pt);
        // Position of the right (test) keypoint
        points2.push_back(testKeypoints[it->trainIdx].pt);
    }
    // Compute F matrix using RANSAC
    std::vector<uchar> inliers(points1.size(), 0);
    if (points1.size() > 0 && points2.size() > 0) {
        // NOTE: do not redeclare 'fundamental' here -- the original code
        // shadowed the outer variable, so the function always returned
        // an empty matrix.
        fundamental = cv::findFundamentalMat(
            cv::Mat(points1), cv::Mat(points2), // matching points
            inliers,                            // match status (inlier or outlier)
            CV_FM_RANSAC,                       // RANSAC method
            distance,                           // distance to epipolar line
            confidence);                        // confidence probability
        // Extract the surviving (inlier) matches
        std::vector<uchar>::const_iterator itIn = inliers.begin();
        std::vector<cv::DMatch>::const_iterator itM = matches.begin();
        for ( ; itIn != inliers.end(); ++itIn, ++itM) {
            if (*itIn) { // it is a valid match
                outMatches.push_back(*itM);
            }
        }
        if (refineF) {
            // The F matrix will be recomputed with all accepted matches.
            // Convert keypoints into Point2f for the final F computation.
            points1.clear();
            points2.clear();
            for (std::vector<cv::DMatch>::const_iterator it = outMatches.begin(); it != outMatches.end(); ++it) {
                points1.push_back(trainKeypoints[it->queryIdx].pt);
                points2.push_back(testKeypoints[it->trainIdx].pt);
            }
            // Compute 8-point F from all accepted matches
            if (points1.size() > 0 && points2.size() > 0) {
                fundamental = cv::findFundamentalMat(
                    cv::Mat(points1), cv::Mat(points2), CV_FM_8POINT);
            }
        }
    }

    cv::Mat imgMatchesMat;
    cv::drawMatches(trainCannyImg, trainKeypoints, testCannyImg, testKeypoints,
                    outMatches, imgMatchesMat);
    // NOTE: if refineF is false, points1/points2 still hold ALL matches here,
    // outliers included, so the homography is estimated from unfiltered data.
    cv::Mat H = cv::findHomography(points1, points2, CV_RANSAC, 3);
    // -- Little difference when CV_RANSAC is changed to CV_LMEDS or 0

    //-- Get the corners from image_1 (the object to be "detected")
    std::vector<cv::Point2f> obj_corners(4);
    obj_corners[0] = cv::Point2f(0, 0);
    obj_corners[1] = cv::Point2f(trainCannyImg.cols, 0);
    obj_corners[2] = cv::Point2f(trainCannyImg.cols, trainCannyImg.rows);
    obj_corners[3] = cv::Point2f(0, trainCannyImg.rows);
    std::vector<cv::Point2f> scene_corners(4);
    cv::perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene, image_2).
    // The x-offset shifts the box past image_1 in the side-by-side match image.
    cv::Point2f offset(trainCannyImg.cols, 0);
    cv::line(imgMatchesMat, scene_corners[0] + offset, scene_corners[1] + offset, cv::Scalar(0, 255, 0), 4);
    cv::line(imgMatchesMat, scene_corners[1] + offset, scene_corners[2] + offset, cv::Scalar(0, 255, 0), 4);
    cv::line(imgMatchesMat, scene_corners[2] + offset, scene_corners[3] + offset, cv::Scalar(0, 255, 0), 4);
    cv::line(imgMatchesMat, scene_corners[3] + offset, scene_corners[0] + offset, cv::Scalar(0, 255, 0), 4);

    //-- Show detected matches
    cv::imshow("Good Matches & Object detection", imgMatchesMat);
    cv::waitKey(0);

    return fundamental;
}

Homography output

A slightly different input scene (things around it kept changing, and it would take a long time to work out the exact conditions that perfectly reproduce the image above), but the same result:

Object (52, 37)
Scene  (219, 151)
Object (49, 47)
Scene  (241, 139)
Object (51, 50)
Scene  (242, 141)
Object (37, 53)
Scene  (228, 145)
Object (114, 37.2)
Scene  (281, 162)
Object (48.96, 46.08)
Scene  (216, 160.08)
Object (44.64, 54.72)
Scene  (211.68, 168.48)

The images in question:

(image)
