
I am new to OpenCV. I am trying to draw feature matches between two images using FLANN/SURF with OpenCV on iOS. I am following this example:

http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-matching-with-flann

Here is my code, slightly modified: I wrapped the code from the example in a function that returns a UIImage as the result and reads the source images from the app bundle:

UIImage* SURFRecognition::test()
{
    UIImage *img1 = [UIImage imageNamed:@"wallet"];
    UIImage *img2 = [UIImage imageNamed:@"wallet2"];

    Mat img_1;
    Mat img_2;

    UIImageToMat(img1, img_1);
    UIImageToMat(img2, img_2);

    if( !img_1.data || !img_2.data )
    {
        std::cout<< " --(!) Error reading images " << std::endl;
    }

    //-- Step 1: Detect the keypoints using SURF Detector
    int minHessian = 400;

    SurfFeatureDetector detector( minHessian );

    std::vector<KeyPoint> keypoints_1, keypoints_2;

    detector.detect( img_1, keypoints_1 );
    detector.detect( img_2, keypoints_2 );

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;

    Mat descriptors_1, descriptors_2;

    extractor.compute( img_1, keypoints_1, descriptors_1 );
    extractor.compute( img_2, keypoints_2, descriptors_2 );

    //-- Step 3: Matching descriptor vectors using FLANN matcher
    FlannBasedMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
    //-- PS.- radiusMatch can also be used here.
    std::vector< DMatch > good_matches;

    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        if( matches[i].distance <= 2*min_dist )
            good_matches.push_back( matches[i] );
    }

    //-- Draw only "good" matches
    Mat img_matches;
    drawMatches( img_1, keypoints_1, img_2, keypoints_2,
                good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Show detected matches
    //imshow( "Good Matches", img_matches );

    UIImage *imgTemp = MatToUIImage(img_matches);

    for( int i = 0; i < good_matches.size(); i++ )
    {
        printf( "-- Good Match [%d] Keypoint 1: %d  -- Keypoint 2: %d  \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx );
    }

    return imgTemp;
}

The result of the function above is:

[output screenshot]

Only the lines connecting the matches are shown, but the original images themselves are not. If I understand correctly, the drawMatches function returns a cv::Mat that contains both images with the connections between similar features drawn on top. Is that correct, or am I missing something? Can anyone help me?


1 Answer


I found the solution myself. After a lot of searching, it turns out that drawMatches requires img1 and img2 to have between 1 and 3 channels. I was loading PNGs with an alpha channel, so those were 4-channel images. Here is the revised code:

Add:

UIImageToMat(img1, img_1);
UIImageToMat(img2, img_2);

cvtColor(img_1, img_1, CV_BGRA2BGR);
cvtColor(img_2, img_2, CV_BGRA2BGR);
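
For context, a minimal sketch of how the conversion slots into the top of the function from the question (assuming the same OpenCV iOS headers and using-directives as in the question; the image names and the rest of the SURF/FLANN pipeline are unchanged, and the channel check is an extra defensive guard, not part of the original answer):

// Load the bundle images and drop the alpha channel before the SURF/FLANN
// pipeline, so drawMatches receives 1- or 3-channel Mats.
UIImage *img1 = [UIImage imageNamed:@"wallet"];
UIImage *img2 = [UIImage imageNamed:@"wallet2"];

cv::Mat img_1, img_2;
UIImageToMat(img1, img_1);   // PNGs with alpha arrive as 4-channel BGRA
UIImageToMat(img2, img_2);

// Convert to 3 channels only when an alpha channel is actually present
if( img_1.channels() == 4 ) cvtColor(img_1, img_1, CV_BGRA2BGR);
if( img_2.channels() == 4 ) cvtColor(img_2, img_2, CV_BGRA2BGR);

// ... keypoint detection, matching, and drawMatches exactly as in the question
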
answered 2013-09-10 at 11:10