
I am using the OpenCV SURF approach to get keypoints and descriptors. It works well, but it takes a lot of time.

My code is:

NSLog(@"Keypoint Detects");
//-- Step 1: Detect the keypoints using SURF Detector

int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_object, keypoints_scene;
detector.detect( img_1, keypoints_object );
detector.detect( img_2, keypoints_scene );

//-- Step 2: Calculate descriptors (feature vectors)
NSLog(@"Descriptor Detects");
SurfDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute( img_1, keypoints_object, descriptors_object );
extractor.compute( img_2, keypoints_scene, descriptors_scene );

//-- Step 3: Matching descriptor vectors using FLANN matcher
NSLog(@"Matching Detects");
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );

Xcode timing output:

2015-10-26 13:22:27.282 AVDemo[288:26112] Keypoint Detects

2015-10-26 13:22:28.361 AVDemo[288:26112] Descriptor Detects

2015-10-26 13:22:30.077 AVDemo[288:26112] Matching Detects

As the timestamps show, it takes almost 3 seconds here just to detect the keypoints and compute the descriptors (the FLANN match only starts after the last log line).
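
In case the NSLog timestamps are too coarse, here is a minimal sketch of how each stage could be timed with cv::getTickCount() / cv::getTickFrequency(), wrapped around the code above (untested, just for illustration):

double f  = cv::getTickFrequency();
double t0 = (double)cv::getTickCount();

detector.detect( img_1, keypoints_object );
detector.detect( img_2, keypoints_scene );
double t1 = (double)cv::getTickCount();

extractor.compute( img_1, keypoints_object, descriptors_object );
extractor.compute( img_2, keypoints_scene, descriptors_scene );
double t2 = (double)cv::getTickCount();

matcher.match( descriptors_object, descriptors_scene, matches );
double t3 = (double)cv::getTickCount();

NSLog(@"detect %.3f s, describe %.3f s, match %.3f s",
      (t1 - t0) / f, (t2 - t1) / f, (t3 - t2) / f);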


I also tried another approach to get them:

NSLog(@"Detect Keypoints");
cv::Ptr<cv::BRISK> ptrBrisk = cv::BRISK::create();
ptrBrisk->detect(img_1, camkeypoints);

//for keypoints
NSLog(@"Compute Keypoints");
ptrBrisk->compute(img_1, camkeypoints,camdescriptors);
if(camdescriptors.type()!=CV_32F) {
    camdescriptors.convertTo(camdescriptors, CV_32F);
}
NSLog(@"camera image conversion end");

This also works fine, but it has the same timing problem. Xcode output:

2015-10-26 14:19:47.939 AVDemo[305:32700] Detect Keypoints

2015-10-26 14:19:49.787 AVDemo[305:32700] Compute Keypoints

2015-10-26 14:19:49.818 AVDemo[305:32700] camera image conversion end
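
I convert the BRISK descriptors to CV_32F only because FlannBasedMatcher's default KD-tree index expects float data. As far as I understand, the binary descriptors could instead be matched directly with a Hamming-distance brute-force matcher, which skips that conversion. A rough, untested sketch (the img_2 / scene side is assumed here, it is not in my snippet above):

//-- assumed: keypoints and BRISK descriptors for the scene image img_2
std::vector<cv::KeyPoint> scenekeypoints;
cv::Mat scenedescriptors;
ptrBrisk->detect(img_2, scenekeypoints);
ptrBrisk->compute(img_2, scenekeypoints, scenedescriptors);

//-- binary descriptors are compared with Hamming distance,
//   using the raw camdescriptors (without the convertTo step above)
cv::BFMatcher hammingMatcher(cv::NORM_HAMMING);
std::vector<cv::DMatch> briskMatches;
hammingMatcher.match(camdescriptors, scenedescriptors, briskMatches);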

How can I minimize this time?

Right now I am using FastFeatureDetector, which cuts the detection time down a lot, but SurfDescriptorExtractor still takes a long time.

The new code is:

NSLog(@"Keypoint Detects");

//-- Step 1: Detect the keypoints using the FAST detector
int threshold = 15;

FastFeatureDetector detector( threshold );

std::vector<KeyPoint> keypoints_object, keypoints_scene;

detector.detect( img_1, keypoints_object );
detector.detect( img_2, keypoints_scene );

//-- Step 2: Calculate descriptors (feature vectors)
NSLog(@"Descriptor Detects");
SurfDescriptorExtractor extractor;

Mat descriptors_object, descriptors_scene;

extractor.compute( img_1, keypoints_object, descriptors_object );
extractor.compute( img_2, keypoints_scene, descriptors_scene );

//-- Step 3: Matching descriptor vectors using FLANN matcher
NSLog(@"Matching Detects");
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );

Xcode output:

2015-10-26 16:06:19.018 AVDemo[375:47824] Keypoint Detects

2015-10-26 16:06:19.067 AVDemo[375:47824] Descriptor Detects

2015-10-26 16:06:21.117 AVDemo[375:47824] Matching Detects
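
So FAST detection itself is now quick (about 0.05 s), and the remaining ~2 seconds go into SurfDescriptorExtractor (the FLANN match only runs after the last log line, so its time is not even shown). I am thinking about replacing Steps 2 and 3 with a binary descriptor and a Hamming matcher as well. A rough, untested sketch of what I mean (assuming OpenCV 2.4.x, where OrbDescriptorExtractor is a typedef of cv::ORB):

//-- Step 2 (alternative): binary ORB descriptors on the FAST keypoints
OrbDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute( img_1, keypoints_object, descriptors_object );
extractor.compute( img_2, keypoints_scene, descriptors_scene );

//-- Step 3 (alternative): Hamming-distance brute-force matching
//   (binary descriptors need no CV_32F conversion for this matcher)
BFMatcher matcher( NORM_HAMMING );
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );

This is only an idea, though; I have not measured it yet.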
