I think this has to do with keypoints near the image border. The detector finds them, but for the SURF descriptor to return consistent values it needs a block of pixel data around each keypoint, and that data is not available for keypoints close to the border. You can use the following snippet to remove border keypoints after detection but before descriptors are computed. I suggest a borderSize of 20 or more.
#include <algorithm>
#include <vector>
#include <opencv2/core/core.hpp>

// Note: RoiPredicatePic (shown below) must be declared before this function.
void removeBorderKeypoints( std::vector<cv::KeyPoint>& keypoints,
                            const cv::Size& imageSize, int borderSize )
{
    if( borderSize > 0 )
    {
        keypoints.erase( std::remove_if( keypoints.begin(), keypoints.end(),
                             RoiPredicatePic( (float)borderSize, (float)borderSize,
                                              (float)(imageSize.width - borderSize),
                                              (float)(imageSize.height - borderSize) ) ),
                         keypoints.end() );
    }
}
Where RoiPredicatePic is implemented as:
// Predicate that returns true for keypoints lying within borderSize
// pixels of the image border (i.e. outside the allowed ROI).
struct RoiPredicatePic
{
    RoiPredicatePic( float _minX, float _minY, float _maxX, float _maxY )
        : minX(_minX), minY(_minY), maxX(_maxX), maxY(_maxY)
    {}

    bool operator()( const cv::KeyPoint& keyPt ) const
    {
        const cv::Point2f& pt = keyPt.pt;
        return (pt.x < minX) || (pt.x >= maxX) || (pt.y < minY) || (pt.y >= maxY);
    }

    float minX, minY, maxX, maxY;
};
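For context, here is a minimal sketch of how the two pieces might be wired together between detection and description. The SURF class names and headers assume the OpenCV 2.x features2d/nonfree API, and the input file name is just a placeholder:

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SURF lives here in OpenCV 2.4.x

int main()
{
    cv::Mat image = cv::imread( "scene.png", 0 );   // placeholder input, loaded as grayscale

    // 1. Detect keypoints.
    cv::SurfFeatureDetector detector( 400.0 );      // Hessian threshold
    std::vector<cv::KeyPoint> keypoints;
    detector.detect( image, keypoints );

    // 2. Drop keypoints too close to the border before describing them.
    removeBorderKeypoints( keypoints, image.size(), 20 );

    // 3. Compute SURF descriptors only for the surviving keypoints.
    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors;
    extractor.compute( image, keypoints, descriptors );

    return 0;
}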
Also, approximate nearest-neighbor indexing is not the best way to match features between pairs of images; it trades accuracy for speed, which only really pays off with very large descriptor sets. I would suggest trying a simpler, exact matcher first, such as brute-force matching.
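If you want a concrete starting point, brute-force matching with a ratio test is a common baseline. The sketch below assumes the cv::BFMatcher class from OpenCV 2.4+ (older 2.x releases expose the same idea as BruteForceMatcher<L2<float> >), and the 0.7 ratio threshold is only an illustrative choice:

#include <vector>
#include <opencv2/features2d/features2d.hpp>

// descriptors1 and descriptors2 are the SURF descriptor matrices of the two images.
std::vector<cv::DMatch> matchBruteForce( const cv::Mat& descriptors1,
                                         const cv::Mat& descriptors2 )
{
    // Exhaustive L2 matching; knnMatch returns the 2 best candidates per query descriptor.
    cv::BFMatcher matcher( cv::NORM_L2 );
    std::vector<std::vector<cv::DMatch> > knnMatches;
    matcher.knnMatch( descriptors1, descriptors2, knnMatches, 2 );

    // Ratio test: keep a match only if it is clearly better than the runner-up.
    std::vector<cv::DMatch> goodMatches;
    for( size_t i = 0; i < knnMatches.size(); ++i )
    {
        if( knnMatches[i].size() == 2 &&
            knnMatches[i][0].distance < 0.7f * knnMatches[i][1].distance )
        {
            goodMatches.push_back( knnMatches[i][0] );
        }
    }
    return goodMatches;
}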