
Can a transformation matrix applied to a given image be reused on a scaled version of the same image?

To explain: I have successfully used 'cv::findHomography' to compute the 3x3 homography matrix 'Href' between an orthographic reference image and a distortion-corrected photograph, after first collecting correspondences between the two images:

Href = findHomography(mpts_2,
                      mpts_1,
                      cv::RANSAC,
                      Settings::getHomography_ransacReprojThr(),
                      outlier_mask);
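(For reference, here is a minimal sketch of how such correspondences could be gathered with ORB features. This is illustrative only and assumes OpenCV 3.x or later; it is not our actual pipeline, and both image filenames are placeholders.)

#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>

// Illustrative only: detect ORB features in both images, cross-check match
// them, and collect the matched point coordinates.
cv::Mat img_ref   = cv::imread("ortho_reference.jpg", cv::IMREAD_GRAYSCALE); // placeholder
cv::Mat img_photo = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);           // placeholder

cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
std::vector<cv::KeyPoint> kpts_1, kpts_2;
cv::Mat desc_1, desc_2;
orb->detectAndCompute(img_ref, cv::noArray(), kpts_1, desc_1);
orb->detectAndCompute(img_photo, cv::noArray(), kpts_2, desc_2);

cv::BFMatcher matcher(cv::NORM_HAMMING, true); // cross-check filtering
std::vector<cv::DMatch> matches;
matcher.match(desc_1, desc_2, matches);

// Assumed here: mpts_1 lies in the reference image, mpts_2 in the
// photograph, matching the argument order of the findHomography call above.
std::vector<cv::Point2f> mpts_1, mpts_2;
for (const cv::DMatch& m : matches) {
    mpts_1.push_back(kpts_1[m.queryIdx].pt);
    mpts_2.push_back(kpts_2[m.trainIdx].pt);
}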

See below for an example of the photographic input and the orthomosaic reference. This is an archaeological computing project in which we are creating an Egyptological reference for all the walls of a temple built by Ramesses II at Thebes, Egypt:

Input photograph (top), orthomosaic reference (bottom)

Using the matrix above, I can use 'cv::warpPerspective' to create an interpolated image that correctly simulates the pose of the orthomosaic reference object; see the image below, right.

I believe the code provided below produces the correct result (see Code Section A), but I now want to apply the same 'Href' matrix to a larger version of the same input image 'src' above.

Is this possible?

My attempt to scale the result of the earlier transformation on the proxy image so that it applies to the full-resolution photograph produces a distorted result, shown below left, in contrast to the correct result on the right:

Incorrect result (left), the result I want, for reference (right)

In summary: I am able to transform the smaller proxy image, but I am unsure whether the same matrix can be used on the larger, full-resolution version of the image.

Code Section A

Here is the working code that transforms the proxy image according to the orthographic reference image. Most of the code here deals with the size and offset of the output image; the 'cv::warpPerspective' call is at the end of the block:

// http://en.wikipedia.org/wiki/Transformation_matrix
cv::namedWindow(REMAP_WINDOW, cv::WINDOW_AUTOSIZE); // create homography display window
bool redraw = true;
// load image associated with current image
src = cv::imread("input.jpg", 1);
dst.create(src.size(), src.type()); // create destination and the maps
// Identify source image corners
std::vector<cv::Point2f> obj_corners(4);
obj_corners[0] = cv::Point2f(0.f, 0.f);
obj_corners[1] = cv::Point2f((float)src.cols, 0.f);
obj_corners[2] = cv::Point2f((float)src.cols, (float)src.rows);
obj_corners[3] = cv::Point2f(0.f, (float)src.rows);
std::vector<cv::Point2f> scene_corners(4);
cv::perspectiveTransform(obj_corners, scene_corners, Href); // Transform source image corners by Href to find transformed bounds
int minCols = 0, maxCols = 0, minRows = 0, maxRows = 0;
for (size_t i = 0; i < scene_corners.size(); i++)
{
    //cout << "scene_corners.at(i).y: " << scene_corners.at(i).y << " scene_corners.at(i).x: " << scene_corners.at(i).x << endl;
    if (maxRows < scene_corners.at(i).y)
        maxRows = scene_corners.at(i).y;
    if (minRows > scene_corners.at(i).y)
        minRows = scene_corners.at(i).y;
    if (maxCols < scene_corners.at(i).x)
        maxCols = scene_corners.at(i).x;
    if (minCols > scene_corners.at(i).x)
        minCols = scene_corners.at(i).x;
}

int imageWidth = (maxCols-minCols)+30;
int imageHeight = (maxRows-minRows)+30;
double w = (double)imageWidth, h = (double)imageHeight;
int f = 500;
int x = -minCols; // This is an approximation only!
int y = -minRows; // This is an approximation only!

// Projection 2D -> 3D matrix
cv::Mat A1 = (cv::Mat_<double>(4,3) <<
1, 0, -w/2,
0, 1, -h/2,
0, 0, 0,
0, 0, 1);

// Camera Intrinsics matrix 3D -> 2D
cv::Mat A2 = (cv::Mat_<double>(3,4) <<
f, 0, w/2, 0,
0, f, h/2, 0,
0, 0, 1, 0);

// Translation matrix on the X and Y axis
cv::Mat T = (cv::Mat_<double>(4, 4) <<
1, 0, 0, x,
0, 1, 0, y,
0, 0, 1, 500,
0, 0, 0, 1);

// Apply matrix transformation
cv::Mat transfo = A2 * (T * A1);

// Apply image interpolation
cv::warpPerspective(src, dst, Href * transfo, cv::Size(imageWidth, imageHeight), cv::INTER_CUBIC);

cv::imshow(REMAP_WINDOW, dst);

Code Section B

This second section shows my non-working attempt to apply the 'Href' matrix to the scaled image (i.e., the full-resolution photograph rather than the smaller proxy):

src = cv::imread("C:\\Users\\insight\\Documents\\Visual Studio 2010\\Projects\\find-object\\bin\\Release\\genies\\Img4913_pt.jpg", 1);
dst.create(src.size(), src.type()); // create destination and the maps 
// Scale existing min/max cols/rows to fit larger image
int imageWidth = ((maxCols-minCols)*(src.cols/image.cols))+30; // Arbitrary border of 30 pixels
int imageHeight = ((maxRows-minRows)*(src.rows/image.rows))+30;
double w = (double)imageWidth, h = (double)imageHeight;
cout << "original image width: " << src.cols << ", original image height: " << src.rows << endl;
cout << "transformed image width: " << imageWidth << ", transformed image height: " << imageHeight << endl;
int f = 500;
int x = (minCols*(src.cols/image.cols))*2; // This is an approximation only!
int y = (minRows*(src.rows/image.rows))*2; // This is an approximation only!

vector<cv::Point2f> corners;
corners.push_back(cv::Point2f(0, 0));
corners.push_back(cv::Point2f(image.cols, 0));
corners.push_back(cv::Point2f(image.cols, image.rows));
corners.push_back(cv::Point2f(0, image.rows));

// Corners of the destination image
vector<cv::Point2f> output_corner;
output_corner.push_back(cv::Point2f(0, 0));
output_corner.push_back(cv::Point2f(dst.cols, 0));
output_corner.push_back(cv::Point2f(dst.cols, dst.rows));
output_corner.push_back(cv::Point2f(0, dst.rows));

// Get transformation matrix
cv::Mat Hscale = getPerspectiveTransform(corners, output_corner);

int j = 0;
x = -14500;
y = -9500;
int z = 4000;
int xfactor = 0;
int yfactor = 0;
int width = dst.cols;
int height = dst.rows;

// Projection 2D -> 3D matrix
cv::Mat A1 = (cv::Mat_<double>(4,3) <<
1, 0, -w/2,
0, 1, -h/2,
0, 0, 0,
0, 0, 1);

// Camera Intrinsics matrix 3D -> 2D
cv::Mat A2 = (cv::Mat_<double>(3,4) <<
f, 0, w/2, 0,
0, f, h/2, 0,
0, 0, 1, 0);

// Translation matrix on the X and Y axis
cv::Mat T = (cv::Mat_<double>(4, 4) <<
1, 0, 0, x,
0, 1, 0, y,
0, 0, 1, z,
0, 0, 0, 1);

// Apply matrix transformation
cv::Mat transfo = A2 * (T * A1);

cv::warpPerspective(src, dst, Href * Hscale * transfo, cv::Size(imageWidth, imageHeight), cv::INTER_CUBIC, cv::BORDER_CONSTANT, 0);
cv::imwrite("C:\\Users\\Kevin\\Documents\\Find-Object\\image.png", dst);

1 Answer


Your code samples are somewhat confusing, so I won't comment on them. However, the answer to your question is yes, because a homography is only defined up to a scalar multiplicative constant. You can convince yourself of this by noticing that, for every homography H, homogeneous 2D point x, and scalar s, the products H * x and (s * H) * x represent the same 2D point after division by the third coordinate of the result (except for points at infinity, of course).
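A quick way to check this numerically (a self-contained sketch; the matrix entries below are arbitrary):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // An arbitrary, non-singular homography used only for the demonstration.
    cv::Mat H = (cv::Mat_<double>(3, 3) <<
        1.2,  0.1,  30.0,
        0.05, 0.9, -15.0,
        1e-4, 2e-4,  1.0);
    cv::Mat sH = 5.0 * H; // the same homography scaled by s = 5

    std::vector<cv::Point2f> x(1, cv::Point2f(100.f, 200.f));
    std::vector<cv::Point2f> Hx, sHx;

    // perspectiveTransform applies the matrix and then divides by the third
    // homogeneous coordinate, so the scalar s cancels out.
    cv::perspectiveTransform(x, Hx, H);
    cv::perspectiveTransform(x, sHx, sH);

    std::cout << Hx[0] << " equals " << sHx[0] << std::endl; // same 2D point
    return 0;
}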

Therefore, you should be able to apply the homography estimated on the smaller image, unchanged, to the larger one.
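In terms of your code, that would amount to something like the following (a minimal sketch of this suggestion, untested against your data; the filename is a placeholder and the output size is reused from Code Section A for illustration):

cv::Mat full = cv::imread("full_res.jpg", 1); // placeholder filename
cv::Mat warped;
// Reuse Href exactly as estimated on the proxy; only the output size
// changes (here reusing imageWidth/imageHeight as a placeholder).
cv::warpPerspective(full, warped, Href, cv::Size(imageWidth, imageHeight), cv::INTER_CUBIC);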

There is one caveat, however: upscaling also magnifies the estimation errors, and they may become intolerable the closer the estimated homography is to singular.

I suggest you start by carefully inspecting the matches on which your estimate is based.
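For instance, the inlier mask filled in by 'cv::findHomography' can be used to visualize which matches survived RANSAC (a sketch; it assumes the keypoints and matches behind 'mpts_1'/'mpts_2' are still available under the hypothetical names used earlier, and that 'outlier_mask' is a std::vector<uchar>):

#include <opencv2/features2d.hpp>

// drawMatches expects a std::vector<char> mask, so convert the uchar mask.
std::vector<char> match_mask(outlier_mask.begin(), outlier_mask.end());

cv::Mat vis;
cv::drawMatches(img_ref, kpts_1, img_photo, kpts_2, matches, vis,
                cv::Scalar::all(-1), cv::Scalar::all(-1), match_mask);
cv::imwrite("inlier_matches.png", vis);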

Answered 2013-09-26T12:31:57.383