
I am currently using the OpenCV library with C++, and my goal is to remove the fisheye effect from an image ("flatten" it). I am using the function "undistortImage" to remove the effect, but first I need to perform a camera calibration in order to find the parameters K, Knew and D, and I don't fully understand the documentation (link: http://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gga37375a2741e88052ce346884dfc9c6a0a0899eaa2f96d6eed9927c4b4f4464e05). As far as I understand, I should supply two lists of points, and the "calibrate" function should return the arrays I need. So my question is the following: given a fisheye image, how should I choose the two lists of points to get a useful result? Below is my current code. It is very basic: it just loads an image, displays it, performs the undistortion and displays the new image. The elements of the matrices are arbitrary, so the current result is not as expected. Thanks for your answers.

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include <stdio.h>
#include <iostream>


using namespace std;
using namespace cv;

int main(){

    cout << " Usage: display_image ImageToLoadAndDisplay" << endl;
    Mat image;
    image = imread("C:/Users/Administrator/Downloads/eiffel.jpg", IMREAD_COLOR);   // Read the file
    if (!image.data)                              // Check for invalid input
    {
        cout << "Could not open or find the image" << endl;
        return -1;
    }
    cout << "Input image depth: " << image.depth() << endl;

    namedWindow("Display window", WINDOW_AUTOSIZE);// Create a window for display.
    imshow("Display window", image);                   // Show our image inside it.

    Mat Ka = Mat::eye(3, 3, CV_64F);  // Camera matrix K (placeholder values)
    Mat Da = Mat::ones(1, 4, CV_64F); // Distortion coefficients D (placeholder values)
    Mat dstImage(image.rows, image.cols, image.type());

    cout << "K matrix depth: " << Ka.depth() << endl;
    cout << "D matrix depth: " << Da.depth() << endl;

    Mat Knew = Mat::eye(3, 3, CV_64F);
    std::vector<cv::Vec3d> rvec;
    std::vector<cv::Vec3d> tvec;
    int flag = 0; 
    std::vector<Point3d> objectPoints1 = { Point3d(0,0,0),  Point3d(1,1,0),  Point3d(2,2,0), Point3d(3,3,0), Point3d(4,4,0), Point3d(5,5,0), 
        Point3d(6,6,0),  Point3d(7,7,0),  Point3d(3,0,0), Point3d(4,1,0), Point3d(5,2,0), Point3d(6,3,0), Point3d(7,4,0),  Point3d(8,5,0),  Point3d(5,4,0), Point3d(0,7,0), Point3d(9,7,0), Point3d(9,0,0), Point3d(4,3,0), Point3d(7,2,0)};
    std::vector<Point2d> imagePoints1 = { Point2d(107,84),  Point2d(110,90),  Point2d(116,96), Point2d(126,107), Point2d(142,123), Point2d(168,147),
        Point2d(202,173),  Point2d(232,192),  Point2d(135,69), Point2d(148,73), Point2d(165,81), Point2d(189,93), Point2d(219,112),  Point2d(248,133),  Point2d(166,119), Point2d(96,183), Point2d(270,174), Point2d(226,56), Point2d(144,102), Point2d(206,75) };

    std::vector<std::vector<cv::Point2d> > imagePoints(1);
    imagePoints[0] = imagePoints1;
    std::vector<std::vector<cv::Point3d> > objectPoints(1);
    objectPoints[0] = objectPoints1;
    fisheye::calibrate(objectPoints, imagePoints, image.size(), Ka, Da, rvec, tvec, flag); // Calibration
    cout << Ka<< endl;
    cout << Da << endl;
    fisheye::undistortImage(image, dstImage, Ka, Da, Knew); // Performing undistortion
    namedWindow("Display window 2", WINDOW_AUTOSIZE);// Create a window for display.
    imshow("Display window 2", dstImage);                   // Show our image inside it.

    waitKey(0);                                          // Wait for a keystroke in the window
    return 0;
}

1 Answer


For calibration with cv::fisheye::calibrate you must provide

objectPoints    vector of vectors of calibration pattern points in the calibration pattern coordinate space. 

This means you provide the known real-world coordinates of the points (which must correspond to the points in imagePoints), but you can choose the position of the coordinate system arbitrarily (it must be Cartesian), so you have to know your object, e.g. a planar test pattern.

imagePoints vector of vectors of the projections of calibration pattern points

These points must be the same as those in objectPoints, but given in image coordinates, i.e. where the projection of each object point hits your image (read/extract the coordinates from the image).

For example, if your camera captured this image (taken from here):

[Image: a test pattern captured by a fisheye camera]

You have to know the dimensions of your test pattern (up to a scale factor). For example, you could choose the top-left corner of the top-left square to be position (0,0,0), the top-right corner of the top-left square to be (1,0,0), and the bottom-left corner of the top-left square to be (0,1,0), so your whole test pattern lies in the x-y plane.

Then you could extract these correspondences:

pixel        real-world
(144,103)    (4,3,0)
(206,75)     (7,2,0)
(109,151)    (2,5,0)
(253,159)    (8,6,0)

for these points (marked red):

[Image: the test pattern with the four points marked in red]

The pixel positions would be your imagePoints list, while the real-world positions would be your objectPoints list.

Does this answer your question?

Answered 2016-02-10T15:11:19.100