7

I'm detecting markers in images captured with my iPad. Because I want to compute the translation and rotation between the markers, I want to change the perspective of these images so that they look as if I had captured them from directly above the marker.

Right now I'm using

points2D.push_back(cv::Point2f(0, 0));
points2D.push_back(cv::Point2f(50, 0));
points2D.push_back(cv::Point2f(50, 50));
points2D.push_back(cv::Point2f(0, 50));

cv::Mat M = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));

This gives me these results (see the warpPerspective result at the bottom right):

[Photo 1] [Photo 2] [Photo 3]

As you can see, the resulting image contains the recognized marker in its top-left corner. My problem is that I want to capture the whole image (without cropping), so that I can detect other markers on it later.

How can I do that? Maybe I should use the rotation/translation vectors from the solvePnP function?

EDIT:

Unfortunately, changing the size of the warped image doesn't help much, because the image is still translated so that the top-left corner of the marker ends up at the top-left corner of the output.

For example, when I double the size with:

cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));

I get these images:

[Photo 4] [Photo 5]


3 Answers

3

Your code doesn't seem to be complete, so it's hard to say what the issue is.

In any case, the warped image can have completely different dimensions than the input image, so you will have to adjust the size parameter you pass to warpPerspective.

For example, try doubling the size:

cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2 * _image->cols, 2 * _image->rows));

EDIT:

To make sure the whole image ends up inside the result, every corner of the original image must be warped to lie within the resulting image. So simply compute the warped destination of each corner point and adjust the destination points accordingly.

Some example code to make it clearer:

// calculate transformation
// (needs <algorithm> and <cmath> for std::min/std::max/std::ceil/std::abs)
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);

// calculate warped position of all corners
cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
a = a * (1.0 / a.z);

cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
b = b * (1.0 / b.z);

cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0 / c.z);

cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0 / d.z);

// to make sure all corners are in the image, every position must be > (0, 0);
// the offset assumes the smallest warped coordinates are negative
float x = std::ceil(std::abs(std::min(std::min(a.x, b.x), std::min(c.x, d.x))));
float y = std::ceil(std::abs(std::min(std::min(a.y, b.y), std::min(c.y, d.y))));

// and also < (width, height)
float width = std::ceil(std::abs(std::max(std::max(a.x, b.x), std::max(c.x, d.x)))) + x;
float height = std::ceil(std::abs(std::max(std::max(a.y, b.y), std::max(c.y, d.y)))) + y;

// adjust target points accordingly
for (int i = 0; i < 4; i++) {
    points2D[i] += cv::Point2f(x, y);
}

// recalculate transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);

// get result
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size(cvRound(width), cvRound(height)), cv::WARP_INVERSE_MAP);
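A note on the design choice at the end: M maps the destination points onto the image, which is why the image corners are carried into the output plane with M.inv(); passing cv::WARP_INVERSE_MAP then tells warpPerspective to treat the recomputed M as the destination-to-source mapping, so the matrix never has to be inverted a second time.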
Answered 2013-11-01T10:39:57.157
2

I implemented littleimp's answer in Python, in case anyone needs it. Note that this will not work properly if a vanishing point of the polygon falls inside the image.
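A quick way to detect that failure mode, as a minimal sketch (my own addition, not part of the original answer; the helper name `vanishing_point_inside` is made up): each warped corner picks up a homogeneous scale w, and if w changes sign across the four corners, the vanishing line crosses the image and the bounding-box logic below breaks down.

    import numpy as np

    def vanishing_point_inside(mat, corners):
        # lift the (x, y) corners to homogeneous coordinates (x, y, 1)
        pts = np.hstack([corners, np.ones((len(corners), 1))])
        # the third row of mat applied to each corner gives its homogeneous scale w
        w = (mat @ pts.T)[2]
        # mixed signs (or w near zero) mean the vanishing line crosses the image
        return not (np.all(w > 1e-6) or np.all(w < -1e-6))

With that caveat in mind, here is the port: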

    import cv2
    import numpy as np
    from PIL import Image, ImageDraw
    import math


    def get_transformed_image(src, dst, img):
        # calculate the transformation
        mat = cv2.getPerspectiveTransform(src.astype("float32"), dst.astype("float32"))

        # new source: image corners as (x, y); PIL's img.size is (width, height)
        corners = np.array([
            [0, img.size[1]],
            [0, 0],
            [img.size[0], 0],
            [img.size[0], img.size[1]]
        ])

        # transform the corners of the image
        corners_transformed = cv2.perspectiveTransform(
            np.array([corners.astype("float32")]), mat)

        # bounding box of the warped corners
        x_mn = math.floor(min(corners_transformed[0].T[0]))
        y_mn = math.floor(min(corners_transformed[0].T[1]))

        x_mx = math.ceil(max(corners_transformed[0].T[0]))
        y_mx = math.ceil(max(corners_transformed[0].T[1]))

        width = x_mx - x_mn
        height = y_mx - y_mn

        # scale the output so that its height is 1000 px
        analogy = height / 1000
        n_height = height / analogy
        n_width = width / analogy

        # shift the warped corners into positive coordinates and rescale
        dst2 = corners_transformed.copy()
        dst2 -= np.array([x_mn, y_mn])
        dst2 = dst2 / analogy

        mat2 = cv2.getPerspectiveTransform(corners.astype("float32"),
                                           dst2.astype("float32"))

        img_warp = Image.fromarray(
            cv2.warpPerspective(np.array(img),
                                mat2,
                                (int(n_width),
                                 int(n_height))))
        return img_warp


    # image coordinates
    src = np.array([[ 789.72, 1187.35],
                    [ 789.72,  752.75],
                    [1277.35,  730.66],
                    [1277.35, 1200.65]])

    # known coordinates
    dst = np.array([[   0, 1000],
                    [   0,    0],
                    [1092,    0],
                    [1092, 1000]])

    # create a test image (canvas size chosen to comfortably contain src)
    img_width, img_height = 1600, 1400
    image = Image.new('RGB', (img_width, img_height))
    image.paste((200, 200, 200), [0, 0, image.size[0], image.size[1]])
    draw = ImageDraw.Draw(image)
    draw.line([(src[0][0], src[0][1]), (src[1][0], src[1][1]),
               (src[2][0], src[2][1]), (src[3][0], src[3][1]),
               (src[0][0], src[0][1])], width=4, fill="blue")
    #image.show()

    warped = get_transformed_image(src, dst, image)
    warped.show()
Answered 2020-10-30T12:20:10.977
1

You need to do two things:

  1. Increase the size of the output of cv2.warpPerspective
  2. Translate the warped source image so that its center matches the center of the cv2.warpPerspective output image

Here is what the code looks like:

# assumes `image` (a NumPy array) and the homography `H` (3x3) are already defined;
# image.shape is (rows, cols[, channels]) while homography coordinates are (x, y)
h, w = image.shape[:2]

# center of source image in homogeneous (x, y, 1) coordinates
si_c = np.array([w / 2, h / 2, 1])
# find where the center of the source image lands after warping,
# without compensating for any offset
wsi_c = np.dot(H, si_c)
wsi_c = wsi_c / wsi_c[2]
# warping output image size (dsize is (width, height))
stitched_frame_size = (2 * w, 2 * h)
# center of the warping output image
wf_c = (w, h)
# calculate the offset for the translation of the warped image
x_offset = wf_c[0] - wsi_c[0]
y_offset = wf_c[1] - wsi_c[1]
# translation matrix
T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])
# translate the homography matrix
translated_H = np.dot(T, H)
# warp
stitched = cv2.warpPerspective(image, translated_H, stitched_frame_size)
Answered 2020-08-25T13:43:41.033