
I am trying to rotate an image by a few degrees and then show it in a window. My idea is to rotate it and then show it in a new window whose width and height are computed from the old width and height:

new_width = x * cos angle + y * sin angle
new_height = y * cos angle + x * sin angle

I expected the result to look like this:

[expected result image]

But the result looks like this:

[actual result image]

My code is here:

#!/usr/bin/env python -tt
#coding:utf-8

import sys
import math
import cv2
import numpy as np 

def rotateImage(image, angle):#parameter angle in degrees

    if len(image.shape) > 2:#check colorspace
        shape = image.shape[:2]
    else:
        shape = image.shape
    image_center = tuple(np.array(shape)/2)#rotation center

    radians = math.radians(angle)

    x, y = shape
    print 'x =',x
    print 'y =',y
    new_x = math.ceil(math.cos(radians)*x + math.sin(radians)*y)
    new_y = math.ceil(math.sin(radians)*x + math.cos(radians)*y)
    new_x = int(new_x)
    new_y = int(new_y)
    rot_mat = cv2.getRotationMatrix2D(image_center,angle,1.0)
    print 'rot_mat =', rot_mat
    result = cv2.warpAffine(image, rot_mat, shape, flags=cv2.INTER_LINEAR)
    return result, new_x, new_y

def show_rotate(im, width, height):
#    width = width/2
#    height = height/2
#    win = cv2.cv.NamedWindow('ro_win',cv2.cv.CV_WINDOW_NORMAL)
#    cv2.cv.ResizeWindow('ro_win', width, height)
    win = cv2.namedWindow('ro_win')
    cv2.imshow('ro_win', im)
    if cv2.waitKey() == '\x1b':
        cv2.destroyWindow('ro_win')

if __name__ == '__main__':

    try:
        im = cv2.imread(sys.argv[1],0)
    except:
        print '\n', "Can't open image, OpenCV or file missing."
        sys.exit()

    rot, width, height = rotateImage(im, 30.0)
    print width, height
    show_rotate(rot, width, height)

There must be some silly mistake in my code that causes this, but I can't figure it out... and I know my code isn't very pythonic :( .. sorry..

Can anyone help me?

Best,

熊熊


3 Answers


As BloodyD's answer points out, cv2.warpAffine does not automatically center the transformed image. It simply transforms each pixel with the transformation matrix. (This can move pixels anywhere in Cartesian space, including outside the original image area.) Then, when you specify a destination image size, it grabs a region of that size starting at (0,0), i.e. the top-left corner of the original frame. Any part of the transformed image that falls outside that region is cut off.
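As a minimal sketch of that clipping behavior (the test image here is a made-up white rectangle, just for illustration): if you pass the original size as dsize, everything the rotation pushes outside the top-left window of that size is simply lost:

import cv2
import numpy as np

img = np.full((200, 300), 255, dtype=np.uint8)  # hypothetical 300x200 white test image
(h, w) = img.shape
M = cv2.getRotationMatrix2D(center=(w / 2, h / 2), angle=30, scale=1.0)
# dsize = (w, h): the rotated corners that leave the top-left w x h window are cut off
clipped = cv2.warpAffine(img, M, (w, h))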

Here is Python code that rotates and scales an image, with the result centered:

def rotateAndScale(img, scaleFactor = 0.5, degreesCCW = 30):
    (oldY,oldX) = img.shape #note: numpy uses (y,x) convention but most OpenCV functions use (x,y)
    M = cv2.getRotationMatrix2D(center=(oldX/2,oldY/2), angle=degreesCCW, scale=scaleFactor) #rotate about center of image.

    #choose a new image size.
    newX,newY = oldX*scaleFactor,oldY*scaleFactor
    #include this if you want to prevent corners being cut off
    r = np.deg2rad(degreesCCW)
    newX,newY = (abs(np.sin(r)*newY) + abs(np.cos(r)*newX),abs(np.sin(r)*newX) + abs(np.cos(r)*newY))

    #the warpAffine function call, below, basically works like this:
    # 1. apply the M transformation on each pixel of the original image
    # 2. save everything that falls within the upper-left "dsize" portion of the resulting image.

    #So I will find the translation that moves the result to the center of that region.
    (tx,ty) = ((newX-oldX)/2,(newY-oldY)/2)
    M[0,2] += tx #third column of matrix holds translation, which takes effect after rotation.
    M[1,2] += ty

    rotatedImg = cv2.warpAffine(img, M, dsize=(int(newX),int(newY)))
    return rotatedImg
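A quick usage sketch (the file name is hypothetical; note that the img.shape unpacking above expects a single-channel/grayscale image):

img = cv2.imread("input.png", 0)  # hypothetical path, loaded as grayscale
rotated = rotateAndScale(img, scaleFactor=1.0, degreesCCW=30)
cv2.imshow("rotated", rotated)
cv2.waitKey()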

[result image: rotated output, centered with corners preserved]

Answered 2015-10-20T21:45:41.070

When you get the rotation matrix like this:

rot_mat = cv2.getRotationMatrix2D(image_center,angle,1.0)

Your "scale" parameter is set to 1.0, so if you use it to transform your image matrix into a result matrix of the same size, it will necessarily be clipped.

You can instead get a rotation matrix like this:

rot_mat = cv2.getRotationMatrix2D(image_center,angle,0.5)

It will both rotate and shrink, leaving room around the edges (you can scale the image up first so that you still end up with a large image).

Also, you seem to be mixing up the numpy and OpenCV conventions for image sizes. OpenCV uses (x, y) for image sizes and point coordinates, while numpy uses (y, x). That is probably why you are going from a portrait to a landscape aspect ratio.

I tend to make it explicit, like this:

imageHeight = image.shape[0]
imageWidth = image.shape[1]
pointcenter = (imageWidth/2, imageHeight/2)

etc.

In the end, this worked fine for me:

def rotateImage(image, angle):#parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)#rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center,angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result

Update:

Here is the complete script I ran. It just uses cv2.imshow("winname", image) and cv2.waitKey() with no argument to keep the window open:

import cv2

def rotateImage(image, angle):#parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)#rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center,angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result

imageOriginal = cv2.imread("/Path/To/Image.jpg")
# this was an iPhone image that I wanted to resize to something manageable to view
# so I knew beforehand that this is an appropriate size
imageOriginal = cv2.resize(imageOriginal, (600,800))
imageRotated= rotateImage(imageOriginal, 45)

cv2.imshow("Rotated", imageRotated)
cv2.waitKey()

There really isn't much to it... And you were definitely right to use if __name__ == '__main__': if it's a real module you're working on.

Answered 2012-08-02T06:53:47.873

Well, this question doesn't seem to be up to date, but I ran into the same problem and it took me a while to solve it without scaling the original image up and down. I will just post my solution (unfortunately C++ code, but it could easily be ported to python if needed):

#include <math.h>
#define PI 3.14159265
#define SIN(angle) sin(angle * PI / 180)
#define COS(angle) cos(angle * PI / 180)

void rotate(const Mat src, Mat &dest, double angle, int borderMode, const Scalar &borderValue){

    int w = src.size().width, h = src.size().height;

    // resize the destination image
    Size2d new_size = Size2d(abs(w * COS((int)angle % 180)) + abs(h * SIN((int)angle % 180)), abs(w * SIN((int)angle % 180)) + abs(h * COS((int)angle % 180)));
    dest = Mat(new_size, src.type());

    // this is our rotation point
    Size2d old_size = src.size();
    Point2d rot_point = Point2d(old_size.width / 2.0, old_size.height / 2.0);

    // and this is the rotation matrix
    // same as in the opencv docs, but in 3x3 form
    double a = COS(angle), b = SIN(angle);
    Mat rot_mat   = (Mat_<double>(3,3) << a, b, (1 - a) * rot_point.x - b * rot_point.y, -1 * b, a, b * rot_point.x + (1 - a) * rot_point.y, 0, 0, 1);

    // next the translation matrix
    double offsetx = (new_size.width - old_size.width) / 2,
           offsety = (new_size.height - old_size.height) / 2;
    Mat trans_mat = (Mat_<double>(3,3) << 1, 0, offsetx , 0, 1, offsety, 0, 0, 1);

    // multiply them: we rotate first, then translate, so the order is important!
    // note the multiplication order: the matrix applied first (the rotation) goes on the right
    Mat affine_mat = Mat(trans_mat * rot_mat).rowRange(0, 2);

    // now just apply the affine transformation matrix
    warpAffine(src, dest, affine_mat, new_size, INTER_LINEAR, borderMode, borderValue);
}

The general approach is to rotate the picture and then translate the rotated picture to the correct position. So we create two transformation matrices (the first for the rotation, the second for the translation) and multiply them into the final affine transformation. Since the matrix returned by opencv's getRotationMatrix2D is only 2x3, I had to create the matrices manually in 3x3 form so that they could be multiplied. Then just take the first two rows and apply the affine transformation.
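For reference, a rough Python sketch of the same idea (rotate_centered is a name made up here; it builds the 2x3 matrix with cv2.getRotationMatrix2D, extends it and a translation to 3x3 form, multiplies them, and keeps the first two rows for warpAffine):

import cv2
import numpy as np

def rotate_centered(src, angle):
    (h, w) = src.shape[:2]

    # bounding box of the rotated image
    r = np.deg2rad(angle)
    new_w = int(abs(w * np.cos(r)) + abs(h * np.sin(r)))
    new_h = int(abs(w * np.sin(r)) + abs(h * np.cos(r)))

    # 2x3 rotation matrix about the old center, extended to 3x3
    rot3 = np.vstack([cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0), [0, 0, 1]])

    # translation that centers the result inside the new bounding box
    trans3 = np.array([[1.0, 0.0, (new_w - w) / 2.0],
                       [0.0, 1.0, (new_h - h) / 2.0],
                       [0.0, 0.0, 1.0]])

    # rotate first, then translate; warpAffine only needs the first two rows
    affine = np.dot(trans3, rot3)[:2]
    return cv2.warpAffine(src, affine, (new_w, new_h), flags=cv2.INTER_LINEAR)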

Edit: I created a gist, since I need this functionality often in different projects. There is also a Python version of it: https://gist.github.com/BloodyD/97917b79beb332a65758

Answered 2015-04-16T09:32:08.533