The reason this doesn't work is that camera manipulation in the Maya viewport doesn't use a trackball interface. What you want is Maya's tumble command. The best resource I've found for explaining it is this document from Professor Orr's Computer Graphics class.
Moving the mouse left and right corresponds to the azimuth, and specifies a rotation around the world-space Y axis. Moving the mouse up and down corresponds to the elevation, and specifies a rotation around the view-space X axis. The goal is to generate a new world-to-view matrix, then extract the new camera orientation and eye position from that matrix, based on however you have your camera parameterized.
Start with the current world-to-view matrix. Next, we need to define the pivot point in world space. Any pivot point will work to begin with; using the world origin is probably the simplest.
Recall that pure rotation matrices generate rotations centered on the origin. This means that to rotate around an arbitrary pivot point, you first translate to the origin, perform the rotation, and then translate back. Remember also that transformation composition happens from right to left, so the negative translation to get to the origin goes on the far right:
translate(pivotPosition) * rotate(angleX, angleY, angleZ) * translate(-pivotPosition)
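As a quick sanity check, here's a small numpy sketch (purely illustrative, separate from the implementation further down; the make_translate/make_rotate_y helpers are just local stand-ins) showing that this sandwich leaves the pivot point itself fixed, which is exactly what a rotation centered on the pivot should do:

import numpy as np

def make_translate(v):
    'Build a 4x4 translation matrix (column-vector convention).'
    m = np.eye(4)
    m[:3, 3] = v
    return m

def make_rotate_y(a):
    'Build a 4x4 rotation of `a` radians around the Y axis.'
    c, s = np.cos(a), np.sin(a)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

pivot = np.array([1.0, 2.0, 3.0])

# translate(pivotPosition) * rotate(...) * translate(-pivotPosition)
pivoted = make_translate(pivot) @ make_rotate_y(np.radians(30)) @ make_translate(-pivot)

# The pivot maps back onto itself, so the rotation is centered on it.
print(pivoted @ np.append(pivot, 1.0))   # -> [1. 2. 3. 1.]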
We can use this to calculate the azimuth rotation component, which is a rotation around the world Y axis:
azimuthRotation = translate(pivotPosition) * rotateY(angleY) * translate(-pivotPosition)
We have to do a little extra work for the elevation rotation component, because it happens in view space, around the view-space X axis:
elevationRotation = translate(worldToViewMatrix * pivotPosition) * rotateX(angleX) * translate(worldToViewMatrix * -pivotPosition)
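The same sketch style applies to the elevation term; the only difference is that the pivot is pushed through the current worldToView matrix first, so the rotation happens around the view-space X axis (again just an illustration, with an identity matrix standing in for the camera's view matrix):

import numpy as np

def make_translate(v):
    'Build a 4x4 translation matrix (column-vector convention).'
    m = np.eye(4)
    m[:3, 3] = v[:3]
    return m

def make_rotate_x(a):
    'Build a 4x4 rotation of `a` radians around the X axis.'
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]])

world_to_view = np.eye(4)                     # stand-in for the camera's current view matrix
pivot_world = np.array([0.0, 1.0, 5.0, 1.0])  # pivot point in world space (w = 1)

# worldToViewMatrix * pivotPosition: the pivot expressed in view space
pivot_view = world_to_view @ pivot_world

elevation = (make_translate(pivot_view)
             @ make_rotate_x(np.radians(15))
             @ make_translate(-pivot_view))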
Then we can get the new view matrix with:
newWorldToViewMatrix = elevationRotation * worldToViewMatrix * azimuthRotation
Now that we have the new worldToView matrix, we're left with extracting the new world-space position and orientation from it. For that we need the viewToWorld matrix, which is the inverse of the worldToView matrix:
newOrientation = transpose(mat3(newWorldToViewMatrix))
newPosition = -((newOrientation * newWorldToViewMatrix).column(3))
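For example, here's a self-contained numpy version of that extraction (illustrative only; it builds a worldToView matrix from a known orientation and eye position, then pulls them back out with the two formulas above):

import numpy as np

eye = np.array([5.0, 2.0, -3.0])              # known world-space eye position
a = np.radians(40)                            # known orientation: a 40 degree yaw
R = np.array([[ np.cos(a), 0.0, np.sin(a)],   # view-to-world (world-space) rotation
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])

# worldToView = transpose(R) * translate(-eye)
world_to_view = np.eye(4)
world_to_view[:3, :3] = R.T
world_to_view[:3, 3] = R.T @ -eye

# newOrientation = transpose(mat3(newWorldToViewMatrix))
orientation = world_to_view[:3, :3].T

# newPosition = -(newOrientation * translation column of newWorldToViewMatrix)
position = -(orientation @ world_to_view[:3, 3])

print(np.allclose(orientation, R), np.allclose(position, eye))   # True True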
At that point we have the components separated out. If your camera is parameterized so that you only store a quaternion for the orientation, you just need to do a rotation matrix -> quaternion conversion. Maya, of course, will convert to Euler angles for display in the channel box, and that depends on the camera's rotation order (note that the tumble math doesn't change when the rotation order changes, only the way the rotation matrix -> Euler angles conversion is done).
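If your camera really is quaternion-based, a standard rotation matrix -> quaternion conversion is all you need. A generic sketch (not Maya-specific; `quat_from_matrix` is just an illustrative name, and it expects the 3x3 rotation part):

import numpy as np

def quat_from_matrix(m):
    'Convert a 3x3 rotation matrix to a (w, x, y, z) quaternion.'
    t = np.trace(m)
    if t > 0.0:
        s = np.sqrt(t + 1.0) * 2.0
        w = 0.25 * s
        x = (m[2, 1] - m[1, 2]) / s
        y = (m[0, 2] - m[2, 0]) / s
        z = (m[1, 0] - m[0, 1]) / s
    elif m[0, 0] > m[1, 1] and m[0, 0] > m[2, 2]:
        s = np.sqrt(1.0 + m[0, 0] - m[1, 1] - m[2, 2]) * 2.0
        w = (m[2, 1] - m[1, 2]) / s
        x = 0.25 * s
        y = (m[0, 1] + m[1, 0]) / s
        z = (m[0, 2] + m[2, 0]) / s
    elif m[1, 1] > m[2, 2]:
        s = np.sqrt(1.0 + m[1, 1] - m[0, 0] - m[2, 2]) * 2.0
        w = (m[0, 2] - m[2, 0]) / s
        x = (m[0, 1] + m[1, 0]) / s
        y = 0.25 * s
        z = (m[1, 2] + m[2, 1]) / s
    else:
        s = np.sqrt(1.0 + m[2, 2] - m[0, 0] - m[1, 1]) * 2.0
        w = (m[1, 0] - m[0, 1]) / s
        x = (m[0, 2] + m[2, 0]) / s
        y = (m[1, 2] + m[2, 1]) / s
        z = 0.25 * s
    return np.array([w, x, y, z])

print(quat_from_matrix(np.eye(3)))   # identity rotation -> [1. 0. 0. 0.]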
Here's a sample implementation in Python:
#!/usr/bin/env python
import numpy as np
from math import atan2, cos, radians, sin, sqrt

def translate(amount):
    'Make a translation matrix, to move by `amount`'
    t = np.matrix(np.eye(4))
    t[3] = amount.T
    t[3, 3] = 1
    return t.T

def rotateX(amount):
    'Make a rotation matrix, that rotates around the X axis by `amount` rads'
    c = cos(amount)
    s = sin(amount)
    return np.matrix([
        [1, 0, 0, 0],
        [0, c,-s, 0],
        [0, s, c, 0],
        [0, 0, 0, 1],
    ])

def rotateY(amount):
    'Make a rotation matrix, that rotates around the Y axis by `amount` rads'
    c = cos(amount)
    s = sin(amount)
    return np.matrix([
        [c, 0, s, 0],
        [0, 1, 0, 0],
        [-s, 0, c, 0],
        [0, 0, 0, 1],
    ])

def rotateZ(amount):
    'Make a rotation matrix, that rotates around the Z axis by `amount` rads'
    c = cos(amount)
    s = sin(amount)
    return np.matrix([
        [c,-s, 0, 0],
        [s, c, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
    ])

def rotate(x, y, z, pivot):
    'Make an XYZ rotation matrix, with `pivot` as the center of the rotation'
    m = rotateX(x) * rotateY(y) * rotateZ(z)
    I = np.matrix(np.eye(4))
    t = (I - m) * pivot
    m[0, 3] = t[0, 0]
    m[1, 3] = t[1, 0]
    m[2, 3] = t[2, 0]
    return m

def eulerAnglesZYX(matrix):
    'Extract the Euler angles from a ZYX rotation matrix'
    x = atan2(-matrix[1, 2], matrix[2, 2])
    cy = sqrt(1 - matrix[0, 2]**2)
    y = atan2(matrix[0, 2], cy)
    sx = sin(x)
    cx = cos(x)
    sz = cx * matrix[1, 0] + sx * matrix[2, 0]
    cz = cx * matrix[1, 1] + sx * matrix[2, 1]
    z = atan2(sz, cz)
    return np.array((x, y, z))

def eulerAnglesXYZ(matrix):
    'Extract the Euler angles from an XYZ rotation matrix'
    z = atan2(matrix[1, 0], matrix[0, 0])
    cy = sqrt(1 - matrix[2, 0]**2)
    y = atan2(-matrix[2, 0], cy)
    sz = sin(z)
    cz = cos(z)
    sx = sz * matrix[0, 2] - cz * matrix[1, 2]
    cx = cz * matrix[1, 1] - sz * matrix[0, 1]
    x = atan2(sx, cx)
    return np.array((x, y, z))

class Camera(object):

    def __init__(self, worldPos, rx, ry, rz, coi):
        # Initialize the camera orientation. In this case the original
        # orientation is built from XYZ Euler angles. orientation is the top
        # 3x3 XYZ rotation matrix for the view-to-world matrix, and can more
        # easily be thought of as the world space orientation.
        self.orientation = rotateZ(rz) * rotateY(ry) * rotateX(rx)

        # position is a point in world space for the camera.
        self.position = worldPos

        # Construct the world-to-view matrix, which is the inverse of the
        # view-to-world matrix.
        self.view = self.orientation.T * translate(-self.position)

        # coi is the "center of interest". It defines a point that is coi
        # units in front of the camera, which is the pivot for the tumble
        # operation.
        self.coi = coi

    def tumble(self, azimuth, elevation):
        '''Tumble the camera around the center of interest.

        Azimuth is the number of radians to rotate around the world-space Y axis.
        Elevation is the number of radians to rotate around the view-space X axis.
        '''
        # Find the world space pivot point. This is the view position in world
        # space minus the view direction vector scaled by the center of
        # interest distance.
        pivotPos = self.position - (self.coi * self.orientation.T[2]).T

        # Construct the azimuth and elevation transformation matrices
        azimuthMatrix = rotate(0, -azimuth, 0, pivotPos)
        elevationMatrix = rotate(elevation, 0, 0, self.view * pivotPos)

        # Get the new view matrix
        self.view = elevationMatrix * self.view * azimuthMatrix

        # Extract the orientation from the new view matrix. The top 3x3 of the
        # view matrix is the transpose of the world-space orientation; clear
        # the bottom row afterwards so orientation stays a pure rotation.
        self.orientation = np.matrix(self.view).T
        self.orientation[3] = [0, 0, 0, 1]

        # Now extract the new view position
        negEye = self.orientation * self.view
        self.position = -(negEye.T[3]).T
        self.position[3, 0] = 1

np.set_printoptions(precision=3)

pos = np.matrix([[5.321, 5.866, 4.383, 1]]).T
orientation = radians(-60), radians(40), 0
coi = 1
camera = Camera(pos, *orientation, coi=coi)

print('Initial attributes:')
print(np.round(np.degrees(eulerAnglesXYZ(camera.orientation)), 3))
print(np.round(camera.position, 3))

print('Attributes after tumbling:')
camera.tumble(azimuth=radians(-40), elevation=radians(-60))
print(np.round(np.degrees(eulerAnglesXYZ(camera.orientation)), 3))
print(np.round(camera.position, 3))