Hi, I have a camera projection matrix and I want to get the corresponding camera transform matrix.
The projection matrix (P) is 3*4 and maps the homogeneous coordinates of a 3D point to homogeneous image-plane coordinates.
Now in my scenario I can only specify the camera transform matrix, which describes how the camera is positioned. So how do I get the camera transform matrix from the camera projection matrix?
EDIT
The projection matrix is read from a dataset, and I am trying to render that data. However, the renderer takes a camera transform matrix of size 3*4, while the dataset provides a 3*4 projection matrix. The difference between the two is that a transform matrix is congruent while a projection matrix is not. Passing the projection matrix directly as the transform produces a "non-congruent transform on camera" error.
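One way to see why the renderer rejects the matrix is to test congruence directly: a congruent (rigid) 3*4 transform has an orthonormal 3x3 rotation block with determinant +1, while a projection matrix's left 3x3 block is scaled by the intrinsics and generally is not. A minimal sketch of such a check, with placeholder matrix values:

```python
import numpy as np

def is_congruent(m, tol=1e-6):
    """True if the left 3x3 block of a 3*4 matrix is a pure rotation
    (orthonormal, determinant +1), i.e. the matrix is a rigid transform."""
    r = np.asarray(m, dtype=float)[:, :3]
    return (np.allclose(r @ r.T, np.eye(3), atol=tol)
            and np.isclose(np.linalg.det(r), 1.0, atol=tol))

# A rigid (congruent) transform: rotation about z plus a translation.
c, s = np.cos(0.3), np.sin(0.3)
rigid = np.array([[c, -s, 0, 1.0],
                  [s,  c, 0, 2.0],
                  [0,  0, 1, 3.0]])

# A typical projection matrix K @ [R | t]: the intrinsics K (placeholder
# focal lengths here) scale the rows, so the rotation block is no longer
# orthonormal and the congruence test fails.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]])
proj = K @ rigid

print(is_congruent(rigid))  # True
print(is_congruent(proj))   # False
```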
EDIT
OK, so the rendering library has a fixed projection that you can't change but you still want to use your own. A standard perspective transform is indeed non-congruent due to a divide by zero (the matrix itself is congruent though afaik, are you sure the dataset doesn't provide an invalid projection matrix?) but since all points are clipped between the near and far planes there generally isn't an issue.
If you can extract the actual projection matrix used by the rendering engine, perhaps you can calculate the matrix that when multiplied with it produces the desired projection matrix.
csVert = libProjectionMat * M * objectMat * osVert

where M is the camera transform provided to the library. What you want instead is

csVert = myProjectionMat * myCameraMat * objectMat * osVert

Solving for M:

libProjectionMat * M = myProjectionMat * myCameraMat
libProjectionMatInverse * libProjectionMat * M = libProjectionMatInverse * myProjectionMat * myCameraMat
M = libProjectionMatInverse * myProjectionMat * myCameraMat

Then pass M as your camera transform. If the library still reports M as non-congruent, I don't think there's much you can do. It sounds like a pretty inflexible library.
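The derivation above can be sketched with numpy. The matrices here are hypothetical placeholders (simple 4x4 perspective and camera matrices, since a 3*4 matrix has no inverse; in practice libProjectionMat would come from the rendering engine and myProjectionMat / myCameraMat from the dataset):

```python
import numpy as np

def perspective(f, near, far):
    # A standard OpenGL-style perspective matrix; a placeholder stand-in
    # for whatever projection the library or the dataset actually uses.
    a = (far + near) / (near - far)
    b = 2 * far * near / (near - far)
    return np.array([[f, 0, 0, 0],
                     [0, f, 0, 0],
                     [0, 0, a, b],
                     [0, 0, -1, 0]], dtype=float)

libProjectionMat = perspective(1.0, 0.1, 100.0)   # fixed by the renderer
myProjectionMat  = perspective(1.5, 0.1, 100.0)   # from the dataset
myCameraMat      = np.eye(4)
myCameraMat[:3, 3] = [0.0, 0.0, -5.0]             # camera 5 units back

# M = libProjectionMatInverse * myProjectionMat * myCameraMat
M = np.linalg.inv(libProjectionMat) @ myProjectionMat @ myCameraMat

# Check: libProjectionMat * M reproduces the desired combined matrix.
print(np.allclose(libProjectionMat @ M, myProjectionMat @ myCameraMat))  # True
```

Note that M computed this way is generally not a rigid transform, which is why the library may still reject it as non-congruent.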
It's common to work with 4x4 matrices all the way through, is there any reason for 3x4? The combined matrix is projection * camera (or projection * view). There might also be an object/model transform, in which case mvp = projection * view * model (all standard matrix multiplies). This matrix takes a vertex in object space all the way to 2D clip space (don't forget to divide by w to normalize the homogeneous coordinates, but only after any interpolation has occurred).
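The mvp pipeline and the divide by w can be illustrated with numpy (the perspective matrix and vertex values are placeholders):

```python
import numpy as np

def perspective(f, near, far):
    # Placeholder OpenGL-style projection; any invertible 4x4 projection works.
    a = (far + near) / (near - far)
    b = 2 * far * near / (near - far)
    return np.array([[f, 0, 0, 0],
                     [0, f, 0, 0],
                     [0, 0, a, b],
                     [0, 0, -1, 0]], dtype=float)

projection = perspective(1.0, 0.1, 100.0)
view = np.eye(4); view[:3, 3] = [0, 0, -5]   # camera 5 units back
model = np.eye(4)                            # object at the origin

# mvp = projection * view * model takes object space to clip space.
mvp = projection @ view @ model

v_obj = np.array([1.0, 2.0, 0.0, 1.0])  # object-space vertex (w = 1)
clip = mvp @ v_obj                      # homogeneous clip-space coordinates
ndc = clip[:3] / clip[3]                # perspective divide by w
print(ndc[:2])                          # normalized x, y on the image plane
```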
Your question is a little unclear.
If you can only specify a single matrix in your scenario, does passing in mvp or projection * view solve your problem?
If you have the final matrix and want to extract the view * model component, you could multiply mvp by the inverse projection: mv = projectionInv * mvp. This is more expensive and of course less accurate than just keeping all matrices and/or combinations separate.
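The recovery step mv = projectionInv * mvp can be sketched with numpy, assuming 4x4 matrices and a placeholder projection:

```python
import numpy as np

def perspective(f, near, far):
    # Placeholder OpenGL-style projection; any invertible 4x4 projection works.
    a = (far + near) / (near - far)
    b = 2 * far * near / (near - far)
    return np.array([[f, 0, 0, 0],
                     [0, f, 0, 0],
                     [0, 0, a, b],
                     [0, 0, -1, 0]], dtype=float)

projection = perspective(1.0, 0.1, 100.0)
view = np.eye(4);  view[:3, 3] = [0, 0, -3]
model = np.eye(4); model[:3, 3] = [1, 2, 0]

mvp = projection @ view @ model

# Multiplying by the inverse projection recovers view * model,
# up to floating-point error.
mv = np.linalg.inv(projection) @ mvp
print(np.allclose(mv, view @ model))  # True
```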