Finally figured it out, so I thought I'd leave this here.
```python
import numpy as np
from scipy.ndimage import map_coordinates

# transposed_frame is the 3D image to be transformed (shape (632, 352, 35))
# Meshgrid of the voxel coordinates
x_grid, y_grid, z_grid = np.meshgrid(range(transposed_frame.shape[0]),
                                     range(transposed_frame.shape[1]),
                                     range(transposed_frame.shape[2]),
                                     indexing='ij')
# Inverse transformation to apply to the image (shape (3, 632, 352, 35)).
# The first dimension runs over the components of the vector field (x, y, z),
# so transform[0, i, j, k] is the x coordinate of the field at point [i, j, k].
transform = vectorized_interp_field(x_grid, y_grid, z_grid)
# Transforming the image with map_coordinates is then as simple as
inverse_transformed = map_coordinates(transposed_frame, transform)
```
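Since `vectorized_interp_field` is not shown above, here is a minimal self-contained sketch of the same pattern, with a hypothetical inverse field (a uniform one-voxel shift along x) standing in for it, and a small volume in place of the (632, 352, 35) image:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Small stand-in volume (the original is (632, 352, 35))
transposed_frame = np.random.rand(10, 8, 6)

x_grid, y_grid, z_grid = np.meshgrid(
    range(transposed_frame.shape[0]),
    range(transposed_frame.shape[1]),
    range(transposed_frame.shape[2]),
    indexing='ij',
)

# Hypothetical inverse field: shift every voxel by +1 along x.
# This stands in for vectorized_interp_field; shape is (3, 10, 8, 6).
transform = np.stack([x_grid + 1.0,
                      np.asarray(y_grid, dtype=float),
                      np.asarray(z_grid, dtype=float)])

inverse_transformed = map_coordinates(transposed_frame, transform, order=1)

# Each output voxel samples the input one voxel further along x
# (the last x-slice falls outside the volume and is filled with 0 by default):
assert np.allclose(inverse_transformed[:-1], transposed_frame[1:])
```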
The part of `map_coordinates` I did not understand was exactly what shape the high-dimensional mapping array should have. It seems to work as follows:
```python
B = map_coordinates(A, mapping)
# B[i, j, k] = A[mapping[0, i, j, k], mapping[1, i, j, k], mapping[2, i, j, k]]
```
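This indexing rule is easy to check with an identity mapping, under which the output must equal the input:

```python
import numpy as np
from scipy.ndimage import map_coordinates

A = np.random.rand(4, 5, 6)

# Identity mapping: mapping[0] holds the x indices, mapping[1] the y, mapping[2] the z.
# Shape is (3, 4, 5, 6): one coordinate array per axis, stacked along the first dimension.
mapping = np.array(np.meshgrid(range(4), range(5), range(6), indexing='ij'),
                   dtype=float)

B = map_coordinates(A, mapping, order=1)

# With the identity mapping, B[i, j, k] = A[i, j, k] everywhere.
assert np.allclose(B, A)
```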