I'm trying to generate real-world coordinates from my MS Kinect V2.
I've managed to piece together a pyqt + opengl scatter plot that displays the depth data from the Kinect via pylibfreenect2.
I immediately noticed that the depth data is not the same as point cloud data: note how the ceiling of my room is heavily distorted (what should be a flat ceiling starts to look like a hockey-stick graph). I assume this is because a raw depth frame stores per-pixel distances that still need to be projected through the camera intrinsics.
After some reading and digging through the source files, I managed to find a function that looked very promising:
getPointXYZ - Construct a 3-D point in a point cloud.
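As far as I can tell from the wrapper, it's called once per pixel of the undistorted depth frame, something like this (my own minimal sketch, using the registration and undistorted objects from the full script below):

x, y, z = registration.getPointXYZ(undistorted, row, col)  # meters; NaN where depth is invalid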
Since it only works on one pixel at a time, I wrote a simple nested for loop. In the code below you should see the following lines:
out = np.zeros((d.shape[0]*d.shape[1], 3)) #shape = (217088, 3)
for row in range(d.shape[0]):
    for col in range(d.shape[1]):
        world = registration.getPointXYZ(undistorted, row, col) #convert depth pixel to real-world coordinate
        out[row + col] = world
I'm not sure what's going on there. It looks more like a straight line, and sometimes it resembles a rectangle, and it's very flat (though it sits at an arbitrary angle in all three dimensions). When I move my hand in front of the sensor I can see some points moving, but no recognizable shapes. It seems all the points are being crammed together.
Below is a Python script that will display a pyQt application window containing an openGL scatter plot. Frames are received from the Kinect sensor through pylibfreenect2, and the scatter plot points are generated by iterating over each row and column of the depth data and sending each pixel through getPointXYZ (this is really slow and it doesn't work...).
# coding: utf-8
# An example using startStreams
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph.opengl as gl
import numpy as np
import cv2
import sys
from pylibfreenect2 import Freenect2, SyncMultiFrameListener
from pylibfreenect2 import FrameType, Registration, Frame, libfreenect2
fn = Freenect2()
num_devices = fn.enumerateDevices()
if num_devices == 0:
print("No device connected!")
sys.exit(1)
serial = fn.getDeviceSerialNumber(0)
device = fn.openDevice(serial)
types = 0
types |= FrameType.Color
types |= (FrameType.Ir | FrameType.Depth)
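# subscribe to color, IR and depth streams (FrameType bit flags OR'd together)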
listener = SyncMultiFrameListener(types)
# Register listeners
device.setColorFrameListener(listener)
device.setIrAndDepthFrameListener(listener)
device.start()
# NOTE: must be called after device.start()
registration = Registration(device.getIrCameraParams(),
                            device.getColorCameraParams())
undistorted = Frame(512, 424, 4)
registered = Frame(512, 424, 4)
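# 512x424 working frames (4 bytes per pixel) that registration.apply() fills in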
#QT app
app = QtGui.QApplication([])
w = gl.GLViewWidget()
w.show()
g = gl.GLGridItem()
w.addItem(g)
#initialize some points data
pos = np.zeros((1,3))
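# a single placeholder point at the origin; update() replaces it each frame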
sp2 = gl.GLScatterPlotItem(pos=pos)
w.addItem(sp2)
def update():
    frames = listener.waitForNewFrame()
    ir = frames["ir"]
    color = frames["color"]
    depth = frames["depth"]
    d = depth.asarray()
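    # d is the raw depth frame as a 424x512 float32 array (values in mm)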
    registration.apply(color, depth, undistorted, registered)
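    # apply() fills in `undistorted` (lens-corrected depth) and `registered`
    # (color mapped onto the depth image); getPointXYZ reads from `undistorted`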
    # There are three alternative methods below for generating the points data
    # (the first two are commented out here; the third is active).
    # First generates points from the depth data only.
    # Second generates colored points plus pointcloud xyz coordinates.
    # Third is simply the pointcloud xyz coordinates without the color mapping.
    """
    #Format depth data to be displayed
    m, n = d.shape
    R, C = np.mgrid[:m, :n]
    out = np.column_stack((d.ravel() / 4500, C.ravel()/m, (-R.ravel()/n)+1))
    """
"""
#Format undistorted and regisered data to real-world coordinates with mapped colors (dont forget color=out_col in setData)
out = np.zeros((d.shape[0]*d.shape[1], 3)) #shape = (217088, 3)
out_col = np.zeros((d.shape[0]*d.shape[1], 3)) #shape = (217088, 3)
for row in range(d.shape[0]):
for col in range(d.shape[1]):
world = registration.getPointXYZRGB(undistorted, registered, row, col)
out[row + col] = world[0:3]
out_col[row + col] = np.array(world[3:6]) / 255
"""
    # Format undistorted data to real-world coordinates
    out = np.zeros((d.shape[0]*d.shape[1], 3)) #shape = (217088, 3)
    for row in range(d.shape[0]):
        for col in range(d.shape[1]):
            world = registration.getPointXYZ(undistorted, row, col)
            out[row + col] = world
    sp2.setData(pos=out, size=2)
    listener.release(frames)
t = QtCore.QTimer()
t.timeout.connect(update)
t.start(50)
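# update() fires every 50 ms, and each call makes 217,088 getPointXYZ calls in
# pure Python -- which is why the plot is so slow to refresh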
## Start Qt event loop unless running in interactive mode.
if __name__ == '__main__':
    import sys
    if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()
device.stop()
device.close()
sys.exit(0)
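For what it's worth, to avoid the 217,088 per-pixel Python calls I've also been considering back-projecting the whole frame at once with the IR camera intrinsics instead of getPointXYZ. This is an untested sketch that would run inside update(); it assumes IrCameraParams exposes fx, fy, cx, cy and that the depth values are in millimeters:

params = device.getIrCameraParams()
rows, cols = np.mgrid[0:424, 0:512]       # pixel grid matching the 424x512 depth array
z = d / 1000.0                            # millimeters -> meters
x = (cols - params.cx) * z / params.fx    # standard pinhole back-projection
y = (rows - params.cy) * z / params.fy
out = np.column_stack((x.ravel(), y.ravel(), z.ravel()))

I haven't worked out the axis/sign flips the GL view would need, though, and it still wouldn't explain why the getPointXYZ loop above misbehaves.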
I'm not sure what I should do next to get actual point cloud coordinate data.
Does anyone have any suggestions as to what I'm doing wrong?
My OS is Ubuntu 16.04 with Python 3.5.
Thanks.