I have started looking at PCL and its support for the Kinect SDK, and I am stuck on a fairly basic problem. I calibrated the RGB and IR cameras using RGBDemo, with a checkerboard pattern as the control images, and obtained the distortion coefficients and offsets. What I cannot work out is how to apply those coefficients to produce a calibrated point cloud.
Specifically, I am trying to figure out how to get hold of the input depth image so that I can apply the calibration model to it. I have found the openni_wrapper::DepthImage class, whose getDepthMetaData().Data() function provides the depth in millimeters.
Is this the raw depth? If not, is there another PCL function that returns the raw depth image, without any prior calibration applied?
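For reference, this is roughly how I am reading the depth frames. It is only a sketch (it assumes PCL 1.x with the OpenNI grabber and an attached Kinect, and I may be misusing the API), but it shows where I would hook in the calibration:

```cpp
#include <pcl/io/openni_grabber.h>
#include <boost/function.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>

// Called once per depth frame by the grabber.
void depthCallback (const boost::shared_ptr<openni_wrapper::DepthImage>& depth)
{
  // Data() returns the depth buffer; values are in millimeters.
  // It is unclear to me whether factory calibration is already applied here.
  const XnDepthPixel* buf = depth->getDepthMetaData ().Data ();

  unsigned w = depth->getWidth ();
  unsigned h = depth->getHeight ();
  std::cout << "center pixel depth: " << buf[(h / 2) * w + (w / 2)]
            << " mm" << std::endl;
  // This is where I would apply my distortion coefficients and offsets.
}

int main ()
{
  pcl::OpenNIGrabber grabber;
  boost::function<void (const boost::shared_ptr<openni_wrapper::DepthImage>&)> f =
      &depthCallback;
  grabber.registerCallback (f);
  grabber.start ();
  // ... let it run, then:
  grabber.stop ();
  return 0;
}
```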