The Kinect calibration documentation at http://nicolas.burrus.name/index.php/Research/KinectRgbDemoV4?from=Research.KinectRgbDemo states that the input to the calibration is depth images in METERS. Does that mean it assumes the depth computed by the Kinect is already accurate? If so, what is the point of depth calibration, and how are the depth images actually used during calibration? My understanding is that the goal is a better conversion from disparity to depth, obtained by estimating the c0 and c1 parameters of that conversion.
Thank you for taking the time to answer my question.