http://openkinect.org/wiki/Imaging_Information
Seems like this is step one before you can really do anything else.
thanks
here's the most relevant commit:
https://github.com/OpenKinect/libfreenect/commit/baef89f81be00d1e386a5f84b96a64530b47a50e
in short, it builds two lookup tables.
one does translation/scaling/undistortion (image space modifications).
this only gets you so far.
the second uses the depth values to account for the disparity between
the color and ir cameras.
this technique uses numbers that the kinect reports, which originally
come from in-factory calibration and vary from kinect to kinect. our
interpretation of these numbers is based on reading the openni
registration implementation.
kyle
#include <libfreenect-registration.h>

// raw_to_mm_shift is generally useful when you need to convert
// depth values from raw -> mm
depth_in_mm = reg.raw_to_mm_shift[raw_depth_at_index];

// registration_table gives the remapped image-space position;
// depth_to_rgb_shift corrects for the camera disparity at this depth
y = reg.registration_table[index][1];
x = (reg.registration_table[index][0] + reg.depth_to_rgb_shift[depth_in_mm]) >> 8;
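fleshed out a little, applying both tables over a whole frame might look
like this. just my sketch, not anything from the library docs - the loop,
the bounds check, and the name register_depth are mine; it assumes a
640x480 raw 11-bit depth frame and a freenect_registration struct like the
one the header above defines:

#include <stdint.h>
#include <string.h>
#include <libfreenect-registration.h>

#define W 640
#define H 480

/* map a raw 11-bit depth frame onto the rgb image, in mm */
void register_depth(freenect_registration reg,
                    const uint16_t *raw, uint16_t *out_mm)
{
    memset(out_mm, 0, W * H * sizeof *out_mm);   /* 0 mm = no reading */
    for (int index = 0; index < W * H; index++) {
        uint16_t depth_in_mm = reg.raw_to_mm_shift[raw[index]];
        if (depth_in_mm == 0)
            continue;                            /* invisible pixel, skip */
        /* table 1: translation/scaling/undistortion in image space */
        int y = reg.registration_table[index][1];
        /* table 2: depth-dependent shift for the color/ir disparity */
        int x = (reg.registration_table[index][0]
                 + reg.depth_to_rgb_shift[depth_in_mm]) >> 8;
        if (x >= 0 && x < W && y >= 0 && y < H)
            out_mm[y * W + x] = depth_in_mm;
    }
}

as far as i can tell the >> 8 is there because the tables keep x in
1/256-pixel fixed point, so the shift just drops the fraction.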
Thanks much for the description. I will git clone a copy of the unstable version and try it out tonight... I am hoping there are some useful function calls that do something like:

int depth = freenect_depth_from_pixel(x, y);
A related question... the depth map is not as high resolution as the RGB map, correct? So one depth map entry may map to more than one RGB pixel (it would be 4 pixels if the depth map is half the width and height). I am wondering if there is some interpolation algorithm that could look at neighboring entries to interpolate (sort of anti-alias) the depth values. Just a thought.
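Something like this is what I had in mind - just a sketch of the idea, none of these names come from libfreenect, and it assumes a millimeter depth map where 0 means "no reading":

#include <stdint.h>

/* fill a hole in the depth map from its valid 4-neighbors */
uint16_t interp_depth(const uint16_t *mm, int x, int y, int w, int h)
{
    if (mm[y * w + x] != 0)
        return mm[y * w + x];           /* already have a reading */
    int sum = 0, n = 0;
    const int dx[4] = { -1, 1, 0, 0 };
    const int dy[4] = { 0, 0, -1, 1 };
    for (int i = 0; i < 4; i++) {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx < 0 || nx >= w || ny < 0 || ny >= h)
            continue;                   /* off the edge of the map */
        if (mm[ny * w + nx] != 0) {
            sum += mm[ny * w + nx];
            n++;
        }
    }
    return n ? (uint16_t)(sum / n) : 0; /* still a hole if no valid neighbor */
}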
It is a standard calibration, based on the data stored on each Kinect at the factory.
In part. Note that the source depth image from which we derive the
aligned depth image is not perfect - your fingertips may not appear
the same in the depth image as they do in the IR image. If the Kinect
can't figure out which IR dot in the dot pattern is on your
fingertips, or if no dots hit your fingertips, then as far as the
depth camera is concerned, you're invisible. This is also why some
surfaces like most matte computer screens appear to produce no depth
data - the IR camera fails to see a reflection of the dot pattern.
Since the Kinect can't reliably see everything, it makes conservative
guesses that tend to make things look like they're a few pixels
smaller around the edges than they actually are.
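So before interpolating or measuring anything, it's worth masking out the
pixels the Kinect gave up on. A rough sketch (the name is mine, assuming
the raw 11-bit depth stream, where 2047 marks "no reading"):

#include <stdint.h>
#include <stdbool.h>

/* true if the depth camera actually produced a reading at (x, y) */
static bool has_depth(const uint16_t *raw, int x, int y, int width)
{
    return raw[y * width + x] < 2047;   /* 2047 = no IR dot match here */
}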
How far off are we talking, here? Given a screenshot, I can probably
tell you whether it looks like normal variation or an error out of the
ordinary.
-Drew