My question is: is there anything we can do to modify the hardware so it can scan smaller objects at a closer distance?
Now I mounted +2.5 glasses on my device: http://imageshack.us/f/846/kinectzoom.jpg/
With the glasses I could now capture the object at a FIRST distance of 35cm and pick up some more detail (of course the scale and depth decoding can't be right any more, so the bigger one is the zoomed result):
http://imageshack.us/f/402/zoomtest.jpg/
Tracking also worked up to a certain point, and by concentrating on the floor I could capture the floor as well:
http://imageshack.us/f/513/zoomtest2.jpg/
A rough comparison of the size of the resulting shoe shows that the zoomed result is about 1.2 times bigger, which means the resolution should be about 1.2 times better. The IR beams are condensed onto a 0.8x0.8x0.8 volume:
http://imageshack.us/f/10/zoomdimens.jpg/
http://imageshack.us/f/37/zoomtest3.jpg/
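As a quick back-of-the-envelope check (my numbers, not from the scans): just moving from 45cm to 35cm should magnify things by 45/35 ≈ 1.3x, which roughly matches the ~1.2x measured from the shoe. A small Python sketch, assuming the commonly quoted ~57° horizontal depth FOV and the 640px depth image width of the Kinect:

```python
# Rough estimate of the lateral footprint of one depth pixel at both
# working distances. FOV and image width are assumed Kinect specs,
# not values from this thread.
import math

FOV_H_DEG = 57.0   # approximate horizontal field of view of the depth cam
WIDTH_PX = 640     # depth image width

def mm_per_pixel(distance_m):
    """Width of the scene covered by one pixel at the given distance."""
    view_width_m = 2.0 * distance_m * math.tan(math.radians(FOV_H_DEG / 2.0))
    return 1000.0 * view_width_m / WIDTH_PX

for d in (0.45, 0.35):
    print("%.0f cm: ~%.2f mm per pixel" % (d * 100, mm_per_pixel(d)))

print("expected magnification: %.2fx" % (0.45 / 0.35))  # ~1.29x
```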
OK, one can easily notice the resulting distortion caused by the added lens. I think that's why tracking is a bit harder: comparing the new, distorted depth data with the already reconstructed (somehow interpolated) data is a hard job for ReconstructMe, and when trying to get a 360° reconstruction, data from one side damages the result taken from another position.
I don't know if this might work to get rid of such distortions: assuming one adds the same lens to cam, sensor and emitter, could one obtain the added lens distortion by comparing a standard camera calibration with (a) and without (b) the lens on the Kinect RGB cam? Then a - b should be the added lens distortion. Maybe the depth image stream could be corrected with these parameters?
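I don't know the ReconstructMe internals, but the with/without comparison itself could be sketched with OpenCV roughly as below (the checkerboard size, folder names and the helper are made-up assumptions for illustration):

```python
# Calibrate the Kinect RGB cam twice from checkerboard shots: once with
# the extra lens mounted (a) and once without (b). Folder names and the
# 9x6 board are illustrative assumptions.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners of the printed checkerboard

def calibrate(image_glob):
    """Standard OpenCV chessboard calibration -> (camera matrix, distortion)."""
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)
    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist

K_a, dist_a = calibrate("rgb_with_lens/*.png")     # (a) lens mounted
K_b, dist_b = calibrate("rgb_without_lens/*.png")  # (b) bare camera
print("difference of distortion coefficients (a - b):", dist_a - dist_b)
```

One caveat: the distortion coefficients parameterize a nonlinear model, so the added lens isn't really just their difference; it is probably safer to calibrate once with the lens mounted and undistort with those parameters directly.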
On 01.03.2012 at 11:07, Steven wrote:
Never thought such a result would be possible with a completely miscalibrated device.
I was surprised to see that my Kinect works down to a distance of 45cm without any modifications: I scanned a shoe on the floor 45cm away from the Kinect, and it was captured completely and without clipping in the FIRST frame after hitting "p".
Please keep in mind that there is a big difference between resolution and accuracy. Increasing the resolution does not necessarily improve the accuracy. We thought about hardware changes almost a year ago, and indeed, such special lenses have already existed for several months. That lens was developed for small living rooms, so that people who planned to use their Kinect could be skeletally tracked at closer distances than the original Xbox software allows. Accuracy wasn't an issue there, and in that case the device is not used as a 3D scanning device.
Even when you can see more detail, this doesn't mean that you scan more accurately.
To evaluate this you need a more accurate scanner to compare the 3D surfaces against. We know that, depending on the object you scan and the distance, the absolute accuracy of the Kinect is about 2-3mm when the object is within 1.5m of the sensor. With the Artec scanner this goes down to 0.25mm, and for an ATOS scanner we are talking about 0.008mm = 8µm! The difference is, without any doubt, noticeable when you buy these.
Hehe, ready for industry ;)
Do you think a depth camera calibration can fix these issues? Since the projected pattern also gets distorted, I wonder if one needs to correct that as well.
Here's the catch: we don't use the RGB for anything other than display purposes yet.
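For what it's worth: if intrinsics K and distortion dist for the depth/IR camera were available (the IR image can be calibrated like any other camera once the emitter is covered and the scene is lit with an external IR source), correcting a depth frame could look roughly like this sketch. It is an assumption on my part, not something the current software does:

```python
# Undistort a 16-bit depth frame, given assumed intrinsics K and
# distortion dist of the depth/IR camera. Nearest-neighbor sampling
# avoids blending unrelated depth values across object edges.
import cv2

def undistort_depth(depth_u16, K, dist):
    h, w = depth_u16.shape
    map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (w, h), cv2.CV_32FC1)
    return cv2.remap(depth_u16, map1, map2, interpolation=cv2.INTER_NEAREST)
```

Note that this would only fix the lateral warping; the depth values themselves are still decoded from a distorted dot pattern, so some residual error would remain.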