In the last few days there has been quite some discussion about using
multiple Kinects. The main reason is that we could get rid of occlusions
and really use them for full real-time 3D reconstruction. A first
starting point would be a stereo setup.
Sorry for this long post, but I kept working on this like crazy in the
past two days so I have quite some stuff to say, hope it turns out
useful to you. Interesting stuff on the bottom ;)
When you think about how the depth sensor works, one issue comes to
mind immediately: a structured light pattern is projected in IR, while
the viewing frustum of the projector has a slight offset to that of an
IR camera, but they are axis aligned. From this alone it is possible to
reconstruct depth. However, if you use multiple Kinects, there could be
interference when the structured light is visible to the other Kinect,
or when the laser projector is directly visible without any reflection.
So it was common sense by now that multiple Kinects won't work.
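To make the offset-between-frustums idea concrete: the pattern dot's shift between projector and camera (the disparity) gives depth by simple triangulation. A minimal sketch; the baseline and focal-length numbers here are illustrative assumptions, not the Kinect's actual calibration values.

```python
def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
    """Depth of a point whose projected pattern dot appears shifted by
    `disparity_px` pixels between the projector and the IR camera.
    baseline_m and focal_px are assumed, illustrative values."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# With these assumed numbers, a dot shifted by 58 px sits at
# 0.075 * 580 / 58 = 0.75 m.
```

This is also why a second Kinect's pattern is a problem in principle: dots from a foreign projector have no valid disparity in this geometry.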
Some ideas to address this issue have been proposed, but not all of
them are practical. Some comments on those ideas:
* Time multiplexing with 1/30 s time slots. In a stereo setup this
means you would capture one frame on one Kinect while disabling the
laser projector on the other. First, keep in mind that all devices run
asynchronously. Depending on the exposure time of the IR camera, one
frame from one Kinect can "overlap" two frames of the other. In the
worst case of 1/30 s exposure time, we would need to wait for an entire
frame before starting to capture. This means that in a stereo setup we
would drop from the original 30 fps to 7.5 fps. Lame. In addition, we
don't know yet whether the device allows us to enable and disable the
IR projector frame by frame. If not, someone would need to do this by
altering the hardware and switching the diode separately with a
microcontroller (or the built-in motor controller, if you want to be
clever :)
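One way to arrive at the 7.5 fps figure is this back-of-the-envelope model, assuming two asynchronous Kinects at a 30 fps base rate, worst-case 1/30 s exposure, and one wasted "wait" frame per capture so a foreign projector frame never overlaps our exposure:

```python
def effective_fps(base_fps=30.0, n_devices=2, wait_frames_per_capture=1):
    """Per-device frame rate under naive time multiplexing.
    Each device gets every n-th slot, and each slot costs
    (1 + wait_frames_per_capture) frame periods in the worst case."""
    frames_per_capture = n_devices * (1 + wait_frames_per_capture)
    return base_fps / frames_per_capture

print(effective_fps())  # 30 / (2 * 2) = 7.5 fps per Kinect
```

The same model suggests it only gets worse with more devices: three Kinects would land around 5 fps each under these assumptions.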
* Frequency multiplexing: using different narrow-band IR lasers and
filters. This would definitely work if you get the optics right, though
it's quite complicated imho.
* Polarization. Won't work: a reflection off an arbitrary surface won't
preserve polarization well enough. The reason why e.g. polarization
projection screens work is that they use a special coating. That won't
be the case for human skin or other materials, though.
Because all these ideas have some sort of issue, I tried something
different. Normally one would use synchronized devices, very short
exposure times and time multiplexing, but that won't be possible with
the Kinect. My approach was to add separate shutters to both the camera
and the projector, and to do time multiplexing within a single frame of
the IR camera. This can be done with a high-speed shutter. For the IR
projector, the best approach would be to modulate the laser diode
accordingly. For the camera, some sort of mechanical or LCD shutter is
needed. Shutters would be added to both Kinects and of course have to
be synchronized with each other, while the IR cameras of the Kinects
keep running asynchronously. Of course, the exposure time of a single
frame is shorter then.
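The scheduling idea above can be sketched like this: within one 1/30 s IR-camera frame period, each Kinect's diode and camera shutter are open only during its own slot. Timings and the even slot split are illustrative assumptions; a real implementation would drive the diode or an LCD shutter from a microcontroller.

```python
FRAME_PERIOD = 1.0 / 30.0  # one IR-camera frame period in seconds

def shutter_slots(n_devices=2, frame_period=FRAME_PERIOD):
    """Return (open_time, close_time) in seconds for each device's
    shutter within one frame, splitting the frame evenly."""
    slot = frame_period / n_devices
    return [(i * slot, (i + 1) * slot) for i in range(n_devices)]

for dev, (t_open, t_close) in enumerate(shutter_slots()):
    print(f"Kinect {dev}: shutter open {t_open*1000:.2f}-{t_close*1000:.2f} ms")
```

The slot length also makes the trade-off explicit: with two devices, each Kinect's effective exposure per frame is halved, which matches the extra noise I'd expect from shorter exposures.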
To find out how the depth sensor would behave if I add a shutter, I ran
different experiments, so far with only one Kinect. One setup looked
like this:
http://atommuell.mum.jku.at/~aurel/kinect_roling_shuter_proto_2.jpg
The depth sensor gives slightly more errors here because of the short
exposure time, but in general the sensor seems to be pretty robust to
such things as added rolling shutters.
The next step was to get a second Kinect and build an actual prototype
where the two sensors in the stereo setup are separated by shutters.
However, I was quite surprised when I pointed two plain Kinects at the
same object and started capturing. My expectation, as discussed all
around the net by now, was that things would be completely screwed up.
Instead, both sensors kept working quite well, with some added errors.
See for yourself:
http://atommuell.mum.jku.at/~aurel/two_kinects_same_direction_c_both.png
Compared to using just one kinect:
http://atommuell.mum.jku.at/~aurel/two_kinects_same_direction_c_single.png
Interference is imho very low, much lower than expected. Another
interesting test was to lower the luminance of one of the projectors to
50%:
http://atommuell.mum.jku.at/~aurel/two_kinects_same_direction_c_both_one_proj_shutter.png
I don't think that time multiplexing is completely off the table now,
but multiple Kinects work way better than expected, so maybe the
hardware could be used without any modification in some cases. We would
also have to see how interference behaves with more than two Kinects.
that's it for now,
aurel
Interesting - ok, I have to admit that I haven't tried 3 Kinects yet on my system. I think that each of them might
require a separate USB controller, as we had issues with 2 on the same controller earlier today. Can you make sure
they're plugged into different controllers?
Cheers,
Radu.
I only have 2 Kinects here, but I'll try with 3 or more on Monday at work.
Are you using ROS-kinect btw? We've already tested this and it works great there. You can easily do device selection as
a command line parameter or a ROS launch parameter, then pop up individual views of each camera in
rviz/image_view/pcl_visualization, etc. I am pretty sure that if you use that, you can get it to work for 3 cameras too,
unless there's some hardware limitation that we're not aware of.
Also make sure that whatever libfreenect fork you're using includes the multiple cameras patch that we wrote.
Cheers,
Radu.