I assume you are talking about IR depth interference here. I have
an installation with four KinectV2s with overlapping fields of view
along a 27' wide video wall. Everywhere along the wall is illuminated
by at least two cameras, and in some places by three or even all four.
I see very little interference, so little that I combine all the point
clouds together to use for user and gesture tracking (I had to write my
own software for this; skeleton tracking does not work when the cameras
are mounted sideways).
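For anyone curious what "combining the point clouds" looks like, here is a minimal sketch of the idea: each camera's cloud is transformed into a shared world frame using a calibrated camera-to-world matrix, then everything is concatenated. The function name and the 4x4 extrinsic matrices are my own illustration, not code from the installation.

```python
import numpy as np

def merge_pointclouds(clouds, extrinsics):
    """Merge per-camera point clouds into one world-frame cloud.

    clouds      -- list of Nx3 arrays, one per camera (camera space)
    extrinsics  -- list of hypothetical 4x4 camera-to-world matrices
                   obtained from a one-time calibration step
    """
    world_points = []
    for pts, T in zip(clouds, extrinsics):
        # Promote to homogeneous coordinates so the 4x4 transform
        # applies rotation and translation in one multiply.
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # Nx4
        world_points.append((homo @ T.T)[:, :3])
    return np.vstack(world_points)
```

Getting the extrinsics right is the hard part in practice; with four cameras at large angles to each other you need a shared calibration target visible to overlapping pairs.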
There is a very rare (a few times a day) single-frame artifact that
shows up on one camera at a time: a blob of points in open space. I'm
guessing it's caused by light bounces that happen to synchronize
perfectly every so often.
I think the low interference is mainly due to the fact that all the
cameras are at a large angle from each other, and in the spots where
they are not, one camera is much closer than the other. In my
experiments, when two cameras were nearly equidistant and viewing the
scene from close to the same angle, there was a very pronounced
"wobble" along the Z (depth) axis, rendering them useless. So I
suggest setting a couple up and experimenting.
In this installation each K2 is running on its own small PC; the
integrated Intel HD 4400 graphics handles the depth-image-to-point-cloud
decoding in a DX11 compute shader, and then each point cloud and body
track is combined on a host PC.
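The per-pixel math the compute shader runs is just pinhole unprojection of the depth image. Here is that step sketched in Python; the intrinsic values below are placeholders I made up for illustration (the real per-device values come from the SDK's CoordinateMapper), and only the 512x424 depth resolution is a known Kinect v2 constant.

```python
import numpy as np

# Hypothetical depth-camera intrinsics for a 512x424 Kinect v2 depth
# frame; real values vary per device and come from the SDK.
FX, FY = 365.0, 365.0   # focal lengths in pixels (assumed)
CX, CY = 256.0, 212.0   # principal point (assumed)

def depth_to_pointcloud(depth_mm):
    """Unproject a depth image (millimeters) into an Nx3 point cloud
    in meters using the pinhole model. A DX11 compute shader does the
    same arithmetic, one thread per pixel."""
    h, w = depth_mm.shape
    v, u = np.mgrid[0:h, 0:w]               # pixel row/column grids
    z = depth_mm.astype(np.float32) / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]               # drop zero-depth pixels
```

On the GPU this is embarrassingly parallel, which is why even the integrated HD 4400 keeps up with a 512x424 depth stream at 30 fps.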
Good luck!
- Lorne
P.S. - And Baker, I would love to see your code. I've wanted to take a
shot at running more than one K2 on a single system; having to have a
box for each camera is a pain.
http://noirflux.com