Multiple Kinect2 sensors running simultaneously


Michael Gratton

17.04.2016, 22:46:08
to openk...@googlegroups.com

Hi all,

Has anyone succeeded in running multiple Kinect2 simultaneously without
interference? I understand that the hack used for the Kinect1 -
selectively disabling the emitter - won't work for the Kinect2 because
of the warm-up period required.

We are looking to allow multiple Kinect2s to operate without too great
a loss of frame rate. Ideally we would have n sensors running
simultaneously, each returning 30/n frames per second.

Some alternatives that don't look appealing or possible are:

1. Start/stop the camera stream via the host API - these are likely
non-trivial operations, not suited to the kind of per-frame
interleaving we are looking for (see the sketch after this list)
2. Start/stop the emitter/frame capture/whatever via the host API -
I don't think this is supported by the official firmware
3. Lock each sensor, via the API or firmware, to one of the wavelengths
otherwise used in its normal random hops - not supported either
4. Use an optical shutter - intrusive, difficult to coordinate,
possibly noisy and prone to breakdown
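
To make option 1 concrete, here is a rough sketch of what that
round-robin interleaving might look like with libfreenect2 (assuming
already-opened devices with one SyncMultiFrameListener each; the helper
name roundRobin is illustrative). The multi-second cost of start() is
exactly the problem:

    // Sketch only: round-robin one active sensor at a time via stop()/start().
    // start() takes on the order of seconds, not the ~33 ms a 30 fps
    // interleave would need, so this illustrates the option, not a solution.
    #include <libfreenect2/libfreenect2.hpp>
    #include <libfreenect2/frame_listener_impl.h>
    #include <vector>

    void roundRobin(std::vector<libfreenect2::Freenect2Device*> &devs,
                    std::vector<libfreenect2::SyncMultiFrameListener*> &listeners)
    {
      libfreenect2::FrameMap frames;
      for (;;) {
        for (size_t i = 0; i < devs.size(); ++i) {
          devs[i]->start();                       // slow: sensor warm-up
          listeners[i]->waitForNewFrame(frames);  // block for one frame set
          // ... use frames[libfreenect2::Frame::Depth] here ...
          listeners[i]->release(frames);          // hand the buffers back
          devs[i]->stop();                        // emitter goes dark again
        }
      }
    }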

What else is out there that we haven't thought of? A custom firmware
perhaps? Something else?

Thanks,
//Mike

--
Michael James Gratton <http://www.cse.unsw.edu.au/~mikeg/>
UNSW School of Computer Science and Engineering.



Bryan Baker

18.04.2016, 0:16:59
to openk...@googlegroups.com
We got it working a year or so back. Let me see if I can share the code.

Baker


Michael Gratton

18.04.2016, 3:24:36
to openk...@googlegroups.com

Hi Baker,

On Mon, Apr 18, 2016 at 2:16 PM, Bryan Baker <barzi...@gmail.com>
wrote:
> We got it working a year or so back. Let me see if I can share the code.


That would be great, thanks! Can you also say anything about the setup
- were they all plugged into the same machine, or distributed across
many?

Cheers,

Lorne Covington

18.04.2016, 14:09:06
to openk...@googlegroups.com

I assume here you are talking about IR light depth interference. I have
an installation with four KinectV2s with overlapping fields of view
along a 27' wide video wall. Everywhere along the wall is illuminated
by at least two cameras, and in some places by three or even all four.
I see very little interference, so little that I combine all the
pointclouds together for user and gesture tracking (I had to write my
own software for this; skeleton tracking does not work when the cameras
are sideways).

There is a very rare (a few times a day) single-frame artifact that
shows up on one camera at a time: a blob of points in open space. I'm
guessing it's caused by light bounces happening to synchronize
perfectly every so often.

I think the low interference is mainly due to the fact that all the
cameras are at a large angle from each other, and in the spots where
they are not, one is much closer than the other. In my experiments,
when two cameras were pretty much equidistant from a scene and looking
at it from close to the same angle, there was very pronounced "wobble"
along the Z axis (depth), rendering them useless. So I suggest setting
a couple up and experimenting.

In this installation each K2 is running on its own small PC; the
integrated Intel HD 4400 graphics handles the depth image to point cloud
decoding in a DX11 compute shader, and then each pointcloud and body
track is combined on a host PC.
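
The per-pixel unprojection is simple; here is the core math as a
minimal CPU sketch (my assumptions: pinhole intrinsics fx/fy/cx/cy for
the 512x424 depth camera and raw 16-bit depth in millimetres, as the MS
SDK provides; a compute shader does the same thing once per texel):

    // Minimal CPU sketch of depth-image-to-point-cloud conversion.
    #include <cstdint>
    #include <vector>

    struct Point3 { float x, y, z; };

    std::vector<Point3> depthToPointCloud(const uint16_t *depthMm,
                                          int width, int height,
                                          float fx, float fy,
                                          float cx, float cy)
    {
      std::vector<Point3> cloud;
      cloud.reserve(static_cast<size_t>(width) * height);
      for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
          float z = depthMm[v * width + u] * 0.001f;  // mm -> metres
          if (z <= 0.0f) continue;                    // skip invalid pixels
          // Invert the pinhole projection: X = (u - cx) * Z / fx, etc.
          cloud.push_back({ (u - cx) * z / fx, (v - cy) * z / fy, z });
        }
      }
      return cloud;
    }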

Good luck!

- Lorne

P.S. - And Baker, would love to see your code. I've wanted to take a
shot at running more than one K2 on a system; having to have a box for
each camera is a pain.


http://noirflux.com

Michael Gratton

29.04.2016, 0:57:18
to openk...@googlegroups.com

Hey Lorne,

On Tue, Apr 19, 2016 at 4:09 AM, Lorne Covington
<mediado...@gmail.com> wrote:
> I assume here you are talking about IR light depth interference.

Yep, that is the case.

> I have an installation with four KinectV2s with overlapping fields
> of view along a 27' wide video wall. Everywhere along the wall is
> illuminated by at least two cameras, and in some places by three or
> even all four. I see very little interference, so little that I
> combine all the pointclouds together for user and gesture tracking
> (I had to write my own software for this; skeleton tracking does not
> work when the cameras are sideways).

This setup is not too far from what we are planning on doing, but on a
smaller scale.

> I think the low interference is mainly due to the fact that all the
> cameras are at a large angle from each other, and in the spots where
> they are not, one is much closer than the other. In my experiments,
> when two cameras were pretty much equidistant from a scene and
> looking at it from close to the same angle, there was very pronounced
> "wobble" along the Z axis (depth), rendering them useless. So I
> suggest setting a couple up and experimenting.

When you say "close to the same angle", do you mean when they are
positioned close to each other, or when they are positioned some
distance apart and looking at the same point equidistant from each of
them in the scene?

> In this installation each K2 is running on its own small PC; the
> integrated Intel HD 4400 graphics handles the depth image to point
> cloud decoding in a DX11 compute shader, and then each pointcloud and
> body track is combined on a host PC.

This is good to know. We have been looking at using an i7-based Intel
NUC for each, which I assume should handle the load pretty easily.

Thanks for your feedback; it looks like we should indeed purchase some
and start testing.

> P.S. - And Baker, would love to see your code. I've wanted to take a
> shot at running more than one K2 on a system; having to have a box
> for each camera is a pain.

Seconded!

Florian Echtler

29.04.2016, 4:51:34
to openk...@googlegroups.com
On 29.04.2016 06:57, Michael Gratton wrote:

>> P.S. - And Baker, would love to see your code. I've wanted to take a
>> shot at running more than one K2 on a system; having to have a box
>> for each camera is a pain.
>
> Seconded!

Just for the record, that _is_ possible using libfreenect2; I know of
at least one installation which has been running five Kinect2s on a
single machine. However, you need a really beefy GPU and a dedicated
USB3 controller card for each camera.
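
For reference, opening them all from one process looks roughly like
this with libfreenect2's enumeration API (a minimal sketch: threading
and error handling omitted, and startStreams() needs a reasonably
recent libfreenect2 build):

    // Sketch: open every attached Kinect2 from one process with libfreenect2.
    // Assumes each camera sits on its own USB3 controller, as noted above.
    #include <libfreenect2/libfreenect2.hpp>
    #include <libfreenect2/frame_listener_impl.h>
    #include <string>
    #include <vector>

    int main()
    {
      libfreenect2::Freenect2 freenect2;
      int count = freenect2.enumerateDevices();

      std::vector<libfreenect2::Freenect2Device*> devs;
      std::vector<libfreenect2::SyncMultiFrameListener*> listeners;

      for (int i = 0; i < count; ++i) {
        std::string serial = freenect2.getDeviceSerialNumber(i);
        libfreenect2::Freenect2Device *dev = freenect2.openDevice(serial);
        if (!dev) continue;
        // Depth + IR only, to keep per-controller USB3 bandwidth down.
        auto *l = new libfreenect2::SyncMultiFrameListener(
            libfreenect2::Frame::Ir | libfreenect2::Frame::Depth);
        dev->setIrAndDepthFrameListener(l);
        dev->startStreams(false, true);  // depth stream only, no RGB
        devs.push_back(dev);
        listeners.push_back(l);
      }

      // ... waitForNewFrame()/release() on each listener, ideally one
      // thread per device, then shut down:
      for (auto *dev : devs) { dev->stop(); dev->close(); }
      return 0;
    }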

Best, Florian
--
SENT FROM MY DEC VT50 TERMINAL


Lorne Covington

29.04.2016, 14:31:28
to openk...@googlegroups.com
On 4/29/2016 12:57 AM, Michael Gratton wrote:

> On Tue, Apr 19, 2016 at 4:09 AM, Lorne Covington
> <mediado...@gmail.com> wrote:
>
>> I think the low interference is mainly due to the fact that all the
>> cameras are at a large angle from each other, and in the spots where
>> they are not, one is much closer than the other. In my experiments,
>> when two cameras were pretty much equidistant from a scene and
>> looking at it from close to the same angle, there was very
>> pronounced "wobble" along the Z axis (depth), rendering them
>> useless. So I suggest setting a couple up and experimenting.
>
> When you say "close to the same angle", do you mean when they are
> positioned close to each other, or when they are positioned some
> distance apart and looking at the same point equidistant from each of
> them in the scene?

When they are equidistant and the angle separating them from the target
is less than 90 degrees. I found that when, say, the two cameras were
six feet apart facing a point ten feet away, I would get pronounced
depth wobble, meaning the depth images of both cameras would vary by
about 5% at sub-second rates. The whole scene would appear to be
wobbling in and out.



> This is good to know. We have been looking at using an i7-based Intel
> NUC for each, which I assume should handle the load pretty easily.

Using an i5 NUC I get full 30fps using the MS SDK to get the depth image and my own compute shader to do the pointcloud conversion (I do all my work in vvvv).  When I get some time I want to try OpenKinect and compare FPS.

And believe it or not, I just tested a $99 Kangaroo Win10 PC with a K2
and it can run at 12-15 fps, and it's small enough to velcro onto the K2:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16883722001

That's not fast enough for highly responsive interaction, but certainly good enough for lots of things.  Folks say it can run Linux too.

Good luck!

- Lorne


--
http://noirflux.com

