Hi
I have been trying to make this multiple skeleton tracking thing work for
weeks now, in vain. Here are my conclusions; maybe they will help someone.
First, let's try to understand what we have when we enumerate the user
generator nodes.
When I plug in 2 Kinects (on two different USB controllers; I know that
because I managed to make 2 depth generators work in parallel) and then
call context.EnumerateProductionTrees(XN_NODE_TYPE_USER, NULL, list, NULL),
here's what I get:
http://pastebin.com/VQH2ShWj
Each * represents a potential user generator. For each one I displayed the
whole tree of needed nodes, with the type, class and version (the number is
the type, XnPredefinedProductionNodeType; see XnCppWrapper.h).
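For reference, a minimal sketch of the kind of enumeration code that
produces this listing could look like the following (OpenNI 1.x C++ API;
the print format and the lack of error handling are mine, simplified):

#include <XnCppWrapper.h>
#include <cstdio>

// Recursively print a node's description and the tree of nodes it needs.
void PrintTree(xn::NodeInfo info, int depth)
{
    for (int i = 0; i < depth; ++i)
        printf("-");
    const XnProductionNodeDescription& desc = info.GetDescription();
    printf("%sNode: %d %s %u.%u.%u.%u\n", depth == 0 ? "* " : "",
           desc.Type, desc.strName,
           (unsigned)desc.Version.nMajor, (unsigned)desc.Version.nMinor,
           (unsigned)desc.Version.nMaintenance, (unsigned)desc.Version.nBuild);
    for (xn::NodeInfoList::Iterator it = info.GetNeededNodes().Begin();
         it != info.GetNeededNodes().End(); ++it)
        PrintTree(*it, depth + 1);
}

int main()
{
    xn::Context context;
    context.Init();

    // Ask OpenNI for every user generator tree it could build.
    xn::NodeInfoList list;
    context.EnumerateProductionTrees(XN_NODE_TYPE_USER, NULL, list, NULL);
    for (xn::NodeInfoList::Iterator it = list.Begin(); it != list.End(); ++it)
        PrintTree(*it, 0);

    context.Release();
    return 0;
}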
We can draw the following conclusions:
- There are two types of user generators. One needs only a depth generator
(tree structure 6->2->1) and is available with all versions of NITE and
OpenNI; the other has a more complicated structure, needing a gesture
generator and a scene analyzer (tree structure 6->(9->2->1), (10->2->1)),
and is available only with the latest versions of OpenNI and NITE. After
some tests, the complicated version is the one that does not need a user
pose for calibration, while the simpler version does.
- We somehow need to pick the right user generator nodes in order to avoid
the data from one Kinect at one level getting mixed with the data from the
other Kinect (assuming this is possible at all); one possible way is
sketched right below.
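I have not verified this end to end, but my understanding is that xn::Query
is the intended tool for this kind of node selection. Assuming
AddNeededNode matches on instance names, a sketch could look like this
(EnumerateUsersOnDepth is a hypothetical helper of mine):

#include <XnCppWrapper.h>

// Hypothetical helper: enumerate user generators whose tree is forced to
// sit on top of an already-created depth node with a known instance name,
// e.g. "Depth1".
XnStatus EnumerateUsersOnDepth(xn::Context& context,
                               const XnChar* depthInstanceName,
                               xn::NodeInfoList& list)
{
    xn::Query query;
    query.AddNeededNode(depthInstanceName);
    return context.EnumerateProductionTrees(XN_NODE_TYPE_USER, &query,
                                            list, NULL);
}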
Then I decided to go with the more developed tree structure and tried to
manually build two separate user generator nodes with the whole tree
structure, making sure no node from the first tree depended on one from the
other. I proceeded bottom-up: I enumerated the devices (I found only 2, so
I assumed one is Kinect 1 and the other Kinect 2), then I created 2 depth
nodes, each depending on one of these device nodes, then 2 gesture nodes,
then scene nodes, and so on. I used instance names to tell the nodes apart:
I set an instance name on the node info before calling
context.CreateProductionTree(...), and I also specifically made sure all
the nodes came from the same, latest available version.
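In simplified form, the bottom-up construction looks roughly like this (a
sketch, not my exact code; the real thing is in the TestMultiKinect
directory linked below, the instance names are just the ones I chose, and
I'm using the two-argument CreateProductionTree from OpenNI 1.5):

#include <XnCppWrapper.h>
#include <cstdio>

// Sketch: name and create both device nodes, then build a depth node on
// top of a specific device instance. Gesture, scene and user nodes follow
// the same enumerate-with-query / SetInstanceName / create pattern.
void BuildNamedNodes(xn::Context& context)
{
    xn::NodeInfoList devices;
    context.EnumerateProductionTrees(XN_NODE_TYPE_DEVICE, NULL, devices, NULL);

    int i = 1;
    for (xn::NodeInfoList::Iterator it = devices.Begin();
         it != devices.End(); ++it, ++i)
    {
        xn::NodeInfo deviceInfo = *it;
        XnChar name[XN_MAX_NAME_LENGTH];
        sprintf(name, "Kinect%d", i);
        deviceInfo.SetInstanceName(name); // name the node before creating it
        xn::Device device;
        context.CreateProductionTree(deviceInfo, device);
    }

    // Depth node tied to the first device (status checks omitted).
    xn::Query query;
    query.AddNeededNode("Kinect1");
    xn::NodeInfoList depths;
    context.EnumerateProductionTrees(XN_NODE_TYPE_DEPTH, &query, depths, NULL);
    xn::NodeInfo depthInfo = *depths.Begin();
    depthInfo.SetInstanceName("Depth1");
    xn::DepthGenerator depth1;
    context.CreateProductionTree(depthInfo, depth1);
}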
In the end, my structure of created nodes looked like this:
* Node: 6 XnVSkeletonGenerator 1.5.2.21 User 1
-Node: 9 XnVGestureGenerator 1.5.2.21 Gesture 1
--Node: 2 SensorKinect 5.1.0.25 Depth 1
---Node: 1 SensorKinect 5.1.0.25 Kinect 1
-Node: 10 XnVSceneAnalyzer 1.5.2.21 Scene 1
--Node: 2 SensorKinect 5.1.0.25 Depth 1
---Node: 1 SensorKinect 5.1.0.25 Kinect 1
* Node: 6 XnVSkeletonGenerator 1.5.2.21 User 2
-Node: 9 XnVGestureGenerator 1.5.2.21 Gesture 2
--Node: 2 SensorKinect 5.1.0.25 Depth 2
---Node: 1 SensorKinect 5.1.0.25 Kinect 2
-Node: 10 XnVSceneAnalyzer 1.5.2.21 Scene 2
--Node: 2 SensorKinect 5.1.0.25 Depth 2
---Node: 1 SensorKinect 5.1.0.25 Kinect 2
So up to here, everything seems fine.
I ran the program, displaying basic data from each user generator, such as
the confidence and the position of the head.
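The per-generator readout amounts to something like this (sketch; user is
an ID the generator reported via its NewUser callback and that has been
calibrated):

#include <XnCppWrapper.h>
#include <cstdio>

// Print the head position and confidence for one tracked user.
void PrintHead(xn::UserGenerator& userGen, XnUserID user)
{
    if (!userGen.GetSkeletonCap().IsTracking(user))
        return;

    XnSkeletonJointPosition head;
    userGen.GetSkeletonCap().GetSkeletonJointPosition(user, XN_SKEL_HEAD, head);
    printf("user %u head (%.0f, %.0f, %.0f) conf %.2f\n", user,
           head.position.X, head.position.Y, head.position.Z,
           head.fConfidence);
}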
But unfortunately, the data is not good; there is still inconsistency. For
example, if I place myself in front of one Kinect only, with the other one
facing a wall, I still obtain data from the second Kinect.
Here's the result of that experiment (user in front of Kinect 1, Kinect 2
facing a wall):
http://pastebin.com/PwAeZnmb
So, how can we explain that? I think it's one of the following four
hypotheses:
1. With the present implementation of skeleton tracking in NITE (the
XnVSkeletonGenerator nodes above), it is NOT possible to track 2 users
simultaneously. In that case we're out of luck; we can move on to something
else and pray that the people at PrimeSense will take care of it.
And honestly, I think this is the most likely explanation.
2. The way I implemented the solution makes the data go nuts; for example,
I used the same callbacks for both user generators, and things like that
(see the sketch after this list). But here I don't think I can do anything
more, because I've tried everything. Maybe if someone has a look at the
code they'll find something.
3. Multiple skeleton tracking doesn't work with the complicated user
generator nodes but may work with the simpler one.
-> NO. I've tried the same thing with the simpler nodes and still got
nothing; it was even worse (no tracking at all, stuck in user
detection/calibration).
4. A workaround or trick to make multiple skeleton tracking work would be
to pick 2 user generators with different versions -> they might be
unrelated/independent and thus avoid the problems I stated earlier.
I'm not sure; I don't think so, but I didn't try it because I'd had
enough.
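On hypothesis 2, here is the kind of change I have in mind: register the
callbacks once per generator and pass per-Kinect state as the cookie, so
the handlers always know which device an event came from. Whether this
actually fixes the mixing is exactly what I couldn't verify; KinectState is
a hypothetical struct of mine.

#include <XnCppWrapper.h>
#include <cstdio>

struct KinectState { int id; };   // hypothetical per-device state

static void XN_CALLBACK_TYPE OnNewUser(xn::UserGenerator& gen,
                                       XnUserID user, void* pCookie)
{
    (void)gen; // unused in this sketch
    KinectState* state = (KinectState*)pCookie;
    printf("Kinect %d: new user %u\n", state->id, user);
}

static void XN_CALLBACK_TYPE OnLostUser(xn::UserGenerator& gen,
                                        XnUserID user, void* pCookie)
{
    (void)gen; // unused in this sketch
    KinectState* state = (KinectState*)pCookie;
    printf("Kinect %d: lost user %u\n", state->id, user);
}

void RegisterPerKinect(xn::UserGenerator& gen, KinectState* state,
                       XnCallbackHandle& handle)
{
    // Same handler functions, but a distinct cookie per generator, so
    // every event is attributed to the Kinect it belongs to.
    gen.RegisterUserCallbacks(OnNewUser, OnLostUser, state, handle);
}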
Voila, I hope this helps someone; or if someone got it working, please
tell me where I went wrong.
My code is available at
https://github.com/tokou/KinectDrone (the git
repo is still a mess, but you can find what you're looking for in the
TestMultiKinect directory). Please have a look, and feel free to ask if
you want to know anything else.
Oh, and if some people from PrimeSense read this, please tell us if you
know that it is impossible to do, or if you know how to do it, or if you
know that it will be implemented in future versions... because I've seen a
lot of people trying to do this lately without any luck, and I'm one of
them. Thanks in advance.
Tarek