Skeleton tracking with multiple Kinects not solved with new OpenNI?


david

Apr 17, 2011, 8:14:13 PM4/17/11
to openn...@googlegroups.com
With the old version of OpenNI I tried to track the skeleton of a user with two Kinects, but I ran into the problem that each user generator, which should have been assigned to its own Kinect, took data from the same Kinect. Thus the skeleton information retrieved from each user generator was identical. I found this quite strange, since each Kinect was initialized with a different context... I was hoping the new release of OpenNI would solve this problem, but I tried it and got the same result... Has anybody tried this with success? Is tracking with multiple devices supported?

Thanks in advance...

Felix

Apr 19, 2011, 12:49:28 PM4/19/11
to OpenNI
Yeah, I also tried it again with two Kinects.
And I got a somewhat different result: by calling EnumerateProductionTrees()
on ONE context with different node types I get:
- 2 depth generators
- 2 image and 2 IR generators
- but 4 user generators

The depth, image and IR generators provide data for both Kinects.
But - and here comes the important difference - the first 3 user
generators provide data for the first Kinect, and only the fourth user
generator provides data for the second Kinect.

So by using the first and fourth user generators, I succeeded in
tracking users in front of both Kinects simultaneously!

Still a bit weird. It seems that NITE still has problems with multiple
sensors (or at least Kinects).
Also, there is still no serial number provided for the device nodes of
the Kinects, so distinguishing between the two is still hard (maybe
that is the problem for NITE).
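Felix's first-and-fourth workaround boils down to an index-selection rule, which can be sketched independent of the OpenNI API (plain C++ with stand-in node names; `pickUserNodes` and the hard-coded indices are my assumption based on the behaviour described above, not OpenNI calls):

```cpp
#include <string>
#include <vector>

// Stand-in for the instance names returned by enumerating
// XN_NODE_TYPE_USER on one context. Per the observation above, the
// first three user nodes all read from Kinect 1 and only the fourth
// reads from Kinect 2, so we keep node 0 for the first device and
// node 3 for the second.
std::vector<std::string> pickUserNodes(const std::vector<std::string>& enumerated) {
    std::vector<std::string> chosen;
    if (!enumerated.empty())
        chosen.push_back(enumerated[0]);  // serves Kinect 1
    if (enumerated.size() >= 4)
        chosen.push_back(enumerated[3]);  // serves Kinect 2
    return chosen;
}
```

The same rule would be applied to the real xn::NodeInfoList iterator: create a production tree only for the first and fourth user nodes and ignore the two in between.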

Felix

Apr 19, 2011, 12:53:01 PM4/19/11
to OpenNI
Forgot to mention that the new NiViewer example can use multiple
sensors (including Kinects, though without user tracking).
But it initializes them with XML instead.
I haven't tried that yet, but maybe that is the solution to our
problem.

david

Apr 20, 2011, 9:14:15 PM4/20/11
to openn...@googlegroups.com
Thanks, Felix, for your reply. I tried it and got tracking of two skeletons using the two Kinects, but, as you said, only by using the first and fourth user generators... Despite it being a little weird to have to use the first and fourth generators, at least it's something.

The NiViewer example supports multiple Kinects, but I think they do not run at the same time. The application simply lets you choose which Kinect you want to run.

One thing I am seeing: if I connect only one Kinect, the calibration process is faster, and I can even calibrate taking into account only the upper body. However, when the two Kinects are connected, calibration fails when looking only at the upper body and succeeds with the full body, but it is slower than with only one Kinect... Does the same happen to everyone?

thanks,

pixel

Apr 20, 2011, 11:26:44 PM4/20/11
to OpenNI
Felix,
I am having the same problem you do. Enumerating all nodes, I keep
getting 2 image, 2 depth and 4 user nodes. I haven't tried multi-user
skeleton tracking yet, but I found this weird.
Is performance your only problem with full-body tracking, or is there
anything else that you noticed?
thanks

david

Apr 20, 2011, 11:39:32 PM4/20/11
to openn...@googlegroups.com
Having more than one Kinect running makes the calibration process slower and requires the full body to be visible... Once calibration is done, I have not noticed anything else that differs from having only one Kinect.

pixel

Apr 20, 2011, 11:43:04 PM4/20/11
to OpenNI
Oh, my mistake, I meant to ask you, not Felix :)
thanks david.

dtr

May 1, 2011, 2:35:19 PM5/1/11
to OpenNI
David,
I'm diving into OpenNI and compiling with only scripting experience
(Processing, ActionScript, etc.). After some messing around I managed
to compile OSCeleton on my Mac with Xcode. Now my next step is to
adapt it to use 2 or more Kinects, but I'm really getting lost in the
libraries, drivers and code snippets in forum posts...
Would you be willing to post the entire code of a working app with
multiple Kinects? That'd be a great help to get me (and I bet others
too) on the right track. It doesn't matter if you're on another OS;
I'll work it out.

My aim for having multiple cams is extending the scan range and
tracking a performer from multiple angles.

Best, Dieter

david

May 5, 2011, 4:46:06 AM5/5/11
to openn...@googlegroups.com
hi dtr,

I have also tried to match the two point clouds by finding the physical rotation and translation matrix using stereo vision calibration, but I could not get very good accuracy: there is always some offset, and the matching differs depending on the distance of the user to the camera. I guess this is because I would need to calibrate the depth camera, which I think is quite tricky. Has anybody managed to get a good match between two point clouds?

This is the code I have used to run multiple Kinects:

Context _g_context;

std::vector<DepthGenerator*> _g_depth;
std::vector<ImageGenerator*> _g_image;
std::vector<DepthMetaData*> _g_depthMD;
std::vector<ImageMetaData*> _g_imageMD;

int _nKinects = 0;

int kinectManager::init() {
    XnStatus status;
    status = _g_context.Init();

    static xn::NodeInfoList node_info_list;
    static xn::NodeInfoList depth_nodes;
    static xn::NodeInfoList image_nodes;

    // Enumerate the connected devices.
    status = _g_context.EnumerateProductionTrees(XN_NODE_TYPE_DEVICE, NULL, node_info_list);

    if (status != XN_STATUS_OK && node_info_list.Begin() != node_info_list.End()) {
        printf("Enumerating devices failed. Reason: %s", xnGetStatusString(status));
        return -1;
    }
    else if (node_info_list.Begin() == node_info_list.End()) {
        printf("No devices found.\n");
        return -1;
    }

    // Count the connected devices.
    for (xn::NodeInfoList::Iterator nodeIt = node_info_list.Begin(); nodeIt != node_info_list.End(); ++nodeIt) {
        _nKinects++;
    }

    // Enumerate the image nodes (one per device).
    status = _g_context.EnumerateProductionTrees(XN_NODE_TYPE_IMAGE, NULL, image_nodes, NULL);

    if (status != XN_STATUS_OK && image_nodes.Begin() != image_nodes.End()) {
        printf("Enumerating image nodes failed. Reason: %s", xnGetStatusString(status));
        return -1;
    }
    else if (image_nodes.Begin() == image_nodes.End()) {
        printf("No image nodes found.\n");
        return -1;
    }

    // Enumerate the depth nodes (one per device).
    status = _g_context.EnumerateProductionTrees(XN_NODE_TYPE_DEPTH, NULL, depth_nodes, NULL);

    if (status != XN_STATUS_OK && depth_nodes.Begin() != depth_nodes.End()) {
        printf("Enumerating depth nodes failed. Reason: %s", xnGetStatusString(status));
        return -1;
    }
    else if (depth_nodes.Begin() == depth_nodes.End()) {
        printf("No depth nodes found.\n");
        return -1;
    }

    // Create one image generator per device.
    for (xn::NodeInfoList::Iterator nodeIt = image_nodes.Begin(); nodeIt != image_nodes.End(); ++nodeIt) {
        xn::NodeInfo info = *nodeIt;
        const XnProductionNodeDescription& description = info.GetDescription();
        printf("image: vendor %s name %s, instance %s\n", description.strVendor, description.strName, info.GetInstanceName());

        XnMapOutputMode mode;
        mode.nXRes = 640;
        mode.nYRes = 480;
        mode.nFPS = 30;

        status = _g_context.CreateProductionTree(info);

        ImageGenerator* g_image = new ImageGenerator();
        ImageMetaData* g_imageMD = new ImageMetaData();

        status = info.GetInstance(*g_image);

        g_image->SetMapOutputMode(mode);
        g_image->GetMetaData(*g_imageMD);
        g_image->StartGenerating();

        _g_image.push_back(g_image);
        _g_imageMD.push_back(g_imageMD);
    }

    // Create one depth generator per device.
    for (xn::NodeInfoList::Iterator nodeIt = depth_nodes.Begin(); nodeIt != depth_nodes.End(); ++nodeIt) {
        xn::NodeInfo info = *nodeIt;
        const XnProductionNodeDescription& description = info.GetDescription();
        printf("depth: vendor %s name %s, instance %s\n", description.strVendor, description.strName, info.GetInstanceName());

        XnMapOutputMode mode;
        mode.nXRes = 640;
        mode.nYRes = 480;
        mode.nFPS = 30;

        status = _g_context.CreateProductionTree(info);

        DepthGenerator* g_depth = new DepthGenerator();
        DepthMetaData* g_depthMD = new DepthMetaData();

        status = info.GetInstance(*g_depth);

        g_depth->SetMapOutputMode(mode);
        g_depth->GetMetaData(*g_depthMD);
        g_depth->StartGenerating();

        _g_depth.push_back(g_depth);
        _g_depthMD.push_back(g_depthMD);
    }

    // Align each depth map with its RGB image and disable mirroring.
    for (int i = 0; i < _nKinects; i++) {
        _g_image[i]->GetMirrorCap().SetMirror(false);
        _g_depth[i]->GetAlternativeViewPointCap().SetViewPoint(*_g_image[i]);
        _g_depth[i]->GetMirrorCap().SetMirror(false);
    }

    status = _g_context.StartGeneratingAll();
    return 1;
}

void kinectManager::update() {
    XnStatus status = XN_STATUS_OK;

    status = _g_context.WaitAndUpdateAll();

    if (status != XN_STATUS_OK) {
        printf("Read failed: %s\n", xnGetStatusString(status));
        return;
    }

    // Refresh the metadata for every device.
    for (int i = 0; i < _nKinects; i++) {
        _g_image[i]->GetMetaData(*_g_imageMD[i]);
        _g_depth[i]->GetMetaData(*_g_depthMD[i]);
    }
}

dtr

May 5, 2011, 1:53:42 PM5/5/11
to OpenNI
Hi David,

I didn't try it yet, but I guess it makes sense that the readings
diverge in relation to the range. For each Kinect a portion of the
scanned range will be most accurate, and the further a reading is from
that range, the less accurate it will be. I can imagine there's a
sweet spot where the most accurate scan ranges of the 2 Kinects
overlap. Outside of that, one of the 2 Kinects will have a more
accurate reading than the other, dependent on placement. The most
accurate should be kept and the other one discarded.

Perhaps the algorithm could work like this:
- for readings inside the 2 Kinects' overlapping accurate range >
average the 2 readings
- for readings outside the overlapping accurate range > select the
reading with the best accuracy

I guess the exact ranges would have to be empirically defined.

Does this make any sense? I'm sure people with more experience in this
field would have much better insights than I do...
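The two rules above could be sketched as a tiny merge function (a sketch only: the `DepthReading` struct and its error field are my assumptions; the real accurate ranges would come from the empirical calibration mentioned above):

```cpp
#include <cmath>

// One depth reading per Kinect, plus whether it falls inside that
// sensor's empirically determined "accurate" range.
struct DepthReading {
    double depthMM;        // measured depth in millimetres
    bool   inAccurateRange;
    double expectedError;  // estimated error for this reading, in mm
};

// Rule 1: inside the overlapping accurate range, average the two readings.
// Rule 2: otherwise, keep the reading with the smaller expected error.
double mergeDepth(const DepthReading& a, const DepthReading& b) {
    if (a.inAccurateRange && b.inAccurateRange)
        return (a.depthMM + b.depthMM) / 2.0;
    return (a.expectedError <= b.expectedError) ? a.depthMM : b.depthMM;
}
```

This would run per corresponding point pair, which presupposes the correspondence problem discussed later in the thread is already solved.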


Best, Dieter

david

May 11, 2011, 4:13:00 AM5/11/11
to openn...@googlegroups.com
Hi dtr,

I think the algorithm should be fine; however, the problem is that in the overlapping zone, to compute the average you would need to know, for each 3D point in one Kinect's point cloud, its correspondence in the other point cloud. To do that you might have to use the fundamental matrix to find the correspondences in pixel coordinates... I want to look into it soon to see if it works... I'll let you know if I get good results.

One drawback I see with this approach: following this algorithm, you don't take advantage of using multiple devices to get more resolution...
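Assuming the stereo calibration yields a rotation R and translation t between the two sensors, the correspondence step might be sketched like this (the names and the brute-force search are mine; a real implementation would use a k-d tree for large clouds):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Apply the extrinsic calibration p' = R*p + t, mapping a point from
// Kinect B's coordinate frame into Kinect A's. R is row-major 3x3.
Vec3 toFrameA(const double R[9], const Vec3& t, const Vec3& p) {
    return { R[0]*p.x + R[1]*p.y + R[2]*p.z + t.x,
             R[3]*p.x + R[4]*p.y + R[5]*p.z + t.y,
             R[6]*p.x + R[7]*p.y + R[8]*p.z + t.z };
}

// Brute-force nearest neighbour in cloud A; returns -1 if nothing is
// closer than maxDistMM, i.e. the point has no correspondence.
int correspondence(const std::vector<Vec3>& cloudA, const Vec3& q, double maxDistMM) {
    int best = -1;
    double bestD2 = maxDistMM * maxDistMM;
    for (size_t i = 0; i < cloudA.size(); ++i) {
        double dx = cloudA[i].x - q.x;
        double dy = cloudA[i].y - q.y;
        double dz = cloudA[i].z - q.z;
        double d2 = dx*dx + dy*dy + dz*dz;
        if (d2 < bestD2) { bestD2 = d2; best = (int)i; }
    }
    return best;
}
```

Points with a match would then be averaged (or error-weighted) as proposed above; points without one would be kept as-is, which also preserves the extra coverage from the second device.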

Best,

dtr

May 11, 2011, 4:31:00 PM5/11/11
to OpenNI
> the problem is that in the
> overlapping zone, to do the average you would need to know for each 3d point
> coming from the point cloud of one kinect, which is its correspondance from
> the other point cloud.

True, the math involved here is a bit over my head...

david

May 26, 2011, 1:13:53 AM5/26/11
to openn...@googlegroups.com
Has anyone tried with success running two user generators (to track skeletons using two Kinects) and, at the same time, two Hands and two Gesture generators (the latter two used to track NITE gestures)?

I am using the following code:

After enumerating the production nodes:

    status = context.EnumerateProductionTrees(XN_NODE_TYPE_USER, NULL, user_generator_nodes, NULL);
    status = context.EnumerateProductionTrees(XN_NODE_TYPE_HANDS, NULL, hand_generator_nodes, NULL);
    status = context.EnumerateProductionTrees(XN_NODE_TYPE_GESTURE, NULL, gesture_generator_nodes, NULL);

if I do:

    xn::NodeInfo info = *gesture_generator_nodes.Begin();
    context.CreateProductionTree(info);
    status = info.GetInstance(*gestures);

    info = *hand_generator_nodes.Begin();
    context.CreateProductionTree(info);
    status = info.GetInstance(*hands);

    info = *user_generator_nodes.Begin();
    context.CreateProductionTree(info);
    status = info.GetInstance(g_UserGenerator);

it does not work. I tried it with only one Kinect too, and it does not work either. It seems it does not even start the calibration...

However, if I replace

    info = *user_generator_nodes.Begin();
    context.CreateProductionTree(info);
    status = info.GetInstance(g_UserGenerator);

with

    g_UserGenerator.Create(context);

then it works. But this replacement only works when a single Kinect is used...

Any suggestions ?

Thanks,




JustinK

May 30, 2011, 10:55:14 PM5/30/11
to openn...@googlegroups.com
I plugged two Kinects into my computer via two USB 2.0 ports, but one of the Kinects can't install the camera driver. Does anyone know what's happening?


Diego S. Maranan

Nov 14, 2011, 10:01:32 PM11/14/11
to openn...@googlegroups.com
Hi everyone,

I'm wondering if anyone has had success with tracking a skeleton using 2 or more Kinects.

Thanks.

Cheers,
Diego