2 Kinects: alignment/calibration and how to prevent front/back mix-ups


dtr

Sep 5, 2012, 2:03:08 PM
to openn...@googlegroups.com
Hi list,

I have 2 questions about combining 2 Kinects for 360° skeleton tracking. The 2 cams are looking at the center of the space, offset at a horizontal angle of 120° (this angle is dictated by other components of the physical installation).

1. Till now I've been aligning the 2 tracked skeletons manually. This is a pain in the *ss: first I have to position the cameras as exactly as possible (this is a mobile art project, so it's impossible to get 100% right like in an academic lab), then tweak rotations/translations in software to compensate for positioning errors. I would like to automate this. I'm sure there's a way to do it by registering a number of points (4?) with, for example, a hand, calculating the offset between those points, and using the resulting transform as the alignment/calibration. I don't know where to start with this though. Does anyone know the math to do it, or a resource where I can find it?
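The math being asked about here is the classic absolute-orientation problem: given N >= 3 corresponding 3D points seen by both Kinects (e.g. a hand held at several spots), the least-squares rigid transform between the two camera frames can be solved in closed form with the Kabsch/SVD method. A minimal numpy sketch, with made-up point data for illustration:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: return R (3x3) and t (3,) such that R @ src_i + t ~= dst_i."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    c_src = src.mean(axis=0)                 # centroids of each point set
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# four points in camera A, and the same physical points in camera B
a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
angle = np.radians(120)                      # the 120° rig offset
R_true = np.array([[np.cos(angle), 0, np.sin(angle)],
                   [0, 1, 0],
                   [-np.sin(angle), 0, np.cos(angle)]])
b = a @ R_true.T + np.array([0.5, 0.0, 2.0])
R, t = rigid_transform(a, b)
print(np.allclose(a @ R.T + t, b))           # True: transform recovered
```

Once R and t are known, they can be baked into whatever app merges the skeletons; four well-spread, non-collinear points are enough in principle, more points average out tracking noise.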

2. Because of the 120° offset angle, one cam often sees the front of my body while the other sees the back. Quite frequently the one seeing the back mixes up front and back, so one skeleton tracker analyzes correctly while the other flips the limbs etc. left-right. This is a problem because I'm merging the 2 skeletons: when this happens, hands etc. end up midway between left and right, totally screwing up the skeleton. Did anyone find a way to prevent this?
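One possible heuristic for this (an assumption on my part, not an established fix from the thread): once both skeletons are in a common coordinate frame, try both left/right labelings of the suspect skeleton and keep whichever matches the trusted tracker better. A toy sketch, with an illustrative joint layout rather than the real SimpleOpenNI API:

```python
import numpy as np

# left/right joint pairs that a flipped tracker would swap (illustrative names)
PAIRS = [("l_hand", "r_hand"), ("l_elbow", "r_elbow"),
         ("l_knee", "r_knee"), ("l_foot", "r_foot")]

def mismatch(ref, cand):
    """Sum of distances between same-named joints of two skeleton dicts."""
    return sum(np.linalg.norm(ref[j] - cand[j]) for j in ref)

def fix_flip(ref, cand):
    """Return cand, with left/right labels swapped if that matches ref better."""
    swapped = dict(cand)
    for l, r in PAIRS:
        if l in cand and r in cand:
            swapped[l], swapped[r] = cand[r], cand[l]
    return swapped if mismatch(ref, swapped) < mismatch(ref, cand) else cand

# toy example: tracker B mislabeled the hands
ref = {"l_hand": np.array([-0.3, 1.0, 2.0]), "r_hand": np.array([0.3, 1.0, 2.0])}
bad = {"l_hand": np.array([0.3, 1.0, 2.0]), "r_hand": np.array([-0.3, 1.0, 2.0])}
good = fix_flip(ref, bad)
print(np.allclose(good["l_hand"], ref["l_hand"]))  # True: labels corrected
```

This only helps when at least one tracker is trusted at a given moment (e.g. the one seeing the front), so some per-frame confidence choice would still be needed.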

FYI, this is the project I'm working on: http://www.dietervandoren.net/index.php?/project/integration04/

BTW, I'm still using 2 computers for the 2 kinects. Please do tell if there's an easy way to get 2 skeleton trackers working on 1 machine. I'm not a pro-coder, using the SimpleOpenNI library for Processing right now.


Best, Dieter

eight

Sep 6, 2012, 9:34:41 AM
to openn...@googlegroups.com
Given correspondence points, PCL can calculate the translation/rotation matrix using ICP (http://pointclouds.org/documentation/tutorials/iterative_closest_point.php). Manual point cloud registration can be done in MeshLab.
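For intuition, the loop ICP runs can be sketched in a few lines of numpy (PCL's implementation is the one to use in practice; this toy version assumes a small initial misalignment and brute-forces the nearest-neighbour search):

```python
import numpy as np

def best_rigid(src, dst):
    """Kabsch: least-squares R, t mapping src points onto dst points."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Iterate: match each point to its nearest neighbour, solve, repeat."""
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point of cur (brute force)
        nn = dst[np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
        R, t = best_rigid(cur, nn)
        cur = cur @ R.T + t
    return cur

# target cloud: a 3x3x3 grid; source: the same grid slightly rotated/shifted
coords = np.arange(-1.0, 2.0)
dst = np.array([[x, y, z] for x in coords for y in coords for z in coords])
theta = np.radians(5)                               # small misalignment
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
src = dst @ R.T + 0.05
aligned = icp(src, dst)
print(np.abs(aligned - dst).max() < 1e-6)           # True on this easy case
```

The toy version converges only when the starting alignment is already close, which is exactly why MeshLab's workflow does a rough manual registration before running ICP.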

--8

dtr

Sep 6, 2012, 2:28:22 PM
to openn...@googlegroups.com
Thanks for the advice!

I'm gonna try PCL. Looks like I could register the points using my SimpleOpenNI app, load them into a PCL app to calculate the transform matrix, and load that into my Max/MSP/Jitter rendering app to align the skeletons.

I wish OpenNI/NITE were as well documented as PCL...


Btw, I now found out that SimpleOpenNI has built-in functionality for generating a UserCoordsystem that can be used to align two skeletons/depth maps: https://groups.google.com/forum/?fromgroups=#!topic/simple-openni-discuss/e6Ag6IwMEuQ

Though my first tests show there's a large error margin. I think it's due to inaccuracies from clicking points in the RGB images of both cams. The ICP method seems more robust as you start with the same points for both.


I've quickly tried MeshLab with some of its included samples as well. Works great, but I didn't find the transformation data so that I could use it in another program. Can it be extracted?


Best, Dieter

eight

Sep 7, 2012, 12:28:50 PM
to openn...@googlegroups.com
In my workflow I extract two frames –– one from each Kinect –– that correspond to the same time, import them as point clouds into MeshLab, and align the second point cloud to the first (a manual alignment using hand-picked correspondences, followed by ICP in MeshLab). I then get the transformation matrix and plug it back into my application to apply it to the point cloud coming out of the second Kinect. The transformation matrix is printed in the lower right corner in MeshLab as a 4x4 table.
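Applying that 4x4 to the second Kinect's cloud amounts to a homogeneous transform of each point; a minimal numpy sketch (the matrix values below are placeholders, not real MeshLab output):

```python
import numpy as np

# example 4x4 of the kind MeshLab prints: rotation in the upper-left 3x3,
# translation in the last column (values here are made up)
M = np.array([[0.87, 0.0, 0.5,  0.1],
              [0.0,  1.0, 0.0,  0.0],
              [-0.5, 0.0, 0.87, 2.0],
              [0.0,  0.0, 0.0,  1.0]])

def apply_4x4(M, points):
    """Transform an (N, 3) point array by a 4x4 homogeneous matrix."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # append w = 1
    return (homo @ M.T)[:, :3]

cloud = np.array([[0.0, 0.0, 1.0]])
print(apply_4x4(M, cloud))   # each row is M[:3,:3] @ p + M[:3,3]
```

The same multiplication works per-vertex in whatever environment renders the cloud; in Jitter the 3x3 rotation part and the translation column can be applied separately if a full 4x4 isn't convenient.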

--8

dieter vandoren

Sep 7, 2012, 2:24:14 PM
to openn...@googlegroups.com
Sounds good, gonna try that!

D.

--
You received this message because you are subscribed to the Google Groups "OpenNI" group.
To view this discussion on the web visit https://groups.google.com/d/msg/openni-dev/-/68d--ggdj8UJ.
To post to this group, send email to openn...@googlegroups.com.
To unsubscribe from this group, send email to openni-dev+...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/openni-dev?hl=en.

dtr

Sep 11, 2012, 10:38:50 AM
to openn...@googlegroups.com
8,

I've been trying this out but the process fails with:

"Failure. No succesful arc among candidate Alignment arcs. Nothing Done."

But MeshLab still lines up the 2 meshes, and there's a transform matrix in the corner of the window. I've tried using this matrix, but my corrected vertices end up way off.

Is it supposed to work with this matrix even though the process failed? I was led to think so since MeshLab did rotate and translate the 2nd mesh so it lines up as well as possible. I'm not surprised it can't line up 100%, as I'm importing 2 skeletons captured from different angles at the same instant. I don't think they can ever fully match, as the 2 skeleton processes will always have different error margins.

What do you think?

Best, D.

eight

Sep 11, 2012, 11:27:12 AM
to openn...@googlegroups.com
Are you doing manual selection of the correspondence points first? There was a tutorial on how to do it in MeshLab. ICP (the step producing the "arc" error) will not work without it. As far as I recall I ran into the "arc" problem myself and fixed it with "better" correspondences and better manual alignment –– got it on the third or fourth try. With the "bad" correspondences the transform would flip one mesh relative to the other or do other crazy things.

If you get a registration that looks good and ICP is still "failing", you should still be able to use the transform matrix. Check it by manually introducing the rotations/translations and scaling it suggests, starting from the original unaligned state, to confirm that it is the correct one -- all in MeshLab.
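That sanity check can also be done numerically: apply the candidate matrix to the picked correspondence points and look at the residual. If the matrix is right, the RMS error should be on the order of the point-picking accuracy, not meters. A sketch with made-up data:

```python
import numpy as np

def rms_residual(M, src, dst):
    """RMS distance between M-transformed src points and their dst partners."""
    homo = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    moved = (homo @ M.T)[:, :3]
    return np.sqrt(((moved - dst) ** 2).sum(axis=1).mean())

# candidate transform: a pure +1 m shift along z (illustrative)
M = np.eye(4)
M[:3, 3] = [0.0, 0.0, 1.0]

src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
dst = src + [0.0, 0.0, 1.0]                          # the true offset
print(rms_residual(M, src, dst))                     # ~0.0: matrix checks out
```

A residual of "way off" magnitude, like in the post above, usually points at a row/column-order or units mismatch between MeshLab's matrix and the receiving application rather than at a bad registration.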


--8

Aurel W.

Sep 11, 2012, 8:35:25 PM
to openn...@googlegroups.com
Hi,

Finding a set of point correspondences is rather difficult in such a setup. IMHO the way to go is to use corresponding planes: you can use a simple planar marker and extract it with RANSAC. By accounting for the thickness of the marker you can specify the same plane exactly, even with an offset of 120 degrees. By moving and sampling the plane several times you can solve for the transformation between the two Kinects.

There are some papers out there describing this, and as far as I can
see it is also used by some commercial motion capture software:
www.ee.oulu.fi/~dherrera/papers/2011-depth_calibration.pdf

It may be a little bit advanced for your application, and I am not
aware of a currently available implementation (something PCL could
adopt, by the way). But basically this would be the way to do it.

aurel