Head tracking


Carlos Roberto

Sep 22, 2011, 1:29:40 PM
to activat...@googlegroups.com
Hi
Since ICM does not have head tracking to measure the angle of head movement, is it possible to build a motion database that represents different head angles, say 30 and 45 degrees, and compare it against the user's motion to determine whether the user's head is positioned at 30 or 45 degrees? If it is possible, which sample can I use as a guideline to implement this idea?
Thanks

--
Carlos Roberto
Software Eng. Consultant @ IBM
My Blog
My LinkedIn
follow me @ twitter



Shaun Kime

Sep 22, 2011, 2:44:38 PM
to activat...@googlegroups.com
Since OpenNI doesn't provide meaningful skeletal data for the head, using a motion database won't really help.

Instead, you'd need to go back to the segmented depth map. The head's rough location can be found using the OpenNI skeleton. From there you'd need to cull out the rest of the body, largely by rejecting any pixels that aren't within a head-sized sphere of the head location. You would then compute the centroid of the head in the depth map. This should be much more stable than the head bone reported by OpenNI. You could draw a line from the midpoint of OpenNI's shoulders to the head centroid and that would give you the head "roll". Computing all three axes will be tough unless you look for color stream features like the eyes, nose, and mouth.
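The culling, centroid, and roll steps above can be sketched roughly as follows. This is an illustration only, not OpenNI API code: the types, function names, and the ~150 mm head radius are all made up for the example, and it assumes depth pixels already converted to real-world millimetres with y increasing upward.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A 3-D point from the depth map (millimetres, real-world coordinates).
struct Point3 { float x, y, z; };

// Keep only depth pixels within a head-sized sphere (radius hypothetical,
// ~150 mm) of the rough head location reported by the OpenNI skeleton.
std::vector<Point3> cullToHead(const std::vector<Point3>& pixels,
                               const Point3& headGuess,
                               float radius = 150.0f) {
    std::vector<Point3> kept;
    for (const Point3& p : pixels) {
        float dx = p.x - headGuess.x;
        float dy = p.y - headGuess.y;
        float dz = p.z - headGuess.z;
        if (dx * dx + dy * dy + dz * dz <= radius * radius)
            kept.push_back(p);
    }
    return kept;
}

// Centroid of the retained head pixels; as described above, this should be
// more stable than the head bone reported by OpenNI.
Point3 centroid(const std::vector<Point3>& pts) {
    Point3 c{0.0f, 0.0f, 0.0f};
    for (const Point3& p : pts) { c.x += p.x; c.y += p.y; c.z += p.z; }
    float n = static_cast<float>(pts.size());
    c.x /= n; c.y /= n; c.z /= n;
    return c;
}

// Head "roll" in degrees: the angle, measured from vertical in the image
// plane, of the line from the shoulder midpoint to the head centroid.
float headRollDegrees(const Point3& leftShoulder,
                      const Point3& rightShoulder,
                      const Point3& headCentroid) {
    Point3 mid{(leftShoulder.x + rightShoulder.x) * 0.5f,
               (leftShoulder.y + rightShoulder.y) * 0.5f,
               0.0f};
    return std::atan2(headCentroid.x - mid.x, headCentroid.y - mid.y)
           * 180.0f / 3.14159265f;
}
```

An upright head (centroid directly above the shoulder midpoint) yields a roll of 0 degrees; a head tilted sideways by equal x and y offsets yields 45 degrees.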

More advanced algorithms have been described at:

Shaun

Shaun Kime

Sep 22, 2011, 2:57:43 PM
to activat...@googlegroups.com
Here's one more paper that I found, which looks to be very close to what you might ultimately want.

Unfortunately, if you want good results, the technology gets pretty complex quite fast.

Shaun

Carlos Roberto

Sep 23, 2011, 12:04:38 AM
to activat...@googlegroups.com
Hi Shaun,
Thanks for your answer.
FaceAPI is a very interesting framework, and it is very close to what I need, but it looks like it does not work with Kinect.
Thanks.


Shaun Kime

Sep 23, 2011, 9:24:54 AM
to activat...@googlegroups.com
Carlos,
   My guess is that you could take the color stream output from Kinect and feed it to FaceAPI programmatically.

Shaun

carl...@gmail.com

Sep 23, 2011, 11:51:20 AM
to activat...@googlegroups.com
Is there some code or sample showing how to do that?
Cheers



Shaun Kime

Sep 23, 2011, 2:42:53 PM
to activat...@googlegroups.com
Looking at the product brochure for FaceAPI, they mention integration with a "Low-level 'shared-memory' image interface (allows for integration of custom video devices)." That is the type of integration I mentioned. You'd have to look at their APIs to see how this is done.

As for how to access the OpenNI color stream in our APIs, you can take a look at OpenGLAnimationApp's A3DOpenNIDebug::DrawImageMap, which depends on you adding a color stream to your SamplerUser.xml file for OpenNI.
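For reference, in OpenNI 1.x configuration files a color stream is requested by adding an Image production node; a sketch is below. The node names and surrounding structure are illustrative, so check them against your actual SamplerUser.xml:

```xml
<OpenNI>
  <ProductionNodes>
    <!-- Existing nodes (names illustrative) -->
    <Node type="Depth" name="Depth1"/>
    <Node type="User" name="User1"/>
    <!-- Added: color stream so DrawImageMap has an image map to draw -->
    <Node type="Image" name="Image1">
      <Configuration>
        <MapOutputMode xRes="640" yRes="480" FPS="30"/>
      </Configuration>
    </Node>
  </ProductionNodes>
</OpenNI>
```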

Shaun