A question on mocap


郭金锋

Dec 13, 2013, 3:27:44 AM
to python_in...@googlegroups.com
Hey Guys,
Good day!
I want to ask something not so closely related to Python or code; I hope you can help me a bit.
It's about mocap: entry-level devices such as the Microsoft Kinect deliver a skeleton data stream as 3D joint positions. Since character animation on joints within Maya or MoBu (or basically any 3D package) is all about joint rotation, how can the raw device data (joint translations) be mapped onto characters (joint rotations)?
I don't know mocap well, so I hope it won't bother you if this is a silly question. :)
Thanks!
Jerry
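The crux of the question fits in a few lines: two joint positions pin down a bone's aim direction, from which yaw and pitch follow, but not its twist. A minimal sketch of that math in plain Python, assuming a Y-up world and a bone that aims down +X (both conventions are illustrative choices, not something any particular package mandates):

```python
import math

def aim_rotation(parent_pos, child_pos):
    """Derive a rotation (yaw, pitch in degrees) that aims a bone's
    +X axis from parent_pos toward child_pos.  Roll is left at 0:
    positions alone cannot resolve the twist, which is why real
    solvers add up-vectors or rig constraints on top of this."""
    dx = child_pos[0] - parent_pos[0]
    dy = child_pos[1] - parent_pos[1]
    dz = child_pos[2] - parent_pos[2]
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    if length == 0.0:
        raise ValueError("coincident joints")
    yaw = math.degrees(math.atan2(-dz, dx))       # rotation about Y
    pitch = math.degrees(math.asin(dy / length))  # rotation about Z
    return yaw, pitch

# Example: elbow at the origin, wrist out along +X and raised 45 degrees
yaw, pitch = aim_rotation((0, 0, 0), (1, 1, 0))
print(round(yaw, 3), round(pitch, 3))  # -> 0.0 45.0
```

This is only the first half of the problem; mapping those rotations onto a character whose skeleton has different proportions is the retargeting step the replies below discuss.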

Justin Israel

Dec 13, 2013, 4:27:42 AM
to python_in...@googlegroups.com
Disclaimer: I'm not an experienced Character TD / Rigger, nor experienced in mocap. But I can share my experience on the topic.

At my previous studio, I was asked to experiment with an Xbox Kinect to see what I could come up with. I used (and contributed a couple of bits to, where I could) PyOpenNI as part of my OS X stack.

After writing all the custom transport code, my first pass was to drive a character purely on the joint translations, which is, like you said, the "raw data".
For my second round of playing, I came up with a different idea: I created a rigged skeleton puppet and drove its joints as in the previous test, and then drove the joint rotations of my actual target character off the puppet joints. It worked pretty well :-)

I'm sure there are more technically correct ways to use the data.
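Justin's puppet trick boils down to computing, per bone, the rotation that takes the bind-pose bone direction to the captured one, then applying that rotation to the corresponding target joint. A minimal sketch of that single step in plain Python (axis-angle is one representation among several, and a real retarget also has to handle the parallel-vectors case and twist):

```python
import math

def rotation_between(bind_dir, live_dir):
    """Axis-angle rotation taking the bind-pose bone direction to the
    captured (puppet) direction.  The returned axis has length
    sin(angle); normalize it before use, and handle the degenerate
    case where the two directions are parallel."""
    def norm(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)
    a, b = norm(bind_dir), norm(live_dir)
    axis = (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    angle = math.degrees(math.acos(dot))
    return axis, angle

# Bone bound along +X, captured pointing along +Y: a 90-degree turn about +Z
axis, angle = rotation_between((1, 0, 0), (0, 1, 0))
print(axis, round(angle, 3))
```

In Maya terms the same effect is what an orient constraint from the puppet onto the character gives you, which is why the two-stage puppet approach works.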



--
You received this message because you are subscribed to the Google Groups "Python Programming for Autodesk Maya" group.
To unsubscribe from this group and stop receiving emails from it, send an email to python_inside_m...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/python_inside_maya/02409477-eba3-443c-a0a4-5a37f9114ae4%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Mark Jackson

Dec 16, 2013, 4:18:54 AM
to python_inside_maya
We did some tests with the new Kinect at work, grabbing the internal data as joints and then mapping that onto our rig using the animationBinder stuff I did for the MasterClass:


Skip to the end: there's a chunk about binding rigs to point-cloud data. This is a few years old now and there are many, many ways of doing this, but if you want a solution you can download, set up and just run, take a look. The animBinder tool is now part of the Red9 Studio Pack; I keep meaning to upgrade it, I just haven't had the time recently.
cheers

Mark






--
-------------------------------------
Mark Jackson
Technical Animation Director
http://markj3d.blogspot.com/

damon shelton

Dec 16, 2013, 11:38:49 AM
to python_in...@googlegroups.com
This process is not trivial to calculate, but there are many ways to go about it. I use a plugin at work called the Peel solver for Maya; it has a rigid-body constraint available that is just for this purpose. I am working on writing my own that basically uses a best-fit oriented bounding box on the points to construct a transform you can use to drive anything else. A more limited approach (limited in that it may cap your usable point count at 3-4) is to generate a polygon from the points and attach to its surface. That method is also not that reliable if the points aren't rigid.
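A best-fit OBB solve is a least-squares fit over all the points; the minimal 3-4 point case mentioned above can be illustrated directly by building an orthonormal frame from three markers. A plain-Python sketch (the axis assignments here are arbitrary illustrative choices, not what any particular solver uses):

```python
import math

def frame_from_markers(p0, p1, p2):
    """Build a rigid transform (3x3 rotation rows + origin) from three
    marker positions: p0 is the origin, p0->p1 defines the X axis,
    and p2 fixes the plane that Y lies in.  This is the bare-minimum
    version of what a rigid-body constraint solves for."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def norm(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)
    x = norm(sub(p1, p0))
    z = norm(cross(x, sub(p2, p0)))
    y = cross(z, x)                  # already unit: z and x are orthonormal
    return [x, y, z], p0

rot, origin = frame_from_markers((0, 0, 0), (2, 0, 0), (0, 1, 0))
print(rot, origin)
```

With markers laid out along the world axes, as in the example, the recovered rotation is the identity; move the markers rigidly and the frame follows, which is exactly the transform you would use to drive a joint. Unlike a least-squares fit, this version is exact for three points but has no redundancy to average out marker noise.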


郭金锋

Dec 17, 2013, 8:44:48 PM
to python_in...@googlegroups.com
Hi All,
Thanks, Justin. I think in MoBu such a process could be handed off to the application itself, like you said, the more technically correct way. I've watched the video Mark posted, and I find you two are thinking in roughly the same direction: great minds think alike!
Thanks, Mark. To tell you the truth, I've translated your Red9 tools into Chinese for my co-workers. Although there are still several functions we haven't figured out yet, it's a great package! No worries, it's still called Red9 and all I did was translate the UI. BTW, what you call a few years old is still pretty helpful for me.
I haven't had time to check the Peel solver for Maya yet, but I think it would be a great solution: I searched just now and found a page titled "Kick ass Maya tool PeelSolve_for_Maya". So, thanks, Damon.
Actually, I have partly figured this out already. I compiled two files: an .exe that monitors the Kinect and sends position data over a TCP/IP socket, and a .dll that receives the data and creates an optical-device marker set in MoBu, and it worked! It's done with the Kinect for Windows SDK and the MoBu SDK; I think MoBu is the better choice for such a scenario (am I right?).
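The transport half of that pipeline (positions packed into a TCP stream on one side, unpacked on the other) can be sketched in pure Python. The 20-joint frame layout and little-endian float format below are illustrative assumptions standing in for the actual .exe/.dll wire protocol:

```python
import socket
import struct
import threading

# One "frame" = 20 joints x 3 floats (the Kinect v1 skeleton joint
# count; the layout is an assumption for illustration).
NUM_JOINTS = 20
FRAME_FMT = "<%df" % (NUM_JOINTS * 3)
FRAME_SIZE = struct.calcsize(FRAME_FMT)

def recv_frame(conn):
    """Read exactly one frame's worth of bytes off the socket;
    TCP gives a byte stream, not message boundaries."""
    buf = b""
    while len(buf) < FRAME_SIZE:
        chunk = conn.recv(FRAME_SIZE - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return struct.unpack(FRAME_FMT, buf)

# Demo over loopback: a sender thread stands in for the Kinect-side
# .exe, the main thread for the MoBu-side receiver.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def send_one_frame():
    s = socket.create_connection(("127.0.0.1", port))
    frame = [0.0] * (NUM_JOINTS * 3)
    frame[0:3] = [0.1, 0.2, 0.3]   # e.g. the hip-centre position
    s.sendall(struct.pack(FRAME_FMT, *frame))
    s.close()

t = threading.Thread(target=send_one_frame)
t.start()
conn, _ = server.accept()
joints = recv_frame(conn)
print(joints[0:3])
t.join()
conn.close()
server.close()
```

A fixed-size binary frame keeps the receiver trivial; a length-prefixed format would be the next step if the joint count can vary per frame.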
What bothers me now is that the Kinect is low in accuracy and the data needs tons of cleanup, so I'm thinking about connecting two Kinects. Since each Kinect emits exactly the same IR light (same signal frequency and wavelength), two devices will definitely interfere with each other. Has anyone ever done multiple-Kinect mocap? Is it practically possible?
Thanks!
Jerry


Mark Jackson

Dec 18, 2013, 7:11:07 AM
to python_inside_maya
Red9 in Chinese, I'd love to see a screenshot of that!
I take it you've also looked at iPi? We use it in-house as a fast way to test out motion for the animators, and the new version supports 3 Kinects. The data is actually pretty good considering where it's coming from; it'll be even better when they get access to Kinect 2s.


Last time I checked, the Peel stuff hadn't been compiled for the last few versions of Maya; we were wanting to use the c3d importer to get point-cloud data into Maya.

Mark



damon shelton

Dec 18, 2013, 2:55:59 PM
to python_in...@googlegroups.com
Hey Mark,
I am sure if you contact Alistair (about Peel) he has builds available, or can try to build you a version.
Also, I ran into a lot of issues with the Kinect-based data, with feet sliding all over the place. I've had some decent results using different types of IR-absorbing materials on the floor and strongly contrasted solid colors for socks vs. pants, etc.


But for accurate orientations from point-cloud data, you want to look into



郭金锋

Dec 18, 2013, 10:47:41 PM
to python_in...@googlegroups.com
Yes, of course, Mark. I've sent the screenshot to your gmail.
We want to do it in real time, so...
Anyway, iPi can combine more than one Kinect, which is cool, since I haven't figured out how to do that myself. Considering the complexity of merging two or more data streams, I think multiple Kinects in real time would be pretty difficult...
Does anyone happen to have ideas to share?
Thanks!