I reworked the mocap tools file a little so that it works with namespaces on the controls. To start, you need to have any control from your referenced rig selected.
Hope someone can check it and let me know if it works!
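The core of the change is just reading the namespace off the selected control and prefixing it onto every control name the script touches. Here is a minimal sketch of that idea (it assumes Maya's maya.cmds API; resolve_control and the example control name are hypothetical stand-ins for whatever your tool does per control):

```python
# Minimal sketch: derive the rig namespace from the current selection,
# then prefix it onto the control names the mocap tool operates on.
import maya.cmds as cmds

def get_selected_namespace():
    """Return the namespace of the first selected control, or '' if there is none."""
    selection = cmds.ls(selection=True) or []
    if not selection:
        cmds.warning("Select any control on the referenced rig first.")
        return ""
    node = selection[0].split("|")[-1]           # strip any DAG path
    if ":" in node:
        return node.rsplit(":", 1)[0] + ":"      # keep the trailing ':' for easy prefixing
    return ""

def resolve_control(name, namespace):
    """Map a bare control name (e.g. 'L_arm_ctrl') to its namespaced node."""
    return namespace + name

# usage (control name is just an example):
# ns = get_selected_namespace()
# cmds.keyframe(resolve_control("L_arm_ctrl", ns), query=True, keyframeCount=True)
```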
This page contains links and information about Motion Capture software and datasets.

BVH conversions of the 2500-motion Carnegie-Mellon motion capture dataset:
1. Daz-friendly version (released July 2010, by B. Hahne)
2. 3dsMax-friendly version...
The simplest motion capture file is just a massive table of 'XYZ' coordinates for each point attached to the recorded subject, for every frame captured. I know that I can find individual methods and functions in R to perform complex operations (like principal component analysis), and that I can plot time series for all the points. But when I look for examples that could also educate me statistically about analysing human movement, and provide a nice toolbox for visual representation of the data, R turns out to be a cold desert. On the other hand, MATLAB has the Motion Capture Toolbox and the MoCap Toolbox, and especially the latter has quite good options for plotting and analysing the captures. But let's be honest: MATLAB has quite an ugly visualisation engine compared to R.
In the example below, we have gesture data for 8 points on the upper body: spine, shoulder center, head, left shoulder, left wrist, right shoulder, and right wrist. The subject has his hands down and his right arm is making an upward movement.
Each line from the original dataset corresponds to a time observation, and the coordinates of each body point are defined in sets of 4 (every four columns is one body point). So at each line, we have "x", "y", "z", "br" for the spine, then "x", "y", "z", "br" for the shoulder center, and so on. The "br" is always 1, in order to separate the three coordinates (x,y,z) of each body part.
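For concreteness, this is roughly how the file can be loaded and reshaped so that each body point becomes its own block of (frames x 3) coordinates. A minimal sketch; the file name, the whitespace delimiter and the point order are placeholders based on the description above:

```python
# Minimal sketch: read the flat table and split it into per-point XYZ arrays.
# Assumes whitespace-separated values and the 4-columns-per-point layout
# (x, y, z, br) described above; the file name and point order are placeholders.
import numpy as np

POINT_NAMES = ["spine", "shoulder_center", "head", "left_shoulder",
               "left_wrist", "right_shoulder", "right_wrist"]   # order assumed

data = np.loadtxt("gesture.txt")                  # shape: (n_frames, 4 * n_points)
n_frames, n_cols = data.shape
n_points = n_cols // 4

# Drop the constant 'br' separator column and keep x, y, z for every point.
coords = data.reshape(n_frames, n_points, 4)[:, :, :3]   # (n_frames, n_points, 3)

# e.g. the right wrist trajectory over time:
right_wrist = coords[:, POINT_NAMES.index("right_wrist"), :]
```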
Judging by a quick search on RSeek, there isn't a motion capture package available for R. It looks like you'll need to find equivalents for each function. The more general ones should be fairly easy to find (interpolation, subsetting, transformation/projection, time-series analysis, PCA, matrix analysis, etc.), and the very process of writing your own custom functions for specific things like estimating instantaneous kinetic energy is probably the best way to learn!
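To illustrate that last point, here is a rough sketch of an instantaneous kinetic energy estimate built from finite-difference velocities (written in Python here, though the same idea ports directly to R); the frame rate and the uniform per-point mass are assumed placeholder values, not part of any dataset:

```python
# Rough sketch: instantaneous kinetic energy per frame from marker positions.
# Expects coords with shape (n_frames, n_points, 3), like the array built above.
# The 30 fps capture rate and the uniform point mass are both assumptions.
import numpy as np

def kinetic_energy(coords, fps=30.0, point_mass_kg=1.0):
    """0.5 * m * |v|^2 summed over all points, using finite-difference velocities."""
    velocities = np.gradient(coords, 1.0 / fps, axis=0)   # (n_frames, n_points, 3)
    speed_sq = np.sum(velocities ** 2, axis=2)             # (n_frames, n_points)
    return 0.5 * point_mass_kg * speed_sq.sum(axis=1)      # (n_frames,)
```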
I created this nightclub scene entirely in Unreal Engine. I recorded all of the character animations with the Xsens Link suit and Manus Prime II gloves in Unreal using the MVN Live Link plugin. I learned a lot from this.
I learned a lot about rigging and retargeting data, particularly with the fingers. Setting up multiple character animations in Sequencer was a little tedious, though considering how many characters I originally started with (nine), I ended up not needing all of them. But being able to have so much creative freedom thanks to all of this new mocap tech is just Unreal!!!
The most important lesson I learned though, is one person can create a little film with all of this amazing technology. It was just me in my home office, with the suit and gloves on and having fun in Unreal.
Granted, I have only been using Unreal for just shy of 4 months, but this virtual production community has been the reason I was able to learn how to utilize all of these tools. I still have so much to learn and realize I am just at the beginning of this journey, but I have never had more fun in my life, so I can only imagine what excitement lies ahead.
I have to give thanks to Winbush and the guys at Mograph.com for making that Unreal Engine course (I even learned Cinema 4D from it), JSFilmz for his awesome tutorials and encouragement, to Manus for giving me my first taste of motion capture equipment, and to the lovely ladies at Xsens for opening up a door to a world that I never thought I would fall madly in love with, motion capture, that is!
My journey with face mocap is still progressing, as I am discovering it is a challenge to find a rig with face morphs/blendshapes. At least when it is just you, funding yourself. When I do get a rig that I can test the Faceware Studio software with, using the Glassbox Live Client plugin to Unreal while the Xsens and Manus body/finger data stream simultaneously, I will be beside myself!!! I cannot wait!
I tested out the mocap data I recorded with the Xsens Link suit and Manus Prime II gloves in Unreal Engine using the MVN Live Link plugin to Unreal, with Glassbox Dragonfly virtual camera. All I can say is I did not realize how much fun this was!
Testing out this virtual camera tool was just beyond exciting. Dragonfly offers so many options. You can choose your lenses, sensors, focal length and it even has an option to smooth out the camera movements after you have recorded your data.
I used my iPhone with the Dragonfly app, created joystick buttons on the phone through the app, and added the plugin folder to my Unreal project. Thanks to the awesome tutorials offered by Glassbox, I was up and running within minutes! The plugin is set up in such a way that all you have to do is activate the link from the Unreal project to your phone, and then you can control the Dragonfly camera from your phone, press record, and move around the scene. Pretty amazing.
I find all of this technology so incredible. What is even more incredible is the people that make up this motion capture community. Never have I been shown such kindness and support than from the people from Xsens (Katie Jo & Audrey Stevens), from Manus (Serdal & Arsene), from Faceware (Karen Chan, Josh & Peter) and from Glassbox (Norman) who have all been there to help me every step of the way. Thank you! It has taken an army of people to help me to get to this point in my journey.
Here is a little glimpse of what the workflow looks like when combining the body mocap data from Xsens MVN Animate with Manus Core inside of Unreal Engine. This was my first time filming myself and doing a screen recording.
I wish I had more to show for this, as this test was so exciting. I have to thank Kevin Cooney for reaching out to me and introducing the idea of testing out a remote multi-user session using the mocap data from Xsens MVN. He introduced me to Aiden Wilson, who I believe is an absolute genius!
One of the challenges for this was finding a time that worked for all of us: I am based in Florida, Kevin is 5 hours ahead, and Aiden is 15 hours ahead. For my first multi-user session, it was Aiden and I. Being in the same project with someone else, and both of us doing different tasks simultaneously, was unreal!
The MVN Live Link worked and we did some testing. We were both in the same project, streaming live mocap data from MVN Animate into Unreal using the Live Link plugin. Here is a quick glimpse into this project.
I really cannot say enough about my mentor, Victoria Jorgensen. Wow! What a lady! This week Victoria introduced me to Gail Evenari. Gail has created educational content by utilizing Virtual Reality with the use of a 360 camera to immerse her viewers/students into the world of her subject. I have never seen anything like this before and it is truly brilliant!!!
As for my Antarctica/Mars project, which I am preparing to create inside of Unreal Engine and which is based on a meteorite that was discovered in Antarctica and is known to be from Mars, Victoria is putting me in touch with a historian/professor and traveler whose interests lie in the polar regions. Anyone who knows me knows that going to Antarctica to film the first 5 pages of my screenplay has always been a dream of mine.
However, with the way the world is today, my only option is to do this with Unreal Engine. I would do this by creating topographical maps of the region the story takes place in, creating the characters (the ANSMET team) that discovered this meteorite, and using motion capture data I record with the Xsens suit and Manus gloves, and hopefully Faceware, to drive the characters' facial expressions.
When I got the message from Victoria about the Polar historian, I will admit, I did shed some tears as this project is so important to me. Knowing someone is helping me to turn this dream into a reality, I was touched. Thank you, Victoria. No words can describe my appreciation.
We have some built-in entities and templates around this sort of work, but I wanted to get some feedback from the community about how different studios have approached this challenge. Here are some specific questions to start the discussion; feel free to answer any or all of them based on your experience:
Thanks in advance for sharing any thoughts or feedback here. This will not only help new teams adopting Shotgun in the mocap space, but will also give our Product and Design teams some great insight into how we can support this critical industry segment even better in the future.
Both the Slate and Take entities are pretty empty, so you will have to create the required fields for your workflows.
I would advise standardizing these for the studio and making them project-agnostic, so you can define one workflow and use the same workflow on every project going forward.
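To make that concrete, here is a rough sketch of setting the fields up once through the Python API. Since schema fields in Shotgun are site-wide rather than per-project, creating them this way naturally keeps the workflow project-agnostic. The entity type string and the field names below are assumptions, not a recommendation from the docs; check which entity names your site's schema actually uses for Slate/Take:

```python
# Rough sketch: create the workflow fields once, site-wide, via the Python API.
# The entity type ("MocapTake") and the field names are assumptions.
import shotgun_api3

sg = shotgun_api3.Shotgun("https://yourstudio.shotgunstudio.com",
                          script_name="mocap_setup", api_key="<key>")

TAKE_ENTITY = "MocapTake"      # assumed entity type backing the Take entity
fields = [
    ("text",   "Take Number"),
    ("text",   "Capture Volume"),
    ("list",   "Take Rating"),
    ("number", "Frame Count"),
]

for data_type, display_name in fields:
    # schema_field_create returns the internal field name (e.g. 'sg_take_number')
    sg.schema_field_create(TAKE_ENTITY, data_type, display_name)
```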
So, our mocap result generated a bunch of actions, one action per bone. And actions apparently take a lot of space: our project file size increased from 2 MB to 500 MB (78 MB after compression). Is this normal for 2 minutes of mocap data with fingers? We used Motive for this.
I found the solution: after a successful retarget, there should be only 1 new combined action, and that is the only one you need. Delete all the other actions (access the actions list from Outliner -> Display Mode: Blender File -> Actions), and also delete the leftover unused armature.
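If you prefer to do the cleanup from a script instead of clicking through the Outliner, something like this works. A minimal sketch: it assumes the combined action is the one currently assigned to your target rig (the object name "Armature" is a placeholder) and that you really want every other action gone:

```python
# Minimal sketch: keep only the combined action on the retargeted rig,
# delete all other actions, then remove unused leftover armatures.
import bpy

target = bpy.data.objects["Armature"]            # placeholder: your target rig's name
keep = target.animation_data.action if target.animation_data else None

for action in list(bpy.data.actions):
    if action is not keep:
        bpy.data.actions.remove(action)          # frees the per-bone actions

for arm in list(bpy.data.armatures):
    if arm.users == 0:
        bpy.data.armatures.remove(arm)           # leftover armature data-blocks

# Alternatively, File > Clean Up > Purge (or bpy.ops.outliner.orphans_purge
# from an Outliner context) removes all orphaned data in one go.
```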
I publish mocap data from the Optitrack software to mavros/mocap/pose and tf data on mavros/mocap/tf, as per the _extras documentation. Running px4.launch I see the FCU: [inav] Mocap data valid message. However, the mavros/global_position/global topic does not publish anything, and the local_position topic publishes nothing either. As a result I cannot switch to auto_takeoff mode, which I guess requires some source publishing global position data.
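For reference, the publishing side can be as simple as this. A minimal rospy sketch, with the topic name taken from above; the pose values stand in for the actual Optitrack data, and the frame_id and publish rate are assumptions:

```python
# Minimal sketch: publish mocap poses on the topic mavros listens to.
# The pose values are placeholders for the Optitrack stream;
# frame_id and the 50 Hz rate are assumptions.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("mocap_bridge")
pub = rospy.Publisher("/mavros/mocap/pose", PoseStamped, queue_size=10)
rate = rospy.Rate(50)

while not rospy.is_shutdown():
    msg = PoseStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "map"
    msg.pose.position.x = 0.0        # replace with Optitrack x, y, z
    msg.pose.position.y = 0.0
    msg.pose.position.z = 0.0
    msg.pose.orientation.w = 1.0     # identity quaternion as a placeholder
    pub.publish(msg)
    rate.sleep()
```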