SkeletonPrimitive - for discussion

Carsten Kolve

Feb 17, 2011, 12:48:39 AM
to cort...@googlegroups.com
Hi,

I've just added DrD's SkeletonPrimitive code for discussion purposes to
the trunk (it's clearly not fit for contribution at this point - we just
took it as far as we needed it):
http://code.google.com/p/cortex-vfx/source/detail?r=4100

As you can see we intentionally kept it very simple: just an array of
matrices to describe a pose, joint names and an int array for parenting
information, plus various update and space-change functions. Our initial
skeleton did have multiple matrices to describe a single joint, but
we ditched that approach for simplicity and speed, reasoning
that if more complicated joint manipulations (like offset animation,
joint orientation, length manipulation etc.) were needed they could
be done through ops instead (in practice we rarely find a need for
this).
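
To make that concrete, here's a rough Python sketch of the idea (the
names are made up for illustration, this isn't the actual
SkeletonPrimitive interface):

import numpy as np

class SimpleSkeleton :

    def __init__( self, jointNames, parentIds, localMatrices ) :
        # parentIds[i] is the index of joint i's parent, or -1 for a root
        self.jointNames = list( jointNames )
        self.parentIds = list( parentIds )
        self.local = np.asarray( localMatrices )  # one 4x4 matrix per joint

    def worldMatrices( self ) :
        # concatenate each joint's local matrix with its parent's world
        # matrix (row-vector convention, parents assumed to precede children)
        world = np.empty_like( self.local )
        for i, p in enumerate( self.parentIds ) :
            world[i] = self.local[i] if p < 0 else np.dot( self.local[i], world[p] )
        return world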

Any questions, comments - let us know....

Cheers,
Carsten

--
// carsten kolve - www.kolve.com

JohnH

Feb 18, 2011, 2:58:21 PM
to cortexdev
Hi Carsten,
Thanks for committing this - it definitely makes for a good solid
point of discussion. I've got a few questions/comments to get the ball
rolling...

* Is the choice of representing the pose using only matrices
influenced by your particular use case, where as I understand it you
pretty much only need to load animation which has already been
generated elsewhere? I'd imagine that if we wanted to perform
animation directly on a skeleton then we'd want to be able to
manipulate things at the level of rotations, translations and scales
rather than matrices, and then derive the final matrices from those on
demand. I imagine we also need to work at the level of rotations etc
if we want to reliably interpolate poses.

* Do you tend to have many SkeletonPrimitives in memory at once, or is
it more a case of repeatedly loading different animations on the same
skeleton before rendering it? I wonder if for doing large simulations
it might be worth separating out the concept of the topology of the
skeleton (names, parenting, and perhaps the format of rotations/
translations) and the poses - the values for rotation etc. A
simulation might then have a small number of unique skeleton
topologies but a huge number of poses - one per agent (there's a rough
sketch of this after my questions).

* Are Skeletons really Primitives or should we separate out the
functionality for manipulating Skeletons from the functionality for
displaying them? If they are Primitives then should per-joint
attributes like the names and matrices be PrimitiveVariables?
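
Coming back to the second point above, very roughly I'm imagining a
split something like this (hypothetical names, nothing that exists
yet):

class SkeletonTopology :

    def __init__( self, jointNames, parentIds ) :
        self.jointNames = jointNames  # shared by every agent of this type
        self.parentIds = parentIds

class SkeletonPose :

    def __init__( self, topology, localMatrices ) :
        self.topology = topology      # a reference to the shared topology
        self.local = localMatrices    # the only per-agent data

# a crowd might then hold a handful of topologies and thousands of poses :
# poses = [ SkeletonPose( humanTopology, anim[i] ) for i in range( numAgents ) ]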

Thanks once again for getting things started - I hope my questions
seem relevant and look forward to hearing what you think...
Cheers...
John

On Feb 16, 9:48 pm, Carsten Kolve <cars...@kolve.com> wrote:
> Hi,
>
> I've just added DrD's SkeletonPrimitive code for discussion purposes to
> the trunk (it's clearly not fit for contribution at this point - we just
> took it as far as we needed it)
> http://code.google.com/p/cortex-vfx/source/detail?r=4100

Carsten Kolve

Feb 19, 2011, 3:21:51 AM
to cort...@googlegroups.com
> * Is the choice of representing the pose using only matrices
> influenced by your particular use case, where as I understand it you
> pretty much only need to load animation which has already been
> generated elsewhere?

I'd say that is true. However, we kind of justified it to ourselves by
likening the joints of a skeleton to the vertices of a mesh. On their
own they are simple and dumb (all they can do is some space changing),
and then external ops (in the case of the mesh, deformers) actually
create the animation.

> I'd imagine that if we wanted to perform
> animation directly on a skeleton then we'd want to be able to
> manipulate things at the level of rotations, translations and scales
> rather than matrices, and then derive the final matrices from those on
> demand.

Again, yes - we just chose to do this kind of stuff outside of the
skeleton. We reasoned that for our use case that would be the
exception rather than the norm. We also have to deal with some fairly
complex skeletons (300+ joints), so adding the necessary matrix math
overhead seemed excessive. If we wanted to, for example, add some
offset animation to a specific joint, we could limit the necessary
computation to only that one joint, rather than the whole skeleton.
But you are right, this might only make sense if you have a motion
generation system like mocap, massive, maya etc. somewhere else. What
is the use case you have in mind?

> I imagine we also need to work at the level of rotations etc
> if we want to reliably interpolate poses.

Yes, I think we might have added something to the matrix interpolator
to do some quaternion interpolation on the rotations (I'm not sure
though, I'll go and check).
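
Either way, the idea would be roughly this - slerp the rotations, lerp
the translations, and rebuild the matrices afterwards. A self-contained
numpy sketch of the slerp (not our actual interpolator code):

import numpy as np

def slerp( q0, q1, t ) :
    # spherical linear interpolation between two unit quaternions (w, x, y, z)
    d = np.dot( q0, q1 )
    if d < 0.0 :
        q1, d = -q1, -d       # take the shorter arc
    if d > 0.9995 :
        q = q0 + t * ( q1 - q0 )
        return q / np.linalg.norm( q )  # nearly parallel - normalised lerp
    theta = np.arccos( d )
    return ( np.sin( ( 1 - t ) * theta ) * q0 + np.sin( t * theta ) * q1 ) / np.sin( theta )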

> * Do you tend to have many SkeletonPrimitives in memory at once, or is
> it more a case of repeatedly loading different animations on the same
> skeleton before rendering it? I wonder if for doing large simulations
> it might be worth separating out the concept of the topology of the
> skeleton (names, parenting, and perhaps the format of rotations/
> translations) and the poses - the values for rotation etc. A
> simulation might then have a small number of unique skeleton
> topologies but a huge number of poses - one per agent.

We thought of that, too - to save on duplicating the static data we
added a small function "shareStaticData" - the idea is that you can
share jointNames and hierarchies across multiple instances of a
SkeletonPrimitive. How much we use that in practice I'm unsure about
at the moment (I'll double check).
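
Off the top of my head (I'd have to check the exact signature against
the code), usage is along the lines of:

# guessed call, double check against the actual class
skelB.shareStaticData( skelA )  # skelB now references skelA's jointNames / hierarchy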

> * Are Skeletons really Primitives or should we separate out the
> functionality for manipulating Skeletons from the functionality for
> displaying them? If they are Primitives then should per-joint
> attribute like the names and matrices be PrimitiveVariables?

Good question. We usually use skeletonPrimitives to hold our data and
for space changing (again, similar to a mesh) - and it is convenient
to have it as a primitive so we get instant rendering in GL for
debugging purposes and can also render it in a renderman context. We
haven't looked at interpolating skeletonPrimitives directly for motion
blurring the skeleton render. Usually we use the worldSpace matrices
from a skeleton together with the PointSmoothSkinningOp to deform
meshes, so we didn't worry about that.

Cheers,
Carsten

JohnH

Feb 21, 2011, 8:40:19 PM
to cortexdev
On Feb 19, 12:21 am, Carsten Kolve <cars...@kolve.com> wrote:
> > * Is the choice of representing the pose using only matrices
> > influenced by your particular use case, where as I understand it you
> > pretty much only need to load animation which has already been
> > generated elsewhere?
>
> I'd say that is true. However, we kind of justified it to ourselves by
> likening the joints of a skeleton to the vertices of a mesh. On their
> own they are simple and dumb (all they can do is some space changing),
> and then external ops (in the case of the mesh, deformers) actually
> create the animation.

> > animation directly on a skeleton then we'd want to be able to
> > manipulate things at the level of rotations, translations and scales
> > rather than matrices, and then derive the final matrices from those on
> > demand.
>
> Again, yes - we just chose to do this kind of stuff outside of the
> skeleton. We reasoned that for our use case that would be the
> exception rather than the norm. We also have to deal with some fairly
> complex skeletons (300+ joints), so adding the necessary matrix math
> overhead seemed excessive. If we wanted to, for example, add some
> offset animation to a specific joint, we could limit the necessary
> computation to only that one joint, rather than the whole skeleton.
> But you are right, this might only make sense if you have a motion
> generation system like mocap, massive, maya etc. somewhere else. What
> is the use case you have in mind?

I guess the use cases I have in mind are actually generating, editing
and blending animations. Since we don't have any nice ways of doing
that here I'd be looking for the main cortex Skeleton class to provide
a data structure which is capable of it. But even for very simple
rendering-only requirements I think there's also a good case for
storing something other than matrices. In Ollie's use case he has an
ASCII file from massive which has only rotation information, and he
needs to get all the way to matrices from that to do the skinning - to
me it would seem ideal to have the Skeleton do that. I also think
there's a good case for caching at the rotation/translation level rather than
at the matrix level - both to get reliable interpolation and a more
compact representation on disk. I understand your worries regarding
the overhead of computing the matrices each time - I wonder if you
have any sense of what the overhead would be? My gut feeling is it
might not be too bad but I guess the only way to be sure is to measure
something.
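
Something crude like the following would at least give a ballpark
figure for the per-skeleton cost of rebuilding matrices from rotations
(pure Python/numpy, so a C++ implementation should be a good deal
faster):

import timeit
import numpy as np

def quatToMatrix( q ) :
    # build a 3x3 rotation matrix from a unit quaternion (w, x, y, z)
    w, x, y, z = q
    return np.array( [
        [ 1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)     ],
        [ 2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)     ],
        [ 2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y) ],
    ] )

q = np.array( [ 1.0, 0.0, 0.0, 0.0 ] )
# time 1000 rebuilds of a 300 joint skeleton
print( timeit.timeit( lambda : [ quatToMatrix( q ) for i in range( 300 ) ], number = 1000 ) )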

We went through a similar set of deliberations when we implemented
transform caching here. We originally cached matrices and then slowly
came to the conclusion that the interpolation was basically never
going to work for the general case. We now cache
IECore::TransformationMatrix instead, which essentially recreates
Maya's transformations, including pivots and the like, and returns
matrices computed on demand. I'm not really happy with that solution
either as we frequently cache more than we need, and I suspect some
other package will have a series of transformation components that
doesn't match.

I've been following the Alembic dev list a little and there was some
mention of representing transformations as a series of primitive
operations like translate/rotate/scale etc. I like the idea of
specifying such a sequence for each joint of the Skeleton - so along
with the joint names and parenting we also specify a series of
operations and their values for the default pose. For Ollie's case
it'd be easy enough to specify that each bone expects only an animated
rotation. For your special case of doing offset animation on only one
bone, that bone can have one additional operation in its
transformation sequence. We could even emulate your existing setup by
having a simple setMatrix operation along with the others - that way
for your case you could just load matrices and bypass all the math
anyway, but other cases would be free to do all the math they like.
I'd think that such a TransformationSequence class would also make a
much better replacement for the current IECore::TransformationMatrix
class, and we could adopt it for all transform caching in general.

What do you think? Am I overcomplicating things?

Carsten Kolve

Feb 22, 2011, 8:10:55 PM
to cort...@googlegroups.com, JohnH
> I've been following the Alembic dev list a little and there was some
> mention of representing transformations as a series of primitive
> operations like translate/rotate/scale etc. I like the idea of
> specifying such a sequence for each joint of the Skeleton - so along
> with the joint names and parenting we also specify a series of
> operations and their values for the default pose. For Ollie's case
> it'd be easy enough to specify that each bone expects only an animated
> rotation. For your special case of doing offset animation on only one
> bone, that bone can have one additional operation in its
> transformation sequence. We could even emulate your existing setup by
> having a simple setMatrix operation along with the others - that way
> for your case you could just load matrices and bypass all the math
> anyway, but other cases would be free to do all the math they like.
> I'd think that such a TransformationSequence class would also make a
> much better replacement for the current IECore::TransformationMatrix
> class, and we could adopt it for all transform caching in general.
>
> What do you think? Am I overcomplicating things?

The idea of a transform as a list of primitive operations does sound
interesting. I'm not sure how that would work in practice, though -
maybe doing a few pseudo-code prototypes would be a good way to find out?
I guess you'd break it down to the individual channels (tx, rz, sy
etc.) so you wouldn't have to bother with defining rotation orders
etc?
How would you deal (or would you want to deal) with predefined joint
orientations? Would they just be the first entry in the per-joint op
chain? Or would you have a pre-transform and a transform op chain?
Also, once you have a defined op chain, how would you go about
updating it (or individual ops in the chain) over time?
I like the idea of being able to bypass all those calculations by
directly setting a matrix. When it comes to how much time is actually
spent in these calculations and whether it is a bother - I guess that
is a subjective call depending on the number of characters and the
number of joints. In our case, shots easily end up with more than 12
million individual joints to update at any given time step, so making
this update not too heavy seemed a good thing to do. It might not be
too much of an overhead when looking at it for rendering only, but we
are also using it for interactive playback.

Ollie Rankin

Feb 23, 2011, 3:09:01 AM
to cort...@googlegroups.com
Not wanting to throw a spanner among the pigeons, but how do you guys feel about the idea of an explicit joint class that has things like degrees of freedom and a rotation order, a default transformation, a current transformation (potentially a stack of primitive transforms) and an array of children? And then the skeleton class wraps up an array of top-level joints and offers a bunch of methods for manipulating skeletons en toto.

It's certainly not the lean mean half-million penguin crunching machine that Carsten speaks of and I'm really not wanting to bloat this thing as interactivity is something we want, too, but by the same token, it would be a shame to my mind to have a general skeleton class that didn't contain enough information to reconstruct an identical skeleton when saved out and read back into Maya - or any other 3d package we might want to build or use skeletons in.

Carsten, incidentally, if you are only reading cached animation from disk, do you keep all the skeletons live or do you iteratively reuse the same one skeleton for every character that shares the same skeleton topology, apply the animation and skinning and move on to the next?

cheers

Carsten Kolve

Feb 23, 2011, 5:36:04 PM
to cort...@googlegroups.com, Ollie Rankin
Hi there,

On Wed, Feb 23, 2011 at 7:09 PM, Ollie Rankin <ol...@31770.net> wrote:
> Not wanting to throw a spanner among the pigeons, but how do you guys feel about the idea of an explicit joint class that has things like degrees of freedom and a rotation order, a default transformation, a current transformation (potentially a stack of primitive transforms) and an array of children? And then the skeleton class wraps up an array of top-level joints and offers a bunch of methods for manipulating skeletons en toto.

We started out this way and subsequently got rid of a joint class to
simplify things - we thought the transformationMatrix too complex and
didn't come across a use case for using a joint on its own (not within
a skeleton) and opted against it. I agree that it makes sense from an
object-oriented pov, but at the time our lack of familiarity with how
to properly integrate with the rest of the cortex api made us go the
simple route. Not saying that it doesn't make sense for a "proper"
implementation though.


>
> It's certainly not the lean mean half-million penguin crunching machine that Carsten speaks of and I'm really not wanting to bloat this thing as interactivity is something we want, too, but by the same token, it would be a shame to my mind to have a general skeleton class that didn't contain enough information to reconstruct an identical skeleton when saved out and read back into Maya - or any other 3d package we might want to build or use skeletons in.


I guess we are just at different ends of the range of possibilities - we
are at the bare-minimum end, and the other option is to emulate what's
happening on the commercial 3d app side - one could opt to find the
closest compromise between all the common 3d packages, but already maya
and houdini represent a joint differently.
As requirements differ from case to case I like John's idea of a
configurable (joint) transform that is as complicated or as simple as
you want it to be. So you could use Maya's approach of using 6
matrices to describe a single joint, or keep it simpler if you wanted
to. That should make everyone happy. All I was wondering is how one
would describe a default configuration and subsequent pose changes (ie
animation).
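
For reference, the components Maya composes into a joint's local matrix
are roughly the following (row-vector convention, going from the Maya
docs, and using the same names as the pseudocode further down):

localMatrix = scale * rotateAxis * rotate * jointOrient * parentScaleInverse * translate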

>
> Carsten, incidentally, if you are only reading cached animation from disk, do you keep all the skeletons live or do you iteratively reuse the same one skeleton for every character that shares the same skeleton topology, apply the animation and skinning and move on to the next?

For rendering we do keep the skeleton live, apply the animation, get
the world space pose, do the skinning, move on to the next and reuse
the same skeleton. So for rendering we only have one per character
type around.

Carsten

JohnH

Feb 23, 2011, 6:49:03 PM
to cortexdev
> On Wed, Feb 23, 2011 at 7:09 PM, Ollie Rankin <ol...@31770.net> wrote:
> > Not wanting to throw a spanner among the pigeons, but how do you guys feel about the idea of an explicit joint class that has things like degrees of freedom and a rotation order, a default transformation, a current transformation (potentially a stack of primitive transforms) and an array of children? And then the skeleton class wraps up an array of top-level joints and offers a bunch of methods for manipulating skeletons en toto.
>
> We started out this way and subsequently got rid of a joint class to
> simplify things - we thought the transformationMatrix too complex and
> didn't come across a use case for using a joint on its own (not within
> a skeleton) and opted against it. I agree that it makes sense from an
> object oriented pov, but at the time our lacking familiarity with how
> to properly integrate with the rest of the cortex api made us go the
> simple route. Not saying that it doesn't make sense for a "proper"
> implementation though.

I don't have any particularly strong feeling on this either way. I can
see that a Joint class might make for nicer syntax, but I definitely
think it should be at most a relatively lightweight thing rather than
a full-blown Object derived class. I'd think that the majority of the
functionality might be in the TransformSequence (TransformStack?)
class anyway as I'd like to see that used elsewhere.

> > It's certainly not the lean mean half-million penguin crunching machine that Carsten speaks of and I'm really not wanting to bloat this thing as interactivity is something we want, too, but by the same token, it would be a shame to my mind to have a general skeleton class that didn't contain enough information to reconstruct an identical skeleton when saved out and read back into Maya - or any other 3d package we might want to build or use skeletons in.
>
> I guess we are just at different ends of the range of possibilities - we
> are at the bare-minimum end, and the other option is to emulate what's
> happening on the commercial 3d app side - one could opt to find the
> closest compromise between all the common 3d packages, but already maya
> and houdini represent a joint differently.
> As requirements differ from case to case I like John's idea of a
> configurable (joint) transform that is as complicated or as simple as
> you want it to be. So you could use Maya's approach of using 6
> matrices to describe a single joint, or keep it simpler if you wanted
> to. That should make everyone happy. All I was wondering is how one
> would describe a default configuration and subsequent pose changes (ie
> animation).

It sounds like we're all in agreement on the need for flexibility in
the transform representation then. I had a quick look at the Alembic
equivalent and it seems they provide options for tagging particular
transform primitives with respect to particular 3d packages (just maya
at the moment). So for instance you might have a translate primitive
which is tagged as mapping to the maya rotate pivot.

Your question of default vs subsequent animated poses seems a good one
Carsten. Perhaps each transform primitive has an associated flag
specifying whether or not it receives animated values? Then when we
push poses out into AttributeCache files (or some other IndexedIO
based thing) we just save out a FloatVectorData of all the animated
values?
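
In pseudocode, the packing and unpacking might look something like this
(made-up names again, just to illustrate):

# pack only the animated channels into one flat float list
def packPose( transformPrimitives ) :
    values = []
    for p in transformPrimitives :
        if p.animated :                  # the per-primitive flag
            values.extend( p.value )     # e.g. 3 floats for a rotation
    return values                        # this would become the FloatVectorData

def unpackPose( transformPrimitives, values ) :
    i = 0
    for p in transformPrimitives :
        if p.animated :
            n = len( p.value )
            p.value = values[i:i + n]
            i += n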

It sounds like we might be converging to a point where knocking
together some pseudocode/headers might be a good way of furthering the
discussion...

Cheers...
John

Carsten Kolve

Mar 1, 2011, 12:59:12 AM
to cort...@googlegroups.com, JohnH
> Your question of default vs subsequent animated poses seems a good one
> Carsten. Perhaps each transform primitive has an associated flag
> specifying whether or not it receives animated values? Then when we
> push poses out into AttributeCache files (or some other IndexedIO
> based thing) we just save out a FloatVectorData of all the animated
> values?
>
> It sounds like we might be converging to a point where knocking
> together some pseudocode/headers might be a good way of furthering the
> discussion...
>

Just thinking aloud here - trying to model maya's joints, the
different elements of a sequence do look a lot like parameters...
wondering if this is a good way to go, or if it should be kept much
simpler...

- - - -
a = IECore.TransformSequence()

a.append( name = "scale", type = V3f, writable = False, value = V3f( 1, 1, 1 ) )
a.append( name = "scaleOrientation", type = Quatf, writable = False, value = Quatf() )
a.append( name = "rotation", type = V3f, writable = True, value = V3f( 90, 0, 0 ) )
a.append( name = "jointOrientation", type = M44f, writable = False, value = M44f() )
a.append( name = "parentScaleInverse", type = M44f, writable = False, value = M44f() )
a.append( name = "translation", type = V3f, writable = False, value = V3f( 0, 5, 0 ) )

a.append( name = "translation", type = V3f, writable = False, value = V3f( 0, 6, 0 ) )
> ERROR, already have a sequence entry called "translation"

a["translation"] = V3f( 6, 6, 6 )
> ERROR, not writable

a["translation"].replace( value = V3f( 6, 6, 6 ) )

a["rotation"] = V3f( 0, 90, 0 )

# get the complete transformation - all the elements of the
# transform sequence concatenated in order
mat = a.getTransformationMatrix()
> some M44f

a.writableParameterNames()
> ["rotation"]

a.nonWritableParameterNames()
> ["scale", "scaleOrientation", "jointOrientation", "parentScaleInverse", "translation"]
