
"Copy Attributes" based workflow


Paolo DE LUCIA

Jan 31, 2025, 11:29:04 AM
to gaffer-dev
Hello Gaffer gurus,
I'm currently testing a workflow in Gaffer based on the "CopyAttributes" node.
Instead of reading a cache (USD or Alembic) coming from animation and shading it, I'd like to read a shaded model and transfer the P attribute onto it from an animated cache (as well as the UVs from the texturing department beforehand).

I'm wondering whether, performance-wise, Gaffer will evaluate these two approaches differently. To avoid transferring P for every vertex of an object that is merely transformed (not deformed), I'm planning to use a custom attribute, to mimic what Alembic seems to do automatically.

I'd like to avoid losing the optimisations of the Gaffer scene reader, but I also see benefits in transferring the P attribute multiple times onto a single shaded character, to generate several instances/variations of that character per shot.

Do you think this route makes sense in terms of performance (memory, frame-change time)?

Daniel Dresser

Jan 31, 2025, 10:20:15 PM
to gaffer-dev
I'm a bit confused by the idea of an attribute "P" ... it sounds like you want the position of the animated cache to drive the position of the static cache - in that case, you'd want to copy the transform of the animated cache. You could do that with a TransformQuery driving the translate/rotate/scale of a Transform - or, if the anim cache for some reason has a "P" attribute instead of an animated transform, you could use an AttributeQuery connected to a Transform node.

Or, if your animated cache is a point cloud, and you'd like to place the shaded model at each point in an animated point cloud, the Instancer node does this.

If you really want to write a "P" attribute onto your objects, you should be able to do that too ... but I don't yet understand why you would want to do that instead of actually moving the object with its transform.

-Daniel

Paolo DE LUCIA

Feb 1, 2025, 6:42:02 AM
to gaffer-dev
Maybe I wasn't clear enough, but when you get a world-space deformed Alembic cache from animation, I don't see another way to transfer the geometry from the cache to a static model. There is no transform you can use, as the cache only contains point positions.
I'll take the time to prepare a scene to illustrate my thoughts and will post it here on Monday.
Thanks for the answer.

Robert Kolbeins

Feb 1, 2025, 7:24:28 AM
to gaffe...@googlegroups.com
Maybe the MergeScenes node can help you: put the static Alembic in the first input and the animated cache in the second, then look at the MergeScenes settings - object mode "replace", perhaps.

--
You received this message because you are subscribed to the Google Groups "gaffer-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gaffer-dev+...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/gaffer-dev/08c37c33-39d5-475d-aace-3abce3c5ac7fn%40googlegroups.com.

Daniel Dresser

Feb 1, 2025, 1:53:29 PM
to gaffe...@googlegroups.com
Hmm, you're getting worldspace deformed geo from anim ... there are two possibilities:

A) The objects actually need to deform ... then you need to actually deform them with a primvar; there's no getting around it.
B) The objects are rigid. It's unfortunate that they're being authored as if they were deforming in this case. It's possible to work with this, but it's pretty annoying. I would assume you probably need to extract not just position but also rotation from the deformed verts? If we can assume the vertex order is fixed, then you can find 3 vertices that are far apart (either manually or by searching through the points), take their positions in both the static and "deformed" mesh, compute the matrix that maps between the two sets of 3 points, and then apply that matrix as a transform. It should be possible to implement this using Python in Gaffer, but it could be a bit fiddly.
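The 3-point idea above can be sketched with plain vector math. This is an illustrative sketch, not Gaffer API: it assumes the motion really is rigid and the three chosen vertices are not collinear.

```python
# Recover a rigid transform (R, t) from three corresponding vertices of
# the static and "deformed" meshes. Exact only if the motion is rigid.

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def columns(c0, c1, c2):
    # Build a 3x3 matrix (row-major) from three column vectors.
    return [[c0[i], c1[i], c2[i]] for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def mat_inv(A):
    # 3x3 inverse via the adjugate; fine for a well-conditioned frame.
    det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
         - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
         + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
    cof = [[A[(i + 1) % 3][(j + 1) % 3] * A[(i + 2) % 3][(j + 2) % 3]
          - A[(i + 1) % 3][(j + 2) % 3] * A[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / det for j in range(3)] for i in range(3)]

def rigid_transform(static_pts, deformed_pts):
    """Rotation R and translation t such that deformed = R @ static + t."""
    a0, a1, a2 = static_pts
    b0, b1, b2 = deformed_pts
    # Two edge vectors plus their cross product give a full 3D frame.
    U = columns(sub(a1, a0), sub(a2, a0), cross(sub(a1, a0), sub(a2, a0)))
    V = columns(sub(b1, b0), sub(b2, b0), cross(sub(b1, b0), sub(b2, b0)))
    R = mat_mul(V, mat_inv(U))  # maps the static frame onto the deformed one
    t = sub(b0, mat_vec(R, a0))
    return R, t
```

R and t could then drive a Transform node's matrix each frame (via an expression), leaving the mesh itself static.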


John Haddon

Feb 3, 2025, 4:55:11 AM
to gaffe...@googlegroups.com
Is it possible that the main source of confusion here is that in Gaffer primitive variables (properties of an individual geometric primitive) are distinct from attributes (properties of any location, which can be inherited)? If so, I think you would just want to use a CopyPrimitiveVariables node to copy "P" from your deforming animated geometry, instead of a CopyAttributes node.
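In node-graph terms that setup is small. A minimal sketch of it as a Gaffer Python script follows - this requires a Gaffer environment to run, the file paths and filter path are hypothetical, and the plug names are from memory, so double-check them against your Gaffer version:

```python
# Sketch only: a static shaded model receiving animated "P" via
# CopyPrimitiveVariables. Paths are illustrative placeholders.
import IECore
import Gaffer
import GafferScene

script = Gaffer.ScriptNode()

script["static"] = GafferScene.SceneReader()   # shaded, static model
script["static"]["fileName"].setValue( "/path/to/model.usd" )

script["anim"] = GafferScene.SceneReader()     # deforming anim cache
script["anim"]["fileName"].setValue( "/path/to/anim.usd" )

script["filter"] = GafferScene.PathFilter()
script["filter"]["paths"].setValue( IECore.StringVectorData( [ "/character/..." ] ) )

script["copy"] = GafferScene.CopyPrimitiveVariables()
script["copy"]["in"].setInput( script["static"]["out"] )
script["copy"]["source"].setInput( script["anim"]["out"] )
script["copy"]["filter"].setInput( script["filter"]["out"] )
script["copy"]["primitiveVariables"].setValue( "P" )  # copy only positions
```

Shaders, UV injection and variants would sit upstream of the copy, exactly as described in the thread.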

Paolo DE LUCIA

Feb 3, 2025, 7:50:11 AM
to gaffer-dev
Exactly, John.
That's what I'm currently doing - I just wrote the wrong node name... sorry for that. These two terms have been floating around for 20 years, and I suffered so much with RenderMan that I tried to forget the term "primvar" as soon as I could :)...

In terms of evaluation, I don't know how fast Gaffer can process this copy compared to reading directly from a cache... especially if the object is around 10 million polygons.
I used that workflow on a feature film not so long ago, and it proved convenient as long as the topology doesn't change. But it wasn't in Gaffer.

The purpose here is just to use the animation as a source for the point positions, like any other department would do by exporting its own data.
So in Gaffer, the top node would be the sculpted model, then injected UV sets (UV primvars) from texturing, then shaders/variations from lookdev, and finally the P primvar from the anim cache (with a CollectScenes in case there are several instances of this character in the scene).
That would allow the departments to work better in parallel, as each could author its own UV sets to be injected into the Gaffer scene when needed. The anim cache wouldn't have to contain them, and wouldn't need to be rebuilt in the case of a change.

And again... sorry for the misunderstanding !

Robert Kolbeins

Feb 3, 2025, 8:29:20 AM
to gaffe...@googlegroups.com
Have you looked into USD layers and clips?
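For context on that suggestion: USD sublayers let each department contribute its own file to one composed stage, with earlier entries in the list winning over later ones. A minimal shot layer might look like this (file names are made up for illustration):

```usda
#usda 1.0
(
    subLayers = [
        @./anim.usd@,
        @./lookdev.usd@,
        @./model.usd@
    ]
)
```

Here the anim layer's P opinions override the model's rest positions, with lookdev's material bindings in between. Value clips (Usd.ClipsAPI) cover the case where the animation is stored as per-frame files.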


John Haddon

Feb 3, 2025, 9:40:46 AM
to gaffe...@googlegroups.com
On Mon, Feb 3, 2025 at 12:50 PM Paolo DE LUCIA <pao...@gmail.com> wrote:
That's what I'm currently doing - I just wrote the wrong node name... sorry for that.

Ah, cool. OK, we're on the same page now!
 
In terms of evaluation, I don't know how fast Gaffer can process this copy compared to reading directly from a cache... especially if the object is around 10 million polygons.

Gaffer doesn't compute at the level of granularity you might ideally want here. At each location there is a discrete compute for each of the transform, the geometry and the attributes. So in your scenario you will be paying for the topology of the geometry to be loaded redundantly, by both the static SceneReader and the one for the animation. But I'd encourage you to measure it and see how it stacks up in practice before ruling it out. As Robert suggested in his reply, you might want to look at doing the composition in USD before loading into Gaffer - if you do, I'd be very interested to hear how the two approaches measure up against each other.
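Measuring this is straightforward with a best-of-N wall-clock harness. A generic sketch follows - the two workloads are placeholder callables; in a real test, each would pull on one of the two scene setups at a new frame:

```python
# Best-of-N wall-clock comparison of two workloads. The callables are
# stand-ins; in practice each would trigger a scene evaluation (e.g. a
# frame change) for one of the two setups being compared.
import time

def best_of(fn, runs=5):
    """Fastest of `runs` timed calls to fn(), in seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

def compare(label_a, fn_a, label_b, fn_b):
    ta = best_of(fn_a)
    tb = best_of(fn_b)
    print(f"{label_a}: {ta:.4f}s   {label_b}: {tb:.4f}s   ratio: {tb / ta:.2f}x")
    return ta, tb
```

Usage would be something like `compare("direct read", readSetup, "copy primvars", copySetup)`, where the two (hypothetical) callables evaluate the respective scenes.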

Cheers...
John

Paolo DE LUCIA

Feb 3, 2025, 11:15:10 AM
to gaffer-dev
Indeed, that may be a good test.
I'll take a look at how to compose this USD scene. We don't have much experience with it at this time, but as it's on the roadmap, it's worth the effort.
Thanks, Robert, for the advice. Hopefully I'll come back later with something to compare.

Daniel Dresser

Feb 3, 2025, 1:20:19 PM
to gaffer-dev
Apologies for the confusion - I was confused both by the term "attribute", which Gaffer uses differently, and because when I think about optimizing geometry reuse, I'm usually thinking about actually instancing things in the renderer. In this case, if the copies are being deformed, we can't benefit from instancing in the renderer at all, so there aren't any significant performance gains to be made once rendering starts.

Compared to the renderer needing to build independent acceleration structures, the cost of manipulating the data beforehand should be fairly minimal however you do it. CopyPrimitiveVariables should be fine. (John was worried about an extra copy of the topology being loaded, but if that really turns out to be a problem, you could store your animation as just a Points primitive, with no topology - I don't think it will be significant, though, and I wouldn't mess around with that unless testing indicated it was necessary.)

Anyways, I'd certainly be interested if you did run a test using USD layers - I don't expect USD layers to be useful in this application, but if you did see better performance with that approach, that's definitely something we should look into fixing.

-Daniel

Martino Madeddu

Feb 3, 2025, 6:36:07 PM
to gaffe...@googlegroups.com
This actually sounds a lot like what I have been exploring recently.
I too wanted to keep the ability to layer animation caches (ideally with no topology) over a model - and, in the case of USD, also add a lookdev file - so I started exploring USD in Gaffer, doing the composition and layering as a Python expression in the SceneReader.

Although there are things to refine and improve, it is all working nicely.
I have made a little USD loader Box where you can:
- load the static model (containing all static primvars)
- load the anim cache
- load the lookdev file
- set the root prim location to insert the references into

I can't share the files or the Box at the moment, but if you're interested I can prep a shareable version in the next couple of days.
I'd also love to see whether you find any speed differences or overhead (if you run some tests).
I've attached some screengrabs to give an idea - not sure if helpful.

Attachments: usd_model1.png, usd_layer_anim1.png, usd_layer_lookdevFile1.png
cheers,
Martino


Martino Madeddu | VFX Supervisor
Untold Studios.
White Collar Factory | 1 Old Street Yard | London | EC1Y 8AF
+44 7526 594 487 | +44 208 016 6111
untoldstudios.tv



Message has been deleted

Robert Kolbeins

Feb 4, 2025, 10:59:50 AM
to gaffer-dev
If your 10-million-polygon asset comes with a proxy, you could use the USD purpose attribute for visual reference.
Usd.ClipsAPI can also come in handy when dealing with animation caches.
I have a little workflow demo of how I deal with assets to instance in a USD composition where I need to loop animation from specific frames.
-R

Paolo DE LUCIA

Feb 4, 2025, 11:04:44 AM
to gaffer-dev
Thanks Martino - I see we're all trying to find the right balance between the different ingredients in our VFX recipe. It's not obvious how USD fits into the Gaffer environment, because some concepts are already managed by Gaffer in a more visual way.

To continue the discussion, here are the tests I went through:

- On one side, I built a USD cache of 50 hairy flies, weighing 3.6 GB in total. From there, I exported a new cache after converting the meshes to points (984 MB of particles - normals, UVs and colorSets removed).
Both caches are in USD format.
When displaying the result, changing the frame in Gaffer evaluates roughly twice as fast with the particles, but that only seems to be related to the display, not the parsing of the cache.
Indeed, copying the P primvar from the two different sources (particles or full meshes) onto a static mesh shows no difference at all in evaluation time.
So the only benefit of keeping the point positions as particles seems to be the cache size (nobody wants to look at their scene as particles, right?).

- Displaying the USD cache in full detail across the timeline took 46s. Displaying the static models after copying the P primvar from the USD cache took 50s.
Here I'm really surprised by the copy speed - roughly ten percent of the total evaluation time. I'm just wondering whether this will scale proportionally with the polygon count. I'm used to Maya, which totally collapses when reaching a certain amount of data... but the two applications are from different centuries.

- Adding lookdev, UVs and variants has no impact on frame changes.

So for me, transferring the positions that way is still a possibility. The next step is to check how it translates to render performance (scene prep, render time and memory).

Paolo DE LUCIA

Feb 4, 2025, 11:19:55 AM
to gaffer-dev
Thanks Robert - I took that route in Gaffer, indeed!
I'll gladly check your link once the firewall lets me...
I also noticed that Gaffer doesn't update the bounding box of my animated USD cache in the viewport - I have to display the full meshes to get the right position. I don't know if it's specific to my scene, but it may be something the Gaffer devs would want to look into.