Arnold and Alembic in Gaffer


Espen Nordahl

Sep 4, 2014, 10:50:57 AM
to gaffe...@googlegroups.com
Hi guys,

So far I've been tinkering with Gaffer for a bit, both putting little scripts together and playing around with writing nodes and altering existing ones. It all looks promising enough that I'd love to help take it to a state where we can actually push shots through Gaffer using Arnold. Right now, manipulating cameras, lights and objects is too clunky for us to move over for actual lighting or lookdev, so for now I want to focus on shot assembly only. That is, loading in Alembics, .ass files and cameras, setting up render settings and producing .ass files at the end. From there our studio pipe can take over for the actual farm integration and rendering.

From what I can tell the Arnold integration is coming along nicely, but it's not quite mature enough for production compared to something like MtoA. I'd be happy to try and push this along to meet our needs at Storm, but before I get started it would be super helpful to get some understanding of the general philosophies of the Gaffer API, particularly related to how it interacts with Arnold.

I've been playing around and reading up on what I could find over the last couple of weeks, and have written down some questions along the way, pasted below. Some of these things I already have some understanding of from just poking my nose into the code, others I tried but couldn't make sense of, and some I haven't spent any time looking into at all yet, so I apologize if some of the questions are particularly stupid. Bottom line is that any answer, ranging from "rtfm" or a nudge towards some part of the code all the way to detailed explanations, would be highly appreciated. Also, let me know if any of this makes more sense to discuss off list, or on Skype/gmail chat/your live communication tool of choice.

Oh, and if there's anything I haven't asked that I probably should have, I'd love an answer to that as well ;)

Anyway, here goes:

* General Arnold pipe - how an .ass file is written, how the procedural works

So, from what I can tell, the anatomy of a Gaffer .ass file is pretty much a list of display drivers, render globals, cameras, lights etc., and then a call to ieProcedural.so for the geometry, which is passed a path to the Gaffer file/graph.

In a general sense, how does Gaffer generate the .ass file? Also, what does the ieProcedural do/how does it work? What does a typical code path look like for a small/contrived graph?

* Writing stuff to the .ass file

This kinda falls under the previous question, but more specifically I'm very interested in how a node can and should/shouldn't contribute content to the .ass file. The primary use case for now is for arnold procedurals, but if we do end up moving more of our pipe over I see it extending to lots of other stuff in the future.

* Shaders - where do they all come from

We're using our own branch of alShaders for shading, and I'd love to make that as seamless as possible in Gaffer. I got our shaders to load in the Gaffer interface by adding a path to the shader binaries to $ARNOLD_PLUGIN_PATH, much like in MtoA. Out of curiosity, where can I find the code that actually looks for shaders? Also, does Gaffer support reading shader UI files of the same type as either MtoA, SitoA or HtoA? If so, how do I make sure Gaffer knows about where these are located? If not, I'd be happy to write a Gaffer UI file generator into alShaders.

* Binding shaders to Alembic files - VFX shot/asset pipeline

Like a lot of other VFX shops, our shot pipeline's data flow can roughly be summed up as animators and TDs writing geo caches to disk as Alembic files, and lighting TDs picking these up, binding shaders and render attributes to each shape as defined in some published lookdev, and passing it all along to the renderer. How would I do this shader/parameter binding at the shot level in Gaffer?

* Gaffer view - Arnold lights, MtoA-like .ass viewer

Where is the viewport representation of a node defined? Right now the Arnold light sources all seem to be represented by a point-light-ish looking view, which makes them hard to orient correctly. I'd also love to be able to view an .ass file used in a procedural much like in MtoA, where you can swap between bounding box, per-shape bounds, wireframe and vertex display, and the bounding box part can be cached in a separate file for faster access.

Thanks,
-Espen

John Haddon

Sep 4, 2014, 11:55:42 AM
to gaffe...@googlegroups.com
It would be really great to get Gaffer pushed along in its support for Alembic and Arnold - the goal is to provide good support for multiple renderers, but things have got a little 3Delight-centric because development is currently driven mostly by Image Engine production requirements.

I'll see if I can answer your questions and we can take it from there...

Ass file generation
---------------------

Gaffer talks to all renderers through an abstraction layer provided by Cortex; in the case of .ass file generation, the renderer abstraction is provided by IECoreArnold::Renderer(). For batch rendering (or batch .ass generation), there's a single Gaffer::ExecutableRender base class, which uses the functions in GafferScene/RendererAlgo.h to output the basics of the scene via this abstraction layer. GafferArnold.ArnoldRender then derives from that, implementing _createRenderer() to return an instance of IECoreArnold::Renderer(), set up to write an .ass file. Finally, it outputs the procedural, which will be used at render time to generate all the geometry.
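To make that concrete, here's roughly what driving the Cortex abstraction directly looks like from Python - a minimal sketch, assuming the constructor-takes-a-filename convention that IECoreRI.Renderer uses for RIB output, with a made-up output path :

    import IECore
    import IECoreArnold

    # Constructing the renderer with a filename puts it in scene
    # description mode, so the calls below are serialised to an .ass
    # file rather than rendered immediately.
    renderer = IECoreArnold.Renderer( "/tmp/example.ass" )

    with IECore.WorldBlock( renderer ) :
        # Geometry goes through the same generic calls that
        # GafferScene/RendererAlgo.h and the procedurals use.
        renderer.sphere( 1, -1, 1, 360, {} )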


Procedural
------------

The procedural is a call to "ieProcedural.so", which is provided by Cortex and can load any Python-based procedural and run it via the Cortex renderer abstraction layer. In Gaffer though, we just use this as a very simple wrapper that launches a GafferScene::SceneProcedural to do all the heavy lifting - this is all in C++ and threads pretty well, in contrast to our old Python procedurals.

SceneProcedural : 


The SceneProcedural simply outputs the attributes, transform and geometry for a scene location, and then outputs a new SceneProcedural for each child location. The renderer then chooses when to open the children to continue expanding the scene. When you're raytracing, you can expect the procedurals to all get opened pretty rapidly as rays get thrown around in all directions, but the multiple procedurals are still useful for letting the renderer multithread the scene evaluation.
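As a toy illustration of that expansion pattern - just a sketch of the idea in Python, not the actual SceneProcedural source; the scene and renderer interfaces here are imaginary :

    # A toy sketch of deferred, per-location procedural expansion.
    # "scene" and "renderer" stand in for the real interfaces.
    class LocationProcedural :

        def __init__( self, scene, path ) :

            self.__scene = scene
            self.__path = path

        def render( self, renderer ) :

            # Emit this location's own attributes, transform and geometry...
            renderer.setAttributes( self.__scene.attributes( self.__path ) )
            renderer.concatTransform( self.__scene.transform( self.__path ) )
            o = self.__scene.object( self.__path )
            if o is not None :
                renderer.outputObject( o )

            # ...then hand each child off to a procedural of its own, so
            # the renderer chooses when (and on which thread) to expand it.
            for childName in self.__scene.childNames( self.__path ) :
                childPath = self.__path.rstrip( "/" ) + "/" + childName
                renderer.procedural( LocationProcedural( self.__scene, childPath ) )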

Since each procedural deals with just one location in the scene, it just sets up the right Context for querying that location, and then pulls on the output plug for the node being rendered to get at the data it wants. Because the Gaffer graph is thread safe, and designed to be pulled on in multiple contexts concurrently, it maps well to this threaded procedural expansion.
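The query side of that looks roughly like this from Python - a minimal sketch, with a made-up cache path and location name :

    import Gaffer
    import GafferScene

    reader = GafferScene.SceneReader()
    reader["fileName"].setValue( "/path/to/cache.scc" )  # hypothetical path

    # Queries are made in a Context - the procedurals do exactly this,
    # one location (and one Context) per procedural.
    with Gaffer.Context() as context :
        context.setFrame( 10 )
        bound = reader["out"].bound( "/root/someLocation" )      # hypothetical location
        transform = reader["out"].transform( "/root/someLocation" )
        obj = reader["out"].object( "/root/someLocation" )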

There's a decent tutorial for the scene query API here :


Shaders
---------

You can see the shader loading code here :


Gaffer has strong support for UI definition via RenderMan shader annotations, but poor support for Arnold shader UIs. Gaffer also has its own metadata system, which it uses for defining its own UIs. The plan is to move the goodies out from the RenderMan module into the generic UI/Metadata system and then be able to reuse it for Arnold/OSL too. If we could collaborate on that, with you taking care of the mapping of Arnold->Gaffer metadata, that would be great.
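For a flavour of the metadata system, a registration might look something like this - purely a sketch, and the exact call name, key and parameter layout should all be treated as assumptions rather than the real API :

    import Gaffer
    import GafferArnold

    # Hypothetical: register a friendlier label for an Arnold shader
    # parameter. ArnoldShader keeps its shader parameters under "parameters".
    Gaffer.Metadata.registerPlugValue(
        GafferArnold.ArnoldShader, "parameters.Kd", "label", "Diffuse Weight"
    )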

I know about Arnold's native metadata format, but you mentioned UIs specific to MtoA, SitoA and HtoA - are these different?

Shader/Attribute Binding
----------------------------

This is all done with the ShaderAssignment and Attributes nodes, in conjunction with Filter nodes to choose locations. You can place a bunch of such assignments in a Box, and then export them out for referencing into shots, where they appear as a single node. I presume you were asking about something more specific though, but it's not clear to me what?
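In script form, a minimal assignment looks something like this - a sketch, where the cache path and locations are made up, the shader assumes alShaders is on ARNOLD_PLUGIN_PATH, and the filter's output plug has gone by both "match" and "out" depending on version :

    import IECore
    import Gaffer
    import GafferScene
    import GafferArnold

    reader = GafferScene.SceneReader()
    reader["fileName"].setValue( "/path/to/assetA.scc" )  # hypothetical cache

    shader = GafferArnold.ArnoldShader()
    shader.loadShader( "alSurface" )  # assumes alShaders on ARNOLD_PLUGIN_PATH

    # The filter chooses the locations that receive the assignment.
    pathFilter = GafferScene.PathFilter()
    pathFilter["paths"].setValue( IECore.StringVectorData( [ "/assetA/geo/*" ] ) )

    assignment = GafferScene.ShaderAssignment()
    assignment["in"].setInput( reader["out"] )
    assignment["shader"].setInput( shader["out"] )
    assignment["filter"].setInput( pathFilter["out"] )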

Viewer
--------

This is currently the weakest part of Gaffer, simply because it's not getting pushed much at Image Engine, where we're embedding Gaffer in Maya and using the Maya viewport for light manipulation. The viewport drawing actually uses the same SceneProcedural as is used for batch rendering - this time used with an IECoreGL::Renderer backend which generates the GL scene for drawing. The point light and camera drawing are just hacked into the SceneProcedural at the moment, and there's no way of changing that code path per node.

I have started work on this side of things a little in my spare time (starting with writing a translate manipulator), but it's slow going. As part of this work I envisaged a simple little factory mechanism for associating viewport drawing code with object types within the scene.

May I ask why you want to embed .ass files in the scene still when you could be keeping everything live with Alembic caches and nodes for processing them? One of the appeals of a graph based system is that everything remains dynamic, and there's nothing baked out inside .ass or .rib files, so I'm curious as to what benefit you see there...

Other things
--------------

- The IECoreArnold::Renderer backend currently doesn't support motion blur, so that would definitely need addressing.
- Gaffer is able to compute bounding boxes for any location in the scene at any node, which means that the AlembicSource node must do the same. Alembic files only optionally store bounding boxes for each location, making them expensive to compute if they haven't been stored. So you'd want to make sure you did store bounds per location (as Image Engine's scc format does automatically), or we'd add some modes to Gaffer to skip computing bounds when asked nicely, to avoid the overhead.

Hope that helps with at least some of your questions - keep 'em coming!
Cheers…
John



Espen Nordahl

Sep 4, 2014, 4:39:10 PM
to gaffe...@googlegroups.com
Hi John,

This is absolutely brilliant! I'll have to digest your answers and tinker around some more before getting back to you on most of this, but for now let me answer your follow-up questions:


> I know about Arnold's native metadata format, but you mentioned UIs specific to MtoA, SitoA and HtoA - are these different?

We generate a few different files for alShaders. I believe it's something along the lines of this: .mtd files, which both MtoA and HtoA use (although with a handful of application-specific parameters for each), and .spdl files, which I believe are for SitoA. There's also a <shaderName>Template.py for the Attribute Editor in Maya (mostly for things like file browsers or other types of UIs that aren't supported in .mtd).

To support all these, we write the UI definition as Python files through a very simple API, and there's a uigen.py which generates all the different UI files at build time. If you'd rather Gaffer is supplied with Gaffer-specific UI/metadata files, it wouldn't be too hard to write an additional exporter. It's up to you whether you'd rather I do that on the alShaders end, or work on improving Gaffer's ability to interpret the existing Arnold UI files.

> I presume you were asking about something more specific [related to shader/attribute assignment] though, but it's not clear to me what?

Sorry, my question was pretty vague there. What I specifically want to hear about is the intended workflow for managing look development for an asset at the asset level - binding usually a dozen or so shaders and lots of attributes to some thousands of shapes (so ideally in groups of shapes or using wildcards for names). Secondly, how to then grab this definition at the shot level, but this time bind it to an Alembic sequence containing the same set of shape names.

> May I ask why you want to embed .ass files in the scene still when you could be keeping everything live with Alembic caches and nodes for processing them? One of the appeals of a graph based system is that everything remains dynamic, and there's nothing baked out inside .ass or .rib files, so I'm curious as to what benefit you see there...

This comes purely from the goal of having a pipeline where Gaffer and MtoA live side by side, somewhat interchangeably, at first. I think there's too much missing from Gaffer to make a clean switch anywhere in the near future, but there's potential for filling a niche in our pipeline rather quickly, for a limited set of use cases where MtoA is particularly clunky/bad. I'm hesitant to move asset look development over to Gaffer at first - as that burns the bridge of using MtoA as a fallback - but would rather start off by exporting assets as shaded .ass files from Maya, and use Gaffer only at the shot level for scene assembly/management. This of course only really makes sense for "static" assets where the only animation is pre-baked at the asset level, such as set extensions - which we happen to do quite a lot of. Maya happens to be terrible for managing lots of shots with only one or two assets, where the light rig, camera and render settings are the only things changing between shots (and often just the camera and frame range).

Exporting pre-shaded .ass files (and sequences for things like trees, crowds, atmos and chimney smoke) at the asset level and using these in all shots is our current workflow for these kinds of shots, so this would mean Gaffer could slide pretty elegantly into our pipeline as a first step. If that turns out to be a success we could move forward from there. The biggest hurdle after that, I believe, is the lacking toolset for lighters.

I'll find some time this weekend to dig into everything else you've explained.

Thanks,
-Espen

John Haddon

Sep 5, 2014, 5:26:54 AM
to gaffe...@googlegroups.com
> We write the UI definition as Python files through a very simple API, and there's a uigen.py which generates all the different UI files at build time. If you'd rather Gaffer is supplied with Gaffer-specific UI/metadata files, it wouldn't be too hard to write an additional exporter. It's up to you whether you'd rather I do that on the alShaders end, or work on improving Gaffer's ability to interpret the existing Arnold UI files.

Cool - thanks for explaining things. I think I'd like to concentrate on good support for the standard .mtd files at first, as I presume uigen.py is specific to alShaders?

> What I specifically want to hear about is the intended workflow for managing look development for an asset at the asset level - binding usually a dozen or so shaders and lots of attributes to some thousands of shapes (so ideally in groups of shapes or using wildcards for names). Secondly, how to then grab this definition at the shot level, but this time bind it to an Alembic sequence containing the same set of shape names.

Right, cool. So, the basic idea is you'd do your lookdev on a Gaffer scene containing only the asset being lookdeved, so you'd have a hierarchy starting from "/assetA" for instance. You can use Set nodes to define groups of objects by name (including wildcards), and then use a SetFilter further down your lookdev graph to apply shaders/attributes to everything in a set. Or you can use a PathFilter, which uses the same wildcards as the sets, but is a bit more ad-hoc - the nice thing about the sets is that they separate out the grouping of objects from the assignments, and the sets flow through the graph below. You'd put all your assignments and set definitions inside a Box (a subnetwork) and save that out for later referencing - you can also choose to expose certain parameters externally if you'd like them to be available at the shot level.
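A sketch of the set-based flavour of that, with made-up names throughout, and plug names assumed from the description above :

    import IECore
    import Gaffer
    import GafferScene
    import GafferArnold

    asset = GafferScene.SceneReader()
    asset["fileName"].setValue( "/path/to/assetA.scc" )  # hypothetical cache

    # Define a named group of objects by wildcarded paths; the set
    # then flows down the graph with the scene.
    bodySet = GafferScene.Set()
    bodySet["in"].setInput( asset["out"] )
    bodySet["name"].setValue( "bodyParts" )
    bodySet["paths"].setValue( IECore.StringVectorData( [ "/assetA/geo/body*" ] ) )

    bodyShader = GafferArnold.ArnoldShader()
    bodyShader.loadShader( "alSurface" )  # assumes alShaders on ARNOLD_PLUGIN_PATH

    # Further down the lookdev graph, assign by set membership.
    setFilter = GafferScene.SetFilter()
    setFilter["set"].setValue( "bodyParts" )

    assignment = GafferScene.ShaderAssignment()
    assignment["in"].setInput( bodySet["out"] )
    assignment["shader"].setInput( bodyShader["out"] )
    assignment["filter"].setInput( setFilter["out"] )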

When you want to render a shot, you'd bring in your asset animation with a SceneReader or AlembicSource, and then below it bring in a Reference node with the lookdev setup in it. Since the paths match, the lookdev setup will apply to the animation automatically. You'd have several such setups in parallel, one per asset, and then feed them into a Group node to merge them into one scene for rendering. If you'd created sets in your lookdev, these would still be available in the combined scene, and the paths in them are automatically remapped to account for the name change introduced by the Group. So you can then still use them to assign shot-specific tweaks further down your graph.
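Sketched as a script - file paths hypothetical, and this assumes the referenced lookdev Box exposed an "in" scene plug; Group input naming has also varied between versions :

    import Gaffer
    import GafferScene

    script = Gaffer.ScriptNode()

    # The animated cache for one asset...
    script["assetA_anim"] = GafferScene.AlembicSource()
    script["assetA_anim"]["fileName"].setValue( "/shots/sh010/assetA.abc" )  # hypothetical

    # ...with the referenced lookdev applied below, matching by path.
    script["assetA_look"] = Gaffer.Reference()
    script["assetA_look"].load( "/assets/assetA/lookdev.grf" )  # hypothetical reference
    script["assetA_look"]["in"].setInput( script["assetA_anim"]["out"] )

    # Merge the per-asset branches into one scene for rendering;
    # additional assets would connect to "in1", "in2" and so on.
    script["group"] = GafferScene.Group()
    script["group"]["in"].setInput( script["assetA_look"]["out"] )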

We're actually doing something a little different at Image Engine because the assets are grouped in a Maya scene before feeding into Gaffer, but what I've described above is the way I imagine it working in standalone scenarios.

> Exporting pre-shaded .ass files (and sequences for things like trees, crowds, atmos and chimney smoke) at the asset level and using these in all shots is our current workflow for these kinds of shots, so this would mean Gaffer could slide pretty elegantly into our pipeline as a first step.

Cool - makes sense. We don't yet support referencing .ass files within a Gaffer scene, but we have a ticket in the current milestone for supporting arbitrary 3rd party procedurals - it would be entirely natural to implement .ass embedding the same way.


Cheers…
John

ben toogood

Sep 5, 2014, 5:46:38 AM
to gaffe...@googlegroups.com
>> We write the UI definition as Python files through a very simple API, and there's a uigen.py which generates all the different UI files at build time. If you'd rather Gaffer is supplied with Gaffer-specific UI/metadata files, it wouldn't be too hard to write an additional exporter. It's up to you whether you'd rather I do that on the alShaders end, or work on improving Gaffer's ability to interpret the existing Arnold UI files.

> Cool - thanks for explaining things. I think I'd like to concentrate on good support for the standard .mtd files at first, as I presume uigen.py is specific to alShaders?

Just to join the thinking up, here's the ticket discussing how .mtd files might be utilised. When I had a quick look earlier in the year, it seemed like you'd want to add extra info to existing .mtd files in order to make full use of the Gaffer UI elements...
Ben





--
ben toogood
vimeo | linkedin

Espen Nordahl

Sep 5, 2014, 5:50:58 AM
to gaffe...@googlegroups.com
Sweet, that makes sense. (And yeah, .ass embedding should be very straightforward once there's procedural support. I believe the Arnold API has support for grabbing the points/geometry/bounding box inside the .ass, which will be handy for the viewport display once nodes can supply their own, as discussed in your first reply.)
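Something along these lines is what I had in mind - a rough sketch against the Arnold Python bindings, with a made-up file path :

    from arnold import *

    AiBegin()

    # Load just the shape nodes from the archive.
    AiASSLoad( "/path/to/asset.ass", AI_NODE_SHAPE )  # hypothetical path

    # Walk the shapes - enough to drive a per-shape bounds/wireframe view.
    it = AiUniverseGetNodeIterator( AI_NODE_SHAPE )
    while not AiNodeIteratorFinished( it ) :
        node = AiNodeIteratorGetNext( it )
        print( "%s (%s)" % ( AiNodeGetName( node ), AiNodeEntryGetName( AiNodeGetNodeEntry( node ) ) ) )
    AiNodeIteratorDestroy( it )

    AiEnd()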

The uigen stuff is indeed alShaders specific, so having a solution that integrates well with the shaders library that ships with Arnold makes a lot of sense.

I think from here I'll spend another evening wrapping my head around what's in Gaffer and what I need for us to start using it at Storm, based on all your replies, and try to come up with a list we could potentially make into tickets/issues. I'd be happy to take on a bunch of them myself - although I'll likely not be very efficient until I get the hang of how everything works.

Thanks,
-Espen





Espen Nordahl

Sep 5, 2014, 6:06:00 AM
to gaffe...@googlegroups.com
Hi Ben,

> Just to join the thinking up, here's the ticket discussing how .mtd files might be utilised. When I had a quick look earlier in the year, it seemed like you'd want to add extra info to existing .mtd files in order to make full use of the Gaffer UI elements...

That makes sense. From what I can tell that's how HtoA decided to do it - although the source code isn't available for the standard Arnold shaders, so if we want those to look nice and pretty through extra Gaffer-specific attributes we'd have to get Solid Angle in on it (or find another way of editing/extending their .mtd files?). Unless I'm missing something, of course.

For our shaders it would be trivial to add some extra attributes in the .mtd files, though.

Thanks,
-Espen

john haddon

Sep 5, 2014, 7:14:27 AM
to gaffe...@googlegroups.com
I vaguely recall that it's possible to make your own .mtd files that specify metadata for the built-in nodes, so if we needed fancy Gaffer items in addition to the standard ones, we could have a gaffer.mtd file that added things for the standard shaders.
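Something like this, say - a hypothetical gaffer.mtd in Arnold's .mtd syntax, where the "gaffer.*" keys are entirely made up and Gaffer would need to be taught to look for them :

    [node standard]
        [attr Kd]
            gaffer.label STRING "Diffuse Weight"
        [attr Ks]
            gaffer.label STRING "Specular Weight"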

Espen Nordahl

Sep 5, 2014, 7:16:07 AM
to gaffe...@googlegroups.com
Ah, neat. That should do the trick then, I suppose.

