It would be really great to get Gaffer pushed along in its support for Alembic and Arnold - the goal is to provide good support for multiple renderers, but things have become a little 3delight-centric because development is currently driven mostly by Image Engine production requirements.
I'll see if I can answer your questions and we can take it from there...
Ass file generation
---------------------
Gaffer talks to all renderers through an abstraction layer provided by Cortex; in the case of ass file generation, the renderer abstraction is provided by IECoreArnold::Renderer. For batch rendering (or batch ass generation), there's a single Gaffer::ExecutableRender
base class, which uses the functions in GafferScene/RendererAlgo.h to output the basics of the scene via this abstraction layer. GafferArnold.ArnoldRender then derives from that, implementing _createRenderer() to return an instance of IECoreArnold::Renderer
set up to write an ass file. Finally, it outputs the procedural, which will be used at render time to generate all the geometry.
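To make that concrete, here's a minimal sketch at the Cortex level of what ArnoldRender automates - the file name and geometry are just placeholders, and I'm going from memory on the constructor, so do double check against the headers:

    import IECore
    import IECoreArnold

    # Constructing the renderer with an ass file target means that
    # worldEnd() writes the file rather than kicking an actual render.
    renderer = IECoreArnold.Renderer( "/tmp/example.ass" )

    renderer.worldBegin()
    # Any VisibleRenderable can output itself via the abstraction layer.
    IECore.SpherePrimitive( 1.0 ).render( renderer )
    renderer.worldEnd()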
Procedural
------------
The procedural is a call to "ieProcedural.so", which is provided by Cortex and can load any Python-based procedural and run it via the Cortex renderer abstraction layer. In Gaffer though, we just use this as a very simple wrapper that launches a GafferScene::SceneProcedural
to do all the heavy lifting - this is all in C++ and threads pretty well, in contrast to our old Python procedurals.
SceneProcedural:
The SceneProcedural simply outputs the attributes, transform and geometry for a scene location, and then outputs a new SceneProcedural for each child location. The renderer then chooses when to open the children to continue expanding the scene. When you're
raytracing, you can expect the procedurals to all get opened pretty rapidly as rays get thrown around in all directions, but the multiple procedurals are still useful for letting the renderer multithread the scene evaluation.
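If it helps, here's a toy model of that expansion pattern in plain Python (no Cortex involved) - one procedural per location, with the children deferred so the renderer decides when to open them:

    # Toy scene hierarchy : location -> child locations.
    scene = {
        "/" : [ "/a", "/b" ],
        "/a" : [],
        "/b" : [ "/b/c" ],
        "/b/c" : [],
    }

    class LocationProcedural :

        def __init__( self, path ) :
            self.path = path

        def expand( self ) :
            # The real SceneProcedural outputs attributes, transform
            # and geometry for self.path here, via the renderer
            # abstraction, before deferring the children.
            print( "expanding", self.path )
            return [ LocationProcedural( c ) for c in scene[self.path] ]

    # A renderer would expand lazily and across threads; here we just
    # expand everything depth first.
    stack = [ LocationProcedural( "/" ) ]
    while stack :
        stack.extend( stack.pop().expand() )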
Since each procedural deals with just one location in the scene, it simply sets up the right Context for querying that location, and then pulls on the output plug of the node being rendered to get the data it wants. Because the Gaffer graph is thread-safe,
and designed to be pulled on in multiple contexts concurrently, it maps well to this threaded procedural expansion.
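By way of a quick taste (assuming a current build, where GafferScene has a Sphere node), the convenience accessors on the output ScenePlug look like this - each one sets up the appropriate Context internally and pulls on the plug, exactly as the SceneProcedural does:

    import Gaffer
    import GafferScene

    sphere = GafferScene.Sphere()

    print( sphere["out"].childNames( "/" ) )       # locations below the root
    print( sphere["out"].object( "/sphere" ) )     # geometry at a location
    print( sphere["out"].transform( "/sphere" ) )  # local transform
    print( sphere["out"].bound( "/sphere" ) )      # bounding box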
There's a decent tutorial for the scene query API here :
Shaders
---------
You can see the shader loading code here :
Gaffer has strong support for UI definition via RenderMan shader annotations, but poor support for Arnold shader UIs. Gaffer also has its own metadata system, which it uses for defining its own UIs. The plan is to move the goodies out from the RenderMan
module into the generic UI/Metadata system, so they can be reused for Arnold/OSL too. If we could collaborate on that, with you taking care of the mapping of Arnold->Gaffer metadata, that would be great.
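To give a flavour of the target, here's roughly what an Arnold .mtd parser might register into the generic metadata system. The registration call is real, but the plug path and values below are made up, and the exact keys the UI layer reads may differ by version:

    import Gaffer
    import GafferArnold

    # "description" and "layout:section" are keys Gaffer's UI code
    # already understands; the values would come from the .mtd file.
    Gaffer.Metadata.registerValue(
        GafferArnold.ArnoldShader, "parameters.Kd_color",
        "description", "The diffuse colour of the surface"
    )
    Gaffer.Metadata.registerValue(
        GafferArnold.ArnoldShader, "parameters.Kd_color",
        "layout:section", "Diffuse"
    )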
I know about Arnold's native metadata format, but you mentioned UIs specific to MtoA, SitoA and HtoA - are these different?
Shader/Attribute Binding
----------------------------
This is all done with the ShaderAssignment and Attributes nodes, in conjunction with Filter nodes to choose locations. You can place a bunch of such assignments in a Box, and then export them for referencing into shots, where they appear as a single
node. I presume you were asking about something more specific though, but it's not clear to me what.
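In case it's useful though, a minimal assignment network in Python looks something like this (the shader name is just an example, so treat it as a sketch):

    import IECore
    import Gaffer
    import GafferScene
    import GafferArnold

    script = Gaffer.ScriptNode()

    script["sphere"] = GafferScene.Sphere()

    script["shader"] = GafferArnold.ArnoldShader()
    script["shader"].loadShader( "standard" )  # any Arnold shader name

    # The filter chooses which locations receive the assignment.
    script["filter"] = GafferScene.PathFilter()
    script["filter"]["paths"].setValue( IECore.StringVectorData( [ "/sphere" ] ) )

    script["assignment"] = GafferScene.ShaderAssignment()
    script["assignment"]["in"].setInput( script["sphere"]["out"] )
    script["assignment"]["shader"].setInput( script["shader"]["out"] )
    script["assignment"]["filter"].setInput( script["filter"]["out"] )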
Viewer
--------
This is currently the weakest part of Gaffer, simply because it's not getting pushed much at Image Engine, where we're embedding Gaffer in Maya and using the Maya viewport for light manipulation. The viewport drawing actually uses the same SceneProcedural
as is used for batch rendering - this time with an IECoreGL::Renderer backend, which generates the GL scene for drawing. The point light and camera drawing are just hacked into the SceneProcedural at the moment, and there's no way of changing that code
path per node.
I have started work on this side of things a little in my spare time (starting with writing a translate manipulator), but it's slow going due to not having a huge amount of time to devote to it. As part of this work I envisaged a simple little factory mechanism
for associating viewport drawing code with object types within the scene.
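Nothing like this exists yet, but the shape of the factory I have in mind is really no more than this - drawing code registered against type names and looked up per object:

    # Hypothetical - a registry mapping Cortex type names to functions
    # which emit the GL representation for an object.
    _visualisers = {}

    def registerVisualiser( typeName, visualiser ) :
        _visualisers[typeName] = visualiser

    def visualise( obj, renderer ) :
        v = _visualisers.get( obj.typeName() )
        if v is not None :
            v( obj, renderer )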
May I ask why you still want to embed .ass files in the scene, when you could be keeping everything live with Alembic caches and nodes for processing them? One of the appeals of a graph based system is that everything remains dynamic, with nothing
baked out inside .ass or .rib files, so I'm curious as to what benefit you see there...
Other things
--------------
- The IECoreArnold::Renderer backend currently doesn't support motion blur, so that would definitely need addressing (see the first sketch after this list).
- Gaffer is able to compute bounding boxes for any location in the scene at any node, which means that the AlembicSource node must do the same. Alembic files only optionally store bounding boxes for each location, making them expensive to compute
when they haven't been stored. So you'd want to make sure you did store bounds per location (as Image Engine's scc format does automatically), or we'd add some modes to Gaffer to skip bound computation entirely when asked nicely, to avoid the overhead (see the second sketch after this list).
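On the motion blur point : the calls the backend needs to honour are already in the Cortex Renderer interface - motionBegin()/motionEnd() bracketing one transform (or geometry) call per sample time. A stub just to show the call pattern:

    # A stand-in for an IECore.Renderer, purely to show the call
    # pattern that motion blur support implies.
    class StubRenderer :

        def motionBegin( self, times ) :
            print( "motionBegin", times )

        def concatTransform( self, matrix ) :
            print( "concatTransform", matrix )

        def motionEnd( self ) :
            print( "motionEnd" )

    r = StubRenderer()
    r.motionBegin( [ 0.25, 0.75 ] )        # the shutter sample times
    r.concatTransform( "matrix at 0.25" )  # stand-ins for real M44fs
    r.concatTransform( "matrix at 0.75" )
    r.motionEnd()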
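And on the bounds point, a toy illustration of why unstored bounds hurt - without them, a single bound query has to visit every descendant location (transforms omitted for brevity):

    # Toy scene : location -> ( storedBound, childLocations, ownBound ),
    # where bounds are ( min, max ) tuples or None.
    scene = {
        "/" : ( None, [ "/a", "/b" ], None ),
        "/a" : ( ( ( 0, 0, 0 ), ( 1, 1, 1 ) ), [], None ),
        "/b" : ( None, [], ( ( 2, 2, 2 ), ( 3, 3, 3 ) ) ),
    }

    def union( a, b ) :
        if a is None : return b
        if b is None : return a
        return (
            tuple( map( min, a[0], b[0] ) ),
            tuple( map( max, a[1], b[1] ) ),
        )

    def bound( path ) :
        stored, children, own = scene[path]
        if stored is not None :
            return stored  # cheap : a single lookup
        # Expensive : recurse over the entire subtree.
        result = own
        for c in children :
            result = union( result, bound( c ) )
        return result

    print( bound( "/" ) )  # forced to visit /a and /b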
Hope that helps with at least some of your questions - keep 'em coming!
Cheers…
John