Future of Gaffer/Cortex?


Arnon Marcus

Aug 15, 2015, 9:36:48 AM
to gaffer-dev
I'm just starting to learn about these projects (after working through the presentation videos), but I was wondering:
I know they are open source and free to use, but one can't help comparing them to Fabric Engine, especially given FE2's node-graph architecture, and given Fabric Fifty...

I know some think there is only minor overlap and that they are aiming at different things, but I disagree:
The single biggest promise of both is a cross-DCC framework for sharing code and representations between apps - capabilities, standardised scene description and asset management, UI tools, and so on. The second is handling far greater scale, in asset counts and heavy lifting, with more acceptable performance characteristics. The specific use cases of the two core implementations (Gaffer/Cortex and Fabric) that have materialised so far (at least in the open) are circumstantial, temporal, and ultimately inconsequential in the long run - both will end up with similar support and much more overlap within a few years.

The things already built with Fabric Engine can easily overlap what has been done with Gaffer.

And there is one thing Fabric can already do that Cortex/Gaffer don't even aim at yet - easy portability of code to GPU compute.
Fabric has taken a radically different approach by building a new programming language on LLVM, which I think gives them much more potential than Cortex/Gaffer can offer.

Now, I know Image Engine has invested a lot in its internal development - but so have other big studios:
MPC is a prime example of a very large studio shifting away from its internally developed solution in favor of Fabric Engine - which raises the question:
Is Image Engine next?

Another contender is USD being open-sourced by Pixar - it may start with just the spec for the representations, but libraries and tooling will certainly follow, eventually overlapping with Gaffer, at least on the scene-assembly front.
How does that fit into the Cortex/Gaffer equation?

Raising the level of abstraction of in-house development just makes too much sense not to at least consider, so:
Is Gaffer/Cortex going away in the future (at least from Image Engine's perspective)?

John Haddon

Aug 17, 2015, 6:02:00 AM
to gaffe...@googlegroups.com
Hi Arnon,

Good questions. I'll start by saying that I don't have a crystal ball, and the decision is not mine, but that at present Image Engine appears committed to continuing to develop Gaffer and is in fact expanding its use into new areas of the pipeline. Personally, I'd say we're really beginning to see the fruits of our labours now - we're turning around new tools and workflows in short timeframes with a small team, and I don't think that would be possible without Cortex/Gaffer. As you noted in your other mail, we don't publicise ourselves very well, but Gaffer really is at the heart of our pipeline at this point.

I do think we'll see Cortex continuing to be slimmed down over time. When we started on it, there really was very little open-source specific to VFX, and we took too much of a kitchen sink approach. In fact, Gaffer already ignores a lot of Cortex functionality in favour of OpenColorIO and OpenImageIO as part of this slimming strategy. Over time I think Cortex will shrink down towards its core competency of host agnostic geometry representation and renderer backends.

As for LLVM, and inventing new programming languages, Gaffer does actually have some LLVM-optimising-goodness on board, courtesy of OSL. With GafferOSL you can use an OSL network to work on images and to deform objects and define primitive variables, and we intend to extend this support to writing OSL expressions alongside Python expressions. While there are arguments for "one language to rule them all", I actually think Gaffer's approach might not be a bad one. OSL for shader writers and TDs, with a familiar RSLish syntax and just-in-time compilation for the embarrassingly parallel type stuff of image and object manipulation. Python for higher level pipeline scripting where performance isn't such a concern, and where a big user base and wealth of 3rd party modules and support is available. C++ for the totally general purpose core, where again a big user base and a wealth of development tools and libraries exist.
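
To give a flavour of it, here's a rough Python sketch of wiring an OSL shader into a Gaffer scene - the shader name is illustrative, and exact node and plug names may differ between versions:

    import Gaffer
    import GafferScene
    import GafferOSL

    script = Gaffer.ScriptNode()

    # A source scene to operate on.
    script["sphere"] = GafferScene.Sphere()

    # Load an OSL shader onto a node; its parameters become plugs.
    script["noise"] = GafferOSL.OSLShader()
    script["noise"].loadShader("pattern/noise")  # shader name is illustrative

    # Assign the shader to the scene non-destructively.
    script["assign"] = GafferScene.ShaderAssignment()
    script["assign"]["in"].setInput(script["sphere"]["out"])
    script["assign"]["shader"].setInput(script["noise"]["out"])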

I'm not an expert on USD, and am waiting patiently till summer 2016 to learn more, but from what I understand, I would say that it is complementary to Gaffer/Fabric rather than trying to compete with them. Gaffer/Fabric are designed to be powerful and flexible enough to process large scenes, and USD can deliver large scenes to be processed. One thing I'm not yet clear on with USD is what tools it aims to provide (beyond an API) for defining these scenes - it seems to me that the first thing I would want is a graphical means of doing so. I suspect USD doesn't aim to provide a full-blown app for that, so perhaps there is a niche for Gaffer to fill there.

Like I say, I don't have a crystal ball, but I think it's a pretty exciting time for VFX software in general, and I wouldn't rule Gaffer out just yet…

Cheers…
John



koen vroeijenstijn

Aug 17, 2015, 9:00:16 PM
to gaffer-dev
>Over time I think Cortex will shrink down towards its core competency of host agnostic geometry representation and renderer backends.

hmm, so Cortex will shrink to be like Alembic? ;)

cheers,
Koen

John Haddon

Aug 18, 2015, 5:30:42 AM
to gaffe...@googlegroups.com
hmm, so Cortex will shrink to be like Alembic? ;)

For people who like to make cheeky comments, apparently yes. But for people who remember previous conversations on the topic, probably less so ;)

Alembic is great if you want to do something like X->ABC->Y where X and Y are Maya/Houdini/Arnold etc. - it's a good data interchange format. But it doesn't provide a class hierarchy usable as a modifiable in-memory representation of geometry, and tools like AbcExport go direct from Maya to disk. In contrast, Cortex provides classes intended primarily for processing geometry in memory, including the facility for making very lightweight copies, and its conversion processes use this as an intermediate step. This makes it a great representation to use in a procedural data flow like Gaffer's, and also provides the capability to do the kind of hybrid Maya/Gaffer workflow Image Engine has developed and is using effectively in production.
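
To illustrate the lightweight-copy idea in plain Python (just the concept - not Cortex's actual classes or API):

    # Copies share the big primitive-variable arrays; only the small
    # name->value mapping is duplicated, so copying is nearly free.
    class MeshSketch:
        def __init__(self, primvars):
            self._primvars = primvars  # e.g. {"P": <millions of points>}

        def copy(self):
            return MeshSketch(dict(self._primvars))  # arrays are shared

        def set(self, name, value):
            self._primvars[name] = value  # rebinds on this copy only

    points = list(range(1000000))     # stand-in for heavy point data
    a = MeshSketch({"P": points})
    b = a.copy()                      # cheap - no point data copied
    b.set("Cs", [1, 0, 0])            # new primvar on b; a is untouched
    assert "Cs" not in a._primvars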

Further to that, Alembic doesn't present a renderer abstraction layer, whereas Cortex allows you to talk to RenderMan, Arnold, Mantra and Appleseed in a renderer agnostic way. That's really rather handy in developing an app like Gaffer as well. Even if you're not using Gaffer, it means you can write a procedural once and use it in multiple renderers. There are other differences, but I won't labour the point.

It might be that USD does provide what is necessary in this area, in which case perhaps Gaffer should adopt a USD representation internally. That would obviously take some time though, and USD isn't actually released yet. The big premise of Gaffer is "if you combine the best-in-class of all the open source libraries in an open source app it will be awesome", and right now, for the representing-geometry-and-talking-to-renderers component, Cortex is the best fit…

Cheers…
John


Arnon Marcus

Aug 18, 2015, 6:22:57 AM
to gaffer-dev, jo...@image-engine.com
How would you say Gaffer compares with Coral?
I know it's basically abandonware, as Andrea (the lead dev) went to work at FE a few years ago and abandoned the project.
Somehow Coral seems much simpler to use, but I'm just curious - the two projects appeared in the open at roughly the same time and do roughly the same things - are the implementations very different?

Arnon Marcus

Aug 18, 2015, 6:26:55 AM
to gaffer-dev, jo...@image-engine.com
Fair enough.

What about the cool stuff shown in 2014 that was built on top of Gaffer (Jabuka/Caribou) - any plans to open-source it as well, or was it just shown to present the theoretical potential to other studios?

Daniel Dresser

Aug 18, 2015, 12:21:04 PM
to gaffer-dev, jo...@image-engine.com
Coral looks a lot like Gaffer as applied to pipeline automation, which is one of the things we're doing with it.

But I don't see a lookdev/lighting tool in Coral, which is one of the biggest places where we are using Gaffer.

-Daniel

Arnon Marcus

Aug 18, 2015, 12:36:25 PM
to gaffe...@googlegroups.com
If by 'look-dev' you're referring to shading-network definition and IPR, then I guess you're right - but isn't that part also closed source? The way it was presented in the 2014 lecture, it seemed like only the bare-bones node-based framework and UI are part of the open-source portion of Gaffer, and the rest - everything built on top - are just examples of things Image Engine has built 'using' Gaffer in-house. Maybe that's not the case, but that's how I understood the presentation - it wasn't explicit enough about what is and isn't included in Gaffer proper, and what is still proprietary in-house...


Andrew Kaufman

Aug 18, 2015, 12:45:28 PM
to gaffe...@googlegroups.com
Sorry for the confusion - the bulk of those lookdev/lighting tools is most definitely open-sourced as part of Gaffer (and, under the hood, Cortex). If you browse the repositories you'll see modules for RenderMan, Arnold, and Appleseed in Gaffer. Also, check the Appleseed demo videos here, which use exclusively open source nodes.

The proprietary part of what was presented at Siggraph for lookdev/lighting (Caribou and Grizzly) is the Maya integration, with live geometry streaming from Maya to Gaffer, and the beginnings of a hair system. The shaders themselves are proprietary, but Gaffer provides an (open source) means to load any RSL, OSL, or Arnold shader onto a node automatically. The automated shader-ball preview is also proprietary at the moment, though we have discussed pushing it back into Gaffer at some point. But the underlying render nodes it uses (including IPR), and most of the tools to manipulate them, are open source.
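
By way of illustration, the general shape of that automatic loading is simple - introspect the shader's parameters and make a plug for each. This is a made-up plain-Python sketch, not the actual Gaffer code:

    def shader_parameters(shader_name):
        # Stand-in for real introspection, e.g. parsing `oslinfo` output
        # or querying the renderer's API for the named shader.
        return {"Kd": 0.8, "diffuse_color": (1.0, 1.0, 1.0)}

    class ShaderNode:
        def __init__(self, shader_name):
            self.shader = shader_name
            # One plug per declared parameter, using the shader's defaults.
            self.plugs = dict(shader_parameters(shader_name))

    node = ShaderNode("standard_surface")  # hypothetical shader name
    print(node.plugs)  # {'Kd': 0.8, 'diffuse_color': (1.0, 1.0, 1.0)}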

Andrew


Daniel Dresser

Aug 18, 2015, 1:02:29 PM
to gaffer-dev, and...@image-engine.com
Yeah, I think the main reason for some of the lack of clarity is that this stuff is all work-in-progress.

We're still a ways from being in a place where you would want to do full-on lookdev/lighting on a show in standalone public Gaffer, but it's not unimaginable. The two main pieces missing at the moment are:
* light manipulation in Gaffer (visualisers, manipulators, keyframes)
    - currently done at IE in Maya and pulled through our proprietary bridge
    - we're currently doing some initial work on better viewport visualisation of light locations and parameters, at least

* a good shader library
    - our current shader library involves some very messy IE-specific hacks, which make it a poor model for others to build on
    - we'll hopefully be able to bring what's publicly available more in line with what we're using at IE as we work to move IE to an OSL-based shading pipeline

As Gaffer improves to be more useful to people out of the box, we can probably start getting a bit more aggressive on the publicity side as well.

-Daniel

Arnon Marcus

Aug 18, 2015, 1:33:41 PM
to gaffe...@googlegroups.com
I've watched all the videos already. I define look-dev as covering a lot more ground than just the technical ability to render with some render engine - it's lighting tools, shader writing, material authoring, texturing, UVs, all of it together. And if I may ask: why reinvent that wheel? What is the added value of having lookdev outside of the DCC app? I can see the value of streaming the result out of a DCC app to a render engine (and Alembic already supports that pretty well), but the tooling? What is the use case for external look-dev tooling that an internal-to-the-DCC tool set with a stream-out channel "can't" give you?
DCC apps have literally 'decades' of iterations and 'generations' of human-hours poured into these tools; there needs to be some insanely strong reason to migrate that out to something else that, in the short term, is going to take a long time to catch up in usability and robustness.

Daniel Dresser

Aug 18, 2015, 1:47:24 PM
to gaffer-dev
OK, I may be defining lookdev a bit more narrowly than you are.  Things like texture painting and UV'ing are indeed already handled fine by existing tools.

At Image Engine, we have artists assigned to lookdev whose job is to take textures and UV'ed models and build shader networks that will work across any setup and lighting condition. This can mean a lot of procedural texturing, filtering onto different pieces of geometry depending on parameters - lots of stuff that requires non-destructive editing of a scene graph.

I wish the Chappie presentation from Siggraph this year was online - it goes a long way to showing why you might want to do some rather complicated things in lookdev.

For this specific pipeline stage, even in its current, early form, Gaffer is already dramatically more usable and robust than any of the alternatives we've tested. There may be some other good options we haven't tried - Clarisse looks pretty cool from what little I've seen of it. But at IE, our main DCC is Maya, and this is not an area it's strong in.

-Daniel

John Haddon

Aug 18, 2015, 2:09:40 PM
to gaffe...@googlegroups.com
I think you'll find it isn't uncommon for a studio to have some sort of custom approach to this particular problem. Here are a few reasons off the top of my head:

- A need to build shader networks for RSL or OSL, where they may not be well supported in a particular DCC.
- A need to hide scene complexity from the DCC - if geometry is in delayed-load caches and the DCC therefore isn't aware of individual objects, you can't use the DCC's native shader assignment tools.
- A need to do shader assignments procedurally by pattern matching or driven by an attribute, rather than explicitly (see the sketch after this list).
- A desire to separate lookdev from the model and animation, so it's robust to changes coming down the pipe.
- A desire for non-destructive scene edits, where multiple render passes with entirely different setups can coexist peacefully.
- A need to do all this for huge scenes, where deferring evaluation as late in the pipeline as possible is a big win.
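
To make the pattern-matching point concrete, here's a minimal sketch in plain Python - fnmatch-style rules over scene paths, not our actual matcher:

    import fnmatch

    # Ordered rules: first match wins, so list specific patterns first.
    rules = [
        ("/city/buildings/*/windows*", "glass"),
        ("/city/buildings/*",          "concrete"),
        ("/city/*",                    "default"),
    ]

    def shader_for(path):
        for pattern, shader in rules:
            if fnmatch.fnmatch(path, pattern):
                return shader
        return None

    assert shader_for("/city/buildings/bank/windows01") == "glass"
    assert shader_for("/city/buildings/bank/door") == "concrete"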

Of course, if none of this sounds useful, or your particular DCC already does it well, then Gaffer may not be interesting to you. If it is at all interesting though, I would suggest downloading it and giving it a go in its current form before continuing the discussion...

Cheers…
John



Arnon Marcus

Aug 18, 2015, 2:19:09 PM
to gaffe...@googlegroups.com
I must say, I find this view of Maya's node-based procedural material-authoring capabilities really... odd...
Maya had node-based procedural material authoring around 20 years ago. For many years it was probably Maya's single most recognisable feature, as I recall, and what drew me into learning it in the first place about 15 years ago. The new node editor in Maya looks extremely convenient to use, much more so than what Gaffer currently has - hell, even Coral had nicer and more useful features three years ago. Many little things: plug colorization by type (same in Maya); polymorphic nodes that reflect their type state via color, with a half-filled state for insufficiently specialized plugs; dirty state reflected on the node. Or Maya's in-node rendering of a material/map state, updating live as you play around with attributes and connections (for each node, not just a single 'current/final' preview - mostly useful for debugging map and texture nodes, and for seeing how procedural properties flow and affect each node along the way); properly scaling text when zooming; the ability to change the display state of each node individually when needed (collapsed, hidden unused plugs, full/resizable node preview, etc.).
Really, I don't get it at all - maybe Gaffer today looks totally different from what it looked like in the 2014 lecture, but I can only comment on what I actually saw - and it didn't seem anywhere near any of this.


David Minor

Aug 18, 2015, 2:52:44 PM
to gaffer-dev
Yeah, Maya's node graph has certainly improved since we started our big Gaffer development push. I don't think the other points on John's list have been addressed by newer versions of Maya though, and those points were even more important to us than getting a new UI for building shader networks. What we needed when we embarked on this project was scalability - we needed to be able to manage shader assignments, render pass setups etc. in scenes with millions of transforms, and whole-scene processing via a node graph seems to be the best way of doing this at the moment.
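
As a toy illustration of what "whole-scene processing via a node graph" means - pull-based evaluation where each node derives a new scene from its input, so upstream state is never destroyed. Plain Python, nothing like Gaffer's real implementation:

    class Node:
        def __init__(self, input_node=None):
            self.input_node = input_node

        def scene(self):
            upstream = self.input_node.scene() if self.input_node else {}
            return self.process(upstream)

        def process(self, scene):
            return scene

    class Loader(Node):
        def __init__(self, filename):
            super().__init__()
            self.filename = filename

        def process(self, scene):
            # Stand-in for reading a cache from disk, keyed by scene path.
            return {"/asset": {"file": self.filename}}

    class SetAttribute(Node):
        def __init__(self, input_node, name, value):
            super().__init__(input_node)
            self.name, self.value = name, value

        def process(self, scene):
            # Non-destructive: build a new scene instead of mutating upstream.
            return {path: dict(attrs, **{self.name: self.value})
                    for path, attrs in scene.items()}

    load = Loader("model_v001.abc")
    shaded = SetAttribute(load, "shader", "concrete")
    print(load.scene())    # the unedited upstream state is still available
    print(shaded.scene())  # the same scene with the edit applied

Because each node produces its state on demand, inspecting an intermediate result is just a matter of evaluating an upstream node.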

Arnon Marcus

Aug 18, 2015, 3:07:54 PM
to gaffer-dev, jo...@image-engine.com
- A need to build shader networks for RSL or OSL, where they may not be well supported in a particular DCC.
 
Writing a plugin that implements these in an existing DCC is less work than doing the same for your own tool AND writing the tool itself... not to mention the maintenance work a tool would incur. I am no C++ guru by any stretch, and I've only just started learning the Maya API, but it took me less than an hour to understand how to implement a custom node type for a Maya plugin - you have so much less code to write this way. The core of it will have to be written in both cases, but using an existing tool saves you all the boilerplate and architectural headaches.
 

- A need to hide scene complexity from the DCC - if geometry is in delayed-load caches and the DCC therefore isn't aware of individual objects, you can't use the DCC's native shader assignment tools.

True. But Gaffer (or was it Cortex?) already implements a delay-loadable geometry plugin for Maya (unless I completely misunderstood what I was seeing). Why not just store away the material assignment somehow for when the geometry is unloaded? If you're already using Alembic, you don't even have to reinvent a storage schema for that, or even implement it - as I understand it, the Alembic library can support that feature, and I'd guess it would only entail some callback wiring in the plugin. Perhaps I'm over-simplifying, but by itself this isn't a strong justification to go outside Maya - hell, one could even rally the Maya user base to demand that Autodesk implement it themselves, if it's such a common case...

 
- A need to do shader assignments procedurally by pattern matching or driven by an attribute, rather than explicitly.

I write pass-creation scripts for these kinds of things - we're considering a node-based tool for that - but again, why can't it be done as a plugin? Doesn't Maya already have pattern-matching APIs for the DAG? How complex could it be to write something that capitalizes on that, with some callback/event wiring to keep it updating when it needs to?

 
- A desire to separate lookdev from the model and animation, so it's robust to changes coming down the pipe.

Unless you're doing some crazy voxel-based point-cloud transfer of UV coordinates (like I hear Mari does), you're always going to have to redo some work when the model's topology changes underneath you (same for skinning) - and that has nothing to do with Maya per se, and it's not clear to me how moving the tool outside of Maya solves that problem.

 
- A desire for non-destructive scene edits, where multiple render passes with entirely different setups can coexist peacefully.

Not sure what you mean - edits in Maya are non-destructive, for the most part - what kinds of edits are you referring to?
Maya's native render layers let you define as many different passes as you like, each with an entirely different setup.
I think what you mean is a more asset-assembly-aware way of defining how render passes should be 'generated', and having that persist as you add/remove/replace assets and LODs. That is EXACTLY what I've been working on for quite a while now, and yes - it's painful to write a lot of custom code, but a well-defined plugin/script architecture can do the work pretty nicely if you have your own production asset-assembly API already living inside your DCC app (as I guess most of us do), so you can build on that. (That's what I did - I wrote the whole asset-assembly API with exactly that need in mind, from the ground up.)
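
Schematically, something like this (all names made up for illustration):

    assets = [
        {"name": "hero",  "version": 3, "passes": ["beauty", "shadow"]},
        {"name": "crowd", "version": 1, "passes": ["beauty"]},
    ]

    def build_passes(assets):
        # Regenerate the whole pass setup from asset metadata each time,
        # so substituting a newer asset version "just works".
        passes = {}
        for asset in assets:
            for name in asset["passes"]:
                passes.setdefault(name, []).append(
                    "%s_v%03d" % (asset["name"], asset["version"]))
        return passes

    print(build_passes(assets))
    # {'beauty': ['hero_v003', 'crowd_v001'], 'shadow': ['hero_v003']}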
 

- A need to do all this for huge scenes, where deferring evaluation as late in the pipeline as possible is a big win.

Where I work we have a simpler approach. We don't break things down to this extreme level of granularity, so there are fewer moving parts and less that can break in the process. As long as you know you can "update" your assets (substitute a newer version in place) at any time, and as long as the assets themselves carry all the information they need to get themselves rendered, then you can update their metadata at any time and refresh the render-pass setup accordingly. I don't know about 'huge scenes', but my guess is that the less granularly you break things down, the less work you have to do to reconstruct them, and that gets multiplied by the amount of stuff you have - so the savings compound the more stuff there is.

 
Of course, if none of this sounds useful, or your particular DCC already does it well, then Gaffer may not be interesting to you. If it is at all interesting though, I would suggest downloading it and giving it a go in its current form before continuing the discussion...

Evaluating a solution like Gaffer, and doing it right, can take a lot longer than this discussion.
Plus, my current workplace is a Windows-based shop - Image Engine isn't, and from what I can gather there is pretty weak momentum for porting Gaffer to Windows, with all the dependency hell involved. We would have to migrate a portion of our pipeline to Linux just to use Gaffer.
But I might be moving somewhere else soon, which IS a Linux shop, so my interest is mainly "academic" at this point.
 

Arnon Marcus

Aug 18, 2015, 3:36:04 PM
to gaffer-dev
Anyway, I'm beginning to sound a little harsh/overly critical - that is not my intent - opening up the source code of a pipeline framework is an admirable initiative, and I wish more studios would allow their dev teams to do that, so we could share more code and effort.

I just want to understand the rationale behind this project, and similar ventures at other studios.
Maybe it's all just about the 'scale/load' that DCC apps have lagged behind in supporting, as demand grows for heavier scenes in the VFX industry.
It seems the pace of increasing demand for detail on screen has outstripped the rate at which software and hardware can sustain the load. But I'm curious whether that is really the case, or whether some studios (not necessarily IE) are taking an approach based on past scenarios that may no longer be relevant, or are just being human and making less-than-ideal judgment calls along the way - or perhaps it's just me...

I mean, memory chips are so cheap these days... You can affordably have hundreds of gigabytes - and the 64-bit address space is so far from being exhausted - so why shouldn't we be able to hold scenes weighing hundreds of gigabytes entirely in memory at any given time, and have the DCC app 'just handle it'? Is the added cost of complexity still worth it? I dunno...

John Haddon

Aug 18, 2015, 3:40:28 PM
to Arnon Marcus, gaffer-dev
All the approaches you mentioned are entirely valid, and over the years I've seen or used many of them. But you do get to a certain complexity of work where an approach along the lines of the one we're taking with Gaffer becomes attractive, if not essential. You are right that it's a big undertaking, especially for a team our size. One of the reasons I think it's paying off for us is because we're developing Gaffer as a framework rather than just as a lookdev tool. So that upfront effort is now beginning to see dividends in many areas of our pipeline…
Cheers…
John



John Haddon

Aug 18, 2015, 3:55:05 PM
to gaffe...@googlegroups.com
Production requirements seem to have an unfortunate habit of consistently scaling to the limits of the available hardware, and then just a little bit further :) There's also the issue that getting everything loaded can take significant time, so there is a real benefit to allowing an artist to load things on demand, and edit just what they need before pushing the whole thing back out to the farm. A procedural approach to generating scenes can also help to make the complexity more manageable by providing a higher level view, and tools for generating many changes across the whole scene in one go.
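
Schematically, on-demand loading is just deferring reads until somebody asks - a toy sketch, not any particular library's API:

    # Children of a location are only read when actually requested.
    class LazyLocation:
        def __init__(self, path, read_children):
            self.path = path
            self._read_children = read_children
            self._children = None  # nothing loaded yet

        def children(self):
            if self._children is None:  # first access triggers the slow read
                self._children = [
                    LazyLocation(p, self._read_children)
                    for p in self._read_children(self.path)
                ]
            return self._children

    def read_children(path):
        # Stand-in for pulling one level of hierarchy from a cache on disk.
        return [path + "/child%d" % i for i in range(2)] if path.count("/") < 3 else []

    root = LazyLocation("/root", read_children)
    # Only the locations we actually traverse are ever loaded:
    print([c.path for c in root.children()[0].children()])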

That said, Clarisse seems to be pushing hard in the "load all the things" department, so if that's your mindset, it's definitely one to check out...


Andrew Kaufman

Aug 18, 2015, 4:01:53 PM
to gaffe...@googlegroups.com

Anyway, I'm beginning to sound a little harsh/overly critical - that is not my intent - opening up the source code of a pipeline framework is an admirable initiative, and I wish more studios would allow their dev teams to do that, so we could share more code and effort.


Thanks for acknowledging this.


why shouldn't we be able to hold scenes weighing hundreds of gigabytes entirely in memory at any given time, and have the DCC app 'just handle it'?

I think to accomplish some of the things we're doing, you really need to drive the scale up higher than you're suggesting. Sure, we could give an artist a mega-machine and have them load a scene weighing hundreds of gigabytes (though it's not nearly as cheap as you imply, and the scene-load time would be unbearably long, if for no other reason than pulling that much data across the network). But now ask that artist to work on 5 different shots at the same time. They'll either need 5x the machine you just gave them, or they'll need some way of not loading all the data at once.

On top of that, our clients are requesting scenes that weigh dozens of terabytes, not hundreds of gigabytes. Next year (or the year after) it will be hundreds of terabytes instead. To assume that we'd ever be able to load it all at once and still have interactive feedback would be short-sighted about the extent of our clients' imaginations.

Andrew

Dan Kripac

Aug 18, 2015, 4:11:34 PM
to gaffer-dev
Hey Arnon,

Just to butt in here with an outsider perspective - i.e. not actually coming from a studio that uses Cortex/Gaffer - but just to comment on the balance between in-house and off-the-shelf solutions.

These decisions and evaluations would/should always be in some sort of state of flux at any studio.

The key thing with Gaffer and Cortex is that they are not an overnight development. They have been built up over many, many years, stemming all the way back to ideas and code born at older studios the core dev team worked at, then refined and refined in production at Image Engine and other studios like Dr. D.

The point is that the tools, in their current form and at their current level of functioning at Image Engine, are adding value and an edge over current off-the-shelf tools.

This may change over the years depending on many factors.

Autodesk is keen to keep its VFX customers for Maya by improving the product, but it has a very large user base spanning multiple industries that places a wide spread of demands on the Maya dev team. And Maya itself is getting quite long in the tooth - both a good thing and a bad thing.

At a new studio it may seem intuitive that the tools inside Maya provide all you think you might need for your upcoming productions.

But an in-house team that has successfully polished a pipeline around an in-house technology like Cortex/Gaffer - one the company's artists and TDs have come to understand over the years - can be far more responsive to the current needs of their productions than Autodesk can be.

Sadly, all studios are in a race to the bottom in terms of speed/cost/quality. Those studios that invested time and resources in refining in-house tools that "successfully" (<- that's the key word) fit the culture of their company will have an edge in this race to the bottom.

But just to repeat, this is always in flux, and any studio is also wise to - using a very cold term - "kill their babies" when they are not getting the value they need out of them.

It's a hard problem though, and it's not always easy to see in simplified terms.
But you are also right to question any tech's worth to you and your studio's projects. And you need to go with tech that makes sense for your company culture.

anyways, my 2 cents



Arnon Marcus

Aug 18, 2015, 4:18:24 PM
to gaffer-dev, and...@image-engine.com
Perhaps the problem is in the clients' imaginations then... :P

I mean, have you seen Transformers? All that excess detail that artists toiled over, which nobody cares about or is even able to notice (especially given the motion blur)... We're already past the threshold of diminishing returns in many cases, just throwing man-hours down the drain for no noticeable gain... It's beyond ridiculous in certain cases...

As for network load - I was thinking of a shared-resource-pool kind of story, where the memory sits as close to the storage as possible, with terabytes of SSD space holding the 'hot' data near the memory - maybe with virtualized GPUs that would start to be possible - but I vastly digress... :)

David Minor

Aug 18, 2015, 4:38:12 PM
to gaffer-dev, jo...@image-engine.com
I think the main point is that this node-based scene processing approach helps us solve all of those things elegantly, and in a way that's readable and user-friendly - a graph is just a good way of presenting that kind of proceduralism. I've had to attack similar problems in the past using scripting, and in my experience (and that of the artists who have been working with these tools) the node-based approach works a lot more smoothly.

Simon Bunker

Aug 19, 2015, 5:25:44 AM
to gaffe...@googlegroups.com, jo...@image-engine.com

I can think of several areas where Maya falls down.

For a start, it is really bad at handling a lot of geometry. Mostly this is because standard meshes remain editable/animatable, which requires keeping around a lot more information and adversely affects memory. Geometry caches like Alembic do really help here, but this is a pretty new feature in Maya.

It is also useful to set up rules for modifying the scene - such as changing an attribute or a material based on an expression, attribute or spatial region. You can do this with Python or a plugin, but it is a destructive process and hard to undo. In a tool like Gaffer or Katana, each node can retain its own scene state, so jumping between edits is as simple as picking an upstream node.

A node-based view is also easier to manage. E.g. you can have a scene-loader node and wire it into a node that prunes the scene with interactive controls. This would be really hard to do in Maya.

You could argue that Maya is procedural - but only to a point. I think it would drive you crazy trying to use the hypergraph as your main interface! (Unlike Houdini, Nuke, Katana, Gaffer).

It can also be useful to have your own outliner. Then you can do nice things such as peek inside Alembic nodes (you would need a custom attribute editor too).

At this point you are only using Maya for its OpenGL display - which we have already established isn't that good - so it isn't a huge jump to creating your own tool rather than working around deficiencies in Maya.

Many studios have travelled this journey and come to the same conclusions. The other good example of this is Katana. It is also considerably more expensive than Gaffer (to be fair, it is a lot more mature as well).

I can certainly see good reasons for serving your studio needs more precisely than attempting to work around it in Maya.

Simon

Arnon Marcus

Aug 19, 2015, 12:21:37 PM
to gaffer-dev, jo...@image-engine.com

You could argue that Maya is procedural - but only to a point. I think it would drive you crazy trying to use the hypergraph as your main interface! (Unlike Houdini, Nuke, Katana, Gaffer).


The new node editor introduced in 2012 looks much nicer to work with - it's not just a replacement for the Hypershade, it can replace all node editors, including the Hypergraph:

I think it's way nicer than Houdini's and Nuke's, and easily nicer than Gaffer's - at least as it was shown at the 2014 presentation...
Nuke's node-based UI looks like something out of the '90s... Fusion is much nicer, because you can see what you are connecting to. Maya's new node editor is nicer still, for all the reasons I already mentioned.

As for Katana:
A friend of mine works at Framestore, where they have an in-house thing like Gaffer that they've been working on for a long time. He says Katana does everything it does, better. The point is, sometimes a larger dedicated team can do a much better job on a tool than a smaller team that also works on lots of other things. Sometimes a tool like that emerges out of hibernation after you've already invested a lot in your own solution, at which point you're too 'invested' in your home-grown thing to want to switch - but sometimes it's ultimately the right thing to do...

But I think I'm starting to get it - it's all about 'scale' after all. You can change many things in an existing DCC app, but you can't make it handle a bigger load than it currently can, because that goes to its core.
I can't argue with that angle.

koen vroeijenstijn

Aug 26, 2015, 2:16:48 PM
to gaffer-dev, jo...@image-engine.com
Hello Arnon,

I used to work with the IE tools and pipeline every day (in the early days of Gaffer). Since then I have worked at many places that rely mostly on the default workflows Maya has to offer. Not a day goes by (well, maybe the weekends :) ) where I don't miss the Image Engine setup. Maya does not come close.

While Maya has put a lot of work into the UI of the node editor, the level of abstraction of the connections is still really wrong. As an exercise, try to swap a bend and a twist deformer using only the node editor - you'll be surprised how confusing that is.

The two areas that bother me most in Maya are the loading of data and shader assignments. Graphs work really well and reliably if you manipulate attributes on the nodes (change model_v001.ma to model_v002.ma on a node, for example); unfortunately, out of the box, most model updates (even when referencing) require breaking connections and deleting and re-creating nodes, which is not very reliable. The other thing is assigning materials. I would love to assign a material based on a node attribute or a primvar - the direct connections Maya makes here are much harder to work with.

Anyway, I too suggest just downloading Gaffer, playing around a bit and seeing if you like it.

Cheers,
koen