You're correct thus far, and we do both (1) and (2) at Image Engine (sort of). I liked Dan's approach to the last thread, so I'm going to copy him and start with some Gaffer terminology:
- TaskNode - The red nodes in your graph that represent offline processing, usually creating new files on disk.
- TaskBatch - A batch of tasks to be executed (e.g. several frames of a TaskNode clumped together). Never represented visually.
- Dispatcher - Also a node, though not currently visible in the graph. When dispatched, walks a graph of TaskNodes, creating another (non-visual) graph of TaskBatches.
- `gaffer execute` app - A commandline application for executing a single TaskBatch, otherwise identical to the main GUI (the `gaffer gui` app).
- LocalDispatcher - A specific dispatcher that executes a graph of TaskBatches on the user's current machine, in serial, using `gaffer execute` commandlines on a background thread.
- TractorDispatcher - A specific dispatcher that translates a graph of TaskBatches to a single Tractor Job and spools it to Tractor. Each task in Tractor will be a `gaffer execute` commandline, but this time, tasks might run in parallel (if the submitted graph allows for it).
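To make the terminology concrete, here's a toy model of what a dispatcher does when it walks TaskNodes into TaskBatches. This is plain Python with made-up `TaskNode`/`TaskBatch` classes, not Gaffer's real API - just the shape of the idea: frames get clumped into batches, and each batch inherits dependencies on its upstream batches:

```python
# Toy model of dispatch: walk a graph of task nodes, grouping frames
# into batches and preserving upstream dependencies. Class and method
# names are illustrative, not Gaffer's actual API.

class TaskNode:
    def __init__(self, name, frames, batch_size=1, pre_tasks=()):
        self.name = name
        self.frames = list(frames)
        self.batch_size = batch_size
        self.pre_tasks = list(pre_tasks)

class TaskBatch:
    def __init__(self, node, frames):
        self.node = node
        self.frames = frames
        self.pre_batches = []  # dependencies on upstream batches

def dispatch(node, cache=None):
    """Return the batches for `node`, creating upstream batches first."""
    if cache is None:
        cache = {}
    if node.name in cache:  # each node is batched only once, even if shared
        return cache[node.name]
    upstream = [b for pre in node.pre_tasks for b in dispatch(pre, cache)]
    batches = []
    for i in range(0, len(node.frames), node.batch_size):
        batch = TaskBatch(node, node.frames[i:i + node.batch_size])
        batch.pre_batches = upstream  # conservatively depend on all upstream batches
        batches.append(batch)
    cache[node.name] = batches
    return batches

render = TaskNode("Render", frames=range(1, 5), batch_size=2)
comp = TaskNode("Comp", frames=range(1, 5), batch_size=4, pre_tasks=[render])
batches = dispatch(comp)
# One Comp batch of 4 frames, depending on two Render batches of 2 frames each.
```

A real dispatcher does a lot more (per-frame dependency matching, context handling, etc.), but the batch graph it produces has this basic structure.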
I'm not familiar with how Deadline handles task-to-task dependencies, which is often where you hit snags when implementing a Gaffer Dispatcher for new farm software. Since Gaffer's TaskBatch graph is a true DAG, many farm managers can't represent the submission exactly (e.g. both Qube and Deadline take a list-based approach to tasks, while Tractor takes a tree-based approach). As it turns out, Tractor accepts DAG submissions just fine; it just draws them a bit oddly in the UI. When we used Qube at IE, we were able to make DAG submissions work via a quite complex intermediate layer. I imagine you could do something similar for Deadline if needed.
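For a list-based manager, the usual trick in that intermediate layer is to topologically sort the batch graph and express each DAG edge as an explicit job-to-job (or index-to-index) dependency. A minimal sketch of that flattening, with the graph as a plain dict (this is the general technique, not IE's actual Qube layer):

```python
# Sketch: flatten a DAG of batches into an ordered list plus explicit
# dependency indices - the shape list-based farm managers tend to want.
# The graph is just a dict: task -> list of prerequisite tasks.

def flatten_dag(deps):
    """Topologically sort `deps`; return (order, per-task dependency indices)."""
    order = []
    visiting, done = set(), set()

    def visit(task):
        if task in done:
            return
        if task in visiting:
            raise ValueError("cycle detected at %s" % task)
        visiting.add(task)
        for pre in deps.get(task, []):
            visit(pre)  # prerequisites land in `order` first
        visiting.discard(task)
        done.add(task)
        order.append(task)

    for task in deps:
        visit(task)
    index = {t: i for i, t in enumerate(order)}
    index_deps = {t: [index[p] for p in deps.get(t, [])] for t in order}
    return order, index_deps

# A diamond: Comp depends on two renders, which both depend on Setup.
deps = {
    "Setup": [],
    "RenderA": ["Setup"],
    "RenderB": ["Setup"],
    "Comp": ["RenderA", "RenderB"],
}
order, index_deps = flatten_dag(deps)
```

The sort guarantees every task appears after its prerequisites, so the farm manager only ever sees dependencies pointing backwards in the list.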
But if we ignore DAG submissions for a moment, and say you just have a simpler setup:
`PythonCommand -> AppleseedRender -> ImageWriter -> SystemCommand`
and say your PythonCommand is updating some asset database for the about-to-be-created rendered images, and your SystemCommand is launching Nuke (or ffmpeg) to generate a quicktime from the about-to-be-created slap comp images.
Then if you wanted to write a DeadlineDispatcher that could handle that setup, the most natural implementation (given my limited knowledge of Deadline) would result in 4 jobs in Deadline, with dependencies between the jobs (or between the frames, if that's possible). All 4 jobs would be `gaffer execute` commandlines, but the last one would be launching Gaffer just to call `os.system` with your Nuke commandline. Presumably you can add some metadata/label to make it look like a Nuke job in Deadline if you prefer, but it's easier to just let it call Gaffer under the hood.
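As data, that four-job submission might look something like the sketch below. The job dict keys (`"name"`, `"command"`, `"depends_on"`) are placeholders, not Deadline's real job-info fields, and the `gaffer execute` flag spellings are from memory - check `gaffer execute -help` before relying on them. The script path and frame ranges are made up too:

```python
# Sketch: four farm jobs, each a `gaffer execute` commandline, chained
# by dependencies. Field names are illustrative, not Deadline's job-info
# keys; the dispatcher would translate these into a real submission.

def gaffer_execute_cmd(script, nodes, frames):
    # Flag names approximate `gaffer execute`'s real interface.
    return "gaffer execute -script %s -nodes %s -frames %s" % (
        script, " ".join(nodes), frames)

script = "/jobs/show/shot/comp.gfr"  # hypothetical Gaffer script on shared storage
jobs = []
for name, frames, deps in [
    ("PythonCommand", "1-1", []),                      # update asset database
    ("AppleseedRender", "1-100", ["PythonCommand"]),   # render the frames
    ("ImageWriter", "1-100", ["AppleseedRender"]),     # write the slap comp
    ("SystemCommand", "1-1", ["ImageWriter"]),         # launch Nuke/ffmpeg for the quicktime
]:
    jobs.append({
        "name": name,
        "command": gaffer_execute_cmd(script, [name], frames),
        "depends_on": deps,
    })
```

Note that even the SystemCommand job is a `gaffer execute` commandline; Gaffer loads the script and the node itself shells out to Nuke.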
Alternatively, you could implement your DeadlineDispatcher such that the `gaffer execute` commandline is only used for the nodes that require it (e.g. AppleseedRender and ImageWriter), and the others are swapped out at submission time to be replaced by whatever commandline you'd like. We do that in certain cases at IE, though we're trending away from it over time.
It's also worth mentioning that your DeadlineDispatcher can inject some control plugs onto every TaskNode. We do this at IE, for example, to expose memory limits, machine groups, etc., so the artist can define what resources each TaskNode will require when it gets to the farm.
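The gist of that injection, modelled with plain dicts rather than real Gaffer plugs (the setting names and defaults here are invented for illustration): the dispatcher adds per-node farm controls with sensible defaults, the artist overrides the ones they care about, and the dispatcher reads them back at submission time:

```python
# Sketch: a dispatcher injecting per-node farm controls. In Gaffer this
# would add plugs to each TaskNode; here a node is just a dict. Setting
# names and defaults are hypothetical.

FARM_DEFAULTS = {"memoryLimitGB": 4, "machineGroup": "", "priority": 50}

def inject_farm_controls(node):
    """Add any missing farm settings to `node`, leaving artist overrides intact."""
    for key, value in FARM_DEFAULTS.items():
        node.setdefault(key, value)
    return node

# The artist has bumped the render's memory limit; everything else defaults.
render = inject_farm_controls({"name": "AppleseedRender", "memoryLimitGB": 32})
```

The nice property is that the controls live on the node in the script, so resource requirements are saved with the setup rather than re-entered at every submission.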