To close the loop:
In a request to
/supply/item?wf_id=<name>:<id>
- the request (=URL+HTTP method) determines the *Event*
- the <name> determines the workflow type
- <name>:<id> identifies the particular data object (workflow instance)
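To illustrate, a minimal sketch of splitting the wf_id parameter into workflow type and instance identifier - the function name and the "procurement" workflow are hypothetical, not existing Eden code:

```python
def parse_wf_id(wf_id):
    """Split a '<name>:<id>' workflow identifier into its two parts.

    Hypothetical helper - the parameter format follows the URL scheme
    described above, nothing here is existing Eden API.
    """
    name, sep, instance_id = wf_id.partition(":")
    if not sep or not name or not instance_id:
        raise ValueError("invalid wf_id: %r" % wf_id)
    return name, instance_id

# e.g. for a request to /supply/item?wf_id=procurement:42
name, instance_id = parse_wf_id("procurement:42")
```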
The status and current content of the workflow instance come (most likely)
from the database. They would typically be a JSON-serializable data structure,
e.g. nested dicts.
The workflow engine retrieves the instance, and maps its current status and the
event (request) to a certain pre-defined event handler (=node).
That's basically all the workflow engine does.
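So the engine reduces to a lookup. A rough sketch of that dispatch - a table keyed by (status, event) that yields the node to invoke; the class and all names are illustrative assumptions, not a proposed API:

```python
class WorkflowEngine:
    """Sketch only: maps (instance status, event) to a node (event handler)."""

    def __init__(self):
        # {(status, event): handler} - the whole engine is this table
        self.nodes = {}

    def register(self, status, event, handler):
        self.nodes[(status, event)] = handler

    def dispatch(self, instance, event):
        # look up the pre-defined node for the instance's current status
        handler = self.nodes.get((instance["status"], event))
        if handler is None:
            raise KeyError("no node for status=%(status)s" % instance)
        return handler(instance)

engine = WorkflowEngine()
# event = (HTTP method, URL), as described above
engine.register("NEW", ("GET", "/supply/item"),
                lambda instance: "show approval form")
result = engine.dispatch({"status": "NEW"}, ("GET", "/supply/item"))
```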
The event handler responds to the event, either by performing a certain action
or by providing a page with action items for the user to perform an action -
or both.
When the action is complete, the event handler updates the status of the
workflow instance (= status transition).
Which action the event handler performs, and what the next status of the
workflow instance will be, depend on the event data - which always comprises
both the data submitted by the user (if any) /and/ the related resource(s).
Let's assume that "actions" are standard REST methods (S3Methods):
Then the event handler can introspect /and/ manipulate the event data - at
least before and after the method has been performed, using a preprocess- and
a postprocess-hook analogous to s3.prep and s3.postp (of course, we may
want to introduce additional hooks over time - but these are the two I was
thinking of first).
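In code, that hook structure might look roughly like this - the Node class and the hook signatures are assumptions by analogy with s3.prep/s3.postp, not the actual S3Method interface:

```python
class Node:
    """Sketch of an event handler wrapping an action with prep/postp hooks."""

    def __init__(self, action, prep=None, postp=None):
        self.action = action   # the "action", e.g. an S3Method-like callable
        self.prep = prep       # inspect/manipulate the event data before
        self.postp = postp     # inspect/manipulate the result after

    def handle(self, event_data):
        if self.prep:
            event_data = self.prep(event_data)
        result = self.action(event_data)
        if self.postp:
            result = self.postp(event_data, result)
        return result

# toy example: prep normalizes the input, postp wraps the result
node = Node(action=lambda d: d["value"] * 2,
            prep=lambda d: {"value": d.get("value", 0)},
            postp=lambda d, r: {"result": r})
out = node.handle({"value": 21})
```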
However, other "actions" could also be custom functions specific for the
particular workflow - it would be good to leave that open yet have a standard
interface. I do though believe that the S3Method interface could serve well
even for that purpose (yeah - we may need to extend it a little bit).
For every event, the event handler may need to perform multiple "actions" -
either sequentially, conditionally, or even in loops (e.g. if a component
has multiple instances).
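For instance, a single handler might chain actions like this - purely illustrative, all action names ("validate", "notify", "update") are made up for the sketch:

```python
def handle_event(instance):
    """Sketch: several actions per event - sequential, conditional, looped."""
    results = []
    # sequential action: always performed first
    results.append(("validate", instance["id"]))
    # conditional action: only on certain event data
    if instance.get("approved"):
        results.append(("notify", instance["id"]))
    # looped action: e.g. once per component instance
    for component in instance.get("components", []):
        results.append(("update", component))
    return results

out = handle_event({"id": 1, "approved": True, "components": ["a", "b"]})
```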
The key points to resolve:
- configuration of event handlers (nodes) in a workflow
- storing and retrieving of workflow instances
- mapping of instance status + current event to a certain node
- interface between workflow engine (=event manager) and workflow nodes
(=event handlers)
- interface between nodes and "actions" (=S3Methods?)
- workflow widgets (=widgets for users to perform workflow actions), and how to
put them into a page
I think these are the six top stories of the implementation part.
We can certainly break them down into individual jobs, but I'd like to leave
that to you.
For the first story, we need to decide whether we have multiple event handler
(node) classes, or just one.
I'm currently in favour of a single event handler class with options to
customize it, rather than having custom event handler classes.
The advantage would be that the options could be serialized into e.g. XML, so
that they can be exported/imported easily, or modified using the GUI - thus
separated from the Python code so that application designers can work on them
without having to touch the code.
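As a rough sketch of what such serialization could look like (element and attribute names are invented here, not a proposed schema):

```python
import xml.etree.ElementTree as ET

def node_to_xml(name, options):
    """Serialize a node's config options to XML - hypothetical format."""
    node = ET.Element("node", name=name)
    # one <option> element per config option (sorted for stable output)
    for key, value in sorted(options.items()):
        ET.SubElement(node, "option", name=key).text = str(value)
    return ET.tostring(node, encoding="unicode")

xml_str = node_to_xml("approve", {"next_status": "APPROVED",
                                  "method": "update"})
```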
However, it may be a /huge/ overhead to parse and handle all the possible
node config options - or the event handler class becomes too inflexible, so
that workflow definitions become rocket science.
Additionally - if the node configuration becomes more complicated than
programming Python, then this doesn't really make any sense. Python is
relatively easy and most people who design workflows in Eden are actually
Python-literate in some way.
So - I think this is pretty much all I have in mind right now.
Regards,
Dominic