Hello, I have more of a design question.
A) With Aggregation & State Management, I'll have an input switch that can be flipped on and off. When it's on, the system will start consuming an external data stream as input.
- Within a workflow, is there a pattern to push that data stream to an output? I.e., this is not a pure function.
- I can't think of an applicable plugin here.
- Do flow conditions apply here? (Rough sketch of what I mean after this list.)
- Is there a concept such as sub-workflows? Challenge 1-3 was the closest possibility I saw.
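To make the flow-conditions question concrete, here's roughly the shape I was picturing. This is just a sketch: the task names (:read-stream, :write-output) and the :switch-on? key are placeholders of mine, and I'm not sure flow conditions are even the right tool for a push-style stream.

```clojure
;; Only route segments downstream while the switch is on; drop them otherwise.
;; Flow-condition predicates take [event old-segment new-segment all-new-segments].
(defn switch-on?
  [event old-segment segment all-new]
  (boolean (:switch-on? segment)))

(def flow-conditions
  [{:flow/from :read-stream
    :flow/to [:write-output]
    :flow/predicate ::switch-on?}])
```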
B) Thinking of aggregation state inside a workflow, I've only seen examples that use basic data types.
- Can we have a stateful job with things like async channels, especially with distributed state?
- I saw an example in Challenge 4-3, where an atom was created inside a lifecycle.
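For reference, this is the kind of thing I mean. A minimal sketch, assuming (as the learn-onyx challenges seem to show) that whatever a lifecycle hook returns is merged into the event map; ::state, ::in-chan, and :my-task are placeholder names of mine.

```clojure
(require '[clojure.core.async :as async])

;; Peer-local state. My guess is that anything heavier than basic data
;; (atoms, core.async channels) has to live here, outside the
;; windowed/distributed aggregation state itself.
(def job-state (atom {}))

(defn inject-state [event lifecycle]
  ;; the returned map is merged into the event map, so later lifecycle
  ;; hooks for this task can reach the atom and the channel
  {::state job-state
   ::in-chan (async/chan 1000)})

(def state-calls
  {:lifecycle/before-task-start inject-state})

(def lifecycles
  [{:lifecycle/task :my-task
    :lifecycle/calls ::state-calls}])
```

Is that the recommended way to hold non-serializable state, or does it fall apart once the state needs to be shared across peers?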
C) I also want to multiplex multiple data streams in the same channel.
- Is there a referentially transparent way to partition a data stream (for Kafka)?
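By referentially transparent I mean something like the following (plain Clojure, nothing Onyx- or Kafka-specific; the [:orders 42] / [:clicks 42] keys are made-up examples):

```clojure
;; Pure partitioner: the partition depends only on the key, so the same
;; stream/id pair always lands on the same partition.
(defn partition-for [n-partitions k]
  (mod (hash k) n-partitions))

;; Multiplexing: put the stream id into the key so several streams can
;; share the same set of partitions deterministically.
(partition-for 12 [:orders 42])  ;; => same value on every call
(partition-for 12 [:clicks 42])
```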
Of course, I've already scanned through the onyx tests, onyx-examples, and learn-onyx resources, but I didn't see anything that quite fit the bill. Are there recommended approaches for each of these situations?