My scenario involves processing JSON payloads with a lot of rules logic, but the processing has a few clear phases (transformation, enrichment, and validation):
- Transformation: we support multiple input formats (JSON, XML, ...) that get transformed into a common, internal format.
- Enrichment: calls out to multiple services to gather the data needed to fill out the document in its internal format.
- Validation: validates the internal format and enrichment data to ensure the document can be processed.
I've inherited a codebase using NRules, and I'm confused about a few of the approaches I see in the code. The current version of the solution puts the rules for all of the processing phases into a single session, which has become complex. The current code already resorts to providing ordering hints to control how the rules fire, which seems like something to avoid when possible.
There are lots of rules, so I was looking to reduce the complexity by partitioning the rules by phase into separate sessions: basically a persistent pipeline processing model where each phase hands off to the next. I've read other posts explaining that using multiple sessions could help with parallelism as well, which would be an added benefit. My main concern, though, is complexity in an already large rule set, which is certain to grow as we add more domain-specific logic.
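To make the question concrete, here is roughly the shape I have in mind: one session factory per phase, with rules selected by tag at load time. The names (`RulePipeline`, `RawPayload`, `Document`) and the tag values are just placeholders I made up, and I may well be misusing the tag-based load spec:

```csharp
using System.Linq;
using System.Reflection;
using NRules;
using NRules.Fluent;

public class RawPayload { /* raw input in one of the supported formats */ }
public class Document { /* common internal format */ }

public class RulePipeline
{
    private readonly ISessionFactory _transformFactory;
    private readonly ISessionFactory _enrichFactory;
    private readonly ISessionFactory _validateFactory;

    public RulePipeline(Assembly rulesAssembly)
    {
        // Compile a separate, single-purpose factory for each phase.
        _transformFactory = CompilePhase(rulesAssembly, "Transformation");
        _enrichFactory    = CompilePhase(rulesAssembly, "Enrichment");
        _validateFactory  = CompilePhase(rulesAssembly, "Validation");
    }

    private static ISessionFactory CompilePhase(Assembly assembly, string tag)
    {
        var repository = new RuleRepository();
        // Load only the rules marked for this phase, e.g. [Tag("Enrichment")].
        repository.Load(spec => spec
            .From(assembly)
            .Where(rule => rule.IsTagged(tag)));
        return repository.Compile();
    }

    public Document Process(RawPayload payload)
    {
        // Each phase runs in its own short-lived session and hands its
        // output fact to the next phase.
        var transform = _transformFactory.CreateSession();
        transform.Insert(payload);
        transform.Fire();
        var doc = transform.Query<Document>().Single();

        var enrich = _enrichFactory.CreateSession();
        enrich.Insert(doc);
        enrich.Fire();

        var validate = _validateFactory.CreateSession();
        validate.Insert(doc);
        validate.Fire();
        return doc;
    }
}
```

Since the factories are compiled once and only sessions are created per payload, I'd expect the phases could also be run concurrently for different payloads, but I haven't verified that.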
My goal is to avoid mixing rules of different types (transformation, enrichment, and validation) in the same session, since there are quite a few rules due to the business domain being fairly complex. I want to keep the complexity down as much as possible, and having a single-purpose session for each well-defined phase of processing seems reasonable to me. I just want to make sure my logic is sound.
Thanks,
Jeremy