Hi folks,
Johanna’s contribution is a valid approach for many scenarios, but there is also a great deal of value in being able to model factory methods on aggregates.
In some instances it’s only the creation of a related aggregate, with no change of state on the AR providing the factory method. This approach can make relationships and processes explicit in the model in a way that isn’t as clear when they are represented as process managers.
But, like the original poster, we’re also encountering the need to both create a new AR and change some state in the original. In our case the most common requirement is for the parent AR to change state only the first time it creates a child, to signify that it has been “used” and now has some degree of immutability that potentially impacts future method calls. Currently we use events and process managers, but it’s proving costly to develop, somewhat opaque, and in some cases far less scalable than aggregate factory methods would be.
Our infrastructure was inspired by the Axon framework, and the way command handlers use aggregate repositories to bring ARs into existence is currently our obstacle to using AR factories, so I would also be very interested in elegant solutions to this problem.
Simon
--
You received this message because you are subscribed to the Google Groups "DDD/CQRS" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dddcqrs+u...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
Hi Johanna et al,
One reasonably easy example is in our accounting module.
We have a requirement that when journals are posted against a general ledger, certain aspects of the ledger become immutable. For example, general ledger accounts are individually correctable by accounts staff up until the point a journal references them; from then on they are locked to prevent changing the meaning of the posted journals.
Our current model has separate GeneralLedger and GeneralJournal aggregates: the GeneralLedger encapsulates the configuration of a ledger, and a GeneralJournal aggregate represents each individual journal posted against that ledger.
To satisfy the immutability-after-posting requirement we have an event listener that receives the JournalPosted event and creates commands to update the state of the referenced accounts in the GeneralLedger. I’m not happy with this approach for a number of reasons, but primarily because very large numbers of journals are being posted while only one per account actually requires this change of state to occur.

Another concern is that the decision on whether the journal can be posted has been placed in the GeneralJournal aggregate, requiring a trusted client to provide the context for this validation. That seems to smell a bit. To solve it we could model the act of posting a journal on the GeneralLedger, but as we are event sourced the GeneralLedger would then have potentially hundreds of thousands of GeneralJournalPosted events in its stream that aren’t actually required to hydrate its current state.
I would rather model this as a factory method on the GeneralLedger aggregate that would verify the journal can be posted, change its own internal state if required, and then bring a GeneralJournal aggregate into existence.
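A minimal sketch of what that factory method might look like. All names here (`postJournal`, `isAccountLocked`, the use of plain `String` ids) are illustrative assumptions rather than our actual code, and the event-sourcing plumbing is omitted entirely:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: no event sourcing, no framework plumbing.
class GeneralLedger {
    private final Set<String> lockedAccounts = new HashSet<>();

    // Factory method: validates the posting, mutates the ledger's own
    // state only when an account is referenced for the first time, then
    // brings the child aggregate into existence.
    public GeneralJournal postJournal(String journalId, Set<String> accountIds) {
        for (String accountId : accountIds) {
            if (!accountExists(accountId)) {
                throw new IllegalArgumentException("Unknown account: " + accountId);
            }
        }
        // At most one state change per account; later postings against
        // the same account are no-ops on the ledger itself.
        lockedAccounts.addAll(accountIds);
        return new GeneralJournal(journalId, accountIds);
    }

    public boolean isAccountLocked(String accountId) {
        return lockedAccounts.contains(accountId);
    }

    private boolean accountExists(String accountId) {
        return true; // stand-in for a real account lookup
    }
}

class GeneralJournal {
    private final String id;
    private final Set<String> accountIds;

    GeneralJournal(String id, Set<String> accountIds) {
        this.id = id;
        this.accountIds = accountIds;
    }

    public String id() { return id; }
}
```

The point is that validation, the once-only lock, and child creation all live behind a single method on the parent, so no trusted client has to supply the context.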
There are of course a few other concerns I’ve not gone into, but that’s the general picture.
Regards, Simon
“doesn't that break the immutability invariant?” – yes, that’s another issue; this approach is potentially more inconsistent than eventual consistency, as it’s reliant on data from the read store.
“not putting that behavior on the GeneralLedger is a performance optimization” – yes, though not the one I was referring to originally. That was our current process manager approach, which has to listen to a great many events throughout the lifetime of the general ledger even though only a handful at the beginning actually have a follow-on effect. I briefly considered a dynamic listener, but that just highlighted to me that the model was likely wrong.
Your and Greg Young’s process manager approaches make sense, but I am still drawn toward a factory approach for this particular case, and it seems that all that is holding it back is an elegant way to update two aggregates from a single command in our infrastructure.
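For what it’s worth, the shape of handler I have in mind is roughly the following. This is a hand-rolled sketch, not the Axon API, and every name in it (`Parent`, `CreateChildHandler`, the toy `Repository`) is hypothetical; the open question is how to get the framework to commit both changes as one unit of work:

```java
import java.util.HashMap;
import java.util.Map;

// Toy parent aggregate whose factory method both mutates the parent
// and creates the child, per the discussion above.
class Parent {
    private boolean used = false;

    public Child createChild(String childId) {
        this.used = true;          // state change on the parent
        return new Child(childId); // new aggregate from the factory
    }

    public boolean isUsed() { return used; }
}

class Child {
    private final String id;
    Child(String id) { this.id = id; }
    public String id() { return id; }
}

// Hand-rolled in-memory repository standing in for real infrastructure.
class Repository<T> {
    private final Map<String, T> store = new HashMap<>();
    public T load(String id) { return store.get(id); }
    public void add(String id, T aggregate) { store.put(id, aggregate); }
}

class CreateChildHandler {
    private final Repository<Parent> parents;
    private final Repository<Child> children;

    CreateChildHandler(Repository<Parent> parents, Repository<Child> children) {
        this.parents = parents;
        this.children = children;
    }

    // One command touches two aggregates: the parent is loaded and
    // mutated, the child comes out of the parent's factory method. A
    // real implementation would wrap this in a single unit of work so
    // both changes succeed or fail together.
    public void handle(String parentId, String childId) {
        Parent parent = parents.load(parentId);
        Child child = parent.createChild(childId);
        children.add(child.id(), child);
    }
}
```

The difficulty in an Axon-style infrastructure is that the repository normally hands the handler exactly one aggregate, whereas here the second aggregate is born mid-handler and must be registered with the same unit of work.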