Ok, I was keeping the domain simple in this example so I could focus on the mechanics of CQRS.
However I think it would help if I gave more detail of the domain and what I'm trying to achieve.
Currently we have a CMS that feeds multiple websites, each with a different focus. The CMS is old; it predates me at the company, and I've been there over half a decade. It's a desktop application with a normalised DB as its back end. Triggers on this DB copy data into partially denormalised databases that the websites use. There are a lot more moving parts, Solr indexes, Neo4j etc., but I'll be here all day :) I have spent most of my time at this company working with these read-only databases to make web applications. The CMS is a nightmare to work with and is long overdue a rebuild.
One important feature it does badly is "Drafting and Proofing". A user's changes are held in a draft until they are submitted to an editor for proofing. This is currently done by locking records, storing the changes in XML and merging them back into the DB when proofed. Anyone who has worked with file-locking source control knows why this is bad. When exploring alternative approaches, a team member pointed me towards Event Sourcing. Reading up on Event Sourcing introduced me to CQRS.
Let's say I had a light-bulb moment. We already kind of have a read and a write side: our normalised and web DBs. CQRS looks like a much cleaner implementation. Messaging and command handlers contain the domain logic, stopping it sprawling all over the application, and it's very testable. So, to test the concept and my ability to get to grips with it, I have taken a small part of the legacy system, modelled a few commands and built an importer. Basically, the importer attempts to mimic what a user would have done to enter this data. On the read side, I have a fully normalised DB representation of what is in the event store so I can inspect the contents easily. More importantly, there is a copy of the existing web database; this will only be updated by Publish events, and other edits will be ignored. This will allow us to update the CMS without updating all the websites at once. When we redevelop a website, we will create a bespoke query model for it. I see a lot of promise in this approach, but I need to be confident that I understand it properly before I commit to this path.
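To make the "Publish-only" read model idea concrete, here's a minimal Python sketch of a projection that ignores every event except a publish. All the names (the event tuples, the in-memory dict standing in for the web DB) are my own stand-ins, not the real system's:

```python
# Sketch of a read-side projection that only reacts to Publish events.
# Drafting edits accumulate in a pending area and never touch the "web DB"
# until the aggregate is published. Event shapes here are illustrative only.

class WebDatabaseProjection:
    def __init__(self):
        self.rows = {}     # stand-in for the legacy web database tables
        self.pending = {}  # latest unpublished (draft) state per aggregate

    def apply(self, event):
        kind, aggregate_id, data = event
        if kind == "Published":
            # Only now does the accumulated draft state reach the website copy.
            self.rows[aggregate_id] = dict(self.pending.get(aggregate_id, {}))
        else:
            # All other edits stay invisible to the websites.
            self.pending.setdefault(aggregate_id, {}).update(data)
```

The point of the design is that the websites keep serving the last published state while any number of draft edits flow through the event store.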
So the domain:
Organisations:
- Can have 0..n Positions; the order is important.
- Can have 0..1 ParentOrganisation.
- Can have 0..n ChildOrganisations.
People:
- Can have 0..n Positions; the order is unimportant.
Positions:
- Must have 1 Person.
- Can have 0..1 Organisation.
Organisations, People and Positions can be published separately.
In the full system there is a lot more complexity to these entities, they have categories, metadata, offices, contact details, aliases etc but I'm deliberately reducing the scope for the time being.
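For what it's worth, the reduced domain above can be sketched as plain Python state; the class and field names are mine and only illustrate the cardinality rules, not the real aggregate design:

```python
from dataclasses import dataclass, field
from typing import Optional
from uuid import UUID, uuid4

# Illustrative aggregate state only -- names and shapes are my own guesses.

@dataclass
class Organisation:
    id: UUID
    position_ids: list = field(default_factory=list)  # 0..n, order matters
    parent_id: Optional[UUID] = None                  # 0..1 ParentOrganisation

@dataclass
class Person:
    id: UUID
    position_ids: set = field(default_factory=set)    # 0..n, order unimportant

@dataclass
class Position:
    id: UUID
    person_id: UUID                                   # must have exactly 1 Person
    organisation_id: Optional[UUID] = None            # 0..1 Organisation
```

An ordered list vs. an unordered set captures the one asymmetry in the rules: position order matters on Organisations but not on People.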
The importer creates a load of CreateOrganisation commands, followed by AddChildOrganisation commands.
Here I use a Saga to maintain parent/child IDs. It listens for ChildOrganisationAdded events and sends SetParentOrganisation commands.
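In case it clarifies things, here's roughly how I picture that saga. The event/command shapes and the injected `send_command` dispatcher are hypothetical; the real message bus and handler signatures will differ:

```python
from dataclasses import dataclass

# Hypothetical message shapes -- illustrative only.

@dataclass
class ChildOrganisationAdded:
    parent_id: str
    child_id: str

@dataclass
class SetParentOrganisation:
    organisation_id: str
    parent_id: str

class OrganisationHierarchySaga:
    """Reacts to ChildOrganisationAdded by telling the child who its parent is."""

    def __init__(self, send_command):
        self.send_command = send_command  # injected command dispatcher

    def handle(self, event):
        if isinstance(event, ChildOrganisationAdded):
            self.send_command(SetParentOrganisation(
                organisation_id=event.child_id,
                parent_id=event.parent_id,
            ))
```

The saga holds no state of its own here; it just translates one aggregate's event into a command against the other aggregate.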
It then creates a load of CreatePerson commands, followed by CreatePosition commands.
I also use a Saga that listens for PositionCreated events and sends AddPosition commands to the Person aggregate and the Organisation aggregate when appropriate.
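Sketched the same way (again with made-up message shapes), that second saga sends one AddPosition to the Person, which a Position must always have, and a second one only when the Position has an Organisation:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical message shapes -- illustrative only.

@dataclass
class PositionCreated:
    position_id: str
    person_id: str
    organisation_id: Optional[str] = None  # a Position may have no Organisation

@dataclass
class AddPosition:
    aggregate_id: str  # the Person or Organisation receiving the position
    position_id: str

class PositionLinkingSaga:
    def __init__(self, send_command):
        self.send_command = send_command  # injected command dispatcher

    def handle(self, event):
        if isinstance(event, PositionCreated):
            # A Position must have exactly one Person.
            self.send_command(AddPosition(event.person_id, event.position_id))
            # ...and zero or one Organisation.
            if event.organisation_id is not None:
                self.send_command(
                    AddPosition(event.organisation_id, event.position_id))
```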
Finally Publish commands are created for each of the aggregates.
I have omitted the commands that fill in some of the aggregate details, as I'm focusing on the relationships. I know the naming is a bit clumsy and very CRUDy; however, this is the terminology the users use.
Using sagas to maintain the relationships seems to be working very well so far, but the system is very small.
Hope that helps.
I am still new to DDD terminology; I'm currently halfway through Eric Evans' DDD book. For example, I don't know the difference between aggregates and entities here. To me, they are both different words for "class". In the n-tier world I use "entity" to mean a class that models a data record.