Aggregates creating other aggregates

266 views

df

Apr 23, 2013, 5:03:42 PM4/23/13
to ddd...@googlegroups.com
Hi everyone,

I got stuck trying to create aggregates from other aggregates and transactional boundaries. Hoping I'm missing something obvious.

The problem I'm running into is when one aggregate, such as Client, creates another aggregate, such as Account. I'm specifically referring to Fohjin.DDD sample in this case, but this scenario seems to come up often in my domain. In the sample Client aggregate has method CreateNewAccount that returns a new Account aggregate and at the same time Client aggregate generates an event that a new account was assigned to this client. To save changed state to an event store I would need a transaction that allows me to save both aggregates or neither of them.
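To make the problem concrete, here is a minimal Python sketch of the shape df describes (names are illustrative, not Fohjin.DDD's actual API): the factory method returns a new aggregate and records an event on the parent, leaving two aggregates with uncommitted events.

```python
import uuid

class Client:
    """Event-sourced aggregate; pending events await persistence."""
    def __init__(self, client_id):
        self.id = client_id
        self.pending_events = []

    def create_new_account(self):
        # Factory method: returns a NEW aggregate and records an
        # event on THIS aggregate -- two streams now need saving.
        account = Account(uuid.uuid4(), self.id)
        self.pending_events.append(
            ("AccountAssignedToClient", account.id))
        return account

class Account:
    def __init__(self, account_id, client_id):
        self.id = account_id
        self.pending_events = [("AccountCreated", client_id)]

client = Client(uuid.uuid4())
account = client.create_new_account()
# Both aggregates now hold uncommitted events, so the repository
# would need a transaction spanning two event streams.
```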

From everything I've learned about event sourcing so far such a transaction is not desirable. So I'm stuck on trying to figure out how to do something like the scenario above but without needing a transaction.

Johanna Belanger

Apr 23, 2013, 5:45:19 PM4/23/13
to ddd...@googlegroups.com
Hi df,

How about having a handler (process manager, technically) that listens for the NewAccountAssignedToClient event and sends a CreateNewAccount command with the AccountID from the event? The CreateNewAccount handler creates and saves the Account aggregate.
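A rough sketch of that process-manager shape, with a hypothetical command bus (all names are mine, not from any particular framework): each handler then touches only one aggregate per transaction.

```python
class AccountCreationProcessManager:
    """Listens for the client's event and drives account creation
    in a separate, single-aggregate transaction."""
    def __init__(self, command_bus):
        self.command_bus = command_bus

    def on_new_account_assigned(self, event):
        # The event carries the AccountID the Client already chose,
        # so the command is fully determined by the event.
        self.command_bus.send(
            {"type": "CreateNewAccount",
             "account_id": event["account_id"],
             "client_id": event["client_id"]})

sent = []

class FakeBus:
    def send(self, cmd):
        sent.append(cmd)

pm = AccountCreationProcessManager(FakeBus())
pm.on_new_account_assigned(
    {"account_id": "acc-1", "client_id": "cli-1"})
```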

hth,
Johanna

Simon Parsons

Apr 23, 2013, 6:48:06 PM4/23/13
to ddd...@googlegroups.com

Hi folks,

 

Johanna’s suggestion is a valid approach for many scenarios, but there is also a great deal of value in being able to model factory methods on aggregates.

In some instances it’s only the creation of a related aggregate, with no change of state on the AR providing the factory method. This approach can make relationships and processes explicit in the model in a way that isn’t as clear when they’re represented as process managers.

But, like the original poster, we’re also encountering the need to both create a new AR and change some state in the original. In our case the most common requirement is for the parent AR to change state only the first time it creates a child, to signify that it has been “used” and now has some degree of immutability that potentially affects future method calls. Currently we use events and process managers, but it’s proving costly to develop, somewhat opaque, and in some cases far less scalable than aggregate factory methods would be.

 

Our infrastructure was inspired by the Axon framework, and the way command handlers use aggregate repositories to bring ARs into existence is currently our obstacle to using AR factories, so I would also be very interested in elegant solutions to this problem.

 

Simon


Johanna Belanger

Apr 23, 2013, 7:09:43 PM4/23/13
to ddd...@googlegroups.com
Hi Simon,

Great to hear your experience. Would you be able to provide a concrete example? 

Thanks,
Johanna

df

Apr 23, 2013, 8:12:56 PM4/23/13
to DDD/CQRS
Yes, this is exactly the issue I'm facing, and I'm looking for a more elegant, or perhaps simpler, solution than process managers.

I will be analyzing my domain more carefully around these areas to see whether I'm modeling it wrong, and whether that's why adding process managers seems like overkill to me.

Simon Parsons

Apr 23, 2013, 8:40:53 PM4/23/13
to ddd...@googlegroups.com

Hi Johanna et al,

One reasonably easy example is in our accounting module. 

We have a requirement that when journals are posted against a general ledger, certain aspects of the general ledger become immutable. For example, the general ledger accounts are individually correctable by accounts staff up until the point they have a journal that references them; then they are locked in stone to prevent changing the meaning of the posted journals.

Our current model has separate GeneralLedger and GeneralJournal aggregates: the GeneralLedger encapsulates the configuration of a ledger, and a GeneralJournal aggregate represents each individual journal that’s posted against that ledger.

To satisfy the immutability-after-posting requirement, we have an event listener that receives the JournalPosted event and creates commands to update the state of the referenced account in the GeneralLedger. I’m not happy with this approach for a number of reasons, but primarily because there are very large numbers of journals being posted, while only one posting per account actually requires this change of state to occur.

Another concern is that the decision on whether the journal can be posted has been placed in the GeneralJournal aggregate, requiring a trusted client to provide the context for this validation. This seems to smell a bit. To solve that, we could model the act of posting a journal on GeneralLedger, but as we are event sourced, the GeneralLedger would then have potentially hundreds of thousands of GeneralJournalPosted events in its stream that aren’t actually required to hydrate its current state.

I would rather model this as a factory method on the GeneralLedger aggregate that would verify the journal can be posted, change its own internal state if required, and then bring a GeneralJournal aggregate into existence.
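A minimal sketch of that factory-method shape (names are illustrative, not Simon's actual model): validation and the once-only state change happen inside GeneralLedger, against its own consistent state, and the new GeneralJournal aggregate is returned.

```python
class GeneralLedger:
    def __init__(self):
        self.locked_accounts = set()
        self.pending_events = []

    def post_journal(self, journal_id, account_code):
        # Validation runs against the ledger's own consistent state;
        # the state change happens only the FIRST time an account
        # is referenced by a journal.
        if account_code not in self.locked_accounts:
            self.locked_accounts.add(account_code)
            self.pending_events.append(("AccountLocked", account_code))
        return GeneralJournal(journal_id, account_code)

class GeneralJournal:
    def __init__(self, journal_id, account_code):
        self.id = journal_id
        self.pending_events = [("JournalPosted", account_code)]

ledger = GeneralLedger()
ledger.post_journal("j1", "4000")
ledger.post_journal("j2", "4000")
# Only ONE AccountLocked event despite two postings against
# the same account.
```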

There are of course a few other concerns I’ve not gone into, but that’s the general picture.

Regards, Simon

Johanna Belanger

Apr 23, 2013, 11:07:25 PM4/23/13
to ddd...@googlegroups.com
Very interesting example, thanks!

It definitely seems cleaner to model journal creation approval on the general ledger because the approval is dependent on the state of the General Ledger. Also, if a journal is created based on the command's copy of the general ledger's state, and then that state changes before the general ledger receives the journal created event, doesn't that break the immutability invariant? Do you use a compensating command?

It seems like not putting that behavior on the GeneralLedger is a performance optimization to keep the General Ledger's events to a reasonable number. (Is this what you meant by "less scalable" in your first post? And what are you finding "costly to develop"?) 

Using a factory method on GeneralLedger, and thus changing two aggregates in a single transaction is another way of doing that optimization. But you lose the two benefits of the process manager design: 1) The ability to partition the datastore by aggregate without distributed transactions, and 2) A clean extensibility point for additional steps (that touch other aggregates) between creation approval and creation. (See https://groups.google.com/d/msg/dddcqrs/toU9nBeXWkQ/hyVX8QjS6gIJ)

Here's another idea: Use the process manager design, but filter out the journal creation events when storing the GeneralLedger to the event store. It's a performance hack, to be sure, but it is contained, and it preserves the "aggregate as consistency boundary" semantics.
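That filtering hack could look something like this toy sketch (event names are assumptions): the high-volume posting events are still raised and published, but only the state-bearing events are persisted to the ledger's stream.

```python
# Events the ledger raises but does NOT need in order to
# rehydrate its own state (they live on the journal's stream).
TRANSIENT_EVENT_TYPES = {"GeneralJournalPosted"}

def events_to_store(pending_events):
    """Filter the ledger's pending events down to the ones
    actually required to rebuild its current state."""
    return [e for e in pending_events
            if e[0] not in TRANSIENT_EVENT_TYPES]

pending = [("AccountLocked", "4000"),
           ("GeneralJournalPosted", "j1"),
           ("GeneralJournalPosted", "j2")]
stored = events_to_store(pending)
# Only the AccountLocked event reaches the ledger's stream.
```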

Thoughts?

Johanna Belanger

Apr 23, 2013, 11:12:34 PM4/23/13
to ddd...@googlegroups.com
In my response to Simon I mentioned the benefits of the process manager design. I'm wondering if process managers feel like overkill when there is not currently a need for the benefits they provide?

Simon Parsons

Apr 24, 2013, 1:25:47 AM4/24/13
to ddd...@googlegroups.com

“Doesn’t that break the immutability invariant?” – Yes, that’s another issue. This approach is potentially more inconsistent than eventual consistency, as it’s reliant on data from the read store.

 

“Not putting that behavior on the GeneralLedger is a performance optimization” – Yes, though not the one I was referring to originally. That is our current process-manager approach, which has to listen to many, many events throughout the lifetime of the general ledger even though only a handful at the beginning actually have a follow-on effect. I briefly considered a dynamic listener, but that just highlighted to me that the model was likely wrong.

 

Your and Greg Young’s process-manager approaches make sense, but I am still drawn toward a factory approach for this particular case, and it seems that all that is holding it back is an elegant way to update two aggregates from a single command in our infrastructure.

Alexey Raga

Apr 24, 2013, 4:27:00 AM4/24/13
to ddd...@googlegroups.com
>> Client aggregate generates an event that a new account was assigned to this client.

Are you sure that Client should issue this event at all? Does the Client aggregate need to know about this assignment? Does it affect its invariants? Why?
Shouldn't it be the other way around – "here is the account created for THAT client id" – so that technically it is the Account aggregate that publishes an event AccountOpened(accountId, clientId)?
 
Regards,
Alexey.

df

Apr 24, 2013, 9:33:27 AM4/24/13
to ddd...@googlegroups.com
Yes, I think that this approach could work in a variety of situations. I have also found the answer I was looking for in a link posted by Johanna (https://groups.google.com/d/msg/dddcqrs/toU9nBeXWkQ/hyVX8QjS6gIJ) about loading the same stream into a different aggregate when the original aggregate transitions to a new state. Previously I was trying to do this by creating another aggregate out of the first one, but now I can see how it could be done differently. I only need to solve the snapshotting problem when taking this road.
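A toy illustration of that technique – rehydrating one event stream into different aggregate types depending on lifecycle stage (all names invented):

```python
class Repository:
    """Toy event-sourced repository: one stream of events per id."""
    def __init__(self, streams):
        self.streams = streams  # stream_id -> list of events

    def load_as(self, stream_id, aggregate_cls):
        # The stream is just data; any aggregate class that knows
        # how to apply these events can be hydrated from it.
        agg = aggregate_cls()
        for event in self.streams[stream_id]:
            agg.apply(event)
        return agg

class AccountApplication:
    """The aggregate's early-lifecycle state."""
    def __init__(self):
        self.approved = False

    def apply(self, event):
        if event == "Approved":
            self.approved = True

class ActiveAccount:
    """A different aggregate type, hydrated from the SAME stream
    once the original has transitioned to a new state."""
    def __init__(self):
        self.balance = 0

    def apply(self, event):
        if isinstance(event, tuple) and event[0] == "Deposited":
            self.balance += event[1]

repo = Repository({"acc-1": ["Approved", ("Deposited", 50)]})
app = repo.load_as("acc-1", AccountApplication)
acct = repo.load_as("acc-1", ActiveAccount)
```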

Johanna Belanger

Apr 24, 2013, 3:10:54 PM4/24/13
to ddd...@googlegroups.com
I replied in a new thread. =)

Johanna Belanger

Apr 24, 2013, 3:39:54 PM4/24/13
to ddd...@googlegroups.com
Well ok, here are a couple of ideas, but I'm not necessarily recommending them! =)

GeneralLedger could write the GeneralJournalCreated event *using the GeneralJournal's ID as the AggregateID*. Basically, the GeneralLedger repository would be writing the first event to the GeneralJournal's event stream. Whether this is simple or not depends on your architecture. GeneralJournal would not have a chance to perform any behavior on its own creation. This assumes you have a way of writing to both event streams in a single transaction.

or

GeneralLedger could return a GeneralJournal aggregate from its CreateGeneralJournal command instead of returning void. The command handler would then be responsible for saving the state of the GeneralJournal and the GeneralLedger in the same transaction. (This was someone else's idea, but when I went back to look, I couldn't find it again. Sorry to whoever came up with this for not giving credit! =) )
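That second idea might look roughly like this in Python (illustrative only, with an in-memory store standing in for a real transactional event store): the command handler owns the unit of work, invokes the factory, and persists both aggregates atomically.

```python
class GeneralLedger:
    def __init__(self):
        self.locked_accounts = set()
        self.pending_events = []

    def post_journal(self, journal_id, account_code):
        # Factory method: validates, changes own state if needed,
        # and returns the new aggregate instead of returning void.
        if account_code not in self.locked_accounts:
            self.locked_accounts.add(account_code)
            self.pending_events.append(("AccountLocked", account_code))
        return GeneralJournal(journal_id, account_code)

class GeneralJournal:
    def __init__(self, journal_id, account_code):
        self.id = journal_id
        self.pending_events = [("JournalPosted", account_code)]

class InMemoryEventStore:
    def __init__(self):
        self.streams = {}

    def append_atomically(self, writes):
        # In a real store this would be one transaction over
        # both streams; here we just apply all writes together.
        for stream_id, events in writes:
            self.streams.setdefault(stream_id, []).extend(events)

def handle_create_general_journal(cmd, ledgers, store):
    ledger = ledgers[cmd["ledger_id"]]
    journal = ledger.post_journal(cmd["journal_id"], cmd["account"])
    # The handler, not the aggregate, saves BOTH in one transaction.
    store.append_atomically([
        (cmd["ledger_id"], ledger.pending_events),
        (journal.id, journal.pending_events),
    ])

store = InMemoryEventStore()
ledgers = {"gl-1": GeneralLedger()}
handle_create_general_journal(
    {"ledger_id": "gl-1", "journal_id": "j-1", "account": "4000"},
    ledgers, store)
```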