The use of read models in command handlers.

Michael Ainsworth

Dec 14, 2015, 11:47:17 PM
to DDD/CQRS
I know that the write model should not reference the read model, but I've read on the CQRS FAQ (cqrs.nu) that you should also refrain from referencing the read model in command handlers. E.g., the following would be "wrong":

    class UserCommandHandler {
    public:

        UserCommandHandler(EventStore& eventStore, UserRepository& userRepository, database& db)
            : eventStore_(eventStore), userRepository_(userRepository), db_(db) {}

        void on(const RegisterUser& command) {
            // This uniqueness check queries the read model, which is what the FAQ warns against.
            if (hasUser(command.username)) {
                throw DomainError("The username specified has already been taken.");
            }

            eventStore_.store(userRepository_.registerUser(command.username));
        }

    private:

        bool hasUser(const std::string& username) {
            return db_.select("select count(*) from users where username = :username")
                      .use(":username", username)
                      .execute()[0]["count"] > 0;
        }

        EventStore& eventStore_;
        UserRepository& userRepository_;
        database& db_;

    };

I like the idea of only retrieving aggregates by their ID, not by any other "columns" or "unique constraints" (to use SQL lingo).
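
To put a shape on that preference, the sort of repository I have in mind looks roughly like this (the names here are purely illustrative):

    // Purely illustrative names: a repository whose only lookup is by identity.
    #include <memory>
    #include <string>

    struct UserId {
        std::string value;
    };

    class User;  // the aggregate itself; details omitted

    class UserRepositoryById {
    public:
        virtual ~UserRepositoryById() = default;

        // The one and only way to load an aggregate: by its ID.
        virtual std::shared_ptr<User> getById(const UserId& id) = 0;

        // Deliberately no findByUsername() or findByEmail();
        // those queries belong on the read side.
    };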

xiety

Dec 15, 2015, 3:31:36 AM
to DDD/CQRS
I think that you can use special denormalized "read models" accessible by some services. For me it's the only way to get the required data quickly on the command side. I think the FAQ is trying to prevent people from using the UI read model in command handlers and thereby coupling the UI and domain model together.

Michael Ainsworth

Dec 15, 2015, 3:48:00 AM
to DDD/CQRS
I see. I was also under the impression that an aggregate can only be loaded by its ID. Accessing denormalised read models within a command handler would allow you to obtain the IDs of aggregates based on other properties, enabling the command handler to coordinate between aggregates with complex relationships.

Of course, while the command handler may query the read side, the aggregates themselves would remain oblivious to it. Is this acceptable?
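
To make that concrete, here is roughly what I'm picturing (every name in this sketch is invented): the handler uses a small command-side lookup to turn a username into an ID, and the aggregate itself never sees that lookup.

    // All names invented for illustration.
    #include <memory>
    #include <stdexcept>
    #include <string>

    struct UserId { std::string value; };

    struct SuspendUser {            // command carrying a "natural" key
        std::string username;
    };

    class User {                    // the aggregate; oblivious to any lookup
    public:
        void suspend() { suspended_ = true; }
    private:
        bool suspended_ = false;
    };

    class UserRepository {          // loads aggregates by ID only
    public:
        virtual ~UserRepository() = default;
        virtual std::shared_ptr<User> getById(const UserId& id) = 0;
    };

    class UsernameLookup {          // small command-side read model
    public:
        virtual ~UsernameLookup() = default;
        virtual bool tryFindId(const std::string& username, UserId& id) = 0;
    };

    class SuspendUserHandler {
    public:
        SuspendUserHandler(UsernameLookup& lookup, UserRepository& users)
            : lookup_(lookup), users_(users) {}

        void on(const SuspendUser& command) {
            UserId id;
            // The handler consults the lookup to resolve the ID...
            if (!lookup_.tryFindId(command.username, id)) {
                throw std::runtime_error("No such user.");
            }
            // ...but the aggregate itself never touches the lookup.
            users_.getById(id)->suspend();
        }

    private:
        UsernameLookup& lookup_;
        UserRepository& users_;
    };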

Greg Young

Dec 15, 2015, 4:24:00 AM
to ddd...@googlegroups.com
There are times when you have to talk to a read model from the write side, but as a rule it should be avoided if possible.



--
Studying for the Turing test

Vinicius Zaramella

Dec 15, 2015, 8:37:44 AM
to DDD/CQRS
Wouldn't that happen every time you needed to validate the command against the previous state?

Greg Young

Dec 15, 2015, 8:40:03 AM
to ddd...@googlegroups.com
Why would it?

João Bragança

Dec 15, 2015, 8:58:05 AM
to ddd...@googlegroups.com
Your aggregate should contain that state. Use a dedicated read model on the command side for grabbing information from other aggregates or bounded contexts.

Michael Ainsworth

Dec 15, 2015, 7:21:37 PM
to DDD/CQRS
My question was about command handlers. I can understand that keeping the read and write models completely decoupled is a good thing, and that I can accomplish the same task using a "local read model" for a small, dedicated purpose (one completely under the control of the write model). From the cqrs.nu FAQ:

If eventual consistency is not fast enough for you, consider adding a table on the write side, a small local read-side as it were, of already allocated names. Make the aggregate transaction include inserting into that table.
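
As I read that suggestion, the handler from my first snippet would change into something like the following (the reserved_usernames table, beginTransaction() and the fluent db calls are all my invention, in the same made-up style as before):

    void on(const RegisterUser& command) {
        // One transaction covers both the reservation and the new events, so a
        // duplicate username is rejected atomically by the unique constraint on
        // reserved_usernames.username (the whole db API here is invented for the sketch).
        auto transaction = db_.beginTransaction();

        db_.insert("insert into reserved_usernames (username) values (:username)")
           .use(":username", command.username)
           .execute();

        eventStore_.store(userRepository_.registerUser(command.username));

        transaction.commit();
    }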

Sagas, however, function more like a client, in that they respond to events in order to issue commands. They seem to me to be things that automate workflows. For example, I have two aggregates, Order and Customer, with the rule that an Order's delivery address should be updated if the customer updates their default delivery address and the order has not yet been shipped. This could be something that is currently done manually by staff. E.g., a customer calls in to say, "I just moved house, so in future all parcels should be delivered to my new place." The staff member updates the customer's default delivery address. They've worked at the business for 12 months, so they know that the "policy" at work is to look up all unshipped orders and change the delivery address there as well.

In the above example, the manual process is being replaced by the saga (it seems to me that the term "process manager" is more appropriate here). If the staff member looks up the unshipped orders using the read model, why shouldn't the saga do the same? If the main reason is to decouple the write model from the read model, and the advice is to have a "local read model" to be used by that saga, then we're effectively creating two read models - one for the user interface and another for "caching" in the write model. If the main reason is that the system may not yet be consistent, the saga can either implement a delay or be re-executed the next time the same event is replayed.

Ben Kloosterman

Dec 16, 2015, 12:35:10 AM
to ddd...@googlegroups.com

In the above example, the manual process is being replaced by the saga (it seems to me that the term "process manager" is more appropriate here). If the staff member looks up the unshipped orders using the read model, why shouldn't the saga do the same? 

It's perfectly OK for a process manager to do this, e.g. receive events, read the read model, and issue commands. However, this is NOT normally regarded as part of the DDD/CQRS domain logic handled by the command handlers, even though it is part of the business logic. It is better seen as an external service, and such a service can easily be a small SOA service, since it is loosely coupled.
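
As a rough sketch of what I mean by that (every type below is invented for illustration), a process manager is just event in, read-model query, commands out:

    // Every type here is invented for illustration.
    #include <string>
    #include <vector>

    struct CustomerId { std::string value; };
    struct OrderId    { std::string value; };
    struct Address    { std::string lines; };

    struct CustomerAddressChanged {      // the event the process manager reacts to
        CustomerId customerId;
        Address newAddress;
    };

    struct ChangeDeliveryAddress {       // the command it issues
        OrderId orderId;
        Address newAddress;
    };

    class OrderReadModel {               // read model queried on the command side
    public:
        virtual ~OrderReadModel() = default;
        virtual std::vector<OrderId> unshippedOrdersFor(const CustomerId& customer) = 0;
    };

    class CommandBus {
    public:
        virtual ~CommandBus() = default;
        virtual void send(const ChangeDeliveryAddress& command) = 0;
    };

    class UpdateUnshippedOrdersPolicy {  // the process manager itself
    public:
        UpdateUnshippedOrdersPolicy(OrderReadModel& readModel, CommandBus& commands)
            : readModel_(readModel), commands_(commands) {}

        void on(const CustomerAddressChanged& event) {
            // Event in, read model queried, commands out; no aggregate is touched here.
            for (const OrderId& orderId : readModel_.unshippedOrdersFor(event.customerId)) {
                commands_.send(ChangeDeliveryAddress{orderId, event.newAddress});
            }
        }

    private:
        OrderReadModel& readModel_;
        CommandBus& commands_;
    };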

Doing it in a command handler is a completely different matter (these sagas/process managers are driven off events). I can see few good cases for it, since most of the time a facade that exists before the command is created can handle this and prevent the command from ever being created or reaching a command handler, e.g. a web service, anti-corruption layer, etc.
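
And by a facade I mean something along these lines (again, all names are invented): the read-model check happens at the edge, before any command exists, so nothing like it ever reaches a command handler.

    // Again a sketch with invented names only.
    #include <stdexcept>
    #include <string>

    struct RegisterUser { std::string username; };

    class UsernameReadModel {            // read side, consulted at the edge
    public:
        virtual ~UsernameReadModel() = default;
        virtual bool isTaken(const std::string& username) = 0;
    };

    class RegistrationCommandBus {
    public:
        virtual ~RegistrationCommandBus() = default;
        virtual void send(const RegisterUser& command) = 0;
    };

    class RegistrationFacade {           // e.g. the web service in front of the domain
    public:
        RegistrationFacade(UsernameReadModel& usernames, RegistrationCommandBus& commands)
            : usernames_(usernames), commands_(commands) {}

        void registerUser(const std::string& username) {
            // The read-model check happens before any command exists, so it never
            // has to happen inside a command handler.
            if (usernames_.isTaken(username)) {
                throw std::runtime_error("The username specified has already been taken.");
            }
            commands_.send(RegisterUser{username});
        }

    private:
        UsernameReadModel& usernames_;
        RegistrationCommandBus& commands_;
    };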

Regards,

Ben 

Michael Ainsworth

Dec 16, 2015, 1:07:36 AM
to DDD/CQRS
Thanks for the reply, Bennie.

It's perfectly OK for a process manager to do this, e.g. receive events, read the read model, and issue commands

OK, that's what I wanted to know. While it's "perfectly OK", is it the DDD way? I'm happy to follow the guidance of those more knowledgeable than myself. I just want to better understand the design.

Ben Kloosterman

Dec 17, 2015, 4:44:13 AM
to ddd...@googlegroups.com
It's not DDD or CQRS but event sourcing that is the key here: once you have business events, they allow the decoupling.

When I first started looking at CQRS/ES I was concerned about the split brain, e.g. business logic in the facade, client, process managers, etc. But now I think this is a good thing: the really key logic that mutates the entities is isolated, and process managers can be SOA micro-services (and remember CQRS/DDD is not a top-level architecture), which I like. I'm pretty pure in general with the rules; the main exception is probably a single command sometimes modifying two aggregates in a transaction. Often process managers need to cross BCs anyway, so having them outside means you are freer in terms of deciding your BCs. That said, sagas and process managers are relatively expensive in terms of the coding time to implement, test, etc.


CQRS/ES is pretty loose in the way you implement it (a horses-for-courses approach), but it's a good idea to know why a rule is there before you break it.

Ben
