Validation and Value Objects


Danil Suits

May 6, 2016, 2:35:30 AM
to DDD/CQRS

I'm trying to work out how to deal with validation that becomes more restrictive as understanding of the domain evolves. 

Until recently, I have been thinking about value objects in a naive way[1]: they are immutable, so if we ensure that they are valid upon construction, then they will be valid forever.  By writing the signatures of methods on our aggregates in terms of value types, using them internally to calculate the new state of the aggregate, and using them as the domain model's representation of those concepts in the emitted events, everything is guaranteed to be valid (the application layer will prevent invalid inputs, and entity commands that require an invalid value will fail, leaving the entity in its previously valid state).
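For concreteness, a minimal sketch of that naive view; the EmailAddress type and its rule are invented purely for illustration:

```java
// A value object validated on construction: because it is immutable,
// any instance we manage to obtain stays valid for its entire lifetime.
public final class EmailAddress {
    private final String value;

    public EmailAddress(String value) {
        if (value == null || !value.contains("@")) {
            throw new IllegalArgumentException("not a well-formed email address: " + value);
        }
        this.value = value;
    }

    public String value() {
        return value;
    }
}
```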

So command messages arrive; we use a fluent builder to create a representation of each message in the ubiquitous language and identify the target entity.  We load from the book of record the history of event messages for this object, transform each of them via another fluent builder, and then use the resulting collection to re-hydrate the entity.
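The re-hydration itself is just a left fold over the history; a sketch, with Account and Deposited standing in for whatever the real domain types would be:

```java
import java.util.List;

interface DomainEvent {}

record Deposited(long amount) implements DomainEvent {}

final class Account {
    private long balance;

    // left fold: state = apply(apply(apply(initial, e1), e2), e3) ...
    static Account rehydrate(List<DomainEvent> history) {
        Account state = new Account();
        for (DomainEvent e : history) {
            state.apply(e);
        }
        return state;
    }

    private void apply(DomainEvent e) {
        if (e instanceof Deposited d) {
            balance += d.amount();
        }
        // ... dispatch on the other event types here
    }
}
```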

That's all fine if we get the validation right the first time out the door.

But if we discover that we've released a model with the wrong validation (which might be the wrong rules on a value type, or a permissive concept chosen where a more restrictive concept was appropriate), then we will want to replace the domain model with a new implementation that reflects our new understanding.  And we want to do this in such a way that the replacement can consume the book of record generated by the original.

Commands don't seem to be a problem.  Just reject any that don't satisfy the new validation rules.

We probably want to reject any command dispatched to an entity known to be in an invalid state; the pre-condition isn't satisfied, so halt and catch fire rather than making the situation worse.  Therefore the more restrictive validation should be applied to the entity state as well.

Events seem messier.  We can't drop events arbitrarily from the book of record and hope to maintain the correct invariant.

I had thought that I could use compensating events, meaning that I could add an event to the stream that corrects the earlier problem.  My conclusion: that's fine for the case where you have no validation to consider, and you are just correcting a problem raised in an exception report[2].  But it is just a train wreck in the case where you are trying to ensure that the entity is always in a valid state.

What I think I want instead is an implementation of compensating events where (1) all of the events for the entity, in their original (immutable) state are present in the book of record and (2) the stream of events used to re-hydrate the entity are all valid.

In other words, I want the repairs to happen as I am converting the event message representation into the event value representation.[3]

I think it works something like this: having identified a stream to load, I check to see if there is a corresponding compensation stream.  In the most common cases, there won't be one, and the event messages are played through a fluent builder, left fold, done.

If there is a compensation stream, then I'm essentially loading that information into an event sourced transformer.  Once the state of the transformer has been restored, I then play into it the raw event messages, and emit from it a sequence of messages with all corrections applied.  As with commands, there won't necessarily be a 1:1 relationship between the event messages in the original history and the domain events.
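Something like the following, perhaps; EventMessage, Correction, and EventTransformer are names I'm inventing for illustration, and the corrections are reduced to replace-or-drop to keep the sketch small:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// the raw, immutable message as it sits in the book of record
record EventMessage(String id, String payload) {}

// a compensation event: replace the payload of an original message, or drop it (null)
record Correction(String targetEventId, String replacementPayload) {}

final class EventTransformer {
    private final Map<String, Correction> corrections = new HashMap<>();

    // phase 1: restore the transformer's state from the compensation stream
    void restore(List<Correction> compensationStream) {
        for (Correction c : compensationStream) {
            corrections.put(c.targetEventId(), c);   // later corrections win
        }
    }

    // phase 2: play the raw messages through, emitting the corrected sequence
    List<EventMessage> transform(List<EventMessage> rawHistory) {
        List<EventMessage> corrected = new ArrayList<>();
        for (EventMessage m : rawHistory) {
            Correction c = corrections.get(m.id());
            if (c == null) {
                corrected.add(m);                      // pass through untouched
            } else if (c.replacementPayload() != null) {
                corrected.add(new EventMessage(m.id(), c.replacementPayload()));
            }
            // else: the correction drops the message entirely, so no 1:1 mapping
        }
        return corrected;
    }
}
```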

I'm too snarled at this point to work through compensating for errors in compensating events.  "Turtles all the way down" seems like a perfectly sane, albeit completely deranged, possibility.

Is this reasoning familiar to anybody?

[1] That may still be true

[2] Pen and Ink ledger books don't have validation; just exception reporting.[4]

[3] Do we have better language here?  I'm trying to distinguish json/protobufs/etc from the representation expressed as domain value types.  Event message vs Domain events?  Is there an analogous "domain command" concept?

[4] The motivation for validation is to reduce exception overhead?




Ben Kloosterman

May 6, 2016, 8:23:40 PM
to ddd...@googlegroups.com
"I had thought that I could use compensating events, meaning that I could add an event to the stream that corrects the earlier problem.  My conclusion: that's fine for the case where you have no validation to consider, and you are just correcting a problem raised in an exception report[2].  But it is just a train wreck in the case where you are trying to ensure that the entity is always in a valid state."

Disagree; business systems are never in a 100% valid state.  If it's important enough, create compensating events to fix the problem; else leave it.  Consider an accounting system: if an incorrect entry was made last year to the general ledger, you will either ignore it or put an adjustment in the current year.  Effectively you have rolled up events with a compensation.

We are missing the real reason for the validation; it may be infrastructure (where keys, duplicates, and such concerns create technical issues).  I'd be looking at why this validation is so critical.  Paper business documents / dockets have faults missed by a checker, and human (especially entry) errors are everywhere, which is why accounting has such a key balance period at the end of the month or year.  Missed business validation is bad but not critical, and even then it is compensated for: "Oh, we didn't pay you the right amount last month", etc.


Ben

Danil Suits

May 10, 2016, 12:53:26 PM
to DDD/CQRS
Checking my understanding of what Ben has written.

It sounds to me as though you are agreeing that validation and compensating events are a problem, but that the lesson to draw is that validation should be treated with suspicion.

In longer form: unless we are investing in robustness beyond the point of diminishing returns, there's going to be a business need for compensating events.  Generally speaking, there's value in keeping the situation malleable, especially in a domain where one's competitive advantage lies.

Therefore, to keep the situation fluid, keep validation out of the model until a compelling business justification appears.

Am I understanding this suggestion correctly, or am I still trapped in my own preconceptions?

Ben Kloosterman

May 10, 2016, 11:40:35 PM
to ddd...@googlegroups.com
Compensating events are a problem but do need to be done sometimes. 

IMHO it's OK to have some business validation and some basic infrastructure checks (GUID of a key not empty); the issues you suggest tend to be more complex, infrastructure-related validation errors.

The normal rule has always been that commands should rarely fail, so I have at times added validation to the command if it did not require dependencies, and then had the command queue validate it on add (or validate it on create).  One thing I always struggled with was that validation requiring the read model was a bit expensive to write.
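As a rough sketch of that (names invented, not from any particular framework): a command that can validate itself with no dependencies, and a queue that calls that validation on add:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// a command whose validation needs no external dependencies
record ChangeEmail(String accountId, String newEmail) {
    void validate() {
        if (accountId == null || accountId.isBlank()) {
            throw new IllegalArgumentException("accountId is required");
        }
        if (newEmail == null || !newEmail.contains("@")) {
            throw new IllegalArgumentException("newEmail is not well-formed");
        }
    }
}

final class CommandQueue {
    private final Queue<ChangeEmail> pending = new ArrayDeque<>();

    void add(ChangeEmail command) {
        command.validate();   // reject invalid commands at enqueue time, before dispatch
        pending.add(command);
    }
}
```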

Ben


jarchin

May 11, 2016, 5:15:44 AM
to DDD/CQRS
Hi Danil. This reasoning is familiar to me.

I think especially this part is interesting:
 
I think it works something like this: having identified a stream to load, I check to see if there is a corresponding compensation stream.  In the most common cases, there won't be one, and the event messages are played through a fluent builder, left fold, done.

If there is a compensation stream, then I'm essentially loading that information into an event sourced transformer.  Once the state of the transformer has been restored, I then play into it the raw event messages, and emit from it a sequence of messages with all corrections applied.  Like commands, there won't necessarily be a 1:1 relationship between the event messages in the original history and the domain events

Currently, we export the entire database when we want to transform bad events or make schema changes to events.

If we instead used a compensation stream, we would:
a. Retain the history of bad events (can be more or less useful)
b. Avoid export of entire db
c. Get a log of schema changes and data corrections.

My first instinct was that the transformer should be a part of the Aggregate base class. It will have a stream name based on the aggregate instance (like [ar]Compensation-[id]). All loads of ARs will first load any compensation events and build up a transformer state that contains instructions on how to modify the events of the AR stream. When AR events are loaded, they are piped through the transformer before being applied.
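Roughly like this, reusing the hypothetical EventTransformer / EventMessage / Correction shapes sketched earlier in the thread; EventStoreClient is likewise a made-up interface, not a real client API:

```java
import java.util.List;

// a made-up store interface, just to make the sketch hang together
interface EventStoreClient {
    List<Correction> readCorrections(String streamName);
    List<EventMessage> readMessages(String streamName);
}

final class AggregateLoader {
    List<EventMessage> loadCorrectedHistory(String id, EventStoreClient store) {
        // 1. look for a compensation stream named after the aggregate instance,
        //    e.g. orderCompensation-{id}; in the common case it will be empty
        EventTransformer transformer = new EventTransformer();
        transformer.restore(store.readCorrections("orderCompensation-" + id));

        // 2. pipe the raw AR events through the transformer before applying them
        return transformer.transform(store.readMessages("order-" + id));
    }
}
```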

Mm.. Maybe it's worth it.

Danil Suits

Jan 6, 2018, 8:50:32 AM
to DDD/CQRS
Leaving breadcrumbs for those who follow after

Greg's book on Event versioning really helped drive home for me that events (and state, generally) are messages from the past to the future, with all of the compatibility concerns that implies.

Event histories are simply representations of state.  It's perfectly normal to have representations of state that aren't reachable in the current domain model, which is to say, states that have no inbound edges.

When we update domain behavior, we might remove the edge from state A to state B; but we don't remove state B itself -- among other concerns, we still want to be able to transition entities out of the unreachable state.  Load entity, apply compensating actions that take us out of the bad state, and now we're back on the happy path.
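A small sketch of that shape (names invented): nothing in the new model emits the event that leads into the bad state any more, but the aggregate still applies it when replaying old histories, and a compensating command provides the one edge leading back out:

```java
// old event type: no command produces it any more, but old histories may still contain it
record LimitSetToNegative(long limit) {}

// compensating event: the transition out of the otherwise-unreachable state
record LimitCorrected(long limit) {}

final class CreditAccount {
    private boolean limitIsInvalid;

    void apply(Object event) {
        if (event instanceof LimitSetToNegative) {
            limitIsInvalid = true;      // replay can still put us into this state
        } else if (event instanceof LimitCorrected) {
            limitIsInvalid = false;     // ... and replay can take us back out of it
        }
    }

    // compensating command: load the entity, apply the result, and we're back on the happy path
    LimitCorrected correctLimit(long newLimit) {
        if (newLimit < 0) {
            throw new IllegalArgumentException("limit must be non-negative");
        }
        return new LimitCorrected(newLimit);
    }
}
```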

