Hi Gary,
I have 3 comments:
> 2. Validate command against an aggregate root WITHOUT updating any
state and generate event.
1. I am pretty sure this is not going to work out. For any but the most
trivial operations, you will end up with multiple resulting events per
command. This means that you will need to modify the state of the
aggregate in between to support your logic. So your aggregate updates
state immediately when generating an event, but the event is still not
published beyond an "uncommitted event" list on that very aggregate.
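To make that concrete, here is a minimal, hypothetical sketch (Order, ItemReserved, etc. are all invented names, not from your code): one command yields several events, and the aggregate applies each event to its own state right away because the decision to emit the next event depends on that updated state; the events are only collected on an uncommitted list, not published:

```scala
sealed trait Event
final case class ItemReserved(sku: String) extends Event
final case class OrderConfirmed(itemCount: Int) extends Event

class Order {
  private var reserved: List[String] = Nil
  private var confirmed: Boolean = false
  private var uncommitted: List[Event] = Nil

  // Applying mutates state immediately; publishing = apply + record.
  private def applyEvent(e: Event): Unit = e match {
    case ItemReserved(sku) => reserved = sku :: reserved
    case OrderConfirmed(_) => confirmed = true
  }
  private def publish(e: Event): Unit = {
    applyEvent(e)
    uncommitted = uncommitted :+ e
  }

  // One command, multiple events: the confirmation decision reads the
  // state produced by the reservation events generated just before it.
  def place(skus: List[String]): Unit = {
    skus.foreach(sku => publish(ItemReserved(sku)))
    if (reserved.nonEmpty && !confirmed)
      publish(OrderConfirmed(reserved.size))
  }

  def uncommittedEvents: List[Event] = uncommitted
}
```

Here `place(List("a", "b"))` leaves three uncommitted events on the aggregate, and the `OrderConfirmed` event could not have been generated without first applying the `ItemReserved` events.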
> It appears the aggregate root itself should expose a method to update
it's state based on an event in addition to functions for validating
commands - does this make sense?
and

> The aggregate itself would then expose a function which accepts
events and outputs a new state.
2. The aggregate has no business accepting events, with the exception
of its own history as a constructor parameter for loading. It is
instead a source of events. If you follow tell-don't-ask/no-getters
principles, it also should not publish state. Instead, the aggregate
accepts method calls from command handlers and provides the list of
generated events. Through the method call, you can also provide access
to services and other aggregates for the duration of the call.
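A minimal sketch of what I mean (all names invented): the command handler tells the aggregate what to do, optionally handing it a service for the duration of the method call, and afterwards pulls the generated events to persist and publish; the aggregate accepts no foreign events and exposes no state getters:

```scala
sealed trait Event
final case class PreferredStatusGranted(customerId: Int) extends Event

class Customer(val id: Int, history: Seq[Event]) {
  // State is rebuilt from the aggregate's own history on load only.
  private var preferred: Boolean =
    history.exists(_.isInstanceOf[PreferredStatusGranted])
  private var generated: List[Event] = Nil

  // Tell, don't ask: a method call from the command handler, with a
  // service (here just a function) available for the call's duration.
  def makePreferred(isBlacklisted: Int => Boolean): Unit =
    if (!preferred && !isBlacklisted(id)) {
      preferred = true
      generated = generated :+ PreferredStatusGranted(id)
    }

  // The aggregate is a source of events, not a sink.
  def pullGeneratedEvents(): List[Event] = {
    val out = generated
    generated = Nil
    out
  }
}

// Command handler: load from history, call the method, then
// persist + publish whatever the aggregate generated.
def handleMakePreferred(id: Int,
                        load: Int => Seq[Event],
                        persist: Seq[Event] => Unit): Unit = {
  val customer = new Customer(id, load(id))
  customer.makePreferred(isBlacklisted = _ => false)
  persist(customer.pullGeneratedEvents())
}
```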
3. Apart from all of that, I have a suggestion: if you are in a
non-time-critical domain, i.e. you can spend some extra milliseconds
per command, I would suggest not having any "state" at all beyond the
list of events. Instead, you define functions for state in traits which
project from event collections to whatever your state type is. This
allows you to reuse the definitions in aggregates as well as in read
models.
In your domain, you just publish events by appending to the aggregate's
internal list (or use an immutable domain model if you like, but I am
not sure there is a need for immutability here). So your aggregate is
always up to date, but not committed. For committing, you take the new
events from the aggregate and persist and publish them. For reverting,
you can simply let the aggregate go out of scope and have the GC get
rid of it.
Non-production example:
abstract class Event(val what: String)
final case class CustomerBecamePreferred(customerId: Int) extends
Event("Customer " + customerId + " became preferred")
type History = scala.collection.immutable.Seq[Event]
trait Entity {
  var history: History
  def Publish(e: Event): History = { history = history :+ e; history }
}
trait PreferredConcept {
  var history: History
  def IsPreferred: Boolean =
    history.exists(_.isInstanceOf[CustomerBecamePreferred])
}
class CustomerStatus(val id: Int, var history: History) extends
Entity with PreferredConcept {
  def MakePreferred(): Unit = {
    if (IsPreferred) {
      println("Already preferred...")
    } else {
      Publish(CustomerBecamePreferred(id))
    }
  }
}
with
def main(): Unit = {
  val history = scala.collection.immutable.Seq.empty[Event]
  val stat = new CustomerStatus(1, history)
  if (stat.IsPreferred) println("Preferred") else println("Normal")
  stat.MakePreferred()
  if (stat.IsPreferred) println("Preferred") else println("Normal")
  stat.MakePreferred()
}
The drawback is that for every access to state, you need to run through
the aggregate's history again (in local memory). So if your aggregates
are limited by design to a few hundred or a few thousand events, this
is quite feasible. The upside is reuse of state definitions (far less
testing and fewer bugs in read models) and no state to manage beyond a
single list of events per live aggregate. Again, the above is just a
proof of concept for the sake of discussion, not in any way
production-like. But I am doing something similar in C# with great
success.
Cheers
Phil