My approach for updating in-memory aggregate root state in a CQRS/Event Sourcing model.


Gary Malouf

Oct 24, 2012, 2:29:12 PM
to ddd...@googlegroups.com
I'm designing a domain model based on DDD, CQRS and event sourcing to cleanly separate the domain logic from the query view.  For reference, this is in Scala and uses an 'immutable' domain model.  For each aggregate/command handler, this is the workflow I am leaning towards:

 1. Submit command
 2. Validate command against an aggregate root WITHOUT updating any state and generate event.
 3. Write to event store
 4. Update aggregate state using generated event.
 5. 'Publish' event for consumption by interested parties

In my view, the appropriate function calls for steps 2-5 would be made by the command handler.

It appears the aggregate root itself should expose a method to update its state based on an event, in addition to functions for validating commands - does this make sense? Design-wise, I thought it was a bit convoluted to have the functions for command validation/event production on the same trait/class where the function for updating the aggregate root's state lives.

My thought in Scala was to put the validation on the companion object (think static methods in the Java world) of the aggregate -> successful validation of a command would return a Validation of either a DomainError or a List of events corresponding to the command.  The aggregate itself would then expose a function which accepts events and outputs a new state.
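A minimal sketch of this shape (all names are hypothetical, and plain Either stands in for whatever Validation type is used): validation is a pure function on the companion object returning either a DomainError or a list of events, and the aggregate exposes a pure transition function from events to new state.

```scala
// Hypothetical sketch of the design described above; names are illustrative.
sealed trait DomainError
case object AlreadyPreferred extends DomainError

sealed trait Event
final case class CustomerBecamePreferred(customerId: Int) extends Event

final case class Customer(id: Int, preferred: Boolean) {
  // Step 4: pure state transition driven by a committed event.
  def applyEvent(e: Event): Customer = e match {
    case CustomerBecamePreferred(_) => copy(preferred = true)
  }
}

object Customer {
  // Step 2: validate a command against current state without mutating it.
  def validateMakePreferred(c: Customer): Either[DomainError, List[Event]] =
    if (c.preferred) Left(AlreadyPreferred)
    else Right(List(CustomerBecamePreferred(c.id)))
}
```

After the events are written to the store (step 3), the new state is just a fold: `events.foldLeft(customer)(_ applyEvent _)`.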

Any feedback on my approach would be greatly appreciated.

Philip Jander

Oct 25, 2012, 4:24:33 AM
to ddd...@googlegroups.com

Hi Gary,

I have 3 comments:

> 2. Validate command against an aggregate root WITHOUT updating any
state and generate event.
1. I am pretty sure this is not going to work out. For any but the most
trivial operations, you will end up with multiple resulting events per
command. This means that you will need to modify the state of the
aggregate in between to support your logic. So your aggregate updates
state immediately when generating an event, but the event is still not
published beyond an "uncommitted event" list on that very aggregate.
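A sketch of this pattern, with hypothetical names: state is updated as soon as an event is generated, but events accumulate on an uncommitted list until the command handler drains it.

```scala
// Illustrative only; names are not from the thread.
sealed trait Event
final case class ItemRenamed(name: String) extends Event

class Item(var name: String) {
  private var uncommitted: List[Event] = Nil

  def rename(newName: String): Unit = {
    require(newName.nonEmpty, "name must not be empty") // command validation
    publish(ItemRenamed(newName))
  }

  private def publish(e: Event): Unit = {
    applyEvent(e)                  // state is updated immediately...
    uncommitted = uncommitted :+ e // ...but the event is not yet committed
  }

  private def applyEvent(e: Event): Unit = e match {
    case ItemRenamed(n) => name = n
  }

  // The command handler drains this list after writing to the event store.
  def takeUncommitted(): List[Event] = {
    val es = uncommitted
    uncommitted = Nil
    es
  }
}
```

Subsequent logic within the same command sees the already-applied state, so a multi-event command validates naturally against its own intermediate results.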

> It appears the aggregate root itself should expose a method to update
it's state based on an event in addition to functions for validating
commands - does this make sense?
and > The aggregate itself would then expose a function which accepts
events and outputs a new state.
2. The aggregate has no job accepting events with the exception of its
own history as a constructor parameter for loading. It is instead a
source of events. If following tell-dont-ask/no-getters principles, it
also should not publish state. Instead, the aggregate accepts method
calls from command handlers and provides the list of generated events.
Through the method call, you can also provide access to services and
other aggregates for the duration of the call.

3. Apart from all of that, I have a suggestion: if you are in a
non-time-critical domain, i.e. you can spend some extra milliseconds per
command, I would suggest not having any "state" at all beyond the list
of events. Instead you define functions for state in traits which
project from event collections to whatever your state type is. This
allows you to reuse the definitions in aggregates as well as readmodels.
In your domain, you just publish events by writing to the aggregate's
internal list (or use an immutable domain if you like, but I am not sure
there is a need for immutability here). So your aggregate is always up
to date, but not committed. For committing, you take the new events from
the aggregate and persist+publish. For reverting, you can let the
aggregate go out of scope and let the GC get rid of it.


Non-production example:


abstract class Event(val what: String)

final case class CustomerBecamePreferred(customerId: Int)
    extends Event("Customer " + customerId + " became preferred")

object Domain {
  type History = scala.collection.immutable.Seq[Event]
}
import Domain.History

trait Entity {
  var history: History
  def Publish(e: Event): Unit = history = history :+ e
}

trait PreferredConcept {
  def history: History
  def IsPreferred: Boolean =
    history.exists(_.isInstanceOf[CustomerBecamePreferred])
}

class CustomerStatus(val id: Int, var history: History)
    extends Entity with PreferredConcept {
  def MakePreferred() {
    if (IsPreferred) println("Already preferred...")
    else Publish(CustomerBecamePreferred(id))
  }
}


with

def main(args: Array[String]): Unit = {
  val history = scala.collection.immutable.Seq.empty[Event]

  val stat = new CustomerStatus(1, history)

  if (stat.IsPreferred) println("Preferred") else println("Normal")
  stat.MakePreferred()

  if (stat.IsPreferred) println("Preferred") else println("Normal")
  stat.MakePreferred()
}

The drawback is that for every access to state, you need to run through
the aggregate's history again (in local memory). So if your aggregates
are limited by design to a few hundred or thousand events, this is
quite feasible. The upside is reuse of the state definitions (way less
testing/fewer bugs in readmodels) and no state to manage beyond a
single list of events per live aggregate. Again, the above is just a
proof-of-concept for communication, not in any way production-like. But
I am doing something similar in C# with great success.

Cheers
Phil


Gary Malouf

Oct 25, 2012, 6:08:29 AM
to ddd...@googlegroups.com
Hi Phil,

Thank you for reviewing my possible implementation. 

RE 1) This is a good point -> however, I wonder whether, as the command is validated, the application of said events could be simulated with a local variable without yet 'committing' to the in-memory state.  I'll keep the uncommitted list on the radar until I see how the implementation bears out.

RE 2) Understood

RE 3) We expect there to be on the order of tens of thousands of changes made for each type of aggregate (there are of course many instances of the same type of aggregate root), which summed together would produce pretty weak performance for the end user.  Thus, I feel it is necessary to keep the state ready.

Perhaps a compromise is to take a local copy of the aggregate root when validating commands/producing events.  The events would be applied to this local copy as validation of the command occurs.  Once the events have been written to the event store (outside of the aggregate's responsibility in my opinion) we would then need a way to apply uncommitted events to the aggregate state.

I think in general, I want to be sure that if I am unable to write to the event store that the current state is preserved without having to replay every event from history.

Philip Jander

Oct 25, 2012, 8:30:24 AM
to ddd...@googlegroups.com
Hi Gary,
>
> Thank you for reviewing my possible implementation.
you're welcome.
>
> RE 1) This is a good point -> however, I would wonder whether as the
> command is validated the application of the said events could not be
> simulated with a local variable without yet 'committing' to the
> in-memory state. I'll keep the uncommitted list on the radar until I
> see how the implementation seems to bear out.
While this is possible, why would you want to do that? Commonly the
whole aggregate instance is "uncommitted" and ceases to exist once the
transaction has been committed.

The usual sequence of events is
- command handler fetches appropriate aggregate from event store via
repository
- command handler fetches possible additional dependencies
- command handler invokes method on aggregate
- aggregate may call other methods on itself or external dependencies
- command handler gathers produced events from aggregate
- events are published to storage and subscribers
- aggregate instance is GC'd

Without any optimizations (keyword: snapshot, aggregate cache), the
appropriate aggregate is created from event history for every command.
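The sequence above can be condensed into a sketch (all names are illustrative, and the store is a toy in-memory map, not a real event store):

```scala
// Hypothetical sketch of the command-handler sequence described above.
sealed trait Event
final case class Deposited(amount: BigDecimal) extends Event

final case class Account(balance: BigDecimal = 0) {
  def applyEvent(e: Event): Account = e match {
    case Deposited(a) => copy(balance = balance + a)
  }
  def deposit(amount: BigDecimal): List[Event] = {
    require(amount > 0, "deposit must be positive")
    List(Deposited(amount))
  }
}

class EventStore {
  private var streams = Map.empty[String, Vector[Event]]
  def load(id: String): Vector[Event] = streams.getOrElse(id, Vector.empty)
  def append(id: String, es: Seq[Event]): Unit =
    streams += id -> (load(id) ++ es)
}

class DepositHandler(store: EventStore, publish: Event => Unit) {
  def handle(id: String, amount: BigDecimal): Unit = {
    // fetch: rebuild the aggregate by left-folding its event history
    val account = store.load(id).foldLeft(Account())(_ applyEvent _)
    // invoke: the aggregate produces events, the handler gathers them
    val events = account.deposit(amount)
    // commit: persist, then publish; the instance is then simply GC'd
    store.append(id, events)
    events.foreach(publish)
  }
}
```

On a domain error the `require` throws, nothing is appended, and the half-built instance is discarded, so the stored state is untouched.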

If (and only if) this proves to be too slow (depending on your setup and
domain complexity, possibly at between some 10 and 100 commands/second),
it may be of interest not to let the GC get rid of the aggregate but
instead mark the instance as clean and hand it to the repository for reuse.

In case of a domain error, i.e. no commit, the instance is still GC'd
and the next command accessing the same aggregate needs to regenerate it
from the event history. If (and only if) this small additional latency
in case of an error is also unbearable, there is the option of the
repository not handing out its snapshot but a copy. This is the realm
where the implementation you consider might have its merit. But then we
are talking about high frequency trading or comparable domains.

> RE 3) We expect there to be on the order of a 10s of 1000s of changes
> made for each type of aggregate (there are of course many instances of
> the same type of aggregate root) which summed together would produce
> pretty weak performance for the end user. Thus, I feel it is
> necessary to keep the state ready.
Obviously you are in the best position to judge this. However, if I
understand you correctly, 1000s of changes per *type* means not a lot
per instance. Even at 1000 events per instance we are talking about
additional latency in the low millisecond range. So there is hardly any
problem, unless the 1000s of changes are per second.

I would suggest actually measuring the performance with the simplest
possible approach and only optimizing later where required.

> Perhaps a compromise is to take a local copy of the aggregate root
> when validating commands/producing events. The events would be
> applied to this local copy as validation of the command occurs.
"Local copy" implies that there is a "Global original" someplace else.
Apart from the two "snapshot" optimizations I pointed out earlier, I
believe this is commonly not the case in an event sourced approach.

> Once the events have been written to the event store (outside of the
> aggregate's responsibility in my opinion) we would then need a way to
> apply uncommitted events to the aggregate state.
See above, only if you are going to do anything else with that instance.

> I think in general, I want to be sure that if I am unable to write to
> the event store that the current state is preserved without having to
> replay every event from history.
The point is that your event store *is* your state. If you don't write
to it, your state is preserved by default. Anything else is either
temporary or an optimization.

Ok, I guess I made my point clear :)
As always, just my 2 cts.

Cheers
Phil

Greg Young

Oct 25, 2012, 8:45:59 AM
to ddd...@googlegroups.com
> If (and only if) this proves to be too slow (depending on your setup and
> domain complexity, possibly at between some 10 and 100 commands/second), it
> may be of interest not to let the GC get rid of the aggregate but instead
> mark the instance as clean and hand it to the repository for reuse.

10-100 is too little. This number can be more like 1000.

If this is too slow, though, then drop in an identity map over the top.

If all your objects don't fit in memory on one box, put them in memory
on more than one box and route messages.
--
Doubt is not a pleasant condition, but certainty is absurd.

Gary Malouf

Oct 25, 2012, 1:32:55 PM
to ddd...@googlegroups.com
Interesting, we are an F/X company so I have two cases to deal with here:

1) Business/Configuration Data - It sounds like you guys would NOT advocate sharding the command handling based on entity type across a cluster, as the effort to reproduce per entity seems low - but what about when we have say 500k+ events in the event log?  My original plan was to use Apache ZooKeeper to dynamically assign aggregate root command handling to specific instances in the cluster.

2) Trading Data - seems like we must shard and keep it in memory to ensure good performance.  Will probably also take snapshots of things like account balances, etc

Thanks,

Gary

Philip Jander

Oct 25, 2012, 6:42:16 PM
to ddd...@googlegroups.com
Hi Gary,
> 1) Business/Configuration Data - It sounds like you guys would NOT
> advocate sharding the command handling based on entity type across a
> cluster as the effort to reproduce per entity seems low - what about
> when we have say 500k+ events in the event log? My original plan was
> to use Apache ZooKeeper to dynamically assign aggregate root command
> handling to specific instances in the cluster.
500k events is not much once you think about the size. Assuming e.g.
2000 bytes/event on average (which is probably too large an estimate),
we are talking about 1 GB of data. You can easily keep that in memory
on a single machine, if speed matters.

Once you need to shard, per-type is not a good solution for load
balancing. Different types tend to have different usage statistics. The
optimal solution will depend on how your aggregates communicate with
each other. Assuming no knowledge about such dependencies, a simple yet
effective solution is to shard by aggregate ID (e.g. first 4 bits to
multiplex onto 16 nodes), if your IDs are randomly distributed. This
requires no shared configuration (there goes ZooKeeper ;) ), just a
small but fast message router with a local lookup table mapping ID
pattern -> node address.
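A sketch of such a router, assuming 16 nodes and numeric aggregate IDs (the node addresses and all names are made up; which 4 bits you pick only matters if your IDs are not uniformly distributed):

```scala
// Hypothetical shard router: 4 bits of the aggregate ID select 1 of 16 nodes.
final case class Node(address: String)

class ShardRouter(nodes: Vector[Node]) {
  require(nodes.length == 16, "this sketch assumes exactly 16 shards")

  // Local lookup only - no shared configuration service involved.
  def nodeFor(aggregateId: Long): Node =
    nodes((aggregateId & 0xF).toInt) // low 4 bits -> shard index 0..15
}
```

Usage would look like `new ShardRouter(Vector.tabulate(16)(i => Node(s"10.0.0.$i"))).nodeFor(id)`; every process computes the same answer from the same table, so routing needs no coordination.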

> 2) Trading Data - seems like we must shard and keep it in memory to
> ensure good performance. Will probably also take snapshots of things
> like account balances, etc
Possibly. To be candid, since projects in this area tend to be mission
critical but not too short on budget, it might pay off to get advice
from someone who has already made all the mistakes and knows which
patterns solve which problems ;) I hear Greg has a bit of experience
with trading systems.

Apart from that I reiterate my advice to start with a simple solution
and only take optimizations after profiling performance bottlenecks. In
my experience, premature over-engineering is by far the worst impediment
to achieving good performance.

Cheers
Phil

belitre

Oct 26, 2012, 4:25:01 AM
to ddd...@googlegroups.com
@Philip, you have made my day. For most of the aggregates in our business we have just a bunch of events, and getting rid of "current state" management opens tons of possibilities for improving domain modeling. Distribution of behaviour is an order of magnitude easier without state :-)

@Greg, this concept can make the in-memory integration scenario we talked about in the Event Store group easier. All "questions" over the system would be projections over a "slice" of the history.

Time to spike :-)

Greg Young

Oct 26, 2012, 10:28:34 AM
to ddd...@googlegroups.com

I'm a bit confused, as what you describe is the whole point of event sourcing - can you explain more?


Gary Malouf

Oct 26, 2012, 11:35:44 AM
to ddd...@googlegroups.com
Hi Greg,

I was wondering if it might scale a bit better to shard the aggregates across a cluster based on their aggregate id (thanks Phil).  Another benefit is that I do not need to worry about handling transaction failures caused by events for the same aggregate entity being handled by different servers.  Using something like ZooKeeper or Akka 2.1 clustering, I can also enable dynamic failover/reassignment of the aggregates from one node to another when I take a server down for maintenance.

Finally, my point about keeping aggregate state in-memory would probably only be appropriate for the trading portion - we obviously do not want to have to replay the previous events of a given trade (i.e. before/after getting liquidity, etc) each time an event related to it happens.

-Gary

Greg Young

Oct 26, 2012, 12:35:30 PM
to ddd...@googlegroups.com
I discuss doing just that in my class (I thought I put a post here as well)

Gary Malouf

Oct 26, 2012, 12:47:33 PM
to ddd...@googlegroups.com
You did discuss routing based on an id above, for when everything does not fit into memory on a single box - the important point I emphasize is the dynamic failover, which I feel handles a lot of annoying error cases in production.  Thank you all for your input!

Greg Young

Oct 26, 2012, 1:20:51 PM
to ddd...@googlegroups.com
I actually replied to the wrong email - I was getting confused in the responses here. It was intended for belitre, right after yours.

Re shadows: it's actually quite easy to do. Pump events to a secondary node as a warm replica. Then when a node comes back up it only has to fetch, say, the last 200 ms of events to continue, not the whole day's.

belitre

Oct 27, 2012, 10:13:20 AM
to ddd...@googlegroups.com
All the examples I had read about event sourcing still had an idea of pre-processed "current state" rebuilt from event streams (yours also has it, _activated). Phil's example might be a slight difference or just an implementation detail for many of you, or something pretty obvious to people doing functional domain programming. But inferring the state "as needed" rather than pre-processing it opens a couple of opportunities. True, there are trade-offs for each option.

Philip Jander

Oct 27, 2012, 5:15:57 PM
to ddd...@googlegroups.com
On 27.10.2012 16:13, belitre wrote:
> All examples I had read about event sourcing still had an idea of
> pre-processed "current state" rebuilt from event streams (yours also
> has, _activated). Phil example might be a slight difference or just an
> implementation detail for many of you, or something pretty obvious for
> people doing functional domain programming. But to infer the state "as
> needed" and not pre-processing it opens a couple of opportunities.
> True there are trade-offs for each option.
>

It is the simplest implementation. Besides lighter entities, the selling
point for me was avoiding duplication of projection logic between
projections and the domain.
I actually started out trying to have shared (domain+projectors)
projection logic. If one wants up-to-date state information, this is not
easily done with IEnumerables, so I implemented it using the Rx
framework. But that turned out to be way too much framework for such a
simple thing. Eventually, I came up with the functional variant. For me,
this works perfectly as I tend to have small aggregates and my domain
execution time is completely non-critical.

And of course, as with all "simple" implementation choices, you can
always optimize towards increased performance (and complexity), if the
trade-offs are better.

Cheers, Phil

Greg Young

Oct 28, 2012, 5:02:08 AM
to ddd...@googlegroups.com
How is "activated" found in simplecqrs? Hint: it's not preprocessed.

belitre

Oct 28, 2012, 7:11:11 AM
to ddd...@googlegroups.com
private void Apply(InventoryItemCreated e)
{
    _id = e.Id;
    _activated = true;
}

private void Apply(InventoryItemDeactivated e)
{
    _activated = false;
}

Pre-processed might not be a well-suited word.

Philip Jander

Oct 28, 2012, 7:27:03 AM
to ddd...@googlegroups.com
On 28.10.2012 12:11, belitre wrote:
private void Apply(InventoryItemCreated e)
{
    _id = e.Id;
    _activated = true;
}

private void Apply(InventoryItemDeactivated e)
{
    _activated = false;
}


as opposed to (e.g.; I wouldn't necessarily recommend this for the ID ;) )

private bool _id { get { return _history.OfType<InventoryItemDeactivated>.Single().Id; } }
private bool _activated { get { return _history.OfType<InventoryItemDeactivated>.Any(); } }


But the real power lies in:

private bool _activated { get { return InventoryConcepts.Activated(_history); } }

with

public static class InventoryConcepts { // shared by domain, projectors, etc.
    public static bool Activated(IEnumerable<Event> history) { return history.OfType<InventoryItemDeactivated>().Any(); }
}


Cheers
Phil

Philip Jander

Oct 28, 2012, 8:06:23 AM
to ddd...@googlegroups.com

typo. This should have been:
private int _id { get { return _history.OfType<InventoryItemCreated>().Single().Id; } }
private bool _activated { get { return _history.OfType<InventoryItemDeactivated>().Any(); } }

Marijn Huizendveld

Oct 28, 2012, 10:15:17 AM
to ddd...@googlegroups.com
This all seems nice but how would this work when a snapshotter is in place? Would the snapshots contain a history of all events?

Sent from my iPhone

Greg Young

Oct 28, 2012, 1:03:20 PM
to ddd...@googlegroups.com
I think this is a bit confused.

For one, those Apply methods are left folded (like what you suggest doing). You seem to be suggesting n separate left folds is preferable to a single left fold. An interesting suggestion - I have never thought about it.

Also, these examples are very simple things (quite often it's not as simple as an Any()). Take for instance the case where you are interested in the duration between two events, or complex ordering. In these cases you will end up left folding in the same way.

I wrote an article recently for NDC Magazine (I put it on my blog as well) taking a more functional view and using a separated state, which seems to be in the same direction you are heading.

Greg

Philip Jander

Oct 28, 2012, 2:48:59 PM
to ddd...@googlegroups.com
On 28.10.2012 15:15, Marijn Huizendveld wrote:
> This all seems nice but how would this work when a snapshotter is in
> place? Would the snapshots contain a history of all events?

No, and if you need snapshots, this way of projecting doesn't make much
sense anyway. My domains so far have been pretty compact. The largest I
have in production is designed to have at most approximately 500 events
per aggregate, except for a very few which are handled differently. But
I have quite a few readmodels, and the problem of keeping projection
definitions in sync was becoming a significant impediment.

Different problems -> different solutions. One of the strong selling
points of cqrs ;)

Cheers
Phil

Philip Jander

Oct 28, 2012, 3:08:40 PM
to ddd...@googlegroups.com
On 28.10.2012 18:03, Greg Young wrote:
> I think this is a bit confused.
How's that?
>
> For one those apply methods are left folded (like what you suggest
> doing). You seem to be suggesting n separate left folds is prreferable
> to a single left fold. An interesting suggestion I have never thought
> about it
Mathematically, it's still the same state = f(events). A single left
fold. Only the runtime execution is different: n enumerations vs a
single enumeration. I actually started using Rx to move state
projections out into their own "concept" classes, to have a single
point of definition. But I ended up with closures for hidden state
anyway, so there is no point in enumerating the events only once,
except for performance considerations. And as I learned from yourself,
performance optimizations are only to be taken after they are deemed
necessary ;)
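To illustrate the point with a toy example (all names are mine, not from the thread): n separate folds over the same history versus one fold computing both values produce identical results; only the number of enumerations differs.

```scala
// Each "concept" is its own left fold over the history.
sealed trait Event
final case class Deposited(amount: Int) extends Event
final case class Withdrawn(amount: Int) extends Event

object AccountConcepts {
  // Two independent projections, each enumerating the history separately...
  def balance(history: Seq[Event]): Int = history.foldLeft(0) {
    case (b, Deposited(a)) => b + a
    case (b, Withdrawn(a)) => b - a
  }

  def withdrawalCount(history: Seq[Event]): Int =
    history.count(_.isInstanceOf[Withdrawn])

  // ...versus a single enumeration computing both at once. Purely a runtime
  // optimization: the resulting values are the same either way.
  def both(history: Seq[Event]): (Int, Int) = history.foldLeft((0, 0)) {
    case ((b, n), Deposited(a)) => (b + a, n)
    case ((b, n), Withdrawn(a)) => (b - a, n + 1)
  }
}
```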

> Also these examples are very simple things (quite often it's not as
> simple as an any()). Take for instance that you are interested in the
> durations between two events or complex ordering. In these cases you
> will end up left folding in the same way.
Sure, but closures allow for any kind of projection. And any can be
computed in a single enumeration. Of course, the tradeoff of being light
on code is memory usage (in the worst, pathological case you need to
build up a full copy of the entity history in the closure for each and
every projection) and runtime performance. Pick your choice for each
projection to optimize the tradeoff and be happy. Actually, while I'm
not using Rx any more for this, the methods available on observable
streams give a pretty good indication of what's feasible and how to
structure operations.

>
> I wrote an article recently for Ndc magazine (I put it on my blog as
> well ) taking a more functional view and using a separated state which
> seems to be in the same direction you are heading.
Yes it does. I actually started out doing this in node.js some months
ago. Simply injecting state definitions into entities was a real boon
compared to C#. But Scala traits do the same trick, and Scala is much
nicer to work with than JS as a domain language (for me at least).

Cheers
Phil

@yreynhout

Oct 28, 2012, 4:17:59 PM
to ddd...@googlegroups.com
Nice technique, but aren't you dealing with multiple forces when it comes to "projections"? I mean, both the read and the write side have their needs. Nuances could start conflicting (semantically) and both could push the projection in a divergent direction (not saying this is the case here - just warning signs going off in my head). It seems like this could become an excuse not to use a shared kernel or an explicit query api. I'm intrigued but sceptical.

Philip Jander

Oct 28, 2012, 5:58:07 PM
to ddd...@googlegroups.com
Hi Yves,
> Nice technique, but aren't you dealing with multiple forces when it
> comes to "projections"? I mean, both the read and the write side have
> their needs. Nuances could start conflicting (semanticly) and both
> could push the projection in a divergent direction (not saying this is
> the case here - just warning signs going off in my head). Seems like
> this could be an excuse not to use a shared kernel or an explicit
> query api. I'm intrigued but sceptical.
first, we are talking about quite fine-grained things here: state
projections within one entity within one BC. So there shouldn't be much
chance of semantic conflict. Actually, I would expect the definition of
some concept of the ubiquitous language - in terms of how to project
that concept off an event stream - to be similar for all occurrences of
that concept. Kind of by definition of the ubiquitous language.

The problem which has bitten me more than once is small, supposedly
global changes to such definitions. E.g. one introduces some kind of
corrective event. That event must now be considered wherever a certain
projection is computed. This is just the kind of duplication of logic
DRY tells us to avoid (and I don't care much for DRY otherwise, but
here it has its applications). Because errors do happen in these cases,
sooner or later.

The only other option would be to avoid identical projections in
multiple readmodels. But that would require either joining on query
(towards 3NF...) or otherwise coupling unrelated readmodels together.
Which I want to avoid. And at any rate, you are likely to have at least
readmodel/domain-model duplication of projection logic.

Also, I don't propose this kind of projection as a global standard.
Rather as the minimal-cost variant, to be exchanged for something more
complex if need be. It's just that it comes kind of naturally with
functional languages. Scala traits in particular (and the OP was about
Scala) lend themselves to this quite nicely.

Cheers
Phil


@yreynhout

Oct 28, 2012, 6:46:25 PM
to ddd...@googlegroups.com
Querying history dynamically I'm comfortable with. The projection sharing I'm not convinced about.

I do get your point, but to me projection happens for a purpose. It's seldom about one field, let alone about forcing it to be semantically the same. I consider this freedom liberating because it decouples read and write, thereby reducing both cognitive load & constraints. The difference in purpose between the write side and the (multiple) read sides, from personal practical experience, makes it impractical to strive to DRY things up. YMMV. As you say, it may/does work for small things, and clearly you are more experienced than me in this department.

Thanks for sharing.

belitre

Oct 29, 2012, 4:18:44 PM
to ddd...@googlegroups.com

> Also these examples are very simple things (quite often it's not as
> simple as an any()). Take for instance that you are interested in the
> durations between two events or complex ordering. In these cases you
> will end up left folding in the same way.

Complexity doesn't change in any of the options. It just changes how the algorithms are distributed.
 
> Sure, but closures allow for any kind of projections. Of course, the
> tradeoff of being light on code is memory usage (worst & pathological
> case you need to build up a full copy of the entity history in the
> closure for each and every projection)

Well, in both cases you must retrieve the aggregate's events. But in your approach they are held in memory for a couple of milliseconds more (the cost of processing the use case). It might depend on the domain, but memory usage doesn't seem to be a problem for many cases.
 
> and runtime performance.

This also can't be predicted. Absolutely domain dependent...
  
> Pick your choice for each projection to optimize the tradeoff and be happy.

+1. And it is so easy to have the two options available!!! 

Phil, this deserves at least a blog post... or a gist... or a video... and call it with a striking name... Stateless ES-Aggregates, Immutable Functional Domain Modeling... or some new useless acronym... ;-).

Greg Young

Oct 29, 2012, 5:16:48 PM
to ddd...@googlegroups.com
Call me slow, but I fail to see any difference between what is being discussed and either the projections in the Event Store or the article mentioned. Perhaps you can explain the difference? Perhaps it would be more apparent with another example.

I'm interested in the total of squared order totals over a series of OrderPlaced events.

In what I discuss:

fromStream('foo').when({
    orderPlaced: function(s, e) {
        return { total: s.total + e.total * e.total };
    }
});

This is a quite basic left fold with immutable state passed from operation to operation. I am not seeing the difference, except for LINQ operations that can already be implemented as folds.

Greg

belitre

Oct 30, 2012, 4:25:03 AM
to ddd...@googlegroups.com
Projections in the Event Store are almost the same thing. Conceptually, yes, they are the same immutable-state functional thing. I have watched almost all your videos, including the functional domain modeling and Event Store ones (we are pretty excited about Event Store, it's fantastic). I know you have been talking about this functional view; I'm not saying that Phil is the first person to talk about this.

The only small difference is that Phil uses the same approach on the write side (no state fields like in your simple-cqrs sample, or Rinat's approach with a state object). Perhaps this was explained before, but I found this small step new.

Then someone may say that there is nothing new in all of this. Ok... in logic systems this is common. Logic systems load sets of facts, and infer/project new facts according to ontologies or rules. It is very common in description logic systems (f.i. in the semantic web) to choose whether to pre-process the derived facts or to calculate them when a query is made.