CQRS framework


Working on framework

unread,
Jul 26, 2016, 5:32:39 AM7/26/16
to DDD/CQRS
Hi all,

We are looking into a mature CQRS framework. Any successful implementation using Axon in production? Any other frameworks that we should look into?

Thanks.

Greg Young

unread,
Jul 26, 2016, 6:24:59 AM7/26/16
to ddd...@googlegroups.com
My advice: stay away from frameworks. They don't provide nearly the
value that you think they will.



--
Studying for the Turing test

Jorg Heymans

unread,
Jul 26, 2016, 3:04:58 PM7/26/16
to DDD/CQRS


On Tuesday, July 26, 2016 at 12:24:59 PM UTC+2, Greg Young wrote:
My advice: stay away from frameworks. They don't provide nearly the
value that you think they will.

Depends on your expectations. For experienced CQRS'ers, indeed, no framework is going to cut it. For people still trying to grasp the pattern in actual code, such a framework can be very helpful. As far as Axon is concerned, it provides a lot of the boilerplate that you would end up writing yourself anyway, so why not leverage it if it matches your use case.

Jorg

Ruslan Rusu

unread,
Jul 26, 2016, 4:15:17 PM7/26/16
to DDD/CQRS

Jorg,

What exactly are you optimizing for? 

Cheers,
Ruslan

Colin Yates

unread,
Jul 26, 2016, 4:28:58 PM7/26/16
to ddd...@googlegroups.com
Interesting thought, but I think I am with Greg - given the nature of
DDD (less so CQRS), the variation has far more to do with your model, so
there is very little a framework can do.

If you are unfamiliar with the necessary implementation details or
even the more abstract modelling then the LAST thing you want to do is
use a framework as it might hide or guide you away from making
necessary decisions. Throw yourself in the deep end, do it all by
hand, learn the INCIDENTAL complexity and then optimise that away.

Frameworks/libraries add value when there is a prescribed approach to
doing things, or the correct thing to do is just really painful so
they hide the pain. None of that seems true for DDD, even more so for
CQRS. There just isn't that much to implement.

There are implementation details if you want to talk about
multi-server etc. but they really aren't that DDD/CQRS specific...

Kasey Speakman

unread,
Jul 26, 2016, 10:07:32 PM7/26/16
to DDD/CQRS
Honestly, there's not that much more to do than any other kind of endpoint project (which can still be a non-trivial effort). The way it's all stitched together varies with what you need, so frameworks aren't useful. I'll outline some example choices for the write side.

Let's say you have an HTTP API for the write side. A request comes in. You get information out of the request and call the corresponding use case method. If you're doing messaging, then before you can get to the use case, you wire the request (command) to a handler. (You can make that a lot more complicated than it has to be.) So now you are running a use case. The use case could then call into a DDD BC or just CRUD validation. You could be rehydrating state from events or from a repository pattern. From the handlers you could be returning an event (which infrastructure persists/dispatches) or directly persisting repository changes. At the trust border, you're also likely to integrate security, logging, etc. And you could have multiple different kinds of implementations within the same endpoint. (e.g. even when using DDD it's pretty common to have CRUD data that would be wasteful to give the full DDD treatment)
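As a rough, hedged illustration of the "wire the request (command) to a handler" step above, here is a minimal hand-rolled sketch in C#. Every name in it (RegisterAccount, IHandle&lt;T&gt;, Dispatcher) is hypothetical and not taken from any particular framework; the point is only that the wiring itself is small.

// Minimal hand-rolled command dispatch, roughly in the spirit described above.
using System;
using System.Collections.Generic;

public interface ICommand { }

public sealed class RegisterAccount : ICommand
{
    public Guid AccountId { get; set; }
    public string Owner { get; set; }
}

public interface IHandle<TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

public sealed class RegisterAccountHandler : IHandle<RegisterAccount>
{
    public void Handle(RegisterAccount command)
    {
        // The use case: load/create the aggregate, apply the change, persist events or state.
        Console.WriteLine($"Registering account {command.AccountId} for {command.Owner}");
    }
}

public sealed class Dispatcher
{
    private readonly Dictionary<Type, Action<ICommand>> _routes = new Dictionary<Type, Action<ICommand>>();

    public void Register<T>(IHandle<T> handler) where T : ICommand =>
        _routes[typeof(T)] = c => handler.Handle((T)c);

    // Throws if no handler was registered; this is a sketch, not production error handling.
    public void Send(ICommand command) => _routes[command.GetType()](command);
}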

These are not choices you can generalize to a framework in a way that provides any real reusability.

Some links I found helpful:

Lokad.CQRS Retrospective: https://abdullin.com/lokad-cqrs-retrospective/ (I greatly respect Rinat's candor.)

Greg Young

unread,
Jul 27, 2016, 5:43:20 AM7/27/16
to ddd...@googlegroups.com
If you do it right there isn't really much boilerplate. Use reference
code, not frameworks.

Phil Sandler

unread,
Jul 28, 2016, 5:02:34 PM7/28/16
to DDD/CQRS
What reference code would make a good starting point in the .Net space?

This whole thread is really interesting--in the past I've used frameworks (NES and CD), mainly because I didn't want to spend *any* time writing plumbing code.  It sounds like you and others are saying it's not that difficult, and the code is really *not* plumbing code, but is (or can be) more specific to your domain.

I think a thread or blog post on things to consider, decisions to be made, common pitfalls, etc. when rolling your own ES system would be really interesting.

Kyle Cordes

unread,
Jul 28, 2016, 8:34:13 PM7/28/16
to ddd...@googlegroups.com

I think this thread has been a bit too anti-framework. I’ve had plenty of bad framework experiences also (internal corporate frameworks with mandated use are especially risky), but if you happen to find a CQRS framework with production-proven code that handles things the way you want, I’d encourage giving it a shot.

Over here we have our own framework (which we might release parts of); in public I'd certainly encourage Java shops to look at and try out Axon… we looked at it and liked a lot of what we saw.


--
Kyle Cordes

Kasey Speakman

unread,
Jul 28, 2016, 11:42:24 PM7/28/16
to ddd...@googlegroups.com
Sorry, this is the link I meant to send about Composition Roots and how they aren't reusable.


Greg Young

unread,
Jul 29, 2016, 8:53:18 AM7/29/16
to ddd...@googlegroups.com
The entire domain framework code for SimpleCQRS is < 200 lines of
code. I could do it in less, especially in a functional language. The
one place where such code may be useful is in the management of
projections; there are a few libraries out there that do this ...
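For readers who have not seen SimpleCQRS, the kind of "domain framework" code Greg means is roughly the sketch below: an aggregate base class that collects uncommitted events and can be rehydrated from history. This is written in the spirit of SimpleCQRS, not copied from it, and the convention-based Apply dispatch is just one common way to do it.

using System;
using System.Collections.Generic;

public interface IEvent { }

public abstract class AggregateRoot
{
    private readonly List<IEvent> _changes = new List<IEvent>();

    public Guid Id { get; protected set; }
    public int Version { get; private set; }

    public IReadOnlyList<IEvent> GetUncommittedChanges() => _changes;
    public void MarkChangesAsCommitted() => _changes.Clear();

    public void LoadFromHistory(IEnumerable<IEvent> history)
    {
        foreach (var e in history) ApplyChange(e, isNew: false);
    }

    protected void ApplyChange(IEvent @event) => ApplyChange(@event, isNew: true);

    private void ApplyChange(IEvent @event, bool isNew)
    {
        // Convention: each concrete aggregate implements an Apply(ConcreteEvent) method.
        ((dynamic)this).Apply((dynamic)@event);
        Version++;
        if (isNew) _changes.Add(@event);
    }
}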

João Bragança

unread,
Jul 29, 2016, 9:35:49 AM7/29/16
to ddd...@googlegroups.com
Shameless plug time! We're using Projac (https://github.com/yreynhout/Projac/tree/master/src/Projac.Connector) with great success. Note that there's not a whole lot of code there. In fact it looks like the comments make up the majority of lines!

Then again, Projac is a library, not a framework.

Hendry Luk

unread,
Jul 29, 2016, 1:45:44 PM7/29/16
to ddd...@googlegroups.com
I feel like there are quite a lot of non-trivial real-world challenges around CQRS that require significant boilerplate code that isn't domain specific at all. Just to name a few:
- event sourcing
- snapshots, caching
- unit-of-work, rollbacks
- optimistic/pessimistic locking
- validations
- versioning (of events, of domain logic, of projection logic)
- process-manager, state-machine
- integrations (message queue, API gateway, notifications, web-socket, event streaming, polling)
- event storage, archiving/rollover, backup, indexing, migration
- projections/denormaliser
- deduplication, data cleansing
- event ordering policy
- error handling, retry, dead-lettering, alert, circuit breaker
- scaling, load-balancing, redundancy
- managing read-model schema changes (e.g. new/alter read models, rebuild projection from entire event history while keeping uptime)
- administrative tools, e.g. a manual way to fix erroneous domain data, fix read models, view/query domain data
- operation monitoring, e.g. monitoring the flow of events; monitoring eventual consistency, detecting excessive delays and data inconsistency
- security: I find access control to entities (e.g. user permissions, multi-tenancy) more challenging to enforce in applications backed by event-sourced repositories

Most of these are common requirements in typical CQRS projects meant for production. And of course, many of them are solved problems that aren't even CQRS specific, which is all the more reason not to write everything again from scratch on every CQRS project.

Kyle Cordes

unread,
Jul 29, 2016, 1:48:36 PM7/29/16
to ddd...@googlegroups.com
On July 29, 2016 at 12:45:41 PM, Hendry Luk
(hendr...@gmail.com) wrote:

> I feel like there are quite a lot of non-trivial real-world challenges around cqrs that require significant boilerplate code that aren't anything domain specific at all. Just to name a few:
> - event sourcing
> - snapshots, caching
> - unit-of-work, rollbacks



Yes, same experience here. There are numerous aspects which are
isolated from the domain, non-trivial in implementation (even if they
sound very easy at first explanation), and which therefore can benefit
from reuse of proven code. (Ideally in something that mostly feels
more like a useful “library” than a prescriptive “framework”, though
sometimes the latter is useful also.)



--
Kyle Cordes
kyle....@oasisdigital.com

Greg Young

unread,
Jul 29, 2016, 2:15:23 PM7/29/16
to ddd...@googlegroups.com
Note these frameworks don't touch 80% of this.

Dariusz Lenartowicz

unread,
Jul 29, 2016, 2:23:00 PM7/29/16
to DDD/CQRS

> Note these frameworks don't touch 80% of this.

Agreed. That is the painful truth.

Colin Yates

unread,
Jul 29, 2016, 4:41:20 PM7/29/16
to ddd...@googlegroups.com
I don't think anybody said write EVERYTHING from scratch. The point is
that to do CQRS/domain modelling you need to get the technology out of
the way and design the model based on reality. FRAMEWORKS by
definition are prescriptive. LIBRARIES contain solutions to
boilerplate.

None/hardly any of the things you listed are solutions to the unique
technical challenges CQRS and DDD bring, which is only and always
about domain modelling.

As others have said, the tooling CQRS and event sourcing requires
really is trivial. If it isn't then you are probably doing something
wrong.

My current project is very DDD focussed and uses event sourcing and
CQRS throughout the whole architecture. Why? Because that was the best
approach to solving the problem at hand. Which CQRS/DDD/ES framework
did I use? None. The system is built in Clojure which has a trivial
amount of incidental complexity compared to the C-family syntax
languages. The home-grown append-only event stream library which is
backed by a SQL Server database is about 30 lines long. That includes
comments, imports and white space. Oh, and a whole bunch of TODOs for
functionality I was sure I would need later, but actually it was fine
as it was.

Subscribing to events - that would be as complicated as a map of
subscriber ID -> [callback, last-seen-event-sequence-id]. If I wanted
to do multi-server deployment then I would almost certainly pull in a
library to handle distributed messaging, sure. But I don't need it
yet.
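For what it's worth, a C# rendering of the subscription registry Colin describes (subscriber id -> callback plus last-seen sequence number) is about as small; the names below are illustrative only and assume a single in-process publisher.

using System;
using System.Collections.Concurrent;

public sealed class SubscriptionRegistry
{
    private sealed class Subscription
    {
        public Action<long, object> Callback;
        public long LastSeenSequence;
    }

    private readonly ConcurrentDictionary<string, Subscription> _subscribers =
        new ConcurrentDictionary<string, Subscription>();

    public void Subscribe(string subscriberId, Action<long, object> callback, long fromSequence = 0) =>
        _subscribers[subscriberId] = new Subscription { Callback = callback, LastSeenSequence = fromSequence };

    // Push each newly committed event to every subscriber that has not seen it yet.
    public void Publish(long sequence, object @event)
    {
        foreach (var sub in _subscribers.Values)
        {
            if (sequence > sub.LastSeenSequence)
            {
                sub.Callback(sequence, @event);
                sub.LastSeenSequence = sequence;
            }
        }
    }
}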

My final point is that I learnt significantly more, moved
significantly faster and was able to adapt far quicker because I
didn't buy into an all singing, all dancing framework which made
decisions for me. Do I write my own message/SQL/transaction/HTTP/REST
etc. code? Of course not; that would be dumb.

TL;DR: use libraries to solve the problems that are agnostic to CQRS/event sourcing and
you will find very little else that needs to be implemented.

Hendry Luk

unread,
Jul 30, 2016, 7:06:02 AM7/30/16
to ddd...@googlegroups.com

> On 30 Jul 2016, at 6:40 AM, Colin Yates <colin...@gmail.com> wrote:
>
> I don't think anybody said write EVERYTHING from scratch. The point is
> that to do CQRS/domain modelling you need to get the technology out of
> the way and design the model based on reality. FRAMEWORKs by
> definition are prescriptive. LIBRARYs contain solutions to
> boilerplate.
I respectfully disagree that a framework is necessarily prescriptive in the context of how you design your domain model. Only the technical approach.
For instance, I don't see how a framework (e.g. on top of the Spring ecosystem) incorporating ETL tools like Talend/Pentaho (or an integration framework like Camel) to handle how you build data projections would be prescriptive about how you actually model your domain.
But I don't think the OP's question intended to distinguish frameworks vs libraries. I just feel that the discussion would be more interesting and constructive if we discussed what set of tools (libraries, frameworks, containers, templates, dev tools) you would suggest to address the technical challenges in developing a CQRS application.
>
> None/hardly any of the things you listed are solutions to the unique
> technical challenges CQRS and DDD bring, which is only and always
> about domain modelling.
I did not list any solutions, only problems that beg for a solution. And no, it will not address the other domain-modelling challenges in CQRS/DDD, but that's the whole point. You already have enough on your plate; why bother yourself with the uninteresting boilerplate part?
>
> As others have said, the tooling CQRS and event sourcing requires
> really is trivial. If it isn't then you are probably doing something
> wrong.
Writing a unit-test framework is also trivial. There are still arguments to be made for not writing your own just because you can write one in half an hour.
>
> My current project is very DDD focussed and uses event sourcing and
> CQRS throughout the whole architecture. Why? Because that was the best
> approach to solving the problem at hand. Which CQRS/DDD/ES framework
> did I use? None. The system is built in Clojure which has a trivial
> amount of incidental complexity compared to the C-family syntax
> languages. The home-grown append-only event stream library which is
> backed by a SQL Server database is about 30 lines long. That includes
> comments, imports and white space. Oh, and a whole bunch of TODOs for
> functionality I was sure I would need later, but actually it was fine
> as it was.
There's a huge difference between the amount of code needed to make something work and everything you need to deploy to production. It's easy to get to a working CQRS app on your laptop, but once you introduce all the non-functional requirements such as snapshots, concurrency control (locks), event versioning, evolving your projected data model, and all the runtime safety checks that I've listed, I'm fairly certain the total amount of code (and other non-coding effort) around your solution would be more than a mere 30 lines.
>
> Subscribing to events - that would be as complicated as a map of
> subscriber ID -> [callback, last-seen-event-sequence-id]. If I wanted
> to do multi-server deployment then I would almost certainly pull in a
> library to handle distributed messaging, sure. But I don't need it
> yet.
>
> My final point is that I learnt significantly more, moved
> significantly faster and was able to adapt far quicker because I
> didn't buy into an all singing, all dancing framework which made
> decisions for me. Do I write my own message/SQL/transaction/HTTP/REST
> etc. code, of course not, that would be dumb.
No, I was not suggesting the need to rewrite all that generic plumbing. But there is a specific use of these technologies in the context of a CQRS and event-sourcing application.
Just as you have frameworks like Camel/Fuse/Mule wrapping all those individual transport technologies/libraries so you can focus on modelling the high-level integration between your applications in an EAI project, I don't see a problem with having a CQRS framework wrap these existing plumbing technologies so you can jumpstart yourself and focus on building your domain models and behaviours right away on a CQRS/event-sourcing project.

Colin Yates

unread,
Jul 30, 2016, 10:03:56 AM7/30/16
to ddd...@googlegroups.com
I'm not sure we are going to resolve this, and actually I think we are
far more in agreement than it appears. The major sticking point seems
to be 'how much CQRS/event-sourcing-specific code is there once you
remove all the generic problems around messaging, DB access, REST
etc.?'. My answer is 'hardly any'. One last attempt :-):

[snapshots]
turned out to not be very interesting, as each view had its own
model so tended to just consume the events. Hydrating the domain
model for writes - using Clojure's in-memory STM managing a bounded
cache listening to tx commits - was literally a few lines of code.

[concurrency control (locks)]
Clojure is a functional programming language strongly preferring
immutable data structures. A whole bunch of concurrency issues go out
of the window. Multi-server writes? - not needed.

[event versioning]
For DOMAIN MODEL versioning: (update-in ar [:version] inc).
Serialising writes to each TYPE of AR but allowing concurrent writes
to distinct AR types - trivial.
For munging events - yeah, your event structure has changed and that
needs to roll out through the code.
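One common shape for the "munging events" part, sketched in C# with made-up event names: upcast old event versions to the current one as they are read, so the domain and the projections only ever see the latest shape. This is an illustration of the general technique, not any specific library's API.

using System.Collections.Generic;

public class AccountOpenedV1 { public string Owner; }                              // old shape
public class AccountOpenedV2 { public string FirstName; public string LastName; }  // current shape

public static class Upcaster
{
    public static object Upcast(object @event)
    {
        if (@event is AccountOpenedV1 v1)
        {
            var parts = v1.Owner.Split(' ');
            return new AccountOpenedV2
            {
                FirstName = parts[0],
                LastName = parts.Length > 1 ? parts[1] : ""
            };
        }
        return @event; // already the current version
    }

    public static IEnumerable<object> UpcastStream(IEnumerable<object> history)
    {
        foreach (var e in history) yield return Upcast(e);
    }
}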

Still not seeing any huge implementation cost that a framework would
save me. To be clear - if I was writing this in Java then, oh yeah, I can
see needing/writing some libraries to remove the boilerplate ;-).

P.S. (don't talk about inventing your own testing library to Greg ;-)).

Colin Yates

unread,
Jul 30, 2016, 10:12:18 AM7/30/16
to ddd...@googlegroups.com
And actually, yes if I was teaching somebody about HOW to unit test,
forcing them to write their own testing framework, or at least the
shape of it would force them to consider and answer a whole bunch of
necessary questions.

How many people started writing unit tests with names like 'testX',
where X was some class, simply because the library needed the prefix
'test'? How much better would it have been if people wrote tests
phrased as 'proveThatXDoesY'?

WHY test, WHAT to test, WHEN to test, HOW to test, what risk are you
mitigating? etc. would all be considerations I would expect the
subject to have and answer before even thinking about the
implementation details of tests.

Throwing them in front of XYZUnitTestingLibraryOfTheMonth removes that need.

Another major theme/concern for me is that giving them the
framework/library MAY also give them too many answers to questions
that they haven't even thought about. Once they know the questions to
ask and can formulate their own answers then sure - go wild pulling in
XYZ.


Peter Hageus

unread,
Jul 30, 2016, 11:45:22 AM7/30/16
to ddd...@googlegroups.com
> I respectfully disagree that a framework is necessarily prescriptive in the context of how you design your domain model. Only the technical approach.

Depends. On the first CQRS project I worked on, we pulled in a framework-as-a-starting-point, SimpleCQRS (this is not Greg's example application btw, but something someone else built from it, I think, adding a few layers of abstraction). There was some discussion about its usefulness; I was against it, but caved in.

One of the questions we really should have asked ourselves was the datatype used for AR ids. The framework prescribed GUIDs and we happily went with that. This stopped us from using a number of useful patterns around predictable ids. By the time I'd had enough of the limitation, we had a couple of years of production data to take into account when evolving.

This is just one tiny example of where a framework/library dictated things it shouldn't have. And maybe a glaringly obvious one we should have caught. But hindsight and so on...

/Peter




uwe schaefer

unread,
Jul 30, 2016, 2:08:55 PM7/30/16
to DDD/CQRS


On Saturday, July 30, 2016 at 5:45:22 PM UTC+2, Peter Hageus wrote:

The framework prescribed GUIDs and we happily went with that. This stopped us from using a number of useful patterns around predictable ids.

Could you elaborate on that or provide an example?

cu uwe
Message has been deleted

Greg Young

unread,
Jul 31, 2016, 3:11:30 AM7/31/16
to ddd...@googlegroups.com
They are less fun to type in as a URI, as an example.

On Sun, Jul 31, 2016 at 5:44 AM, Danil Suits <danil...@gmail.com> wrote:
>> The framework prescribed guid’s and we happily went with that. This
>> stopped us from using a number of useful patterns around predictable id’s.
>
>
> What's wrong with predictable uuids?

Peter Hageus

unread,
Jul 31, 2016, 3:51:34 AM7/31/16
to ddd...@googlegroups.com
Anything bound by date/time, for example, or based upon some natural id.

Having different streams for the same id, say something high frequency that’s not part of the aggregate, but still correlated. A lot simpler if you can just prefix the id.
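To make the "predictable id" patterns concrete, here is a small hedged sketch of the sort of thing a GUID-only framework rules out: stream ids derived from a natural key, a date, or a prefix for a correlated side-stream. The naming is purely illustrative.

using System;

public static class StreamIds
{
    // Natural-key based: the same customer number always maps to the same stream.
    public static string ForCustomer(string customerNumber) =>
        $"customer-{customerNumber}";

    // Date-bound: e.g. one stream per register per business day.
    public static string ForRegisterDay(int registerId, DateTime businessDay) =>
        $"register-{registerId}-{businessDay:yyyyMMdd}";

    // Correlated side-stream: high-frequency data that is not part of the aggregate
    // but is trivially findable because it just prefixes the aggregate's id.
    public static string ForTelemetry(string customerNumber) =>
        $"telemetry-{ForCustomer(customerNumber)}";
}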

/Peter

Peter Hageus

unread,
Jul 31, 2016, 3:55:04 AM7/31/16
to ddd...@googlegroups.com
Did that for a while, but actually that's what pushed me over the edge to rewrite the thing. Accidental complexity if there ever was any.

/Peter

On 31 Jul 2016, at 04:44, Danil Suits <danil...@gmail.com> wrote:

The framework prescribed guid’s and we happily went with that. This stopped us from using a number of useful patterns around predictable id’s.
What's wrong with predictable uuids?

Ben Kloosterman

unread,
Jul 31, 2016, 7:37:43 AM7/31/16
to ddd...@googlegroups.com
I'm with Greg here; I think it's likely to cause more damage than good. Start with simple CQRS and implement your business logic. Most CQRS frameworks are far too heavy / infrastructure focused (especially around messaging).

Ben


Ben Kloosterman

unread,
Jul 31, 2016, 7:44:56 AM7/31/16
to ddd...@googlegroups.com
On Sun, Jul 31, 2016 at 12:03 AM, Colin Yates <colin...@gmail.com> wrote:
I'm not sure we are going to resolve this, and actually I think we are
far more in agreement than it appears. The major sticking point seems
to be 'how much CQRS/Event Sourcing' specific code is there once you
remove all the generic problems around messaging, DB access, REST
etc'?. My answer is 'hardly any'. One last attempt :-):


Unless you use a message-heavy framework / bus, in which case there is a LOT.

Ben 

Colin Yates

unread,
Jul 31, 2016, 8:11:09 AM7/31/16
to ddd...@googlegroups.com
For example? Messaging is a very non-trivial implementation, but it is
agnostic to WHAT the messages are - what extra code is there when you
send EVENTS (as opposed to some other type of message) across a message
bus?

Based on your previous post, Ben, I wonder if we are in agreement but
you misunderstood my point ;-)?

Hendry Luk

unread,
Aug 1, 2016, 12:47:38 AM8/1/16
to ddd...@googlegroups.com
I've only had a few experiences working with CQRS projects in the past. The first was the "textbook" CQRS (i.e. event sourcing and all that), and I have to say the biggest pain at the time for me was that we had to hand-roll everything, discover problems as they arose on our own, often in production, and improvise on the fly. I'll try to re-enact my journey:
1. It started with a naive implementation of unit-of-work, backed by a document DB and a message queue.
2. Then we started introducing AR snapshots, with a simple policy, i.e. persist the AR state every n events and replay the rest on top (see the sketch after this list).
3. Then we discovered that as we evolved our domain behaviours, we really needed to introduce a more "formal" way of invalidating our snapshots, instead of just manually deleting the snapshot database every now and then, which obviously only works in a dev environment.
4. We discovered that even with a robust way of invalidating the snapshots, in prod we had to deal with the performance impact of rebuilding all the snapshots again. We had to deliberate over a green-blue deployment approach to achieve snapshot invalidation that isn't taxing on prod performance. Mind you, this happens every time you deploy changes to your application, so you don't want it to be painful.
5. As we evolved the design of our events, we needed a way to support both the old and the new events. We came up with two approaches to tackle this:
- implement versioning in our event-handling mechanism
- event migration
6. On the read side, every time we had to add a new table, or a new column to an existing table, we had to rebuild the projections by reprojecting all the events again. This worked in dev, but took forever to run in production. To save time, we tried replaying just the specific projection that had changed, but got incorrect results because some projections depend on data produced by another projection. This might not seem obvious at the start, but we ran into things like violated FK references and broken denormalisation logic. Also, since some projections hadn't been replayed for a while, we found that some older events in prod were no longer valid against the current projection logic. At any rate, the whole process felt very ad hoc (we approached projection migration on a case-by-case basis), which was painful, discouraged agility, and complicated continuous deployment.
7. We discovered production issues, such as any error in the read model stopping the projection from moving forward at all. There were actually several reasons a projection could stop working. To prevent this in the future, we had to implement a monitoring system to watch over our end-to-end eventual consistency.
8. That brought us to error handling, retry policies, dead-lettering, etc. This in turn opened up another can of worms: event deduplication/idempotency and wacky ordering of event processing.
9. Then we discovered that to support production issues, you need a way to peek at the state of your ARs in prod. From time to time, you even have to manually override the data; say we want to manually remove duplicate records, or change the credit limit of an account. You'll need to create "administrative" events, and a small tool to issue these events against your AR (with preview).

Again, many of these problems probably aren't unique to CQRS. But considering that we almost always need to address all of them in every CQRS project, it doesn't really make sense that I have to re-address them all over again on every project.
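A minimal sketch of the snapshot policy from point 2, in C#: load the latest snapshot if one exists, replay only the events after it, and save a new snapshot every n events. The store interfaces are hypothetical stand-ins, not any specific product's API.

using System;
using System.Collections.Generic;

public interface ISnapshotStore<TState>
{
    (TState State, long Version)? TryLoad(string streamId); // null when no snapshot exists yet
    void Save(string streamId, TState state, long version);
}

public interface IEventStore
{
    IReadOnlyList<object> ReadEventsAfter(string streamId, long version);
}

public sealed class SnapshottingRepository<TState> where TState : new()
{
    private const int SnapshotEvery = 100; // the "n" in the policy
    private readonly IEventStore _events;
    private readonly ISnapshotStore<TState> _snapshots;
    private readonly Func<TState, object, TState> _apply; // left-fold of events onto state

    public SnapshottingRepository(IEventStore events, ISnapshotStore<TState> snapshots,
                                  Func<TState, object, TState> apply)
    {
        _events = events;
        _snapshots = snapshots;
        _apply = apply;
    }

    public (TState State, long Version) Load(string streamId)
    {
        var snapshot = _snapshots.TryLoad(streamId);
        var state = snapshot.HasValue ? snapshot.Value.State : new TState();
        var version = snapshot.HasValue ? snapshot.Value.Version : 0L;

        foreach (var e in _events.ReadEventsAfter(streamId, version))
        {
            state = _apply(state, e);
            version++;
            if (version % SnapshotEvery == 0)
                _snapshots.Save(streamId, state, version);
        }
        return (state, version);
    }
}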

Jorg Heymans

unread,
Aug 1, 2016, 3:37:26 AM8/1/16
to DDD/CQRS
Thanks Hendry for sharing this. This actually makes me more confident about having gone the Axon route for our new project. Several of the issues you describe are formalized in the framework; at least for these I won't have to roll my own solution => win.

Jorg

Ben Kloosterman

unread,
Aug 1, 2016, 7:04:27 AM8/1/16
to ddd...@googlegroups.com
No, we are in agreement. The point is that those frameworks push people into complicated messaging, when they should not be in that space, especially with little experience.

Ben

Greg Young

unread,
Aug 1, 2016, 7:07:42 AM8/1/16
to ddd...@googlegroups.com
I read this and see anti-pattern after anti-pattern. Accidental
complexity all over the place.

Ben Kloosterman

unread,
Aug 1, 2016, 7:11:37 AM8/1/16
to ddd...@googlegroups.com
You will get most of these with a framework as well, and it can be worse, since the framework may do it in a way that doesn't work for your situation, so you have to reverse-engineer a complex framework to make it work rather than adding to something simple, or make the view side work with a different projector, etc.

"Again, many of these problems probably aren't unique to CQRS. But considering that we almost always need to address all of them in every CQRS project, it doesn't really make sense that I have to re-address them all over again on every project."

We're not saying not to reuse any code - just not complex frameworks, especially for new users.

This is worth reading for tips https://abdullin.com/lokad-cqrs-retrospective/
 


Hendry Luk

unread,
Aug 1, 2016, 7:13:03 AM8/1/16
to ddd...@googlegroups.com
Yeah, that's why I emphasised the word "textbook" CQRS in my earlier comment. There are different extents you can push your CQRS implementation to, but if you're going to go full-blown with event sourcing, MQ, etc., it would be silly not to use a framework from someone who's been down that road.

Greg Young

unread,
Aug 1, 2016, 7:15:18 AM8/1/16
to ddd...@googlegroups.com
It would be silly to use ES + MQ for projections; I discuss why in
quite a few talks (accidental complexity).

Hendry Luk

unread,
Aug 1, 2016, 7:18:15 AM8/1/16
to ddd...@googlegroups.com
Helpful commentary as always.
Anyway, the point of a framework is to direct you away from anti-patterns.

Hendry Luk

unread,
Aug 1, 2016, 7:28:30 AM8/1/16
to ddd...@googlegroups.com
Honestly, MQ is inconsequential. I can hardly think of a (non-CQRS) app I've worked on that doesn't use an MQ. That's not where the complexity lies.
If you have ES and eventual consistency, there are certain inherent complexities you need to deal with, with or without an MQ.

João Bragança

unread,
Aug 1, 2016, 7:41:04 AM8/1/16
to ddd...@googlegroups.com
That wasn't his point. A message queue usually (but not always) implies competing consumers, which is a terrible idea for projections. The same goes for commands (usually). Unfortunately, by going the framework route, many people tend to miss this, since it's too easy to treat all message handling the same.
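By contrast, a projection usually wants to be a single, ordered reader with its own checkpoint, catching up from wherever it left off. Here is a hedged sketch of that shape in C#; the reader and checkpoint interfaces are hypothetical, not a specific client API.

using System;
using System.Collections.Generic;
using System.Threading;

public interface ICheckpointStore
{
    long Load(string projectionName);
    void Save(string projectionName, long position);
}

public interface IEventReader
{
    // Reads an ordered batch of (position, event) pairs after the given position.
    IReadOnlyList<(long Position, object Event)> ReadBatch(long afterPosition, int batchSize);
}

public sealed class CatchUpProjection
{
    private readonly string _name;
    private readonly IEventReader _reader;
    private readonly ICheckpointStore _checkpoints;
    private readonly Action<object> _handle;

    public CatchUpProjection(string name, IEventReader reader, ICheckpointStore checkpoints, Action<object> handle)
    {
        _name = name; _reader = reader; _checkpoints = checkpoints; _handle = handle;
    }

    public void Run(CancellationToken token)
    {
        var position = _checkpoints.Load(_name);
        while (!token.IsCancellationRequested)
        {
            var batch = _reader.ReadBatch(position, 500);
            if (batch.Count == 0) { Thread.Sleep(200); continue; } // idle poll

            foreach (var (pos, @event) in batch)
            {
                _handle(@event);  // apply to the read model, strictly in order
                position = pos;
            }
            _checkpoints.Save(_name, position); // one checkpoint per batch
        }
    }
}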

Hendry Luk

unread,
Aug 1, 2016, 8:13:00 AM8/1/16
to ddd...@googlegroups.com
I talked about my experience on a "textbook" hand-rolled CQRS project. I missed my second part.
In many, many other (non-CQRS) apps, we actually deal with a lot of similar problems, and they are "essentially" CQRS in the sense that we usually have OLTP and OLAP sides of things that integrate information from multiple sources (databases, Kinesis streams, web APIs, Google Analytics, application logs) and project it onto a read model. This was a very common way of building software applications before CQRS was even trendy.
But we always use out-of-the-box, purpose-built data-integration frameworks for this sort of thing (Pentaho, Talend, Mule, Splunk). Of course these tools don't automatically make the problems go away, but luckily this is an entire industry in its own right, and we leave it to data integrators who know their trade and its toolset to manage this side of the shop. Do these people and their specialised tools affect how we design our domain model? Not in the slightest.

The problem on the few CQRS projects in my past is that developers came with the mindset of coding the whole thing up themselves by hand, only to discover pitfalls and learn the lessons on the fly, while pretending these are unique problems that nobody has solved before.
And frankly, event sourcing is quite an interesting way of writing an application.
I use Spring, and I hardly ever write any boilerplate code at all to "switch on" common application patterns, from trivial ones like circuit-breaker, gateway service, and service discovery, to bigger ones like long-running workflow (BPM), BRMS, and EIP, so I don't get why people are so against not writing any code to have event sourcing done for you.

Colin Yates

unread,
Aug 1, 2016, 8:20:21 AM8/1/16
to ddd...@googlegroups.com
If you use Spring then I assume you use Java, which starts to make your
perspective of 'a lot of code' make more sense :-).

However, even in Spring not everything is part of the framework. The
JDBC stuff for example is an un-opinionated way to handle the
boilerplate inherent to JDBC. You can use that to build whatever
approach best fits your data access layer (ORM, hand rolled, ibatis
etc.).

An analogy to this discussion: you are asking for a 'persistence
approach' and some are suggesting an ORM. We are saying no, maybe you
don't need an ORM with all of its caveats/opinions. Use JDBC to build
your solution, and if it looks like an ORM then jump to an ORM, but
don't start there.

João Bragança

unread,
Aug 1, 2016, 8:51:19 AM8/1/16
to ddd...@googlegroups.com
Don't recall if this was mentioned previously in this thread, but it has certainly been mentioned other times on this mailing list. You call a library, but in Soviet Russia, framework calls you. This forces you to do things the way the framework wants you to. When you are writing a simple CRUD app, this is probably OK. When doing DDD, not so much, because your initial assumptions about the domain are usually wrong.

I'm currently working on a large-ish DDD project. I made the decision from the beginning to stay far away from any framework. All of the 'CQRS frameworky' type code in the solution space - wiring up of handlers, aggregate base class, generic repository, bootstrapping everything, etc. - amounts to about 7% of the code base (according to VS code metrics). Hardly unmaintainable.

Mind that this does not mean you should stop using all frameworks altogether - you probably want one to handle the web or do your JSON serialization. The difference is, these live at the edge of your solution space, not at its heart.

Hendry Luk

unread,
Aug 3, 2016, 1:40:16 AM8/3/16
to ddd...@googlegroups.com
Yeah, I mostly use Java nowadays, but have never done CQRS on it.
I've only done CQRS in .NET (C#), so I didn't even look at Axon, nor any Spring stuff for that matter, e.g. Spring Dataflow. Just plain C#, NSB, and very little else.

I get your JDBC analogy; alas, I've been at the opposite end of the spectrum, the equivalent of building a complex application from the ground up on top of JDBC and not much else.

Colin Yates

unread,
Aug 3, 2016, 6:21:12 AM8/3/16
to ddd...@googlegroups.com
Not to divert this discussion, but Hendry, have you considered Scala?
It allows you to still write Java-like code with much less verbosity,
so it is really easy to pick up. It also has loads of useful features
like case classes, the cake pattern, etc. It is also a great on-ramp for
functional programming. I went with Clojure, which absolutely rocks, but
you lose the benefits of static type checking. If you really want to
get adventurous (and are stuck on the excellent JVM) then Frege :-).

Poule Dodue

unread,
Aug 3, 2016, 1:22:08 PM8/3/16
to DDD/CQRS
The Lagom Framework seems interesting if you are into Java/Scala... if only they could provide ConductR for less money (for small startups that can't pay $50k)...


Chris Martin

unread,
Aug 3, 2016, 2:07:30 PM8/3/16
to ddd...@googlegroups.com
On Scala, I've found fun-cqrs to be great. It's got Akka bindings for production and in-memory bindings for testing. It's very agnostic in that it only provides the abstractions for wiring up command/event handlers and projections. It assumes nothing else.




Kirill Chilingarashvili

unread,
Aug 4, 2016, 6:51:26 AM8/4/16
to DDD/CQRS
Very nice summary.

From my experience I found all of them manageable except one:

- managing read-model schema changes (e.g. new/alter read models, rebuild projection from entire event history while keeping uptime)

I cannot believe there is no better solution.
How can you keep the principle of releasing a few dozen times a day while having to do this again and again?
What if there are thousands of events per day and the history starts 10 years ago?
This one thing really worries me, and I am using event sourcing only in the places I absolutely need it, where the amount of events is manageable (as small and as light as possible).

Hendry Luk

unread,
Aug 4, 2016, 7:46:37 AM8/4/16
to ddd...@googlegroups.com
That's what I felt too.
Uptime could be maintained by doing green/blue deployment on the projection, i.e. keep both the old and new versions of the projection running until the new one catches up with all the events, then flip the switch over.
Still, it would take forever, and wouldn't help if you deploy frequently.
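A rough sketch of that flip in C#, under the assumption that both projection instances can report the last event position they have processed; everything named here is hypothetical.

using System.Threading;

public interface IProjectionInstance
{
    long Position { get; }  // last processed event position
    void CatchUp();         // process any events it has not seen yet
}

public sealed class ProjectionSwitcher
{
    private IProjectionInstance _live;

    public ProjectionSwitcher(IProjectionInstance initial) => _live = initial;

    public IProjectionInstance Live => Volatile.Read(ref _live);

    // Build "green" alongside the live "blue", wait until it has caught up, then flip.
    public void Replace(IProjectionInstance green, long acceptableLagInEvents = 0)
    {
        while (Live.Position - green.Position > acceptableLagInEvents)
        {
            green.CatchUp();
        }
        Volatile.Write(ref _live, green); // readers now see the new projection
    }
}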

Greg Young

unread,
Aug 4, 2016, 8:10:14 AM8/4/16
to ddd...@googlegroups.com
"Still, it would take forever, and wouldn't help if you deploy frequently."

Have you measured? There are lots of strategies towards making
projections go faster when they are in history (batching is the main
one). I have seen projections doing millions of inserts into a SQL DB
run in < 1 minute.

Kyle Cordes

unread,
Aug 4, 2016, 10:52:38 AM8/4/16
to ddd...@googlegroups.com
On August 4, 2016 at 7:10:12 AM, Greg Young (gregor...@gmail.com) wrote:
"Still, it would take forever, and wouldn't help if you deploy frequently."

Have you measured? There are lots of strategies towards making
projections go faster when they are in history (batching is the main
one). I have seen projections doing millions of inserts into a sql db
run in < 1 minute.



Here are a few of those that we considered.


1: many events per transaction

Rather than commit the database after processing each event, process large batches of events (dozens, hundreds, perhaps thousands if they are pretty simple) then commit. To avoid having to guess the numbers, we wrote a bit more code to make projection playback adaptively increase the number of events per database transaction, as long as each batch continues to happen quickly.

It turns out this is all we have had to do to get great performance in our cases.
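A hedged sketch of that adaptive batching in C#: commit many events per database transaction and grow the batch size while commits stay fast. IProjectionTransaction and the event feed are hypothetical abstractions, not a particular driver's API.

using System;
using System.Collections.Generic;
using System.Diagnostics;

public interface IProjectionTransaction : IDisposable
{
    void Apply(object @event); // update read-model rows; nothing is committed yet
    void Commit();
}

public static class BatchedPlayback
{
    public static void Replay(IEnumerable<object> history, Func<IProjectionTransaction> beginTransaction)
    {
        int batchSize = 64;
        var batch = new List<object>(batchSize);

        foreach (var @event in history)
        {
            batch.Add(@event);
            if (batch.Count < batchSize) continue;

            var elapsed = CommitBatch(batch, beginTransaction);
            batch.Clear();

            // Adapt: grow while commits stay fast, back off when they get slow.
            batchSize = elapsed.TotalMilliseconds < 250
                ? Math.Min(batchSize * 2, 8192)
                : Math.Max(batchSize / 2, 64);
        }
        if (batch.Count > 0) CommitBatch(batch, beginTransaction);
    }

    private static TimeSpan CommitBatch(List<object> batch, Func<IProjectionTransaction> beginTransaction)
    {
        var sw = Stopwatch.StartNew();
        using (var tx = beginTransaction())
        {
            foreach (var e in batch) tx.Apply(e);
            tx.Commit();
        }
        return sw.Elapsed;
    }
}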

But we thought of a few more ideas, which seem worth writing down here in case anyone needs more projection playback speed.


2: safety isn’t really necessary sometimes

Some database systems have a switch you can flip which makes them not bother to ask the underlying operating system to fsync. This is of course ridiculously risky in terms of data loss if the computer crashes. If you don’t mind rebuilding a projection when that happens, go ahead and flip the switch. Things go much faster if you don’t need safety.


3: RAM is fast and cheap

Many very useful projections fit in RAM. Even on a very large system, some aspects of the data, aggregated appropriately, may very well fit in RAM. (A minor aside: please don’t call your system “big data” if your data will fit on a computer which can be purchased off-the-shelf at Best Buy.)

So, for the database instance containing a projection that will fit in RAM, feel free to tell your database server to simply store it in RAM. Some database servers have this as a built-in switch; with others you must trick them by providing a kind of RAM disk underneath.

This is ridiculously fast, and you can obtain crash safety by simply replaying the projection if you have to restart.


4: different DBMSs have different characteristics

It’s possible that whatever database system you chose is not the very fastest for the queries are trying to do. Consider one of those shiny RAM-centric database systems, or a “column store”.


5: maybe not a DBMS at all?

It’s possible some projection data that first seems reasonable to store in the database, might not need that complexity. Maybe the data fits in RAM (see above discussion) - maybe you can just use some arrays etc. to store the data for a particular projection. Updating such things will typically be very fast.



--
Kyle Cordes

Peter Hageus

unread,
Aug 4, 2016, 11:17:58 AM8/4/16
to ddd...@googlegroups.com
Good tips.

A few more:

Do you really need to update all read models on every deploy? We have tooling to update only specific tables.

Do all read models need data from the beginning of time? We have several that only contain the last 30 days or similar.

Can you run them on a separate machine and switch over when done?

Do not use UUIDs as clustered keys in MS SQL Server. This kills performance on large tables.

Regarding 3 below: while useful, this also means a full replay on system startup. We have several simple projections that are built on demand and cached (easiest when built from a single stream, but doable in a few other scenarios as well).

Batching. Again.

Our initial implementation was very problematic; it could take all night after an upgrade. Now it rarely takes more than an hour on a fast machine.

/Peter


Kirill Chilingarashvili

unread,
Aug 5, 2016, 3:03:01 AM8/5/16
to DDD/CQRS
I would say that, to be honest, the read-model part is the most complex part of event sourcing.
Saying that it is a simple left-fold may lead to thinking of the read model as a simple and straightforward thing to do.
But to make this work, I cannot think of any library helping with all the problems discussed above; I cannot even think of any existing framework handling all of the issues listed here.
I think writing this thing from scratch means starting to solve infrastructure problems, and writing a lot of infrastructure code, instead of focusing on the domain.
I wish one day there will be a tool to make the "left-fold" as easy as it sounds, taking into account schema changes, queries over history, working with distributed DBs (sharded, clustered), etc.
Did someone try to use some existing tooling for the read model instead of writing this monster from scratch?

Ben Kloosterman

unread,
Aug 5, 2016, 6:32:15 AM8/5/16
to ddd...@googlegroups.com
On Thu, Aug 4, 2016 at 8:51 PM, Kirill Chilingarashvili <kir...@gmail.com> wrote:
Very nice summary,

From my experience I found all of them manageable except one

--- - managing read-model schema changes (e.g. new/alter read models, rebuild projection from entire event history while keeping uptime) 

I cannot believe there is no better solution,
How can you keep the principle of releasing few dozen times a day with having to do this again and again.
What if there are thousands of events per day and the history starts 10 years ago..
This one thing really worries my and I am using Event Sourcing only in places I absolutely need it, and the amount of events are manageable (as small and as light as possible 

I have had the same issues on some CRUD sites, especially with log and audit records. The solution for sites where it's needed: get an archive policy, aggregate the information and archive it, or roll over to a new year.

Ben

Ben Kloosterman

unread,
Aug 5, 2016, 6:41:57 AM8/5/16
to ddd...@googlegroups.com
The key thing is that each project will need some of these, but not all; unless you're way over-complicating it, bringing it all in for many projects will over-complicate things. A list for a large CRUD cloud app with events/queues would be just as long, with a few real nasties (e.g. dirty-read transactions/consistency, schema changes (which paralyse most projects to the point of making mainly small changes), query optimization, data clean-ups, locking/concurrency). It's normally queues and buses which generate a lot of the pain, and which frameworks seem to have a habit of inflicting on people.

Personally I prefer the microservices route, with some services CRUD and some CQRS where it's needed.


Ben


Greg Young

unread,
Aug 5, 2016, 6:46:36 AM8/5/16
to ddd...@googlegroups.com
Thousands of events per day (let's take 2,000) * 365 * 10 is 7.3m
events. You should be able to replay a projection in under 1 hour (on
a full replay). The key is introducing things like batching/updating in
memory, then doing large updates. Event Store, as an example, can bring you
this data in about 5 minutes assuming a good network connection (you
can pretty easily get up to 30k events/second).

A naive projection will just issue an insert upon receiving an event.
If you do this you will find your projection is blocked by IO latency.

Ben Kloosterman

unread,
Aug 6, 2016, 8:34:39 PM8/6/16
to ddd...@googlegroups.com
It's very rare to have so many events on one aggregate... and for that aggregate it's not that hard to manage.

I haven't hit it yet, even though I deal with quite a bit of data. My first instinct would be to get the aggregate type in question to just create a new message with the state and save the new message, then get the handler to purge the stream up to that special event (see the sketch below). If I need to retain an audit trail on such a large aggregate it gets harder, but IMHO it would be rare for an aggregate receiving such a huge amount of messages to require human audit.
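The roll-over idea sketched in C#, with a hypothetical store API: append a special event carrying the folded state, then archive or purge everything before it so future loads start from that event.

using System;
using System.Collections.Generic;
using System.Linq;

public sealed class StateRolledOver<TState>
{
    public TState State { get; set; }
}

public interface ITruncatableEventStore
{
    IReadOnlyList<(long Version, object Event)> ReadStream(string streamId);
    long Append(string streamId, object @event);      // returns the new event's version
    void PurgeBefore(string streamId, long version);  // archive or delete older events
}

public static class StreamRollOver
{
    public static void RollOver<TState>(ITruncatableEventStore store, string streamId,
                                        Func<IEnumerable<object>, TState> fold)
    {
        var history = store.ReadStream(streamId);
        var state = fold(history.Select(x => x.Event));

        var rolloverVersion = store.Append(streamId, new StateRolledOver<TState> { State = state });
        store.PurgeBefore(streamId, rolloverVersion);
    }
}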


Ben
 

emragins

unread,
Aug 7, 2016, 9:43:53 AM8/7/16
to DDD/CQRS
I'm with Hendry (and others) on this one. As someone who's been reading about ES for about a year and then decided to do it, there was a considerable amount of infrastructure to put in place, and I'm STILL working to get where I want to be.

For example, I have no "read" database -- I'm using strictly in memory because my dataset will allow it. I started out trying to make the read model persistent and ran into complications and a lack of flexibility I didn't want to deal with.

I need a monitoring service -- badly. I have a production issue where for some reason the read side stops listening, so periodically I have to reset the server. Unhappiness :(

I already had to build a couple of really cobbled-together tools for event migration and "replay all".

I would much rather use a library that abstracted away the details of the "how" from me.

Additionally, the Internet as a whole is sorely lacking in examples of how to do the majority of this. The literature is heavily skewed towards talking about the write side and, with a couple of exceptions, the read side is the hand-waving magic of "just build a projection".

Hendry Luk

unread,
Aug 7, 2016, 8:54:21 PM8/7/16
to ddd...@googlegroups.com
Millions of inserts per minute is an order of magnitude higher than the most optimistic pure benchmark I did on SQL Azure at the time. What I got was more in the range of thousands of pure inserts per second; I don't have the exact number handy. We even got in touch with Azure to make sure there was no throttling.
Besides, our projections are rarely just inserts. They were usually updates (or upserts), some additional idempotency checks, and often some joins and denormalisation across several tables (e.g. to build full-text indexing).

Also, millions of events across all aggregates in the life of an application is perhaps unrealistically small.
Some of our entities have millions of instances each, and obviously a lot more events against them. So even if we do have the capacity to process that volume in 1 minute, that's for one entity type. If you multiply that by the number of other entities we have, and by the number of projections we have, you'll hit the 60-minute mark easily.




Greg Young

unread,
Aug 7, 2016, 9:03:34 PM8/7/16
to ddd...@googlegroups.com
Try running more than one operation at a time - one operation per round trip means waiting on IO.

Update-heavy projections tend to be temporal; try doing them
in memory for a batch.

Hendry Luk

unread,
Aug 7, 2016, 9:12:19 PM8/7/16
to ddd...@googlegroups.com
I suppose that was the learning.
Replaying it in memory would require a big shift.
We were invested heavily in SQL when we wrote our projections, which was the most efficient, quickest, cheapest way of doing it, which was afforded to us thanks to CQRS.
Allowing in-memory replay would require some kind of abstraction away from the database so we could swap it out with in-memory persistence easily, which I think was counter to the reason we adopted CQRS (simplicity). It almost seems like you would need... a framework :P



Greg Young

unread,
Aug 7, 2016, 9:13:44 PM8/7/16
to ddd...@googlegroups.com
A framework for allowing the concept of memoization. It's like asking
for a framework for a for loop.

Hendry Luk

unread,
Aug 7, 2016, 9:19:36 PM8/7/16
to ddd...@googlegroups.com
Eh what? How's that related to memoization?

I'm asking for an abstraction over your DB that you could swap out with RAM, which can then be loaded onto your RDBMS later in bulk. It's almost like asking for an ORM, actually.
But even an ORM is not traditionally a great tool for managing OLAP data.



Greg Young

unread,
Aug 7, 2016, 9:24:17 PM8/7/16
to ddd...@googlegroups.com
See ^^^

Ben Kloosterman

unread,
Aug 7, 2016, 10:51:04 PM8/7/16
to ddd...@googlegroups.com
On Sun, Aug 7, 2016 at 11:43 PM, emragins <emra...@gmail.com> wrote:
I'm with Hendry (and others) on this one. As someone whose been reading about ES for about a year, then decided to do it, there was a considerable amount of infrastructure to put in place that I'm STILL working on the get where I want to be.

For example, I have no "read" database -- I'm using strictly in memory because my dataset will allow it. I started out trying to make the read model persistent and ran into complications and a lack of flexibility I didn't want to deal with.

This is a good thing, and if you had started with simple CQRS that would have been your starting point. A framework would tie you to some form of read persistence that you would then have to provide a service for just to get the framework to start.

I need a monitoring service -- badly. I have a production issue where for some reason the read side stops listening, so periodically I have to reset the server. Unhappiness :(

Nothing to do with ES; I had the same issue with SQL due to our domain controller being too slow.


I already had to build a couple really cobbled together tools for event migration and "replay all".

I would much rather use a Library that abstracted away the details of the "how" from me.

We are not talking about a lightweight library (which doesn't really exist for this) or shared code; we are talking about a framework, which is a set of libraries, e.g. MVC and EF put together.

Additionally, the Internet as a whole is sorely lacking in examples of how to do the majority of this. The literature is heavily skewed towards talking about the write-side and, with a couple exceptions, the read side is hand-waving magic of "just build a projection".

The read side is just a standard denormalized DB; what are the issues? One for another thread. The only trouble people get into is when they use async domains (against advice), so the read side is not persisted by the time the command finishes.

Ben

João Bragança

unread,
Aug 8, 2016, 1:50:17 AM8/8/16
to ddd...@googlegroups.com
I'm asking for an abstraction over your db that you could swap out with RAM

For such a thing to exist, it would have to be written in such a way that you would lose all the performance benefits anyway.

Kirill Chilingarashvili

unread,
Aug 8, 2016, 1:58:03 AM8/8/16
to ddd...@googlegroups.com
>an abstraction over your db that you could swap out with RAM that can be loaded onto your rdbms later in bulk. It was almost like asking for an orm actually.

Interesting idea.
I have abstracted the DB layer through simple interfaces (for the read model):
public interface IQueryRepository<TView>
    where TView : View
{
    IPagedResult<TView> Fetch();

    IPagedResult<TView> Fetch(IQuery query);

    TView FetchOne(string id);

    void Delete(string id);

    void DeleteAll(IQuery query);

    TView Save(TView view);
}

and

public interface IQuery
{
    bool Distinct { get; }
    IEnumerable<string> Select { get; }
    IEnumerable<string> GroupBy { get; }
    IEnumerable<string> OrderBy { get; }
    IQueryParamGroup Root { get; }
    int Start { get; }
    int Count { get; }
}

And we have 3 implementations:
- in-memory
- SQL Server
- Mongo

We can switch between them, for building projections or for querying,
but I never thought of doing in-memory first and pushing to persistent storage asynchronously;
have to try :)
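A hedged sketch of that "in-memory first, bulk to the RDBMS later" idea: fold events into plain objects in RAM with no database round trips, then flush everything through a bulk writer once the replay has caught up. The view type and writer interface below are placeholders, not the View/IQueryRepository types above.

using System.Collections.Concurrent;
using System.Collections.Generic;

public sealed class CustomerView
{
    public string Id { get; set; }
    public decimal TotalSpend { get; set; }
}

public interface IBulkViewWriter<TView>
{
    void WriteAll(IEnumerable<TView> views); // e.g. SqlBulkCopy or batched upserts into the read-model table
}

public sealed class InMemoryFirstProjection
{
    private readonly ConcurrentDictionary<string, CustomerView> _views =
        new ConcurrentDictionary<string, CustomerView>();

    // Fold events into RAM; no database I/O during replay.
    public void When(string customerId, decimal orderTotal)
    {
        var view = _views.GetOrAdd(customerId, id => new CustomerView { Id = id });
        view.TotalSpend += orderTotal;
    }

    // Once caught up, push the whole read model to durable storage in one go.
    public void Flush(IBulkViewWriter<CustomerView> writer) => writer.WriteAll(_views.Values);
}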



Peter Hageus

unread,
Aug 8, 2016, 3:47:35 AM8/8/16
to ddd...@googlegroups.com
EF7/Core does this; no idea about performance though. I've previously used SQLite's in-memory mode for testing, and it does bring some challenges.
