Cached eventStore

@marcofranssen

Mar 30, 2012, 11:36:09 AM
to ncqr...@googlegroups.com
We currently need a cached event store so that rebuilding aggregates goes faster. We are importing some data, which takes more than 3 days to persist everything to the database. We lose the most time querying the aggregates from the event store, so we would like to cache them, because they will be used in multiple iterations during our data conversion. We really need to improve this performance, because we have to complete the data conversion within a weekend.

We spent the whole day measuring today, and the biggest gain is to be had in the aggregate rebuilding: it takes about 40 milliseconds, while everything else mostly stays below 15 milliseconds.

Is there someone who knows the Ncqrs event store well enough to provide us with a cached version of the MsSqlEventStore?

Hopefully there is someone who can help us.
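
For reference, a minimal sketch of the kind of read-through cache being asked for here, written as a decorator around a deliberately simplified event store interface. The IEventStore, ReadFrom and Save members below are hypothetical placeholders, not the actual Ncqrs contracts:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Deliberately simplified, hypothetical event store contract
// (not the real Ncqrs interface).
public interface IEventStore
{
    IEnumerable<object> ReadFrom(Guid aggregateId);
    void Save(Guid aggregateId, IEnumerable<object> newEvents);
}

// Read-through cache as a decorator: a read hits the database once per
// aggregate, and later reads within the same run come from memory.
public class CachingEventStore : IEventStore
{
    private readonly IEventStore _inner;
    private readonly ConcurrentDictionary<Guid, List<object>> _cache =
        new ConcurrentDictionary<Guid, List<object>>();

    public CachingEventStore(IEventStore inner)
    {
        _inner = inner;
    }

    public IEnumerable<object> ReadFrom(Guid aggregateId)
    {
        // Committed events are immutable, so a cached stream only becomes
        // incomplete when new events are appended; Save handles that below.
        return _cache.GetOrAdd(aggregateId,
            id => new List<object>(_inner.ReadFrom(id)));
    }

    public void Save(Guid aggregateId, IEnumerable<object> newEvents)
    {
        _inner.Save(aggregateId, newEvents);

        // Cheapest way to stay correct: invalidate and let the next
        // read repopulate the stream from the inner store.
        List<object> removed;
        _cache.TryRemove(aggregateId, out removed);
    }
}

Whether a cache like this helps depends on the access pattern: it only pays off when the same aggregates are read repeatedly within one run, which is the multi-iteration scenario described above.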

mynkow

Mar 30, 2012, 4:06:18 PM
to ncqr...@googlegroups.com
How should this cache work? Caching is never simple.

Greg Young

Mar 31, 2012, 3:56:59 AM
to ncqr...@googlegroups.com
Identity map?

Sent from my iPad

On 2012-03-30, at 6:15 PM, Chris Bowden <cbcw...@gmail.com> wrote:

If it hurts, stop doing it. Implement snapshots. A decorated MsSqlEventStore with a caching layer is not going to achieve anything. Your problem isn't the read speed of SQL Server - you've reached a point where you are replaying too many events on the aggregate.
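
To make the snapshot suggestion concrete, a rough sketch of what snapshot-based loading can look like. Every type below (Snapshot, ISnapshotStore, IEventStore, Aggregate, SnapshottingRepository) is a hypothetical placeholder rather than Ncqrs API; the point is only that a rebuild starts from the latest saved state and replays just the tail of the stream:

using System;
using System.Collections.Generic;

// Hypothetical snapshot marker: a real one would also carry the
// serialized aggregate state; only the version matters for this sketch.
public class Snapshot
{
    public Guid AggregateId { get; set; }
    public long Version { get; set; }
}

public interface ISnapshotStore
{
    Snapshot TryGetLatest(Guid aggregateId);   // returns null if no snapshot yet
    void Save(Snapshot snapshot);
}

public interface IEventStore
{
    // Only the events committed after the given version.
    IEnumerable<object> ReadFrom(Guid aggregateId, long minVersion);
}

public class Aggregate
{
    public Guid Id { get; private set; }
    public long Version { get; private set; }

    public static Aggregate Empty(Guid id)
    {
        return new Aggregate { Id = id, Version = 0 };
    }

    public static Aggregate FromSnapshot(Snapshot snapshot)
    {
        // A real implementation would also restore the serialized state here.
        return new Aggregate { Id = snapshot.AggregateId, Version = snapshot.Version };
    }

    public void Apply(object @event)
    {
        // Domain-specific state transitions would go here.
        Version++;
    }
}

public class SnapshottingRepository
{
    private readonly IEventStore _events;
    private readonly ISnapshotStore _snapshots;

    public SnapshottingRepository(IEventStore events, ISnapshotStore snapshots)
    {
        _events = events;
        _snapshots = snapshots;
    }

    public Aggregate GetById(Guid id)
    {
        var snapshot = _snapshots.TryGetLatest(id);
        var aggregate = snapshot != null ? Aggregate.FromSnapshot(snapshot) : Aggregate.Empty(id);

        // Replay only the events after the snapshot instead of the whole stream.
        foreach (var @event in _events.ReadFrom(id, aggregate.Version))
            aggregate.Apply(@event);

        return aggregate;
    }
}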

@yreynhout

Jun 7, 2012, 11:13:49 AM
to ncqr...@googlegroups.com
What do you mean by "importing data"? Are processes kicked off in reaction to events in this scenario, or is it just persisting ARs? Why not use some form of scale-out and consolidate afterwards (bulk inserting into your SQL Server)? Ensuring sequential processing on an aggregate basis will allow you to keep the aggregate materialized in memory and just apply one event after another. You could collect batches of those changes and flush them as you see fit. Events are immutable, so there should be no problem caching them (you could take reading out of the equation entirely).
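
A rough sketch of that idea, where every name below is a hypothetical placeholder rather than Ncqrs or ADO.NET API: aggregates stay materialized in memory for the whole import, keyed by id, each incoming event is applied to them directly, and the resulting events are collected into batches that are handed to a bulk writer of your choosing:

using System;
using System.Collections.Generic;

// Hypothetical in-memory materialization for a bulk import:
// aggregates stay in memory for the whole run, events are appended
// to a pending batch and flushed in bulk instead of per command.
public class ImportSession
{
    private readonly Dictionary<Guid, Aggregate> _aggregates =
        new Dictionary<Guid, Aggregate>();
    private readonly List<PendingEvent> _pending = new List<PendingEvent>();
    private readonly Action<IList<PendingEvent>> _bulkInsert;
    private readonly int _batchSize;

    public ImportSession(Action<IList<PendingEvent>> bulkInsert, int batchSize = 1000)
    {
        _bulkInsert = bulkInsert;   // e.g. a bulk-copy based writer
        _batchSize = batchSize;
    }

    public void Apply(Guid aggregateId, object @event)
    {
        Aggregate aggregate;
        if (!_aggregates.TryGetValue(aggregateId, out aggregate))
        {
            aggregate = new Aggregate(aggregateId);
            _aggregates.Add(aggregateId, aggregate);
        }

        aggregate.Apply(@event);   // no rebuild, no read from the database
        _pending.Add(new PendingEvent(aggregateId, aggregate.Version, @event));

        if (_pending.Count >= _batchSize)
            Flush();
    }

    public void Flush()
    {
        if (_pending.Count == 0) return;
        _bulkInsert(_pending);     // one round trip per batch
        _pending.Clear();
    }
}

public class PendingEvent
{
    public PendingEvent(Guid aggregateId, long version, object payload)
    {
        AggregateId = aggregateId;
        Version = version;
        Payload = payload;
    }

    public Guid AggregateId { get; private set; }
    public long Version { get; private set; }
    public object Payload { get; private set; }
}

public class Aggregate
{
    public Aggregate(Guid id) { Id = id; }

    public Guid Id { get; private set; }
    public long Version { get; private set; }

    public void Apply(object @event)
    {
        // Domain-specific state transitions would go here.
        Version++;
    }
}

Call Flush() once more at the end of the run so the last partial batch is not lost.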