We've been doing this for a long time in one project. It simplifies
projections a lot, and customers are impressed by the latency of
in-memory read-model lookups.
Our in-memory read model currently occupies about 6 GB and is built at
application startup. It does, however, use much more memory (roughly 5x)
than we originally estimated. We never dug into why (we blame
inefficient Java collections) and ignored it, since RAM is so much
cheaper than developer time :-)
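For anyone curious what this looks like, here's a minimal sketch of the pattern: replay the event stream once at startup into plain Java collections, then serve queries straight from the map. The event shape, class, and field names below are illustrative, not our actual model.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory read model, rebuilt by replaying events at startup.
public class OrderCountReadModel {
    // Read model state: customerId -> number of orders placed.
    private final Map<String, Integer> ordersByCustomer = new HashMap<>();

    // Replay the full event stream once at application startup.
    // Here each event is just the customerId of an "order placed" event.
    public void rebuild(List<String> orderPlacedCustomerIds) {
        ordersByCustomer.clear();
        for (String customerId : orderPlacedCustomerIds) {
            ordersByCustomer.merge(customerId, 1, Integer::sum);
        }
    }

    // Queries are plain map reads, which is where the low latency comes from.
    public int orderCount(String customerId) {
        return ordersByCustomer.getOrDefault(customerId, 0);
    }

    public static void main(String[] args) {
        OrderCountReadModel model = new OrderCountReadModel();
        model.rebuild(List.of("alice", "bob", "alice"));
        System.out.println(model.orderCount("alice")); // 2
        System.out.println(model.orderCount("carol")); // 0
    }
}
```

The memory overhead we saw comes largely from boxing and per-entry overhead in structures like the `HashMap<String, Integer>` above; primitive-keyed collections would shrink it, but as said, RAM was cheap enough that we never bothered.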
I can definitely recommend this approach for small to medium services.
We are currently pondering moving parts of the read model to an
in-memory data grid like Hazelcast.