Hey folks,
I've been tinkering with using RavenDB as an event store. Because I'd be looking to plug it into a legacy system which is currently a bit of a monolith, I need to do some slightly funky integration work.
I want to check for the presence of multiple documents (the events from the various services) and, when they've all arrived, combine them and send the result back to the caller (who has been patiently waiting for an HTTP response all this time).
i.e.
1. An HTTP POST arrives at a web controller
2. The data is split into multiple commands for different domains and written to each domain's RavenDb database
3. A RavenDb subscription on each domain picks up the command document, processes it, and writes out events
4. The web controller sets up a changes request against multiple databases, watching for the arrival of a document that meets certain criteria relating to the data it received (e.g. a unique tracking identifier)
5. Once all the results are in from the change observables (or the changes time out), the result JSON is created by combining the data from the multiple sources
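To make steps 4-5 concrete, here's a minimal sketch of the wait-and-combine part in TypeScript (illustration only). The RavenDB wiring is assumed and not shown: in a real implementation each domain's promise would be resolved from a changes-observable callback when a document matching the tracking identifier shows up.

```typescript
type DomainResult =
  | { domain: string; status: "arrived"; data: unknown }
  | { domain: string; status: "timed-out" };

// Race one domain's arrival against the per-request timeout.
function withTimeout(
  domain: string,
  arrival: Promise<unknown>,
  timeoutMs: number
): Promise<DomainResult> {
  return new Promise((resolve) => {
    const timer = setTimeout(
      () => resolve({ domain, status: "timed-out" }),
      timeoutMs
    );
    arrival.then((data) => {
      clearTimeout(timer);
      resolve({ domain, status: "arrived", data });
    });
  });
}

// Wait on every domain, then hand back whatever arrived before the deadline
// so the controller can build the combined response JSON from it.
function combineResults(
  arrivals: Map<string, Promise<unknown>>,
  timeoutMs: number
): Promise<DomainResult[]> {
  return Promise.all(
    [...arrivals].map(([domain, p]) => withTimeout(domain, p, timeoutMs))
  );
}
```

The nice property is that a slow domain degrades to a partial response rather than hanging the whole request.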
My question relates to the cost of the changes API. If I have thousands of active users and the latency on the event processing could be up to 5 seconds (the current threshold), is it OK to have a large number of transient change listeners open against each database at the same time? If not, I'd instead write a (more durable) subscription worker and push all matching events into a local cache with a ~10 second eviction policy.
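For what it's worth, the fallback I have in mind would look roughly like this. Everything here (`EventCache`, `TrackedEvent`, the lazy-eviction-on-read) is illustrative, not an existing RavenDB API; the clock is injectable so the ~10 second TTL is testable without real waits.

```typescript
interface TrackedEvent {
  trackingId: string;
  payload: unknown;
}

class EventCache {
  private entries = new Map<
    string,
    { event: TrackedEvent; storedAt: number }
  >();

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now
  ) {}

  // Called from the subscription worker for every matching event it receives.
  put(event: TrackedEvent): void {
    this.entries.set(event.trackingId, { event, storedAt: this.now() });
  }

  // Called from the web controller while the HTTP request waits; entries past
  // the TTL are evicted lazily on read.
  get(trackingId: string): TrackedEvent | undefined {
    const entry = this.entries.get(trackingId);
    if (!entry) return undefined;
    if (this.now() - entry.storedAt > this.ttlMs) {
      this.entries.delete(trackingId);
      return undefined;
    }
    return entry.event;
  }
}
```

One subscription per database instead of one listener per in-flight request, at the cost of the controller having to poll (or be signalled from) the cache.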