Hi all. I find myself (once again) posting to what is probably the wrong mailing list, but I guess a lot of you have experience in this field. Sorry if you feel this question is not appropriate to this group.
I'm trying to design my first low-latency distributed system as a learning experience, though it's not necessarily ultra-low-latency or jitter sensitive. For scalability and redundancy reasons the architecture will be based on micro-services doing pub/sub on a messaging bus. The question is: should I go event based, command based, mixed, or something else entirely?
First option is fully event based: each service subscribes to some other service's event topic and outputs events itself. As an example, I get a request from the outside to create an account, and the gateway service will output an event "AccountCreationRequested" on the "GatewayEvents" topic. An accounting service will be listening on this topic and create the accounting records, outputting its own events ("AccountCreated" on topic "AccountingEvents", etc). An analytics service would also subscribe to "GatewayEvents" and generate some statistics, outputting its own events on topic "AnalyticsEvents". This system, while fast, couples the accounting and analytics services to the gateway's event topic and schema.
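To make the first option concrete, here's a minimal Python sketch. The `Bus` class, the handler names and the message shapes are my own assumptions standing in for a real messaging layer; only the topic and event names come from the scenario above.

```python
from collections import defaultdict

class Bus:
    """Toy in-memory pub/sub bus standing in for a real messaging layer."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, msg):
        for handler in self.subs[topic]:
            handler(msg)

bus = Bus()
accounts, stats = [], []

# Accounting service: subscribed to the gateway's topic, so it is coupled
# to the gateway's event names and payload shape.
def accounting(event):
    if event["type"] == "AccountCreationRequested":
        accounts.append(event["account_id"])
        bus.publish("AccountingEvents",
                    {"type": "AccountCreated", "account_id": event["account_id"]})

# Analytics service: also coupled to GatewayEvents.
def analytics(event):
    stats.append(event["type"])

bus.subscribe("GatewayEvents", accounting)
bus.subscribe("GatewayEvents", analytics)

# Gateway turns the external request into an event on its own topic.
bus.publish("GatewayEvents",
            {"type": "AccountCreationRequested", "account_id": "acct-1"})
```

Note the gateway never names its consumers; the coupling runs the other way, from subscribers to the gateway's event schema.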
Second option is fully command based: each service receives commands on a service specific command queue and commands other systems to do something by writing to their command queue. As an example, I get a request from the outside to create an account, and the gateway service will send a command "CreateAccount" on topic "AccountingCommands" and a command "CreateAccount" on topic "AnalyticsCommands", addressing those systems specifically. The gateway service now has intimate knowledge of who it is working with, although it would still be a fast system.
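The second option inverts that coupling. Same caveat as before: the `Bus` helper and message shapes are illustrative assumptions; the command and topic names are the ones from the example.

```python
from collections import defaultdict

class Bus:
    """Toy in-memory pub/sub bus standing in for a real messaging layer."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, msg):
        for handler in self.subs[topic]:
            handler(msg)

bus = Bus()
accounts, stats = [], []

# Each service only listens on its own command queue.
bus.subscribe("AccountingCommands", lambda c: accounts.append(c["account_id"]))
bus.subscribe("AnalyticsCommands", lambda c: stats.append(c["account_id"]))

def gateway_handle_request(account_id):
    # The gateway knows exactly which services must act and addresses each
    # one by name -- this is where the intimate knowledge lives.
    cmd = {"type": "CreateAccount", "account_id": account_id}
    bus.publish("AccountingCommands", cmd)
    bus.publish("AnalyticsCommands", cmd)

gateway_handle_request("acct-1")
```

Adding a fourth interested service now means editing `gateway_handle_request`, whereas in the event-based version it would just mean one more subscription.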
Third option is a mixed system: each service receives commands on a service-specific command queue and outputs state changes on a service-specific event topic. This system will need an "orchestrator" to transform events into commands. As an example, I get a request from the outside to create an account, and the gateway service will output an event "AccountCreationRequested" on the "GatewayEvents" topic. An orchestrator is subscribed to that topic and, upon seeing the event, will send a "CreateAccount" command to topics "AccountingCommands" and "AnalyticsCommands". This system looks better from a service decoupling perspective, but does add an additional network hop/contention point/failure point.
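And the mixed option, with the same assumed `Bus` helper: the event-to-command mapping lives in one orchestrator function, so neither the gateway nor the downstream services know about each other.

```python
from collections import defaultdict

class Bus:
    """Toy in-memory pub/sub bus standing in for a real messaging layer."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, msg):
        for handler in self.subs[topic]:
            handler(msg)

bus = Bus()
accounts, stats = [], []

# Services only listen on their own command queues, as in option two.
bus.subscribe("AccountingCommands", lambda c: accounts.append(c["account_id"]))
bus.subscribe("AnalyticsCommands", lambda c: stats.append(c["account_id"]))

# Orchestrator: the single place that maps events to commands.  In a real
# deployment this is the extra hop / contention point / failure point.
def orchestrate(event):
    if event["type"] == "AccountCreationRequested":
        cmd = {"type": "CreateAccount", "account_id": event["account_id"]}
        bus.publish("AccountingCommands", cmd)
        bus.publish("AnalyticsCommands", cmd)

bus.subscribe("GatewayEvents", orchestrate)

# Gateway only knows its own event topic, as in option one.
bus.publish("GatewayEvents",
            {"type": "AccountCreationRequested", "account_id": "acct-1"})
```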
I'd like to gather people's feelings on this. Do any of these make sense? Have you used anything like this? What were the problems with your system? What was really great about it?
I've read a lot about all of this but I still have mixed feelings; I'm even kind of lost. Sam Adams from LMAX mentions in a 2014 talk [1] that LMAX uses an event-based system, but then describes topics such as "ExecutionVenueServiceInstructions" and shows pieces of code akin to async RPC (45:52 into the talk). It does sound like a command sourcing system, not an event-based one. Command sourcing/journaling isn't without its problems, as replaying commands can produce different results; with event sourcing/journaling there is no extra validation of an event after it has been emitted, it just changes the system's state.
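To spell out the replay concern above, here's a small sketch of why the two journals behave differently. The `DAILY_LIMIT` check and message shapes are made up for illustration; the point is only that command replay re-runs validation while event replay does not.

```python
# Event journal: replay just applies recorded state changes, no re-validation.
def apply_event(state, event):
    if event["type"] == "Deposited":
        state["balance"] += event["amount"]
    return state

# Command journal: replay re-executes the decision, which can diverge if the
# validation consults anything outside the journal (config, clock, limits).
DAILY_LIMIT = 100  # imagine this was 200 when the command was first accepted

def handle_command(state, command):
    events = []
    if command["type"] == "Deposit" and command["amount"] <= DAILY_LIMIT:
        events.append({"type": "Deposited", "amount": command["amount"]})
    for e in events:
        state = apply_event(state, e)
    return state

# Replay a command that the original run accepted under the old limit.
command_journal = [{"type": "Deposit", "amount": 150}]
state = {"balance": 0}
for c in command_journal:
    state = handle_command(state, c)
# Under the new limit the replayed command is rejected, so the rebuilt state
# diverges from the original run.  Replaying the *event* journal
# [{"type": "Deposited", "amount": 150}] would reproduce the original state.
```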
Appreciate any feedback I can get.