Hi, we have some related concerns, although I would still like my question to be addressed by the Axon team.
As you can see in my question, I partly address your problem: it may not be possible at all (unless the Axon team says otherwise and gives us a hint) to combine RabbitMQ and a tracking processor. With RabbitMQ you use a subscribing processor, and you cannot have replays that way. You could use a streaming platform such as Apache Kafka, which keeps track of events and does the replay outside of Axon, but that defeats the purpose of the Event Store.
What I am currently doing is:
- Keep the communication between the command and query microservices over RabbitMQ, with a subscribing processor on the query-side microservice.
So the scenario is: commands come in, are stored as events in the Event Store on the command side, and are propagated through RabbitMQ to all listening query-side microservices, which then update their database records. From your brief question, I understand you have a similar approach.
- Keep a tracking processor on the query side which listens to the same events as the subscribing processor (duplicate code, yes, but for the moment that is the least of my concerns).
For this tracking processor to work, it needs direct access to the Event Store, so I have additional configuration in the query-side microservice to connect to my Event Store (MongoDB).
Now, this tracking processor is registered based on an external boolean property; it is not registered by default unless I trigger it by setting the property to true.
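To make the setup above concrete, here is a sketch of the query-side wiring, assuming an Axon 3-style configuration API. The processor names, the queue name "query-side-events", and the property "axon.replay.enabled" are my own placeholders, not anything from Axon:

```java
import org.axonframework.amqp.eventhandling.spring.SpringAMQPMessageSource;
import org.axonframework.config.EventHandlingConfiguration;
import org.axonframework.serialization.Serializer;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.rabbitmq.client.Channel;

@Configuration
public class QuerySideProcessorConfig {

    // Normal flow: events arrive from the command side over RabbitMQ.
    @Bean
    public SpringAMQPMessageSource amqpMessageSource(Serializer serializer) {
        return new SpringAMQPMessageSource(serializer) {
            @RabbitListener(queues = "query-side-events") // placeholder queue name
            @Override
            public void onMessage(Message message, Channel channel) {
                super.onMessage(message, channel);
            }
        };
    }

    @Autowired
    public void configureProcessors(EventHandlingConfiguration config,
                                    SpringAMQPMessageSource amqpSource,
                                    @Value("${axon.replay.enabled:false}") boolean replayEnabled) {
        // Subscribing processor fed by RabbitMQ (the default mode of operation).
        config.registerSubscribingEventProcessor("queryProjection", c -> amqpSource);

        // Tracking processor reading the Event Store directly; only registered
        // when the external boolean property is flipped to true for a replay.
        if (replayEnabled) {
            config.registerTrackingProcessor("queryProjectionReplay");
        }
    }
}
```

The conditional registration is what keeps the tracking processor dormant in day-to-day operation.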
What I achieve with this:
- Still use RabbitMQ for communication between the microservices
- Be able to do a full, manual, safe replay when things go south, by firing up the tracking processor and making sure I have cleared the token table (set up from the TokenStore bean), so that the tracking processor is forced to do a full replay of all events found in the Event Store.
So when we encounter an alert in our systems (we are still investigating what those alerts will be), we will drop the query-side DB, trigger the tracking processor, and have the events replayed with fresh data coming in. Then in normal scenarios, things play out as usual with RabbitMQ and a subscribing processor only.
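The token-clearing step can be sketched as below. Since my Event Store is MongoDB, I assume a Mongo-backed TokenStore here; the database name "axon" and collection name "trackingtokens" are assumptions, so use whatever names your TokenStore bean is actually configured with:

```java
import com.mongodb.MongoClient;

public class TokenReset {

    // Before flipping the replay property, wipe the stored tracking tokens so
    // the tracking processor has no saved position and is forced to replay
    // from the very first event in the Event Store.
    public static void clearTokens(MongoClient client) {
        client.getDatabase("axon")          // placeholder database name
              .getCollection("trackingtokens") // placeholder collection name
              .drop();
    }
}
```

If your TokenStore is relational instead, the equivalent is deleting all rows from its token table.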
Keep in mind here that the tracking processor on the query side is capable of completely replacing RabbitMQ, since it connects directly to the Event Store. This can easily be verified: configure the tracking processor on the query side to listen for specific events, add those events from the command side to the Event Store, and disable RabbitMQ propagation (or simply deactivate the subscribing processor on the query side). You will notice the tracking processor receiving the events and performing the actions (if configured) on the DB.
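That verification boils down to registering only a tracking processor and no AMQP source at all. A minimal sketch, again assuming the Axon 3-style API, with "queryProjection" as a placeholder for your processing group:

```java
import org.axonframework.config.EventHandlingConfiguration;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TrackingOnlyConfig {

    @Autowired
    public void configure(EventHandlingConfiguration config) {
        // Register ONLY a tracking processor, with no AMQP message source:
        // if the query-side DB is still updated after new events are stored
        // on the command side, they came straight from the Event Store.
        config.registerTrackingProcessor("queryProjection");
    }
}
```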
So in short, my approach described here with the two processors on the query side is only meant to keep RabbitMQ in the picture without losing event replay. We are still investigating whether we actually need RabbitMQ or will rely on the tracking processor only.