Hi,
I have a system design question that is somewhat related to the topics discussed in this group, so I am posting it here.
Let's say we have a system that sends orders to exchanges. It maintains order state and other information in shared memory (e.g., /dev/shm/state_file1, /dev/shm/state_file2, ..., /dev/shm/state_fileN). When an order goes out, several shared memory files are updated, and when messages are received from the exchanges, several shared memory files are updated again. We need to replicate the state (all the relevant files in shared memory) to another box for disaster recovery. The system handles thousands of orders a second, and the state should be replicated in near-real-time.

My existing solution has a separate process running on the box, apart from the order entry/management engine. It figures out which shared memory files change when a message arrives, reads the relevant portion of each file, packages the contents into a message, and sends it to a listener process on another box. The listener process writes the messages to disk. The problem is in figuring out what changes when an order is sent and when a message arrives - a big chunk of the logic in the order entry/management engine is duplicated in the process that publishes the messages.
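To make the change-detection step concrete: one generic alternative I have considered is diffing the shared memory files at a fixed chunk granularity instead of reimplementing the engine's logic. This is only a minimal sketch, not what my process actually does - the chunk size, function names, and checksum choice are all assumptions for illustration:

```python
import hashlib

CHUNK = 4096  # assumed granularity for detecting changed regions


def chunk_hashes(snapshot: bytes):
    """Checksum each fixed-size chunk of a state-file snapshot."""
    return [hashlib.md5(snapshot[i:i + CHUNK]).digest()
            for i in range(0, len(snapshot), CHUNK)]


def dirty_chunks(old: bytes, new: bytes):
    """Return (offset, payload) pairs for chunks that differ between
    the previous snapshot and the current one. These pairs, plus the
    file name, would be the messages sent to the listener, which
    writes each payload at the same offset on the DR box."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    changed = []
    for idx, h in enumerate(new_h):
        if idx >= len(old_h) or h != old_h[idx]:
            off = idx * CHUNK
            changed.append((off, new[off:off + CHUNK]))
    return changed
```

The appeal is that the publisher then needs no knowledge of the order-entry logic at all; the cost is periodically rescanning the files rather than knowing exactly what changed.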
Is there any other way to do this?
Thanks for your time.
-Prashanth.