How would you approach the problem with large data over MQ

373 views

Vero K.

unread,
Apr 16, 2017, 5:33:18 AM4/16/17
to mechanical-sympathy
We use a message broker to communicate between different service components. One requirement: after a client logs in, we first need to send a large amount of historical data (trade data for the last day) to initialize the UI; after that, the client receives subsequent trade updates in real time. All data is provided by our back-end servers, which push it to the server responsible for client data; the servers communicate over MQ. Given that sending a lot of data over MQ at login might not be a good idea because of the MQ message size limitation, one option is to split the historical data into individual messages - but that puts a lot of traffic on the MQ, which can hurt its performance. Another approach is to fetch the historical data with REST and receive subsequent updates over MQ, but this can leave us out of sync: while we wait for the REST response to travel to the server handling front-end clients, more trades may happen, and we would need reconciliation logic to filter out (or apply) the MQ trades already contained in the REST response - which doesn't look great to me. How would you approach this problem given an MQ broker (RabbitMQ)? We use Java for our project.
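The "split into individual messages" option mentioned above can be sketched as follows. This is a minimal illustration, not production code; the 128 KiB chunk size is an assumption (RabbitMQ itself accepts much larger messages, but smaller ones keep the broker responsive), and the class name is made up for the example.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Splits a large payload into chunks that each stay under a chosen
 * size limit, so they can be published as individual MQ messages.
 */
public class Chunker {
    public static List<byte[]> split(byte[] payload, int maxChunkBytes) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += maxChunkBytes) {
            int len = Math.min(maxChunkBytes, payload.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(payload, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }
}
```

In practice each chunk would also carry a sequence number and total count in message headers so the consumer can reassemble and detect loss.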

Michael Barker

unread,
Apr 16, 2017, 6:14:13 AM4/16/17
to mechanica...@googlegroups.com
This is similar to the approach used with our (and many others') ITCH protocol, but with one significant exception. We don't sync using a full replay of historical data; instead we sync from a snapshot that represents the state of the market at a specific point in time (identified by some form of transaction id). The customer attaches to the real-time feed and starts buffering the data, then collects a snapshot from a separate server. This is generally much cheaper to ship to the customer than a full replay. The customer takes the snapshot and its associated transaction id, discards any buffered real-time updates that occurred prior to the snapshot, and continues from there.
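The recovery protocol described above can be sketched in Java roughly as follows, assuming every update carries a monotonically increasing transaction id. Class and method names here are hypothetical, not from any real feed API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Snapshot-based recovery: buffer live updates, then join from a snapshot. */
public class SnapshotRecovery {
    /** A real-time update tagged with its transaction id. */
    public record Update(long txnId, String payload) {}

    /** A snapshot of current state plus the last txn id it includes. */
    public record Snapshot(long txnId, String state) {}

    private final Deque<Update> buffer = new ArrayDeque<>();

    /** Step 1: attach to the live feed and buffer everything that arrives. */
    public void onLiveUpdate(Update u) {
        buffer.addLast(u);
    }

    /**
     * Step 2: once the snapshot arrives, discard buffered updates already
     * reflected in it (txnId <= snapshot.txnId) and return the remainder,
     * which must be applied on top of the snapshot state.
     */
    public List<Update> recover(Snapshot snapshot) {
        List<Update> toApply = new ArrayList<>();
        for (Update u : buffer) {
            if (u.txnId() > snapshot.txnId()) {
                toApply.add(u);
            }
        }
        buffer.clear();
        return toApply;
    }
}
```

The key property is that the transaction id gives a total order shared by the snapshot and the live stream, so the join point is unambiguous and no reconciliation heuristics are needed.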

Historical data is then handled by a separate service and pulled on demand, as most UIs only need historical data for charting. For the historical data we have pre-processed CSV files that we ship from the file system to the customer over HTTP.
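A minimal sketch of the consuming side of such a historical service: the CSV column layout (timestamp,symbol,price,quantity) is an illustrative assumption, not a real exchange format, and the class name is made up.

```java
import java.util.ArrayList;
import java.util.List;

/** Parses the CSV body returned by a pull-on-demand historical endpoint. */
public class HistoricalCsv {
    public record Trade(long timestamp, String symbol, double price, long quantity) {}

    public static List<Trade> parse(String csvBody) {
        List<Trade> trades = new ArrayList<>();
        for (String line : csvBody.split("\n")) {
            if (line.isBlank()) continue;
            String[] f = line.split(",");
            trades.add(new Trade(Long.parseLong(f[0]), f[1],
                    Double.parseDouble(f[2]), Long.parseLong(f[3])));
        }
        return trades;
    }

    /* Fetching the body with java.net.http (Java 11+) would look like:
     *   HttpClient client = HttpClient.newHttpClient();
     *   HttpRequest req = HttpRequest.newBuilder(URI.create(url)).build();
     *   String body = client.send(req, HttpResponse.BodyHandlers.ofString()).body();
     *   List<Trade> trades = parse(body);
     */
}
```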


--
You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mechanical-sympathy+unsub...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Vero K.

unread,
Apr 17, 2017, 1:47:43 PM4/17/17
to mechanical-sympathy
Thanks, Mike. Useful input.

Ayub Sayyad

unread,
Apr 17, 2017, 10:50:51 PM4/17/17
to mechanica...@googlegroups.com
Great suggestion, Mike. It can be simplified even further by sending the snapshot on the same channel as the incremental updates; that way you don't need any buffering.


Michael Barker

unread,
Apr 18, 2017, 12:37:52 AM4/18/17
to mechanica...@googlegroups.com
The problem with sending the full snapshot on the same channel is that it can be of arbitrary length. If you are doing UDP multicast, there is a good chance that the snapshot message will be fragmented across multiple packets, and resolving packet loss in this scenario becomes tricky without a more complex protocol (e.g. Aeron, PGM). You also impact the performance of the service producing the incremental updates, as it can be expensive to calculate and render a full snapshot of the order book, especially when you are measuring the latency of pushing out real-time updates in the < 100µs range.

The other question is how often you push out snapshots: on a time interval, or upon request? If on a time interval, you force users to wait a specified amount of time before joining the incremental stream, which may be too long for their use case. If you push one out on request, you impact the performance of the incremental update service (as above), and with UDP multicast all of the other consumers have to discard large messages they have no interest in. For these reasons the de facto approach in most exchanges is to use a separate snapshot service.

