Socket buffer and rates streaming


Vero K.

Apr 14, 2017, 6:03:26 AM
to mechanical-sympathy
Hi, we want to stream FX rates over WebSockets and need to find out how to do it properly. We open a socket for every connection and it has a buffer; if that buffer fills up it can cause a problem, and on the other hand, if a client is too slow, at some point we need to drop the connection. How would you implement rates streaming over WebSockets to handle this? Would you consider putting an additional buffer of some size (for example a Disruptor queue) in front of every client, picking data up from there and writing it into the socket buffer, and if the socket buffer is full, keeping the message in the Disruptor and publishing it to the client once the socket buffer frees up? And if the Disruptor queue is full, disconnecting the client? Do you think this is a good solution, or how is it usually handled? We use Java for our project.
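
A rough sketch of the per-client bounded buffer idea described above, using only JDK types (the queue capacity, String payload, and disconnect handling are illustrative assumptions, not details from the post):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// One bounded queue per client; a writer thread drains it into the socket.
final class ClientFeed {
    private final BlockingQueue<String> pending = new ArrayBlockingQueue<>(4096);

    // Called by the rates publisher. Returns false when the queue is full,
    // i.e. the client cannot keep up and should be disconnected.
    boolean publish(String rateMessage) {
        return pending.offer(rateMessage);
    }

    // Called by the per-client writer thread; blocks until data is available,
    // then the caller writes the message into the WebSocket send buffer.
    String nextMessage() throws InterruptedException {
        return pending.take();
    }
}

If publish() returns false, the caller would tear down that client's WebSocket session rather than let one slow consumer block the feed for everyone else.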

peter royal

Apr 14, 2017, 9:00:07 AM
to mechanica...@googlegroups.com
For a similar problem I will only let one message for a given "key" remain in the queue to be sent. 

So if a client is slow, they'll receive the most recent message for a key but lose the intermediate ones.
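
A rough sketch of that per-key conflation in Java (latest value wins; the generic key/value types and the Consumer-based sender are assumptions for illustration, not code from the post):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Consumer;

// At most one pending message per key: a slow client always gets the
// newest value for a key and skips whatever it missed in between.
final class ConflatingQueue<K, V> {
    private final ConcurrentMap<K, V> pending = new ConcurrentHashMap<>();

    void offer(K key, V latest) {
        pending.put(key, latest);   // overwrite any value still waiting for this key
    }

    void drainTo(Consumer<V> sender) {
        for (K key : pending.keySet()) {
            V value = pending.remove(key);
            if (value != null) {
                sender.accept(value);
            }
        }
    }
}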

-pete

-- 
peter royal - (on the go)


Vero K.

Apr 14, 2017, 10:00:05 AM
to mechanical-sympathy
Thanks, but losing messages won't work for us: the client either waits and gets disconnected, or it consumes everything.

Greg Young

Apr 14, 2017, 10:01:56 AM
to mechanica...@googlegroups.com
For a price feed? What good is a 30-second-old price update? I would prefer the current one, losing the middle, in most cases.

If you were doing level 2 data (order book), this statement would make more sense.



--
Studying for the Turing test

Michael Barker

Apr 14, 2017, 6:07:03 PM
to mechanica...@googlegroups.com
We've found that as our exchange volumes have increased, the only protocol capable of handling a full un-throttled feed is ITCH (over multicast UDP). For all of our other stream-based TCP feeds (FIX, HTTP) we are moving toward rate throttling and coalescing events based on symbol in all cases - we already do it in the majority of our connections.

We maintain a buffer per connection (Disruptor or coalescing ring buffer depending on the implementation) so that the rate at which a remote connection consumes does not impact any of the other connections. With FIX we also maintain some code so that if we detect a ring buffer becoming too full (e.g. >50%) we proactively tear down that connection, under the assumption that the client is not fast enough to handle the full feed, or that it has disconnected and we didn't get a FIN packet. If you have non-blocking I/O available, then you can be a little bit smarter regarding the implementation (unfortunately not an option with the standardised web socket APIs).
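
A hedged sketch of the "tear down the connection when the ring buffer is more than half full" check, using the LMAX Disruptor's RingBuffer (the 0.5 threshold and the closeConnection callback are illustrative, taken from the description above, not actual exchange code):

import com.lmax.disruptor.RingBuffer;

// Checks how full a client's ring buffer is and proactively drops the
// connection if the consumer clearly cannot keep up with the feed.
final class SlowConsumerGuard {
    private final RingBuffer<?> ringBuffer;
    private final Runnable closeConnection;

    SlowConsumerGuard(RingBuffer<?> ringBuffer, Runnable closeConnection) {
        this.ringBuffer = ringBuffer;
        this.closeConnection = closeConnection;
    }

    void check() {
        long used = ringBuffer.getBufferSize() - ringBuffer.remainingCapacity();
        if ((double) used / ringBuffer.getBufferSize() > 0.5) {
            closeConnection.run();  // too slow, or gone and we never saw a FIN
        }
    }
}

check() could run on every publish or from a periodic housekeeping task.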

Mike.



Vero K.

Apr 15, 2017, 5:51:24 AM
to mechanical-sympathy
Right, we stream rates for analytics, and some clients want the whole history, though the real-time factor is also important.

Vero K.

Apr 15, 2017, 5:54:15 AM
to mechanical-sympathy

Thanks, quite a useful answer. If we have around 700 clients, do we need to create around 700 Disruptors? We also stream different types of data (3 types); would it be a good idea to create 700 * 3 Disruptors?





Vero K.

Apr 15, 2017, 9:41:14 AM
to mechanical-sympathy
Just to add here: I mean I want to use multiple Disruptors (or a coalescing ring buffer + Disruptor) per user, because we can merge some fast-ticking data, while some slow data (trade info) we can't merge. Do you think that will work?


Michael Barker

Apr 16, 2017, 1:03:27 AM
to mechanica...@googlegroups.com
With Web-based traffic (long poll/http streaming rather than web sockets), we maintain separate buffers for each message type (market data, trade market data, execution reports). Each message type has different rules around how events can be coalesced and/or throttled (e.g. market data can be, execution reports can't).
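
A small illustrative sketch of that per-type split for one client connection (the type names and capacities are assumptions; market data is conflated per symbol, execution reports are never dropped):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Separate buffers with separate rules per message type:
// market data may be coalesced, execution reports must all be delivered.
final class PerTypeBuffers {
    private final ConcurrentMap<String, String> marketData = new ConcurrentHashMap<>();
    private final BlockingQueue<String> executionReports = new ArrayBlockingQueue<>(1024);

    void onMarketData(String symbol, String payload) {
        marketData.put(symbol, payload);         // latest value per symbol wins
    }

    boolean onExecutionReport(String payload) {
        return executionReports.offer(payload);  // false => client too slow, disconnect
    }
}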

For FIX we have separate servers for market data and order processing, so in effect we have separate buffers for each event type; but because market data behaves quite a bit differently to order flow, having separate servers allows the implementation to differ where need be.

Mike.


Vero K.

Apr 16, 2017, 1:43:21 AM
to mechanical-sympathy
Great answers, thanks, Mike!





Ben Evans

Apr 16, 2017, 4:30:21 AM
to mechanica...@googlegroups.com
This is the same architecture employed by more than one investment bank for their in-house solutions.

It's a pretty solid pattern.

Ben