Hi, we want to stream FX rates over WebSockets and need to figure out how to do it properly. We open a socket for every connection, and each socket has a buffer; if that buffer fills up we have a problem, and if a client is slow enough, at some point we need to drop the connection. How would you implement rates streaming over WebSockets to handle this? Would you consider putting an additional buffer of some size (for example a Disruptor queue) in front of every client, picking messages up from there and writing them into the socket buffer, and if the socket buffer is full, keeping the message in the Disruptor and publishing it to the client once the socket buffer frees up? And if the Disruptor queue is full, disconnecting the client? Do you think that is a good solution, or how is this usually handled? We use Java for our project. A sketch of what we have in mind is below.
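This is only a rough sketch of the scheme we're describing, using the LMAX Disruptor and the standard javax.websocket API. The names (ClientStream, RateEvent, offer) are made up for illustration, and error handling is mostly elided:

import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;
import javax.websocket.Session;

public final class ClientStream {
    // Mutable event reused by the ring buffer; avoids per-message allocation.
    public static final class RateEvent {
        String payload;
    }

    private final Disruptor<RateEvent> disruptor;
    private final RingBuffer<RateEvent> ringBuffer;
    private final Session session;

    public ClientStream(Session session, int bufferSize) { // bufferSize must be a power of 2
        this.session = session;
        this.disruptor = new Disruptor<>(RateEvent::new, bufferSize, DaemonThreadFactory.INSTANCE);
        // The consumer thread blocks on the socket write, so a slow client
        // backs up only its own ring buffer, not the other clients.
        this.disruptor.handleEventsWith((EventHandler<RateEvent>) (event, seq, endOfBatch) ->
                session.getBasicRemote().sendText(event.payload));
        this.ringBuffer = disruptor.start();
    }

    // Called by the rates-fanout thread. Returns false if the client was dropped.
    public boolean offer(String rateJson) {
        // tryPublishEvent is non-blocking: it returns false instead of waiting
        // when the ring buffer has no free slots, i.e. the client is too slow.
        boolean published = ringBuffer.tryPublishEvent((event, seq, msg) -> event.payload = msg, rateJson);
        if (!published) {
            disconnect();
        }
        return published;
    }

    private void disconnect() {
        disruptor.halt(); // stop the consumer thread
        try { session.close(); } catch (Exception ignored) { }
    }
}

Note this gives each client its own consumer thread, which is part of why we're asking whether one Disruptor per client is the usual approach.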
We've found that as our exchange volumes have increased, the only protocol capable of handling a full un-throttled feed is ITCH (over multicast UDP). For all of our other stream-based TCP feeds (FIX, HTTP) we are moving toward rate throttling and coalescing events by symbol in all cases; we already do this in the majority of our connections. We maintain a buffer per connection (Disruptor or coalescing ring buffer, depending on the implementation) so that the rate at which a remote connection consumes does not impact any of the other connections.

With FIX we also maintain some code such that if we detect a ring buffer becoming too full (e.g. >50%), we proactively tear down that connection, on the assumption that the client is not fast enough to handle the full feed, or that it has disconnected and we never got a FIN packet. If you have non-blocking I/O available, you can be a little smarter about the implementation (unfortunately not an option with the standardised WebSocket APIs).

Mike.
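A rough sketch of what symbol-based coalescing plus the >50% teardown rule could look like. This is not Mike's actual code, just a hand-rolled illustration with made-up names; the coalescing and drop semantics are the point:

import java.util.LinkedHashMap;
import java.util.Map;

final class CoalescingRateBuffer {
    // Insertion-ordered map: at most one pending slot per symbol; a newer rate
    // for the same symbol overwrites the older one instead of queueing behind it.
    private final LinkedHashMap<String, String> pending = new LinkedHashMap<>();
    private final int maxSymbols;

    CoalescingRateBuffer(int maxSymbols) {
        this.maxSymbols = maxSymbols;
    }

    // Producer side: returns false when even the coalesced buffer overflows,
    // i.e. the client is falling behind across many symbols at once.
    synchronized boolean offer(String symbol, String rateJson) {
        if (!pending.containsKey(symbol) && pending.size() >= maxSymbols) {
            return false;
        }
        pending.put(symbol, rateJson); // overwrite = coalesce
        return true;
    }

    // The proactive-teardown rule: treat >50% occupancy as a slow consumer.
    synchronized boolean isTooFull() {
        return pending.size() > maxSymbols / 2;
    }

    // Consumer side: drain everything currently pending, oldest symbol first.
    synchronized Map<String, String> drain() {
        Map<String, String> batch = new LinkedHashMap<>(pending);
        pending.clear();
        return batch;
    }
}

A production version would presumably use a lock-free structure (such as the coalescing ring buffer Mike mentions) rather than a synchronized map, but the semantics are the same: a slow client skips intermediate ticks per symbol instead of backing up the whole feed.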
Thanks, quite a useful answer. If we have around 700 clients, do we need to create around 700 Disruptors? We also stream different types of data (3 types); would it be a good idea to create 700 * 3 Disruptors?
With web-based traffic (long poll / HTTP streaming rather than WebSockets), we maintain separate buffers for each message type (market data, trade market data, execution reports), as each message type has different rules around how events can be coalesced and/or throttled (e.g. market data can be, execution reports cannot).

For FIX we have separate servers for market data and order processing, so in effect we have separate buffers for each event type; because market data behaves quite differently to order flow, having separate servers allows the implementations to differ where need be.

Mike.
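A sketch of per-type buffers with different coalescing rules, as described above. The type and names (PerClientBuffers, Kind, offer) are illustrative, not from the actual implementation: market data coalesces by symbol, execution reports never coalesce and must be delivered one-for-one:

import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Queue;

final class PerClientBuffers {
    enum Kind { MARKET_DATA, TRADE_MARKET_DATA, EXECUTION_REPORT }

    // Coalescible feeds: only the latest message per symbol is kept.
    private final LinkedHashMap<String, String> marketData = new LinkedHashMap<>();
    private final LinkedHashMap<String, String> trades = new LinkedHashMap<>();
    // Execution reports must be delivered one-for-one, so they queue up.
    private final Queue<String> executionReports = new ArrayDeque<>();
    private final int execReportCapacity;

    PerClientBuffers(int execReportCapacity) {
        this.execReportCapacity = execReportCapacity;
    }

    // Returns false when the client should be disconnected as too slow.
    synchronized boolean offer(Kind kind, String symbol, String payload) {
        switch (kind) {
            case MARKET_DATA:
                marketData.put(symbol, payload);   // coalesce by symbol
                return true;
            case TRADE_MARKET_DATA:
                trades.put(symbol, payload);       // coalesce by symbol
                return true;
            case EXECUTION_REPORT:
                if (executionReports.size() >= execReportCapacity) {
                    return false;                  // cannot coalesce, must drop client
                }
                executionReports.add(payload);
                return true;
            default:
                return false;
        }
    }
}

Under this scheme the per-client, per-type buffers are cheap maps and queues rather than 2,100 full Disruptors, though whether that trade-off fits depends on your throughput and latency targets.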