Martin/Vitaly: Batching is something of a latency/throughput tradeoff, no? If you batch the send, you might get better latency for the 2nd message, but worse latency for the 1st message. It really then depends on your use case.
But in an HFT context, the end point of optimizing for latency doesn't even involve the CPU, let alone the JVM. So to go back to ymo's question, forced to do for what? Even the kernel bypass shims aren't "free" to use from a developer time perspective, if you care about your code actually working.
On Wed Feb 18 2015 at 2:52:07 PM Vitaly Davidovich <vit...@gmail.com> wrote:
Yeah, could've been me :) +1 on batching in particular. In fact, there was a follow-on talk at LCA 2015 on the kernel memory manager (http://lwn.net/Articles/629152/ -- pretty sure this one hasn't been posted :)) which is also looking at adding batching to kernel mem alloc routines (partly driven by the networking stack demands).
On Wed, Feb 18, 2015 at 2:47 PM, Martin Thompson <mjp...@gmail.com> wrote:
Yeah it is a good article, I think you or someone else posted it before. API/protocols sooooooo need to go async and support batching if we are to take further steps in performance.
On 18 February 2015 at 19:01, Vitaly Davidovich <vit...@gmail.com> wrote:
Don't recall if this was already posted to this list before, but either way, here's an interesting presentation (linked to from the lwn article) on the topic of kernel network stack: http://lwn.net/Articles/629155/
On Wed, Feb 18, 2015 at 1:40 PM, Martin Thompson <mjp...@gmail.com> wrote:
Not so snarky ;-) Actually a good point. If using Java on Linux and you need low latency comms then Solarflare with Open Onload, or Mellanox, are good options.

A really key thing to reduce latency is to batch up the expensive operations to amortise the cost. Consider micro burst scenarios. A great example is using sendmmsg() and recvmmsg(). If you get to batch 2+ frames for each system call then you beat going via a user space stack.

The next logical step is to do JNI and go to a native API for a user space stack and use batch semantics.

The main advantage of the likes of Onload is the avoidance of kernel-introduced jitter on top of its faster path.
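To make the sendmmsg()/recvmmsg() point concrete, here is a minimal, hypothetical sketch of draining a micro burst with a single recvmmsg() call on Linux. It is not from the post above; the port, batch size and buffer size are arbitrary illustrations.

#ifndef _GNU_SOURCE
#define _GNU_SOURCE            // for recvmmsg()
#endif
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    constexpr int BATCH  = 32;     // frames drained per system call (illustrative)
    constexpr int BUF_SZ = 2048;   // per-datagram buffer (illustrative)

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(9000);   // illustrative port
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    char bufs[BATCH][BUF_SZ];
    iovec iovs[BATCH];
    mmsghdr msgs[BATCH];
    for (int i = 0; i < BATCH; ++i) {
        iovs[i] = {bufs[i], BUF_SZ};
        std::memset(&msgs[i], 0, sizeof(msgs[i]));
        msgs[i].msg_hdr.msg_iov    = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    // One system call returns up to BATCH datagrams, amortising the
    // user/kernel transition across the whole burst.
    int n = recvmmsg(fd, msgs, BATCH, 0, nullptr);
    for (int i = 0; i < n; ++i)
        std::printf("frame %d: %u bytes\n", i, msgs[i].msg_len);

    close(fd);
    return 0;
}

The Java route would presumably be a thin JNI wrapper around the same call, handing it direct ByteBuffers so the batch can be filled without extra copies on the Java side.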
On 18 February 2015 at 16:02, Jimmy Jia <tes...@gmail.com> wrote:
The snarky answer is that if you care about performance in that context, you probably shouldn't be using kernel sockets anyway, and at least be using a shim for stack bypass.
On Wed, Feb 18, 2015, 10:50 ymo <ymol...@gmail.com> wrote:
Were you ever forced to use native sockets instead of NIO because of performance? I am addressing this to people in HFT in particular.
The lowest latency trading strategies these days are typically implemented such that responding is done entirely in FPGA. Obviously there's software and CPUs involved, but your critical path never touches a CPU.

If you're not there (or past that; my knowledge is not fully up to date, which is why I'm talking about this at all), you're in the world of making latency/throughput/ease-of-development compromises.

This was more w/r/t Martin saying "API/protocols sooooooo need to go async and support batching if we are to take further steps in performance". This is not true in the lowest latency contexts, just because those sorts of things aren't on the critical path any more.
I am surprised that no one has mentioned lock-free queues between C/C++ and Java to bypass NIO and garbage collection altogether. Meaning a Java thread is the consumer and the C++ thread is the producer (or vice versa). I was thinking that this would be very prevalent by now. Is anyone using this?
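For what it's worth, a rough, hypothetical sketch of the C++ producer side of such a lock-free (single-producer/single-consumer) ring in shared memory is below. The assumption is that the Java consumer maps the same file (e.g. with FileChannel.map()) and reads the counters with the appropriate memory ordering; the file name, sizes and layout are illustrative only, not anyone's production design.

#include <atomic>
#include <cstdint>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

constexpr size_t SLOTS     = 1024;   // power of two, so head % SLOTS stays cheap
constexpr size_t SLOT_SIZE = 256;    // fixed-size records keep indexing trivial

struct Ring {
    std::atomic<uint64_t> head;      // next slot to write, advanced by the producer
    std::atomic<uint64_t> tail;      // next slot to read, advanced by the consumer
    char slots[SLOTS][SLOT_SIZE];
};

int main() {
    // Both processes map the same file; the Java side could use FileChannel.map().
    int fd = open("/dev/shm/ring", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(Ring));     // zero-filled, so head == tail == 0 initially
    auto* ring = static_cast<Ring*>(
        mmap(nullptr, sizeof(Ring), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    const char msg[] = "tick";
    uint64_t head = ring->head.load(std::memory_order_relaxed);

    // Wait for a free slot, copy the record in, then publish it by bumping head.
    while (head - ring->tail.load(std::memory_order_acquire) >= SLOTS) { /* spin */ }
    std::memcpy(ring->slots[head % SLOTS], msg, sizeof(msg));
    ring->head.store(head + 1, std::memory_order_release);  // release makes the bytes visible

    munmap(ring, sizeof(Ring));
    close(fd);
    return 0;
}

The Java side would read head through the mapped buffer (with the right ordering, e.g. via Unsafe), consume the slot, and then advance tail, so neither side takes a lock or allocates on the hot path.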
--
You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mechanical-sympathy+unsubscribe...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.