Does anyone on this list know how the use of Docker affects latency? I keep hearing about it as if it's the greatest thing since sliced bread, but I've heard anecdotal evidence that low-latency apps take a hit.
If you care about low latency, why on earth would you think that a virtualized OS was a good idea?
There are doubtless good papers on the low-latency behaviour of e.g. Solaris zones or FreeBSD jails. That is the yardstick Docker & its hype should be measured against, IMO.
My 2c is that Docker's clumsiness, especially in how it approaches networking, probably dooms it for low latency, but I'm more than happy to be proved wrong. On the quiet, I quite like Docker & would like it to live up to its spin.
Ben
Docker isn't virtualized. It's a relatively primitive reimplementation of the equivalent of Solaris zones.
It has a heavily NATed approach to networking and some other rather unfortunate design decisions.
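To make the NAT point concrete, a rough sketch (image name and port numbers are just placeholders, and this assumes the standard iptables-based setup): publishing a port on the default bridge network routes traffic through a DNAT rule that Docker installs, which you can inspect directly:

    # default bridge networking: the published port is reached via a NAT hop
    docker run -d --name web -p 8080:80 nginx
    # list the NAT rules Docker installed for that mapping (needs root)
    sudo iptables -t nat -L DOCKER -n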
My overall opinion is: some good ideas, some questionable design decisions, and a community with worrying amounts of NIH.
Ben
https://access.redhat.com/articles/1407003 (behind a paywall, can't remember the details).
Networking: the situation is pretty clear. If you want one of those "land anywhere and NAT/bridge with some auto-generated networking stuff" deployments, you'll probably pay dearly for that behaviour in network latency and throughput, compared to bare-metal dedicated NICs on normal Linux.

However, there are options for deploying Docker containers (which, again, may differ from how some people would like to deploy things) that provide either low-overhead or essentially zero-latency-overhead network links. Start with host networking and/or dedicated IP addresses and NICs, and you'll already do much better than the bridged defaults. You can go further with things like Solarflare's NICs (which tend to be common in bare-metal low-latency environments already), and even use kernel-bypass, dedicated-spinning-core network stacks whose latency behaviour under Docker is no different than if you did the same thing on bare-metal Linux.
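For illustration, a minimal sketch of the two ends of that spectrum (image name and ports are placeholders, not a recommendation):

    # bridged/NATed default: convenient "land anywhere" behaviour, extra bridge + NAT hop
    docker run -d -p 8080:80 nginx

    # host networking: the container shares the host's network stack,
    # so there is no bridge or NAT in the data path
    docker run -d --network host nginx

From there, dedicating IP addresses and NICs to the container, or running the kernel-bypass stacks described above, is a deployment choice rather than something Docker itself adds overhead to.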