Vertx tuning advice


Thomas Cataldo

Mar 22, 2015, 4:43:17 AM
to ve...@googlegroups.com
Hi,

I'm trying to understand one of the settings proposed in the vertx manual:

Tune TCP buffer size


Each TCP connection allocates memory for its buffer, so to support many connections in limited RAM you may need to reduce the TCP buffer size, e.g.

HttpServer server = vertx.createHttpServer();
server.setSendBufferSize(4 * 1024);
server.setReceiveBufferSize(4 * 1024);


Doing that seems to give me slow upload performance; it appears to disable TCP window autoscaling and has odd side effects.

When I don't specify anything for those settings, performance is much higher, as the buffer size seems to grow properly and automatically.

Looking at other guides for those settings, I came across the protobuf tutorial, which has this example:

Bootstrap bootstrap = new Bootstrap();
bootstrap.group(new NioEventLoopGroup());
bootstrap.handler(clientFactory);
bootstrap.channel(NioSocketChannel.class);
bootstrap.option(ChannelOption.TCP_NODELAY, true);
bootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000);
bootstrap.option(ChannelOption.SO_SNDBUF, 1048576);
bootstrap.option(ChannelOption.SO_RCVBUF, 1048576);

And it links to this guide for tuning that stuff: http://fasterdata.es.net/host-tuning/background/

Which states this:

TCP Autotuning

Beginning with Linux 2.6, Mac OSX 10.5, Windows Vista, and FreeBSD 7.0, both sender and receiver autotuning became available, eliminating the need to set the TCP send and receive buffers by hand for each path. However, the maximum buffer sizes are still too small for many high-speed network paths, and must be increased as described on the pages for each operating system.


Do you still think the advice to decrease (and hardcode) buffer sizes is correct on a recent Linux kernel?

Tim Fox

Mar 23, 2015, 3:52:25 AM
to ve...@googlegroups.com
Auto-tuning is all about getting the best throughput for a connection, but you might end up with large buffers, and therefore be unable to cope with many connections in limited RAM.

So it makes sense to be able to turn this off and use a deterministically smaller buffer size if you need to cope with many connections in limited RAM, as the docs say. But, of course, expect performance to be affected.
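A rough back-of-the-envelope sketch of this memory trade-off (the connection count and buffer sizes below are illustrative assumptions, not measurements from this thread):

```java
// Sketch: per-connection TCP buffer memory for N connections.
// Connection count and buffer sizes are illustrative assumptions.
public class BufferMemorySketch {

    /** Total buffer memory in bytes for N connections. */
    static long footprintBytes(long connections, long sendBuf, long recvBuf) {
        return connections * (sendBuf + recvBuf);
    }

    public static void main(String[] args) {
        long connections = 100_000;
        // Fixed small buffers, as in the vertx docs example: 4 KB each way.
        long small = footprintBytes(connections, 4 * 1024, 4 * 1024);
        // Autotuned buffers that have grown to, say, 1 MB each way.
        long large = footprintBytes(connections, 1024 * 1024, 1024 * 1024);
        System.out.println("4 KB buffers: " + small / (1024 * 1024) + " MB");
        System.out.println("1 MB buffers: " + large / (1024 * 1024) + " MB");
    }
}
```

With 100k connections, hardcoded 4 KB buffers cost under 1 GB, while buffers grown to 1 MB would need around 200 GB, which is why capping them can matter.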
--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vertx+un...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Thomas Cataldo

Mar 23, 2015, 5:47:37 PM
to ve...@googlegroups.com
Well, I tend to disagree, or at least I think the doc should be more explicit about what the trade-off is.

With N connections and your buffers capped to 4k, your JVM will be handling N x 4k at a time, so you can handle a lot of them. What is missing here is that a single connection will also never buffer more than 4k. So if roughly 6.5 MB of window is needed to saturate a 1 Gbit link over a 50 ms RTT, you will never get more than about 630 kbit/s (1024 Mbit / (6500 / 4)) out of your gig link, which is pretty annoying ;-)
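That ceiling follows from the bandwidth-delay product: a fixed window bounds throughput at window / RTT. A small sketch of the arithmetic, using the link speed, RTT, and buffer size discussed here as assumptions:

```java
// Sketch: throughput ceiling imposed by a fixed TCP window (window / RTT),
// and the window (bandwidth-delay product) needed to saturate a link.
public class BdpSketch {

    /** Max throughput in bits/s for a given window (bytes) and RTT (ms). */
    static double maxThroughputBps(long windowBytes, double rttMillis) {
        return windowBytes * 8.0 / (rttMillis / 1000.0);
    }

    /** Window in bytes needed to saturate a link of linkBps over rttMillis. */
    static double bdpBytes(double linkBps, double rttMillis) {
        return linkBps / 8.0 * (rttMillis / 1000.0);
    }

    public static void main(String[] args) {
        double rttMs = 50.0;
        // 4 KB buffer, as in the vertx docs example:
        double capped = maxThroughputBps(4 * 1024, rttMs);
        // Window needed to saturate 1 Gbit/s at 50 ms RTT:
        double needed = bdpBytes(1_000_000_000.0, rttMs);
        System.out.printf("4 KB window over 50 ms RTT: %.0f kbit/s%n", capped / 1000.0);
        System.out.printf("BDP for 1 Gbit/s at 50 ms: %.2f MB%n", needed / 1_000_000.0);
    }
}
```

A 4 KB window over 50 ms caps the connection at roughly 655 kbit/s, while saturating 1 Gbit/s at that RTT needs a window of about 6.25 MB, in line with the numbers above.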

All our tests showed that not setting anything and letting Linux do the job gave the best results.

As I only started digging into this subject to solve production problems, I am not an expert on it. My experience with vertx on Linux servers simply indicates that forcing send/receive buffer sizes can be a perf killer.

Regards,

Thomas.


Tim Fox

Mar 24, 2015, 2:25:34 AM
to ve...@googlegroups.com
On 23/03/15 21:47, Thomas Cataldo wrote:
Well, I tend to disagree, or at least I think the doc should be more explicit about what the trade-off is.

With N connections and your buffers capped to 4k, your JVM will be handling N x 4k at a time, so you can handle a lot of them. What is missing here is that a single connection will also never buffer more than 4k. So if roughly 6.5 MB of window is needed to saturate a 1 Gbit link over a 50 ms RTT, you will never get more than about 630 kbit/s (1024 Mbit / (6500 / 4)) out of your gig link, which is pretty annoying ;-)

All our tests showed that not setting anything and letting Linux do the job gave the best results.

For many people that will be true, and that's why it's the default.

But it won't be true for everyone, which is why we allow it to be configured.

Thomas Cataldo

Mar 24, 2015, 5:25:53 PM
to ve...@googlegroups.com
If the documentation for vertx 2 is on GitHub, could I propose, through a PR, a different wording of those recommendations explaining the pros & cons?
