Each TCP connection allocates memory for its buffers, so to support many connections in limited RAM you may need to reduce the TCP buffer size, e.g.

HttpServer server = vertx.createHttpServer();
server.setSendBufferSize(4 * 1024);
server.setReceiveBufferSize(4 * 1024);
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(new NioEventLoopGroup());
bootstrap.handler(clientFactory);
bootstrap.channel(NioSocketChannel.class);
bootstrap.option(ChannelOption.TCP_NODELAY, true);
bootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000);
bootstrap.option(ChannelOption.SO_SNDBUF, 1048576);  // 1 MB send buffer
bootstrap.option(ChannelOption.SO_RCVBUF, 1048576);  // 1 MB receive buffer
Beginning with Linux 2.6, Mac OS X 10.5, Windows Vista, and FreeBSD 7.0, both sender and receiver autotuning became available, eliminating the need to set the TCP send and receive buffers by hand for each path. However, the maximum buffer sizes are still too small for many high-speed network paths, and must be increased as described on the pages for each operating system.
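For reference, on Linux those autotuning ceilings are controlled by a handful of sysctl keys; a typical way to raise them looks like the following (the 16 MB cap is an example value, adjust to your path's bandwidth-delay product):

```shell
# Raise the hard caps on socket buffer sizes (bytes).
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
# Autotuning ranges for TCP: "min default max" in bytes.
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```

Note that setting SO_SNDBUF/SO_RCVBUF explicitly (as in the Netty snippet above) disables autotuning for that socket, so leaving them unset and raising only the system-wide caps is usually the better option.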
Well, I tend to disagree, or at least I think the doc should be more explicit about what the trade-off is.
With N connections and your buffers capped at 4 KB, your JVM will be holding N x 4 KB at a time, so you can handle a lot of connections. What is missing here is that the TCP window will never grow beyond 4 KB, so if ~6.5 MB of buffer is needed to saturate a 1 Gbit link over a 50 ms RTT connection, you will never get more than ~630 kbit/s (1024 / (6500/4)) out of your gig link, which is pretty annoying ;-)
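The arithmetic above can be checked quickly; this is a sketch (the 1 Gbit / 50 ms figures are the ones assumed in the post), using the rule that throughput is bounded by window size divided by RTT when the socket buffer caps the TCP window:

```java
// Bandwidth-delay product vs. a 4 KB window cap.
public class BdpMath {
    public static void main(String[] args) {
        double linkBitsPerSec = 1_000_000_000.0; // 1 Gbit/s link
        double rttSec = 0.050;                   // 50 ms round trip
        double windowBytes = 4 * 1024;           // 4 KB buffer cap

        // Bytes that must be in flight to keep the link full.
        double bdpBytes = (linkBitsPerSec / 8) * rttSec;

        // Best case with the window capped at 4 KB.
        double throughputBytesPerSec = windowBytes / rttSec;

        System.out.printf("BDP: %.2f MB%n", bdpBytes / 1_000_000);
        System.out.printf("Capped throughput: %.0f KB/s (~%.0f kbit/s)%n",
                throughputBytesPerSec / 1024,
                throughputBytesPerSec * 8 / 1000);
    }
}
```

That gives a BDP of about 6.25 MB and a ceiling of about 80 KB/s (~655 kbit/s) with a 4 KB window, in the same ballpark as the ~630 kbit/s figure above.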
All our tests showed that not setting anything and letting Linux do the job gave the best results.