AsyncHttpClient throws ConnectException on heavy load


Murali

Aug 30, 2016, 2:05:58 PM
to asynchttpclient
I am trying to send 200k HTTP POST requests using AsyncHttpClient. After sending around 50k POST requests, the client starts throwing the exception below:

java.net.ConnectException: Cannot assign requested address: /10.10.33.131:8040
        at org.asynchttpclient.netty.channel.NettyConnectListener.onFailure(NettyConnectListener.java:160)
        at org.asynchttpclient.netty.request.NettyChannelConnector$1.onFailure(NettyChannelConnector.java:103)
        at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:28)
        at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:20)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:514)
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:488)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:427)
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:129)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:239)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1226)
        at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:549)
        at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:534)
        at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.connect(CombinedChannelDuplexHandler.java:494)
        at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
        at io.netty.channel.CombinedChannelDuplexHandler.connect(CombinedChannelDuplexHandler.java:295)
        at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:549)
        at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:534)
        at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
        at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:549)
        at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:534)
        at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:976)
        at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:220)
        at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:168)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:402)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Cannot assign requested address: /10.10.33.131:8040
        at sun.nio.ch.Net.connect0(Native Method)
        at sun.nio.ch.Net.connect(Net.java:454)
        at sun.nio.ch.Net.connect(Net.java:446)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
        at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:208)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:203)
        ... 19 more

System details:
-----------------------------
1) OS: Linux sles11
2) JRE:  openjdk version 1.8.0


Below are the sample client and server details to reproduce the issue:

Client:
--------------
1) SimpleAsyncClient.zip contains the latest binaries and a script file. You can use this bundle to start the client.
2) async-http-client version 2.0.12 and its dependencies are used on the client side.


Server:
-------------
1) Code is shared at: https://github.com/mmandala/SimpleHttpServer. This simple server acknowledges each HTTP POST request.
2) SimpleHttpServer.zip contains the latest jar and a script file to run the server.

Below are the changes made at the system level:
-----------------------------------------------------------------
1) Increased the file descriptor limit to 1,000,000 (10 lakh) on both client and server machines using the command "ulimit -n 1000000".
2) Enabled TIME_WAIT socket reuse on the client via /proc/sys/net/ipv4/tcp_tw_reuse. Example:  echo "1" > /proc/sys/net/ipv4/tcp_tw_reuse
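A related kernel setting worth checking alongside the two above (an editorial addition, not something done in the original test): "Cannot assign requested address" on connect usually means the client ran out of ephemeral ports, and the usable range is itself a tunable:

```shell
# Show the range the kernel allocates local (ephemeral) ports from;
# on many Linux systems the default is roughly 32768-61000, i.e. only
# ~28k outgoing connections per (local IP, remote IP, remote port).
cat /proc/sys/net/ipv4/ip_local_port_range

# Widen the range (run as root). This raises the ceiling toward the
# hard ~64k-ports-per-local-IP limit, so it postpones port exhaustion
# rather than removing it.
echo "1024 65000" > /proc/sys/net/ipv4/ip_local_port_range
```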

Does anybody have an idea why this exception is occurring? Any help is appreciated.

Thanks,
Murali.

Stéphane LANDELLE

Aug 30, 2016, 5:21:49 PM
to asyncht...@googlegroups.com
You're trying to send 200k requests AT THE SAME TIME, hence trying to open 200k TCP connections. A single local IP only has ~64k ephemeral ports to draw from, which is why the kernel eventually refuses with "Cannot assign requested address".
You'll only be able to reach that number of concurrent connections with multiple local IPs, binding connections across all of them.
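A minimal sketch of that multi-IP idea (an editorial addition, not code from this thread): keep the configured local addresses in a list and round-robin new connections across them so each IP's ephemeral ports are drawn down evenly. The rotation below is plain JDK code; hooking it into AHC would mean passing the chosen address when building each request (AHC 2.x exposes setLocalAddress(InetAddress) on its request builder, but verify that against the version in use — the wiring is an assumption here).

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robins over a fixed set of local IPs so that each IP's
// ~64k ephemeral ports are consumed evenly instead of exhausting one.
class LocalAddressRotator {
    private final InetAddress[] addresses;
    private final AtomicInteger counter = new AtomicInteger();

    LocalAddressRotator(String... ips) {
        addresses = new InetAddress[ips.length];
        for (int i = 0; i < ips.length; i++) {
            try {
                addresses[i] = InetAddress.getByName(ips[i]); // literal IPs: no DNS lookup
            } catch (UnknownHostException e) {
                throw new IllegalArgumentException(ips[i], e);
            }
        }
    }

    InetAddress next() {
        // floorMod keeps the index non-negative even after int overflow
        return addresses[Math.floorMod(counter.getAndIncrement(), addresses.length)];
    }
}
```

Each request would then be built with something like client.preparePost(url).setLocalAddress(rotator.next()) — again, check that method name against your AHC version.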

Stéphane Landelle
GatlingCorp CEO


--
You received this message because you are subscribed to the Google Groups "asynchttpclient" group.
To unsubscribe from this group and stop receiving emails from it, send an email to asynchttpclient+unsubscribe@googlegroups.com.
To post to this group, send email to asynchttpclient@googlegroups.com.
Visit this group at https://groups.google.com/group/asynchttpclient.
For more options, visit https://groups.google.com/d/optout.

Murali

Aug 31, 2016, 6:22:17 AM
to asynchttpclient
Thanks for your quick response! I will try multiple local IPs to send the 200k requests, but I am looking for a solution with a single client IP. I looked into the async-http-client code; below are my observations:

The library reuses the Channel (NioSocketChannel) when the HTTP connection is keep-alive, but the underlying TCP socket is closed if the read indicates end-of-stream. The close is initiated in AbstractNioByteChannel.NioByteUnsafe.read():

public final void read() {
    // ...
    int localReadAmount = doReadBytes(byteBuf);
    if (localReadAmount <= 0) {
        byteBuf.release();
        byteBuf = null;
        close = localReadAmount < 0;   // -1 means the peer closed the connection
        break;
    }
    // ...
    if (close) {
        closeOnRead(pipeline);
        close = false;
    }
}

Is there a way to reuse the socket (or override this behavior) so that the next HTTP POST request can be served without opening the socket again? My understanding is that if the channel is reused, the underlying socket should be reused as well. Correct me if I am wrong.

Stéphane LANDELLE

Aug 31, 2016, 6:33:26 AM
to asyncht...@googlegroups.com
You get several things wrong.

First, regarding the piece of code you mention: close is set to localReadAmount < 0, i.e. the peer closed the connection; it doesn't mean that no data could be read (that would be localReadAmount == 0).

Then, the limit you hit is not because connections are not recycled/pooled. They are; AHC pooling is enabled by default.
The problem is that you're trying to open too many connections AT THE SAME TIME.
You are the one in charge of controlling your throughput so that you don't saturate your OS or your remote endpoint. There's no way for AHC to do that properly for you.
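That throughput control can be as small as a counting semaphore. The sketch below (an editorial addition) is generic JDK code with a stand-in send task; with AHC, the release() call would go in the completion handler's success and failure callbacks — an assumption about the wiring, not code from this thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Caps the number of in-flight requests at maxInFlight: acquire a permit
// before sending, release it when the response (or a failure) arrives.
class Throttle {
    private final Semaphore permits;

    Throttle(int maxInFlight) {
        permits = new Semaphore(maxInFlight);
    }

    // send is a stand-in for an async HTTP call; with a real client the
    // release() belongs in the response/error callback, not a finally here.
    void submit(ExecutorService pool, Runnable send) throws InterruptedException {
        permits.acquire();                 // blocks once maxInFlight requests are pending
        pool.execute(() -> {
            try {
                send.run();
            } finally {
                permits.release();         // frees a slot for the next request
            }
        });
    }
}
```

AHC's config also exposes setMaxConnections(...) and setMaxConnectionsPerHost(...) to bound the pool, but (if I recall the 2.x behavior correctly) those reject over-limit connection attempts rather than pacing submission, so a client-side gate like this is still useful.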

Stéphane Landelle
GatlingCorp CEO



Murali

Sep 11, 2016, 7:40:48 AM
to asynchttpclient
Thanks for the response. Yes, the problem is that my test code tries to open too many connections AT THE SAME TIME. If I add a small delay (20 ms) between requests, I don't see the issue; everything works fine.

Is there any option in the library to queue requests, so that a request is sent only after the response to the previous one has been received? That way the underlying sockets/connections could be reused without opening too many.

If there is a way to customize this, please point me to the right place in the library to implement a queuing mechanism. Any help is appreciated.
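For the record (an editorial addition): AHC itself has no built-in submission queue — pooling only reuses idle connections — but the send-next-only-after-the-previous-response behavior described above can be sketched by chaining CompletableFutures, with start.get() standing in for the real async call (AHC 2.x futures can be adapted via toCompletableFuture(), an assumption to verify against your version).

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Serializes async requests: each one starts only after the previous
// one's future completes, so a single pooled connection can be reused.
class RequestChain {
    private CompletableFuture<Void> tail = CompletableFuture.completedFuture(null);

    // start.get() should kick off the real async request and return its
    // future; it is not invoked until the previous request has completed.
    synchronized CompletableFuture<Void> enqueue(Supplier<CompletableFuture<Void>> start) {
        tail = tail.thenCompose(prev -> start.get());
        return tail;
    }
}
```

Note this trades throughput for strict ordering — with 200k requests a bounded-concurrency gate (a semaphore with N permits) is usually a better fit than a fully serial chain.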