The only way I can help would be if you send me a unit test reproducing the
issue. I've never experienced such an issue... Are you sure your
AsyncHandler#onThrowable isn't invoked with some exception (usually it's a
connect exception, produced because the OS is busy sending requests)?
A+
-- Jeanfrancois
> For both Client & Server OS, we configured to allow up to 200K
> connections.
> But still we see the above exception.
> Any thoughts as to how to resolve this issue?
How many file descriptors have you configured on your OS? You usually get
that exception when the OS runs out of file descriptors.
A+
-- Jeanfrancois
>
> Thank you.
>
> Thanks & Regards,
On Tue, Feb 21, 2012 at 4:51 PM, Jeffery Griffith
Hi Jeffery,

Thanks a lot for all your help, I really appreciate it! Coming to the issue we are seeing, I don't think we are running out of ephemeral ports. Using netstat, when I monitor total connections (both ESTABLISHED and TIME_WAIT included), it went up to 30K maximum, so I am sure we can go much higher. But for some reason we are getting this exception very early. Also, at the OS level I have set the file descriptors to 300000 on both client and server, so I am expecting it to go even higher than 64K.

One thing that I noticed is that as many connections as are open, that many threads are also running. Is this the expected behavior, or can we tune Async Http Client better?
https://github.com/sonatype/async-http-client/blob/master/src/main/java/com/ning/http/client/providers/netty/NettyAsyncHttpProviderConfig.java#L53
https://github.com/sonatype/async-http-client/blob/master/src/main/java/com/ning/http/client/AsyncHttpClientConfig.java#L679
https://github.com/sonatype/async-http-client/blob/master/src/main/java/com/ning/http/client/providers/netty/NettyAsyncHttpProviderConfig.java#L43
A+
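On the thread-per-connection question: the Netty provider those links point at is NIO-based, so it should not need one thread per connection; a small pool drives many sockets. As a rough illustration of the underlying mechanism (plain JDK NIO, not AHC itself — class and method names here are my own), a single selector thread can complete many connects:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class OneThreadManySockets {

    // Opens `total` non-blocking client sockets and completes every TCP
    // handshake on the calling thread alone; returns how many connected.
    static int run(int total) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // kernel backlog accepts for us
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        Selector selector = Selector.open();
        for (int i = 0; i < total; i++) {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);
            ch.connect(new InetSocketAddress("127.0.0.1", port));
            ch.register(selector, SelectionKey.OP_CONNECT);
        }

        int connected = 0;
        while (connected < total) {
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                SocketChannel ch = (SocketChannel) key.channel();
                if (key.isConnectable() && ch.finishConnect()) {
                    connected++;
                    key.cancel();
                    ch.close();
                }
            }
            selector.selectedKeys().clear();
        }
        selector.close();
        server.close();
        return connected;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(run(5) + " connections handled by one thread");
    }
}
```

So if you really do see one live thread per open connection, it is worth checking which ExecutorService the client was built with rather than assuming it scales that way by design.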
Can you describe the lifecycle of each request? I see you have a
request timeout, but I didn't see its value in the example. However,
based on the name of your handler, LongPollResponseHandler, I take it
your goal is to establish >200k ESTABLISHED connections
*concurrently*?
Also, I don't see your OS mentioned. Are you using Linux? The
available port range can be found with:
% cat /proc/sys/net/ipv4/ip_local_port_range
It could be that you have this set to around 32k.
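To put rough numbers on it (assuming the common Linux default range of 32768-61000 — check your own /proc value, yours may differ):

```java
public class PortMath {
    // Number of local ports the kernel can hand out for ephemeral binds.
    static int ephemeralCount(int lo, int hi) {
        return hi - lo + 1;
    }

    public static void main(String[] args) {
        int portSpace = 1 << 16;     // 65536 possible TCP port numbers total
        int lo = 32768, hi = 61000;  // a common Linux default for ip_local_port_range
        System.out.println("total port space:      " + portSpace);
        System.out.println("default ephemeral set: " + ephemeralCount(lo, hi));
    }
}
```

With that default you get under 30k outgoing ports, which lines up with the ~30K ceiling you observed in netstat.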
Jean-Francois may correct me, but the last time I saw the connect
code, the client does an ephemeral bind (does not bind to any
particular IP or port) and therefore I don't see how one could ever
reach more than 64k without exhausting the port range.
--jg
On Wed, Feb 22, 2012 at 9:16 AM, Jeanfrancois Arcand
TCP port numbers are 16-bit, so it is very much a hard limit, not just
in the library, Java, or the OS.
But it only limits connections to a specific IP+port combination. Also,
do you really need that many connections? HTTP 1.1 reuses connections
(unless explicitly disabled), and typically only a couple should be
concurrently used even for a large number of requests.
-+ Tatu +-
Tatu, I think the difference here is that Mani wants to do long polls
on many connections. Are the destinations all different IPs,
Mani?
I've gone round and round this problem with my (netty-based) TCP
client also which needs massive scaling for outgoing connections. The
problem is that, even with multiple IPs, Linux flattens its ephemeral
port space into a single table irrespective of the IP address and you
very quickly run out of ports if you allow ephemeral binding. The only
solution I have found for this is to configure my NIC for multiple IPs
and do the IP/port binding myself outside of the defined ephemeral
range. Adding this into the ahc client (or any other) might be quite a
challenge.
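For reference, the manual source-binding I'm describing looks roughly like this in plain JDK code (the class and method names are mine, and the local IP/port are placeholders — a real setup would cycle through ports outside the configured ephemeral range, per local IP):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ExplicitLocalBind {

    // Opens an outgoing connection whose *source* IP and port we choose
    // ourselves, instead of letting the kernel pick from the shared
    // ephemeral table. Returns the local port actually used.
    static int connectFrom(InetAddress localIp, int localPort,
                           InetSocketAddress dest) throws IOException {
        Socket s = new Socket();
        // localPort = 0 still lets the kernel choose; pass a concrete
        // port outside ip_local_port_range to manage the space yourself.
        s.bind(new InetSocketAddress(localIp, localPort));
        s.connect(dest);
        int used = s.getLocalPort();
        s.close();
        return used;
    }

    public static void main(String[] args) throws IOException {
        // Demo against a throwaway local server; port 0 = kernel-chosen,
        // just to show the bind-before-connect sequence.
        ServerSocket server = new ServerSocket(0, 50, InetAddress.getLoopbackAddress());
        int p = connectFrom(InetAddress.getLoopbackAddress(), 0,
                new InetSocketAddress(InetAddress.getLoopbackAddress(),
                        server.getLocalPort()));
        System.out.println("outgoing connection bound to local port " + p);
        server.close();
    }
}
```

The awkward part is everything around it: tracking which (local IP, port) pairs are in use, handling TIME_WAIT reuse, and plumbing the chosen bind address into the client library, which is why retrofitting it into ahc is the hard bit.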
--jg
Ah yes, true. Not quite the same -- although it is also one reason for
considering alternatives to long-polling (or, in general, to
long-lived connections).
-+ Tatu +-