Max Connections to Host (http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost)


Suraj Narkhede

unread,
Aug 13, 2014, 9:11:03 PM8/13/14
to golan...@googlegroups.com
I am new to Go and am trying to implement an http server using net/http which internally calls n http services using goroutines; the responses from the n http services are written into a buffered channel, and then the main http server responds.

The problems I am finding are:
- If MaxIdleConnsPerHost is low, then each connection beyond that limit is not persistent. This results in a lot of connections being established and torn down when an already established connection could have been reused (I understand that it exceeds the MaxIdleConnsPerHost limit).
- If I set MaxIdleConnsPerHost to a very high number, then I may exhaust the port limit, since n can be high. Creating virtual network interfaces is an option, but I could not find how to bind an IP to the http.Client.

I think that in highly network-intensive tasks it may be better not to immediately close a connection just because it is above MaxIdleConnsPerHost, but to try to reuse it instead: perhaps a basic algorithm that lets the pool grow to a stable limit and then closes connections once they are no longer needed. Or please suggest whether such functionality can be achieved with the current interface.




James Bardin

unread,
Aug 13, 2014, 9:44:08 PM8/13/14
to golan...@googlegroups.com


On Wednesday, August 13, 2014 9:11:03 PM UTC-4, Suraj Narkhede wrote:

- If I set MaxIdleConnsPerHost to a very high number, then I may exhaust the port limit, since n can be high. Creating virtual network interfaces is an option, but I could not find how to bind an IP to the http.Client.


I think this is really the way to go. If the number you need for MaxIdleConnsPerHost to accommodate your peak concurrency is greater than the number of ports available, make more ports!

You can set the local address in a net.Dialer, which can provide the Dial function for your client's http.Transport.


Suraj Narkhede

unread,
Aug 14, 2014, 12:57:03 AM8/14/14
to golan...@googlegroups.com
Thanks for the links - will check that.

The problem with creating virtual network interfaces is that the communication can be with the outside world, and then you need public IPs.

I feel that the use of MaxIdleConnsPerHost should be a bit different. At peak concurrency, when the connections exceed this limit, that generally means the connections are not idle, so new connections should not be immediately closed but cached instead. Immediate closing degrades performance, because we have to re-establish a connection to the same host right away, and too many connections end up in TIME_WAIT, rapidly depleting the available ports. I think the optimization should be in the global connection pool: hosts whose connections are idle can have them closed, while a host that is in demand right now should be allowed to scale up to the permitted limits by closing the idle connections to other hosts.

Tamás Gulácsi

unread,
Aug 14, 2014, 6:27:15 AM8/14/14
to golan...@googlegroups.com
Why not create such pooling yourself?

James Bardin

unread,
Aug 14, 2014, 10:59:21 AM8/14/14
to golan...@googlegroups.com
Are you certain that this is adversely affecting you? It seems like only a slight optimization of the current behavior, with a much more complicated implementation (not to mention, it would make MaxIdleConnsPerHost not the hard limit its name implies).

That limit is specifically for "idle" connections. If MaxIdleConnsPerHost is 10 and you have 1000 active connections, and 10 are released, you should have exactly 10 idle and waiting and 990 active. No keep-alive connection is immediately closed if there are no idle connections already.

Suraj Narkhede

unread,
Aug 14, 2014, 4:05:12 PM8/14/14
to golan...@googlegroups.com


On Thursday, August 14, 2014 7:59:21 AM UTC-7, James Bardin wrote:
Are you certain that this is adversely affecting you? It seems like only a slight optimization of the current behavior, with a much more complicated implementation (not to mention, it would make MaxIdleConnsPerHost not the hard limit its name implies).

That limit is specifically for "idle" connections. If MaxIdleConnsPerHost is 10 and you have 1000 active connections, and 10 are released, you should have exactly 10 idle and waiting and 990 active. No keep-alive connection is immediately closed if there are no idle connections already.

James, that's what my expectation is, and if it works that way, that's great. But my observation is different: during stress testing, while the total connections stay below MaxIdleConnsPerHost, the number of connections is constant; but as I increase the concurrency, once the total connections from the server to the outside services exceed MaxIdleConnsPerHost, the connection count explodes, as most of them (above the limit) go into TIME_WAIT, and it never becomes stable. Code: http://play.golang.org/p/yK6Knh8RkE
The backend services are simple: accept the request, sleep for 80ms, and send the response.

I am using wrk for stress testing. After the initial round of testing, the connections were constant at 10000:

concurrency a:
ESTABLISHED => 10000
TIME_WAIT => 0

concurrency 2a (by starting another wrk client):
ESTABLISHED => 10000 -> 11000 (stable around here)
TIME_WAIT => increasing -> 1939 -> 3714 -> 4156 -> ... -> 15066 -> ... 42000 -> ...

My understanding is that after some time most of the connections should be ESTABLISHED and TIME_WAIT should be very low, since the connections are not idle while the stress test is running.
Please let me know if there is an issue in the code, or if this is the expected behavior.

Suraj Narkhede

unread,
Aug 14, 2014, 4:25:57 PM8/14/14
to golan...@googlegroups.com
Came across https://code.google.com/p/go/issues/detail?id=6785. A similar issue has already been reported.

James Bardin

unread,
Aug 14, 2014, 4:44:08 PM8/14/14
to Suraj Narkhede, golan...@googlegroups.com
On Thu, Aug 14, 2014 at 4:25 PM, Suraj Narkhede <suraj...@gmail.com> wrote:
Came across - https://code.google.com/p/go/issues/detail?id=6785. Similar issue is already reported.




Ah, that makes sense.
Luckily that's not likely to come up in real world use for most people.

Jakob Borg

unread,
Aug 15, 2014, 5:05:31 AM8/15/14
to James Bardin, Suraj Narkhede, golan...@googlegroups.com
Indeed. I'm the original reporter; fixing it looked slightly
nontrivial so I decided to let it slide for the time being, and as far
as I know the problem has never popped up in production, just when
attempting local benchmarks.

//jb

Suraj Narkhede

unread,
Aug 15, 2014, 5:53:29 PM8/15/14
to golan...@googlegroups.com, j.ba...@gmail.com, suraj...@gmail.com
Thanks Jakob!

Though I am currently just prototyping the system, I think this issue can easily come up in production if:
- the backend services are slow;
- you have to communicate with many backend services and hosts.

The observation is: as soon as MaxIdleConnsPerHost is exceeded, the subsequent connections are not persistent, resulting in a lot of connections in TIME_WAIT.

Even a basic test case with MaxIdleConnsPerHost = 1 and stress testing with concurrency 2 results in > 5000 connections in TIME_WAIT.
Setup:
- Only one backend service is called. It responds after 5 ms of sleep.
- The main HTTP server responds either when a timeout occurs (10ms) or when the response is received from the backend service.
- Concurrency test: weighttp -n 10000 -c 2 -t 1 -k "http://127.0.0.1:8084/".

In practice, this test case should be handled by 2 persistent connections, yet it results in > 5000 connections in TIME_WAIT.
When I changed MaxIdleConnsPerHost to 4, keeping everything else the same, the total number of connections established to the backend service was 4.

Suraj Narkhede

unread,
Aug 15, 2014, 6:18:51 PM8/15/14
to golan...@googlegroups.com, j.ba...@gmail.com, suraj...@gmail.com
Reported the issue at https://code.google.com/p/go/issues/detail?id=8536, and also linked https://code.google.com/p/go/issues/detail?id=6785 there.
I think there is a difference between the two: the earlier one asks for a maximum limit on the number of connections, while this one asks for better utilization of the established connections.

San

unread,
Aug 19, 2014, 5:27:59 PM8/19/14
to golan...@googlegroups.com
On Thursday, August 14, 2014 8:11:03 AM UTC+7, Suraj Narkhede wrote:
I am new to Go and am trying to implement an http server using net/http which internally calls n http services using goroutines; the responses from the n http services are written into a buffered channel, and then the main http server responds.
- If I set MaxIdleConnsPerHost to a very high number, then I may exhaust the port limit, since n can be high. Creating virtual network interfaces is an option, but I could not find how to bind an IP to the http.Client.

This may not be related to the issue in Go, but you may want to look at CloudFlare's good explanation of the TCP 4-tuple.
It allows you to reuse an outgoing port across the n http servers you have, just like an incoming port.