
OT: How to tell an HTTP client to limit parallel connections?


Grant Edwards

Nov 8, 2013, 12:25:30 PM
Yes, this is off-topic, but after a fair amount of Googling and
searching in the "right" places, I'm running out of ideas.

I've got a very feeble web server. The crypto handshaking involved in
opening an https: connection takes 2-3 seconds. That would be fine if
a browser opened a single connection and then sent a series of
requests on that connection to load the various elements on a page.

But that's not what browsers do. They all seem to open a whole handful
of connections (often as many as 8-10) and try to load all the page's
elements in parallel. That turns what would be a 3-4 second page load
time (using a single connection) into a 20-30 second page load time.
Even with plaintext http: connections, the multi-connection page load
time is slower than the single-connection load time, but not by as
large a factor.

Some browsers have user-preference settings that limit the max number
of simultaneous connections to a single server (IIRC the RFCs suggest
a max of 4, but most browsers seem to default to a max of 8-16).

What I really need is an HTTP header or meta-tag or something that I
can use to tell clients to limit themselves to a single connection.

I haven't been able to find such a thing, but I'm hoping I've
overlooked something...

--
Grant Edwards <grant.b.edwards at gmail.com>
Yow! INSIDE, I have the same personality disorder as LUCY RICARDO!!

donarb

Nov 8, 2013, 12:39:14 PM
There's an Apache module called mod_limitipconn that does just what you are asking. I've never used it and can't vouch for it. I'm not aware of anything like this for other servers like Nginx.

http://dominia.org/djao/limitipconn2.html

Skip Montanaro

Nov 8, 2013, 12:39:20 PM
to Python
> What I really need is an HTTP header or meta-tag or something that I
> can use to tell clients to limit themselves to a single connection.
>
> I haven't been able to find such a thing, but I'm hoping I've
> overlooked something...

That will only go so far. Suppose you tell web browsers "no more than
3 connections", then get hit by 30 nearly simultaneous, but separate
clients. Then you still wind up allowing up to 90 connections.

There should be a parameter in your web server's config file to limit
the number of simultaneously active threads or processes. It's been a
long time for me, and you don't mention what brand of server you are
running, but ISTR that Apache has/had such parameters.

(Also, while this is off-topic for comp.lang.python, most people here
are a helpful bunch, and recognizing that it is off-topic, will want
to reply off-list. You don't give them that option with
"inv...@invalid.invalid" as an email address.)

Skip

Grant Edwards

Nov 8, 2013, 1:01:09 PM
On 2013-11-08, Skip Montanaro <sk...@pobox.com> wrote:
>> What I really need is an HTTP header or meta-tag or something that I
>> can use to tell clients to limit themselves to a single connection.
>>
>> I haven't been able to find such a thing, but I'm hoping I've
>> overlooked something...
>
> That will only go so far. Suppose you tell web browsers "no more than
> 3 connections", then get hit by 30 nearly simultaneous, but separate
> clients.

In practice, that doesn't happen. These servers are small, embedded
devices on internal networks. If it does happen, then those clients
are all going to have to queue up and wait.

> Then you still wind up allowing up to 90 connections.

The web server is single threaded. It only handles one connection at
a time, and I think the TCP socket only queues up a couple. But that
doesn't stop browsers from trying to open 8-10 https connections at a
time (which then eventually get handled serially).

> There should be a parameter in your web server's config file to limit
> the number of simultaneously active threads or processes.

The server is single-threaded by design, so it is not capable of
handling more than one connection at a time. The connections are
never actually in "parallel" except in the imagination of the browser
writers.

> It's been a long time for me, and you don't mention what brand of
> server you are running, but ISTR that Apache has/had such parameters.

FWIW, it's an old version of the GoAhead web server:

http://embedthis.com/products/goahead/

> (Also, while this is off-topic for comp.lang.python, most people here
> are a helpful bunch, and recognizing that it is off-topic, will want
> to reply off-list. You don't give them that option with
> "inv...@invalid.invalid" as an email address.)

Yea, trying to hide e-mail addresses from automated spammers is
probably futile these days. I'll have to dig into my slrn config
file.

--
Grant Edwards <grant.b.edwards at gmail.com>
Yow! Somewhere in DOWNTOWN BURBANK a prostitute is OVERCOOKING a LAMB CHOP!!

Chris Angelico

Nov 8, 2013, 1:16:49 PM
to pytho...@python.org
On Sat, Nov 9, 2013 at 4:25 AM, Grant Edwards <inv...@invalid.invalid> wrote:
> I've got a very feeble web server. The crypto handshaking involved in
> opening an https: connection takes 2-3 seconds. That would be fine if
> a browser opened a single connection and then sent a series of
> requests on that connection to load the various elements on a page.
>
> But that's not what browsers do. They all seem to open a whole handful
> of connections (often as many as 8-10) and try to load all the page's
> elements in parallel.

Are you using HTTP 1.1 with connection reuse? Check that both your
client(s) and your server are happy to use 1.1, and you may be able to
cut down the number of parallel connections.
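[Connection reuse is easy to check from the client side with nothing but the stdlib. A minimal sketch (the handler and local server are made up for the demo; the key detail is that HTTP/1.1 keep-alive requires a Content-Length or chunked body, otherwise the server must close the socket to delimit the response):]

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # advertise 1.1 so keep-alive is the default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Without a Content-Length (or chunked encoding) the server has to
        # close the connection to mark the end of the body, killing reuse.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass                        # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
conn.getresponse().read()           # must drain the body before reusing
sock1 = conn.sock
conn.request("GET", "/")            # second request on the same connection
resp = conn.getresponse()
resp.read()
sock2 = conn.sock
print(sock1 is sock2)               # True when the connection was reused
server.shutdown()
```

[If the second request forces a fresh socket, something in the chain is answering with `Connection: close` or omitting the body length.]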

Alternatively, since fixing it at the browser seems to be hard, can
you do something ridiculously stupid like... tunnelling insecure HTTP
over SSH? That way, you establish the secure tunnel once, and
establish a whole bunch of connections over it - everything's still
encrypted, but only once. As an added bonus, if clients are requesting
several pages serially (user clicks a link, views another page), that
can be done on the same connection as the previous one, cutting crypto
overhead even further.

ChrisA

Grant Edwards

Nov 8, 2013, 2:20:37 PM
On 2013-11-08, Chris Angelico <ros...@gmail.com> wrote:
> On Sat, Nov 9, 2013 at 4:25 AM, Grant Edwards <inv...@invalid.invalid> wrote:
>> I've got a very feeble web server. The crypto handshaking involved in
>> opening an https: connection takes 2-3 seconds. That would be fine if
>> a browser opened a single connection and then sent a series of
>> requests on that connection to load the various elements on a page.
>>
>> But that's not what browsers do. They all seem to open a whole handful
>> of connections (often as many as 8-10) and try to load all the page's
>> elements in parallel.
>
> Are you using HTTP 1.1 with connection reuse?

Yes. And several years ago when I first enabled that feature in the
server, I verified that some browsers were sending multiple requests
per connection (though they still often attempted to open multiple
connections). More recent browsers seem much more impatient and are
determined to open as many simultaneous connections as possible.

> Check that both your client(s) and your server are happy to use 1.1,
> and you may be able to cut down the number of parallel connections.

> Alternatively, since fixing it at the browser seems to be hard, can
> you do something ridiculously stupid like... tunnelling insecure HTTP
> over SSH?

Writing code to implement tunnelling via the ssh protocol is probably
out of the question (resource-wise).

If it were possible, how is that supported by browsers?

--
Grant Edwards <grant.b.edwards at gmail.com>
Yow! I was making donuts and now I'm on a bus!

Chris Angelico

Nov 8, 2013, 2:39:44 PM
to pytho...@python.org
On Sat, Nov 9, 2013 at 6:20 AM, Grant Edwards <inv...@invalid.invalid> wrote:
> On 2013-11-08, Chris Angelico <ros...@gmail.com> wrote:
>> Are you using HTTP 1.1 with connection reuse?
>
> Yes. And several years ago when I first enabled that feature in the
> server, I verified that some browsers were sending multiple requests
> per connection (though they still often attempted to open multiple
> connections). More recent browsers seem much more impatient and are
> determined to open as many simultaneous connections as possible.

Yeah, but at least it's cut down from one connection per object to
some fixed number. But you've already done that.

>> Alternatively, since fixing it at the browser seems to be hard, can
>> you do something ridiculously stupid like... tunnelling insecure HTTP
>> over SSH?
>
> Writing code to implement tunnelling via the ssh protocol is probably
> out of the question (resource-wise).
>
> If it were possible, how is that supported by browsers?

You just set your hosts file to point the server's name to localhost
(or simply tell your browser to go to http://localhost/ if that's
easier), and have an SSH tunnel like:

ssh -L 80:localhost:80 us...@some.server.whatever.it.is

Browser and server both think they're working with unencrypted HTTP on
loopback, but in between there's an encrypted link. Alternatively, if
you can point your browser to http://localhost:8000/ you can work with
a non-privileged port locally, which may be of value. The user at that
host needn't have much of interest as its shell - just something that
says "Press Enter to disconnect" and waits for a newline - as long as
it's configured to permit tunnelling (which is the default AFAIK). So
effectively, no browser support is needed.

The downside is that you need to consciously establish the secure
link. If you don't mind having the traffic travel the "last mile"
unencrypted, you could have a single long-term SSH tunnel set up, and
everyone connects via that; similarly, if your embedded server has a
trusted link to another box with a bit more grunt, you could end the
SSH tunnel there and run unencrypted for the last little bit. Anything
can be done, it's just a question of what'd be useful.

But like I said, it's a ridiculously stupid suggestion. Feel free to
discard it as such. :)

ChrisA

Nick Cash

Nov 8, 2013, 2:42:07 PM
to pytho...@python.org
>What I really need is an HTTP header or meta-tag or something that I can use to tell clients to limit themselves to a single connection.

I don't think such a thing exists... but you may be able to solve this creatively:

A) Set up a proxy server that multiplexes all of the connections into a single one. A reverse proxy could even handle the SSL and alleviate the load on the embedded server. Although it sounds like maybe this isn't an option for you?

OR

B) Redesign the page it generates to need fewer requests (ideally, only one): inline CSS/JS, data: url images, etc. It's not the prettiest solution, but it could work.
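[The data: URI part of option B can be sketched in a few lines of Python; the helper name here is made up for illustration, and any file whose MIME type `mimetypes` can guess works:]

```python
import base64
import mimetypes

def to_data_uri(path):
    """Encode a file as a data: URI for inlining into HTML.

    (Hypothetical helper -- the name and API are invented for this
    sketch; the data: scheme itself is standard.)
    """
    mime, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return "data:%s;base64,%s" % (mime or "application/octet-stream", payload)
```

[The result drops straight into markup, e.g. `<img src="data:image/png;base64,...">`, so the whole page arrives in a single request at the cost of roughly 33% size overhead from base64.]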

-Nick Cash

Ian Kelly

Nov 8, 2013, 3:13:11 PM
to Python
No such header exists, that I'm aware of. RFC 2616 simply recommends
limiting clients to two connections per server, but modern browsers no
longer follow that recommendation and typically use 4-6 instead.

Do you really need to send all the page resources over HTTPS? Perhaps
you could reduce some of the SSL overhead by sending images and
stylesheets over a plain HTTP connection instead.

Grant Edwards

Nov 8, 2013, 3:48:59 PM
On 2013-11-08, Chris Angelico <ros...@gmail.com> wrote:
> On Sat, Nov 9, 2013 at 6:20 AM, Grant Edwards <inv...@invalid.invalid> wrote:
>> On 2013-11-08, Chris Angelico <ros...@gmail.com> wrote:
>>> Are you using HTTP 1.1 with connection reuse?
>>
>> Yes. And several years ago when I first enabled that feature in the
>> server, I verified that some browsers were sending multiple requests
>> per connection (though they still often attempted to open multiple
>> connections). More recent browsers seem much more impatient and are
>> determined to open as many simultaneous connections as possible.
>
> Yeah, but at least it's cut down from one connection per object to
> some fixed number. But you've already done that.
>
>>> Alternatively, since fixing it at the browser seems to be hard, can
>>> you do something ridiculously stupid like... tunnelling insecure HTTP
>>> over SSH?
>>
>> Writing code to implement tunnelling via the ssh protocol is probably
>> out of the question (resource-wise).
>>
>> If it were possible, how is that supported by browsers?
>
> You just set your hosts file to point the server's name to localhost
> [...]

Ah, I see.

All I have control over is the server. I have no influence over the
client side of things other than what I can do in the HTTP server.

--
Grant Edwards <grant.b.edwards at gmail.com>
Yow! Are you selling NYLON OIL WELLS?? If so, we can use TWO DOZEN!!

Chris Angelico

Nov 8, 2013, 4:01:31 PM
to pytho...@python.org
On Sat, Nov 9, 2013 at 7:48 AM, Grant Edwards <inv...@invalid.invalid> wrote:
> All I have control over is the server. I have no influence over the
> client side of things other than what I can do in the HTTP server.

Hmm. Then the only way I can think of is a reverse proxy that can
queue, handle security, or whatever else is necessary. Good luck. It's
not going to be easy, I think. In fact, easiest is probably going to
be beefing up the hardware.
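[For anyone who can put a box in front of the device (Grant can't, as it turns out), the reverse-proxy idea might look roughly like this nginx sketch -- the addresses, certificate paths, and limits are hypothetical, but the directives are stock nginx. The front end terminates TLS cheaply, caps per-client connections, and funnels requests to the device over one reused plaintext connection:]

```nginx
# Hypothetical front end for a feeble embedded HTTP server.
limit_conn_zone $binary_remote_addr zone=perip:1m;

upstream device {
    server 192.0.2.10:80;        # the embedded device, plain HTTP
    keepalive 1;                 # hold one reused upstream connection
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/device.crt;
    ssl_certificate_key /etc/nginx/device.key;

    location / {
        limit_conn perip 2;          # at most 2 connections per client IP
        proxy_http_version 1.1;
        proxy_set_header Connection ""; # let the upstream keepalive work
        proxy_pass http://device;
    }
}
```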

Oooh.... crazy thought just struck me. What's your source of entropy?
Is it actually the mathematical overhead of cryptography that's taking
2-3 seconds, or are your connections blocking for lack of entropy? You
might be able to add another source of random bits, or possibly reduce
security a bit by allowing less-secure randomness from /dev/urandom.

ChrisA

Grant Edwards

Nov 8, 2013, 4:02:01 PM
On 2013-11-08, Nick Cash <nick...@npcinternational.com> wrote:

>>What I really need is an HTTP header or meta-tag or something that I
>>can use to tell clients to limit themselves to a single connection.
>
> I don't think such a thing exists...

Yea, that's pretty much the conclusion I had reached.

> but you may be able to solve this creatively:
>
> A) Set up a proxy server that multiplexes all of the connections into
> a single one. A reverse proxy could even handle the SSL and alleviate
> the load on the embedded server. Although it sounds like maybe this
> isn't an option for you?

Indeed it isn't. These "servers" are embedded devices installed on
customer-owned networks, where I can do nothing other than what can be
accomplished by changes to the firmware on the server.

> B) Redesign the page it generates to need fewer requests (ideally,
> only one): inline CSS/JS, data: url images, etc. It's not the
> prettiest solution, but it could work.

That is something I might be able to do something about. I could
probably add support to the server for some sort of server-side
include feature. [or, I could pre-process the html files with
something like m4 before burning them into ROM.] That would take care
of the css and js nicely. Inlining the images would take a little
more work, but should be possible as well.

I have vague memories of inline image data being poorly supported by
browsers, but that was probably many years ago...
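[The build-time preprocessing Grant describes might look something like this rough sketch. It is regex-based, so it only suits simple pages whose markup is known in advance (a real tool would use an HTML parser); all file names in it are hypothetical:]

```python
import base64
import mimetypes
import re
from pathlib import Path

def inline_resources(html, root):
    """Rewrite a page so it needs only one request: external stylesheets
    and scripts become inline blocks, images become data: URIs."""
    root = Path(root)

    def css(m):
        return "<style>%s</style>" % (root / m.group(1)).read_text()

    def js(m):
        return "<script>%s</script>" % (root / m.group(1)).read_text()

    def img(m):
        path = root / m.group(1)
        mime = mimetypes.guess_type(str(path))[0] or "application/octet-stream"
        data = base64.b64encode(path.read_bytes()).decode("ascii")
        return '<img src="data:%s;base64,%s"' % (mime, data)

    # Naive patterns: assume double-quoted, relative href/src attributes.
    html = re.sub(r'<link[^>]*href="([^"]+\.css)"[^>]*>', css, html)
    html = re.sub(r'<script[^>]*src="([^"]+\.js)"[^>]*></script>', js, html)
    html = re.sub(r'<img src="([^"]+\.(?:png|gif|jpg))"', img, html)
    return html
```

[Run over the page tree before burning the images into ROM, this plays the role of the m4 pass Grant mentions, with no server-side support needed at all.]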

Thanks for the suggestion!

--
Grant Edwards <grant.b.edwards at gmail.com>
Yow! ONE LIFE TO LIVE for ALL MY CHILDREN in ANOTHER WORLD all THE DAYS OF OUR LIVES.

Grant Edwards

Nov 8, 2013, 4:05:44 PM
On 2013-11-08, Ian Kelly <ian.g...@gmail.com> wrote:

> Do you really need to send all the page resources over HTTPS?

Probably not, but it's not my decision. The customer/client makes
that decision.

> Perhaps you could reduce some of the SSL overhead by sending images
> and stylesheets over a plain HTTP connection instead.

In theory, that could work, but some customers require use of
encryption.

--
Grant Edwards <grant.b.edwards at gmail.com>
Yow! Did YOU find a DIGITAL WATCH in YOUR box of VELVEETA?

Grant Edwards

Nov 8, 2013, 4:14:44 PM
On 2013-11-08, Chris Angelico <ros...@gmail.com> wrote:
> On Sat, Nov 9, 2013 at 7:48 AM, Grant Edwards <inv...@invalid.invalid> wrote:
>> All I have control over is the server. I have no influence over the
>> client side of things other than what I can do in the HTTP server.
>
> Hmm. Then the only way I can think of is a reverse proxy that can
> queue, handle security, or whatever else is necessary. Good luck. It's
> not going to be easy, I think. In fact, easiest is probably going to
> be beefing up the hardware.
>
> Oooh.... crazy thought just struck me. What's your source of entropy?
> Is it actually the mathematical overhead of cryptography that's taking
> 2-3 seconds,

Yes. AFAICT, it is. Some of the key-exchange options are pretty
taxing. I can speed things up by about a factor of 4 by disabling the
key-exchange algorithms that have the highest overhead, but those are
the algorithms that the SSL clients seem to prefer. I'm reluctant to
force them further down their preference list, lest I end up not being
able to support some clients.

> or are your connections blocking for lack of entropy?

Nope. The crypto libraries we're using don't do that. I'm not
entirely happy with the entropy generation used. I wish I had more
sources of "real" randomness, but at least they don't block.

> You might be able to add another source of random bits, or possibly
> reduce security a bit by allowing less-secure randomness from
> /dev/urandom.

It's not a Unix-like OS, but that's more or less what's happening.

--
Grant Edwards <grant.b.edwards at gmail.com>
Yow! I'm sitting on my SPEED QUEEN ... To me, it's ENJOYABLE ... I'm WARM ... I'm VIBRATORY ...

Chris Angelico

Nov 8, 2013, 4:29:03 PM
to pytho...@python.org
On Sat, Nov 9, 2013 at 8:14 AM, Grant Edwards <inv...@invalid.invalid> wrote:
>> or are your connections blocking for lack of entropy?
>
> Nope. The crypto libraries we're using don't do that. I'm not
> entirely happy with the entropy generation used. I wish I had more
> sources of "real" randomness, but at least they don't block.
>
>> You might be able to add another source of random bits, or possibly
>> reduce security a bit by allowing less-secure randomness from
>> /dev/urandom.
>
> It's not a Unix-like OS, but that's more or less what's happening.

And pop goes another theory. I knew you'd know what I mean by
/dev/urandom, regardless of the name you'd actually reference it by.
Oh well, worth a shot.

ChrisA