Newbie Questions About Waitress Channels


Cooper Baird

Oct 5, 2020, 9:16:00 PM
to pylons-discuss
I am starting to use Waitress, and I am trying to understand how channels and the backlog work, so forgive me for my ignorance if I'm not understanding this correctly. Let's say, hypothetically, that I am using all of the default settings (so 100 connection limit, 1024 backlog capacity, 4 threads, etc.), and that 100 users, all using HTTP/1.1 clients, go to the site at once and begin browsing.

Does this mean that any additional users (past the 100) who try to browse the site will hit an error or a connection timeout, since the first 100 users fill up the channel capacity of 100 (and, being HTTP/1.1 clients, all their requests will be served over the same channel, keeping it open)? If so, does that mean anyone past those initial 100 users will have to wait somewhere between 30s (cleanup interval) and 120s (channel timeout) to be able to browse? Or is this where the backlog comes in, and channels can somehow be reused between users/clients?

I apologize if that didn't all make sense. I can clarify anything that was unclear in my thought process/questioning.

Michael Merickel

Oct 5, 2020, 9:23:25 PM
to pylons-...@googlegroups.com
The connection limit dictates how many individual TCP connections waitress will handle at a time; while those are alive (until the client hangs up or the idle channel timeout fires), no additional connections will be accepted. The backlog is a signal to the OS not to outright reject connections even when waitress is not yet willing to handle them.

From that list of open connections, waitress will handle requests based on the number of threads, so at most that many requests are processed concurrently.
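To make that concrete, here is a toy model of what happens to one more incoming connection under the defaults. The function name and the decision logic are my own illustration of the behavior described above, not waitress internals:

```python
# Hypothetical model of how one more incoming TCP connection is treated
# under waitress's defaults; illustrative only, not waitress source code.

def classify_connection(active_channels: int, os_backlog_used: int,
                        connection_limit: int = 100,
                        backlog: int = 1024) -> str:
    """Return what happens to one additional incoming connection.

    - Below connection_limit: waitress accepts it as a new channel.
    - At the limit: waitress stops calling accept(), but the OS still
      completes the TCP handshake and queues it, up to `backlog`.
    - Backlog also full: the kernel refuses or drops the attempt.
    """
    if active_channels < connection_limit:
        return "accepted"   # becomes a channel; worker threads serve its requests
    if os_backlog_used < backlog:
        return "queued"     # held by the OS until waitress accepts again
    return "refused"        # client sees a connection error or timeout

# With the defaults, user #101 is queued by the OS, not turned away:
print(classify_connection(active_channels=100, os_backlog_used=0))  # → queued
```

So in the scenario from the original question, users beyond the first 100 sit in the OS backlog rather than erroring out immediately; they are accepted as channels free up.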


--
You received this message because you are subscribed to the Google Groups "pylons-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pylons-discus...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/pylons-discuss/b9870007-07ea-4e25-bbd0-266e6d05bac2n%40googlegroups.com.

Cooper Baird

Oct 5, 2020, 9:34:23 PM
to pylons-discuss
Awesome, that clarifies my questions. Thanks! I'm trying to get a sense of what I should set the connection limit to on a Heroku 1x dyno. I ran ulimit -a within the dyno using heroku run and saw a value of 10000 for the maximum number of file descriptors. 100 does seem very conservative, as mentioned in the documentation. I don't want to set this value to something unsafe, but I would like to maximize the number of open connections per dyno. Do you have any advice on what I should set that value to? I don't anticipate needing to support > 100 connections at once any time soon, but I would like to plan ahead.
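For what it's worth, one back-of-the-envelope way to derive a ceiling from that ulimit value is to subtract headroom for the descriptors the rest of the app needs. The helper name and the 200-descriptor headroom below are my own illustrative guesses, not a recommendation from the waitress docs:

```python
import resource  # stdlib, Unix-only; exposes the same limits as `ulimit`


def safe_connection_limit(soft_fd_limit: int, reserved_fds: int = 200) -> int:
    """Pick a connection_limit that leaves headroom below the fd limit.

    Each open channel costs at least one file descriptor, and the app
    itself needs descriptors for log files, database connections, and
    outbound sockets, so `reserved_fds` is held back. Never go below
    waitress's default of 100.
    """
    return max(100, soft_fd_limit - reserved_fds)


# Inspect the process's actual soft limit (the `ulimit -n` value):
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, safe_connection_limit(soft))
```

With the 10000 seen on the dyno, this sketch would suggest a ceiling of 9800, though as the replies note, the real constraint is usually thread throughput, not descriptors.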

Michael Merickel

Oct 5, 2020, 9:41:38 PM
to pylons-...@googlegroups.com
The only default I've really changed on waitress in most apps I've written has been the number of threads. On Heroku I also configure waitress to understand the forwarding headers (see trusted_proxy docs) so that client data shows up properly in the WSGI app.
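A sketch of what that setup might look like, assuming waitress's trusted-proxy keyword arguments; the app path, header set, and thread count here are illustrative assumptions, so check the trusted_proxy docs for your waitress version:

```python
# Hypothetical Heroku-style waitress setup; illustrative, not canonical.
import os


def wsgi_app(environ, start_response):
    # Stand-in WSGI app: once the forwarding headers below are trusted,
    # environ["REMOTE_ADDR"] reflects the real client, not the router.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ.get("REMOTE_ADDR", "unknown").encode()]


# Keyword arguments intended for waitress.serve(); trusted_proxy="*"
# trusts whatever sits directly in front of the dyno (Heroku's router).
SERVE_KWARGS = dict(
    host="0.0.0.0",
    port=int(os.environ.get("PORT", "8080")),
    threads=8,  # the setting most worth tuning, per the advice above
    trusted_proxy="*",
    trusted_proxy_count=1,
    trusted_proxy_headers={"x-forwarded-for", "x-forwarded-proto",
                           "x-forwarded-port"},
    clear_untrusted_proxy_headers=True,
)

# To actually run it:
#   from waitress import serve
#   serve(wsgi_app, **SERVE_KWARGS)
```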

I would not worry about these issues unless you feel your site is susceptible to specific types of abuse (DDoS, slowloris, etc.), at which point I would recommend tuning a proxy like nginx in front of basically any WSGI server to buffer/filter clients before they hit your backend.

Cooper Baird

Oct 5, 2020, 10:14:18 PM
to pylons-...@googlegroups.com
I gotcha. Yeah, I'm just trying to get a sense of how high a value for connection-limit is too high for the platform, but I get that it depends on other things going on in the application too (other things that are using the file descriptors). I just fear making it too high, so I guess I'll go with the default 100 for the time being.
