most performant way to run WSGI app as single process


Alexander Mills

Sep 11, 2019, 5:27:29 PM
to pylons-discuss
We are trying to adhere to the philosophy of one process per Docker container. So instead of Apache + 4 Python processes in a container, we just want 1 Python process per container.

I am new to WSGI apps. My question is: what is the most performant native Python server that can run a WSGI app built with a framework like Pyramid?

Remember, I am trying to just launch one python process. I am not trying to put a server in front of the python WSGI process.
Any recommendations? A DigitalOcean article suggests this should work:


from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def hello_world(request):
    return Response('<h1>Hello world!</h1>')

if __name__ == '__main__':
    config = Configurator()
    config.add_view(hello_world)
    app = config.make_wsgi_app()
    # wsgiref's simple_server is the stdlib reference server:
    # one process, one thread, no production hardening.
    server = make_server('0.0.0.0', 8080, app)
    server.serve_forever()



I assume this all runs as one process. Is this performant enough compared to Apache or should I use something else?

-alex

Alexander Mills

Sep 11, 2019, 5:44:31 PM
to pylons-...@googlegroups.com
So yeah I hear that bjoern is good?



anyone have anything good/bad to say about bjoern?

-alex

--
You received this message because you are subscribed to the Google Groups "pylons-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pylons-discus...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/pylons-discuss/b9cf75b9-71f0-4d7d-a786-c2564796ff78%40googlegroups.com.



Bert JW Regeer

Sep 11, 2019, 11:07:14 PM
to Pylons Project
The answer is:

it depends

You can run whatever you'd like. I run waitress in production in a Docker setup. But you can also use uWSGI to run your production app, for example, or gunicorn, or a large variety of other choices.
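
For what it's worth, a single-process waitress entrypoint is tiny. A sketch (the WSGI app body and thread count are illustrative, and waitress must be installed):

```python
def app(environ, start_response):
    # Plain WSGI callable standing in for config.make_wsgi_app().
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [b"<h1>Hello world!</h1>"]

if __name__ == "__main__":
    # Waitress is pure Python and multithreaded: one OS process,
    # several worker threads (assumes `pip install waitress`).
    from waitress import serve
    serve(app, host="0.0.0.0", port=8080, threads=4)
```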

What were you running before?

Bert

Alexander Mills

Sep 12, 2019, 1:27:14 PM
to pylons-discuss
Thanks Bert, have you seen the benchmark results?


bjoern seems to be much more performant than the alternatives.

-alex

Theron Luhn

Sep 12, 2019, 1:41:02 PM
to 'andi' via pylons-discuss
It looks like bjoern is only single-threaded, so it can’t process concurrent requests, and it has no configuration besides host and port.
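
For context, bjoern’s user-facing API is about this small. A sketch (the app body and port are illustrative; bjoern must be installed):

```python
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

if __name__ == "__main__":
    # bjoern is a C extension (pip install bjoern); it serves everything
    # from a single thread, so one slow request blocks all others.
    import bjoern
    bjoern.run(app, "0.0.0.0", 8000)
```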

I wouldn’t sacrifice features for the sake of performance unless I had a compelling reason to.  The WSGI server is rarely a bottleneck.

You’ve been using Apache + mod_wsgi, presumably with success since you haven’t mentioned otherwise. Why not keep on using that?

— Theron




Theron Luhn

Sep 12, 2019, 1:54:53 PM
to 'andi' via pylons-discuss
Also, a note on those performance benchmarks:  They’re testing absurd numbers of simultaneous connections.  That might not be the most relevant metric.  Usually you put an HTTP server like nginx or HAProxy in front of your WSGI server, which are very good at handling high-volume HTTP requests, so that your WSGI server isn't subjected to whatever the internet might send your way.

Self-promotion:  I built a HAProxy-based Docker reverse proxy for gunicorn (though it would probably work well with any WSGI server).  https://hub.docker.com/r/luhn/gunicorn-proxy

— Theron



Jonathan Vanasco

Sep 12, 2019, 4:25:23 PM
to pylons-discuss


On Thursday, September 12, 2019 at 1:54:53 PM UTC-4, Theron Luhn wrote: 
Usually you put an HTTP server like nginx or HAProxy in front of your WSGI server, which are very good at handling high-volume HTTP requests, so that your WSGI server isn't subjected to whatever the internet might send your way.

in addition to that:

* I've found a forking server like uwsgi / gunicorn to be the most performant strategy for most situations, as their "max_requests" settings restart workers periodically and so eliminate most issues with process growth or memory leaks. They both also offer numerous deployment and management benefits.
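
As an illustration of that, worker recycling in uWSGI takes a couple of INI lines. A sketch only (the module path and the numbers are hypothetical, not tuned recommendations):

```ini
[uwsgi]
; hypothetical app entry point
module = myapp.wsgi:application
master = true
processes = 4
; recycle each worker after this many requests to bound leak-driven growth
max-requests = 5000
; optionally also recycle any worker whose RSS exceeds this many MB
reload-on-rss = 512
```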

and in addition to what Bert said...

every deployment strategy has pros and cons that stem from tradeoffs in the design of the infrastructure. there is no overall "best option": your code could run blazingly fast on one stack with specific resources, and terribly on another. 

while i have a handful of apps that could redeploy using a single-process strategy with no impact, moving my main app from a forked multi-process model to a series of single-process containers would greatly increase our cloud computing bills (we tested it a while back, and it would have required way more servers/resources).

Mikko Ohtamaa

Sep 12, 2019, 5:13:45 PM
to pylons-...@googlegroups.com

> * I've found a forking server like uwsgi / gunicorn to be the most performant strategy for most situations, as their "max_requests" settings to restart workers will eliminate most issues with process growth or memory leaks.  they both also offer numerous deployment and management benefits

You might be able to squeeze more juice out of uWSGI, but Gunicorn is pure Python, making it easier to deploy, manage, and diagnose.

Also, performance depends on the load characteristics - the usual bottleneck is SQL database performance. Static assets and such should not be served from WSGI in the first place; you can nicely offload them to CloudFlare or similar automatically. Real-time traffic, like WebSockets, is a different beast and not relevant for WSGI.

Br,
Mikko 




Bert JW Regeer

Sep 14, 2019, 3:24:47 AM
to Pylons Project
Yeah,

I've seen them.

I don't care about them.

Here's why:


This is as unrealistic a test as it can get. It does zero I/O other than accepting a connection, reading the full request, and then sending a response. There is no further parsing of the request, no database interaction, nothing realistic about this test. If all we had to do was send back a 200 OK + OK body, I could write a VERY fast server that would blow away bjoern too.

What you need to do:

1. Use your code base and determine what your KPI's are
2. Test with various different WSGI servers
3. Deploy the one that you like best and can most easily manage/install

Why is 3 important? Because the reality is that once you start slinging real-world code at any of the WSGI servers mentioned, they are all going to perform pretty similarly. Some will maybe be a tiny bit faster, make it easier to manage memory growth (uWSGI), have proxy handling already built in (Gunicorn/Waitress), or have some other feature you want.

Ultimately bjoern may be fast at accepting/parsing requests and thus at sending them down the WSGI stack and getting a response out, but with only a single thread, any I/O whatsoever (such as connecting to PostgreSQL and waiting for a response after querying the database) means it won't be able to answer any other requests in the meantime.

There are upsides and downsides to all of them. Pre-fork is great if you want to save some memory and have the ability to kill processes in the future; pre-fork + threaded allows you to handle many requests with a single forked child; and just threaded means you have a single process and let Python deal with spreading the load across threads (which in most web applications is fine, because they are going to be spending an inordinate amount of clock time just waiting on I/O answers).

So in the real world, pick what you want and ignore those benchmarks, at least until you get to the size of Instagram and need to eke out every last bit of performance from your servers because even 2 msec per request adds up to hours of CPU time saved. Your application is the bottleneck, not the HTTP -> WSGI server.

Last but not least, you should absolutely put something like NGINX/Apache in front of whatever WSGI server you end up picking and reverse proxy to it, so that you get cheap TLS, HTTP/2, and other features.
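
A sketch of such a front proxy in nginx terms (server name, port, and certificate paths are all illustrative):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;                       # illustrative
    ssl_certificate     /etc/ssl/example.crt;      # illustrative paths
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        # TLS and HTTP/2 terminate here; plain HTTP goes to the WSGI server.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```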

Bert


Alexander Mills

Sep 14, 2019, 2:57:36 PM
to pylons-discuss
We used to have Apache + 4 Python processes using mod_wsgi in a Docker container. The experience was subpar. Too much crap in one container. So we moved to the single-proc-per-container idea, so one Python process per container running bjoern. We just have more containers instead of more processes in fewer containers.

Mike Orr

Sep 15, 2019, 2:22:57 PM
to pylons-...@googlegroups.com
On Thu, Sep 12, 2019 at 2:13 PM Mikko Ohtamaa <mi...@redinnovation.com> wrote:
>
>
>For static assets and such - they should not be served from WSGI in the first place and you can nicely offload this to CloudFlare or similar automatically .

What if they're access-restricted to certain groups of users? I have
my public assets served by Apache (via 'ProxyPass /static !' and 'Alias
/static /directory') but I serve private ones with 'FileResponse'. It
would be nice to serve them outside WSGI, but nothing else knows about
Pyramid's authorization.

Mike Orr

Sep 15, 2019, 2:35:08 PM
to pylons-...@googlegroups.com
On Sat, Sep 14, 2019 at 12:24 AM Bert JW Regeer <xist...@0x58.com> wrote:
> There's upsides and downsides to all of them. Pre-fork is great if you want to save some memory and have the ability to kill processes in the future, prefork + threaded allows you to handle many requests with a single forked child, and just threaded means you have a single process and let Python deal with spreading the load across threads (which in most web applications is fine, because they are going to be spending an inordinate amount of clock time just waiting on IO answers).

The number of processes also matters if you have a lot of modules,
long initialization routines, or keep a lot of things in memory shared
between requests -- a multiprocess server will have a copy in each
process while a multithreaded server will have one copy.

Regarding performance, I usually use Waitress, but I have one
application that's higher-volume than the others; it bogged down in
stress tests, but when we switched it to uWSGI the problems went away.
The downside is that uWSGI is C, so it's harder to install: it may be
easy on one server/OS but not on another, and if the environment
doesn't have the C build tools you have to install them. Recently we
migrated the application to Docker, although I don't think we migrated
uWSGI, so it may be back to Waitress. I started thinking about looking
at Docker uWSGI options but haven't pursued it yet.

Mike Orr

Sep 15, 2019, 2:47:38 PM
to pylons-...@googlegroups.com
On Sun, Sep 15, 2019 at 11:34 AM Mike Orr <slugg...@gmail.com> wrote:
> Regarding performance, I usually use Waitress but I have one
> application that's higher-volume than the others and it bogged down in
> stress tests but when we switched it to uWSGI the problems went away.
> The downside is uWSGI is C so it's harder to install: it may be easy
> on one server/OS but not on another, and if the environment doesn't
> have the C build tools you have to install them. Recently we migrated
> the application to Docker although I don't think we migrated uWSGI so
> it may be back to Waitress. I started thinking about looking at Docker
> uWSGI options but haven't pursued it yet.

One problem with uWSGI is poor documentation. It has a lot of
configuration options, their documentation is sometimes hard to
understand, and there's no place to go to ask "How do I do this?" or
"Which options are relevant to this?"

And in one case the documentation was wrong. There's an option to read
a logging config from a PasteDeploy INI file, but it didn't work. So as
a workaround I added a setting in my application, 'log = %(__file__)s'
(or maybe the value was '%(here)s/production.ini'), and if it's set
then I call 'logging.config.fileConfig()' on it. (Or
'pyramid.paster.setup_logging()'.)
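
The workaround sketched in code (the `log` setting name matches the description above; the helper name is made up):

```python
import logging.config

def configure_logging(settings):
    # `settings` is the app's parsed PasteDeploy settings dict. If it
    # carries a `log` key pointing at a logging INI file, apply that
    # file ourselves, since uWSGI's own PasteDeploy logging option
    # didn't work.
    ini_path = settings.get("log")
    if ini_path:
        # Keep loggers that were created at import time alive.
        logging.config.fileConfig(ini_path, disable_existing_loggers=False)
```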

--
Mike Orr <slugg...@gmail.com>

Theron Luhn

Sep 15, 2019, 2:54:34 PM
to pylons-...@googlegroups.com


On Sep 15, 2019, at 11:22 AM, Mike Orr <slugg...@gmail.com> wrote:

What if they're access-restricted to certain groups of users? I have
my public assets served by Apache (via 'ProxyPass /static !' and 'Alias
/static /directory') but I serve private ones with 'FileResponse'.



I’ve always ended up with FileResponse because my needs are modest and it’s good enough.

Hynek Schlawack

Sep 16, 2019, 8:37:17 PM
to pylons-...@googlegroups.com
Since I saw a few replies that I don’t entirely agree with, let me give you a bunch of different opinions:

- I also do 1 proc/Docker image. I would recommend having a look at https://pythonspeed.com/docker/ for various things to keep in mind, if you haven’t seen it already.

- As others pointed out, I don’t think your WSGI container is gonna be a huge bottleneck, and there are other details to keep in mind that weigh heavier overall.

- bjoern is a pile of C with very few users <https://pypistats.org/packages/bjoern> and maintained mostly by one person. Without passing any judgement on the maintainer and bjoern _specifically_, there’s no way I’d expose something like that to the internet – not even with a proxy in front of it.

- I don’t know whether it’s me, my Python DB driver (sqlanydb 😞), or the underlying libs: there’s stuff leaking all the time, so I wouldn’t use a WSGI container that doesn’t do worker recycling after a configurable number of requests served. Otherwise you get, best case, uncontrolled recycling via crashes and, worst case, deadlocks.

- Defense in depth: we have the policy of PII and secrets never hitting our internal network unencrypted, because we don’t want one compromised application to lead to user credentials, PII, or credit card data leaking via a network sniffer.

There might even be legislation for that in some countries, and whenever I talk to AWS or Google engineers they are very adamant that it’s important despite all VPCs and whatnot – use your own judgement. In my case this means that unless I want to have a proxy sidecar (I usually don’t), waitress and bjoern are right out. In the case of waitress it’s kind of a bummer, but oh well.

- The argument that you need a sidecar for static files is mostly obsolete since we got sendfile <http://man7.org/linux/man-pages/man2/sendfile.2.html> and can usually be disregarded given the complexity it entails. I like using whitenoise for it because it allows for nice
re-mappings etc: http://whitenoise.evans.io/ (see also http://whitenoise.evans.io/en/stable/#isn-t-serving-static-files-from-python-horribly-inefficient)
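
For reference, wiring whitenoise in is essentially a one-line wrap of the WSGI app. A sketch (paths are illustrative; whitenoise must be installed):

```python
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

if __name__ == "__main__":
    # WhiteNoise answers requests under /static/ itself, with caching
    # headers, and passes everything else through to the app underneath
    # (assumes `pip install whitenoise`; a WSGI server then serves
    # `application`).
    from whitenoise import WhiteNoise
    application = WhiteNoise(app, root="static/", prefix="/static/")
```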

- HAProxy kicks nginx’s and Apache’s behinds in almost every regard. This is my hill. I like my hill. I will retire on this hill.

All that said, my docker-entrypoint.sh usually looks something like this:

```
#!/bin/bash

set -euo pipefail
IFS=$'\n\t'

exec 2>&1 \
    /app/bin/gunicorn \
    --bind 0.0.0.0:8000 \
    --workers 1 \
    --threads 8 \
    --max-requests 4096 \
    --max-requests-jitter 128 \
    --timeout 60 \
    --forwarded-allow-ips="EDGE-PROXY-IP" \
    --ssl-version 5 \
    --ciphers=ecdh+aesgcm \
    --keyfile "/etc/ssl/private/*.node.consul.key" \
    --certfile "/etc/ssl/certs/*.node.consul.crt" \
    --worker-tmp-dir /dev/shm \
    "app.wsgi"
```

So yeah, I’m running gunicorn and it’s just fine. It’s pure Python, therefore easy to install, and widely used, therefore quite stable.

If you want that percent more performance, you can go for uWSGI, but be aware that it can be quite rough around the edges and there isn’t much development going on anymore (nor is there for gunicorn, to be fair, but it seems mostly feature-complete and I’m not aware of any gross bugs). You should use <https://www.techatbloomberg.com/blog/configuring-uwsgi-production-deployment/> to guide you in configuring it, and I’ve heard rumors of an upcoming fork. We’ll see.

Finally, a lot of what I’m applying nowadays is straight from warehouse, the new Pyramid-based PyPI: <https://github.com/pypa/warehouse>. They are certainly running at scale, and there’s a lot to learn and myths to debunk.

—h

Julien Cigar

Sep 17, 2019, 4:42:20 AM
to pylons-...@googlegroups.com
couldn't you use XSendfile / X-Accel?
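
For readers unfamiliar with the pattern: the app still does its own auth check, then returns an empty response whose header tells the front proxy which internal-only file to stream. A sketch (the header value and content type are illustrative):

```python
def protected_download(environ, start_response):
    # The WSGI app performs the authorization check...
    if environ.get("REMOTE_USER") is None:
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"forbidden"]
    # ...but delegates the byte-pushing: nginx intercepts X-Accel-Redirect
    # and serves the file from a location marked `internal` (Apache's
    # mod_xsendfile uses an X-Sendfile header the same way).
    start_response("200 OK", [
        ("Content-Type", "application/pdf"),
        ("X-Accel-Redirect", "/protected/report.pdf"),  # illustrative path
    ])
    return [b""]
```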


--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
signature.asc

Mike Orr

Sep 17, 2019, 12:12:35 PM
to pylons-...@googlegroups.com
On Tue, Sep 17, 2019 at 1:42 AM Julien Cigar <jul...@perdition.city> wrote:
>
> On Sun, Sep 15, 2019 at 11:22:40AM -0700, Mike Orr wrote:
> > On Thu, Sep 12, 2019 at 2:13 PM Mikko Ohtamaa <mi...@redinnovation.com> wrote:
> > >
> > >
> > >For static assets and such - they should not be served from WSGI in the first place and you can nicely offload this to CloudFlare or similar automatically .
> >
> > What if they're access-restricted to certain groups of users? I have
> > my public assets served by Apache (via 'ProxyPass /static !' and 'Alias
> > /static /directory') but I serve private ones with 'FileResponse'. It
> > would be nice to serve them outside WSGI but nothing else knows about
> > Pyramid's authorization.
>
> couldn't you use XSendfile / X-Accel?

I did use X-Sendfile at one point, although not for this use case if I
remember correctly. I'm not sure how I feel about letting Apache have
access to the files.

Jonathan Vanasco

Sep 17, 2019, 3:25:53 PM
to pylons-discuss


On Monday, September 16, 2019 at 8:37:17 PM UTC-4, hynek wrote:
 
- I don’t know whether it’s me, my Python DB driver (sqlanydb 😞), or the underlying libs: there’s stuff leaking all the time so I wouldn’t use a WSGI container that doesn’t do worker recycling after a configurable amount of requests served. Otherwise you get best case uncontrolled recycling via crash and worst case deadlocks.

this is actually the main reason why I adopted uwsgi: a hook loads most of the Python we need into shared memory before the fork, and then workers are recycled to our specs.

how do you avoid leaks / leak-like process growth in the threaded gunicorn solution you adopted? 

Alexander Mills

Sep 17, 2019, 4:13:20 PM
to pylons-...@googlegroups.com
@hynek you are not using 1 process per container. You are using gunicorn or haproxy instead of the container and those talk to python?


Bert JW Regeer

Sep 17, 2019, 10:33:30 PM
to Pylons Project
Unless you are using a version of sendfile that does TLS in the kernel, most of the benefits of sendfile disappear as soon as you enable TLS. All of the encryption is done in userspace and you can't use sendfile for that.

Even if you use sendfile to send it to a reverse proxy, at some point it leaves the kernel and ends up in userspace for encryption. It's unlikely that you are really getting much performance benefit at all these days.

There are some patches against FreeBSD that I am aware of that move TLS to the kernel so that sendfile is done in kernel rather than switching from kernel to userspace and back.

Bert

Hynek Schlawack

Sep 18, 2019, 1:31:01 AM
to pylons-...@googlegroups.com

> On 17. Sep 2019, at 22:13, Alexander Mills <alexande...@gmail.com> wrote:
>
> @hynek you are not using 1 process per container.

Eh, yes, it’s true that I actually run two: the gunicorn master and one child process as the worker. Conceptually I think of it as one, because only one process of my app is running at a time and the supervising process enables some of the features I’ve mentioned. The fact that gunicorn actually starts another one seems like an uninteresting implementation detail to me.

> You are using gunicorn or haproxy instead of the container and those talk to python?

Our apps run in Docker containers in a Nomad cluster in a private network, with an HAProxy edge proxy that discovers them using Consul and exposes them to the Internet / internal users.

Hynek Schlawack

Sep 18, 2019, 1:34:57 AM
to pylons-...@googlegroups.com
In the entrypoint:

```
--max-requests 4096 \
--max-requests-jitter 128 \
```

This means “recycle after 4096 +/- 128 requests”. (The jitter is so the workers don’t all recycle at once; that’s unlikely to happen at the same time across the whole cluster anyway, so the option is more useful when you have more than one worker.)

Hynek Schlawack

Sep 18, 2019, 1:44:56 AM
to pylons-...@googlegroups.com
That’s a good point, I should’ve added more nuance to it.

Generally speaking, I think the average developer doesn’t have to care about it if they don’t serve an excessive amount of huge static files. Make sure to serve everything that’s not part of the app from a CDN, set aggressive caching headers, and you’ll ~most likely~ be fine. Especially because the ssl module releases the GIL, it’s not blocking your app either way. We’ve got quite a bit of leeway until we need sidecar proxies, and the flexibility and features of whitenoise make it worth twice as much.

Once you hit the wall, you should start thinking about using a CDN for your app’s data too, and whitenoise will help you with that as well.

Jonathan Vanasco

Sep 18, 2019, 11:32:12 AM
to pylons-discuss


On Wednesday, September 18, 2019 at 1:34:57 AM UTC-4, hynek wrote:

This means "recycle after 4096 +/- 128 requests” (The jitter is so they don’t recycle all at once; although it’s unlikely to happen at the same time over the whole cluster. The option is more useful when you have more than one worker.).

That I knew... you actually answered what I needed to know above... with this line

On Wednesday, September 18, 2019 at 1:31:01 AM UTC-4, hynek wrote:
Eh, yes it’s true that I run actually two: the gunicorn master and one child process as worker.

I didn't realize that's how 1 process worked in gunicorn; I assumed it just ran everything in the master. That's an interesting approach; I'll definitely consider it when we do a devops sprint next!

Mikko Ohtamaa

Sep 18, 2019, 11:39:33 AM
to pylons-...@googlegroups.com

- I also do 1 proc/Docker image, I would recommend to have a look on https://pythonspeed.com/docker/ for various things to keep in mind if you haven’t seen it already.

I am very familiar with the trade-offs and tuning of processes vs. threads. Now, while this conversation is going on, would someone comment on the trade-offs between container vs. process? Is it radically more memory/CPU sensitive? Are things like logging and external service API access harder to arrange? What are the trade-offs and benefits?

Br,
Mikko
 

Jonathan Vanasco

Sep 18, 2019, 11:39:52 AM
to pylons-discuss


On Monday, September 16, 2019 at 8:37:17 PM UTC-4, hynek wrote:

- HAProxy kicks nginx’s and Apache’s behinds in almost every regard. This is my hill. I like my hill. I will retire on this hill.

I really like HAProxy and Varnish. They were both keystones to some high traffic sites I worked on.

I mostly switched from nginx to the OpenResty fork a few years back. Its "theoretical performance" isn't as good as HAProxy's, but the difference in "actual performance in real situations" has been negligible. Being able to program hooks into the various stages of the HTTP/nginx request cycle has made it much faster to develop and deploy product features for the teams I worked with.

Hynek Schlawack

Sep 18, 2019, 12:18:55 PM
to pylons-...@googlegroups.com

> - I also do 1 proc/Docker image, I would recommend to have a look on https://pythonspeed.com/docker/ for various things to keep in mind if you haven’t seen it already.
>
> I am very familiar with the trade offs and tuning of process vs. threads. Now, when this conversation is going on, would someone comment trade offs between container vs. process? Is it radically more memory/CPU sensitive? Are things like logging and external service API access harder to arrange? What are the trade offs and benefits?

It’s definitely a trade-off between juicing everything you can out of your hardware vs. operational maneuverability. I’m not even convinced that you shouldn’t have more than one worker process per container; it’s just much easier to have introspection and pull-based metrics this way (just expose a /-/.* namespace in a view and you’re done).
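
The /-/ idea above, as a tiny WSGI wrapper. A sketch (path and payload are illustrative):

```python
def with_introspection(app):
    # Reserve the /-/ namespace for ops endpoints; everything else
    # passes through to the wrapped application untouched.
    def wrapper(environ, start_response):
        if environ.get("PATH_INFO", "").startswith("/-/health"):
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"ok"]
        return app(environ, start_response)
    return wrapper
```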

Essentially you’re simplifying your apps into an easy-to-handle building block and moving that complexity into your cluster managers, load balancers, log aggregators, service discovery, service meshes, …

I also gave a talk last year that tries to build up to and reason about that building-block ideal: <https://hynek.me/talks/deploy-friendly/>. I don’t talk about performance characteristics at all, though, since the focus is making deployments easier, not making apps faster. :)

Whether or not that’s worth it to you depends entirely on your use case. For instance, one of the authors of the Bloomberg uWSGI article I linked before gave a talk on that topic and mentioned that they run everything on metal because their apps are computationally expensive. And I’m sure they have a bunch of Kubernetes clusters lying around, so for them it’s literally choosing what’s “best for their workload” and not “what’s easiest”.

Theron Luhn

Sep 18, 2019, 7:49:08 PM
to pylons-...@googlegroups.com
One thing not yet explicitly mentioned on this thread:  the recommendation is one *concern* per container, not necessarily one OS process. As Hynek said, that a single-worker gunicorn container is actually two processes is an uninteresting implementation detail. The number of OS processes is unrelated to achieving the goals of one concern per container. 

— Theron