Problems when using Gunicorn and Postgresql together


Laurent Daverio

Nov 16, 2023, 10:41:22 AM
to pylons-...@googlegroups.com
Hello list,

this page seems to describe exactly the problem I've stumbled on:


Namely, if you deploy a Pyramid + PostgreSQL app based on the standard cookiecutter with Gunicorn, you will run into problems: the connection to the DB can't be shared between processes, so each process needs its own connection to the DB.

Before I start trying to develop a workaround, has anybody encountered the problem? How should I modify my app to make it compatible with both Waitress and Gunicorn?

Thanks in advance,

Laurent.

Theron Luhn

Nov 16, 2023, 12:18:59 PM
to pylons-...@googlegroups.com
If you aren't using `--preload` then gunicorn should load the application fresh in each worker and you shouldn't have any issues.

If you are using `--preload`, you have to recreate any existing connections on fork. For SQLAlchemy I use:

import functools
import os

def after_fork(registry):
    # Runs in each worker right after the fork: rebuild the pool so the
    # child doesn't reuse connections opened in the master process.
    registry['db_engine'].pool.recreate()

def includeme(config):
    os.register_at_fork(
        after_in_child=functools.partial(after_fork, config.registry),
    )
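(The `includeme` gets pulled in via `config.include()` during application setup, so the fork hook is registered in the master process before gunicorn forks, and each worker rebuilds its pool immediately after the fork.)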


— Theron




Eldav

Nov 18, 2023, 1:41:37 PM
to pylons-discuss
Thank you Theron,

I'm not using "--preload"; in fact I'm not doing anything special, since I'm trying to use Gunicorn as a drop-in replacement for Waitress, as I always have. BUT I'm realizing that I was using `psycopg2` in the past, whereas I'm now using `psycopg` (i.e. version 3), and version 3 creates a pool of connections. That must explain why it worked then but not anymore: with `psycopg2`, each worker created its own connection; now I probably have a shared pool of connections.

Laurent.

Jonathan Vanasco

Nov 20, 2023, 7:14:01 PM
to pylons-discuss
SQLAlchemy supports this via `Engine.dispose()`, which is the documented way of handling a post-fork connection:

   https://docs.sqlalchemy.org/en/13/core/connections.html#engine-disposal

Invoking `Pool.recreate()` should be fine, but the documented pattern is to call `Engine.dispose()`.
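For example, a post-fork hook along the same lines as Theron's snippet, but calling `Engine.dispose()` (just a sketch; `db_engine` is whatever key the engine is stored under in your registry):

import functools
import os

def after_fork(registry):
    # Discard the pooled connections inherited from the master process;
    # the child opens fresh ones lazily on first use.
    registry['db_engine'].dispose()

def includeme(config):
    os.register_at_fork(
        after_in_child=functools.partial(after_fork, config.registry),
    )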

> How should I modify my app to make it compatible with both Waitress and Gunicorn?

I tried to standardize this with pyramid_forksafe when I was deploying projects on uwsgi and gunicorn simultaneously:


I don't know if the gunicorn support is currently working, as I only work with uwsgi now and never got around to writing tests for gunicorn.

Ideally, my package works like this:

* it defines a new `ApplicationPostFork` event
* you write your post-fork code in an event subscriber
* the package invokes the event at the right time

This lets you just define the post-fork routine; if everything goes correctly, pyramid_forksafe will detect the correct post-fork event and invoke your routine.
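As an illustration, a subscriber might look roughly like this (the import path of the event is an assumption on my part; check the package for the actual names):

from pyramid_forksafe.events import ApplicationPostFork  # assumed import path

def includeme(config):
    config.include('pyramid_forksafe')

    def recreate_engine(event):
        # Runs in the child process once the post-fork event is fired.
        config.registry['db_engine'].dispose()

    config.add_subscriber(recreate_engine, ApplicationPostFork)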

Mike Orr

Nov 21, 2023, 1:06:43 AM
to pylons-...@googlegroups.com
On Mon, Nov 20, 2023 at 4:14 PM Jonathan Vanasco <jvan...@gmail.com> wrote:
>
> SQLAlchemy supports this via `Engine.dispose()`, which is the documented way of handling a post-fork connection:
>
> https://docs.sqlalchemy.org/en/13/core/connections.html#engine-disposal

Yes, that sounds familiar.

Jonathan Vanasco

Nov 21, 2023, 9:55:18 AM
to pylons-discuss
> Namely, if you deploy a Pyramid + PostgreSQL app based on the standard cookiecutter with Gunicorn, you will run into problems: the connection to the DB can't be shared between processes, so each process needs its own connection to the DB.

I forgot to mention...

This should not happen with the most recent cookiecutter. Do you know which cookiecutter you used, and when?


It also requires pyramid_tm, which will clean up the connections at the end of the request lifecycle. If you decide not to use pyramid_tm, then you'll have to use a cleanup routine like this one when you grab the request. This was once the pattern in the pyramid cookiecutter, but it was taken out because pyramid_tm effectively does that itself.
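For illustration only (not the exact routine linked above), such a cleanup hook can hang off request.add_finished_callback; the 'dbsession_factory' registry key here is an assumption:

def dbsession(request):
    session = request.registry['dbsession_factory']()  # assumed registry key

    def cleanup(request):
        # Return the connection to the pool once the response has been sent.
        session.close()

    request.add_finished_callback(cleanup)
    return session

def includeme(config):
    # Expose it as request.dbsession, cached per request.
    config.add_request_method(dbsession, 'dbsession', reify=True)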


The only way the connection-pool sharing could happen is if you grabbed a database connection before the fork (for example during application setup), which would then require a call to `Engine.dispose()`.

If you are not consciously grabbing a database connection before the fork, you should do a code audit to figure out where the first connection is being made and how it is recycled, as the last few versions of the cookiecutter do defend against this behavior.

Eldav

Nov 21, 2023, 10:14:15 AM
to pylons-discuss
Thank you Jonathan, 

after asking my question, I did more googling and found this:

https://docs.sqlalchemy.org/en/20/core/pooling.html#pooling-multiprocessing

It does mention Engine.dispose :) I tried their solution #4, which seemed to be the one that fit best with my code. But somehow I feel it isn't quite satisfactory: I see a lot of connects / checkouts, which seems inelegant and maybe bad for performance.
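For reference, the event-based recipe on that page looks roughly like this (I'm not certain it's exactly the variant I ended up with; the connection URL is a placeholder):

import os
from sqlalchemy import create_engine, event, exc

engine = create_engine("postgresql+psycopg://user:pass@localhost/mydb")  # placeholder URL

@event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
    # Remember which process opened this connection.
    connection_record.info["pid"] = os.getpid()

@event.listens_for(engine, "checkout")
def checkout(dbapi_connection, connection_record, connection_proxy):
    # If the connection was opened in another process (i.e. before the fork),
    # invalidate it so the pool opens a fresh one for this worker.
    if connection_record.info["pid"] != os.getpid():
        connection_record.dbapi_connection = connection_proxy.dbapi_connection = None
        raise exc.DisconnectionError(
            "Connection belongs to pid %s, checked out in pid %s"
            % (connection_record.info["pid"], os.getpid())
        )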

Laurent.

Eldav

Nov 21, 2023, 10:17:07 AM
to pylons-discuss
> This should not happen with the most recent cookiecutter. Do you know which cookiecutter you used, and when?

As far as I remember, I used the official cookiecutter, but that was a few years ago (around the time Pyramid 2.0 was released, when I felt the need to synchronize my code with it).

Based on what you're saying, I should probably do it again now, as a first step :) I'll let you know how it goes.

Thank you again,

Laurent.

Arndt Droullier

Nov 21, 2023, 10:33:20 AM
to pylons-...@googlegroups.com
I use standalone connection pooling for Postgresql in some cases. It is faster and more efficient than doing a full connect to Postgresql in each request.
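Roughly, the idea is to point the engine at the external pooler (PgBouncer, for example; the address below is a placeholder) and let it do the pooling instead of SQLAlchemy:

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# Connect through the external pooler (placeholder address) instead of
# Postgres directly; NullPool leaves the pooling to the external pooler.
engine = create_engine(
    "postgresql+psycopg://user:pass@localhost:6432/mydb",
    poolclass=NullPool,
)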


Arndt.



--
----------------------------------------------------------------------------------------------
Arndt Droullier
Tel 0221-2725782
Nive GmbH, Händelstr. 26, Köln, Germany