Using a connection pool with multiple processes on the same machine


Maximilian Roos

unread,
Jan 21, 2016, 8:43:48 PM1/21/16
to sqlalchemy
We're using celery, a job distribution package. On a single machine, there are 20+ celery workers running, each with their own Python process. We had some issues with the processes attempting to use the same SQLAlchemy connections (I think because they are referenced by file descriptors, and so can cross process boundaries?), and so we're now using a NullPool. 

But is there a way of doing this more efficiently than a NullPool?

There are two suggestions in the docs:
  • Dispose the connection as soon as the other process receives it. This wouldn't work with celery, since the workers are started on their own, not forked from another python process.
  • Implement a listener that invalidates connections created in another process. I'm unclear how this works and whether that would be practical in this context. Specifically, given the number of processes we have running on each machine, is the pool that a process is selecting & testing from shared between all processes on that machine? If a pool is shared across 20 processes, the chances that a connection chosen at random would have been created in that process are fairly low...
It's also possible we're woefully misunderstanding how this works; corrections appreciated.

Many thanks,
Max

Mike Bayer

unread,
Jan 21, 2016, 11:01:41 PM1/21/16
to sqlal...@googlegroups.com


On 01/21/2016 08:43 PM, Maximilian Roos wrote:
> We're using celery, a job distribution package. On a single machine,
> there are 20+ celery workers running, each with their own Python
> process. We had some issues with the processes attempting to use the
> same SQLAlchemy connections (I think because they are referenced by file
> descriptors, and so can cross process boundaries?), and so we're now
> using a NullPool.
>
> But is there a way of doing this more efficiently than a NullPool?

yes! Though if NullPool is doing the job, there might not be a need to
switch.

>
> There are two suggestions in the docs
> <http://docs.sqlalchemy.org/en/latest/core/pooling.html#using-connection-pools-with-multiprocessing>
> :
>
> * Dispose the connection as soon as the other process receives it.
> This wouldn't work with celery, since the workers are started on
> their own, not forked from another python process.

If the worker isn't forked from another Python process, then when you call
create_engine() in that process, that's a brand new engine. You
wouldn't be re-using connections across processes in this case. Only if
forking is happening can the problem you refer to occur; if there's no
forking, then the process does not inherit any state.

However, if Celery is using Python multiprocessing, it's doing forking
and there is a parent Python process.


> * Implement a listener that invalidates connections created in another
> process. I'm unclear how this works and whether that would be
> practical in this context.

this is a simple recipe and is straight from what we do in Openstack,
and again, it only makes sense if there's forking going on. If Celery is
starting brand new Python processes that don't inherit any state from a
parent process, and the worker itself also isn't forking, then there's
nothing to do.
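
For reference, the recipe in the pooling docs linked above looks roughly
like this (the engine URL here is a placeholder): it stamps each
connection record with the pid that created it, and invalidates / raises
if the record is checked out in a different pid:

    import os

    from sqlalchemy import create_engine, event, exc

    engine = create_engine("postgresql://scott:tiger@localhost/test")

    @event.listens_for(engine, "connect")
    def connect(dbapi_connection, connection_record):
        # remember which pid created this DBAPI connection
        connection_record.info['pid'] = os.getpid()

    @event.listens_for(engine, "checkout")
    def checkout(dbapi_connection, connection_record, connection_proxy):
        pid = os.getpid()
        if connection_record.info['pid'] != pid:
            # this connection was created in a different (parent) process;
            # detach it and raise so the pool opens a fresh one instead
            connection_record.connection = connection_proxy.connection = None
            raise exc.DisconnectionError(
                "Connection record belongs to pid %s, "
                "attempting to check out in pid %s" %
                (connection_record.info['pid'], pid))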

> Specifically, given the number of
> processes we have running on each machine, is the pool that a
> process is selecting & testing from shared between all processes on
> that machine?

it's not, since the pool is an application-level pool. It exists within
the space of a Python interpreter and therefore is local to a process.
So when you have a lot of processes, pooling of this style is hard
because the pooling cannot coordinate between processes.

For Postgresql backends, there's already a solution, which is to use
PgBouncer (https://wiki.postgresql.org/wiki/PgBouncer). For PG, I'd stick
with NullPool and just use PgBouncer as the pool. The issue with
PgBouncer is that it's entirely Postgresql-specific and there aren't
really similar solutions for other backends.
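
As a rough sketch, that setup is just NullPool pointed at wherever
PgBouncer is listening; here I'm assuming PgBouncer's conventional local
port 6432, and the URL and credentials are placeholders:

    from sqlalchemy import create_engine
    from sqlalchemy.pool import NullPool

    # SQLAlchemy opens and closes DBAPI connections freely (NullPool);
    # PgBouncer, listening on 6432, does the actual pooling at the
    # machine level, shared by every worker process.
    engine = create_engine(
        "postgresql+psycopg2://app:secret@127.0.0.1:6432/appdb",
        poolclass=NullPool,
    )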

For MySQL and others, there's not really a software solution out there
with the possible exception of SQLRelay, but SQLRelay is kind of an odd
quasi-commercial product with very complex licensing and an unfortunate
home on Sourceforge.

One of the things I'm doing at Red Hat is attempting to address this
issue in a way that covers MySQL and backends in general. Some
approaches I'm looking into include building a coordination service that
communicates with the otherwise process-local pools, building a system
that somehow coordinates with an HAProxy service, or, as a last resort,
building a brand-new PgBouncer / SQLRelay-like solution
that isn't hardwired to Postgresql and isn't burdened by weird
commercial licensing like SQLRelay.

> If a pool is shared across 20 processes, the chances
> that a connection chosen at random would have been created in that
> process are fairly low...
>
> It's also possible we're woefully misunderstanding how this works;
> corrections appreciated.

I haven't worked with Celery so you might want to get a clear answer on
if the create_engine() you're calling is in fact in a parent process
that gets inherited (e.g. forked) or if it really is called individually
in totally distinct Python processes. If the latter, you wouldn't have
any shared connections between processes.
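
A rough way to check is to log process ids at engine creation and
whenever a new DBAPI connection is opened (names and URL below are
placeholders); if the pid printed at create_engine() differs from the
pids seen in the workers, the engine is being created before the fork:

    import os

    from sqlalchemy import create_engine, event

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    print("engine created in pid %s" % os.getpid())

    @event.listens_for(engine, "connect")
    def log_connect(dbapi_connection, connection_record):
        # fires in whichever process actually opens the DBAPI connection
        print("DBAPI connection opened in pid %s" % os.getpid())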


>
> Many thanks,
> Max
>

Maximilian Roos

unread,
Jan 22, 2016, 1:23:04 AM1/22/16
to sqlalchemy
Great, thanks for the reply Mike.

I looked further - Celery *is* using fork / multiprocessing, but the forking occurs before any import of our libraries / sqlalchemy / create_engine. Is there a risk of reusing connections in that state?

Can I confirm that the 'cross-effects' between processes on the same machine include those the docs describe between threads - and so require a scoped_session (or something that achieves the same goal)?
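
For context, a simplified sketch of what each worker process currently does (names and URL are placeholders):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker
    from sqlalchemy.pool import NullPool

    # one engine per worker process, NullPool as described above
    engine = create_engine("postgresql://app:secret@dbhost/appdb",
                           poolclass=NullPool)
    Session = scoped_session(sessionmaker(bind=engine))

    def run_task():
        session = Session()
        try:
            # ... queries ...
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            Session.remove()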

Mike Bayer

unread,
Jan 22, 2016, 10:03:38 AM1/22/16
to sqlal...@googlegroups.com


On 01/22/2016 01:23 AM, Maximilian Roos wrote:
> Great, thanks for the reply Mike.
>
> I looked further - Celery *is* using fork / multiprocessing, but the
> forking occurs before any import of our libraries / sqlalchemy /
> create_engine. Is there a risk of reusing connections in that state?
>
> Can I confirm that the 'cross-effects' between processes on the same
> machine include those the docs describe between threads - and so require
> a scoped_session (or something that achieves the same goal)?

A python process has absolutely no way to affect the state of another
python process without explicit communication channels (e.g. pipes) set
up between those processes, other than whatever resources were mutually
inherited from the parent, which can include filehandles (such as socket
connections to databases).

If you are seeing the effects of connection sharing between processes,
that suggests database connections are being created before a fork
occurs. The recipe in the docs is handy for guaranteeing that such a
connection is never actually used after the fork.

Jonathan Vanasco

unread,
Jan 22, 2016, 12:14:14 PM1/22/16
to sqlalchemy
I use SQLAlchemy and Celery and haven't run into this issue yet, but that appears to be dumb luck given the size of my worker pool.

I did a bit of searching; check out celery.signals.

Reading through some closed tickets on their github, it looks like worker_process_init was introduced to handle support for Crypto.Random.atfork()  [https://github.com/celery/celery/issues/109]


I haven't tested yet -- and this is just pseudocode -- but I imagine something like this might fix your problem:

    from celery.signals import worker_process_init

    @worker_process_init.connect
    def mycelery_atfork(**kwargs):
        mySqlalchemyEngine.dispose()


Maximilian Roos

unread,
Jan 22, 2016, 6:24:37 PM1/22/16
to sqlalchemy
> A python process has absolutely no way to affect the state of another
> python process without explicit communication channels (e.g. pipes) set
> up between those processes, other than whatever resources were mutually
> inherited from the parent, which can include filehandles (such as socket
> connections to databases).

That's what I first thought... Then we ran into the problems we've been having, and I figured there must be some cross-process contamination. It makes sense that there needs to be communication for there to be contamination.

We'll investigate further. It could be something like unwittingly pickling the engine and sending it across processes.

Thanks again

Jonathan Vanasco

unread,
Jan 27, 2016, 9:01:07 PM1/27/16
to sqlalchemy
Just implemented the following code:

_saEngine is just the SQLAlchemy engine I use.

Logging some variables to the terminal from my routine and from this listener, I can see that the listener is called in the worker after forking.

    from celery.signals import worker_process_init

    @worker_process_init.connect
    def mycelery_atfork(signal=None, sender=None, **named):
        _saEngine.dispose()
