asyncio.Lock equivalent for multiple processes

Ludovic Gasc

Apr 16, 2018, 6:05:38 PM
to asyn...@python.org, python-tulip
Hi,

I'm looking for an equivalent of asyncio.Lock (https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock) that can be shared between several processes on the same server, because I'm migrating a daemon from a mono-worker to a multi-worker pattern.

For now, the closest solution in terms of API seems to be aioredlock: https://github.com/joanvila/aioredlock#aioredlock
But I'm not a big fan of polling or timeouts: the lock I need is very critical, and I would rather block the code than have the lock released by a timeout.

Am I missing a new awesome library, or do you have an easier approach?

Thanks for your responses.
--
Ludovic Gasc (GMLudo)

Roberto Martínez

Apr 17, 2018, 1:19:33 AM
to Ludovic Gasc, asyn...@python.org, python-tulip

Hi,

I don't know if there is a third-party solution for this.

I think the closest you can get today with the standard library is a multiprocessing.Manager().Lock() (which can be shared among processes), calling lock.acquire() through loop.run_in_executor() with a ThreadPoolExecutor so the asyncio event loop isn't blocked.
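Something like this (an untested sketch; it assumes all workers are children of the process that creates the Manager):

import asyncio
import multiprocessing
from concurrent.futures import ThreadPoolExecutor

manager = multiprocessing.Manager()
shared_lock = manager.Lock()     # proxy object, shareable with child processes
executor = ThreadPoolExecutor()

async def critical_section():
    loop = asyncio.get_event_loop()
    # acquire() blocks, so run it in a worker thread, not in the event loop
    await loop.run_in_executor(executor, shared_lock.acquire)
    try:
        ...  # the work that needs mutual exclusion
    finally:
        shared_lock.release()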

Best regards,
Roberto

Ludovic Gasc

Apr 17, 2018, 6:01:44 AM
to Roberto Martínez, asyn...@python.org, python-tulip
Hi Roberto,

Thanks for the pointer, it's exactly the type of feedback I'm looking for: ideas outside my comfort zone.
However, in our use case we are using Gunicorn, which pre-forks its workers itself rather than using multiprocessing; to my knowledge, I can't use multiprocessing without dropping Gunicorn.

If somebody is using aioredlock in their project, I'd be interested in their feedback.

Have a nice week.


--
Ludovic Gasc (GMLudo)

Nickolai Novik

Apr 17, 2018, 6:46:22 AM
to Ludovic Gasc, Roberto Martínez, asyn...@python.org, python-tulip
Hi, the Redis lock has its own limitations, and depending on your use case it may or may not be suitable [1]. If possible I would redefine the problem and also consider:
1) creating a worker per specific resource type, to avoid locking altogether
2) optimistic locking
3) a file system lock like the one in Twisted, though I'm not sure about its performance and edge cases (a sketch follows below)
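For 3), a minimal sketch with fcntl.flock (POSIX only, untested; the blocking call goes through the default executor so the event loop keeps running):

import asyncio
import fcntl

async def with_file_lock(path="/tmp/myapp.lock"):
    loop = asyncio.get_event_loop()
    f = open(path, "w")
    try:
        # flock() blocks until the lock is free, so run it in a thread
        await loop.run_in_executor(None, fcntl.flock, f, fcntl.LOCK_EX)
        ...  # critical section, exclusive across processes
    finally:
        f.close()  # closing the file releases the lock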

Ludovic Gasc

Apr 17, 2018, 7:35:09 AM
to Nickolai Novik, Roberto Martínez, asyn...@python.org, python-tulip
Hi Nickolai,

Thanks for your suggestions, especially the file system lock: we don't take locks often, but when we do, we must be sure they hold.

For suggestions 1) and 4): in fact we have several systems to sync as well as a PostgreSQL transaction; the request must be handled by the same worker from beginning to end, and the other systems aren't idempotent at all. They are "old-school" proprietary systems, good luck changing that ;-)

Regards.
--
Ludovic Gasc (GMLudo)

Antoine Pitrou

Apr 17, 2018, 7:41:09 AM
to python...@googlegroups.com, asyn...@python.org
On Tue, 17 Apr 2018 13:34:47 +0200
Ludovic Gasc <gml...@gmail.com> wrote:
> Hi Nickolai,
>
> Thanks for your suggestions, especially the file system lock: we don't
> take locks often, but when we do, we must be sure they hold.
>
> For suggestions 1) and 4): in fact we have several systems to sync as
> well as a PostgreSQL transaction; the request must be handled by the
> same worker from beginning to end, and the other systems aren't
> idempotent at all. They are "old-school" proprietary systems, good
> luck changing that ;-)

If you already have a PostgreSQL connection, can't you use a PostgreSQL
lock? e.g. an "advisory lock" as described in
https://www.postgresql.org/docs/9.1/static/explicit-locking.html
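For illustration, an untested sketch with asyncpg (assuming you keep a connection pool; a session-level advisory lock is tied to its connection, so hold the same connection for the whole critical section):

import asyncpg

async def with_advisory_lock(pool, key: int):
    # pool = await asyncpg.create_pool(dsn=...)
    async with pool.acquire() as conn:
        await conn.execute("SELECT pg_advisory_lock($1)", key)
        try:
            ...  # critical section, exclusive across all workers
        finally:
            await conn.execute("SELECT pg_advisory_unlock($1)", key)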

Regards

Antoine.


Ludovic Gasc

Apr 17, 2018, 9:04:59 AM
to Antoine Pitrou, python-tulip, asyn...@python.org
Hi Antoine & Chris,

Thanks a lot for the advisory lock, I didn't know about this PostgreSQL feature.
Indeed, it seems to fit my problem.

The one small remaining problem is that we use string names for locks, while advisory locks only accept integers.
That isn't a blocker, though: I will map the names to integers.
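For example, the mapping could be a stable hash (a sketch; with 64 bits, collisions are unlikely but not impossible):

import hashlib

def advisory_key(name: str) -> int:
    # pg_advisory_lock() takes a signed 64-bit integer (bigint),
    # so derive one deterministically from the lock name
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big", signed=True)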

Yours.

--
Ludovic Gasc (GMLudo)

Ludovic Gasc

Apr 17, 2018, 9:08:35 AM
to Dima Tisnek, asyn...@python.org, python-tulip
Hi Dima,

Thanks for your time and explanations :-)
However, my intuition is that implementing your idea would take me more time than using PostgreSQL's built-in feature.

Nevertheless, I'll keep your idea in mind in case I run into problems with PostgreSQL.

Have a nice day.

--
Ludovic Gasc (GMLudo)

2018-04-17 14:17 GMT+02:00 Dima Tisnek <dim...@gmail.com>:
Hi Ludovic,

I believe it's relatively straightforward to implement the core functionality, if you can at first reduce it to:
* allow only one coro to wait on the lock at a given time (i.e. one user per process / event loop)
* decide explicitly whether you want other coros to continue running (I assume so, as blocking the entire process would be trivial)
* don't care about performance too much :)

Once that's done, you can allow multiple users per event loop by wrapping your inter-process lock in a regular async lock (sketched below).
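For example (a sketch; it assumes the inter-process lock exposes an awaitable acquire() and a plain release(), with the one-waiter restriction above):

import asyncio

class MultiUserLock:
    def __init__(self, ipc_lock):
        self._local = asyncio.Lock()   # serializes coros within this loop
        self._ipc = ipc_lock           # tolerates only one waiter at a time

    async def __aenter__(self):
        await self._local.acquire()    # one waiter per event loop...
        await self._ipc.acquire()      # ...so only one reaches the IPC lock

    async def __aexit__(self, *exc):
        self._ipc.release()
        self._local.release()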

Wrt. performance, you can start with a simple client-server implementation, for example where:
* a single-threaded server listens on some port, accepts one connection at a time, writes something on the connection and waits for the connection to be closed
* each client connects (not informative due to the listen backlog) and waits for data; when the client gets the data, it has the lock
* when the client wants to release the lock, it closes the connection, which unblocks the server
* socket communication is relatively easy to marry to the event loop :)
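A bare-bones sketch of that protocol (untested; host, port and the payload byte are arbitrary):

import asyncio

GRANT = asyncio.Lock()  # serializes grants inside the server

async def handle(reader, writer):
    async with GRANT:
        writer.write(b"L")        # tell this client it now holds the lock
        await writer.drain()
        await reader.read()       # returns b"" once the client disconnects,
    writer.close()                # i.e. the lock was released

async def hold_lock():
    # client side: receiving the byte means we hold the lock
    reader, writer = await asyncio.open_connection("127.0.0.1", 8888)
    await reader.readexactly(1)
    try:
        ...  # critical section
    finally:
        writer.close()            # closing the socket releases the lock

# server process:
# loop = asyncio.get_event_loop()
# loop.run_until_complete(asyncio.start_server(handle, "127.0.0.1", 8888))
# loop.run_forever()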

If you want high performance (i.e. low latency), you'd probably want to go with a futex, but that may prove hard to marry to asyncio internals.
I guess locking can always be proxied through a thread, at some cost to performance.


If performance is important, I'd suggest starting with a thread proxy from the start. It could go like this:
Each named lock gets its own thread (in each process / event loop), a sync lock and a condition variable.
When a coro wants to take the lock, it creates an empty Future, ephemerally takes the sync lock, adds this Future to the waiters, signals on the condition variable and awaits the Future.
The thread wakes up, validates under the sync lock that there's someone in the queue, tries to take the classical inter-process lock (SysV or file or whatever), and when that succeeds, resolves the Future using loop.call_soon_threadsafe().
I'm omitting implementation details, like what happens if the Future is leaked (discarded before it's resolved), how release is orchestrated, etc.
The key point is that offloading locking to a dedicated thread reduces the original problem to a synchronous inter-process locking problem.
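A bare-bones sketch of that proxy (untested; any synchronous inter-process lock with acquire()/release() can stand in for the SysV/file lock, and the same details are omitted):

import asyncio
import threading

class ProxiedLock:
    # one dedicated worker thread per named lock
    def __init__(self, ipc_lock):
        self._ipc_lock = ipc_lock
        self._waiters = []
        self._cond = threading.Condition()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            with self._cond:
                while not self._waiters:
                    self._cond.wait()
                loop, fut = self._waiters.pop(0)
            self._ipc_lock.acquire()   # blocks this thread only
            loop.call_soon_threadsafe(fut.set_result, None)

    async def acquire(self):
        loop = asyncio.get_event_loop()
        fut = loop.create_future()
        with self._cond:
            self._waiters.append((loop, fut))
            self._cond.notify()
        await fut                      # resolved once the IPC lock is ours

    def release(self):
        self._ipc_lock.release()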


Cheers!
