But when I run it with ASGI under uvicorn, it makes a different connection
to memcached for every request and immediately closes it. That is,
#11331/#15324 has returned. This at least adds undesirable latency, and
may also eventually lead to connection failures as described in #11331.
I think this happens because e3f6e18513224c8ad081e5a19da641f49b0b43da
switched the thread-local cache objects to task-local cache objects. That
was justified in #31253 by “a potential race condition if two coroutines
touch the same cache object at exactly the same time”. But that
justification doesn’t make sense: two coroutines ''cannot'' be running the
same ''synchronous'' code on the same thread at the same time, so
thread-local objects were already safe.
Now that asynchronous cache backend support is being developed, the
argument is slightly different—but it seems obvious to expect that any
asynchronous cache backend should at least properly support concurrent use
from multiple asynchronous tasks on the same thread, so thread-local
objects should still be safe.
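For illustration, here is a minimal sketch of the two storage strategies
being compared. The helper below is hypothetical, not Django’s actual
`CacheHandler`, but asgiref’s `Local` is what the commit introduced:
{{{#!python
import threading

from asgiref.local import Local  # the context-aware local the commit adopted

# Thread-local: one cache object per OS thread. Coroutines sharing a
# thread share the object, but they cannot execute the same synchronous
# code at the same time, so this was already safe.
thread_storage = threading.local()

# Task-local: with thread_critical=False, asgiref's Local isolates
# values per asyncio task. Under ASGI each request runs as its own task,
# so each request builds, then discards, its own cache object (and its
# memcached connection).
task_storage = Local(thread_critical=False)


def make_cache_connection():
    # Stand-in for constructing a backend client (e.g. a pymemcache client).
    return object()


def get_cache(storage):
    # Hypothetical helper: lazily create, then reuse, one cache object
    # per thread (threading.local) or per task (asgiref Local).
    if not hasattr(storage, "cache"):
        storage.cache = make_cache_connection()
    return storage.cache
}}}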
--
Ticket URL: <https://code.djangoproject.com/ticket/33625>
Comment (by Anders Kaseorg):
Possibly related: #33497 (for database connections).
--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:1>
* keywords: => asgi, async
* stage: Unreviewed => Accepted
Comment:
I didn't test, but it seems reasonable. Thanks for the report, Anders.
(If you could attach a reproducing project with exact steps, that would
save a cycle later.)
--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:2>
Comment (by Anders Kaseorg):
Sure. Here’s a
[https://gist.github.com/andersk/56784fc0c4cd22005b4ba2925ebb9895 minimal
test project]. Clone it, run `memcached -vv` along with `gunicorn
mctest_wsgi` or `uvicorn mctest_asgi:application`, visit
http://localhost:8000 several times, and check whether `memcached -vv` is
printing lines like “new auto-negotiating client connection” and
“connection closed.”
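The essential pieces of such a project are tiny. A hypothetical sketch of
what it boils down to (not the gist’s exact contents):
{{{#!python
# settings.py excerpt: point the default cache at a local memcached.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
    }
}

# views.py: a view that touches the cache on every request, so each hit
# shows up in the `memcached -vv` output.
from django.core.cache import cache
from django.http import HttpResponse


def index(request):
    hits = cache.get("hits", 0) + 1
    cache.set("hits", hits)
    return HttpResponse(f"hits: {hits}\n")
}}}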
--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:3>
Comment (by Carlton Gibson):
Super, thanks. 🏅
--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:4>
* owner: nobody => Pablo Montepagano
* status: new => assigned
Comment:
I'd like to try to solve this ticket. If I understand correctly, what
would be needed is a connection pool for `PyLibMCCache` and for
`PyMemcacheCache` that works well under ASGI. I'm thinking of doing
something similar to what was done in this repo:
https://github.com/Andrew-Chen-Wang/django-async-redis
Before I start developing, do we have a good way to do integration testing
for this? I'll try to look into that first.
--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:5>
Comment (by Carlton Gibson):
Replying to [comment:5 Pablo Montepagano]:
> Before I start developing, do we have a good way to do integration
> testing for this? I'll try to look into that first.
Good question! We have a few ASGI-related tickets for which this is on
the agenda. I'm currently working on putting test projects together and
then using [https://locust.io Locust] to run requests against them. If
you fancied going that route I'd be happy to give input. (Perhaps ping me
on a sample repo... or...) Thanks!
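For a sense of scale, a Locust file for a repro project like the one above
can be just a few lines (hypothetical, using Locust's standard `HttpUser`
API):
{{{#!python
from locust import HttpUser, task


class CacheUser(HttpUser):
    # Repeatedly request the cache-backed view; watch `memcached -vv`
    # for per-request connection churn while Locust applies load.
    @task
    def index(self):
        self.client.get("/")
}}}
Run it with something like `locust -f locustfile.py --host
http://localhost:8000`.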
--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:6>
Old description:
> When I run an application that uses
> `django.core.cache.backends.memcached.PyMemcacheCache` with WSGI under
> Gunicorn, it makes a single connection to memcached and leaves it open,
> as I expect.
>
> But when I run it with ASGI under uvicorn, it makes a different
> connection to memcached for every request and immediately closes it. That
> is, #11331/#15324 has returned. This at least adds undesirable latency,
> and may also eventually lead to connection failures as described in
> #11331.
>
> I think this happens because e3f6e18513224c8ad081e5a19da641f49b0b43da
> switched the thread-local cache objects to task-local cache objects. That
> was justified in #31253 by “a potential race condition if two coroutines
> touch the same cache object at exactly the same time”. But that
> justification doesn’t make sense: two coroutines ''cannot'' be running
> the same ''synchronous'' code on the same thread at the same time, so
> thread-local objects were already safe.
>
> Now that asynchronous cache backend support is being developed, the
> argument is slightly different—but it seems obvious to expect that any
> asynchronous cache backend should at least properly support concurrent
> use from multiple asynchronous tasks on the same thread, so thread-local
> objects should still be safe.
New description:
When I run an application that uses
`django.core.cache.backends.memcached.PyLibMCCache` with WSGI under
Gunicorn, it makes a single connection to memcached and leaves it open, as
I expect.
But when I run it with ASGI under uvicorn, it makes a different connection
to memcached for every request and immediately closes it. That is,
#11331/#15324 has returned. This at least adds undesirable latency, and
may also eventually lead to connection failures as described in #11331.
I think this happens because e3f6e18513224c8ad081e5a19da641f49b0b43da
switched the thread-local cache objects to task-local cache objects. That
was justified in #31253 by “a potential race condition if two coroutines
touch the same cache object at exactly the same time”. But that
justification doesn’t make sense: two coroutines ''cannot'' be running the
same ''synchronous'' code on the same thread at the same time, so
thread-local objects were already safe.
Now that asynchronous cache backend support is being developed, the
argument is slightly different—but it seems obvious to expect that any
asynchronous cache backend should at least properly support concurrent use
from multiple asynchronous tasks on the same thread, so thread-local
objects should still be safe.
--
Comment (by Anders Kaseorg):
We don’t need a new pool. Both
[https://sendapatch.se/projects/pylibmc/pooling.html pylibmc] and
[https://pymemcache.readthedocs.io/en/latest/getting_started.html#using-a-client-pool pymemcache]
already have one.
It seems to me it’d be best to leave pooling up to the backend library,
because it’s in a better position to know what kinds of concurrency are
safe.
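Roughly, using each library's documented pool APIs (connection parameters
and pool sizes here are illustrative):
{{{#!python
import pylibmc
from pymemcache.client.base import PooledClient

# pylibmc: clone a master client into a fixed number of pooled slots and
# check clients out with a context manager.
master = pylibmc.Client(["127.0.0.1:11211"])
pylibmc_pool = pylibmc.ClientPool(master, 4)
with pylibmc_pool.reserve() as client:
    client.set("key", "value")

# pymemcache: PooledClient manages its pool internally, so one instance
# can be shared and used like an ordinary client.
pooled = PooledClient(("127.0.0.1", 11211), max_pool_size=4)
pooled.set("key", "value")
print(pooled.get("key"))
}}}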
--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:7>