[Django] #33625: Under ASGI, PyLibMCCache closes memcached connection after every request


Django

Apr 7, 2022, 11:53:16 PM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
------------------------------------------------+------------------------
Reporter: Anders Kaseorg | Owner: nobody
Type: Cleanup/optimization | Status: new
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Keywords:
Triage Stage: Unreviewed | Has patch: 0
Needs documentation: 0 | Needs tests: 0
Patch needs improvement: 0 | Easy pickings: 0
UI/UX: 0 |
------------------------------------------------+------------------------
When I run an application that uses
`django.core.cache.backends.memcached.PyMemcacheCache` with WSGI under
Gunicorn, it makes a single connection to memcached and leaves it open, as
I expect.

But when I run it with ASGI under uvicorn, it makes a different connection
to memcached for every request and immediately closes it. That is,
#11331/#15324 has returned. This at least adds undesirable latency, and
may also eventually lead to connection failures as described in #11331.

I think this happens because e3f6e18513224c8ad081e5a19da641f49b0b43da
switched the thread-local cache objects to task-local cache objects. That
was justified in #31253 by “a potential race condition if two coroutines
touch the same cache object at exactly the same time”. But that
justification doesn’t make sense: two coroutines ''cannot'' be running the
same ''synchronous'' code on the same thread at the same time, so thread-
local objects were already safe.

Now that asynchronous cache backend support is being developed, the
argument is slightly different—but it seems obvious to expect that any
asynchronous cache backend should at least properly support concurrent use
from multiple asynchronous tasks on the same thread, so thread-local
objects should still be safe.
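
A stdlib-only sketch (not Django code; `use_cache` and the "connection" strings are hypothetical) of that claim: coroutines sharing one event-loop thread only switch at `await` points, so a synchronous block that lazily creates a thread-local "connection" is never interleaved, and the object is created once and reused rather than torn down per task.

```python
import asyncio
import threading

local = threading.local()  # one slot per thread, shared by all tasks on it
log = []

def use_cache(task_name):
    # Purely synchronous: there is no await here, so no other coroutine
    # on this thread can run until this function returns.
    if not hasattr(local, "conn"):
        local.conn = f"opened by {task_name}"  # lazily "open" one connection
    log.append((task_name, local.conn))

async def handler(task_name):
    for _ in range(3):  # simulate three requests handled by this task
        use_cache(task_name)
        await asyncio.sleep(0)  # yield to the other task between "requests"

async def main():
    await asyncio.gather(handler("task-1"), handler("task-2"))

asyncio.run(main())
# All six log entries share the single thread-local "connection".
```

Swapping `threading.local` for a per-task store would instead give each task its own lazily created client, discarded when the task ends, which matches the per-request reconnects reported here.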

--
Ticket URL: <https://code.djangoproject.com/ticket/33625>
Django <https://code.djangoproject.com/>
The Web framework for perfectionists with deadlines.

Django

Apr 8, 2022, 12:35:01 AM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
-------------------------------------+-------------------------------------
Reporter: Anders Kaseorg | Owner: nobody
Type: Cleanup/optimization | Status: new
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Resolution:
Keywords: | Triage Stage: Unreviewed
Has patch: 0 | Needs documentation: 0
Needs tests: 0 | Patch needs improvement: 0
Easy pickings: 0 | UI/UX: 0
-------------------------------------+-------------------------------------

Comment (by Anders Kaseorg):

Possibly related: #33497 (for database connections).

--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:1>

Django

Apr 8, 2022, 2:05:03 AM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
--------------------------------------+------------------------------------
Reporter: Anders Kaseorg | Owner: nobody
Type: Cleanup/optimization | Status: new
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Resolution:
Keywords: asgi, async | Triage Stage: Accepted
Has patch: 0 | Needs documentation: 0
Needs tests: 0 | Patch needs improvement: 0
Easy pickings: 0 | UI/UX: 0
--------------------------------------+------------------------------------
Changes (by Carlton Gibson):

* keywords: => asgi, async
* stage: Unreviewed => Accepted


Comment:

I didn't test it, but this seems reasonable. Thanks for the report, Anders.

(If you could attach a reproducing project with exact steps, that would
save a cycle later.)

--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:2>

Django

Apr 8, 2022, 3:12:57 AM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
--------------------------------------+------------------------------------
Reporter: Anders Kaseorg | Owner: nobody
Type: Cleanup/optimization | Status: new
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Resolution:
Keywords: asgi, async | Triage Stage: Accepted
Has patch: 0 | Needs documentation: 0
Needs tests: 0 | Patch needs improvement: 0
Easy pickings: 0 | UI/UX: 0
--------------------------------------+------------------------------------

Comment (by Anders Kaseorg):

Sure. Here’s a
[https://gist.github.com/andersk/56784fc0c4cd22005b4ba2925ebb9895 minimal
test project]. Clone it, run `memcached -vv` along with `gunicorn
mctest_wsgi` or `uvicorn mctest_asgi:application`, visit
http://localhost:8000 several times, and check whether `memcached -vv` is
printing lines like “new auto-negotiating client connection” and
“connection closed.”

--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:3>

Django

Apr 8, 2022, 3:28:02 AM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
--------------------------------------+------------------------------------
Reporter: Anders Kaseorg | Owner: nobody
Type: Cleanup/optimization | Status: new
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Resolution:
Keywords: asgi, async | Triage Stage: Accepted
Has patch: 0 | Needs documentation: 0
Needs tests: 0 | Patch needs improvement: 0
Easy pickings: 0 | UI/UX: 0
--------------------------------------+------------------------------------

Comment (by Carlton Gibson):

Super, thanks. 🏅

--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:4>

Django

Jun 30, 2022, 11:17:39 AM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
-------------------------------------+-------------------------------------
Reporter: Anders Kaseorg | Owner: Pablo Montepagano
Type: Cleanup/optimization | Status: assigned
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Resolution:
Keywords: asgi, async | Triage Stage: Accepted
Has patch: 0 | Needs documentation: 0
Needs tests: 0 | Patch needs improvement: 0
Easy pickings: 0 | UI/UX: 0
-------------------------------------+-------------------------------------
Changes (by Pablo Montepagano):

* owner: nobody => Pablo Montepagano
* status: new => assigned


Comment:

I'd like to try to solve this ticket. If I understand correctly, what
would be needed is a connection pool for `PyLibMCCache` and for
`PyMemcacheCache` that works well under ASGI. I'm thinking of doing
something similar to what was done in this repo:
https://github.com/Andrew-Chen-Wang/django-async-redis

Before I start developing, do we have a good way to do integration testing
for this? I'll try to look into that first.

--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:5>

Django

Jun 30, 2022, 1:13:37 PM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
-------------------------------------+-------------------------------------
Reporter: Anders Kaseorg | Owner: Pablo Montepagano
Type: Cleanup/optimization | Status: assigned
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Resolution:
Keywords: asgi, async | Triage Stage: Accepted
Has patch: 0 | Needs documentation: 0
Needs tests: 0 | Patch needs improvement: 0
Easy pickings: 0 | UI/UX: 0
-------------------------------------+-------------------------------------

Comment (by Carlton Gibson):

Replying to [comment:5 Pablo Montepagano]:

> Before I start developing, do we have a good way to do integration
> testing for this? I'll try to look into that first.

Good question! We have a few ASGI-related tickets for which this is on the
agenda. I'm currently putting test projects together and then using
[https://locust.io Locust] to run requests against them. If you fancied
going that route I'd be happy to input. (Perhaps ping me on a sample
repo... or...) Thanks!

--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:6>

Django

Jun 30, 2022, 2:22:08 PM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
-------------------------------------+-------------------------------------
Reporter: Anders Kaseorg | Owner: Pablo Montepagano
Type: Cleanup/optimization | Status: assigned
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Resolution:
Keywords: asgi, async | Triage Stage: Accepted
Has patch: 0 | Needs documentation: 0
Needs tests: 0 | Patch needs improvement: 0
Easy pickings: 0 | UI/UX: 0
-------------------------------------+-------------------------------------

Old description:

> When I run an application that uses
> `django.core.cache.backends.memcached.PyMemcacheCache` with WSGI under
> Gunicorn, it makes a single connection to memcached and leaves it open,
> as I expect.
>
> But when I run it with ASGI under uvicorn, it makes a different
> connection to memcached for every request and immediately closes it. That
> is, #11331/#15324 has returned. This at least adds undesirable latency,
> and may also eventually lead to connection failures as described in
> #11331.
>
> I think this happens because e3f6e18513224c8ad081e5a19da641f49b0b43da
> switched the thread-local cache objects to task-local cache objects. That
> was justified in #31253 by “a potential race condition if two coroutines
> touch the same cache object at exactly the same time”. But that
> justification doesn’t make sense: two coroutines ''cannot'' be running
> the same ''synchronous'' code on the same thread at the same time, so
> thread-local objects were already safe.
>
> Now that asynchronous cache backend support is being developed, the
> argument is slightly different—but it seems obvious to expect that any
> asynchronous cache backend should at least properly support concurrent
> use from multiple asynchronous tasks on the same thread, so thread-local
> objects should still be safe.

New description:

When I run an application that uses
`django.core.cache.backends.memcached.PyLibMCCache` with WSGI under
Gunicorn, it makes a single connection to memcached and leaves it open, as
I expect.

But when I run it with ASGI under uvicorn, it makes a different connection
to memcached for every request and immediately closes it. That is,
#11331/#15324 has returned. This at least adds undesirable latency, and
may also eventually lead to connection failures as described in #11331.

I think this happens because e3f6e18513224c8ad081e5a19da641f49b0b43da
switched the thread-local cache objects to task-local cache objects. That
was justified in #31253 by “a potential race condition if two coroutines
touch the same cache object at exactly the same time”. But that
justification doesn’t make sense: two coroutines ''cannot'' be running the
same ''synchronous'' code on the same thread at the same time, so thread-
local objects were already safe.

Now that asynchronous cache backend support is being developed, the
argument is slightly different—but it seems obvious to expect that any
asynchronous cache backend should at least properly support concurrent use
from multiple asynchronous tasks on the same thread, so thread-local
objects should still be safe.

--

Comment (by Anders Kaseorg):

We don’t need a new pool. Both
[https://sendapatch.se/projects/pylibmc/pooling.html pylibmc] and
[https://pymemcache.readthedocs.io/en/latest/getting_started.html#using-a-client-pool pymemcache]
already have one.

It seems to me it’d be best to leave pooling up to the backend library,
because it’s in a better position to know what kinds of concurrency are
safe.
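
A rough stdlib sketch of the reserve/release pattern such pools implement (the class and method names here are illustrative, not the real pylibmc or pymemcache API): clients are built once and handed out per operation, so connections stay open across requests instead of being recreated per task.

```python
import contextlib
import queue

class ClientPool:
    """Hand out pre-built clients; callers return them when done."""

    def __init__(self, make_client, size):
        self._clients = queue.Queue()
        for _ in range(size):
            self._clients.put(make_client())  # "connect" once, up front

    @contextlib.contextmanager
    def reserve(self):
        client = self._clients.get()  # blocks if the pool is exhausted
        try:
            yield client
        finally:
            self._clients.put(client)  # keep the connection open for reuse

# Illustrative use, with a dict standing in for a memcached client:
pool = ClientPool(make_client=dict, size=2)
with pool.reserve() as client:
    client["key"] = "value"
```

Because reservations hand back the same long-lived objects, the pool sizing and concurrency rules stay where the backend library, which knows its own thread- and task-safety guarantees, can enforce them.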

--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:7>

Django

Oct 1, 2024, 9:36:28 AM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
-------------------------------------+-------------------------------------
Reporter: Anders Kaseorg | Owner: Pablo Montepagano
Type: Cleanup/optimization | Status: assigned
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Resolution:
Keywords: asgi, async | Triage Stage: Accepted
Has patch: 0 | Needs documentation: 0
Needs tests: 0 | Patch needs improvement: 0
Easy pickings: 0 | UI/UX: 0
-------------------------------------+-------------------------------------
Comment (by Natalia Bidart):

I believe that #35757 is a dupe of this one.

--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:8>

Django

Oct 2, 2024, 8:29:37 AM
to django-...@googlegroups.com
#33625: Under ASGI, PyLibMCCache closes memcached connection after every request
-------------------------------------+-------------------------------------
Reporter: Anders Kaseorg | Owner: Pablo Montepagano
Type: Cleanup/optimization | Status: assigned
Component: Core (Cache system) | Version: 4.0
Severity: Normal | Resolution:
Keywords: asgi, async | Triage Stage: Accepted
Has patch: 0 | Needs documentation: 0
Needs tests: 0 | Patch needs improvement: 0
Easy pickings: 0 | UI/UX: 0
-------------------------------------+-------------------------------------
Comment (by Harm Verhagen):

NB: #35757 is about runserver breaking (it stops working); this one
(#33625) is a performance optimization.

--
Ticket URL: <https://code.djangoproject.com/ticket/33625#comment:9>