Memcache not caching big values (> 1Mb)


Luciano Pacheco

Mar 10, 2013, 6:41:02 PM
to django-d...@googlegroups.com
Hi all,

The Memcache backend, using python-memcached, silently ignores data bigger than 1 MB.

This is most serious when using the cache for full pages (the whole response body), where large values are much more likely to be cached.

Also, larger pages are usually the ones that need caching the most, because they're more costly to generate.

I've proposed a fix that addresses this issue and two more:

1 - Compress cache data

Because page bodies compress well, this feature helps a lot with this issue (a small sketch follows this list).

2 - Use a higher pickle protocol

Most Django backends use the highest pickle protocol, but that isn't true for Memcache.

3 - cache_db session "forgetting" big values
https://code.djangoproject.com/ticket/16358
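
(As a rough, purely illustrative sketch that isn't part of the proposal itself: pickling with the highest protocol and compressing with zlib brings a typical repetitive HTML body well under memcached's default 1 MB limit. The numbers are only indicative.)

import pickle
import zlib

# Hypothetical 2 MB HTML-like body; repetitive markup compresses very well.
body = ("<div class='row'>" + "x" * 80 + "</div>\n") * 20000

raw = pickle.dumps(body, 0)                         # protocol 0: ASCII, largest
best = pickle.dumps(body, pickle.HIGHEST_PROTOCOL)  # binary protocol, smaller
compressed = zlib.compress(best)

print("protocol 0:              %d bytes" % len(raw))
print("highest protocol:        %d bytes" % len(best))
print("highest protocol + zlib: %d bytes" % len(compressed))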

Here is the pull request to address these issues:

It has changes in the docs and the memcache backends, but there aren't any tests, because I couldn't figure out how to test them. :( Any advice on how to test them?

Any feedback is welcome; I'm willing to make any necessary changes to get it on the 1.6 train :)

[],
--
Luciano Pacheco
blog.lucmult.com.br

Aymeric Augustin

Mar 11, 2013, 4:46:04 AM
to django-d...@googlegroups.com
Hi Luciano,

> The Memcache backend, using python-memcached, silently ignores data bigger than 1 MB.

This is actually a limitation of memcached itself.

> 1 - Compress cache data
> Because page bodies compress well, this feature helps a lot with this issue.

This ticket was closed as wontfix by Russell, and I agree with him. Django provides an API to implement pluggable caching backends, and that's the right solution here.
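
(Purely as an illustration of that suggestion, and not code from the thread: a minimal sketch of such a pluggable backend, with a made-up class name, could look like this; set_many()/get_many() would need the same treatment. It would be wired up via the BACKEND key in the CACHES setting like any other backend.)

import pickle
import zlib

from django.core.cache.backends.memcached import MemcachedCache

class ZlibMemcachedCache(MemcachedCache):
    """Hypothetical backend: compress pickled values before they reach memcached."""

    def set(self, key, value, **kwargs):
        payload = zlib.compress(pickle.dumps(value, pickle.HIGHEST_PROTOCOL))
        super(ZlibMemcachedCache, self).set(key, payload, **kwargs)

    def get(self, key, default=None, **kwargs):
        payload = super(ZlibMemcachedCache, self).get(key, None, **kwargs)
        if payload is None:
            return default
        return pickle.loads(zlib.decompress(payload))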

> 2 - Use a higher pickle protocol
> Most Django backends use the highest pickle protocol, but that isn't true for Memcache.

Bas Peschier and I fixed this ticket two weeks ago.

> 3 - cache_db session "forgetting" big values
> https://code.djangoproject.com/ticket/16358

This is a valid bug, but your pull request doesn't address it. It merely shifts the limit from "1MB of raw data" to "1MB of compressed data". The ticket has a patch that seems correct to me. It still needs tests.

> It has changes in the docs and the memcache backends, but there aren't any tests, because I couldn't figure out how to test them. :( Any advice on how to test them?

Tests for caching are in tests/cache/tests.py. See also the documentation:
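
(For illustration only, not from Aymeric's message: a large-value test in the spirit of the existing cache tests might look roughly like this, assuming a local memcached instance and the get_cache() helper available in 1.5/1.6. With the default client limit the assertion fails, which is exactly the behaviour being discussed.)

from django.core.cache import get_cache
from django.test import TestCase

class LargeValueMemcachedTests(TestCase):
    def test_set_and_get_value_larger_than_1mb(self):
        # Assumes a memcached server is listening on 127.0.0.1:11211.
        cache = get_cache('django.core.cache.backends.memcached.MemcachedCache',
                          LOCATION='127.0.0.1:11211')
        big_value = 'x' * (2 * 1024 * 1024)  # 2 MB, above the default limit
        cache.set('big', big_value)
        self.assertEqual(cache.get('big'), big_value)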

> Any feedback is welcome; I'm willing to make any necessary changes to get it on the 1.6 train :)

In terms of process, the first step is to make sure you're working on an idea that's accepted.

You may try to reverse the decision to close a ticket as wontfix if you have strong arguments: evidence that the reason for closing the ticket is inappropriate, that a use case was missed, that the context has changed, etc.

I would also recommend against fixing several tickets in one patch. The simpler a patch, the better its chances of being merged. If two tickets really describe the same problem, one should be closed as a duplicate of the other.

-- 
Aymeric.



Matthew Summers

Mar 11, 2013, 10:38:11 AM
to django-d...@googlegroups.com
On Mon, Mar 11, 2013 at 3:46 AM, Aymeric Augustin
<aymeric....@polytechnique.org> wrote:
> Memcache backend, using python-memcache silently ignores data bigger than
> 1Mb.
>
>
> This is actually a limitation of memcached itself.
>

memcached can be invoked with -I <size> (that is a capital i) to
change the size of the slab page. The default is 1m, the minimum is 1k,
and the maximum is 128m (from the man page). You may also want to pass
-L to use large memory pages. Now you have me wondering whether the
various Python backends impose some limit of their own. That would be
interesting to know.

--
M. Summers

"...there are no rules here -- we're trying to accomplish something."
- Thomas A. Edison

Matthew Summers

Mar 11, 2013, 10:47:32 AM
to django-d...@googlegroups.com
On Mon, Mar 11, 2013 at 9:38 AM, Matthew Summers <msumm...@gmail.com> wrote:
> On Mon, Mar 11, 2013 at 3:46 AM, Aymeric Augustin
> <aymeric....@polytechnique.org> wrote:
>> Memcache backend, using python-memcache silently ignores data bigger than
>> 1Mb.
>>
>>
>> This is actually a limitation of memcached itself.
>>
>
> memcached can be invoked with -I <size> (that is a capital i) to
> change the size of the slab page. Default is 1m, minimum is 1k, max is
> 128m (from the man page). You may also want to pass -L to use large
> memory pages also. Now you have me wondering if the various python
> backends imposes some limit. That would be interesting to know.
>

Sadly, python-memcached sets a SERVER_MAX_VALUE_LENGTH constant that
artificially limits the value size. That seems like a poor assumption
given that memcached's limit is configurable.

See here: http://bazaar.launchpad.net/~python-memcached-team/python-memcached/trunk/view/head:/memcache.py#L92

Easy enough to patch ...
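
(Not something stated in the thread, but for reference: if I remember correctly, recent python-memcached releases also let you override that limit per client via a server_max_value_length argument, so no module-level patching is needed, provided memcached itself was started with a bigger page size, e.g. memcached -I 8m.)

import memcache

# Sketch: assumes memcached was started with `memcached -I 8m` and that the
# installed python-memcached accepts server_max_value_length (newer releases do).
client = memcache.Client(
    ['127.0.0.1:11211'],
    server_max_value_length=8 * 1024 * 1024,  # keep in sync with the -I value
)

client.set('big-page', 'x' * (2 * 1024 * 1024))  # 2 MB value
print(client.get('big-page') is not None)        # True once both limits are raised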