Hey,
Thanks for the info. Yes, this generally confirms the issue: some of your
higher slab classes show "free_chunks 0", so any set that needs chunks from
those classes can error out. The "stats items" output backs this up, since
there are no actual items in those lower slab classes.
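If it helps to automate the check, here's a minimal sketch that parses
"stats slabs" text and flags classes with no free chunks. The sample text
is hypothetical; real output would come from something like
`echo "stats slabs" | nc localhost 11211`.

```python
# Sketch: flag slab classes whose free_chunks counter is 0.
def exhausted_classes(stats_text):
    """Return sorted slab class ids with free_chunks == 0."""
    free = {}
    for line in stats_text.splitlines():
        parts = line.split()
        # Per-class lines look like: STAT <class>:<key> <value>
        if len(parts) == 3 and parts[0] == "STAT" and ":" in parts[1]:
            clsid, key = parts[1].split(":", 1)
            if key == "free_chunks":
                free[int(clsid)] = int(parts[2])
    return sorted(c for c, n in free.items() if n == 0)

# Hypothetical sample output, not from the original report:
sample = """\
STAT 38:free_chunks 12
STAT 39:free_chunks 0
STAT active_slabs 2
END"""

print(exhausted_classes(sample))  # [39]
```

Any class it prints is one where a large set could fail for lack of chunks.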
You're certainly right that capping your items below 512k would also work
as a workaround; but in general, if a feature exists it'd be nice if it
worked well :) Please open an issue so we can improve things!
I've been intending to lower the slab_chunk_max default from 512k to
something much smaller, as that actually improves memory efficiency a bit
(less wasted gap in the higher classes). That may help here. The system
should also try evicting items from the highest LRU... I need to
double-check whether it was already attempting that and failing.
The page mover might also be adjustable, but I'm not sure. It could
probably be made to keep one page in reserve, but I think the algorithm
doesn't expect slab classes with no items in them, so I'd have to audit
that too.
If you're up for experiments, it'd be interesting to know whether setting
"-o slab_chunk_max=32768" or 16k (probably not more than 64k) makes things
better or worse.
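For reference, the experiment would look roughly like this (the memory
limit and default port are assumptions for illustration, not from your
setup):

```shell
# Restart memcached with a smaller max chunk size (32k shown; try 16k too).
# -m 1024 and port 11211 are assumed placeholders.
memcached -m 1024 -o slab_chunk_max=32768

# Compare the free_chunks spread across classes before and after:
echo "stats slabs" | nc localhost 11211 | grep free_chunks

# Confirm the effective value actually took (field name may vary by version):
echo "stats settings" | nc localhost 11211 | grep slab_chunk_max
```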
Also, crud... it's documented as taking kilobytes, but that doesn't seem
to be working somehow? aaahahah. I guess the big EXPERIMENTAL tag scared
people off, since that never got reported.
I'm guessing most people have a mix of small and large items, but you only
have large items and a relatively low memory limit, which is why you're
hitting this so easily. I think most people storing large items have 30G+
of memory, so the allocations end up more spread around.
Thanks,