High cache miss rate with memcached


rufusth...@gmail.com

unread,
Feb 8, 2017, 4:33:11 PM
to Support
I've been trying to get IISpeed tuned to my liking for the past week. I'm close to uninstalling it and giving up out of frustration. I hope Otto or someone can help me out.

The current problem: with memcached, my cache miss rate according to the Admin Console graph is hovering in the low 70% range after the site has been running for 24+ hours.

I was originally running IISpeed without memcached, using an SSD for the file cache. Things worked well as long as I kept the file cache size below roughly 2.5 GB. When I set the file cache size larger than that, my IIS worker processes would start consuming lots of memory during the cache clean period. "Lots" of memory in this case means 500+ MB consumed in 4 KB chunks over a period of 60 to 90 seconds; enough that the IIS worker process would hit its configured recycle limit and be killed, presumably leaving the file cache clean incomplete. (The file cache would eventually grow to many times its configured size, so I guess the clean phase wasn't finishing.)

Otto suggested I try memcached due to limitations of the Windows file system. I have memcached running on the local system with 3GB of memory allocated. It's shared among three sites all using the same root domain for IISpeed.
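One way to cross-check the Admin Console graphs is to ask memcached itself for its counters. A sketch, assuming the standard text-protocol stats command and a netcat-style client available on the box:

```shell
# Sketch: dump memcached's own hit/miss/eviction counters.
# A high "evictions" count relative to "total_items" would suggest
# the 3 GB allocation is too small for three sites sharing it.
printf 'stats\r\nquit\r\n' | nc localhost 11211 \
  | grep -E 'get_hits|get_misses|evictions|limit_maxbytes'
```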

Image and HTML rewriting is working well and is very quick, but very little data is being cached to the drive. With memcached running, IISpeed barely seems to use the file cache: only 540 MB of data there after 24 hours.

The site is very large, with 300 to 500 simultaneous users most of the time. Googlebot has also been fetching pages at a rate of 180k to 200k per day lately due to a site-wide switch from HTTP to HTTPS. Maybe Googlebot is the cause of the poor cache hit rate?

Is there anything I can do to lower the cache miss percentage besides giving memcached a lot more memory? When I was using the filesystem only, my cache hit rate was 80% (miss rate 20%). If there's no solution, my efforts would be much better spent simply processing the images on the drive with ImageMagick instead of using IISpeed/PageSpeed for anything. The extra CPU and memory resources are very significant for only a 25-30% cache hit rate.

rufusth...@gmail.com

unread,
Feb 8, 2017, 4:39:10 PM
to Support, rufusth...@gmail.com
A few more stats from the Console graphs:

Image rewrite failures: 2.65%
Resources not rewritten because of restrictive Cache-Control headers: 1.01%
Cache lookups that were expired: 0.93%
Resources not rewritten because domain wasn't authorized: 0.01%
Resources not loaded because of fetch failures: 0.00%

Any other stats to check?

Otto van der Schaaf

unread,
Feb 8, 2017, 4:58:37 PM
to rufusth...@gmail.com, Support
Sorry to hear that.

When you use memcached, the filesystem cache will only be used for files larger than 1 MB (off the top of my head), because these don't fit into memcached. Or at least, some versions of memcached have that limitation. If you need cache persistence, there are off-the-shelf alternatives to memcached that are protocol-compatible and should work.
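That 1 MB cutoff matches memcached's default maximum item size. Since memcached 1.4.2 it can be raised with the -I flag, which should let more large entries stay in memcached instead of falling through to the file cache. A sketch of the invocation, using the stock memcached flags (the Windows service wrapper may pass these differently):

```shell
# -m: cache memory in MB (3 GB, as described earlier in the thread).
# -I: max item size; the default is 1m, raised here to 4m so larger
#     rewritten resources fit in memcached.
memcached -m 3072 -I 4m -p 11211
```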

On the miss rates - part of that might be attributable to the access patterns pagespeed uses. What do the hit rates look like?
Things that may cause a high miss rate:
- If your HTML references any high-entropy (e.g. dynamic/changing) resource URLs, that may cause a high miss rate.
- If you deployed filters that collect data via beaconing (and do not have beaconing turned off), that may skew the miss rate, because these store many small, short-lived entries, some of which may never actually be used.
- I have to double check, but I suspect in-place resource optimization (IPRO) may give you high miss rates in certain situations, especially when you have a huge volume of URLs and these are being crawled.
- Finally, if your cache is (way) undersized, that is going to give you a high miss rate. The ballpark cache-size recommendation is around 2-3 times the size of the original CSS/JS/images (the images need to be stored in multiple variants).
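As a rough illustration of that sizing rule (the asset sizes here are hypothetical, not numbers from this thread):

```python
# Ballpark pagespeed cache sizing: 2-3x the original static assets,
# since images are stored in several rewritten variants.
def recommended_cache_mb(css_mb, js_mb, images_mb, factor=3):
    """Return a rough cache-size recommendation in MB."""
    return factor * (css_mb + js_mb + images_mb)

# Hypothetical site: 50 MB CSS, 100 MB JS, 1500 MB of images.
print(recommended_cache_mb(50, 100, 1500))  # prints 4950, i.e. ~5 GB
```

On that arithmetic, a 3 GB memcached could easily be undersized for a very large site.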

What may help with getting things to run smoothly is starting out in passthrough mode with IPRO turned off, and then trying out features one by one, perhaps starting with the ones that are most important for your use case.
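A sketch of that passthrough starting point, written with mod_pagespeed-style directive names (an assumption on my part; IISpeed's configuration syntax may differ, so check its documentation for the exact form):

```
pagespeed RewriteLevel PassThrough;
pagespeed InPlaceResourceOptimization off;

# Then enable filters one at a time, most important first, e.g.:
pagespeed EnableFilters rewrite_images;
```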

Otto
