Memcache assumptions for Counters


Andrin von Rechenberg

Jul 22, 2011, 1:05:33 PM
to google-a...@googlegroups.com
Hey there

I'm building something like "Google Analytics" for App Engine, but in real time
(including QPS, hourly & daily graphs, backend counters, monitors with alerts, etc.).
The cool thing is that it only uses memcache to increment counters/stats, so it's
really quick to use in production code. Every minute I gather all counters and write them to the datastore.
It seems to work perfectly for my app (~250 QPS, with about 1,000 different counters and about
1,000 counter increments per second).
I can also measure how correct my data is (in case items get flushed from memcache, though so far that has never happened),
but it's all based on one assumption:

If I call:

memcache.incr("a", initial_value=0)
...
memcache.incr("b", initial_value=0)
....
memcache.incr("b", initial_value=0)
....

If "a" is still in memcache, "b" will also be in memcache and won't have been flushed, correct?
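For context, the counter pattern above can be sketched with a stand-in for the memcache client (a plain in-process dict here; the real client is google.appengine.api.memcache, and only incr/get_multi below mirror that API — FakeMemcache itself is illustrative):

```python
# Minimal simulation of memcache.incr(key, initial_value=...) semantics,
# plus the once-a-minute flush of all counters to persistent storage.
# Sketch only: stands in for google.appengine.api.memcache.

class FakeMemcache:
    def __init__(self):
        self._data = {}

    def incr(self, key, delta=1, initial_value=None):
        # Mirrors memcache.incr: if the key is absent and initial_value is
        # given, the counter starts at initial_value before the increment.
        if key not in self._data:
            if initial_value is None:
                return None  # real memcache returns None for a missing key
            self._data[key] = initial_value
        self._data[key] += delta
        return self._data[key]

    def get_multi(self, keys):
        return {k: self._data[k] for k in keys if k in self._data}

cache = FakeMemcache()
cache.incr("a", initial_value=0)
cache.incr("b", initial_value=0)
cache.incr("b", initial_value=0)

# Once a minute, a cron-style job would read every counter and persist it:
snapshot = cache.get_multi(["a", "b"])
# snapshot == {"a": 1, "b": 2}
```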

Or in other words: if two items in memcache have the same entity size,
does memcache behave like an LRU or a FIFO cache?

Any response is greatly appreciated...

-Andrin

Ikai Lan (Google)

Jul 22, 2011, 1:52:06 PM
to google-a...@googlegroups.com
Memcache works like an LRU cache, but I don't see why "a" would force out "b" unless you ran out of space.

Also, App Engine's Memcache has two LRU structures: an app-specific LRU and a global LRU for that Memcache instance.
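To see why "a" surviving says little about "b" under LRU: eviction is driven by recency, so a frequently incremented key stays fresh while an idle one ages out, even when both items are the same size. A toy LRU (illustrative only, not App Engine's implementation):

```python
from collections import OrderedDict

class TinyLRU:
    """Toy LRU cache: most recently used keys survive; capacity is in items."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def incr(self, key, initial_value=0):
        value = self._data.pop(key, initial_value) + 1
        self._data[key] = value             # re-insert => most recently used
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used key
        return value

    def __contains__(self, key):
        return key in self._data

lru = TinyLRU(capacity=2)
lru.incr("a")
lru.incr("b")
lru.incr("a")   # "a" is now more recent than "b"
lru.incr("c")   # cache is full: "b" (least recently used) is evicted
assert "a" in lru and "b" not in lru
```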

Ikai Lan 
Developer Programs Engineer, Google App Engine


--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To post to this group, send email to google-a...@googlegroups.com.
To unsubscribe from this group, send email to google-appengi...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.

Ikai Lan (Google)

Jul 22, 2011, 1:53:53 PM
to google-a...@googlegroups.com
Oh, I see what you're getting at: you're asking me if B will still be in the cache if A is still in the cache. That depends on whether or not the keys hash to the same Memcache instance. 

FYI: in general, we don't make any guarantees about this behavior, so it could become problematic down the line if this changes.
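To illustrate the routing being described: a multi-instance memcache typically maps each key to an instance by hashing it, so "a" and "b" can live on different machines, each with its own LRU. A sketch assuming simple modulo sharding over a hash digest (the actual routing scheme and instance count are unspecified):

```python
import hashlib

NUM_INSTANCES = 4  # illustrative; the real instance count is not published

def instance_for(key):
    # Stable hash of the key, mapped onto one of NUM_INSTANCES shards.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_INSTANCES

# "a" and "b" may land on different instances, so one can be evicted
# (or its instance flushed) while the other survives untouched.
print(instance_for("a"), instance_for("b"))
```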


Ikai Lan 
Developer Programs Engineer, Google App Engine


Ikai Lan (Google)

Jul 22, 2011, 1:55:03 PM
to google-a...@googlegroups.com
One more thing to be aware of: there are times when Memcache needs to be flushed. If a flush happens sometime, B can have a value, but A would be unset.

Ikai Lan 
Developer Programs Engineer, Google App Engine


MiuMeet Support

Jul 22, 2011, 2:08:03 PM
to google-a...@googlegroups.com
Thanks for getting back to me so quickly!

RE: Your last email, you wrote:
"B can have a value, but A would be unset."

That wouldn't be a problem, since I only need the implication A => B ("if A exists, then B exists").
But I assume you meant:
A can have a value, but B would be unset. Right?

Can you (*really roughly*) say how many memcache instances you are running? Let's say I created 1,000 "A"s: would I hit all memcache instances (with high probability)? Then the assumption "if all A's exist, then B exists" would hold...
Or can I influence the hash somehow to end up on the same instance (something other than finding hash collisions :))?
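Whether 1,000 keys touch every instance is a coupon-collector-style question: with N instances and k uniformly sharded keys, the probability that some instance receives no key is at most N * (1 - 1/N)^k by the union bound. A rough calculation, with N assumed since the real count isn't published:

```python
# Upper bound on the probability that k uniformly sharded keys miss
# at least one of N memcache instances (union bound over instances).
N = 100   # assumed instance count -- the real number is not published
k = 1000  # number of "A" counters

p_miss_some = N * (1 - 1.0 / N) ** k
print(p_miss_some)  # roughly 0.004 with these numbers, i.e. ~0.4%
```

So with these (assumed) numbers, "all A's exist" would cover every instance with high but not overwhelming probability; the bound shrinks quickly as k grows relative to N.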


Thanks again for getting back to me so quickly.

-Andrin

Ikai Lan (Google)

Jul 22, 2011, 2:16:36 PM
to google-a...@googlegroups.com
Ha ha ha ...

To be honest, I couldn't say off the top of my head, but it's not something you can depend on.

Another thing you can think about doing is using the backend instances. Those are more or less guaranteed to stick around, though you might not be able to store as much data in them.

Ikai Lan 
Developer Programs Engineer, Google App Engine

