|High-performance, high-precision sharded counters in the HRD||Jeff Schnitzer||8/20/11 1:50 PM|
The short version boiled down to two questions:
1) What is the maximum realistic throughput for memcache increment()
2) The documentation for the HRD says that queries across entity
SELECT * FROM Thing WHERE __key__ >= KEY('Thing', 10) AND __key__ <=
The long version, to hopefully open a discussion:
I'm selling various types of virtual goods, each of which have a fixed
* I can sell up to exactly N items
I have thought of several strategies for building this system, but the
1) Be updated hundreds of times per second (thousands?)
Between memcache and the datastore, I can build this:
* Memcache increment() gives me a current atomic count of purchases.
First question: What is the practical throughput of
Second: The key is being able to get a strongly consistent sum of the
Since get() is strongly consistent, the obvious solution is to give
I guess I've probably answered question #2... but if anyone else has
|Re: [google-appengine] High-performance, high-precision sharded counters in the HRD||Stephen||8/20/11 2:34 PM|
On Sat, Aug 20, 2011 at 9:50 PM, Jeff Schnitzer <je...@infohazard.org> wrote:
You only really need to know whether widgets are in stock or out of stock.
While widgets are in stock, record the customers intent to purchase by
All orders received get an entity in the datastore marked with
Ajax form submission on the front end, receive a ticket immediately,
|Re: [google-appengine] High-performance, high-precision sharded counters in the HRD||Jeff Schnitzer||8/20/11 4:49 PM|
This is a really interesting idea, especially combined with the
1) Are there periods when pull task queues go down? There are for
2) Wouldn't this require continual looping to see if there is anything
3) I'm a little concerned about getting enough parallelism with this.
Just to add a few bits of complexity that I didn't mention the first time:
* The UI must display the # of units left
* There's a shopping-cart of sorts whereby the product is deducted
However, I don't think this extra complexity is too hard to add to a
|Re: [google-appengine] High-performance, high-precision sharded counters in the HRD||Robert Kluin||8/20/11 11:57 PM|
Stephen has a neat idea with using pull-queue tasks as a marker. It would be really great if you could batch the tasks in some way, I think there is an issue for this (maybe subqueues). The reliability of inserting tasks has been excellent from what I've seen, as good as or better than any of the other services, so using them as a marker might be a really good way to ensure you don't oversell an item. In your case, you may not need to care if you aren't immediately able to lease a task. Just using them as markers might be sufficient to know if the purchase is OK or not.
I started writing a process using pull queues for aggregations like this in April. I use the pull tasks as markers, then I've got a dispatcher process that leases them, groups them, and inserts aggregator tasks to process each batch. I try to insert a dispatcher task any time something is added to the pull queue. I use a simple system of batch numbers and rounded timestamps so I don't insert a new dispatcher for every item, made more efficient by memcache markers.
As for the shared counter, I would use the predictable names + fetch by key method. That will give you the best chance of not missing something, just keep the number of shards as low as you can. Each key requires a separate transaction to fetch the entity, true whether using a query or fetch by key, so each additional entity increases the chance a fetched entity has already been updated. This is a great spot to use multiple async batch fetches to minimize the actual wall-clock time. You should be able to come up with a nice way of adding / removing shards to keep this as efficient as possible.
|Re: [google-appengine] High-performance, high-precision sharded counters in the HRD||Stephen||8/21/11 6:14 AM|
On Sun, Aug 21, 2011 at 12:49 AM, Jeff Schnitzer <je...@infohazard.org> wrote:
If you're selling up to 1000/s is this a meaningful number? By the
You could estimate the global sales velocity by recording sales by a
In Phase One, a front-end instance starts up and deducts 1000 widgets
Contention on the datastore counter is reduced 1000-fold and
A front-end may crash or be recycled without selling all 1000 widgets.
However, when a front-end instance grabs the last 1000 widgets from
Sweep the sales looking for gaps in your blocks of 1000. You could
|Re: High-performance, high-precision sharded counters in the HRD||Jan Zawadzki / Hapara||8/22/11 6:28 AM|
Memcache incr: we have upwards of 200 instances doing this with no
apparent performance penalty.
As Robert pointed out, there are occasions when a value "sticks", so
you will need a way to deal with this gracefully.
|Re: [google-appengine] High-performance, high-precision sharded counters in the HRD||Ikai Lan (Google)||8/22/11 10:37 AM|
1. I don't know the exact number, but this method has been used in apps that get thousands of QPS
2. No, because there is also a Key index. There's no entity group root to "transact" on to see if we have the newest version of data. If you did a batch get with the keys themselves, it would be strongly consistent. Here's my example that does that:
My example uses pregenerated keys with a prefix and a batch get by key for strong consistency, though the write behind counter cache (for sorting) is eventually consistent.
Developer Programs Engineer, Google App Engine
On Sat, Aug 20, 2011 at 1:50 PM, Jeff Schnitzer <je...@infohazard.org> wrote:
|Re: [google-appengine] High-performance, high-precision sharded counters in the HRD||Ikai Lan (Google)||8/22/11 10:38 AM|
|Re: High-performance, high-precision sharded counters in the HRD||Kaan Soral||8/22/11 12:49 PM|
Out of curiosity, are you in the development stage?
I did a lot of unnecessary thinking like this when I was in the
development stage and after the production, I saw that half of my work