Too much contention on these datastore entities. please try again.


Benjamin

Oct 5, 2011, 10:44:45 AM
to Google App Engine
Hi guys,

I know there are other threads on this topic, but they are a little
old. I can't seem to code my way out of this error: "update point
error: too much contention on these datastore entities. please try
again."

I'm only reading and writing a small datastore entity a few times
a second, but I still get swamped with these errors in my logs. I'm
trying to move the data being accessed into memcache to decrease my
load on the datastore, but I'm surprised I need to do this. I'm just
updating an object with three statistics based on incoming data.

A lot of these errors are coming from a task that is saving data and
updating the stats. Is it possible that the task queue is trying to
write simultaneously and causing the error?

My app id is nimbits1.

Ben

Simon Knott

Oct 5, 2011, 10:47:06 AM
to google-a...@googlegroups.com
I believe that the advised write-rate for a single entity group is 1 write per second.

Benjamin

Oct 5, 2011, 11:01:37 AM
to Google App Engine
Huh. It must be that the task queue is attempting several writes in
under a second. I could use some guidance on the best way to handle
this kind of demand. I was considering maintaining the statistics in
memcache and then having them trickle back into the datastore using a
cron task so I can control the frequency.
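In outline, that buffer-and-flush idea could look like this (a rough sketch with a plain dict standing in for memcache and another for the datastore; the function names are made up for illustration):

```python
# Buffer counter deltas in a fast cache; a periodic cron job folds
# them into the datastore. The dict below stands in for memcache.
# Trade-off: if memcache is evicted before the flush runs, pending
# deltas are lost.

cache = {}  # stand-in for memcache: stat name -> buffered delta

def record_stat(name, delta=1):
    # Hot path: bump the cached delta only; no datastore write.
    cache[name] = cache.get(name, 0) + delta

def flush_stats(datastore):
    # Cron path: fold buffered deltas into the datastore, one write
    # per stat, then reset the buffer.
    for name in list(cache):
        datastore[name] = datastore.get(name, 0) + cache[name]
        cache[name] = 0
```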

Joshua Smith

Oct 5, 2011, 11:16:51 AM
to google-a...@googlegroups.com
Sharding:

http://code.google.com/appengine/articles/scaling/contention.html
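For reference, the core of the sharded-counter pattern from that article looks roughly like this (a pure-Python sketch with a dict standing in for the datastore; the shard count and names are illustrative):

```python
import random

# Spread writes for one logical counter across N shard entities, so
# no single entity sees more than ~1 write per second.

NUM_SHARDS = 20
shards = {}  # (counter_name, shard_index) -> count

def increment(name):
    # Each write picks a random shard, cutting per-entity contention
    # by roughly a factor of NUM_SHARDS.
    key = (name, random.randrange(NUM_SHARDS))
    shards[key] = shards.get(key, 0) + 1

def get_count(name):
    # Reads sum over all shards; in practice the total is often cached.
    return sum(count for (n, _), count in shards.items() if n == name)
```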


Simon Knott

Oct 5, 2011, 11:18:49 AM
to google-a...@googlegroups.com
I believe you have a couple of options:
  • Change your task queue configuration so that it can't run updates concurrently, and slow the queue down so that it doesn't exceed the desired rate of 1 write per second.  Docs for this configuration can be found at http://code.google.com/appengine/docs/python/config/queue.html - Ikai also drew a nice diagram at http://twitpic.com/3y5814!  This route may lead to a backlog of data if your updates are frequent.
  • Alternatively, you can write the data to Memcache, as you suggested, and have a cron job periodically write the data into the datastore.  This route can lead to data loss: if Memcache is flushed, you will lose any updates that were pending storage.
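For the first option, the throttling lives in queue.yaml, roughly like this (the queue name is made up; check the linked docs for which parameters your SDK version supports):

```yaml
queue:
- name: stats-updates
  rate: 1/s                    # at most one task dispatched per second
  bucket_size: 1               # no bursts above the steady rate
  max_concurrent_requests: 1   # never run two updates at once
```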

Gerald Tan

Oct 5, 2011, 2:06:50 PM
to google-a...@googlegroups.com
One thing you can try is to persist a copy of your entity with a timestamp every time you update it. When you want the latest version, you simply query for the most recent copy in the datastore. You can then set up a cron job to update the master copy of the entity and delete all the copies.
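A datastore-agnostic sketch of this append-then-compact pattern (lists and dicts stand in for entities; all names are illustrative):

```python
import itertools
import time

# Each update persists a new timestamped copy, reads take the newest
# copy, and a periodic job folds the copies into one master record.

_seq = itertools.count()  # tie-breaker when timestamps collide
versions = []             # stand-in for the timestamped copies
master = {}               # stand-in for the master entity

def save_update(data):
    # Append-only write: no two updates touch the same "entity".
    versions.append((time.time(), next(_seq), data))

def latest():
    # "Query for the latest version": newest copy wins, else the master.
    return max(versions)[2] if versions else master.get("data")

def compact():
    # Cron job: promote the newest copy to master and delete the copies.
    if versions:
        master["data"] = max(versions)[2]
        del versions[:]
```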

Matthew Jaggard

Oct 5, 2011, 11:48:21 AM
to google-a...@googlegroups.com
A further option involves using pull task queues.

1. Write a task to the pull queue for each item you wish to remember.
2. Have a queue reader run periodically*, reading up to 1000 tasks
at a time and writing the relevant information to the datastore.

*Although this sounds simple, you need to make sure this doesn't run
too often (and burn instance time with no benefit) or too infrequently
(and fill up the queue because it never gets through all the tasks).
Scaling is not done for you like it is with a push queue.
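A stand-in sketch of the reader in step 2 (the real pull-queue call is taskqueue.Queue.lease_tasks, capped at 1000 tasks per request; here a deque plays the role of the pull queue, and all names are illustrative):

```python
from collections import deque

# Lease a batch of tasks, aggregate them in memory, then do one
# datastore write per stat instead of one per task.

pull_queue = deque()
MAX_LEASE = 1000  # per-call lease limit on App Engine pull queues

def enqueue(stat_name):
    pull_queue.append(stat_name)

def drain_once(datastore):
    # One periodic run of the reader.
    batch = [pull_queue.popleft()
             for _ in range(min(MAX_LEASE, len(pull_queue)))]
    for name in batch:
        datastore[name] = datastore.get(name, 0) + 1
    return len(batch)
```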

Mat.


Cyrille Vincey

Oct 10, 2011, 7:12:07 PM
to Google App Engine
Joshua is right: contention during statistics writes has to be fixed
with sharding.
http://code.google.com/appengine/articles/sharding_counters.html

It seems odd at first, but give it a try; it works like a charm.
If you wish, I have some code I can share that shards automatically
when the datastore throws a contention exception.

Benjamin

Oct 11, 2011, 4:37:23 PM
to Google App Engine
Thanks guys, I've been trying to catch the error and start tasks to
retry, but it seems to be taking me down a bad path. Sharding
seems like the way to go, and I'm going to give it a try.

Cyrille - it'd be great to see your code

Ben
nimbits.com

