:)
What do the logs say? I think tasks are handled like any other HTTP request. Are you starting new processes a bunch of times? That's 7-8 seconds each time; that may be the cause.
On Jun 22, 2010 4:53 PM, "dflorey" <daniel...@gmail.com> wrote:
And: it runs fast as hell in the dev environment. Just a few
seconds to run the full batch of tasks.
On Jun 22, 11:51 pm, dflorey <daniel.flo...@gmail.com> wrote:
> :-)
> Yes, I can see like 1000 wait...
Regards
Martin Webb
--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To post to this group, send email to google-a...@googlegroups.com.
To unsubscribe from this group, send email to google-appengi...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
You can only sustain about 1 transaction per second per entity
group (a post by Nick Johnson).
Maybe you can get 2 per second, resulting in 120 updates finishing
in 1 minute.
Have a look at Brett Slatkin's talk from Google I/O 2010; the fan-in
solution might be something you can use.
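The 1 transaction/sec/entity-group limit is also why counters get sharded in the first place. As a back-of-envelope sketch (my own arithmetic, not an official formula), the shard count you need is roughly the target write rate divided by what one entity group can sustain:

```python
import math

def shards_needed(target_writes_per_sec, writes_per_sec_per_group=1.0):
    """Minimum shard count to sustain a target write rate, assuming
    increments are spread uniformly at random across the shards."""
    return int(math.ceil(target_writes_per_sec / writes_per_sec_per_group))

# A queue configured at rate=50/s would want on the order of 50 shards:
print(shards_needed(50))          # -> 50
# 120 updates/minute needs only 2 shards:
print(shards_needed(120 / 60.0))  # -> 2
```

This assumes the documented ~1 write/sec figure is a hard average; in practice it is a guideline, so you would add headroom.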
2010/6/23 Tristan Slominski <tristan....@gmail.com>:
The example on the TaskQueue documentation page also mentions the 1
transactional update/sec limit.
2010/6/24 dflorey <daniel...@gmail.com>:
The application generates 1000 tasks. Each task has a parameter that
is the name of a counter, chosen at random. The task increments one of
the shards of this counter in a transaction.
All tasks were GET requests, so you can see the parameters in the
logs. A second parameter is the task number (0-999). This lets you see
when the added tasks are executed.
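The shard-picking part of such a task handler can be sketched in a few lines. This is my reconstruction of the setup described above, not the author's actual code; the shard count and key_name scheme are made up for illustration:

```python
import random

NUM_SHARDS = 20  # assumed; the post does not state the shard count

def shard_key_name(counter_name, num_shards=NUM_SHARDS):
    """Pick one shard of a counter at random. All shards of a counter
    share the counter name as a key_name prefix, as described later
    in the post."""
    return "%s-shard-%d" % (counter_name, random.randint(0, num_shards - 1))

# The GET task handler would load the entity with this key_name and
# increment it inside a datastore transaction (db.run_in_transaction
# in the old App Engine Python API).
name = shard_key_name("counter7")
assert name.startswith("counter7-shard-")
```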
It takes about 10-13 seconds to add 1000 tasks to a queue one at a
time, a fill rate of about 75/s.
If I use batch add, in batches of 50, it takes 1.2 seconds to add all
1000 tasks.
Queue settings are rate=50/s, bucket_size=10.
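Splitting the 1000 tasks into batches of 50 is plain list slicing; each batch would then go to a single Queue.add() call in the App Engine API. A minimal sketch of the chunking (the 50 comes from the batch size used above):

```python
def chunks(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# 1000 task descriptions -> 20 batches of 50 each:
batches = chunks(list(range(1000)), 50)
print(len(batches))     # -> 20
print(len(batches[0]))  # -> 50
```

Fewer RPC round trips is presumably why the batch fill is an order of magnitude faster than adding one task at a time.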
For sequential add:
it takes 97 sec to process the tasks (10.3 tasks/sec).
Looking at the logs, I notice that the later tasks are executed in
groups of bucket_size.
The time between groups is 2 seconds, right down to the millisecond.
I increased bucket_size to 20 (the maximum, according to appcfg.py).
Now it took only 58 sec for 1000 tasks (17 tasks/sec), sequential fill.
If you plot a graph of the cumulative tasks finishing per second,
then for the first 12 seconds 46 tasks/sec finish; from then until the
end you get 9.6 tasks/sec. That means about half the work is done in
just the first 12 seconds.
I found on the doc page that you can batch add tasks using the
Queue.add() method.
The fill time is reduced to 1.3 seconds, but now it takes 100-105
seconds for 1000 tasks (10 tasks/sec). Many of the early tasks took
1-8 seconds to complete; later tasks took 100-200 ms. Only in
the first second do 27 tasks finish; after that, 10 tasks/sec finish
(20 every 2 seconds).
It looks like during the request that inserts the tasks you get a
higher tasks/sec completion rate. When this request ends you get
bucket_size tasks every 2 seconds until the end.
Is there an explanation for this behavior?
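A back-of-envelope model of the tail (not an explanation of the scheduler internals, just a sanity check on the numbers): if the observed pattern of bucket_size tasks every 2 seconds holds for the whole run, the throughput and total time follow directly:

```python
def tail_rate(bucket_size, interval_sec=2.0):
    """Steady-state throughput if bucket_size tasks run every interval."""
    return bucket_size / interval_sec

def total_time(num_tasks, bucket_size, interval_sec=2.0):
    """Time to drain num_tasks at that steady-state rate."""
    return num_tasks / tail_rate(bucket_size, interval_sec)

# With bucket_size=20: 10 tasks/sec, matching the observed
# "20 every 2 seconds", and ~100 sec for 1000 tasks, close to the
# measured 100-105 sec of the batch-add run.
print(tail_rate(20))         # -> 10.0
print(total_time(1000, 20))  # -> 100.0
```

The measured runs finish slightly faster than this model because of the higher-rate burst while the inserting request is still running.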
During these experiments I got around 15-30 transaction collisions per
1000 tasks.
No task needed a rerun.
At the moment all counter shards have the same key_name prefix. There
are moments where 60-70 shards got an update within 1 second, so
apparently it is no problem that they all address the same tablet
server.
Djidjadji