Consider a situation where requests for your application experience
a sudden spike in traffic due to being linked from a popular site. The
rate at which your application can consume resources will instantly
increase, and initially all of the requests for your application will be
served as the traffic starts to increase. This "spike rate" will then
gradually decrease in order to ensure your app will continue to serve
throughout the day while remaining under the 24-hour quota limits. Once
the rate of requests pushes your application beyond this decreasing
spike rate, the overage requests will be denied with a 403 error."
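The decaying "spike rate" described in that quote behaves roughly like a
token bucket whose burst allowance shrinks over time. A toy model of that
behavior (all numbers and names are illustrative; this is not App Engine's
actual implementation or its real limits):

```python
import time

class SpikeQuota(object):
    """Toy model of the quoted behavior: a burst allowance that decays
    toward a sustained rate sized to fit the 24-hour quota.
    Hypothetical numbers, not App Engine's real limits."""

    def __init__(self, daily_quota=1000000.0, burst=500.0, decay=0.001):
        self.sustained_rate = daily_quota / 86400.0  # avg requests/sec
        self.burst = burst    # extra capacity granted when the spike starts
        self.decay = decay    # how fast the spike allowance shrinks
        self.tokens = burst
        self.last = time.time()

    def allow(self):
        now = time.time()
        elapsed = now - self.last
        self.last = now
        # the burst allowance gradually decays...
        self.burst *= (1.0 - self.decay) ** elapsed
        # ...while tokens refill at the sustained (24-hour-quota) rate
        self.tokens = min(self.tokens + elapsed * self.sustained_rate,
                          self.burst + 1.0)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # a real server would answer 403 here
```

Early requests in a spike are served from the burst allowance; once it is
exhausted, only the sustained rate remains and the overage is refused.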
I think the explanation given (which I wouldn't classify as an
"excuse") is perfectly reasonable, appropriate, and workable.
That is a little harsh. App Engine is a preview release, and the App Engine
team is still scaling up their service. As with any new software release,
there are unexpected gotchas that have to be debugged in real time.
I'm impressed with how quickly the team has scaled up.
--
Brett Morgan http://brett.morgan.googlepages.com/
During my load/deletion of entities in my test app I've also
hit the 403 over-quota page, with no indication in the Dashboard other
than the "high cpu usage" warnings. I've also had those warnings while
using my webapp interactively from a browser, and nothing happened; using
my client API from a batch script (not related to the bulkloader), I've
observed that after 10-20 warnings in a short period (10-20 seconds) my
app went over quota.
The high-cpu-usage messages are quite funny, since they only tell you
about "x times the cpu usage". From that, combined with the app's
profiling, I've internally assumed a limit of 250 ms for anything
related to datastore reading, and I have no idea about requests making
datastore updates. Trying to find a correlation between response time
and cpu warnings was, in my case, impossible: I've had warnings for
requests with 0.3 s response times and no warnings for requests with
0.7 s response times (read-only requests), and for datastore-writing
requests, even 2.0 seconds didn't make any warning appear :-/
The problem is that I've not been able to make that correlation
using the profiler's reported CPU usage time either; a low CPU usage
time (from the profiler) occasionally produces cpu usage warnings
and, on the other hand, those warnings sometimes fail to show up for
requests using much more time than the first ones (per the profiler
logs). Funny...
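The mismatch described here, where response time and CPU warnings don't line
up, is easier to see if you log wall-clock time and process-CPU time side by
side: a request blocked on a datastore RPC burns wall time but almost no CPU.
A hypothetical instrumentation decorator illustrating this (nothing App
Engine specific; `time.process_time` is modern Python, not the Python 2.5
runtime of the era):

```python
import functools
import time

def timed(func):
    """Hypothetical decorator: logs wall-clock vs. process-CPU time
    for a handler so the two can be compared per request."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wall0 = time.time()
        cpu0 = time.process_time()
        try:
            return func(*args, **kwargs)
        finally:
            wall = time.time() - wall0
            cpu = time.process_time() - cpu0
            print('%s: wall=%.3fs cpu=%.3fs' % (func.__name__, wall, cpu))
    return wrapper

@timed
def slow_io():
    time.sleep(0.2)  # stands in for a datastore RPC: wall time, little CPU
    return 'done'
```

Running `slow_io()` shows a wall time near 0.2 s with near-zero CPU time,
which is consistent with a CPU-based quota ignoring slow read-only requests.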
Jose
I am experiencing the same problem. I have been over the quota for
3.5 hours now. My dashboard doesn't show any activity.
To Google, services and support are just another engineering problem.
Everything can be engineered.
This is a very serious concern for all of us considering trusting
Google with our applications. I expect someone from Google can provide
more information on this.
Hours of service outage caused by a few bad requests seems outrageous.
That's correct. Don't paginate by number directly, paginate by some
property on your entities:
query = Church.all().order('title')
if 'next_first_key' in self.request.GET:
    query.filter('title >=', Church.get(self.request.get('next_first_key')).title)
churches = query.fetch(501)  # one extra, to seed the next page
churches, next_first_key = churches[:-1], churches[-1].key()
Then your "next page" link will go to something with
"?next_first_key={{ next_first_key }}".
You'll also have to add code to deal with the edge cases, but the
basic principle here (relying on the datastore indexes to do your
offsetting) is how to get the best performance from GAE.
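One of those edge cases is the last page, where fetching the extra entity
fails or drops a result. A sketch of how it might be handled (not from the
original post; `fetch` here is a stand-in callable for `query.fetch`, so the
example runs without the GAE SDK):

```python
def paginate(fetch, page_size=500):
    """Sketch of the edge-case handling alluded to above.

    fetch(n) stands in for query.fetch(n) and returns a list of
    entities. Returns (page, seed_for_next_page); the seed is None
    on the last page, meaning no "next page" link should be rendered.
    """
    results = fetch(page_size + 1)      # fetch one extra entity
    if len(results) > page_size:
        # the extra entity exists: use it to seed the next page
        return results[:page_size], results[page_size]
    # fewer results than requested: this is the last page
    return results, None
```

With real datastore entities the seed would be `results[page_size].key()`,
matching the `next_first_key` scheme in the snippet above.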
Dave.
It's not yet set up to really scale; it's just a preview of what will
(hopefully) be possible. Hosting 'real world' apps on it at the moment
is always going to be a risk.
--
Barry