Cron job suffering from intermittent timeouts since approx 21:00 UTC 12-OCT-09


RollingCircle

Oct 13, 2009, 10:15:43 AM
to Google App Engine
(Apologies if this appears to be a cross-posting - I did raise this as
a reply to a post describing a similar issue, but that reply seems to
have disappeared into the ether)

Since approx 21:00 UTC yesterday (12-OCT-09) my application (tag
'metutil') has been suffering from roughly 75% timeouts on a
previously reliable cron job - although when run manually, the request
run by the cron job still completes correctly in all cases.

The timeout report in the log states:

"Request was aborted after waiting too long to attempt to service your
request. Most likely, this indicates that you have reached your
simultaneous active request limit. This is almost always due to
excessively high latency in your app. Please see
http://code.google.com/appengine/docs/quotas.html for more details."

and the reported time for the request when it fails is usually just
over 10 seconds. Normally, the request (which calls out via HTTP to
an external service) completes in a couple of seconds at most. The
cron job is set to run every 15 minutes, and as noted above, currently
roughly every 3 out of four requests are now timing out.
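For reference, a cron job on that schedule would be declared in the app's cron.yaml along these lines; this is a minimal sketch, and the /tasks/fetch handler path is a hypothetical stand-in, not the actual path used by 'metutil':

```yaml
cron:
- description: poll the external HTTP service
  url: /tasks/fetch        # hypothetical handler path
  schedule: every 15 minutes
```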

Could this be related to the 1.2.6 release I've seen mentioned on
other threads?

Regards,

Ed

RollingCircle

Oct 14, 2009, 11:33:36 AM
to Google App Engine
Still occurring as of 15:30 UTC 14-OCT-09.

Any ideas?



divricean

Oct 14, 2009, 3:06:21 PM
to Google App Engine
I can confirm this exact behavior, except that my cron jobs run every
hour. The request fails at just over 10 seconds.


Jon McAlister

Oct 14, 2009, 4:39:46 PM
to google-a...@googlegroups.com
There was an issue with 1.2.6 in that throughput on task queue requests
was being unfairly throttled. This has now been fixed. Apologies for the
bug and for the confusion.

RollingCircle

Oct 15, 2009, 12:01:25 PM
to Google App Engine
Normal service resumed for 'metutil' cron jobs as of roughly 02:00 UTC
15-OCT-09 - thanks to the team for resolving this.

Regards,

Ed