Elevated Error Levels: "Request was aborted after waiting too long to attempt to service your request."


David Fischer

Jan 20, 2016, 5:20:04 PM1/20/16
to Google App Engine
Starting at about 8 AM PST today, I'm seeing elevated error levels on my App Engine app. I see batches of requests where my code is never executed and I simply receive a generic server error (not my normal custom 500 page) that says "Error: Server Error - The server encountered an error and could not complete your request - Please try again in 30 seconds". When I view the logs, I see 500 errors taking 10-15 seconds with 0-byte responses, and the only log message is "Request was aborted after waiting too long to attempt to service your request.". Since I log quite verbosely and none of my own log lines appear, I can safely assume App Engine never reached my code. The overall number of requests my app is getting is normal relative to past loads.

Nothing has changed on my end and there were no deploys to my app or changes to my settings at or around that time.

Is anybody else seeing these types of problems? Any advice?

Relevant parts of my app.yaml:
version: 37
runtime: python27
api_version: 1
threadsafe: false

module: default
instance_class: F4
automatic_scaling:
  max_idle_instances: 3
  min_pending_latency: 5.0s

I figured this could be an issue with insufficient idle instances, so I changed my automatic scaling settings to include a min_idle_instances value, but that has not helped.
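For anyone trying the same mitigation, here is a rough sketch of what that change looks like in app.yaml, assuming the standard python27 automatic_scaling options; the numbers are illustrative, not a recommendation:

```yaml
# Sketch of the mitigation described above (illustrative values only).
# min_idle_instances keeps warm instances resident so traffic bursts are
# less likely to sit in the pending queue waiting for a cold start.
module: default
instance_class: F4
automatic_scaling:
  min_idle_instances: 2
  max_idle_instances: 3
  min_pending_latency: 5.0s
```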
[Attachment: Screen Shot 2016-01-20 at 2.02.50 PM.png]

David Fischer

Jan 21, 2016, 1:20:30 PM1/21/16
to Google App Engine
As of about 4PM PST yesterday, this is resolved.

Danny Leshem

Jan 21, 2016, 2:10:15 PM1/21/16
to Google App Engine
We are experiencing the same issue, right now.

App ID = spice-prod

Nick (Cloud Platform Support)

Jan 21, 2016, 5:39:05 PM1/21/16
to Google App Engine
@Danny Leshem,

Are there any request logs you can post related to this error? Do you have any stack traces? I'm currently doing my best to see if we can find a cause for your issues, but in the absence of more information, this is difficult to determine.

Nick (Cloud Platform Support)

Jan 25, 2016, 7:52:55 PM1/25/16
to Google App Engine
Hey Danny,

Is this issue still occurring for you?

Thanks,

Nick


Trevor Chinn

Feb 2, 2016, 8:46:06 PM2/2/16
to Google App Engine
Hi Nick, 

As of 3 AM JST, our site started experiencing this and another issue simultaneously:

  1. A problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. (Error code 121)
  2. Request was aborted after waiting too long to attempt to service your request.


No idea if this will be helpful for you, but here is some data from the error log (project details scrubbed):
{
  metadata: {
    severity: "ERROR"
    projectId: "145xxxxxxxx"
    zone: "us2"
    labels: {
      appengine.googleapis.com/request_id: "56b1576400ff0b71268d285d950001737e7a617a656e2d7777772d646f7072000135383000010155"
      appengine.googleapis.com/clone_id: "00c61b117c8da0eebaa3e77ecc63530dfe2432c3"
    }
    timestamp: "2016-02-03T01:27:00.749862Z"
    projectNumber: "145xxxxxx"
  }
  protoPayload: {
    appId: "xxx"
    versionId: "580"
    requestId: "56b1576400ff0b71268d285d950001737e7a617a656e2d7777772d646f7072000135383000010155"
    ip: "0.1.0.2"
    startTime: "2016-02-03T01:27:00.749862Z"
    endTime: "2016-02-03T01:27:00.960585Z"
    latency: "0.210723s"
    method: "POST"
    resource: "/task/cache/site"
    httpVersion: "HTTP/1.1"
    status: 500
    userAgent: "AppEngine-Google; (+http://code.google.com/appengine)"
    urlMapEntry: "app.handlers.task.app"
    taskQueueName: "cron-jobs"
    taskName: "02537116329299544261"
    instanceIndex: -1
    instanceId: "00c61b117c8da0eebaa3e77ecc63530dfe2432c3"
    line: [1]
    appEngineRelease: "1.9.32"
  }
  insertId: "2016-02-02|17:27:05.961610-08|10.106.10.10|-217610687"
  httpRequest: {
    status: 500
  }
  operation: {
    id: "56b1576400ff0b71268d285d950001737e7a617a656e2d7777772d646f7072000135383000010155"
  }
}

10:27:00.960  A problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. (Error code 121)


10:26:59.796  500  0 B  432 ms  Safari 8  /xxxx/xxx
126.152.64.216 - - [02/Feb/2016:17:26:59 -0800] "GET /xxx/xxx HTTP/1.1" 500 - - "Mozilla/5.0 (iPhone; CPU iPhone OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12H321 Safari/600.1.4" "www.xxx.net" ms=432 cpu_ms=0 cpm_usd=0 instance=- app_engine_release=1.9.32 trace_id=-
{
  metadata: {
    severity: "ERROR"
    projectId: "145xxxxxxxxxx"
    zone: "us2"
    labels: {
      appengine.googleapis.com/request_id: "56b1576300ff0c2887b873aea80001737e7a617a656e2d7777772d646f7072000135383000010134"
    }
    timestamp: "2016-02-03T01:26:59.796807Z"
    projectNumber: "145xxxxxxxxx"
  }
  protoPayload: {
    appId: "s~xxx"
    versionId: "580"
    requestId: "56b1576300ff0c2887b873aea80001737e7a617a656e2d7777772d646f7072000135383000010134"
    ip: "126.152.64.216"
    startTime: "2016-02-03T01:26:59.796807Z"
    endTime: "2016-02-03T01:27:00.229232Z"
    latency: "0.432425s"
    method: "GET"
    resource: "/xxx/xxx"
    httpVersion: "HTTP/1.1"
    status: 500
    userAgent: "Mozilla/5.0 (iPhone; CPU iPhone OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12H321 Safari/600.1.4"
    urlMapEntry: "app.handlers.page.alias_app"
    instanceIndex: -1
    line: [1]
    appEngineRelease: "1.9.32"
  }
  insertId: "2016-02-02|17:27:04.199804-08|10.106.69.66|-1090119170"
  httpRequest: {
    status: 500
  }
  operation: {
    id: "56b1576300ff0c2887b873aea80001737e7a617a656e2d7777772d646f7072000135383000010134"
  }
}

10:27:00.229  Request was aborted after waiting too long to attempt to service your request.
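For anyone triaging the same symptom, here is a minimal sketch of a heuristic that separates these platform-level aborts (the scheduler gave up before the app ran) from ordinary application 500s. The dict shape below mirrors the scrubbed JSON above; it is illustrative only, not an official client API:

```python
# Heuristic triage of App Engine request-log entries: a platform-level abort
# shows status 500 with no application log output, only the scheduler's
# "Request was aborted after waiting too long ..." message.

ABORT_PREFIX = "Request was aborted after waiting too long"

def is_platform_abort(entry, log_lines):
    """Return True if this looks like the scheduler aborted the request
    before any application code ran."""
    payload = entry.get("protoPayload", {})
    if payload.get("status") != 500:
        return False
    # Any log line that is not the scheduler's abort message means our code ran.
    return all(line.startswith(ABORT_PREFIX) for line in log_lines)

# The aborted request from the log above: a 500 with only the scheduler message.
aborted = {"protoPayload": {"status": 500, "resource": "/xxx/xxx"}}
print(is_platform_abort(
    aborted,
    ["Request was aborted after waiting too long to attempt to service your request."]))  # True

# An ordinary application 500 that emitted its own log line.
app_error = {"protoPayload": {"status": 500, "resource": "/task/cache/site"}}
print(is_platform_abort(app_error, ["Traceback (most recent call last): ..."]))  # False
```

Running this over an exported batch of log entries gives a quick count of how many 500s never reached application code at all.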

Trevor Chinn

Feb 3, 2016, 1:45:37 AM2/3/16
to Google App Engine
Hi Nick,

Just as a follow-up, the errors suddenly stopped at 12 PM JST, so I suppose something was happening on Google's backend that was fixed around noon.

Thanks.

-Trevor

Nick (Cloud Platform Support)

Feb 3, 2016, 6:51:16 PM2/3/16
to Google App Engine, revolu...@gmail.com
Hey Trevor,

Thanks for updating the thread. It's possible that you were affected by a transient issue in production. In the future, keep in mind that the best place to post details of an issue which might be related to the platform rather than your own configuration or code is the Public Issue Tracker. It is a more formal means of tracking issues, and while we monitor both Google Groups and the Public Issue Tracker, we will invariably direct such important information to be posted in a Public Issue Tracker thread.

Regards,

Nick

Trevor Chinn

Feb 3, 2016, 8:59:53 PM2/3/16
to Google App Engine
Hi Nick,

I did have a look at the issue tracker, but I wasn't absolutely certain this was a platform issue, hence the frantic search for recent posts about it. I will post there from now on when an issue seems more likely to be on Google's end than ours. Thanks for your support. We'd love to subscribe to a support package so we could get these things sorted out right away, but the cost:benefit ratio ($150-400+ per month) just doesn't work for a small startup like ours; these kinds of issues only pop up once every 6 months or so. In any case, have a nice day.

Regards,

Trevor

Nick (Cloud Platform Support)

Feb 4, 2016, 12:08:57 PM2/4/16
to Google App Engine
Hey Trevor,

Thanks for explaining your situation. We keep an eye on the level of development activity on the platform, and we're aware that there are more apps with a decent amount of production activity than there are support accounts opened.

While the benefits of a support account include high-level advice and a relative fast track for production issues, we want to make sure that, whatever your ability to invest in support, you and every other user can get high-quality support at least for platform issues, along with a healthy ecosystem to discuss and engage with. This is why we make such an effort to constantly improve our documentation, work with third-party and open-source tools that are current in the industry, and maintain a healthy interest in public forums like this one.

Thanks for your feedback, and I hope you continue to get in touch any time you have feedback or need an issue looked at.

Regards,

Nick