High 104 error rates for default module and versions requests


Ice13ill

Aug 31, 2016, 5:21:06 AM
to Google App Engine
Hello, 
I'm experiencing a lot of issues with incoming requests on the default version of my app. 
The issues appeared around 9:30 AM EEST, and most of them had code 123 (deadline issues). Now the requests fail with code 104. I'm not sure what percentage of requests fail, but it appears to be a significant number.
I'm using the Java SDK for development.

Please advise!

Ice13ill

Aug 31, 2016, 5:32:34 AM
to Google App Engine
I should mention that I did not make any changes to the settings of my instances, nor any deployment. 
Everything is configured the same as yesterday and the days before. 



The error message (shown as a Warning in the logs) is: 
A problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may be throwing exceptions during the initialization of your application. (Error code 104)
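The "exceptions during the initialization" case mentioned in that message can be reproduced in plain Java. Below is a minimal sketch (not App Engine-specific; all class and method names are hypothetical) of how a failing static initializer aborts class loading, which on App Engine would kill the serving process on the first request that touches the class:

```java
// Minimal sketch: a static initializer that throws produces an
// ExceptionInInitializerError the first time the class is used.
// On App Engine, an uncaught error like this during startup kills
// the serving process, which the runtime then reports as error 104.
public class InitFailureDemo {
    static class BadInit {
        // Class initialization runs loadConfig(), which throws.
        static final String CONFIG = loadConfig();

        static String loadConfig() {
            // Simulates e.g. a config file missing in production.
            throw new IllegalStateException("config not found");
        }
    }

    public static void main(String[] args) {
        try {
            System.out.println(BadInit.CONFIG);
        } catch (ExceptionInInitializerError e) {
            // Catching and logging makes the root cause visible in the
            // logs instead of showing up only as a bare process exit.
            System.out.println("init failed: " + e.getCause().getMessage());
        }
    }
}
```

Wrapping risky startup work in a try/catch like this (or deferring it out of static initializers) makes the underlying exception show up in the request logs rather than as an opaque 104.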

The issue is somewhat similar to one posted here a few months ago:

Ice13ill

Sep 6, 2016, 5:03:49 AM
to Google App Engine
Hello, it seems the issue has reappeared in the last few days, but today it is more aggressive: the application is experiencing a high 104 error rate with the same message:
A problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may be throwing exceptions during the initialization of your application. (Error code 104)

I have a paid App Engine application and I need to know whether I can be assisted by a Google App Engine team member, or whether I should contact a specific support channel.
Please advise!


On Wednesday, August 31, 2016 at 12:21:06 PM UTC+3, Ice13ill wrote:

Mauricio Lumbreras

Sep 6, 2016, 2:59:47 PM
to Google App Engine
Hello,
Are you experiencing this in a specific time frame, or is it scattered throughout the whole day?
We had some issues with version 1.9.44 and were downgraded to 1.9.43.
We have been facing problems since Aug 31st.
Regards,
Mauricio

Nick (Cloud Platform Support)

Sep 8, 2016, 7:36:04 PM
to Google App Engine
Hey Folks,

Have you taken a look at the documentation related to Deadline Errors to determine what might be happening in your apps? Do your request response times generally hover around the 60 second limit, or is it conceivable that they could inflate above that limit with a degradation of any service they rely on synchronously?
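One way to stay under that 60-second limit is the kind of self-imposed time budget a handler can check itself. This is a framework-agnostic sketch (the class and its budget value are illustrative, not an App Engine API):

```java
// Sketch of a self-imposed request deadline: track elapsed time against a
// budget set safely below the 60-second request limit, and stop work
// before the hard deadline kills the request.
public class DeadlineBudget {
    private final long startNanos = System.nanoTime();
    private final long budgetMillis;

    public DeadlineBudget(long budgetMillis) {
        this.budgetMillis = budgetMillis;
    }

    public long elapsedMillis() {
        return (System.nanoTime() - startNanos) / 1_000_000;
    }

    // True while there is still time left in the budget.
    public boolean hasTimeLeft() {
        return elapsedMillis() < budgetMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        // In production the budget might be ~50 s; 100 ms here just
        // demonstrates the mechanism quickly.
        DeadlineBudget budget = new DeadlineBudget(100);
        while (budget.hasTimeLeft()) {
            Thread.sleep(10); // stand-in for one unit of real work
        }
        System.out.println("stopped before the hard deadline");
    }
}
```

Checking the budget between units of work lets the handler return a partial result or an error response instead of being cut off at the hard deadline.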

Cheers,

Nick
Cloud Platform Community Support

Ice13ill

Sep 12, 2016, 5:58:07 AM
to Google App Engine
Hi Nick, thank you for your response. 
It seems that, over time, many requests have begun to need more and more memory, and some requests need more processing time (they take longer and require new instances). I believe there are some memory problems, but I'm not sure, because many requests simply fail with code 104 and I see no out-of-memory exceptions.
Any advice regarding memory profiling for App Engine / Jetty web apps (even if only available locally) would be much appreciated! I also posted on Stack Overflow here: http://stackoverflow.com/questions/39447618/google-app-engine-java-memory-profiling-on-local-dev-server

Thank you!

Nick (Cloud Platform Support)

Sep 12, 2016, 1:41:09 PM
to Google App Engine
Here are some potential routes for debugging / profiling in general:
  • Make use of the Cloud Debugger to inspect variables

  • Make use of AppStats to profile RPC call timings

  • Make use of custom timing code before and after relevant code blocks/lines as an alternative to AppStats

  • Make use of Java memory use inspection calls before and after relevant code blocks/lines to track memory consumption

  • Deploy the app to a Flexible Environment Custom Runtime using a -compat runtime for Java (so that your existing GAE code shouldn't need to change), and use the Dockerfile to run commands during container building that install and configure a more comprehensive Java memory profiler of your choice. You could then swap out the default serving version for this version long enough to observe the memory use on requests, and its causes.
I hope these methods are useful. The last two are most relevant to out-of-memory errors. Let me know if you have any more questions about implementing any of them - I'll be happy to help!
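The "memory use inspection calls" suggestion above can be sketched with the standard `Runtime` API; the surrounding class and the allocation it measures are illustrative only:

```java
// Sketch: sample the JVM's used heap before and after a suspect code block
// to get a rough per-request memory delta. The numbers are approximate
// (GC may run in between), but trends across requests are visible.
public class MemorySnapshot {
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();

        // Stand-in for the request-handling code under suspicion:
        byte[] buffer = new byte[4 * 1024 * 1024]; // ~4 MB allocation

        long after = usedHeapBytes();
        System.out.printf("approx. heap delta: %d bytes%n", after - before);

        // Keep a reference so the allocation stays live until sampled.
        if (buffer.length > 0) {
            System.out.println("sampled ok");
        }
    }
}
```

Logging such deltas around the handlers you suspect would show which requests grow the heap, even when no OutOfMemoryError is ever thrown before the process exits.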


Cheers,

Nick
Cloud Platform Community Support

