Google App Engine: starting last week, it keeps responding with 503 errors.


Y H

Jul 6, 2018, 12:31:50 PM
to Google App Engine
Hello,
I ran into an issue.
Everything worked fine at the beginning of this week. In the middle of the week, users reported that they could not access the website, and I found the server responding with 503 errors.
Do you have any guidance on how to locate the root cause?
I tried redeploying the app. It worked last night but broke again this morning.

Thanks.

Giuliano Ribeiro

Jul 6, 2018, 12:34:34 PM
to Google App Engine
Hi Yi, 

Have you taken a look at Stackdriver Logging? Any issues during instance startup?
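If you haven't already, you can also pull the logs from the command line. A quick sketch (the service name "default" and the limit are assumptions, adjust them to your app):

gcloud app logs tail -s default
gcloud app logs read --service=default --limit=200

The read command is handy for spotting repeated instance-startup or health-check failures around the times the 503s show up.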

Y H

Jul 6, 2018, 1:30:00 PM
to Google App Engine
Thanks Giuliano,

I saw some failures.
INFO: End of request or previous flush has not yet completed, blocking.

Jul 06, 2018 5:14:44 PM com.google.apphosting.vmruntime.VmApiProxyDelegate runSyncCall

INFO: Error body: RPC Error: /StubbyService.Send to (unknown) : APP_ERROR(2)

2018-07-06 12:14:44.000 CDT
INFO: Error body: RPC Error: /StubbyService.Send to (unknown) : APP_ERROR(2)
{
insertId: "1l61xvbf9j89do" 
labels: {…} 
logName: "projects/triage/logs/appengine.googleapis.com%2Fstderr" 
receiveTimestamp: "2018-07-06T17:14:44.240478404Z" 
resource: {…} 
textPayload: "INFO: Error body: RPC Error: /StubbyService.Send to (unknown) : APP_ERROR(2) " 
timestamp: "2018-07-06T17:14:44Z" 
}



2018-07-06 12:14:44.000 CDT
java.util.concurrent.ExecutionException: com.google.apphosting.api.ApiProxy$RPCFailedException: The remote RPC to the application server failed for the call logservice.Flush().
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:206)
    at com.google.apphosting.vmruntime.VmAppLogsWriter.waitForCurrentFlush(VmAppLogsWriter.java:226)
    at com.google.apphosting.vmruntime.VmAppLogsWriter.flushAndWait(VmAppLogsWriter.java:211)
    at com.google.apphosting.vmruntime.VmApiProxyEnvironment.flushLogs(VmApiProxyEnvironment.java:508)
    at com.google.apphosting.vmruntime.VmRuntimeUtils.flushLogsAndAddHeader(VmRuntimeUtils.java:109)
    at com.google.apphosting.vmruntime.jetty9.VmRuntimeWebAppContext.doScope(VmRuntimeWebAppContext.java:323)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119)
    at org.eclipse.jetty.server.Server.handle(Server.java:517)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:306)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:261)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:75)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.apphosting.api.ApiProxy$RPCFailedException: The remote RPC to the application server failed for the call logservice.Flush().
    at com.google.apphosting.vmruntime.VmApiProxyDelegate.runSyncCall(VmApiProxyDelegate.java:175)
    at com.google.apphosting.vmruntime.VmApiProxyDelegate.makeApiCall(VmApiProxyDelegate.java:155)
    at com.google.apphosting.vmruntime.VmApiProxyDelegate.access$000(VmApiProxyDelegate.java:75)
    at com.google.apphosting.vmruntime.VmApiProxyDelegate$MakeSyncCall.call(VmApiProxyDelegate.java:434)
    at com.google.apphosting.vmruntime.VmApiProxyDelegate$MakeSyncCall.call(VmApiProxyDelegate.java:410)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
2018-07-06 12:14:44.000 CDT
2018-07-06 17:14:44.933:INFO:oejs.ServerConnector:Thread-1: Stopped ServerConnector@43184193{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
{
insertId: "1r9ki5zf9mcqwx" 
labels: {…} 
logName: "projects/triage/logs/appengine.googleapis.com%2Fstderr" 
receiveTimestamp: "2018-07-06T17:14:45.895354407Z" 
resource: {…} 
textPayload: "2018-07-06 17:14:44.933:INFO:oejs.ServerConnector:Thread-1: Stopped ServerConnector@43184193{HTTP/1.1,[http/1.1]}{0.0.0.0:8080} " 
timestamp: "2018-07-06T17:14:44Z" 
}


I think the exception and errors above may be related, but I have no idea how to get rid of them.

Thanks

Giuliano Ribeiro

Jul 6, 2018, 1:32:53 PM
to Google App Engine
It's on Flex, right?

Since Flex requires the app to answer on the default port, are you sure your app is running, at least in the local development environment?
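For a quick sanity check, here is a minimal, self-contained sketch of what Flex expects from the container: something answering on port 8080, or whatever the PORT environment variable says. The class name is made up, and the /_ah/health handler only matters if you are still on legacy health checks:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal stand-in server: answers on the port Flex routes traffic to.
public class PortCheck {
  public static void main(String[] args) throws Exception {
    int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
    HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
    server.createContext("/", exchange -> {
      byte[] body = "ok\n".getBytes(StandardCharsets.UTF_8);
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream os = exchange.getResponseBody()) {
        os.write(body);
      }
    });
    // Legacy health checks poll /_ah/health; split health checks use /liveness_check and /readiness_check instead.
    server.createContext("/_ah/health", exchange -> {
      exchange.sendResponseHeaders(200, -1);
      exchange.close();
    });
    server.start();
    System.out.println("Listening on " + port);
  }
}

If something like that answers locally (curl http://localhost:8080/), the problem is more likely in the deployed configuration or in what the new code does at startup.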

Y H

Jul 6, 2018, 1:41:41 PM
to Google App Engine
Yes, it is the flexible environment. In fact, I deployed again last night and it worked, but I found it broken again this morning.
The same code worked very well last week. I didn't change any config files; I only changed business-logic code.

Y H

Jul 6, 2018, 1:55:18 PM
to Google App Engine
Also, how do I debug this failure? Is this error related?
 INFO: Error body: RPC Error: /StubbyService.Send to (unknown) : APP_ERROR(2)


eps...@gmail.com

Jul 31, 2018, 4:52:03 PM
to Google App Engine
Did this issue ever get resolved? I have been seeing a lot of 502 Bad Gateway errors since my last components upgrade about two weeks ago.

George (Cloud Platform Support)

Aug 1, 2018, 3:25:57 PM
to Google App Engine
Hello Yi, 

A 503 is a back-end error, and a little too general to be useful on its own. You mention that you recently changed your app's business logic. This might have impacted your back-end's behavior and thus resulted in these 503 errors. The recommended practice in such cases is to implement truncated exponential back-off.
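For reference, a minimal sketch of truncated exponential back-off with jitter that a client could wrap around its calls. The class name, the retry limits, and the isRetryable placeholder are assumptions, not anything specific to your app:

import java.util.Random;
import java.util.concurrent.Callable;

// Minimal truncated exponential back-off with jitter (sketch).
public final class Backoff {
  private static final Random RANDOM = new Random();

  public static <T> T callWithBackoff(Callable<T> call) throws Exception {
    long delayMs = 100;             // initial delay
    final long maxDelayMs = 32_000; // truncation cap
    final int maxAttempts = 6;
    for (int attempt = 1; ; attempt++) {
      try {
        return call.call();
      } catch (Exception e) {
        if (attempt >= maxAttempts || !isRetryable(e)) {
          throw e; // out of attempts, or not a transient failure
        }
        long jitter = RANDOM.nextInt(100); // spread out synchronized retries
        Thread.sleep(Math.min(delayMs, maxDelayMs) + jitter);
        delayMs *= 2; // exponential growth, truncated by maxDelayMs
      }
    }
  }

  // Placeholder: decide which failures (for example, an HTTP 503 from the backend) are transient.
  private static boolean isRetryable(Exception e) {
    return true;
  }
}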

This forum is meant for general discussion of the platform and its services. If you would like this issue to get proper attention, and eventually be solved by Engineering, you are encouraged to open a case in the Public Issue Tracker.