A serious problem was encountered with the process that handled this request, ... you should contact the App Engine team. (Error code 203)


Eric Ka Ka Ng

Feb 10, 2011, 11:10:32 AM
to Google App Engine
I have two tasks stuck in the task queue; they keep producing errors and never complete successfully. Here are the error dumps:


1st error

  1. "AppEngine-Google; (+http://code.google.com/appengine)" "gaewsdev.appspot.com" ms=204109 cpu_ms=473266 api_cpu_ms=473266 cpm_usd=13.146328 queue_name=default task_name=12116044570446551933 exit_code=203
  2. W02-10 07:15AM 30.026
    A serious problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you should contact the App Engine team. (Error code 203)




2nd error

  1. 02-10 08:02AM 01.014
    Exceeded soft memory limit with 299.996 MB after servicing 14 requests total
  2. W02-10 08:02AM 01.051
    After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.


Could anyone, or the App Engine team, advise how I could solve these problems?

My app id is gaewsdev.


- eric

Tim Hoffman

Feb 10, 2011, 11:15:22 AM
to google-a...@googlegroups.com
Hi

This is a problem in your application code.  As the error says, you are using too much memory and the instance is being killed.

You haven't said whether you use Python or Java. If you are using Python, have a look at apptrace (the apptrace package provides WSGI middleware for tracking memory usage in Google App Engine Python applications).
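From memory, apptrace is wired in through the same appengine_config.py hook used for any WSGI middleware on App Engine; the sketch below assumes the middleware is exposed as apptrace.middleware.apptrace_middleware, so double-check the exact import against the apptrace docs:

    # appengine_config.py -- rough sketch only; the apptrace import and
    # function name here are an assumption, verify against the apptrace docs
    def webapp_add_wsgi_middleware(app):
        from apptrace.middleware import apptrace_middleware
        return apptrace_middleware(app)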


Bottom line: you will need to do some profiling and debugging to work out where you are either consuming too much memory in a single request or leaking it (for instance, keeping around globals that grow).
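A toy illustration of the kind of leak I mean (the names are made up, not from your app): module-level state survives between requests served by the same instance, so an unbounded cache keeps pushing the instance towards the soft memory limit.

    # Hypothetical example of a leaking global on App Engine Python.
    _cache = {}   # module-level, so it persists across requests on this instance

    def expensive_lookup(key):
        # stand-in for a datastore / urlfetch result
        return "x" * (1024 * 1024)

    def handle_request(key):
        if key not in _cache:
            _cache[key] = expensive_lookup(key)   # never evicted, memory only grows
        return _cache[key]

Either bound that cache, or push the data out to memcache instead of keeping it in instance memory.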

Rgds

T

Eric Ka Ka Ng

Feb 10, 2011, 9:36:51 PM
to google-a...@googlegroups.com
Hi Tim,

Thanks for your reply. I forgot to mention that I'm using Python. I haven't used apptrace, but I did do profiling and optimization using appstats.
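(For reference, appstats is hooked in via the usual appengine_config.py middleware hook, roughly the snippet below, per the appstats docs:)

    # appengine_config.py
    def webapp_add_wsgi_middleware(app):
        from google.appengine.ext.appstats import recording
        return recording.appstats_wsgi_middleware(app)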

But one of the memory errors seems to be thrown from the WSGI middleware (appstats?) while it's doing its recording. Correct me if I'm wrong; here is the complete traceback:


  1. E2011-02-10 18:30:12.361
    <type 'exceptions.MemoryError'>: 
    Traceback (most recent call last):
      File "/base/data/home/apps/gaewsdev/6.348249895332861938/admin/removeUser.py", line 219, in <module>
        main()
      File "/base/data/home/apps/gaewsdev/6.348249895332861938/admin/removeUser.py", line 216, in main
        run_wsgi_app(application)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/util.py", line 97, in run_wsgi_app
        run_bare_wsgi_app(add_wsgi_middleware(application))
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/util.py", line 117, in run_bare_wsgi_app
        for data in result:
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/appstats/recording.py", line 859, in appstats_wsgi_wrapper
        end_recording(500, firepython_set_extension_data)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/appstats/recording.py", line 933, in end_recording
        memcache.delete(lock_key(), namespace=config.KEY_NAMESPACE)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/api/memcache/__init__.py", line 513, in delete
        self._make_sync_call('memcache', 'Delete', request, response)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 86, in MakeSyncCall
        return stubmap.MakeSyncCall(service, call, request, response)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 281, in MakeSyncCall
        rpc = stub.CreateRPC()
      File "/base/python_runtime/python_lib/versions/1/google/appengine/runtime/apiproxy.py", line 193, in CreateRPC
        return RPC()
      File "/base/python_runtime/python_lib/versions/1/google/appengine/runtime/apiproxy.py", line 103, in __init__
        super(RPC, self).__init__(*args, **kargs)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 62, in __init__
        self.request = request
  2. C2011-02-10 18:30:12.376
    Exceeded soft process size limit with 299.793 MB after servicing 1 requests total
  3. I2011-02-10 18:30:12.411
    This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application.
  4. W2011-02-10 18:30:12.411
    After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.

Tim Hoffman

Feb 11, 2011, 4:53:04 AM
to google-a...@googlegroups.com
Hi

Just because the error is thrown inside appstats doesn't mean appstats is leaking. You may have a leak in your own code, or a single request may simply be using too much memory once appstats is recording alongside it.

I would try running without appstats. I would also check whether the memory exhaustion always happens on the same request.

Another thing that can contribute to high memory use is string manipulation. If you do a lot of string concatenation in a loop using "+", you can blow out memory; see the sketch below.
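A toy illustration (the data here is invented, just to show the pattern): repeated += keeps building ever-larger intermediate strings, whereas collecting the pieces and joining once avoids that.

    # Building a large response body piece by piece (Python 2.x, as on GAE)
    pieces = ["<tr><td>%d</td></tr>" % i for i in xrange(100000)]

    body = ""
    for p in pieces:
        body += p            # each pass can allocate a new, ever-larger string

    # cheaper on memory and CPU: join once
    body = "".join(pieces)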


T