runtime: python37
env: standard
instance_class: F2
entrypoint: gunicorn main:app --workers=1 --bind :$PORT
#entrypoint: uwsgi --module hubiter_backend.wsgi.production --workers=1 --http-socket :$PORT --enable-threads
default_expiration: "1d"
automatic_scaling:
  max_instances: 1
  min_instances: 1
env_variables:
  PYTHONUNBUFFERED: 1
  PID USER     PRI NI  VIRT   RES   SHR S CPU% MEM%   TIME+ Command
16345 esistgut  20  0  220M 94576 16192 S  0.0  0.6 0:01.03 gunicorn --workers 1 --bind :8000 main:app
16342 esistgut  20  0 49804 22484  8412 S  0.0  0.1 0:00.23 gunicorn --workers 1 --bind :8000 main:app
2018-08-02 12:50:58.762 CEST Exceeded soft private memory limit of 256 MB with 256 MB after servicing 2 requests total. Consider setting a larger instance class in app.yaml.
  PID USER     PRI NI  VIRT   RES   SHR S CPU% MEM%   TIME+ Command
18743 esistgut  20  0  217M 85472 18316 S  0.0  0.5 0:01.04 uwsgi --module hubiter_backend.wsgi.development --workers=1 --http-socket :8000 --enable-threads
You can also experiment on the local development server (hopefully the leak reproduces there), and try other tools to pinpoint the source of the leak. In my experience this message usually means that your instances use more memory than your instance class supports. If you start seeing it, upgrade to the next instance class and check whether it goes away. This is a change you make in your module configuration file (it used to be in the management console under the Application Settings section).
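For example, the upgrade would be a one-line change to the app.yaml shown above (a sketch only; check the App Engine instance class documentation for the exact memory limit of each class before choosing one):

```yaml
runtime: python37
env: standard
instance_class: F4  # next class up from F2; each class has a documented RAM limit
```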
Unlike other resources, which scale automatically up to your budget limits, RAM does not: if a request causes an instance to exceed the RAM limit of its instance class, the instance is terminated at the end of that request and this message is logged.
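To locate which allocations keep accumulating across requests on the local development server, Python's standard-library tracemalloc module can compare heap snapshots. A minimal sketch, where leaky_cache and handle_request are hypothetical stand-ins for the real app's leaking code:

```python
import tracemalloc

# Hypothetical leaky handler: a module-level cache that grows on every
# request and is never cleared -- a common cause of this kind of leak.
leaky_cache = []

def handle_request():
    leaky_cache.append(bytearray(1024 * 1024))  # retains ~1 MiB per request

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

for _ in range(5):
    handle_request()

snapshot = tracemalloc.take_snapshot()
top = snapshot.compare_to(baseline, "lineno")
for stat in top[:3]:
    print(stat)  # file:line entries with the largest memory growth
```

The top entry points at the file and line responsible for the growth, which is usually enough to find the retained reference.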
For any other issues, please open a separate thread, as this one addresses the memory leak you mentioned. Thanks.