Exceeded soft private memory limit of 128 MB with 128 MB after servicing 238 requests total


Richard Cheesmar

Jul 30, 2017, 12:03:22 PM
to Google App Engine
I am seeing more of these, these days.

I just got the one above from a simple signup url that does nothing but render a simple template with a form.

How does that end up with 128MB and 238 requests in total? And, as described in the error, how does 128MB exceed 128MB?

It seems inconsistent, to say the least. I don't normally get this from signup URLs, but I have found I'm getting quite a few on other URLs; as I said, it is not consistent.

Is there something else I should be looking at?




timh

Jul 31, 2017, 7:17:09 AM
to Google App Engine
This is generally indicative of a memory leak. Are you sure something isn't hitting that URL (for instance, a series of redirects)?

Richard Cheesmar

Jul 31, 2017, 8:47:10 AM
to Google App Engine
timh,

There are no redirects on the normal signup URL; it simply renders a template and returns.

Yannick (Cloud Platform Support)

Jul 31, 2017, 4:24:57 PM
to Google App Engine
Hello Richard,

As timh indicated, this is most often the sign of a memory leak of some kind in your code. If you were performing operations that are more memory-heavy, I'd advise picking an instance type with more memory, but this doesn't sound like it's the case.

About the numbers in the error message: I expect they are rounded-down values, so 128.1MB would be over 128MB.

Richard Cheesmar

Aug 1, 2017, 2:41:14 AM
to Google App Engine
Hi, Yannick. As I said, all the signup 'GET' request does is return a very simple rendered form. It does not access ndb or other storage; it just renders a template and returns it. I don't think it is a memory leak on the part of that particular piece of code.

I'll monitor the situation, see if there are many repeats, and then explore the issue further.

Attila-Mihaly Balazs

Aug 3, 2017, 2:07:49 AM
to Google App Engine
Hello Richard,

What programming language/framework are you using? Please note that even though you might not "be doing anything" explicitly, the framework might have some caching logic in the background (for example, the Jinja2 template cache) that can lead to memory leaks. As they say: "an unbounded cache is just another name for a memory leak" :-).

Attila

Richard Cheesmar

Aug 5, 2017, 2:58:49 AM
to Google App Engine
Hi, Attila,

I'm using Python and I do use Jinja2 templates. The inconsistency is what baffles me. It seems to be random: no particular request suffers more than another. I get this on requests from new users and on requests from old users who may have a cache.

It's hard to pin this down to anything in particular. I think boosting the instance class would solve it, but I don't need that most of the time; my traffic isn't that heavy and the payloads between server and client are fairly lightweight at the moment.

Richard

Alex Martelli

Aug 5, 2017, 11:46:39 AM
to google-a...@googlegroups.com
To check if the memory consumption is due to jinja2 caching, create the jinja2 Environment object that you will later use for rendering with cache_size=0 -- this essentially disables jinja2's caching. If that removes the issue, you've found your root cause. (If not, you need to keep digging.) See http://jinja.pocoo.org/docs/2.9/api/#jinja2.Environment for more details.
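
For instance, a minimal sketch (the loader path and template name here are just examples, not from your app):

import jinja2

# cache_size=0 turns off jinja2's compiled-template cache entirely; every
# render recompiles the template. Slower, but it rules the cache out as
# the source of memory growth.
jinja_env = jinja2.Environment(
    loader=jinja2.FileSystemLoader('templates'),
    cache_size=0,
)

def render_signup_form():
    return jinja_env.get_template('signup.html').render()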

Alex


Richard Cheesmar

Aug 7, 2017, 2:01:09 PM
to Google App Engine
I'm testing that now, Alex. Cheers.



PK

Aug 7, 2017, 3:13:06 PM
to Google App Engine
Is there any general tool successfully used to profile how much memory a request is using and whether it is leaking any memory? I still do not have a good general solution when I hit this problem.

Thanks,
PK

Alex Martelli

Aug 7, 2017, 3:36:34 PM
to google-a...@googlegroups.com

The module I used to use for the purpose -- https://cloud.google.com/appengine/docs/standard/python/refdocs/google.appengine.api.runtime.runtime -- is, alas, deprecated, and I don't know of a successor. Perhaps that's worth a feature request, since it seems normal and healthy for an app's instance to want to keep track of its resource consumption (e.g., to help pick the best instance class to use).
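
For what it's worth, this is roughly how I would call it while it still works (a sketch; the helper name and log format are mine, not part of the API):

import logging

from google.appengine.api.runtime import runtime

def log_instance_memory(tag):
    # Logs the instance's current memory footprint (in MB) so readings
    # taken before and after a suspect request can be compared.
    logging.info('%s: instance memory = %s MB', tag, runtime.memory_usage().current())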

Alex
 


Attila-Mihaly Balazs

Aug 7, 2017, 4:06:46 PM
to Google App Engine
Yes, memory profiling support is quite poor for Python. I used some code like the following in the past to track down a nasty memory leak in the ndb module (https://issuetracker.google.com/u/1/issues/35901184):

import contextlib
import gc
import logging

import objgraph
from google.appengine.api.runtime import runtime

import constants  # this app's own settings module (UNITTEST / LOCAL flags)


@contextlib.contextmanager
def memoryTracker():
    # No-op when running unit tests.
    if constants.UNITTEST:
        yield
        return

    # runtime.memory_usage() is only available on real instances, not locally.
    if not constants.LOCAL:
        memory_usage_before = runtime.memory_usage().current()
    common_types_before = dict(objgraph.most_common_types(shortnames=False))

    yield

    gc.collect()
    if gc.garbage:
        logging.warning(
            'Leaking objects with __del__ methods? (len %d, types %s...)'
            % (len(gc.garbage), [g.__class__ for g in gc.garbage[:10]])
        )

    if not constants.LOCAL:
        memory_usage_after = runtime.memory_usage().current()
        logging.info(
            'Memory usage after: %d (diff: %d)'
            % (memory_usage_after, memory_usage_after - memory_usage_before)
        )

    # Compare per-type object counts from before and after the wrapped block.
    common_types_after = dict(objgraph.most_common_types(shortnames=False))
    common_types_diff = {
        name: common_types_after.get(name, 0) - common_types_before.get(name, 0)
        for name in frozenset(common_types_before.keys() + common_types_after.keys())
    }
    logging.info(
        'Most common types:\n'
        + '\n'.join(
            '%s\t%d (diff: %+d)' % (name, common_types_after.get(name, 0), diff)
            for name, diff in sorted(common_types_diff.items(), key=lambda e: e[1], reverse=True)
        )
    )

I hereby place the code in the public domain. Feel free to use it in any way you like.

Attila-Mihaly Balazs

Aug 7, 2017, 4:08:27 PM
to Google App Engine
I use it something like:

with memoryTracker():
    pass  # replace 'pass' with the code suspected of leaking

You can also play around with it on the local development server (hopefully the leak reproduces there). And you could try using other methods from objgraph to understand the source of the leak (http://mg.pov.lt/objgraph/), though it probably won't be easy :(
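
As a starting point, something along these lines (a sketch using objgraph's documented helpers; the 'Account' type name is only an example):

import logging
import objgraph

def find_leak_suspects():
    # Print which object types grew since the previous call; run this at
    # the end of a few consecutive requests and compare the output.
    objgraph.show_growth(limit=10)

    # Pick one object of a suspicious type and find a reference chain
    # that keeps it alive.
    suspects = objgraph.by_type('Account')
    if suspects:
        chain = objgraph.find_backref_chain(suspects[-1], objgraph.is_proper_module)
        logging.info('Chain keeping the object alive: %r', [type(o) for o in chain])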

Attila