How to track object allocations in node.js?


Haitao Li

Feb 20, 2016, 5:28:14 PM
to nodejs

I have a node.js app with dependencies on 50+ modules. A worker process consumes about 1 GB of memory, as shown in the RES column of "top" output. CPU load is light. Ever since I upgraded node.js from 0.10 to 4.2.3, I've noticed that response times increased quite a bit. I can't entirely blame nodejs, because I also upgraded a few modules along with it. Most requests complete within 100ms, but a fraction of them can take more than 300ms. During the first couple of hours after the server starts, everything looks normal. Then delays start to occur in different places. That makes me believe it's related to memory usage. CPU profiling didn't reveal anything interesting.


Questions: 

1. Can GC cause delays of more than 100ms? 

2. How do I find out which objects are allocated while processing a single request? If there were a way to disable GC, I could take heap snapshots before and after the request and compare them. But if GC kicks in during that window, I lose track of objects that were allocated and then released by GC.

Ben Noordhuis

Feb 21, 2016, 9:25:19 AM
to nod...@googlegroups.com
On Sat, Feb 20, 2016 at 7:37 PM, Haitao Li <lht...@gmail.com> wrote:
> I have a node.js app with dependencies on 50+ modules. A worker process
> consumes about 1G memory, as shown in RES column of "top" command output.
> CPU load is light. Ever since I upgraded node.js from 0.10 to 4.2.3, I
> noticed the response time increased quite a bit. I can't totally blame
> nodejs because I also upgraded a few modules with it. Most requests are
> completed within 100ms, but a fraction of them can take more than 300ms.
> During the first couple of hours after server starts, everything looks
> normal. Then delays start to occur at different places. That makes me
> believe it's related to memory usage. CPU profiling didn't reveal anything
> interesting.
>
>
> Questions:
>
> 1. Can GC cause delays of more than 100ms?

Yes, it can. Run with `--trace_gc` (and perhaps `--trace_gc_verbose`)
to find out if that happens. It prints collection times to stdout.
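For example (the script name app.js here is just a placeholder for your entry point):

```shell
# Print a line for every collection, including pause times.
node --trace_gc app.js

# Add per-space detail for each collection:
node --trace_gc --trace_gc_verbose app.js
```

Look for the full (mark-sweep) collections with pauses in the 100ms+ range.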

> 2. How do I find out what objects are allocated for processing one request?
> If there is a way to disable gc, then I can take snapshots before and after
> the request to compare. But if gc kicks in during that process I lose track
> of what were allocated then subsequently released by gc.

There is no way to completely disable the garbage collector but you
can (indirectly) tweak its frequency through the
`--max_semi_space_size=<mb>` and `--min_semi_space_size=<mb>` flags
(and, to some extent, `--max_old_space_size=<mb>` too.) A larger new
space means fewer minor garbage collection cycles.

If you combine that with `--expose_gc` and call gc() at opportune
times, you have some influence on when the garbage collector runs.
Hope that helps!
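As a rough sketch of that idea (the batching threshold and hook names below are made up; wire it into your own request lifecycle):

```javascript
// Run as: node --expose_gc server.js
// --expose_gc makes a global gc() function available; without the
// flag, global.gc is undefined, so guard before calling it.
function maybeCollect() {
  if (typeof global.gc === 'function') {
    global.gc(); // force a collection at a moment you choose
    return true;
  }
  return false; // flag not passed; let V8 decide on its own
}

// Hypothetical hook: collect after every 100 completed requests,
// i.e. at a quiet point rather than in the middle of a request.
var requestsServed = 0;
function onRequestDone() {
  requestsServed += 1;
  if (requestsServed % 100 === 0) {
    maybeCollect();
  }
}
```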

Haitao Li

Apr 30, 2016, 10:04:46 PM
to nodejs
Hi Ben, thanks for your reply. Somehow I didn't get the notification for it. I've enabled logging and found that it's MarkSweepCompact that's taking long. In every nodejs process, over the past hour, MarkSweepCompact ran almost exactly once per minute, and about 1/3 of those runs took longer than 100ms. Because my project is a service for other internal nodejs apps, a request that takes more than 100ms to finish returns ETIMEDOUT on the client side. I'm willing to sacrifice a bit of average response time to reduce max response time. Can I reduce max_old_space_size so that MarkSweepCompact runs more often but finishes quicker each time?

Thanks

Ben Noordhuis

May 2, 2016, 9:08:28 AM
to nod...@googlegroups.com
On Thu, Apr 28, 2016 at 7:31 PM, Haitao Li <lht...@gmail.com> wrote:
> Hi Ben, thanks for your reply. Somehow I didn't get the notification for it.
> I've enabled logging and found that it's MarkSweepCompact that's taking
> long. In each nodejs process in the past one hour, MarkSweepCompact occurs
> almost exactly once per minute. About 1/3 of them take longer than 100ms.
> Because my project is a service to other internal nodejs apps, a request
> that takes more than 100ms to finish returns ETIMEDOUT on client side. I'm
> willing to sacrifice average response time a bit to reduce max response
> time. Can I reduce max_old_space_size to make MarkSweepCompact run more
> often but quicker each time?

That can certainly help, but it depends on your application's
allocation patterns.

GC times are proportional to the number of live objects (dead objects
aren't traversed) so reducing the size of the old space will only help
if it means that the set of live objects shrinks as well. It won't do
much if your application is of the kind that creates a lot of
long-lived objects at start-up but everything else is short-lived.
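One way to check which case you're in is to sample the heap with process.memoryUsage() (a standard node API) and compare the readings against your --trace_gc output: if heapUsed stays high right after a MarkSweepCompact, the live set really is large and shrinking the old space won't shorten the pauses. The interval and log format below are just illustrative:

```javascript
// Sample current memory figures (all values in bytes).
function sampleHeap() {
  var m = process.memoryUsage();
  return { rss: m.rss, heapTotal: m.heapTotal, heapUsed: m.heapUsed };
}

// Log once a minute. unref() keeps this timer from holding the
// process open on its own.
setInterval(function () {
  var s = sampleHeap();
  console.log('heapUsed=' + (s.heapUsed / 1048576).toFixed(1) + 'MB ' +
              'heapTotal=' + (s.heapTotal / 1048576).toFixed(1) + 'MB ' +
              'rss=' + (s.rss / 1048576).toFixed(1) + 'MB');
}, 60000).unref();
```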