Depending on the nature of the data, I would first try to pack it into Buffer objects, with a free list like the http module uses for its parsers. If you are concatenating lots of strings, also look at using array.join() to limit allocations.
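A minimal sketch of what I mean (the names pool/alloc/release are made up for illustration, not an existing node API):

var Buffer = require('buffer').Buffer;

// Tiny free list of fixed-size Buffers: reuse them instead of allocating
// a fresh one for every chunk of data.
var pool = [];
var CHUNK_SIZE = 64 * 1024;

function alloc() {
  return pool.length ? pool.pop() : new Buffer(CHUNK_SIZE);
}

function release(buf) {
  if (pool.length < 16) pool.push(buf); // cap the pool so it cannot grow forever
}

// For string building, collect the pieces and join once at the end,
// instead of repeated += which creates an intermediate string each time.
var pieces = [];
for (var i = 0; i < 1000; i++) pieces.push('chunk ' + i);
var result = pieces.join('');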
sent from my phone
Pawal,
I wonder if you've seen a difference with the master branch rather than
the v0.2 branch?
fs.stat('/tmp/world', function (err, stats) {
  if (err) throw err;
  console.log('stats: ' + JSON.stringify(stats));
  console.log(x);
});

fs.stat('/tmp/world', function (x, undefined) {
  return function (err, stats) {
    if (err) throw err;
    console.log('stats: ' + JSON.stringify(stats));
    console.log(x);
  };
}(x));

I think these GCs are triggered by V8::IdleNotification, which is
invoked by node.js itself. The whole idea behind IdleNotification is
that it should be called when a pause does not actually matter [i.e. it
would not be noticed by the user].
But I also see a low hanging fruit here: V8 could at least "cancel"
compaction when it saw that almost all objects in heap are alive.
--
Vyacheslav Egorov
Yes, we can probably be smarter about when we call it. The current logic
(sketched below) is:
- every 5 seconds a timer runs which decides:
- if the last 5 iterations happened in less than 0.7 seconds,
- call V8::IdleNotification over and over until it returns true or
  something happens on the event loop.
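Purely as an illustration, a JavaScript-flavoured sketch of that heuristic (the real logic lives in node's C++ layer; recordTick() and idleNotification() below are stand-ins, not node APIs):

// Illustrative only, not node's actual implementation.
var tickTimes = [];

function recordTick() {             // pretend each call is one loop iteration
  tickTimes.push(Date.now());
  if (tickTimes.length > 5) tickTimes.shift();
}
setInterval(recordTick, 100);

var calls = 0;
function idleNotification() {       // stand-in for V8::IdleNotification
  return ++calls % 3 === 0;         // pretend the GC is done every third call
}

setInterval(function () {
  // "Idle" if the last 5 recorded iterations happened within 0.7 seconds.
  if (tickTimes.length === 5 &&
      tickTimes[tickTimes.length - 1] - tickTimes[0] < 700) {
    while (!idleNotification()) {
      // keep nudging the collector until it reports there is nothing left
    }
  }
}, 5000);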
> But I also see a low hanging fruit here: V8 could at least "cancel"
> compaction when it saw that almost all objects in heap are alive.
Sounds good!
This doesn't sound like an issue with V8. If the V8/node process has
been killed then the memory should be gone.
I'm not quite sure what a kernel process is. Normally on Unix a
process can be in kernel mode or user mode, but it's the same process.
"Kernel process" sounds like something the kernel would use to
implement asynchronous IO. So it sounds a bit like a bug in node's
AIO libraries or in MacOSX's implementation of AIO.
I processed some gigabytes and did not have this problem.
RSS is probably not a good indicator for this; the OS won't shrink it
(swap, reclaim) if there is no memory pressure.
What OS are you using?
On Mac OS X I see the following:
whitestar:node mraleph$ ./node --expose-gc --trace-gc ./test.js
Scavenge 1.0 -> 1.0 MB, 0 ms.
Mark-sweep 1.4 -> 1.3 MB, 5 / 7 ms.
whitestar:node mraleph$
Notice that 5 ms were spent inside the external scope --- most probably
that was the deallocation of the buffer's underlying storage.
Another point you should take into account: node uses the new/delete
operators to manage a buffer's underlying storage. This creates an
additional layer of abstraction between the OS and node. The C++ runtime
can manage its memory in different ways: e.g. it can decide not to return
the memory region occupied by a deleted object, with the intent to reuse
it for later allocations.
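One way to see this from the JavaScript side is to watch process.memoryUsage() while allocating and dropping Buffers. A rough sketch (run with --expose-gc; the exact fields reported vary between node versions):

var Buffer = require('buffer').Buffer;

function report(label) {
  var m = process.memoryUsage();
  console.log(label + ': rss=' + m.rss + ' heapUsed=' + m.heapUsed);
}

report('before');

var bufs = [];
for (var i = 0; i < 1000; i++) bufs.push(new Buffer(64 * 1024));
report('allocated');

bufs = null;   // drop all references to the buffers
gc();          // only available with --expose-gc
report('after gc');

// heapUsed should drop back down; rss may well stay high, because the
// C++ runtime / OS does not necessarily hand the pages back.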
--
Vyacheslav Egorov
Interesting. Try applying the following patch and post the output (it also
fixes a small glitch in the node sources: delete[] should be used to free
something allocated with new[]).
> Do you have a better idea to trace down the cause of that? How can I
> force the operating system to apply "memory pressure"?
I am pretty sure this has nothing to do with memory pressure. If the C++
runtime decided not to return memory to the OS, you probably cannot force
it to.
--
Vyacheslav Egorov
But it does. Felix's test measures RSS, the part of the program that
is in physical memory. If there is no memory pressure[1], i.e. no other
processes requesting (and using) memory, the OS will neither reclaim
nor swap, so the RSS won't shrink.
> If the C++ runtime decided not to return memory to the OS, you probably cannot force it to.
Partially true. The runtime may not return the memory (I think glibc
indeed does not, but don't quote me on this) but that doesn't mean the
OS won't swap out unused pages.
@Felix: there is no silver bullet for debugging memory usage. Your
best bet is valgrind + node_g.
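For example, something along these lines (test.js standing in for whatever script reproduces the growth; adjust flags to taste):

valgrind --leak-check=full ./node_g test.js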
[1] That should actually read 'VM pressure' but VM is an overloaded
concept on this mailing list so let's go with memory pressure.
It is freed when the GC gets to it (the GC trace line is printed at the
very end of GC).
> , but the OS/C++ runtime don't let go of it.
It looks so.
> But it does. Felix's test measures RSS, the part of the program that
> is in physical memory.
I stand corrected. I was talking more about the general case than about
this particular approach to leak detection.
--
Vyacheslav Egorov