--nouse-idle-notification and "last resort gc"

Joran Dirk Greef

Nov 4, 2012, 10:19:11 AM
to v8-u...@googlegroups.com
I am running Node v0.8.14 with --nouse_idle_notification --expose_gc --max_old_space_size=19000 --max_new_space_size=19000000.

I have a large object used as part of a BitCask-style store, holding a few million entries.

Calling gc() manually takes about 3 seconds, which is fine as I call it every 2 minutes.

The machine has 32GB of RAM, all of which is available to the process; nothing else is running.

The process sits at around 1.9GB of RAM.

I have found an interesting test case where asynchronously reading a 1MB file in Node takes longer and longer depending on how many entries are in the large object discussed above:

Node.fs.readFile('test', 'binary', End.timer())
  347745 ms: Scavenge 1617.4 (1660.4) -> 1611.1 (1660.4) MB, 0 ms [allocation failure].
  350900 ms: Mark-sweep 1611.5 (1660.4) -> 1512.2 (1633.4) MB, 3153 ms [last resort gc].
  354072 ms: Mark-sweep 1512.2 (1633.4) -> 1512.0 (1592.4) MB, 3171 ms [last resort gc].
  357247 ms: Mark-sweep 1512.0 (1592.4) -> 1512.0 (1568.4) MB, 3175 ms [last resort gc].
  360426 ms: Mark-sweep 1512.0 (1568.4) -> 1512.0 (1567.4) MB, 3178 ms [last resort gc].
  363620 ms: Mark-sweep 1512.0 (1567.4) -> 1512.0 (1567.4) MB, 3193 ms [last resort gc].
  366802 ms: Mark-sweep 1512.0 (1567.4) -> 1511.6 (1567.4) MB, 3182 ms [last resort gc].
  369967 ms: Mark-sweep 1511.6 (1567.4) -> 1511.6 (1567.4) MB, 3164 ms [last resort gc].
2012-11-04T14:59:30.700Z INFO 22230ms

Reading the 1MB file before the large object is created is fast; the bigger the object becomes, the slower the file is to read.

Why is last resort GC being called if gc is exposed and the machine has more than enough RAM?

Interestingly, this behaviour does not happen with V8 3.6.6.25 and earlier.

The reason I can't use 3.6.6.25, however, is that its heap is limited to 1.9GB and I need more headroom than that.

Is there any way I can disable the last resort GC?

Yang Guo

Nov 5, 2012, 3:21:11 AM
to v8-u...@googlegroups.com
The short answer is: don't mess with GC settings if you don't know what you are doing.

The long answer is: new space is the part of the heap where short-lived objects are allocated. The GC scans new space on every collection and promotes long-lived objects into old space. You are setting the new space to ~19GB, which takes a while to scan. Furthermore, you are setting the old space to only 19MB, limiting the part of the heap that long-lived objects are moved into; hence the last resort GC. What you probably want is to specify a large old space size but leave the new space size at its default.

Yang

Joran Dirk Greef

Nov 5, 2012, 3:50:21 AM
to v8-u...@googlegroups.com
Max-old-space-size is measured in MB, not KB as you suggest.

Further, max-new-space-size makes no difference to the GC trace given above, whether it's passed as a flag or not, big or small.
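
For reference, the units are spelled out in V8's flag definitions. From the 3.x flag-definitions.h (quoted from memory, so take the exact wording as approximate):

DEFINE_int(max_new_space_size, 0, "max size of the new generation (in kBytes)")
DEFINE_int(max_old_space_size, 0, "max size of the old generation (in Mbytes)")

So --max_old_space_size=19000 asks for ~19GB of old space, while --max_new_space_size=19000000 asks for ~19GB of new space as well.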

Vyacheslav Egorov

Nov 5, 2012, 9:43:27 AM
to v8-u...@googlegroups.com
Hello Joran,

"last resort gc" means that there was an allocation failure that a
normal GC could "resolve". Basically you are in a kinda OOM situation.
I am kinda curious what kind of allocation it is. Probably it is some
very big object. It can be that allocation attempt does not correctly
fall into allocating from LO space.

One thing, though, is that the last resort GC could be much more lightweight for a node.js application than it currently is. I doubt 7 GCs in a row are very helpful. As a workaround you can go into Heap::CollectAllAvailableGarbage and replace everything inside with

CollectGarbage(OLD_POINTER_SPACE, gc_reason);

This should get rid of the 7 repetitive GCs. I think for an application like yours it makes perfect sense to set the internal GC limits very high and let the incremental GC crunch things instead of falling back to non-incremental marking. But there is currently no way to configure the GC like that.
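
Spelled out, the workaround reduces the function to a single ordinary old-space collection. A sketch against the V8 3.x heap.cc of the time (names taken from that tree; exact surrounding details may differ):

void Heap::CollectAllAvailableGarbage(const char* gc_reason) {
  // Original body: up to kMaxNumberOfAttempts (7) consecutive mark-sweeps,
  // followed by shrinking new space and uncommitting unused pages.
  // Replacement: one ordinary old-space collection.
  CollectGarbage(OLD_POINTER_SPACE, gc_reason);
}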
Vyacheslav Egorov

Joran Dirk Greef

Nov 5, 2012, 10:01:52 AM
to v8-u...@googlegroups.com
Thanks Vyacheslav.

I thought it might be some kind of OOM situation, but was surprised that this would be the case, given all the memory available to the process. Running top shows 32GB of used memory, but I assume this is mostly disk cache, since there are no other user programs shown in top apart from the node process itself, which is shown at around 2GB. The node process accesses files whose total data set is over 32GB, so it makes sense that Linux would grow the disk cache. Would something like vm.overcommit_memory=1 help V8 here? It seems like V8 is seeing a false-positive OOM; there really should be more than enough RAM.

As to the kind of allocation, it seems to be caused by calling buffer.toString, which drops down to C++ to convert the buffer to a string and pass it back. So essentially any 1-2MB readFile with a 'binary', 'utf8' or 'ascii' encoding seems to trigger it. Interestingly enough, reading the file as a pure Buffer does not cause the allocation error and returns within a few ms, and then converting the buffer to a string manually in JS does not cause any further GC either.
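
For the 'binary' case, the shape of that C++ conversion is roughly the following (a sketch at the V8 3.x API level, not node's actual code): every byte is widened to a two-byte character and the whole string is allocated in the V8 heap in one go, which is exactly the kind of large allocation that can trip the last resort path.

#include <stdint.h>
#include <v8.h>

v8::Local<v8::String> BinaryToString(const char* data, size_t length) {
  // 'binary' decoding widens each byte to a 16-bit character...
  uint16_t* twobyte = new uint16_t[length];
  for (size_t i = 0; i < length; ++i) {
    twobyte[i] = static_cast<uint8_t>(data[i]);
  }
  // ...then a single ~2x-sized string is allocated in the V8 heap.
  v8::Local<v8::String> result =
      v8::String::New(twobyte, static_cast<int>(length));
  delete[] twobyte;
  return result;
}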

I will give your suggestion re: CollectAllAvailableGarbage a try and post the results here.

What I wanted to do was set the GC limits very high, as you say, to try to prevent anything non-incremental, since the heap holds millions of persistent objects. I was hoping there would be a way to configure this using flags, or to make exposing gc cause V8 to refrain from doing anything non-incremental except when gc() is called.

Your help is much appreciated.

Joran Dirk Greef

Nov 6, 2012, 4:29:20 AM
to v8-u...@googlegroups.com
Vyacheslav, just to clarify: must I replace the entire contents of CollectAllAvailableGarbage with:

CollectGarbage(OLD_POINTER_SPACE, MARK_COMPACTOR, gc_reason, NULL)

or

CollectGarbage(OLD_POINTER_SPACE, gc_reason)

?

Must I remove everything including things like:

new_space_.Shrink();
UncommitFromSpace();
Shrink();

Your help is much appreciated.


Hitesh Gupta

Jun 6, 2013, 4:22:04 AM
to v8-u...@googlegroups.com
Hi,
  We are facing a similar problem. We have an XMPP server running on node.js on a machine with 3.8GB of RAM available. However, at around 400MB of heap usage, V8 starts seeing a false-positive OOM and triggering last resort GC. The detailed problem description and GC trace can be found at https://groups.google.com/forum/?fromgroups#!topic/v8-users/pnhQsNxUhs4

  Please let us know if any cause or resolution has been identified for similar problems.

Regards,
Hitesh.

Vyacheslav Egorov

Jun 6, 2013, 5:03:43 AM
to v8-u...@googlegroups.com

Do you report the size of your enormous Judy array to V8 via AdjustAmountOfExternallyAllocatedMemory? If you do, try disabling that.
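
AdjustAmountOfExternallyAllocatedMemory is the embedder hook that tells V8 how much off-heap memory is kept alive by heap objects; large reported values make the GC collect more aggressively. A minimal sketch against the V8 3.x API, where it was still a static method on v8::V8:

#include <v8.h>

void OnNativeBufferAllocated(size_t size) {
  // Tell V8 these bytes are pinned by a heap object; V8 adds this to the
  // memory pressure that drives its collection heuristics.
  v8::V8::AdjustAmountOfExternallyAllocatedMemory(static_cast<intptr_t>(size));
}

void OnNativeBufferFreed(size_t size) {
  // Give the bytes back once the native memory is released.
  v8::V8::AdjustAmountOfExternallyAllocatedMemory(-static_cast<intptr_t>(size));
}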

Vyacheslav Egorov


Hitesh Gupta

Jun 7, 2013, 6:07:39 AM
to v8-u...@googlegroups.com
No, we are not using that. 

Regards,
Hitesh.

Joran Greef

Jun 7, 2013, 6:31:20 AM
to v8-u...@googlegroups.com
Are you frequently creating, slicing or toString-ing any Buffers of around a few MB? That might be calling AdjustAmountOfExternallyAllocatedMemory indirectly.

Have you tried running without CollectAllAvailableGarbage and monitoring your RAM usage after that?

Joran Greef

Hitesh Gupta

Jun 7, 2013, 8:04:09 AM
to v8-u...@googlegroups.com
Yeah, we serialize and deserialize a lot of data to/from Buffers. I couldn't spot calls to AdjustAmountOfExternallyAllocatedMemory from node_buffer, though.

Haven't tried running without CollectAllAvailableGarbage. Did neutering it disable only the last resort GC, or all garbage collection of the old generation for you?

Also, is there a scenario where memory allocation failures occur even though there is memory available on the machine, because V8 thinks it is OOM (and thus invokes CollectAllAvailableGarbage, which doesn't achieve anything)?

Regards,
Hitesh.

Joran Greef

Jun 10, 2013, 1:44:13 AM
to v8-u...@googlegroups.com
On 07 Jun 2013, at 2:04 PM, v8-u...@googlegroups.com wrote:
> Yeah, we serialize and deserialize a lot of data to/from Buffers. I couldn't spot calls to AdjustAmountOfExternallyAllocatedMemory from node_buffer, though.
> Haven't tried running without CollectAllAvailableGarbage. Did neutering it disable only the last resort GC, or all garbage collection of the old generation for you?

Neutering CollectAllAvailableGarbage should just disable the 7-times last resort GC you are seeing.

Neutering CollectAllGarbage will eliminate most of the other collection on the old generation. In my case, collection on old space still runs every now and then, but this reduces it.
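
Concretely, "neutering" here means stubbing the body out, along these lines (again a sketch against the V8 3.x heap.cc, with the same caveats as before):

void Heap::CollectAllGarbage(int flags, const char* gc_reason) {
  // Intentionally a no-op: rely on scavenges, incremental marking and
  // explicit gc() calls instead of full non-incremental collections.
}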

> Also, is there a scenario where memory allocation failures occur even though there is memory available on the machine, because V8 thinks it is OOM (and thus invokes CollectAllAvailableGarbage, which doesn't achieve anything)?
>
> Regards,
> Hitesh.

Yes, that is what I had. My machine has 32GB, the process is around 3GB, and without those hacks it was spending 50+ seconds per GC and collecting every 5 seconds. The filesystem had most of the 32GB in use as disk cache; otherwise nothing else was using it.