Node.JS Memory Limit Questions


James

Nov 29, 2011, 6:31:22 AM11/29/11
to nod...@googlegroups.com
Hi Everyone,

At the moment I am working on a project which needs to handle many tens of thousands of concurrent active socket connections. I've been looking at memory usage and the V8 engine. I am using v0.6.2 of Node.JS but can easily upgrade my code base if needed. I've been reading a lot of discussion on the V8 group, and one post implied, without being too clear, that the memory limit in the x64 build is now unknown and that the previous limits no longer apply. Or does the 1GB limit still hold?

I can't afford for an "out of memory" event to happen. So I am currently planning to run up to 40 node instances, each limited to 1GB of RAM, on a single server with quad-core Xeon processors and plenty of RAM to cope with that, plus a load balancer to spread the load across all the instances. If someone could please help answer the following questions, that would be great.

1). What is the maximum memory limit in Node.JS?
2). Is there a way to limit Node.JS memory usage per process?
3). Do Node.JS instances share memory, and does this affect any memory limits?
4). What is the best way to monitor each instance's memory consumption (heapTotal/heapUsed)?
5). Is there a way to store socket connection data outside the V8 heap, e.g. in buffers or some other way, to reduce Node.JS memory usage?

Thanks,

James

James

Nov 29, 2011, 6:37:49 AM11/29/11
to nod...@googlegroups.com
Would the cluster module allow me to do this automatically, without needing to build a custom load balancer, so my Node.JS application could use 40GB of RAM across 40+ workers sharing the load?

Arnout Kazemier

Nov 29, 2011, 7:14:35 AM11/29/11
to nod...@googlegroups.com
1) By default it's 1.7 GB on 64-bit, but with node 0.6 it can be as much as your machine's memory.
2) Use the --max-old-space-size flag to set the memory limit lower or higher.
3) No, they don't share memory.
4) I would just check the overall process memory instead of the V8 heap.
5) Maybe, but you'd have to build your own custom C module to allocate memory and hack into the node source.
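To make (2) and (4) concrete, here is a hedged sketch. The flag is V8's `--max-old-space-size` as mentioned above; `logMemory` is a made-up helper name, and the MB rounding is arbitrary:

```javascript
// (2) The heap cap is set per process on the command line, e.g.:
//       node --max-old-space-size=1024 app.js   # cap V8 old space at ~1 GB
// (4) Per-process usage can be read from inside the app:
function logMemory() {
  var m = process.memoryUsage(); // all values are in bytes
  console.log(
    'rss=' + Math.round(m.rss / 1048576) + 'MB' +
    ' heapTotal=' + Math.round(m.heapTotal / 1048576) + 'MB' +
    ' heapUsed=' + Math.round(m.heapUsed / 1048576) + 'MB');
}
logMemory();
// e.g. call logMemory() on a timer, or expose it on an admin endpoint
```

Note that `rss` is the whole process's resident memory, while `heapTotal`/`heapUsed` only cover the V8 heap.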
--
Job Board: http://jobs.nodejs.org/
Posting guidelines: https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to nod...@googlegroups.com
To unsubscribe from this group, send email to
nodejs+un...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en

James

Nov 29, 2011, 8:14:51 AM11/29/11
to nod...@googlegroups.com
Thanks, very helpful :-)

So technically a Node.JS instance can run using 36GB of RAM or more then?

Would it be worth adding the cluster module to load balance anyway? And do I need isWorker or isMaster checks throughout my code so the workers only see the HTTP servers?
The example in the docs really lacks a detailed explanation of how this needs to be implemented.

James

James Jackson

Nov 29, 2011, 8:23:09 AM11/29/11
to nod...@googlegroups.com
Sorry for the multi-thread..

Ideally I would prefer to have a single instance able to use 36GB of RAM, as the cluster module would be pretty complex to implement within the source code, as would building a load balancer. But if that is the best way to get the performance needed, then I will look at adding the cluster module to the source. I would just like some advice about how best to implement it :-)

Thanks.

James



Diogo Resende

Nov 29, 2011, 8:33:07 AM11/29/11
to nod...@googlegroups.com

It might be hard, but it's safer. I would not advise you to have a single 36GB+ process. If this will handle several clients, you have advantages in splitting into several workers:

- Start/stop workers as needed (depending on load)
- Upgrades without disturbing clients
- No single point of failure, at least for your client requests

---
Diogo R.

James

Nov 29, 2011, 8:38:36 AM11/29/11
to nod...@googlegroups.com
Thanks Diogo, 

I thought as much, but it's nice to hear it from others. I am going to implement the cluster module, since this is where all the memory is used to handle and store active socket connections, so it will help balance all the connections across lots of workers. Thanks for the advice.

Ben Noordhuis

Nov 29, 2011, 9:42:17 AM11/29/11
to nod...@googlegroups.com
On Tue, Nov 29, 2011 at 13:14, Arnout Kazemier <in...@3rd-eden.com> wrote:
> 1) by default its 1.7 gb on 64 bit, but it can be as much as your machines
> memory with node 0.6

The 0.6.x releases ship with V8 3.6.x. There is still a hard upper
limit on the size of the V8 heap.

It's worth noting that data in buffers doesn't count towards the limit
(a single buffer cannot be larger than 2^30-1 bytes but that is still
quite large, of course).
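A quick way to see this (a sketch; the 100 MB size is arbitrary, and current Node spells the allocation `Buffer.alloc` where the 0.6 line used `new Buffer`):

```javascript
// Buffer memory lives outside the V8 heap, so a large buffer grows the
// process's RSS but leaves heapUsed almost untouched.
var before = process.memoryUsage();
var big = Buffer.alloc(100 * 1024 * 1024); // 100 MB, allocated off-heap
big.fill(1);                               // touch every page so it is committed
var after = process.memoryUsage();

var heapDelta = (after.heapUsed - before.heapUsed) / 1048576;
var rssDelta = (after.rss - before.rss) / 1048576;
console.log('heapUsed grew by ' + heapDelta.toFixed(1) + ' MB');
console.log('rss grew by ' + rssDelta.toFixed(1) + ' MB');
```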

Arnout Kazemier

Nov 29, 2011, 10:24:34 AM11/29/11
to nod...@googlegroups.com
What was the reason it got reverted back then? Because the last time I tested it, it worked just fine.

Arnout Kazemier

---
e-mail: in...@3rd-Eden.com
twitter: @3rdEden

http://observer.no.de - Observing and learning from your customers

Fedor Indutny

Nov 29, 2011, 10:29:19 AM11/29/11
to nod...@googlegroups.com
3.7 was quite unstable: crashes, errors on stress tests.

The situation is probably better now, but right now we can only consider using 3.7 for 0.5.x.

Cheers,
Fedor.

Shripad K

Nov 29, 2011, 11:19:53 AM11/29/11
to nod...@googlegroups.com
Ok, I just tested node v0.6.3 on an EC2 large instance. I wrote a script that automates opening websocket connections to a websocket server. The maximum number of persistent connections that could be opened was 48500 for 1 node process (note: this is with the client running on the server). Considering the large instance has 4 compute units, I launched 4 node processes but was still stuck at this magic number of 48500. Is this because of the 64k connections-per-client limit?

Bert Belder

Nov 29, 2011, 11:28:58 AM11/29/11
to nodejs

Obviously you have to set the file descriptor limit high enough to
handle a lot of connections. The problem you're seeing could also
occur when you're using one client to make the connections to your
server, and that client just runs out of available ports.
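The ephemeral-port arithmetic behind that last point can be sketched as follows (the range shown is a typical Linux default; your system's may differ, see /proc/sys/net/ipv4/ip_local_port_range):

```javascript
// A TCP connection is identified by (srcIP, srcPort, dstIP, dstPort).
// One client IP talking to a single server ip:port can therefore open
// at most one connection per ephemeral source port.
var portRangeStart = 32768; // typical Linux default
var portRangeEnd = 61000;   // typical Linux default
var maxConnsPerClientIP = portRangeEnd - portRangeStart + 1;
console.log(maxConnsPerClientIP); // 28233 with these defaults
```

Widening the port range (and raising the file descriptor limit with ulimit -n) lifts this per-client ceiling; testing from several client machines avoids it entirely.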
