Simple Memcached server in JavaScript in 100 lines of code


junyi sun

Aug 3, 2012, 2:48:15 AM
to nod...@googlegroups.com
Hi guys,

I am studying node.js. It is a wonderful tool for writing network-based applications.

Now, I have written a memcached server using node.js.  You can have a look at https://gist.github.com/3244607
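The rough shape of the server looks like this (a simplified sketch, not the exact code in the gist): a TCP server that speaks a small subset of the memcached text protocol (get/set) and keeps entries in a plain in-memory object. It assumes ASCII values for simplicity.

var net = require('net');

var store = {};  // key -> { flags: String, data: String }

var server = net.createServer(function (socket) {
  var buffer = '';

  socket.on('data', function (chunk) {
    buffer += chunk.toString();

    var pos;
    while ((pos = buffer.indexOf('\r\n')) !== -1) {
      var line = buffer.slice(0, pos);
      var args = line.split(' ');

      if (args[0] === 'get') {
        // Only handles a single key per "get" to keep the sketch short.
        var entry = store[args[1]];
        if (entry) {
          socket.write('VALUE ' + args[1] + ' ' + entry.flags + ' ' +
                       entry.data.length + '\r\n' + entry.data + '\r\n');
        }
        socket.write('END\r\n');
        buffer = buffer.slice(pos + 2);
      } else if (args[0] === 'set') {
        // "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n"
        var bytes = parseInt(args[4], 10);
        if (buffer.length < pos + 2 + bytes + 2) return;  // wait for the full data block
        store[args[1]] = { flags: args[2], data: buffer.substr(pos + 2, bytes) };
        socket.write('STORED\r\n');
        buffer = buffer.slice(pos + 2 + bytes + 2);
      } else {
        socket.write('ERROR\r\n');
        buffer = buffer.slice(pos + 2);
      }
    }
  });
});

server.listen(11211);  // memcached's default port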

I tested the program and found it could reach about 12,000 requests per second. However, during the test I noticed the throughput sometimes dropped suddenly, which I believe is due to GC pauses.


Is there a way to improve my code?



Thanks

Junyi


Tim Caswell

Aug 3, 2012, 10:29:15 AM
to nod...@googlegroups.com
Reducing GC pauses is a tricky problem. The best advice I can give is to allocate fewer objects; remember that functions, closures, and arrays are also objects. But don't do this blindly, without understanding where the allocations are, at the expense of making your code more complicated. Look at the V8 command-line options in node; I remember there being several that expose GC events (as well as optimizer events).
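For example (exact flag names vary by V8 version; node --v8-options prints what your build supports):

node --trace_gc memcached.js                   # print a line for every GC, with heap sizes and pause times
node --trace_opt --trace_deopt memcached.js    # log what the optimizing compiler does and what it bails out on
node --v8-options                              # list all V8 flags your node build accepts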

Jimb Esser

Aug 3, 2012, 9:35:41 PM
to nod...@googlegroups.com
The best thing to try: add --nouse_idle_notification to the node command line. This disables the full garbage collections that run when node tells V8 it thinks it is idle; the garbage collection V8 does on allocation should still take care of collecting garbage. Give that a try, watch the RSS in top or your favorite process monitor to make sure it is still collecting garbage (not just leaking), and hopefully the stalls will go away. We found this totally eliminated the giant garbage-collection stalls and did not noticeably impact process memory usage in our application.
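If you'd rather log it from inside the process than watch top, node's built-in process.memoryUsage() reports rss/heapTotal/heapUsed; a quick sketch like this dropped into the server will show whether memory keeps growing:

// Log memory usage every 10 seconds so a leak (RSS growing without
// bound) is easy to spot while the benchmark runs.
setInterval(function () {
  var mem = process.memoryUsage();
  console.log('rss=' + mem.rss + ' heapTotal=' + mem.heapTotal +
              ' heapUsed=' + mem.heapUsed);
}, 10000);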

Marcel Laverdet

Aug 3, 2012, 11:26:38 PM
to nod...@googlegroups.com
Yes, adding --nouse-idle-notification to your node flags is definitely the first thing you should try. You may also try to avoid objects with large numbers of keys (around a million seems to be my ceiling).
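One workaround, if the store ever grows that large, is to shard the single object into a fixed number of smaller ones (a sketch with made-up names, not anything from the gist):

// Split one huge object into N smaller ones picked by a cheap string
// hash, so no single object ends up with millions of properties.
var NUM_BUCKETS = 64;
var buckets = [];
for (var i = 0; i < NUM_BUCKETS; i++) buckets[i] = {};

function bucketFor(key) {
  var h = 0;
  for (var j = 0; j < key.length; j++) {
    h = (h * 31 + key.charCodeAt(j)) | 0;
  }
  return buckets[Math.abs(h) % NUM_BUCKETS];
}

function setItem(key, value) { bucketFor(key)[key] = value; }
function getItem(key) { return bucketFor(key)[key]; }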

junyi sun

Aug 4, 2012, 9:04:18 AM
to nod...@googlegroups.com
Thank you all.

I have tested my code with mc-benchmark, and I started the server like this:

node --nouse_idle_notification memcached.js


The benchmark results are:

====== SET ======
  1000000 requests completed in 24.53 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

77.58% <= 1 milliseconds
99.95% <= 2 milliseconds
99.96% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 5 milliseconds
40766.41 requests per second

====== GET ======
  1000000 requests completed in 23.54 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 0 milliseconds
82.48% <= 1 milliseconds
99.97% <= 2 milliseconds
99.98% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 5 milliseconds
42479.08 requests per second



junyi sun

Aug 7, 2012, 11:28:50 PM
to nod...@googlegroups.com
Hi guys, I have updated the code. It is no longer just a memory store: with LRU code added, it has become a real cache daemon.

You can review the code here: https://gist.github.com/3291755

It still has many places that could be improved. If you have any ideas for improving the LRU algorithm or the memory management, please let me know. Thanks. For comparison, the classic constant-time approach I have been looking at is sketched below.
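The classic way to make both get and set O(1) is a hash of nodes plus a doubly linked list kept in recency order, evicting from the tail. A sketch of that approach (the names here are made up; it is not the code in the gist):

// Hash of nodes + doubly linked list ordered from most- to
// least-recently used. get and set are both O(1).
function LRU(capacity) {
  this.capacity = capacity;
  this.size = 0;
  this.map = {};      // key -> list node
  this.head = null;   // most recently used
  this.tail = null;   // least recently used
}

LRU.prototype._unlink = function (node) {
  if (node.prev) node.prev.next = node.next; else this.head = node.next;
  if (node.next) node.next.prev = node.prev; else this.tail = node.prev;
};

LRU.prototype._pushFront = function (node) {
  node.prev = null;
  node.next = this.head;
  if (this.head) this.head.prev = node;
  this.head = node;
  if (!this.tail) this.tail = node;
};

LRU.prototype.get = function (key) {
  var node = this.map[key];
  if (!node) return undefined;
  this._unlink(node);      // move to the front on every hit
  this._pushFront(node);
  return node.value;
};

LRU.prototype.set = function (key, value) {
  var node = this.map[key];
  if (node) {
    node.value = value;
    this._unlink(node);
    this._pushFront(node);
    return;
  }
  node = { key: key, value: value, prev: null, next: null };
  this.map[key] = node;
  this._pushFront(node);
  if (++this.size > this.capacity) {   // evict the least recently used entry
    var lru = this.tail;
    this._unlink(lru);
    delete this.map[lru.key];
    this.size--;
  }
};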

alessioalex

Aug 9, 2012, 8:16:03 AM
to nod...@googlegroups.com
This is why I love Node so much: creating a cache daemon in around 200 lines of code is awesome.