ANN: fastcgi parser for node.js


billywhizz

Oct 13, 2010, 5:51:12 AM
to nodejs
http://github.com/billywhizz/node-fastcgi-parser

I've been digging around in the internals of the fastcgi specification
and have come up with a very basic but working FastCGI parser for
node.js. This should make it fairly easy to build the following in
node.js:

- FastCGI responders/authorizers/filters behind a FastCGI-capable web
server (Apache/nginx/lighttpd/IIS etc.)
- FastCGI-capable web servers that can interact with backend FastCGI
applications (PHP, custom apps) through unix or inet sockets

It's very much alpha quality right now and I haven't put a lot of work
into boundary testing or anything like that.

performance is so-so. i used creationix's buffer modifications to do
binary packing, which is kinda slow for what i need. I have a binary
packing addon written (which almost doubles the packing speed) but
need to work out a few kinks before adding it to this. at some stage i
may replace the whole library with a c++ addon with minimal js
bindings, but am happy just to have something working for now.

If anybody has any suggestions, please holla. Am not sure if the APIs
make sense yet and any feedback would be appreciated.

There's an example Server and Client in the tests directory. I'll be
updating the README with better instructions at some stage over the
next 24 hours.

Thx.

r...@tinyclouds.org

Oct 13, 2010, 6:31:03 AM
to nod...@googlegroups.com

Holy crap! That's awesome! I would like to see what sort of latency
and throughput you're getting.

billywhizz

Oct 13, 2010, 6:44:12 AM
to nodejs
ry, lemme know what specific tests you'd like to see and i'll set them
up and run them.

have done some basic testing between node.js client and server and the
server is able to get through about 5-6k fastcgi records (of various
sizes) per second on a single core of a core 2 quad 2.33GHz. With my
binary packer this goes up to about 8-9k. there's a lot of room for
improvement on the performance side.

i have a max size (no real checks in place at the moment) of 16384 for
any body record. fastcgi allows splitting a request into multiple
records, so i could set this limit smaller if we want to minimize heap
allocation in each parser. Nice thing about fastcgi is you can also
multiplex multiple requests across one connection (using the requestId
to identify requests) so if we can keep chunk sizes relatively small
we should be able to handle a lot of concurrent connections with
reasonable latency.
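
for reference, every fastcgi record starts with a fixed 8 byte header
(version, type, requestId, contentLength, paddingLength). a minimal
sketch of unpacking one from a plain node.js Buffer - the function name
is just for illustration, it isn't the parser's actual api:

function parseHeader(buf, offset) {
  // unpack the fixed 8-byte fastcgi record header starting at offset
  return {
    version: buf[offset],                                     // protocol version, always 1
    type: buf[offset + 1],                                    // BEGIN_REQUEST, PARAMS, STDIN, ...
    requestId: (buf[offset + 2] << 8) | buf[offset + 3],      // identifies the request when multiplexing
    contentLength: (buf[offset + 4] << 8) | buf[offset + 5],  // body bytes that follow, max 65535
    paddingLength: buf[offset + 6]                            // padding after the body; buf[offset + 7] is reserved
  };
}

contentLength is a 16 bit field, so 65535 is the protocol's hard limit
per record - the 16384 above is just the cap i've chosen in the parser.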

have it all working behind nginx and lighttpd. the configs for them
are in the repo. you'll need to chmod the socket after you launch the
test server so nginx/lighttpd can access it...
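
you could also do the chmod from inside the test server itself once
it's listening. a rough sketch (the socket path is just an example, use
whatever your nginx/lighttpd config points at):

var net = require('net');
var fs = require('fs');

var SOCKET = '/tmp/node-fcgi.sock'; // example path only

var server = net.createServer(function (connection) {
  // hand the connection over to the fastcgi parser here
});

server.on('listening', function () {
  // loosen permissions so the web server's worker user can connect
  fs.chmodSync(SOCKET, 0666);
});

server.listen(SOCKET);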

anything else you need, let me know...


billywhizz

Oct 13, 2010, 10:35:32 PM
to nodejs
a quick question on this. am wondering would it be better for a low
level library like this to use callbacks instead of emitting events? i
imagine there is quite a bit of overhead added when emitting
events...?
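
to illustrate what i mean (the names here are made up, not the current
api):

var EventEmitter = require('events').EventEmitter;

// event style: one emit() per record, with a listener lookup every time
var evented = new EventEmitter();
evented.on('record', function (record) { /* handle the record */ });
// inside the parse loop: evented.emit('record', record);

// callback style: a plain property the parse loop calls directly
var direct = {};
direct.onRecord = function (record) { /* handle the record */ };
// inside the parse loop: if (direct.onRecord) direct.onRecord(record);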

Shakti

Oct 29, 2010, 3:33:27 AM
to nodejs

This extension looks pretty exciting. However, I would like to know
whether, performance-wise, it is better than running a node.js server
and forwarding requests to it from a Lighty/nginx proxy. I understand
that FastCGI works over a Unix socket in a Linux environment, which is
much faster than the TCP/HTTP socket used for actual proxy forwarding,
but I have read your concerns about memory and speed and want to know
your opinion.

There is another option: we could write a custom JS binding over an
existing FCGI lib and provide a JS extension in node.js to run it as an
FCGI app.

Cheers,
Shakti

Ben Noordhuis

Oct 29, 2010, 5:11:27 AM
to nod...@googlegroups.com
On Fri, Oct 29, 2010 at 09:33, Shakti <shakti....@gmail.com> wrote:
> This extension looks pretty exciting. However, I would like to know
> whether, performance-wise, it is better than running a node.js server
> and forwarding requests to it from a Lighty/nginx proxy. I understand
> that FastCGI works over a Unix socket in a Linux environment, which is
> much faster than the TCP/HTTP socket used for actual proxy forwarding,
> but I have read your concerns about memory and speed and want to know
> your opinion.

I don't have any hard numbers for you but with a regular reverse
proxy, both the proxy and the back-end have to parse the HTTP request.
FastCGI uses a binary protocol to communicate with the back-end so
there is less overhead. How much of a difference that makes in the
grand scheme of things depends on the application.

> There is another option: we could write a custom JS binding over an
> existing FCGI lib and provide a JS extension in node.js to run it as an
> FCGI app.

I've been hacking on this. FastCGI supports two modes of operation:

A. One request per process (essentially CGI without the per-request exec).
B. Multiple concurrent requests, all handled by the same process.

A is relatively easy to get working but mode B requires more love and care.
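
Roughly speaking, B is harder because the backend has to keep
per-request state keyed by the requestId in each record header, since
records belonging to different requests can be interleaved on one
connection. A sketch of the idea (the names and the dispatch helper are
made up, not from any existing module):

// record types from the FastCGI spec
var FCGI_BEGIN_REQUEST = 1, FCGI_PARAMS = 4, FCGI_STDIN = 5;

var requests = {}; // requestId -> per-request state

function handleRequest(requestId, req) {
  // hypothetical dispatch point: params and stdin for this request are complete
}

function onRecord(header, body) {
  if (header.type === FCGI_BEGIN_REQUEST) {
    requests[header.requestId] = { params: {}, stdin: [] };
    return;
  }
  var req = requests[header.requestId];
  if (!req) return; // record for an unknown request - ignore it
  if (header.type === FCGI_STDIN) {
    if (body.length === 0) {
      // an empty STDIN record marks the end of the request body
      handleRequest(header.requestId, req);
      delete requests[header.requestId];
    } else {
      req.stdin.push(body);
    }
  }
  // FCGI_PARAMS, FCGI_ABORT_REQUEST etc. are handled along the same lines
}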

Shakti

Oct 29, 2010, 7:10:09 AM
to nodejs
Hey, thanks for the quick reply. In my setup I am trying to run
multiple node instances behind lighty in FCGI mode, where each node
instance is designed to load a JS file and serve the request.

I have tried the setup but am failing to get it working as per my
requirement. In my understanding, a binary name is supposed to be given
in the FCGI conf like the following, and the executable is started when
lighty comes up.

fastcgi.server = ( ".js" =>
  ( "localhost" =>
    (
      "socket" => "/tmp/nginx.sock",
      "bin-path" => "/usr/local/bin/node",
      "check-local" => "disable"
    )
  )
)

Also, this node instance needs to run the server.js you provided in the
examples section. I am a little confused setting the whole thing up.

I also tried running lighty without the bin-path you gave, then started
a node instance manually with server.js as the argument. I still
couldn't see /tmp/nginx.sock being created.

Thanks,
Shakti


Ben Noordhuis

Oct 29, 2010, 7:52:35 AM
to nod...@googlegroups.com
Sorry, Shakti. I realize my previous message could be interpreted to
mean that FCGI works with nodejs right now. Alas, it doesn't -
Andrew's (billywhizz) effort notwithstanding.

I've been hacking on a node-fastcgi[1] project. It's not complete (far
from it) but it might be in a couple of weeks. The idea is to support
mode B, single-process FCGI, and provide an http.createServer()-like
interface to the application. I'll keep you posted if you want.

[1] http://github.com/bnoordhuis/node-fastcgi
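
The sort of thing I have in mind, purely as a hypothetical sketch - the
module name and the exact createServer() shape are assumptions, since
none of this is finished:

// hypothetical usage only; the real API may well end up different
var fastcgi = require('fastcgi'); // assumed module name

fastcgi.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello from a FastCGI backend\n');
}).listen('/tmp/node-fcgi.sock'); // the socket the web server connects to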

Shakti

Oct 29, 2010, 8:45:19 AM
to nodejs
Hi Ben,

I managed to set up Node.js as a lighty backend successfully.

"bin-path" => "/usr/local/bin/node pathtoserver.js"

Does the trick. The socket that lighty was creating had a "-0" suffix
appended to the name given in the conf file, so I had to modify
server.js. One problem still remains: when I replicated the same
configuration on another system, I got the following issue when making
a curl request to the server.

$ sudo lighttpd -D -f examples/lighttpd.conf

/home/azingo/node-fastcgi-parser/lib/fastcgi.js:174
var tmp = _header.unpack("oonno", 0);
^
TypeError: Object

Cheers,
Shakti

billywhizz

Oct 29, 2010, 6:59:06 PM
to nodejs
A couple of things to be aware of:

- doing binary work in node.js is not particularly fast. the
buffer_extras.js that is included in my github is not as fast as doing
the binary packing/unpacking using a node C++ addon (i have a version
that does this and can add it to github, but it has a dependency on my
binary packer/unpacker here: http://github.com/billywhizz/node-binary)
- my fastcgi parser is not as good as it could be - currently, it
caches the whole body in the parser before raising an event. ideally,
it should be able to serve up chunks of the body (stdin/stdout/stderr)
as they come in and discard the buffers, allowing more concurrency and
lower memory usage (there's a rough sketch of what i mean below, after
this list). This might be possible but i haven't worked through it
yet. In reality, this should be taken care of by the web server
breaking the body up into fastcgi packets of a respectable size
- Lighttpd/Nginx/Apache all have a pretty serious limitation in their
fastcgi implementations (i think because they all presumed fastcgi
will only ever be used to talk to PHP). They do not use persistent
connections which means for each request a new connection to the
backend is established and torn down. Node.js doesn't like this much
and performance is a lot poorer than it would be if the connection was
kept open.
- As far as i can see, none of the major servers implement fastcgi
multiplexing, which allows multiple requests to be multiplexed across
the same connection. This is one of the nicest features of FastCGI so
it's a bit of a bummer that none of them do it. The only server i have
seen a reference to that implements it is litespeed
(http://www.litespeedtech.com/) and that is a commercial-license-only
product as far as I know
- As far as comparing fastcgi to reverse-http-proxy, that would be an
interesting benchmark to see. There will likely be some overhead to
parsing FastCGI requests but this might be offset by the fact that
http is a much more complex protocol. if you look here:
http://github.com/billywhizz/node-fastcgi-parser/tree/master/benchmark/
i have done a basic benchmark against ry's http parser and it doesn't
hold up too badly, but it's hard to come up with a fair benchmark as it
all depends on how much control you want over the stream on the client
side (i.e. which headers/events etc you are interested in handling)
- You could use libfastcgi and wrap a node.js binding around it, but
then you would be dependent on the way sockets are handled in
libfastcgi and i don't know what the situation is with regard to
blocking in that lib. it's also a pretty nasty looking lib in general
and makes a lot of assumptions about the way you do things.
- this parser could fairly easily be rewritten in c/c++ as a node
addon, which would likely improve performance. this is something i
might look at if i find the time in the next couple of months
- using a unix socket will be much faster than TCP but in most real
world scenarios, the FastCGI backend is going to be running on a
separate server to the Web server so you are going to be stuck with
TCP if you want to scale out
- Shakti - if you have some code you are trying to get to work with
the parser, feel free to post it up and i'll see if i can help you out
getting it working...
- Ben, i just checked out your project - looks like you have got the
interaction with the web server working - nice work!! i wasted so much
time trying to get this to work, you wouldn't believe! i am going to
have a play around with this in the next few days... it should be
fairly easy to get this working in tandem with my parser.
- Shakti - can you post some code related to the error you posted so
we can try to replicate it?
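
regarding the second point above (serving up body chunks instead of
caching the whole body), a rough sketch of what i mean - the event
names are made up, not the parser's current api:

// illustrative only: hand stdin chunks to the app as they arrive
function attachStreaming(parser, requests) {
  // requests maps requestId -> an object with ondata/onend callbacks
  parser.on('stdin', function (requestId, chunk) {
    // chunk is one record's worth of body data; pass it straight through
    // and let the parser drop its reference instead of buffering it all
    requests[requestId].ondata(chunk);
  });
  parser.on('stdinEnd', function (requestId) {
    // an empty stdin record from the web server marks the end of the body
    requests[requestId].onend();
  });
}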

billywhizz

Oct 29, 2010, 9:52:25 PM
to nodejs
i had a look at ben's code and finally figured out how to get node.js
to act as a backend when started from lighttpd. i've posted an example
here:

http://gist.github.com/654795

you will need to copy the js file to the examples directory in my
fastcgi-parser git repo and change the lighttpd config to point at it.
you will also need to be sure to set permissions so lighttpd can run
it.

when you have it started, you will need to point your browser at
lighttpd and use a file name with a .js extension.

let me know if you have any problems getting it to work.

billywhizz

Oct 29, 2010, 9:57:07 PM
to nodejs
check this out and let me know if you can get it working. i've tested
against lighttpd and it works like a dream. thanks to ben for a
pointer in the right direction!

http://gist.github.com/654795

billywhizz

Oct 29, 2010, 10:56:52 PM
to nodejs
you can also try using spawn-fcgi (comes with lighttpd as far as i
know):

spawn-fcgi -s /tmp/nginx.sock -F 4 -U nginx -G nginx -- examples/responder.js

(this creates 4 child processes and gives ownership of the socket to
the nginx user)

i've added responder.js to the github repo.

Shakti

Oct 30, 2010, 6:15:25 AM
to nodejs
Thanks for the insightful answer. I will try these and post my
findings on Monday.

Shakti

Nov 1, 2010, 9:46:07 AM
to nodejs
Hi,

We ran server.js as a FastCGI spawned process on a lighty server and
compared it with a normal node.js server configuration. We found the
FastCGI response took about 40% extra time (6 ms for my setup) compared
to a direct response when very little data is sent. We also ran it with
10k of POST data and the ratio still remained 40%. All our responses
are of very small size, so we didn't try very large records. With
responder.js, however, we found only 10% extra time. I am not sure what
the difference is between responder.js and server.js.

Will be trying with a proxy configuration hopefully very soon.

Got one question. We sent multiple requests at the same time and node
could still receive them. This behaviour is different from libFCGI in
C, which makes an accept call and blocks on a request. I am wondering:
if my server-side app code is written in a reentrant manner, can
node-fcgi-parser handle multiple requests?

P.S.: I ran the suite with both 1000- and 10000-request batches.

billywhizz

Nov 1, 2010, 11:46:30 AM
to nodejs
Shakti,

Am not sure why there should be a difference between responder.js and
server.js as they are doing the same thing. maybe lighttpd is spawning
multiple instances of responder.js and load balancing across them? by
default it creates 4 child processes i think.

also, you will get better performance if you use my binary packer to
create the responses as it's about twice as fast using this compared
to buffer_extras.js code. I've added a different version of the parser
here:

http://github.com/billywhizz/node-fastcgi-parser/blob/master/lib/fastcgi-binary.js

you will need to require it instead of fastcgi.js and make sure that
node-binary is available to it. This can be cloned from here:
http://github.com/billywhizz/node-binary.
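
in other words, something like this at the top of the example scripts
(the paths are just examples, point them at wherever you cloned the
repos):

// var fastcgi = require('./lib/fastcgi');     // pure-js packing (buffer_extras)
var fastcgi = require('./lib/fastcgi-binary'); // uses the node-binary addon instead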

If you could post up any scripts/code for the tests you are running, i
can try to run the same tests here and see where the bottlenecks might
be. If you don't want to share them with everyone, then just message
me on github or send me an email (my mail address is on my github
profile).

Andrew

billywhizz

Nov 1, 2010, 12:38:56 PM
to nodejs
> Got one question. We sent multiple requests at the same time and node
> could still receive them. This behaviour is different from libFCGI in
> C, which makes an accept call and blocks on a request. I am wondering:
> if my server-side app code is written in a reentrant manner, can
> node-fcgi-parser handle multiple requests?

Am a bit unclear on what you are asking here Shakti. The responder.js
script should be able to handle multiple concurrent requests. I think
a C/libfcgi app will block until a request is complete unless you use
multiple threads to handle requests in your c application. Again, if
you could send on some examples of what you are trying to do, it would
be easier for me to see what is going on.

Also, a benchmark directly against a node.js http server is not very
fair. to get a fair comparison you should at least be doing a reverse
proxy from lighttpd to node.js backend.

billywhizz

Nov 1, 2010, 1:18:44 PM
to nodejs
another thing...

the way the fastcgi.writer currently works is far from optimal.
currently, it allocates lots of buffers on the fly and gives them back
to the controlling code to write to a stream. i think it would be
better to initialise the writer with a stream object and write the
responses directly to the stream on the fly. this would result in a
lot less allocation and better performance.
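
something along these lines (a sketch only - the names are
illustrative, not the current fastcgi.writer api):

// construct the writer around a stream and write records straight to it
// instead of handing freshly allocated buffers back to the caller
function Writer(stream) {
  this.stream = stream;
}

Writer.prototype.writeRecord = function (type, requestId, body) {
  // assumes body.length <= 65535 (the fastcgi per-record limit)
  var header = new Buffer(8);
  header[0] = 1;                          // version
  header[1] = type;                       // record type
  header[2] = (requestId >> 8) & 0xff;    // requestIdB1
  header[3] = requestId & 0xff;           // requestIdB0
  header[4] = (body.length >> 8) & 0xff;  // contentLengthB1
  header[5] = body.length & 0xff;         // contentLengthB0
  header[6] = 0;                          // paddingLength
  header[7] = 0;                          // reserved
  this.stream.write(header);
  this.stream.write(body);
};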

i've done a quick test here comparing responder.js to a simple c/
libfcgi program that sends the same 10-byte response. using
apachebench, i get 7k rps for responder.js and 10k rps for the c/
libfcgi program. at these rates, lighttpd gets maxed out, but the c/
libfcgi solution used a LOT less CPU and memory than the node.js
solution. we could definitely improve this performance but we are not
going to get close to a c/libfcgi solution as far as i can see...

the main purpose behind this library would be to replace PHP fastcgi
backends as opposed to c/c++ ones so if you're trying to replace c/c++
backends, i think you'll be disappointed, unless you are willing to
trade performance for the ease of maintaining a javascript solution.

if you want the c program for comparison to responder.js, i've posted
it up here:

http://gist.github.com/658527

Shakti

Nov 1, 2010, 1:24:30 PM
to nodejs
Hey Andrew,

Thanks. I understand it's not fair to compare a pure node.js server vs
FCGI, yet I had to measure the time difference between them to be able
to pick the right technology for my use. :-) Frankly, I think a reverse
proxy will show worse results, so I am not concentrating on it, but
will try it as soon as possible. I will integrate the binary module
tomorrow and measure the same. About concurrent request handling, I
wasn't too sure if it was handled, but thanks for the clarification
that it is. I have a simple script that makes 10000 GET/POST calls one
after the other and takes the mean. Will share it.
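
In outline it is something like the following (a simplified sketch, not
the exact script; the host, port and path are placeholders):

// issue N GET requests one after the other and report the mean latency
var http = require('http');

var N = 10000;
var total = 0, done = 0;

function one() {
  var start = Date.now();
  http.get({ host: 'localhost', port: 80, path: '/test.js' }, function (res) {
    res.resume(); // discard the body, we only care about the timing
    res.on('end', function () {
      total += Date.now() - start;
      if (++done < N) one();
      else console.log('mean latency (ms):', total / N);
    });
  });
}

one();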

I was kind of wondering whether we could have websockets available
alongside FCGI. Please share some pointers if you have any. Currently I
am looking at http://github.com/miksago/node-websocket-server/

Will keep updating on the topic.

Shakti

Nov 3, 2010, 1:02:42 AM
to nodejs
Hey, I tried lighty's mod_proxy with an independent node instance and
ran the same test cases. It takes 20% more time than direct node
communication and 20% less time than FCGI. I ran the same tests with
Node as a proxy and it showed very bad results, almost double the time
of direct node communication.

I haven't yet integrated the C++ binary packer. Will try soon.

Regards,
Shakti

billywhizz

Nov 3, 2010, 11:37:29 AM
to nodejs
i ran some tests before using node.js as a proxy and it didn't work
too well. what code are you using to do the proxying? i don't think
this is the best role for node.js due to the performance issues with
buffers. maybe the fastbuffers in 3.0 branch will make a difference...

Ryan Dahl

Nov 4, 2010, 11:32:19 AM
to nod...@googlegroups.com
On Wed, Nov 3, 2010 at 8:37 AM, billywhizz <apjo...@gmail.com> wrote:
> i ran some tests before using node.js as a proxy and it didn't work
> too well. what code are you using to do the proxying? i don't think
> this is the best role for node.js due to the performance issues with
> buffers. maybe the fastbuffers in 3.0 branch will make a difference...
>

This should get better with the writev stuff as well.

Shakti

Nov 5, 2010, 12:07:25 AM
to nodejs
I tried the code from this website...

http://www.catonmat.net/http-proxy-in-nodejs
