A couple of things to be aware of:
- Doing binary work in node.js is not particularly fast. The
buffer_extras.js included in my GitHub repo is not as fast as doing
the binary packing/unpacking in a node C++ addon (I have a version
that does this and can add it to GitHub, but it has a dependency on my
binary packer/unpacker here:
http://github.com/billywhizz/node-binary)
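For illustration, the kind of unpacking a binary packer/unpacker has to do looks roughly like this - a sketch using plain Buffer indexing, not the actual buffer_extras.js or node-binary API (the 8-byte record header layout is from the FastCGI spec; `Buffer.from` is the modern node API):

```javascript
// Sketch: unpack an 8-byte FastCGI record header from a Buffer.
// Layout per the FastCGI spec: version, type, requestId (16-bit BE),
// contentLength (16-bit BE), paddingLength, reserved.
function parseHeader(buf) {
  return {
    version: buf[0],
    type: buf[1],
    requestId: (buf[2] << 8) | buf[3],     // 16-bit big-endian
    contentLength: (buf[4] << 8) | buf[5], // 16-bit big-endian
    paddingLength: buf[6]
    // buf[7] is reserved
  };
}

// Example: an FCGI_BEGIN_REQUEST (type 1) record for request id 1
// with an 8-byte body and no padding.
const header = parseHeader(Buffer.from([1, 1, 0, 1, 0, 8, 0, 0]));
```

Doing this field-by-field in JS is where the cost is; a C++ addon can read the whole struct at once.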
- My FastCGI parser is not as good as it could be - currently it
caches the whole body in the parser before raising an event. Ideally,
it should serve up chunks of the body (stdin/stdout/stderr) as they
come in and discard the buffers, allowing more concurrency and lower
memory usage. This might be possible but I haven't worked through it
yet. In practice, this should be mitigated by the web server breaking
the body up into FastCGI packets of a reasonable size
- Lighttpd/Nginx/Apache all have a pretty serious limitation in their
FastCGI implementations (I think because they all presumed FastCGI
would only ever be used to talk to PHP): they do not use persistent
connections, which means a new connection to the backend is
established and torn down for each request. Node.js doesn't like this
much, and performance is a lot poorer than it would be if the
connection were kept open.
- As far as I can see, none of the major servers implement FastCGI
multiplexing, which allows multiple requests to be multiplexed across
the same connection. This is one of the nicest features of FastCGI,
so it's a bit of a bummer that none of them do it. The only server I
have seen a reference to that implements it is LiteSpeed
(http://www.litespeedtech.com/), and that is a commercial-license-only
product as far as I know
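To show what multiplexing means on the wire: every FastCGI record carries a 16-bit requestId (that much is from the spec), so records for different requests can be interleaved on one connection and demultiplexed by that id. A minimal sketch with an assumed helper, not code from any real server:

```javascript
// Sketch: demultiplex interleaved FastCGI records by requestId.
// `records` stands in for parsed records off a single connection.
function demux(records) {
  const byRequest = {};
  for (const rec of records) {
    (byRequest[rec.requestId] = byRequest[rec.requestId] || [])
      .push(rec.chunk);
  }
  return byRequest;
}

// Two requests interleaved on the same connection:
const streams = demux([
  { requestId: 1, chunk: 'a1' },
  { requestId: 2, chunk: 'b1' },
  { requestId: 1, chunk: 'a2' },
  { requestId: 2, chunk: 'b2' }
]);
```

Without multiplexing, the server only ever uses requestId 1 and needs a separate connection per in-flight request.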
- As for comparing FastCGI to a reverse HTTP proxy, that would be an
interesting benchmark to see. There will likely be some overhead in
parsing FastCGI requests, but this might be offset by the fact that
HTTP is a much more complex protocol. If you look here:
http://github.com/billywhizz/node-fastcgi-parser/tree/master/benchmark/
I have done a basic benchmark against ry's http parser and it doesn't
hold up too badly, but it's hard to come up with a fair benchmark as
it all depends on how much control you want over the stream on the
client side (i.e. which headers/events etc. you are interested in
handling)
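A harness of the general kind used for such comparisons looks like this - a sketch only, the real benchmark in the repo linked above is more thorough; the function names here are made up for illustration:

```javascript
// Sketch: feed the same buffer to a parse step N times and time it.
function bench(name, fn, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${name}: ${iterations} runs in ${ms.toFixed(1)}ms`);
  return ms;
}

// An empty FastCGI record; the parse step here is just a field read,
// standing in for a full parser call.
const payload = Buffer.from([1, 1, 0, 1, 0, 0, 0, 0]);
const elapsed = bench('noop-parse', () => payload.readUInt16BE(2), 100000);
```

The fairness caveat above applies: what you measure depends entirely on how much of the record you actually decode per iteration.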
- You could wrap a node.js binding around libfastcgi, but then you
would be dependent on the way sockets are handled in libfastcgi, and
I don't know what the situation is with regard to blocking in that
lib. It's also a pretty nasty-looking lib in general and makes a lot
of assumptions about the way you do things.
- This parser could fairly easily be rewritten in C/C++ as a node
addon, which would likely improve performance. This is something I
might look at if I find the time in the next couple of months
- Using a unix socket will be much faster than TCP, but in most
real-world scenarios the FastCGI backend is going to be running on a
separate server from the web server, so you are going to be stuck
with TCP if you want to scale out
- Shakti - if you have some code you are trying to get working with
the parser, feel free to post it and I'll see if I can help you get
it working...
- Ben, I just checked out your project - looks like you have got the
interaction with the web server working - nice work!! I wasted so
much time trying to get this to work, you wouldn't believe it! I am
going to have a play around with it in the next few days... it should
be fairly easy to get it working in tandem with my parser.
- Shakti - can you post some code related to the error you reported
so we can try to replicate it?