Introducing Nitrode - A dedicated HTTP server


Oliver Morgan

Aug 15, 2010, 2:30:33 PM
to nod...@googlegroups.com
Hi All,

Welcome to my introduction to Nitrode.

The Problem...
Node.JS has no FastCGI module or the like that would allow it to connect to an existing HTTP server such as NGINX.

The Solution...
Nitrode aims to bridge that gap by providing a fast and versatile HTTP server built on top of Node.JS. It aims to support many of the mainstream features NGINX does, but has the added advantage of being written in JavaScript and exposing an API that lets existing applications fully control it without leaving the current process.

Great! How do I use it?
You can find the repository here: http://github.com/ollym/nitrode
I have included an example.js file, which exercises all of Nitrode's current features, which are:
  1. HTTP Basic Authentication
  2. ETag and If-Modified-Since Support
  3. Public directory / Static file support
  4. SSL Support
  5. Fully configurable
  6. Virtual host support
  7. Redirect support
  8. And much much more...
Unlike Connect, it aims to be purely an HTTP server, and focuses entirely on that function.

To run the example:
  1. Clone the repository
  2. cd to the repository directory
  3. Run the command: "sudo node example.js"
  4. Open up a browser, and type in: "localhost:80"
  5. Authenticate yourself using username: 'admin', password: 'admin'
  6. You should now see a page titled: "Welcome to Nitrode!"
Make sure you also read through example.js to see what it's doing!

So what's the catch?
Nitrode is very young; I've only been working on it for the past week or so, and as such there are a number of things I still have to do:

  1. Split index.js into smaller libraries
  2. Add authentication digest support
  3. Add SSL certificate authentication support
  4. Write unit tests
  5. Improve sys.pump performance
  6. Write benchmarks
  7. Write documentation
If anyone is interested in helping me out with this list, please do!

Regards,
Olly

Micheil Smith

Aug 15, 2010, 3:16:02 PM
to nod...@googlegroups.com
Just thought I'd add a word: there's an article up about Nitrode on
The Changelog blog: http://wynn.fm/4w

— Micheil Smith


Rasmus Andersson

Aug 15, 2010, 9:28:41 PM
to nod...@googlegroups.com
Nice!

You might want to consider reading in the system mime types. Maybe
like this: http://github.com/rsms/oui/blob/master/oui/mimetypes.js


--
Rasmus Andersson

Olly

Aug 16, 2010, 4:41:00 AM
to nodejs
Hi Rasmus,

Thanks for this; I will certainly add it to the system at some point.

I'm currently 90% done with v0.3.0, which separates the different
middleware sections into libraries that get pushed onto a stack. Like
what Connect does, just a bit differently.

Once I've done that I'll add that mimetype stuff for sure.

One further idea...

What do you think about using HTTP authentication as site
authentication? With the JS callback you can link it up to a database
and authenticate users dynamically. Seeing as the browser then re-
sends those authentication details with each request, you can keep
track of the client's identity.
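
Something like this is what I have in mind; the option name and the
db.findUser helper are purely illustrative, not the current Nitrode API:

    var nitrode = require('./index');

    var server = nitrode.createServer({
        // hypothetical hook: called with the credentials of each request
        authenticate: function (username, password, grant) {
            // db.findUser is a made-up stand-in for your own data layer
            db.findUser(username, function (user) {
                grant(user && user.password === password);
            });
        }
    });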

What do you think?

Peter Hewat

Aug 16, 2010, 7:01:35 AM
to nodejs
Hi Oliver,

This project looks nice. Just what I was looking for. Do you plan on
supporting compression? How about an example where you use Socket.IO?

Concerning HTTP Basic Authentication, the problem is that if you don't
use https, you are sending your password in plain text with each
request (not even a hash). Not only that, but the authentication
mechanism in the browser brings up a modal popup that you cannot
integrate into your page.

Cheers,
Peter

Olly

Aug 16, 2010, 8:21:42 AM
to nodejs
On Aug 16, 12:01 pm, Peter Hewat <peter.he...@gmail.com> wrote:
> Hi Oliver,
>
> This project looks nice. Just what I was looking for. Do you plan on
> supporting compression? How about an example where you use Socket.IO?

Yes, I most certainly plan to support compression. As for Socket.IO,
it's a bit high-level for Nitrode; the long-term goal is certainly to
support an IO package. Whether it will be possible to integrate with
an existing solution I'll have to see; if not, I will be creating my
own.

> Concerning HTTP Basic Authentication, the problem is that if you don't
> use https, you are sending your password in plain text at each request
> (not even a hash). Not only that but the authentication mechanism in
> the browser brings up a modal popup that you can not integrate in your
> page.

Yes, although passwords are sent in plain text in forms anyway. The
difference being it's sent with every request rather than just once. On
second thought, it would also be massive overkill to have to
authenticate the user at every request.

Better to just stick to the old-fashioned way.

Xuan Huy

Aug 16, 2010, 10:54:15 AM
to nod...@googlegroups.com
I get an error when I run Nitrode :D



Creating HTTP server listener...
Listening for incoming requests...
Client 127.0.0.1 Connection established
Client 127.0.0.1 Connection established


buffer:73
      throw new Error('Unknown encoding');
            ^
Error: Unknown encoding
    at Buffer.write (buffer:73:13)
    at IncomingMessage.authorize (/home/bookmark/Desktop/nodejs/nitrode/index.js:164:43)
    at [object Object].handle (/home/bookmark/Desktop/nodejs/nitrode/index.js:179:28)
    at Server.<anonymous> (/home/bookmark/Desktop/nodejs/nitrode/index.js:117:73)
    at Server.emit (events:33:26)
    at HTTPParser.onIncoming (http:825:10)
    at HTTPParser.onHeadersComplete (http:87:31)
    at Stream.ondata (http:757:22)
    at IOWatcher.callback (net:517:29)
    at node.js:266:9

Ngô Xuân Huy


Olly

Aug 16, 2010, 11:32:30 AM
to nodejs
Make sure you are using Node v0.1.103 or newer.

Your error is occurring on this line:
http://github.com/ollym/nitrode/blob/master/index.js#L164

That is where a Buffer object is used to decode the base64-encoded
authentication string. Base64 encoding/decoding in the Buffer object
was only added recently, and I suspect that's where your problem lies.

Upgrade your Node version to the latest, and try again.
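
For reference, the decode that line performs is essentially this on a
recent Node (the literal is just an example credential pair):

    // "dXNlcjpwYXNz" is base64 for "user:pass"
    var credentials = new Buffer('dXNlcjpwYXNz', 'base64').toString('utf8');
    // credentials === 'user:pass'; older Nodes without base64 support
    // in Buffer throw the 'Unknown encoding' error you are seeing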

Olly


Olly

Aug 16, 2010, 6:12:08 PM
to nodejs
I am pleased to announce Nitrode v0.3.0.

In this version I've separated the middleware logic into separate files,
done a major revision of the bootstrap, and cleaned up the code.
The server now uses a logical middleware stack, which brings with it
further flexibility.

I have also made some significant improvements to the API; check
index.js, where there is a commented-out version of a full example API.

New TODO List:

(1. Split index.js into smaller libraries) - Completed in v0.3.0!
2. Add authentication digest support
3. Add SSL certificate authentication support
4. Write unit tests
5. Improve sys.pump performance
6. Write benchmarks
7. Write documentation
8. Add compression support.

And thanks to all those who have given me feedback; it's greatly
appreciated, so keep it coming!

Olly

Olly

Aug 16, 2010, 7:43:47 PM
to nodejs
Just released Nitrode v0.3.1, a follow-up to the previous version
adding rewrite support, which wasn't available before.

The milestones for 0.3.2 are as follows:

1) Port the Oui mime type manager to Nitrode.
2) Add Gzip and Deflate compression support.

Milestones for 0.3.3 are:

1) SSL certificate authentication support
2) HTTP Digest authentication support

And by the next major milestone, 0.4.0, I will need to have completed:

1) Full unit test coverage
2) Benchmarks
3) Documentation
4) Code refactoring + performance improvements

--------------

Please feel free to contribute wherever you can. Feature requests,
patches, etc. are all welcome!

Olly

Peter Hewat

Aug 17, 2010, 7:25:02 AM
to nodejs
Hi again, you seem to be progressing swiftly :)

Haven't had time to look into it yet but I will in the following days
(rather nights, after work...)

Keep up the good work :)

Cheers,
Peter

Sonny

Aug 18, 2010, 9:15:36 PM
to nodejs
Good job.
It would be nice to have a file listing when no index is available in
a particular folder.

Olly

Aug 19, 2010, 3:14:36 PM
to nodejs
I'm very pleased to announce v0.3.2 of Nitrode.

This version includes:

* GZip handling support
* OS-level MIME dictionary support - Thanks Rasmus!
* API strip-down - Removed the send(), stream() and sendhead() methods;
existing methods are overridden instead. (sendfile() is still there.)

Please give me some feedback; it helps with the motivation and
ultimately results in a better product!

Now moving onto the next milestone...


Peter Hewat

Aug 20, 2010, 7:07:28 AM
to nodejs
Hi Oliver,

I've tested your previous version by benchmarking it with ApacheBench
and comparing it to another Node HTTP server (nothing fancy like
yours, just serving static files), and noticed that Nitrode was
noticeably slower. I was planning on writing up the test results and
looking into where the bottleneck could be, but haven't found the time
yet. I'll do so some time next week.

Once again: keep up the good work, your project looks very
promising :)

Cheers,
Peter

Olly

Aug 20, 2010, 12:21:38 PM
to nodejs
Hi Peter,

Yes, Nitrode, along with Antinode, is considerably slower than projects
like Connect. The reason is that data input streams are pumped to the
response stream using sys.pump, which is in itself incredibly slow
compared to other methods, but is at least maintained by the Node.js
developers and will see significant performance improvements in the
coming years.
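
For anyone following along, the pattern in question is just this
(standard node of the era; the response object comes from the
surrounding request handler):

    var sys = require('sys'),
        fs = require('fs');

    // stream the file to the client chunk by chunk, never buffering it whole
    sys.pump(fs.createReadStream('/var/www/some-file'), response, function () {
        // pump calls back once the source stream has ended
    });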

The other advantage of using sys.pump is the reduced memory usage and
the absence of any file size limit. Projects like Connect read the
entire file into memory before sending it through to the client in
one big chunk. This works fine, and in fact has the best performance,
but given that the whole file has to be held in memory, with a lot
of clients downloading files this becomes a significant problem...
You could of course cache these files in memory to make the process
more efficient, but the logic required to ensure not too much memory
is used, plus monitoring file changes, would be too much of a pain to
add and would probably contribute a lot more overhead.

Linux has a function called sendfile
(http://www.kernel.org/doc/man-pages/online/pages/man2/sendfile.2.html),
which is what sys.pump will hopefully use in future; it would
significantly increase performance by offloading the IO to the system
kernel.

The other thing to keep in mind is Nitrode's transfer mechanism. By
default Nitrode uses GZip compression (released in v0.3.2), which means
the poor performance of sys.pump is doubled: data is streamed first
into the GZip stdin stream, then back out from the GZip stdout stream
to the response stream. Again, I'll be thinking of some caching
mechanism to prevent this happening every time a client downloads a
file, but for now the system is simple and easy to maintain, and until
I've finished all the features I plan to add, I'm not chasing the kind
of performance work that would add considerably more bloat to the
source code.
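
The double pump looks roughly like this (a sketch, not the actual
Nitrode source; path and response come from the request handler):

    var sys = require('sys'),
        fs = require('fs'),
        spawn = require('child_process').spawn;

    var gzip = spawn('gzip', ['-c']); // compress on the fly via system gzip
    response.writeHead(200, { 'Content-Encoding': 'gzip' });
    sys.pump(fs.createReadStream(path), gzip.stdin); // file -> gzip stdin
    sys.pump(gzip.stdout, response);                 // gzip stdout -> client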

So in answer to your concern, I am well aware of Nitrode's current
poor performance, and rest assured I will be doing everything I can to
improve it; but note that what I have done that reduces performance is
there for a reason.

Thanks again for your input, appreciated as always.

Regards,
Olly

Timothy Caswell

Aug 20, 2010, 2:30:47 PM
to nod...@googlegroups.com
Also, I am well aware that Connect isn't ideal for serving huge files, but I haven't met anyone doing that who doesn't prefer nginx or Apache anyway. I do plan on adding a threshold in Connect so that it only buffers small files, and streams everything else out. I'm just focused on immediate roadblocks instead of potential issues that nobody is actually having.

Nitrode is cool though, just had to defend my project :P


Olly

Aug 20, 2010, 3:25:45 PM
to nodejs

Hi Tim,

Totally understand, and I don't mean to be confrontational, but if
Connect did what I wanted, Nitrode wouldn't exist! I would also have
forked Connect, but the changes I wanted to make were too radical.

In response to your point: let's say you're serving public content to
your clients and the page contains a load of high-quality images, 2
megs each. It would only take 100 clients requesting those images
concurrently to force Node to use 2GB of RAM. That is something I
wanted Nitrode to solve.

Don't get me wrong, Connect does what it was designed to do perfectly.
Maybe we could even work together on Nitrode and have it become
Connect's HTTP backend?

Olly

Guillermo Rauch

Aug 20, 2010, 3:28:23 PM
to nod...@googlegroups.com
Keep in mind, it would take a hundred different ones. And of course, that's a really uncommon scenario (if you're serving 2MB files you probably want S3 or similar).

Sent from my iPhone

Timothy Caswell

Aug 20, 2010, 3:42:34 PM
to nod...@googlegroups.com

On Aug 20, 2010, at 12:28 PM, Guillermo Rauch wrote:

> Keep in mind, it would take a hundred different ones. And of course, that's a really uncommon scenario (if you're serving 2mb files you probably want S3 or similar)
>

Exactly my point: Connect caches the results by unique request. You can serve a 650MB ISO with Connect to 100 concurrent users and it won't use much more than 650MB of RAM, and it will only hit the disk once. It's when you have more unique data than RAM that it becomes a problem, and only then if all those requests come in before the cache timeout. The default cache time in Connect is 0, so it's a highly unlikely situation.

Besides, like I said, most people are used to serving large files with a system like S3 or nginx/Apache on their server.

I would love to collaborate and share code; that's what Connect is all about. It's meant to be a set of libraries and tools for framework makers. Express has made good use of it and drastically reduced its code size.

-Tim

Mikeal Rogers

Aug 20, 2010, 5:21:52 PM
to nod...@googlegroups.com
Caching doesn't require that you always load an entire query into memory, or that you hold it in memory before you begin to return the response.

A chain of sys.pump calls is the best use of memory, and you can just keep an extra data listener on the last stream to write to a cache; subsequent requests can then pull out of the cache.

These two ideas aren't mutually exclusive and should be complementary.
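
A sketch of the tee I mean (stream, response, cache and key are
stand-ins from the surrounding handler):

    var sys = require('sys');

    var chunks = [];
    stream.addListener('data', function (chunk) {
        chunks.push(chunk); // tee each chunk into the cache as it flows past
    });
    stream.addListener('end', function () {
        cache[key] = chunks; // later requests can be served from here
    });
    sys.pump(stream, response); // the client still gets a streamed response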

-Mikeal

Timothy Caswell

Aug 20, 2010, 5:26:09 PM
to nod...@googlegroups.com

On Aug 20, 2010, at 2:21 PM, Mikeal Rogers wrote:

> Caching doesn't require that you always load an entire query in to memory or that you hold it in memory before you begin to return the response.
>
> A chain of sys.pump calls is the best use of memory and you can just keep an extra data listener on the last stream to write to cache, then subsequent requests can pull out of the cache.
>
> These two ideas aren't mutually exclusive and should be complementary.
>
> -Mikeal

No offense, but in my experience sys.pump is still way too slow, and putting a cache on the end of that would be the worst of both worlds: you get high latency on the initial request, and all the memory savings from the expensive pump are lost in the cache.

I think a switch that streams large files and caches small ones is better.
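
Something along these lines (a sketch; the 64 kB cutoff is arbitrary):

    var fs = require('fs'),
        sys = require('sys');

    var THRESHOLD = 64 * 1024;

    function serve(path, res) {
        fs.stat(path, function (err, stat) {
            if (err) { res.writeHead(404, {}); return res.end(); }
            if (stat.size <= THRESHOLD) {
                // small file: buffer it whole (and optionally cache the buffer)
                fs.readFile(path, function (err, data) {
                    res.writeHead(200, { 'Content-Length': data.length });
                    res.end(data);
                });
            } else {
                // large file: stream it out without holding it all in memory
                res.writeHead(200, { 'Content-Length': stat.size });
                sys.pump(fs.createReadStream(path), res);
            }
        });
    }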

Rasmus Andersson

Aug 20, 2010, 6:19:52 PM
to nod...@googlegroups.com
On Fri, Aug 20, 2010 at 18:21, Olly <oliver...@kohark.com> wrote:
> Hi Peter,
>
> Yes, Nitrode along with Antinode are considerably slower then projects
> like Connect. The reason for this being is that data input streams are
> pumped to the response stream using sys.pump. Which in itself is
> incredibly slow compared to other methods, but is atleast maintained
> by node.js developers, and will have significant performance increases
> in the coming years.

The reason it's slow is that disk I/O must be done in the eio
thread pool and throttled: each time pump sends through one chunk
(say, 4kB), that results in queueing a read in eio, waiting for it to
get dequeued, and then continuing.

> Linux has a function called sendfile (http://www.kernel.org/doc/man-
> pages/online/pages/man2/sendfile.2.html) which is what sys.pump will
> hopefully use in future, which would significantly increase
> performance by offloading the IO to the system kernal.

It's not limited to unix, and it can only read from file descriptors
representing data on disk. It's already available in node through the
fs module: fs.sendfile(outFd, inFd, inOffset, length, callback)
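
Using that signature, serving a whole file over an established
connection would look something like this (a sketch; error handling
trimmed, and path/socket come from the surrounding code):

    var fs = require('fs');

    fs.stat(path, function (err, stat) {
        fs.open(path, 'r', function (err, inFd) {
            // socket.fd is the client connection's file descriptor
            fs.sendfile(socket.fd, inFd, 0, stat.size, function (err, sent) {
                fs.close(inFd); // the kernel did the copy; just tidy up
            });
        });
    });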


Olly

Aug 21, 2010, 8:39:55 PM
to nodejs
In the latest commit I have included a new stats layer, useful for
monitoring response times, memory usage per response, number of
requests per second, and global memory usage at a given time interval.
It will probably be useful for benchmarking your application, and for
general interest. example.js has been updated to explain how to
use it.

All feature requests and remarks are welcome, and given how much time I
have for this ATM, they will probably be answered/done fairly quickly!

Those who watch me will also have noticed that I have created a new
file system cache library, which will eventually house an intelligent
and configurable file system cache mechanism to speed up IO operations,
and which will be included in Nitrode!

Please be in touch; I love to hear from you, and it helps massively in
keeping the motivation high!

All the best,

Olly

tjholowaychuk

Aug 24, 2010, 12:37:53 AM
to nodejs
Didn't read all of this, but I agree w/ Tim as far as caching/serving
smaller files and streaming the large ones;
this is what I did pre-Connect in Express and it worked well, aside
from Heroku.

tjholowaychuk

Aug 24, 2010, 12:38:39 AM
to nodejs
plus I see some for/in's, not good, use Object.keys() / for
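
e.g. the swap (obj and handle are placeholders):

    // for/in walks the prototype chain and is generally slower in V8:
    for (var key in obj) {
        handle(key, obj[key]);
    }

    // Object.keys gives own enumerable keys only; a plain for loop is fast:
    var keys = Object.keys(obj);
    for (var i = 0; i < keys.length; i++) {
        handle(keys[i], obj[keys[i]]);
    }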

Olly

Aug 24, 2010, 3:06:53 AM
to nodejs
Why's that? Surely such a simple construct can't be adding that much
overhead?

And what were your experiences with Heroku?

Mikeal Rogers

Aug 24, 2010, 3:21:03 AM
to nod...@googlegroups.com
The overhead isn't in sys.pump, so please stop saying that. sys.pump hardly does anything, and its overhead is damn near immeasurable.

The cause of the slowness is reading so many small chunks off of disk, which Rasmus described well. There are lots of ways to fix that; none of them would touch sys.pump.

For lower latency we can increase the default chunk sizes. There is probably a sweet spot where we get a decent memory footprint and low latency and it should be explored.

Your comments about caching are a little simplistic. I'm assuming a cache that only keeps things around that are requested often, and doesn't try to keep the whole site in memory or even bring every response into memory.

If other not-so-often requested content has a little latency, who cares, so long as you can serve thousands of them concurrently without blowing up the memory.

The solution is *not* to read all responses into memory. This may work for hello-world apps, but it won't work for any site doing a lot of concurrency over varying content, because you'll run out of memory.

-Mikeal



Tim Smart

Aug 24, 2010, 3:39:53 AM
to nodejs

When I was working on biggie-router ( http://github.com/Tim-Smart/biggie-router
), Tim and I agreed on a 'middleware' syntax that is shared between
Connect and biggie-router. In other words, middleware should be
entirely interchangeable between Connect and biggie.

This is the thing: the static file module is only around 150 lines of
code, and can be converted to use streaming in a matter of minutes.
Heck, you can even use biggie's streaming static file middleware in
Connect if you want to: http://github.com/Tim-Smart/biggie-router/blob/master/lib/biggie-router/modules/static.js
. I prefer streaming, as I can then pipe the output through gzip etc.

Connect (and biggie-router) aren't trying to be everything; they are
just a really simple, tiny core delegating functionality to CommonJS
modules. I would love to see a massive repository of middleware
modules that are framework agnostic, with a few different DSLs to
choose from. A minimal example of the shared shape is below.
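
    // a middleware is just a function of (req, res, next); this one
    // answers a single URL and passes everything else down the stack
    module.exports = function (req, res, next) {
        if (req.url !== '/ping') return next();
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('pong');
    };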

Tim.

Olly

Aug 24, 2010, 11:02:20 AM
to nodejs
Mikeal,

I guess that comment was aimed at Tim. All sys.pump does is copy data
from one stream to another, waiting for the written data to be flushed
before reading and writing any more. This process is slow, as you
correctly said, because data is by default sent in 4KB chunks, which
works OK for small files, but with large files you end up with 500,000
chunks, which really is a problem.

The advantage of smaller chunks is that, between the read stream and
the write stream, Node.js holds only that chunk in memory. If you send
the file in one chunk the size of the file itself, then the whole file
is stored in memory. So the best idea would be to create some algorithm
that establishes the best chunk size based on the file size and the
memory available, remembering that the larger the chunk size, the more
efficient it is. This is easily done using fs.createReadStream, with
an option (I think it's bufferSize) that lets you define the size
of the chunk. When playing with this I noticed that using sys.pump on
a read stream with the buffer size equal to the file size had the
same performance as using fs.readFile and sending the data directly
(as Connect does it). This is in no way surprising, as both methods
do (or at least should do - I haven't checked) use the same construct.

So the problem isn't with sys.pump at all; it's with how you've set up
the read stream you're dealing with. I will continue to use
sys.pump wherever possible and just make sure I optimise the read
stream to have the most appropriate chunk size. Having said all that,
the ideal solution would be to use the sendfile and sendfile64
syscalls, which are faster than manually using read/write, as the copy
is handled internally by the kernel. However, this only works with
file read streams and not TCP sockets, and so it complicates the use of
automatic GZip compression streams.
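
Concretely, that tuning is just this (option name as I remember it;
256 kB picked only for illustration):

    var stream = fs.createReadStream(path, { bufferSize: 256 * 1024 });
    sys.pump(stream, response); // fewer, larger chunks per pump cycle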

I think all of what I've said corroborates your original post, and
I completely agree: applications should NOT read whole files into
memory. That was basically the premise of the point I was making to
Tim, and hence why I suggested he use Nitrode to handle the HTTP
backend in Connect, and avoid having to worry about these sorts of
things. The responsibility then, for me and others who want to help
with Nitrode (and note to all, you're all welcome, just drop me an
email), is to ensure that the system does exactly that.

Olly
Olly

Aug 24, 2010, 11:38:10 AM
to nodejs
Hi Tim Smart,

I don't know about Connect's and biggie-router's shared middleware
syntax, but the method I'm using is that each layer exports a function
that takes the user-defined configuration as parameter 1, and
the parent HTTP server object as parameter 2. Each of these layers is
called during process startup, and may register any number of
functions on the middleware stack, which is an array property of the
parent server object. A rough sketch is below.

On an incoming connection, the request and response objects are
given to each of these stack layers one by one, allowing each to
do its job. If a function returns false, then no more layers
are called, and the server assumes that the response has been
sent. This isn't quite the same as using a next()-like
function, and I may eventually use that, as my way doesn't allow
async operations to return a conditional value, and next() does. I
just haven't found a need for a function like next() yet, but I may
very well need it soon. It's a very elegant way to do it!
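
Roughly the shape I mean (an illustration, not the exact Nitrode
source; appliesTo is a placeholder for the layer's own check):

    // a layer module: called once at startup with (config, server)
    module.exports = function (config, server) {
        server.stack.push(function (request, response) {
            if (!appliesTo(request)) return; // fall through to the next layer
            response.writeHead(200, {});
            response.end('handled by this layer');
            return false; // response sent: stop calling further layers
        });
    };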

This modular approach is, as you said, very powerful and flexible, and
is adopted fully by Nitrode. I suggest you take a look at the source code!

Regards,
Olly

Olly

Aug 26, 2010, 11:34:33 AM
to nodejs
Nitrode 0.3.3 Released!

Although I haven't strictly stuck to my milestones, I was so
pleased to have finished SSI support, and it involved such a
significant number of fixes and additional features, that I decided to
bump it up a version!

SSI currently only works with IF/INCLUDE directives as defined here:
http://en.wikipedia.org/wiki/Server_Side_Includes with the small
exception that conditional statements using a single = fail. You must
use the standard double equals == for it to work.

The SSI interpreter works by compiling the whole document into a JS
script, which is then interpreted in a new context. I had originally
planned to write my own interpreter in JavaScript, but there were too
many complications, and this way is a lot faster.
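
To give a flavour of the compile step, here is a stripped-down sketch
handling only the include directive (illustrative only, not the Nitrode
source, and synchronous for brevity):

    var fs = require('fs');

    // split on <!--#include file="..." --> and emit a JS program that
    // concatenates the literal pieces with the included files
    function compileSSI(html, docroot) {
        var parts = html.split(/<!--#include file="([^"]+)" -->/);
        var js = 'var out = "";\n';
        for (var i = 0; i < parts.length; i++) {
            var piece = (i % 2 === 0)
                ? parts[i]                                           // literal HTML
                : fs.readFileSync(docroot + '/' + parts[i], 'utf8'); // included file
            js += 'out += ' + JSON.stringify(piece) + ';\n';
        }
        return js + 'out;'; // this string is then evaluated in a fresh context
    }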

An example use of the SSI include function (which I guess is the
most popular directive) is given in the new example.js file.

Feedback is still welcome!

Olly

Mikeal Rogers

Aug 26, 2010, 1:13:39 PM
to nod...@googlegroups.com
On Tue, Aug 24, 2010 at 8:02 AM, Olly <oliver...@kohark.com> wrote:
> Mikael,
>
> I guess that comment was aimed at tim. All sys.pump does is copy data
> from one stream to another, waiting for the written data to be flushed
> before reading and then writing any more.

That's not accurate.

In this simple case, all sys.pump does is move chunks from one stream to another. It does *not* "wait for data to be flushed".

Data events come out of the read stream at a speed determined outside of sys.pump or the write stream; it's totally decoupled.

If the client can't keep up, we trigger a pause() on the read stream, but in the case that the client is too slow this is fine: slow clients can deal with a little more delay if it means the server doesn't have to hold their response entirely in memory.
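
For reference, a simplified sketch of what pump actually does (the real
one lives in node's sys.js and also handles errors and callbacks):

    function pump(readStream, writeStream) {
        readStream.addListener('data', function (chunk) {
            // pause the source only when the destination's buffer is full
            if (writeStream.write(chunk) === false) readStream.pause();
        });
        writeStream.addListener('drain', function () {
            readStream.resume(); // destination flushed: keep reading
        });
        readStream.addListener('end', function () {
            writeStream.end();
        });
    }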

 
> This process is slow, as you
> correctly said, because data is by default sent in 4KB chunks, which
> works ok for small files, but with large files you end up with 500k
> chunks which really is a problem.

Yes, but that 4K chunk size is determined by the read stream and has nothing to do with sys.pump or the write stream.
 

> The advantage to smaller chunks is that between the read stream and
> the write stream, node.js holds that chunk in memory. If you send the
> file in 1 chunk the size of the file itself, then the whole file is
> stored in memory. So the best idea would be to create some algorithm
> that establishes the best chunk size based on the file size and memory
> available, remembering that the larger the chunk size, the more
> efficient it is. This is easily done using fs.createReadStream, with
> an option (i think its bufferSize) that allows you to define the size
> of the chunk. When playing with this i noticed that using sys.pump on
> a read stream with the buffer size the same size as the file had the
> same performance as using fs.readFile and sending the data straight
> (as how connect does it). This is by no way surprising as both methods
> do (or at least should do - i haven't checked) use the same construct.
>
> So the problem isn't with sys.pump at all, its with how you've setup
> the read stream you're dealing with. And i will continue to use
> sys.pump wherever possible and just make sure i optimise the read
> stream to have the most appropriate chunk size. Having said all that,
> the ideal solution would be to use the sendfile and sendfile64
> commands which is faster then manually using read/write commands as
> its handled internally by the kernel. However this only works with
> file read streams and not TCP sockets, and so complicates the use of
> automatic GZip compression streams.

sendfile is really cool for a specific use case, and we should try to find a way to upgrade a pump chain, where the streams don't manipulate the data, into a sendfile; we still haven't figured that out yet.

But if you do *anything* it's useless. If you gzip, use templates, etc., sendfile is worthless.

What sendfile is actually *amazing* for is on-disk caches: instead of talking only about writing caches in memory, we could write the cache out generated and gzipped, and then use sendfile to give it back to the clients.

Isaac Schlueter

Aug 26, 2010, 1:58:30 PM
to nod...@googlegroups.com
On Thu, Aug 26, 2010 at 10:13, Mikeal Rogers <mikeal...@gmail.com> wrote:
> What sendfile is actually *amazing* for is for on-disc caches, so instead of
> us talking only about writing caches in memory we could write the cache out
> generated and gzipped and then use sendfile to give it back to the clients.

IMO, this is why sys.pump should not do sendfile, ever. Rather,
sendfile should be something that Nitrode or Connect does when it
knows that it can serve the response directly from an on-disk cache,
bypassing the whole "middleware" concept at that point.

Even if the input stream is a file descriptor, and the output stream
is a socket, the actual "Stream" object might have been mutated such
that it changes the incoming data before sending it out. sys.pump
needs to be completely agnostic and accepting of *anything* that
matches the Stream API, or else it's too magical to be reliable.

It's not that we haven't figured out how to do it, I don't think. I
think we've figured out that we can't, and that sendfile is just a
different thing than sys.pump.

--i

Mikeal Rogers

Aug 26, 2010, 2:07:26 PM
to nod...@googlegroups.com
Yeah, we don't have a good way to tell whether a stream is going to mutate data, so it's just not doable.

When I think about a way a stream might indicate that it's not going to affect the data, I can't imagine a nice API that won't break easily, so my gut says it's just never gonna happen.

-Mikeal



Olly

Aug 26, 2010, 2:21:32 PM
to nodejs
Hi Mikeal,

Thanks for the reply.

I haven't managed to get sendfile working at all, though my
attempts have been very brief. As you said, sendfile only works with
files and not streams, so if any sort of filtering is necessary (SSI,
gzip, etc.), it won't work.

Anyway, for now that isn't my concern. I just hope that someone who
knows what they're doing can work on the sys.pump performance.

Olly


Peter Hewat

Aug 27, 2010, 4:36:54 AM
to nodejs
Hi Olly,

I didn't expect my question on performance to trigger this
(interesting) discussion. In any case, all this is way over my head ;)

My use case is as follows:
I am developing a web app. This implies serving a couple of small
static files to the client once (using the application cache; only the
manifest file is then checked for updates from time to time, triggering
file re-downloads when available...). The rest of the dynamic data
(coming from MongoDB) is transmitted via WebSocket (or downgraded to a
comet-like system if the browser does not support it).

So ideally (in this use case...), static files would be loaded once,
compressed if necessary and served from memory.

I believe that this scheme is exactly what Node.js can excel at. I hope
Nitrode will enable this.

Cheers,
Peter

Timothy Caswell

Aug 27, 2010, 2:31:52 PM
to nod...@googlegroups.com

On Aug 27, 2010, at 1:36 AM, Peter Hewat wrote:

> Hi Olly,
>
> I didn't expect my question on performance would trigger this
> (interesting) discussion. In any case, all this is way over my head ;)
>
> My use case is as follows:
> I am developing a web app. This implies serving a couple of small
> static files to the client once (using application cache, only the
> meta file is then checked for updates from time to time and triggers
> file re-downloads when available...). The rest of the dynamic data
> (coming from MongoDB) is transmitted via websocket (or downgraded to a
> comet like system if the browser does not support it).
>
> So ideally (in this use case...), static files would be loaded once,
> compressed if necessary and served from memory.
>
> I believe that this schema is exactly what NodeJS can excel at. I hope
> Nitrode will enable this.
>
> Cheers,
> Peter
>


That is exactly Connect's default behavior with a simple stack:

    Connect.createServer(
        Connect.cache(1000),
        Connect.gzip(),
        Connect.staticProvider()
    );

This will serve the small files from disk, compress the compressible ones, and cache them in RAM as buffers for super-fast serving.

Socket.IO or other WebSocket libraries work fine with Connect; they just hijack the request handler outside of the Connect stack.

Olly

Aug 28, 2010, 3:53:51 PM
to nodejs
Hi Peter,

Nitrode fully supports the HTTP If-Modified-Since header, and will only
return a file to a client if the version stored in the browser cache is
older. This is enabled by default; the check amounts to something like
the sketch below.
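
(A plain-node illustration, not the Nitrode source; stat comes from
fs.stat on the requested file.)

    function notModified(request, response, stat) {
        var since = request.headers['if-modified-since'];
        if (since && stat.mtime <= new Date(since)) {
            response.writeHead(304, {}); // browser copy is current
            response.end();              // send headers only, no body
            return true;
        }
        return false;
    }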

WebSockets work over a different protocol, so it's advisable (but
not necessary) to use a separate HTTP server instance (not using
Nitrode) on a different port.

The aim of Nitrode is to behave like the regular HTTP server but
provide some higher-level layers similar to those in Nginx.
There is still a bit of a way to go, but it is fully operational.

Eventually I'll write a layer for managing comet-based applications,
which is what you're talking about, because I think this is a
fundamental part of the new HTML5 era.

But in short, Nitrode should be perfectly sufficient for all your
needs.

Regards,
Olly
> > > > > > >> > * OS-level MIME dictionary support -...
>
> read more »