Why are people expecting Node to compare well with nginx or Apache for serving static files? Node is faster than PHP or Ruby, but comparing it to something specifically designed for high-performance static file serving is always going to be a losing proposition. Use Node for your dynamic content and nginx or Apache for your static content. It's a very simple equation.
> --
> Job Board: http://jobs.nodejs.org/
> Posting guidelines:
> https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
> You received this message because you are subscribed to the Google
> Groups "nodejs" group.
> To post to this group, send email to nod...@googlegroups.com
> To unsubscribe from this group, send email to
> nodejs+un...@googlegroups.com
> For more options, visit this group at
> http://groups.google.com/group/nodejs?hl=en
But node wins over apache and nginx by having the right event model. Node serving a static file from memory is faster than nginx serving the same file from disk.
Sorry, that period was a typo. I know that nginx is event-based too. My point was that high-level algorithms sometimes matter more than the constant overhead of the VM.
>
>>
>> Node serving a static file from
>> memory is faster than nginx serving the same file from disk.
>
>
> Benchmarks please. I very much doubt that.
I've benchmarked it before, but it should be obvious too. Node is doing less work than nginx in this case: writing an already-buffered response to the socket is much easier than reading a file from disk and then sending it to the socket. (Even sendfile has to spin up the disk, even if the kernel is doing the rest.)
The point is we're not doing the same thing. Nginx is really good at
what it's written for. Node is really good at what it's written for
and that can include serving static resources with business logic
smarts scripted in.
Liam, I asked Ben about exposing the fd on the socket handle so Linux folks could use sendfile, but it seems the core team doesn't really want to do that. There was a mention of providing a sendfile and/or pipe between two handles internally in the C++ code at some stage, but my feeling was that this is some way off. My patch has nothing to do with big changes to the C++; it is just a patch to expose the fd so I can pass it to fs.sendfile.
Matt, there's no need to be so rude, especially to one of the best guys in the community. From what I understand, nginx will cache small files in memory if configured to do so, and it will also use sendfile if configured to do so. When using sendfile, there can be a blocking operation if the file has not been cached by the kernel, but once it's in memory, subsequent requests should be as fast as possible and most of the work will be done in the kernel. The difference between using sendfile from V8/node.js and directly from C/C++ is very small in the testing I have done. Getting close to nginx performance is definitely achievable, but probably not using the standard node.js libs (at least not right now). The only way I have gotten good performance is to use the C++ bindings directly along with sendfile. I'll post up a benchmark ASAP, which should shed some light on all this.
> Matt, there's no need to be so rude, especially to one of the best
> guys in the community.
> Getting close to nginx performance is definitely achievable.
Matt, I'm not offended by the tone. I understand that intent and tone are hard to convey on the internet. That's why I love going to tech conferences to meet people face to face. I hope to be at NodeConf this summer; maybe we can discuss this there.
But I do disagree with your statement "it's just that people need to understand that Node will never be the high performance static file server that nginx is. And trying to get there seems pretty futile to me".
I can see that you're just trying to do your civic duty and make sure people know that serving static files isn't something node is good for. I personally think it can be good for that.
Matt, there are all sorts of optimisations available. If you really want top performance, you could write a C++ module that does static file serving and can be easily plugged into a node.js HTTP server. It would be able to spend most of its time in C++ land serving static files, so there is no reason it could not be as fast as nginx. Also, nginx is only optimised once, at compile time. In V8, the JIT compiler has the opportunity to optimise on the fly as the load on the server changes. This is a big advantage over something like nginx, and it wouldn't surprise me at all to see a node.js solution match or outperform nginx at static file serving in the near future.
On Tue, Feb 14, 2012 at 7:00 PM, billywhizz <apjo...@gmail.com> wrote:
> Matt, there are all sorts of optimisations available. if you really
> want top performance, then you could write a c++ module that does
> static file serving and can be easily plugged into a node.js http
> server.

Yes, but why would you do that, when there's perfectly good open source code (nginx) to do it already?

> it would be able to spend most of it's time in c++ land
> serving static files so there is no reason it could not be as fast as
> nginx. also, nginx is only optimised once - at compile time. in v8,
> the JIT compiler has the opportunity to optimise on the fly as the
> load on the server changes. this is a big advantage over something
> like nginx and it wouldn't surprise me at all to see a node.js
> solution match or out perform nginx at static file serving in the near
> future.

It's *very* rare for a JIT to do better than compiled C, except on very synthetic hard looping problems; HTTP serving really doesn't fit into that.

Thank you. Kind of proves my point. Nginx serves more data, from the filesystem, faster, is checking for changes to the file, isn't doing sendfile(), etc. At least you turned logging off :)

But I really appreciate seeing real numbers.

Matt.
Not just one. :) FEFE!!! :D
On 15.02.2012 at 06:24, "Louis Santillan" <lpsa...@gmail.com> wrote:
Wow. Someone who has heard of Gatling.
-L
On Tuesday, February 14, 2012, billywhizz <apjo...@gmail.com> wrote:
> heh. proves your point. f...
On the topic of "why use node to service static files"...
Because if you don't need to understand, configure, and maintain an
extra piece of software in your stack, things get simpler.
If node can get 10-50% faster at serving static files, then that's X
number of more deployments that don't need to complicate their
infrastructure more than it needs to be.
> If node can get 10-50% faster at serving static files, then that's X
> number of more deployments that don't need to complicate their
> infrastructure more than it needs to be.

I can almost guarantee you there are no deployments where this is a limiting factor.
I don't hear anyone on this thread complaining that node.js is slower.
Matt,
The original post started with an observation and a request for more
details, presumably from those who know what's going on under the
hood. I didn't detect any hint of "expecting Node to compare well with
nginx or Apache."
A conversation about the underlying differences in the servers and
thoughts on how or how much node can be optimized would have been
welcome.
Instead, your response was not only unhelpful but condescending. Shouldn't we appreciate and support others' efforts, as they could lead directly to improvements in node?
On Wed, Feb 15, 2012 at 12:01 PM, Chris Scribner <scr...@gmail.com> wrote:
> On the topic of "why use node to service static files"...
> Because if you don't need to understand, configure, and maintain an
> extra piece of software in your stack, things get simpler.

On a very basic level yes, but there are more advantages to using a front-end nginx server than there are this one downside of "it's another piece of software".
> If node can get 10-50% faster at serving static files, then that's X
> number of more deployments that don't need to complicate their
> infrastructure more than it needs to be.

I can almost guarantee you there are no deployments where this is a limiting factor.
I just tested the simple file server at https://gist.github.com/701407 serving up a ~7kB png file. It gets around 2000 requests per second on my ageing MacBook with Core 2 Duo CPUs. That's enough to serve 172 MILLION requests per day.

Matt.
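For what it's worth, that daily figure checks out: 2,000 requests per second sustained for a full day.

```javascript
// Back-of-envelope check of the claim above.
const reqPerSec = 2000;
const secPerDay = 24 * 60 * 60;          // 86,400 seconds in a day
const reqPerDay = reqPerSec * secPerDay; // 172,800,000, i.e. ~172 million
console.log(reqPerDay); // → 172800000
```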
On Wed, Feb 15, 2012 at 12:35 PM, Matt <hel...@gmail.com> wrote:
> On a very basic level yes, but there are more advantages to using a front-end nginx server than there are this one downside of "it's another piece of software".

That's a value judgement that you can't possibly make for other people. Nor are you in any position to understand what is good enough for other people's use cases.

> I can almost guarantee you there are no deployments where this is a limiting factor.

What? Serving static files? Exactly.
On Wed, Feb 15, 2012 at 3:03 PM, Dean Landolt <de...@deanlandolt.com> wrote:
> That's a value judgement that you can't possibly make for other people. Nor are you in any position to understand what is good enough for other people's use cases.

I'm trying to stick to facts. But sure, if you weighed the pros of running a proxy front end extremely low against the con of having one more piece of software running, then you may come out with having a proxy as a negative. But I would hope people are aware of all the benefits before making that choice. Having one more piece of software in place is a con you have to deal with once. The rest of the benefits you gain occur all the time:

- Ability to proxy to multiple different backends as your site expands (e.g. maybe even mixing technologies, such as Rack/Sinatra and Node).
- A well tested, battle-hardened static file server with support for all of HTTP already coded in (e.g. will your Node file server do If-Modified-Since? ETags? Gzip compress on the fly? Support sendfile()? Log correctly when scaled to multiple CPUs? Support Accept headers?)
- When you restart or take down your Node process for maintenance of some sort, do your users see a spinny beachball or a nice "Site is down for Maintenance" page, which nginx can easily deliver?

There's probably a couple more I forgot, but those are the biggies.

> What? Serving static files? Exactly.

Right. I've never once said Node is too slow at serving static files. Perhaps you misread my posts?
On Wed, Feb 15, 2012 at 3:27 PM, Matt <hel...@gmail.com> wrote:
> - Ability to proxy to multiple different backends as your site expands (e.g. maybe even mixing technologies, such as Rack/Sinatra and Node).

Proxying and load balancing, eh? That's wonderful if you can succinctly express all your logic in a clunky declarative config file. Yes, it's better than Apache, but let's not kid ourselves: couldn't a lot of that be made substantially more efficient with a more suitable tool for this logic? So if node makes a good static file server, it makes an even better proxy. Perhaps more so for load balancing.

> - A well tested, battle-hardened static file server with support for all of HTTP already coded in (e.g. will your Node file server do If-Modified-Since? ETags? Gzip compress on the fly?)

It could. And I suspect there'll be a lib for that (done in C) some day. Your use of the phrase "battle-hardened" seems to preclude the use of anything new. That's your prerogative, and perhaps sound advice, but you're wielding it like a sledgehammer. There's room for nuance here.

> - When you restart or take down your Node process for maintenance of some sort, do your users see a spinny beachball or a nice "Site is down for Maintenance" page which nginx can easily deliver?

And you need nginx for this?

> There's probably a couple more I forgot, but those are the biggies.

And each one could be accomplished with a module or a small amount of javascript.

> Right. I've never once said Node is too slow at serving static files. Perhaps you misread my posts?

No, you've consistently said that Node will never be as fast as nginx, and implied that because of that you should of course have nginx in front of node. It's this implication I take issue with. I'm not saying we should all use node for this. What I'm replying to is your consistent, belligerent insistence that node will never be good enough for these use cases (in spite of the evidence, including your admission that static files aren't the bottleneck).

That's ridiculous.
1) Please don't benchmark Node against Nginx for static file serving unless you're actively working to make it faster; that is a fool's game, as Nginx will always be faster.
On Wed, Feb 15, 2012 at 5:12 PM, Matt <hel...@gmail.com> wrote:
> (or your choice of non-alcoholic beverage).
Matt, just wanted to address a couple of points you made:

1. With node.js clustering you get what is effectively a layer 4 load balancer across the available CPUs on a single box. This is all being handled by the OS and is insanely fast.

2. Restarting a server gracefully is pretty easy with a little bit of work.

3. Recommending nginx as a reverse proxy in front of node.js for all scenarios seems a bit OTT to me. Reverse proxies add quite a lot of latency to each request and introduce another point of failure into the pipeline. If you want to use nginx for serving static files, you'd be better off serving your static content from a separate host or port and going directly to node.js for anything non-static. IMHO, a reverse proxy should always be a last resort if no other solution is possible, and this is partly why I'd like to see node.js performance improve, so we can put together simpler solutions that don't have multiple points of failure.

4. You mentioned issues with logging when clustering over multiple CPUs? What exactly is the issue here, and what does nginx do to solve it?