On Jul 8, 10:19 am, Brian Hammond <or.else.it.gets.the.h...@gmail.com> wrote:
> Sorry, I'm not sure I follow. Are you not expecting this limit of
> 1024? Have you modified your resource limits? AFAIK, 1024 is the
> default for 'nofiles' in Linux.
Apologies for not being clear with my gripe. I'll try one more time here, and as someone suggested, maybe Ryan can help me get it straight upon his return...
It's definitely true that the loop over curl appears to make
node.http.createServer quickly run up against the default 1024 file
descriptor limit (on Ubuntu 8.10 Server). What's unexpected is that
such a process would approach this limit at all in response to a
simple loop over a single call to curl. Sure, to mask the symptom
Ubuntu's limit could be raised manually, and the loop of http requests
would then fail at a higher number of iterations, but that exercise
would essentially skirt the behavior I'm trying to either understand
or file as a bug.
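For anyone wanting to confirm what limit is actually in effect for a process, a minimal sketch using Python's stdlib resource module (the port of this idea to any language with getrlimit is straightforward; 1024 as the soft limit is just the typical Ubuntu default, not guaranteed):

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors.
# On a stock Ubuntu 8.10 install the soft limit is typically 1024.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft=%d hard=%d" % (soft, hard))
```

The soft limit is what the server process will actually hit; the hard limit is the ceiling it could raise itself to.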
The behavior boils down to node.http.createServer accumulating file
descriptors up to the default maximum (1024) while that trivial loop
over curl is executing. That behavior is observable with something
like:
$ while true; do lsof -p <node's pid here> | wc -l; sleep 1; done
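The same count can be taken without lsof by listing /proc/<pid>/fd directly; a minimal Python sketch, assuming a Linux /proc layout (substitute node's pid to mirror the shell loop above):

```python
import os

def fd_count(pid):
    """Count open file descriptors for a process via /proc (Linux only)."""
    return len(os.listdir('/proc/%d/fd' % pid))

# Polling our own process here just for illustration.
print(fd_count(os.getpid()))
```

Unlike the lsof pipeline, this counts only descriptors, not lsof's extra rows for memory-mapped files and the header line, so the numbers will be a bit lower but will trend the same way.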
Oddly enough, Perl's POE will do this too. Run the Perl script below in
place of the node program and employ lsof again to count the file
descriptors, and you'll only have to wait a moment before it bombs out
in the same fashion:
use strict;
use HTTP::Status;  # exports RC_OK
use POE qw(Component::Server::HTTP);

POE::Component::Server::HTTP->new(
    Port           => 6000,
    ContentHandler => {
        "/" => sub {
            my ($request, $response) = @_;
            $response->code(RC_OK);
            $response->content_type("text/html");
            $response->content('Hello World!');
            return RC_OK;
        }
    },
);

print "Server running at http://127.0.0.1:6000/\n";
$poe_kernel->run();
exit;
But monitoring apache, nginx, etc., or trying other trivial server
samples with some dynamic code, shows file descriptors going away when
they're no longer being used, keeping the process safely under the
max. Drop samples like these in place of the node program, count their
descriptors, and you'll see each fd released after its connection goes
away:
(perl)
use strict;
use HTTP::Daemon;
use HTTP::Response;

my $x = HTTP::Response->new(200, 'OK', undef, 'Hello World!');
my $d = HTTP::Daemon->new(LocalPort => 6500) || die;
print "Please contact me at: <URL:", $d->url, ">\n";
while (my $c = $d->accept) {
    while (my $r = $c->get_request) {
        $c->send_response($x);
    }
    $c->close;
    undef($c);
}
(python)
import socket

host = ''
port = 9000
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((host, port))
sock.listen(1)
while 1:
    csock, caddr = sock.accept()
    cfile = csock.makefile('rw', 0)
    while 1:
        line = cfile.readline().strip()
        if line == '':
            cfile.write("HTTP/1.0 200 OK\n\n")
            cfile.write("Hello World!")
            cfile.close()
            csock.close()
            break
(ruby)
require 'webrick'
s = WEBrick::HTTPServer.new(:Port => 7000)
s.mount_proc("/") { |req,resp| resp.body = 'Hello World!' }
trap('INT') { s.stop; s.shutdown }
s.start
So it seems like these non-evented web servers create sockets and then
destroy them, but if you go with node or Perl's
POE::Component::Server::HTTP (an event dispatcher, of course) you see
file descriptors that never get released.
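The expectation the non-evented samples satisfy can be demonstrated in isolation: closing a socket returns its descriptor to the process immediately. A minimal Python sketch, again assuming a Linux /proc layout:

```python
import os
import socket

def open_fds():
    # Each entry under /proc/self/fd is one open descriptor (Linux only).
    return len(os.listdir('/proc/self/fd'))

before = open_fds()
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # allocates one fd
during = open_fds()
s.close()                                               # releases it
after = open_fds()
print(before, during, after)
```

If the evented servers above behaved the same way on connection teardown, the lsof counts would plateau instead of climbing toward the limit.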
The POE behavior has caused headaches before
(http://osdir.com/ml/lang.perl.poe/2007-03/msg00051.html), but this
last tidbit is what makes me suspect I may just be making bad
assumptions about how node and other event-based frameworks deal with
the sockets they create - but then that in turn makes me wonder, on
the node side of things, why node's "raw" tcp example destroys its
descriptors while the http example apparently does not.
I'll try running the node http thing again with v8's --gc_global and
--always_compact options enabled and see if I detect any differences.
Otherwise I'm fine waiting for advice from Ryan and other experts on
how to either fix or better understand this behavior.
-pete