Re: Siege Leaking Sockets


je...@joedog.org

Feb 24, 2011, 4:04:14 PM
to siege...@googlegroups.com
Perhaps it's leaking sockets but there's also a chance that you're opening
more files than your OS will allow. What parameters are you using to start
siege?

Cheers,
Jeff

> I'm seeing siege leaking sockets. Everything is fine for the first
> little bit of execution, but then it starts creating more and more
> sockets, and eventually it runs out and the execution fails with the
> following error:
>
> [error] descriptor table full sock.c:108: Too many open files
> [error] descriptor table full sock.c:108: Too many open files
> [error] descriptor table full sock.c:108: Too many open files
> [... the same error repeated many more times ...]
> libgcc_s.so.1 must be installed for pthread_cancel to work
>
> Environment:
>
> OS: 64bit Ubuntu Lucid Lynx (10.04.1 LTS)
> Kernel: Linux argon 2.6.32-27-server #49-Ubuntu SMP Thu Dec 2 02:05:21
> UTC 2010 x86_64 GNU/Linux
> Siege: SIEGE 2.71b3
>
> libgcc_s.so.1 does exist, at the path: /usr/lib32/libgcc_s.so.1
>
> .siegerc (comments removed for brevity):
> verbose = true
> timestamp = true
> fullurl = true
> show-logfile = false
> logging = true
> logfile = ${SIEGE_ROOT}/siege.log
> protocol = HTTP/1.1
> chunked = true
> cache = false
> connection = close
> concurrent = 15
> time = 20M
> file = ${SIEGE_ROOT}/urls.txt
> delay = 1
> expire-session = true
> internet = false
> benchmark = true
> user-agent = ${SIEGE_HOSTNAME}/1.00 [en] (X11; I; ${SIEGE_VERSION})
> accept-encoding = gzip
> spinner = false
>
> urls.txt is a pretty standard file with a single POST request (to
> login) and then a bunch of GET requests. In total it is ~600 URLs
> long.
>
> I'll start looking at this today, but wondered if there were any
> pointers as things to try before starting in on it.
>


Jay Steiger

Feb 24, 2011, 4:14:54 PM
to siege...@googlegroups.com
When I started hitting this I just added this to my .profile:

ulimit -n 1300

Good Luck!
Jay

je...@joedog.org

Feb 24, 2011, 4:22:35 PM
to siege...@googlegroups.com
Just be careful. You wouldn't want to make a mess ;-)

Jeff

Remi Broemeling

Feb 24, 2011, 5:31:34 PM
to siege...@googlegroups.com
Hi Jeff,

Thanks for the response.  I suppose it's possible, but I don't see why siege would need so many sockets with such a low concurrent user count.

In any event, I am running siege like this:

siege --rc "my.siege.rc"

There aren't any "real" values set via the command-line parameters; all of that is done in the RC file.

The ulimit output for the shell that I am executing it from follows.

[remi@argon:~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Thanks,

Remi

je...@joedog.org

Feb 24, 2011, 5:49:13 PM
to siege...@googlegroups.com
What concurrent user count are you using?

Remi Broemeling

Feb 24, 2011, 5:57:52 PM
to siege...@googlegroups.com
As is defined in the rc file, there are 15 concurrent users.

I have found the problem: siege leaks a socket anytime a DNS lookup fails.  The issue is here (in sock.c):

  if(hp == NULL){
    return -1;
  }

The new_socket call returns -1 without closing the socket it has already opened, so siege slowly leaks sockets as gethostbyname_r calls fail.

Why so many gethostbyname_r calls are failing is what I plan to look into next. :)
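A minimal sketch of the failure path Remi describes (the names and structure here are illustrative, not siege's actual sock.c; the resolver is stubbed to always fail, standing in for a failed gethostbyname_r call):

```c
#include <netdb.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Stub resolver that always fails, standing in for a gethostbyname_r
 * call that finds no host entry. */
static struct hostent *lookup_stub(const char *hostname)
{
    (void)hostname;
    return NULL;
}

/* Illustrative failure path: the descriptor is created before the name
 * is resolved, so the lookup-failure branch must close it -- otherwise
 * every failed lookup leaks one descriptor. */
static int new_socket_sketch(const char *hostname)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return -1;

    struct hostent *hp = lookup_stub(hostname);
    if (hp == NULL) {
        close(sock);  /* the missing close(); without it the fd leaks */
        fprintf(stderr, "[warning] DNS lookup failed for \"%s\"\n", hostname);
        return -1;
    }

    /* connect() etc. would follow here in real code. */
    close(sock);
    return 0;
}
```

With the close() in place, repeated lookup failures no longer consume descriptors, so the "descriptor table full" cascade never starts.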

Remi Broemeling

Feb 24, 2011, 6:23:42 PM
to siege...@googlegroups.com
I tracked down the issue with the failing DNS lookups: when siege reads URLs in from a text file, any blank line is read in and appended as a "blank" entry... and then, when siege works through its list of URLs, each "blank" entry causes a DNS failure (which in turn leaks a socket, as described above).  Unfortunately I am time-constrained at the moment due to work and don't have time to write a patch for either of these issues.  Suggestions are below, however.
  1. The sock.c fix should be very easy: just close the socket before returning a failure when a DNS lookup fails.  I'd also suggest logging a warning that a DNS lookup has failed, as that should not be a common occurrence and the user should definitely be made aware of it when it occurs.
  2. When reading from a URL list file, all blank lines should be ignored rather than turned into "blank" URL entries.  It'd be nice to have some method of comments too (not sure whether siege currently supports this, but simply ignoring any line that starts with a # would work).
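Suggestion 2 could be sketched with a small predicate like this (a hypothetical helper, not siege's actual parser):

```c
#include <ctype.h>

/* Hypothetical helper for a urls.txt reader: returns 1 if the line
 * should be skipped (blank/whitespace-only, or a # comment), else 0. */
static int skip_url_line(const char *line)
{
    while (*line != '\0' && isspace((unsigned char)*line))
        line++;
    return (*line == '\0' || *line == '#');
}
```

A reader would call this on each line before appending it to the URL list, so blank lines and comments never reach the resolver in the first place.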
Thanks for your help, hope this is useful information.

je...@joedog.org

Feb 25, 2011, 8:21:24 AM
to siege...@googlegroups.com
Remi,

Thanks for the information. I'll look into it. The urls.txt parser was
designed to ignore blank lines and support # comments. If that's not
happening, then it's a long unnoticed bug.

Jeff

> I tracked down the issue with the DNS lookups failing: when siege is
> reading in URLs from a text-file, any blank line in the text-file is read
> in and appended as a "blank" entry... and then when it is going through
> its list of URLs, any "blank" entry causes a DNS failure (which in turn
> causes a leaked socket, as below). Unfortunately I am time-constrained at
> the moment due to work and don't have time to write a patch for either of
> these issues. Suggestions are below, however.
>
> 1. The sock.c fix is/should be very easy; just close the socket before
> returning a failure when a DNS lookup fails. I'd also suggest adding a
> warning that a DNS lookup has failed, as that should not be a common
> occurrence, and the user should definitely be made aware of it when it
> occurs.
> 2. When reading from a URL list file, all blank lines should be ignored
