Lighttpd 500 with rapid requests

Sam Beveridge

Nov 9, 2012, 5:46:46 PM
to we...@googlegroups.com
If enough requests are made fast enough, I get a 500 internal server error and the following in my lighttpd error logs:

2012-11-09 16:17:31: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 10 
2012-11-09 16:17:31: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 11 
2012-11-09 16:17:31: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 12 
2012-11-09 16:17:31: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 13 
2012-11-09 16:17:31: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 14 
2012-11-09 16:17:31: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 15 
2012-11-09 16:17:32: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 16 
2012-11-09 16:17:32: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 17 
2012-11-09 16:17:32: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 16 
2012-11-09 16:17:32: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 17 
2012-11-09 16:17:32: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 16 
2012-11-09 16:17:32: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 15 
2012-11-09 16:17:32: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 16 
2012-11-09 16:17:32: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 15 
2012-11-09 16:17:32: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 14 
2012-11-09 16:17:32: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 13 
2012-11-09 16:17:32: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 14 
2012-11-09 16:17:32: (mod_fastcgi.c.3005) got proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 15 
2012-11-09 16:17:32: (mod_fastcgi.c.2494) unexpected end-of-file (perhaps the fastcgi process died): pid: 0 socket: tcp:127.0.0.1:7000 
2012-11-09 16:17:32: (mod_fastcgi.c.3325) response not received, request sent: 1252 on socket: tcp:127.0.0.1:7000 for /sm , closing connection 
2012-11-09 16:17:32: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 14 
2012-11-09 16:17:34: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 13 
2012-11-09 16:17:34: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 12 
2012-11-09 16:17:34: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 11 
2012-11-09 16:17:34: (mod_fastcgi.c.1515) released proc: pid: 0 socket: tcp:127.0.0.1:7000 load: 10 

Running Gentoo, lighttpd, flup, and web.py.

Has anyone run into this issue before?  Any help would be greatly appreciated!
Message has been deleted

Sam Beveridge

Nov 14, 2012, 12:11:51 PM
to we...@googlegroups.com
Sorry for my previous post, I accidentally submitted it. As I was saying, I think I have narrowed it down to flup. If I bypass flup, I don't get any errors (it just takes forever to process a bunch of requests). Does anyone have any idea why flup would be breaking in this manner? It isn't getting that big of a load. My app is posted below:

#!/usr/bin/python

from apps.main import app as main_app

# run as fastcgi
from flup.server.fcgi import WSGIServer
params = {
    # multiplexed means handle more than one connection at a time
    # we noticed no performance gain from using multiplexed, but
    # we had more completed connections without multiplexing.
    'multiplexed': False,
    'bindAddress': ('127.0.0.1', 7000),
    # 4/1 changed maxThreads from 50 to 9 because it seems that flup
    #     creates 5 processes each with its own thread pool, so we were
    #     getting at most 5 X 50 (250) db connections from web.py.
    #     Reducing it to 9 should mean that at most, 5 x 9 (45) db
    #     connections are used (mh)
    'maxThreads': 9,
}
server = WSGIServer(main_app.wsgifunc(), **params)
server.run()

If I up the thread count to 20 (or just some higher number), the problem goes away, but I am looking for a more elegant solution... if there is one.
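
For reference, a minimal sketch of that workaround, assuming the same apps.main application as above; the value 20 is just the higher thread count mentioned, not a tuned number:

#!/usr/bin/python

from apps.main import app as main_app
from flup.server.fcgi import WSGIServer

params = {
    'multiplexed': False,
    'bindAddress': ('127.0.0.1', 7000),
    # Raised from 9 to 20 (an assumed value, not tuned); with ~5 flup
    # processes that allows up to 5 x 20 (100) db connections from web.py,
    # so the connection-count tradeoff described above comes back.
    'maxThreads': 20,
}
server = WSGIServer(main_app.wsgifunc(), **params)
server.run()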