Couldn't bind: 24: Too many open files


Ali Bozorgkhan

Jan 7, 2014, 12:48:10 PM
to scrapy...@googlegroups.com
Hi,

I am running 25 jobs on a single machine, each with 50 concurrent requests. After 10 minutes or so, I get this error in my proxy middleware:

Couldn't bind: 24: Too many open files.

I tried increasing the ulimit on the system, but no luck. Is there any way I could fix it?

Pablo Hoffman

Jan 7, 2014, 4:33:52 PM
to scrapy-users
How did you increase the ulimit? Are you sure it was increased properly?


--
You received this message because you are subscribed to the Google Groups "scrapy-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scrapy-users...@googlegroups.com.
To post to this group, send email to scrapy...@googlegroups.com.
Visit this group at http://groups.google.com/group/scrapy-users.
For more options, visit https://groups.google.com/groups/opt_out.

Ali Bozorgkhan

Jan 7, 2014, 4:45:19 PM
to scrapy...@googlegroups.com
I just found the solution:

The problem is that scrapyd starts as a service (under Upstart), so it ignores any configuration in '/etc/security/limits.conf'; those PAM limits apply only to login sessions. I solved the problem by adding this line to '/etc/init/scrapyd.conf':

limit nofile 524288 524288
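To double-check that a limit change actually reached a given process (rather than just your shell), Linux exposes every process's effective limits under /proc. A minimal check, shown here against the current shell; substitute the daemon's PID (for example from `pidof scrapyd`, an assumption about your process name) to inspect scrapyd itself:

```shell
# Every process's effective limits live in /proc/<pid>/limits.
# "self" means the current process; replace it with scrapyd's PID
# to see what limit the Upstart stanza actually gave the daemon.
grep 'Max open files' /proc/self/limits
```

If that still shows the old limit for the daemon, the stanza was not picked up (for example, the service was not restarted after editing the file).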





Ali Bozorgkhan

Jan 7, 2014, 4:46:13 PM
to scrapy...@googlegroups.com
Thanks, Pablo, for your reply. You're right, I didn't fully explain what I did because at first I thought it was something that needed to be set in Scrapy, but I just realized it was a server problem.

Pablo Hoffman

Jan 7, 2014, 11:32:06 PM
to scrapy-users
No problem, Ali. Thanks for following up with your solution.

I think you should also be able to adjust the ulimit by adding the following line to /etc/default/scrapyd (which is less likely to be overwritten by a scrapyd package upgrade):

ulimit -n 524288
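This works because Debian-style init scripts typically source /etc/default/scrapyd in the shell that launches the daemon, so shell commands placed there run before the process starts and the raised limit is inherited. A rough sketch of that pattern (the file path is the conventional one; your actual init script may differ):

```shell
# Pattern used by many init scripts: pull in the defaults file
# if it exists, then start the daemon afterwards.
[ -r /etc/default/scrapyd ] && . /etc/default/scrapyd

# Any process started after this point inherits the current limit:
ulimit -n
```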

Pablo.