On Fri, 17 Mar 2023 12:40:40 +0000, Harold Johanssen wrote:
> I have a multithreaded C application in 15.0 that does indeed open a lot
> of file descriptors, at some point dying with the diagnostic mentioned
> in the subject line. The same application keeps running under Ubuntu
> 20.04 without problems.
The first step might be to reconsider how the application works.
Does it really need all those files and sockets open at the same
time? Since the number of open files is a limited resource, it is good
practice to close files and sockets as soon as you are done with them.
Tools like cppcheck can identify resource leaks in C programs.
If you still decide that you need thousands of files or sockets open
at once, you will be thwarted by the soft limit, as you have
discovered. Using ulimit from your shell or your login scripts, you
can, as a normal user, increase the soft limit up to the hard limit. To
increase the hard limit you need to be root.
These limits apply per process and are inherited from the parent
process.
> Checking out online, the suggestion is to increase the value of
> /proc/sys/fs/file-max. Indeed, for 15.0 this is 1632376 by default,
This number does not limit a single process, but if the sum of open
files across all processes gets too high you will run into trouble.
> whereas in the Ubuntu system it was set to 9223372036854775807. So I set
> it to that value in 15.0 and restarted the application, to no avail:
> after (many) hours running, I got the same issue.
Yes, you are still limited by the soft limit.
> Anybody know to overcome this problem? Like I said, the code is
> exactly the same in both Ubuntu 20.04 and Slackware 15.0, and I have the
> following settings:
>
> Slackware 15.0:
>
> # ulimit -n
> 1024
>
> # ulimit -Hn
> 4096
>
> # ulimit -Sn
> 1024
>
> # cat /proc/sys/fs/file-max
> 9223372036854775807
>
> Ubuntu 20.04:
>
> # ulimit -n
> 1024
>
> # ulimit -Hn
> 1048576
>
> # ulimit -Sn
> 1024
>
> # cat /proc/sys/fs/file-max
> 9223372036854775807
It seems a little odd that your Ubuntu machine, where there is no
problem, also has a soft limit of 1024 open files. Perhaps your C
application calls setrlimit(RLIMIT_NOFILE, ...) itself; if it does, you
might still need to increase the hard limit.
> Is it just a matter of increasing the hard value under Slackware 15.0? I
> haven't done so yet because I don't know how to do so without rebooting
Since root can increase the hard limit, and child processes inherit
their parent's limits, you might try:
su root
ulimit -Hn 1048576
su my_normal_user
ulimit -Hn
and voila! You will have a shell as your normal user where the hard limit
is 1048576.
> which, for a number of reasons, is not an option in the short term.
In the long term you might want to create a file /etc/initscript looking
something like this:
-8<----------------------------
#
# initscript If this script is installed as /etc/initscript,
# it is executed by init(8) for every program it
# wants to spawn like this:
#
# /bin/sh /etc/initscript <id> <level> <action> <process>
#
# It can be used to set the default umask and ulimit
# of all processes. By default this script is installed
# as /etc/initscript.sample, so to enable it you must
# rename this script first to /etc/initscript.
#
# Version: @(#)initscript 1.10 10-Dec-1995 MvS.
#
# Author: Miquel van Smoorenburg, <miq...@cistron.nl>
#
ulimit -Hn 1048576
# Execute the program.
eval exec "$4"
-8<----------------------------
From some startup script you might also want to increase the value of
/proc/sys/fs/file-max.
regards Henrik