Someone told me that using pipes in an application is very
bad practice and does not scale at all.
They said that AIX has a limit of 1024 (or something close) file
descriptors, and since every open file uses a descriptor, you can
only have 1024 files open at once.
They said that the mere existence of a pipe is enough to use up
a file descriptor, and so it competes with the number of files that
can be open.
It seems a little hard to believe to me that only a relatively small
number of files can be opened at once, or even that a relatively
small number of pipes can be used.
Can anyone confirm or deny?
$ grep OPEN_MAX /usr/include/sys/limits.h
#define OPEN_MAX 65534 /* max num of files per process */
HTH
Mark Taylor
Is that per PROCESS?? So you could have 10 processes, each with 50,000
file descriptors open (in theory)?
There's a system-wide limit as well, but it's very large.
Essentially, you were misinformed. The practical limit is
quite high, and it is indeed per-process.
When we had a similar problem, we found a way to set the limit (somewhere
in smitty, I don't remember where), but only globally for all users,
unlike Linux, where you can set different limits for different users.
Yup .. I am assuming the system-wide limit is based on the number of
process slots in the proc table?
$ grep PROCSHIFT /usr/include/sys/proc.h
#define PROCSHIFT 18 /* number of bits in proc index */
2^18==262144
262144*65534==17179344896
Although something else will probably break before you hit this
limit :)
Gary might know .. has this limit been reached/tested in dev, or is it
theoretical, based on the architecture? Or is the supported limit the
actual number reached/tested in dev?
Rgds
Mark Taylor
I don't know what the system-wide limit is any more (it was 1 M in
AIX 4.3.1). Whatever it is, I'm sure that some testing has taken
place, and that the upper limit is by design with attention to
scalability.
Thanks for your response..