Please e-mail replies...
ron
------------------------------------------------------------------------------
Ron Klatchko klat...@cory.Berkeley.EDU ...!ucbvax!cory!klatchko
There may be a more direct way, but
#include <stdio.h>
main()
{
	printf( "Number of file descriptors is %d\n",
		sizeof( _iob )/sizeof( struct _iobuf ) );
}
will do it. _iob[] and struct _iobuf are defined in stdio.h.
--Blair
"So is _NFILE."
No, no, no.
It is nowhere guaranteed that an _iob array exists, or that it covers
all of the possible FILE streams. For example, on the Data General
MV systems, _iob has only 3 elements (for stdin, stdout, and stderr),
and the rest of the FILEs are allocated with malloc. A fully
ANSI-conforming implementation is not allowed to have macros refer to
an _iob structure, because '_' followed by a lowercase letter is in
the user's namespace for macros and such.
Under BSD systems, the getdtablesize() function returns the number of
file descriptors available. Under System V.[0123] systems, the define
_NFILE within stdio.h is the number of file descriptors. Under POSIX,
the function sysconf with an argument of _SC_OPEN_MAX (defined in
unistd.h) returns the number of file descriptors. Under ANSI C, the
define FOPEN_MAX is the minimum number of FILEs that the implementation
guarantees can be open simultaneously (which may or may not be the
number of file descriptors).
--
Michael Meissner email: meis...@osf.org phone: 617-621-8861
Open Software Foundation, 11 Cambridge Center, Cambridge, MA
Catproof is an oxymoron, Childproof is nearly so
No, it won't, on machines with dynamic allocation of the io buffers.
Unfortunately, that snippet of code compiled, ran, and gave
the desired results under this machine's Umax, which
allocates all of the _iob[] array in stdio.h...
Well, ron, you get the idea, I hope.
--Blair
"One of these days, I will, too... :-S"
This is not a good idea. ANSI C conforming implementations will not
use the name _iob (for reasons Sue Meloy explained in the Journal of
C Language Translation). Also, the length of the static-duration
FILE array has no necessary relation to the number of available file
descriptors; it could even be as short as 3 on some implementations.
In article <56...@buengc.BU.EDU> b...@buengc.BU.EDU (Blair P. Houghton) writes:
>There may be a more direct way, but
> printf( "Number of file descriptors is %d\n",
> sizeof( _iob )/sizeof( struct _iobuf ) );
Nope. There is no guarantee that this will even compile. Under 4.3BSD-tahoe,
you will get the answer
warning: sizeof returns 0
at compile time, and the program will print `0'.
>... _iob[] and struct _iobuf are defined in stdio.h.
My stdio.h defines instead `FILE' (a typedef with no corresponding `struct'
tag) and `extern FILE __sF[]'.
Under 4.2BSD and later systems, the `getdtablesize()' system call will
return (at runtime) the maximum number of open files per process. On
other systems the only way to find out is to open files until this fails.
On systems which have extended the limit to `as much memory as is available'
this will take some time. :-)
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: ch...@cs.umd.edu Path: uunet!mimsy!chris
Uhhh, I found on a Sun last week that _NFILE is NOT defined in stdio.h
and _iob was defined as
extern struct _iobuf _iob[];
without an explicit length. In Xenix, at least, _NFILE is defined, and
there is an additional parameter, NOFILE, in <sys/param.h>. I would not
rely on either of these. A sure run-time way is to (f)open or dup until
you get an error; your highest file descriptor number will then be max_files - 1.
Then you could allocate an array to hold your files from dynamic storage.
--
> printf( "Number of file descriptors is %d\n",
> sizeof( _iob )/sizeof( struct _iobuf ) );
This will not work on a large number of Unixes.
First, it's not guaranteed that _iob etc exist at all.
Second, it only gives you the number of statically allocated iobufs.
Third, some stdio.h's just declare _iob like this:
extern struct _iobuf _iob[];
which tends to provoke a compiler warning and a result of zero.
A partial answer: see if your manual mentions getdtablesize().
-- Richard
--
Richard Tobin, JANET: R.T...@uk.ac.ed
AI Applications Institute, ARPA: R.Tobin%uk.a...@nsfnet-relay.ac.uk
Edinburgh University. UUCP: ...!ukc!ed.ac.uk!R.Tobin
No, it's not; it's the number of "_iob" structures - i.e., the maximum
number of FILEs that can be open, which, as you indicated, may or may
not be the number of file descriptors.
Under System V Release "1" and 2, the number of file descriptors is
NOFILE in <sys/param.h>.
Under System V Release 3, "ulimit(4, 0L)" returns the maximum number of
file descriptors that can be open (not documented in S5R3.0, may be
documented in later versions).
Well, you used to be able to rely on the constant (hack define) NOFILE.
But with the proliferation of systems that have a quasi-dynamic number of
file descriptors there's no real portable way of working it out at compile
time.
At run time you could use:
#include <fcntl.h>	/* open() */
#include <unistd.h>	/* dup() */

int i = 4;	/* stdin + stdout + stderr + the open below */
int fd;

for (fd = open("/dev/null", O_RDONLY); dup(fd) != -1; i++)
	;
This assumes that only stdin, stdout and stderr are open.
You could use the results of such a program to define a constant
in a Makefile or #include. Then at compile time you'd have a
pretty good idea of the number of file descriptors.
Boyd Roberts bo...@necisa.ho.necisa.oz.au
``When the going gets weird, the weird turn pro...''
cudcv@clover [~] > gcc c.c
c.c: In function main:
c.c:5: invalid use of array with unspecified bounds
Exit 1
cudcv@clover [~] > cc c.c
"c.c", line 5: warning: sizeof returns 0
cudcv@clover [~] >
_iob is declared as `extern struct _iobuf _iob[];' on all the systems round
here (4.3bsd, SunOS 4.0). The call `getdtablesize()' returns the number of
available file descriptors on these systems. I believe in SunOS 4.1 you can
increase this number with a call to `setrlimit', so that `getrlimit' will
return the current maximum, and the maximum maximum (as it were ...).
Rob
--
UUCP: ...!mcvax!ukc!warwick!cudcv PHONE: +44 203 523037
JANET: cu...@uk.ac.warwick ARPA: cu...@warwick.ac.uk
Rob McMahon, Computing Services, Warwick University, Coventry CV4 7AL, England
True in System V Release 4 as well; use RLIMIT_NOFILE there (and
probably in SunOS 4.1). The hard limit is the maximum maximum, and the
soft limit is the current maximum.
As I read the "sysconf(BA_OS)" section of the Third Edition SVID (the
S5R4 one), the standard POSIX call "sysconf(_SC_OPEN_MAX)" will also
return the soft limit (they say "Additionally, a call to "setrlimit()"
may cause the value of OPEN_MAX to change."). Presumably, it does the
same under SunOS 4.1 (also claiming POSIX conformance) as well (Larry?).
I can't speak for 4.4BSD, although if they add something similar, it
would be nice if they did so compatibly, by extending "*etrlimit".
After hearing rumors that our AIX machine would allow > 20 FDs, we
decided to run a little test:
int x = 0;

while (fopen("/dev/null", "r") != NULL)
	x++;
printf("%d\n", x);
We ran the test, and were pretty impressed: "147 files open at one time...
Not Bad!". We ran the test several more times, getting numbers in the
range 130 ~ 150.
About that time we started hearing cries up and down
the hall, "Hey, how come I can't write to /tmp?" "Hey, all of my compiles
are blowing up: it says the assembler can't open the intermediate file!"
About this time, we quietly logged out and took an early lunch.
--
Mark Harrison harr...@necssd.NEC.COM
(214)518-5050 {necntc, cs.utexas.edu}!necssd!harrison
standard disclaimers apply...