
/dev/fd ?


Russell Hammer

Jun 24, 1997

Vikas Agnihotri wrote:
>
> Hello,
>
> What are all the files [0-9]* in /dev/fd on Unix boxes? I think they have
> something to do with 'pipes'.

No, they have something to do with a process's file descriptors. When
a process opens one of these character device files, the effect is the
same as dup()ing the file descriptor with the same number: opening
/dev/fd/N duplicates descriptor N. It is used by some shells when
running a SUID script, for security reasons. (Among other things, I'm
sure!)

> I see that their file-modification time is always current so I think they
> are used whenever anyone on the machine uses a pipe construct like
> 'ls -l|more'

Here is an example (using ksh syntax):

$ rm /tmp/foo # make sure /tmp/foo doesn't exist
$ exec 3>/tmp/foo # open /tmp/foo and assign to file desc 3
$ cat /etc/group > /dev/fd/3 # send output to file desc 3
$ exec 3<&- # close file desc 3
$ cat /tmp/foo # see it works!
root::0:root
other::1:
bin::2:root,bin,daemon
sys::3:root,bin,sys,adm
adm::4:root,adm,daemon
uucp::5:root,uucp
mail::6:root
tty::7:root,tty,adm
lp::8:root,lp,adm
nuucp::9:root,nuucp
staff::10:
daemon::12:root,daemon
sysadmin::14:
nobody::60001:
noaccess::60002:
nogroup::65534:
$

This is probably not the best example, but you get the picture.

> But I have just 64 files in /dev/fd i.e. /dev/fd/0 thru /dev/fd/63. Does
> this mean that only 64 pipes can be used at any given time in the system?
> How do I bump this up (Solaris 2.5) ?

I think if you set rlim_fd_cur in /etc/system to a higher number
and reboot (boot -r, maybe?) it will create more entries in /dev/fd.

I didn't test this on my system, but I think this is true.
Anyone else have more experience here?
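
For reference, the /etc/system lines usually quoted for this look
something like the following (a sketch only; the values are
illustrative, and a reboot is needed for them to take effect):

```
* /etc/system fragment (Solaris 2.x) -- illustrative values only
set rlim_fd_cur = 256     * soft (default) per-process descriptor limit
set rlim_fd_max = 1024    * hard per-process descriptor limit
```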

> Does this mean that **if** I have a command line like
> command1 | command2 | ......|command64 |command65, it will fail because
> /dev/fd has only 64 entries? (assuming I am the only user on the box?)

No.

> Help? Thanks,
>
> Vikas

Regards,
Russ

Andrew Gierth

Jun 25, 1997

>>>>> "Vikas" == Vikas Agnihotri <vi...@insight.att.com> writes:

Vikas> Hello, What are all the files [0-9]* in /dev/fd on Unix boxes?
Vikas> I think they have something to do with 'pipes'.

Not really.

They are special files that access whatever files the accessing
process already has open. Opening "/dev/fd/n" is exactly the same
as calling dup(n).

This is sometimes used to pass an open pipe to a command that
is expecting a filename to open. e.g. for command redirection
like

$ diff <(command1) <(command2)

This starts command1 and command2 with their output hooked to pipes,
then invokes diff /dev/fd/3 /dev/fd/4 (the actual numbers could vary)
to compare the output of the two commands.
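
This is easy to see from bash (a sketch; the exact descriptor numbers
vary by system and shell):

```shell
#!/bin/bash
# The shell replaces each <(...) with a /dev/fd pathname that is
# connected to a pipe carrying that command's output.
echo <(true) <(true)    # prints two /dev/fd (or /proc/self/fd) paths

# diff then simply opens those pathnames as if they were ordinary files:
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo 'no differences'
```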

Also, on some systems this is used to pass an open fd for an
interpreter script to the interpreter; this is more secure than just
passing the name, and defeats the "race condition" hacks that setuid
interpreter scripts are vulnerable to.

Vikas> But I have just 64 files in /dev/fd i.e. /dev/fd/0 thru
Vikas> /dev/fd/63. Does this mean that only 64 pipes can be used at
Vikas> any given time in the system?

No.

There are two possibilities for /dev/fd/n. On some systems it is simply
a directory full of character device nodes, in which case the number
present is pretty arbitrary. On other systems, /dev/fd is a special
filesystem, and the number of entries reflects the accessing process's
soft file descriptor limit.

--
Andrew.

comp.unix.programmer FAQ: see <URL: http://www.erlenstar.demon.co.uk/unix/>

Bill Marcum

Jun 25, 1997

In message <slrn5r09l2...@joshua.insight.att.com>,

vi...@insight.att.com (Vikas Agnihotri) wrote:
>Hello,
>
> What are all the files [0-9]* in /dev/fd on Unix boxes? I think they have
>something to do with 'pipes'.
>
They're file descriptors. "cat >/dev/fd/3" is the same as "cat >&3" in
Bourne shell. These might be handy if you use one of those shells that
don't understand expressions like "cat >&3". Also, Bourne/Korn/bash shells
can only handle ten fd's as &0 - &9, while /dev/fd on your system goes up to
63.
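
A quick way to convince yourself of the equivalence (Bourne/ksh syntax;
the temp-file name is illustrative, and the /dev/fd write uses append
mode to avoid truncation on systems where /dev/fd is a /proc symlink):

```shell
exec 3>/tmp/fdtest.$$        # open a file on descriptor 3
echo via-ampersand >&3       # write using the shell's fd syntax
echo via-devfd >>/dev/fd/3   # write through the filesystem name for fd 3
exec 3>&-                    # close descriptor 3
cat /tmp/fdtest.$$           # both lines end up in the same file
rm -f /tmp/fdtest.$$
```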

>I see that their file-modification time is always current so I think they
>are used whenever anyone on the machine uses a pipe construct like
>'ls -l|more'
>

>But I have just 64 files in /dev/fd i.e. /dev/fd/0 thru /dev/fd/63. Does
>this mean that only 64 pipes can be used at any given time in the system?
>How do I bump this up (Solaris 2.5) ?

I believe that's 64 fd's per process. You might be able to add more with
mknod, but I wouldn't bet on it.

>
>Does this mean that **if** I have a command line like
>command1 | command2 | ......|command64 |command65, it will fail because
>/dev/fd has only 64 entries? (assuming I am the only user on the box?)
>

That sounds like it might be an interesting experiment. You might run into
a limit on the number of processes per user unless you run it as root.

--
Bill Marcum bmarcum at iglou dot com
"I'm looking at PAGES AND PAGES of stuff even the Franklin Mint couldn't
give away for free." -- K. Mennie


Icarus Sparry

Jun 25, 1997

In article <slrn5r2kvj...@joshua.insight.att.com>,

Vikas Agnihotri <vi...@insight.att.com> wrote:
>>Also, on some systems this is used to pass an open fd for an
>>interpreter script to the interpreter; this is more secure than just
>>passing the name, and defeats the "race condition" hacks that setuid
>>interpreter scripts are vulnerable to.
>
>An example, please?

The 'race condition' is caused by the kernel reading the first line of
the file, seeing '#!/bin/wombat', and then starting up /bin/wombat with
the filename as its first argument. Between the kernel reading the first
line and /bin/wombat opening its first argument, the file could have
been replaced! Systems with /dev/fd properly implemented pass '/dev/fd/16'
(or whichever file descriptor the kernel happened to open the file on, to
read its first line) instead of the name, so unless the file itself is
writable you cannot change its contents.
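
The idea of handing the interpreter an already-open descriptor instead
of a name can be sketched from the shell (Linux-style /dev/fd behaviour
assumed; the temp-file name is illustrative):

```shell
printf '#!/bin/sh\necho ran\n' > /tmp/script.$$
chmod +x /tmp/script.$$
exec 9< /tmp/script.$$   # open the script, as the kernel would
rm /tmp/script.$$        # the NAME can now change or vanish...
sh /dev/fd/9             # ...but the open descriptor still reads the
                         # original contents, so there is nothing to race
exec 9<&-
```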

>> Vikas> But I have just 64 files in /dev/fd i.e. /dev/fd/0 thru
>> Vikas> /dev/fd/63. Does this mean that only 64 pipes can be used at
>> Vikas> any given time in the system?
>
>>No.
>
>Why not? If my hard-limit for file descriptors is 64, how can I run
>something like
>command1 <(command2) <(command3) ..... <(command100) ?

You would indeed be unable to do what you ask for. However you could do
command1 | command2 | command3 | .... | command100

as it is a per process limit of 64, not an overall limit.

>>There are two possibilities for /dev/fd/n. On some systems it is simply
>>a directory full of character device nodes, in which case the number
>>present is pretty arbitrary. On other systems, /dev/fd is a special
>>filesystem, and the number of entries reflects the accessing process's
>>soft file descriptor limit.
>

>Yes, I verified that on Solaris, it is the latter. I did a 'ulimit -n 128',
>but it didnt increase the number of entries in /dev/fd which leads me to
>believe that 64 is my hard-limit.
>
>I did 'ulimit -n 12' and Voila, I now have /dev/fd/0 thru /dev/fd/11 only.
>
>But, get this, now when I do a 'ulimit -n 64', I get a error saying
>$ ulimit -n 64
>ksh: ulimit: exceeds allowable limit

Your ksh is setting both the hard and soft limit to 12, and only
the superuser can raise the hard limit.

>Why is this so? I had to close that xterm and open up a new one! Why cant I
>bump up my ulimit -n to the hard-limit?

If you had said 'ulimit -Sn 12' you would have been able to.

>How can I find out what my hard and soft limits are? Do I have to write a C
>program using getrlimit(), etc? Is there a shell command to do this? Here
>is my ulimit -a output.
>
>time(seconds) unlimited
>file(blocks) unlimited
>data(kbytes) 2097148
>stack(kbytes) 8192
>coredump(blocks) unlimited
>nofiles(descriptors) 64
>vmemory(kbytes) unlimited

Looks like exactly the information you are looking for. Try using 'ulimit -Ha'
and 'ulimit -Sa' to get the hard and soft limits. The default for printing is
to show the soft limits.
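
For example (ksh/bash syntax; the numbers shown will differ per system):

```shell
ulimit -Hn      # hard limit on open file descriptors
ulimit -Sn      # soft limit (what plain 'ulimit -n' reports)
ulimit -Sn 32   # lower only the soft limit...
ulimit -Sn      # ...which can later be raised again, up to the hard limit
```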

Icarus

Bill Marcum

Jun 26, 1997

In message <slrn5r2kvj...@joshua.insight.att.com>,

vi...@insight.att.com (Vikas Agnihotri) wrote:
>
>Why not? If my hard-limit for file descriptors is 64, how can I run
>something like
>command1 <(command2) <(command3) ..... <(command100) ?
>
Because, in a pipeline, each command is a separate process with its own file
descriptors. Each process has its own stdin (fd/0), stdout (fd/1), and
stderr (fd/2).

Casper H.S. Dik - Network Security Engineer

Jun 26, 1997

vi...@insight.att.com (Vikas Agnihotri) writes:

>Yes, I verified that on Solaris, it is the latter. I did a 'ulimit -n 128',
>but it didnt increase the number of entries in /dev/fd which leads me to
>believe that 64 is my hard-limit.

Solaris 2.x has a table in the proc structure which has room
for a certain number of open file descriptors; when you exceed that
number, it's reallocated to a bigger one. You inherit the size of that
table from your parent. On my system, /dev/fd has only 24 entries, and
that's because of how tcsh works. (And it's capped by your limit.)

Solaris 2.6 also has /proc/self/fd (or /proc/pid/fd) which has the
actual entries there.

Ksh has the history file open as fd 63, so you see 64 entries:

ksh$ ls -l /proc/$$/fd
c--------- 1 casper tty 24, 7 Jun 26 10:02 0
c--------- 1 casper tty 24, 7 Jun 26 10:02 1
c--------- 1 casper tty 24, 7 Jun 26 10:02 2
-rw------- 1 casper IR 2714 Jun 26 10:02 63

If ksh could do "exec 127>&1" you would see more entries (bash can do this).
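
With bash this is simple to check (a sketch):

```shell
#!/bin/bash
exec 127>&1        # duplicate stdout onto descriptor 127
echo hello >&127   # writes to fd 127 appear on stdout
exec 127>&-        # close it again
```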


>I did 'ulimit -n 12' and Voila, I now have /dev/fd/0 thru /dev/fd/11 only.

>But, get this, now when I do a 'ulimit -n 64', I get a error saying
>$ ulimit -n 64
>ksh: ulimit: exceeds allowable limit

>Why is this so? I had to close that xterm and open up a new one! Why cant I
>bump up my ulimit -n to the hard-limit?

Because you first set the hard & soft limit to 12. ulimit -S -n 12 sets the
soft limit only.

(Interestingly, /dev/fd is capped by your ulimit; if you have higher fds
open you won't see them there.)

>How can I find out what my hard and soft limits are? Do I have to write a C
>program using getrlimit(), etc? Is there a shell command to do this? Here
>is my ulimit -a output.

>time(seconds) unlimited
>file(blocks) unlimited
>data(kbytes) 2097148
>stack(kbytes) 8192
>coredump(blocks) unlimited
>nofiles(descriptors) 64
>vmemory(kbytes) unlimited


ulimit -H -a

Casper


--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.

Jim Dennis

Jul 5, 1997

In article <87u3in7...@erlenstar.demon.co.uk>
Andrew Gierth <and...@erlenstar.demon.co.uk> writes:

>>>>> "Vikas" == Vikas Agnihotri <vi...@insight.att.com> writes:

Vikas> Hello, What are all the files [0-9]* in /dev/fd on Unix boxes?
Vikas> I think they have something to do with 'pipes'.

> Not really.

> They are special files that access whatever files the accessing
> process already has open. Opening "/dev/fd/n" is exactly the same
> as calling dup(n).

....

Vikas> But I have just 64 files in /dev/fd i.e. /dev/fd/0 thru
Vikas> /dev/fd/63. Does this mean that only 64 pipes can be used at
Vikas> any given time in the system?

> No.

> There are two possibilities for /dev/fd/n. On some systems it is simply
> a directory full of character device nodes, in which case the number
> present is pretty arbitrary. On other systems, /dev/fd is a special
> filesystem, and the number of entries reflects the accessing process's
> soft file descriptor limit.

--
Andrew.

Thank you for the very lucid explanation of file descriptor
nodes in a directory tree.

I'd like to add one small note:

Under Linux (and probably under some other Unixes that support
the /proc filesystem) the file descriptor nodes are located
under /proc/self/fd/.

I've made a symlink from there to /dev/fd (cd /dev && ln -s
/proc/self/fd fd) -- which seems to work O.K. and might
even help for some programs or scripts that are coded to
expect /dev/fd/ for these.

The /proc/ filesystem is fascinating to me in that it
provides a unique directory tree to each process that views
it. It also allows high level (shell command) access to
things that would otherwise require low level system calls
(mostly specialized C programs).

I've noticed that /proc/ embodies and extends one of the
earliest principles of Unix design -- that as many interfaces
as possible should be represented as "files." We are
all accustomed to this in the case of character and block
devices, directories, named pipes and sockets. From what
I've seen, programmers and admins haven't become as accustomed to
accessing process data via this interface (most of us still
use 'ps ... | awk ....' in our scripts to manage processes
rather than some variant of 'find /proc/ .... |' -- and
stuff like that).

I suspect that the reason most of us don't use /proc/ more
is that it's not sufficiently ubiquitous, yet.
--
Jim Dennis, in...@mail.starshine.org
Proprietor, consu...@mail.starshine.org
Starshine Technical Services http://www.starshine.org

PGP 1024/2ABF03B1 Jim Dennis <j...@starshine.org>
Key fingerprint = 2524E3FEF0922A84 A27BDEDB38EBB95A

Roger A. Faulkner

Jul 6, 1997

In article <m390zlc...@antares.starshine.org> ji...@antares.starshine.org (Jim Dennis) writes:
>
[snip]

>
> Under Linux (and probably under some other Unix' that support
> the /proc filesystem) the file descriptor nodes are located
> under /proc/self/fd/.
>
> I've made a symlink from there to /dev/fd (cd /dev && ln -s
> /proc/self/fd fd) -- which seems to work O.K. and might
> even help for some programs or scripts that are coded to
> expect /dev/fd/ for these.

Don't do this on Solaris 2.6.

Solaris 2.6 has the expanded /proc/<pid>/* directory structure
and includes /proc/<pid>/fd/ with entries for each open file in
the process. It also has /proc/self (though ls(1) won't show it;
you just have to know that it is there).

However, the semantics of opening /dev/fd/<n> are different from
the semantics of opening /proc/self/fd/<n>.

open("/dev/fd/1", O_anything) is identical to dup(1)
(this is the reason /dev/fd was invented).

open("/proc/self/fd/1", O_something) will give you a new file
descriptor, with permissions specified by O_something and with
its own seek offset, if the open() is allowed at all (opening
anything but an ordinary file or directory fails).
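
On Linux the "new descriptor, own seek offset" behaviour is visible
straight from the shell (the temp-file name is illustrative; requires
/proc):

```shell
#!/bin/bash
printf 'first\nsecond\n' > /tmp/offset.$$
exec 3< /tmp/offset.$$
read -r line <&3       # consume 'first' through descriptor 3
echo "fd 3 gave: $line"
cat /proc/self/fd/3    # a fresh open(): starts again at offset 0,
                       # so BOTH lines appear, not just 'second'
exec 3<&-
rm -f /tmp/offset.$$
```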

It is not reasonable for one process to dup() another process's
open file descriptor. If this were allowed, a debugger (say)
that opens and reads a victim process's open file would leave the
seek offset in a position completely inappropriate for the victim.

To allow /proc/self/fd to have the same semantics as /dev/fd,
procfs would have to impose different semantics on opening
/proc/self/fd/<n> from /proc/<pid>/fd/<n> in the case where
<pid> is the process's own process-id. That would be too weird,
so uniform semantics were retained in the implementation.

I don't know how Linux treats /proc/self/fd or /proc/<pid>/fd
w.r.t these matters. Could someone enlighten me?

roger.f...@eng.sun.com
