
Extended file system on UNIX 4.2/4.3 BSD


Giovanni Aliberti

Dec 16, 1985, 3:19:11 PM

Has anyone done any work on a UNIX extended file system?
The minimal `feature' of such a file system would
be to allow individual file access via pathnames.

For example; from machine A, one would cat a file on machine B
with:
cat A:/usr/jon/.login
or
cat /dev/net/A/usr/jon/.login


A bit similar to `rcp', except that open(), close(), read(), write() ...
would be implemented as a `DEVICE' at the kernel level.

Any ideas or suggestions?

Giovanni

Chuq Von Rospach

Dec 17, 1985, 1:20:52 AM

> Has any one done any work on a UNIX extended file system ??
> The minimal `feature' of such a file system, would
> be to allow individual file access, via pathnames.
>
> For example; from machine A, one would cat a file on machine B
> with:
> cat A:/usr/jon/.login
> or
> cat /dev/net/A/usr/jon/.login

The Purdue people put together something called Ibis that moved the
<host>:<file> syntax down to the library level. It was slow and relatively
flaky, but I think someone was looking at moving it into the kernel.

On more transparent access, you can look at the Newcastle Connection
(V7 based), which uses a superroot scheme, or at NFS, which was developed by
Sun and is spreading out into the rest of the world. NFS is transparent -- you
mount remote directories onto the local system with the 'mount' call, and from
then on you don't care where a file comes from. Since I work on NFS at Sun I
don't want to turn this into a commercial -- if you're interested in learning
more, drop me a line.
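
As a minimal sketch of what "transparent" means in practice -- assuming a
remote directory has already been mounted at a hypothetical mount point
/n/servB -- an unmodified program just uses the ordinary system calls;
nothing in the code knows or cares that the file is remote:

    /* Sketch only: /n/servB/usr/jon/.login is a hypothetical pathname on an
     * NFS-mounted directory.  The point is that no new system calls are
     * involved -- open/read/close behave exactly as for a local file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[1024];
        ssize_t n;
        int fd = open("/n/servB/usr/jon/.login", O_RDONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(1, buf, n);               /* copy to stdout, like cat */
        close(fd);
        return 0;
    }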

chuq
--
:From catacombs of Castle Tarot: Chuq Von Rospach
sun!ch...@decwrl.DEC.COM {hplabs,ihnp4,nsc,pyramid}!sun!chuq

Power ennobles. Absolute power ennobles absolutely.

Doug Kingston

Dec 18, 1985, 10:03:22 AM

The answer is that there is ongoing work on a remote file system
for 4.3BSD. I am currently about to alpha-test a remote filesystem
whose development started at BRL with help from Dan Tso at Rockefeller
University and has been continued here at CWI (mcvax). The system
uses a kernel-mode client and a user-mode server.

The RFS is completely transparent to the client system. No user mode
changes are required except to replace the mount/umount programs and
recompile the kmem readers (due to changes in some data structures).
It supports remote chdir, remote core dumps, and all other 4.{2,3}
filesystem related system calls. Remote file systems are currently
mounted just as regular filesystems are mounted. This mechanism may
be augmented to happen automatically later. Authentication is currently
done via the hosts.equiv/.rhosts mechanism. Other security measures can
easily be implemented in the user-mode server.

Current performance, from a file on one VAX-11/750 to /dev/null on another
750, is about 30 Kbytes per second. This should improve, but it will never
exceed that of rcp and should remain slightly lower due to the added overhead
of true RPC over the TCP connection.

Now the good news: the CWI/BRL RFS will be made available on mod.sources
when it is sufficiently stable. I expect this to be just after
4.3BSD is released, but not before the end of January. I will be sending
diffs of the altered files plus the new files I wrote. The RPC protocol has
been designed to map very closely to the 4.2BSD system calls and has
provisions for negotiating byte order, directory format, and other machine-
and OS-dependent details. It currently uses TCP but could quite easily run
over other reliable transport systems, either STREAM or DGRAM;
RDP would be a prime choice.

It should port easily to 4.2BSD and 4.2BSD derived systems.

When the system is about to become available, a note will be sent to
this newsgroup (amongst others). If you have specific inquiries about
this, please address them to me personally: d...@mcvax.uucp.

Cheers and Merry Christmas!
-Doug-

Ronald P. Hughes

Dec 19, 1985, 4:33:13 PM

In response to your inquiry regarding extended file systems under 4.2/4.3:

Here at Integrated Solutions I have implemented TRFS (Transparent Remote File
System) which allows access to remote files. It is implemented within the
kernel, so that existing programs can make use of it without recompilation
or relinking. Anywhere you can specify a filename or pathname, you can now
specify a remote pathname of the form "/@machinename/pathname", where
pathname is any arbitrary pathname on the remote machine. For instance,
you can "cat /@bert/etc/passwd" to look at a remote file, or do something like
"vi /@bert/usr/demo/hello.c" to examine (and/or modify) a particular file.
If you want to browse, you can always "cd /@bert/usr/demo; ls -l; vi *.c".
If you don't like the remote pathname syntax, try "ln -s /@bert /B". Since the
target of a symbolic link can be a remote pathname, remote filesystem mounting
is not needed. In fact, there are NO new system calls associated with TRFS.
All the standard protection mechanisms apply. Remote files can be locked,
and remote devices can be accessed. One of our customers discovered that
after "ln -s /@bigmachine/dev/rmt0 /dev/rmt0" he could invoke tar in the
conventional manner to extract files from a remote tape drive.

My apologies to those who are offended by any kind of hype on the network.
Further inquiries should be directed to me at pyramid!isieng!ron, and flames
should be directed to /@bert/dev/null.

Ronald P. Hughes Integrated Solutions pyramid!isieng!ron (408)943-1902

David Parter

Dec 20, 1985, 3:54:43 AM

in <1...@isieng.UUCP>, Ronald P. Hughes says:

> In response to your inquiry regarding extended file systems under 4.2/4.3:
> Here at Integrated Solutions I have implemented TRFS (Transparent Remote File
> System) which allows access to remote files. ...
> ... Anywhere you can specify a filename or pathname, you can now
> specify a remote pathname of the form "/@machinename/pathname", where
> pathname is any arbitrary pathname on the remote machine. ...
> ... In fact, there are NO new system calls associated with TRFS.
> All the standard protection mechanisms apply.

how do you resolve remote-access permissions? Is root@a also root@b?
is david@a also david@b?

david
--
--------
david parter
University of Wisconsin Systems Lab

uucp: ...!{allegra,harvard,ihnp4,seismo, topaz}!uwvax!david
arpa: da...@rsch.wisc.edu

Jim Rees

Dec 21, 1985, 3:17:05 PM

AT&T RFS implements unix file system semantics exactly, at the expense of not
being stateless and not caching data in the client. NFS has a stateless
server at the expense of unix file system semantics. In case it isn't
obvious, the big advantage of a stateless server is that it simplifies
recovery after machine or network failure.

> Besides, bet a dollar the people who know from
> both groups thought it was a bit misguided often (written by Apollo
> folks tho the intent was sincere.)

You owe me a buck. The only complaint the AT&T folks had was over a typo
(the RFS mount command came out looking like the NFS mount command).

Actually, I thought we were remarkably restrained about plugging the
Apollo file system. It has the best caching scheme, but falls down on
unix semantics and heterogeneity. It doesn't require you to mount the
other disks on the network; they are all automatically available, always.
The AT&T folks don't consider that an advantage, but then they haven't
tried to put together a 1000-node network yet.

Wendy Thrash

Dec 26, 1985, 8:18:33 PM

In <9...@brl-tgr.ARPA>, bzs%boston...@csnet-relay.arpa (Barry Shein) writes:

> Chuq hesitated to overview the SUN NFS for fear of being accused
> of commercialism....
> Given the fact that SUN has bent over backwards to get other vendors
> to adopt the protocol (making specs and code available) it borders
> on silly to feel that this is very commercial....

If SUN has "bent over backwards to get other vendors to adopt the protocol,"
then our experience must be atypical. We have had _great_ difficulty trying
to purchase NFS from them. Have other vendors who compete directly with SUN
run into similar problems?

Post away, Chuq. We can't buy it, but we can enjoy reading about it.
Note: This is not an attack on Chuq; he has been as helpful as he could.
Unfortunately, there is a bucket of glue somewhere in the pipeline.

John Gilmore

Dec 30, 1985, 5:34:01 AM

In article <10...@brl-tgr.ARPA>, gw...@brl-tgr.ARPA (Doug Gwyn) writes:
> AT&T's RFS, I was told, treats a network link going down the same
> as it would a disk going off-line; there will be an error returned
> from any subsequent attempt to do I/O to the inaccessible file.
> The obvious alternative to I/O errors when a net link goes down is
> to block processes doing remote file I/O over the link until it
> comes back up; this is probably unwise for record locking systems.

The Sun NFS provides both options when a link or machine goes down. If
you have mounted the file system "hard", then it blocks I/O ops until
it comes back. If you mount "soft", it retries a few times and then
returns an error code. I tended to mount non-critical stuff soft,
e.g. my net.sources archives, so that if I touched them while the server
was down I wouldn't hang with unkillable processes. For your root
partition you tend to want a hard mount...

> Note that full support for UNIX file system semantics is a crucial
> issue for AT&T UNIX System V systems, which support record locking.

Note that 4.2BSD also has file locking support, and that it doesn't work
on NFS, and that so few programs break because of this that it's not
worth mentioning. How many things really use Sys V file locking?
Certainly not all the Unix utilities that remain unchanged since V7.
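
For reference, the 4.2BSD facility being discussed is flock(2), which is
purely advisory; per the above, it is simply not honored across NFS. A
minimal sketch of local use:

    /* Minimal sketch of 4.2BSD-style advisory locking with flock(2).
     * Cooperating processes must all call flock(); a process that never
     * asks for the lock is not stopped, and (per the discussion above)
     * the lock is not honored across NFS. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("spool.lock", O_RDWR | O_CREAT, 0644);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (flock(fd, LOCK_EX) < 0) {       /* block until we hold the lock */
            perror("flock");
            return 1;
        }
        /* ... update the file protected by the lock ... */
        flock(fd, LOCK_UN);                 /* release */
        close(fd);
        return 0;
    }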

Note also that a serious file locking mechanism on a network must provide
a way for a user program to be notified that the system has broken its lock.
This situation occurs when a process locks a file on another machine,
and a comm link between the two machines goes down. You clearly can't
keep your database down for hours while AT&T (grin) puts your long line
back in service, so the lock arbiter reluctantly breaks the lock. (It
can't tell if your machine crashed or whether it was just a comm
line failure anyway.) Now everybody can get at the file OK, but when the
comm link comes back up, the process will think it owns the lock and
will muck with the file. So far nobody has designed a mechanism to tell
the process that this has happened, which means to be safe the system must
kill -9 any such process when this happens (e.g. it must make it *look*
like the system or process really did crash, even though it was just a
comm link failure). I'm not sure how you even *detect* this situation
though.

This never happened on single machines with file or record locking because
when the kernel crashes, it takes all the user processes with it, so
when it comes back up, they won't be around to munge the file.

Sun (Jo-Mei Chang) is doing some research on how to have the lock
manager know within 30 seconds or so that your host has gone down (so
it can break the lock), but last time I heard, her scheme relied
heavily on broadcast or multicast packets, and gets very inefficient as
soon as you start doing serious traffic thru a gateway or a
non-broadcast network. And even if they implemented the System V file
locking standard using such a lock manager, that doesn't solve the
above problem.

Rick Ace

Dec 30, 1985, 10:24:52 AM

> > Chuq hesitated to overview the SUN NFS for fear of being accused
> > of commercialism....
> > Given the fact that SUN has bent over backwards to get other vendors
> > to adopt the protocol (making specs and code available) it borders
> > on silly to feel that this is very commercial....
>
> If SUN has "bent over backwards to get other vendors to adopt the protocol,"
> then our experience must be atypical. We have had _great_ difficulty trying
> to purchase NFS from them. Have other vendors who compete directly with SUN
> run into similar problems?
>
> Post away, Chuq. We can't buy it, but we can enjoy reading about it.

I'm not sure if you'd *want* to buy it. Our site (NYIT) tried to purchase
the NFS source from Sun under an educational license agreement. Sun
sent us licensing paperwork for the NFS software. Among other things,
we were asked to supply the names of five people who would be working
with the NFS software, along with their signatures, home addresses, and
(dig dis) Social Security numbers. Real fast, I got on the blower to
our counsel, who found the request to be extraordinary and somewhat
suspicious; he advised us not to comply.

And we're not even a competitor :-).

Rick Ace
Computer Graphics Laboratory
New York Institute of Technology
Old Westbury, NY 11568
(516) 686-7644

{decvax,seismo}!philabs!nyit!rick

Oleg Kiselev

Dec 31, 1985, 3:36:06 PM

In article <10...@brl-tgr.ARPA> lcc.r...@locus.ucla.edu (Richard Mathews) writes:
>I do not know when the LOCUS operating system will be publicly available.
I hear there have been a number of problems with Locus running at UCLA.
There have been grumblings about reliability (frequent crashes), high overhead,
and the general awkwardness of the system (which might have been caused by
the security mods and attitudes of an educational institution).

My experience with Locus was that as an idea it's great: it IS totally
transparent. The downside is its speed - you pay a heavy toll in network
sluggishness for all those nifty features.
--
Disclaimer: I don't work here anymore - so they are not responsible for me.
+-------------------------------+ Don't bother, I'll find the door!
| STAY ALERT! TRUST NO ONE! | Oleg Kiselev.
| KEEP YOUR LASER HANDY! |...!{trwrb|scgvaxd}!felix!birtch!oleg
--------------------------------+...!{ihnp4|randvax}!ucla-cs!uclapic!oac6!oleg

Ron McDaniels

Jan 3, 1986, 2:56:38 PM

We obtained a license for NFS from Sun with absolutely no difficulty. Moreover,
I would think that we are certainly more likely to be considered a competitor
to Sun than Integrated Solutions. There must be more to the story than has
been told so far. . .


R. L. (Ron) McDaniels

CELERITY COMPUTING . 9692 Via Excelencia Way . San Diego, California . 92126
(619) 271-9940 . {decvax || ucbvax || ihnp4 || philabs}!sdcsvax!celerity!ron

g...@hpcnoe.uucp

Jan 6, 1986, 4:17:00 PM

Hewlett-Packard has had a Remote File Access (RFA) protocol on the market
for its HP9000 Series 500 systems since 1983, and it is now available
on the Series 300 (680xx-based systems). RFA maps MOST HP-UX file system
calls (section 2) into remote calls if the file specified resides on a remote
file system. RFA requires the user to "login" to the remote system prior to
using it (this can be done automatically in a ".login" file). All accesses to
files on the remote system depend on the permissions associated with the login
id used for that machine.

HP-UX RFA is not public domain software.

George Feinberg (hpfcla!g_feinberg@HPLABS)

Todd Brunhoff

Jan 6, 1986, 4:17:16 PM

I have posted to mod.sources complete software and documentation for
installation, maintenance and adjustment of RFS, a public-domain,
kernel-resident distributed file system written at Tektronix Computer
Research Laboratories* by myself in partial fulfillment of the
master's degree program at the University of Denver. It was designed
to provide complete transparency with respect to file access and
protections for all programs, whether they use local or remote files and
directories. It has been installed on VAX 4.2BSD and 4.3BSD UNIX,
Pyramid 4.2/5.0 UNIX (version 2.5), and on a Tektronix internal
proprietary workstation called Magnolia. The instructions are
organized in a way that keeps all changes separate from your standard
sources, in the hope that this will encourage sites to try the installation.

The version posted is release 2.0+ (that is, 2.0 plus bug fixes, easier
installation with patch, etc.), for those of you who may have heard of it. I
mention this also because I am told that plain old 2.0 will appear on
the 4.3BSD tape under contributed software from the U of Colorado at
Boulder.

Before you ask, it is "stateful". It is completely implemented except
for ioctl() and select(). This includes the file locking facility,
flock(2), which has been discussed here recently. The raw speed of
performing a read(2) or write(2) type of system call remotely is about
25% - 40% of local speed. Typically, this tends to make programs
run at about 50% - 90% of normal speed, because most programs spend the
majority of their time computing or doing local I/O (like displaying to a
terminal). This makes it roughly twice as fast as rcp. The bulk of
the use it sees here in teklabs is among 50 or so Magnolias, an 11/780
and an 11/750. Its applications here are largely:
- distributed program development
- distributed libraries for local program development
- news
- Rand MH mail
- distributed font files for troff
- distributed man pages
And just about any other application that enables us to move large amounts
of data off the workstations and onto the VAX. The nicest win is
distributed program development, where you keep all the source for a project
on the VAX and each user (having a Magnolia) uses the source via RFS.

I am happy to answer questions and redistribute bug fixes. I hope you
find this useful.


Todd Brunhoff
toddb%c...@tektronix.csnet
decvax!tektronix!crl!toddb

* RFS should not be confused with another, completely different (but
excellent) implementation from Tektronix available on the 6000-series
workstation, called DFS, which was done by a separate product group. RFS
was designed and written strictly by the author of this posting at about
the same time as DFS, and draws none of its implementation details from
DFS. RFS is public domain, while DFS is proprietary.

Ed Gould

Jan 6, 1986, 4:20:32 PM

In article <1...@nyit.UUCP> ri...@nyit.UUCP (Rick Ace) writes:
>
>I'm not sure if you'd *want* to buy it. Our site (NYIT) tried to purchase
>the NFS source from Sun under an educational license agreement. Sun
>sent us licensing paperwork for the NFS software. Among other things,
>we were asked to supply the names of five people who would be working
>with the NFS software, along with their signatures, home addresses, and
>(dig dis) Social Security numbers. Real fast, I got on the blower to
>our counsel, who found the request to be extraordinary and somewhat
>suspicious; he advised us not to comply.

When we had to sign a similar agreement with Sun, some of us (myself
included) refused to give our SSNs. Sun said that it was OK.
(Note that it's *illegal* even to ask for an SSN in this context without
a disclaimer that the SSN is not required.) The home address part was
suspicious but not enough to bother us; asking who would be working on the
code seemed reasonable given that they (Sun) wanted more *real*
protection than AT&T gets from their trade secret license.

--
Ed Gould mt Xinu, 2910 Seventh St., Berkeley, CA 94710 USA
{ucbvax,decvax}!mtxinu!ed +1 415 644 0146

"A man of quality is not threatened by a woman of equality."

Duncan Gibson

Jan 8, 1986, 6:47:58 AM

In article <30...@sun.uucp> ch...@sun.uucp (Chuq Von Rospach) writes:
>
>On more transparent access issues, you can look at the Newcastle connection
>(V7 based) which uses a superroot scheme or NFS, which was developed by Sun

I don't really know much about NFS, but the Newcastle Connection provides
the means of producing a multi-machine Un*x system, which allows remote file
access, remote execution, piping between processes running on different
machines, etc. transparently.

see "The Newcastle Connection, or UNIXes of the World Unite!" by
D R Brownbridge, L F Marshall an B Randell in
Software - Practice and Experience, Vol. 12, 1147-1162 (1982)
--
UUCP: ..!mcvax!ukc!rlvd!drg JANET: d...@uk.ac.rl.vc ARPA: drg%rl...@ucl.cs.arpa

John Gilmore

Jan 9, 1986, 11:18:09 PM

In article <11...@brl-tgr.ARPA>, gw...@brl-tgr.ARPA (Doug Gwyn <gwyn>) writes:
> > Note also that a serious file locking mechanism on a network must provide
> > a way for a user program to be notified that the system has broken its lock.
> > This situation occurs when a process locks a file on another machine,
> > and a comm link between the two machines goes down. You clearly can't
> > keep your database down for hours while AT&T (grin) puts your long line
> > back in service, so the lock arbiter reluctantly breaks the lock. (It
> > can't tell if your machine crashed or whether it was just a comm
> > line failure anyway.) Now everybody can get at the file OK, but when the
> > comm link comes back up, the process will think it owns the lock and
> > will muck with the file. So far nobody has designed a mechanism to tell
> > the process that this has happened, which means to be safe the system must
> > kill -9 any such process when this happens (e.g. it must make it *look*
> > like the system or process really did crash, even though it was just a
> > comm link failure). I'm not sure how you even *detect* this situation
> > though.
>
> I don't see a big problem. There are three possible cases of failure...
> (2) Communication link crashes. (3) Remote system crashes after
> planting a lock. Cases (2) and (3) are the interesting ones, but they
> can be easily handled by simply pinging the locking system when a lock
> conflict occurs. (Various strategies could be used to reduce pinging
> frequency, if desired, but I don't think it would be necessary.) If the
> locker denies knowledge of the lock, then void it locally and proceed.

I don't see how the above proposal solves anything. Take case (2).
The system that contains the data notices a lock conflict. It pings
the system holding the lock. It gets "network not reachable". It
voids the lock and the database is now accessible. OK, but the
database is in an inconsistent state. Maybe when it breaks the lock it
does a database cleanup. OK, now suppose the comm link comes back up.
The system that was out of touch still thinks it holds the lock; it's
been pinging the server trying to get an I/O request in (for example).
When the link comes up, the I/O request will get thru. What does the
server do with this request? If it satisfies it, it has permitted the
database to be changed by someone who doesn't have the lock. It must
reject the request (e.g. a Unix read() or write() call) specifying some
kind of lock failure error code. The application program on the remote
machine thinks it owns the lock. It must be written to go back to the
top of the transaction and try to obtain the lock again, when it gets
this error code. There are no such provisions in the System V locking
facilities. Thus programs written for those facilities will break when
moved onto networks.
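
To make the missing provision concrete, here is a hedged sketch of the shape
an application would need if such a "lock was broken" error code existed.
ELOCKBROKEN, acquire_lock() and do_transaction() are hypothetical names,
which is the point -- System V locking defines nothing like them:

    /* Hypothetical sketch only: ELOCKBROKEN, acquire_lock() and
     * do_transaction() do not exist in System V locking; they stand in
     * for a "your lock was broken" error and the application's own code.
     * The structure shows the retry-from-the-top loop a program would
     * need if the server could reject I/O after breaking a lock. */
    #include <errno.h>

    #define ELOCKBROKEN 10001            /* hypothetical error code        */

    extern int acquire_lock(int fd);     /* hypothetical: obtain the lock  */
    extern int do_transaction(int fd);   /* hypothetical: the guarded I/O;
                                            returns -1 and sets errno      */

    int run_transaction(int fd)
    {
        for (;;) {
            if (acquire_lock(fd) < 0)
                return -1;               /* could not get the lock at all  */
            if (do_transaction(fd) == 0)
                return 0;                /* committed while holding lock   */
            if (errno != ELOCKBROKEN)
                return -1;               /* some unrelated I/O failure     */
            /* the lock was broken during a partition: start over */
        }
    }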

How can I make this clearer? I'd be glad to be convinced that there is
no problem, but I think there really is...

Doug Gwyn <gwyn>

Jan 11, 1986, 10:21:05 PM

> I don't see how the above proposal solves anything. Take case (2).
> The system that contains the data notices a lock conflict. It pings
> the system holding the lock. It gets "network not reachable". It
> voids the lock and the database is now accessible. OK, but the
> database is in an inconsistent state. Maybe when it breaks the lock it
> does a database cleanup. OK, now suppose the comm link comes back up.
> The system that was out of touch still thinks it holds the lock; it's
> been pinging the server trying to get an I/O request in (for example).
> When the link comes up, the I/O request will get thru. What does the
> server do with this request? If it satisfies it, it has permitted the
> database to be changed by someone who doesn't have the lock. It must
> reject the request (e.g. a Unix read() or write() call) specifying some
> kind of lock failure error code. The application program on the remote
> machine thinks it owns the lock. It must be written to go back to the
> top of the transaction and try to obtain the lock again, when it gets
> this error code. There are no such provisions in the System V locking
> facilities. Thus programs written for those facilities will break when
> moved onto networks.

The model I have in mind requires the owner of the actual file
(where the data is stored) to be the master of the file's locks.
Whenever it has to communicate with any slave about the locked
region, if there is a problem it cancels that slave's lock.
Similarly, each time a slave accesses a locked region, it tells
the master about it, and in case of disagreement about the state
of the locks, the master so informs the slave, which must correct
its local records.

Clearly, this can (as you say) make locks go away if the comm link
is flaky, but you should be doing this on top of virtual circuits
anyway, so that long-lasting communication flakiness is as severe a
problem as losing a disk (something that happens a lot around here).

I agree with your analysis of the necessary actions on the slave
when a lock breaks. The slave is either trying to free a lock
(which is already done by the comm link breakage) or is trying to
do I/O on the locked region, which should return an error if the
master and slave do not agree as to the status of the lock.

Are Gilmore and I the only ones who care about this?
Does anyone have an elegant solution to the problem?
(Disallowing locks is not elegant!)

Jack Jansen

Jan 14, 1986, 11:24:25 AM

[The problem:
- Program on machine A has lock on file on B.
- Link between A and B goes down.
- B decides A has crashed, breaks the lock, and does cleanup.
- Link comes back up.
]
In my opinion, the best thing to do is to have I/O operations done
by the program that still thinks it holds the lock return EIO.

This is a condition that is understood by unix programs (for instance,
when a disk goes offline), and if it is crucial to the application that
some action be taken when an operation is only partially completed, it
will catch the error and do something intelligent.
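
A minimal sketch of what "catch the error and do something intelligent"
looks like on the application side; the pathname is hypothetical, and EIO
is the code proposed above:

    /* Sketch: how a program would notice the broken-lock condition if the
     * kernel reported it as EIO, as proposed above.  The pathname is a
     * hypothetical remote file. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int append_record(char *buf, int len)
    {
        int fd = open("/n/servB/db/journal", O_WRONLY | O_APPEND);

        if (fd < 0)
            return -1;
        if (write(fd, buf, len) < 0) {
            if (errno == EIO)               /* lock broken or link lost */
                fprintf(stderr, "lost lock or link: redo the transaction\n");
            else
                fprintf(stderr, "write: %s\n", strerror(errno));
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }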
--
Jack Jansen, ja...@mcvax.UUCP
The shell is my oyster.

Nathaniel Mishkin

Jan 15, 1986, 10:29:22 AM

Here's how we (Apollo) deal with locking. It's not perfect, but in
practice (e.g. on our internetwork of 1000+ workstations on 7 networks)
it works quite well:

There are two nodes associated with every lock: the home node (i.e.
the node the file lives on), and the locking node (i.e. the node that
the process requesting the lock is running on). The existence of a lock
is registered on both the home node and the locking node. However, the
information on the home node is the one that really matters to the world,
since every lock request for files on that node come to it, not any other
locking nodes. (Obviously, sometimes the home node and the locking node
can be identical, but this case is trivial, so I won't consider it.)

Locks are held in volatile storage (i.e. virtual memory, not disk) and
hence evaporate when a node goes down. If a node is explicitly shut
down, many locks will be unlocked by virtue of processes holding locks
being killed. Of any remaining locks, those held BY the node shutting
down are force-unlocked. Then the node broadcasts an "unlock all" message
to all other nodes. Recipients of such a message force-unlock all locks
held BY the recipient ON files on the node that sent the message.

When a node boots, it broadcasts an "unlock all" message too.

When a node N locks a remote file, it sends a message to the remote (home)
node asking if it is OK to lock. If the home node says "no, because
process P on node M has the file locked", N sends a message to M asking
if he really has that file locked. If M says he doesn't have the file
locked, N tells the home node to force-unlock the file, and then N tries
to lock the file again. This strategy is helpful in case a node has
missed an "unlock all" message. (Since broadcasts aren't propagated
across bridges between networks, this can happen.) Note that if node
M is unreachable, this scheme doesn't help.

So what do you do if you run into a "bad" case -- internet partition or
crashed node that hasn't been rebooted? Well, someone will try to open
a file (and try to get a lock since all opens must be accompanied by
locks) but will get the error "object is in use". We supply tools for
USERS to see who (what node and process) has the lock. The user can
then decide whether it's safe to forcibly break the lock (there's another
tool to do that). It's not a perfect scheme, but considering that people
run Unix systems all the time with NO locking (even in the local case),
it's clearly a step up.
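
Purely as a restatement of the verification step above, here is a hedged
C-style sketch; every function name is a hypothetical placeholder for a
message exchange, not an Apollo interface:

    /* Hypothetical sketch of the lock-verification step described above.
     * ask_home_to_lock(), ask_holder_still_holds() and force_unlock_at_home()
     * stand in for message exchanges; none of them is a real Apollo call. */

    struct holder { int node; int pid; };

    extern int ask_home_to_lock(int home, char *path, struct holder *h);
                                         /* 0 = granted, -1 = h holds it    */
    extern int ask_holder_still_holds(struct holder *h, char *path);
                                         /* 1 = yes, 0 = no, -1 = no answer */
    extern void force_unlock_at_home(int home, char *path);

    int lock_remote(int home, char *path)
    {
        struct holder h;

        while (ask_home_to_lock(home, path, &h) < 0) {
            int ans = ask_holder_still_holds(&h, path);
            if (ans == 1)
                return -1;       /* genuinely in use: "object is in use"    */
            if (ans < 0)
                return -1;       /* holder unreachable: leave it to a user  */
            force_unlock_at_home(home, path);  /* stale lock: clear, retry  */
        }
        return 0;                /* lock now registered at home and locally */
    }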

-- Nat Mishkin
Apollo Computer
apollo!mishkin

Brandon Allbery

Jan 17, 1986, 6:54:21 PM

Quoted from <67...@boring.UUCP> ["Re: File locking on networks"], by ja...@boring.UUCP (Jack Jansen)...
+---------------
| In my opinion, the best thing to do is to have I/O operations done
| by the program that still thinks it holds the lock return EIO.
+---------------

Maybe I'm missing something... but this seemed obvious to me. Moreover, the
process should still get EIO even after the node comes back up, since the lock
will have evaporated by then anyway (most likely). Of course, if what goes
down is the line between them, it's not certain how to detect it on the node
with the locked file, unless a program on that node tries to use that line...

Doesn't this apply to disk packs as well? I.e. the disk pack goes offline and
all locked files on that pack should be unlocked... and processes with open
files on that disk pack should get EIO even after the disk pack comes online
again. Or, again, am I missing something?

--Brandon
--
From the Heart of the Golden Apple...

..decvax!cwruecmp!ncoast!tdi2!brandon (cwruecmp is Case.CSNET, O ye Arpanauts)
(..ncoast!tdi2!root for business) 6615 Center St. #A1-105, Mentor, OH 44060
Phone: +01 216 974 9210 CIS 74106,1032 MCI MAIL BALLBERY (part-time)
