I would like to know the essential differences between the
Network File System (NFS) and the Distributed File System (DFS).
NFS works by mounting a remote file system (exported by a server)
on client machines. Incremental modifications made to a shared file
are not immediately visible to other clients.
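As I understand it, NFS promises only "close-to-open" consistency:
another client is guaranteed to see a writer's changes only after the
writer closes the file and the reader re-opens it. A small sketch of
the writer side (assuming a hypothetical NFS mount at /mnt/nfs):

    /* writer, run on client A; the path is hypothetical */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/nfs/shared.txt",
                      O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char msg[] = "hello from client A\n";
        if (write(fd, msg, strlen(msg)) < 0)
            perror("write");

        /* only after this close() (and the reader's next open())
           is the new content guaranteed visible on client B */
        if (close(fd) < 0)
            perror("close");
        return 0;
    }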
I think NFS doesn't support migration of files from one server/machine
to another. Are there any other primary differences? Could you list
any true DFS implementations?
Can someone throw some light on these issues?
Vijayan
Well, I'm not an authority on this but nobody else seems to have
answered for several days so I'll give it a shot. Feel free to
correct me if I misspeak.
DFS is the filesystem used by OSF DCE. The design of DFS is based upon
that of AFS (Transarc, CMU, et al.). It is not as widely used as
either NFS or AFS, and AFAIK there are no open-source DFS
implementations. DCE never caught on in a big way, and is considered
dead by a lot of people. There are some large sites that continue to
use it, and it is still sold by several companies and advocated by
OSF's -- the Open (hah!) Software Foundation -- successor, the Open
Group. I'd recommend against going with DCE or DFS unless you know
a fair amount about it and why you need it. You basically have to
have a DCE Kerberos setup and a number of other DCE facilities installed
and running in order to use DFS, which requires some dedicated servers.
It's also tricky to administer if you don't already know DCE.
Transarc, IBM, and SGI all sell implementations of DCE/DFS. I'm sure
there are others.
Note that Microsoft has a distributed filesystem called Dfs that is
not related to DCE's DFS -- there may also be DCE/DFS implementations
for Windows; I don't know.
Coda is another networked filesystem being developed at CMU (where AFS
originated). It's far cooler, it's open source, and it has Windows,
Linux, and BSD support. But it hasn't reached a release state yet, so
it isn't an option for serious use. It is independent of DCE. Coda
allows for disconnected operation, local caching, etc.
AFS, DFS, and Coda are the only networked filesystems I am aware of that
are really designed as highly scalable, high-performance solutions. All
three use Kerberos for authentication and encryption.
For completeness I ought to mention Samba and CIFS, I guess. Samba
competes more in the NFS local-file-sharing category. I don't know much
about CIFS (from Microsoft), but I get the impression that it's at
least got a security mechanism.
It may be worth noting that when I was at CMU in 1995 they were
talking about moving to DFS; they seem to have found enough problems
with it that they're developing their own system (Coda) rather than
using it, though.
The Open Group has DCE information (including DFS) at:
http://www.opengroup.org/tech/dce/
Transarc has white papers, including some covering AFS and DFS, at:
http://www.transarc.com/Library/whitepapers/index.html
Coda info is at:
http://coda.cs.cmu.edu
The Linux DCE FAQ is at:
http://jrr.ne.mediaone.net/FAQ/FAQ.html
Kerberos is at MIT somewhere on ftp.athena.mit.edu or similar.
The Linux DCE FAQ says there is no DFS implementation for Linux. It
also claims that DCE isn't dead, which is debatable. Also check:
http://www.bu.edu/~jrd/FreeDCE/ for an attempt at a complete
open-source DCE implementation.
There are two AFS clients for Linux: a real one from Transarc (and,
alternatively, from Derek Atkins at MIT), and Arla, a GPL'd
implementation.
Coda support is included in Linux 2.2.
--Sumner
Samba is the GPLed SMB server for UNIX - it's a great way
to integrate a UNIX box into a PC-centric network. The latest
development branch is getting pretty good at being a Primary
Domain Controller; three cheers to the Samba team for proving
that hiding the specs is not a viable way to lock people into
proprietary solutions.
Read more about Samba on http://www.samba.org/
--
Stefaan
--
PGP key available from PGP key servers (http://www.pgp.net/pgpnet/)
___________________________________________________________________
Perfection is reached, not when there is no longer anything to add,
but when there is no longer anything to take away. -- Saint-Exupéry
> AFS, DFS, and Coda are the only networked filesystems I am aware of that
> are really designed as highly scalable, high-performance solutions. All
> three use Kerberos for authentication and encryption.
And not a word about RFS. How sad. I'll just go away and cry.
--
John Hughes <jo...@Calva.COM>,
Atlantic Technologies Inc. Tel: +33-1-4313-3131
66 rue du Moulin de la Pointe, Fax: +33-1-4313-3139
75013 PARIS.
> There are two AFS clients for Linux: a real one from Transarc (and,
> alternatively, from Derek Atkins at MIT), and Arla, a GPL'd implementation.
> Coda support is included in Linux 2.2.
Arla is not released under the GPL, but under a BSD-style license.
Arla includes support for FreeBSD, OpenBSD, NetBSD, Linux, Solaris,
AIX, IRIX and Digital UNIX.
It is a sad memory, if you recall how processes that had open files
or a working directory across a broken mount point were killed
with no chance to recover. It was especially fun in combination
with the AT&T DOS Server (samba equivalent) which served several
PCs in the same process. If one user accessed a file that caused
the process to be killed, several other users would lose their
work for no apparent reason...
Les Mikesell
l...@mcs.com
Please don't.
I'm not familiar with it, and I'm sure I'm not alone.
It would be nice to see a precis on it so we may learn something about
it, and whether or not it has merits that are of interest.
--
Now I know someone out there is going to claim, "Well then, UNIX is
intuitive, because you only need to learn 5000 commands, and then
everything else follows from that! Har har har!" (Andy Bates in
comp.os.linux.misc, on "intuitive interfaces", slightly defending
Macs.)
cbbr...@hex.net- <http://www.hex.net/~cbbrowne/linuxkernel.html>
> In article <uf4sm0p...@microlite.calvacom.fr>,
> John Hughes <jo...@AtlanTech.COM> wrote:
> >And not a word about RFS. How sad. I'll just go away and cry.
>
> It is a sad memory, if you recall how processes that had open files
> or a working directory across a broken mount point were killed
> with no chance to recover.
A mere implementation detail. Not at all required by the protocol.
(admittedly you'd get an error if you tried to read a file on a server
that was down, but I see no reason you should get killed).
> On 28 Apr 1999 17:49:58 +0200, John Hughes <jo...@AtlanTech.COM> wrote:
> >And not a word about RFS. How sad. I'll just go away and cry.
>
> It would be nice to see a precis on it so we may learn something about
> it, and whether or not it has merits that are of interest.
It was AT&T's entry in the great distributed filesystem war.
The nicest thing about it was that it was a distributed *Unix*
filesystem, so locking just worked, you could open remote devices, you
could open remote fifos...
It was killed by some bad implementations (they screwed up the byte
ordering in some of them), and it may have been rather slow.
It was stateful.
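To make "locking just worked" concrete: the ordinary fcntl()
advisory-locking calls behaved across an RFS mount the way they do
locally, because the server kept the lock state. A minimal sketch
(the /rfs path is hypothetical):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/rfs/remote/data", O_RDWR);  /* remote file */
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl;
        fl.l_type   = F_WRLCK;    /* exclusive write lock     */
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;
        fl.l_len    = 0;          /* 0 means the whole file   */

        /* blocks until the lock is granted -- under RFS the lock
           state lived on the server, which is also why a broken
           connection was fatal to the client process */
        if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("fcntl"); return 1; }

        /* ... critical section ... */

        fl.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
    }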
>>And not a word about RFS. How sad. I'll just go away and cry.
>
>Please don't.
>
>I'm not familiar with it, and I'm sure I'm not alone.
>
>It would be nice to see a precis on it so we may learn something about
>it, and whether or not it has merits that are of interest.
It was an attempt to preserve Unix semantics to the extent possible
across machines: device nodes actually referred to devices on the
remote host, FIFOs spanned machines, file locking worked, etc.
It actually worked very well except for the quirk that it killed
any processes that might be affected by a failure. It had to
maintain state and had no way to recover it after a disconnect.
The first version I used ran over AT&T's Starlan instead of IP and
had its own name resolution and domain concepts.
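To make the FIFO point concrete: a fifo created on an RFS mount could
be opened on both machines like a local pipe, through the ordinary
API. A sketch of the writing side (paths hypothetical; the reader on
the other machine just open()s the same path):

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/rfs/remote/pipe"; /* node lives on the server */
        if (mkfifo(path, 0644) < 0)
            perror("mkfifo");   /* may already exist; carry on */

        /* blocks until a reader on the other machine opens it */
        int fd = open(path, O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }

        const char msg[] = "hello across the mount\n";
        write(fd, msg, strlen(msg));
        close(fd);
        return 0;
    }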
Les Mikesell
l...@mcs.com
>> In article <uf4sm0p...@microlite.calvacom.fr>,
>> John Hughes <jo...@AtlanTech.COM> wrote:
>> >And not a word about RFS. How sad. I'll just go away and cry.
>>
>> It is a sad memory, if you recall how processes that had open files
>> or a working directory across a broken mount point were killed
>> with no chance to recover.
>
>A mere implementation detail. Not at all required by the protocol.
>
>(admittedly you'd get an error if you tried to read a file on a server
>that was down, but I see no reason you should get killed).
It was also a rather strange idea to let device nodes sort-of work
when ioctl()s between machines of different CPU types (like 3B2/Intel)
couldn't possibly be done correctly. Having FIFOs that spanned machines
was kind of neat though.
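The ioctl() problem is structural: the third argument is an opaque
pointer whose size, layout, and byte order only the device driver
knows, so a transport can't marshal it in general. A sketch, using
TIOCGWINSZ as an arbitrary example:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        struct winsize ws;  /* four unsigned shorts, host byte order */
        if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) < 0) {
            perror("ioctl");
            return 1;
        }
        /* a big-endian 3B2 and a little-endian Intel box lay these
           bytes out differently; a transport that just copies them
           across delivers garbage, and nothing in the call says how
           (or whether) to byte-swap the argument */
        printf("%u rows x %u cols\n", ws.ws_row, ws.ws_col);
        return 0;
    }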
Les Mikesell
l...@mcs.com