
Draft of How to Set Up a "Dataless" Environment Under OSF/1

Jon Forrest
Aug 4, 1993, 4:02:30 PM
[ This is the first draft of a document that describes how to
set up a dataless environment under OSF/1. Please read it
and let me know if anything doesn't make sense. Better yet,
try creating such an environment yourself. It should take about
1.5 hours to take a workstation out of its box and have it
running as a fully functional machine.

After I get enough comments and fix any problems I'm going to get this
into the OSF/1 FAQ. Meanwhile, remember that this is a first draft, so
there could be mistakes. I have followed my own advice and installed
several machines this way, so I know things basically work.

Jon Forrest (for...@cs.berkeley.edu) 510-643-6764 ]

-

Setting Up A "Dataless" Environment under OSF/1

The following is a description of how to remote mount a /usr
filesystem on an OSF/1 machine. Such a machine is called a
"dataless" machine, which is actually an inaccurate description,
but I'll stick with it anyway. The key point is that the machine
in question mounts its /usr filesystem from a remote server.

Although this description is worded as if an OSF/1 server is required,
in reality the /usr filesystem can be served from any kind of machine on
which a complete copy of /usr from an OSF/1 machine exists.

This document is written assuming that it is being read by an
experienced Unix system manager. Not every detail behind every
explanation is included.

This is based on information posted on USENET by Norman
Gaywood, Marco Luchini, and Arrigo Triulzi, as well as my own research.
Nobody from DEC contributed to this.

Before I begin I want to say that it is a sad state of affairs that DEC
doesn't support a dataless environment under OSF/1, especially when
doing so is so easy. The only reason I can see for not supporting this
is that a dataless environment can't easily be upgraded when a new
release of system software comes out, at least not using the
current installation scripts. However, I don't think it would be
all that hard to change this.

There is a chicken-and-egg problem in mounting a remote /usr
since some of the commands needed to bring up a network and to
do remote mounting are located on /usr. What do you do if you're trying to
mount /usr and the commands you need to mount /usr are on /usr?
It turns out that there is only one important command that falls into
this category, and it can be copied from your distribution medium.

In writing this description I'm making the following assumptions:

1) Mounting the remote /usr on top of a local /usr isn't a
good idea because it wastes space on the local disk and because
it violates one of the reasons for doing the remote mount in the
first place - that there should be just one copy of everything under /usr.
Indeed, once you've set up the remote /usr, you can reinitialize
/usr on your local disk and let it be used for storing private data.

When you install OSF/1, follow the advice in the installation manual to
make /usr large enough to hold all the optional subsets and install
them all when you are asked.

2) The remote /usr filesystem will be mounted read-only locally. One thing
the OSF/1 people (and Ultrix people) did right was to identify
everything on a traditional /usr filesystem that is machine specific or
needs to be written during normal system operation, and to put this
collection under the /var filesystem if you choose to do so during system
setup. This means that every OSF/1 system, both client and server, must be
configured with a separate /var filesystem.

3) When upgrade time comes around you accept the fact that you'll
have to do a complete installation. You won't be able
to do the incremental upgrade. These days, with CDROMs as a common
distribution medium, a complete installation takes much less time
than it used to when tapes were ubiquitous.

There are two sets of modifications you have to make - one to
the NFS server machine that will be exporting /usr, and the other
to each client machine doing a remote mount of /usr. I'll show these
separately.

On The Server:

1) Make sure the server is configured to have /var as a separate
filesystem. If you didn't do this when the server was first set up, you're
out of luck and you'll have to reconfigure it from scratch.

2) Add the following entry to the /etc/exports file.

/usr -root=0 -ro [hosts]

where [hosts] is a list of clients that are permitted to mount /usr
from this server. Ordinarily the '-root=0' would be dangerous, but since
you're exporting /usr read-only, there is no danger. The reason '-root=0'
is necessary is that /usr/sbin/siainit needs it.
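
For example, a server exporting /usr to two clients would have an entry
like this (the hostnames 'client1' and 'client2' are placeholders for
your own client names):

/usr -root=0 -ro client1 client2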

3) If you run xdm on the client machines, create the directory
/var/X11/xdm and modify /usr/lib/X11/xdm/xdm-config so that the
first four lines are as follows:

DisplayManager.errorLogFile: /var/X11/xdm/xdm-errors
DisplayManager.authDir: /var/X11/xdm
DisplayManager.pidFile: /var/X11/xdm/xdm-pid
DisplayManager.keyFile: /var/X11/xdm/xdm-keys

The idea here is that any file that is created by a client when
it starts xdm must be placed somewhere under /var.
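
Creating the directory is straightforward (this assumes /var/X11
doesn't already exist):

mkdir /var/X11
mkdir /var/X11/xdm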

On The Client:

1) When installing OSF/1 on the client make sure you select
the System Management option and configure your disk(s) to
have partitions for root, swap, /var, and /usr. Of course,
you can have more partitions than these. On a system with a single
RZ26 I use the following partition table:


        size   offset  fstype  [fsize bsize cpg]
  a:   95760        0  4.2BSD   1024  8192  16   # (Cyl.    0 -  119)
  b:  282492    95760  unused   1024  8192       # (Cyl.  120 -  473)
  c: 2050860        0  unused   1024  8192       # (Cyl.    0 - 2569)
  d:  105336   378252  4.2BSD   1024  8192  16   # (Cyl.  474 -  605)
  e:       0        0  unused   1024  8192       # (Cyl.    0 -   -1)
  f:       0        0  unused   1024  8192       # (Cyl.    0 -   -1)
  g:       0        0  unused   1024  8192       # (Cyl.    0 -   -1)
  h: 1567272   483588  4.2BSD   1024  8192  16   # (Cyl.  606 - 2569)

When you are asked which of the layered products you want to
install, choose MANDATORY SUBSETS ONLY. There is no need for
anything else since the server machine will have a fully populated
/usr.

Make sure the client is configured to have /var as a separate
filesystem. This is critical.

2) Make the following changes to /etc/fstab:

Add

/usr@server /usr nfs ro,soft,intr,bg 0 0

Modify

/dev/rz3h /usr ufs rw 1 2

to

/dev/rz3h /usr ufs xx 1 2

Replacing 'rw' with the invalid option 'xx' prevents the local /usr
partition from being mounted automatically while keeping the entry
around as a record of how the partition used to be used.
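
Putting it together, the relevant lines of a client's /etc/fstab might
look like this (a sketch: the device names and partition letters follow
the partition table above, 'server' stands for your server's name, and
entries such as swap are omitted):

/dev/rz3a / ufs rw 1 1
/usr@server /usr nfs ro,soft,intr,bg 0 0
/dev/rz3h /usr ufs xx 1 2
/dev/rz3d /var ufs rw 1 2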


3) Edit /sbin/rc2 and comment out the lines:

# Just exit if /usr not mounted.
if [ ! -d "/usr/sbin" ]
then
exit
fi

4) Edit /sbin/rc3 and comment out the lines:

# Just exit if /usr not mounted.
if [ ! -d "/usr/sbin" ]
then
exit
fi
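
After editing, the block in both files should look like this:

# Just exit if /usr not mounted.
# if [ ! -d "/usr/sbin" ]
# then
# exit
# fi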

5) Move /sbin/rc3.d/S20nfsmount to /sbin/rc3.d/S02nfsmount.
This will cause /usr to be mounted just after the network is
initialized but before any other commands from /usr are executed.
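
The rename is a single command:

mv /sbin/rc3.d/S20nfsmount /sbin/rc3.d/S02nfsmount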

6) Skip this step if the client is on the same subnet as the server.
Copy the version of routed that comes on the CDROM to /sbin/routed.
To do this, boot OSF/1 from the CDROM. Once the system is up,
run the MAKEDEV command to make your system disk visible. Then,
change directory to /etc and run fsck on the root partition. This
is necessary because the shutdown procedure leaves the root
dirty in the eyes of the mount command. Then, mount the root
partition on /mnt. Once you've done this, copy routed from the
CDROM's /sbin to /mnt/sbin/routed. You can now reboot from your
magnetic disk.
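
As a sketch, assuming the system disk is rz3 (substitute your own disk
name and partition), the sequence after booting from the CDROM would be
something like:

cd /dev
./MAKEDEV rz3
fsck /dev/rrz3a
mount /dev/rz3a /mnt
cp /sbin/routed /mnt/sbin/routed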

You also need to modify /sbin/rc3.d/S12route to change all references
to /usr/sbin/routed into /sbin/routed. You can do this with any
editor. Once you've done this you need to rename S12route so that
it is executed before S02nfsmount. What I did, since I don't
use quotas, was to exchange S12route and S01quota so that I
ended up with S12quota and S01route. Although this worked, I did get
some error messages from S01route complaining about commands like
'head', 'ps', and 'sed' not being found, since /usr hadn't been
mounted yet. You can ignore them.
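
One way to do the edit and the exchange (a sketch, assuming the stock
sed and mv; note that on some setups the files in /sbin/rc3.d are
symbolic links into /sbin/init.d, in which case you may prefer to edit
the underlying script instead):

cd /sbin/rc3.d
sed 's;/usr/sbin/routed;/sbin/routed;g' S12route > S12route.new
mv S01quota S12quota
mv S12route.new S01route
rm S12route
chmod 755 S01route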

This step is necessary because the version of routed that you would
otherwise run has two flaws. The first is that it is located in
/usr/sbin, which means you can't use it to help mount /usr, since the
filesystem it lives on won't yet be mounted. The second is that the
standard routed is linked with shared libraries under /usr, so it
can't run until /usr is mounted. The version of routed from the CDROM
has neither of these problems.

If you're running gated then you're out of luck because I don't
know of a version of gated that comes from DEC that has been
statically linked. [check to make sure the CDROM doesn't also
have this]


7) If you are running xdm, follow step #3 for servers.

8) Reboot. You should see that /usr is mounted from the server
machine and that everything that used to run at startup still runs.
Once you're up in this environment you shouldn't
see any difference between having a local /usr and having the
remote /usr except possibly for slightly slower access times
for files on /usr. We use FDDI a lot here and accessing /usr
over FDDI feels as fast as having a local /usr.

Once you feel confident that this arrangement is working then
you can run 'newfs' on the original /usr filesystem and use
all the space for whatever you want.
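
For example, with the partition table shown earlier, where the old
local /usr was the 'h' partition of an RZ26 on disk rz3 (again,
substitute your own device and disk type):

newfs /dev/rrz3h rz26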


[ The following are ideas that have been suggested by early
readers. I need to investigate them.

1. Move S12route to S01route instead of doing the exchange. More than
one script can start with the same sequence number.

2. Put /var under root?

]
--
Anything you read here is my opinion and in no way represents the Univ. of Cal.
Jon Forrest WB6EDM for...@cs.berkeley.edu 510-643-6764
