
What are the recommended sizes of slices for /var, /tmp, /opt, /usr and /export/home?



gendila

Aug 27, 2020, 12:41:11 AM
I'm planning to install Solaris 10 on two HDDs. On the first HDD, 1 GB
is free, in which I want to create the / partition; on the second HDD, 22 GB
is free, on which I want to install the rest of the slices. So please tell me
how much disk space I should allocate (manual layout) for /var, /tmp,
/opt, /usr and /export/home on the second HDD.
On my P4 machine, I've already installed Slackware and FreeBSD on the
first HDD and left 1 GB for /.

Grant Taylor

Aug 27, 2020, 12:51:52 AM
On 8/26/20 10:41 PM, gendila wrote:
> I'm planning to install Solaris 10 on two HDDs. On the first HDD, 1 GB
> is free, in which I want to create the / partition; on the second HDD, 22 GB
> is free, on which I want to install the rest of the slices. So please tell me
> how much disk space I should allocate (manual layout) for /var, /tmp,
> /opt, /usr and /export/home on the second HDD.

Why not use the free space on the second hard disk and create a ZFS
pool? Then create ZFS file systems for each of the mount points that
you listed.

The ZFS pool will make the size question obsolete.
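A rough sketch of that suggestion is below. This is only an illustration: the pool name "tank" and the disk name c1t1d0 are placeholders, not values from this thread, and with DRYRUN=1 (the default here) it just prints each command instead of running it.

```shell
#!/bin/sh
# Hypothetical sketch of the ZFS-pool approach. "tank" and c1t1d0 are
# placeholder names; set DRYRUN=0 on a real system to actually execute.
run() {
    if [ "${DRYRUN:-1}" = "1" ]; then
        echo "+ $*"          # dry run: print the command only
    else
        "$@"
    fi
}

plan() {
    # One pool over the free 22 GB disk...
    run zpool create tank c1t1d0
    # ...then one dataset per mount point; no fixed sizes needed
    for fs in var opt usr export/home; do
        run zfs create -p "tank/$fs"    # -p creates parents (export)
    done
}

plan
```

Because no dataset gets a fixed size, all of them share the 22 GB, which is why the "how big should each slice be" question goes away.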

> On my P4 machine, I've already installed Slackware and FreeBSD on the
> first HDD and left 1 GB for /.

I probably wouldn't bother with that 1 GB and only focus on the 22 GB.

Grant. . . .
unix || die


Aug 27, 2020, 6:08:48 AM
Thank you. I need that 1 GB on the first HDD; otherwise I can't boot it,
as the OS needs the first partition to install boot records. (Please don't
mind, I know this much because I tried to install Solaris on the second HDD's
22 GB but couldn't boot. My system is multiboot: Slackware and FreeBSD.)

I don't know anything about ZFS, as I've never used it before, so it'd be
risky for me to create ZFS. Please tell me how much space I should allocate
for the aforementioned slices (/var, /opt, /tmp, /usr and /export/home).


Aug 28, 2020, 9:18:27 AM

How are you doing this? Some kind of boot manager or VM/emulator?

It's been a decade since I went through an S10 install, but if someone had a
gun to my head to work on something like this, what I would do is the
minimum install, which I think is just under 1 GB.

Once that is done and everything is under /, just format/fdisk the 22
GB drive and, using /etc/vfstab, link /usr and /var over to it.

But if I remember correctly, those file systems don't need to be defined
separately.

When the installer gets to that point, you can just backspace over
the names (leaving those entries blank), recalculate the size for / (because
of the freed space), and use that for everything.
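For reference, the vfstab relocation step might look like the lines below. The device names (c1d1s0, c1d1s1) are made-up placeholders for slices on the second disk; /usr and /var conventionally use "no" in the mount-at-boot field because the boot scripts mount them before mountall runs.

```
#device            device             mount  FS    fsck  mount    mount
#to mount          to fsck            point  type  pass  at boot  options
/dev/dsk/c1d1s0    /dev/rdsk/c1d1s0   /usr   ufs   1     no       -
/dev/dsk/c1d1s1    /dev/rdsk/c1d1s1   /var   ufs   1     no       -
```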

If it comes out looking something like this:

Part        Tag   Flag    Cylinders       Size           Blocks
  0        root    wm    132 -  8652    65.27GB    (8521/0/0) 136889865
  1        swap    wu      1 -   131     1.00GB    (131/0/0)    2104515
  2      backup    wm      0 -  8652    66.29GB    (8653/0/0) 139010445
  3  unassigned    wm      0                  0    (0/0/0)             0
  4  unassigned    wm      0                  0    (0/0/0)             0
  5  unassigned    wm      0                  0    (0/0/0)             0
  6  unassigned    wm      0                  0    (0/0/0)             0
  7  unassigned    wm      0                  0    (0/0/0)             0
  8        boot    wu      0 -     0     7.84MB    (1/0/0)         16065
  9  unassigned    wm      0                  0    (0/0/0)             0

You are in the ballpark.

All those (/usr, /opt, /export/home and the others) will just be part of the
first partition or slice (s0).

But what I don't get is: what exactly is the point of this exercise? S10 is
a roughly 10-year-old operating system for which you aren't going to have any
support. Except for the CSW stuff (which may or may not work, depending on
the version), there are little to no archives of software anywhere. There is
no apt-get, yum or ports to magically bring in software. Even much of the CSW
stuff is long in the tooth and not all that current.

Even if you get it to boot and are able to login, then what?


Richard L. Hamilton

Aug 31, 2020, 8:54:06 AM
ZFS is usually easier; but if you don't have two drives (to mirror the
root storage pool) it may not be worth it. It creates one or more
storage pools out of possibly multiple disks (or disk partitions),
which have to be mirrors for the one it boots from, but can be a
variation of RAID 5 for other pools. Or other arrangements too, but
you really want a redundant arrangement; that way you can just replace
a failed drive and keep going. And there's never an fsck needed (or even
possible): the way ZFS handles updates eliminates the need for it.

As I recall, if you're not using ZFS, on x86 Solaris typically uses a single
partition, and creates its own slices within that. So you still only need
one disk partition big enough for all of it. The installer will do or
prompt for the rest. You could of course have more than one partition, such
as if you wanted to mirror them with Disk Suite (or Solaris Volume Manager,
or whatever it's called now).

/tmp is by default tmpfs (virtual memory, not directly disk; although if
you are short on RAM, you should at LEAST be generous with swap). It's not a
bad idea to give /tmp and other tmpfs filesystems (/var/run) a size=# (like
512m) boot option, so they can't run the system out of virtual memory. But
anyway, you likely will not need a separate partition or slice for /tmp.
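For reference, on a UFS root that size= cap can go on the tmpfs line for /tmp in /etc/vfstab (512m here is just the example figure from above; /var/run is mounted by the boot scripts rather than from vfstab):

```
swap    -    /tmp    tmpfs    -    yes    size=512m
```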

As to sizes, it entirely depends on how much you install, how much data
you'll have, how much extra software (opencsw packages, for example). Mine
are probably NOT a useful example, since (a) I'm using zfs, and (b)
most of my Solaris is SPARC with lots of stuff on it; the rest is x86 VMs
with very little on them (just for testing, usually down), and I don't
recall whether I did a full install or not.

Still, here's one of my x86 VMs; remember, it's zfs (which doesn't
make that much difference on total space, but is more flexible about
where the space can be used); looks like I have about 14.4GB in use total
(from the zfs pool stats, not shown). Looks like I have at least most of
the software installed (including desktop stuff, which is big, and some
opencsw packages, and the Sun C compiler packages), and not much in the
way of data, etc. /opt isn't a separate partition, although that's up to you.
# df -h -F zfs; df -h -F tmpfs
Filesystem           Size   Used  Available  Capacity  Mounted on
                      29G    10G        15G       41%  /
                      29G   665M        15G        5%  /var
rpool/export          29G    32K        15G        1%  /export
rpool/export/home     29G   1.6G        15G       10%  /export/home
rpool                 29G    42K        15G        1%  /rpool
swap                 2.0G   1.0M       2.0G        1%  /etc/svc/volatile
swap                 2.0G    40K       2.0G        1%  /tmp
swap                 2.0G    44K       2.0G        1%  /var/run

Looks like I didn't bother with a size= limit on the tmpfs filesystems
(the last three). So there's really 2.0G free at the moment for all of
them, not for each!

Still, those just reflect what I had available, they don't really take
into account any attempt at being smart about sizing. Since it's a VM,
it's really just one big fat file on the host anyway (which has a 1TB SSD,
so what do I care if there's a 30GB file on there for the disk image). The
only Solaris I have that's not using ZFS is Solaris 9 on a (SPARC)
Sun Blade 100, and it's using Disk Suite, with mirroring across two disks,
and just / and /export partitions (for all intents and purposes), and with
a lot of data on there relative to the disk size. So it's a pretty useless
example, too.

If your hardware is up to the challenge, and if only you had an extra
disk drive (for a mirror for ZFS), you'd do MUCH better to run Solaris
11. Way easier to install additional software, updates, etc. However,
unless you can afford a maintenance contract, you will ONLY get an
update when they bump the version (11.3 to 11.4, for example),
and you will NOT be able to take it as an update; you'll have to do a
full re-install. Same with Solaris 10, as far as that goes; without a
maintenance contract, no patch access. (But patches on Solaris 10 and earlier
aren't much fun to install anyway... although if this will have Internet
connectivity, you maybe shouldn't run Solaris at all if you can't afford
a maintenance contract, since un-updated software is just begging to get
compromised.)


Sep 25, 2020, 5:37:23 AM
There is nothing risky about using ZFS; it is very mature. In fact, it
gets you out of more tight spots than you can imagine.

The old UFS fixed sizes of /var, /opt, etc. are very restrictive.

If you are just building a system for simple use, then go for a swap
(/tmp) partition of 2x RAM and dump the rest in slice 0.
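As a quick worked example of that 2x rule (the 512 MB RAM figure is an assumed value for a P4-era machine, not from the thread):

```shell
#!/bin/sh
# Worked example of the "swap = 2x RAM" rule of thumb.
# RAM_MB is an assumed value; on real Solaris you'd read it from prtconf.
RAM_MB=512                          # hypothetical P4-era machine
SWAP_MB=$((RAM_MB * 2))             # the 2x rule
SLICE0_MB=$((22 * 1024 - SWAP_MB))  # remainder of the 22 GB disk
echo "swap=${SWAP_MB}MB slice0=${SLICE0_MB}MB"
# prints: swap=1024MB slice0=21504MB
```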

Otherwise you need to understand the sizing of the OS bundle (SUNWCall?),
apps, and user usage to decide how big you want the partitions.


Bruce Porter
"The internet is a huge and diverse community but mainly friendly"
There *is* an alternative!