Well, I had a terrible but apparently workable idea: what happens if I
use ZFS on top of SVM? Don't laugh, I know it's a dumb idea, but if it
works it gives me at least the management flexibility of ZFS (which is
mostly what I'm after: I need cheap snapshots) without waiting forever
for a patch.
And it does seem to work. I get the following figures on a 400MHz/1GB
Netra X1 with a pair of mundane 120GB disks in it. These are timings
to install a whole-root zone (bar /opt/sfw). Installing a sparse zone
gives even more extreme differences because most of the extra time is
spent initialising packages in the bad ZFS cases.
UFS on an SVM mirror                           45m
UFS single disk                                40m
ZFS on an SVM mirror                           38m
ZFS on zpool mirror of 2 simple SVM devices    38m
ZFS on zpool mirror                           127m
ZFS single disk                                52m
I realise these are not serious FS benchmarks, but they do test
something I want to do on dev boxes quite often.
The underlying devices in all these tests were s7 of one or both of the
IDE disks - I can't give ZFS the whole disks because there *are* only 2
disks.
In the "ZFS on zpool mirror of 2 simple SVM devices" case I did
something like:
# metainit d14 1 1 c0t0d0s7
d14: Concat/Stripe is setup
# metainit d24 1 1 c0t2d0s7
d24: Concat/Stripe is setup
# zpool create export mirror /dev/md/dsk/d14 /dev/md/dsk/d24
warning: device in use checking failed: No such device
warning: device in use checking failed: No such device
bash-3.00# zpool list
NAME     SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
export  93.5G   58.5K   93.5G     0%  ONLINE  -
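The ZFS on an SVM mirror case is the same idea with the mirroring
pushed down into SVM instead of the zpool - roughly along these lines
(the metadevice names here are just illustrative):
# metainit d11 1 1 c0t0d0s7
# metainit d12 1 1 c0t2d0s7
# metainit d10 -m d11
# metattach d10 d12
# zpool create export /dev/md/dsk/d10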
I think I will play with this zpool-mirror-of-SVM-devices
configuration as it seems to do pretty well - better than UFS even.
Again, I know this is a stupid config. But it is a *usable* config,
which ZFS otherwise is not on this HW.
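The bit I actually care about is the snapshot side: once the pool is
there, point-in-time copies of a zone root cost next to nothing. The
dataset and snapshot names below are just for illustration:
# zfs create export/zones
# zfs snapshot export/zones@pristine
# zfs clone export/zones@pristine export/zones-scratch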
--tim
nice hack!
-frank
I'm wondering how volatile raidz actually is in a real run...
And now I am tempted to test ZFS raidz over SVM metadevices
instead of physical slices :-)
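Something like this, I imagine (pool and metadevice names made up, and
it assumes a box with at least three spare slices to play with):
# metainit d31 1 1 c0t0d0s7
# metainit d32 1 1 c0t1d0s7
# metainit d33 1 1 c0t2d0s7
# zpool create tank raidz /dev/md/dsk/d31 /dev/md/dsk/d32 /dev/md/dsk/d33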
Regards,
Andrei