
ZFS on old IDE-based SPARC machines redux


Tim Bradshaw

Oct 3, 2006, 9:12:43 AM
I've been occasionally whining about the appalling performance of ZFS
on oldish IDE-based SPARC boxes, caused, I think, by a known bug in
the driver.

Well I had a terrible but apparently workable idea: what happens if I
use ZFS on top of SVM? Don't laugh, I know it's a dumb idea, but if it
works it gives me at least the management flexibility of ZFS (which is
mostly what I'm after: I need cheap snapshots) without waiting forever
for a patch.
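
By "cheap snapshots" I mean things like this, of course (the filesystem
name here is invented):

# zfs snapshot export/zones/z1@pre-patch
# zfs rollback export/zones/z1@pre-patch

Getting the equivalent out of UFS means messing about with fssnap and
backing-store files, which is nothing like as convenient.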

And it does seem to work. I get the following figures on a 400MHz/1GB
Netra X1 with a pair of mundane 120GB disks in it. These are timings
to install a whole-root zone (bar /opt/sfw). Installing a sparse zone
gives even more extreme differences because most of the extra time is
spent initialising packages in the bad ZFS cases.

UFS on an SVM mirror                           45m
UFS single disk                                40m
ZFS on an SVM mirror                           38m
ZFS on zpool mirror of 2 simple SVM devices    38m
ZFS on zpool mirror                           127m
ZFS single disk                                52m
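
The installs themselves were something like this (zone name and path
here are invented):

# zonecfg -z z1
zonecfg:z1> create -b
zonecfg:z1> set zonepath=/export/zones/z1
zonecfg:z1> commit
zonecfg:z1> exit
# zoneadm -z z1 install

where create -b gives a whole-root zone and a plain create gives the
sparse variant.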

I realise these are not serious FS benchmarks, but they do test
something I want to do on dev boxes quite often.

The underlying devices in all these tests were s7 of one or both of the
IDE disks - I can't give ZFS the whole disks because there *are* only 2
disks.

In the "ZFS on zpool mirror of 2 simple SVM devices" case I did
something like:
# metainit d14 1 1 c0t0d0s7
d14: Concat/Stripe is setup
# metainit d24 1 1 c0t2d0s7
d24: Concat/Stripe is setup
# zpool create export mirror /dev/md/dsk/d14 /dev/md/dsk/d24
warning: device in use checking failed: No such device
warning: device in use checking failed: No such device
# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
export                 93.5G   58.5K   93.5G     0%  ONLINE     -
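
For comparison, the plain "ZFS on zpool mirror" case (the 127m one) was
just something like:

# zpool create export mirror c0t0d0s7 c0t2d0s7

i.e. the same slices with no SVM underneath.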

I think I will play with this configuration as it seems to do pretty
well - better than UFS even.

Again, I know this is a stupid config. But it is a *usable* config,
which ZFS otherwise is not on this hardware.

--tim

Frank Cusack

Oct 3, 2006, 8:10:05 PM
On Tue, 3 Oct 2006 14:12:43 +0100 Tim Bradshaw <t...@tfeb.org> wrote:
> Well I had a terrible but apparently workable idea: what happens if I
> use ZFS on top of SVM? Don't laugh, I know it's a dumb idea, but if
> it works it gives me at least the management flexibility of ZFS (which
> is mostly what I'm after: I need cheap snapshots) without waiting
> forever for a patch.
>
> And it does seem to work.

nice hack!
-frank

ary...@spasu.net

Oct 5, 2006, 4:40:24 AM
Performance differences between SVM and ZFS puzzle me as well.
While ZFS is terribly slow on a single spindle compared to UFS,
a raidz over 6 spindles can be almost 3 times faster than SVM/UFS
RAID-5 over the same spindles in a 50/50 read/write test.
In fact, ZFS raidz is even slightly faster than an SVM/UFS stripe (RAID-0)!
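
For reference, the two configurations were built roughly like this
(disk names are hypothetical):

# metainit d10 -r c1t1d0s0 c1t2d0s0 c1t3d0s0 c1t4d0s0 c1t5d0s0 c1t6d0s0
# newfs /dev/md/rdsk/d10

versus

# zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0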

I do wonder how fragile raidz actually is in a real run, though...

And now I am tempted to test ZFS raidz over SVM metadevices
instead of physical slices :-)
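
That would be Tim's recipe scaled up, something like (names again
hypothetical):

# metainit d11 1 1 c1t1d0s0
(and likewise d12 through d16 for the other five slices)
# zpool create tank raidz /dev/md/dsk/d11 /dev/md/dsk/d12 /dev/md/dsk/d13 \
    /dev/md/dsk/d14 /dev/md/dsk/d15 /dev/md/dsk/d16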

Regards,
Andrei
