I used zfs quite a bit at my last job, on the Sun thumpers with 48 disks & Sol10/x64.
What I discovered there was that trying to compress a large number of very
small files will eventually grind the whole thing to a halt...
Now I've been trying it out at home.
This machine has 6 disks (Samsung 1TB, 32MB cache) dedicated to zfs,
connected through an Adaptec 16-port SATA RAID card, with 8GB RAM & an
AMD 5600+ AM2 processor, running Ubuntu Jaunty (9.04).
From reading the list and other info found via Google, I've now got the
techarcana patches, which means the write performance isn't completely
abysmal, but I've still got some questions.
fuse is 2.8.0-pre2-0techarcana0 (from source)
zfs-fuse is 0.5.1-1ubuntu2.1 (from source)
So far, using bonnie++ without extra options, the fastest results have
come from a zfs stripe over 3x hardware mirrors with the drives'
read/write caches disabled (very redundant, but only 50% of the disk
space is usable).
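(In that layout the mirroring is done on the Adaptec card, so as far as
zfs is concerned the pool is just a plain stripe over the three logical
drives the card exposes, i.e. roughly:

    zpool create tank /dev/sda /dev/sdb /dev/sdc

where 'tank' and the device names are only placeholders for whatever the
card actually presents.)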
Now, supposedly with the 0.5.x releases of zfs-fuse the disk cache
settings shouldn't matter any more, but that's not what I'm seeing.
Last night I ran some basic tests with a 6-disk raidz2, with & without
compression, and with the disk caches enabled/disabled.
What I could see from the activity lights and zpool iostat was that
enabling the caches killed the performance and left the drives
twiddling their thumbs, so to speak, waiting around for 'something',
which I thought they weren't supposed to do any more?
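For what it's worth, the 'waiting around' is just what I saw while
watching something like

    zpool iostat -v tank 5

in one terminal while bonnie++ ran in another ('tank' again being a
placeholder pool name).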
I still want to try the raidz & raid10 equivalent again with the new
packages, but my question here is:
Should I go with what appears to be obvious and keep the disk caches
off & compression on? Or is there some combination using the read/write
caches that can give me better write performance? The array will hold
mainly large a/v files that don't change often, but I don't want to
spend days waiting for them to copy in the first place.
For reference, the combinations I tried last night were (rough commands
for each run are below the list):
- zfs raidz2 over 6 drives, compression off, all drive caches off
- as above, but with compression on
- compression off, all caches enabled
- compression on, all caches enabled
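In case anyone wants to reproduce this, each run was set up roughly
along these lines - the pool name, device names and mount point are only
examples, and the hdparm cache toggling is just one way of doing it
(whether it works here depends on the Adaptec card passing the commands
through to the drives):

    # 6-disk raidz2 pool
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # compression on or off for the run
    zfs set compression=on tank      # or: zfs set compression=off tank

    # drive write caches and read-lookahead on or off for the run
    for d in /dev/sd[b-g]; do hdparm -W0 -A0 $d; done    # caches off
    for d in /dev/sd[b-g]; do hdparm -W1 -A1 $d; done    # caches on

    # then the same bonnie++ run each time (no tuning options,
    # just the target directory and a non-root user)
    bonnie++ -d /tank -u nobody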