test of zol so far...


Emmanuel Anne

Jan 14, 2012, 5:57:01 AM
to zfs-...@googlegroups.com
It's usable now; I got only one crash, and with no consequences this time (except a reboot, of course!).
But the usability is disappointing: I tested compiling mplayer, and it was 5s faster under zfs-fuse than under ZoL! (Granted, that's my version with the buffers and attr-timeout = entry-timeout = 3600, and I suspect ZoL's default settings are far from optimized; it seems to hit the disk too much, so the cache is probably kept small, maybe to limit damage in case of a crash. Also, this ZoL build uses my lzo patch, which is far from optimized, but that shouldn't slow things down even if it doesn't speed them up.) Well, 5s is not a huge difference and I could live with it, but the other problem is that under heavy disk activity the whole system slows down, which does not happen with zfs-fuse.
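(For reference, the timeout settings I mean are set in zfsrc; the excerpt below uses the option names from my tree, which may differ in other builds:)

    # /etc/zfs/zfsrc excerpt; names/values from my branch, defaults are much lower
    fuse-attr-timeout 3600     # seconds the kernel may cache file attributes
    fuse-entry-timeout 3600    # seconds the kernel may cache name lookups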
There are operations for which ZoL seems faster than zfs-fuse, though I can't say precisely which ones; the startup time of KDE is slightly better, for example (but it's really not a big difference).
So for the compilation I moved the directory to a btrfs partition, which is much faster than anything ZFS and still uses lzo.
And for my home I reverted to zfs-fuse for now, because I really don't feel like diving into the ZoL code at the moment. I fear it might be very hard to improve; I already know what the code looks like, and it's the opposite of the prevailing mentality in the Linux kernel (avoid locks as much as possible; ZFS is a nightmare of locks everywhere!).

Oh well, my home stays on ZFS, so I might return to ZoL later; it's just that for now I'm working on something else and prefer to avoid usability issues.

sgheeren

Jan 14, 2012, 8:07:48 PM
to zfs-...@googlegroups.com

Just like last time [1], I don't think this is a useful 'benchmark' at all. Compilation mostly measures CPU time spent _compiling_, obviously.

Out of curiosity, I ran the mplayer compilation [2] on the same 'monster system' [3] of old times to see what it would give.
  • Running on tmpfs (no disk activity whatsoever), compilation takes 1m23s
  • Running on ZoL 0.6.0.5-0ubuntu3~natty1, mirrored pool on 2xSSD 'slices' (not whole-disks), compilation takes 1m23s
  • Running on ZoL (ibid.) but with compression=gzip-9, takes 1m35s (2.50x compression)
  • Running on ZoL (ibid.) but with compression=lzjb, takes 1m26s (1.77x compression)
I think it tells the whole story right there:
  1. choice of FS simply doesn't matter at all.
  2. compression (obviously) is a factor, but if you want a good trade-off, use lzjb
All timings repeated twice and averaged. 
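For anyone wanting to reproduce these: each variant just flips the dataset's compression property before building (pool/dataset names below are made up):

    zfs create tank/build                    # hypothetical pool/dataset
    zfs set compression=gzip-9 tank/build    # or lzjb, or off for the baseline
    cd /tank/build/mplayer-*/ && time dpkg-buildpackage
    zfs get compressratio tank/build         # where the 2.50x / 1.77x figures come from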

An even more interesting data point: leaving a terminal 'open' and 'on top' increases the runtime by about 7-9 seconds regardless of configuration (just from the X screen updates)!
So I ran all of my benchmarks again, twice, after I found that out, but with the console hidden from view.

If I may give you some solid advice:
  1. compile under screen or at least hide the terminal window during compilation; it may save you more than those 5 seconds that were a usability problem for you
  2. compile on tmpfs to save disk **wear**, not time
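Concretely, something like this (mount point and size are just examples):

    sudo mount -t tmpfs -o size=2g tmpfs /mnt/build   # sources live in RAM, zero disk wear
    screen                      # run the build inside; detach with C-a d, no X repaints
    cd /mnt/build/mplayer-*/ && time dpkg-buildpackage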

------------------ ON BUGGINESS ---------------------------------------

Much more interesting: my not-so-powerful system (my fileserver, Intel Atom D510, 4 GB RAM) yields:
  • Running on tmpfs, compilation takes 13m37s
  • Running on ZoL (spl-0.6.0-rc6-26-g5f6c14b, zfs-0.6.0-rc6), mirrored pool on 2x1.5 TB HDDs (whole disks), compilation takes 13m35s **BUT CRASHES ZoL**
That is more interesting :) In case you were interested, the oops occurs consistently while writing the ../*.deb files (which is obviously the very last step):
[ 3965.435142] BUG: unable to handle kernel NULL pointer dereference at 0000000000000030
[ 3965.436023] IP: [<ffffffffa0489af5>] zpl_fsync+0x21/0x43 [zfs]
As you can see, that was an unstable version of ZoL, so I'm not reporting it as a bug (yet).

Cheers,
Seth


[1] Nov 7 2009 http://groups.google.com/group/zfs-fuse/browse_thread/thread/01587f73bff03d31/ff469c1dd196cfc1
[2] sudo apt-get build-dep mplayer; apt-get source mplayer; cd mplayer-...../; time dpkg-buildpackage
     Note that this includes the HTML documentation, which is built single-threaded(?) and is the longest part of the package build
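     Spelled out, each timing came from two runs per configuration, e.g.:
         for run in 1 2; do time dpkg-buildpackage; done   # re-cleans first by default (no -nc given)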

[3] unchanged since 2009, i.e. Q9550 / 8 GB / 2x SSD (the same SSDs). I _did_ switch to a 64-bit kernel some time ago. Interesting things to note:

  * SSDs don't 'die' nearly as early as people often make it appear
  * I'm actually running on the same hardware for a decent stretch now. I might consider upgrading soon; I always tell myself I don't need new hardware since I upgraded 'recently', but I can now safely say it was less recently than I thought.

sgheeren

Jan 14, 2012, 8:54:05 PM
to zfs-...@googlegroups.com
On 01/15/2012 02:07 AM, sgheeren wrote:
>   • Running on tmpfs (no disk activity whatsoever), compilation takes 1m23s
>   • Running on ZoL 0.6.0.5-0ubuntu3~natty1, mirrored pool on 2xSSD 'slices' (not whole-disks), compilation takes 1m23s
>   • Running on ZoL (ibid.) but with compression=gzip-9, takes 1m35s (2.50x compression)
>   • Running on ZoL (ibid.) but with compression=lzjb, takes 1m26s (1.77x compression)

For fun and glory, I reran the tests with the various branches of zfs-fuse (using the infamous dorky script, creating a pool backed on tmpfs)

All zfs-fuse branches built with: scons -C src debug=1 optim=-O2
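The tmpfs-backed pool part of the script boils down to something like this (file name and size invented; the real script does more housekeeping):

    truncate -s 4G /dev/shm/zpool.img           # file vdev living on tmpfs
    zfs-fuse                                    # start the freshly built daemon
    zpool create testpool /dev/shm/zpool.img    # single vdev, no mirror/raidz
    # then unpack mplayer under /testpool and time dpkg-buildpackage as before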

RESULTS with no zfsrc and no command-line options:
-------------------------------------------------

maint: 1m57s, 1m58s
rainemu/master: 2m7s, 2m8s
unstable: fails to build
   (I get linker errors?! Apparently data is corrupted; I think I remember you posting a 'late' bugfix for the buffers work on the zfs-fuse list, which probably fixed this kind of problem.)
   Message: /usr/bin/ld: mplayer.o symbol number 116 references nonexistent SHT_SYMTAB_SHNDX section

RESULTS with no zfsrc but -a 1 -e 1:
------------------------------------

maint: 1m32s, 1m33s
testing: 1m34s, 1m32s
unstable: fails to build
rainemu/master: 1m37s, 1m38s
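(For these runs the daemon was simply started with the timeout flags discussed in this thread, assuming -a/-e take seconds, before recreating the pool:)

    zfs-fuse -a 1 -e 1    # attr-timeout / entry-timeout of one second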


Timings are from first and second runs (you can average them yourself :))

So the take-away is:

  • there is still no difference: zfs-fuse is as fast as tmpfs/ZoL
  • unless... you run with -a 0 -e 0 - that is always dog-slow
  • with the new unstable branch, the name is actually well-chosen :)
  • the current rainemu/master [1] is ... actually consistently slower by some 5 seconds; I guess you should conclude that it has usability issues over the maint branch as well?

[1] http://rainemu.swishparty.co.uk/git/zfs

sgheeren

Jan 14, 2012, 9:18:21 PM
to zfs-...@googlegroups.com
On 01/15/2012 02:54 AM, sgheeren wrote:

>   • there is still no difference: zfs-fuse is as fast as tmpfs/ZoL

Correction: the difference is about 10 seconds (in favour of ZoL), for the fastest branch/configuration of zfs-fuse tested.

That is with a single vdev backed by a large file on tmpfs...

For completeness, I tried
  • btrfs (that was complicated). I ran it against a loop device backed by tmpfs (to be fair; setup sketched below), and it built in 1m22s. No surprises there.
  • nilfs2 (much simpler :)) Same deal, taking 1m23s
So in effect, it looks very much like _every filesystem in the world_ is equally fast for compiling mplayer _except_ zfs-fuse, which takes about 10s longer. (Meh, not a very interesting difference, but with real spindles and less powerful systems it may get annoying.)
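For reference, the loop-on-tmpfs setup was along these lines (size and mount point are examples):

    truncate -s 4G /dev/shm/fs.img
    LOOP=$(sudo losetup -f --show /dev/shm/fs.img)   # loop device backed by tmpfs
    sudo mkfs.btrfs "$LOOP"                          # or mkfs.nilfs2 "$LOOP"
    sudo mount "$LOOP" /mnt/test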

HTH
Seth

Emmanuel Anne

Jan 15, 2012, 2:34:42 AM
to zfs-...@googlegroups.com
Well, I think your hardware is so fast it makes the FS irrelevant.
In that case just do whatever you like; I think you could use absolutely any FS, even ext2 through FUSE, and see no noticeable difference.
For me, where I can't get results comparable to tmpfs, there is a huge difference though (btrfs is more than twice as fast, almost three times).

Actually, why are you still bothering with filesystems at all with such a configuration? ;-)


Emmanuel Anne

Jan 15, 2012, 2:38:36 AM
to zfs-...@googlegroups.com
And PS: it was not intended as a "benchmark", just something to evaluate daily usage.
I don't run benchmarks daily, but I often compile stuff, so this interests me much more than anything else. And there was obviously a big difference here, so it was worth trying to measure.
But then again, it's obviously totally irrelevant for you!

And I posted the numbers because I was quite surprised to notice that zfs-fuse is still useful these days after all!


sgheeren

Jan 15, 2012, 6:55:21 AM
to zfs-...@googlegroups.com
On 01/15/2012 08:34 AM, Emmanuel Anne wrote:
> Well, I think your hardware is so fast it makes the FS irrelevant.
Clearly you missed the part where I timed it on my fileserver: the D510 Atom processor and _real_ disks (a mixed mirror of WD150EADS+WD150EARS).

The build of the package takes over 13 minutes there, so you can't say my 'powerful hardware' makes the measurements irrelevant :)
Note that the compilation on ZoL runs exactly as long as on tmpfs on that very configuration.
If there even was a difference (e.g. due to writing those debs taking surprisingly long), it would certainly be below the benchmark's margin of error (say writing the debs suddenly took an absurd 12 whole seconds: even 10s out of 13m37s is ~1%).

It doesn't get much clearer than that: any difference is most likely
  • below margin of error
  • time taken doing compression
  • or plain configuration problems

On 01/15/2012 08:38 AM, Emmanuel Anne wrote:
> I don't run benchmarks daily, but I often compile stuff, so this interests me much more than anything else. And there was obviously a big difference here

Either you quoted the wrong numbers or that conclusion is ludicrous. You claim compilation takes **5 seconds** longer (yawn), and state that it's a usability concern since you compile a lot.

Well, let's have a look: on my allegedly fast system, compilation takes about 1m30s, so 5s would mean a whopping 5% loss (meh: margin of error. Again, I remind you that circumstances like whether the console is visible/updating during the run have a far greater effect).

Now, you were dismissing my timings because my system is apparently so stupid fast. That must mean your compiles take significantly **longer**, so 5 seconds will be even less relevant and well below the margin of error. [1]


Cheers,
Seth

[1] Say your compilation times are somewhere between my fast and slow systems, say 6 minutes: 5s out of 360s is ~1-2%... meh

sgheeren

Jan 15, 2012, 7:12:56 AM
to zfs-...@googlegroups.com
On 01/15/2012 08:38 AM, Emmanuel Anne wrote:
> And PS: it was not intended as a "benchmark", just something to
> evaluate daily usage.
> I don't run benchmarks daily, but I often compile stuff, so this
> interests me much more than anything else. And there was obviously a
> big difference here, so it was worth trying to measure.
> But then again, it's obviously totally irrelevant for you!
>
> And I posted the numbers because I was quite surprised to notice that
> zfs-fuse is still useful these days after all!

How would you rate the fact that zfs-fuse is 10s slower on tmpfs on my
stupid fast system?

It is not just 10s slower than ZoL, it is 10s slower than anything else
(ext4, btrfs, nilfs2, ZoL and tmpfs) on the same configuration. It is
even 10s slower than ZoL on SSDs on the same box.

Note that this 10s is actually 12%, and it was tested under careful
circumstances (no console output, no system load, freshly started daemon
with a newly created pool each time), so the margin of error should be
relatively small.
Also note I used a single vdev on tmpfs: this precludes fsync, so
zfs-fuse was getting a bit of an unfair advantage over ZoL-on-SSD, and
it even removed any costs related to raidz or compression thanks to the
simple pool config.

And how do you rate the fact that the rainemu/master branch performs
worse than _that_ on my system (widening the gap to 18%, nearly a fifth)?

Seth
