Emmanuel Anne wrote:
> Interesting results. I like this kind of test because it's something I
> do often, so it's more useful than a benchmark of an extreme
> situation. This is the compilation of the zfs-fuse source by running
> 'time scons' just after having mounted the directory (and after
> having run 'scons -c' before unmounting it):
>
> 1) on a jfs filesystem : 1:43
I'm definitely spoilt. My ext4 home does it in 0:21, and with -j 50 in
18.6s on average - just for fun.
On my compressed zpool:
zfsrc, 'scons debug=2 -j 5': 0:16
zfsrc, 'scons -j 5': 0:25
no zfsrc, 'scons debug=2 -j 5': 0:31
no zfsrc, 'scons -j 5': 0:40
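For reference, the sequence behind these numbers is roughly the
following (pool/dataset names are only examples, and the -j flags are
the ones listed above):

  zfs mount tank/src              # or: zpool import tank
  cd /tank/src/zfs-fuse/src
  time scons -j 5                 # the build being timed
  scons -c                        # clean up again
  cd / && zfs umount tank/src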
> 5) same thing but with prefetch cache enabled : 1:55
Okay, so I _had_ to try with prefetch disabled (echo
zfs-prefetch-disable >> /etc/zfs/zfsrc):
zfsrc, zfs-prefetch-disable, 'scons debug=2 -j 5': 0:16
zfsrc, zfs-prefetch-disable, 'scons -j 5': 0:26
I'd say: no significant difference.
>
> 1:52 is acceptable (at least for me).
I think I never saw more than 11 child processes spawned at any given
time (due to dependencies), and of these a fair number will usually
have been waiting on disk IO (especially write IO, which is
significantly asymmetric on SSD).
Seeing that I have 4 CPU cores, -j 4 or maybe -j 5 (to cover the
coordinating tasks) would make enough sense. However, I wanted to see
if I could push it, e.g. whether I could make as much read IO as
possible happen immediately, for maximum parallel reads, even though
the subsequent compile/link CPU cycles would obviously have to wait
for an available CPU scheduler slot. :) It turns out that this indeed
squeezes about 3 seconds out of the real elapsed time. Quite possibly
the exact same win would occur with, say, -j 11. But then again, this
way (-j 500) I let the computer work out the available processing
power by itself.
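If you'd rather not guess, something along these lines picks a sane
default (the +1 is just my rule of thumb to cover IO wait):

  scons -j $(( $(getconf _NPROCESSORS_ONLN) + 1 ))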
Now if I were to schedule 500 mp3 encoding jobs, I certainly would not
raise my -j factor above 5 for GNU make (I use makefiles for this type
of job). I also use xargs -P and xjobs (of Solaris fame) frequently if
I don't have makefiles handy :)
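The xargs variant of such a job would look something like this (the
lame flags and the output naming are just an example):

  # encode every .wav in the tree, at most 4 jobs at a time
  find . -name '*.wav' -print0 | \
    xargs -0 -P 4 -I {} lame --preset standard {} {}.mp3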
Oh, and 'sudo apt-get install ccontrol ccache distcc
distcc-monitor-gnome; ccontrol-init' will give you a lot of insight
into build parallelization and optimization even on a single compile
host. Often parallelization/distribution completely breaks down on
link times and library dependencies. But I'm getting off-topic.
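(If you skip ccontrol, a bare-bones ccache+distcc combination is
roughly the following; the host names are placeholders:)

  export CCACHE_PREFIX=distcc                    # ccache hands cache misses to distcc
  export DISTCC_HOSTS="localhost host-a host-b"  # placeholder hosts
  make -j 8 CC="ccache gcc"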
> The reason why the zfsrc file makes any difference is very probably
> because of the fuse_entry_timeout and fuse_attr_timeout arguments, the
> others are less important.
I reckoned so. Infallible permission logic is not a concern on my desktop :)
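For anyone curious, my /etc/zfs/zfsrc is only a handful of lines along
these lines - treat the exact option spellings and values as a sketch
and check the sample zfsrc that ships with your zfs-fuse version:

  # let FUSE cache attribute/entry lookups instead of asking the
  # daemon on every stat
  fuse-attr-timeout = 10
  fuse-entry-timeout = 10
  # added for the prefetch test above
  zfs-prefetch-disable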
>
> At least it's good news for zfs usage: it shows that if your disks
> are really fast, then fuse doesn't slow things down noticeably! ;-)
>
> Impressive anyway, thanks for having taken the time to do that!
It was kind of fun to do, especially since I didn't have to wait _that_
long (unlike with, e.g., the popular kernel compile benchmarks).