> I have done some benchmarks on bare metal. Installed gentoo for the
> purpose. All details and results are here:
> http://downloads.sehe.nl/zfs-fuse/splice-support/splice_benchmarks.html
BTW, a VM may not give meaningful benchmarks without some effort. VirtualBox
does have a sync option for the virtual disk; maybe that will be handy while
benchmarking. Can you please post your numbers once you have some?
> I'm sure there will
> be further improvements in both the kernel and fuse as we get closer
> to the final release.
I'm not so sure. The kernel work seems pretty clear-cut. On the other
hand, there might be more to gain by adapting zfs-fuse to any new fuse
interface (if any).
Seth
> just
> starting firefox resulted in heavy swapping
hehe that won't happen here as no swap will enter my house!
I will see if I can get the same results
Ermmm... you _are_ running SMP, aren't you? (LOL) Seriously, what's the CPU
config? Perhaps cat /proc/cpuinfo
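For instance, something like this (just a quick sanity check, nothing
zfs-fuse specific): the first command counts the logical CPUs, the second
shows the model string.
# grep -c '^processor' /proc/cpuinfo
# grep -m1 'model name' /proc/cpuinfo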
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> 2.6.34 PCRYPT 2572M    31   1 78698   1 68334   1 +++++ +++ 211736  3 261.2   0
> Latency               289ms     759ms     629ms    2177us   17548us     204ms
> Version  1.96       ------Sequential Create------ --------Random Create--------
> 2.6.34 PCRYPT       -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 10089  32 +++++ +++ 13623  13  9714  26 +++++ +++ 14010  11
> Latency             14514us     472us     493us   25172us      35us     113us
>
> The numbers are consistently better in 2.6.34. So, there is a
> regression in 2.6.35.
>
>
Is there? "Seems to be" would be more careful wording.
> I have done some benchmarks on bare metal. Installed gentoo for the
> purpose. All details and results are here:
>
> http://downloads.sehe.nl/zfs-fuse/splice-support/splice_benchmarks.html
[...]
> I haven't got the time to draw any final conclusions. Please help
> yourself with my raw results for now :)
At last I am also running Linux 2.6.35-gentoo-r1 on my server in the
basement. Are there any tests I could do to help in any way?
Or is this fuse/splice thing not yet relevant with the current zfs-fuse-0.6.9?
Stefan
> Well it should result in (afaict minor) performance improvements without
> alteration to zfs-fuse. So, you could do an A-B comparison too, so we
> have more data :)
"minor improvements" ??
I want the big thing! ;-)
jokes aside, point me at specific things to test. dd-ing files or what??
S
We hoped to see a (significant) improvement in write speeds, which have been
(very) lacking up till now.
You could simply do the A-B test with bonnie, like I did, so we can have
one more data point telling us whether there is any discernible difference.
I ran bonnie++ on a plain single-vdev pool with 'bonnie++ -d
/BONNIEPOOL/ -u myuser'
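Roughly, the A-B run could look like the sketch below; the pool name, device,
user and output file names are just placeholders for your setup, and each
variant (old vs. new kernel / zfs-fuse build) gets its own run on the same pool.
# zpool create BONNIEPOOL /dev/sdX
# bonnie++ -d /BONNIEPOOL/ -u myuser | tee bonnie-variant-A.txt
(reboot into the other variant, zpool import BONNIEPOOL if it isn't imported
automatically, and repeat with a different output file)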
> You could simply do the A-B test with bonnie, like I did, so we can have
> one more data point telling us whether there is any discernible difference.
>
> I ran bonnie++ on a plain single-vdev pool with 'bonnie++ -d
> /BONNIEPOOL/ -u myuser'
got it.
Any specific way to format the output, btw?
Looks strange here.
;-)
S
> Any specific way to format the output, btw?
> Looks strange here.
example:
# bonnie++ -d /tank/bonnie -u sgw:users
Using uid:101, gid:100.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
mythtv           8G    22  18 40340   6 17233   2  2060  91 89034   4 240.6   1
Latency               416ms    2407ms    2425ms   69970us     358ms     955ms
Version  1.96       ------Sequential Create------ --------Random Create--------
mythtv              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  5199  11 14972  16  4987  10  3797   8 21811  18  5954   8
Latency             45762us    3218us    2105us   98667us    1911us    5202us
1.96,1.96,mythtv,1,1282064157,8G,,22,18,40340,6,17233,2,2060,91,89034,4,240.6,1,16,,,,,5199,11,14972,16,4987,10,3797,8,21811,18,5954,8,416ms,2407ms,2425ms,69970us,358ms,955ms,45762us,3218us,2105us,98667us,1911us,5202us
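That trailing comma-separated line is bonnie++'s machine-readable summary;
the bon_csv2txt tool shipped with bonnie++ turns it back into the usual
table. Assuming the CSV ends up as the last line of the run's output,
something like this should do:
# bonnie++ -d /tank/bonnie -u sgw:users | tail -n 1 > result.csv
# cat result.csv | bon_csv2txt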
Another run right now (there should have been less load this time), and
formatted (somehow ;)):
# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME                                      STATE     READ WRITE CKSUM
        tank                                      ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            disk/by-id/ata-ST31000333AS_9TE0A3FP  ONLINE       0     0     0
            disk/by-id/ata-ST31000333AS_9TE0A0KF  ONLINE       0     0     0
# cat result.csv | bon_csv2txt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
mythtv           8G    22  20 40181   7 16939   2  2091  93 91429   4 208.8   1
Latency               417ms    2362ms    2637ms   54710us     324ms     817ms
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
      files:max:min  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
mythtv           16  6130  15 19005  22  4583  11  4178  10 28348  20  7618  13
Latency             32472us    2409us    3122us   77280us    1446us   20157us
You're right, I was stupid .... more to come ... *sigh*