By comparison, I have a 3x2TB raidz configuration on FreeBSD (FreeNAS, actually) on a 1000 Mbps network. The CPU is a single-core 2 GHz Athlon64.
On this system, the average ZFS write speed over the network is on the order of 15 MB/s ... It doesn't seem to be limited by the CPU but by an I/O bottleneck: everything is hanging off an old-fashioned PCI bus.
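For what it's worth, a quick local write test can help separate the PCI/disk bottleneck from the network. This is only a sketch; the pool name "tank" is an assumption, and compression/dedup should be off so the zeros are really written:

    # make sure zeros are not compressed or deduplicated away
    zfs set compression=off tank
    zfs set dedup=off tank
    # write 1 GiB locally, bypassing the network (bs=1048576 is 1 MiB)
    dd if=/dev/zero of=/tank/ddtest bs=1048576 count=1024
    # dd prints the elapsed time and bytes/sec at the end;
    # compare with the ~15 MB/s seen over the network
    rm /tank/ddtest

If the local number is also around 15 MB/s, the PCI bus or the disks are the limit; if it is much higher, look at the network path instead.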
I don't think ZFS is known for write performance.
http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
ZFS and raidz were designed to solve the problem outlined in this article.
I actually have had this problem: one disk died and I replaced it. While the array was rebuilding, a bad sector that had never been detected turned up on another disk. The array was unrebuildable, and I lost everything (I had a backup).
With raidz, I would have lost only one sector's worth of data. With ZFS, if you scrub from time to time, you can detect bit rot and failing disks in advance.
ZFS is not for speed, it is for data integrity.
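For reference, scrubbing is a single command; a minimal sketch, assuming a pool named "tank":

    # read every block in the pool and verify its checksum,
    # repairing from redundancy where possible
    zpool scrub tank
    # watch progress and see any checksum errors that turned up
    zpool status -v tank

Running this from cron every week or month is usually enough to catch a silently failing disk before a rebuild has to depend on it.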
===
Hugues Talbot
9 allées des cornouillers, 77420 Champs sur Marne, France
+33 6 72 07 51 26
Further tests with ZFS and mirroring under Linux indicate that the speed is fine.
Please keep us posted with your results and conclusions.
If you have references for us to read that would be helpful too.
Hugues Talbot (from mobile)
On 03/10/2011 06:40 PM, Jeff wrote:
> Ah, that makes sense. Not a very good test then, although I would hope
> my CPU can compress/dedup zeros faster than 220 MB/s. When I tried to
> use dd if=/dev/random I was getting writes of 0.3 KB/s. I will try
> netcat when I have some more time. Interestingly, though, when I had
> dedup=on, zpool list showed no deduplication for the mkfile. Once I get
> more comfortable with Solaris I'd like to run a comparison using
> iostat like my other tests, but I was just excited I had something
> mounted.
>
> On Mar 10, 1:14 am, k...@ironsoftware.de wrote:
>> Very interesting test. More representative would be a test where you pipe random data through a network socket into a file.
>> Netcat is your friend.
>>
>> So basically, with mkfile you are compressing zeros at CPU speed and dedup'ing existing block hashes from RAM.
>>
>> Viele Grüße / Kind Regards / Un Saludo
>>
>> ---
>> Dipl.-Ing. Christian Kendi
>> Iron Software GbR
>> Gärtnerstr. 62b
/dev/random can be very slow because it outputs truly random numbers from various hardware sources and will block if not enough entropy is available in the system.
/dev/urandom is usually much faster, but it outputs only pseudorandom numbers.
This is under Linux; I don't know what the behaviour is under Solaris.
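For the netcat test suggested above, something along these lines should work. It is only a sketch: the pool name "tank", the host name "zfs-host" and the port are assumptions, and some netcat builds want -l -p 9999 instead of -l 9999:

    # on the ZFS box: write whatever arrives on TCP port 9999 into the pool
    nc -l 9999 > /tank/nc-test.bin

    # on the client: stream 1 GiB of pseudorandom data so compression
    # and dedup cannot cheat (bs=1048576 is 1 MiB)
    dd if=/dev/urandom bs=1048576 count=1024 | nc zfs-host 9999

    # divide the file size by the elapsed time for the effective write speed
    ls -l /tank/nc-test.bin

One caveat: on an older CPU, /dev/urandom itself may not keep up with gigabit speeds, so it can be better to generate a random file once on a fast local disk and then stream that file through netcat; otherwise you end up measuring the random number generator instead of the pool.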