Performance across SATA chips


Alex Wasserman

Nov 22, 2013, 11:58:11
to zfs-...@googlegroups.com
Quick question here. This might not be the best place for hardware questions, but given it's ZFS on OS X, not many other places will have the ZFS expertise.

I have a hackintosh based on a Gigabyte X58A-UD3R board (rev 2, FH firmware). It's a nice board, all been stable for the last couple of years.

I have a ZFS pool consisting of 3 mirrored vdevs: 2x3TB drives, 2x2TB drives, and 2x1TB drives.

In addition I have an SSD drive for my system, and a couple of other drives for other things (1 for Windows, 1 for Illumos).

So, a total of 9 drives in my case; it's a tight fit.

Connectivity:

South bridge:
6 x SATA 3Gb/s connectors (SATA2_0, SATA2_1, SATA2_2, SATA2_3, SATA2_4, SATA2_5) supporting up to 6 SATA 3Gb/s devices
Gigabyte chip:
2 x SATA 3Gb/s connectors (GSATA2_8, GSATA2_9) supporting up to 2 SATA 3Gb/s devices
Marvell chip:
2 x SATA 6Gb/s connectors (GSATA3_6, GSATA3_7) supporting up to 2 SATA 6Gb/s devices

I'm looking for expertise, advice, or just comments on how best to distribute the disks across the chips available.

Some considerations:

Putting all 6 ZFS disks on the southbridge means they're all communicating over a single bus, which is also a single point of failure.

The SSD is an older model and won't benefit from the Marvell 6Gb/s ports, but neither would any of the spinning disks.

Should I put each vdev on its own chip (i.e. 1 vdev per chip), or spread the disks of each vdev across the chips, so that each vdev has its disks on different chips and a single chip failure can't take out a whole vdev?


Jason Belec

Nov 22, 2013, 13:26:31
to zfs-...@googlegroups.com
All valid info and goals; the only real thing to worry about is scrubbing regularly. That will catch any issues well in advance of problems specific to a chip, cable, power module, etc. Backups also help, just in case the worst happens.
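
For reference, the scrub and the follow-up check are just the usual commands (a minimal sketch; the pool name 'tank' here is a placeholder for yours):

# Kick off a scrub; it runs in the background.
sudo zpool scrub tank

# Check progress and any read/write/checksum errors found so far.
zpool status -v tank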


--
Jason Belec
Sent from my iPad

Alex Wasserman

Nov 22, 2013, 13:31:34
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
I scrub weekly; I put in a launchd job for that.
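
In case it helps anyone else, mine is roughly like the sketch below (untested as written; 'tank' and the zpool path are placeholders, so adjust for wherever your zpool binary actually lives):

# Write a LaunchDaemon that scrubs the pool every Sunday at 03:00.
# Assumes zpool is at /usr/sbin/zpool; change the path if your install differs.
sudo tee /Library/LaunchDaemons/local.zfs.scrub.plist >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.zfs.scrub</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/sbin/zpool</string>
        <string>scrub</string>
        <string>tank</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Weekday</key>
        <integer>0</integer>
        <key>Hour</key>
        <integer>3</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
EOF

# LaunchDaemons must be owned by root; then load it once.
sudo chown root:wheel /Library/LaunchDaemons/local.zfs.scrub.plist
sudo launchctl load /Library/LaunchDaemons/local.zfs.scrub.plist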

Was hoping for a solid recommendation on optimal performance. 

Stability seems pretty good.

-- 
Alex Wasserman

Jason Belec

Nov 22, 2013, 15:07:29
to zfs-...@googlegroups.com
Optimal would be an opinion unless someone had the same setup. I've run tests across about 65 setups over the years, but not that particular one. The last Gigabyte board (Hackintosh) I ran tests on died a grisly death; luckily ZFS saved the data, as the system had been in production 2 years. Your SSD should be on the Marvell; the rest can be broken up across chips as you mentioned before. However, optimal for what end? If speed, then mirroring is only viable by pool (2 pools mirrored), not drives, and the pools would be RAIDZ. That would probably give you the best speed and security. If you're set up for it, an SSD could be used for cache, but only on the development version of ZFS-OSX. However, with just mirrored pools like you have, it will be hard to get optimum performance over security.

Someone else can chime in with ideas. 

Jason
Sent from my iPhone 5S

Boyd Waters

Nov 22, 2013, 16:52:50
to zfs-...@googlegroups.com
I’ve been running ZFS for personal use since 2007, but I haven’t tried recent MacZFS.

I suggest a single pool of concatenated mirrored disks, which it seems you are doing. I believe that RAIDZ is a good choice for read-mostly enterprise applications that can afford more than 5 disks per raid set. Not a good fit for me, as I write at least as often as I read.
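
(For anyone unfamiliar with that layout: it is just one pool built from several two-way mirror vdevs, which ZFS stripes across. A hypothetical sketch with placeholder device names follows.)

# One pool, three mirrored pairs; writes are striped across the three mirrors.
# The diskN names are placeholders; use your real device nodes or identifiers.
sudo zpool create tank \
    mirror disk1 disk2 \
    mirror disk3 disk4 \
    mirror disk5 disk6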

Your attempt to spread the load across various paths is not a bad idea. But a similar strategy didn’t work out for me.

At first, I tried to create fully redundant mirror sets: each side of the mirror had its own SATA controller. But I found that my choice of inexpensive Sil3132 controllers was choking under load, injecting a large number of write faults into the chain. So “cheap” redundancy was worse than no redundancy.

I went back to an LSI controller — that Intel-branded thing… Intel SASUC8I. Excellent bang for buck there.

My huge performance problem is a large pool with de-duplication enabled. De-duplication does what it claims to do, but the performance ramifications can be terrifying. I figure that I need at least 64 GB just to hold the de-dupe table, and if that isn’t in RAM then we have a problem. I don’t have a machine that can accommodate that much RAM. SSD doesn’t help a great deal here. Even with L2ARC, the de-dupe table will never utilize more than 1.5x RAM size of that SSD.
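
(If anyone wants to put a number on that for their own data before enabling dedup, zdb can simulate the table against an existing pool, on a ZFS build that has dedup; a rough sketch, with 'tank' as a placeholder pool name:)

# Simulate dedup on an existing pool and print the would-be DDT histogram,
# including the total count of unique blocks it would have to track.
sudo zdb -S tank

# Rough in-core cost: each DDT entry needs on the order of ~320 bytes of RAM
# (the commonly quoted figure), so e.g. 20 million unique blocks works out to
# about 20e6 * 320 bytes, i.e. roughly 6 GB, just for the table.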

The SSD L2ARC *did* help improve scrub performance, dramatically so. My L2ARC isn't redundant; it is just a "read-through" cache, and I believe that if a block's checksum fails, ZFS retries the read from the disk; so a fault in the L2ARC will impact read performance but won't inject data errors into your stream. I think.

I’m not sure if my rambling story makes any sense. Don’t use super-cheap SATA controllers. Mirrored pairs offer the best performance/redundancy balance for general, personal use. Specific applications, such as a write-once, read-mostly media center might benefit from RAIDZ. RAM costs more than hard disk space, so leave de-duplication in the data center.

Alex Wasserman

Nov 22, 2013, 21:19:50
to zfs-...@googlegroups.com
Jason,

Thanks for the reply. I could partition the system SSD and use a secondary partition as a cache. Are there any guidelines on sizing?

Of the 6TB in the pool, usage is a little under 3TB.

Alex

Alex Wasserman

Nov 22, 2013, 21:27:48
to zfs-...@googlegroups.com
Boyd,

Makes sense - thanks for the feedback. However, this is an up-and-running system, and I'm not looking to invest a huge amount extra in it. It's not the newest box; I just want to see if there's anything reasonably trivial I can do to optimise things. I can rearrange the wiring in the box, but I'm too lazy to start messing with additional cards at the moment.

Definitely no dedup. I have 16GB of RAM in this box, not nearly enough to drive the dedup needs of ZFS, and it's nice to have that RAM for things other than just the disks.

Thanks,

Alex

Jason Belec

Nov 23, 2013, 06:58:43
to zfs-...@googlegroups.com
A little googling will provide you with the proper instructions. Please understand this process before doing it, and understand that the current/past MacZFS doesn't support it; you would have to be using the new development version AND test thoroughly before relying on it. Pools are built differently, but they will be current with the ZFS community at large.
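
(The operation itself, on a version that supports it, is just adding and removing a cache vdev; a sketch with placeholder names, assuming the spare SSD partition shows up as /dev/disk2s3:)

# Attach an SSD partition to the pool as an L2ARC cache device.
sudo zpool add tank cache /dev/disk2s3

# Cache devices can be removed again at any time without affecting the pool.
sudo zpool remove tank /dev/disk2s3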



--
Jason Belec
Sent from my iPad

Jason Belec

Nov 23, 2013, 07:00:57
to zfs-...@googlegroups.com
In all fairness, I have seen few systems shy of 64GB running dedup efficiently; it is just a nasty hog of resources. It sounds great in theory, but not so nice in practice. ;)



--
Jason Belec
Sent from my iPad