Quick question here, this might not be the best place for hardware questions, but given it's ZFS on OSX, not many other places will have the ZFS expertise.
I have a hackintosh based on a Gigabyte X58A-UD3R board (rev 2, FH firmware). It's a nice board, and it's all been stable for the last couple of years.
I have a ZFS pool consisting of 3 mirrored vdevs: 2x3TB drives, 2x2TB drives, and 2x1TB drives.
In addition I have an SSD drive for my system, and a couple of other drives for other things (1 for Windows, 1 for Illumos).
So that's a total of 9 drives in my case; it's a tight fit.
Connectivity:
South bridge:
6 x SATA 3Gb/s connectors (SATA2_0, SATA2_1, SATA2_2, SATA2_3, SATA2_4, SATA2_5) supporting up to 6 SATA 3Gb/s devices
Gigabyte chip:
2 x SATA 3Gb/s connectors (GSATA2_8, GSATA2_9) supporting up to 2 SATA 3Gb/s devices
Marvell chip:
2 x SATA 6Gb/s connectors (GSATA3_6, GSATA3_7) supporting up to 2 SATA 6Gb/s devices
I'm looking for expertise, advice, or just comments on how best to distribute the disks across the chips available.
Some considerations:
Putting all 6 ZFS disks on the southbridge means they're all communicating through a single controller, which makes it a single point of failure.
The SSD is an older model, and won't benefit from the Marvell 6Gb/s ports, but neither would any of the spinning disks.
Should I spread the vdevs over the chips (i.e. one vdev per chip), or spread the disks within each vdev across the chips, so that each vdev has its disks on different chips and a single controller failure can't take out both sides of a mirror?
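To make the second option concrete, here's a sketch of a pool layout where each mirror straddles two controllers. The device names are hypothetical placeholders, not real identifiers from my machine -- substitute the actual /dev entries (e.g. from diskutil list). This uses 3 southbridge ports for ZFS, leaving 3 for the SSD, Windows, and Illumos drives.

```shell
# Hypothetical layout: each mirror has one disk on the southbridge
# and one on an auxiliary chip, so no single controller failure
# degrades both sides of any mirror.
#
#   3TB mirror: southbridge port + Marvell port
#   2TB mirror: southbridge port + Gigabyte port
#   1TB mirror: southbridge port + remaining Marvell port
zpool create tank \
  mirror disk_sb0 disk_marvell0 \
  mirror disk_sb1 disk_gigabyte0 \
  mirror disk_sb2 disk_marvell1
```

With this arrangement, losing the southbridge, the Marvell chip, or the Gigabyte chip individually leaves every vdev with one working disk, so the pool stays online (degraded) in all three cases.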