Guys,
I have a pool that's been through MacZFS, ZEVO, and OpenZFS; I think it spent some time under Illumos too. It's currently running on OpenZFS on OS X (ZFS-OFX).
Now that we're all on the unified codebase, does that mean I can just throw a whole disk into a vdev, rather than going through the old MacZFS partitioning dance? I've been experimenting.
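For instance, this is the sort of thing I'd hope just works now, the way it does on Solaris and Linux (ZFS_Test is a made-up scratch pool name, and -f because the disk still carries an old label):
alex@smiley:~|⇒ zpool create -f ZFS_Test /dev/disk0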
I've noticed that I can't follow the old MacZFS getting-started guide any more, since ZFS is no longer a file system format that diskutil recognizes:
alex@smiley:~|⇒ diskutil partitiondisk /dev/disk0 GPTFormat ZFS %noformat% 100%
ZFS does not appear to be a valid file system format or partition type
Use diskutil listFilesystems to view a list of supported file systems
But since the real goal here seems to just be getting a GPT partition map onto the disk, I substituted HFS+ as the format to get the label written:
alex@smiley:~|⇒ diskutil partitiondisk /dev/disk0 GPTFormat HFS+ %noformat% 100%
Started partitioning on disk0
Unmounting disk
Creating the partition map
Waiting for the disks to reappear
Finished partitioning on disk0
/dev/disk0
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *3.0 TB     disk0
   1:                        EFI                      209.7 MB   disk0s1
   2:                  Apple_HFS                      3.0 TB     disk0s2
All good, now I have a disk with the correct label on it.
Now to attach it, as per the Solaris documentation:
alex@smiley:~|⇒ zpool attach ZFS_Pool disk4s2 /dev/disk0
checking path '/dev/disk0'
alex@smiley:~|⇒ zpool status
  pool: ZFS_Pool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed Dec 4 21:38:30 2013
        3.35G scanned out of 2.87T at 3.19M/s, 261h48m to go
        1.21G resilvered, 0.11% done
config:

        NAME          STATE     READ WRITE CKSUM
        ZFS_Pool      ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            disk3s2   ONLINE       0     0     0
            disk6s2   ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            disk5s2   ONLINE       0     0     0
            disk4s2   ONLINE       0     0     0
            disk0     ONLINE       0     0     0  (resilvering)
          mirror-2    ONLINE       0     0     0
            disk2s2   ONLINE       0     0     0
            disk7s2   ONLINE       0     0     0

errors: No known data errors
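In the meantime I'm keeping half an eye on the resilver with a quick shell loop (nothing clever, just polling zpool status every five minutes):
alex@smiley:~|⇒ while true; do zpool status ZFS_Pool | grep -E 'scanned|resilvered'; sleep 300; done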
Now, it looks like we're good.
But is this actually good, or does it just look that way? From a ZFS-on-Solaris perspective this would all be fine; is ZFS-OFX running enough of the standard codebase for that to hold here?
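My inclination, once the resilver finishes, is to run a scrub and make sure it comes back clean before I trust the new disk (assuming scrub behaves the same here as it does everywhere else):
alex@smiley:~|⇒ zpool scrub ZFS_Pool
alex@smiley:~|⇒ zpool status ZFS_Pool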
One of my disks in mirror-1 is throwing SMART errors, so the plan is to add disk0, let the pool stabilize, and then pull out the failing disk. I don't want to do that until I know the new disk is actually fine. If necessary, I can easily pull it from the vdev and attach disk0s1 instead.
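For reference, this is the swap I have in mind once everything checks out (disk5s2 here is just a stand-in for whichever member of mirror-1 turns out to be the failing one):
alex@smiley:~|⇒ zpool detach ZFS_Pool disk5s2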
Interestingly, disk0 has ended up partitioned a little differently from my other disks, no doubt because the new ZFS laid it out itself when handed the whole device:
alex@smiley:~|⇒ diskutil list
/dev/disk0
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *3.0 TB     disk0
   1:                        ZFS                      3.0 TB     disk0s1
   2:       6A945A3B-1DD2-11B2-99A6-080020736631      8.4 MB     disk0s9
/dev/disk1
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *90.0 GB    disk1
   1:                        EFI                      209.7 MB   disk1s1
   2:                  Apple_HFS System               89.7 GB    disk1s2
/dev/disk2
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *1.0 TB     disk2
   1:                        EFI                      209.7 MB   disk2s1
   2:                        ZFS                      999.9 GB   disk2s2
/dev/disk3
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *2.0 TB     disk3
   1:                        EFI                      209.7 MB   disk3s1
   2:                        ZFS                      2.0 TB     disk3s2
/dev/disk4
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *3.0 TB     disk4
   1:                        EFI                      209.7 MB   disk4s1
   2:                        ZFS                      3.0 TB     disk4s2
/dev/disk5
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *3.0 TB     disk5
   1:                        EFI                      209.7 MB   disk5s1
   2:                        ZFS                      3.0 TB     disk5s2
/dev/disk6
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *2.0 TB     disk6
   1:                        EFI                      209.7 MB   disk6s1
   2:                        ZFS                      2.0 TB     disk6s2
/dev/disk7
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *1.0 TB     disk7
   1:                        EFI                      209.7 MB   disk7s1
   2:                        ZFS                      999.9 GB   disk7s2
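For what it's worth, I believe that 6A945A3B-1DD2-11B2-99A6-080020736631 type on disk0s9 is the standard Solaris "reserved" partition GUID, and the ZFS-on-s1 plus small s9 layout is what ZFS has always done on Solaris when handed a whole disk, so I suspect it's expected rather than a bug. If anyone wants more detail, it's easy to inspect:
alex@smiley:~|⇒ diskutil info disk0s9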
Thanks,
Alex