Command differences between MacZFS, Zevo, and OpenZFS

Alex Wasserman

Dec 4, 2013, 10:01:52 PM
to maczfs...@googlegroups.com
Guys,

I have a pool that's been through MacZFS, Zevo, and OpenZFS. I think for a time it was running under Illumos too. Currently it's running OpenZFS on OS X (ZFS-OSX).

Now that we're on the unified codebase, does that mean I can just give a whole disk to a vdev, rather than running through the old MacZFS partitioning steps? I've been experimenting.

I've noticed that I can't follow the old MacZFS getting-started guide any more, since ZFS is no longer a filesystem that diskutil will list:

alex@smiley:~|⇒  diskutil partitiondisk /dev/disk0 GPTFormat ZFS %noformat% 100%
ZFS does not appear to be a valid file system format or partition type
Use diskutil listFilesystems to view a list of supported file systems

But given that the goal here seems to be the GPT partition scheme rather than the filesystem itself, I just substituted HFS+ to get the partition map on there:

alex@smiley:~|⇒  diskutil partitiondisk /dev/disk0 GPTFormat HFS+ %noformat% 100%
Started partitioning on disk0
Unmounting disk
Creating the partition map
Waiting for the disks to reappear
Finished partitioning on disk0
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:                  Apple_HFS                         3.0 TB     disk0s2


All good, now I have a disk with the correct label on it.

Now to add it, as per Solaris documentation:

alex@smiley:~|⇒  zpool attach ZFS_Pool disk4s2 /dev/disk0
checking path '/dev/disk0' 
alex@smiley:~|⇒  zpool status
  pool: ZFS_Pool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed Dec  4 21:38:30 2013
    3.35G scanned out of 2.87T at 3.19M/s, 261h48m to go
    1.21G resilvered, 0.11% done
config:
NAME         STATE     READ WRITE CKSUM
ZFS_Pool     ONLINE       0     0     0
 mirror-0   ONLINE       0     0     0
   disk3s2  ONLINE       0     0     0
   disk6s2  ONLINE       0     0     0
 mirror-1   ONLINE       0     0     0
   disk5s2  ONLINE       0     0     0
   disk4s2  ONLINE       0     0     0
   disk0    ONLINE       0     0     0  (resilvering)
 mirror-2   ONLINE       0     0     0
   disk2s2  ONLINE       0     0     0
   disk7s2  ONLINE       0     0     0
errors: No known data errors 

Now, it looks like we're good.

But is this actually good, or just appearing that way? From a ZFS-on-Solaris perspective this is fine, but is ZFS-OSX running enough of the standard codebase for this to work?

One of my disks in mirror-1 is throwing SMART errors, so I want to add disk0, let the pool stabilize, and then pull the failing disk. I don't want to do that until I know the new disk is actually fine. I can easily pull it from the vdev and add disk0s1 instead if necessary.
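For the record, the swap sequence I'm planning on is the standard Solaris-style one. This is just a sketch: the device names are the ones from my output above, and I'm assuming disk4s2 is the member throwing the SMART errors.

# wait for the resilver onto disk0 to finish, then confirm the pool is clean
zpool status ZFS_Pool

# once mirror-1 is healthy again, drop the failing member out of the mirror
zpool detach ZFS_Pool disk4s2

# alternatively, zpool replace does the attach + detach in one step:
# zpool replace ZFS_Pool disk4s2 /dev/disk0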

Interestingly, disk0 is partitioned a little differently from my other disks, no doubt because of the way the new ZFS has done it:

alex@smiley:~|⇒  diskutil list 
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk0
   1:                        ZFS                         3.0 TB     disk0s1
   2: 6A945A3B-1DD2-11B2-99A6-080020736631               8.4 MB     disk0s9
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *90.0 GB    disk1
   1:                        EFI                         209.7 MB   disk1s1
   2:                  Apple_HFS System                  89.7 GB    disk1s2
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk2
   1:                        EFI                         209.7 MB   disk2s1
   2:                        ZFS                         999.9 GB   disk2s2
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk3
   1:                        EFI                         209.7 MB   disk3s1
   2:                        ZFS                         2.0 TB     disk3s2
/dev/disk4
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk4
   1:                        EFI                         209.7 MB   disk4s1
   2:                        ZFS                         3.0 TB     disk4s2
/dev/disk5
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk5
   1:                        EFI                         209.7 MB   disk5s1
   2:                        ZFS                         3.0 TB     disk5s2
/dev/disk6
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk6
   1:                        EFI                         209.7 MB   disk6s1
   2:                        ZFS                         2.0 TB     disk6s2
/dev/disk7
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk7
   1:                        EFI                         209.7 MB   disk7s1
   2:                        ZFS                         999.9 GB   disk7s2

Thanks,

Alex

Mike Trogni

Dec 9, 2013, 6:52:38 AM
to maczfs...@googlegroups.com
I've uninstalled Zevo and installed lundman's 20131130 dmg. I had to change the mountpoint for the core filesystem (media, in my case), e.g. sudo zfs set mountpoint=/Volumes/media media

I have an 8x3 TB Zevo-initialized raidz2 pool on two 4-bay SANS Digital eSATA enclosures (2008 Mac Pro with 32 GB RAM); it mounts and I can see the data. My Finder crashed last night (which was a weird error, on Mac OS X 10.8.5). I use an SSD as a cache device to accelerate reads, but I can't seem to initialize it correctly. Like the previous poster, I can't figure out the diskutil command to initialize the SSD for ZFS.

Under Zevo, the SSD worked fine as a cache device. The system locked up when I tried to open an MKV with VLC last night. I'll keep testing, as someday I want to upgrade this box to 10.9. Thanks,
-Mike

ilov...@icloud.com

Dec 12, 2013, 8:27:41 AM
to maczfs...@googlegroups.com
You don't need to use Disk Utility. Give the entire disk to the zpool command and use the -f option.
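Roughly like this (just a sketch: ZFS_Pool, disk4s2, and disk0 are the names from earlier in the thread, and /dev/disk9 for the SSD is only a placeholder):

# attach the raw disk as a new mirror member; -f forces use of the
# device even if zpool thinks it is labelled or in use
sudo zpool attach -f ZFS_Pool disk4s2 /dev/disk0

# same idea for the SSD: add the whole device as an L2ARC cache vdev
sudo zpool add -f ZFS_Pool cache /dev/disk9

zpool writes its own GPT on the device (that's the s1 + small s9 layout you saw on disk0), so there's no diskutil step at all.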

Alex Wasserman

Dec 14, 2013, 10:24:21 AM
to maczfs...@googlegroups.com
Definitely the easiest way.

I think part of my point here is that some of the documentation has to change for the new alpha version.

What's great is that this version now closely aligns with the Solaris instructions.

Does zpool automatically put a GPT label on there too? I know we discussed it on IRC the other day, I just can't remember.
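I suppose the quick way to check is just to look at the disk it already labelled (using disk0 from earlier as the example):

# if zpool wrote the GPT itself, diskutil should show the large ZFS
# partition at s1 plus the small ~8 MB reserved partition at s9
diskutil list /dev/disk0

# the ZFS label on the data partition can be dumped with zdb
sudo zdb -l /dev/disk0s1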

- Alex