Transferring from 10.8.4/ZEVO to 10.9/macZFS?

Robert Rehnmark

Feb 17, 2014, 11:11:18 AM2/17/14
to zfs-...@googlegroups.com
Yes yes, I admit it.
It was kinda stupid to move to ZEVO.
It seemed more integrated and painless for someone like me, someone who just wants it to be easy and wants the most feature-rich implementation.
I was hoping that ZEVO might be better when it comes to sharing and networking, I've had terrible speeds with macZFS in the past.

Anyways..
I want to move my hackintosh to Mavericks.
If I boot 10.9, install macZFS and create a pool with:

sudo zpool create -O casesensitivity=insensitive -O normalization=formD -O ashift=12 POOLNAME mirror /dev/diskXsX /dev/diskXsX

..do you think I can mount that pool as read/writeable in 10.8/ZEVO to copy the data over?

Does anybody have any experience with this?


I have an extra machine on which I guess I can install any OS, try to import the ZEVO pool, and send it across the network to the new 10.9/macZFS, but I fear that would be a LOT more work, with very slow transfer speeds.



Thanks in advance for any input.

Regards.

Robert

Jason Belec

Feb 17, 2014, 11:32:02 AM2/17/14
to zfs-...@googlegroups.com
Best is to just try whatever options are available.

From my own experience, not including ZEVO, making a new pool on the new system and copying the data over from the old system with rsync is the best approach. It solved a lot of headaches I encountered with just upgrading, or with trying to send old snapshots to a new system that differed so much.
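
For example (a sketch only; the mountpoints /Volumes/oldpool and /newpool are hypothetical, and the -E flag for extended attributes/resource forks is specific to Apple's bundled rsync):

```shell
# Copy everything from the old pool to the new one, preserving metadata.
# -a = archive mode (permissions, times, symlinks), -H = hard links,
# -E = extended attributes and resource forks (Apple rsync only).
# Add -n first for a dry run before committing to the transfer.
sudo rsync -aHE --progress /Volumes/oldpool/ /newpool/
```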


--
Jason Belec
Sent from my iPad
--
 
---
You received this message because you are subscribed to the Google Groups "zfs-macos" group.
To unsubscribe from this group and stop receiving emails from it, send an email to zfs-macos+...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Robert Rehnmark

Feb 17, 2014, 12:04:39 PM2/17/14
to zfs-...@googlegroups.com
I'll just give it a try and report back.
Still curious whether ZEVO will mount the macZFS pool properly though. I guess I will get some answers as soon as I try..
Which release should I download to install on Mavericks? 
74.3.2b?

/Robert

Jason Belec

Feb 17, 2014, 12:07:25 PM2/17/14
to zfs-...@googlegroups.com
Sorry, not sure which pre-built version is best for you; I currently build daily as I'm testing fixes. So far, amazing results under Mavericks.



--
Jason Belec
Sent from my iPad

Rob Freedomfighter

Feb 17, 2014, 12:15:14 PM2/17/14
to zfs-...@googlegroups.com
Ok, is it hard to build it?
Do I need knowledge in excess of just some simple terminal commands I can copy and paste?

Thanks for bearing with me.

/Robert






---- signature ---------------------------------------------------------
Feel free to encrypt mail to me.
The attached file "RobertRehnmark<robertArehnmarkDOTnet>PublicKey.gpgkey" is my public encryption key;
use it with a PGP/GPG application to encrypt e-mail to me.

RobertRehnmark<robertArehnmarkDOTnet>PublicKey.gpgkey

Geoff Smith

Feb 17, 2014, 12:20:23 PM2/17/14
to zfs-...@googlegroups.com
I'm still on 74.3.1 under 10.9.1; 74.3.2 and 74.3.2b give me a KP on boot.

Sent from my iPhone

Rob Freedomfighter

Feb 17, 2014, 12:23:08 PM2/17/14
to zfs-...@googlegroups.com
Thank you for the info Geoff.
Then I will try with 74.3.1


/Robert



Jason Belec

Feb 17, 2014, 12:51:51 PM2/17/14
to zfs-...@googlegroups.com
Well, actually ilovezfs has a wonderful little script that does all the magic for you, and he seems quite happy to share it in the IRC chat room. It helped me see several things I was doing wrong. From that script you can then move the important files into the proper places and set their permissions, the same as the downloadable build does. All pretty easy and repeatable on the command line.



--
Jason Belec
Sent from my iPad

Rob Freedomfighter

Feb 17, 2014, 1:38:35 PM2/17/14
to zfs-...@googlegroups.com
A script that compiles and installs everything?
I downloaded 74.3.2b after all. It says it should work on Mavericks.
First I will just try to upgrade a cloned copy of my system to see if I can easily get everything working the way it should. (hackintosh)
If that works out I will create a pool and see if I can import it with my original ZEVO install.
If that works out well I will revert the cloned disk to 10.8/ZEVO and upgrade my main install.

Wish me luck!

/Robert

Robert Rehnmark

Feb 17, 2014, 6:38:39 PM2/17/14
to zfs-...@googlegroups.com
I got Mavericks up and running nicely but the 74.3.2b made it panic at boot.
So what do I install instead?
And how do I do that?
This ilovezfs, does he/she have this info and script on a webpage somewhere, or can I only get it through IRC?

/Robert

Jason Belec

Feb 17, 2014, 7:12:15 PM2/17/14
to zfs-...@googlegroups.com

This is the script that should pull everything and build. Instructions for the script are in the script itself.
You will need a Developer folder in your home directory (see the script). You will need to chmod the script to make it executable, then run it with sudo ./zfsadm, with options like -l to load after the build, as per the instructions in the script. You will need sudo to run commands like sudo zpool list, until you make the install permanent. IRC is a great bunch of people working pretty much around the clock.
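
Roughly, that workflow looks like this (a sketch; double-check the script name, expected directory, and flags against the script's own header, since they may differ):

```shell
mkdir -p ~/Developer   # the script expects a Developer folder in your home directory
chmod +x zfsadm        # make the downloaded script executable
sudo ./zfsadm          # fetch the sources and build everything
sudo ./zfsadm -l       # or: build and also load the kexts afterwards
sudo zpool list        # ZFS commands need sudo until the install is made permanent
```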

You can check the developer section of the MacZFS site for more info.

Oh, and you're doing all this at your own risk. ;) Seriously.



--
Jason Belec
Sent from my iPad

Robert Rehnmark

Feb 17, 2014, 7:37:24 PM2/17/14
to zfs-...@googlegroups.com
Thank you for your help.
I'll see if I have to do it this way.
I got my hands on the 74.3.1 and it installed and is running.
Is there something about this version that I should know about, or stay away from?

Again, thanks a lot.

/Robert

Jason Belec

Feb 17, 2014, 8:08:30 PM2/17/14
to zfs-...@googlegroups.com
No idea. Haven't touched any of the pre-builts as I've been asked to test other things not yet considered stable. No known loss of data, so on that front you should be good. ;)



--
Jason Belec
Sent from my iPad

ilov...@icloud.com

Feb 17, 2014, 8:18:42 PM2/17/14
to zfs-...@googlegroups.com
Hey, this is ilovezfs.

I think there may be some confusion here as to what each of these versions is. For example, you referenced the command:

sudo zpool create -O casesensitivity=insensitive -O normalization=formD -O ashift=12 POOLNAME mirror /dev/diskXsX /dev/diskXsX

which is not a command that would work on MacZFS 74.x.x.

MacZFS 74.3.x is the same as MacZFS 74.2.x, but simply upgraded to work with Mavericks.
MacZFS 74.2.x/74.3.x is pool version 8, zfs file system version 2.

My "zfsadm" script is not related to MacZFS 74. It is for the newer, OpenZFS-based version of ZFS for OS X, which has been referred to variously as "ZFS-OSX," "OSX-ZFS," "MacZFS 99," "MacZFS prototype generation," and "MacZFS experimental generation." The GitHub repository https://github.com/zfs-osx has the code for this version.

https://github.com/zfs-osx (like every implementation of OpenZFS) is pool version 5000, file system version 5. For reference, ZEVO is pool version 28, file system version 5. There is a comparison table here: http://bitly.com/osxzfs

If you want to use MacZFS 74, I suggest that you run the command

sudo nvram boot-args="keepsyms=y"

and reboot. Then you should install MacZFS 74.3.2b and reboot. If you get a kernel panic, post your panic report here. The panic reports are in /Library/Logs/DiagnosticReports.

In my opinion, using MacZFS 74.3.1 is not a good answer, given that it has been fully superseded by MacZFS 74.3.2b. If there is a kernel panic in 74.3.2b that you have discovered, it must be dealt with, not avoided. However, the kernel panic is likely attributable to a problem with your setup, given that others are using 74.3.2b without a panic; in that case the problem in your setup needs to be resolved, not avoided.

If instead you want to use https://github.com/zfs-osx/zfs, then yes I'd recommend using https://gist.github.com/ilovezfs/7713854. A tutorial explaining one way it can be used is here: http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs.

If you have questions, the IRC channel is #mac-zfs on freenode. Colloquy http://colloquy.info/ is a user-friendly client.

Jason Belec

Feb 17, 2014, 8:28:55 PM2/17/14
to zfs-...@googlegroups.com
Like I said, very helpful fella.



--
Jason Belec
Sent from my iPad

Bjoern Kahl

Feb 18, 2014, 3:58:33 AM2/18/14
to zfs-...@googlegroups.com

Just a quick but important note about 74.3.2 :

On 18 Feb 2014 at 00:38, Robert Rehnmark wrote:
> I got Mavericks up and running nicely but the 74.3.2b made it panic
> at boot. So what do I install instead? And how do I do that?

74.3.2b has a serious bug we discovered (and fixed) a few days ago
which will panic any system (10.6 - 10.9) under certain loads,
especially when used in conjunction with the launchd scripts.

A new release is underway and expected to appear later this week.


> This ilovezfs, does he/she have this info and script on a webpage
> somewhere or can I only get it through IRC?


Best

Björn

--
| Bjoern Kahl +++ Siegburg +++ Germany |
| "googlelogin@-my-domain-" +++ www.bjoern-kahl.de |
| Languages: German, English, Ancient Latin (a bit :-)) |

Robert Rehnmark

Feb 18, 2014, 7:11:12 AM2/18/14
to zfs-...@googlegroups.com
Thank you so much for the help.
I really appreciate it.


On Tuesday, February 18, 2014 at 02:18:42 UTC+1, ilov...@icloud.com wrote:
Hey, this is ilovezfs.

I think there may be some confusion here as to what each of these versions are. For example, you referenced the command:

sudo zpool create -O casesensitivity=insensitive -O normalization=formD -O ashift=12 POOLNAME mirror /dev/diskXsX /dev/diskXsX

which is not a command that would work on MacZFS 74.x.x.
 
I realized this (duh!) and adjusted accordingly. :)

 

MacZFS 74.3.x is the same as MacZFS 74.2.x, but simply upgraded to work with Mavericks.
MacZFS 74.2.x/74.3.x is pool version 8, zfs file system version 2.

My "zfsadm" script is not related to MacZFS 74. It is for the newer, OpenZFS based version of ZFS for OS X, which has been referred to variously as "ZFS-OSX," and "OSX-ZFS," "MacZFS 99," "MacZFS prototype generation," and "MacZFS experimental generation."  The GitHub repository https://github.com/zfs-osx has the code for this version.

https://github.com/zfs-osx (like every implementation of OpenZFS) is pool version 5000, file system version 5. For reference, ZEVO is pool version 28, file system version 5. There is a comparison table here: http://bitly.com/osxzfs

I understand that this might be hard or impossible to answer but:
Can I use your script to compile and install OpenZFS, maybe even mount the ZEVO pools… and expect it to be stable?
I would REALLY like to have features like case insensitivity and UTF8 normalization. Also it would be nice if the mounted filesystems were actually visible in volumes. :)
So.. how far has it come?
Can I use it for my iPhoto, iTunes and general file storage?
Can it be scanned by Spotlight?
 

If you want to use MacZFS 74, I suggest that you run the command

sudo nvram boot-args="keepsyms=y"

and reboot. Then you should install MacZFS 74.3.2b and reboot. If you get a kernel panic, post your panic report here. The panic reports are in /Library/Logs/DiagnosticReports.

In my opinion, using MacZFS 74.3.1 is not a good answer, given that it has been fully superseded by MacZFS 74.3.2b. If there is a kernel panic in 74.3.2b that you have discovered, that must be dealt with not avoided. However, the kernel panic is likely attributable to a problem with your setup given that others are using 74.3.2b without a panic, in which case that problem in your setup needs to be resolved, not avoided.

Thank you, if OpenZFS is not an option for everyday use then I will go for this.
 

If instead you want to use https://github.com/zfs-osx/zfs, then yes I'd recommend using https://gist.github.com/ilovezfs/7713854. A tutorial explaining one way it can be used is here: http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs.

If you have questions, the IRC channel is #mac-zfs on freenode. Colloquy http://colloquy.info/ is a user-friendly client.

Thank you!


/Robert 

Jason Belec

Feb 18, 2014, 7:49:13 AM2/18/14
to zfs-...@googlegroups.com
I think your question was answered by this...


I understand that this might be hard or impossible to answer but:
Can I use your script to compile and install OpenZFS, maybe even mount the ZEVO pools… and expect it to be stable?
I would REALLY like to have features like case insensitivity and UTF8 normalization. Also it would be nice if the mounted filesystems were actually visible in volumes. :)
So.. how far has it come?
Can I use it for my iPhoto, iTunes and general file storage?
Can it be scanned by Spotlight?

Spotlight is still an issue, but it is being worked on. If you're using the built-from-source version and you build daily, you would of course be getting all fixes as they come out. Do ensure you have everything backed up regularly. No issues so far, but best to assume the worst.

Another great community supporter is Björn.


--
Jason Belec
Sent from my iPad

ilov...@icloud.com

Feb 18, 2014, 7:56:43 AM2/18/14
to zfs-...@googlegroups.com
The best description of the current status of the Open ZFS port is here: State of osx.zfs Dec 2013
 
Can I use your script to compile and install OpenZFS
Yes you can use the script to install it, as outlined here http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs

maybe even mount the ZEVO pools…
Yes pool version 5000 is backwards compatible with pool version 28. It is basically pool version 28 plus feature flags: http://open-zfs.org/wiki/Features
It may be a better idea to start with a fresh pool so that you can take advantage of lz4 compression.

zpool create -o ashift=13 -O casesensitivity=insensitive -O compression=lz4 -O atime=off -O normalization=formD -f tank …

If you intend never to replace any of the disks with an SSD, use ashift=12.
 
and expect it to be stable?
Yes, it is pretty stable at this point. You might rarely hit a kernel panic or not be able to export the pool with a forced shutdown, neither of which should cause any actual harm to your data.

I would REALLY like to have features like case insensitivity and UTF8 normalization.
Yes it has those features.

Also it would be nice if the mounted filesystems were actually visible in volumes.
The default is for the pool foo to mount at /foo, and a dataset foo/bar to mount at /foo/bar. I explained the reason we do not use /Volumes (at least not yet) here: https://github.com/zfs-osx/zfs/issues/64
Even though foo mounts at /foo not /Volumes/foo, it will still appear as a volume with a disk icon in Finder. Ditto for foo/bar, etc.
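
For example (a sketch, assuming a pool named foo with a dataset foo/bar as above):

```shell
zfs get mountpoint foo foo/bar               # defaults: /foo and /foo/bar
sudo zfs set mountpoint=/some/path foo/bar   # the property can be changed per dataset
```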

Can I use it for my iPhoto, iTunes and general file storage?
Yes you can use it for iPhoto, iTunes and general file storage, but do not use it for an entire home directory yet.

Can it be scanned by Spotlight?
No, Spotlight does not work yet. However, if you want Spotlight, you can use either a sparsebundle or a zvol. zvols are not available on ZEVO, so ZEVO users are often unfamiliar with them. Here's a description of their use on Linux:

The only difference on OS X would be that you'd want to use only -b 512 or -b 4096 (our default is 4096), and an HFS+ file system.
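
On OS X, that might look roughly like this (a sketch; the pool name tank, the dataset name, and the device node are hypothetical, so check diskutil list for the node your zvol actually appears as):

```shell
# Create a 100 GB zvol with a 4096-byte block size (the -b flag mentioned above).
sudo zfs create -b 4096 -V 100G tank/spotvol
# Format the device node the zvol appears as with journaled HFS+.
sudo newfs_hfs -J -v SpotlightVol /dev/diskN
```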

Thank you, if OpenZFS is not an option for everyday use then I will go for this.
If you have independent backups, then this is primarily a question of whether you value stability or new features more. If you do not have independent backups, this conversation is moot because your one and only concern should be creating an independent backup of all of your data. By "independent backups," I mean that if every disk in your zpool is tossed in the swimming pool, you still have your data, because it is available somewhere entirely separate from your zpool. Ideally, the backup is offsite, or you have both an independent local backup and an independent offsite backup.

Thank you!
You're welcome.

ilov...@icloud.com

Feb 18, 2014, 8:03:03 AM2/18/14
to zfs-...@googlegroups.com
... not be able to export the pool with a forced shutdown ...
s/with/without/

Jason Belec

Feb 18, 2014, 8:33:19 AM2/18/14
to zfs-...@googlegroups.com
Been testing zvols formatted as HFS+, and Spotlight does seem to like them, which technically means Mail, iPhoto, etc., can function normally.



--
Jason Belec
Sent from my iPad

Robert Rehnmark

Feb 18, 2014, 1:06:52 PM2/18/14
to zfs-...@googlegroups.com
ilovezfs, if you were here right now I'd probably kiss you straight on the mouth. haha :P

I have had some great progress today..
Open ZFS is installed and running on Mavericks.
ZEVO pool is imported and seems to be working just fine.

I have one question though that I couldn't find an answer to.
Should I use the slices or just the whole disk when creating a new pool or attaching/adding disks?
Like zpool create .... puddle /dev/disk3 OR zpool create ... puddle /dev/disk3s2 

Now I'm just hoping it will be more or less smooth sailing from here on. (crossing fingers)
But at least I found a really good way of transitioning and transferring without risking my data or trading in usability/features.
Once again, thanks a lot!


/Robert

Jason Belec

Feb 18, 2014, 3:07:20 PM2/18/14
to zfs-...@googlegroups.com
No kissie, no kissie!!

 zpool create -f -O compression=lz4 -O casesensitivity=insensitive -O normalization=formD -O atime=off -o ashift=12 pool raidz disk1 disk2 disk3



--
Jason Belec
Sent from my iPad

Robert Rehnmark

Feb 19, 2014, 4:34:05 AM2/19/14
to zfs-...@googlegroups.com
So now I'm running OS X Mavericks 10.9.1 with Open ZFS installed with this guide (very easy).
It imported and handled the ZEVO pool without any noticeable problems.
I'm in the process of backing up the critical data and then I will juggle the drives around a little bit to create a completely new pool with Open ZFS.
The only problem I have is one of my Barracudas stopped working on SATA 3 and is giving some errors so I will have it changed on warranty.
.. no data loss though, gotta love ZFS. :)

I have another question though.
Is it possible (and good practice) to turn off compression for a child filesystem?
It will probably just make it slower instead of faster when I'm storing only pictures, highly compressed videos and mp3's on it, right?
I have a hexa-core Xeon X5680 running at 4.1 GHz, so processing power is not a problem though.

On Tuesday, February 18, 2014 at 21:07:20 UTC+1, jasonbelec wrote:
No kissie, no kissie!!

 zpool create -f -O compression=lz4 -O casesensitivity=insensitive -O normalization=formD -O atime=off -o ashift=12 pool raidz disk1 disk2 disk3

Haha :P
Yes that's basically the line I ended up using, except I set it up with ashift=13 (future proofing for SSD) and a mirror.
 

/Robert

Jason Belec

Feb 19, 2014, 7:13:37 AM2/19/14
to zfs-...@googlegroups.com
In the past we found compression to make things faster not slower. 

Jason
Sent from my iPhone 5S

ilov...@icloud.com

Feb 19, 2014, 7:43:38 AM2/19/14
to zfs-...@googlegroups.com
Should I use the slices or just the whole disk when creating a new pool or attaching/adding disks?
 
Either is acceptable.

If you specify the whole disk (say disk3), ZFS will do the following:
1. create a GPT partition table for you
2. add two partitions (disk3s1 and disk3s9) with the correct partition types to that table
3. use disk3s1
4. mark the device as being "whole disk = 1"
5. enable the disk's write cache (don't think we're doing this yet).

If, instead, you specify just a partition (say disk3s2, created in Disk Utility.app), then ZFS will do the following:
1. not mess with your pre-existing partition table
2. not modify the partition type even if the type is wrong (e.g., HFS+), which you ought to fix yourself using the /usr/sbin/gpt command or using gdisk http://sourceforge.net/projects/gptfdisk/.
3. use whatever partition you specified (disk3s2)
4. mark the device as being "whole disk = 0"
5. not enable the disk's write cache.

It's worth mentioning that if you specify the whole disk (disk3), sudo zpool status will report that the vdev is using "disk3," but in reality it will be using disk3s1. You can see this is true by comparing the output of sudo zdb -l /dev/disk3s1 and sudo zdb -l /dev/disk3. If you specify a partition (disk3s2), sudo zpool status will report that the vdev is using "disk3s2."

I have another question though.
Is it possible (and good practice) to turn off compression for a child filesystem?
It will probably just make it slower instead of faster when I'm storing only pictures, highly compressed videos and mp3's on it, right?
I have a hexa core Xeon X5680 running at 4,1 GHz so processing power is not a problem though.

In the case of lz4 compression, you can just leave it on without any significant penalty.
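
That said, compression is a per-dataset property and is inherited by children, so it can be switched off for one child filesystem if you want (the pool name tank and dataset names here are hypothetical):

```shell
sudo zfs create -o compression=off tank/media   # new child dataset without compression
sudo zfs set compression=off tank/photos        # or change an existing dataset
zfs get -r compression tank                     # shows local vs. inherited values
```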

Geoff Smith

Feb 20, 2014, 6:02:25 PM2/20/14
to zfs-...@googlegroups.com
Thanks a lot for this info ilovezfs, really interesting.

I guess I stuck with 74.3.1 because, as far as I could tell, it worked reasonably well (no data loss, 2+ years running), and no one else was talking about KPs related to it, so I didn't have much to go on.

Adopting zfs-osx seemed a little convoluted and risky, but ran through the excellent guide here tonight in a test environment and it all went surprisingly smoothly.

A few things to note for other people following this: it seems that the guide is a tiny bit out of date, and in places it could do with being a little more user-friendly.

The first line, to get Homebrew, uses a broken link. The correct command to use is:


The part about ~/.bash_profile is a little vague; if you don’t have a .bash_profile under ~, you can create one with ‘touch .bash_profile’. Paste in the export path mentioned, then run ‘. .bash_profile’ to apply the update.

You can then run the zpool command.

The question I have at this point is, if I remove MacZFS 74.3.1 using this guide, is it simply a matter of exporting my zpool, removing MacZFS, installing zfs-osx, importing my zpool then updating it, or is there more to it than that?

Thanks again for all your help.

Rick Bartram

Feb 21, 2014, 3:30:31 PM2/21/14
to zfs-...@googlegroups.com
I'm a ZFS noob, so I have to give a big thank you to ilovezfs for the work he has done. Using his zfsadm script I was able to install OpenZFS without a hitch (so far).

I installed a zvol formatted to HFS+ on a raidz2 4x2TB zpool, which I am using as a Time Machine archive (one of two).
It's still early and I haven't stressed the pool, but Time Machine appears to be working just fine.

Congrats to all the contributors. It looks like progress is being made.

Robert Rehnmark

Feb 21, 2014, 6:01:29 PM2/21/14
to zfs-...@googlegroups.com
Yes, this was very easy to get up and running.
I have a couple of questions though.

1. Automount. The pool is not mounting automatically at boot, but I would really like it to. At login is kinda too late; it needs to be mounted at boot.

2. Zvols. If I create a zvol, will it take all that space in the pool immediately?
There is no way to resize it, right?
I can't make it like a growing disk image?


/Robert

Daniel Becker

Feb 21, 2014, 6:16:28 PM2/21/14
to zfs-...@googlegroups.com
By default, creating a zvol will reserve space equal to its volume size. You can avoid that by passing “-s” to the zfs create command; however, note that the amount of space left in the parent will not be passed through to whatever FS you create on the zvol, so bad things happen when the parent fills up to a point where it’s got less space left than the FS on the zvol thinks it has.
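
In command form (a sketch; the pool name tank and the sizes are hypothetical). Note also that a zvol can in fact be resized later by changing its volsize property, though the file system inside it must then be grown separately:

```shell
sudo zfs create -V 100G tank/vol0      # regular zvol: reserves the full 100G up front
sudo zfs create -s -V 100G tank/vol1   # sparse zvol: space is allocated only as written
zfs get volsize,refreservation tank/vol0 tank/vol1
sudo zfs set volsize=200G tank/vol1    # grow the volume later if needed
```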

