I've completed the merge of the OpenSolaris onnv_72 build and the Apple bits, and pushed them up to GitHub.
http://github.com/alblue/mac-zfs/tree/maczfs_72/
This compiles and runs on 10.5 and compiles on 10.6, though I've not tested it running there.
I've put it through its paces a little - creating a few hundred file systems, the odd recursive snapshot, mass deletions and so on - and it seems fine for most things.
This fixes:
Issue 29 - merge with onnv_72 (http://code.google.com/p/maczfs/issues/detail?id=29)
Issue 23 - deprecate read-only extension (http://code.google.com/p/maczfs/issues/detail?id=23)
Issue 22 - zfs pools are created at version 6 (http://code.google.com/p/maczfs/issues/detail?id=22)
Right now, this code shouldn't be used in production. After all, it doesn't change any functionality beyond what's in the existing binaries, so there's no real point - but I have discovered at least one critical bug, which means I'd prefer that we didn't end up with a binary created from this or its derivatives until we can fix it:
Issue 36 - exporting pool or restarting computer causes kernel panic (http://code.google.com/p/maczfs/issues/detail?id=36)
It would be good if there were people willing to try to find other holes in the build, because this is (probably) going to be what we build on going forwards. If I make any further progress on Issue 36 then I'll post back here. And once we've got that fixed, we can start to roll forwards over the onnv_ releases, which I've made available from GitHub as well:
http://github.com/alblue/onnv-gate-zfs/tree/onnv_72
I'll likely seed the GitHub copy with a few more tags, but I initially only went up to 74 whilst we figure out if it's the right thing to do. In addition, I think we need to include some more stuff (the .zfs dir is managed by the OpenSolaris GFS module, http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/gfs.c), which we might want to pull in before pushing ahead with the _73, _74, ... tags.
For your diffing pleasure, I've set up some tags in my GitHub repository:
pre_merge_72 - the point at which the Apple codebase stood (equivalent to my master), which can be diffed against the 'apple' code
onnv_72 - the OpenSolaris onnv_72 tag, which is just OS patches
maczfs_72 - the result of merging the above two
I've also written up a bit on my blog about the journey getting here ...
http://alblue.blogspot.com/2010/03/merged-zfs-from-opensolaris-to-osx.html
Alex
On Mar 5, 5:47 pm, Alex Blewitt <alex.blew...@gmail.com> wrote:
> To paraphrase Professor Farnsworth ...
In terms of testing, Alex, I assumed you used a repetitive script of
some sort? (fstools?)
Oh, what was the verdict regarding the icon in the end? I put up the
snowflake and have the files here if they're needed/wanted etc...
Has anyone ever heard from Noel Delafano and co. since the project has
been standing on its own legs?
<bait>
Surely he's watching with interest and perhaps could even chip in
without getting into trouble from Apple?
</bait>
Cheers,
Raoul.
This is great news and will lift everyone's spirits.
Roddi
> Great work Alex!
>
> In terms of testing Alex, I assumed you used a repetitive script of
> some sort? (fstools?)
I had a script which I've used for earlier tests; it basically creates a
few hundred filesystems with different properties, unzips the odd copy
of Eclipse and so on. For ease of testing I often use "mkfile" and
generate pools that way rather than using an external drive.
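For anyone curious, the rough shape of it is something like this - the pool name, sizes and property values here are just placeholders, not the actual script:

mkfile 128m /tmp/zdisk0 /tmp/zdisk1
sudo zpool create testpool /tmp/zdisk0 /tmp/zdisk1
for i in $(jot 200); do                     # a few hundred filesystems...
    sudo zfs create -o compression=on testpool/fs$i
done
sudo zfs snapshot -r testpool@snap1         # ...the odd recursive snapshot...
sudo zfs destroy -r testpool/fs1            # ...and some mass deletion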
The goal is to get ztest working though. I haven't looked at that yet
but we have the OpenSolaris one in there now.
The immediate bug of concern is issue 36 though.
> Oh, what was the verdict regarding the icon in the end?
I asked people to attach images to the issue in Google Code a few
times. I've not seen any there yet. I think the conclusion was that
the snowflake was generally the better one.
> Has anyone every heard from Noel Delafano and co. ? since standing
> on its own legs?
>
> Surely he's watching with interest and perhaps could even chip in
> without getting into trouble from Apple?
I do sometimes wonder myself if *she* is listening :-) I certainly
have a vanity alert on Google search, so it's possible. If so, hi
Noël! But realistically it would probably be a career-limiting move if
she were to reach out, especially if they are now working on ZFS+
internally. I know that if the situations were reversed, I wouldn't want
to comment.
Alex
But now cloning from GitHub is no longer working; it fails when processing
the latest commit (well, at least that's what I think; I have no clue about git).
# git clone -v http://github.com/alblue/mac-zfs.git
Initialized empty Git repository
in /Users/bj/Projekte/mac-zfs/t/mac-zfs/.git/
error: Unable to get pack file
http://github.com/alblue/mac-zfs.git/objects/pack/pack-b566dea44a0e2d7c49a5ba378db9b7958f0b6de6.pack
transfer closed with 19315277 bytes remaining to read
error: Unable to find a20a8b4fab2f29e4d84e2d091a3705a429c89db6 under
http://github.com/alblue/mac-zfs.git
Cannot obtain needed object a20a8b4fab2f29e4d84e2d091a3705a429c89db6
while processing commit 8f645fe1e30079a2df8656db9ff36c24d30b5647.
error: Fetch failed.
Björn
--
| Bjoern Kahl +++ Siegburg +++ Germany |
| "googlelogin@-my-domain-" +++ www.bjoern-kahl.de |
| Languages: German, English, Ancient Latin (a bit :-)) |
Björn,
Thanks for trying to give this a go!
I tried a fresh checkout (with the 'git' protocol) and it worked OK:
$ git clone -v git://github.com/alblue/mac-zfs.git
Initialized empty Git repository in /private/tmp/mac-zfs/.git/
remote: Counting objects: 11546, done.
remote: Compressing objects: 100% (6142/6142), done.
remote: Total 11546 (delta 2962), reused 10976 (delta 2465)
Receiving objects: 100% (11546/11546), 25.39 MiB | 235 KiB/s, done.
Resolving deltas: 100% (2962/2962), done.
In versions of Git prior to 1.6.6.1 (I think) the HTTP checkout was sub-optimal. However, quite a lot of servers don't support the smart HTTP checkout, and I don't think GitHub do yet either.
I tried a fresh checkout with 'http' as well:
$ git clone -v http://github.com/alblue/mac-zfs.git
Initialized empty Git repository in /tmp/mac-zfs/.git/
error: Unable to get pack file http://github.com/alblue/mac-zfs.git/objects/pack/pack-b566dea44a0e2d7c49a5ba378db9b7958f0b6de6.pack
transfer closed with 17683277 bytes remaining to read
error: Unable to find a20a8b4fab2f29e4d84e2d091a3705a429c89db6 under http://github.com/alblue/mac-zfs.git
Cannot obtain needed object a20a8b4fab2f29e4d84e2d091a3705a429c89db6
while processing commit 8f645fe1e30079a2df8656db9ff36c24d30b5647.
error: Fetch failed.
The dumb HTTP protocol needs some extra files generated to tell it where to look, and obviously that's become stale. Since I don't know what's happening here, I raised an issue on the GitHub support boards:
http://support.github.com/discussions/repos/2671-unable-to-clone-via-http-can-clone-via-git
In the meantime, if you're able to, you should be able to clone via git clone -v git://github.com/alblue/mac-zfs.git instead.
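If I understand the dumb-HTTP mechanism correctly (and this is a guess on my part), the stale helper files are the ones regenerated on the serving repository by:

git update-server-info    # refreshes info/refs and objects/info/packs for dumb HTTP clients

but that's something GitHub would need to run at their end, not us.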
Alex
> http://support.github.com/discussions/repos/2671-unable-to-clone-via-http-can-clone-via-git
>
> In the meantime, if you're able to, you should be able to clone via git
> clone -v git://github.com/alblue/mac-zfs.git instead.
Just for the record: cloning with the git protocol worked fine!
On Saturday, 06 March 2010 11:32, Alex Blewitt wrote:
> On 6 Mar 2010, at 03:09, Raoul <tan...@mac.com> wrote:
> > In terms of testing Alex, I assumed you used a repetitive script of
> > some sort? (fstools?)
>
> The goal is to get ztest working though. I haven't looked at that yet
> but we have the OpenSolaris one in there now.
Just for me to not lose track:
Are you currently working on ztest?
If yes, maybe we should get in touch, as I am actively working on ztest
and libzpool, though I will probably miss my internal milestone of having
ztest compile and link by the end of this week. Too much workload in my day job.
My current target / working codebase is zfs-119 (i.e. what I found in your
alblue/mac-zfs repository as of beginning of February). Is this still the
right target for production releases? (I think having a ztest for
production code is the most useful for now. If someone has other
preferences, let me know!)
Best
Thanks,
--
-- AL --
Sent from my (new) iPhone
Can you post the output of 'sudo dtruss -f zpool list'?
- Steven
Thanks for the help.
--
-- AL --
Note to PPC users: this should not prevent it from being used on PPC. My intention is to keep supporting that for as long as my G5 keeps going :-)
626d83e Updated build environemnt variables for 10.5/10.6
c9f85a5 Fixes Issue 39 - Crash at zfs_vnop_reclaim
090e1ba Fixes Issue 36 - crash when exporting pool/unloading kernel extension
bcba606 Added testcases
I've tested this on 10.5 and 10.6 Intel this time; I don't see any reason why it shouldn't also work on PPC (though I've not tried it since the last build; but then, it was only the 10.6 builds that were problematic, and PPC isn't included in those anyway).
Note that for compiling, you should be able to select the appropriate SDK from the drop-down list; so choose 10.5 if you have a 10.5 system, and 10.6 if you have a 10.6 system.
If compiling from the command line, the switches are:
xcodebuild -sdk macosx10.5
xcodebuild -sdk macosx10.6
Alex
I was able to build yesterday, but doing a "zpool create" caused a
kernel panic. I basically created some files with mkfile and ran zpool
create with the full path to those files. I saw a removable drive icon
appear on my desktop, then the system crashed. OS X 10.6 (latest
patches), running the 64-bit kernel, MacBook Pro 15", 5th gen.
--
-- AL --
> Thanks, Alex. I'll check out the master branch, pull, and attempt a
> rebuild. I will let you know if I encounter any build issues.
>
> I was able to build yesterday, but doing a "zpool create" caused a
> kernel panic. I basically created some files with mkfile and ran zpool
> create with the full path to those files. I saw a removable drive icon
> appear on my desktop, then the system crashes. OS X 10.6 (latest
> patches), running 64 bit kernel, MacBook Pro 15", 5th gen.
Do you know what git commit this was against (in other words, what does 'git log --oneline | head' show you)? I fixed a couple of kernel panics which were later than the 'MacZFS-72' tag but are in the head of master; if you can still reproduce these with my current master (626d83e) then I'd like to get a paniclog (if one exists). If you can also send the 'zfs.kext.dSYM' file and the com.apple.filesystem.zfs.sym file (generated into /tmp by sudo kextload -s /tmp zfs.kext) then I can try debugging it further. It's probably best to file an issue in the Google Code project and attach them there rather than on the mailing list.
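Roughly, and assuming a Debug build in /path/to/build/Debug (adjust the path to wherever your build landed):

cd /path/to/build/Debug
git log --oneline | head -1        # the commit the build came from
sudo kextload -s /tmp zfs.kext     # loads the kext and drops symbol files into /tmp
# then attach /tmp/com.apple.filesystem.zfs.sym plus the zfs.kext.dSYM from the build directory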
There is a known panic with memory exhaustion at the moment; that's in zfs_context.c. If that's the case (and I've not done much more than just try creating a few small files) then we might have a memory leak somewhere which causes this to occur after time.
Alex
I've pulled your latest commit, and will try to build with that. I'll
let you know if I encounter any additional issues, and provide the info
you're requesting, open a ticket, etc.
--
-- AL --
Does any of the work that's being done take into account any differences
between the 32-bit and 64-bit kernels in addition to 10.5 vs. 10.6?
--
-- AL --
On Wed, 2010-03-10 at 12:45 +0000, Alex Blewitt wrote:
> After building and deploying from 626d83e, I get "failed to initialize
> ZFS library" again. This was with the 64-bit kernel. I rebooted into
> the 32-bit kernel, rebuilt (choosing 10.6 and i386), installed, and
> rebooted (32-bit, again), and got pretty much the same results that I
> got with c9f85a5 with the recommended change, which is that the "zpool"
> and "zfs" commands worked, but when I create a pool, I see a drive icon
> on my desktop and then get a kernel panic.
>
> Does any of the work that's being done take into account any differences
> between the 32-bit and 64-bit kernels in addition to 10.5 vs. 10.6?
They should be built with both 32- and 64-bit extensions regardless of whether you do a 'release' or 'debug' build, so I don't think you should need to explicitly select an architecture.
Do you already have ZFS installed (which, presumably, you are leaving in place)? If so, then when executing the 'zpool' and 'zfs' commands from a local directory, you'll need to tell them to load the (new) libzfs.dylib instead of the (existing) one in /usr/lib/libzfs.dylib. Running the new zpool with the old lib and new kernel is likely to cause problems :-)
If you do have an existing /usr/lib/libzfs.dylib, leave that in place and do:
export DYLD_LIBRARY_PATH=/path/to/build/Debug
export PATH=/path/to/build/Debug:$PATH
sudo kextload -s /path/to/build/Debug /path/to/build/Debug/zfs.kext
mkfile 100m /tmp/bigfile
sudo /path/to/build/Debug/zpool create bigpool /tmp/bigfile
That should at least run with a consistent set of bits. If this still fails, then can you attach the com.apple.filesystem.sym and .dSYM files in the /path/to/build/Debug directory to an issue, and I'll see if I can track it down.
Alex
This is probably just my misunderstanding and lack of experience with
Xcode. There's a dropdown in the top left portion of Xcode which allows
several options to be selected. The last one is "Active Architecture",
which has two options, i386 and x86_64, and one (and only one) of the
two must be selected. I was assuming that this option decided which
kernel you were building your binary against.
> Do you already have an install of ZFS installed (which, presumably you are leaving in-place)? If so, then when executing the 'zpool' and 'zfs' commands from a local directory, you'll need to tell it to load the (new) libzfs.dylib instead of the (existing) one in /usr/lib/libzfs.dylib. Running the new zpool with the old lib and new kernel is likely to cause problems :-)
It's a pretty fresh machine, and I never installed ZFS on it. I have
installed BootCamp, and set it up to dual (faux-triple) boot Snow
Leopard, Windows/Linux. When installed kexts, etc., have been failing,
I have been removing them. So,
basically, /S/L/E/zfs*, /S/L/Filesystems/zfs*, /usr/sbin/[zfs|zpool]
and /usr/lib/libzfs* do not exist.
I'm also clobbering the build directory before each build. Maybe I'm
doing lots of wrong things. I'm more familiar with building software on
Linux just using make, and Xcode has a lot of little clicky-clicky
options I'm not used to.
> If you do have an existing /usr/lib/libzfs.dylib, leave that in place and do:
>
> export DYLD_LIBRARY_PATH=/path/to/build/Debug
> export PATH=/path/to/build/Debug:$PATH
> sudo kextload -s /path/to/build/Debug /path/to/build/Debug/zfs.kext
> mkfile 100m /tmp/bigfile
> sudo /path/to/build/Debug/zpool create bigpool /tmp/bigfile
>
> That should at least run with a consistent set of bits. If this still fails, then can you attach the com.apple.filesystem.sym and .dSym files in the /path/to/build/Debug directory to an issue and I'll see if I can track it down.
>
> Alex
Should I perhaps grab the .dmg from the Downloads section of the
repository, install that, and then do the build/install from source?
--
-- AL --
This is basically just a selector for the Xcode debugger (so you can
switch between debugging i386 and x86_64 without building binaries for
only one or the other).
- Steven
Well,
I don't know how you run the binaries, but when you run them from your build directory without installing them, make sure that the ZFS lib is preloaded.
Run as root:
DYLD_INSERT_LIBRARIES=libzfs.dylib ./zpool list
Chris.
> Well, I made quite a fast hack and changed the #if ZFS_LEOPARD to #if 1.
>
> I selected universal from the project settings and changed the build env. to 10.6.
>
> That was it.
If you changed it to #if 1, wouldn't that have included the stuff with ZFS_LEOPARD_ONLY? When I did that, it compiled but wouldn't install due to missing dependencies.
There was a mistake in the defs. I needed to remove the flag from the zfs.kext product, and I also had to promote where the ifdef got called in zfs_context.h; but after doing both of those, I was able to load on both 10.6 and 10.5. I pushed the change to GitHub to make it easier for others :)
Alex
Thanks for clarifying that for me. I knew I had to be doing at least
one thing wrong. It won't be the last, I'm sure.
> FYI if you're used to command-line options, then you can do:
>
>
> xcodebuild -sdk macosx10.6 clean build
>
>
> That should create a 'build/Release' directory with the info in - you
> can also add -configuration Debug after the 10.6 but before the clean,
> and it'll generate a debug build instead.
This looks like the approach I'm going to start taking, thanks.
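If I've understood the switch ordering right, the debug variant comes out as:

xcodebuild -sdk macosx10.6 -configuration Debug clean build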
> > Should I perhaps install the .dmg from the Downloads section of the
> > repository, install that, then do the build/install from source?
> >
>
> No, there is obviously some stability issue which I need to recreate.
> The build should be self consistent, in that it should give you
> everything you need.
>
>
> What were you creating the pool on? A slice on the local hard drive,
> external USB drive, or a mkfile temporary repository?
As root (sudo -i), I made a directory (/zfs), and in that used mkfile to
make a few files (512MB each, iirc). I don't have any spare partitions
available to play with at the moment, and from what I had seen in the
past, ZFS on removable media seemed to be less than optimal,
stability-wise, etc. (kernel crash on remove before zfs export, iirc).
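Roughly, reconstructed from memory (the pool name and exact sizes are approximate):

sudo -i                                        # root shell
mkdir /zfs
mkfile 512m /zfs/disk0 /zfs/disk1
zpool create testpool /zfs/disk0 /zfs/disk1    # panics shortly after the drive icon appears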
--
-- AL --
Ah, I have been removing the old binaries, copying the new ones into
their appropriate locations on the system, and assuming that these new
copies were getting loaded as necessary.
Running the binaries from the build dir seems like an easier approach,
at least for testing/debugging purposes.
--
-- AL --
Let's move this specific issue off the newsgroups and into an issue tracker item. I think there's enough there to indicate that everything you've done so far is right, and the panic is real. Can you upload the symbols and either /Library/Logs/panic.log or the latest /Library/Logs/PanicReporter/*.panic report?
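If it helps, something like this should pick out the newest report (assuming the 10.6-style PanicReporter directory mentioned above):

ls -t /Library/Logs/PanicReporter/*.panic | head -1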
Alex