Good news, everyone

Alex Blewitt

Mar 5, 2010, 8:47:31 PM
to zfs-...@googlegroups.com
To paraphrase Professor Farnsworth ...

I've completed the merge of the OpenSolaris onnv_72 build and the Apple bits, and pushed them up to GitHub.

http://github.com/alblue/mac-zfs/tree/maczfs_72/

This compiles and runs on 10.5, and compiles on 10.6, though I've not tested it running there.

I've put it through its paces a little - creating a few hundred file systems, the odd recursive snapshot, mass deletions and so on - and it seems fine for most things.

This fixes:

Issue 29 - merge with onnv_72 (http://code.google.com/p/maczfs/issues/detail?id=29)
Issue 23 - deprecate read-only extension (http://code.google.com/p/maczfs/issues/detail?id=23)
Issue 22 - zfs pools are created at version 6 (http://code.google.com/p/maczfs/issues/detail?id=22)

Right now, this code shouldn't be used in production. After all, it doesn't change any functionality over what's in the existing binaries, so there's no real point - but I have discovered at least one critical bug, which means I'd prefer it if we didn't end up with a binary created from this or its derivatives until we can fix it:

Issue 36 - exporting pool or restarting computer causes kernel panic (http://code.google.com/p/maczfs/issues/detail?id=36)

It would be good if there are people willing to try to find other holes in the build, because this is (probably) going to be what we build on going forwards. If I make any further progress on Issue 36 then I'll post back here. And once we've got that, we can start to roll forwards over the onnv_ releases, which I've made available from GitHub as well:

http://github.com/alblue/onnv-gate-zfs/tree/onnv_72

I'll likely seed the GitHub copy with a few more tags, but I initially only went up to 74 whilst we figure out if it's the right thing to do. In addition, there's some more code we may need to pull in (the .zfs dir is managed by the OpenSolaris GFS module, http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/gfs.c) before pushing ahead with the _73, _74, ... tags.

For your diffing pleasure, I've set up some tags in my GitHub repository:

pre_merge_72 - the state of the Apple codebase just before the merge (equivalent to my master), which can be diffed against the 'apple' code
onnv_72 - the OpenSolaris onnv_72 tag, which is just OS patches
maczfs_72 - the result of merging the above two

I've also written up a bit on my blog about the journey getting here ...

http://alblue.blogspot.com/2010/03/merged-zfs-from-opensolaris-to-osx.html

Alex

Dustin

Mar 5, 2010, 8:57:25 PM
to zfs-macos

You completely rule. Thanks for all the work.

Raoul

Mar 5, 2010, 10:09:03 PM
to zfs-macos
Great work Alex!

In terms of testing, Alex, I assume you used a repetitive script of
some sort? (fstools?)
Oh, what was the verdict regarding the icon in the end? I put up the
snowflake and have the files here if they're needed/wanted etc...

Has anyone ever heard from Noel Dellofano and co. since the project
started standing on its own legs?
<bait>
Surely he's watching with interest and perhaps could even chip in
without getting into trouble from Apple?
</bait>

Cheers,

Raoul.

Ruotger Skupin

Mar 6, 2010, 4:46:41 AM
to zfs-...@googlegroups.com
Thanks, Man!

This is great news and will raise everyone's spirits.

Roddi

Alex Blewitt

Mar 6, 2010, 5:32:32 AM
to zfs-...@googlegroups.com, zfs-macos
On 6 Mar 2010, at 03:09, Raoul <tan...@mac.com> wrote:

> Great work Alex!
>
> In terms of testing, Alex, I assume you used a repetitive script of
> some sort? (fstools?)

I had a script which I've used for earlier tests; it basically creates a
few hundred filesystems with different properties, unzips the odd copy
of Eclipse and so on. For ease of testing I often use "mkfile" and
generate pools that way rather than using an external drive.
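A rough sketch of the kind of file-backed workout I mean (the pool and file names here are illustrative, not from the actual script):

mkfile 128m /tmp/vdev0
sudo zpool create testpool /tmp/vdev0
for i in {1..200}; do sudo zfs create testpool/fs$i; done
sudo zfs snapshot -r testpool@checkpoint
for i in {1..200}; do sudo zfs destroy -r testpool/fs$i; done
sudo zpool destroy testpool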

The goal is to get ztest working though. I haven't looked at that yet
but we have the OpenSolaris one in there now.

The immediate bug of concern is Issue 36, though.

> Oh, what was the verdict regarding the icon in the end?

I asked people to attach images to the issue in Google Code a few
times. I've not seen any there yet. I think the conclusion was that
the snowflake was generally the better one.

> Has anyone ever heard from Noel Dellofano and co. since the project
> started standing on its own legs?
>

> Surely he's watching with interest and perhaps could even chip in
> without getting into trouble from Apple?

I do wonder myself if *she* is listening sometimes :-) I certainly
have a vanity alert on Google search, so it's possible. If so, hi
Noël! But realistically it would probably be a career-limiting move if
she were to reach out, especially if they are now working on ZFS+
internally. I know if the situations were reversed, I wouldn't want to
comment.

Alex

Bjoern Kahl

Mar 6, 2010, 9:34:58 AM
to zfs-...@googlegroups.com

Thanks a lot for all the work,

but cloning is no longer working from GitHub; it fails when processing
the latest commit (well, at least that's what I think - I have no clue about git)


# git clone -v http://github.com/alblue/mac-zfs.git
Initialized empty Git repository in /Users/bj/Projekte/mac-zfs/t/mac-zfs/.git/
error: Unable to get pack file http://github.com/alblue/mac-zfs.git/objects/pack/pack-b566dea44a0e2d7c49a5ba378db9b7958f0b6de6.pack
transfer closed with 19315277 bytes remaining to read
error: Unable to find a20a8b4fab2f29e4d84e2d091a3705a429c89db6 under
http://github.com/alblue/mac-zfs.git
Cannot obtain needed object a20a8b4fab2f29e4d84e2d091a3705a429c89db6
while processing commit 8f645fe1e30079a2df8656db9ff36c24d30b5647.
error: Fetch failed.

Björn
--
| Bjoern Kahl +++ Siegburg +++ Germany |
| "googlelogin@-my-domain-" +++ www.bjoern-kahl.de |
| Languages: German, English, Ancient Latin (a bit :-)) |

Alex Blewitt

Mar 6, 2010, 2:32:55 PM
to zfs-...@googlegroups.com

On 6 Mar 2010, at 14:34, Bjoern Kahl wrote:
> but cloning is no longer working from GitHub; it fails when processing
> the latest commit (well, at least that's what I think - I have no clue about git)
>
> # git clone -v http://github.com/alblue/mac-zfs.git
> Initialized empty Git repository
> in /Users/bj/Projekte/mac-zfs/t/mac-zfs/.git/
> error: Unable to get pack file
>
> http://github.com/alblue/mac-zfs.git/objects/pack/pack-b566dea44a0e2d7c49a5ba378db9b7958f0b6de6.pack
> transfer closed with 19315277 bytes remaining to read
> error: Unable to find a20a8b4fab2f29e4d84e2d091a3705a429c89db6 under
> http://github.com/alblue/mac-zfs.git
> Cannot obtain needed object a20a8b4fab2f29e4d84e2d091a3705a429c89db6
> while processing commit 8f645fe1e30079a2df8656db9ff36c24d30b5647.
> error: Fetch failed.

Björn,

Thanks for trying to give this a go!

I tried a fresh checkout (with the 'git' protocol) and it worked OK:

$ git clone -v git://github.com/alblue/mac-zfs.git
Initialized empty Git repository in /private/tmp/mac-zfs/.git/
remote: Counting objects: 11546, done.
remote: Compressing objects: 100% (6142/6142), done.
remote: Total 11546 (delta 2962), reused 10976 (delta 2465)
Receiving objects: 100% (11546/11546), 25.39 MiB | 235 KiB/s, done.
Resolving deltas: 100% (2962/2962), done.

In versions of Git prior to 1.6.6.1 (I think) the HTTP checkout was sub-optimal. However, quite a lot of servers don't support the smart HTTP checkout, and I don't think GitHub does yet either.

I tried a fresh checkout with 'http' as well:

$ git clone -v http://github.com/alblue/mac-zfs.git
Initialized empty Git repository in /tmp/mac-zfs/.git/

transfer closed with 17683277 bytes remaining to read


error: Unable to find a20a8b4fab2f29e4d84e2d091a3705a429c89db6 under http://github.com/alblue/mac-zfs.git
Cannot obtain needed object a20a8b4fab2f29e4d84e2d091a3705a429c89db6
while processing commit 8f645fe1e30079a2df8656db9ff36c24d30b5647.
error: Fetch failed.

The dumb HTTP protocol needs some extra files generated to tell it where to look, and obviously that's become stale. Since I don't know what's happening here, I raised an issue on the GitHub support boards:

http://support.github.com/discussions/repos/2671-unable-to-clone-via-http-can-clone-via-git/autosuggest
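For background, the 'extra files' the dumb protocol relies on (info/refs and objects/info/packs) are regenerated server-side by a standard git command, normally run from a post-update hook - something GitHub would have to do at their end, which is presumably how it went stale:

git update-server-info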

In the meantime, you should be able to clone via 'git clone -v git://github.com/alblue/mac-zfs.git' instead.

Alex

Alex Blewitt

Mar 6, 2010, 2:38:01 PM
to Alex Blewitt, zfs-...@googlegroups.com

Bjoern Kahl

Mar 8, 2010, 4:11:27 PM
to zfs-...@googlegroups.com
On Saturday, 06 March 2010 20:32, Alex Blewitt wrote:
> On 6 Mar 2010, at 14:34, Bjoern Kahl wrote:
> > but cloning is no longer working from GitHub; it fails when
> > processing the latest commit (well, at least that's what I think - I have
> > no clue about git)

> http://support.github.com/discussions/repos/2671-unable-to-clone-via-http-can-clone-via-git/autosuggest
>
> In the meantime, you should be able to clone via 'git clone -v
> git://github.com/alblue/mac-zfs.git' instead.

Just for the record: cloning with the git protocol worked fine!

Bjoern Kahl

Mar 8, 2010, 4:46:14 PM
to zfs-...@googlegroups.com

Hi Alex & All,

On Saturday, 06 March 2010 11:32, Alex Blewitt wrote:
> On 6 Mar 2010, at 03:09, Raoul <tan...@mac.com> wrote:
> > In terms of testing Alex, I assumed you used a repetitive script of
> > some sort? (fstools?)
>

> The goal is to get ztest working though. I haven't looked at that yet
> but we have the OpenSolaris one in there now.

Just for me to not lose track:

Are you currently working on ztest?

If yes, maybe we should get in touch, as I am actively working on ztest
and libzpool, though I will probably fail my internal milestone of having
ztest compile and link by the end of this week. Too much workload in my day job.

My current target / working codebase is zfs-119 (i.e. what I found in your
alblue/mac-zfs repository as of the beginning of February). Is this still the
right target for production releases? (I think having a ztest for
production code is the most useful for now. If someone has other
preferences, let me know!)


Best

Alex Blewitt

Mar 8, 2010, 5:23:32 PM
to zfs-...@googlegroups.com
On 8 Mar 2010, at 21:46, Bjoern Kahl wrote:

> Hi Alex & All,
>
> On Saturday, 06 March 2010 11:32, Alex Blewitt wrote:
> > On 6 Mar 2010, at 03:09, Raoul <tan...@mac.com> wrote:
> > > In terms of testing, Alex, I assume you used a repetitive script of
> > > some sort? (fstools?)
> >
> > The goal is to get ztest working though. I haven't looked at that yet
> > but we have the OpenSolaris one in there now.
>
> Just for me to not lose track:
>
> Are you currently working on ztest?

I think it's a good thing to look at. I've not done much yet, though I have been poking through the code. Just got under 100 compile errors now :-) I think having libumem might be a good idea, since ztest uses it.

Apart from anything else, umem can detect when memory gets overwritten, by filling allocations with a sentinel value. In the meantime, I've created a stub that just delegates to malloc:

void *umem_alloc(size_t size, int flags)
{
	/* the umem flags (e.g. UMEM_NOFAIL) are ignored by this stub */
	return malloc(size);
}

void umem_free(void *mem, size_t size)
{
	/* size is unused here; free() tracks the allocation size itself */
	free(mem);
}

> If yes, maybe we should get in touch, as I am actively working on ztest
> and libzpool, though I will probably fail my internal milestone of having
> ztest compile and link by the end of this week. Too much workload in my day job.

Know the feeling :-) If you're working on that, I might switch off and do something else. Would be good to record that on the Google Code page.

> My current target / working codebase is zfs-119 (i.e. what I found in your
> alblue/mac-zfs repository as of the beginning of February). Is this still the
> right target for production releases? (I think having a ztest for
> production code is the most useful for now. If someone has other
> preferences, let me know!)

It would be good if others could try out the 'maczfs_72' or later build, which is based on the code synced up with onnv_72 (and a couple of kernel panic fixes later down the line). I've pushed that to my 'master' branch, so it depends on which branch you've cloned. You'd certainly want to base it on the 'reorg' commit or later, because I moved files all over the place (including the Xcode project refs) to match the new locations. Doing an update to that would probably make sense prior to doing a bunch of new work, but compiling it and giving it a bit of a workout would probably also be good :-)
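If it helps, picking up the merged line looks something like this (remote URL and tag names as given earlier in the thread):

git clone git://github.com/alblue/mac-zfs.git
cd mac-zfs
git checkout maczfs_72   # the tagged merge point; or stay on master for the later panic fixes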


Here's my recent history at GitHub:

c9f85a5 Fixes Issue 39 - Crash at zfs_vnop_reclaim < would be good if people can exercise/test this
090e1ba Fixes Issue 36 - crash when exporting pool/unloading kernel extension
bcba606 Added testcases
ef917ce Added support scripts to facilitate remote debugging
62f86b9 Simple installer for installing into local machine
d601eb0 Build Debug and Release configurations with both valid architectures
a20a8b4 Adding instructions on how to run the Hg converter
6d20fbb Merge commit 'onnv_72' into 'mac-zfs' < this is the big merge with onnv_72, and what we should be working forwards from ultimately < tagged maczfs_72
708fa1d Added instructions on how to use, as well as updates to filemap < tagged pre_merge_72
8509823 Remove detritus
abc4602 Strip all needed files out of exclusion lists
7e6ecd9 Added list of files under sys
4d7eb44 Added hg.convert.filemap for stripping out the ZFS required stuff from o
8f645fe Merge branch 'doc' of github.com:alblue/mac-zfs into doc
7f6f991 Fixed duplicate name for gethrtime
d2672be Committed fix for renaming of gethrtime function
739c111 Added ZFS_LEOPARD to release build as well
4f8030e Moved all files to known locations in onnv format < this was the big set of renames/moves to put the OpenSolaris data under /usr/src etc.



Christian Kendi

Mar 9, 2010, 8:08:47 AM
to zfs-...@googlegroups.com
Hi,

I was testing the new maczfs_72 under 10.6. I just had to remove the ZFS_LEOPARD define to make it load. The flag causes trouble for the VFS root node on 10.6 etc...
As maczfs_72 is still pool version 8 and I use the SL bits with version 11, I couldn't test it further. So far:
* the module compiled w/ a couple of adjustments to the Xcode project file (ZFS_LEOPARD, universal build)
* the binaries worked

Chris.

Alex Blewitt

Mar 9, 2010, 10:26:31 AM
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
Thanks. Was it just the one flag you needed to change? It should be automatically set on a 10.5 build, but not set/needed on the 10.6 build.  

Also, what did you need to change for the universal build? If you build against 10.6, it selects both Intel architectures; if you select 10.5, it uses both 32-bit architectures.

Thanks for testing it. I plan to see if I can do some more stress testing and then work on a repeatable/automated build for both systems, packaged into a universal installer. I may not get to this for a while.

Alex 

Sent from my (new) iPhone

Al Gordon

Mar 9, 2010, 2:50:35 PM
to zfs-...@googlegroups.com
Chris, can you share your "adjustments"? I am trying to get this working
too, so that I can do some testing. I'm running 10.6 x86_64, and get
"internal error: failed to initialize ZFS library" when I do a "sudo
zpool list" or "sudo zfs list".

Thanks,

--

-- AL --

Alex Blewitt

Mar 9, 2010, 4:20:26 PM
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
I'll push out a patch shortly. In the meantime, go to the project file
in the .xcodeproj directory and remove the line with ZFS_LEOPARD_ONLY in it.

Sent from my (new) iPhone

Steven Noonan

Mar 9, 2010, 4:16:51 PM
to zfs-...@googlegroups.com
I built it locally and didn't have that problem.

Can you post the output of 'sudo dtruss -f zpool list'?

- Steven

Al Gordon

Mar 9, 2010, 4:56:58 PM
to zfs-...@googlegroups.com
Looks like I got it now. It was the zfs.xcodeproj/project.pbxproj file.
It appears that there are two instances of "ZFS_LEOPARD_ONLY", one for
Debug and one for Release. I removed them both, and it looks like the
kext, etc. loads now.

Thanks for the help.


--

-- AL --

Alex Blewitt

Mar 9, 2010, 5:59:28 PM
to zfs-...@googlegroups.com
I've updated my master with the removal of unnecessary attributes. You should be able to give that a go.

Note to PPC users: this should not prevent it from being used on PPC. My intention is to keep supporting that for as long as my G5 keeps going :-)

626d83e Updated build environemnt variables for 10.5/10.6


c9f85a5 Fixes Issue 39 - Crash at zfs_vnop_reclaim

090e1ba Fixes Issue 36 - crash when exporting pool/unloading kernel extension
bcba606 Added testcases

I've tested this on 10.5 and 10.6 Intel this time; I don't see any reason why it shouldn't also work on PPC (though I've not tried it since the last build; but then, it was only problematic on the 10.6 builds, which don't include PPC anyway).

Note that for compiling, you should be able to select the appropriate SDK from the drop-down list; so choose 10.5 if you have a 10.5 system, and 10.6 if you have a 10.6 system.

If compiling from the command line, the switches are:

xcodebuild -sdk macosx10.5
xcodebuild -sdk macosx10.6

Alex

Al Gordon

Mar 10, 2010, 7:16:46 AM
to zfs-...@googlegroups.com
Thanks, Alex. I'll check out the master branch, pull, and attempt a
rebuild. I will let you know if I encounter any build issues.

I was able to build yesterday, but doing a "zpool create" caused a
kernel panic. I basically created some files with mkfile and ran zpool
create with the full path to those files. I saw a removable drive icon
appear on my desktop, then the system crashed. OS X 10.6 (latest
patches), running 64 bit kernel, MacBook Pro 15", 5th gen.

--

-- AL --

Alex Blewitt

Mar 10, 2010, 7:45:09 AM
to zfs-...@googlegroups.com
On 10 Mar 2010, at 12:16, Al Gordon wrote:

> Thanks, Alex. I'll check out the master branch, pull, and attempt a
> rebuild. I will let you know if I encounter any build issues.
>
> I was able to build yesterday, but doing a "zpool create" caused a
> kernel panic. I basically created some files with mkfile and ran zpool
> create with the full path to those files. I saw a removable drive icon
> appear on my desktop, then the system crashes. OS X 10.6 (latest
> patches), running 64 bit kernel, MacBook Pro 15", 5th gen.

Do you know what git commit this was against (in other words, what does 'git log --oneline | head' show you)? I fixed a couple of kernel panics which were later than the 'MacZFS-72' tag but are in the head of master; if you can still reproduce these with my current master (626d83e) then I'd like to get a paniclog (if one exists); if you can also send the 'zfs.kext.dSYM' file and the com.apple.filesystem.zfs.sym file (generated into /tmp by sudo kextload -s /tmp zfs.kext) then I can try debugging it further. Probably best to file an issue in the Google Code project and attach them there rather than on the mailing list.

There is a known panic with memory exhaustion at the moment; that's in zfs_context.c. If that's the case (and I've not done much more than just try creating a few small files) then we might have a memory leak somewhere which causes this to occur over time.

Alex

Al Gordon

Mar 10, 2010, 10:58:22 AM
to zfs-...@googlegroups.com
The build was against c9f85a5, "Fixes issue #39 - Crash at
zfs_vnop_reclaim".

I've pulled your latest commit, and will try to build with that. I'll
let you know if I encounter any additional issues, and provide the info
you're requesting, open a ticket, etc.

--

-- AL --

Al Gordon

Mar 10, 2010, 11:18:11 AM
to zfs-...@googlegroups.com
After building and deploying from 626d83e, I get "failed to initialize
ZFS library" again. This was with the 64-bit kernel. I rebooted into
the 32-bit kernel, rebuilt (choosing 10.6 and i386), installed, and
rebooted (32-bit, again), and got pretty much the same results that I
got with c9f85a5 with the recommended change, which is that the "zpool"
and "zfs" commands worked, but when I create a pool, I see a drive icon
on my desktop and then get a kernel panic.

Does any of the work that's being done take into account any differences
between the 32-bit and 64-bit kernels in addition to 10.5 vs. 10.6?

--

-- AL --


On Wed, 2010-03-10 at 12:45 +0000, Alex Blewitt wrote:

Alex Blewitt

Mar 10, 2010, 11:34:32 AM
to zfs-...@googlegroups.com
On 10 Mar 2010, at 16:18, Al Gordon wrote:

> After building and deploying from 626d83e, I get "failed to initialize
> ZFS library" again. This was with the 64-bit kernel. I rebooted into
> the 32-bit kernel, rebuilt (choosing 10.6 and i386), installed, and
> rebooted (32-bit, again), and got pretty much the same results that I
> got with c9f85a5 with the recommended change, which is that the "zpool"
> and "zfs" commands worked, but when I create a pool, I see a drive icon
> on my desktop and then get a kernel panic.
>
> Does any of the work that's being done take into account any differences
> between the 32-bit and 64-bit kernels in addition to 10.5 vs. 10.6?


They should be built with both 32-bit and 64-bit extensions regardless of whether you do a 'release' or 'debug' build, so I don't think you should need to explicitly select an architecture.
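If in doubt, one way to check which architectures actually ended up in the built kext is lipo; the path below assumes the standard kext bundle layout:

lipo -info build/Release/zfs.kext/Contents/MacOS/zfs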

Do you already have an install of ZFS (which, presumably, you are leaving in place)? If so, then when executing the 'zpool' and 'zfs' commands from a local directory, you'll need to tell them to load the (new) libzfs.dylib instead of the existing one in /usr/lib/libzfs.dylib. Running the new zpool with the old lib and the new kernel extension is likely to cause problems :-)

If you do have an existing /usr/lib/libzfs.dylib, leave that in place and do:

export DYLD_LIBRARY_PATH=/path/to/build/Debug
export PATH=/path/to/build/Debug:$PATH
sudo kextload -s /path/to/build/Debug /path/to/build/Debug/zfs.kext
mkfile 100m /tmp/bigfile
sudo /path/to/build/Debug/zpool create bigpool /tmp/bigfile

That should at least run with a consistent set of bits. If this still fails, then can you attach the com.apple.filesystem.zfs.sym and zfs.kext.dSYM files in the /path/to/build/Debug directory to an issue and I'll see if I can track it down.
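As an extra sanity check that a consistent set of bits is in use, these standard tools (not part of the steps above, just a suggestion) can confirm what's linked and loaded:

otool -L /path/to/build/Debug/zpool   # shows which libzfs.dylib the binary links against
kextstat | grep -i zfs                # shows whether the zfs kext is currently loaded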

Alex

Al Gordon

Mar 10, 2010, 2:17:03 PM
to zfs-...@googlegroups.com
On Wed, 2010-03-10 at 16:34 +0000, Alex Blewitt wrote:
> On 10 Mar 2010, at 16:18, Al Gordon wrote:
> > After building and deploying from 626d83e, I get "failed to initialize
> > ZFS library" again. This was with the 64-bit kernel. I rebooted into
> > the 32-bit kernel, rebuilt (choosing 10.6 and i386), installed, and
> > rebooted (32-bit, again), and got pretty much the same results that I
> > got with c9f85a5 with the recommended change, which is that the "zpool"
> > and "zfs" commands worked, but when I create a pool, I see a drive icon
> > on my desktop and then get a kernel panic.
> >
> > Does any of the work that's being done take into account any differences
> > between the 32-bit and 64-bit kernels in addition to 10.5 vs. 10.6?
>
>
> They should be built with both 32 and 64 bit extensions regardless of whether you do a 'release' or 'debug' build, so I don't think you should need to explicitly select an architecture.

This is probably just my misunderstanding and lack of experience with
XCode. There's a dropdown in the top left portion of XCode which allows
several options to be selected. The last one is "Active Architecture",
which has two options, i386 and x86_64, and one (and only one) of the
two must be selected. I was assuming that this option decided which
kernel you were building your binary against.

> Do you already have an install of ZFS installed (which, presumably you are leaving in-place)? If so, then when executing the 'zpool' and 'zfs' commands from a local directory, you'll need to tell it to load the (new) libzfs.dylib instead of the (existing) one in /usr/lib/libzfs.dylib. Running the new zpool with the old lib and new kernel is likely to cause problems :-)

It's a pretty fresh machine, and I never installed ZFS on it. I have
installed BootCamp, and set it up to dual (faux-triple) boot Snow
Leopard, Windows/Linux. When installed kexts, etc., have been failing,
I have been removing them. So,
basically, /S/L/E/zfs*, /S/L/Filesystems/zfs*, /usr/sbin/[zfs|zpool]
and /usr/lib/libzfs* do not exist.

I'm also clobbering the build directory before each build. Maybe I'm
doing lots of wrong things. I'm more familiar with building software on
Linux just using make, and XCode has a lot of little clicky-clicky
options I'm not used to.

> If you do have an existing /usr/lib/libzfs.dylib, leave that in place and do:
>
> export DYLD_LIBRARY_PATH=/path/to/build/Debug
> export PATH=/path/to/build/Debug:$PATH
> sudo kextload -s /path/to/build/Debug /path/to/build/Debug/zfs.kext
> mkfile 100m /tmp/bigfile
> sudo /path/to/build/Debug/zpool create bigpool /tmp/bigfile
>
> That should at least run with a consistent set of bits. If this still fails, then can you attach the com.apple.filesystem.sym and .dSym files in the /path/to/build/Debug directory to an issue and I'll see if I can track it down.
>
> Alex

Should I perhaps grab the .dmg from the Downloads section of the
repository, install that, then do the build/install from source?

--

-- AL --


Steven Noonan

Mar 10, 2010, 2:23:59 PM
to zfs-macos
On Wed, Mar 10, 2010 at 11:17 AM, Al Gordon <a...@runlevel7.org> wrote:
> On Wed, 2010-03-10 at 16:34 +0000, Alex Blewitt wrote:
>> On 10 Mar 2010, at 16:18, Al Gordon wrote:
>> > After building and deploying from 626d83e, I get "failed to initialize
>> > ZFS library" again.  This was with the 64-bit kernel.  I rebooted into
>> > the 32-bit kernel, rebuilt (choosing 10.6 and i386), installed, and
>> > rebooted (32-bit, again), and got pretty much the same results that I
>> > got with c9f85a5 with the recommended change, which is that the "zpool"
>> > and "zfs" commands worked, but when I create a pool, I see a drive icon
>> > on my desktop and then get a kernel panic.
>> >
>> > Does any of the work that's being done take into account any differences
>> > between the 32-bit and 64-bit kernels in addition to 10.5 vs. 10.6?
>>
>>
>> They should be built with both 32 and 64 bit extensions regardless of whether you do a 'release' or 'debug' build, so I don't think you should need to explicitly select an architecture.
>
> This is probably just my misunderstanding and lack of experience with
> XCode.  There's a dropdown in the top left portion of XCode which allows
> several options to be selected.  The last one is "Active Architecture",
> which has two options, i386 and x86_64, and one (and only one) of the
> two must be selected.  I was assuming that this option decided which
> kernel you were building your binary against.

This is basically just a selector for the Xcode debugger (so you can
switch between debugging i386 and x86_64 without building binaries for
only one or the other).

- Steven

Christian Kendi

Mar 10, 2010, 2:26:54 PM
to zfs-...@googlegroups.com

Well,

I don't know how you run the bins, but when you run them from your build directory without installing them, make sure that the ZFS lib is preloaded.
Run as root:
DYLD_INSERT_LIBRARIES=libzfs.dylib ./zpool list

Chris.


Christian Kendi

Mar 10, 2010, 2:29:09 PM
to zfs-...@googlegroups.com
Well, I made quite a fast hack and changed the #if ZFS_LEOPARD to #if 1.

I selected universal from the project settings and changed the build env. to 10.6.

That was it.

Chris.

Alex Blewitt

Mar 10, 2010, 3:07:40 PM
to zfs-...@googlegroups.com
On 10 Mar 2010, at 19:29, Christian Kendi wrote:

> Well, I made quite a fast hack and changed the #if ZFS_LEOPARD to #if 1.
>
> I selected universal from the project settings and changed the build env. to 10.6.
>
> That was it.

If you changed it to #if 1, wouldn't that have included the stuff with ZFS_LEOPARD_ONLY? When I did that, it compiled but wouldn't install due to missing dependencies.

There was a mistake in the defs. I needed to remove the flag from the zfs.kext product, and I also had to promote where the ifdef got called in zfs_context.h; but after doing both of those, I was able to load on 10.6 and 10.5. I pushed the change to GitHub to make it easier for others :)

Alex


Alex Blewitt

Mar 10, 2010, 3:15:12 PM
to zfs-...@googlegroups.com
On 10 Mar 2010, at 19:17, Al Gordon wrote:

> On Wed, 2010-03-10 at 16:34 +0000, Alex Blewitt wrote:
> > On 10 Mar 2010, at 16:18, Al Gordon wrote:
> > > After building and deploying from 626d83e, I get "failed to initialize
> > > ZFS library" again. This was with the 64-bit kernel. I rebooted into
> > > the 32-bit kernel, rebuilt (choosing 10.6 and i386), installed, and
> > > rebooted (32-bit, again), and got pretty much the same results that I
> > > got with c9f85a5 with the recommended change, which is that the "zpool"
> > > and "zfs" commands worked, but when I create a pool, I see a drive icon
> > > on my desktop and then get a kernel panic.

OK, so that's concerning, and is an issue that needs to be resolved. I'm building and testing on PPC (32 bit) and Intel (32 bit) but as yet haven't rebooted into 64-bit mode to test that; maybe there's an issue that needs resolving for the 64-bit builds. 

> This is probably just my misunderstanding and lack of experience with
> XCode. There's a dropdown in the top left portion of XCode which allows
> several options to be selected. The last one is "Active Architecture",
> which has two options, i386 and x86_64, and one (and only one) of the
> two must be selected. I was assuming that this option decided which
> kernel you were building your binary against.

As Steven said, it's just the option which is used to debug. (The Project's info has a 'build' tab which configures what options are used etc., and one of them is 'build architectures' - it should default to i386 x86_64 for a 10.6 build.) But Xcode isn't particularly clear about a few things :-)

> > Do you already have an install of ZFS (which, presumably, you are leaving in place)?
>
> It's a pretty fresh machine, and I never installed ZFS on it.

OK, that rules out any possibility of problems that might be caused by running them. The kernel extension is standalone anyway; the libzfs is used by both zpool and zfs. So if you can run them (without getting an 'unable to load library' error) then it's a real problem.

> I'm also clobbering the build directory before each build. Maybe I'm
> doing lots of wrong things. I'm more familiar with building software on
> Linux just using make, and XCode has a lot of little clicky-clicky
> options I'm not used to.

FYI if you're used to command-line options, then you can do:

xcodebuild -sdk macosx10.6 clean build

That should create a 'build/Release' directory with the info in - you can also add -configuration Debug after the 10.6 but before the clean, and it'll generate a debug build instead.
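Spelled out, the debug variant described above would be:

xcodebuild -sdk macosx10.6 -configuration Debug clean build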

> Should I perhaps grab the .dmg from the Downloads section of the
> repository, install that, then do the build/install from source?

No, there is obviously some stability issue which I need to recreate. The build should be self-consistent, in that it should give you everything you need.

What were you creating the pool on? A slice on the local hard drive, external USB drive, or a mkfile temporary repository?

Alex

Al Gordon

Mar 10, 2010, 5:02:59 PM
to zfs-...@googlegroups.com
On Wed, 2010-03-10 at 20:15 +0000, Alex Blewitt wrote:
> As Steven said, it's just the option which is used to debug. (The
> Project's info has a 'build' tab which configures what options are
> used etc., and one of them is 'build architectures' - it should default to
> i386 x86_64 for a 10.6 build.) But Xcode isn't particularly clear
> about a few things :-)

Thanks for clarifying that for me. I knew I had to be doing at least
one thing wrong. It won't be the last, I'm sure.

> FYI if you're used to command-line options, then you can do:
>
>
> xcodebuild -sdk macosx10.6 clean build
>
>
> That should create a 'build/Release' directory with the info in - you
> can also add -configuration Debug after the 10.6 but before the clean,
> and it'll generate a debug build instead.

This looks like the approach I'm going to start taking, thanks.

> > Should I perhaps install the .dmg from the Downloads section of the
> > repository, install that, then do the build/install from source?
> >
>
> No, there is obviously some stability issue which I need to recreate.
> The build should be self consistent, in that it should give you
> everything you need.
>
>
> What were you creating the pool on? A slice on the local hard drive,
> external USB drive, or a mkfile temporary repository?

As root (sudo -i), I made a directory (/zfs), and in that used mkfile to
make a few files (512MB each, iirc). I don't have any spare partitions
available to play with at the moment, and from what I had seen in the
past, ZFS on removable media seemed to be less than optimal,
stability-wise, etc. (kernel crash on remove before zfs export, iirc).

--

-- AL --


Al Gordon

Mar 10, 2010, 5:05:35 PM
to zfs-...@googlegroups.com
On Wed, 2010-03-10 at 20:26 +0100, Christian Kendi wrote:
> Well,
>
> I don't know how you run the bins, but when you run them from your build directory without installing them, make sure that the ZFS lib is preloaded.
> Run as root:
> DYLD_INSERT_LIBRARIES=libzfs.dylib ./zpool list
>
> Chris.

Ah, I have been removing the old binaries, copying the new ones into
their appropriate locations on the system, and assuming that these new
copies were getting loaded as necessary.

Running the binaries from the build dir seems like an easier approach,
at least for testing/debugging purposes.

--

-- AL --

Alex Blewitt

Mar 11, 2010, 2:54:13 AM
to zfs-...@googlegroups.com

Let's move this specific issue off the newsgroup and into an issue tracker item. I think there's enough there to indicate that everything you've done so far is right, and the panic is real. Can you upload the symbols and either /Library/Logs/panic.log or the latest /Library/Logs/PanicReporter/*.panic report?
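A convenience one-liner for grabbing the newest report (paths as above; the 2>/dev/null just silences the error if there are no reports):

ls -t /Library/Logs/PanicReporter/*.panic 2>/dev/null | head -1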

Alex
