Kernel Panics after 10.8.5 update


Alexandre Takacs

Sep 19, 2013, 1:27:56 PM9/19/13
to zfs-...@googlegroups.com
Folks

Recently upgraded to 10.8.5 and I unfortunately have systematic crashes involving MacZFS.

I'm pretty sure there must be something else in the equation, as I seem to be the only one seeing this, but whenever I start my system it boots normally and after 15-20 seconds goes into a kernel panic: "Preemption level underflow, possible cause unlocking an unlocked mutex or spinlock" (no idea what it means). This started just after applying 10.8.5 on what was so far a rock-solid MacBook Pro 13".

The crash always happens in org.maczfs.zfs.fs(74.3)[65B33F1B-0B98-35DF-BCFE-6844DFB71851]@0xffffff7f85256000->0xffffff7f852befff and only if a ZFS disk is connected. Otherwise everything works fine.

Interestingly, the process is reported as mds, which is - if my understanding is correct - Spotlight related. I have (obviously, having installed MacZFS eons ago) turned off indexing of the ZFS volumes and also tried to turn it off altogether, but the crash persists - 100% reproducible.

I plan to try to import the volume into a FreeNAS VM and run a scrub - not much else I can think of at the moment, so any suggestions are most welcome.

--alexT
 

Graham Perrin

Sep 19, 2013, 3:58:24 PM9/19/13
to zfs-...@googlegroups.com
On Thursday, 19 September 2013 18:27:56 UTC+1, Alexandre Takacs wrote:
 
… boots normally and after 15-20 sec goes into a kernel panic …

Is the panic before or after login? 

If after login: which apps run (or are launching) at the time of the panic? 

Can you attach one of the .panic files? 

Thanks. 

Alexandre Takacs

Sep 19, 2013, 5:28:22 PM9/19/13
to zfs-...@googlegroups.com
Is the panic before or after login?

After login

 
If after login: which apps run (or are launching) at the time of the panic? 


Just sitting idle in Finder
 
Can you attach one of the .panic files? 

Done


Panic.zip

Alexandre Takacs

Sep 19, 2013, 7:26:40 PM9/19/13
to zfs-...@googlegroups.com
Further update - attached the disk as a raw device to VMware running FreeNAS 9.

When trying to import the disk (which is seen as a ZFS volume with the proper name) I get an error: "Disk has a block alignment that is larger than the pool's alignment".

Not sure what to do from there...

Alexandre Takacs

Sep 20, 2013, 4:11:05 PM9/20/13
to zfs-...@googlegroups.com

Further testing - I have created a pristine boot disk with 10.8.5 (straight from Apple) and MacZFS. Exact same problem (crash dump enclosed) :(

Now what I'd really like is a way to mount my disk for more than a few seconds in order to transfer the contents... Any suggestions?

Jason Belec

Sep 20, 2013, 4:18:31 PM9/20/13
to zfs-...@googlegroups.com
OK. I've seen this in the past. Have you tried reinstalling MacZFS and shutdown restart?

If yes try to boot as new admin user, export pool, re-import with -o option where you set mounting to off. 

Let me know. 

Everything is technically fine. Are you using advanced format drives? If yes how exactly did you create the pool?
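For the archives, the export/re-import sequence Jason describes looks roughly like this. The pool name `tank` is a hypothetical placeholder, and exact flag support varies by ZFS version - MacZFS's older tools may not have all of these:

```shell
# Export the pool cleanly (fails if datasets are busy):
sudo zpool export tank

# Re-import without mounting any datasets. On newer ZFS the -N flag
# does this directly; on older tools, disabling the mountpoint
# property is the closest equivalent:
sudo zpool import -N tank          # newer ZFS: import, mount nothing
sudo zfs set mountpoint=none tank  # alternative: keep datasets unmounted
```

Importing without mounting keeps processes like mds/Spotlight from touching the filesystem, which is the point of the exercise here.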

Jason
Sent from my iPhone


Alexandre Takacs

Sep 21, 2013, 3:17:03 AM9/21/13
to zfs-...@googlegroups.com

OK. I've seen this in the past. Have you tried reinstalling MacZFS and shutdown restart?

Yes (also same problem with fresh OSX install)
 
If yes try to boot as new admin user, export pool, re-import with -o option where you set mounting to off. 

Not exactly sure how I would do that (i.e. the ZFS part). Can you give me some instructions?

Everything is technically fine. Are you using advanced format drives? If yes how exactly did you create the pool?

This is a plain vanilla disk in my MacBook Pro (replacing the DVD drive). It was formatted with MacZFS (an earlier version, under 10.6) and then upgraded to the current version.

Alexandre Takacs

Sep 22, 2013, 4:49:23 PM9/22/13
to zfs-...@googlegroups.com
Further reports:

I booted up an old 10.6 box and installed MacZFS. I can mount the disk and scrub it (took an entire night - no problems reported), but as soon as I start copying things I end up with a kernel panic...

Seems I have some sort of corruption that gets through the tests but still kills MacZFS.

Any suggestions?

Jason Belec

Sep 22, 2013, 5:21:58 PM9/22/13
to zfs-...@googlegroups.com
Yes. Correct. Sorry, been busy. Will try to get you instructions by tomorrow morning. Apologies.


Jason
Sent from my iPhone

Alexandre Takacs

Sep 23, 2013, 3:28:13 AM9/23/13
to zfs-...@googlegroups.com
Thanks - any help (if possible!) would be appreciated!

Alexandre Takacs

Sep 24, 2013, 1:57:45 AM9/24/13
to zfs-...@googlegroups.com
Is there any *nix distro I could use to try to salvage something off my HD (fortunately nothing critical, but still annoying to lose)? FreeNAS would not read that disk...



Chris Ridd

Sep 24, 2013, 2:53:01 AM9/24/13
to zfs-...@googlegroups.com

On 24 Sep 2013, at 06:57, Alexandre Takacs <ata...@gmail.com> wrote:

> Is there any *nix distro I could use to try to salvage something off my HD (fortunately nothing critical, but still annoying to lose)? FreeNAS would not read that disk...

Try booting a current Solaris or Solarish distro, as they have native ZFS. Watch that they don't tempt you to upgrade the pool beyond what MacZFS can handle.

Useful Solarish distros include OpenIndiana, SmartOS, OmniOS, Tribblix, ...

They also have a ZFS debugging tool, zdb, which could help.
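A few read-only inspection commands along those lines, for anyone trying this (pool name `tank` is a placeholder; zdb's option coverage varies between ZFS releases, so check the local man page):

```shell
# List pools visible to the system without importing them:
zpool import

# Pool health and per-device error counters after import:
zpool status -v tank

# Traverse all blocks and report space/consistency statistics
# (read-only, but can take a long time on a large pool):
zdb -b tank

# Dump dataset and object information at increasing verbosity:
zdb -dd tank
```

zdb reads the pool directly rather than going through the filesystem layer, so it can sometimes report on a pool that panics the kernel when mounted.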

Chris

Alex Blewitt

Sep 24, 2013, 4:44:36 AM9/24/13
to zfs-...@googlegroups.com
There's also FreeBSD which has a reasonably up to date ZFS implementation which you might be able to use.

Alex

jasonbelec

Sep 24, 2013, 1:26:28 PM9/24/13
to zfs-...@googlegroups.com
OK, apologies. How many snapshots are present? How many before things went kaboom? Do you have any clones of any snapshots active? It sounds like no backups exist, so we can try to get you back to a clean and functioning pool. The kaboom on copying tells me the most, as I have had that problem twice with system updates and disks that are 4K: they worked OK until the system recognized them for what they are. Note: destroying the most recent snapshot(s) will nuke the info within them. You have been warned. You want to get back to a state that allows copying, so destroy the latest snapshot and then try copying; if it fails, destroy the next one and repeat.
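The destroy-and-retry loop above, sketched as commands. Pool and snapshot names here are hypothetical, and `zfs destroy` is irreversible, so double-check the snapshot name before running it:

```shell
# List snapshots, oldest first (newest at the bottom):
zfs list -t snapshot -o name,creation -s creation

# Destroy the most recent snapshot (irreversible!), then retry the copy:
sudo zfs destroy tank@2013-09-19
# ...if copying still panics, destroy the next-newest and repeat.
```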


Graham Perrin

Sep 24, 2013, 1:49:23 PM9/24/13
to zfs-...@googlegroups.com
On Sunday, 22 September 2013 21:49:23 UTC+1, Alexandre Takacs wrote:
 
… as soon as I start copying things I end up with a kernel panic …

 Please: is the panic in that situation the same as the six .panic files that were posted to the group on the 19th? 

>> … Preemption level underflow, possible cause unlocking an unlocked mutex or spinlock … 

Alexandre Takacs

Sep 26, 2013, 7:25:03 AM9/26/13
to zfs-...@googlegroups.com
I have just given a try to OpenIndiana but I don't see ZFS support (probably caused by my ignorance...)
Alexandre Takacs

Sep 26, 2013, 7:27:35 AM9/26/13
to zfs-...@googlegroups.com
No snapshots, no funky ZFS wizardry :) It is an internal disk in my MacBook, replacing the DVD-ROM - just thought ZFS would be "safer" for basic storage needs... Mostly a "scratch" drive, but as always I would be happy to be able to recover it.

Alexandre Takacs

Sep 26, 2013, 7:28:56 AM9/26/13
to zfs-...@googlegroups.com
Have to trigger one - watch this space :)

Jason Belec

Sep 26, 2013, 8:05:38 AM9/26/13
to zfs-...@googlegroups.com
It is. Your data is technically fine. However, you have done the exact same thing people without tools have done: nothing. Snapshots would at the very least help control how much data might be damaged, and sending those to a backup would allow you to fix things as necessary. You probably aren't running any scrubs, so you may not know of health issues with your data.

That said, check against the issues Graham asked about. I have more info now and can look at a few ideas for you to work through.


--
Jason Belec
Sent from my iPad

Alexandre Takacs

Sep 26, 2013, 4:34:45 PM9/26/13
to zfs-...@googlegroups.com

 Please: is the panic in that situation the same as the six .panic files that were posted to the group on the 19th? 

Yes and no, I'd say (see enclosed - remember this is a 10.6 machine - they seem to be in the same ballpark to my untrained eyes).
more_crash.zip

Alexandre Takacs

Sep 26, 2013, 4:36:19 PM9/26/13
to zfs-...@googlegroups.com
 You probably aren't running any scrubs, so you may not know of health issues for your data.

Not exactly sure I understand what you mean here - as mentioned earlier, I can mount the disk and run a full scrub that reports no errors...

Jason Belec

Sep 26, 2013, 4:40:47 PM9/26/13
to zfs-...@googlegroups.com
OK, that is good. Missed that. Trying a test on an old system with a similar error that I keep cloned. Hope to have something for you in a bit.


Jason
Sent from my iPhone


Chris Ridd

Sep 27, 2013, 1:04:34 PM9/27/13
to zfs-...@googlegroups.com

On 26 Sep 2013, at 12:25, Alexandre Takacs <ata...@gmail.com> wrote:

>
>
> I have just given a try to OpenIndiana but I don't see ZFS support (probably caused by my ignorance...):

Probably! OI definitely has ZFS support because that is what it uses to install.

The zfs and zpool commands are probably in /usr/sbin, but I don't have an OI system here to check. You'll need to have a look around the filesystem a bit, and you'll need to get used to using the command-line tools...

Chris

Gregg Wonderly

Sep 27, 2013, 2:35:23 PM9/27/13
to zfs-...@googlegroups.com, Chris Ridd
OI doesn't put ZFS tools in the path of normal users. You need to "su -" to
root to get the correct path and to be able to run the tools. This is just
absolutely the most frustrating and unbelievably "wrong" configuration I've seen
yet, for system tools. The "Status" of a "ZFS" pool is a vital part of what
people need to know about their system. Every single user of a system should be
able to run "zpool status" to see what's going on. I understand that this is
probably an "enterprise installation" issue, but ultimately, there are, no
longer, 100s of users logged onto "servers" at command lines who need to have
"limited" access to the system. Instead, there are systems deployed as VM
hosts, or web servers or app servers, and the only people logged on, are going
to be "the admins". This silliness in tooling causes people to have to be
logged in as root, and guess what that means? It means that there is more
opportunity to make mistakes with wild cards and other things which historically
have been at the root of system "damage". It also means that systems will be
left logged in "as root", or with root shell access in X-Windows environments.

Pretty silly stuff, for an enterprise system... This is probably one of the
many reasons why people ignore OI and use FreeBSD or Linux or something else for
ZFS, more and more these days... I know it's at the top of my list of silly
things that I just don't have time for. Number 2 is the fact that my keyboard
is never mapped correctly on OI/Solaris and thus I can't get anything done
without having to remap it. Why would a US keyboard mapping with a standard
101-key keyboard not be the default in this day and age....

Gregg

Jason Belec

Sep 27, 2013, 2:58:44 PM9/27/13
to zfs-...@googlegroups.com
Thanks Gregg, this explains so much of the frustrations people have. Never used OI, so news to me.

Jason
Sent from my iPhone

Alexandre Takacs

Sep 27, 2013, 3:56:23 PM9/27/13
to zfs-...@googlegroups.com
Being the ignorant SOB that I am, I decided to skip OI and elected to use FreeBSD. I immediately managed to import my disk. Ran a scrub which again reported no problems (but executed in about 1/4 of the time it would take under OSX/MacZFS, using a VM on the very same hardware while doing other tasks in MacOS...). I then scp-ed all my files from the "sick" ZFS volume to a brand new Mac disk - no problem whatsoever!

So it seems that there is something fundamentally broken with MacZFS, as trying to copy files from this disk would create a kernel panic both in 10.6 and 10.8, yet not trigger any error in scrubbing... I would have volunteered to ship you the disk for further testing, but it unfortunately contains some confidential third-party documents that I cannot part with. That being said, I can put it in storage if you want me to test it with future betas (and yes, I also tested with the latest OpenZFS build, with the same results).

Jason Belec

Sep 27, 2013, 4:15:41 PM9/27/13
to zfs-...@googlegroups.com
Yay for you getting access. That is the overall goal.



--
Jason Belec
Sent from my iPad

Graham Perrin

Sep 27, 2013, 4:18:36 PM9/27/13
to zfs-...@googlegroups.com
On Friday, 27 September 2013 20:56:23 UTC+1, Alexandre Takacs wrote:
 
… FreeBSD … scrub which again reported no problem 

In rare cases (not limited to MacZFS) it's possible to have: 

* a pool that scrubs without error

* within that pool, corruption at a dataset level

– but I should not jump to that conclusion for your case; in one of Jason's posts <https://groups.google.com/d/msg/zfs-macos/Modx_ufHv9c/8knJSuf8xOYJ> we note: 

>> … Trying a test on an old system with similar error I keep cloned. …

Back to your FreeBSD: 
 
… I then scp-ed all my files from the "sick" ZFS volume to a brand new mac disk - no problem
 
That's great news :-) but from that use – with scp – I should not draw any conclusion. More on this in a separate post …

So it seem that there is something fundamentally broken with MacZFS 

Not necessarily …  

Graham Perrin

Sep 27, 2013, 4:36:51 PM9/27/13
to zfs-...@googlegroups.com
Below: extracts from the four .panic files most recently provided (thanks to Alexandre). The third is noticeably different but I'll not attempt to interpret the differences. 

In the third: 

> BSD process name corresponding to current thread: rsync

A thought … 

scp(1) Mac OS X Manual Page
– for Mac OS X 10.6, in particular: 

-E Preserves extended attributes, resource forks, and ACLs. Requires both ends to be running Mac OS X 10.4 or later.

OK, so I don't know about scp at the FreeBSD end (see earlier posts) but I wonder whether: 

* where rsync was associated with a panic, scp succeeded because its routine was less thorough. 

Jason and all: if that guess is unreasonable, please let me know – thanks. 
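One way to probe that guess is to compare a plain copy against a metadata-preserving one. Flags are per the Mac OS X 10.6 man pages (Apple's rsync of that era used -E for extended attributes, unlike stock rsync); the paths and host below are hypothetical:

```shell
# Plain copy, no extended attributes (the path that succeeded):
scp -r /Volumes/tank/docs user@backuphost:/backup/

# Metadata-preserving copies - closer to what rsync was doing
# when it panicked, since -E pulls in xattrs and resource forks:
scp -rE /Volumes/tank/docs user@backuphost:/backup/
rsync -aE /Volumes/tank/docs/ /Volumes/backup/docs/
```

If the -E variants panic where the plain one does not, that would point at the extended-attribute code path rather than plain file data.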

------------------------------------
Kernel_2013-09-22-223157_MPB15.panic
------------------------------------

panic(cpu 1 caller 0x226ec0): "thread_invoke: preemption_level -1, possible cause: unlocking an unlocked mutex or spinlock"@/SourceCache/xnu/xnu-1504.15.3/osfmk/kern/sched_prim.c:1471

BSD process name corresponding to current thread: SystemUIServer

------------------------------------
Kernel_2013-09-22-224418_MPB15.panic
------------------------------------

panic(cpu 0 caller 0x226ec0): "thread_invoke: preemption_level -1, possible cause: unlocking an unlocked mutex or spinlock"@/SourceCache/xnu/xnu-1504.15.3/osfmk/kern/sched_prim.c:1471

BSD process name corresponding to current thread: kextcache

------------------------------------
Kernel_2013-09-23-092643_MPB15.panic
------------------------------------

panic(cpu 1 caller 0x2abf6a): Kernel trap at 0x21d6cb3f, type 14=page fault, registers:
CR0: 0x80010033, CR2: 0x00000020, CR3: 0x00101000, CR4: 0x00000660
EAX: 0x00000000, EBX: 0x1b973350, ECX: 0x00000045, EDX: 0x00000000
CR2: 0x00000020, EBP: 0x1b973318, ESI: 0x02d9a900, EDI: 0x02d9a900
EFL: 0x00010202, EIP: 0x21d6cb3f, CS:  0x00000004, DS:  0x1b97000c
Error code: 0x00000000

Backtrace (CPU 1), Frame : Return Address (4 potential args on stack)
0x1b973148 : 0x21b837 (0x5dd7fc 0x1b97317c 0x223ce1 0x0) 
0x1b973198 : 0x2abf6a (0x59e3d0 0x21d6cb3f 0xe 0x59e59a) 
0x1b973278 : 0x2a1a78 (0x1b973298 0x0 0x1b9732ac 0x2205e2) 
0x1b973290 : 0x21d6cb3f (0xe 0x48 0x38e10070 0xc) 
0x1b973318 : 0x21d6b1a1 (0x2d9a900 0x0 0x0 0x0) 
0x1b973388 : 0x21d6f063 (0x2d9a900 0x1b973484 0x1b9734c4 0x0) 
0x1b973438 : 0x21d73978 (0x1b973484 0x1b9734c4 0x34695 0x0) 
0x1b973618 : 0x21d82ebc (0x227a1dc0 0x4258000 0x1b973648 0x21d95e06) 
0x1b973648 : 0x21d7c7e2 (0x227a1dc0 0x1 0xcec92 0x0) 
0x1b973688 : 0x2fa913 (0x1b9736a4 0x1 0x4862934 0x0) 
0x1b9736b8 : 0x2e08ac (0x427a43c 0x4862934 0x0 0x0) 
0x1b973718 : 0x2e0a0b (0x427a43c 0x1 0x1b973768 0x221faa) 
0x1b973768 : 0x2e372a (0x0 0x823024 0x1b973798 0x21d95e51) 
0x1b9737d8 : 0x21d83931 (0x0 0x34 0x1b9737fc 0x38e205ac) 
0x1b973848 : 0x21d83ca4 (0x38e205a8 0x0 0x1ec00 0x0) 
0x1b9738e8 : 0x21d7304d (0x1b973998 0x0 0x0 0x1b973998) 
0x1b973968 : 0x21d73419 (0x1b97399c 0x38e20680 0x1b9739f4 0x1b973998) 
0x1b9739b8 : 0x21d8034c (0x38e20680 0x1b9739f4 0x1b973df4 0x2ff05a) 
0x1b973a78 : 0x2fd320 (0x1b973a98 0x3 0x1b973ac8 0x58a38c) 
0x1b973ac8 : 0x2dac58 (0x4195f08 0x1b973df4 0x1b973f08 0x4862934) 
0x1b973b58 : 0x2dba8a (0x1b973ddc 0x100 0x1b973dfc 0x0) 
0x1b973c18 : 0x2eac15 (0x1b973ddc 0x0 0x0 0x0) 
0x1b973d88 : 0x2eb058 (0xbfffd1d0 0x0 0x0 0x0) 
0x1b973f48 : 0x2eb0f1 (0xbfffd1d0 0x0 0x0 0x0) 
0x1b973f78 : 0x4f7f90 (0x3b8c000 0x4862830 0x4862874 0x0) 
0x1b973fc8 : 0x2a1fd8 (0x3a21b08 0x0 0x4 0x3a21b08) 
No mapping exists for frame pointer
Backtrace terminated-invalid frame pointer 0xbfffd0f8
      Kernel Extensions in backtrace (with dependencies):
         org.maczfs.zfs.fs(74.3.0)@0x21d2e000->0x21da7fff

BSD process name corresponding to current thread: rsync

Mac OS version:
10K549

Kernel version:
Darwin Kernel Version 10.8.0: Tue Jun  7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386
System model name: MacBookPro1,1 (Mac-F425BEC8)

System uptime in nanoseconds: 2221847353324
unloaded kexts:
com.apple.driver.AppleFileSystemDriver 2.0 (addr 0x21632000, size 0x12288) - last unloaded 110788807934
loaded kexts:
com.logmein.driver.LogMeInSoundDriver 1.0.2
org.maczfs.zfs.fs 74.3.0

------------------------------------
Kernel_2013-09-23-192909_MPB15.panic
------------------------------------

panic(cpu 0 caller 0x226ec0): "thread_invoke: preemption_level -1, possible cause: unlocking an unlocked mutex or spinlock"@/SourceCache/xnu/xnu-1504.15.3/osfmk/kern/sched_prim.c:1471

BSD process name corresponding to current thread: mdworker

Jason Belec

Sep 27, 2013, 4:46:15 PM9/27/13
to zfs-...@googlegroups.com
Hi Graham, not sure but that is a very good point to observe further. I'll look at some of the 'problems' I've kept quarantined and see if I can achieve similar results.



--
Jason Belec
Sent from my iPad

Michael Newbery

Sep 27, 2013, 7:15:24 PM9/27/13
to zfs-...@googlegroups.com, Chris Ridd

On 28/09/2013, at 6:35 AM, Gregg Wonderly <greg...@gmail.com> wrote:

> OI doesn't put ZFS tools in the path of normal users. You need to "su -" to root to get the correct path and to be able to run the tools. This is just absolutely the most frustrating and unbelievably "wrong" configuration I've seen yet, for system tools.

zpool status works for me on OI, but then I wound up a test instance where I had admin rights. I'd have thought that Alexandre would have been running with the correct privs.

Also, rather than su or sudo, the Solaris syntax is pfexec, as in pfexec zpool <mumble>.

pfexec is like sudo, but somewhat more nuanced (and anyway, what Solaris uses).


NOTE: for me, as a user (not root), with admin rights, zpool on its own works just fine for me.
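For anyone following along on OpenIndiana, the commands look roughly like this (assuming your user has been granted an appropriate RBAC profile such as "ZFS Storage Management"; `tank` is a placeholder pool name):

```shell
# The full path helps when /usr/sbin is not in the unprivileged PATH:
/usr/sbin/zpool status

# Run individual privileged commands via RBAC instead of su/sudo:
pfexec zpool import -f tank
pfexec zfs list
```

pfexec only elevates the one command, so there is no need to stay logged in as root, which addresses Gregg's objection.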

Jason Belec

Sep 27, 2013, 7:31:07 PM9/27/13
to zfs-...@googlegroups.com
Useful info Michael.


--
Jason Belec
Sent from my iPad
