Well, it works. Now what…?


Elliott Wall

Nov 6, 2012, 9:34:16 PM
to zfs-...@googlegroups.com
THANK YOU everybody, I got it working in, I think, as pain-free a way as possible!

The question I have now is, where do I go from here to gracefully migrate as much of my user space as possible to my ZFS partition, e.g., do I make a new Pictures folder on ZFS (with correct permissions), then launch iPhoto and change the default directory to that folder? Surely it isn't as easy as hitting CMD Shift H and dragging everything to ZFS…? I assume with my limited expertise that this would cause permissions problems and who knows what else… how does the system know that's where the User space now is?

As I migrate, I hope I can delete the HFS+ versions of folders, then resize ZFS accordingly until the only thing left on HFS+ is the system?

Finally, the ZFS disk icon on the desktop is *lovely*, whoever put that together, but is there a way to make it invisible on the desktop just like the boot volume normally is (with my Finder set that way), so that keystrokes such as CMD Shift H still point to the current user's home folder?

And to whom do I give $? I'm excited, and a bunch of you are apparently doing a damn fine job~ : D

—Elliott

MacBook Pro unibody mid-2009, 8GB
500GB 7200rpm HD with a little test ZFS partition

Jason

Nov 6, 2012, 9:46:01 PM
to zfs-...@googlegroups.com
Well, it depends how you wish to work. You could move your entire User space to ZFS. Don't move Mail if you use the search feature, since Spotlight currently remains toast on ZFS. Many people do as you're suggesting and link certain folders. If you do move your User space like many of us have, make sure you have an extra User not on ZFS in case of boot issues.
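
If you do go the whole-home-folder route, the rough shape of it is something like this (just a sketch, not exact commands; the pool name 'tank', the user 'elliott', and the /Volumes/tank mountpoint are placeholders, and you'd want to test with a throwaway account first):

    # create a dataset for the home folder on the pool
    sudo zfs create tank/Users
    sudo zfs create tank/Users/elliott

    # copy the existing home across, preserving ownership and permissions
    # (Apple's bundled rsync may also want -E for resource forks/xattrs)
    sudo rsync -a /Users/elliott/ /Volumes/tank/Users/elliott/
    sudo chown -R elliott:staff /Volumes/tank/Users/elliott

    # point the account at the new location (the Advanced Options pane in
    # System Preferences > Users & Groups does the same thing)
    sudo dscl . -change /Users/elliott NFSHomeDirectory /Users/elliott /Volumes/tank/Users/elliott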

Let us know more about what you expect and it'll help us provide you with better commands.

Jason
Sent from my iPad

Raoul Callaghan

Nov 6, 2012, 9:58:07 PM
to zfs-...@googlegroups.com
You can thank my colleague Michael for that icon.

Someone earlier on dubbed it the "snow flake", when in fact we designed it loosely along the lines of a Mandelbrot set, i.e. the analogy that if you zoom in on a ZFS pool, you just keep seeing drive after drive after drive...

The simplicity of adding more disks to a pool seems like yesterday's news now, but back then, being able to use cheap hardware and just add drives as you needed them was a godsend for us... ;)

Michael designed it using an old SanDisk 32GB SSD image we got via Google and whacked it together in LightWave a few years back...


glad you're enjoying ZFS...


Elliott Wall

Nov 6, 2012, 9:58:49 PM
to zfs-...@googlegroups.com
Thanks for the fast response!

I hardly ever use search in the Finder, but I do all the time in iPhoto, iTunes and Mail, so those have their own indexes then. Since Mail is IMAP it seems like I can always just hope that Google is using ZFS, right? : ) So yeah, not having Mail on there is OK since it should be backed up on the mail server.

Even though I only have a single drive with an intermittently connected backup drive (FW drive on its way) I just want to migrate to ZFS as much of my valuable music and pictures and anything else as I can. My total HFS+ footprint with system and all users is about 255GB, so it's pretty austere fortunately.

So I should make another little admin account with nothing in it just in case? OK, excellent

I just want to be super careful about permissions since I don't really understand them yet. When upgrading to Lion ages ago manually (without migration assistant) I did something very bad somehow, maybe just dragging the Music folder from one volume to another… I still don't know… and it created a nightmare.

So, do I make a new admin now, come back to my main user and drag my home folder onto the ZFS pool… just want to be 1000% clear….

(Not that I will hold you responsible, of course…!)

Jason

Nov 6, 2012, 10:05:31 PM
to zfs-...@googlegroups.com
I'll try to get you a set of instructions and notes tomorrow. Wifey time now.....


Jason
Sent from my iPad

Elliott Wall

Nov 6, 2012, 10:05:35 PM
to zfs-...@googlegroups.com
It looks great. I hate those generic white dmg things… hah

Apparently some people yank their optical drives and put HDs/SSDs in there, so that adding-disks-to-the-pool thing will really seem magical if/when I do that (which I probably should, since there's no way I'm buying another top-of-the-line Apple laptop anytime soon; way too expensive. I'll have to milk this one for a few more years).

It is really a pity that Apple can't incorporate ZFS somehow.

What is stopping us from putting together a better installation script or even a UI? I swear I did all this in essentially 5 mins with about 4 commands and 2 mins of Disk Utility, rebooted, and it automounts and everything. Very impressive.
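
(For the record, and from memory, it was roughly this; the pool name 'tank' and the disk slice are placeholders for whatever your layout gives you:

    diskutil list                           # find the slice of the new partition, e.g. disk0s3
    sudo zpool create -f tank /dev/disk0s3  # create a single-device pool on it
    zpool status tank                       # check it's ONLINE
    zfs list                                # it automounts under /Volumes

plus a couple of minutes in Disk Utility beforehand to carve out the partition.)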

Elliott Wall

Nov 6, 2012, 10:09:44 PM
to zfs-...@googlegroups.com
That would be amazing, thank you

I'm hoping to document all this properly and then put something on my weblog, so there will be another redundant explanation out there people can turn to and your efforts will not be wasted.

cheers

Jason

Nov 6, 2012, 10:22:32 PM
to zfs-...@googlegroups.com
Every one of my systems has at least 2 drives internally and a large pool accessible externally. ;)


Jason
Sent from my iPad

Gregg Wonderly

Nov 6, 2012, 10:27:34 PM
to zfs-...@googlegroups.com
There is a lot of benefit to using something like the Mac Mini as your engine with external drives. Jason has been reporting good stuff about Thunderbolt-connected arrays. The primary issue is keeping your drives cool. You might use them lightly, but a scrub can generate enough heat to cause problems when there is not enough cooling.

Gregg Wonderly


Jason

Nov 6, 2012, 10:37:44 PM
to zfs-...@googlegroups.com
Yeah, and crappy SATA/eSATA cables that crack at the connectors cause great read/write issues!!! They just don't make stuff with a passion for quality anymore. Good thing ZFS doesn't care and allows a quick fix and repair. Posting this as I found a bad cable today, after 2 drives gave me issues right after being replaced in the same spot on the array. It looked great until inspected closely; a little pressure and taadaa, crap! As for Thunderbolt setups, so far the 3 different setups have been flawless and fast; they are also backwards compatible with older systems by changing from Thunderbolt to eSATA. ;) Nighty night.


Jason
Sent from my iPad

Elliott Wall

Nov 6, 2012, 10:48:10 PM
to zfs-...@googlegroups.com
OK… so I copied my entire Pictures folder by just dragging it over and launching iPhoto from the iPhoto Library icon (in the Pictures folder), and everything seems to work…!

How do I resize the ZFS volume/pool so I can throw more stuff in there? I think I read someplace that this wasn't possible by grabbing and resizing the partition borders in Disk Utility with the mouse. I assume I DO reduce the main HFS+ partition in size using that method, though?

Raoul Callaghan

Nov 6, 2012, 11:41:24 PM
to zfs-...@googlegroups.com
Yeh don't do that in Disk Utility, (the resize thingy)

Probably best to keep away from DU from now on.

The only way to really resize is to repeat how you created the partition in the first place, which would require an external (preferably ZFS) FW drive to dump everything onto...

then you could blow away your original ZFS partition, reduce the size of your boot HFS partition, and then finally recreate a larger ZFS partition.
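
In command terms that works out to something like the following (a sketch only; 'tank' is the internal pool, 'backup' the external FW pool, the disk identifiers are examples, and older MacZFS builds may not have every send/receive option):

    # 1. snapshot the data and copy it out to the external pool
    sudo zfs snapshot tank/data@migrate
    sudo zfs send tank/data@migrate | sudo zfs receive backup/data

    # 2. destroy the internal pool
    sudo zpool destroy tank

    # 3. shrink the HFS+ boot volume and lay out a bigger slice for ZFS
    #    (diskutil resizeVolume / Disk Utility; the exact incantation depends on your layout)

    # 4. recreate the pool on the new, larger slice and copy the data back
    sudo zpool create -f tank /dev/disk0s3
    sudo zfs send backup/data@migrate | sudo zfs receive tank/data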

Just so you know:

I get absolutely woeful performance with a single physical disk being shared with HFS and ZFS.
When a process is reading/writing with the HFS partition and another process is reading/writing with the ZFS partition on the same disk...
test with a file a few gig in size...

So much so, that I'm about to reverse what I've done and do it properly and get another internal drive into the (old style) mini.

I can't remember where I read about it now, but someone might chip in.

This is a great read for anyone not sure about whether to jump in with ZFS.

It really does show how Apple knows that HFS should've been shelved a decade ago.

I think what's going on with Core Storage atm is Apple setting its OS up for some major filesystem enhancements (a.k.a. an upgrade).

Whether they release HFS2 or go with something else, who knows, but after reading that URL above, you too will wish Apple had persevered with ZFS when Oracle bought Sun.

.

On 07/11/2012, at 2:48 PM, Elliott Wall <elliott....@gmail.com> wrote:

OK… so I copied my entire Pictures folder by just dragging it over and launching iphoto from the iPhoto library icon (in the Pictures folder), everything seems to work…!

How do I resize the ZFS volume/pool so I can throw more stuff in there? I think I read someplace that this wasn't possible through grabbing and resizing the partition borders in Disk Utility  with the mouse. The main HFS+ part I DO reduce in size using that method though I assume.


Elliott Wall

Nov 7, 2012, 12:10:33 AM
to zfs-...@googlegroups.com
Oh no

So single drive is pointless, the computer will run really badly?

What about "nice-ing" the ZFS activity or some such thing

Frank Cusack

Nov 7, 2012, 12:29:45 AM
to zfs-...@googlegroups.com
On Tue, Nov 6, 2012 at 8:41 PM, Raoul Callaghan <tan...@mac.com> wrote:
I get absolutely woeful performance with a single physical disk being shared with HFS and ZFS.
When a process is reading/writing with the HFS partition and another process is reading/writing with the ZFS partition on the same disk...
test with a file a few gig in size...

Just to clarify, that's likely unrelated to zfs.  Do that with 2 HFS partitions and tell us what results you get.  (zfs can consume more memory than HFS, causing systemic problems if the machine has low RAM, but that's likely not what's going on here.)

In the real world, you are unlikely to experience this kind of problem that you wouldn't experience anyway with one disk. That is, if you have 2 heavy disk I/O processes hitting the same disk, you're going to get crap performance, period.
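
An easy way to see it for yourself (the paths are placeholders; any two volumes on the same spindle will do):

    # hammer both partitions at once and watch the combined throughput collapse
    dd if=/dev/zero of=/Volumes/HFS_A/test.bin bs=1m count=4096 &
    dd if=/dev/zero of=/Volumes/HFS_B/test.bin bs=1m count=4096 &
    iostat -w 1 disk0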

Elliott Wall

Nov 7, 2012, 1:40:45 AM
to zfs-...@googlegroups.com
OK, I have a brilliant idea: what about putting my system on an Express card solid state thing like this: http://www.youtube.com/watch?v=5irgXU_5XUE
Then make my main hard disk ZFS?!

Daniel Becker

Nov 7, 2012, 2:09:12 AM
to zfs-...@googlegroups.com
With a single disk, you're almost guaranteed to get worse performance with ZFS than with straight up HFS+, if only because copy-on-write inherently means that anything that gets modified frequently is subject to severe fragmentation. Obviously all the checksumming etc. also doesn't come for free, so if you have a puny machine (or not a lot of memory), that might also slow things down.


On Nov 6, 2012, at 10:40 PM, Elliott Wall <elliott....@gmail.com> wrote:

OK, I have a brilliant idea: what about putting my system on an Express card solid state thing like this: http://www.youtube.com/watch?v=5irgXU_5XUE
Then make my main hard disk ZFS?!


Elliott Wall

Nov 7, 2012, 1:02:21 PM
to zfs-...@googlegroups.com
So I'm looking into keeping the system on the ExpressCard volume, all users on the ZFS-formatted hard drive, then keeping a partition for each on my FW backup drive. If the ZFS partition on the FW drive is synced/pooled with my built-in drive once a week or so, wouldn't that clean up the built-in ZFS pool?

Elliott Wall

Nov 8, 2012, 12:39:03 PM
to zfs-...@googlegroups.com
BAH, it sounds like ExpressCard booting and speed aren't so great, and I don't want to throw $ around just to play with it. When my FW drive arrives I'll then have two separate TM backups… I'll wipe my main hard disk and install the system on a, say, 30GB HFS+ partition, then make the rest of the disk ZFS. It's all an experiment I guess~ I'll share my findings… : ) Thanks for the input everybody

pub

Nov 9, 2012, 1:20:06 PM
to zfs-...@googlegroups.com
In case any of the devs are interested:

I get these a lot on different hardware. This one is from a one-disk laptop, shared with HFS+ boot partition.


panic(cpu 7 caller 0xffffff7f8079fedf): "mutex_enter: locking against myself!"@/Users/alex/Projects/MacZFS/usr/src/maczfs/kernel/zfs_context.c:448
Backtrace (CPU 7), Frame : Return Address
0xffffff812ca73890 : 0xffffff8000220792 
0xffffff812ca73910 : 0xffffff7f8079fedf 
0xffffff812ca73930 : 0xffffff7f80788cdd 
0xffffff812ca73970 : 0xffffff800031979c 
0xffffff812ca739a0 : 0xffffff80002ffead 
0xffffff812ca739f0 : 0xffffff80002ffa2e 
0xffffff812ca73a30 : 0xffffff800030006e 
0xffffff812ca73a60 : 0xffffff80003000cc 
0xffffff812ca73a80 : 0xffffff7f8078ef12 
0xffffff812ca73b50 : 0xffffff7f807860c6 
0xffffff812ca73cb0 : 0xffffff7f8078625f 
0xffffff812ca73cf0 : 0xffffff80002f60ea 
0xffffff812ca73d90 : 0xffffff80002e65fe 
0xffffff812ca73f60 : 0xffffff80005cde98 
0xffffff812ca73fb0 : 0xffffff80002daa79 
      Kernel Extensions in backtrace:
         com.bandlem.mac.zfs.fs(74.1)[27C35C55-D996-EDBE-8EB5-FD03743DFFAF]@0xffffff7f8074b000->0xffffff7f807b3fff

BSD process name corresponding to current thread: ClamXavSentry

Mac OS version:
11G63

Kernel version:
Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64
Kernel UUID: FF3BB088-60A4-349C-92EA-CA649C698CE5
System model name: MacBookPro8,2 (Mac-94245A3940C91C80)

System uptime in nanoseconds: 61538008783879
last loaded kext at 363225159387: com.apple.filesystems.smbfs 1.7.2 (addr 0xffffff7f8080f000, size 241664)
last unloaded kext at 186338713298: com.apple.driver.AppleUSBUHCI 5.1.0 (addr 0xffffff7f80af5000, size 65536)
loaded kexts:
org.virtualbox.kext.VBoxNetAdp 4.1.18
org.virtualbox.kext.VBoxNetFlt 4.1.18
org.virtualbox.kext.VBoxUSB 4.1.18
org.virtualbox.kext.VBoxDrv 4.1.18
com.parallels.kext.prl_vnic 7.0 15098.770637
com.parallels.kext.prl_netbridge 7.0 15098.770637
com.parallels.kext.prl_hid_hook 7.0 15098.770637
com.parallels.kext.prl_hypervisor 7.0 15098.770637
com.parallels.kext.prl_usb_connect 7.0 15098.770637
com.bandlem.mac.zfs.fs 74.1.0
com.apple.filesystems.smbfs 1.7.2
com.apple.driver.AppleHWSensor 1.9.5d0
com.apple.filesystems.autofs 3.0
com.apple.driver.AudioAUUC 1.59
com.apple.driver.AGPM 100.12.75
com.apple.driver.AppleMikeyHIDDriver 122
com.apple.driver.AppleHDA 2.2.5a5
com.apple.driver.AppleMikeyDriver 2.2.5a5
com.apple.driver.AppleUpstreamUserClient 3.5.9
com.apple.kext.ATIFramebuffer 7.3.2
com.apple.driver.AppleIntelHD3000Graphics 7.3.2
com.apple.driver.SMCMotionSensor 3.0.2d6
com.apple.iokit.IOUserEthernet 1.0.0d1
com.apple.driver.AppleSMCLMU 2.0.1d2
com.apple.iokit.IOBluetoothSerialManager 4.0.8f17
com.apple.Dont_Steal_Mac_OS_X 7.0.0
com.apple.driver.AudioIPCDriver 1.2.3
com.apple.driver.ApplePolicyControl 3.1.33
com.apple.driver.ACPI_SMC_PlatformPlugin 5.0.0d8
com.apple.driver.AppleMuxControl 3.1.33
com.apple.driver.AppleLPC 1.6.0
com.apple.driver.AppleMCCSControl 1.0.33
com.apple.ATIRadeonX3000 7.3.2
com.apple.driver.AppleSMCPDRC 5.0.0d8
com.apple.driver.BroadcomUSBBluetoothHCIController 4.0.8f17
com.apple.driver.AppleUSBTCButtons 227.6
com.apple.driver.AppleIRController 312
com.apple.driver.AppleUSBTCKeyboard 227.6
com.apple.AppleFSCompression.AppleFSCompressionTypeDataless 1.0.0d1
com.apple.AppleFSCompression.AppleFSCompressionTypeZlib 1.0.0d1
com.apple.BootCache 33
com.apple.iokit.SCSITaskUserClient 3.2.1
com.apple.driver.XsanFilter 404
com.apple.iokit.IOAHCISerialATAPI 2.0.3
com.apple.iokit.IOAHCIBlockStorage 2.1.0
com.apple.driver.AppleUSBHub 5.1.0
com.apple.driver.AppleFWOHCI 4.9.0
com.apple.driver.AirPort.Brcm4331 561.7.22
com.apple.driver.AppleSDXC 1.2.2
com.apple.iokit.AppleBCM5701Ethernet 3.2.4b8
com.apple.driver.AppleEFINVRAM 1.6.1
com.apple.driver.AppleSmartBatteryManager 161.0.0
com.apple.driver.AppleAHCIPort 2.3.1
com.apple.driver.AppleUSBEHCI 5.1.0
com.apple.driver.AppleACPIButtons 1.5
com.apple.driver.AppleRTC 1.5
com.apple.driver.AppleHPET 1.7
com.apple.driver.AppleSMBIOS 1.9
com.apple.driver.AppleACPIEC 1.5
com.apple.driver.AppleAPIC 1.6
com.apple.driver.AppleIntelCPUPowerManagementClient 195.0.0
com.apple.nke.applicationfirewall 3.2.30
com.apple.security.quarantine 1.4
com.apple.security.TMSafetyNet 8
com.apple.driver.AppleIntelCPUPowerManagement 195.0.0
com.apple.kext.triggers 1.0
com.apple.driver.DspFuncLib 2.2.5a5
com.apple.iokit.IOSurface 80.0.2
com.apple.iokit.IOFireWireIP 2.2.5
com.apple.iokit.IOSerialFamily 10.0.5
com.apple.driver.AppleHDAController 2.2.5a5
com.apple.iokit.IOHDAFamily 2.2.5a5
com.apple.iokit.IOAudioFamily 1.8.6fc18
com.apple.kext.OSvKernDSPLib 1.3
com.apple.driver.AppleSMC 3.1.3d10
com.apple.driver.IOPlatformPluginLegacy 5.0.0d8
com.apple.driver.AppleSMBusPCI 1.0.10d0
com.apple.driver.AppleGraphicsControl 3.1.33
com.apple.driver.AppleBacklightExpert 1.0.4
com.apple.driver.AppleSMBusController 1.0.10d0
com.apple.iokit.IONDRVSupport 2.3.4
com.apple.kext.ATI6000Controller 7.3.2
com.apple.kext.ATISupport 7.3.2
com.apple.driver.AppleIntelSNBGraphicsFB 7.3.2
com.apple.iokit.IOGraphicsFamily 2.3.4
com.apple.driver.IOPlatformPluginFamily 5.1.1d6
com.apple.driver.AppleUSBBluetoothHCIController 4.0.8f17
com.apple.iokit.IOBluetoothFamily 4.0.8f17
com.apple.driver.AppleThunderboltDPInAdapter 1.8.5
com.apple.driver.AppleThunderboltDPAdapterFamily 1.8.5
com.apple.driver.AppleThunderboltPCIDownAdapter 1.2.5
com.apple.driver.AppleUSBMultitouch 230.5
com.apple.iokit.IOUSBHIDDriver 5.0.0
com.apple.driver.AppleUSBMergeNub 5.1.0
com.apple.driver.AppleUSBComposite 5.0.0
com.apple.iokit.IOSCSIMultimediaCommandsDevice 3.2.1
com.apple.iokit.IOBDStorageFamily 1.7
com.apple.iokit.IODVDStorageFamily 1.7.1
com.apple.iokit.IOCDStorageFamily 1.7.1
com.apple.driver.AppleThunderboltNHI 1.6.0
com.apple.iokit.IOThunderboltFamily 2.0.3
com.apple.iokit.IOSCSIArchitectureModelFamily 3.2.1
com.apple.iokit.IOUSBUserClient 5.0.0
com.apple.iokit.IOFireWireFamily 4.4.8
com.apple.iokit.IO80211Family 420.3
com.apple.iokit.IOEthernetAVBController 1.0.1b1
com.apple.iokit.IONetworkingFamily 2.1
com.apple.iokit.IOAHCIFamily 2.0.8
com.apple.iokit.IOUSBFamily 5.1.0
com.apple.driver.AppleEFIRuntime 1.6.1
com.apple.iokit.IOHIDFamily 1.7.1
com.apple.iokit.IOSMBusFamily 1.1
com.apple.security.sandbox 177.8
com.apple.kext.AppleMatch 1.0.0d1
com.apple.driver.DiskImages 331.7
com.apple.iokit.IOStorageFamily 1.7.2
com.apple.driver.AppleKeyStore 28.18
com.apple.driver.AppleACPIPlatform 1.5
com.apple.iokit.IOPCIFamily 2.7
com.apple.iokit.IOACPIFamily 1.4
Model: MacBookPro8,2, BootROM MBP81.0047.B27, 4 processors, Intel Core i7, 2.2 GHz, 8 GB, SMC 1.69f3
Graphics: AMD Radeon HD 6750M, AMD Radeon HD 6750M, PCIe, 512 MB
Graphics: Intel HD Graphics 3000, Intel HD Graphics 3000, Built-In, 512 MB
Memory Module: BANK 0/DIMM0, 4 GB, DDR3, 1067 MHz, 0x857F, 0x483634353155363446373036364700000000
Memory Module: BANK 1/DIMM0, 4 GB, DDR3, 1067 MHz, 0x857F, 0x483634353155363446373036364700000000
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0xD6), Broadcom BCM43xx 1.0 (5.106.198.19.22)
Bluetooth: Version 4.0.8f17, 2 service, 18 devices, 1 incoming serial ports
Network Service: Wi-Fi, AirPort, en1
Serial ATA Device: ST9500325ASG, 500.11 GB
Serial ATA Device: HL-DT-ST DVDRW  GS31N
USB Device: hub_device, 0x0424  (SMSC), 0x2513, 0xfa100000 / 3
USB Device: Apple Internal Keyboard / Trackpad, apple_vendor_id, 0x0252, 0xfa120000 / 5
USB Device: BRCM2070 Hub, 0x0a5c  (Broadcom Corp.), 0x4500, 0xfa110000 / 4
USB Device: Bluetooth USB Host Controller, apple_vendor_id, 0x821a, 0xfa113000 / 8
USB Device: FaceTime HD Camera (Built-in), apple_vendor_id, 0x8509, 0xfa200000 / 2
USB Device: hub_device, 0x0424  (SMSC), 0x2513, 0xfd100000 / 2
USB Device: IR Receiver, apple_vendor_id, 0x8242, 0xfd110000 / 3




Chris Ridd

Nov 10, 2012, 4:23:27 AM
to zfs-...@googlegroups.com

On 9 Nov 2012, at 18:20, pub <p...@onlinestreams.com> wrote:

> In case any of the devs are interested:
>
> I get these a lot on different hardware. This one is from a one-disk laptop, shared with HFS+ boot partition.
>
>
> panic(cpu 7 caller 0xffffff7f8079fedf): "mutex_enter: locking against myself!"@/Users/alex/Projects/MacZFS/usr/src/maczfs/kernel/zfs_context.c:448
> Backtrace (CPU 7), Frame : Return Address
> 0xffffff812ca73890 : 0xffffff8000220792
> 0xffffff812ca73910 : 0xffffff7f8079fedf
> 0xffffff812ca73930 : 0xffffff7f80788cdd
> 0xffffff812ca73970 : 0xffffff800031979c
> 0xffffff812ca739a0 : 0xffffff80002ffead
> 0xffffff812ca739f0 : 0xffffff80002ffa2e
> 0xffffff812ca73a30 : 0xffffff800030006e
> 0xffffff812ca73a60 : 0xffffff80003000cc
> 0xffffff812ca73a80 : 0xffffff7f8078ef12
> 0xffffff812ca73b50 : 0xffffff7f807860c6
> 0xffffff812ca73cb0 : 0xffffff7f8078625f
> 0xffffff812ca73cf0 : 0xffffff80002f60ea
> 0xffffff812ca73d90 : 0xffffff80002e65fe
> 0xffffff812ca73f60 : 0xffffff80005cde98
> 0xffffff812ca73fb0 : 0xffffff80002daa79
> Kernel Extensions in backtrace:
> com.bandlem.mac.zfs.fs(74.1)[27C35C55-D996-EDBE-8EB5-FD03743DFFAF]@0xffffff7f8074b000->0xffffff7f807b3fff
>
> BSD process name corresponding to current thread: ClamXavSentry

As a temporary workaround I would expect the panics to go away when you disable ClamXavSentry - do they?

Chris

Fastmail Jason

Nov 10, 2012, 7:27:55 AM
to zfs-...@googlegroups.com
Yeah, I was wondering about that ClamXavSentry....


--
Jason Belec
Sent from my iPad

pub

Nov 10, 2012, 12:01:10 PM
to zfs-...@googlegroups.com
[meant to change the subject too]

I presume they will, but it will take months to find out; on this particular machine, it rarely panics, but when it does, I think it always comes from MacZFS with ClamXavSentry running.

I don't know why ClamXavSentry always seems to trip over this bug; I figured that it was either scanning a file with a bad checksum (which I am starting to think always causes a panic?), or it was exercising the driver more.

I'll see what happens when I run a scrub; it is time for one anyway.

>> BSD process name corresponding to current thread: ClamXavSentry
>
> As a temporary workaround I would expect the panics to go away when you disable ClamXavSentry - do they?
>
> Chris
>

Elliott Wall

Nov 10, 2012, 12:24:21 PM
to zfs-...@googlegroups.com
I don't have that many kexts, so if y'all think it's a problem with the Clam thing, then I guess all that someone installing ZFS on a single-drive system has to worry about is whether he'll get woeful performance and how Mail behaves, right? (Ah, and whether he has other conflicts.) Is there anything out there to quantify the single-drive install performance hit? I've been looking, but there just aren't that many single-drive folks talking about their experiences. I have a total of 50 apps (including Photoshop, which I think I've read doesn't like to be installed on non-HFS+) and a 500GB 7200rpm drive that's half full. The question being: is it even worth running the experiment to wipe the HD, partition it, say, 32/468GB, and install 10.8.2 and Applications on the HFS+ side? I can put up with, say, a 10% performance hit if my data is somewhat safer, as long as I'm not thrashing the drive in the process.

What about system caches and things… can all that happen normally? Where does the system put that stuff… in user space I hope (and imagine… it seems stupid and insecure anywhere else). Just want to understand how much storage headroom the System part needs; currently Applications, Library, opt and System are ~20GB.

Later today I'm going to try turning Spotlight off and seeing how Mail behaves (unless someone here can share their experiences).

The ExpressCard idea is sketchy, and doing the optical bay… well, I just don't have a lot of $ right now and I'm nervous about reports of the SSDs not lasting long. I still have some old-school prejudices about SSDs… sort of afraid of them… so I want to look into single drive first.

Finally, just so I understand ZFS pooling: I'll plug in my backup FW drive with its ZFS partition once a week or whenever, just like I would with Time Machine, then enter some kind of terminal command (or not), and the built-in and FW ZFS partitions will rejoin and dupe, etc.?

pub

Nov 10, 2012, 2:03:50 PM
to zfs-...@googlegroups.com
I don't think ClamXav has any kernel extensions itself. AFAIK, it is solely a user-mode application; it can't panic the kernel. But, in my case, probably around once a month it seems to exercise a code path in MacZFS that causes a panic.

I have a mini and a laptop with a shared JHFS+/MacZFS drive. I only have data (home folder for one, disk image for other) on the MacZFS filesystems. I have not noticed any performance impact in low-volume usage, except when accessing and/or modifying lots of small files rapidly. In that case, it is very slow, as if the latency skyrocketed. But it (and the occasional panic) is so far worth it to have snapshots.
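
The snapshots themselves are cheap to keep up, too; in my case it's nothing fancier than something like this, with 'tank/home' standing in for whatever the dataset is called:

    sudo zfs snapshot tank/home@2012-11-10
    zfs list -t snapshot                     # see what you have
    sudo zfs rollback tank/home@2012-11-03   # revert if something goes badly wrong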

Mail on the MacZFS fs works fine in my usage, except that I can't search for anything. But the performance is fine.

As for things to worry about, different people will give you different answers, but in my case I worry about data integrity, and so I keep a lot of backups (not just raidz or mirrors, but actual backups). Actually, I don't trust MacZFS with my critical data anymore, not since I lost a pool with all my data on it a few months ago (got it back from an old JHFS+ backup I was planning to erase). I can't say what went wrong - whether the problem was hardware or MacZFS, only that the pool was marked degraded, and I could not keep the machine on long enough to get much of anything off of it; I am pretty sure there is a hardware problem with one of the drives as I am still having trouble with it as a secondary JHFS+ backup (did I mention the paranoia about backups?). I tried scrubbing it from openindiana in a virtualbox, but even that kernel panicked (and at < 100K/second scrubbing speed, it would have taken months to finish). Hence, my advice would be to keep at least one backup for your critical data, and consider making it a backup on a different type of filesystem (eg: JHFS+, btrfs, ext4, ...). The other filesystem will likely have other failure scenarios where you can lose everything, but they are likely to be different failure scenarios than what you would get from MacZFS, and so hopefully you will be better protected between the two. That's the theory I am going on myself, at least.

I have since built a linux fileserver with ZFSonLinux, and now wish I had kept that original pool around longer before re-formatting so that I could explore the degraded pool in something that wouldn't crash so easily. Oh well, crossing fingers and hoping this one works better. :-)

Daniel Bethe

Nov 10, 2012, 3:38:52 PM
to zfs-...@googlegroups.com
Hi there.  I am sorry you're having trouble with your system, and just for the purpose of discussion, I wanted to make sure that a few things are clarified.  I didn't want any misunderstanding.  So if I understand you correctly...

You suspect a hardware failure...

> just raidz or mirrors, but actual backups). Actually, I don't trust MacZFS
> with my critical data anymore, not since I lost a pool with all my data on it a

...but you're also blaming MacZFS...

> secondary JHFS+ backup (did I mention the paranoia about backups?). I tried
> scrubbing it from openindiana in a virtualbox, but even that kernel panicked
> (and at < 100K/second scrubbing speed, it would have taken months to finish).


...after having taken MacZFS completely out of the situation, and replaced it with the latest upstream code, which your hardware can't keep up with and then crashes on...

> I have since built a linux fileserver with ZFSonLinux, and now wish I had kept
> that original pool around longer before re-formatting so that I could explore
> the degraded pool in something that wouldn't crash so easily.


...and you think that the beta ZFS from another OS would be stable.

Did I read this correctly so far?  I'm jumping in here so I'm sorry if I misunderstood.

Did you run a hardware QA on that system, such as http://memtest.org/ or http://www.memtestosx.org/ ?  And I guess you weren't able to move your drives to another system to test the zpool there.

Daniel Bethe

Nov 10, 2012, 7:59:45 PM
to zfs-...@googlegroups.com

> I presume they will, but it will take months to find out; on this particular
> machine, it rarely panics, but when it does, I think it always comes from MacZFS
> with ClamXavSentry running.
>
> I don't know why ClamXavSentry always seems to trip over this bug; I figured
> that it was either scanning a file with a bad checksum (which I am starting to
> think always causes a panic?), or it was exercising the driver more.


Again, I'm popping into this thread suddenly, and this time I'm just guessing in case anyone else knows more. Is it possible that there's a bug in MacZFS which is triggered when a single process walks through the entire filesystem? I read a few years ago that there was a bug that made a panic occur when a really large rsync process was run. An antivirus scanner might behave similarly. I don't know if there is still such a bug; just asking the kernel hackers.

> I'll see what happens when I run a scrub; it is time for one anyway.


That's a good idea, but remember that a scrub operates on low level blocks, whereas a userspace app such as ClamAV operates on the POSIX layer.  So it's a different code path.
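
For reference, kicking one off and keeping an eye on it is just the usual pair of commands ('tank' standing in for your pool name):

    sudo zpool scrub tank
    zpool status -v tank    # shows scrub progress and any checksum errors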

pub

Nov 10, 2012, 8:55:14 PM
to zfs-...@googlegroups.com
Well, I don't want to start a discussion on it; it is too late to troubleshoot it. I mentioned it only to point out the importance of backups, even when you have a raid.

But, just to clarify ...

It is not really clear what happened to it, and in my rush to re-establish a backup, I deleted the degraded pool rather than continuing to work on it.

I am 90% certain now that at least one of the hard disks is failing. Before deleting the pool, I tried to scrub it on another Mac, but only got a lot of panics. After giving up on the degraded pool (and establishing a new pool on different disks), I put the disks in a different case and formatted as a single CoreStorage JHFS+ pool (CS used to join the disks - no RAID). I copied several TB of data to it over the course of a week before it failed and all the data died. Hence, probably the machines and cases are fine, but the disk(s) are in the process of dying.

I have to correct one part: that OI virtualbox scrub I mentioned - actually, I think that was a different pool (#3 if you are counting); I don't think I tried a scrub on this one in OI, and got the two confused.

I don't blame MacZFS for losing the pool, only for panicking so much I could not do anything with it. The degraded pool was a raidz1, so in hindsight, I should have tried figuring out which drive was bad, and replacing it (didn't know it was a bad drive at the time, though). Perhaps it would not have panicked while doing a rebuild? In any case, that is largely why I don't trust MacZFS - the kernel panics every few weeks. Even if it does not lose data on disk, if the computer can't stay on long enough to fix the pool or copy the data to another pool, then it is effectively lost (unless you have a PC that can recover it).

In any case, I still have the hardware that I still need to troubleshoot; I will try a new pool on it to see if I can re-establish the failure scenario and get better answers.

> ...and you think that the beta ZFS from another OS would be stable.

Yes. Rather, I'll let you know in a year. :-)

I think it will be more stable, but it is not a fair test. My understanding is that MacZFS is a 6-year-old alpha code release from Apple that has only been patched since: few engineers, no major development efforts, small user base, and very little active development, right? ZoL, OTOH, has a larger user base, more engineers, is at least in beta, is current (v28), and is under active development, if I am not mistaken. Chances are ZoL will be more stable given those conditions. If it is not, well, then I guess I will try OI (I chose Linux to get LUKS, the encryption layer).

However, in my case it is not a fair test because the hardware setup is very different. The only options I had on the mini were USB-based cases (and yes, I did look into switching to FW instead of a PC). On the PC, all the disks are plugged into the motherboard or a PCI card over SATA.


Regardless, I have had a lot of close calls over the last 20ish years with lots of different machines, cases, and technologies; so I have learned to keep lots of backups, and now backups on different filesystem types. And that is ultimately the point I was making.

pub

Nov 10, 2012, 9:00:16 PM
to zfs-...@googlegroups.com
Scrub came back clean.

ClamXavSentry should only have been scanning ~/Library and ~/Downloads (/Users mounted on MacZFS). I don't know what technology it uses to listen for new and changed files, though.

Thanks guys

Daniel Bethe

Nov 10, 2012, 9:37:33 PM
to zfs-...@googlegroups.com

> I don't blame MacZFS for losing the pool, only for panicking so much I could
> not do anything with it. The degraded pool was a raidz1, so in hindsight, I


Yeah that can be inconvenient, having a panic in order to save the data.

> - the kernel panics every few weeks. Even if it does not lose data on disk, if
> the computer can't stay on long enough to fix the pool or copy the data to
> another pool, then it is effectively lost (unless you have a PC that can recover
> it).


Well, you just get another OS for a temporary scrub, unless your hardware is broken as yours seems to have been.

> that MacZFS is a 6 year old alpha code release from Apple that has only been
> patched since - few engineers, no major development efforts, small user base,


What MacZFS is based on *was* an alpha from Apple, long ago, which was then subsequently stabilized and enhanced.

Anyway, in case it helps with your future endeavors, or in case you want to submit more information, there are wiki documents about dealing with kernel panics.  Mainly this:

http://code.google.com/p/maczfs/wiki/Troubleshooting

Graham Perrin

Nov 11, 2012, 2:22:36 PM
to zfs-...@googlegroups.com
On Sunday, 11 November 2012 02:00:21 UTC, ben wrote:
 
… ClamXavSentry … I don't know what technology it uses to listen for new and changed files … 

ClamXav Sentry uses gfslogger, which was functionally an exact clone of fslogger by Amit Singh. 

fslogger was from the Tiger and Spotlight era. An answer on Stack Overflow observes that the file system events API was then not public. 

If problems involving gfslogger are suspected, it may be worth contacting the developer of ClamXav. 

References
==========

A File System Change Logger

How can I receive notifications of filesystem changes in OS X?

File System Events Programming Guide: Introduction

gfslogger: Subscribes to Mac OS X 10.4 fsevents and Displays File System Changes.
– in the Internet Archive Wayback Machine

Fink - Package Database - Package gfslogger (Displays all filesystem changes)
– in the Internet Archive Wayback Machine

Re: [clamav-users] Improving Scan Speeds on OS X.4.11
2012-11-11 18-57-41 screenshot.png

Graham Perrin

Nov 11, 2012, 2:31:11 PM
to zfs-...@googlegroups.com
On Friday, 9 November 2012 18:20:11 UTC, ben wrote:



BSD process name corresponding to current thread: ClamXavSentry

…  

The 2012-11-08 release of ClamXav 2.3.3 included (amongst other things) "Fix for numerous Sentry crashes" … crashes, not kernel panics but still, maybe worth considering. 


Please, which version was used at the time of the 11G63 panic that's shown in your 2012-11-09 post?

Do you still have the .panic file and if so, what date and time is logged within the file? (The time logged may be not a true match for the time of the panic, but hopefully close enough in this case.)

Graham Perrin

Nov 11, 2012, 2:57:37 PM
to zfs-...@googlegroups.com
On Wednesday, 7 November 2012 04:41:01 UTC, tan...@mac.com wrote:

… performance with a single physical disk being shared with HFS and ZFS. … 

Yeah, observations/questions arose a few times in the ZEVO area so I made a point of reference: 

latency: avoid mixing HFS Plus and ZFS on a single hard disk

Daniel Becker

Nov 11, 2012, 3:36:53 PM
to zfs-...@googlegroups.com
Note that this refers to performance degradation on the HFS+ side, not on the ZFS side.

Fastmail Jason

Nov 11, 2012, 4:07:11 PM
to zfs-...@googlegroups.com
Well, I've been running a particular system with HFS+ and ZFS since this group started without issue, at least without data loss. The drive will die, and did die 3 times, which is to be expected; spinners don't last. On death I just get a new drive, put the clone of the OS back, and send the latest snapshot of the ZFS pool back over, and all things are flying again. Most of my other systems are now SSD for the OS and a spinner for ZFS, and on some the ZFS is 5 or more drives in various configurations. However, running both HFS+ and ZFS on the same drive works just fine, not the fastest, and not for longevity, but no data loss issues here at least (assuming you're making backups, of course).
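
The restore side really is just a send/receive of the last good snapshot, roughly like this (pool, dataset, and snapshot names are placeholders; routine incremental backups with -i work the same way in the other direction):

    # find the most recent snapshot on the backup pool
    zfs list -t snapshot

    # push it onto the freshly created pool on the new drive
    sudo zfs send backup/data@2012-11-10 | sudo zfs receive tank/data

    # later weekly backups only need to carry the differences
    sudo zfs snapshot tank/data@2012-11-17
    sudo zfs send -i tank/data@2012-11-10 tank/data@2012-11-17 | sudo zfs receive backup/data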


--
Jason Belec
Sent from my iPad

Elliott Wall

Nov 11, 2012, 5:54:18 PM
to zfs-...@googlegroups.com
("error posting reply", arrgg)

My spinner HD has lasted from the time I bought the computer in 2009 and I'd like to keep it that way… from all I've read it really sounds like a single-drive scenario won't work very well; maybe this should be stated more explicitly in the wiki, perhaps with something like a firm "minimum specs" statement.

If I want to push on with this and stick with MacOS (tempting just to switch over 100% to FreeBSD or something), is this what I need: http://eshop.macsales.com/item/Other%20World%20Computing/DDMBSSD030/
My system and Applications are only 21GB; is this enough headroom?

Jason

Nov 11, 2012, 8:06:30 PM
to zfs-...@googlegroups.com
Those work, I have 10 systems set up with that and other options to run 2 drives in a system. 

Jason
Sent from my iPhone

Elliott Wall

Nov 11, 2012, 8:07:46 PM
to zfs-...@googlegroups.com
That's excellent, thank you!

pub

Nov 12, 2012, 10:31:04 PM
to zfs-...@googlegroups.com
ClamXav version: 2.3.1 (build 267)  - GUI
ClamXavSentry version: 2.7.1 (build 267)  - background process


Kernel panic file:

Kernel_2012-11-09-111002_mac.panic

Graham Perrin

Nov 13, 2012, 2:35:07 AM
to zfs-...@googlegroups.com
OK - 2.3.1 was outdated around three months before the panic. 

If panics of this type are reproducible with current versions of software, and if you suspect an issue with ClamXav Sentry, please consider posting to the ClamXav support forum (or to a separate topic here; we're considerably off from the opening post) – thanks. 

According to phpBB for ClamXav the most recent topic with panic in its title was in 2007: 

Kernal Panic!

Graham Perrin

Nov 13, 2012, 3:35:58 AM
to zfs-...@googlegroups.com
On Tuesday, 13 November 2012 03:31:12 UTC, ben wrote:



Kernel panic file:

… 

Also outdated three months ago: VirtualBox 4.1.18. Aim instead for 4.2.4 or greater. 

For me, 4.2.2 features strongly in the following topic: 

kernel_task kernel panic with Mountain Lion under pressure

I have an uneducated hunch that a bugged third party KEXT might occasionally contribute to a kernel panic, even if the app relating to that KEXT is not currently running. 

Whether any KEXT for VirtualBox is truly bugged I don't know, but as a rule of thumb: in panic situations, I pay attention to all third party KEXTs. 