Drop in replacement for Zevo installs


George

Oct 2, 2013, 11:18:36 PM10/2/13
to maczfs...@googlegroups.com
Alright, I thought I'd throw the elephant into the room here.  I don't want to start an ugly debate, but take this from a pragmatic standpoint.  From the dialogue on the ZEVO forum, it sounds like we need to start taking notice of the new efforts going on here.

I worked with Don quite a bit on ZEVO testing and on working out various kinks.  One of the wonderful things about Don was his attentiveness to detail and his deep experience from the bowels of Apple itself.  There are aspects of ZEVO that I'm used to (and rightfully so) that I would like to make sure are in place with the next iteration of MacZFS before I jump into testing.

I've not had time to dig through the forum here or the details of the current status, so forgive me in advance, and let's take this as a fresh discussion for would-be ZEVO converts, aka a ZEVO FAQ.

Presently ZEVO is a pretty much plug-it-and-forget-it setup in these regards:
  1. imports / mounts automatically on startup
  2. automatically creates snapshots daily (Not sure if I like this feature, but it's there. It would be a really nice feature if there were a table to edit autosnapshots on a per-volume basis.)
  3. autoscrubs weekly (Again, not sure if I like this, but it's there.)
  4. has a control panel; handy for checking updates, setting "show child file systems on the desktop and sidebar", and starting scrubs, along with showing the status of the last scrub started or finished.  (It looks like someone has this, and more, in their sights from this post.)
  5. primary tank and child tanks do show up in Finder
  6. TimeMachine works, but it is also convenient to browse manually in the shell
  7. normalization is set to formD by default; we used to have filename problems, especially with nonstandard characters
  8. no panics in a year and never experienced data loss in 2 years.
  9. the ability to masquerade as HFS+ to some applications (perhaps the need to do so to ALL applications for those of us who are smart enough to not try to run HFS+ utils on a ZFS volume).  There are many apps such as Final Cut Pro X and others that simply refuse to grant ZFS a first class citizenship.  I, and probably others, have cried to Apple to address this issue, but no one will reply or lift their eyes it appears.  We unfortunately have to deal with this.

I personally would manually create my tanks via something like:
  • `diskutil partitionDisk /dev/disk0 GPTFormat ZFS %noformat% 100%`
  • `zfs create -o atime=off -o utf8only=on -o casesensitivity=insensitive -o normalization=formD -o compression=lzjb -o mountpoint=/Users/username/some_folder tank/some_folder`
Will that ever be achievable via Disk Utility?  I have not put any thought into how to set this up, but it would be interesting.

Personally, I'm primarily just interested right now in bringing my two large tanks over, and stability is exceptionally important to me right now with a hectic schedule.  I'm a veteran with Unix and OS X, but I just can't deal with a lot of downtime given my schedule.

Are we roughly to a point where we can drop this in as a ZEVO replacement and expect it to run as we're accustomed to, or do we have some work yet to do?

Any other ZEVO / TensComplement users around who would like to chime in on anything missed above or what else is needed?  

lun...@lundman.net

Oct 2, 2013, 11:50:16 PM10/2/13
to maczfs...@googlegroups.com


On Thursday, 3 October 2013 12:18:36 UTC+9, George wrote:
Alright, I thought I'd throw the elephant into the room here.  I don't want to start an ugly debate, but take this from a pragmatic standpoint.  From the dialogue on the ZEVO forum, it sounds like we need to start taking notice of the new efforts going on here.

I worked with Don quite a bit on ZEVO testing and on working out various kinks.  One of the wonderful things about Don was his attentiveness to detail and his deep experience from the bowels of Apple itself.  There are aspects of ZEVO that I'm used to (and rightfully so) that I would like to make sure are in place with the next iteration of MacZFS before I jump into testing.


Not everything Don brought to the work has been lost. A lot survives in the original code, and some personal emails with Don have kept some of his experience around. I do get the feeling that Don has an (almost German-level) attention to detail higher than mine, as I lean toward "progress" rather than pure stability (which comes at the cost of omitting features).



 
Presently ZEVO is a pretty much plug-it-and-forget-it setup in these regards:
  1. imports / mounts automatically on startup

This has been pretty low priority for me, as OS X-specific integration is not as exciting as getting the actual "core" working. Also, we always knew it is not a difficult thing to add: you just need a /System/Library/Filesystems/ "fs bundle" to recognise ZFS and call the helper. This already exists, but is not in the "master" branch yet.



  2. automatically creates snapshots daily (Not sure if I like this feature, but it's there. It would be a really nice feature if there were a table to edit autosnapshots on a per-volume basis.)

This can be done the traditional crontab way, but will eventually go into the future GUI.
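For example, a root crontab entry along these lines (an untested sketch; the dataset name and schedule are placeholders, and note that % must be escaped in crontab):

  # daily snapshot of tank/Documents at 02:00, named after the date
  0 2 * * * /usr/sbin/zfs snapshot tank/Documents@auto-$(date +\%Y\%m\%d)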

  3. autoscrubs weekly (Again, not sure if I like this, but it's there.)

This can be done the traditional crontab way, but will eventually go into the future GUI.
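Again as a sketch, with "tank" as a placeholder pool name:

  # scrub pool "tank" every Sunday at 03:00
  0 3 * * 0 /usr/sbin/zpool scrub tank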

  4. has a control panel; handy for checking updates, setting "show child file systems on the desktop and sidebar", and starting scrubs, along with showing the status of the last scrub started or finished.  (It looks like someone has this, and more, in their sights from this post.)

We have added dataset properties for this, so you can choose your preference per ZFS dataset rather than as a global setting; this too will eventually go into the future GUI.

  5. primary tank and child tanks do show up in Finder

Same answer as above: the new per-dataset properties control this, rather than a global setting, and it will eventually be exposed in the GUI.

  6. TimeMachine works, but it is also convenient to browse manually in the shell

TimeMachine does work; more than that, you can create a ZFS volume (zvol) to store TimeMachine backups on, with the included ZFS benefits of compression, dedup, snapshots etc., if you so wanted.
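As a sketch (the size, the names, and the /dev/diskN node the zvol attaches as are all placeholders):

  # create a 500G zvol, with compression, for TimeMachine to live on
  zfs create -V 500g -o compression=lzjb tank/timemachine
  # find the device node with "diskutil list", then format it journaled HFS+
  diskutil eraseDisk JHFS+ TimeMachine GPT disk5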


  7. normalization is set to formD by default; we used to have filename problems, especially with nonstandard characters

Unsure about this. I have not changed any ZFS defaults, as a pool created on one system should be the same as on others? Or should it be platform-"convenient"?


  8. no panics in a year and never experienced data loss in 2 years.

I have not experienced any data loss, even with my 50+ panics a day in the early porting days. Nowadays I do not get panics under vanilla usage.


  9. the ability to masquerade as HFS+ to some applications (perhaps the need to do so to ALL applications for those of us who are smart enough to not try to run HFS+ utils on a ZFS volume).  There are many apps such as Final Cut Pro X and others that simply refuse to grant ZFS a first class citizenship.  I, and probably others, have cried to Apple to address this issue, but no one will reply or lift their eyes it appears.  We unfortunately have to deal with this.

If this is related to the AFP fakery, we have already included that in the code. If it is something separate, it'd be fun to look at.



  • `diskutil partitionDisk /dev/disk0 GPTFormat ZFS %noformat% 100%`
  • `zfs create -o atime=off -o utf8only=on -o casesensitivity=insensitive -o normalization=formD -o compression=lzjb -o mountpoint=/Users/username/some_folder tank/some_folder`
In the new code, creating the pool will automatically make the GPT ZFS partition, so I would no longer recommend doing the first step if you are using a whole disk.
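For example (a sketch; the pool name and disk are placeholders):

  # whole-disk pool; the GPT ZFS partition is created for you
  sudo zpool create tank /dev/disk2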



     Will that ever be achievable via Disk Utility?  I have not put any thought into how to set this up, but it would be interesting.

The fs.bundle mentioned above includes the probe for "formatting", which is presumably used by Disk Utility. Having ZFS show up as an option in Disk Utility itself would be sexy indeed, but I don't see how it can be done (right now).


     Personally, I'm primarily just interested right now in bringing my two large tanks over, and stability is exceptionally important to me right now with a hectic schedule.  I'm a veteran with Unix and OS X, but I just can't deal with a lot of downtime given my schedule.
     Are we roughly to a point where we can drop this in as a ZEVO replacement and expect it to run as we're accustomed to, or do we have some work yet to do?

This is a scary topic indeed. Personally, I use my new version and I feel confident with it; but I know that if it panics, I just reboot, and I have a backup of the data too. As soon as other people's data comes in, I feel uncomfortable being responsible for it. Some guys on IRC who were able to take the plunge have done so, so we can finally move forward and grow the trust that is needed.

On the GUI: I am not a GUI programmer, and I had hoped someone into Xcode/Obj-C might offer to assist. But I will most likely try to make something happen in the next couple of weeks.


Graham Perrin

Oct 3, 2013, 12:09:49 AM10/3/13
to maczfs...@googlegroups.com
On Thursday, 3 October 2013 04:50:16 UTC+1, lun...@lundman.net wrote:

… having ZFS show up as an option in Disk Utility itself would be sexy indeed, but I don't see how it can be done (right now)

– a ZFS plugin for Disk Utility. Originated in/around 2010; I don't know whether its code will be easily reusable with ZFS-OSX on more modern versions of the operating system.

George

Oct 3, 2013, 1:26:23 AM10/3/13
to maczfs...@googlegroups.com
Okay, so thanks very much for that reply.  I'm starting to feel better about this already.

I think auto-importing / mounting should absolutely be a high priority.  If I recall correctly, Don was also working on bootable ZFS volumes, but I'm not sure what the status of that was when he moved on from ZEVO.  Additionally, it REALLY needs to auto-import well before the login screen.  We had problems with this for a while, and it made using ZFS for the home or /Users folder problematic.  ZEVO fixed this about a year ago; the issue was not importing early enough in the startup process.
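In the interim, I would imagine something like a LaunchDaemon could paper over plain auto-import (a rough, untested sketch; the label and paths are made up), though whether it fires early enough for a ZFS home directory is exactly the question:

  sudo tee /Library/LaunchDaemons/local.zpool-import.plist <<'EOF'
  <?xml version="1.0" encoding="UTF-8"?>
  <plist version="1.0"><dict>
      <key>Label</key><string>local.zpool-import</string>
      <key>ProgramArguments</key>
      <array><string>/usr/sbin/zpool</string><string>import</string><string>-a</string></array>
      <key>RunAtLoad</key><true/>
  </dict></plist>
  EOF
  sudo launchctl load /Library/LaunchDaemons/local.zpool-import.plist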

It seems that all existing ZFS volumes should, upon "upgrade" to MacZFS, be flagged to show up in Finder.  This would go a long way toward preventing the dreaded initial boot-up freak-out that may happen otherwise.  A knee-jerk gut reaction to either of these first two points (auto-importing, and showing up in Finder) could be super negative at first.  But this is good; I would hide some volumes selectively post-upgrade.

Nice re: TimeMachine.  I ended up having to move my home directory back off ZEVO due to performance limitations that hit at over 80% storage utilization.  I would hit them frequently thanks to autosnapshots, and the laggy performance would bring me to my knees.  It would be nice, even with a home directory on HFS+, to have it all taken care of on my larger ZFS TimeMachine volume.  We all thought Apple had done this back when they introduced it… :S

Regarding some default initialization options for the tank/volumes:  I think there should be some sane defaults.  My `-o atime=off -o utf8only=on -o casesensitivity=insensitive -o normalization=formD -o compression=lzjb` options have served me very well.  I had many and varied problems until I got to the point of defining these.

One thing whose cause Don and I were never able to really isolate was why iTunes had an odd delay when media was stored on ZFS vs HFS+.  Changing tracks would invariably pause for anywhere from 1 second to several.  After a lot of optimization effort it pretty much got down to 1 second, but that is still an order of magnitude slower than HFS+ for me.

Additionally, file operations such as `find /Volumes/tank/some/path` were also always very much slower than on HFS+.  It was never really hunted down, but it has always been quite annoying.  I don't recall details from our discussions, but it seemed to have to do with a difference in the way file lookups are done in each filesystem (it has been a very long time since I had this discussion with him).

So you've had no data loss, great.  But what about on HFS+?  I am more concerned about losing data on that fragile bugger when panics hit.  Further, I'm a guy who keeps like 25 apps open at once and doesn't reboot for 3 weeks as I multitask between projects and manage several developers all day long, and I have a hard time getting back to a working state, so to speak.  I've really got my panics down to near nil.  How is the exception handling going?  With the panics you've seen so far, is there any additional handling that could basically just tell you there's an inconsistency and that you should restart IMMEDIATELY, versus jumping straight into the panic?  I know this is a little esoteric, but exception handling is always one of those things that's an afterthought, as we want to FIX the problems rather than merely handle the out-of-bounds cases.

Re: HFS+ spoofing, this is for apps, nothing else.  FCPX and other pro apps, as well as some others, simply refuse to work on ZFS.  The argument they give is that other filesystems don't support the features that HFS+ provides… Well, we know that's mostly just bull and lazy development.  I could share some details of how Don had me hunting, but I will leave them out for the moment in case you have fresh thoughts on this little puzzle.  I don't believe his method was a catch-all, and I think there are better solutions.  In the end his method seemed hit and miss, and I felt that entirely spoofing a volume as HFS+ would have been the right approach.  It would sure make a nice per-filesystem flag, the general rather than the per-app approach.  Don felt that a per-app approach was more "pure" and that app developers should just flat out support ZFS, or allow it.  I couldn't disagree with him directly, as he's right, BUT pragmatically it would be better to just blanket the spoofing so that we would not be beholden to Apple's stagnation in this regard.  Apple certainly wouldn't reply to my pleas to directly support pro apps on ZFS volumes.

I am very encouraged that it appears, from your mention, that dedup now works properly.  I used dedup for a while in ZEVO and it did work, but it had some serious problems, particularly in the memory and performance arenas.  One continual fight was OS X's handling of memory versus Solaris' much superior kernel, as I understand it.  Memory battles were always a real headache with ZFS.  I am wondering if and how you are getting around those, or has Don's effort upstream helped the situation?

Finally, sometimes it's not very practical for me to back up large amounts of video-editing data and the like.  Losing that stuff would not be the end of the world, but I don't really want to back it all up all the time.  So this is why I am going to push for some more assurance on the stability front.  I don't want any kind of personal guarantee for my data; I understand that, and I've been running ZFS since back in 2008, on FreeBSD on a file server pushing AFP.  I feel good about ZFS generally, as additional data-loss protection measures are taken at the core levels, as I understand it.  It's an amazing filesystem.  I just don't want to be hitting panics and odd issues that will create more of a time sink for me right now.

One thing I would like to try, if it is possible, is a current build on my ZEVO-created volumes: run some benchmarks and so forth, and if things work as I need, keep going with it; otherwise just push the ZEVO kexts back in and pull MacZFS out until there's more progress.  I really would like to give dedup a go again.  It is such a wonderful feature, and if it now performs well, heck, I'm there.

Interestingly, on the dedup item, we talked about building in functionality that would dedup and compress filesystems post-creation.  For example, a volume that has already existed for a long time could be triggered to go through and dedup and/or compress its existing data.  We were discussing that, I think, last fall; essentially a scrub-like process, as I understood it.

Graham Perrin

Oct 3, 2013, 2:29:33 AM10/3/13
to maczfs...@googlegroups.com
On Thursday, 3 October 2013 06:26:23 UTC+1, George wrote:

performance limitations that hit at over 80% storage utilization. I would hit them frequently thanks to autosnapshots …

I'll add to <http://superuser.com/a/484224/84988> a link to a technical explanation of why eighty percent is significant for ZFS in general (not for ZEVO alone).

<https://groups.google.com/d/msg/maczfs-devel/HBurtudl_QQ/d_ZdjkcBpcUJ> includes the notion that a preference pane could offer an amber alert at seventy-five percent, red alert at eighty. 

More recently I believe that there's a plan for ZFS-OSX to use OS X notifications (not Growl) for some things. 

<http://open-zfs.org/wiki/Projects> links to notes from a brainstorm that included: 

>> Pool fragmentation analytics
>> * Provide feedback on when to add storage
>> * Provide “% fragmented” metric
>> Data rebalancing/redistribution/defrag/placement

Graham Perrin

Oct 3, 2013, 2:48:27 AM10/3/13
to maczfs...@googlegroups.com
On Thursday, 3 October 2013 06:26:23 UTC+1, George wrote:

… why iTunes had an odd delay when media was stored on ZFS vs HFS+.  Changing tracks would invariably pause for anywhere from 1 second to several.  After a lot of optimization effort it pretty much got down to 1 second, but that is still an order of magnitude slower than HFS+ for me. …


For me <http://open-zfs.org/wiki/User:Grahamperrin#Everyday_use_of_ZFS> the significant negative factor for performance in general (not iTunes in particular) is HFS Plus on the same disk … I can't say that use of iTunes with ZFS is problematic. I get a pause before playback only rarely. During playback there's never an issue. 

I can't compare with the HFS Plus experience. 

Jorgen Lundman

Oct 3, 2013, 3:14:30 AM10/3/13
to maczfs...@googlegroups.com


George wrote:
> It seems that all existing ZFS volumes should, upon "upgrade" to MacZFS,
> be flagged to show up in Finder. This would go a long way toward
> preventing the dreaded initial boot-up freak-out that may happen
> otherwise. A knee-jerk gut reaction to either of these first two points
> (auto-importing, and showing up in Finder) could be super negative at
> first. But this is good; I would hide some volumes selectively post-upgrade.

I believe we added the properties com.apple.browse (default on) and
com.apple.ignoreowner (default off). If you set the former to off, the
dataset will no longer show in Finder or on the Desktop.

This is a recent thing, and we'll see how it plays out, whether or not we
want to change it.
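
For example (the dataset name is just an example):

  # hide a dataset from Finder and the Desktop
  zfs set com.apple.browse=off tank/scratch
  # and to show it again
  zfs set com.apple.browse=on tank/scratch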


> Nice re: TimeMachine. I ended up having to move my home directory back off
> ZEVO due to performance limitations that hit at over 80% storage utilization. …

We think of it as the 85%-full problem, but I believe it was improved
significantly in the later pool versions. I was certainly under the
impression that v28 did not suffer from this anymore, so I am surprised to
see it mentioned.


> Regarding some default initialization options for the tank/volumes: I
> think there should be some sane defaults. My `-o atime=off -o utf8only=on
> -o casesensitivity=insensitive -o normalization=formD -o compression=lzjb`
> options have served me very well. I had many and varied problems until I
> got to the point of defining these.

I think everyone does atime=off, compression=lz4 these days, and yet that
is not the OpenZFS default. :) It could be that utf8only and
case-insensitivity do make sense as OS X defaults, or at least a message to
that effect when you create a pool.
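
That is, something like this by hand today (a sketch; "tank" is a placeholder):

  zfs set atime=off tank
  zfs set compression=lz4 tank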


> Additionally, file operations such as `find /Volumes/tank/some/path` were
> also always very much slower than on HFS+. It was never really hunted
> down, but it has always been quite annoying. I don't recall details from
> our discussions, but it seemed to have to do with a difference in the way
> file lookups are done in each filesystem (it has been a very long time
> since I had this discussion with him).

Listing can indeed be slower. Some of that was addressed with the Hybrid
VDEV work in pool v29, which only Solaris has. Hopefully OpenZFS will get
there eventually.


> So you've had no data loss, great. But what about on HFS+? I am more
> concerned about losing data on that fragile bugger when panics hit.

I have indeed had to reinstall the VM once due to HFS+ shitting all over
the disk upon panic. Luckily, the HFS+ disk was just the OS, all my
interesting things are on ZFS :)


> Further, I'm a guy who keeps like 25 apps open at once and doesn't reboot
> for 3 weeks as I multitask between projects and manage several developers
> all day long, and I have a hard time getting back to a working state, so
> to speak. I've really got my panics down to near nil. How is the
> exception handling going? With the panics you've seen so far, is there
> any additional handling that could basically just tell you there's an
> inconsistency and that you should restart IMMEDIATELY, versus jumping
> straight into the panic? I know this is a little esoteric, but exception
> handling is always one of those things that's an afterthought, as we want
> to FIX the problems rather than merely handle the out-of-bounds cases.

I am of like mind; Unix does not need rebooting. At the moment, the panics
are immediate and final. But it is worth noting that the internal ZFS
panics can be made soft, i.e. only display a message and halt ZFS
operations. You can't do anything but reboot of course, but sometimes that
is nicer (if you remember to keep an eye on the kernel logs). But only
about 40% of the panics are from ZFS; the rest come from Darwin when we do
something displeasing to it.


>
> Re: HFS+ spoofing, this is for apps, nothing else. FCPX and other pro
> apps, as well as some others, simply refuse to work on ZFS. The argument
> they give is that other filesystems don't support the features that HFS+
> provides. …

A proper developer would use the very nice VFS API to query the
capabilities of the filesystem, and base the decision on that. Since we
support everything HFS+ supports, and more, such apps should just work.
"Not entirely proper developers" will check that the filesystem ID is HFS+
and damn every other filesystem type.

We could easily add a dataset property to "lie or not" if we find it is
required. What is the smallest app you know of that takes issue with ZFS,
so I can take it for a test drive?


> I am very encouraged that it appears, from your mention, that dedup now
> works properly. I used dedup for a while in ZEVO and it did work, but it
> had some serious problems, particularly in the memory and performance
> arenas. One continual fight was OS X's handling of memory versus Solaris'
> much superior kernel, as I understand it. Memory battles were always a
> real headache with ZFS. I am wondering if and how you are getting around
> those, or has Don's effort upstream helped the situation?

There is definitely a concern around memory. Darwin carves all allocations
into chunks of fixed sizes, like 128, 256, 512, 1024 etc., and each chunk
size has a limit. There is no way (that I can find) to see whether any
particular size is about to run out, only the running total. We can be
using only 50% overall while (for example) the 1024 chunk is depleted, and
we panic.

This is rather annoying. At the moment, low-memory systems (4GB or less)
will halve the memory used by ZFS purely to avoid this. I also improved the
ARC to force eviction of all data (the Solaris model does not evict
metadata, but lets it go away upon vnode reclaim). Currently it does a
pretty good job of staying under the calculated limit.

Long term, we might need to internally tally all the chunk sizes and force
eviction based on size.


>
> Interestingly, on the dedup item, we talked about building in
> functionality that would dedup and compress filesystems post-creation.
> For example, a volume that has already existed for a long time could be
> triggered to go through and dedup and/or compress its existing data. We
> were discussing that, I think, last fall; essentially a scrub-like
> process, as I understood it.

Currently, you can `zfs send | zfs recv` a snapshot to dedup and compress it.
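
Roughly like this (a sketch; the names are placeholders, and the copy is written with whatever dedup/compression settings the destination inherits):

  # enable the wanted properties on the pool so the copy is written with them
  zfs set dedup=on tank
  zfs set compression=lzjb tank
  # snapshot the existing dataset and copy it; blocks are dedup'd/compressed on write
  zfs snapshot tank/media@migrate
  zfs send tank/media@migrate | zfs recv tank/media-new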

I have toyed with the idea of a smarter scrub, but I want it for BPR
(block-pointer rewrite), i.e. adding another device to a vdev and
rebalancing the raid.




--
Jorgen Lundman | <lun...@lundman.net>
Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo | +81 (0)90-5578-8500 (cell)
Japan | +81 (0)3 -3375-1767 (home)

Graham Perrin

Oct 3, 2013, 5:02:32 AM10/3/13
to maczfs...@googlegroups.com
On Thursday, 3 October 2013 08:14:30 UTC+1, lun...@lundman.net wrote:
 
George wrote:

> … Additionally, file operations such as `find /Volumes/tank/some/path` were
> also always very much slower than on HFS+.  It was never really hunted
> down, but it has always been quite annoying.  I don't recall details from
> our discussions, but it seemed to have to do with a difference in the way
> file lookups are done in each filesystem (it has been a very long time
> since I had this discussion with him).

Listing can indeed be slower. Some of that was addressed with the Hybrid VDEV work in pool v29, which only Solaris has. Hopefully OpenZFS will get there eventually. …

I might have some useful information here, and I think that it deserves a separate topic, or continuation of something that's in the issue tracker for old MacZFS. 

Should we (a) discuss here in the group; (b) add to the old issue; or (c) add an enhancement issue in the ZFS-OSX area? (Jorgen, which one would you prefer?)

George

Oct 5, 2013, 2:00:43 AM10/5/13
to maczfs...@googlegroups.com
So Jorgen, I am curious.  Say I installed MacZFS in its current state: what kind of experience would I have coming from ZEVO?  I am not encouraged by their slow responses regarding a 10.9 support release, so I'm going to start analyzing my potential options.  Building an additional server is less desirable at present than having it "just work" on my Mac workstation and server.

Graham Perrin

Oct 7, 2013, 7:57:50 AM10/7/13
to maczfs...@googlegroups.com
On Thursday, 3 October 2013 04:50:16 UTC+1, lun...@lundman.net wrote:

On Thursday, 3 October 2013 12:18:36 UTC+9, George wrote:
 
… normalization is set to formD by default; we used to have filename problems, especially with nonstandard characters
 

Unsure about this. I have not changed any ZFS defaults, as a pool created on one system should be the same as on others? Or should it be platform-"convenient"?

formD from the outset is highly recommended where a file system is to be used with OS X. 

<https://diigo.com/016omm> for highlights from reactions (in French) to a post about ZFS-OSX in the context of OpenZFS. The complaints in comment #3 probably refer to MacZFS issue 53: 

File disappear when copying if name contains french character

– clearly a source of frustration. 

In the ZEVO support forum: 

NFD: normalization=formD (normalisation form D)

Hope that helps
Graham

Graham Perrin

Nov 28, 2013, 1:54:35 AM11/28/13
to maczfs...@googlegroups.com
Cross reference: 

VNOP_SEARCHFS · Issue #100 · zfs-osx/zfs

Jorgen Lundman

Nov 28, 2013, 2:02:54 AM11/28/13
to maczfs...@googlegroups.com
> Listing can indeed be slower. Some of that was addressed with the
> Hybrid VDEV work in pool v29, which only Solaris has. Hopefully OpenZFS
> will get there eventually. …
>
>
> I might have some useful information here, and I think that it deserves
> a separate topic, or continuation of something that's in the issue
> tracker for old MacZFS.
>
> Should we (a) discuss here in the group; (b) add to the old issue; or
> (c) add an enhancement issue in the ZFS-OSX area? (Jorgen, which one
> would you prefer?)
>
>
> Cross reference:
>
> VNOP_SEARCHFS · Issue #100 · zfs-osx/zfs
> <https://github.com/zfs-osx/zfs/issues/100>
>

This actually came up while trying to get Spotlight to work correctly, in
that nothing Spotlight finds makes it into the index it keeps
(export/import, and the index is empty).

We did notice that it does try to call vnop_searchfs to scan, but it
appears to have a fall-back method (volumes formatted with FAT32 do work in
Spotlight, and yet FAT32 does not define vnop_searchfs).

In terms of speeding up searching, adding vnop_searchfs would most likely
be faster (and indeed, mds keeping a working index would help).
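
For anyone poking at this, the index state can be inspected from the shell (paths are examples):

  # is indexing enabled, and did the index survive an export/import?
  mdutil -s /Volumes/tank
  # erase and rebuild the index for the volume
  sudo mdutil -E /Volumes/tank
  # query the index for this volume only
  mdfind -onlyin /Volumes/tank readme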

I am not entirely comfortable with the idea of the kernel handling such a
large task as searchfs does, with timeouts etc. But clearly Apple had no
such misgivings.

Eventually we will most likely implement vnop_searchfs, but it appears to
be rather complicated.

I have peeked at the Hybrid pool, and Ryao looked at it a little deeper. It
is desirable for pools using raidz and up.

Graham Perrin

Nov 28, 2013, 8:27:08 AM11/28/13
to maczfs...@googlegroups.com
On Thursday, 28 November 2013 07:02:54 UTC, lun...@lundman.net wrote:


> <https://github.com/zfs-osx/zfs/issues/100>

This actually came up while trying to get Spotlight to work correctly, in that nothing Spotlight finds makes it into the index it keeps (export/import, and the index is empty).

We did notice that it does try to call vnop_searchfs to scan, but it appears to have a fall-back method (volumes formatted with FAT32 do work in Spotlight, and yet FAT32 does not define vnop_searchfs).

In terms of speeding up searching, adding vnop_searchfs would most likely be faster (and indeed, mds keeping a working index would help).

I am not entirely comfortable with the idea of the kernel handling such a large task as searchfs does, with timeouts etc. But clearly Apple had no such misgivings.

Eventually we will most likely implement vnop_searchfs, but it appears to be rather complicated.


Jorgen, thanks. What's above helps me to reconcile a summer test result with some privately given advice. 

I went on to seek additional information in public. <https://diigo.com/01b64u> for highlights (drawn in August 2013) on a 2009 post in the Xsanity forum: 

Xsan 2.2 and FS Search

– whether that can help with ZFS-OSX, I have no idea, but I found it interesting. In particular: 

"… file name (it can perform partial matches, but no globbing / regular expressions), owner, group, size, etc. …".

(Note to self, long term: SMB/CIFS clients (Mavericks, Windows etc.) performing searches with SMB/CIFS served by Mavericks and earlier.)

I *assume* that the following expressions are synonymous with each other: 

* searchfs
* search fs
* FS Search

-- Graham