RAIDZ1 running slow =(


James Hoyt

May 18, 2014, 8:27:32 PM5/18/14
to zfs-...@googlegroups.com
So I set up a MacZFS RAIDZ1 pool rather easily and was happy with myself. I had four 3 TB internal SATA drives in a zpool, giving me around 9 TB of space.

jamess-imac:~ sangie$ zpool status murr
  pool: murr
 state: ONLINE
 scrub: none requested
config:

    NAME         STATE     READ WRITE CKSUM
    murr         ONLINE       0     0     0
      raidz1     ONLINE       0     0     0
        disk3s2  ONLINE       0     0     0
        disk1s2  ONLINE       0     0     0
        disk2s2  ONLINE       0     0     0
        disk4s2  ONLINE       0     0     0

errors: No known data errors

So I filled it up with about 5 GB of data, mainly images and FLAC/music files, and everything just drags on it. It takes a long time for files to be listed in Finder, and when I try to save an image from Firefox, it will just grind and grind while I try to navigate to a folder. I have VMware Fusion set up on my SSD (my main Mac drive), and doing anything on my zpool from Windows (like using MediaMonkey to organize FLAC files on it) uses up 100% of the CPU, freezing up my computer until the moves are done, even when moving around 30 files.

Is my zpool okay? What's going on? Is this type of slowness normal or do I have a bad drive? How will MacZFS report to me if a drive in the array goes bad? I installed SMARTReporter Lite and it shows all drives as green. If I have some drives on SATA II and others on SATA III would that affect anything?

If you want me to run any tests on it, I will do so gladly. Just let me know.

Thanks!

Jason Belec

May 18, 2014, 9:49:11 PM5/18/14
to zfs-...@googlegroups.com
Well, that sounds wrong; that's not my experience.

How did you create your pool?


--
Jason Belec
Sent from my iPad
--

---
You received this message because you are subscribed to the Google Groups "zfs-macos" group.
To unsubscribe from this group and stop receiving emails from it, send an email to zfs-macos+...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Daniel Becker

May 18, 2014, 10:51:35 PM5/18/14
to zfs-...@googlegroups.com
How much memory do you have on that machine, if you're running ZFS and VMs?

Dave Cottlehuber

May 19, 2014, 4:00:32 AM5/19/14
to zfs-...@googlegroups.com
 
From: James Hoyt <djna...@gmail.com>
Reply: zfs-...@googlegroups.com
Date: 19 May 2014 at 02:27:36
To: zfs-...@googlegroups.com
Subject: [zfs-macos] RAIDZ1 running slow =(

> So I setup a MacZFS RaidZ rather easily and was happy with myself. I had four 3 TB internal SATA drives in a zpool giving me around 9 TB of space.
> So I Filled it up with about 5 GBs of data, mainly images and FLAC/music files and everything just drags on it. It takes a long time for files to be listed in finder and when I try to save an image from Firefox, it will just grind and grind while I try to navigate to a folder. I have vmware Fusion setup on my SSD (my main Mac drive) and doing anything on my zpool from Windows (like using MediaMonkey to organize FLAC files on it) uses up 100% of the CPU, freezing up my computer until the moves are done, even when moving around 30 files.

It’s not clear from this what your actual physical / virtual setup is. Are you booting to OSX, and running Windows in a VM? Is the entire VM then living on the raidz pool?

> Is my zpool okay? What's going on? Is this type of slowness normal or do I have a bad drive? How will MacZFS report to me if a drive in the array goes bad? I installed SMARTReporter Lite and it shows all drives as green. If I have some drives on SATA II and others on SATA III would that affect anything?
>
> If you want me to run any tests on it, I will do so gladly. Just let me know.
>
> Thanks!

I’ve seen precisely this sort of behaviour with vmware fusion when:

1. my SSD was getting worn down (really, I trashed it in a year; it was the default Apple one that came with an early 2011 MBP)
2. the host OS & VM don't have sufficient memory to run correctly without swapping
3. the additional memory within the VM is pulled from a disk swap file, which by default is in the same disk location as the VM itself

Anything less than 8 GB of RAM is likely to be tight; VMs will of course make this more complicated. Some notes at http://artykul8.com/2012/06/vmware-performance-enhancing/ may help.

I found that my SSDs were being worn out by constantly running VMs; I use them heavily in my work. The solution I found was to max out the RAM in my laptop and iMac (16 and 32 GiB respectively), make a ZFS-based ramdisk with lz4 compression, and copy the entire VM into the ramdisk before running it. The copy phase only takes a few seconds from SSD, and it gives me a very nice way to "roll back" to the previous image when required. I can comfortably run a Windows VM whose ~20 GiB image fits inside a 10 GiB zpool thanks to compression, even on the 16 GiB laptop, while allocating 2 GiB of RAM to the VM itself (10 + 2 for virtualisation leaves 4 for all the OSX stuff).

Here’s the zsh functions I use for this.

# create a 1GiB ramdisk (ram:// takes a count of 512-byte sectors)
ramdisk-1g () {
    ramdisk-create 2097152
}

# the generic function behind the specific one above
ramdisk-create () {
    diskutil eject /Volumes/ramdisk > /dev/null 2>&1
    diskutil erasevolume HFS+ 'ramdisk' `hdiutil attach -nomount ram://$1`
    cd /Volumes/ramdisk
}

# make a zpool-backed ramdisk instead of the HFS+ one above. The main
# advantage is compression: I get at least 2x more "disk" per RAM this way.
zdisk () {
    sudo zpool create -O compression=lz4 -fm /zram zram `hdiutil attach -nomount ram://20971520`
    sudo chown -R $USER /zram
    cd /zram
}

# self-explanatory
zdisk-destroy () {
    sudo zpool export -f zram
}


Dave Cottlehuber
d...@jsonified.com
Sent from my Couch



Jason Belec

May 19, 2014, 8:43:48 AM5/19/14
to zfs-...@googlegroups.com
Dave has posted some good info; reminds me why I prefer VirtualBox. ;) We do seem to need more detail, though, to really help the OP.


Jason
Sent from my iPhone 5S

James Hoyt

May 19, 2014, 10:05:23 AM5/19/14
to zfs-...@googlegroups.com
Thanks for all the replies guys =D

Sorry for the lack of information. I'm running a Hackintosh with a 256 GB
SSD, and I sometimes run Windows 8.1 in a virtual machine via VMware
Fusion. The virtual image file is also located on the SSD. The only
files I have on my zpool are data files; I don't run an OS or VM image
from it. I have 12 GB of RAM and a four-core i5 processor. I dedicate
6 GB of RAM and 2 cores to the VM. It should be noted that I experience
the slowdown even when VMware is off; the drives are just at their
slowest when the VM is running.

As for how I created the zpool, I followed the Getting Started guide with

zpool create murr raidz disk3s2 disk1s2 disk2s2 disk4s2

Please help... I really hope I don't have to recreate it, but it's
looking that way.

Would it be better if I bought a RAID card and used Mac OS Extended
(Journaled)? Cost is an issue... the other issue is that these are
regular desktop 7200 RPM drives, not NAS drives.

Thanks,

James

Jason Belec

May 20, 2014, 7:14:37 AM5/20/14
to zfs-...@googlegroups.com
OK, it doesn't look like RAM, processor, etc. are the issue... Let's work with that in mind for now.

When the pool and its associated drives are not connected, is the computer back to your expectation of normal? If so, you have one or more bad cables, one or more bad drives, or a bit of both; perhaps a bad or underpowered power supply (fixing that solves 90% of the issues I come across); maybe even an issue with the motherboard. Simplest thing: have you run a scrub on this pool? Was it clean?

The type of drive you have is not an issue; the make, and known issues with those specific drives, might be, but you didn't provide that info.

Throwing out terms like RAID card and Mac OS Journaled will not help you; you're either on ZFS or you're not. That said, you will not get the same speed from ZFS as from other RAID setups, but you will get peace of mind on data integrity. I do hope you are also backing up the data from the pool, or eventually you will be in tears like so many others. A little forum searching under old and new versions of MacZFS will be helpful.

Since you're getting started: once this is resolved, it might be better to build/run this under the latest (yes, it's in development) MacZFS rather than the old tired version. It is quite a bit different, modern, and makes many things a lot easier. (Insert legal disclaimer here) ;)

Interesting aside:
Dave mentioned an interesting point about wearing out SSDs, and I must admit I've had two such occurrences, but only with a Hackintosh and only with less-than-stellar drives. It seems that around the mad-science lab, Intel SSDs are the most reliable long-term; I have two of their originals still outlasting several other brands.

--
Jason Belec
Sent from my iPad

James Hoyt

May 20, 2014, 9:37:46 AM5/20/14
to zfs-...@googlegroups.com
Thanks for the detailed reply.

The slow performance only happens when I'm using the RAID array, so I
assume that with it disconnected (meaning I can't use it at all)
there's no slow performance. I would love instructions on how to
scrub/clean the pool. Does it do a data wipe?

I was trying to think of a good backup solution. I have over 3 TB of
music in FLAC (much of which I've paid for) and was hoping RAIDZ would
take away the need for backups. I was thinking of buying a 4 TB drive,
moving all my data onto it, and storing the drive offsite or
something (in case of burglary, fire, etc.). Being able to survive a
single drive failure seems secure enough for me, so I don't think
incremental backups are needed.

As for running the latest beta ZFS, I didn't because the FAQ warned me
not to. What are the differences? Would I have to format and rebuild
the array?

The drives I have are four 3 TB Hitachi HDS723030BLE640.

I started navigating around my computer again, and the slowdown seems
to happen when going into folders with over 1000 files (anything more
and it will take 1-3 minutes just to list the files in the directory).
Also, when I'm saving images from Firefox (no virtual machine running),
it takes a while to navigate the folder structure, and sometimes not
all the folders show, though they do in Finder. So I wonder if this is
an issue with programs not getting along with ZFS while Finder is fine
with it.

Other things to note: I did disable Spotlight on the drive to make
sure that isn't running, but I do have Quicksilver. Originally, I had
Quicksilver indexing the drive, but the computer was practically
unusable when it did that, so I disabled it.

I look forward to any advice you guys may have.

Thanks,

James

Jason Belec

May 20, 2014, 10:24:15 AM5/20/14
to zfs-...@googlegroups.com
OK, one thing, any indexing under that version of ZFS is going to kill performance. Long standing issue.

No backups? Did you bump your noggin? With your current setup you have improved your chances, provided you're scrubbing regularly and only ever lose one drive at a time. Adding a backup will drastically increase your chances.

Not understanding ZFS is a BIG reason to stop and re-evaluate your priorities. It's amazing tech IF used properly.

For what it sounds like you want from ZFS, you should use mirrors. You can do 2 mirrors of 2 drives each, striped under ZFS. This will increase the safety of your data. Even that should have a backup drive you move key files, or better yet 'snapshots', onto.

BUT you are going to have to understand ZFS to have any hope of not drowning in a pool of tears at some point.

The new ZFS is under development but far more functional, eliminating many of the old version's issues listed numerous times throughout the forum. Either way, you should ALWAYS understand the tech you rely on. Period.

Please start by learning the word 'scrub', then the word 'snapshot', then how to swap a failed drive, and practice it all before committing your valuable data. Drives fail. Repeat: drives fail. Data must be restored at some point. ZFS is magical if you have planned ahead. I have recovered data assumed totally lost; YMMV.

As for those drives: are they 4K? If so, you formatted your pool incorrectly. I don't have any of those, so I don't have notes; it should be a simple Google search to find out, and the wiki has the instructions for 4K drive setup.

Doing things right is what the wiki tries to help people with. The forum lets you search other people's heartbreak to help prevent your own. The wizards tracking this stuff have done a wonderful job.

Hope this gets you rolling. I'd still check your cables as well. Normally I attach a drive, build a pool, test a lot, destroy the pool, add another drive, and repeat. Better safe than sorry; manufacturers are not safeguarding your data.

Jason
Sent from my iPhone 5S

James Hoyt

May 20, 2014, 12:09:04 PM5/20/14
to zfs-...@googlegroups.com
You have completely lost me at this point. You were rather
condescending and not helpful. I was hoping for instructions on how to
clean and scrub and saw none of that. At least point me to some proper
links. I also don't know what a 4k drive is.

I carefully followed and read ALL the instructions and FAQ and Getting
Started guide on maczfs.org. Please don't speak to me like I didn't do
my research or follow the proper instructions.

- James

Jason Belec

May 20, 2014, 12:51:48 PM5/20/14
to zfs-...@googlegroups.com
Sorry you feel that way. We have had a lot of people in your situation. You seem to have skipped over the basics.

zpool scrub murr

zpool status murr


These commands are on every ZFS site. You're openly stating you don't know them and refuse to look them up. I wish you the best.


Jason
Sent from my iPhone 5S

James Hoyt

May 20, 2014, 12:59:40 PM5/20/14
to zfs-...@googlegroups.com
I did run status, as you can see from my original post; I didn't know
scrub and clean. I did my research only on MacZFS because I thought
that's the only place it mattered. I didn't trust info on other sites
because I didn't think it was relevant to how MacZFS operated.

Please show me where I could have found the scrub command on
maczfs.org because it is not there. I see nothing about clean either.

I'm openly stating I don't know it and it's not stated on the wiki or
FAQ or getting started section on maczfs.org. There is no refusal
going on.


Daniel Becker

May 20, 2014, 1:52:06 PM5/20/14
to zfs-...@googlegroups.com
James,

Perhaps the takeaway here is that MacZFS (and arguably ZFS in general) is really not a great fit for the casual user. ZFS is very powerful once you take the time to really get familiar with it, but it does require a fair amount of research to get started, and it gives you lots of ways to shoot yourself in the foot. And as you found out yourself, there are a fair number of caveats and behavioral oddities when running ZFS on a Mac. If you want something that "just works" without digging into the details and that gives you behavior just as you would expect it from other file systems, it's probably not for you (at least not for anything other than experimentation).

I know that the MacZFS page likes to give a somewhat different impression, but in my opinion encouraging non-technical users to install it is really doing a disservice both to said users and to the community as a whole.

Daniel

Bjoern Kahl

May 20, 2014, 2:08:16 PM5/20/14
to zfs-...@googlegroups.com


Hi James,

pretty much all relevant things have already been said, so I can make
this short (well, that didn't work out).

MacZFS (stable version) comes with man pages. A simple "man zpool"
should give you access to all the ZFS pool maintenance commands.

I virtually fell off my chair when I read your two statements:
"I thought I don't need backups" and "what is scrub? Does it do a data
wipe?"

As Jason said, ZFS is a wonderful piece of technology, but it is not
the kind of software one should use by just following some
step-by-step guide. It will sooner or later bite you. We tried to
make it safe and we tried to make it Mac-friendly, but ZFS is
ultimately designed for big data centers, and no interface magic can
really hide that fact.


Nevertheless, to answer your questions:


Scrub reads all data on a pool and verifies the checksums ZFS
maintains for each chunk of data stored in a pool. Jason gave you
the commands in his other post.

If (big if) you have redundancy in your pool, that is, a mirror or a
raidz, then and only then can it repair damaged data in the background.

It does so either by getting a good copy from the other side(s) of
the mirror, or by combinatorial calculations from the raidz parity
stripes.

In a raidzX you can lose X drives without immediate data loss; in an
N-way mirror you can lose (N-1) drives without immediate data loss.

Note! The keyword here is *immediate* data loss. If you buy 3 drives
in a batch and put them in a pool (mirror or raidz), then those
drives will experience a similar workload under similar conditions,
which significantly increases the likelihood that they fail around
the same time.

Which means in a raidz1, you have a significant chance that a second
drive will fail while you are in the process of replacing the first
failed drive. The moment a second drive fails, your data is gone.

That is why you need backups.

I have personally seen this happen more than once, and have switched
to always pairing drives from different manufacturers and suppliers
into mirror pairs. I say "and suppliers" so that both drives don't
experience the same shuffles and drops to the ground during
transportation.

And you need regular(!) scrubs, to find out that a drive is getting
weak before it fails completely, so you can replace it in time.

And one more word on replacing drives:

Once you have a drive failure, chances are you are in panic mode or at
least in a hurry to fix things, which means prone to make mistakes.
We are all just humans and do make mistakes. So you should exercise a
drive replacement in advance. Replacing a random drive on a redundant
pool using "zpool replace pool drive1 drive2" is supposed to be a safe
operation, so you can simply try it out. The tricky part is how to
hookup the drives and identify the right drive, not the actual
replacement.

Using "zpool replace" instead of the sometimes-suggested "zpool
attach" / "zpool detach" saves you from the all too common mistake of
saying "zpool add" instead of "zpool attach", a mistake that would
screw up your pool layout and can only be fixed by destroying and
recreating the pool.
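One low-risk way to do that exercise: ZFS can build pools on plain files, so the whole replace workflow can be rehearsed without touching a real disk. A sketch, not something to paste blindly (the file names are made up for the drill; mkfile is the OS X tool, truncate elsewhere; it needs root and a working ZFS install):

```shell
# Practice pool on file-backed vdevs -- nothing here touches a real disk.
mkfile 128m /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 /tmp/spare
sudo zpool create practice raidz /tmp/vdev1 /tmp/vdev2 /tmp/vdev3
sudo zpool replace practice /tmp/vdev2 /tmp/spare   # the drill: swap out a "failed" vdev
sudo zpool status practice                          # watch the resilver complete
sudo zpool destroy practice
rm /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 /tmp/spare
```

Run through it a few times until the replace step is muscle memory; then a real failure is just the same commands with real device names.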


Regarding the slowness:

Using 4k drives in a pool configured for 512b drives (the standard
type since hard drives were invented) will kill performance.

Using 512b drives in a pool configured for 4k drives does no harm,
except wasting a bit of space if you have many small files.

So I suggest destroying and recreating the pool if your drives are 4k
(also called "advanced format"). To configure a pool for 4k, you add
"-o ashift=12" to the "zpool create" command. "zpool get all" should
tell you the current ashift value, which is 9 for 512b and 12 for 4k.

:-) Exercise for the reader: which ashift value should you use for
old-style 16k flash memory? (Not that it would last long, but that's
not the point here.)


Regarding slow, long directories:

Another issue our colleagues working on the new MacZFS found out:
the Mac OSX kernel has a problem with caching really long directories,
because it can run out of certain internal file resources (the famous
vnodes). This hits ZFS especially hard due to the way it handles its
own short-term locking and caching.


Best regards

Björn


--
| Bjoern Kahl +++ Siegburg +++ Germany |
| "googlelogin@-my-domain-" +++ www.bjoern-kahl.de |
| Languages: German, English, Ancient Latin (a bit :-)) |

James Hoyt

May 20, 2014, 2:28:45 PM5/20/14
to zfs-...@googlegroups.com
Hi Björn, thanks for your reply, and thanks for your help in all this,
Jason. I've actually been in the IT industry for 12 years, am A+
certified, and am currently pursuing my CCNA and MCSA, so a technical
setup didn't intimidate me (granted, servers are a new beast to me). I
came across MacZFS while researching RAID options. As I'm getting new
music in WAV/FLAC daily from a number of sources, a manual backup
system really wouldn't work for me, as it's too hard to keep up. I
tried once with a Blu-ray writer and it was a nightmare... plus I'm
regularly categorizing music with MediaMonkey (the reason I have a
virtual machine, since it's Windows-only... oh, why won't iTunes
support FLAC natively!), so all the tracks are updated now and then.
So I thought an offsite backup that's updated every few months, along
with a four-drive RAID setup with one drive for redundancy, would be
all that I would need.

MacZFS.org is well put together, and the tutorials aren't intimidating
at all. Run a few terminal commands? I can do that. The depth of ZFS
wasn't really covered, nor did it really state that more research is
needed (like whether I need a 4k setup... or whatever that is D: ). So
it's rather frustrating to move all my data off my individual 2/1.5 TB
drives only to find out I did it wrong, when I was careful, very
careful, to follow the Getting Started guide and FAQ precisely.

Bleh D:

Jason, I apologize for coming off rough. I felt like I was being
treated like a lazy moron, which I'm not. I've researched a variety of
RAID solutions quite a bit and thought I was all set up for ZFS and
just had to change some configuration to speed it up. I'm sure you
weren't born with this knowledge and needed the help of others to
guide you in the right direction. I was googling things like "slow
zpool ZFS" and other similar terms but just couldn't find anything
concrete (because my problem isn't concrete).

I tried searching whether my drives are 4k, with no luck. I saw an
article back from 2010 stating that hard drives were all planned to be
4k by 2011... which leads me to believe that they are 4k, since I
purchased them new last year. Crap D: Is there a sure way to check
whether they are 4k? Could this be my performance issue, or is it just
because my directories hold large numbers of folders/files?

Again, sorry if I came off rude. This is all new technology to me and
I'm doing my best to become familiar with it.

Thank you,

James

Daniel Becker

May 20, 2014, 2:44:08 PM5/20/14
to zfs-...@googlegroups.com
Do "ioreg -lw0", then search the output for your drive’s model name (as reported in System Profiler). There should be an instance of AppleAHCIDiskDriver with "Logical Block Size" and "Physical Block Size" properties. You will need the manual ashift override if the logical block size is 512 but the physical one is 4096 (a.k.a. "advanced format" / 512e drive).

Jason Belec

May 20, 2014, 2:44:15 PM5/20/14
to zfs-...@googlegroups.com
We do understand; we just go through this with people over and over. Humans tend to ask without reading. ;) Yes, we were all in 'that' boat at one time, so our warnings are worth considering.

I highly recommend ZFS for what you want; however, some time should be taken. Just like adding large storage at a big company like Amazon: test, test again, abuse, test, test.....

I personally have 96 TB of storage in the mad-science lab, and with clients combined, probably close to triple that. ZFS has saved some clients a lot of stress and money where other tech has left them crying.

We want you to be successful with ZFS.

Jason
Sent from my iPhone 5S

Dave Cottlehuber

May 20, 2014, 2:53:06 PM5/20/14
to zfs-...@googlegroups.com
 
From: James Hoyt <djna...@gmail.com>
Reply: zfs-...@googlegroups.com
Date: 20 May 2014 at 15:37:51
To: zfs-...@googlegroups.com
Subject: Re: [zfs-macos] RAIDZ1 running slow =(

> Thanks for the detailed reply.
>
> The slow performance is only when I'm using the RAID array so I assume

It would be interesting to try a raw dd to /dev/null concurrently from
each drive while the zpool is not imported/mounted, and see if the
performance is still sucky. Many of us have found that sub-standard
components are the cause of this (e.g. a while back I was getting poor
performance, and the drive failed completely soon afterwards).

> without it connected means I can't use it means there is no slow
> performance. I would love instructions on how to scrub/clean the pool.
> Does it do a data wipe?

Scrubbing is non-destructive, but it is IO-intensive; I typically see
IO near the wire speed of the drive. Assuming you currently have some
bottleneck, this may be painful for your system.

Scrub via:

zpool scrub <pool>

It may take a wee while to ramp up to max speed. You can check status via:

zpool status 5

And cancel via:

zpool scrub -s <pool>

A very useful thing is to have a bootable FreeBSD (or smartos/illumos if
you are ok in solaris world) to do faster scrubs from. I can personally
recommend mfsBSD http://mfsbsd.vx.sk/ which is a memory-resident FreeBSD.
Boot from that, `zpool import -f ` and scrub away.

> I was trying to think of a good backup solution. I have over 3 TBs of
> music in FLAC (lots of which I've paid for) and was hoping RAIDZ would
> take away the need for backups. I was thinking of buying a 4 TB drive
> and moving all my data on that and storing the drive offsite or
> something (in case of burglary, fires, etc). Having a single drive
> fail safe seems secure enough for me so I don't think incremental
> backups are needed.
>
> As for running the latest beta ZFS, I didn't because the FAQ warned me
> not to. What are the differences? Would I have to format and rebuild
> the array?



I’ve been using the beta since before it was alpha. It sometimes has
trouble shutting down, but then I do that rarely; other than that, I
think the performance is better, and the functionality is the same as
other ZFS implementations/ports. A power cycle resolves the hung
reboot, and as it’s ZFS I am very sure my data is safe.

I do have a rather robust backup environment, but using `zfs send …`
to a 2nd Mac and to a remote FreeBSD server is a very nice addition.
WRT rebuilding the array: personally, I would do it.
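To sketch what that send-based backup looks like (the dataset, snapshot, and host names here are hypothetical, not from this thread):

```shell
# First full replication of a dataset to another machine:
zfs snapshot murr/music@2014-05-20
zfs send murr/music@2014-05-20 | ssh backuphost zfs receive tank/music-backup

# Thereafter, only the changes since the last common snapshot travel:
zfs snapshot murr/music@2014-05-21
zfs send -i @2014-05-20 murr/music@2014-05-21 | ssh backuphost zfs receive tank/music-backup
```

Because incrementals only ship changed blocks, this keeps up with a daily-growing music collection far better than manual copies or optical media.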

> The drives I have are four 3 TB Hitachi HDS723030BLE640.

I can’t tell if these are actually 4K-sector drives under the hood, but
http://en.wikipedia.org/wiki/Advanced_Format gives you some idea of the
issue. Basically, if you can, create your pool with 4k alignment by
default even if your drives don’t support it today. It is not possible
to change it after the fact (although I have a sneaking suspicion there
are a few dark-art tricks to help with this if you have drives to
spare). I found this made a noticeable difference after I switched from
zfsosx with 512B blocks to zfsosx with 4K alignment. It is easy to do
this if you can duplicate your data.

> I started navigating around my computer again, and the slowdown seems
> to be when going into folders with over 1000 files (for anything more
> it will take 1-3 minutes to just list the files in the directory).
> Also when I'm saving images from Firefox (no virtual machine running)
> it takes awhile to navigate the folder structure and sometimes not all
> the folders show, but they do in the Finder. So I wonder if this is an
> issue with programs not getting along with ZFS but the finder being
> fine with it.

I use a specific format for Finder-friendliness (note that each
property needs its own -o):

    zfs create -o normalization=formD -o atime=off <name>

Which also inherits the settings I have on the higher dataset of

    compression=lz4 
    checksum=fletcher4

More Finder notes & tricks here: https://gist.github.com/dch/3333118,
though a few of the points are out of date wrt zfs-osx, particularly
the section about sending snapshots.

> Other things to note, I did disable Spotlight on the drive to make
> sure that isn't running, but I do have QuickSilver. Originally, I had
> QuickSilver indexing the drive, but the computer was practically
> unusable when it did that so I disabled that.

It could be that mds is still indexing; it’s a PITA to disable. My
fix_finder script above does that for each dataset on the ZFS drive.
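For reference, the per-volume switch is mdutil; a sketch, assuming the pool is mounted at /Volumes/murr as elsewhere in this thread:

```shell
# Turn off Spotlight indexing for the pool's volume (needs root):
sudo mdutil -i off /Volumes/murr
sudo mdutil -E /Volumes/murr    # erase any index already built
mdutil -s /Volumes/murr         # report status; should say indexing disabled
```

With indexing off and the stale index erased, mds should stop touching the pool entirely.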

Busty

May 20, 2014, 3:07:32 PM5/20/14
to zfs-...@googlegroups.com
James,

I use my 15 TB pool mainly for FLAC files too, so I thought I'd throw
in my two cents (even if some of it is not ZFS-related):

Regarding iTunes recognizing FLAC: there is a QuickTime component that
enables FLAC in QuickTime; IIRC it also works in iTunes, at least you
can get it to. But it will not play gapless; there is a stretch of
silence between songs.

Another option is called "TwistedFlac", which shows all the FLAC files
in a folder you specify as WAVE files. These can be imported into
iTunes; the downside is that the tags are not recognized.

Just in case that helps with your library. I use Songbird, which can
do about anything you want, but is not as stable as iTunes.

Regarding your files showing up very slowly: I experience that when I
access the files on my pool from a remote machine, which has to do
with AFP (Apple Filing Protocol), so I have set up an NFS share. But
you don't write about accessing the files from a remote machine, so
this should not be your issue.

I kinda went the way you did. I had no knowledge of ZFS but really
wanted the features for data safety. That was roughly 3-4 years ago.
As I set up my pool (and my backup, by the way), I came across all
kinds of problems (drives vanishing, kernel panics, slow file
browsing, scripts to automate backups and scrubs, you name it) which
had to be solved, so I had a lot of reading and googling to do. I
kinda was fooled by the MacZFS tutorial into thinking this would be
completely easy, like you describe.

These guys, in the front row Jason, Alex Blewitt, and Bjoern, helped
me a lot to get on my way (so thanks again, guys).

Sebastian

James Hoyt

May 20, 2014, 3:32:04 PM5/20/14
to zfs-...@googlegroups.com
So it sounds like I need to recreate my zpool ...

"Logical Block Size" = 512
"Physical Block Size" = 4096

So I should use the following command on my next zpool to help Finder
performance and make it compatible with 4k drives?

zfs create -o normalization=formD atime=off murr ashift=12
(let me know if I have any errors in this)

As for the slowness in a VM, Mac file sharing would affect it because
Windows 8 accesses the drives in Fusion by mounting
\\jamess-imac\Volumes\murr as the Z: drive, so it technically is a file
share, if that's what you mean. But it could also be that the slowness
of not using a 4k-compatible zpool is compounded by the virtual
machine. (Could someone update the getting started guide to have you
create a 4k zpool by default?)

Thanks for the advice on Songbird! I may try it if it can organize via
masks and support custom ID3 fields. I saw it's discontinued, though
it's still on SourceForge.

I'm at work so can't give a better reply but I have a lot more to look
into and read now =)

- James

Bjoern Kahl

unread,
May 20, 2014, 5:21:49 PM5/20/14
to zfs-...@googlegroups.com, djna...@gmail.com

Hi James,

On 20.05.14 21:32, James Hoyt wrote:
> So it sounds like I need to recreate my zpool ...
>
> "Logical Block Size" = 512 "Physical Block Size" = 4096
>
> So I should use the following command on my next zpool to help
> finder performance and make it compatible for 4k drives?
>
> zfs create -o normalization=formD atime=off murr ashift=12 (let me
> know if I have any errors in this)

Almost.

As I said in my other mail an hour ago, "normalization" doesn't exist
in the stable MacZFS version. Also, each option needs its own "-o",
and ashift is a pool option, not a file system option.

You do "zpool create -o ashift=12 -O atime=off murr _devices_ ..."

Note the capital "-O" and the small letter "-o".

And for subsequent file systems (datasets in ZFS language) you use

"zfs create -o atime=off _pool_name/fs_name_"

If you used the development version, then you would add a
"-O normalization=formD" in the zpool command and a
"-o normalization=formD" in the zfs command.
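Spelled out, the whole sequence on the development version might look
like this sketch (pool name, dataset name and device names are
placeholders; on stable MacZFS, drop the normalization options as
noted above):

```shell
# Pool-wide options take a small -o; file-system properties set at
# pool creation take a capital -O and are inherited by child datasets.
sudo zpool create -o ashift=12 \
     -O atime=off -O normalization=formD \
     murr raidz disk1s2 disk2s2 disk3s2 disk4s2

# A later dataset (file system) only needs the small -o form:
sudo zfs create -o atime=off -o normalization=formD murr/music
```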


Best regards

Björn
- --
| Bjoern Kahl +++ Siegburg +++ Germany |
| "googlelogin@-my-domain-" +++ www.bjoern-kahl.de |
| Languages: German, English, Ancient Latin (a bit :-)) |

James Hoyt

unread,
May 20, 2014, 5:31:15 PM5/20/14
to zfs-...@googlegroups.com
Oh yeah I'm sorry. I meant I'd upgrade to the latest dev version of MacZFS ^^

I think I'm confused between zfs create vs zpool create. If you want
to point me to any links, I'll read up on them, along with what
exactly normalization is.

Thanks,

James

Bjoern Kahl

unread,
May 20, 2014, 6:29:26 PM5/20/14
to zfs-...@googlegroups.com

Hi James,

On 20.05.14 23:31, James Hoyt wrote:
> Oh yeah I'm sorry. I meant I'd upgrade to the latest dev version of
> MacZFS ^^
>
> I think I'm confused between zfs create vs zpool create. If you
> want to point me to any links, I'll read them up along with what
> exactly Normalization is.

A quite informative (but technical) article regarding RaidZ
performance is this 2008 blog post:

https://blogs.oracle.com/roch/entry/when_to_and_not_to

Regarding zpool vs. zfs, I'd suggest any ZFS administration handbook
you like. A classic one is the "Solaris ZFS Administration Guide",
available online from Oracle at

http://docs.oracle.com/cd/E19253-01/819-5461/

I'd suggest chapter 3, then article
http://docs.oracle.com/cd/E19253-01/819-5461/gaypk/index.html,
then chapter 2 and finally chapter 1, in that particular order.

Unfortunately, it is only an HTML version. It used to be available as
a PDF in the past, but apparently Oracle stopped that practice. Google
is your friend anyway.

There are also some quite good talks on ZFS basics on YouTube and some
nice slide sets out there, but I don't have any administration-related
ones at hand; I have more of the technical stuff aimed at ZFS
development ready.


Best regards

Björn



- --
+----------------------------------------------------------------------+
| Björn Kahl +++ Lambertstrasse 2 +++ 53721 Siegburg |
| Tel.: (ISDN) 02241 1462182 +++ Web: http://www.bjoern-kahl.de |
+----------------------------------------------------------------------+
Redistribution and/or commercial use of my address/phone number is prohibited.


Philip Robar

unread,
May 20, 2014, 6:30:34 PM5/20/14
to zfs-...@googlegroups.com
On Tue, May 20, 2014 at 1:28 PM, James Hoyt <djna...@gmail.com> wrote:

I tried searching whether my drives are 4k, with no luck. I saw an
article back from 2010 stating that hard drives were all planned to be
4k in 2011... this leads me to believe that they are 4k, since I
purchased them new last year. Crap D: Is there a for-sure way I can
check if they are 4k? Could this be my performance issue, or is it
just because my directories have large amounts of folders/files in them?

A search on "HDS723030BLE640 4k" shows that these are 4k drives with emulated 512 sectors. I didn't even need to follow any links, just skimmed through the Google results:

"Hi, I bought 2x 3TB model HDS723030BLE640. ... A6 & E6 are both SATA 6GB/s, but A6 is block size 512 native mode while B6 is 512 emulation with 4K block"

The poster gets which letters he's talking about a little confused, but it's clear that the HDS723030A... are 512 sector drives and the HDS723030B... are 4K. (https://www.facebook.com/HGSTStorage/posts/225105534286344)
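If you want to double-check on the machine itself rather than via
Google, the sector sizes can usually be read out locally; a sketch
(exact key names and output formatting vary by OS version and drive):

```shell
# Physical vs. logical sector size as reported by the I/O Registry;
# a 512-emulation drive shows Logical 512 / Physical 4096.
ioreg -r -c IOBlockStorageDevice | grep -i "Block Size"

# diskutil reports the (logical) device block size per disk:
diskutil info disk1 | grep -i "Block Size"
```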


Phil

Jason Belec

unread,
May 20, 2014, 8:24:15 PM5/20/14
to zfs-...@googlegroups.com
If you're upgrading....

Try something like this
zpool create -f -O compression=lz4 -O casesensitivity=insensitive -O normalization=formD -O atime=off -o ashift=12 deathstar raidz disk1 disk2 disk3 disk4

Do a test on speed. And let us know if you see improvement.

Jason
Sent from my iPhone 5S


Bjoern Kahl

unread,
May 23, 2014, 5:06:09 PM5/23/14
to zfs-...@googlegroups.com

Just a short note (really):

On 20.05.14 20:53, Dave Cottlehuber wrote:
>
> From: James Hoyt djna...@gmail.com(mailto:djna...@gmail.com)
> Reply: zfs-...@googlegroups.com
> zfs-...@googlegroups.com(mailto:zfs-...@googlegroups.com) Date:
> 20. Mai 2014 at 15:37:51 To: zfs-...@googlegroups.com
> zfs-...@googlegroups.com(mailto:zfs-...@googlegroups.com)
> Subject: Re: [zfs-macos] RAIDZ1 running slow =(
>
>> Thanks for the detailed reply.
>>
>> The slow performance is only when I'm using the RAID array so I
>> assume

[ ... ]

>> I started navigating around my computer again, and the slowdown
>> seems to happen when going into folders with over 1000 files (for
>> anything more, it will take 1-3 minutes just to list the files in
>> the directory). Also, when I'm saving images from Firefox (no
>> virtual machine running), it takes a while to navigate the folder
>> structure, and sometimes not all the folders show, but they do in
>> the Finder. So I wonder if this is an issue with programs not
>> getting along with ZFS but the Finder being fine with it.
>
> I use a specific format for Finder-friendliness:
>
> zfs create -o normalization=formD atime=off <name>

atime=off is a very good idea, and can be set and unset at any time.

The normalization flag is not available in the stable MacZFS, it is
only in the new betas. It defaults to "binary" in the stable MacZFS,
which is not a valid setting in newer versions. It basically means
"Don't touch the encoding, store it as-is".
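For example, toggling atime on the pool from this thread needs no
recreation (pool name as used earlier):

```shell
zfs get atime murr          # show the current value and its source
sudo zfs set atime=off murr # reversible at any time
```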


Best regards

Björn


James Hoyt

unread,
Jun 22, 2014, 2:12:47 PM6/22/14
to zfs-...@googlegroups.com
Hi guys! What a journey. So with school and work, I finally was able to get everything pulled off my ZFS pool and am ready to rebuild it.

Except I can’t find the developer version…

I’m at downloads.maczfs.org and checked current and all downloads and the last release seems to be MacZFS-74.3.3.pkg, which I have installed:
collect-maczfs-state.sh v maczfs_74-3-3-68-ga26cd63


Determining system version
# uname -a
Darwin Jamess-iMac.local 13.2.0 Darwin Kernel Version 13.2.0: Thu Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64

Looking for ZFS packages
# -v pkgs -sl Found %d packages pkgutil --pkgs | grep -e zfs -e ZFS -e ZEVO -e zevo
com.getgreenbytes.community.zfs.pkg
com.getgreenbytes.community.ZFSDriver.pkg
com.getgreenbytes.community.ZFSFilesystem.pkg
org.maczfs.zfs.106.pkg
Found 4 packages

So where is the developer version? Everything else is marked as deprecated. Please help D: I want to use the latest build, experimental or not.

I’m going to use this line once I get the new version installed:
zpool create -f -O compression=lz4 -O casesensitivity=insensitive -O normalization=formD -O atime=off -o ashift=12 deathstar raidz disk1 disk2 disk3 disk5

Do I need to delete the pool first or format the drives in any way special before doing this?

Thanks,

James

Reply all
Reply to author
Forward
0 new messages