-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hi James,
pretty much all relevant things have already been said, so I can keep
this short (well, that didn't work out).
MacZFS (stable version) comes with man pages. A simple "man zpool"
should give you access to all the ZFS pool maintenance commands.
I virtually fell off my chair when I read your two statements
"I thought I need not backups" and "what is scrub, Does it do a data
wipe?"
As Jason said, ZFS is a wonderful piece of technology, but it is not
the kind of software one should use by just following some
step-by-step guides. It will sooner or later bite you. We tried to
make it safe and we tried to make it Mac friendly, but ZFS is
ultimately designed for big data centers, and no interface magic can
really hide that fact.
Nevertheless, to answer your questions:
A scrub reads all data in a pool and verifies the checksums ZFS
maintains for each chunk of data stored in the pool. Jason gave you
the commands in his other post.
If (big if) you have redundancy in your pool, that is a mirror or a
raidz, then and only then can it repair damaged data in the background.
It does so by either getting a good copy from the other side(s) of
the mirror, or by reconstructing the data from the raidz parity
stripes.
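For reference, starting and monitoring a scrub looks like this (the
pool name "tank" below is just a placeholder, substitute your own):

```shell
# Start a scrub; it runs in the background:
zpool scrub tank

# Check progress and see repaired or unrecoverable errors:
zpool status -v tank

# Abort a running scrub if you have to:
zpool scrub -s tank
```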
In a raidzX you can lose X drives without immediate data loss; in an
N-way mirror you can lose (N-1) drives without immediate data loss.
Note! The keyword here is *immediate* data loss. If you buy 3 drives
in a batch and put them in a pool (mirror or raidz), then these
drives will experience similar workloads under similar conditions,
which significantly increases the likelihood that they fail around
the same time.
Which means that in a raidz1 there is a significant chance that a
second drive will fail while you are in the process of replacing the
first failed drive. The moment a second drive fails, your data is gone.
That is why you need backups.
I have personally seen this happen more than once, and I switched to
always pairing drives from different manufacturers and suppliers in
my mirror pairs. I say "and suppliers" so that both drives have not
experienced the same shakes and drops during transport.
And you need regular(!) scrubs to find out that a drive is getting
weak before it fails completely, so you can replace it in time.
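A minimal sketch of how to automate that with cron (the schedule, the
zpool path, and the pool name "tank" are all assumptions to adapt; on
Mac OSX a launchd job would be the more native choice):

```shell
# crontab entry (add via "crontab -e"):
# scrub "tank" on the 1st of every month at 03:00
0 3 1 * * /usr/sbin/zpool scrub tank
```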
And one more word on replacing drives:
Once you have a drive failure, chances are you are in panic mode, or
at least in a hurry to fix things, which makes you prone to mistakes.
We are all just human and do make mistakes. So you should practice a
drive replacement in advance. Replacing a random drive in a redundant
pool using "zpool replace pool drive1 drive2" is supposed to be a safe
operation, so you can simply try it out. The tricky part is how to
hook up the drives and identify the right drive, not the actual
replacement.
Using "zpool replace" instead of the sometimes suggested "zpool
attach" / "zpool detach" saves you from the all too common mistake of
typing "zpool add" instead of "zpool attach", a mistake that would
screw up your pool layout and that can only be fixed by destroying
and recreating the pool.
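To make the exercise concrete, a practice run might look like this
(the pool name "tank" and the disk names "drive1"/"drive2" are
placeholders; take the real device names from "zpool status"):

```shell
# See which disks make up the pool:
zpool status tank

# Swap the old disk for the new one; the pool resilvers in the
# background and stays online the whole time:
zpool replace tank drive1 drive2

# Wait for the resilver to finish before doing anything else:
zpool status tank
```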
Regarding the slowness:
Using 4k drives in a pool configured for 512b drives (the standard
type since hard drives were invented) will kill performance.
Using 512b drives in a pool configured for 4k drives does no harm,
except wasting a bit of space if you have many small files.
So I suggest destroying and recreating the pool if your drives are 4k
(also called "Advanced Format"). To configure a pool for 4k, you add
"-o ashift=12" to the "zpool create" command. "zpool get all" should
tell you the current ashift value, which is 9 for 512b and 12 for 4k
:-) Exercise for the reader: Which ashift value to use for old style
16k flash memory? (Not that it would last long, but that's not the
point here.)
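The ashift value is simply the base-2 logarithm of the sector size in
bytes, which a few lines of shell can work out (the "log2" helper
below is my own sketch, not a zpool feature):

```shell
# log2: integer base-2 logarithm of a power of two, e.g. log2 4096 -> 12
log2() {
  n=$1 a=0
  while [ "$n" -gt 1 ]; do n=$((n / 2)); a=$((a + 1)); done
  echo "$a"
}

log2 512      # prints 9  -> classic 512-byte sectors
log2 4096     # prints 12 -> 4k "Advanced Format", i.e. -o ashift=12
# For the exercise: run "log2 16384" for the old 16k flash case.
```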
Regarding slow, long directories:
Another issue our colleagues working on the new MacZFS found out:
The Mac OSX kernel has a problem with caching really long directories,
because it can run out of some internal file resources (the famous
vnodes). This hits ZFS especially hard due to the way it handles its
own short-term locking and caching.
Best regards
Björn
On 20.05.14 18:09, James Hoyt wrote:
- --
| Bjoern Kahl +++ Siegburg +++ Germany |
| "googlelogin@-my-domain-" +++ www.bjoern-kahl.de |
| Languages: German, English, Ancient Latin (a bit :-)) |
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
iQCVAgUBU3uaCVsDv2ib9OLFAQI9qwP9G8qRYdQD1w8q8nXCGKW23M9Ko8LjQq4n
N94yqQjzj7WbFYv6m1UMHl71EJkGuscyzKDzlOOqn3J5/hPsU2N12h0aN60qEgYJ
jJIIm+D5ujA+OqcnS2ChUYVSMgNyG19rd72zo+n5g/PXF/B2N+OVvZRNbs3d30Qa
D61uDWwvY5c=
=JP1Y
-----END PGP SIGNATURE-----