
notes on converting a FreeBSD 10 zpool from 512 to 4K


Winston

Jul 8, 2014, 8:38:56 PM
FWIW...

I wrote this article for anyone who, after upgrading to FreeBSD 10
on a system whose ZFS pool was created with 512B blocks, sees zpool
status report "block size: 512B configured, 4096B native" and is looking
to fix it. Below are things I wish I'd understood or known before I started.
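
If you're not sure whether your disks are really 4K underneath, a couple
of quick checks may help (ada0 is a placeholder device name; I believe
diskinfo's stripesize field reflects the physical sector size):

    diskinfo -v /dev/ada0 | grep -E 'sectorsize|stripesize'
    zpool status -v     # look for "512B configured, 4096B native"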


Summary:
The warnings and error messages are sometimes misleading/wrong. The
upgrade plan that worked for me is: create a new zpool and use
zfs send -R oldpool@now | zfs recv -duvF newpool to transfer the files
and snapshots. Neither a 4KB/sector disk nor gnop create -S 4096 is
required for this to work, and doing this appears to improve
performance.


What isn't the issue:
Misleading item #1:
zpool status says:

action: Replace affected devices with devices that support the
configured block size, or ...

If you believed that part, you might have hoped you'd be able to
zpool attach a 4K/sector provider to the pool, let it resilver, and
rotate through all the disks in the pool until they're all
converted. Sorry -- won't work. The zpool attach will fail even
for gnop providers that are 4KB/sector on 4K-aligned partitions.
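
For concreteness, this is roughly the rotation I'd hoped for (ada0p3 and
ada1p3 are placeholder partition names; this is a sketch, not a
transcript). The attach is where it fails on an ashift=9 pool:

    gnop create -S 4096 /dev/ada1p3          # exposes /dev/ada1p3.nop as 4K/sector
    zpool attach oldpool ada0p3 ada1p3.nop   # refused -- the pool's ashift is still 9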

Misleading item #2:
zpool attach oldpool oldpartition new-4k-provider refuses, with a
message something like "zfs: can't attach because of sector
alignment". (I don't recall the exact wording).

Despite these messages, "sector alignment" and "block size" are not the
issue. Only one thing appears to matter: the zpool's ashift, and I
did not find any way of setting or changing it for an existing zpool.

If "zdb -C" shows "ashift: 9" for your pool, zfs will not allow you to
attach a 4K provider, so you won't be able to convert the pool via
detach / reconfigure / attach / resilver.
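
A quick way to check (pool name is whatever yours is called):

    zdb -C oldpool | grep ashift
    # ashift: 9   -> 512B pool; attaching a 4K provider will be refused
    # ashift: 12  -> 4K pool; nothing to do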


What worked for me:

Zpool status's second suggestion:

action: ... or migrate data to a properly configured pool.

is the one that works. The core plan (but not a detailed list of
steps) is:

boot from a memstick or LiveCD so you don't lose any data
zpool scrub oldpool # be cautious
zfs snapshot -r oldpool@mark
zpool create newpool ${on something}
# creates newpool with ashift=12 even if the disk is 512B/sector
# next, copy everything from oldpool to newpool
zfs send -R oldpool@mark | zfs recv -duvF newpool
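
Right after the zpool create, before spending hours on the copy, it's
worth confirming the new pool really did come up with ashift=12. (If it
didn't, newer FreeBSD releases have a vfs.zfs.min_auto_ashift sysctl
that makes new vdevs use at least ashift=12, though I don't know
offhand which releases have it.)

    zdb -C newpool | grep ashift    # want "ashift: 12" before running the send|recv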

Good news #1: NEITHER A 4KB/SECTOR DISK NOR GNOP IS REQUIRED.
You can use your 512B/sector disk's partitions directly, just as
you've been doing. Articles I've read on the 'Net say, after
conversion, you can even mix 512 and 4K drives. (I don't know
if using 512B drives has a performance penalty compared to using
a 4KB/sector provider, but see the next two points.)

Good news #2:
Copying via zfs send | zfs recv took about half the time that
resilvering with ashift=9 required.

Good news #3:
After conversion, I saw resilver time drop by over 75%, even
though the number of sectors in use increased by ~11%.
zpool scrub time dropped by over 50%.

Good news #4:
The send|recv above copies all of oldpool's snapshots to newpool.
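
If you want to double-check, something like this should list them all on
the new pool:

    zfs list -t snapshot -r newpool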

Bad news (but expected):
It uses more disk space. In directories with lots of tiny
files, I saw disk space usage expand by as much as 6x(!). Overall, I
saw disk space usage rise by ~11%, but your results will
probably vary depending on your mix of file sizes.

I changed the disk partitions to be 4K aligned. I don't know for sure
whether this is required, but I suspect it helps.
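
For what it's worth, a gpart invocation along these lines gives
4K-aligned partitions (device and label names are placeholders, not a
record of what I actually ran):

    gpart create -s gpt ada1
    gpart add -t freebsd-zfs -a 4k -l zfs1 ada1   # -a 4k forces 4K alignment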


Of course, if you have a root zpool, and especially if you don't plan to
rename newpool to oldpool and oldpool to something else, there's more
you'll need to do (a rough sketch follows the list), including:
* MAKE CERTAIN oldpool WON'T BE AUTO-IMPORTED AND MOUNTED OVER NEWPOOL
(change its pool name and mountpoint(s), clear bootfs, etc.);
* be careful about any mirrored swap partitions on your disks if
you change the partition boundaries;
* zpool set bootfs=${whatever} newpool (bootfs isn't copied by zfs
send|recv);
* update /boot/zfs/zpool.cache and /boot/loader.conf if needed;
and
* make sure you have up-to-date (gpt)zfsboot code on the disks, because
newpool will be a current-version zpool.
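
Here's a rough sketch of those follow-ups, assuming GPT partitioning and
a boot dataset named newpool/ROOT/default (adjust the names to your
layout; this is not a verbatim record of what I ran):

    # rename the old pool out of the way so it can't be auto-imported
    # over newpool (-N imports without mounting anything)
    zpool export oldpool
    zpool import -N oldpool oldpool-retired

    # bootfs isn't copied by zfs send|recv, so set it on the new pool
    zpool set bootfs=newpool/ROOT/default newpool

    # refresh the boot blocks so they can read a current-version zpool
    # (-i is the index of the freebsd-boot partition; repeat per disk)
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

    # in /boot/loader.conf, make sure the root still points somewhere real, e.g.
    #   zfs_load="YES"
    #   vfs.root.mountfrom="zfs:newpool/ROOT/default"
    # and once booted into the new system, regenerate the cache file if needed:
    #   zpool set cachefile=/boot/zfs/zpool.cache newpool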

HTH someone,
-WBE

hgc...@gmail.com

Aug 23, 2015, 12:11:33 PM
Thanks! It did help. I was a little too tired to have been working, typed 'add' instead of 'attach', and suffered as you've noted above.

I hope the zpool command maintainers consider deprecating 'add' in favor of 'extend'. Inside the namespace of the developers' heads, the nuance between 'add' and 'attach' is clear, but in common usage they're closer to synonyms: one 'adds' a drive to the box whether the intention is to lengthen a pool (extend) or to increase the mirror count (attach).
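
For anyone else half-asleep at the keyboard, the difference in effect is
roughly this (pool and device names are placeholders):

    zpool attach tank ada0p3 ada1p3   # mirrors ada1p3 onto the existing ada0p3
    zpool add tank ada1p3             # new top-level vdev; as I understand it,
                                      # not removable again on FreeBSD 10's ZFS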

The difference would allow those who are awake longer than they'd prefer to avoid a mistake requiring them to stay up even longer still.
