I seem to be experiencing a strange but consistent behaviour of ZFS on
my system. I can create a zpool and a zfs filesystem without any problems:
# zpool create zfs raidz2 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# zfs set dedup=on zfs
# zfs create zfs/test
Then I shut down, remove one of the disks, and reboot. ZFS fails to
reassemble the redundant stripe.
# zpool list
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH   ALTROOT
zfs       -      -     -    -      -  FAULTED  -
# zpool status zfs
  pool: zfs
 state: UNAVAIL
status: One or more devices could not be used because the label is
        missing or invalid. There are insufficient replicas for the
        pool to continue functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfs         UNAVAIL      0     0     0  insufficient replicas
          raidz2-0  UNAVAIL      0     0     0  insufficient replicas
            sda1    FAULTED      0     0     0  corrupted data
            sdb1    FAULTED      0     0     0  corrupted data
            sdc1    FAULTED      0     0     0  corrupted data
            sdd1    FAULTED      0     0     0  corrupted data
            sde1    UNAVAIL      0     0     0
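
I haven't actually dumped the on-disk labels yet, but I assume something
like the following would show whether the ZFS labels on the surviving
partitions are still intact (the device name here is just an example):

# zdb -l /dev/sda1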
Apart from the one disk I deliberately pulled, none of the disks were
moved or removed, but for some reason the zpool doesn't get reassembled.
Am I missing a step somewhere? My suspicion is that because it was the
first disk that I removed, all the remaining disks' names shifted by one.
Is this normal? Shouldn't ZFS reassemble the pool correctly even if the
device names have changed? Is there a recommended way around this?
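For what it's worth, my best guess at a workaround (untested, and
assuming this distro creates /dev/disk/by-id links for these partitions)
would be to re-import the pool using persistent device paths instead of
the sdX names, something like:

# zpool export zfs
# zpool import -d /dev/disk/by-id zfs

Does that sound like the right approach, or is there a cleaner way?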
TIA.
Gordan