
Erase non-existent zpool


James

Oct 14, 2016, 4:40:36 AM
Hello,

zpool import is showing an "UNAVAIL" zpool. It's unavailable because it
does not exist. How can I erase the record of this zpool?


# zpool import
pool: spool
id: 3274473603325906163
state: UNAVAIL
status: One or more devices are unavailable.
action: The pool cannot be imported due to unavailable devices or data.
config:

        spool       UNAVAIL  insufficient replicas
          mirror-0  UNAVAIL  insufficient replicas
            c2t0d0  UNAVAIL  corrupted data
            c2t1d0  UNAVAIL  corrupted data

device details:

c2t0d0 UNAVAIL corrupted data
status: ZFS detected errors on this device.
The device has invalid label.

c2t1d0 UNAVAIL corrupted data
status: ZFS detected errors on this device.
The device has invalid label.



The devices listed are now used as the root pool:

# zpool status rpool
...
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0


Additional background: when the HDDs were new they were tested in
another machine as a second pool named "spool". They are now used in a
new build as the current root pool, "rpool". I would have expected
"zpool create ..." or "zpool attach ..." to overwrite all the data on
the HDDs, but somehow the memory of the old pool is still there.


# zpool status spool
cannot open 'spool': no such pool

# strings /etc/zfs/zpool.cache | grep pool
rpool
rpool
pool_guid




Doing "dd if=/dev/zero of=/dev/dsk/c1t0d0 count=XXX" and reinstalling
and/or zpool create would surely work but is it really necessary? If
so, suggestion please for the value of bs/count=XXX, how much of the HDD
drive needs zapping?
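
If it comes to that, I imagine the full zap would look something like
the sketch below, assuming the usual ZFS on-disk layout of four 256 KiB
label copies per vdev - two in the first 512 KiB and two in the last
512 KiB of the slice ZFS was given - so zeroing only the start of the
disk can leave the trailing copies intact. Destructive, obviously, and
the device name and sector count here are illustrative only:

# wipe the two front labels (first 512 KiB of the slice)
dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=256k count=2

# wipe the two trailing labels (last 512 KiB); take the slice size in
# 512-byte sectors from prtvtoc /dev/rdsk/c1t0d0s0 first
SECTORS=...   # illustrative: total sectors of c1t0d0s0 from prtvtoc
dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=512 seek=$(($SECTORS - 1024)) count=1024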



James.

cindy.sw...@gmail.com

Oct 14, 2016, 11:28:31 AM
Hi James,

I have used something like this in the past but it might depend on which Solaris release this is:

dd if=/dev/zero of=/dev/dsk/c1t0d0s0 count=100 bs=512k

Thanks, Cindy

James

Oct 14, 2016, 2:08:44 PM
On 14/10/2016 16:28, cindy.sw...@gmail.com wrote:

Hello Cindy,

> I have used something like this in the past but it might depend on which Solaris release this is:

11.3


> dd if=/dev/zero of=/dev/dsk/c1t0d0s0 count=100 bs=512k

Doing the reverse:

# dd if=/dev/dsk/c1t0d0 count=100 bs=512k | strings | grep pool

shows that "spool" is still on the disc, but not in what capacity - is
it part of the on-disk format or just sitting in a storage block? How
far does the zpool format extend?
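
A less blunt way to see what ZFS itself reads from the disc might be to
dump the labels directly with zdb - assuming zdb on 11.3 behaves the way
I expect, something like:

# print the four vdev labels on the device; each label's nvlist should
# show the pool name, pool_guid and the vdev tree it claims to belong to
zdb -l /dev/dsk/c1t0d0s0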

You've not offered an easy solution, which leads me to assume there is a
bug in ZFS and the only way out is to zero the HDDs and start again.



James.

cindy.sw...@gmail.com

Oct 14, 2016, 4:10:41 PM
I agree this is a long-standing issue, but I predict that we will see a user interface that resolves this sometime in the future.

An easier way to recover from this (when you don't have an active pool on the disks to be wiped) is to create a dummy pool and then remove it. I'm not sure I would want to try to wipe old pool info on an active pool (which is your case, and which I didn't notice at first). Most likely this happened because the disks were previously connected to another system.

More common is having an old pool hanging around on a different disk on the same system:

# zpool import
pool: rpool
id: 1254603689977242959
state: UNAVAIL
status: The pool is formatted using an incompatible version.
action: The pool cannot be imported. Access the pool on a system running newer
software, or recreate the pool from backup.
see: http://support.oracle.com/msg/ZFS-8000-A5
config:

        rpool       UNAVAIL   newer version
          c2t1d0    ONLINE

# zpool create -f dummy c2t1d0
# zpool status dummy
pool: dummy
state: ONLINE
scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        dummy       ONLINE       0     0     0
          c2t1d0    ONLINE       0     0     0

errors: No known data errors
# zpool import
no pools available to import
# zpool destroy dummy

Thanks, Cindy

James

Oct 18, 2016, 5:36:40 AM
On 14/10/2016 21:10, cindy.sw...@gmail.com wrote:

Hello Cindy,
Thank you for your comments and help.

> I agree this is a long-standing issue, but I predict that we will
> see a user interface that resolves this sometime in the future.

Not having it happen in the first place is the priority.

So I have to start again with a new installation.

1. zpool split the mirror
2. zero primary HDD with dd
3. install to primary
4. zpool import secondary HDD
5. send-receive data from secondary to primary
6. zero second HDD
7. attach second HDD to form a mirror.

This saves copying 3TB across a network from back-ups.
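
Roughly, for steps 1, 4 and 5, I have in mind something like the
following sketch - the temporary pool name ("preserved"), the dataset
names and the send/receive options are illustrative and would need
checking:

# step 1: peel c2t1d0 off the mirror into its own single-disk pool
zpool split rpool preserved c2t1d0

# ... zero c2t0d0 and reinstall onto it (steps 2-3) ...

# step 4: bring the preserved half back
zpool import preserved

# step 5: replicate the data into the freshly installed rpool
# (may need -F or a different target if rpool/export already exists)
zfs snapshot -r preserved/export@move
zfs send -R preserved/export@move | zfs receive -d rpool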



> An easier way to recover from this (when you don't have an active
> pool on the disks to be wiped) is to create a dummy pool and then
> remove it. I'm not sure I would want to try wipe old pool info on an
> active pool (in your case, which I didn't notice at first). Most
> likely this was caused because it was previously connected to another
> system.

> More commonly, is having an old pool hanging around on different disk
> but on the same system:
>
> # zpool import
> pool: rpool
> id: 1254603689977242959
> state: UNAVAIL
> status: The pool is formatted using an incompatible version.
> action: The pool cannot be imported. Access the pool on a system running newer
> software, or recreate the pool from backup.
> see: http://support.oracle.com/msg/ZFS-8000-A5
> config:
>
> rpool UNAVAIL newer version
> c2t1d0 ONLINE

In my case the pool would have been available for import [prior to
install writing rpool over it].



> # zpool create -f dummy c2t1d0
> # zpool status dummy
> pool: dummy
> state: ONLINE
> scan: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> dummy ONLINE 0 0 0
> c2t1d0 ONLINE 0 0 0
>
> errors: No known data errors
> # zpool import
> no pools available to import

This was done on install:

# zpool history rpool | head
History for 'rpool':
2016-07-18.15:35:13 zpool create -f -B rpool c2t0d0
2016-07-18.15:35:18 zfs create -p -V 16370m rpool/dump
2016-07-18.15:35:24 zfs create -p -V 4096m rpool/swap
2016-07-18.15:35:32 zfs set primarycache=metadata rpool/swap
2016-07-18.15:54:56 zfs unmount rpool/ROOT/solaris
2016-07-18.15:58:05 zfs set primarycache=metadata rpool/swap
2016-07-18.15:58:06 zfs create -o mountpoint=/system/zones rpool/VARSHARE/zones
2016-07-18.15:58:12 zfs set com.oracle.libbe:last-boot-time=20160718T145807Z rpool/ROOT/solaris
2016-07-18.15:58:42 zfs create rpool/export/home/james


I didn't check with zpool import at this point, but if you say it clears
it, then it must have been the attach that failed to clear the previous
zpool. Both drives were at one time part of "spool" in a mirror; at
least one of them preserved the memory:

# zpool history rpool
...
2016-07-18.17:22:59 zpool attach rpool c2t0d0 c2t1d0
...



James.




James

Oct 20, 2016, 7:48:37 AM
On 14/10/2016 21:10, cindy.sw...@gmail.com wrote:

I have reproduced this in VirtualBox so I could experiment:

create VM with 2 HDDs
boot from live ISO
zpool create spool mirror c1t0d0 c1t2d0
zpool export spool
format c1t2d0 fdisk and delete partition
zpool create -f -B rpool c1t0d0
zpool attach rpool c1t0d0 c1t2d0


Note there is no "-f" on the attach.
Now zpool import shows an UNAVAIL pool that is neither importable nor
directly deletable, e.g. zpool destroy requires an import, which does
not work.


I also reproduced this on a real machine a while back - I've found
another machine with the identical problem. I don't normally run "zpool
import" on a machine that has nothing to import, so I hadn't noticed it
before.


A solution that doesn't require a reboot, starting with rpool as a
mirror of c2t0d0 and c2t1d0:

check back-ups
zpool detach rpool c2t0d0
zpool create -f dummy c2t0d0
zpool destroy dummy
zpool attach rpool c2t1d0 c2t0d0
wait for resilver
bootadm install-bootloader
zpool detach rpool c2t1d0
zpool create -f dummy c2t1d0
zpool destroy dummy
zpool attach rpool c2t0d0 c2t1d0
wait for resilver
bootadm install-bootloader
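
The "wait for resilver" steps could be polled rather than watched by
hand; a rough sketch, assuming the status text on this release includes
the usual "resilver in progress" wording:

# block until the resilver of rpool has finished
while zpool status rpool | grep -q "resilver in progress"; do
    sleep 60
done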



I guess the problem is that an element of a zpool stores information not
just about itself but about the other elements in the pool - there is no
primary store, so the configuration is held redundantly on every device.
Creating a zpool on a device does not overwrite its knowledge that it
still belongs to another pool (in effect detaching it), whereas
destroying the zpool containing the element does. The element isn't
detached from the other elements in the old pool [in my test it couldn't
be, because the other HDD had been removed], but that should leave the
old pool with a missing element, not a corrupt one. Once no referenced
element exists, the ability to manipulate them is lost because zpool
can't import the pool. I suggest zpool destroy should work on
non-imported pools, or zpool create should do to former pool membership
what zpool destroy does.
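
For what it's worth, I gather illumos and the open-source ZFS
implementations have grown a command for more or less this - zpool
labelclear - though I don't know whether any Solaris release ships it.
Where it exists, it is pointed at a disk that is not part of an active
pool, so it would replace the create/destroy dummy-pool dance above:

# clear stale ZFS labels from a disk not in use by an active pool;
# -f forces it when the label still claims membership of a defunct pool
zpool labelclear -f /dev/dsk/c2t0d0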



James.


