On 14/10/2016 21:10,
cindy.sw...@gmail.com wrote:
Hello Cindy,
Thank you for your comments and help.
> I agree this is a long-standing issue, but I predict that we will
> see a user interface that resolves this sometime in the future.
Not having it happen in the first place is the priority.
So I have to start again with a new installation.
1. zpool split the mirror
2. zero primary HDD with dd
3. install to primary
4. zpool import secondary HDD
5. send-receive data from secondary to primary
6. zero second HDD
7. attach second HDD to form a mirror.
This saves copying 3TB across a network from back-ups.
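Roughly, in commands; this is only a sketch: the new pool name "oldpool",
the snapshot name "@migrate" and the send/receive targets are placeholders,
the device names are the ones from my system, and the raw device paths for
dd may need a p0/slice suffix depending on how the disks are labelled:

# zpool split rpool oldpool c2t1d0      (second half of the mirror becomes "oldpool")
# dd if=/dev/zero of=/dev/rdsk/c2t0d0 bs=1024k   (zero the primary disk)
  ... fresh install of Solaris onto c2t0d0, creating the new rpool ...
# zpool import oldpool                  (bring the split-off copy back online)
# zfs snapshot -r oldpool/export@migrate
# zfs send -R oldpool/export@migrate | zfs recv -F rpool/export
# dd if=/dev/zero of=/dev/rdsk/c2t1d0 bs=1024k   (zero the second disk)
# zpool attach rpool c2t0d0 c2t1d0      (re-form the mirror and let it resilver)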
> An easier way to recover from this (when you don't have an active
> pool on the disks to be wiped) is to create a dummy pool and then
> remove it. I'm not sure I would want to try to wipe old pool info on
> an active pool (in your case, which I didn't notice at first). Most
> likely this was caused because the disk was previously connected to
> another system.
> More common is having an old pool hanging around on a different disk
> but on the same system:
>
> # zpool import
>   pool: rpool
>     id: 1254603689977242959
>  state: UNAVAIL
> status: The pool is formatted using an incompatible version.
> action: The pool cannot be imported. Access the pool on a system running
>         newer software, or recreate the pool from backup.
>    see: http://support.oracle.com/msg/ZFS-8000-A5
> config:
>
>         rpool       UNAVAIL  newer version
>           c2t1d0    ONLINE
In my case the pool would have been available for import [prior to
install writing rpool over it].
> # zpool create -f dummy c2t1d0
> # zpool status dummy
>   pool: dummy
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         dummy       ONLINE       0     0     0
>           c2t1d0    ONLINE       0     0     0
>
> errors: No known data errors
> # zpool import
> no pools available to import
This was done on install:
# zpool history rpool | head
History for 'rpool':
2016-07-18.15:35:13 zpool create -f -B rpool c2t0d0
2016-07-18.15:35:18 zfs create -p -V 16370m rpool/dump
2016-07-18.15:35:24 zfs create -p -V 4096m rpool/swap
2016-07-18.15:35:32 zfs set primarycache=metadata rpool/swap
2016-07-18.15:54:56 zfs unmount rpool/ROOT/solaris
2016-07-18.15:58:05 zfs set primarycache=metadata rpool/swap
2016-07-18.15:58:06 zfs create -o mountpoint=/system/zones rpool/VARSHARE/zones
2016-07-18.15:58:12 zfs set com.oracle.libbe:last-boot-time=20160718T145807Z rpool/ROOT/solaris
2016-07-18.15:58:42 zfs create rpool/export/home/james
I didn't check with zpool import at that point, but if, as you say, zpool
create clears the old pool info, then it must have been the attach that
failed to clear the previous pool. Both drives were at one time part of a
mirror called "spool", and at least one of them preserved the memory:
# zpool history rpool
...
2016-07-18.17:22:59 zpool attach rpool c2t0d0 c2t1d0
...
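If I hit this again I'll check the disk for leftover labels before the
attach, and clear them with your dummy-pool trick first. Something like
the following; the slice in the zdb path is illustrative and depends on
how the disk is labelled:

# zdb -l /dev/rdsk/c2t1d0s0     (dump the on-disk ZFS labels; the old
                                 "spool" name would have shown up here)
# zpool create -f dummy c2t1d0  (overwrite the stale labels)
# zpool destroy dummy
# zpool attach rpool c2t0d0 c2t1d0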
James.