
[ZFS] recover destroyed zpool with ZDB


Beeblebrox

Apr 17, 2013, 2:05:07 PM
I destroyed my zpool but forgot to take a tar backup of the /home folder
first. This was a single-HDD pool, and I ran 'zpool destroy' followed by
'gpart destroy' before realizing my error.

Since then, I have manually re-created the GPT partitions at the sizes
they had before (testdisk did not correctly identify the geom), and there
have been no writes to the HDD.
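
For reference, the layout was re-created along these lines (the partition
types and sizes below are placeholders rather than the real values, which I
reconstructed by hand from the old layout):
# gpart create -s gpt ada0
# gpart add -t freebsd-boot -s 512k ada0
# gpart add -t freebsd-zfs ada0
The important part is that ada0p2 starts and ends exactly where the old
freebsd-zfs partition did, so the ZFS labels line up.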

After a lengthy discussion here:
http://freebsd.1045724.n5.nabble.com/ZFS-recover-destroyed-zpool-what-are-the-available-options-td5800299.html
and getting no result with:
# zpool import -D -f -R /bsdr -N -F -X 12018916494219117471 rescue =>
cannot import 'bsdr' as 'rescue': no such pool or dataset. Destroy and
re-create the pool from a backup source.
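
One variation I have not tried yet, so take it as a sketch rather than a
report: a read-only rewind import, which should at least rule out further
writes while testing whether the rewind works:
# zpool import -o readonly=on -f -R /bsdr -N -F 12018916494219117471 rescue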

I sent an email to an expert and was advised to look into zdb and its -F
and -X flags. Good news and bad news there: '# zdb -e -F 12018916494219117471'
gives a lot of output, but it is contradictory: although there are no
errors, %used shows zero:
Traversing all blocks to verify checksums and verify nothing leaked
...
No leaks (block sum matches space maps exactly)
bp count: 43
bp logical: 357888 avg: 8322
bp physical: 36352 avg: 845 compression: 9.85
bp allocated: 93184 avg: 2167 compression: 3.84
bp deduped: 0 ref>1: 0 deduplication: 1.00
SPA allocated: 93184 used: 0.00%

The zdb -F command reports the pool's internal information, but it neither
imports the pool nor changes its status to importable. What can I read or
change via zdb to bring this pool back online? The full zdb output is
available as a link if needed.

Thanks and regards.



-----
10-Current-amd64-using ccache-portstree merged with marcuscom.gnome3 & xorg.devel


Adam Vande More

Apr 17, 2013, 2:53:40 PM
On Wed, Apr 17, 2013 at 1:05 PM, Beeblebrox <zap...@berentweb.com> wrote:

> I destroyed my zpool but forgot to take a tar backup of the /home folder
> first. This was a single-HDD pool, and I ran 'zpool destroy' followed by
> 'gpart destroy' before realizing my error.
>
> Since then, I have manually re-created the GPT partitions at the sizes
> they had before (testdisk did not correctly identify the geom), and there
> have been no writes to the HDD.
>
> After a lengthy discussion here:
>
> http://freebsd.1045724.n5.nabble.com/ZFS-recover-destroyed-zpool-what-are-the-available-options-td5800299.html
> and getting no result with:
> # zpool import -D -f -R /bsdr -N -F -X 12018916494219117471 rescue =>
> cannot import 'bsdr' as 'rescue': no such pool or dataset. Destroy and
> re-create the pool from a backup source.
>
> I sent an email to an expert and was advised to look into zdb and its -F
> and -X flags. Good news and bad news there: '# zdb -e -F
> 12018916494219117471' gives a lot of output, but it is contradictory:
> although there are no errors, %used shows zero:
>

One thing is that you keep using 'zpool import -D' when the pool is not
actually in a destroyed state, so the -D search will never match it.
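
Roughly (a sketch, untested against your disk):
# zpool import      <- lists pools that can be imported normally
# zpool import -D   <- lists only pools whose labels carry the destroyed flag
If the labels on ada0p2 no longer mark the pool as destroyed, then a plain
'# zpool import -F <guid>' is the form to try.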

--
Adam Vande More

Beeblebrox

Apr 17, 2013, 3:16:20 PM
Hi,
It's a long story by now, and I was following Volodymyr's suggestions.
Anyway, 'zpool list' no longer shows the bsdr pool at all after I ran:
# zdb -e -F 12018916494219117471
Obviously so, since that command wrote the ada0p2 metadata into the
zpool.cache file, and zpool list reads the cache file.
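
If it helps to verify that, the cached config and the on-disk config can be
compared read-only (a sketch; I am going only by the zdb man page here):
# zdb -C                            <- config as recorded in /boot/zfs/zpool.cache
# zdb -e -C 12018916494219117471    <- config as rebuilt from the on-disk labels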

Regards.



-----
10-Current-amd64-using ccache-portstree merged with marcuscom.gnome3 & xorg.devel


Adam Vande More

Apr 17, 2013, 3:32:34 PM
On Wed, Apr 17, 2013 at 2:16 PM, Beeblebrox <zap...@berentweb.com> wrote:

> Hi,
> It's a long story by now, and I was following Volodymyr's suggestions.
> Anyway, 'zpool list' no longer shows the bsdr pool at all after I ran:
> # zdb -e -F 12018916494219117471
> Obviously so, since that command wrote the ada0p2 metadata into the
> zpool.cache file, and zpool list reads the cache file.


If you can get it back to a faulted state, the official procedure is here:

http://docs.oracle.com/cd/E19963-01/html/821-1448/gbbwl.html
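
As I read that page, it boils down to the rewind options (a sketch, with
'tank' standing in for the pool name):
# zpool clear -F tank    <- rewind a pool that is visible but faulted
# zpool import -F tank   <- rewind at import time, when the pool won't open
with the undocumented -X flag you already used being the more aggressive
variant of -F.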


--
Adam Vande More

Beeblebrox

Apr 18, 2013, 1:15:17 AM
Thanks, but that document does not appear very relevant to my situation.
Also, the issue is not as straightforward as it seems. The FAULTED status
of the zpool was a 'false positive', because:

A- The "present pool" did not accept any zpool commands and always gave
message like
no such pool or dataset ... recover the pool from a backup source.
B- The more relevant on-disk metadata showed and still shows this:
# zdb -l /dev/ada0p2 => all 4 labels intact and pool_guid:
12018916494219117471
vdev_tree: type: 'disk' id: 0 guid: 17860002997423999070

Meanwhile, the pool showing up in the zpool list was, and is, clearly in a
worse state than the above pool:
# zdb -l /dev/ada0 => only label 2 intact, pool_guid:
16018525702691588432

In my opinion, this problem is closer to a "Resolving a Missing Device"
scenario than to data corruption. Unfortunately, missing-device repair
documentation focuses on mirrored setups, and there is no decent document
on recovering from a missing device in a single-HDD pool.
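
One idea I have not tried yet (so treat this as a sketch): restrict the
device scan to the partition, so that the stale whole-disk labels on ada0
cannot shadow the good ones on ada0p2:
# mkdir /tmp/zdev
# ln -s /dev/ada0p2 /tmp/zdev/
# zpool import -d /tmp/zdev -f -F 12018916494219117471 rescue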



-----
10-Current-amd64-using ccache-portstree merged with marcuscom.gnome3 & xorg.devel
