On 20.01.13 02:07, piku wrote:
> Well /dev/sda IS still connected but zpool import -D is NOT finding the
> pool. I was wondering if it's because I used a partition (/dev/sdb1)
> rather than the whole disk (/dev/sdb).
Any chance you accidentally shredded the wrong disk, for example
because after reboot disk names have been shuffled around for whatever
reason?
If zdb -lu shows some usable uberblocks, you may try to import an
older version of the pool; however, for that to work, "zpool import"
needs to recognize the disk first. (And I am not sure zfs-fuse has
the needed import options. I think the last time I had to roll a pool
back by some transaction groups, I did it with ZoL in a VM.)
Have you verified that the disk is actually recognized by the OS, i.e.
cat /proc/partitions lists the partition(s) with the right size?
And just to be on the safe side: there are no traces of the pool in
"zpool status" anymore, right? "zpool import" won't import a pool it
thinks is already half-imported.
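The checks above can be sketched as a quick sequence (a hedged sketch; the device name /dev/sdb1 and pool name tank come from this thread and may differ on your system):

```shell
# Sanity checks before attempting any import (device names are examples).

# 1. Is the surviving partition visible to the OS with the right size?
grep sdb /proc/partitions

# 2. Do the vdev labels still contain usable uberblocks?
zdb -lu /dev/sdb1

# 3. Is there any half-imported remnant of the pool left?
zpool status

# 4. List importable pools without actually importing anything;
#    -D also shows pools marked as destroyed.
zpool import
zpool import -D
```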
(more comments inlined below)
> On Saturday, January 19, 2013 7:21:35 PM UTC-5, Emmanuel Anne wrote:
>>
>> Hmm, not sure I got everything. So:
>> /dev/sda is the drive you zeroed, so this one is now useless.
>> Then you destroyed the pool, but /dev/sdb is still available with one side
>> of the mirror on it.
>> Well, in this case zpool import -D should do the trick; just make sure that
>> /dev/sda is NOT connected, otherwise you'll get errors after starting to
>> use the pool.
>>
>> Detaching/re-attaching parts of a mirror works really well on ZFS (and
>> zfs-fuse); I did it for a long time to maintain a backup, because when you
>> re-attach the second part, only what has changed is sent to it.
>> So it's usually 100% reliable!
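For reference, the detach/re-attach backup cycle described above looks roughly like this (a sketch; the pool and device names are the ones used in this thread):

```shell
# Split one side of the mirror off to use as an offline backup...
zpool detach tank /dev/sdb1

# ...and later re-attach it to the remaining device; ZFS resilvers
# only the blocks that changed in the meantime.
zpool attach tank /dev/sda1 /dev/sdb1
```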
>>
>>
>> 2013/1/20 piku <evapo...@gmail.com>
>>
>>> I was hoping for any kind of help. Hours of googling is failing me.
>>>
>>> I had a Linux zfs-fuse system on Fedora. I created a ZFS pool called
>>> tank using two partitions, each spanning the entire disk of a 1.5 TB
>>> drive, as a mirror. So /dev/sda1 and /dev/sdb1 were the devices that made
>>> up the pool. So far so good. I then enabled compression and used this
>>> setup successfully for several years.
>>>
>>> ... fast forward
>>>
>>> I get a SMART alert that one of the drives is failing. I detach this
>>> drive from the pool. The pool is still functional. I reboot to run the
>>> Seagate SeaTools to get the RMA code. I then reboot. This is where I made
>>> a huge error. I don't know what state the pool was in, whether it was
>>> still detached or what, but I ran shred on the failing disk (/dev/sda),
>>> which overwrites everything with random bytes and then zeroes the disk.
>>> Later, when this was done, tank was no longer accessible.
>>>
>>> When I did a zpool status it said IIRC "unhealthy" and said it was
>>> corrupt. There was only one member of the pool and it was the failing
>>> drive that I detached previously.
To me that sounds like "shredded the wrong drive" and/or "drive
renamed on boot, but zpool.cache still around with old information".
There is a reason why one should never use the /dev/sdX names: they
are far too unstable these days and keep changing when drives are
added or removed, or are just slow to come up at startup.
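One way around the unstable names is to have zpool scan the stable by-id links instead (a sketch, assuming a Linux system with /dev/disk/by-id populated; that zfs-fuse supports -d the same way ZoL does is an assumption on my part):

```shell
# Import by scanning the persistent by-id device links instead of /dev/sdX.
# -D includes pools that were marked destroyed.
zpool import -d /dev/disk/by-id -D tank
```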
>>> I struggled with it for a while and
>>> couldn't get it to use the other good drive in any way so I did a zpool
>>> destroy on it.
I would never have done that. "zpool export -f"? Maybe. Locating and
deleting the zpool.cache? Probably yes. But zpool destroy on a pool I
want to rescue? Never.
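The less destructive alternatives just mentioned might look like this (a sketch; the cache file path /etc/zfs/zpool.cache is the common default but varies by distribution and may differ under zfs-fuse):

```shell
# Force-export the pool instead of destroying it.
zpool export -f tank

# Sideline a possibly stale cache file so it cannot mislead the next import.
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak

# Then rescan; -D also looks for destroyed pools.
zpool import -D
```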
>>> Now with one zeroed drive (/dev/sda) and one copy of the mirror on
>>> /dev/sdb1, no permutation of zpool import will import the pool.
>>> If I hexdump /dev/sdb I do find zfs related information listed about the
>>> drive, the name, etc.
>>>
>>> What do I do? I can't believe I have a mirror that is uhmm.. Not a
>>> mirror :( Even if I zeroed the wrong drive, the failing drive was still
>>> appearing to function fine. No matter what I should have a copy of my
>>> data, I seem to, but I cannot access it. I'm quite sure this is a normal
>>> mirror configuration. Is there any way to tell zpool to start scanning a
>>> device for vdev headers? Can zdb help me? I'm ok with even just basic
>>> file recovery at this point.
To the best of my knowledge, ZFS has neither an fsck (sure on this
one) nor any means to manually cat files as a rescue measure (I may be
wrong on this).
In theory, one can use zdb to walk the tree of blocks and extract data
blocks from files. But it is a real pain: it requires additional tools
to decode some of the metadata blocks that zdb does not pretty-print
well enough, and of course it requires intimate knowledge of the
on-disk structure. I'd estimate a progress rate of one file per hour
if manually walking the block tree, decoding metadata blocks on paper,
and extracting individual data blocks.
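For the record, the zdb-based walk sketched above starts roughly like this (a hedged sketch; -e operates on a pool that is not imported, and the block address passed to -R is a placeholder for illustration, not a real DVA):

```shell
# Dump dataset and object metadata of the exported/destroyed pool.
zdb -e -dddd tank

# Read one raw block by vdev:offset:size (placeholder address); walking
# from dnode to data blocks this way is the "one file per hour" pain.
zdb -e -R tank 0:400000:20000
```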
Best
Björn
--
| Bjoern Kahl +++ Siegburg +++ Germany |
| "googlelogin@-my-domain-" +++
www.bjoern-kahl.de |
| Languages: German, English, Ancient Latin (a bit :-)) |