Stubbornly persistent pool!


David Abrahams

Feb 1, 2009, 12:57:19 AM
to zfs-...@googlegroups.com

So I decided to completely reinstall everything on my server. That
means every disk got re-partitioned and at least partly formatted. I just
got zfs-fuse reinstalled and "zpool import -a" yields:

cannot import 'olympic': pool is busy

but of course, I can't export, destroy, or get the status of this
thing. How can I make it disappear?

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

greg....@gmail.com

Feb 1, 2009, 4:48:35 AM
to zfs-...@googlegroups.com
Try deleting /etc/zfs/zpool.cache

David Abrahams

Feb 11, 2009, 1:32:23 PM
to zfs-...@googlegroups.com

on Sun Feb 01 2009, greg.martyn-AT-gmail.com wrote:

> On 2/1/09, David Abrahams <da...@boostpro.com> wrote:
>>
>> So I decided to completely reinstall everything on my server. That
>> means every disk got re-partitioned and at least partly formatted. I just
>> got zfs-fuse reinstalled and "zpool import -a" yields:
>>
>> cannot import 'olympic': pool is busy
>>
>> but of course, I can't export, destroy, or get the status of this
>> thing. How can I make it disappear?

> Try deleting /etc/zfs/zpool.cache

Doesn't work!

root@hydra:/mnt/recovery/src/package/zfs-fuse-0.5.1# /etc/init.d/zfs-fuse stop
* Unmounting ZFS filesystems...
...done.
* Stopping zfs-fuse zfs-fuse
...done.
root@hydra:/mnt/recovery/src/package/zfs-fuse-0.5.1# mv /etc/zfs/zpool.cache /tmp
root@hydra:/mnt/recovery/src/package/zfs-fuse-0.5.1# /etc/init.d/zfs-fuse start
* Starting zfs-fuse zfs-fuse
...done.
* Mounting ZFS filesystems...


cannot import 'olympic': pool is busy

...done.

Any other ideas? I don't see anything else on the disk that looks like
a likely record of the defunct 'olympic' pool.

David Abrahams

Feb 17, 2009, 8:47:53 AM
to zfs-...@googlegroups.com

on Wed Feb 11 2009, David Abrahams <dave-AT-boostpro.com> wrote:

> Any other ideas? I don't see anything else on the disk that looks like
> a likely record of the defunct 'olympic' pool.

I'm thinking that there's something in an MBR, since I was using
the whole disk instead of a partition for ZFS. Maybe I'll try to knock
it out by installing GRUB on all the disks.

David Abrahams

Feb 18, 2009, 3:22:50 PM
to zfs-...@googlegroups.com

on Tue Feb 17 2009, David Abrahams <dave-xT6NqnoQrPdWk0Htik3J/w-AT-public.gmane.org> wrote:

> on Wed Feb 11 2009, David Abrahams <dave-AT-boostpro.com> wrote:
>
>> Any other ideas? I don't see anything else on the disk that looks like
>> a likely record of the defunct 'olympic' pool.
>
> I'm thinking that there's something in an MBR, since I was using
> the whole disk instead of a partition for ZFS. Maybe I'll try to knock
> it out by installing GRUB on all the disks.

Still nope. I wonder where that info is hiding and how I can wipe it
out.

Jonathan Schmidt

Feb 18, 2009, 3:43:05 PM
to zfs-...@googlegroups.com
>>> Any other ideas? I don't see anything else on the disk that looks like
>>> a likely record of the defunct 'olympic' pool.
>> I'm thinking that there's something in an MBR, since I was using
>> the whole disk instead of a partition for ZFS. Maybe I'll try to knock
>> it out by installing GRUB on all the disks.
>
> Still nope. I wonder where that info is hiding and how I can wipe it
> out.

Sorry for not digging back into the history of this thread to find out,
but have you tried "zpool destroy olympic"? Maybe with "-f" for force?

David Abrahams

Feb 18, 2009, 3:52:16 PM
to zfs-...@googlegroups.com

dave@hydra:~$ sudo zpool destroy olympic
[sudo] password for dave:
cannot open 'olympic': no such pool
dave@hydra:~$ sudo zpool destroy -f olympic
cannot open 'olympic': no such pool

Ricardo M. Correia

Feb 18, 2009, 4:00:49 PM
to zfs-...@googlegroups.com
On Dom, 2009-02-01 at 00:57 -0500, David Abrahams wrote:
> So I decided to completely reinstall everything on my server. That
> means every disk got re-partitioned and at least partly formatted. I just
> got zfs-fuse reinstalled and "zpool import -a" yields:
>
> cannot import 'olympic': pool is busy
>
> but of course, I can't export, destroy, or get the status of this
> thing. How can I make it disappear?

I suppose you don't really wish to import or read any data from this
pool, right?

Since you didn't destroy the pool before repartitioning, zfs-fuse is
still able to read the on-disk ZFS labels from this pool, but it
can't import the pool because there's at least one disk/partition that
is being used by something else.

I think the only way to get rid of this message might be to use "dd" to
overwrite the first megabyte (or so) of the disks/partitions where this
pool was, as well as the last megabyte, but of course, be careful to
make sure you are not overwriting another filesystem :)
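A sketch of that dd recipe (the device name is a placeholder; triple-check it before running, since this permanently destroys whatever lives in those regions):

```shell
# Zero the first and last megabyte of the device, destroying ZFS
# labels 0/1 (front) and 2/3 (back). /dev/sdX is a placeholder --
# substitute the real disk/partition only after verifying it!
DEV=/dev/sdX
SIZE=$(blockdev --getsize64 "$DEV")        # device size in bytes
dd if=/dev/zero of="$DEV" bs=1M count=1 conv=notrunc
dd if=/dev/zero of="$DEV" bs=1M count=1 conv=notrunc \
    seek=$(( SIZE / 1048576 - 1 ))
```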

HTH,
Ricardo

Ricardo M. Correia

Feb 18, 2009, 4:03:06 PM
to zfs-...@googlegroups.com
On Qua, 2009-02-18 at 21:00 +0000, Ricardo M. Correia wrote:
> I think the only way to get rid of this message might be to use "dd" to
> overwrite the first megabyte (or so) of the disks/partitions where this
> pool was, as well as the last megabyte, but of course, be careful to
> make sure you are not overwriting another filesystem :)

You can use 'zdb -l /dev/device' to check which labels are still intact.

Labels 0 and 1 are on the first megabyte of the device, labels 2 and 3
are on the last megabyte.
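For example, a quick way to sweep several devices for leftover labels (the device list and the pool name 'olympic' are examples; adjust to your disks):

```shell
# Report which devices still carry a label from the old pool.
# /dev/sda, /dev/sdb and the pool name 'olympic' are examples.
for dev in /dev/sda /dev/sdb; do
    echo "== $dev =="
    if zdb -l "$dev" 2>/dev/null | grep -q "name: 'olympic'"; then
        echo "stale label found"
    else
        echo "no label for this pool"
    fi
done
```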

HTH,
Ricardo


David Abrahams

Feb 18, 2009, 5:02:00 PM
to zfs-...@googlegroups.com

on Wed Feb 18 2009, "Ricardo M. Correia" <Ricardo.M.Correia-AT-Sun.COM> wrote:

> On Dom, 2009-02-01 at 00:57 -0500, David Abrahams wrote:
>> So I decided to completely reinstall everything on my server. That
>> means every disk got re-partitioned and at least partly formatted. I just
>> got zfs-fuse reinstalled and "zpool import -a" yields:
>>
>> cannot import 'olympic': pool is busy
>>
>> but of course, I can't export, destroy, or get the status of this
>> thing. How can I make it disappear?
>
> I suppose you don't really wish to import or read any data from this
> pool, right?

Right.

> Since you didn't destroy the pool before repartitioning, zfs-fuse is
> still able to read the on-disk ZFS labels from this pool,

Clearly.

> but it
> can't import the pool because there's at least one disk/partition that
> is being used by something else.
>
> I think the only way to get rid of this message might be to use "dd" to
> overwrite the first megabyte (or so) of the disks/partitions where this
> pool was, as well as the last megabyte, but of course, be careful to
> make sure you are not overwriting another filesystem :)

Yeah, I think that's beyond my comfort level.

David Abrahams

Feb 18, 2009, 11:11:37 PM
to David Abrahams, zfs-...@googlegroups.com

On Wed, 18 Feb 2009 17:02:00 -0500, David Abrahams <da...@boostpro.com>
wrote:

> Yeah, I think that's beyond my comfort level.

Well, maybe not. I am using 3xRAID1 and RAID6 everywhere, after all. So I
managed to do this with one disk and it re-synced nicely.

Per http://howtoforge.net/software-raid1-grub-boot-fedora-8-p4 I used
sfdisk to copy the partition table across, but it worries me a little as
the sfdisk manpage has this very scary-sounding warning:

    sfdisk doesn’t understand GUID Partition Table (GPT) and it is not
    designed for large partitions. In particular case use more advanced
    GNU parted(8).

Unfortunately, parted can't easily be used to copy partition tables.
The sfdisk copy seems to have worked OK anyway. Is the manpage just wrong?
--
David Abrahams
BoostPro Computing
http://www.boostpro.com

David Abrahams

Feb 19, 2009, 7:39:10 PM
to zfs-...@googlegroups.com

on Wed Feb 18 2009, "Ricardo M. Correia" <Ricardo.M.Correia-AT-Sun.COM> wrote:

> On Qua, 2009-02-18 at 21:00 +0000, Ricardo M. Correia wrote:

>> I think the only way to get rid of this message might be to use "dd" to
>> overwrite the first megabyte (or so) of the disks/partitions where this
>> pool was, as well as the last megabyte, but of course, be careful to
>> make sure you are not overwriting another filesystem :)
>

> You can use 'zdb -l /dev/device' to check which labels are still intact.
>
> Labels 0 and 1 are on the first megabyte of the device, labels 2 and 3
> are on the last megabyte.

on Wed Feb 18 2009, David Abrahams <dave-AT-boostpro.com> wrote:

> On Wed, 18 Feb 2009 17:02:00 -0500, David Abrahams <da...@boostpro.com>
> wrote:
>>
>> yeah, I think that's beyond my comfort level.
>
> Well, maybe not. I am using 3xRAID1 and RAID6 everywhere, after all. So I
> managed to do this with one disk and it re-synced nicely.

This worked great, thanks so much for your help.

http://techarcana.net/2009/02/19/stubbornly-persistent-zfs-pools/

Thank goodness I went with 2x redundancy, though -- at one point I
rebooted to find that two of my partitions had remained offline.
Anyway, everything seems to have sync'd nicely.
