How to get the ZFS pool to survive a reboot


Ieracos

Dec 17, 2008, 11:44:35 AM
to zfs-...@googlegroups.com
Hi people,
I'm new to this list and, unfortunately, I have a newbie question for you :(
This is my situation: I'm trying to run a file server on a Debian unstable, using zfs-fuse and two USB hdd (500Gb each one, in mirroring). Obviously zfs is only for the storage space, no root. At this point, I solved all problems, everything works, I'm happy, and I even found the init script (by Bryan Donlan) on this list, but... after a reboot my pool don't come back to life. The script doesn't fail, but nothing happens because the "zfs mount -a" can't find any dataset, because there aren't (according to zpool) any available pools. Starting manually the zfs-fuse daemon, the result is the same. I think the data is still there on the disks, because if I try to create a new pool with zpool I get an error because the disks are parts of a pool yet ("Try -f to correct this error"). However I didn't found any way to solve this, even reading the zpool help and manual, nor googling a lot.
Any ideas? Can anyone help me? :)

drewpca

Dec 17, 2008, 12:30:59 PM
to zfs-fuse

Ieracos wrote:
> doesn't fail, but nothing happens because the "zfs mount -a" can't find any dataset, because
> there aren't (according to zpool) any available pools. Starting manually the zfs-fuse daemon, the result

I have about the same setup, and I find I need a 'zpool export
mypoolname; zpool import mypoolname' after every reboot (after
zfs-fuse starts).

Ricardo M. Correia

Dec 17, 2008, 2:06:23 PM
to zfs-...@googlegroups.com
Hi Ieracos,

Have you tried running 'zpool import'?

The reason your pool disappeared may be a bug in the 0.5.0 release that
was fixed in the latest source code repository.

To fix that problem, you need to create the /etc/zfs directory manually
('mkdir /etc/zfs'), or alternatively you can upgrade to the latest
version from the Mercurial repository and zfs-fuse will do that for you.

Cheers,
Ricardo

Jonathan Schmidt

Dec 17, 2008, 2:09:18 PM
to zfs-...@googlegroups.com

Can you guys check whether you have an /etc/zfs directory? A certain
version of the zfs-fuse install script was not creating that directory,
and it is what's responsible for "remembering" your pools between
invocations, AFAIK. If you don't have that directory, just create it and
do a 'zpool export; zpool import' and it should be permanent.
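The fix Jonathan describes can be sketched like this (a hedged sketch, assuming zfs-fuse 0.5.0 on Debian with a pool named "data" as elsewhere in this thread; run as root):

```shell
# Create the directory zfs-fuse uses to persist zpool.cache
# (the 0.5.0 install script may not have created it).
mkdir -p /etc/zfs

# Export and re-import the pool so zfs-fuse writes a fresh
# zpool.cache entry. Replace "data" with your pool's name.
zpool export data
zpool import data

# Confirm the cache file now exists and the pool is healthy.
ls -l /etc/zfs/zpool.cache
zpool status data
```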

Ieracos

Dec 17, 2008, 8:22:32 PM
to zfs-...@googlegroups.com
Hi guys,
First of all thanks everyone for fast replies :)
Well, I tried the "import" method, and... it works partially. Let me explain: I created the /etc/zfs directory by hand and ran 'zpool import data' (my pool's name is "data"), and that worked; I got my pool back and was very happy. So I tried stopping and restarting zfs-fuse with the script, and that worked too. Finally, I tried a reboot, and that... didn't work :(

zfs-fuse recognizes the automatically created zpool.cache file in /etc/zfs, but it doesn't mount the filesystem, and 'zpool list' returns "HEALTH: FAULTED" for my pool. I tried it many times, with the same result; the only difference is that sometimes a simple 'zpool import data' (which took 3 to 5 minutes) works fine, and other times I got the error "cannot import 'data': no such pool available", so I had to:
1) stop zfs-fuse
2) remove manually /etc/zfs/zpool.cache
3) restart zfs-fuse
4) run 'zpool import data' (which likewise took 3 to 5 minutes)

and that got my pool back.
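The four steps above, as a sketch (the init-script path is an assumption; on Debian it is typically /etc/init.d/zfs-fuse):

```shell
# 1) stop zfs-fuse
/etc/init.d/zfs-fuse stop

# 2) remove the stale cache file so the next import rebuilds it
rm -f /etc/zfs/zpool.cache

# 3) restart zfs-fuse
/etc/init.d/zfs-fuse start

# 4) re-import the pool (this scans /dev, so it can take minutes)
zpool import data
```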

That's all...

P.S.: I'm using version 0.5.0

Ricardo M. Correia

Dec 17, 2008, 8:44:20 PM
to zfs-...@googlegroups.com
On Thu, 2008-12-18 at 02:22 +0100, Ieracos wrote:
> Hi guys, First of all thanks everyone for fast replies :) Well, I tried
> the "import" method, and... it works partially. Let me explain: I
> created by hand the /etc/zfs directory, and ran 'zpool import data'
> (my pool's name is "data"), and that worked, I got back my pool and I
> was very happy... so, I tried to stop and restart zfs-fuse with the
> script and that, again, worked. Finally, I tried with a reboot, and
> that... didn't work :( Zfs-fuse recognizes the automatically created
> zpool.cache file in /etc/zfs, but it doesn't mount the filesystem, and
> a 'zpool list' returns "HEALTH: FAULTED" for my pool.

What does 'zpool status' say?

> I tried it many times, with the same result; the only difference is
> that sometimes a simple 'zpool import data' (which took 3 to 5
> minutes) works fine

Note that 'zpool import' scans all the block devices in /dev.

If you have slow block devices (cdroms, floppies, network block
devices, ...) it's not surprising it takes a while.

However, a 'zpool import' is supposed to be a very rare operation and in
the future it will be much faster (once an updated e2fsprogs with a ZFS
patch in libblkid gets distributed on all major distros, and another
patch is integrated into zfs-fuse).

> , and other times I got the error "cannot import 'data': no such pool
> available", so I had to: 1) stop zfs-fuse 2) remove manually
> /etc/zfs/zpool.cache 3) restart zfs-fuse 4) run 'zpool import data'
> (which likewise took 3 to 5 minutes)

Is it really necessary to do all that?
If you do 'zpool export data' and 'zpool import data', won't it work as
well?

>
> and these got me back my pool.

The problem you're experiencing may happen if either:

1) Your devices get renumbered when you reboot. You can solve this by
importing your pool with 'zpool import -d /dev/disk/by-id' (this is
recommended anyway). This should also speed up the import operation
quite a bit and should make this problem disappear.

Or 2) Your devices are not available by the time zfs-fuse starts.
There is no easy way to fix this one, besides making sure that this
doesn't happen. The workaround is running zpool export/import.
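Cause 1 can be sketched like this (assuming the pool from this thread, named "data"; run as root):

```shell
# Export the pool, then re-import it using stable /dev/disk/by-id
# paths instead of /dev/sdX names, which USB disks can shuffle
# between reboots.
zpool export data
zpool import -d /dev/disk/by-id data

# The pool's devices should now be listed by their persistent IDs.
zpool status data
```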

HTH,
Ricardo


Ieracos

Dec 17, 2008, 9:43:58 PM
to zfs-...@googlegroups.com
"Ricardo M. Correia" <Ricardo....@Sun.COM> wrote:
> 1) Your devices get renumbered when you reboot. You can solve this by
> importing your pool with 'zpool import -d /dev/disk/by-id' (this is
> recommended anyway). This should also speed up the import operation
> quite a bit and should make this problem disappear.

Wow, you got it! :D
Since they're USB disks, maybe they get renumbered at reboot. Anyway, now it works perfectly! And the import operation was extremely fast: it took only a few seconds.
Now I can enjoy the power of ZFS ;)
Thank you a lot, guys. Especially Ricardo: if we ever meet, I'll buy you a beer... or a pizza :D
Goodbye and thanks again!

Ruben Wisniewski

Dec 18, 2008, 8:40:32 AM
to zfs-...@googlegroups.com
Ricardo M. Correia wrote:
> The problem you're experiencing may happen if either:

> Or 2) Your devices are not available by the time zfs-fuse starts.
> There is no easy way to fix this one, besides making sure that this
> doesn't happen. The workaround is running zpool export/import.


Since I set up my raidz1 on encrypted devices, they aren't available
when zfs-fuse starts, but after they have been opened I only have to
mount my ZFS pool to bring it online; there's no need to export or
import it.
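A sketch of Ruben's sequence (the device names and mapper names here are hypothetical; adjust for your setup; run as root):

```shell
# Unlock the encrypted pool members first.
cryptsetup luksOpen /dev/sdb1 crypt-disk1
cryptsetup luksOpen /dev/sdc1 crypt-disk2

# zfs-fuse is already running and still knows the pool from
# zpool.cache, so mounting the datasets is all that's needed.
zfs mount -a
```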


Greetings, Ruben
