Live update of packages


Uwe Kubosch

Dec 27, 2009, 5:15:18 AM
to zfs-...@googlegroups.com
Hi all!

I just upgraded a host from 0.6.0 snapshot 433 to 0.6.0 final, and my pool went offline immediately. zpool and zfs could not access the pool and file system:

[root@fili ~]# zpool status
connect: No such file or directory
Please make sure that the zfs-fuse daemon is running.
internal error: failed to initialize ZFS library
[root@fili ~]# zfs list
connect: No such file or directory
Please make sure that the zfs-fuse daemon is running.
internal error: failed to initialize ZFS library

HOWEVER: The pool IS online and the zfs file system is available.

Killing the zfs-fuse daemon and restarting it makes everything OK:

[root@fili ~]# killall zfs-fuse
[root@fili ~]# zpool status
connect: No such file or directory
Please make sure that the zfs-fuse daemon is running.
internal error: failed to initialize ZFS library
[root@fili ~]# service zfs-fuse start
Cleaning up stale zfs-fuse PID file in /var/run/zfs-fuse.pid
Starting zfs-fuse: [ OK ]
Immunizing zfs-fuse against OOM kills [ OK ]
Mounting zfs partitions: [ OK ]
[root@fili ~]# zpool status
pool: backup
state: ONLINE
scrub: none requested
config:

NAME                                                  STATE   READ WRITE CKSUM
backup                                                ONLINE     0     0     0
  disk/by-id/ata-WDC_WD10EACS-00ZJB0_WD-WCASJ1858261  ONLINE     0     0     0
  disk/by-id/ata-WDC_WD10EACS-00ZJB0_WD-WCASJ2223986  ONLINE     0     0     0
  disk/by-id/ata-WDC_WD10EACS-00ZJB0_WD-WCASJ2225485  ONLINE     0     0     0

errors: No known data errors

Any idea what prevents the zpool and zfs commands from working?


--
With kind regards
Uwe Kubosch
Kubosch Consulting
u...@kubosch.no
http://kubosch.no/

Fajar A. Nugraha

Dec 27, 2009, 6:20:05 AM
to zfs-...@googlegroups.com
On Sun, Dec 27, 2009 at 5:15 PM, Uwe Kubosch <u...@kubosch.no> wrote:
> Hi all!
>
> I just upgraded a host from 0.6.0 snapshot 433 to 0.6.0 final, and my pool went offline immediately.  zpool and zfs could not access the pool and file system:
>
> [root@fili ~]# zpool status
> connect: No such file or directory
> Please make sure that the zfs-fuse daemon is running.
> internal error: failed to initialize ZFS library

I'm guessing that's because Emmanuel decided to change zfs socket from
/etc/zfs to /var/run/zfs/ :D

I should probably also mention that when I upgraded to the devel version
(I currently use e23119283a6007b87c1e75ddb8d06b3f8bf23ca4 from master),
zfs-fuse was unable to discover the previous pool if the devices were in
a "non-standard" location (e.g. using LVM, where the devices are under
/dev/mapper or /dev/vg/). It's as if the previous zpool.cache was not
read. I'm not exactly sure whether this was because of something
specific that I did, or something you can expect in general. Anyway, if
this happens to you, a simple "zpool import -d /dev/mapper" does the
trick.
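For anyone hitting the same symptom, the recovery Fajar describes can be sketched as a short command sequence (the pool name "backup" is taken from the transcript earlier in this thread; substitute your own pool name and device directory):

```shell
# Ask zfs-fuse to scan an alternate device directory instead of /dev.
# With no pool name given, this just lists any importable pools found
# under /dev/mapper.
zpool import -d /dev/mapper

# Import a specific pool found there ("backup" is an example; use
# whatever name the listing above reported).
zpool import -d /dev/mapper backup

# Verify the pool came back ONLINE.
zpool status backup
```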

--
Fajar

Uwe Kubosch

Dec 27, 2009, 8:46:40 AM
to zfs-...@googlegroups.com

On Dec 27, 2009, at 12:20 PM, Fajar A. Nugraha wrote:

> On Sun, Dec 27, 2009 at 5:15 PM, Uwe Kubosch <u...@kubosch.no> wrote:
>> I just upgraded a host from 0.6.0 snapshot 433 to 0.6.0 final, and my pool went offline immediately. zpool and zfs could not access the pool and file system:
>>
>> [root@fili ~]# zpool status
>> connect: No such file or directory
>> Please make sure that the zfs-fuse daemon is running.
>> internal error: failed to initialize ZFS library
>
> I'm guessing that's because Emmanuel decided to change zfs socket from
> /etc/zfs to /var/run/zfs/ :D

That would indeed explain it :)

Rudd-O

Dec 28, 2009, 9:45:10 PM
to zfs-...@googlegroups.com
Yes. The socket that ZFS opens to listen for zpool and zfs commands was
somehow unlinked by the RPM upgrade process.

This happened to me yesterday when I tried your updates-testing package
too.

Rudd-O

Dec 28, 2009, 9:45:52 PM
to zfs-...@googlegroups.com
Also zpool.cache ought to be stored in /var/cache/zfs/, NOT in /etc.

Fajar A. Nugraha

Dec 28, 2009, 10:02:15 PM
to zfs-...@googlegroups.com
On Tue, Dec 29, 2009 at 9:45 AM, Rudd-O <rud...@rudd-o.com> wrote:
> Also zpool.cache ought to be stored in /var/cache/zfs/, NOT in /etc.

It's actually /var/lib/zfs/zpool.cache :D

It'd be nice if both the zpool.cache and socket locations were
customizable via a zfs-fuse startup option (is that implemented
already?). That would at least give some option to people who wish to
keep it the "solaris" way (/etc/zfs/zpool.cache) or the "linux" way
(/var/lib).

--
Fajar

Mike Hommey

Dec 29, 2009, 2:55:46 AM
to zfs-...@googlegroups.com
On Mon, Dec 28, 2009 at 06:45:52PM -0800, Rudd-O wrote:
> Also zpool.cache ought to be stored in /var/cache/zfs/, NOT in /etc.

/var/lib, not /var/cache. /var/cache is supposed to be restorable by the
application, i.e. if the admin deletes the content, the application is
supposed to recreate it. zfs-fuse doesn't, as it requires the admin to
reimport the pools.
/var/lib, on the other hand is for state information.

These are the FHS definitions[1][2].

Mike

1. http://www.pathname.com/fhs/2.2/fhs-5.5.html
2. http://www.pathname.com/fhs/2.2/fhs-5.8.html

sgheeren

Dec 29, 2009, 3:55:02 AM
to zfs-...@googlegroups.com
Mike Hommey wrote:
> On Mon, Dec 28, 2009 at 06:45:52PM -0800, Rudd-O wrote:
>
>> Also zpool.cache ought to be stored in /var/cache/zfs/, NOT in /etc.
>>
>
> /var/lib, not /var/cache. /var/cache is supposed to be restorable by the
> application, i.e. if the admin deletes the content, the application is
> supposed to recreate it. zfs-fuse doesn't, as it requires the admin to
> reimport the pools.
> /var/lib, on the other hand is for state information.
>

You are quite right!

The brittleness of it all has led me the other way around; my zfs-fuse
init script will explicitly export and import the pool. I name the pool
in /etc/default/zfs-fuse. This way, zpool.cache is restorable. The
additional benefit seems to be that you can share your pool freely among
(dual-boot) host OSes.
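A minimal sketch of that init-script approach, assuming a POOLS variable in /etc/default/zfs-fuse as described (the variable name and script layout are illustrative, not from any shipped package):

```shell
#!/bin/sh
# Illustrative init-script fragment: import pools by name on start and
# export them on stop, so zpool.cache can always be rebuilt from scratch.

# /etc/default/zfs-fuse is assumed to contain e.g.:  POOLS="backup"
. /etc/default/zfs-fuse

case "$1" in
  start)
    zfs-fuse                    # start the daemon first
    for p in $POOLS; do
      zpool import "$p"         # re-import each named pool
    done
    ;;
  stop)
    for p in $POOLS; do
      zpool export "$p"         # clean export; pool stays portable
    done
    killall zfs-fuse
    ;;
esac
```

Exporting on shutdown is what makes the dual-boot sharing work: an exported pool carries no claim from the previous host, so the other OS can import it without forcing.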

Rudd-O

Dec 29, 2009, 4:37:31 PM
to zfs-...@googlegroups.com
On Tue, 2009-12-29 at 08:55 +0100, Mike Hommey wrote:
> On Mon, Dec 28, 2009 at 06:45:52PM -0800, Rudd-O wrote:
> > Also zpool.cache ought to be stored in /var/cache/zfs/, NOT in /etc.
>
> /var/lib, not /var/cache. /var/cache is supposed to be restorable by the
> application, i.e. if the admin deletes the content, the application is
> supposed to recreate it. zfs-fuse doesn't, as it requires the admin to
> reimport the pools.
> /var/lib, on the other hand is for state information.

Actually, if the cache file is not present when zfs-fuse starts up, ZFS
reimports the pools that were imported on the system before the file was
deleted. So this IS, INDEED, a cache file according to the FHS
definition.

Mike Hommey

Dec 29, 2009, 4:46:25 PM
to zfs-...@googlegroups.com
On Tue, Dec 29, 2009 at 01:37:31PM -0800, Rudd-O wrote:
> On Tue, 2009-12-29 at 08:55 +0100, Mike Hommey wrote:
> > On Mon, Dec 28, 2009 at 06:45:52PM -0800, Rudd-O wrote:
> > > Also zpool.cache ought to be stored in /var/cache/zfs/, NOT in /etc.
> >
> > /var/lib, not /var/cache. /var/cache is supposed to be restorable by the
> > application, i.e. if the admin deletes the content, the application is
> > supposed to recreate it. zfs-fuse doesn't, as it requires the admin to
> > reimport the pools.
> > /var/lib, on the other hand is for state information.
>
> Actually, if the cache file is not present when zfs-fuse starts up, ZFS
> reimports the pools that were imported on the system before the file was
> deleted. So this IS, INDEED, a cache file according to the FHS
> definition.

That's not my experience with zfs-fuse.

Mike

Fajar A. Nugraha

Dec 29, 2009, 5:21:01 PM
to zfs-...@googlegroups.com

Instead of "reimports the pools that were imported on the system", a
more appropriate description would be "scans the entries in /dev and
tries to import the pools there". Meaning that if you have previously
imported pools with vdevs located elsewhere (/dev/mapper,
/path/to/file/image, etc.), zfs-fuse won't be able to import them
without a valid zpool.cache or a "-d" command line option.

As to whether this qualifies as cache or state, I think of zpool.cache
as being similar to PHP's session data: the application can recreate
it, but if it's deleted, any previous information stored in it
(session data in PHP, vdev locations in zfs-fuse) is lost.
Ubuntu's php package puts it in /var/lib.

--
Fajar

Rudd-O

Dec 29, 2009, 8:54:31 PM
to zfs-...@googlegroups.com

Session data is not a cache that could be recreated if it was blown
away. If you kill the session data there, what happens is that your Web
customers LOSE their sessions and new sessions are generated, WITH DATA
LOSS.

This is why /var/lib/php is not in /var/cache.


