[zfs-fuse] How to upgrade zfs-fuse ?


ToXiC

unread,
May 5, 2010, 7:26:42 AM5/5/10
to zfs-fuse
Hello.

I'm encountering a problem:
I'm currently using an older version of zfs-fuse (from emmanuel's git; he
created a "sable" branch a long time ago, almost exclusively for me...).
I tried to install the official zfs-fuse 0.6.0. It started well, but I
couldn't import my pool; the error complained about the pool
version.

So, as my data is on it, I'd like to know if there is a way for me
to update the pool version in order to get the bug-fixes of 0.6.0
WITHOUT losing my data.

I'm already backing up, but I won't be able to back up everything...
(too much data, not enough HDDs available for backup)

So I'd be happy if you had a way for me to use 0.6.0, even without
upgrading the pool version, if that cannot be done without erasing the
content of the drives...

And as you people surely know much more about backups than I do, I'd
also like some advice on backup solutions for Linux (Ubuntu). I was
using rsync, but since my tool (luckyBackup) changed its config-file
structure, I have to reconfigure all my backups, so I'm considering
changing the backup solution; maybe something that compresses
files to make the backups smaller but is still incremental... Note that
I'm mostly doing backups to guard against any zfs-fuse problems, so the
backups should be zfs-fuse-FREE...

Thanks for the help !

--
To post to this group, send email to zfs-...@googlegroups.com
To visit our Web site, click on http://zfs-fuse.net/

sgheeren

unread,
May 5, 2010, 8:08:46 AM5/5/10
to zfs-...@googlegroups.com
On 05/05/2010 01:26 PM, ToXiC wrote:
Hello.

I'm encountering a problem:
I'm currently using an older version of zfs-fuse (from emmanuel's git; he
created a "sable" branch a long time ago, almost exclusively for me...).
I tried to install the official zfs-fuse 0.6.0. It started well, but I
couldn't import my pool; the error complained about the pool
version.
  
Since you don't mention any specific pool versions, I'm going to assume you are confused: you are probably trying to downgrade (unstable was _way_ ahead of 0.6.x).
Downgrading a pool is (unfortunately) not possible. Though, judging from the man page, you could 'zfs send' with dedup disabled and try to receive into an older-version pool.

The odds are, though, that the resulting filesystems will _not_ be mountable if the filesystem version (zfs get version mypool/mydataset) is too high.
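A rough sketch of that send/receive escape route (pool and dataset names are made up for illustration; this assumes you can run the newer zfs-fuse to send and an older one to receive, and it may still fail if the filesystem version is too high):

```shell
# Check the filesystem version first; if it's too high, the older
# zfs-fuse won't be able to mount the received filesystem anyway:
zfs get version mypool/mydataset

# Under the newer zfs-fuse: snapshot the dataset and stream it to a file:
zfs snapshot mypool/mydataset@migrate
zfs send mypool/mydataset@migrate > /backup/mydataset.zstream

# Under the older zfs-fuse, into a pool created at the older version:
zfs receive oldpool/mydataset < /backup/mydataset.zstream
```

The intermediate file needs as much space as the dataset holds, so with many datasets it is worth doing them one at a time.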

So, as my data is on it, I'd like to know if there is a way for me
to update the pool version in order to get the bug-fixes of 0.6.0
WITHOUT losing my data.
  
You can, by choosing a version recent enough to support the version of your data pool. You might want to try the recently posted 'testing' branch. Any sufficiently recent version (obviously) has the bugfixes as well. No need to lose data unless you manually select a version that turns out to be particularly unstable.
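To see which pool versions a given build supports versus what the pool is actually at, something like this should work ('mypool' is a placeholder name):

```shell
# Pool versions this zfs-fuse build can handle, and their features:
zpool upgrade -v

# Version the existing pool and a dataset are currently at:
zpool get version mypool
zfs get version mypool/mydataset
```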

Two versions I'd vouch for: 0.6.9_beta (12d41f24e1765ad541a90c5f0ecf176e63aed2da)
Or: a somewhat older version supporting NFS sharing: 0c54eb8d4a1dbb5b4100ec6c80f282292b678c58 (see http://zfs-fuse.net/news/nfs-auto-share-sharenfs-support-landed-in-unstable)
I'm already backing up, but I won't be able to back up everything...
(too much data, not enough HDDs available for backup)

So I'd be happy if you had a way for me to use 0.6.0, even without
upgrading the pool version, if that cannot be done without erasing the
content of the drives...
  
As I mentioned, unstable is _newer_ (for a long time) and so this is _downgrading_ which has never been possible.

And as you people surely know much more about backups than I do, I'd
also like some advice on backup solutions for Linux (Ubuntu). I was
using rsync, but since my tool (luckyBackup) changed its config-file
structure, I have to reconfigure all my backups, so I'm considering
changing the backup solution; maybe something that compresses
files to make the backups smaller but is still incremental... Note that
I'm mostly doing backups to guard against any zfs-fuse problems, so the
backups should be zfs-fuse-FREE...
  
I use s3sync. Check out Amazon Simple Storage Service (S3). It's off-site, it's ubiquitous, and it's not too expensive. Prepare for a few months of uploading initially, depending on the size of your datasets.
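Not zfs-specific, but for the compressed-yet-incremental wish: GNU tar's listed-incremental mode does exactly that, and restores anywhere without zfs-fuse. A minimal sketch (all paths made up for illustration):

```shell
#!/bin/sh
set -e
rm -rf /tmp/demo-src /tmp/demo-bak
mkdir -p /tmp/demo-src /tmp/demo-bak

# Level-0 (full) backup; state.snar records what has been saved so far.
echo "first file" > /tmp/demo-src/a.txt
tar -czf /tmp/demo-bak/full.tar.gz \
    -g /tmp/demo-bak/state.snar -C /tmp demo-src

# Next run: only files changed since the last run go into the archive.
echo "second file" > /tmp/demo-src/b.txt
tar -czf /tmp/demo-bak/incr1.tar.gz \
    -g /tmp/demo-bak/state.snar -C /tmp demo-src

# incr1.tar.gz now holds b.txt but not the unchanged a.txt.
tar -tzf /tmp/demo-bak/incr1.tar.gz
```

Restoring means extracting the full archive first, then each incremental in order. rsnapshot or rdiff-backup are alternatives if hardlink-based or reverse-delta snapshots fit better.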

Thanks for the help !

  

FredR

unread,
Jun 4, 2010, 9:29:54 AM6/4/10
to zfs-fuse
I've had a similar issue (though I wasn't using a git version of zfs-fuse).

I've wanted to try zfs-fuse for quite some time, and I've been using
LVM quite a bit recently. My distro is Slackware 64-bit (somewhere
between Slackware 13-current and 13.1) and I'm using scripts from
SlackBuilds.org. I recently installed fuse 2.8.1 and zfs-fuse 0.6.0,
made a small 32 GB volume in LVM, set up a zfs pool, then a small
"backups" zfs filesystem (with compression), and copied a few things
over to it.

When 0.6.9 came out today (saw it on Freshmeat, woohoo!) I stopped
zfs-fuse, built and installed the new 0.6.9 package, and restarted; it
would not see my old pool. Going back to 0.6.0 made it mount properly
(/etc/rc.d/rc.zfs-fuse stop ; go back to 0.6.0 ; /etc/rc.d/rc.zfs-fuse
start).

I don't believe it's an issue with the project so much as my not
understanding that much about zfs yet. :) Because it was just a
test, I destroyed the filesystem and pool, upgraded to 0.6.9, and
started over.

What is the proper procedure for bumping up between zfs-fuse
versions? I think you guys are doing some fantastic work, I want to
test the heck out of it, and follow the project for a while.

Matt

unread,
Jun 4, 2010, 10:08:58 AM6/4/10
to zfs-fuse
I had something that looked an awful lot like that when I upgraded
from 0.6.0 to 0.6.9.

After downgrading back to 0.6.0 and mounting your existing filesystem,
try running "zpool export <poolname>". Then, shut down zfs-fuse
properly and upgrade it. To check that your pool is still okay, run
"zpool import" with the new zfs-fuse. This will list your importable
storage pools:

copper # zpool import
  pool: tank
    id: 1871273976149223314
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          hde       ONLINE


From there, type "zpool import <poolname>" and your pool should be
back up and running. Once your zpool is imported you can think about
upgrading it to the latest pool version (if applicable).

Good luck!
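Once the pool imports cleanly under the new zfs-fuse, the pool-version bump itself is a one-liner. Note that a zpool upgrade is one-way: older zfs-fuse builds won't import the pool afterwards.

```shell
# Show which pools are below the newest supported version:
zpool upgrade

# Upgrade one pool to the newest version this build supports (irreversible):
zpool upgrade tank
```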

Emmanuel Anne

unread,
Jun 4, 2010, 10:14:28 AM6/4/10
to zfs-...@googlegroups.com
Yes, it's very probably because the pool cache moved.
zpool import -a -f
probably fixes the issue.

2010/6/4 Matt <mrob...@gmail.com>



--
zfs-fuse git repository : http://rainemu.swishparty.co.uk/cgi-bin/gitweb.cgi?p=zfs;a=summary

sgheeren

unread,
Jun 5, 2010, 6:40:46 AM6/5/10
to zfs-...@googlegroups.com
Check what happens when you run:


zpool import -d /dev/mapper

If that fixes it, file a bug (this could be caused by the change to
http://zfs-fuse.net/issues/49)

QUOTE:

"Jumping to /dev/disk/by-id because it was accepted and addresses a VFAQ
(very FAQ). If you scan around on the user group"

Revision:
commit affda08c3dc7dc4c615306eb65aeb2dd986acdfd
Author: Seth Heeren <sghe...@hotmail.com>
Date: Sat May 29 01:44:02 2010 +0200

on linux, prefer using /dev/disk/by-id when importing (if available)

sgheeren

unread,
Jun 5, 2010, 6:40:57 AM6/5/10
to zfs-...@googlegroups.com
For the record,
I use lvm2 volumes with zfs on a regular basis, and my volumes are shown
under /dev/disk/by-id like so:

Just tried:

$ lvcreate ssd -n testforzfs -L1g
$ ls -l /dev/disk/by-id | grep test
lrwxrwxrwx 1 root root 27 2010-06-04 17:16 dm-name-ssd-testforzfs ->
../../mapper/ssd-testforzfs


