[zfs-fuse] How to start/shutdown zfs-fuse properly (and automatically) on Ubuntu?


Douglas

Apr 17, 2010, 8:31:49 AM
to zfs-fuse
Hi everyone,

First of all, thanks for all the wonderful work on zfs-fuse! I'm new
to using it (and have only a moderate level of technical knowledge
with Linux), but I really appreciate the work that's been done to
revitalise the project recently.

I have a bit of a newbie question: I have been using version 0.6.0 on
Ubuntu 9.10 using the instructions under "From source (compiled by
you)" at http://zfs-fuse.net/#how-to-try-zfs . At the moment, while
testing things, I just start it up manually with "sudo zfs-fuse" and
try to remember to export the pools before shutting down. I presume
there's a more intelligent way to handle this, so that it starts up
and shuts down automatically and safely with the system. Are there
instructions for this available, and does it involve one of the files
in the "contrib" directory of the release package?

Is one of the scripts in there usable in /etc/init.d/ on Ubuntu? Or do
the issues raised in http://rudd-o.com/en/linux-and-free-software/starting-zfs-fuse-up-properly
(such as immunising zfs-fuse against the OOM killer) still apply with
the current release, and if so, what should one do? Are there some
fairly straightforward instructions about the right thing to do?

Finally, does anyone know if this will change with Ubuntu 10.04?

Many thanks,

Douglas

--
To post to this group, send email to zfs-...@googlegroups.com
To visit our Web site, click on http://zfs-fuse.net/

Subscription settings: http://groups.google.com/group/zfs-fuse/subscribe?hl=en

Emmanuel Anne

Apr 17, 2010, 8:57:03 AM
to zfs-...@googlegroups.com
Wow, thanks for the post, I discovered a page I had never seen before! ;-)

First, your manual startup:
there is a patch in the current version to automatically unmount everything whatever happens, even if the directories are busy and even if there are inherited datasets (like /home and /home/manu, for example). But sorry, I am not sure whether it has been merged into 0.6.0 or 0.6.1. To check: compile with scons debug=2, run zfs-fuse -n to see the messages, then zfs mount -a, and then press Ctrl-C in the window running zfs-fuse. If you see "unmounting" messages for all the datasets, then the patch is merged.
(If you don't get the messages, don't worry: there is still a journal, and since you were not doing any heavy write operations, you won't lose anything. But it's safer to have this patch anyway.)

So, to comment on the points from that page:
 - Unset the LANG environment variable.  Failure to do so will cause ZFS-FUSE to hang if your /usr is on ZFS.
Well, I never did that, but I don't mount my /usr as zfs either. It seems weird though, and if this bug is confirmed I'll have to investigate!

 - Immunize ZFS-FUSE against the OOM killer.  If you don't, then it's very likely that your kernel will kill ZFS-FUSE as soon as things get tight -- and this is something you definitely do not want.
The listing below contains code to do just that.
-> Bad idea: the OOM killer kills a task that is eating way too much memory, and it's much safer to use some parameters to limit the memory usage. I have never used such a thing either (and zfs-fuse was never killed by the OOM killer either). Notice that if you have a large enough swap partition you'll never hear from the OOM killer anyway.
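For reference, the immunization the page recommends boils down to something like the sketch below. This is only an illustration, not the page's actual script: the pidfile path is an assumption on my part, and it uses the old /proc/<pid>/oom_adj interface from kernels of that era (-17 disabled OOM killing; newer kernels replaced it with oom_score_adj).

```shell
#!/bin/sh
# Sketch: shield a running daemon from the OOM killer via the
# old /proc/<pid>/oom_adj interface (value -17 = never kill).
# PIDFILE is a hypothetical path, not taken from any real init script.

oom_adj_path() {
    # Build the /proc path for a given PID.
    echo "/proc/$1/oom_adj"
}

PIDFILE=/var/run/zfs-fuse.pid
if [ -r "$PIDFILE" ]; then
    PID=$(cat "$PIDFILE")
    echo -17 > "$(oom_adj_path "$PID")"
fi
```

As Emmanuel says, limiting zfs-fuse's memory use is usually the safer approach; this just documents what "immunizing" meant in practice.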

 - Remove limits.  If you don't remove the limits, ZFS-FUSE will either hang and spin, or consume an inordinate amount of memory (close to two gigabytes).
-> A long time ago you had to make some adjustments like that if you were running a 32-bit system, because of the stack size for the threads. Normally it's totally fixed now; I am running a version on a 32-bit laptop (not very often), and I don't use any of these limits. You can set ulimit -c unlimited if you think you'll get a core file, but that's only for debugging and in very special circumstances; usually there is no core file!
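If you do want a core file for debugging, the relevant tweak in an init script looks like the sketch below; the commented-out values are illustrative examples of the old 32-bit workarounds, not recommendations.

```shell
#!/bin/sh
# Sketch: limit adjustments an init script might apply before
# launching zfs-fuse. Only the core-dump line is generally useful,
# and only when debugging.

ulimit -c unlimited   # allow core dumps to be written

# Old 32-bit setups sometimes also tweaked thread stack size or
# open-file limits (example values, normally unnecessary today):
# ulimit -s 32768
# ulimit -n 8192
```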

I got my init.d zfs-fuse file from a very old Debian package for version 0.4 of zfs-fuse, and have never upgraded it since. Maybe I should check whether there have been some interesting updates since then, but it works fine for me... The only important change I made inside it was to add
zfs share -a just after zfs mount -a.
But in 0.6 you don't have sharenfs support, so it's useless.

Yes, there are quite a few init scripts in the contrib directory, and you should find what you need there.

2010/4/17 Douglas <dpierc...@gmail.com>



--
zfs-fuse git repository : http://rainemu.swishparty.co.uk/cgi-bin/gitweb.cgi?p=zfs;a=summary

Douglas

Apr 17, 2010, 3:12:56 PM
to zfs-fuse
On Apr 17, 2:57 pm, Emmanuel Anne <emmanuel.a...@gmail.com> wrote:
> Wow, thanks for the post, I discovered a page I had never seen before ! ;-)
>
> 1st your manual startup :
> there is a patch in the current version to automatically unmount everything
> whatever happens, even if the directories are busy and even if there are
> inherited datasets (like /home and /home/manu for example). But sorry I am
> not sure if it has been merged in 0.6.0 or 0.6.1. To check for that :
> compile with scons debug=2, run zfs-fuse -n to see the messages, then zfs
> mount -a, and then type ctrl-c in the window running zfs-fuse, if you see
> "unmounting" messages for all the datasets, then the patch is merged.
> (if you don't get the message, don't worry, there is still a journal and
> since you were not doing any heavy write operation, you won't lose
> anything, but it's safer to have this patch anyway).

Thanks for the reply! Hmm... when I do:

sudo scons debug=2
sudo scons install
sudo zfs-fuse -n

I get the error: zfs-fuse: invalid option -- 'n'

Is this the same as doing --no-daemon? In that case, I get:

sudo zfs-fuse --no-daemon
hostname = wintermute
hw_serial = 8323329
ncpus = 2
physmem = 981886 pages (3.75 GB)
pagesize = 4096, pageshift: 12
pwd_buflen = 1024, grp_buflen = 1024

at this point if I do:
sudo zfs mount -a
nothing much happens... I have to do a:
sudo zpool import -a

at which point in the zfs-fuse window I see:

mount request: "pool", "/pool", "0", ""
mounting /pool
Adding filesystem 1 at mntpoint /pool

and after ctrl-C I see:

^CExiting...
Calling do_umount()...
VFS is being freed
mounted filesystems: 0
do_umount() done

Does this mean, I presume, that the patch is present?

Thanks,

Douglas

Apr 17, 2010, 3:16:08 PM
to zfs-fuse
On Apr 17, 2:57 pm, Emmanuel Anne <emmanuel.a...@gmail.com> wrote:

> I got my init.d zfs-fuse file from a very old debian package for version 0.4
> of zfs-fuse, and never upgraded it since then, maybe I should try to see if
> there were some interesting updates since then, but it works fine for me...
> The only important change I made inside is to add
> zfs share -a just after zfs mount -a.
> But in 0.6 you don't have sharenfs support, so it's useless.
>
> Yes there are quite a few init scripts in the contrib directory and you
> should find what you need there.

Hello again,

So, could you (or anyone) advise on which particular file I'm supposed
to use from there? Is it just the zfs-fuse.initd, or something else?

And, is it sufficient to rename it to zfs-fuse and put it in /etc/init.d/? I must admit I'm not sure where to start...

Many thanks again,

Emmanuel Anne

Apr 17, 2010, 4:01:33 PM
to zfs-...@googlegroups.com
2010/4/17 Douglas <dpierc...@gmail.com>

> Thanks for the reply! Hmm... when I do:
>
> sudo scons debug=2
> sudo scons install
> sudo zfs-fuse -n
>
> I get the error: zfs-fuse: invalid option -- 'n'

Very old version, sorry, it was "--no-daemon" only in the old days...
 
> Is this the same as doing --no-daemon? In that case, I get:
>
> sudo zfs-fuse --no-daemon
> hostname = wintermute
> hw_serial = 8323329
> ncpus = 2
> physmem = 981886 pages (3.75 GB)
> pagesize = 4096, pageshift: 12
> pwd_buflen = 1024, grp_buflen = 1024
>
> at this point if I do:
> sudo zfs mount -a
> nothing much happens... I have to do a:
> sudo zpool import -a
>
> at which point in the zfs-fuse window I see:
>
> mount request: "pool", "/pool", "0", ""
> mounting /pool
> Adding filesystem 1 at mntpoint /pool
>
> and after ctrl-C I see:
>
> ^CExiting...
> Calling do_umount()...
> VFS is being freed
> mounted filesystems: 0
> do_umount() done
>
> Does this mean, I presume, that the patch is present?

Yeah, well, if I remember correctly it was working for basic filesystems.
Try zfs create pool/1 while it's mounted to get a 2nd filesystem inside, and then try ctrl-c again.
If it correctly unmounts both filesystems then everything is fine!

Emmanuel Anne

Apr 17, 2010, 4:02:55 PM
to zfs-...@googlegroups.com
You know, there is a package for zfs-fuse now in most distributions (Debian, Ubuntu, Fedora, and probably some others...). So the easiest way would be to install it to get the init script, then overwrite the binaries with more recent versions if you wish to test the newest stuff.

2010/4/17 Emmanuel Anne <emmanu...@gmail.com>




Gavin Chappell

Apr 17, 2010, 4:55:23 PM
to zfs-...@googlegroups.com
I'm using Schmod's PPA version of 0.6.0 on my Ubuntu 9.10 HTPC with success - https://launchpad.net/~schmod/+archive/schmod

In Lucid, it looks like zfs-fuse is going to be contained directly in the OS as part of the Universe component, which I think means it's more likely to see upgrades (although it's currently 0.6.0 as well, rather than the 0.6.0+criticals version which is available on the web and via git).

Chris Donovan

Apr 17, 2010, 10:12:25 PM
to zfs-...@googlegroups.com
> So, could you (or anyone) advise on which particular file I'm supposed
> to use from there? Is it just the zfs-fuse.initd, or something else?
>
> And, is it sufficient to rename it to zfs-fuse and put it in /etc/
> init.d/ ? I must admit I'm not sure where to start...

Putting the start/kill script in init.d is the first step. After that
you'll need to create the links in the appropriate /etc/rc?.d/
directories. I have the following start/kill rc?.d entries.

# ls -l /etc/rc?.d/*zfs*
lrwxrwxrwx 1 root root 18 2010-04-15 22:30 /etc/rc0.d/K20zfs-fuse ->
../init.d/zfs-fuse
lrwxrwxrwx 1 root root 18 2010-04-15 22:30 /etc/rc1.d/K20zfs-fuse ->
../init.d/zfs-fuse
lrwxrwxrwx 1 root root 18 2010-04-15 22:30 /etc/rc2.d/S20zfs-fuse ->
../init.d/zfs-fuse
lrwxrwxrwx 1 root root 18 2010-04-15 22:30 /etc/rc3.d/S20zfs-fuse ->
../init.d/zfs-fuse
lrwxrwxrwx 1 root root 18 2010-04-15 22:30 /etc/rc4.d/S20zfs-fuse ->
../init.d/zfs-fuse
lrwxrwxrwx 1 root root 18 2010-04-15 22:30 /etc/rc5.d/S20zfs-fuse ->
../init.d/zfs-fuse
lrwxrwxrwx 1 root root 18 2010-04-15 22:30 /etc/rc6.d/K20zfs-fuse ->
../init.d/zfs-fuse

I installed the zfs-fuse package from the PPA like Gavin said earlier
(https://launchpad.net/~schmod/+archive/schmod). Then I copied the
start/kill script and removed the package, then installed the source
via the link you mentioned previously. Then installed the start/kill
script in /etc/init.d, then ran "/usr/sbin/update-rc.d zfs-fuse
defaults".


Chris-

Douglas

Apr 18, 2010, 2:26:20 AM
to zfs-fuse
Thanks to everyone again --- I wasn't aware that there was a PPA
containing zfs-fuse 0.6.0 (I only knew about Filip Brcic's one with
0.5.1). I'll try that, and if it's not up-to-date enough, I'll try
what Chris suggests, too!

Many thanks,
Douglas

roscaf

May 19, 2010, 12:25:40 PM
to zfs-fuse
I have been using zfs-fuse for about a year now on Ubuntu, but recently did a fresh install and found that the init.d script in the repo wasn't mounting volumes correctly. It seems to work fine if the sleep line before mount -a is commented out; not sure why the "sleep 2" has such an effect, though.