I have just added zfs-fuse as a package to Fedora!
You can download the RPMs at
http://koji.fedoraproject.org/koji/buildinfo?buildID=69138
If any of you are running Fedora please test the package and give
feedback at
http://admin.fedoraproject.org/updates/zfs-fuse-0.5.0-3.20081009.fc10
The RPMs have a .fc10 suffix, but should work with Fedora 8 and Fedora 9
as well.
The RPMs are not signed yet, so you will have to confirm installation.
Signing is a manual process that probably won't be performed until
Fedora 10 is released on November 25th. In all other regards the
packages are final.
If I get 3 successful tests, the package will progress from testing to
stable :) Please test :)
--
With kind regards,
Uwe Kubosch
Kubosch Consulting
Norway
I'll test as soon as I can get Rawhide to boot on my computer... :-/
Greg
Thanks for your work. I hope this package will be in Fedora soon.
I installed it on Fedora Rawhide, and it works fine!
Could I suggest an improvement? Having the ZFS file systems
automatically mounted on startup would be very cool. Currently, I
execute "zfs mount -a" before logging in as a normal user.
My proposal is to add a parameter in /etc/sysconfig/zfs-fuse:
ZFS_AUTOMOUNT=0|1
(default = 0)
If the user sets ZFS_AUTOMOUNT=1, the init script executes "sleep 2 ;
zfs mount -a" with a check that displays a warning if the mount failed
(see my own old init script in the attachment for an example).
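A minimal sketch of what I have in mind (the variable name and the
warning message are only my suggestion):

  # /etc/sysconfig/zfs-fuse
  ZFS_AUTOMOUNT=1

and in the init script:

  if [ "${ZFS_AUTOMOUNT:-0}" -eq 1 ] ; then
      sleep 2
      zfs mount -a || echo "WARNING: could not mount ZFS filesystems"
  fi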
Yours sincerely.
I am looking at this now. I'll get back to you with questions in a
moment.
--
With kind regards,
Uwe Kubosch
Kubosch Consulting
Norway
Thanks again for the script. I have added all improvements I could
identify, and included the result below.
> Two notes you have to pay attention to:
>
> - sysvinit in fedora sometimes cleans up /var/run, if you start zfs
> late enough, no problem.
I believe the init scripts run after this, right? Boot priority is 26.
> - sysvinit does a killall -TERM which kills ZFS-FUSE, you might want
> to cleanly stop zfs before that happens
This should already be handled by setting the shutdown priority to 74.
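(For reference, that priority comes from the chkconfig header in the
script below:

  # chkconfig: - 26 74

i.e. start priority 26, stop priority 74. My reading of the rc scripts
is that K74zfs-fuse then runs before the S00killall script in runlevels
0 and 6, but I have not verified this in detail.)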
> PIDFILE=/.zfs-fuse.pid
I added the PIDFILE concept, but put it in /var/run/zfs-fuse.pid, where
PID files usually live. Most of it is handled by the standard functions.
> LOCKFILE=/var/lock/zfs/zfs_lock
This LOCKFILE wasn't actually in use, right?
> ulimit -c 512000
This one (maximum size of core files created) is set to 0 by the daemon
function.
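(If we ever want core files again, Fedora's daemon function respects
DAEMON_COREFILE_LIMIT, so a sketch like this in /etc/sysconfig/zfs-fuse
should do it:

  DAEMON_COREFILE_LIMIT=unlimited

I have not needed it so far.)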
> log_action_begin_msg() {
> true # echo $*
> }
>
> log_action_end_msg() {
> true # echo $*
> }
These did nothing, right?
> # /var/run/sendsigs.omit.d/zfs-fuse
This was commented, but what would it have done?
> log_action_begin_msg "Immunizing ZFS-FUSE against OOM kills
> and sendsigs signals"
> # mkdir -p /var/run/sendsigs.omit.d
> # cp "$PIDFILE" /var/run/sendsigs.omit.d/zfs-fuse
This was commented out, and I could not find anything about this
mechanism for Fedora. I guess it is OK to leave out?
> log_action_begin_msg "Mounting ZFS filesystems"
> sleep 1
Why the "sleep 1" here? We have already confirmed that zfs-fuse is
running by waiting for the PIDFILE. I can see that the mount fails if
it is missing, but I'd like to know what we're waiting for.
> rm -f /var/lib/random-seed
Protection against someone planting a known seed, I guess?
> if [ -x /nonexistent -a -x /usr/bin/renice ] ; then # DISABLED
> log_action_begin_msg "Increasing ZFS-FUSE priority"
> /usr/bin/renice -15 -g $PID > /dev/null
> ES_TO_REPORT=$?
> if [ 0 = "$ES_TO_REPORT" ]
> then
> log_action_end_msg 0
> else
> log_action_end_msg 1 "code $ES_TO_REPORT"
> exit 3
> fi
> true
> fi
Why do we need to increase the priority of zfs-fuse?
> do_stop () {
I use functions defined in /etc/init.d/functions. That reduces the
code.
The pidofproc function gets the PID of a program by looking at the pid
file in /var/run and then /proc/<pid>.
The killproc function sends a TERM signal and waits for the process to
die, escalating to a KILL signal if the process has not died within
the given delay. It then removes the pid file.
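Roughly, the calling pattern looks like this (a sketch; pidofproc and
killproc come from Fedora's initscripts package, and the 10 second
delay is just an example value):

  . /etc/rc.d/init.d/functions

  # looks up the PID via /var/run/zfs-fuse.pid, then checks /proc/<pid>
  PID=`pidofproc zfs-fuse`

  # sends TERM, waits up to 10 seconds, escalates to KILL,
  # then removes the pid file
  killproc -d 10 zfs-fuse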
> log_action_begin_msg "Syncing disks again"
> sync
> log_action_end_msg 0
Why do we need the second sync?
That's it. Please look at the result below and comment.
---------------
#! /bin/bash
#
# zfs-fuse - startup script for zfs-fuse daemon
#
# chkconfig: - 26 74
# description: zfs-fuse daemon
#
### BEGIN INIT INFO
# Provides: zfs-fuse
# Required-Start: fuse
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start:
# Default-Stop:
# Short-Description: Start the zfs-fuse daemon
# Description: zfs-fuse daemon
### END INIT INFO
# Source function library.
. /etc/rc.d/init.d/functions
prog="zfs-fuse"
exec="/usr/bin/$prog"
config=/etc/sysconfig/$prog
[ -e $config ] && . $config
PIDFILE=/var/run/$prog.pid
unset LANG
start() {
    [ -x $exec ] || { echo "$prog binary not present or executable" ; exit 5 ; }
    PID=`pidofproc $prog`
    start_status=$?
    case "$start_status" in
        0)
            echo "ZFS-FUSE is already running with pid $PID"
            exit 3
            ;;
        1)
            echo "Cleaning up stale $prog PID file in $PIDFILE"
            rm -f "$PIDFILE"
            ;;
        3)
            # not running
            ;;
        *)
            echo "Huh?"
            exit 99
    esac
    echo -n $"Starting $prog: "
    daemon $exec -p "$PIDFILE"
    exec_retval=$?
    echo
    [ $exec_retval -ne 0 ] && return $exec_retval
    # Wait up to 15 seconds for the daemon to create its PID file.
    for a in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ; do
        PID=`pidofproc $prog`
        [ "$PID" != "" ] && break
        echo -n "."
        sleep 1
    done
    if [ "$PID" = "" ] ; then
        echo "ZFS-FUSE did not start or create $PIDFILE"
        exit 3
    fi
    echo -n "Immunizing $prog against OOM kills"
    # -17 disables the OOM killer for this process
    echo -17 > "/proc/$PID/oom_adj"
    ES_TO_REPORT=$?
    if [ "$ES_TO_REPORT" -ne 0 ] ; then
        echo_warning
        echo "code $ES_TO_REPORT"
        exit 3
    fi
    echo_success
    echo
    if [ "${ZFS_AUTOMOUNT:-0}" -eq 1 ] ; then
        echo -n $"Mounting zfs partitions: "
        sleep 1
        rm -f /var/lib/random-seed
        zfs mount -a
        zfs_mount_retval=$?
        if [ $zfs_mount_retval -eq 0 ]; then
            echo_success
        else
            echo_warning
            echo "zfs mount failed with code $zfs_mount_retval"
        fi
        echo
    fi
    # if [ -x /nonexistent -a -x /usr/bin/renice ] ; then # DISABLED
    #     log_action_begin_msg "Increasing ZFS-FUSE priority"
    #     /usr/bin/renice -15 -g $PID > /dev/null
    #     ES_TO_REPORT=$?
    #     if [ 0 = "$ES_TO_REPORT" ] ; then
    #         log_action_end_msg 0
    #     else
    #         log_action_end_msg 1 "code $ES_TO_REPORT"
    #         exit 3
    #     fi
    #     true
    # fi
    return $exec_retval
}
stop() {
    status_quiet || return 0
    [ -x $exec ] || { echo "$prog binary not present or executable" ; exit 5 ; }
    PID=`pidofproc $prog`
    if [ "$PID" != "" ] ; then
        echo -n "Syncing disks"
        sync
        echo_success
        echo
        echo -n "Unmounting ZFS filesystems"
        zfs unmount -a
        ES_TO_REPORT=$?
        if [ 0 = "$ES_TO_REPORT" ] ; then
            echo_success
        else
            echo_warning
            exit 3
        fi
        echo
    fi
    echo -n $"Stopping $prog: "
    killproc $prog
    kill_retval=$?
    echo
    if [ "$PID" != "" ] ; then
        echo -n "Syncing disks again"
        sync
        echo_success
        echo
    fi
    return $kill_retval
}
restart() {
    stop
    start
}

pool_status() {
    # run checks to determine if the service is running or use generic status
    status $prog && /usr/bin/zpool status
}

status_quiet() {
    pool_status >/dev/null 2>&1
}
case "$1" in
start)
status_quiet && exit 0
$1
;;
stop)
$1
;;
restart)
restart
;;
reload)
status_quiet || exit 7
restart
;;
force-reload)
restart
;;
status)
pool_status
;;
condrestart|try-restart)
status_quiet || exit 0
restart
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
exit 2
esac
exit $?
--
With kind regards,
Uwe Kubosch
Kubosch Consulting
Norway
Glad to hear that. I am hungering for feedback :)
> It mostly works fine for me.
>
> One problem is that /etc/fstab filesystems get mounted before zfs-fuse
> filesystems, and thus it isn't possible to have e.g. an ext3 partition
> automatically mounted at /tank/ext3 if /tank is zfs.
Do you do that now? I am keen on hearing stories about how zfs-fuse is
actually used today. If you actually mount ext3 file systems below zfs
file systems today, I'd like to hear why :)
> A possible solution might be to provide fstab entries for zfs
> filesystems, or even just a mount all zfs filesystems fstab entry that
> can be prioritized using the standard fstab prioritization methods.
Can you give me an example of this? Wouldn't mounting zfs file systems
first prevent you from mounting zfs file systems below ext3 file
systems?
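For instance, do you mean a hypothetical entry along these lines? As
far as I know this syntax does not exist today; it is only meant to
illustrate where zfs would sit in the fstab ordering:

  # /etc/fstab -- imaginary "mount all zfs filesystems here" entry
  zfs#all  none  fuse  defaults  0 0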
> Overall though, it is very nice to have a zfs-fuse rpm.
> Thanks,
> Greg
...and thank you! I am using zfs-fuse myself for some simple setups and
I would love to hear from more advanced users what it is actually
capable of.
I'd love for the wiki to be open and to fill up with pages for specific
uses, like your example of mounting an ext3 file system below a zfs
file system.
BTW. the latest RPM is pending signing which is a manual process, but as
soon as someone does the signing, it will appear on the updates-testing
repository. I'll push it to stable after that.
--
With kind regards,
Uwe Kubosch
Kubosch Consulting
Norway
I wondered if the use of fstab you mentioned is something you do today,
or if it was a suggestion for a feature.
> Then again, I don't have a need for any of this. I usually symlink like
> this:
> ln -sfvn /media/scratchlvm/kdestage ~/kdecheckout
That is what I really wondered: whether you actually mount "native"
file systems below zfs file systems today and just needed help
automating it. I only do packaging for Fedora, including init scripts
and such.
Seems like automating nested mounts using fstab requires new features in
fuse, so I'm not your guy there :)
> where kdestage is something fairly flexible (like live-resizeable xfs
> on lvm).
> You'd have the option of live upgrading that volume to raid, growing the
> scratchlvm fs with xfs_grow... all without even changing a single
> mount-point let alone sequence.
Does xfs offer data reliability through checksums and automatic repair?
> PS. On a related note, of course reiserfs is quick in certain cases, but
> i'd venture that JFS is faster and more versatile when doing mass
> (parallel) builds (it's my day-job). Don't try too much make clean on
> reiser, e.g.
Do any of these offer checksumming and repair?
BTW, xfs has write barrier support (given the proper block device
layer). This is one feature most (all?) other Linux filesystems lack.
btrfs is drafted to be the zfs-killer on Linux (native kernel driver,
self-healing, checksummed objects and metadata); on top of zfs's
current features it promises to be SSD-aware (incl. smart journaling).
Of course, btrfs is incredibly beta at this point.
So, let's all believe in, adore and abide by zfs. :)
> btrfs is drafted to be the zfs-killer on Linux (native kernel driver,
> self-healing, checksummed objects and metadata); on top of zfs's
> current features it promises to be SSD-aware (incl. smart journaling).
It already is SSD-aware; there is an ssd mount flag. Seekwatcher results here:
http://oss.oracle.com/~mason/seekwatcher/pm-compare.png
Explanation here:
http://oss.oracle.com/pipermail/btrfs-devel/2008-February/000513.html
It's planned to eventually detect SSDs and automate that.
> Of course, btrfs is incredibly beta at this point.
It's not even alpha yet; the on-disk format is still evolving and it
doesn't handle ENOSPC yet. ;-)
That hasn't stopped me using it as /home on my work Dell E4200 with a
128GB SSD. I've made two partitions and used btrfs's built-in mirroring
in case I get SSD errors on part of the disk. Working fine so far! ;-)
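(For the curious, the setup is roughly this; the partition names here
are just examples, not the exact ones from my machine:

  # mirror both data and metadata across the two partitions
  mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sda3

so a checksum failure on one partition can be repaired from the other.)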
> So, let's all believe in, adore and abide by zfs. :)
I'm still using it for backups, but I'm afraid I don't think it has a
long-term future any more. :-(
cheers,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP
I value that opinion. I've read some of your blog posts on filesystems,
so I know that you know your stuff. I've never been able to get started
with btrfs on Ubuntu yet (I get nothing but errors even making the
fs... :().
However, since you are reporting interesting results, I might reconsider
sooner than I was planning (I had slipped that schedule to around 2010
after my latest disappointment).
In the meantime, I'll still be using ZFS for my local, reliable
backups. My remote/offline copies will simply stay unchecked (although I
assume some kind of redundant RAID config at my hosting locations)
because of practical concerns (and the money, as always).
Cheers,
Seth
Hi Seth,
> Thanks Chris!
Not a problem. Note that I'm not being down on ZFS itself; it's just
that, with Riccardo being the only maintainer, his day job taking up all
his time (as it should!), and him knowing the code best, it will take
far longer to develop than kernel-based filesystems, which tend to get
more people involved.
> I value that opinion. I've read some of your blog posts on
> filesystems, so I know that you know your stuff. I've never been able
> to get started with btrfs on Ubuntu yet (I get nothing but errors even
> making the fs... :().
Ouch! That's enough to put anyone off. :-) The version I used from git
didn't have any issues (except the space for a RAID filesystem is misleadingly
large because, under btrfs, every filesystem chunk can have its own RAID
level).
> However, since you are reporting interesting results, I might
> reconsider sooner than I was planning (I had slipped that schedule to
> around 2010 after my latest disappointment).
I'm being brave using it on /home purely because I don't trust SSDs
yet, after reading Val Henson's blog on them. I'm not suggesting anyone
start playing with it in anger until they've got the disk format nailed
down.
> In the meantime, I'll still be using ZFS for my local, reliable
> backups.
Same here - and I've had to use them (once) to recover a file deleted due to
finger trouble, so they do work. ;-)
I also don't see myself changing from ZFS for this for a long time;
there's a lot to be said for diversity in backup mechanisms.
> My remote/offline copies will simply stay unchecked (although I assume
> some kind of redundant RAID config at my hosting locations) because of
> practical concerns (and the money, as always).
Of course!
All the best,