zfs-fuse on Fedora


Uwe Kubosch

Nov 11, 2008, 10:23:54 AM
to zfs-...@googlegroups.com
Hi all!

I have just added zfs-fuse as a package to Fedora!

You can download the RPMs at

http://koji.fedoraproject.org/koji/buildinfo?buildID=69138

If any of you are running Fedora, please test the package and give
feedback at

http://admin.fedoraproject.org/updates/zfs-fuse-0.5.0-3.20081009.fc10

The RPMs have a .fc10 suffix, but should work with Fedora 8 and Fedora 9
as well.

The RPMs are not signed yet, so you will have to confirm installation.
Signing is a manual process that probably won't happen until
Fedora 10 is released on November 25th. In all other regards the
packages are final.
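Installing from the Koji page is just a local install; a minimal sketch, assuming the x86_64 build (adjust the filename to the RPM you downloaded; --nogpgcheck is only needed while the package is unsigned):

  yum --nogpgcheck localinstall zfs-fuse-0.5.0-3.20081009.fc10.x86_64.rpm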

If I get 3 successful tests, the package will progress from testing to
stable :) Please test :)

--
With kind regards,
Uwe Kubosch
Kubosch Consulting
Norway


Greg Martyn

Nov 11, 2008, 12:47:43 PM
to zfs-...@googlegroups.com
Great! Thanks.

I'll test as soon as I can make rawhide boot on my computer... :-/

Greg

donV

Nov 11, 2008, 6:32:42 PM
to zfs-fuse
Excellent!

Romain LE DISEZ

Nov 12, 2008, 5:11:02 AM
to zfs-...@googlegroups.com
Hi Uwe,

Thanks for your work; I hope this package will be in Fedora soon.

I installed it on my Fedora Rawhide, and it works fine!

Could I suggest an improvement? Having the ZFS file systems
automatically mounted on startup would be very cool. Currently, I
execute "zfs mount -a" before logging in as a normal user.

My proposal is to add a parameter in /etc/sysconfig/zfs-fuse:
ZFS_AUTOMOUNT=0|1
(default = 0)

If the user configures ZFS_AUTOMOUNT=1, the init script executes "sleep 2 ;
zfs mount -a" with some checks to display a warning if the mount failed (see
my old init script in the attachment for an example).
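Sketched out, the proposal would look something like this (just an illustration of the idea, not tested code):

  # /etc/sysconfig/zfs-fuse
  ZFS_AUTOMOUNT=1

  # in the init script, after zfs-fuse has started:
  if [ "$ZFS_AUTOMOUNT" = "1" ] ; then
      sleep 2
      zfs mount -a || echo "WARNING: failed to mount ZFS file systems" >&2
  fi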

Yours sincerely.


donV

Nov 14, 2008, 8:47:44 PM
to zfs-fuse
I will look at this after Fedora 10 is out.


donV

Dec 21, 2008, 6:06:39 PM
to zfs-fuse
I have now pushed a new release of zfs-fuse to Fedora 10 testing at

https://admin.fedoraproject.org/updates/zfs-fuse-0.5.0-4.20081221.fc10

Please test.

The package includes the changes you requested. I set automount to be
on by default. When would you not want your zfs filesystems
mounted?
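For anyone who does want them left unmounted, overriding the default in the sysconfig file should do it:

  # /etc/sysconfig/zfs-fuse
  ZFS_AUTOMOUNT=0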


Uwe



Rudd-O

Dec 22, 2008, 12:06:32 PM
to zfs-fuse
I hereby post my Fedora zfsctl init script which does not use any
sleeps or stuff like that, but instead actually *verifies* that ZFS is
running correctly. It also sets a few environment variables up that
make it possible for ZFS-FUSE not to be OOMed or pick up language
files that might throw ZFS-FUSE into an endless loop. Please pick up
the improvements there and add them to your initscript. The point of
the script is mostly the logic, ignore the decorations that are cruft
from Ubuntu.

Two notes you have to pay attention to:

- sysvinit in Fedora sometimes cleans up /var/run; if you start zfs
late enough, no problem.
- sysvinit does a killall -TERM which kills ZFS-FUSE; you might want
to cleanly stop zfs before that happens.

At the moment I run the script late in rc.sysinit and run the stop phase
before the killall in the S01killall script in sysvinit.

--------------------------

cat /sbin/zfsctl
#! /bin/sh

PIDFILE=/.zfs-fuse.pid
LOCKFILE=/var/lock/zfs/zfs_lock

export PATH=/sbin:/bin
unset LANG
ulimit -v unlimited
ulimit -c 512000

log_action_begin_msg() {
    true # echo $*
}

log_action_end_msg() {
    true # echo $*
}

do_start() {
    test -x /sbin/zfs-fuse || exit 0
    PID=`cat "$PIDFILE" 2> /dev/null`
    if [ "$PID" != "" ]
    then
        if kill -0 $PID 2> /dev/null
        then
            echo "ZFS-FUSE is already running"
            exit 3
        else
            # pid file is stale, we clean up shit
            log_action_begin_msg "Cleaning up stale ZFS-FUSE PID files"
            rm -f "$PIDFILE" # /var/run/sendsigs.omit.d/zfs-fuse
            log_action_end_msg 0
        fi
    fi

    log_action_begin_msg "Starting ZFS-FUSE process"
    zfs-fuse -p "$PIDFILE"
    ES_TO_REPORT=$?
    if [ 0 = "$ES_TO_REPORT" ]
    then
        true
    else
        log_action_end_msg 1 "code $ES_TO_REPORT"
        exit 3
    fi

    for a in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
    do
        PID=`cat "$PIDFILE" 2> /dev/null`
        [ "$PID" != "" ] && break
        sleep 1
    done

    if [ "$PID" = "" ]
    then
        log_action_end_msg 1 "ZFS-FUSE did not start or create $PIDFILE"
        exit 3
    else
        log_action_end_msg 0
    fi

    log_action_begin_msg "Immunizing ZFS-FUSE against OOM kills and sendsigs signals"
    # mkdir -p /var/run/sendsigs.omit.d
    # cp "$PIDFILE" /var/run/sendsigs.omit.d/zfs-fuse
    echo -17 > "/proc/$PID/oom_adj"
    ES_TO_REPORT=$?
    if [ 0 = "$ES_TO_REPORT" ]
    then
        log_action_end_msg 0
    else
        log_action_end_msg 1 "code $ES_TO_REPORT"
        exit 3
    fi

    log_action_begin_msg "Mounting ZFS filesystems"
    sleep 1
    rm -f /var/lib/random-seed
    zfs mount -a
    ES_TO_REPORT=$?
    if [ 0 = "$ES_TO_REPORT" ]
    then
        log_action_end_msg 0
    else
        log_action_end_msg 1 "code $ES_TO_REPORT"
        #echo "Dropping into a shell for debugging. Post_mountall pending."
        #bash
        #post_mountall
        exit 3
    fi

    if [ -x /nonexistent -a -x /usr/bin/renice ] ; then # DISABLED
        log_action_begin_msg "Increasing ZFS-FUSE priority"
        /usr/bin/renice -15 -g $PID > /dev/null
        ES_TO_REPORT=$?
        if [ 0 = "$ES_TO_REPORT" ]
        then
            log_action_end_msg 0
        else
            log_action_end_msg 1 "code $ES_TO_REPORT"
            exit 3
        fi
        true
    fi
}

do_stop () {
    test -x /sbin/zfs-fuse || exit 0
    PID=`cat "$PIDFILE" 2> /dev/null`
    if [ "$PID" = "" ] ; then
        # no pid file, we exit
        exit 0
    elif kill -0 $PID 2> /dev/null; then
        # pid file and killable, we continue
        true
    else
        # pid file is stale, we clean up shit
        log_action_begin_msg "Cleaning up stale ZFS-FUSE PID files"
        rm -f "$PIDFILE" # /var/run/sendsigs.omit.d/zfs-fuse
        log_action_end_msg 0
        exit 0
    fi

    log_action_begin_msg "Syncing disks"
    sync
    log_action_end_msg 0

    log_action_begin_msg "Unmounting ZFS filesystems"
    zfs unmount -a
    ES_TO_REPORT=$?
    if [ 0 = "$ES_TO_REPORT" ]
    then
        log_action_end_msg 0
    else
        log_action_end_msg 1 "code $ES_TO_REPORT"
        exit 3
    fi

    log_action_begin_msg "Terminating ZFS-FUSE process gracefully"
    kill -TERM $PID

    for a in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
    do
        kill -0 $PID 2> /dev/null
        [ "$?" != "0" ] && break
        sleep 1
    done

    if kill -0 $PID 2> /dev/null
    then
        log_action_end_msg 1 "ZFS-FUSE refused to die after 15 seconds"
        exit 3
    else
        rm -f "$PIDFILE" # /var/run/sendsigs.omit.d/zfs-fuse
        log_action_end_msg 0
    fi

    log_action_begin_msg "Syncing disks again"
    sync
    log_action_end_msg 0
}

case "$1" in
    start)
        do_start
        ;;
    stop)
        do_stop
        ;;
    status)
        PID=`cat "$PIDFILE" 2> /dev/null`
        if [ "$PID" = "" ] ; then
            echo "ZFS-FUSE is not running"
            exit 3
        else
            if kill -0 $PID
            then
                echo "ZFS-FUSE is running, pid $PID"
                zpool status
                exit 0
            else
                echo "ZFS-FUSE died, PID files stale"
                exit 3
            fi
        fi
        ;;
    restart|reload|force-reload)
        echo "Error: argument '$1' not supported" >&2
        exit 3
        ;;
    *)
        echo "Usage: $0 start|stop|status" >&2
        exit 3
        ;;
esac

:

Uwe Kubosch

Dec 22, 2008, 4:38:36 PM
to zfs-...@googlegroups.com
On Mon, 2008-12-22 at 09:06 -0800, Rudd-O wrote:
> I hereby post my Fedora zfsctl init script which does not use any
> sleeps or stuff like that, but instead actually *verifies* that ZFS is
> running correctly. It also sets a few environment variables up that
> make it possible for ZFS-FUSE not to be OOMed or pick up language
> files that might throw ZFS-FUSE into an endless loop. Please pick up
> the improvements there and add them to your initscript. The point of
> the script is mostly the logic, ignore the decorations that are cruft
> from Ubuntu.

I am looking at this now. I'll get back to you with questions in a
moment.

--
With kind regards,
Uwe Kubosch

Kubosch Consulting
Norway


Uwe Kubosch

Dec 23, 2008, 3:39:42 AM
to zfs-...@googlegroups.com
On Mon, 2008-12-22 at 09:06 -0800, Rudd-O wrote:
> I hereby post my Fedora zfsctl init script which does not use any
> sleeps or stuff like that, but instead actually *verifies* that ZFS is
> running correctly. It also sets a few environment variables up that
> make it possible for ZFS-FUSE not to be OOMed or pick up language
> files that might throw ZFS-FUSE into an endless loop. Please pick up
> the improvements there and add them to your initscript. The point of
> the script is mostly the logic, ignore the decorations that are cruft
> from Ubuntu.

Thanks again for the script. I have added all improvements I could
identify, and included the result below.

> Two notes you have to pay attention to:
>
> - sysvinit in fedora sometimes cleans up /var/run, if you start zfs
> late enough, no problem.

I believe the init scripts run after this, right? Boot priority is 26.

> - sysvinit does a killall -TERM which kills ZFS-FUSE, you might want
> to cleanly stop zfs before that happens

This should already be handled by setting the shutdown priority to 74.
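For reference, those priorities come from the chkconfig header in the script below ("# chkconfig: - 26 74"); registering and enabling the service is the usual Fedora procedure:

  chkconfig --add zfs-fuse
  chkconfig zfs-fuse on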

> PIDFILE=/.zfs-fuse.pid

I added the PIDFILE concept, but in /var/run/zfs-fuse.pid, the standard
location for PID files. Most of it is handled by the standard functions.

> LOCKFILE=/var/lock/zfs/zfs_lock

This LOCKFILE wasn't actually in use, right?

> ulimit -c 512000

This one (maximum size of core files created) is set to 0 by the daemon
function.

> log_action_begin_msg() {
> true # echo $*
> }
>
> log_action_end_msg() {
> true # echo $*
> }

These did nothing, right?

> # /var/run/sendsigs.omit.d/zfs-fuse

This was commented out, but what would it have done?

> log_action_begin_msg "Immunizing ZFS-FUSE against OOM kills
> and sendsigs signals"
> # mkdir -p /var/run/sendsigs.omit.d
> # cp "$PIDFILE" /var/run/sendsigs.omit.d/zfs-fuse

This was commented out, and I could not find anything on this for Fedora. I
guess it is OK to leave out?

> log_action_begin_msg "Mounting ZFS filesystems"
> sleep 1

Why the "sleep 1" here? We have already confirmed that zfs-fuse is
running by waiting for the PIDFILE. I can see that the mount fails if
it is missing, but I'd like to know what we're waiting for.

> rm -f /var/lib/random-seed

Protection against someone planting a known seed, I guess?

> if [ -x /nonexistent -a -x /usr/bin/renice ] ; then # DISABLED
> log_action_begin_msg "Increasing ZFS-FUSE priority"
> /usr/bin/renice -15 -g $PID > /dev/null
> ES_TO_REPORT=$?
> if [ 0 = "$ES_TO_REPORT" ]
> then
> log_action_end_msg 0
> else
> log_action_end_msg 1 "code $ES_TO_REPORT"
> exit 3
> fi
> true
> fi

Why do we need to increase the priority of zfs-fuse?

> do_stop () {

I use functions defined in /etc/init.d/functions. That reduces the
code.

The pidofproc function gets the PID of a program by looking at the pid
file in /var/run and then /proc/<pid>.

The killproc function sends a TERM signal and waits for the process to
die, escalating to a KILL signal if the process has not died within
the given delay. It then removes the pid file.
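So the whole stop path reduces to roughly this sketch:

  . /etc/rc.d/init.d/functions
  PID=`pidofproc zfs-fuse`   # finds the PID via /var/run/zfs-fuse.pid and /proc
  killproc zfs-fuse          # TERM, wait, escalate to KILL, remove the pid file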

> log_action_begin_msg "Syncing disks again"
> sync
> log_action_end_msg 0

Why do we need the second sync?

That's it. Please look at the result below and comment.

---------------

#! /bin/bash
#
# zfs-fuse - startup script for zfs-fuse daemon
#
# chkconfig: - 26 74
# description: zfs-fuse daemon
#
### BEGIN INIT INFO
# Provides: zfs-fuse
# Required-Start: fuse
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start:
# Default-Stop:
# Short-Description: Start the zfs-fuse daemon
# Description: zfs-fuse daemon
### END INIT INFO

# Source function library.
. /etc/rc.d/init.d/functions

prog="zfs-fuse"
exec="/usr/bin/$prog"
config=/etc/sysconfig/$prog

[ -e $config ] && . $config

PIDFILE=/var/run/$prog.pid

unset LANG

start() {
    # brace group, not a subshell, so the "exit 5" actually leaves the script
    [ -x $exec ] || { echo "$prog binary not present or executable"; exit 5; }
    PID=`pidofproc $prog`
    start_status=$?
    case "$start_status" in
        0)
            echo "ZFS-FUSE is already running with pid $PID"
            exit 3
            ;;
        1)
            echo "Cleaning up stale $prog PID file in $PIDFILE"
            rm -f "$PIDFILE"
            ;;
        3)
            # not running
            ;;
        *)
            echo "Huh?"
            exit 99
    esac

    echo -n $"Starting $prog: "
    daemon $exec -p "$PIDFILE"
    exec_retval=$?
    echo
    [ $exec_retval -ne 0 ] && return $exec_retval

    for a in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ; do
        PID=`pidofproc $prog`
        [ "$PID" != "" ] && break
        echo -n "."
        sleep 1
    done

    if [ "$PID" = "" ] ; then
        echo "ZFS-FUSE did not start or create $PIDFILE"
        exit 3
    fi

    echo -n "Immunizing $prog against OOM kills"
    echo -17 > "/proc/$PID/oom_adj"
    ES_TO_REPORT=$?
    if [ "$ES_TO_REPORT" -ne 0 ] ; then
        echo_warning
        echo "code $ES_TO_REPORT"
        exit 3
    fi
    echo_success
    echo

    # quoted so the test doesn't break when ZFS_AUTOMOUNT is unset
    if [ "$ZFS_AUTOMOUNT" = "1" ] ; then
        echo -n $"Mounting zfs partitions: "
        sleep 1
        rm -f /var/lib/random-seed
        zfs mount -a
        zfs_mount_retval=$?
        if [ $zfs_mount_retval = 0 ]; then
            echo_success
        else
            echo_warning
            echo zfs mount failed with code $zfs_mount_retval
        fi
        echo
    fi

    # if [ -x /nonexistent -a -x /usr/bin/renice ] ; then # DISABLED
    #     log_action_begin_msg "Increasing ZFS-FUSE priority"
    #     /usr/bin/renice -15 -g $PID > /dev/null
    #     ES_TO_REPORT=$?
    #     if [ 0 = "$ES_TO_REPORT" ] ; then
    #         log_action_end_msg 0
    #     else
    #         log_action_end_msg 1 "code $ES_TO_REPORT"
    #         exit 3
    #     fi
    #     true
    # fi

    return $exec_retval
}

stop() {
    status_quiet || return 0
    [ -x $exec ] || { echo "$prog binary not present or executable"; exit 5; }
    PID=`pidofproc $prog`
    if [ "$PID" != "" ] ; then
        echo -n "Syncing disks"
        sync
        echo_success
        echo

        echo -n "Unmounting ZFS filesystems"
        zfs unmount -a
        ES_TO_REPORT=$?
        if [ 0 = "$ES_TO_REPORT" ] ; then
            echo_success
        else
            echo_warning
            exit 3
        fi
        echo
    fi

    echo -n $"Stopping $prog: "
    killproc $prog
    kill_retval=$?
    echo

    if [ "$PID" != "" ] ; then
        echo -n "Syncing disks again"
        sync
        echo_success
        echo
    fi

    return $kill_retval
}

restart() {
    stop
    start
}

pool_status() {
    # run checks to determine if the service is running or use generic status
    status $prog && /usr/bin/zpool status
}

status_quiet() {
    pool_status >/dev/null 2>&1
}

case "$1" in
    start)
        status_quiet && exit 0
        $1
        ;;
    stop)
        $1
        ;;
    restart)
        restart
        ;;
    reload)
        status_quiet || exit 7
        restart
        ;;
    force-reload)
        restart
        ;;
    status)
        pool_status
        ;;
    condrestart|try-restart)
        status_quiet || exit 0
        restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
        exit 2
esac
exit $?

--
With kind regards,
Uwe Kubosch

Kubosch Consulting
Norway


donV

Jan 1, 2009, 7:24:52 AM
to zfs-fuse
I have pushed the init script changes to a build at

http://koji.fedoraproject.org/koji/buildinfo?buildID=76410

Could you look at it and tell me how it looks?

I'll push it to updates-testing and stable within a few days.

Greg Martyn

Jan 1, 2009, 2:32:13 PM
to zfs-...@googlegroups.com
Thanks for the rpm. I'm very glad it's available.

It mostly works fine for me.

One problem is that /etc/fstab filesystems get mounted before zfs-fuse
filesystems, and thus it isn't possible to have e.g. an ext3 partition
automatically mounted at /tank/ext3 if /tank is zfs.

A possible solution might be to provide fstab entries for zfs
filesystems, or even just a single "mount all zfs filesystems" fstab entry
that can be ordered using the standard fstab prioritization methods.

Overall though, it is very nice to have a zfs-fuse rpm.

Thanks,
Greg

Uwe Kubosch

Jan 1, 2009, 6:17:54 PM
to zfs-...@googlegroups.com
On Thu, 2009-01-01 at 14:32 -0500, Greg Martyn wrote:
> Thanks for the rpm. I'm very glad it's available.

Glad to hear that. I am hungering for feedback :)

> It mostly works fine for me.
>
> One problem is that /etc/fstab filesystems get mounted before zfs-fuse
> filesystems, and thus it isn't possible to have e.g. an ext3 partition
> automatically mounted at /tank/ext3 if /tank is zfs.

Do you do that now? I am keen on hearing stories about how zfs-fuse is
actually used today. If you actually mount ext3 file systems below zfs
file systems today, I'd like to hear why :)

> A possible solution might be to provide fstab entries for zfs
> filesystems, or even just a mount all zfs filesystems fstab entry that
> can be prioritized using the standard fstab prioritization methods.

Can you give me an example of this? Wouldn't mounting zfs file systems
first prevent you from mounting zfs file systems below ext3 file
systems?

> Overall though, it is very nice to have a zfs-fuse rpm.
> Thanks,
> Greg

...and thank you! I am using zfs-fuse myself for some simple setups and
I would love to hear from more advanced users what it is actually
capable of.

I'd love for the wiki to be open and to fill up with pages for specific
usages, like your example of mounting an ext3 file system below a zfs
file system.

BTW, the latest RPM is pending signing, which is a manual process, but as
soon as someone does the signing, it will appear in the updates-testing
repository. I'll push it to stable after that.

--
With kind regards,
Uwe Kubosch

Kubosch Consulting
Norway


Greg Martyn

Jan 1, 2009, 9:12:39 PM
to zfs-...@googlegroups.com
On Thu, Jan 1, 2009 at 6:17 PM, Uwe Kubosch <u...@kubosch.no> wrote:
> On Thu, 2009-01-01 at 14:32 -0500, Greg Martyn wrote:
>> One problem is that /etc/fstab filesystems get mounted before zfs-fuse
>> filesystems, and thus it isn't possible to have e.g. an ext3 partition
>> automatically mounted at /tank/ext3 if /tank is zfs.
>
> Do you do that now?  I am keen on hearing stories about how zfs-fuse is
> actually used today.  I you actually mount ext3 file systems below zfs
> file systems today, I'd like to hear why :)


I occasionally do KDE development, and end up compiling lots of code from the KDE svn server. The speed of the compilation is much more important to me than data integrity, because even if I lose everything, I can just check out a fresh copy. For this I have /home/kde on a reiserfs partition, while /home is raidz1.



>> A possible solution might be to provide fstab entries for zfs
>> filesystems, or even just a mount all zfs filesystems fstab entry that
>> can be prioritized using the standard fstab prioritization methods.
>
> Can you give me an example for this?  Wouldn't mounting zfs fule systems
> first prevent you from mounting zfs file systems below ext3 file
> systems?


Filesystems are mounted in the order they appear in fstab. I was imagining that each zfs filesystem could be listed like any other filesystem. The "mount all children" entry would simply be for convenience.

Eg:

# device name   mount point     fs-type     options     dump-freq pass-num
LABEL=/         /               ext3        defaults            1 1
/dev/sdb1       /opt            ext3        defaults            0 0
                /home           zfs         children            0 0
/dev/sdc1       /home/kde       ext3        defaults            0 0
tank            /var            zfs         defaults            0 0

The /home line with options=children would mount all zfs filesystems with mountpoints under /home. I might for instance have a zfs fs with mountpoint /home, another at /home/greg and another mounted at /home/sally. This one line would mount all three. Also, if you wanted to mount all available zfs filesystems, you could change /home to /. Options=children would be for convenience so that the user doesn't have to edit fstab every time they add a zfs filesystem. Note that no device was specified, so zfs figured out what to mount on its own.

The /home/kde line mounts an ext3 fs after /home has been mounted.

The /var line mounts the zfs fs with name "tank" at /var as zfs, and doesn't
automatically mount zfs filesystems that are children of /var (because
options=defaults).


Cheers,
Greg

Greg Martyn

Jan 1, 2009, 9:15:07 PM
to zfs-...@googlegroups.com
Oops.. one small mistake: it's probably better to use zfs-fuse as the filesystem name.

donV

Jan 2, 2009, 6:59:06 AM
to zfs-fuse
This is just a thought, right? This doesn't work yet, right?

sghe...@hotmail.com

Jan 2, 2009, 7:55:33 AM
to zfs-...@googlegroups.com
donV wrote:
> This is just a thought, right? This doesn't work, yet, right?
>
What is? If you're talking about nested mounting of 'native' fs-es under
fuse fs-es, I wasn't aware of any limitations. You'd simply have to make
sure (as always) that the mounts occur in the right order. My take on
this is: keep it simple. I'd just manually mount the nested fs-es in my
zfs-fuse init-script.

Then again, I don't have a need for any of this. I usually symlink like
this:

ln -sfvn /media/scratchlvm/kdestage ~/kdecheckout

where kdestage is something fairly flexible (like live-resizeable xfs on lvm).
You'd have the option of live upgrading that volume to raid, or growing the
scratchlvm fs with xfs_growfs... all without even changing a single
mount-point, let alone sequence.

PS. On a related note, of course reiserfs is quick in certain cases, but
I'd venture that JFS is faster and more versatile when doing mass
(parallel) builds (it's my day job). Don't try too many "make clean" runs
on reiser, for example.

Just my 20 pence as always,
Seth


Uwe Kubosch

Jan 2, 2009, 6:39:02 PM
to zfs-...@googlegroups.com
On Fri, 2009-01-02 at 13:55 +0100, sghe...@hotmail.com wrote:
> donV wrote:
> > This is just a thought, right? This doesn't work, yet, right?
> >
> What is? If you're talking about nested mounting of 'native' fs-es under
> fuse fs-es, I wasn't aware of any limitations. You'd simply have to make
> sure (as always) that the mounts occur in the right order. My take on
> this is: keep it simple. I'd just manually mount the nested fs-es in my
> zfs-fuse init-script.

I wondered if the use of fstab you mentioned is something you do today,
or if it was a suggestion for a feature.

> Then again, I don't have a need for any of this. I usually symlink like
> this:
> ln -sfvn /media/scratchlvm/kdestage ~/kdecheckout where

That is what I really wondered: whether you actually mount "native" file
systems below zfs file systems today, and just needed help automating
it. I only do packaging for Fedora, including init scripts and such.

It seems like automating nested mounts using fstab would require new
features in fuse, so I'm not your guy there :)

> kdestage is something fairly flexible (like live-resizeable xfs on lvm).
> You'd have the option of live upgrading that volume to raid, growing the
> scratchlvm fs with xfs_grow... all without even changing a single
> mount-point let alone sequence.

Does xfs offer data reliability through checksums and automatic repair?

> PS. On a related note, of course reiserfs is quick in certain cases, but
> i'd venture that JFS is faster and more versatile when doing mass
> (parallel) builds (it's my day-job). Don't try too much make clean on
> reiser, e.g.

Any of these offer checksumming and repair?


sghe...@hotmail.com

Jan 2, 2009, 6:49:38 PM
to zfs-...@googlegroups.com
Uwe Kubosch wrote:
> [snip]

> Does xfs offer data reliability through checksums and automatic repair?
>
> [snip]

> Any of these offer checksumming and repair?
>
Of course not. We all believe in, adore and abide by zfs. :)
I was thinking along with the guy that mentioned nested mounts in the
first place: he would nest-mount a reiser vol in order to get the best
compile performance while everything else works on superb-yet-slowish
zfs-fuse. I couldn't resist pointing out the obvious candidates.

Btw, xfs has write barrier support (given the proper block device
layer). This is one feature most (all?) other linux fs-es lack.

btrfs is drafted to be the zfs-killer on linux (native kernel driver,
self-healing, checksummed object data and metadata); in addition to zfs's
current features it promises to be SSD-aware (incl. smart journaling).
Of course, btrfs is incredibly beta at this point.

So, let's all believe in, adore and abide by zfs. :)

Chris Samuel

Jan 3, 2009, 2:33:04 AM
to zfs-...@googlegroups.com
On Sat, 3 Jan 2009 10:49:38 am sghe...@hotmail.com wrote:

> btrfs is drafted to be the zfs-killer on linux (native kernel driver,
> self-healing, checksummed object data and metadata); in addition to zfs's
> current features it promises to be SSD-aware (incl. smart journaling).

It already is SSD-aware; there is an ssd mount flag. Seekwatcher results here:

http://oss.oracle.com/~mason/seekwatcher/pm-compare.png

Explanation here:

http://oss.oracle.com/pipermail/btrfs-devel/2008-February/000513.html

It's planned to eventually detect SSDs and automate that.
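For now the flag is just given at mount time; a minimal sketch (device name hypothetical):

  mount -t btrfs -o ssd /dev/sdX /mnt/btrfs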

> Of course, btrfs is incredibly beta at this point.

It's not even alpha yet; the on-disk format is still evolving and it doesn't
handle ENOSPC yet. ;-)

That's not stopped me using it as /home on my work Dell E4200 with a 128GB SSD
drive. I've made two partitions and used btrfs's built-in mirroring in case I
get SSD errors on part of the disk. Working fine so far! ;-)
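For reference, a two-partition mirror like that can be created along these lines (device names hypothetical):

  mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sda3
  mount -t btrfs /dev/sda2 /home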

> So, let's all believe in, adore and abide by zfs. :)

I'm still using it for backups, but I don't think it has a long-term future
any more, I'm afraid. :-(

cheers,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP

signature.asc

sghe...@hotmail.com

Jan 3, 2009, 8:05:00 AM
to zfs-...@googlegroups.com
Thanks Chris!

I value that opinion. I've read some of your blog posts on FS-es, so I
know that you know your stuff. I've never been able to get started with
Btrfs on Ubuntu yet (getting nothing but errors even mk-ing the fs... :().

However, since you are reporting interesting results, I might reconsider
sooner than I was planning (I had slipped that schedule to around
2010 after my latest disappointment).

In the meantime, I'll still be using ZFS for my local, reliable
backups. My remotes/offlines will still simply be unchecked (although I
assume some kind of redundant raid config at my hosting locations) because
of practical concerns (and the money, as always).

Cheers,
Seth

Chris Samuel

Jan 3, 2009, 6:28:16 PM
to zfs-...@googlegroups.com
On Sun, 4 Jan 2009 12:05:00 am sghe...@hotmail.com wrote:

Hi Seth,

> Thanks Chris!

Not a problem. Note that I'm not being down on ZFS itself, just that with
Riccardo being the only maintainer, his day job taking up all his time (as
it should!), and him knowing the code best, it'll take far longer to develop
than kernel-based filesystems, which tend to get more people involved.

> I value that opinion. I've read some of your blog posts on FS-es so I
> know that you know your stuff. I've never been able to get started with
> Btrfs (getting nothing but errors even mk-ing the fs... :() on Ubuntu yet.

Ouch! That's enough to put anyone off. :-) The version I used from git
didn't have any issues (except that the space reported for a RAID filesystem
is misleadingly large because, under btrfs, every filesystem chunk can have
its own RAID level).

> However, since you are reporting interesting results, I might reconsider
> sooner than I was planning on (I had slipped that schedule to around
> 2010 after my latest deception).

I'm being brave using it on /home purely because I don't trust SSDs yet,
after reading Val Henson's blog on them. I'm not suggesting anyone starts
playing with it in anger until they've got the disk format nailed down.

> In the mean time, I'll still be using ZFS for my local - reliable
> backups.

Same here - and I've had to use them (once) to recover a file deleted due to
finger trouble, so they do work. ;-)

I also don't see myself changing from using ZFS for this for a long time;
there's a lot to be said for diversity in backup mechanisms.

> My remotes/offlines will still simply be unchecked (although I assume some
> kind of redundant raid config at my hosting locs) because of practical
> concerns (and the money, as always).

Of course!

All the best,
