ZFS vs BTRFS


Durval Menezes

Aug 14, 2012, 6:29:30 AM
to zfs-...@googlegroups.com

Hi folks,

Looks like a good, detailed comparison (even if the author seems somewhat biased towards ZFS):  http://rudd-o.com/linux-and-free-software/ways-in-which-zfs-is-better-than-btrfs

Cheers,
--
   Durval.

On Aug 12, 2012 12:05 AM, "Durval Menezes" <durval....@gmail.com> wrote:
Hello folks,

On Sat, Aug 11, 2012 at 8:03 PM, ljlj <luis.jo...@gmail.com> wrote:
I have greatly valued the ability to use ZFS via FUSE.
I would even say that without the zfs-fuse project I would have not bothered with ZFS at all.
For years now I've had a storage system that was reliable and performed well for my purposes.
Also true is the fact that I have learned a great deal from the response of the zfs-fuse guys on this list/group.

So, I second those thanks :)

I third that!

zfs-fuse simply rocks, and I dare to say that there would be no zfsonlinux project (at least not for me) if not for zfs-fuse.

Cheers,
--
   Durval.
 
 


On Thursday, August 9, 2012 6:26:09 PM UTC+1, Emmanuel Anne wrote:
My good news of the day! :)

2012/8/9 Daniel Smedegaard Buus <danie...@gmail.com>
You can never say thank you too many times for something good.

Been so happy using ZFS-FUSE for years, and now with ZoL quickly improving yet not quite stable, it's so nice to be able to go back to ZFS-FUSE to get stability for my pool when ZoL fails me.

Thank you for taking the first hard steps in bringing ZFS to Linux, and for doing it so god naggit well!

You're beautiful people :)


Emmanuel Anne

Aug 14, 2012, 5:09:36 PM
to zfs-...@googlegroups.com
Arf, of course, it's rudd-o, the main site where zfs-fuse was (still is?) hosted, so it's biased! :)

2012/8/14 Durval Menezes <durval....@gmail.com>

Durval Menezes

Aug 14, 2012, 8:03:25 PM
to zfs-...@googlegroups.com
Hi Emmanuel,

On Tue, Aug 14, 2012 at 6:09 PM, Emmanuel Anne <emmanu...@gmail.com> wrote:
Arf, of course, it's rudd-o, the main site where zfs-fuse was (still is?) hosted, so it's biased! :)

LOL, I didn't know! Now that explains everything! :-)

Cheers,
--
   Durval.
 

FnordMan

Aug 15, 2012, 10:40:03 AM
to zfs-...@googlegroups.com


On Tuesday, August 14, 2012 5:29:30 AM UTC-5, Durval Menezes wrote:

Hi folks,

Looks like a good, detailed comparison (even if the author seems somewhat biased towards ZFS):  http://rudd-o.com/linux-and-free-software/ways-in-which-zfs-is-better-than-btrfs

Cheers,
--
   Durval.

There's one big difference: Btrfs is getting better with time. ZFS is sorta stagnant at the moment.  
As soon as it's got RAID-5 support I'm planning on converting my arrays over from ZFS, as it looks like ZFS is sorta dead thanks to Oracle.

Bud Bundy

Aug 15, 2012, 10:56:05 AM
to zfs-...@googlegroups.com
I played with btrfs for a little bit:
- It's not stable enough. They display a warning when you install it; it's just an alpha or whatever. I've had a pool disappear for no reason. The pool mounts from one device but fails to mount from another device (you should be able to mount btrfs by pointing it at any device in the pool - see the sketch after this list).
- The JBOD mode is useless. I had two 500 GB drives, and I added a 2 TB drive to the pool. My available space did not increase by anywhere near 2 TB. It seems to be inherent in how btrfs allocates space. You can google this problem. Some references: http://serverfault.com/questions/213861/multi-device-btrfs-filesystem-with-disk-of-different-size
- No scrub built in. You have to run an externally developed script which effectively reads all files to /dev/null; if there's a problem, btrfs complains to syslog.
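For reference, here's roughly the kind of multi-device setup I mean, with made-up device names (a sketch, not my exact commands):

# one filesystem spanning several devices
mkfs.btrfs -d single /dev/sdb /dev/sdc /dev/sdd
# let the kernel discover all the members, then mount via any one of them
btrfs device scan
mount /dev/sdc /mnt/pool
# see how space actually gets allocated per device
btrfs filesystem show
btrfs filesystem df /mnt/pool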

Of course most of these things will be fixed. So maybe in a year's time I'll switch over. But for now I'm really, really happy with zfsonlinux.


Fajar A. Nugraha

Aug 15, 2012, 11:08:08 AM
to zfs-...@googlegroups.com
On Wed, Aug 15, 2012 at 9:56 PM, Bud Bundy <bud...@gmail.com> wrote:
> - no scrub built in.

There is :)

$ btrfs sc
usage: btrfs scrub <command> [options] <path>|<device>

btrfs scrub start [-Bdqr] <path>|<device>
Start a new scrub
btrfs scrub cancel <path>|<device>
Cancel a running scrub
btrfs scrub resume [-Bdqr] <path>|<device>
Resume previously canceled or interrupted scrub
btrfs scrub status [-dR] <path>|<device>
Show status of running or finished scrub


Depending on what distro/version you use, it might not be in your
distro kernel/tools yet though.
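Typical usage is just something like this (the mountpoint is an example):

# kick off a scrub in the background, then poll its progress
btrfs scrub start /mnt/pool
btrfs scrub status /mnt/pool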

> Of course most of these things will be fixed.

For me it's qgroups and send/receive. I've seen the patches (and they have probably been merged upstream), but they're definitely not in mainstream distros yet.
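Judging from the patches, the interface should look roughly like this once it lands (names are examples, and the syntax may still change):

# quota groups
btrfs quota enable /mnt/pool
btrfs qgroup show /mnt/pool
# send/receive works from a read-only snapshot
btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data-backup
btrfs send /mnt/pool/data-backup | btrfs receive /mnt/backup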

> So maybe in a year's time I'll
> switch over.

next Ubuntu LTS, perhaps :)

--
Fajar

Emmanuel Anne

Aug 17, 2012, 4:20:45 PM
to zfs-...@googlegroups.com
2012/8/15 Bud Bundy <bud...@gmail.com>

- The JBOD mode is useless. I had two 500 GB drives, and I added a 2 TB drive to the pool. My available space did not increase by anywhere near 2 TB. It seems to be inherent in how btrfs allocates space. You can google this problem. Some references: http://serverfault.com/questions/213861/multi-device-btrfs-filesystem-with-disk-of-different-size
 
Thanks for the link, very informative. I didn't know about the metadata raid1 default.
I'd say that's the main drawback of btrfs for me so far: sometimes you find out, after using it for a very long time, that you should have used some other settings in the mkfs.btrfs call, except they can't be changed afterwards unless you recreate everything... Very time-consuming and annoying.
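For example, the data and metadata profiles get picked right at creation time, something like this (devices made up):

# -m sets the metadata profile, -d the data profile
mkfs.btrfs -m raid1 -d single /dev/sdb /dev/sdc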

That aside, it's been very stable for me; I've never had any problem. The fact that Linus uses it daily helped me decide it was not so alpha after all (even if he probably has a ton of backups!).
To summarize, I'd say btrfs is better at the low level (faaast!), but zfs is better at the high level (easier to use, many more features).
You have to be sure about the features you'll need when you switch.

Christopher Chan

Sep 8, 2012, 12:52:50 AM
to zfs-...@googlegroups.com

There's one big difference: Btrfs is getting better with time. ZFS is sorta stagnant at the moment.  
As soon as it's got RAID-5 support I'm planning on converting my arrays over from ZFS, as it looks like ZFS is sorta dead thanks to Oracle.


What do you mean, as soon as ZFS gets RAID-5 support? ZFS has had something better than RAID-5 from the very beginning: raidz. That's RAID-5 without the write-hole issue. You want RAID-6? raidz2.

/me running a 9-disk raidz2 array with a tenth disk sitting there ready as a hot spare.
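For anyone curious, that kind of pool is a one-liner (device names made up):

# 9-disk raidz2 plus a hot spare
zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj spare sdk
zpool status tank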

Durval Menezes

Sep 8, 2012, 7:34:22 AM
to zfs-...@googlegroups.com
HI Christopher,

Obviously "it" in the OP's sentence ("As soon as it's got RAID-5 [...]") means BTRFS, not ZFS. 

Relax and watch the blinking lights ;-)

Cheers,
-- 
   Durval.


Christopher Chan

Sep 8, 2012, 10:04:41 AM
to zfs-...@googlegroups.com
Argh, somehow I read "from" as "to"

ZFS is still being worked on, no? Surely someone from Garrett D'Amore's or Alasdairr's camps is working on ZFS?

Christopher

Durval Menezes

Sep 8, 2012, 10:26:37 AM
to zfs-...@googlegroups.com

Hi Christopher,

As recently announced here, development on zfs-fuse has basically stopped, but development of zfsonlinux continues at a steady pace.

Cheers,
--
   Durval.

Emmanuel Anne

Sep 8, 2012, 11:58:28 AM
to zfs-...@googlegroups.com
Yeah, but to be more precise: Oracle closed the main dev site for ZFS (more precisely, it went closed source). Another site opened to try to replace it, but it's probably not the same (even if I haven't even tried to test their work, I must say).
I think at the time Oracle said they would continue to release the patches, but with a delay of about 3 months. That was a year or so ago, and I've never heard about those patches again since!
They clearly want to slow down or kill any open-source effort around ZFS.

2012/9/8 Durval Menezes <durval....@gmail.com>




Durval Menezes

Sep 8, 2012, 9:25:55 PM
to zfs-...@googlegroups.com
Hi Emmanuel,

On Sat, Sep 8, 2012 at 12:58 PM, Emmanuel Anne <emmanu...@gmail.com> wrote:
> Yeah, but to be more precise: Oracle closed the main dev site for ZFS (more
> precisely, it went closed source). Another site opened to try to replace it,
> but it's probably not the same (even if I haven't even tried to test their
> work, I must say).
> I think at the time Oracle said they would continue to release the patches,
> but with a delay of about 3 months. That was a year or so ago, and I've never
> heard about those patches again since!
> They clearly want to slow down or kill any open-source effort around ZFS.

I know I'm preaching to the choir here, but that's exactly the beauty of an open-source project: even if the creator goes kaput and someone else much less "open-minded" (pun intended) gets control of the project, it's well-nigh impossible to close it up again: if worst comes to worst, the community will simply fork the last open version and continue developing it.

Can't Oracle see that this kind of attitude is not only ineffective but also guaranteed to damage their reputation with the open-source community?

Cheers,
--
Durval.

Christopher Chan

Sep 9, 2012, 6:00:54 AM
to zfs-...@googlegroups.com
Oh, I'm using illumos with openindiana, so all I know is that they implemented dedup and maybe some other stuff.

I guess there is no dedup in zfs-fuse?

Richard.Elling

Sep 9, 2012, 11:05:55 AM
to zfs-...@googlegroups.com
ZFS development outside of Oracle is very much alive! Many of the
original ZFS developers are now at Delphix and regularly contribute
code back to the community. Furthermore, next month in San Francisco
(at the same time as Oracle Open World :-) the illumos Foundation is
hosting a ZFS Day celebration and tech conference. Everyone is invited.
See www.zfsday.com for more info.

-- richard

Emmanuel Anne

Sep 9, 2012, 12:08:58 PM
to zfs-...@googlegroups.com
2012/9/9 Christopher Chan <feizho...@gmail.com>

Oh, I'm using illumos with openindiana, so all I know is that they implemented dedup and maybe some other stuff.

I guess there is no dedup in zfs-fuse?

Been on another planet lately? Dedup has been here for years now! ;-) (And actually that's the last reason I still have a zfs-fuse pool today!)

Christopher Chan

Sep 10, 2012, 8:09:00 AM
to zfs-...@googlegroups.com
On Mon, Sep 10, 2012 at 12:08 AM, Emmanuel Anne <emmanu...@gmail.com> wrote:
2012/9/9 Christopher Chan <feizho...@gmail.com>
Oh, I'm using illumos with openindiana, so all I know is that they implemented dedup and maybe some other stuff.

I guess there is no dedup in zfs-fuse?

Been on another planet lately? Dedup has been here for years now! ;-) (And actually that's the last reason I still have a zfs-fuse pool today!)

You could say so :-p

I only checked this email account lately and saw zee zfs vs btrfs thread.

So thanks for educating this alien on the current status of zfs-fuse. I'll stick with openindiana, thanks.

Oracle being what it is... somehow I don't think it's a good idea to depend on an Oracle project. Look at what they have done with MySQL, OpenSolaris, OpenOffice and ZFS. Once they have you hooked... I dunno.

Emmanuel Anne

Sep 10, 2012, 8:13:35 AM
to zfs-...@googlegroups.com
It's not bad for everyone: it allowed LibreOffice to be created, and LibreOffice seems healthier than OpenOffice ever was! (In particular, it gets a lot of patches from the community, whereas OpenOffice got patches almost only from Sun.)
For MySQL I don't know, I didn't follow that very closely; it still works for now, anyway.
But you forgot Java, which has been a huge mess lately! ;-)
For ZFS, well, for now it seems maintained, so I guess it's ok...!

So things are not so bad!

2012/9/10 Christopher Chan <feizho...@gmail.com>


Durval Menezes

Sep 11, 2012, 4:19:30 PM
to zfs-...@googlegroups.com
Hi emmanuel, Christopher  and others,

Regarding Oracle's position on ZFS (and other technologies inherited from Sun), here's a nice article: http://www.serverwatch.com/server-trends/oracle-pushing-forward-on-linux-and-solaris.html

Regarding ZFS specifically, here's what Wim Coekaerts, a Senior VP at Oracle, has declared:
      "According to Coekaerts, porting ZFS to Linux involves a non-optimal approach that is not native. As such, there is likely not a need to attempt to bring ZFS to Linux since Btrfs is now around to fit the bill. "

Well, as long as they don't try to interfere and stop others like the ZFSOnLinux folks from doing so, I for one am happy enough with Oracle's decision to get out of the way with regard to porting ZFS to Linux.

Cheers,
--
   Durval.

Emmanuel Anne

Sep 11, 2012, 4:33:54 PM
to zfs-...@googlegroups.com
Maybe they don't even know about ZoL for now! ;-)

2012/9/11 Durval Menezes <durval....@gmail.com>

Durval Menezes

Sep 11, 2012, 5:04:49 PM
to zfs-...@googlegroups.com
Hello Emmanuel,

On Tue, Sep 11, 2012 at 5:33 PM, Emmanuel Anne <emmanu...@gmail.com> wrote:
Maybe they don't even know about ZoL for now! ;-)

Who knows? But if that's indeed the case, maybe the old proverb applies and *their* ignorance is in fact *our* bliss....  :-)

Cheers,
--
   Durval.

 

Durval Menezes

Sep 24, 2015, 9:24:15 AM
to zfs-fuse, Emmanuel Anne
Howdy Emmanuel, FnordMan and Budric (if you are still listening here),

On Friday, August 17, 2012 at 5:20:45 PM UTC-3, Emmanuel Anne wrote:
2012/8/15 Bud Bundy <bud...@gmail.com>

- The JBOD mode is useless. I had two 500 GB drives, and I added a 2 TB drive to the pool. My available space did not increase by anywhere near 2 TB. It seems to be inherent in how btrfs allocates space. You can google this problem. Some references: http://serverfault.com/questions/213861/multi-device-btrfs-filesystem-with-disk-of-different-size
 
Thanks for the link, very informative. I didn't know about the metadata raid1 default.
I'd say that's the main drawback of btrfs for me so far: sometimes you find out, after using it for a very long time, that you should have used some other settings in the mkfs.btrfs call, except they can't be changed afterwards unless you recreate everything... Very time-consuming and annoying.

That aside, it's been very stable for me; I've never had any problem. The fact that Linus uses it daily helped me decide it was not so alpha after all (even if he probably has a ton of backups!).
To summarize, I'd say btrfs is better at the low level (faaast!), but zfs is better at the high level (easier to use, many more features).
You have to be sure about the features you'll need when you switch.

It's been three years and change... care to post an update on how your use of btrfs vs zfs is going?

Cheers, 
-- 
   Durval.

Budric Bundy

Sep 25, 2015, 4:41:48 PM
to zfs-...@googlegroups.com, Emmanuel Anne
I started using btrfs on single drives for the main system - my laptop and desktop PC. It's easily supported as a /boot partition and provides snapshot features. I don't have many complaints and haven't lost my data (so far). One minor complaint: I can't use it to store swap files.

For RAID5 (RAIDZ) volumes I still use ZFS. It just works and I haven't had a big reason to switch. I don't feel that confident in BTRFS RAID5. The wiki says it's stable but may have inconsistent parity on power failure: https://btrfs.wiki.kernel.org/index.php/RAID56 , and the RAID code seems to still be getting routine RAID5 fixes: http://www.phoronix.com/scan.php?page=news_item&px=Btrfs-For-Linux-4.3 . I don't know if ZFS suffers from the same corner-case issue.

Although I have run into an issue with zfs send/receive. I do backups by taking a snapshot on the main server and doing incremental or full sends.
At one point my backup machine, which ran the same Ubuntu OS as the main server, died; I replaced it with Arch and AUR zfs-git, imported the pool (and ran a scrub). But then zfs send/receive failed with a stream checksum error, for both incremental and full backups.
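For context, the backup flow is roughly this (pool/dataset/host names are examples):

# take a snapshot on the main server
zfs snapshot tank/data@2015-09-25
# full send to the backup box
zfs send tank/data@2015-09-25 | ssh backup zfs receive -F backup/data
# or an incremental send against the previous snapshot
zfs send -i tank/data@2015-09-24 tank/data@2015-09-25 | ssh backup zfs receive backup/data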

I ran memtest86+, scrubbed both pools, and checked that the pool and filesystems are the same version. The only thing I can see that's different is ashift=12 on one system and 9 on the other, but sends were working before when the destination was an Ubuntu machine. This led me to search and find a few similar posts (for example https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/M69pMAEZekY). It seems that, in general, you shouldn't store ZFS send dumps (for example on tape), because you're not guaranteed to be able to import them back in - either because of data degradation while on tape, or version changes in the system.

The problem seems to have gone away with ZFS versions 0.6.5.1-1~trusty and zfs-git 0.6.5.1_r0_g159270e_4.1.6_1-1 on the two systems.



Durval Menezes

Sep 25, 2015, 5:49:37 PM
to zfs-...@googlegroups.com, Emmanuel Anne
Hello Budric, 

Thanks for the detailed status report! More below: 

On Fri, Sep 25, 2015 at 5:41 PM, Budric Bundy <bud...@gmail.com> wrote:
I started using btrfs on single drives for the main system - my laptop and desktop PC. It's easily supported as a /boot partition and provides snapshot features. I don't have many complaints and haven't lost my data (so far).

This is encouraging (I guess ;-)).
 
One minor complaint: I can't use it to store swap files.

I don't think you can do that with ZFS either (but I might be wrong). 
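The workaround people usually mention is swap on a zvol; I haven't tried it myself, and it reportedly has its own caveats, but it looks something like this (names and sizes made up):

zfs create -V 4G -b 4K -o compression=off -o sync=always tank/swap
mkswap /dev/zvol/tank/swap
swapon /dev/zvol/tank/swap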
 
For RAID5 (RAIDZ) volumes I still use ZFS. It just works and I haven't had a big reason to switch. I don't feel that confident in BTRFS RAID5. The wiki says it's stable but may have inconsistent parity on power failure: https://btrfs.wiki.kernel.org/index.php/RAID56 , and the RAID code seems to still be getting routine RAID5 fixes: http://www.phoronix.com/scan.php?page=news_item&px=Btrfs-For-Linux-4.3 . I don't know if ZFS suffers from the same corner-case issue.

This seems to be the classic RAID5 write-hole issue, which is inherently a design failure of RAID5 itself; ZFS does *not* suffer from it; in fact, RAIDZ was designed from the start to be immune to it (and that's why it's called RAIDZ and not RAID5). Amazing that BTRFS hasn't solved it from the start too; it's a well-known (and well-documented) issue with a known solution for COW filesystems.

BTW: a "power off" issue doesn't happen only at power off, but at any instant the kernel drivers are unable to write to the disk, for example if the controller goes south or the kernel crashes hard. Then you'd be exposed to a write hole too.
 
Although I have run into an issue with zfs send/receive. I do backups by taking a snapshot on the main server and doing incremental or full sends.
At one point my backup machine, which ran the same Ubuntu OS as the main server, died; I replaced it with Arch and AUR zfs-git, imported the pool (and ran a scrub). But then zfs send/receive failed with a stream checksum error, for both incremental and full backups.

Seems like a zfs stream version issue. Have you checked it with zstreamdump, and compared the result against a stream generated on your destination platform?
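Something like this, on both ends (dataset name is an example):

# inspect the stream's records and checksums without receiving it
zfs send tank/data@snap | zstreamdump
# or more verbosely
zfs send tank/data@snap | zstreamdump -v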
 

I ran memtest86+,

memtest86+ is a lousy checker, in my experience. Much better is to leave dledford's memtest or similar running for a night or three. Burning in servers with it before putting them into production, or when weird things have happened, has detected marginal memory DIMMs (and even motherboards) more often than not.
 
scrubbed both pools, and checked that the pool and filesystems are the same version. The only thing I can see that's different is ashift=12 on one system and 9 on the other, but sends were working before when the destination was an Ubuntu machine.

Yep, different ashifts wouldn't do that. 
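If you want to double-check anyway, zdb shows the ashift of every imported pool, e.g.:

zdb | grep ashift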
 
This led me to search and find a few similar posts (for example https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/M69pMAEZekY). It seems that, in general, you shouldn't store ZFS send dumps (for example on tape), because you're not guaranteed to be able to import them back in - either because of data degradation while on tape, or version changes in the system.

If you do have to store zfs send dumps somewhere instead of applying them directly, I would store a sha256 sum along with each one (to be able to check later whether it got corrupted). But the recommendation above is sound; the best approach is to apply them immediately.
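Something along these lines (paths are just examples):

# save the stream and record its checksum
zfs send tank/data@snap > /backup/data-snap.zfs
sha256sum /backup/data-snap.zfs > /backup/data-snap.zfs.sha256
# ... later, verify before trying to receive it
sha256sum -c /backup/data-snap.zfs.sha256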
 
The problem seems to have gone away with ZFS versions 0.6.5.1-1~trusty and zfs-git 0.6.5.1_r0_g159270e_4.1.6_1-1 on the two systems.

So perhaps it was a stream incompatibility issue between versions...

Cheers, 
-- 
   Durval.

Gordan Bobic

Sep 26, 2015, 7:24:42 AM
to zfs-...@googlegroups.com, Emmanuel Anne
On Fri, Sep 25, 2015 at 9:41 PM, Budric Bundy <bud...@gmail.com> wrote:
I started using btrfs on single drives for the main system - my laptop and desktop PC. It's easily supported as a /boot partition and provides snapshot features. I don't have many complaints and haven't lost my data (so far). One minor complaint: I can't use it to store swap files.

I'm not sure why I'd even bother considering BTRFS, though. I have ZoL on my 64-bit x86 hardware, including /boot and rootfs. On my ARM hardware I use zfs-fuse for the rootfs. The only minor annoyance with the latter is that file caps don't appear to be supported, so updating a package with files that require them fails, but it isn't that big a deal.

 

For RAID5 (RAIDZ) volumes I still use ZFS. It just works and I haven't had a big reason to switch. I don't feel that confident in BTRFS RAID5. The wiki says it's stable but may have inconsistent parity on power failure: https://btrfs.wiki.kernel.org/index.php/RAID56 , and the RAID code seems to still be getting routine RAID5 fixes: http://www.phoronix.com/scan.php?page=news_item&px=Btrfs-For-Linux-4.3 . I don't know if ZFS suffers from the same corner-case issue.

Although I have run into an issue with zfs send/receive. I do backups by taking a snapshot on the main server and doing incremental or full sends.
At one point my backup machine, which ran the same Ubuntu OS as the main server, died; I replaced it with Arch and AUR zfs-git, imported the pool (and ran a scrub). But then zfs send/receive failed with a stream checksum error, for both incremental and full backups.

Very odd. I have never had such a problem; I regularly do zfs send/receive between different implementations (ZoL to zfs-fuse) and I have never seen it fail to work.

If anybody cares, BTW, I have a zfs-fuse branch on github that adds support for pool v28. I haven't managed to make anything break on it for a few months now, but any testing is welcome.

Gordan

Nikolam

Sep 26, 2015, 3:36:23 PM
to zfs-...@googlegroups.com
Since a year ago or so, BTRFS has been in production, per Red Hat, SUSE, and I think Oracle Linux.
That said, OpenZFS continues to advance ZFS with feature flags contributed by many companies, on illumos, Linux, OS X, and FreeBSD.



Gordan Bobic

Sep 27, 2015, 6:37:43 AM
to zfs-...@googlegroups.com
Just because more people are now confident that BTRFS won't outright eat your data doesn't mean it is anywhere near being even in the same league as ZFS.

Ryan How

Sep 27, 2015, 7:40:34 AM
to zfs-...@googlegroups.com
I've had inconsistent checksum issues on ZFS receive. I rebuilt my server and then it was fine. I don't know what the underlying cause was, but I haven't seen it since.

The only real ZFS problems I've had have been performance (which was self-inflicted, due to not understanding it well enough) and memory, which seems to behave much better in the later versions on Linux.