Hi folks,
Looks like a good, detailed comparison (even if the author seems somewhat biased towards ZFS): http://rudd-o.com/linux-and-free-software/ways-in-which-zfs-is-better-than-btrfs
Cheers,
--
Durval.
Hello folks,

On Sat, Aug 11, 2012 at 8:03 PM, ljlj <luis.jo...@gmail.com> wrote:
I have greatly valued the ability to use ZFS via FUSE. I would even say that without the zfs-fuse project I would not have bothered with ZFS at all. For years now I've had a storage system that was reliable and performed well for my purposes. Also true is the fact that I have learned a great deal from the responses of the zfs-fuse guys on this list/group. So, I second those thanks :)
I third that!
zfs-fuse simply rocks, and I dare say that there would be no zfsonlinux project (at least not for me) if not for zfs-fuse.
Cheers,
--
Durval.
On Thursday, August 9, 2012 6:26:09 PM UTC+1, Emmanuel Anne wrote:
My good news of the day! :)

2012/8/9 Daniel Smedegaard Buus <danie...@gmail.com>:
You can never say thank you too many times for something good. I've been so happy using ZFS-FUSE for years, and now with ZoL quickly improving yet not quite stable, it's so nice to be able to go back to ZFS-FUSE to get stability for my pool when ZoL fails me. Thank you for taking the first hard steps in bringing ZFS to Linux, and for doing it so god naggit well! You're beautiful people :)
To post to this group, send email to zfs-...@googlegroups.com
To visit our Web site, click on http://zfs-fuse.net/
Arf, of course, it's rudd-o: that's the main site where zfs-fuse was (still is?) hosted, so naturally it's biased! :)
--
- the JBOD mode is useless. I had two 500 GB drives, and I added a 2 TB drive to the pool. My available space did not increase by anywhere near 2 TB. It seems to be inherent in how btrfs allocates space. You can google this problem. Some references: http://serverfault.com/questions/213861/multi-device-btrfs-filesystem-with-disk-of-different-size
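For what it's worth, plain `df` is misleading on a multi-device btrfs; the filesystem's own tools show how space is actually allocated per profile. A sketch (the mount point here is made up):

```shell
# List devices and raw sizes of each btrfs filesystem (run as root).
btrfs filesystem show

# Show per-profile allocation; with raid1 data/metadata, every chunk
# must be mirrored across two devices, so a 2 TB drive paired only
# with 500 GB drives cannot be fully used.
btrfs filesystem df /mnt/pool
```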
There's one big difference: Btrfs is getting better with time. ZFS is sorta stagnant at the moment. As soon as btrfs has RAID-5 support I'm planning on converting my arrays over from ZFS, as it looks like ZFS is sorta dead thanks to Oracle.
--
Hi Christopher,
As recently announced here, development on zfs-fuse has basically stopped, but development of zfsonlinux continues at a steady pace.
Cheers,
--
Durval.
-- richard
Oh, I'm using illumos with openindiana, so all I know is that they implemented dedup and maybe some other stuff. I guess there is no dedup in zfs-fuse?
2012/9/9 Christopher Chan <feizho...@gmail.com>:
Oh, I'm using illumos with openindiana, so all I know is that they implemented dedup and maybe some other stuff. I guess there is no dedup in zfs-fuse?

Been on another planet lately? Dedup has been here for years now! ;-) (And actually that's the last reason I still have a zfs-fuse pool today!)
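For anyone who wants to try it, dedup is a per-dataset property, same as on other ZFS platforms; a quick sketch (pool and dataset names are hypothetical):

```shell
# Enable block-level deduplication on one dataset (hypothetical names).
# Caveat: the dedup table lives in RAM/ARC, so it can eat a lot of
# memory on large pools.
zfs set dedup=on tank/backups

# Verify the property, then check the pool-wide dedup ratio.
zfs get dedup tank/backups
zpool get dedupratio tank
```

Dedup only applies to blocks written after it is turned on; existing data is not deduplicated retroactively.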
--
Maybe they don't even know about ZoL yet! ;-)
2012/8/15 Bud Bundy <bud...@gmail.com>
- the JBOD mode is useless. I had two 500 GB drives, and I added a 2 TB drive to the pool. My available space did not increase by anywhere near 2 TB. It seems to be inherent in how btrfs allocates space. You can google this problem. Some references: http://serverfault.com/questions/213861/multi-device-btrfs-filesystem-with-disk-of-different-size
Thanks for the link, very informative. I didn't know about the metadata raid1 default.

I'd say that's the main drawback of btrfs for me so far: sometimes you find out, after using it for a very long time, that you should have used other settings in the mkfs.btrfs call, except they can't be changed afterwards unless you recreate everything. Very time consuming and annoying.

Apart from that it's been very stable for me, never had any problem; the fact that Linus uses it daily helped me decide it was not so alpha after all (even if he probably has a ton of backups!).

To summarize, I'd say btrfs is better at the low level (faaast!), but zfs is better at the high level (easier to use, many more features). You have to be sure about the features you'll need when you switch.
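To make the point concrete, here's the kind of thing you'd want to pin down at mkfs time; the device names and label below are made up:

```shell
# Two-device filesystem: choose the data and metadata profiles
# explicitly instead of taking the defaults (metadata defaults to
# raid1 on multi-device filesystems).
mkfs.btrfs -L mypool -d single -m raid1 /dev/sdb /dev/sdc

# After mounting, check which profiles actually got allocated.
btrfs filesystem df /mnt/mypool
```

(Newer btrfs can convert profiles in place with `btrfs balance start -dconvert=... -mconvert=...`, but that rewrites all the data, so it's still worth deciding up front.)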
--
---
You received this message because you are subscribed to the Google Groups "zfs-fuse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to zfs-fuse+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
I started using btrfs on single drives for the main system, my laptop and desktop PC. It makes for an easily supported /boot partition and provides snapshot features. I don't have many complaints and haven't lost my data (so far).
One minor complaint is that I can't use it to store swap files.
For RAID5 (RAIDZ) volumes I still use ZFS. It just works, and I haven't had a big reason to switch. I don't feel that confident in btrfs RAID5: the wiki says it's stable but may have inconsistent parity after a power failure (https://btrfs.wiki.kernel.org/index.php/RAID56), and the RAID code seems to still be getting routine RAID5 fixes (http://www.phoronix.com/scan.php?page=news_item&px=Btrfs-For-Linux-4.3). I don't know if ZFS suffers from the same corner-case issue.
I have run into an issue with zfs send/receive, though. I do backups by taking a snapshot on the main server and doing incremental or full sends. At one point my backup machine, which ran the same Ubuntu OS as the main server, died; I replaced it with Arch and the AUR zfs-git package, imported the pool, and ran a scrub. But then zfs send/receive failed with a stream checksum error, for both incremental and full backups.
I ran memtest86+, scrubbed both pools, and checked that the pool and filesystem versions match. The only thing I can see that's different is ashift=12 on one system and 9 on the other, but sends were working before, when the destination was an Ubuntu machine.
This led me to search and find a few similar posts (for example https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/M69pMAEZekY). The general lesson seems to be: don't store ZFS send streams (for example on tape), because you're not guaranteed to be able to import them back in, whether because of data degradation while on tape or version changes in the system.
The problem seems to have gone away with ZFS versions 0.6.5.1-1~trusty and zfs-git 0.6.5.1_r0_g159270e_4.1.6_1-1 on the two systems.
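For reference, the backup flow described above, piping the stream straight into `zfs receive` rather than storing the dump; host, pool, and snapshot names are all made up:

```shell
# Take a snapshot on the main server (all names hypothetical).
zfs snapshot tank/data@2015-10-01

# Full send, piped over ssh straight into zfs receive: the stream is
# checksum-verified on receipt, so a bad stream fails now rather than
# when you try to restore a stored dump later.
zfs send tank/data@2015-10-01 | ssh backup zfs receive -F backup/data

# Later: incremental send of only the changes since the last snapshot.
zfs snapshot tank/data@2015-10-08
zfs send -i tank/data@2015-10-01 tank/data@2015-10-08 | ssh backup zfs receive backup/data
```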
--