6.5.5 OOM with btrfs raid6+raid1


Miles Raymond

Jul 4, 2024, 11:34:12 PM
to GnuBee
I started with switching my raid from md+ext4 over to btrfs:
mkfs.btrfs -m raid1 -d raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
/etc/fstab:
LABEL=GNUBEE-DATA /home btrfs defaults,noatime,nofail 0 1
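For anyone trying the same layout, the resulting profiles can be verified after mounting (a sketch; assumes the filesystem is mounted at /home as in the fstab line above, run as root):

```shell
# Show how space is allocated per profile; for the mkfs above this
# should report "Data, RAID6" and "Metadata, RAID1".
btrfs filesystem df /home
```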

Originally I mounted with compress=zstd:9, then tried just compress, then no compression at all, but in all cases rsyncing files to the RAID results in out-of-memory errors, even though the same rsync worked just fine with md+ext4.

I also tried a straight scp with no compression mount option, and that still resulted in OOM errors. As far as I can tell, swap remains unused. In all cases the OOM error seems to be followed by a reboot shortly after, so uptime is always low.

Does anyone have any ideas? I really want to switch to btrfs, but it seems quite unstable. Has anyone else used btrfs with success? What was your configuration?
(attached: syslog)

Miles Raymond

Jul 6, 2024, 1:52:57 AM
to GnuBee
After a bit more experimentation, the memory leak thankfully seems to be limited to the RAID support in btrfs: going back to md raid6 with btrfs on top of it results in no more OOM, even when using compress=zstd:9 and deduplication.
mdadm --create /dev/md/gnubee /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf --level=6 --raid-devices=6
mkfs.btrfs -L GNUBEE-DATA /dev/md/gnubee
/etc/fstab:
LABEL=GNUBEE-DATA /home btrfs defaults,noatime,compress=zstd:9,nofail 0 1
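The state of this stacked setup can be sanity-checked with standard tools (a sketch; device and mount names as in the commands above, run as root):

```shell
# md layer: sync/rebuild status and member disks
cat /proc/mdstat
mdadm --detail /dev/md/gnubee

# btrfs layer: allocation on the mounted filesystem
btrfs filesystem usage /home
```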

Vincent Legoll

Jul 6, 2024, 8:14:43 AM
to Miles Raymond, GnuBee
Hello Miles,

On Sat, Jul 6, 2024 at 5:52 AM Miles Raymond <reuk...@gmail.com> wrote:
> After a bit more experimentation, thankfully the memory leak seems to be limited to an issue with the raid support in btrfs, as going back to the md raid6 and btrfs on top of that results in no more OOM, even when using compress=zstd:9 and deduplication.
> mdadm --create /dev/md/gnubee /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf --level=6 --raid-devices=6
> mkfs.btrfs -L GNUBEE-DATA /dev/md/gnubee
> /etc/fstab:
> LABEL=GNUBEE-DATA /home btrfs defaults,noatime,compress=zstd:9,nofail 0 1

Just curious,
but did you benchmark the differences between ext4, btrfs, and maybe even XFS?

A few fio runs should easily show something.
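For example, a minimal sequential-write run might look like this (hypothetical parameters; writes a 1 GiB test file under /home):

```shell
# Sequential 1 MiB writes, with an fsync at the end so the reported
# bandwidth reflects the disks rather than the page cache.
fio --name=seqwrite --directory=/home --rw=write --bs=1M \
    --size=1G --numjobs=1 --end_fsync=1
```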

Is zstd compression OK given the somewhat weak CPUs / low memory of
the GnuBees?

Thanks

--
Vincent Legoll

Miles Raymond

Jul 7, 2024, 11:55:51 AM
to GnuBee
No, I am not interested in benchmarking these filesystems, because performance is not my primary concern. My main attraction to btrfs is its efficient use of storage through transparent compression, along with the data-integrity benefits it can provide, since I am predominantly using this GnuBee as a backup storage device. I am using Brett Neumeier's 6.5.5 kernel build with zram support, with zram-swap configured at 256MB. With zstd:9, transparent compression seems to use up to 90% CPU and ~400MB of RAM overall while writing new data.

Though from what I recall, with ext4 I was able to write ~10MB/s over the network (I do not recall the CPU and RAM utilization with ext4). With btrfs using zstd:9 compression, writes average ~9MB/s over the network.

Vincent Legoll

Jul 7, 2024, 1:47:11 PM
to Miles Raymond, GnuBee
Hello,

On Sun, Jul 7, 2024 at 3:55 PM Miles Raymond <reuk...@gmail.com> wrote:
> No, I am not interested in benchmarking these filesystems because performance is not my primary concerns. My main attraction to btrfs is the efficient use of storage with transparent compression, along with the data integrity benefits that btrfs can provide, since I am predominantly using this gnubee as a backup storage device. I am using Brett Neumeier's 6.5.5 kernel build with zram support, with zram-swap configured as 256MB. With zstd:9, transparent compression seems to utilize up to 90% cpu and ~400MB RAM overall while writing new data.
OK, what I was wondering about is the use of zram, which is a trade-off: more CPU usage in exchange for virtually more space in RAM. On such a slow CPU, does that trade-off work out?

The benchmarking question was more about whether you had numbers for md RAID + FS versus a FS that does internal RAID, out of curiosity, because filesystems with internal RAID claim to be more optimizable than a stacked setup. I really should try to benchmark that myself someday. And the difference may not even be visible on the GnuBee hardware.
> Though from what I recall, with ext4 I was able to write ~10MB/s over the network. I do not recall the cpu+ram utilization with ext4. With btrfs using zstd:9 compression, this seems to write on average ~9MB/s over the network.

Thanks, these are already interesting numbers: btrfs is a bit slower, but not enough to change the big picture.

--
Vincent Legoll

Miles Raymond

Jul 8, 2024, 1:54:11 AM
to GnuBee
In my experience, zram-swap is always better than using storage-based swap or disabling swap completely. While zstd performs far better on more capable CPUs, lzo is fast enough on weak CPUs and typically compresses data in RAM by at least 50%, and usually better, though this depends heavily on the applications and data occupying memory.
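For reference, a manual zram-swap setup along those lines might look like this (a sketch; the size and swap priority are assumptions, and most distributions ship a zram-swap service that does the equivalent automatically):

```shell
# Create one zram device, select lzo compression (must be set before
# the size), then use it as high-priority swap.
modprobe zram num_devices=1
echo lzo  > /sys/block/zram0/comp_algorithm
echo 256M > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```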

Since the btrfs internal RAID is what caused the memory leak and OOM, I really wouldn't recommend it, at least on the GnuBee. I'm sure I'll try it again soon on a different system, so maybe it is just a bug on the MIPS architecture. I doubt I'll get much time to experiment with btrfs RAID on this device again any time soon, as it takes quite a while to rebuild the md RAID and copy all my data back.

Aaron D Borden

Jul 11, 2024, 12:24:54 AM
to GnuBee, Miles Raymond
FYI, I had some success with plain btrfs + LVM + md RAID10 on the
gnubee-pc1, but abandoned it because I felt the memory constraints were
limiting. Sorry, I don't have specific numbers for you.

There's no published minimum RAM requirement for btrfs (that I can find), but
there is some overhead associated with it. The btrfs-check man page mentions
it can consume 3x the memory of e2fsck.

I found that I wasn't able to run btrfs check without hitting OOM. This was a
deal breaker for me, since it left me doubting I'd be able to repair a corrupt
btrfs filesystem, so I ended up switching back to ext4.

FWIW, RAID 5/6 was never recommended on the GnuBee due to limited resources.
Back in 2017 I asked Larry, the GnuBee creator, about RAID 5/6 support. He
mentioned that RAID 4/5/6 have too much single-threaded overhead (referring to
md). I assume that is still the case? And is it also the case with the btrfs
RAID 5/6 implementation?

Good luck and thanks for sharing your progress with us.


--
Aaron D Borden
Human and hacker