shortage of inodes on metadata server


Paul Weakliem

Dec 3, 2024, 7:27:07 PM
to fhgfs...@googlegroups.com
Hi all,

We're running out of inodes on our BeeGFS metadata server, which
obviously leads to problems (e.g. new files can't be written to the
BeeGFS filesystem!).  I see two ways to deal with this:

The metadata server is formatted with xfs, and we had originally
allocated 50% of the space to inodes, i.e.

 mkfs.xfs -f -i size=512,maxpct=50 /dev/sdb

and my thought is that using 'xfs_growfs' to increase this percentage
should alleviate the problem (at least for a while), without
having to rebuild/restore from backups - for example:

xfs_growfs -m 80 /data/meta

I'd do this when the system is down and no clients are accessing the
data.  If anyone has run into this inode situation with XFS and dealt
with it this way, I'd be interested to hear any lessons
learned/stories/advice.
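
For anyone following along, something like the following should show the
current usage and imaxpct (assuming the metadata filesystem is mounted at
/data/meta, as in the command above):

# inode usage on the metadata filesystem (IUse% at 100% means exhausted)
df -i /data/meta

# current imaxpct as reported by XFS
xfs_info /data/meta | grep imaxpct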



A second (though riskier) possibility: our setup originally
used metadata mirroring, but we stopped doing that (because of some
other issues it caused).  We had simply turned off the other buddy
metadata server, but never actually unwound the configuration (so it
still shows up in the beegfs-ctl --list* commands as 'offline').  However,
the remaining/running metadata server still has a 'buddydir' directory
with lots of entries, which I assume are the old synced copies from the
other mirror.  There are docs about removing metadata nodes, but they
cover the case where one was accidentally added and has no files
yet.  It seems somewhat reckless to simply remove the files in
'buddydir' on our existing metadata server, but if I'm never going to
turn on the other buddy mirror, will it matter?  Or, more specifically,
is it possible to remove a buddy mirror when the other member of the
mirror is gone?
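
In case it's useful for answering, something like the following should show
the current mirror group and node state (these are standard beegfs-ctl modes;
exact output obviously depends on the version/setup):

# list the metadata buddy mirror groups and their members
beegfs-ctl --listmirrorgroups --nodetype=meta

# show reachability/consistency state of the metadata nodes
beegfs-ctl --listtargets --nodetype=meta --state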




Thanks, Paul





--
California NanoSystems Institute
Center for Scientific Computing
Elings Hall, Room 3231
University of California
Santa Barbara, CA 93106-6105
http://www.cnsi.ucsb.edu http://csc.cnsi.ucsb.edu
(805)-893-4205

Waltar

Dec 4, 2024, 5:55:05 AM
to beegfs-user
Hi Paul,
to your first question: the default value is 25% for filesystems under 1TB, 5% for filesystems under 50TB (which are the most common) and 1% for filesystems over 50TB,
so your sdb looks quite small if you set 50% on the mkfs.xfs command. With xfs_growfs you can move this inode percentage, which acts like a quota, up or down on the fly; the change is instant
and you don't need the system down. We have a DR server for a fileserver which does lots of reflink copies as pseudo snapshots, where we went up from the default 5% to 12%
with no problem at all. I even filled an NVMe with inodes for testing, with the limit set to 100%, and that's no problem either - it's simply full when full - so don't worry at all :-)
To your second question: an offline metadata buddy mirror contains only outdated metadata, which isn't useful anymore at all ...
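
As a sketch (the mount point /srv/dr here is just an example path), the change is a single command on the mounted filesystem and takes effect immediately:

# raise imaxpct from the default 5% to 12% while the filesystem stays mounted
xfs_growfs -m 12 /srv/dr

# verify the new setting
xfs_info /srv/dr | grep imaxpct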

John Hearns

Dec 7, 2024, 3:11:59 PM
to fhgfs...@googlegroups.com
As Waltar says, why are you using so many inodes?
If users are creating millions of small files, I would suggest use of the clue stick.
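
If it helps to track them down, something like this against a client mount
(/mnt/beegfs is just an example path) shows which top-level directories hold
the most entries:

# count entries per top-level directory, largest first
for d in /mnt/beegfs/*/; do
    printf '%10d  %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn | head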
