Seeking Help: BeeGFS Metadata Partially Lost After Cleanup – Recovery from Intact Storage Data?


Tang Hong

Oct 3, 2025, 9:51:27 AM
to beegfs-user

Hello everyone,

I’m a BeeGFS community user (since 2021) managing a compute cluster for education and research. Last Sunday, users began reporting persistent read/write errors. Investigation revealed that the metadata filesystem had exhausted its inodes. Attempting a quick fix, I followed some problematic online advice and ran cleanup commands that inadvertently removed part of the metadata. While the actual data managed by beegfs-storage remains intact, we have lost approximately 19% of the metadata.

Current Access Situation:

  • Some earlier directories (e.g., /work/software) remain accessible through client mounts
  • However, critical user data directories (e.g., /work/home) are now unavailable
  • The storage data partition confirms user data remains physically intact

Technical Context:

Initial State (inode exhaustion):

[root@phydata ~]# df -i /data/beegfs/meta
Filesystem            Inodes    IUsed  IFree IUse% Mounted on
/dev/mapper/rl-home 77285568 77284071   1497  100% /data/beegfs/meta
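(Editor's aside: exhaustion like the above can be caught before writes start failing with a simple inode check. A minimal sketch, not from the original post; the mount point and 90% threshold are illustrative values:)

```shell
# Hedged sketch: report inode usage for a mount point so exhaustion can be
# caught early. Mount point and threshold are illustrative, not from the post.
inode_use_pct() {
    # Print the IUse% column (without the % sign) for the given mount.
    df -i "$1" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }'
}

pct=$(inode_use_pct /)
if [ "$pct" -ge 90 ]; then
    echo "WARNING: inode usage at ${pct}%"
else
    echo "OK: inode usage at ${pct}%"
fi
```

Run from cron against the metadata mount, this would have flagged the problem well before IUse% hit 100.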

Problematic Cleanup Commands Executed:

find /data/beegfs/meta -type f -size 0 -delete
find /data/beegfs/meta -type f -name "*.tmp" -delete
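(Editor's aside: as Waltar notes below, zero-byte files in a BeeGFS metadata directory *are* the metadata, with the real content held in inode attributes. A non-destructive variant of the same match criteria, counting candidates instead of deleting them, would have revealed the scale of what was about to be removed. A minimal sketch; `"$1"` stands in for the metadata path:)

```shell
# Hedged sketch: the same match criteria as the destructive cleanup, run
# non-destructively. Counts zero-byte regular files instead of deleting them.
count_zero_byte_files() {
    find "$1" -type f -size 0 | wc -l
}
```

On a healthy BeeGFS metadata target this count is expected to be enormous, which by itself is a strong hint that `-delete` is the wrong tool here.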

Current Metadata State (~19% loss):

[root@phydata ~]# df -i /data/beegfs/meta
Filesystem            Inodes    IUsed    IFree IUse% Mounted on
/dev/mapper/rl-home 77285376 62480494 14804882   81% /data/beegfs/meta

Environment:

  • BeeGFS 7.4.5 on Rocky Linux 9.4
  • Metadata: /dev/mapper/rl-home (148G, 49G used) @ /data/beegfs/meta
  • Storage: /dev/sdb (73T, 64T used) @ /data/beegfs/storage (user data verified intact)

Core Question:
Are there any methods or tools—such as specific beegfs-fsck options or third-party utilities—capable of reconstructing lost metadata from the intact storage data? We understand the gravity of this operational error and are seeking any viable recovery approaches. We’re prepared to provide detailed logs and system information, and are open to commercial solutions or professional support.

The preservation of user data in /work/home is our highest priority. Any guidance, similar experiences, or suggested recovery procedures would be immensely appreciated.

Thank you for your time and expertise.


Waltar

Oct 4, 2025, 12:31:43 PM
to beegfs-user
That looks like ext4 on top of LVM on the OS disk ... I wouldn't have high hopes, but if you want, you're free to send the whole disk to a recovery company.
More realistic would be to have a good backup, and a working restore procedure to go with it.
But anyway, a metadata filesystem stores only 0-byte files with the attributes kept in the inodes, so if you want to run ext4 again you should allow as many inodes as possible (maximum density = "-i 1024") by creating the new filesystem before the BeeGFS reinit, then doing your restore:
mkfs.ext4 -i 1024 -I 512 -J size=1024 -O dir_index,filetype /dev/mapper/rl-home
And this also looks like a single-server BeeGFS setup, which makes little sense at all, since a local filesystem is faster than a local distributed one, but that's another story.
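(Editor's aside: the `-i 1024` suggestion can be sanity-checked with a quick estimate, since mkfs.ext4 creates roughly one inode per `bytes-per-inode` of device size. A minimal sketch; the 148 GiB figure is the metadata device size from the post, and the arithmetic is only an approximation, as mke2fs rounds per block group:)

```shell
# Hedged sketch: estimate how many inodes mkfs.ext4 -i BYTES_PER_INODE would
# create on a device of a given size (rough approximation only).
estimate_inodes() {
    # $1 = device size in bytes, $2 = bytes-per-inode ratio (-i)
    echo $(( $1 / $2 ))
}

# 148 GiB metadata device with -i 1024:
estimate_inodes $(( 148 * 1024 * 1024 * 1024 )) 1024   # → 155189248
```

That is roughly double the 77,285,568 inodes the original filesystem had, at the cost of more space reserved for inode tables.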

Tang Hong

Oct 5, 2025, 8:42:24 AM
to beegfs-user
Hi Waltar,

Thank you for the reality check and the technical advice. Your notes are very helpful for our next steps.