Hi,
I assume you are not using the metadata mirroring functionality to make
your metadata services highly available. Only in that case would you
expect the /meta usage to be equivalent on both metadata nodes.
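For reference, metadata mirroring is the high-availability feature where
the two metadata targets are paired as buddies and kept in sync; only then
would their inode usage track each other. Setting it up looks roughly like
this (just a sketch, please check the BeeGFS documentation for the exact
procedure and node IDs before running anything, and note that activating
it requires clients to be unmounted):

    # pair the two metadata nodes into a buddy mirror group
    beegfs-ctl --addmirrorgroup --nodetype=meta --primary=1 --secondary=2 --groupid=1
    # activate metadata mirroring for the root directory and everything below it
    beegfs-ctl --mirrormd

But that is about redundancy, not about balancing load between the two
metas.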
Metadata services distribute directories across metadata nodes. That means
that all file entries belonging to a given directory are stored on the
single metadata node that hosts that directory. In your case I would
assume that you have some directories which contain a large number of
files. Those directories are assigned to a single metadata node, and the
only way to distribute their load is to split them into more directories.
You can always check with beegfs-ctl --getentryinfo which metadata node is
responsible for a given directory.
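For example, something like this (illustrative only; the path is a
placeholder and the exact output fields differ a bit between versions):

    $ beegfs-ctl --getentryinfo /mnt/beegfs/projects/dirA
    Entry type: directory
    EntryID: 0-6627A1B3-1
    Metadata node: meta01 [ID: 1]
    Stripe pattern details:
    + Type: RAID0
    + Chunksize: 512K
    + Number of storage targets: desired: 4

If the directories holding most of your files all report the same
metadata node, that explains the imbalance. New directories are assigned
across the metadata nodes, so splitting such a directory into several
subdirectories lets the file entries spread over both metas.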
More info here:
https://doc.beegfs.io/latest/architecture/overview.html#metadata-distribution
Cheers
emir
On 15.04.2024. 12:31, Laurence Mayer wrote:
> Hi,
>
> We have a two-node BeeGFS cluster (version 7.4.2) with identical hardware.
>
> On each node we have two file systems, one for meta and one for storage
> (see below).
>
> We noticed that when copying data into the cluster, only one of the meta
> devices changes (IUsed increases and IFree decreases), so there is now a
> growing discrepancy between the two nodes. I do see that when copying
> from another location, the inode count of the other device changes too.
>
> Should it not be creating the exact same number of files on the two
> devices, so that both meta devices change?
>
> Is this expected behaviour? I am somewhat concerned that we'll run out
> of inodes on one of the devices while the other device still has lots of
> free space.
>
> Is this working as designed?
>
> Filesystem Inodes IUsed IFree IUse% Mounted on
> /dev/sdb 262144000 *299331* 261844669 1% /beegfs/meta
> /dev/sdc 1233662720 187721 1233474999 1% /beegfs/storage
>
> Filesystem Inodes IUsed IFree IUse% Mounted on
> /dev/sdb 262144000 *351351* 261792649 1% /beegfs/meta
> /dev/sdc 1233662720 189470 1233473250 1% /beegfs/storage
>
> METADATA SERVERS:
> TargetID   Cap. Pool        Total         Free     %    ITotal    IFree     %
> ========   =========        =====         ====     =    ======    =====     =
>        1      normal     374.5GiB     374.0GiB  100%    262.1M   261.8M  100%
>        2      normal     374.5GiB     374.0GiB  100%    262.1M   261.8M  100%
>
> STORAGE TARGETS:
> TargetID   Cap. Pool        Total         Free     %    ITotal    IFree     %
> ========   =========        =====         ====     =    ======    =====     =
>        1      normal   11763.1GiB   10713.4GiB   91%   1233.7M  1233.5M  100%
>        2      normal   11763.1GiB   10713.4GiB   91%   1233.7M  1233.5M  100%
>
>