inodes

Laurence Mayer

Apr 15, 2024, 6:31:25 AM
to beegfs-user
Hi,

We have a two-node BeeGFS cluster (version 7.4.2) with identical hardware.

On each node we have two file systems, one for metadata and one for storage (see below).

We noticed that when copying data into the cluster, only one of the meta devices changes (IUsed increases and IFree decreases), so there is now a growing discrepancy between the two nodes. I do see that when copying from another location, the inode count of the other device changes too.

Should it not be creating the same number of files on both devices, such that both meta devices change?

Is this expected behaviour? I am somewhat concerned that we'll run out of inodes on one of the devices while the other device still has plenty free.

Is this working as designed?

Filesystem         Inodes  IUsed      IFree IUse% Mounted on
/dev/sdb        262144000 299331  261844669    1% /beegfs/meta
/dev/sdc       1233662720 187721 1233474999    1% /beegfs/storage

Filesystem         Inodes  IUsed      IFree IUse% Mounted on
/dev/sdb        262144000 351351  261792649    1% /beegfs/meta
/dev/sdc       1233662720 189470 1233473250    1% /beegfs/storage


METADATA SERVERS:
TargetID   Cap. Pool        Total         Free    %      ITotal       IFree    %
========   =========        =====         ====    =      ======       =====    =
       1      normal     374.5GiB     374.0GiB 100%      262.1M      261.8M 100%
       2      normal     374.5GiB     374.0GiB 100%      262.1M      261.8M 100%

STORAGE TARGETS:
TargetID   Cap. Pool        Total         Free    %      ITotal       IFree    %
========   =========        =====         ====    =      ======       =====    =
       1      normal   11763.1GiB   10713.4GiB  91%     1233.7M     1233.5M 100%
       2      normal   11763.1GiB   10713.4GiB  91%     1233.7M     1233.5M 100%


傅 卫华

Apr 15, 2024, 6:43:50 AM
to fhgfs...@googlegroups.com
Hi, 
On BeeGFS, all files within a directory write their metadata to a single, fixed metadata target.
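
For illustration, a minimal sketch (the mount point and directory names below are made up, and the output wording may differ between versions): each newly created directory is assigned to one metadata node, and you can check the assignment from a client.
```
# Create a few sibling directories on a BeeGFS client and show which
# metadata node each one was assigned to. Without metadata mirroring the
# output of --getentryinfo contains a "Metadata node: ..." line.
for d in proj1 proj2 proj3 proj4; do
  mkdir -p "/mnt/beegfs/$d"
  beegfs-ctl --getentryinfo "/mnt/beegfs/$d" | grep -i 'metadata'
done
```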

From: fhgfs...@googlegroups.com <fhgfs...@googlegroups.com> on behalf of Laurence Mayer <laurenc...@gmail.com>
Sent: April 15, 2024, 18:31
To: beegfs-user <fhgfs...@googlegroups.com>
Subject: [beegfs-user] inodes
 

Laurence Mayer

Apr 15, 2024, 9:49:59 AM
to fhgfs...@googlegroups.com
Thank you.

So just to clarify: per directory, down to which directory level? And how does it determine which meta server target a given directory is assigned to (randomly)?

Regards
Laurence 


Emir Imamagic

Apr 15, 2024, 9:50:13 AM
to fhgfs...@googlegroups.com, Laurence Mayer
Hi,

I assume you are not using the metadata mirroring functionality to make your metadata service highly available. Only in that case would you expect /meta usage to be equivalent on both meta nodes.

The metadata service distributes directories across meta nodes. That means that all file entries that belong to a given directory will be stored on the single meta node that hosts the directory. In your case I would assume that you have some directories which contain a large number of files. Those directories are assigned to a single meta node, and the only way to distribute them is to split them into more directories. You can always check with beegfs-ctl --getentryinfo which meta node is responsible for a given directory.
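
For example (the path below is just a placeholder, and the exact output wording can differ between BeeGFS versions):
```
# Ask BeeGFS which metadata node is responsible for a suspect directory.
beegfs-ctl --getentryinfo /mnt/beegfs/data/big_dataset

# In an unmirrored setup the output contains a line like
#   Metadata node: <hostname> [ID: 1]
# All file entries created directly inside this directory are stored on
# that one meta node, which is why filling a single directory grows only
# one node's IUsed.
```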

More info here:

https://doc.beegfs.io/latest/architecture/overview.html#metadata-distribution

Cheers
emir

傅 卫华

Apr 16, 2024, 8:35:32 PM
to fhgfs...@googlegroups.com
Hi, Laurence.



傅 卫华

Apr 16, 2024, 8:40:20 PM
to fhgfs...@googlegroups.com
You can have a look at beegfs-meta.conf:
```
tuneTargetChooser            = randomized
# [tuneTargetChooser]
# The algorithm to choose storage targets for file creation.
# Values:
#   * randomized: choose targets in a random fashion.
#   * roundrobin: choose targets in a deterministic round-robin fashion.
#        (Use this only for benchmarking of large-file streaming throughput.)
#   * randomrobin: randomized round-robin; choose targets in a deterministic
#        round-robin fashion, but random shuffle the result targets list.
#   * randominternode: choose random targets that are assigned to different
#        storage nodeIDs. (See sysTargetAttachmentFile if multiple storage
#        daemon instances are running on the same physical host.)
# Note: Only the randomized chooser honors client's preferred nodes/targets
#    settings.
# Default: randomized
```
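
To see which chooser your metadata servers are actually running with, a minimal check (assuming the default config location /etc/beegfs/beegfs-meta.conf) would be:
```
# Run on each metadata server; adjust the path if your config lives elsewhere.
grep -i '^tuneTargetChooser' /etc/beegfs/beegfs-meta.conf
```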




Laurence Mayer

Apr 17, 2024, 7:34:23 AM
to beegfs-user
Understood, thank you.

Regards
Laurence 
