3 things:
1. When you report log output, give the exact log output. If it's long, pipe it into a termbin paste.
2. Why in the world would you mix version numbers (even minor versions) in the same filesystem?
3. Did you set sysAllowNewServers = true in /etc/beegfs/beegfs-mgmtd.conf to allow the new servers to join the storage pool?
My 3 cents.
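For reference, the setting from point 3 lives in the management daemon's config file. A minimal excerpt (a sketch; only sysAllowNewServers is taken from this thread, and defaults may differ between versions):

```ini
# /etc/beegfs/beegfs-mgmtd.conf (excerpt)
# Allow previously unknown server daemons to register with the
# management node. Set to false to lock the server list down.
sysAllowNewServers = true
```

Note that changing this typically requires restarting beegfs-mgmtd on the management node before it takes effect.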
(2) Apr30 11:35:28 DGramLis [Node registration] >> New node: beegfs-storage oss03 [ID: 3]; RDMA; Ver: 6.18-0; Source: <ip.address.oss03>
(2) Apr30 11:35:30 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 3; TargetID: 330; Pool: Emergency; Reason: No capacity report received.
(2) Apr30 11:35:30 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 3; TargetID: 331; Pool: Emergency; Reason: No capacity report received.
(2) Apr30 11:35:31 DirectWorker1 [Change consistency states] >> Storage target is coming online. ID: 330
(2) Apr30 11:35:31 DirectWorker1 [Change consistency states] >> Storage target is coming online. ID: 331
(2) Apr30 11:35:35 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 1; TargetID: 100; Pool: Low; Reason: Free capacity threshold
(2) Apr30 11:35:35 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 1; TargetID: 101; Pool: Low; Reason: Free capacity threshold
(2) Apr30 11:35:35 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 2; TargetID: 200; Pool: Low; Reason: Free capacity threshold
(2) Apr30 11:35:35 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 2; TargetID: 201; Pool: Low; Reason: Free capacity threshold
(2) Apr30 11:35:35 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 3; TargetID: 330; Pool: Normal.
(2) Apr30 11:35:35 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 3; TargetID: 331; Pool: Normal.
(2) Apr30 11:37:51 DGramLis [Node registration] >> New node: beegfs-storage oss04 [ID: 4]; RDMA; Ver: 6.18-0; Source: <ip.address.oss04>
(2) Apr30 11:37:54 DirectWorker1 [Change consistency states] >> Storage target is coming online. ID: 440
(2) Apr30 11:37:54 DirectWorker1 [Change consistency states] >> Storage target is coming online. ID: 441
(2) Apr30 11:37:55 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 4; TargetID: 440; Pool: Normal.
(2) Apr30 11:37:55 XNodeSync [Assign target to capacity pool] >> Storage target capacity pool assignment updated. NodeID: 4; TargetID: 441; Pool: Normal.
# beegfs-df
METADATA SERVERS:
TargetID  Pool         Total        Free    %    ITotal    IFree    %
========  ====         =====        ====    =    ======    =====    =
       1  normal    837.0GiB    834.6GiB 100%    558.4M   554.7M  99%
       2  normal    837.0GiB    834.6GiB 100%    558.4M   554.8M  99%

STORAGE TARGETS:
TargetID  Pool         Total        Free    %    ITotal    IFree    %
========  ====         =====        ====    =    ======    =====    =
     100  low       8936.0GiB   1136.1GiB  13%   893.8M   891.7M 100%
     101  low       8936.0GiB   1136.4GiB  13%   893.8M   891.7M 100%
     200  low       8936.0GiB   1136.7GiB  13%   893.8M   891.7M 100%
     201  low       8936.0GiB   1136.7GiB  13%   893.8M   891.7M 100%
     330  normal   16759.3GiB  16758.2GiB 100%  1676.1M  1676.1M 100%
     331  normal   16759.3GiB  16758.3GiB 100%  1676.1M  1676.1M 100%
     440  normal   16759.3GiB  16758.7GiB 100%  1676.1M  1676.1M 100%
     441  normal   16759.3GiB  16758.7GiB 100%  1676.1M  1676.1M 100%
/opt/beegfs/sbin/beegfs-setup-storage -p /mnt/disk1 -s 1004 -i 1400 -m <mgmt.serve>
beegfs-ctl --removenode --nodetype=storage 1004
beegfs-ctl --removetarget 1400
How can I permanently remove a storage host and its target from the configuration? I tried to delete the target from the targetNumIDs file, but they showed up again! I think BeeGFS keeps the info somewhere else too.
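A sketch of the removal sequence, using only the commands already shown above. Stopping the storage daemon first is my assumption: a daemon that is still running while sysAllowNewServers = true may simply re-register itself, which would explain the node "showing up again."

```shell
# On the storage host: stop the daemon so it cannot re-register
# (assumption: systemd-managed service).
systemctl stop beegfs-storage

# On the management node: unregister the target and the node
# (commands as shown earlier in the thread).
beegfs-ctl --removetarget 1400
beegfs-ctl --removenode --nodetype=storage 1004
```

Editing files such as targetNumIDs by hand is not a reliable removal path, since the registration can be recreated by a live daemon; the beegfs-ctl commands above act on the management daemon's own records.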