# beegfs-ctl --listtargets --nodetype=storage --longnodes --state
TargetID   Reachability   Consistency   NodeID
========   ============   ===========   ======
       1         Online          Good   beegfs-storage b1 [ID: 1]
       2         Online          Good   beegfs-storage b1 [ID: 1]
       3         Online          Good   beegfs-storage b1 [ID: 1]
       4         Online          Good   beegfs-storage b1 [ID: 1]
       5         Online          Good   beegfs-storage b1 [ID: 1]
       6         Online          Good   beegfs-storage b1 [ID: 1]
       7         Online          Good   beegfs-storage b2 [ID: 2]
       8         Online          Good   beegfs-storage b2 [ID: 2]
      10         Online          Good   beegfs-storage b2 [ID: 2]
      11         Online          Good   beegfs-storage b2 [ID: 2]
      12         Online          Good   beegfs-storage b2 [ID: 2]
      13         Online          Good   beegfs-storage b2 [ID: 2]
      14         Online          Good   beegfs-storage b2 [ID: 2]
      15         Online          Good   beegfs-storage b2 [ID: 2]
      16         Online          Good   beegfs-storage b3 [ID: 3]
      17         Online          Good   beegfs-storage b3 [ID: 3]
      18         Online          Good   beegfs-storage b3 [ID: 3]
      19         Online          Good   beegfs-storage b3 [ID: 3]
      20         Online          Good   beegfs-storage b3 [ID: 3]
      21         Online          Good   beegfs-storage b3 [ID: 3]
# beegfs-ctl --listtargets --mirrorgroups
MirrorGroupID   MGMemberType   TargetID   NodeID
=============   ============   ========   ======
          101        primary          1        1
          101      secondary          2        1
          102        primary          3        1
          102      secondary          4        1
          103        primary          5        1
          103      secondary          6        1
          201        primary          7        2
          201      secondary          8        2
          203        primary         11        2
          203      secondary         12        2
          204        primary         13        2
          204      secondary         14        2
          205        primary         15        2
          205      secondary         10        2
          301        primary         16        3
          301      secondary         17        3
          302        primary         18        3
          302      secondary         19        3
          303        primary         20        3
          303      secondary         21        3
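Given the behaviour discussed in this thread, it is worth checking whether any buddy group pairs its primary and secondary on the same storage node, since such a group offers no protection against a host failure. The snippet below is my own sketch, not part of the thread: it parses `beegfs-ctl --listtargets --mirrorgroups` output (here a shortened sample based on the listing above, plus a hypothetical cross-host group 999) and flags same-node pairs.

```shell
# Sketch: flag buddy groups whose primary and secondary targets sit on
# the same storage node. Feed it the output of
# `beegfs-ctl --listtargets --mirrorgroups`.
check_buddy_groups() {
    awk '
        # Data lines: MirrorGroupID MGMemberType TargetID NodeID
        $2 == "primary"   { primary_node[$1]   = $4 }
        $2 == "secondary" { secondary_node[$1] = $4 }
        END {
            for (g in primary_node)
                if (primary_node[g] == secondary_node[g])
                    printf "group %s: both targets on node %s\n", g, primary_node[g]
        }
    '
}

# Sample input: groups 101 and 201 are same-node pairs (as in this thread);
# group 999 is a hypothetical cross-host pair and should not be flagged.
check_buddy_groups <<'EOF' | sort
MirrorGroupID MGMemberType TargetID NodeID
============= ============ ======== ======
101 primary 1 1
101 secondary 2 1
201 primary 7 2
201 secondary 8 2
999 primary 1 1
999 secondary 7 2
EOF
```

In the listing above, every buddy group pairs two targets on the same node, so per this thread none of them would fail over on a single target failure.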
# beegfs-ctl --getentryinfo --verbose /mnt/beegfs/storage/video-1525866804.mp4
EntryID: 1F-5BAD6B8E-1
Metadata buddy group: 1
Current primary metadata node: mb1 [ID: 1]
Stripe pattern details:
+ Type: RAID10
+ Chunksize: 512K
+ Number of storage targets: desired: 4; actual: 4
+ Storage targets:
+ 1 @ b1 [ID: 1]
+ 8 @ b2 [ID: 2]
+ 11 @ b2 [ID: 2]
+ 17 @ b3 [ID: 3]
Chunk path: u3E9/5BAD/6/1E-5BAD6B8E-1/1F-5BAD6B8E-1
Dentry path: 6B/4F/1E-5BAD6B8E-1/
Hi Tobias,

I am not familiar with the RAID10 stripe type, so I can't help you there, but I can tell you why the failover didn't happen.

BeeGFS buddy mirroring is designed around host failures, not target failures. If a host has multiple targets, BeeGFS considers everything fine as long as the host is up. When a single target fails, you have to stop the beegfs-storage process on that host for the failover to kick in, and it then fails over all of the targets on that node, not just the failed one.

I think BeeGFS was designed with a single RAID target per storage server in mind, which is why buddy mirroring behaves this way. Unfortunately, this behaviour isn't well documented.
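A quick way to watch the failover Nick describes (a sketch of mine, not from the thread): after stopping the beegfs-storage service on the affected host, re-run the state listing and filter out healthy targets. The helper below parses `beegfs-ctl --listtargets --nodetype=storage --longnodes --state` output; the sample input and the `probably-offline`/`needs-resync` state strings are illustrative, so check the exact tokens your BeeGFS version prints.

```shell
# Sketch: after e.g. `systemctl stop beegfs-storage` on the affected host,
# list the targets that are no longer Online/Good. Skips the two header
# lines, then prints TargetID, Reachability, Consistency for unhealthy rows.
unhealthy_targets() {
    awk 'NR > 2 && !($2 == "Online" && $3 == "Good") { print $1, $2, $3 }'
}

# Sample input mimicking node b2 going down (state strings illustrative):
unhealthy_targets <<'EOF'
TargetID Reachability Consistency NodeID
======== ============ =========== ======
1 Online Good beegfs-storage b1 [ID: 1]
7 probably-offline Good beegfs-storage b2 [ID: 2]
8 Offline needs-resync beegfs-storage b2 [ID: 2]
EOF
```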
Thanks,
Nick
--
You received this message because you are subscribed to the Google Groups "beegfs-user" group.
> In general, a storage buddy group could even be composed of two targets that are attached to the same server.
> [...]
> If the primary storage target [...] of a buddy group is unreachable, it will get marked as offline and a failover to the secondary will be issued. In this case, the former secondary will become the new primary.
It's interesting that the documentation says storage buddy groups can be composed of two targets on the same server. If that were the case, failover would never occur when a target fails, unless you run multi-mode with a separate beegfs-storage process per target. Failover is only triggered when communication to the beegfs-storage process is lost (that is my understanding, anyway).
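On the multi-mode point, here is a sketch of what one beegfs-storage process per target might look like. The `MULTI_MODE` switch and the `beegfs-storage.d` layout below are from older init-script-based BeeGFS releases and are assumptions on my part; the config keys (`storeStorageDirectory`, `connStoragePortTCP`) are standard beegfs-storage.conf settings. Check the documentation for your version before relying on this.

```shell
# Sketch only -- paths and the MULTI_MODE variable are assumptions based on
# older BeeGFS releases. Multi-mode runs one beegfs-storage daemon per
# config directory, so losing one target's daemon can trigger a failover
# for that target alone rather than for the whole host.

# 1) Enable multi-mode for the storage service:
#      /etc/default/beegfs-storage:  MULTI_MODE=YES
# 2) One config per instance, each owning a single target:
#      /etc/beegfs/beegfs-storage.d/target1/beegfs-storage.conf
#          storeStorageDirectory = /data/target1
#          connStoragePortTCP    = 8003
#      /etc/beegfs/beegfs-storage.d/target2/beegfs-storage.conf
#          storeStorageDirectory = /data/target2
#          connStoragePortTCP    = 8013   # each instance needs its own ports
# 3) Restart and verify one process per target:
#      systemctl restart beegfs-storage && pgrep -a beegfs-storage
```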