Infiniband interface aggregation?

Jan Behrend

Nov 25, 2016, 12:36:09 PM
to fhgfs...@googlegroups.com
Hello list,

I am using a very nifty storage backend device as a BeeGFS storage server:

http://www.rnt.de/en/bigfoot-storage-xxlarge.html

The 48 disks actually saturate a 4x FDR IB interface at 6.35 GB/s.
The local storage itself is limited by the 3x SAS2 backplanes to 9 GB/s.

Since I have a second IB card plugged into this machine, can anyone think
of a way to utilize this additional bandwidth?
Out of the box, BeeGFS only uses the second interface as a fallback route.

I have thought about bonding and I am going to test it, but as far as I know
RDMA and bonding are not possible together. Is bonding worth dropping RDMA?

Any suggestions?

Thanks in advance!
Cheers Jan

--
MAX-PLANCK-INSTITUT fuer Radioastronomie
Jan Behrend - Rechenzentrum
----------------------------------------
Auf dem Huegel 69, D-53121 Bonn
Tel: +49 (228) 525 359, Fax: +49 (228) 525 229
http://www.mpifr-bonn.mpg.de


Sven Breuner

Dec 7, 2016, 8:34:01 AM
to fhgfs...@googlegroups.com, Jan Behrend
Hi Jan,

thanks for sharing info on this nice system.

I guess using EDR InfiniBand in the server instead of FDR would not be an option?

There is currently no built-in interface bonding in BeeGFS, but a workaround
that some people use is to run two instances of the beegfs-storage service
(multi-mode) on a server.

For this example, I will assume that you have 4 storage targets (without loss of
generality). The first beegfs-storage instance would export the first two
storage targets and set ib0 as the primary interface in the connInterfacesFile
referenced by its beegfs-storage.conf. The second beegfs-storage instance would
export the other two targets and set ib1 as the primary interface in its
connInterfacesFile.
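
To make this concrete, here is a minimal sketch of what the relevant settings
could look like (file names, target paths and port numbers are just
illustrative values I made up, not defaults you have to use):

  # /etc/beegfs/interfaces.inst1 (first listed interface = primary)
  ib0

  # /etc/beegfs/interfaces.inst2
  ib1

  # beegfs-storage.conf of instance 1 (excerpt)
  connInterfacesFile    = /etc/beegfs/interfaces.inst1
  storeStorageDirectory = /data/target01,/data/target02
  connStoragePortTCP    = 8003
  connStoragePortUDP    = 8003

  # beegfs-storage.conf of instance 2 (excerpt)
  connInterfacesFile    = /etc/beegfs/interfaces.inst2
  storeStorageDirectory = /data/target03,/data/target04
  connStoragePortTCP    = 8004
  connStoragePortUDP    = 8004

The two instances need distinct ports (and of course distinct target
directories) so that they can run side by side on the same machine.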

For the routing to work correctly, the IP addresses (yes, IP addresses, because
we are using rdma_cm, which establishes native RDMA connections based on IP
addresses) of those two IB interfaces would need to be in different subnets,
e.g. ib0 with 192.168.0.1/24 and ib1 with 192.168.1.1/24. And, very
importantly, the clients also need to have an IP in both subnets, even though
they might have only a single IB interface, e.g. client01 with 192.168.0.101/24
on ib0 and 192.168.1.101/24 on ib0:0 (so you would just add the second IP as
another virtual interface, ib0:0).
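
For reference, assigning these addresses on Linux could look like this
(iproute2 commands, using the example addresses from above; adjust interface
names and subnets to your actual setup):

  # storage server: one subnet per IB port
  ip addr add 192.168.0.1/24 dev ib0
  ip addr add 192.168.1.1/24 dev ib1

  # client01: single IB port, second subnet via the virtual interface ib0:0
  ip addr add 192.168.0.101/24 dev ib0
  ip addr add 192.168.1.101/24 dev ib0 label ib0:0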

Best regards,
Sven