Maximum network speed over lxc-bridge


web...@manfbraun.de

Jun 22, 2021, 10:45:01 PM
to lxc-...@lists.linuxcontainers.org
Hello!
 
My Debian host (buster, kernel 5.10), which also hosts LXC (3.1) containers,
got a new NIC and became part of an additional 10 GbE network.
This results in a transfer speed of over 8.x Gb/s between physical hosts,
which is a little (but noticeably) below the expected speed. The 10 GbE
physical NIC was later attached to a Linux bridge (so, software) to make
it "shareable" (no other traffic on it yet, but tested with Samba).
 
I added a second network interface to one of my containers,
which became a member of the same bridge. There is really
nothing special about this bridge, with the small exception
that all ports use an MTU of 4000 (verified).
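 
For context, the setup looks roughly like this; the device names and the
LXC config snippet below are placeholders for illustration, not a copy of
my actual configuration:
 
    # bridge with the physical 10 GbE NIC as a port, MTU 4000 everywhere
    ip link add name br10g type bridge
    ip link set dev enp3s0 master br10g mtu 4000 up
    ip link set dev br10g mtu 4000 up
 
    # second interface of the container (LXC 3.x config), on the same bridge
    lxc.net.1.type = veth
    lxc.net.1.link = br10g
    lxc.net.1.mtu  = 4000
 
    # verify the MTU of the bridge and all of its ports
    ip -br link show master br10g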
 
An iperf run between a remote workstation and the LXC container
(with the container as the client) resulted in a throughput
of about 5.21 Gb/s.
 
An iperf run between the container (as client) and the container's
host astonishingly resulted in a throughput of only 2 Gb/s.
There was no other traffic during this test; the host has
28 GB of free memory and the 5-second load average was about 0.6 (it's
an Intel Atom C2xxx CPU with 8 cores, but clocked at only 2.4 GHz).
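 
Both runs were plain TCP tests along these lines (the address and duration
here are placeholders):
 
    # server side (on the remote workstation, or on the LXC host itself)
    iperf -s
 
    # client side, inside the container
    iperf -c 10.0.10.1 -t 30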
 
Samba reads/writes between physical hosts (no virtual bridges
involved) were between 500 and 800 MB/s (note the capital B), but
dropped to about half after the Linux bridge was introduced.
 
How can this discrepancy be explained?
For the first test the path contains the virtual switch, two physical
switches and the cabling; against the local host, naturally, none of that
is involved, only the Linux (virtual) bridge. All bridge members (except
the physical NIC, naturally) are VETH ports, and ethtool shows
10 Gb/s for all bridge members.
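 
That was checked roughly like this (the bridge name is a placeholder again):
 
    # list all ports of the bridge and the speed ethtool reports for each
    bridge link show
    for p in /sys/class/net/br10g/brif/*; do
        printf '%s: ' "$(basename "$p")"
        ethtool "$(basename "$p")" | grep Speed
    done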
 
I read on the net that even an Intel Atom C3xxx CPU is supposedly not able
to saturate a 10 GbE link (mine is only from the C2xxx generation); that
was the reason why I did not investigate the low 8.x Gb/s (from the
beginning of this mail) any further. (New hardware will arrive in the
coming months anyway.)
 
But the discrepancy between the remote and local tests is immense.
Only a software bridge and software interfaces (VETH) are involved,
and it looks like these cause a big problem.
 
Any thoughts and hints would be great!
 
Best regards,
Manfred
 
 

web...@manfbraun.de

Jun 23, 2021, 1:07:29 AM
to lxc-...@lists.linuxcontainers.org
Hello!
 
Sorry, my mail was a bit rushed ...
 
A little later, the right approach came to my mind:
create a local VETH pair on the LXC host "outside" all
existing networks and run iperf on its ends.
This sheds light on the issue.
 
With one stream and a longer test period, one sees a strongly varying throughput of 2-6 Gb/s.
Using more parallel streams (iperf -c <host> -P <n>) on the client side increases the throughput
further; the maximum was about 8.x Gb/s with 3-4 streams. This was the reachable maximum,
both for the local VETH pair and for the hardware link across machines.
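 
For anyone who wants to reproduce the test, something along these lines
works (names and addresses are placeholders; putting one end of the pair
into a throwaway network namespace is one way to make sure the traffic
really crosses the veth and is not short-circuited over loopback):
 
    # veth pair with one end in its own namespace
    ip netns add perftest
    ip link add vethA type veth peer name vethB
    ip link set vethB netns perftest
    ip addr add 192.168.250.1/24 dev vethA
    ip link set vethA up
    ip netns exec perftest ip addr add 192.168.250.2/24 dev vethB
    ip netns exec perftest ip link set vethB up
 
    # iperf server on one end, client on the other
    ip netns exec perftest iperf -s &
    iperf -c 192.168.250.2 -t 60           # single stream
    iperf -c 192.168.250.2 -t 60 -P 4      # 4 parallel streams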
 
So it is quite clear that the CPU speed seems to be the limitation; the memory
throughput, though, is about 12 GB/s (capital B).

Sorry for my rushed mail ;-)
But maybe someone else will find this interesting nevertheless ...
 
Regards,
Manfred