My Debian host (Buster, kernel 5.10), which also hosts LXC (3.1)
containers, got a new NIC and became part of an additional 10 GbE network.
This results in a transfer speed of about 8.x Gb/s between physical hosts,
which is a little (but noticeably) below the expected speed. The 10 GbE
physical NIC was later attached to a Linux bridge (so, software) to make
it "shareable" (no other traffic on it yet, but tested with Samba).
I added a second network interface to one of my containers,
and it became a member of the same bridge. There is really
nothing special about this bridge, with the small exception that
all ports use an MTU of 4000 (verified).
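For reference, the bridge membership and per-port MTU can be checked roughly like this (a sketch; `br0` and `enp3s0` are assumed names, substitute your own):

```shell
# Assumed names: br0 = the Linux bridge, enp3s0 = the 10 GbE NIC.
bridge link show                             # list ports attached to each bridge
ip -d link show br0 | grep -o 'mtu [0-9]*'   # verify the bridge MTU
# A bridge inherits the smallest member MTU, so set it on every port:
ip link set dev enp3s0 mtu 4000
ip link set dev br0 mtu 4000
```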
An iperf between a remote workstation and the LXC container
(where the container was the client) resulted in a throughput
of about 5.21 Gb/s.
An iperf between the container (as the client) and the container's
own host astonishingly resulted in a throughput of only 2 Gb/s.
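For clarity, the two measurements were taken roughly like this (a sketch with iperf3 and placeholder addresses; the original tests may have used a different iperf version):

```shell
# Server side (first on the remote workstation, then on the container's host):
iperf3 -s
# Client side, inside the container (10.0.0.2 = workstation, 10.0.0.1 = host;
# both addresses are placeholders):
iperf3 -c 10.0.0.2 -t 30    # container -> remote workstation: ~5.21 Gb/s
iperf3 -c 10.0.0.1 -t 30    # container -> local host: only ~2 Gb/s
```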
There was no other traffic during this test; the host has
28 GB of free memory, and the load (5 s) was about 0.6 (it is an
Intel Atom C2xxx CPU with 8 cores, but only a 2.4 GHz clock).
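Since a 5-second load average can hide a single saturated core, per-core utilization during the test might be more telling (a suggestion on my part, not something measured above):

```shell
# Run while the iperf test is active; one core stuck near 100% in %soft
# (softirq) would explain a throughput ceiling despite a low overall load.
mpstat -P ALL 1
```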
Samba read/write between the physical hosts (no virtual bridges
involved) was between 500 and 800 MB/s (note the capital B), but
dropped to about half after the Linux bridge was introduced.
How can this discrepancy be explained?
For the first test, the path includes the virtual switch, two physical
switches, and the wires; against the local host, naturally, only the
Linux (virtual) bridge is involved. All bridge members (except the
physical NIC, naturally) are veth ports, and ethtool shows
10 Gb/s for all bridge members.
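The ethtool checks can be sketched like this (`vethXYZ` is a placeholder for one of the container-side ports; the offload line is an additional check I would look at, since segmentation offload settings on veth devices commonly influence throughput, an assumption rather than something verified above):

```shell
ethtool vethXYZ | grep -i speed        # veth pairs report a nominal 10000Mb/s
# Offload settings worth inspecting on all bridge members:
ethtool -k vethXYZ | grep -E 'segmentation|generic-receive'
```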
I read on the net that even an Intel Atom C3xxx CPU should,
according to that write-up, not be able to saturate a 10 GbE link
(mine is only from the C2xxx generation). That was the reason for me
not to investigate the low 8.x Gb/s (from the beginning of this mail)
further (new hardware will come in the next months).
But the discrepancy between the remote and the local test is immense.
Affected are a software bridge and software interfaces (veth);
it looks like these cause a big problem.
Any thoughts and hints would be great!