Leigh suggested starting a FAQ pinned to the top of geni-users@, seeded with the question below.
If you think a question should be added, please reply to this thread with the question and answer.
= Question Index =
1) "Why do I see poor bandwidth on my stitched links with iperf?"
===================
Question: "Why do I see poor bandwidth on my stitched links with iperf?"
Answer (provided by Nick Bastin):
iperf's default TCP window size is far too small for high single-flow throughput, particularly given the latency of WAN connections: the bandwidth-delay product of such a path is much larger than iperf's default window.
Simply set a larger window size on both the server and the client and single-flow performance should improve, whether on a raw PC or a Xen VM. (The default raw-PC window size is likely much larger, which hides the problem, but if you want deterministic performance from iperf you should always set the window sizes explicitly; otherwise iperf guesses a default that is highly intolerant of any latency in your connection.)
Barring any other shaping going on, the throughput you're getting with a 23.5K window implies a connection with roughly 12 ms of delay. To get 100 Mbit/s (or close to it) on this connection, you'd need a window size of 150K or larger (and make sure the buffer sizes on both ends can support a window that large).
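As a rough check of the arithmetic above, the bandwidth-delay product (window in bytes = bandwidth in bit/s × RTT in s ÷ 8) can be computed in the shell. The 256K window and server placeholder in the comments are illustrative values, not figures from the thread:

```shell
# Bandwidth-delay product for a 100 Mbit/s link with ~12 ms of delay:
# 100,000,000 bit/s * 0.012 s / 8 bits per byte = 150,000 bytes (~150K).
echo $((100000000 * 12 / 1000 / 8))   # prints 150000

# Setting the window explicitly on both ends might then look like:
#   server: iperf -s -w 256K
#   client: iperf -c <server> -w 256K
```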
Brecht Vermeulen adds:
Also handy when debugging link performance is to run a UDP iperf to see what the links can carry and where packet loss begins:
server side: iperf -s -u
client side: iperf -c xxx -b 50M
You can then vary the bandwidth and packet size and see what happens.
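One way to vary the bandwidth systematically is to sweep the -b value in a loop. The sketch below only echoes the commands it would run (drop the echo to execute them); xxx is the server-address placeholder from above, and the bandwidth steps are illustrative:

```shell
# Dry-run sketch of a UDP bandwidth sweep against a server running 'iperf -s -u'.
# Remove 'echo' to actually run each test; watch the loss column in the output.
for bw in 10M 25M 50M 75M 100M; do
  echo iperf -c xxx -u -b "$bw"
done
```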
For the whole thread that led to this discussion, see:
https://groups.google.com/d/msg/geni-users/Pqgs_BpSZPc/3LOLHwrkbPYJ