Hi all,
I ran some iperf experiments and observed BBR's values with the "ss" command-line tool. Below are the details:
Testbed: Two Linux machines A and B (both running kernel 4.16) connected by a NETGEAR switch. Machine A is the sender and machine B is the receiver.
Benchmark tool: Iperf, run for 60 seconds
Bandwidth settings: Use netem on an ifb device to shape the bandwidth on the receiver machine B. At t=0s, set the bandwidth to 500 Mbps; at t=26s, halve it to 250 Mbps; at t=36s, double it back to 500 Mbps.
Delay settings: Add 10 ms of delay on the receiver machine B using htb.
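In case it helps, here is a rough sketch of how this kind of ifb + htb + netem shaping can be set up on B. The interface names and the exact qdisc layout below are illustrative assumptions, not necessarily the exact commands from my run:

    # Illustrative only: redirect B's ingress traffic to an ifb device and
    # shape it there (interface names are placeholders).
    modprobe ifb numifbs=1
    ip link set dev ifb0 up
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
        action mirred egress redirect dev ifb0

    # Rate limit (htb) plus 10 ms of added delay (netem) on the ifb device.
    tc qdisc add dev ifb0 root handle 1: htb default 10
    tc class add dev ifb0 parent 1: classid 1:10 htb rate 500mbit
    tc qdisc add dev ifb0 parent 1:10 handle 10: netem delay 10ms

    # t=26s: halve the bandwidth; t=36s: restore it.
    tc class change dev ifb0 parent 1: classid 1:10 htb rate 250mbit
    tc class change dev ifb0 parent 1: classid 1:10 htb rate 500mbit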
I sampled 3 BBR-related values from the "ss" tool {bw, pacing_rate, delivery_rate} every 1 ms, and also recorded the bandwidth reported by iperf. The values are plotted in the attached figures; the 3 figures show the collected values while the bandwidth is {decreasing, stable, increasing}. The original "ss" log is also attached.
There are a few behaviors here that I don't quite understand, and I'm hoping to get your input:
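For reference, a minimal sketch of the kind of polling loop used to sample ss (the address and log file name below are placeholders, and the effective interval is slightly longer than 1 ms because of the per-iteration cost of spawning ss):

    # Illustrative sampling loop: dump socket info for the iperf flow
    # roughly every 1 ms and append it to a log.
    while true; do
        date +%s.%N           >> ss_bbr.log
        ss -tin dst 192.0.2.2 >> ss_bbr.log
        sleep 0.001
    done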
- The "pacing_rate" (shown in red line) is generally higher than the "bw"(shown in blue line). When pacing_rate equals to 1, shouldn't be the estimated "bw" be the same as "pacing_rate"?
- During the period when bandwidth doubles (around t=36s), it takes less than 2 cycles (1 cycle = 8 round trips) for the "bw" estimates to double. However, the paper (Figure 3 in https://queue.acm.org/detail.cfm?id=3022184) mentions the "bw" estimates increases 1.95X (1.25^3) in 3 cycles. May I know why it takes less time to double here?
- During the period when bandwidth halves -- After the "delivery_rate" (shown in yellow line) begins to decreases, it takes more than 5 cycles (40 rtts) for "bw" and "pacing_rate" to decrease. I thought the estimate bandwidth filter window was 10 rtts so "bw" would start decreasing after 10 rtts. Why it takes 40 rtts for "bw" to start decreasing?
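For the first question, something like the awk sketch below can pull "bw" and "pacing_rate" out of the ss log and print their ratio, which should track the current pacing_gain. The log file name and the assumption that both values are printed in Mbps are guesses about the log layout, so it may need adjusting:

    # Illustrative: for each ss sample, print bw, pacing_rate, and their ratio
    # (assumes both values are reported in Mbps; adjust if ss prints other units).
    awk '
    match($0, /bbr:\(bw:[0-9.]+Mbps/) {
        bw = substr($0, RSTART + 8, RLENGTH - 12) + 0
        if (match($0, /pacing_rate [0-9.]+Mbps/)) {
            pr = substr($0, RSTART + 12, RLENGTH - 16) + 0
            if (bw > 0)
                printf "bw=%.1f pacing_rate=%.1f ratio=%.2f\n", bw, pr, pr / bw
        }
    }' ss_bbr.log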
Thanks,
Davy