BBR evaluation with netem


Ethan Huang

Apr 22, 2017, 2:44:09 AM
to BBR Development
Hi all,
    Since BBR needs the qdisc set to "fq" in the kernel, I wonder whether it is even applicable to test BBR with a simulated delay such as "tc qdisc add dev DEV root netem delay 100ms 100ms 50%". Does the tc command also use a qdisc, and would it conflict with the "fq" setting that BBR needs?

Neal Cardwell

Apr 22, 2017, 7:27:02 PM
to Ethan Huang, BBR Development
Hi,

For testing BBR with emulated networks using netem, the considerations are largely the same as with other Linux congestion control modules. Namely, netem should not run on the sender machine, because the TCP Small Queues logic on the sender would limit the amount of data in the sending-host queues, including the netem qdisc. This means the test results would be very different from what would happen in a real network.

The simplest setup for testing BBR using netem would be something like:

(1) sending machine:
  (a) TCP BBR
  (b) fq qdisc

(2) receiving machine:
  (a)  ifb qdisc
  (b) netem qdisc
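
Roughly, that would look something like the following (a minimal sketch; eth0 is just a placeholder for the actual NIC on each machine):

# sender: use BBR for congestion control and fq for pacing
sysctl -w net.ipv4.tcp_congestion_control=bbr
tc qdisc replace dev eth0 root fq

# receiver: redirect ingress traffic through ifb0 so netem can delay it
modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
  action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root netem delay 100ms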

For more details on how to use netem on incoming traffic using ifb, you can check out:

Hope that helps,
neal





Ethan Huang

Apr 23, 2017, 9:15:32 PM
to BBR Development, cshua...@gmail.com
Hi Neal,
   Thank you for your reply. That explains the test results I got: cases like "delay 100ms 100ms 50%" or "delay 10ms 10ms 50%" made BBR suffer badly while Cubic did much better. I'll re-test the scenario with the correct settings.


Ethan Huang

Apr 25, 2017, 2:25:11 AM
to BBR Development, cshua...@gmail.com
Hi Neal,
    I re-did the test, and again BBR shows poor performance when the delay variation is set to the same level as the emulated delay value, e.g. "100ms 100ms 50%". I added printk calls to dump BBR's state and saw snd_cwnd being lowered a lot in the TCP_CA_Recovery state; I don't know whether that is related to the performance issue.
    Below are the evaluation environment settings:
Two servers with 1G NICs connected by a switch with 10G or faster ports; the ping delay between the two hosts is about 250us.
Single TCP stream: wget of a file about 250MB in size. The link carrying ACKs back is not delayed with netem.
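
For reference, each case below is run roughly as follows (a sketch of my procedure; SENDER and the file name are placeholders):

# receiver: set the per-case delay on ifb0 (case 2 shown)
tc qdisc replace dev ifb0 root netem delay 100ms 100ms 50%
# receiver: fetch a ~250MB file from the sender over a single TCP stream
wget -O /dev/null http://SENDER/file-250MB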

Case 1: tc qdisc add dev ifb0 root netem delay 10ms 10ms 50%
BBR: 1.2MB/s     Cubic: 23.8MB/s

Case 2: tc qdisc add dev ifb0 root netem delay 100ms 100ms 50%
BBR: 94.3KB/s    Cubic: 1.52MB/s

Case 3: tc qdisc add dev ifb0 root netem delay 100ms 200ms 50%
BBR: 66.6KB/s    Cubic: 552KB/s

Case 4: tc qdisc add dev ifb0 root netem delay 10ms 50ms 50%
BBR: 12.2MB/s    Cubic: 4.41MB/s

Case 5: tc qdisc add dev ifb0 root netem delay 100ms 50ms 50%
BBR: 9.2MB/s     Cubic: 2.93MB/s
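
For clarity, my reading of the netem delay parameters (base delay, jitter, correlation), using case 2 as the example:

# 100ms base delay, +/-100ms random jitter per packet (so roughly 0-200ms),
# with each packet's random delay 50% correlated with the previous packet's
tc qdisc add dev ifb0 root netem delay 100ms 100ms 50%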


Neal Cardwell

Apr 25, 2017, 9:54:10 AM
to Ethan Huang, BBR Development
Hi Ethan,

Thanks for your test results!

I suspect that the behavior you're running into is a
synthetically-generated case of the same issue discussed in the other
bbr-dev thread from this morning, "BBR vs Cubic on Wifi network". :-)

I strongly suspect that these tests are running into the known issue
where the current upstream BBR parameters for provisioning cwnd need
to be more generous for paths like these emulated paths in your tests,
where there is extremely high delay jitter that is equal to or much
higher than the minimum RTT observed over the path. This issue is
discussed in the "BBR test on wireless path" thread from Jan 11:

https://groups.google.com/d/msg/bbr-dev/zUrcENm9rZI/Ea28juVoFAAJ

As I mention in the other thread, we are actively working on tuning
this aspect of the BBR code.

A few quick questions:

+ You list both CUBIC and BBR results. Are those simultaneous
transfers? Or transfers conducted at different times?

+ Would you be able to post a tcpdump trace of a few seconds of the
TCP BBR flow from cases 1-3? We'd like to verify that the behavior
you're seeing here matches the known behavior we're working on.
Perhaps something like:

tcpdump -w /tmp/test.pcap -s 120 -i $DEVICE -c 100000 port $PORT

Thanks!
neal

Ethan Huang

Apr 26, 2017, 6:02:01 AM
to BBR Development, cshua...@gmail.com
Hi Neal,
    Sorry for not reading the previous posts about the issue. I noticed that at the IETF 98 meeting the BBR team continued to make optimizations, and I believe they will make BBR even better.
    I ran the tests for BBR and Cubic separately, since BBR and Cubic would compete with each other; each test uses just a single TCP flow.
    Sorry, I cannot share the pcap file, since I'm working in a restricted environment without permission to send files in public emails. The test is very simple, just as you indicated: two hosts, with ingress traffic on ifb0 shaped by netem, using the two settings "delay 100ms 100ms 50%" and "delay 10ms 10ms 50%". I checked the Wireshark IO graph: the sent sequence numbers stay only slightly ahead of the ACKs. Source code inspection also shows that the pacing rate is larger than the snd_cwnd value in the TCP_CA_Recovery state, but still small compared to the case where only a delay is set, without jitter.
    I may run some tests at home to re-capture a pcap if I have time, and then I could send you a sample file. Thanks!
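
If it would help, a lighter-weight way to watch cwnd and the pacing rate than printk, assuming a reasonably recent iproute2 on the sender, might be something like:

# show per-connection TCP state on the sender, including cwnd, pacing_rate
# and (for BBR sockets) a bbr:(bw:...,mrtt:...,...) section
watch -n 1 ss -tin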
