BBRv2 in varying bottleneck and large buffers

in...@ijdata.com

Sep 18, 2020, 2:15:44 AM
to BBR Development
Hi again.

I did a very simple experiment with two laptops (a server and a client) connected via Ethernet and performed a simple FTP transfer between them.

On the client side I added a 20ms delay with netem.
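(The exact netem invocation isn't shown here; assuming the client's Ethernet interface is also called eth0, a typical command for a fixed 20ms delay would be something like:
sudo tc qdisc add dev eth0 root netem delay 20ms)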

On the server side I set the bottleneck BW to 50Mbps, changed it to 10Mbps and then back to 50Mbps with:
sudo tc qdisc add dev eth0 root tbf rate 50mbit burst 32mbit latency 400ms
sudo tc qdisc change dev eth0 root tbf rate 10mbit burst 32mbit latency 400ms
sudo tc qdisc change dev eth0 root tbf rate 50mbit burst 32mbit latency 400ms
I log the RTT with ping -i 0.02.
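(One way to capture that RTT log with timestamps, assuming <server-ip> stands for the server's address, is something like:
sudo ping -D -i 0.02 <server-ip> > rtt.log
The -D flag makes ping prefix each reply line with a Unix timestamp.)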

The FTP transfer starts at T~8s in the graph below, the 10Mbps BW is applied at T~26s, and we go back to 50Mbps again at T~37s.

I notice that the RTT increases to ~100ms when the 10Mbps BW is applied, and it appears to stay there until the BW is increased. Is this expected, or is it something triggered by this method of testing with qdisc? After all, the tbf is temporarily removed when the bottleneck BW is changed, as one can see from the RTT being only 20ms for a brief period at T~26s and T~37s.

PS: no packet losses occur in this experiment.

/Ingemar 


[Attached graph: BBRv2-20ms-50-10-50Mbps.jpg]

Neal Cardwell

Sep 18, 2020, 10:01:34 AM
to in...@ijdata.com, BBR Development
Thanks for the report!

You can use "ss -tin" with a recent ss binary to look at the bbr2 estimated bandwidth, estimated min_rtt, pacing rate, and cwnd to see how they may be impacting queuing. To see how to build a recent ss, check out:

That said, there are likely at least a few test artifacts from the fact that the rate limiting (tbf) was applied on the sender, which creates sender-host queueing that interacts with TCP Small Queues (TSQ) in a way that is unrealistic if the goal is to emulate a path in the middle of the network changing its link bandwidth.

When using qdiscs to emulate a network path, to get high-fidelity results the qdiscs must be on the receiver machine or on a router in the middle of the path.
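For example, a rough sketch of putting the rate limit on the receiver's ingress path (using an ifb device so inbound traffic can be shaped; the interface name eth0 and the rates are placeholders to adapt):

  # redirect ingress traffic on eth0 to an ifb device so it can be shaped
  sudo modprobe ifb
  sudo ip link set dev ifb0 up
  sudo tc qdisc add dev eth0 handle ffff: ingress
  sudo tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev ifb0
  # apply the bottleneck rate on ifb0, and change it the same way as before
  sudo tc qdisc add dev ifb0 root tbf rate 50mbit burst 32mbit latency 400ms
  sudo tc qdisc change dev ifb0 root tbf rate 10mbit burst 32mbit latency 400ms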

If you want a tool to automatically set up qdiscs in a good setup to emulate varying network conditions, you may want to check out the transperf tool that our team open-sourced:

  https://github.com/google/transperf

An example config that should give you a scenario like the one you mention is:

  conn=conn('bbr2', sender=0, start=0.0)
  loss=0
  bw=var_bw(bw(50, dur=10), bw(10, dur=10), 50)
  buf=100
  rtt=20
  dur=40

Once you have the experiment set up with qdiscs on the receiver (either manually or with transperf) I recommend looking at a sender-side tcpdump trace and either the transperf output graphs or "ss -tin" dump to understand the dynamics.
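For example (a rough sketch; the interface name, receiver address, and output file names are placeholders), you could capture a sender-side trace and periodically dump the socket state with something like:

  sudo tcpdump -i eth0 -s 128 -w sender-trace.pcap 'tcp and host <receiver-ip>' &
  # poll ss every 100ms so the bbr2 bw/min_rtt/pacing/cwnd estimates can be
  # lined up against the RTT and throughput traces
  while true; do
    echo "# $(date +%s.%N)"
    ss -tin dst <receiver-ip>
    sleep 0.1
  done > ss-log.txt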

best,
neal



in...@ijdata.com

Oct 13, 2020, 11:10:15 AM
to BBR Development
Thanks for the help.
I tried transperf but did not immediately get along with it and was short of time, so I instead put the bandwidth limit on the ingress with a qdisc, and that solved the issue. It helped me debug the BBRv2 model that we have in our 5G system simulator. I will give transperf a chance later on; it looks like a really good tool.
/Ingemar

Neal Cardwell

Oct 13, 2020, 11:12:29 AM
to in...@ijdata.com, BBR Development
Thanks for the update! Glad to hear that installing the bandwidth limit on the ingress side resolved the issue.

best,
neal

