BBR vs Cubic p99 latency difference for GET requests


Ilker Yaz

May 8, 2024, 5:48:55 PM
to BBR Development
Hey all,

TL;DR:
p99 GET latency regresses by ~25% when the server's congestion control algorithm switches from Cubic to BBR.
RTT ~60ms.
Any ideas on what could be the root cause of this?

Long version:
I work on a storage service (think Amazon S3).
I have two different proxies (think L7 load balancers) sitting in front of our storage service.
I'm migrating clients from Proxy-1 to Proxy-2.
The clients and the proxies are in different regions (~60 ms RTT, over a private backbone).

During the migration, we experienced a ~25% p99 latency regression for GET requests.
p50 latency didn't change.
GET sizes vary from 100KB to a few MBs.
There is a connection pool and reuse ratio is 100% (all connections are created during startup).
One connection is used for one request at a time (no pipelining / no HTTP/2 streams).

After investigating quite a few possible root causes, the congestion control algorithm difference between Proxy-1 and Proxy-2 turned out to be the culprit.

Proxy-1 uses Cubic.
Proxy-2 uses BBR.

When I change Proxy-2 to use Cubic and restart the client (to create new connections), the p99 latency regression disappears. When I revert to BBR, it reappears.

I repeated this multiple times because I was having a hard time believing this was the root cause, and I got the same results consistently.
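For reference, the switch on Proxy-2 is roughly the standard Linux sysctl below; it only applies to connections created after the change, which is why the client has to be restarted to rebuild the pool:

# Check the active and available algorithms:
sysctl net.ipv4.tcp_congestion_control
sysctl net.ipv4.tcp_available_congestion_control

# Switch (only newly created connections are affected):
sudo sysctl -w net.ipv4.tcp_congestion_control=cubic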

iPerf:
I repeated the same test with iPerf (with -R to generate reverse traffic, simulating GETs).
The average throughput achieved with BBR and with Cubic is the same (~500 Mbit/sec).

However, I see BBR's per-second throughput occasionally drop to the ~350 Mbit/sec level and jump back to the ~500 Mbit/sec level the next second.

Cubic doesn't have that behaviour.
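For reproducibility, each run looks roughly like this (assuming iperf3; -R makes the server do the sending, like a GET, and on Linux -C requests a congestion control algorithm for the run, provided it's available on the sending side):

# On the proxy host:
iperf3 -s

# From the client host, one run per algorithm:
iperf3 -c $PROXY_HOST -R -t 60 -i 1 -C bbr
iperf3 -c $PROXY_HOST -R -t 60 -i 1 -C cubic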

Attached is a sorted per-second throughput comparison with p50/p5/p1 markers (the first and last seconds of the iperf results are ignored). I don't know if this gives more clues.

[Attachment: bbr_cubic.png (sorted per-second throughput, BBR vs Cubic)]



Neal Cardwell

May 9, 2024, 11:37:13 AM
to Ilker Yaz, BBR Development
Hi,

Thanks for the report!

A hypothesis would be that the tail latency issue comes from BBR's PROBE_RTT mode. The PROBE_RTT behavior in BBRv1, which is what's in upstream Linux, is particularly constraining, since it cuts cwnd to 4 packets. The behavior in BBRv2/BBRv3 is considerably better: it only cuts cwnd to 0.5*estimated_BDP.
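As a rough back-of-envelope with your numbers (~500 Mbit/sec, ~60 ms RTT) and assuming ~1500-byte packets:

BDP ~= 500 Mbit/s * 0.060 s = 30 Mbit ~= 3.75 MB ~= 2500 packets
PROBE_RTT cwnd of 4 packets -> 4 * 1500 B * 8 / 0.060 s ~= 0.8 Mbit/sec

So for the ~200 ms (plus about one round trip) that BBRv1 sits in PROBE_RTT, an in-flight GET is nearly stalled, which is the kind of event that moves p99 without moving p50.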

To gather evidence to check this hypothesis, you could take ss traces, tcpdump traces, or (preferably) both:

(while true; do date +%s.%N; ss -tinmo "dst $OTHER_HOST"; sleep 0.025; done) > /tmp/ss.txt &

sudo tcpdump -w /tmp/out.pcap -s 100 -i any port $PORTNUM &
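If the hypothesis is right, the ss samples should show cwnd collapsing to 4 roughly every 10 seconds (BBRv1 enters PROBE_RTT when the min_rtt estimate hasn't been refreshed for 10 seconds). A quick way to eyeball that in the trace, as a rough sketch:

# Tally the sampled cwnd values; a cluster of samples at cwnd:4 is the PROBE_RTT signature:
grep -o 'cwnd:[0-9]*' /tmp/ss.txt | sort | uniq -c | sort -rn | head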

best,
neal



Ilker Yaz

May 10, 2024, 12:42:55 PM
to Neal Cardwell, BBR Development
Thanks Neal!
Indeed, I can see cwnd occasionally drop to 4 and then jump back to 7972 in the ss output (screenshot attached).
Is there any setting I could adjust now w/o waiting for BBRv2/v3 for this?

[Attachment: image.png (ss output showing cwnd dropping to 4 and recovering)]

Neal Cardwell

May 14, 2024, 2:12:21 PM
to Ilker Yaz, BBR Development
Thanks for confirming that the issue seems to be the BBRv1 PROBE_RTT behavior!

> Is there any setting I could adjust now w/o waiting for BBRv2/v3 for this?

Unfortunately there is no setting that can be tweaked for this.

If you are open to running experimental kernels, you can use the BBRv3 release:

Otherwise, if the latency impact is unacceptable you'd have to use CUBIC or wait for BBRv3 to be upstream.
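For what it's worth, if you do switch back to CUBIC, the change can be scoped to just this path rather than host-wide, assuming a kernel and iproute2 new enough to support the congctl route attribute (a sketch; the prefix and device below are placeholders):

# Pin CUBIC only for routes toward the client prefix, leaving the host default alone:
sudo ip route change 10.0.0.0/8 dev eth0 congctl cubic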

best regards,
neal

Eli Dart

May 14, 2024, 6:18:24 PM
to Neal Cardwell, Ilker Yaz, BBR Development
Hi Neal,

> Otherwise, if the latency impact is unacceptable you'd have to use CUBIC or wait for BBRv3 to be upstream.

Having BBRv3 upstreamed and available in production distro kernels would be amazing. Any thoughts you can share on potential timelines?

Many thanks,

Eli


--

Eli Dart, Network Engineer                          NOC: (510) 486-7600
ESnet Science Engagement Group                           (800) 333-7638
Lawrence Berkeley National Laboratory 