
Aug 21, 2021, 2:53:29 PM

to Venkat Arun, cc...@mit.edu, BBR Development

Hi,

I was interested to read your SIGCOMM 2021 paper, "Toward Formally Verifying Congestion Control Behavior" (PDF). Thanks for this nice paper! It's great to see research work in this area.

I have some questions/concerns about whether the results of the CCAC model in the paper reflect the actual BBR behavior, due to some potentially significant discrepancies between the model and the algorithm/code.

I wanted to better understand both the CCAC model of BBR and your experiments with it, to gauge the implications and to see whether any improvements can be made either to the CCAC model of BBRv1 or to the BBRv1 algorithm itself.

I had questions about a few different spots in the paper:

(1) Regarding the paper's description of the BBR algorithm (in particular in the BDP calculation and pacing rate calculation) in Section 6 "CASE STUDY 1: BBR".

(1a) For the BDP calculation the text mentions:

the sender calculates the BDP as the current rate multiplied by the minimum RTT, i.e., as the (total number of bytes ACKed in the last RTT) * (min RTT) / (RTT). The sender sets its BDP estimate to the maximum value calculated over the last 10 RTTs.

AFAICT what is described there does not match BBR. The BBR BDP is not calculated using the maximum value of "(total number of bytes ACKed in the last RTT) * (min RTT) / (RTT)". It is calculated using the estimated bandwidth, which is itself calculated using a max of recent (generally non-app-limited) bandwidth samples over the last 10 round trips. But those BBR bandwidth samples are calculated in a different manner than what the paper describes.
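To make the structure concrete, here is a minimal sketch of the BDP estimate as I understand it (this is illustrative pseudocode with made-up numbers, not actual BBR or CCAC code; real BBR uses a windowed max filter keyed by round-trip count, and excludes app-limited samples):

```python
from collections import deque

class BwFilter:
    """Simplified windowed max of bandwidth samples over the last
    10 round trips (hypothetical simplification for illustration)."""
    def __init__(self, window_rounds=10):
        self.samples = deque(maxlen=window_rounds)

    def update(self, bw_sample):
        # One sample per round trip, oldest samples age out of the window.
        self.samples.append(bw_sample)

    def max_bw(self):
        return max(self.samples)

f = BwFilter()
for s in [900_000, 1_100_000, 1_000_000]:  # bytes/s, made-up samples
    f.update(s)

min_rtt = 0.05                             # s, made-up value
bdp_estimate = f.max_bw() * min_rtt        # estimated_bw * min_rtt
print(bdp_estimate)                        # 55,000 bytes
```

The key point is that the BDP estimate is (max filtered bandwidth estimate) * (min RTT), where the bandwidth estimate comes from BBR's own per-ACK bandwidth samples, not from the quantity the paper describes.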

The bandwidth sample implied there in the text (let's call it CCAC_BBR_model_bandwidth_sample) is: "(total number of bytes ACKed in the last RTT)" / "(RTT)". However, this is not how BBR calculates bandwidth samples. The BBR bandwidth sample calculation is described in the original BBR article and the IETF draft on the topic, and boils down to bw_sample = Δdelivered_data / Δtime, where the deltas are computed between the ACK before a data packet was transmitted and the ACK of that data packet. The BBR calculation can give values quite different from the approach you describe, particularly at low bandwidths, or really any time the ACK arrival process is not exactly in phase with the transmission process.
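To illustrate how the two definitions can diverge, here is a toy comparison (the numbers and scenario are made up for illustration; they are not from the paper or from a real trace):

```python
def ccac_model_sample(bytes_acked_last_rtt, rtt_s):
    # The sample implied by the paper's text:
    # (bytes ACKed in the last RTT) / RTT
    return bytes_acked_last_rtt / rtt_s

def bbr_sample(delivered_delta_bytes, time_delta_s):
    # BBR's delivery-rate sample: delta of delivered data over delta of
    # time, measured between the ACK that preceded a packet's transmit
    # and the ACK of that packet itself.
    return delivered_delta_bytes / time_delta_s

# Hypothetical scenario: the path delivers 12,500 bytes/s, but ACKs for
# this round arrive slightly bunched, so the measured RTT is 0.15 s
# while only one 1,500-byte packet happened to be ACKed in that window.
print(ccac_model_sample(1500, 0.15))   # 10,000 bytes/s (underestimates)
print(bbr_sample(1875, 0.15))          # 12,500 bytes/s (deltas count all
                                       # data delivered in the interval)
```

The delivery-rate deltas track what the path actually delivered between two ACK events, so they are robust to ACK phase effects that distort the "bytes ACKed in the last RTT" accounting.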

(1b) For the pacing rate calculation the text mentions:

BBR sets ... its pacing rate to (BDP estimate) / RTT

This does not match how BBR sets its pacing rate. The text seems to be saying the CCAC model of BBR computes a pacing rate as:

pacing rate = (BDP estimate) / RTT
            = (max_recent_CCAC_BBR_model_bandwidth_sample * (min RTT)) / RTT
            = max_recent_CCAC_BBR_model_bandwidth_sample * ((min RTT) / RTT)

The BBR pacing rate is actually computed using the estimated bandwidth, which (as mentioned above) is itself calculated using a max of recent (generally non-app-limited) bandwidth samples over the last 10 round trips.

One question I didn't see answered in the text is: which RTT value is used in the pacing rate calculation in CCAC? If it is the most recent RTT, as seems to be the case, then if one attempts to implement a BBR-inspired algorithm in this manner, the pacing rate will tend to underestimate the max_recent_CCAC_BBR_model_bandwidth_sample by a factor of: ((min RTT)/RTT).
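A quick numeric illustration of that underestimation factor (all values are hypothetical, just to make the scaling concrete):

```python
# pacing_rate = bdp_estimate / rtt = max_bw_sample * min_rtt / rtt,
# i.e. max_bw_sample scaled down by (min_rtt / rtt) whenever the most
# recent RTT exceeds the min RTT (e.g. due to queuing delay).

max_bw_sample = 1_250_000          # bytes/s, made-up max recent sample
min_rtt = 0.040                    # s
rtt = 0.050                        # s, most recent RTT with some queuing

bdp_estimate = max_bw_sample * min_rtt   # 50,000 bytes
pacing_rate = bdp_estimate / rtt         # 1,000,000 bytes/s

print(pacing_rate / max_bw_sample)       # 0.8, i.e. min_rtt / rtt
```

So with even modest queuing (RTT 25% above min RTT here), this formulation paces at only 80% of the model's own max bandwidth sample.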

Just to check my understanding: is this how the CCAC model of BBR was structured? If so, this may explain why the CCAC model of BBR is having bandwidth underutilization problems in the paper's experiments.

AFAICT from the above analysis, the CCAC BBR model's approach of using "pacing rate = (BDP estimate) / RTT" will generally underestimate the CCAC_BBR_model_bandwidth_sample unless "RTT" means exactly "min RTT"; but in that case, I wasn't sure why the paper would represent this quantity with "RTT" instead of "min RTT"?

If the CCAC model of the BBR algorithm diverges in these ways from the BBR algorithm, I am curious to what extent the results in the experiments with the CCAC model would apply to the actual BBR algorithm.

Is the code here the source code for the BBR model in CCAC discussed in the paper?

(2) Regarding this part of Section 6 "CASE STUDY 1: BBR" describing the issue:

CCAC generates examples of poor utilization...Our next step is to analyze these low-utilization examples. Figure 6 shows a schematic of the examples produced. When BBR increases the pacing rate to probe for network bandwidth, the pulse is small (i.e., the increase in pacing rate is small), and the network does not serve it at a higher bandwidth. Hence, BBR’s probe fails (i.e., the BDP estimate remains small). ...if the network is clean with a smooth service, the problem CCAC identifies can manifest itself.

(The italics are in the original).

I confess I don't understand the description of the scenario. To me, that passage sounds contradictory. The scenario describes all three of these statements as holding true: (a) poor utilization, (b) a sender that sends at a higher bandwidth, but "the network does not serve it at a higher bandwidth", and (c) this problem can "manifest itself" if "*the network is clean with a smooth service*" (the italics are in the original text). AFAICT, if (a) and (b) hold then we have a case where an underutilized link did not increase its service bandwidth when presented with a higher input bandwidth demand, which does not seem consistent with saying that "*the network is clean with a smooth service*" (at least as I think of that phrase).

We could also put this in the terms of the diagram, Figure 6. In that figure, the slope of the blue cumulative arrivals curve, A(t), increases during the bandwidth probing phase, but in the ACKs for the bandwidth probing phase the slope of the red cumulative service curve, S(t), does not increase. AFAICT this does not seem like "clean" or "smooth" service, so this again (if I am interpreting the section correctly) seems to contradict the claim that this issue can happen when "*the network is clean with a smooth service*".

Can you please elaborate on this passage, and sketch a little more detail to help me understand this scenario? Are there traces of this scenario that you could share, to help shed light on the dynamics?

(3) Regarding this part of Section 6 "CASE STUDY 1: BBR" describing the proposed fix:

One approach to solve this problem is to intentionally overestimate the pacing rate. For instance, the sender can pace BBR flows at twice the prescribed rate. In fact, recently, Facebook made this change to their BBR implementation in mvfst [27], the version of QUIC [28] they use in production. We implemented that version of the algorithm in CCAC. We found that CCAC no longer finds any cases where the algorithm gets <100% utilization in the steady state when the buffer is infinite

FWIW the phrase "overestimate the pacing rate" seems inconsistent with the BBR framework, in which the pacing rate is a control parameter chosen by the sender, rather than a network path property estimated by the sender. Using the terms of the BBR framework, the fix you describe seems to involve effectively doubling the pacing gain for steady-state bandwidth probing from 1.25 to 2.5.
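In that framing, the proposed fix amounts to something like the following sketch (illustrative only, not actual BBR or mvfst code; the function and values are hypothetical):

```python
def pacing_rate(pacing_gain, bw_estimate_bytes_per_s):
    # In the BBR framework, pacing rate is a control parameter chosen by
    # the sender: a gain applied to the (estimated) bottleneck bandwidth.
    return pacing_gain * bw_estimate_bytes_per_s

bw_estimate = 1_000_000                    # bytes/s, made-up value

# Standard ProbeBW probing phase uses a pacing gain of 1.25:
print(pacing_rate(1.25, bw_estimate))      # 1,250,000 bytes/s

# Pacing "at twice the prescribed rate" during probing is equivalent to
# an effective probing pacing gain of 2 * 1.25 = 2.5:
print(pacing_rate(2 * 1.25, bw_estimate))  # 2,500,000 bytes/s
```

Describing the change as a gain adjustment, rather than an "overestimate" of the rate, keeps the distinction between estimated path properties and chosen control parameters.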

If this is a general problem with the BBR algorithm, where it can manifest "poor utilization, even for arbitrarily small values of 𝑥," then I confess that I do not yet see how a simple change in the constant pacing gain used during bandwidth probing can be a general fix. Can you please elaborate?

Also, it looks like the mvfst Facebook QUIC BBR commit you reference ( https://github.com/facebookincubator/mvfst/commit/e04fcaacc1633c1bae78c61aac1f5f8a5784f657 ) only doubles the pacing rate in Startup, so I don't yet see how this can affect the steady-state behavior that you are studying. Can you please elaborate on how implementing "that version of the algorithm" in CCAC had the effect you mention (the paper mentions: "we are looking for low utilization in steady-state")? I suppose the text is referring to a general doubling mechanism (that applies to steady-state) that was inspired by the mvfst commit?

Sorry for the long e-mail. :-)

Many thanks for your time, and I hope we can collaborate to improve both the BBRv1 algorithm and models of that algorithm.

best regards,

neal
