BBR about BDP calculation


litao

Nov 15, 2024, 8:33:39 AM
to BBR Development
I have a question: BBR uses rtt times bw when calculating BDP, but rtt includes both the time from client to server and the time from server to client. Doesn't the calculated BDP then also include the BDP from server to client, when what we actually want is the BDP from client to server?

Neal Cardwell

Nov 15, 2024, 9:34:56 AM
to litao, BBR Development
On Fri, Nov 15, 2024 at 8:33 AM litao <sliao...@gmail.com> wrote:
I have a question: BBR uses rtt times bw when calculating BDP, but rtt includes both the time from client to server and the time from server to client. Doesn't the calculated BDP then also include the BDP from server to client, when what we actually want is the BDP from client to server?

That depends on who "we" is. :-) Is "we" the client's transport connection endpoint or the server's transport connection endpoint?

It's important to keep in mind that most transport connections (including widely-deployed protocols such as TCP or QUIC) can send data in both directions between two endpoints. That means that each endpoint needs to run a congestion control algorithm to decide how fast to send data over the network.

The transport connection endpoint and its congestion control algorithm generally do not and should not care whether the endpoint is being used by a "client" or a "server". That's because the job of the transport connection endpoint is the same in either case (whether "client" or "server"): to send data to the other end of the connection, at an appropriate rate.

A transport connection endpoint can estimate its BDP (bandwidth-delay product) by computing the delivery rate for the data it is sending, and multiplying this by the estimated two-way propagation delay (where the two-way propagation delay is often called the "min_rtt", "baseRTT", or RTprop in the original BBR article).

Putting this all together, the client and server can each independently estimate their BDP:

+ the client's transport connection endpoint congestion control algorithm can estimate the client's BDP by multiplying (a) the delivery rate for the data the client is sending, by (b) the client's estimated two-way propagation delay

+ the server's transport connection endpoint congestion control algorithm can estimate the server's BDP by multiplying (a) the delivery rate for the data the server is sending, by (b) the server's estimated two-way propagation delay
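The two bullets above can be sketched in a few lines of Python. This is a minimal illustration, not BBR's actual implementation, and the rates and delays below are hypothetical numbers, not measurements from the thread:

```python
# Minimal sketch: each endpoint estimates its own BDP from its own
# measurements (delivery rate in bits/sec, two-way propagation delay in sec).

def estimate_bdp(delivery_rate_bps, min_rtt_s):
    """BDP in bits: delivery rate times estimated two-way propagation delay."""
    return delivery_rate_bps * min_rtt_s

# Hypothetical, asymmetric path: the client delivers 10 Mbit/s with a
# 50 ms min_rtt; the server delivers 40 Mbit/s with a 52 ms min_rtt.
client_bdp = estimate_bdp(10e6, 0.050)   # about 500,000 bits
server_bdp = estimate_bdp(40e6, 0.052)   # about 2,080,000 bits
```

As the thread notes, the two endpoints arrive at quite different BDPs because both factor (a) and factor (b) can differ between the two directions.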

Note that in general we expect that the client and server may have very different estimated BDPs. This is because the endpoints can have both different delivery rates and different two-way propagation delays:

(1) Delivery rate: the delivery rate measured by each side can be very different, because the route can be asymmetric, the links can have different bandwidths in each direction, and the competing traffic can be different in each direction.

(2) Two-way propagation delay: Intuitively, we would expect well-functioning client and server congestion control algorithms to calculate similar estimated two-way propagation delay values, but the estimates will tend to be at least somewhat different because:

(a) they are taking RTT samples at different times, with different queuing delays, link-layer delays, etc.
(b) they are computing RTTs for packets of different sizes traveling over paths with potentially different available bandwidths (so serialization delays of the data packets can differ greatly)
(c) the delayed ACK policy of each endpoint may differ
(d) they may use different algorithms to combine those RTT samples to estimate the two-way propagation delay
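For point (d), one common shape for such an algorithm is a windowed minimum filter; the original BBR article describes taking the minimum of RTT samples over roughly a 10-second window. The sketch below is a simplified illustration of that idea, not the kernel implementation:

```python
from collections import deque

# Simplified windowed min-RTT filter: keep recent (timestamp, rtt) samples
# and report the minimum RTT seen within the window.

class MinRttFilter:
    def __init__(self, window_s=10.0):
        self.window_s = window_s
        self.samples = deque()  # (timestamp_s, rtt_s) pairs

    def update(self, now_s, rtt_s):
        self.samples.append((now_s, rtt_s))
        # Expire samples older than the window.
        while self.samples and now_s - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        return min(rtt for _, rtt in self.samples)
```

Two endpoints running even this same filter would still disagree, because they feed it different samples taken at different times, as points (a) through (c) describe.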

best regards,
neal

 


litao

Nov 18, 2024, 8:58:45 AM
to BBR Development
Thank you, Neal, for your detailed reply.
“That depends on who "we" is. :-) Is "we" the client's transport connection endpoint or the server's transport connection endpoint?”
I mean from the client side.

Sorry, my question may not have been described clearly. Should the client only include the propagation delay from the client to the server when calculating the client's BDP? Currently BBR uses the two-way propagation delay (client to server plus server to client) to calculate the BDP.

Neal Cardwell

Nov 18, 2024, 9:58:19 AM
to BBR Development
On Monday, November 18, 2024 at 8:58:45 AM UTC-5 sliao...@gmail.com wrote:
Thank you, Neal, for your detailed reply.
“That depends on who "we" is. :-) Is "we" the client's transport connection endpoint or the server's transport connection endpoint?”
I mean from the client side.

Sorry, my question may not have been described clearly. Should the client only include the propagation delay from the client to the server when calculating the client's BDP? Currently BBR uses the two-way propagation delay (client to server plus server to client) to calculate the BDP.

Please note that BBR is not redefining "BDP"; it's making use of the long-standard notion of BDP defined as bandwidth*RTT (not bandwidth*one_way_delay). 

For example, consider RFC 1072 ("TCP Extensions for Long-Delay Paths") by Jacobson and Braden in 1988, which has a clear definition of BDP and its rationale:

This memo proposes a set of extensions to the TCP protocol to provide efficient operation over a path with a high bandwidth*delay product.
...
The significant parameter is the product of bandwidth (bits per second) and round-trip delay (RTT in seconds); this product is the number of bits it takes to "fill the pipe", i.e., the amount of unacknowledged data that TCP must handle in order to keep the pipeline full.

Why is BDP calculated with the RTT instead of one-way propagation delay?

As noted in RFC 1072, the value we are trying to capture with BDP is the amount of unacknowledged data that the transport endpoint must allow in order to keep the bottleneck link fully utilized.

The key aspect here is *acknowledged*. For data to be acknowledged, the data must travel from the data sender to the data receiver, and then the ACK must travel on the return path from the data receiver to the data sender. The time it takes for this to happen is the RTT.

More concretely, for simplicity, imagine the case where a transport endpoint restarts from idle, with no data in flight in the network. To fully utilize the bottleneck link, the transport endpoint must send at the rate of the bottleneck bandwidth available to the transport connection, and must sustain that rate until it receives the ACK for the first data packet it sent after restarting from idle. How much data will the connection have sent during that time? The amount of data in flight in the network (unacknowledged) will be:

   data_in_flight = sending_rate * first_send_until_first_ACK = bandwidth * RTT = BDP

That's why the BDP (bandwidth * RTT) is the amount of data the sender needs to allow in flight (unacknowledged) to fully utilize the bottleneck link.
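Plugging hypothetical numbers into the formula above (a 100 Mbit/s bottleneck and a 40 ms RTT; both values are illustrative, not from the thread):

```python
# Restart-from-idle example: the data sent at the bottleneck rate while
# waiting for the first ACK equals bandwidth * RTT, i.e. the BDP.

bandwidth_bps = 100e6   # hypothetical bottleneck bandwidth (bits/sec)
rtt_s = 0.040           # hypothetical time from first send to first ACK

bdp_bits = bandwidth_bps * rtt_s   # about 4,000,000 bits
bdp_bytes = bdp_bits / 8           # about 500,000 bytes in flight
```

So on this hypothetical path the sender must allow roughly 500 kB unacknowledged in flight to keep the bottleneck link fully utilized; a one-way-delay product would cover only part of the first-send-to-first-ACK interval.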

Does that clarify things?

best regards,
neal

litao

Nov 18, 2024, 9:26:03 PM
to BBR Development
Got it. Thank you, Neal!