On Tue, Nov 22, 2016 at 3:14 AM, Tao X <
g.xi...@gmail.com> wrote:
> Hi, neal
>
> I'm interested in your test results.
> Could you provide the unit of the values in your test results? (I guess
> it's Mbps for the CUBIC/Vegas/BBR values?)
Yes, those numbers are in Mbps.
> And, what testing tools did you use to get the results?
Those tests used netperf for traffic and netem for network emulation.
> On the question of "what is BBR based on?"
>
> In my understanding, BBR can be said to be "BDP-based", since BBR
> continuously calculates the bandwidth and RTT, and these two factors
> together give the BDP.
Yes, saying BBR is "BDP-based" would be one way to capture a large
aspect of BBR's behavior.
As the BBR paper in ACM Queue discusses, there are two conditions that
BBR tries to meet, in order to achieve high throughput and low delay:
A connection runs with the highest throughput and lowest delay
when (rate balance) the bottleneck packet arrival rate equals BtlBw,
and (full pipe) the total data in flight is equal to the BDP
(= BtlBw × RTprop).
So yes, BBR tries to operate near the BDP (= BtlBw × RTprop), to meet
the "full pipe" condition.
But BBR also tries to meet the "rate balance" condition, by pacing at
or very near the bottleneck bandwidth most of the time.
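
To make that arithmetic concrete, here is a minimal sketch (illustrative
constants and names only, not the actual Linux tcp_bbr.c code) of how an
estimated BtlBw and RTprop combine into a BDP, a pacing rate target for
the "rate balance" condition, and an inflight target for the "full pipe"
condition:

#include <stdint.h>
#include <stdio.h>

/* Illustrative BDP math only; not the Linux tcp_bbr.c implementation. */
int main(void)
{
	uint64_t btlbw_bps   = 100 * 1000 * 1000;  /* estimated BtlBw: 100 Mbit/s */
	uint64_t rtprop_usec = 40 * 1000;          /* estimated RTprop: 40 ms     */
	uint32_t mss_bytes   = 1448;               /* payload per segment         */

	/* BDP = BtlBw * RTprop, converted from (bits/sec * usec) to bytes. */
	uint64_t bdp_bytes = btlbw_bps / 8 * rtprop_usec / 1000000;

	/* "Rate balance": pace packets at (roughly) the bottleneck bandwidth. */
	uint64_t pacing_rate_bps = btlbw_bps;

	/* "Full pipe": keep about one BDP of data in flight. */
	uint64_t target_inflight_pkts = (bdp_bytes + mss_bytes - 1) / mss_bytes;

	printf("BDP: %llu bytes (~%llu packets), pacing rate: %llu bit/s\n",
	       (unsigned long long)bdp_bytes,
	       (unsigned long long)target_inflight_pkts,
	       (unsigned long long)pacing_rate_bps);
	return 0;
}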
> BTW, according to the BBR paper's title, it's congestion-based.
To quote the BBR paper in ACM Queue again:
Congestion is just sustained operation to the right of the BDP line,
and congestion control is some scheme to bound how far to the right
a connection operates on average.
So you might say that BBR is "congestion-based" in the sense that it
explicitly tries to bound how far above the BDP the sending flows
operate. In fact, BBR explicitly bounds the data in flight to 2*BDP,
and in practice BBR's pacing gain cycling algorithm often keeps
inflight closer to 1*BDP. And we're working now on expanding the set
of cases in which inflight is closer to 1*BDP than 2*BDP.
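
As a rough sketch of those bounds (just an illustration, not the exact
kernel code; the gain values are the commonly described ProbeBW cycle,
and the 2*BDP cap is the bound mentioned above):

#include <stdint.h>
#include <stdio.h>

/* Simplified sketch of BBR's inflight bound and pacing-gain cycling;
 * constants follow the description above, not the exact tcp_bbr.c code. */

/* ProbeBW pacing-gain cycle: probe briefly above BtlBw, then drain the
 * queue that may have built up, then cruise at ~1.0 so inflight stays
 * near 1*BDP most of the time. */
static const double pacing_gain_cycle[] =
	{ 1.25, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 };

/* Cap the data in flight at 2*BDP, as described above. */
static uint64_t bbr_inflight_cap_bytes(uint64_t bdp_bytes)
{
	return 2 * bdp_bytes;
}

/* Pacing rate for the current phase of the gain cycle. */
static uint64_t bbr_pacing_rate_bps(uint64_t btlbw_bps, int cycle_idx)
{
	int n = sizeof(pacing_gain_cycle) / sizeof(pacing_gain_cycle[0]);
	return (uint64_t)(pacing_gain_cycle[cycle_idx % n] * btlbw_bps);
}

int main(void)
{
	uint64_t bdp_bytes = 500 * 1000;           /* example BDP: 500 KB        */
	uint64_t btlbw_bps = 100 * 1000 * 1000;    /* example BtlBw: 100 Mbit/s  */

	printf("inflight cap: %llu bytes\n",
	       (unsigned long long)bbr_inflight_cap_bytes(bdp_bytes));
	for (int i = 0; i < 8; i++)
		printf("phase %d pacing rate: %llu bit/s\n", i,
		       (unsigned long long)bbr_pacing_rate_bps(btlbw_bps, i));
	return 0;
}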
This is in contrast to loss-based congestion control, which reacts to
packet losses. Losses can happen much later than the onset of
congestion (bufferbloat, caused by deep FIFO buffers) or much earlier
than actual congestion (high-speed WAN traffic going through
shallow-buffered switches).
neal