Wrong bandwidth estimate in a traffic-limited scenario


Peilong Jiang

Jun 24, 2022, 3:47:43 AM
to BBR Development
Hello everyone,  thank you for your amazing work! BBRv1 works well in most of my scenarios.

But when I use it with QUIC in a traffic-limited scenario, where a 1000 Mbps link is limited to 2 Mbps by dropping the excess packets, the bandwidth estimate never drops below 100 Mbps during active use. Is there a mitigation measure?

Neal Cardwell

Jun 24, 2022, 8:21:23 AM
to Peilong Jiang, BBR Development
Hi,

Thanks for your post. Can you please provide some more details and clarification?

+ Can you please clarify the scenario:
  + the bottleneck bandwidth
  + the bottleneck buffer depth
  + the minimum round trip time of the path
  + the initial congestion window used
  + how many flows were involved; i.e. was there any cross traffic? if there were cross-traffic flows, what congestion control were they using?
  + were the flows bulk flows or short flows? if short flows, how much data was transferred?

+ When you say "1000Mbps limit to 2Mbps", do you mean that the bottleneck rate starts at 1000 Mbps and then later changes to 2 Mbps? Or that the sender's line rate is 1000 Mbps and then there is a later hop in the path that has a link rate of 2 Mbps?

+ What does "active use scene" mean?

Thanks!
neal





Neal Cardwell

Jun 27, 2022, 1:38:57 PM
to Peilong Jiang, BBR Development
Hi Peilong,

Thanks for the details.

This detail that you mentioned may be key:

> I use software for traffic limiting, such as clumsy (GitHub version 0.3RC4).
> Its strategy is quite simple: if the current traffic is beyond the limit (256 KB/s, as I set),
> it just drops packets.

It sounds like you are using the "clumsy" network emulator ( https://github.com/jagt/clumsy ?) for rate-limiting, and clumsy is using some kind of policing rather than shaping.
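
The distinction matters for what BBR ends up measuring: a policer admits bursts at the full line rate until its allowance runs out and then drops everything extra, while a shaper queues the excess and releases it at the configured rate. Here is a rough token-bucket policer sketch in Python to illustrate the idea (illustrative only; this is not clumsy's actual code, and the 100 KB bucket depth is an assumed value):

    # Generic token-bucket policer sketch (illustrative; not clumsy's code).
    # Bursts up to the bucket depth pass at the full line rate; once the
    # bucket is empty, excess packets are dropped rather than queued.
    RATE_BPS = 2_000_000 / 8        # 2 Mbps limit, in bytes per second
    BUCKET_BYTES = 100_000          # assumed bucket depth, not clumsy's real value

    class TokenBucketPolicer:
        def __init__(self, rate_Bps: float, bucket_bytes: float):
            self.rate = rate_Bps
            self.depth = bucket_bytes
            self.tokens = bucket_bytes
            self.last = 0.0

        def allow(self, now: float, pkt_bytes: int) -> bool:
            """Return True if the packet passes, False if the policer drops it."""
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= pkt_bytes:
                self.tokens -= pkt_bytes
                return True
            return False            # policer: drop, never queue or delay

A shaper would instead hold the packet in a queue and transmit it later, so the receiver never sees data arriving faster than the configured rate.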

BBR's core bandwidth estimator measures the delivery rate: the rate at which data is delivered, over the time scale of roughly a single round trip. When there is a policer on a network path, that delivery rate can be anywhere between the policed rate (here 2 Mbps) and the underlying physical bottleneck link rate (here 1 Gbps), depending on the details of the policer implementation (e.g., token bucket depth), the RTT of the path, etc. That is presumably why the estimated bandwidth is 100 Mbps even though the "real bandwidth" is 1 Gbps and the policed rate is 2 Mbps.
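
To make that concrete with the numbers from this thread: a delivery-rate sample is roughly the bytes delivered in an interval divided by the length of that interval. If the policer lets a burst of about 100 KB through at line rate within one 8 ms round trip, the sample lands near 100 Mbps even though the long-term policed rate is only 2 Mbps (the 100 KB burst size is an assumed figure for illustration, not a measured value):

    # Illustrative arithmetic only; the 100 KB burst size is an assumption.
    burst_bytes = 100_000            # bytes the policer lets through at line rate
    rtt_s = 0.008                    # 8 ms minimum RTT reported for this path

    sample_bps = burst_bytes * 8 / rtt_s
    print(f"delivery-rate sample   ~ {sample_bps / 1e6:.0f} Mbps")    # ~100 Mbps

    policed_bps = 256_000 * 8        # 256 KB/s limit configured in clumsy
    print(f"long-term policed rate ~ {policed_bps / 1e6:.1f} Mbps")   # ~2 Mbps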

Here is a good paper about policing if you want more info:
  "An Internet-Wide Analysis of Traffic Policing" (Flach et al., SIGCOMM 2016)

Keep in mind that the bandwidth estimate is just that: an estimate. The estimate can be above or below the "real" rate, and the "real" rate can fluctuate wildly depending on policer behavior.

The main question is: how well does the transport connection perform with BBR congestion control versus the alternative (likely CUBIC)? Have you tried comparing the performance of BBR vs CUBIC in your test scenario?
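
If you also want to run that comparison with plain Linux TCP as a point of reference (QUIC stacks expose the choice of congestion control through their own configuration), the kernel lets you pick the algorithm per socket; a minimal sketch, assuming a Linux sender, Python 3.6+, and that the bbr module is loaded:

    # Minimal sketch: select the TCP congestion control per socket on Linux.
    # The algorithm must appear in net.ipv4.tcp_available_congestion_control.
    import socket

    def connect_with_cc(host: str, port: int, cc: str = "bbr") -> socket.socket:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, cc.encode())
        s.connect((host, port))
        return s

Running the same bulk transfer once with cc="bbr" and once with cc="cubic" and comparing goodput over several minutes gives a fairer picture than comparing the bandwidth estimates themselves.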

thanks,
neal

On Sun, Jun 26, 2022 at 11:19 PM Peilong Jiang <jpl...@gmail.com> wrote:
Hi,

Thank you for your prompt reply.

I use BBR on a LAN for transferring video streams; here is some data you may need:

+ the bottleneck bandwidth BBR estimates is around 100 Mbps, but the real bandwidth is 1000 Mbps.
+ the bottleneck buffer depth: I have no idea.
+ the minimum round trip time of the path is 8 ms
+ the initial congestion window is 32, using the QUIC default setting
+ Maybe around 8 flows are involved, but the communication is all on the LAN, and video streaming takes up most of the traffic.
+ Bulk flows; the server's send rate is around 10 Mbps in a stable state.
+ I use software for traffic limiting, such as clumsy (GitHub version 0.3RC4). Its strategy is quite simple: if the current traffic is beyond the limit (256 KB/s, as I set), it just drops packets.
+ "Active use scene" means heavy-usage scenarios such as continuous video streaming. Sorry for my poor English expression 😂.

If you have any further questions, please feel free to contact me. 😃

Thanks!
Peilong

On Fri, Jun 24, 2022 at 8:21 PM Neal Cardwell <ncar...@google.com> wrote: