Hello, I have a question regarding the determination of the 2% loss threshold value.
I understand that setting a lower threshold may result in poor throughput when BBR competes with traditional congestion control algorithms, while setting a higher threshold could make BBR overly aggressive.
Therefore, I am curious about how the 2% threshold was derived. Was it calculated based on theoretical analysis, or is it primarily supported by empirical results?
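For concreteness, here is a rough sketch of the kind of check I have in mind, modeled on the fixed-point style of the public BBR sources (BBR_SCALE/BBR_UNIT are real there, but the standalone function and its names are my own simplification, not the kernel's exact code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Fixed-point convention as in the public BBR sources. */
#define BBR_SCALE 8
#define BBR_UNIT  (1 << BBR_SCALE)        /* 1.0 in fixed point */

/* The 2% threshold in question, expressed in fixed point
 * (truncates to 5/256, i.e. ~1.95%). */
static const uint32_t loss_thresh = BBR_UNIT * 2 / 100;

/* Is the number of packets lost in this sample above the threshold
 * fraction of what was in flight? (Simplified standalone form.) */
static bool inflight_too_high(uint64_t tx_in_flight, uint64_t lost)
{
    if (lost == 0 || tx_in_flight == 0)
        return false;
    return lost > (tx_in_flight * loss_thresh >> BBR_SCALE);
}

int main(void)
{
    /* With 100 packets in flight, the truncated threshold is ~1.95%,
     * so 1 loss is tolerated while 2 losses trip the check. */
    printf("1 lost of 100: too high? %d\n", inflight_too_high(100, 1));
    printf("2 lost of 100: too high? %d\n", inflight_too_high(100, 2));
    return 0;
}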
Hello Neal,

We see that simulated 5% packet loss in a lab environment causes BBR to collapse the congestion window to what appears to be a small multiple of the MTU.
Do I understand correctly that this can be explained by the loss threshold?
It seems that BBR doesn't enter the bandwidth-probing state and therefore can't estimate the BDP properly to raise cwnd above the minimum.
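To make the suspicion concrete, here is a toy sketch of the clamping I imagine, assuming a floor like bbr_cwnd_min_target = 4 packets from the public BBR sources (that constant is real; the clamp function itself is only illustrative):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Floor on cwnd, in packets; matches bbr_cwnd_min_target in the
 * public BBR sources. */
#define CWND_MIN_TARGET 4

/* Illustrative clamp: cwnd is capped by the model's upper inflight
 * bound and floored at the minimum target. */
static uint32_t clamp_cwnd(uint32_t cwnd, uint32_t inflight_hi)
{
    if (cwnd > inflight_hi)
        cwnd = inflight_hi;
    if (cwnd < CWND_MIN_TARGET)
        cwnd = CWND_MIN_TARGET;
    return cwnd;
}

int main(void)
{
    /* If persistent loss keeps the inflight bound from growing, cwnd
     * sits at the floor, i.e. a small multiple of the MSS/MTU. */
    printf("cwnd = %" PRIu32 " packets\n", clamp_cwnd(100, 2));
    return 0;
}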
How would you recommend testing on bad networks, like low-signal Wi-Fi, where packet loss is constantly present regardless of the send rate?
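For context, our current setup is roughly the sketch below: loss is injected with netem independently of the send rate, and the sender pins BBR via the TCP_CONGESTION socket option. The interface name, delay, loss rate, and correlation figure are placeholders:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Emulate rate-independent random loss on the path first, e.g.:
     *   tc qdisc replace dev eth0 root netem delay 40ms loss 5% 25%
     * (eth0, 40ms, 5%, and the 25% burst correlation are placeholders;
     * correlated loss is closer to a weak radio link than independent
     * drops). */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Pin this socket to BBR (requires the bbr module to be available). */
    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, "bbr", strlen("bbr")) < 0)
        perror("setsockopt(TCP_CONGESTION)");

    /* Confirm which congestion control is actually in effect. */
    char cc[16] = {0};
    socklen_t len = sizeof(cc);
    if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, &len) == 0)
        printf("congestion control: %s\n", cc);

    /* ...connect() to the test server and stream data here... */
    close(fd);
    return 0;
}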