I had the exact same question: what metrics are used to determine improvement? We are thinking of evaluating this at my current work, and the first step is to set up observability so we can see the impact.
I'm wondering which synthetic benchmarks were used (Netperf, etc.), and where the actual measurements were taken (client side, kernel side, etc.).
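For the measurement side, here is the minimal sketch I had in mind for a Linux client; the server name and the 30-second duration are just placeholders, and it assumes iperf3 and iproute2 (for ss) are installed:

# Client-side view: application throughput as seen by the sender.
iperf3 -c <server> -C bbr -t 30

# Latency under load, from a second shell while the transfer runs.
ping -c 30 <server>

# Kernel-side view: per-socket cwnd, rtt, pacing rate and delivery rate.
ss -tin dst <server>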
On Tue, Sep 20, 2016 at 3:33 AM, Tomasz Jamroszczak <tjamro...@opera.com> wrote:
On Mon, 19 Sep 2016 20:01:41 +0200, Dave Taht <dave...@gmail.com> wrote:
I put up some very limited test results here.
http://blog.cerowrt.org/post/bbrs_basic_beauty/
I was *really* impressed by how low it held the RTT, while holding
bandwidth high. Please let me know if I'm misinterpreting the new
"sawtooth" or got anything else wrong.
You write: "I think what we are doing for wifi remains worthwhile. And, to give the BBR developers their day in the sun, I’m not going to publish those results." That is interesting: how does BBR deal with WiFi and its non-random, bursty packet loss? Do you have any insights to share?
By design, BBR first and foremost bases its sending rate on the actual delivery rate of the network, rather than on packet loss or delay signals. So if the packet loss is low enough that it does not impact the overall delivery rate of the path, then BBR is able to fully utilize the path. In general, if the packet loss rate is below 15% then BBR is able to fully utilize the path (reaching link_bandwidth * (1 - loss_rate)). This 15% threshold is a design parameter, rather than a fundamental limit of the algorithm.

In the A/B experiments I have done with real wifi networks I have access to, BBR tends to do as well or better than CUBIC. And the throughput numbers we see for YouTube traffic show similar trends: on the whole, BBR tends to do as well or better than CUBIC on most cellular, wifi, DSL, or cable modem paths.

neal
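As a rough worked example of that relationship: at 1% loss on a 10 Gbps path, BBR should still reach about 10 Gbps * (1 - 0.01), roughly 9.9 Gbps, since 1% is far below the 15% design threshold. A simple way to see the contrast on the netem setup posted further down in this thread (assuming the server address 192.168.100.2 and the 1% loss qdisc from those scripts) is to run the same transfer with each congestion control:

# Same path, same netem delay 100ms loss 1%; only the congestion control changes.
iperf3 -c 192.168.100.2 -t 30 -C cubic
iperf3 -c 192.168.100.2 -t 30 -C bbr
# Expected: CUBIC stays well below link rate because it backs off on every loss,
# while BBR should stay near link_bandwidth * (1 - loss_rate).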
[root@server bbr]# cat server.sh
#!/bin/bash
# client(sender) --- server(receiver)
# tc qdisc: fq        tc qdisc: ingress + netem(delay 100ms loss 1%)

# set up the environment
ethtool -i ens2f0 | grep driver
ethtool ens2f0 | grep Speed
ip link set dev ens2f0 up
ip addr add 192.168.100.2/24 dev ens2f0

# redirect ingress traffic to ifb0 so netem can add delay and loss to it
modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev ens2f0 ingress
tc filter add dev ens2f0 parent ffff: matchall action mirred egress redirect dev ifb0
tc filter show dev ens2f0 ingress
tc qdisc add dev ifb0 root netem delay 100ms loss 1%

[root@client bbr]# cat client.sh
#!/bin/bash
# client(sender) --- server(receiver)
# tc qdisc: fq        tc qdisc: ingress + netem(delay 100ms loss 1%)

# set up the environment
ethtool -i ens1 | grep driver
ethtool ens1 | grep Speed
ip link set dev ens1 up
ip addr add 192.168.100.1/24 dev ens1

# fq on the sender provides the pacing that BBR relies on
tc qdisc del dev ens1 root
tc qdisc add dev ens1 root fq
tc qdisc show dev ens1
Steps to Reproduce:

1. Set the TCP buffer limits to 256M (268435456 bytes) on client and server (10 Gbps x 100ms = 125MB BDP):
   net.core.rmem_max = 268435456
   net.core.wmem_max = 268435456
   net.ipv4.tcp_rmem = 4096 87380 268435456
   net.ipv4.tcp_wmem = 4096 65536 268435456

2. On the server:
   # sh ./server.sh
   # iperf3 -s -1

3. On the client:
   # sh ./client.sh
   # iperf3 -c 192.168.100.2 -C bbr
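Before step 3, it may also be worth confirming that bbr congestion control is actually available on the client; this check assumes a kernel built with the tcp_bbr module (CONFIG_TCP_CONG_BBR):

# load the BBR module and check that the kernel lists it
modprobe tcp_bbr
sysctl net.ipv4.tcp_available_congestion_control

# optional: make it the default instead of passing -C bbr to iperf3
# sysctl -w net.ipv4.tcp_congestion_control=bbr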