When it comes to rmem and wmem (the kernel receive and send buffer
limits), two scenarios come to mind:
(1) For users who have been dialing send or receive buffers down to
try to avoid bufferbloat in deep buffers: with BBR, this should no
longer be needed. BBR ought to be able to bound the amount of data in
flight to a sensible range, without any need for buffer tuning. That
sounds like what you were asking about, and the answer is: "with BBR,
this should just work". If it doesn't, please let us know.
(2) For users who have high-speed WAN paths with moderate-to-high
loss rates (e.g. due to shallow-buffered switches): BBR is often able
to fully utilize these pipes where previous congestion control
algorithms could not, so with BBR you may now be limited by your
kernel's send or receive buffer settings. Because TCP has an in-order
delivery model, and the TCP SACK specification treats SACKs as
"hints" rather than binding acknowledgments that the receiver has
retained the data, senders and receivers may have to keep several
BDPs (bandwidth-delay products) of data in the send and receive
buffers when packet loss recurs over multiple round trips. This is
because the TCP layer must wait until retransmissions fill in all the
holes in the sequence space before the data can be passed up to the
receiving application. So to take full advantage of the available
bandwidth on higher-loss paths, you may need to increase the maximum
kernel send and receive buffer sizes. For testing, you might try
something like:
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_wmem='4096 16384 16777216'
sysctl -w net.core.rmem_max=25165824
sysctl -w net.ipv4.tcp_rmem='4096 87380 25165824'
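As a rough guide for picking your own maximums (the numbers below are
purely illustrative, not from any particular deployment): the
bandwidth-delay product (BDP) of the path sets the scale, and the
buffer limits should allow a few BDPs so that loss-recovery holes fit.
For example, on a hypothetical 1 Gbit/s path with a 100 ms RTT:

# BDP (bytes) = bandwidth (bits/sec) / 8 * RTT (sec)
echo $(( 1000000000 / 8 * 100 / 1000 ))   # 12500000 bytes, roughly 12 MB

so maximums of a few times that (tens of MB) would be a reasonable
starting point for experimentation.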
neal