Which Kernel version is the patch targeting?


Joseph D. Beshay

Sep 19, 2016, 11:13:40 PM
to BBR Development
Hi,

Which net-next branch/tag is the patch for? I have checked both master and 4.8-rc6, but neither seems to have the 'cong_control' field in the tcp_congestion_ops struct. Is there another patch I am missing?

Also, how is the BBR implementation making sure the rest of the TCP stack is not overwriting the sk_pacing_rate value? (such as here: http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/tree/net/ipv4/tcp_input.c#n3345) I am guessing the BBR patch is targeting a branch that has all of this taken care of.
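
For reference, the hook I am looking for, as it appears in the patch posting (a sketch from my reading of the series, not the exact patch text), is:

struct tcp_congestion_ops {
        /* ... existing callbacks (ssthresh, cong_avoid, ...) ... */

        /* call when packets are delivered to update cwnd and pacing rate,
         * after all the ca_state processing. (optional)
         */
        void (*cong_control)(struct sock *sk, const struct rate_sample *rs);

        /* ... */
};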

Regards,
Joseph

Eric Dumazet

Sep 19, 2016, 11:16:25 PM
to BBR Development
Make sure you apply the complete patch series on top of David Miller's net-next tree.

V4 should be available soon.

Tong Meng

Sep 20, 2016, 2:02:46 AM
to BBR Development
Another thing to confirm: since a BBR sender does not halve its congestion window the way legacy TCP does, compiling and building a new kernel for the sender alone is enough, i.e., there is no need to change the receiver-side kernel, correct?

Neal Cardwell

Sep 20, 2016, 6:40:53 AM
to Tong Meng, BBR Development
Yes, that is correct. When using BBR you only need to change the
sender-side kernel.

neal

Tong Meng

Sep 27, 2016, 11:29:46 PM
to BBR Development
Now I have BBR running on Emulab, and I am getting some exciting results from single TCP test flows, which show orders-of-magnitude throughput improvement in the presence of packet loss. Really great work!

Then I ran into a problem when I tried to use sch_fq.
For reference, I built sch_fq (as well as sch_fq_codel) as a kernel module.
I can load the module with "modprobe sch_fq" (and according to "lsmod", the module is indeed loaded).
But when I then try to set it as the default with "sysctl -w net.core.default_qdisc=sch_fq", sysctl fails with "No such file or directory".
Am I doing something wrong?

As I am just starting to explore congestion control, my problem may be quite naive, but I would really appreciate your help!

Neal Cardwell

Sep 27, 2016, 11:48:47 PM
to Tong Meng, BBR Development
I believe the "sch_" prefix is not needed. You can try:

sysctl -w net.core.default_qdisc=fq

However, that probably won't take effect if you already have a qdisc
set up for your NICs.

To install fq immediately, you could try (assuming your NIC is "eth0"):

tc qdisc replace dev eth0 root fq pacing
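
To double-check that it took effect (again assuming "eth0"):

tc qdisc show dev eth0

You should see a line starting with something like "qdisc fq ...: root".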

hope that helps,
neal

Eric Dumazet

Sep 28, 2016, 12:10:02 AM
to BBR Development, mengto...@gmail.com
Note that if your NIC is a multi-queue NIC, I highly recommend using MQ+FQ.

One way to set this up would be:

# sysctl net.core.default_qdisc=fq
net.core.default_qdisc = fq

# tc qd replace dev eth0 root mq

(This will automatically install fq children below mq)

# tc -s -d qd sh dev eth0
qdisc mq 8001: root 
 Sent 2295 bytes 21 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc fq 0: parent 8001:1 limit 10000p flow_limit 100p buckets 1024 quantum 3028 initial_quantum 15140 
 Sent 90 bytes 1 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  1 flows (0 inactive, 0 throttled)
  0 gc, 0 highprio, 0 throttled
qdisc fq 0: parent 8001:2 limit 10000p flow_limit 100p buckets 1024 quantum 3028 initial_quantum 15140 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  0 flows (0 inactive, 0 throttled)
  0 gc, 0 highprio, 0 throttled
qdisc fq 0: parent 8001:3 limit 10000p flow_limit 100p buckets 1024 quantum 3028 initial_quantum 15140 
 Sent 2205 bytes 20 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  7 flows (6 inactive, 0 throttled)
  0 gc, 0 highprio, 0 throttled
qdisc fq 0: parent 8001:4 limit 10000p flow_limit 100p buckets 1024 quantum 3028 initial_quantum 15140 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  0 flows (0 inactive, 0 throttled)
  0 gc, 0 highprio, 0 throttled
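
To make the default qdisc persist across reboots, the usual way is a sysctl config entry, for example:

# echo "net.core.default_qdisc=fq" >> /etc/sysctl.conf
# sysctl -p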

8918...@qq.com

Dec 13, 2016, 9:31:27 PM
to BBR Development
Hi
I have the same question. I tried to compile Linux kernels 2.6.34 and 3.10.1 with the patches “https://lwn.net/Articles/701177/” and “https://patchwork.ozlabs.org/patch/671069/”, but failed. Which kernel versions are supported by the BBR patch, and where is the BBR patch?

Neal Cardwell

Dec 13, 2016, 11:25:19 PM
to 8918...@qq.com, BBR Development
On Tue, Dec 13, 2016 at 9:31 PM, <8918...@qq.com> wrote:
Hi
I have the same question. I tried to compile Linux kernels 2.6.34 and 3.10.1 with the patches “https://lwn.net/Articles/701177/” and “https://patchwork.ozlabs.org/patch/671069/”, but failed. Which kernel versions are supported by the BBR patch, and where is the BBR patch?

BBR is in Linux 4.9 and beyond.

The initial commit for BBR can be viewed here:


Note that the BBR module itself depends on previous patches in the BBR patch series, as well as a number of recent changes to the Linux TCP stack. Backporting it to older versions of Linux would require backporting a number of dependencies.
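
One quick way to check whether a given tree already has the key hook is a grep from the top of the kernel tree (no output means the hook, and hence the series, is missing):

grep -n "cong_control" include/net/tcp.h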

To get a copy of the current Linux source tree, you can use:


Our team put together a quick-start guide with pointers on how to download the latest Linux networking sources, enable and configure BBR and the fq qdisc, and build and boot a new kernel:
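Once you are running a 4.9+ kernel, enabling BBR typically comes down to two sysctls (BBR in 4.9 relies on the fq qdisc for pacing, so set that first):

sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr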


Hope that helps.

cheers,
neal

 



Brian Tierney

Dec 14, 2016, 9:53:25 AM
to BBR Development

For those folks who want to try BBR on a RHEL/CentOS-based system, 4.9 and BBR are now available in the 'elrepo' kernel as of last weekend.

All you need to do is:

  rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org   
  # for CentOS 6
  rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm  
  # for CentOS 7
  rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm 
  # then to install kernel
  yum -y --enablerepo=elrepo-kernel install kernel-ml


Then configure Grub to use the new kernel:
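
For example, on CentOS 7 with grub2, something like this (a sketch; the menu index of the new kernel may differ, and GRUB_DEFAULT=saved is assumed in /etc/default/grub):

  # list boot entries; the freshly installed kernel-ml is usually entry 0
  awk -F\' '/^menuentry/ {print i++ ": " $2}' /etc/grub2.cfg
  grub2-set-default 0
  reboot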








Nikita Shirokov

Apr 10, 2017, 10:42:20 PM
to BBR Development
Hi, Eric.
Is it possible to somehow pass fq's options while turning it on the way you described (through mq and the default qdisc)?
We want to try increasing the bucket count: after turning FQ on we are seeing some CPU regression (5 to 7%), and we want to play with the available parameters to see if we can improve our CPU utilization. Unfortunately tc's documentation is super scarce, and I wasn't able to find anything. Trying to change fq's parameters after adding the mq qdisc bails out with errors like this:

tc qd show dev eth0 | head
qdisc mq 8009: root
qdisc fq 0: parent 8009:1 limit 10000p flow_limit 100p buckets 1024 orphan_mask 1023 quantum 3028 initial_quantum 15140 refill_delay 40.0ms
qdisc fq 0: parent 8009:2 limit 10000p flow_limit 100p buckets 1024 orphan_mask 1023 quantum 3028 initial_quantum 15140 refill_delay 40.0ms 


tc qd change dev eth0 parent 8009:1  fq  buckets 2048
RTNETLINK answers: No such file or directory

--
Nikita

Eric Dumazet

Apr 10, 2017, 11:37:41 PM
to Nikita Shirokov, BBR Development
Sure, you can run a script like this (the fq children that mq auto-creates get handle 0:, which is why "tc qd change" cannot address them; creating them yourself with explicit handles avoids that):

# Number of TX queues on the NIC
NBQ=8

# fq options to apply to every TX queue
FQ="fq buckets 4096"

for ETH in eth0
do
    # wipe the existing root qdisc and install mq with a known handle
    tc qd del dev $ETH root 2>/dev/null
    tc qd add dev $ETH root handle 100: mq

    # attach one fq qdisc, with our options, to each TX queue;
    # mq class IDs are hexadecimal, hence the printf %x
    for i in `seq 1 $NBQ`
    do
        slot=$( printf %x $(( i )) )
        tc qd add dev $ETH handle $slot: parent 100:$slot $FQ
    done
done
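
You can then confirm the new bucket count took effect with:

# tc -d qd sh dev eth0 | grep buckets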

Nikita Shirokov

Apr 10, 2017, 11:42:23 PM
to BBR Development
Thank you for such a fast reply!

--
Nikita