Since these helpers eventually call bpf_bprintf_prepare(),
I figured adding protection around bpf_try_get_buffers(),
which triggers the original warning, should be sufficient.
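For reference, the check that fires is the per-CPU nest-level guard in
bpf_try_get_buffers() (the same lines appear as context in the diff for
approach 2 below):

	nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
	if (WARN_ON_ONCE(nest_level > MAX_BPRINTF_NEST_LEVEL)) {
		this_cpu_dec(bpf_bprintf_nest_level);
		return -EBUSY;
	}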
I tried a few approaches to address the warning, as described below:
1. preempt_disable() / preempt_enable() around bpf_prog_run_pin_on_cpu()
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 1b61bb25ba0e..6a128179a26f 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -1021,7 +1021,9 @@ u32 bpf_flow_dissect(struct bpf_prog *prog, struct bpf_flow_dissector *ctx,
 			     (int)FLOW_DISSECTOR_F_STOP_AT_ENCAP);
 	flow_keys->flags = flags;
 
+	preempt_disable();
 	result = bpf_prog_run_pin_on_cpu(prog, ctx);
+	preempt_enable();
 
 	flow_keys->nhoff = clamp_t(u16, flow_keys->nhoff, nhoff, hlen);
 	flow_keys->thoff = clamp_t(u16, flow_keys->thoff,
This fixes the original WARN_ON in both PREEMPT_FULL and RT builds.
However, when tested with the syz reproducer of the original bug [1], it
still triggers the known DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt))
warning from __local_bh_disable_ip(), because the added preempt_disable()
conflicts with the RT spinlock semantics of local_bh_disable().
[1] https://syzkaller.appspot.com/bug?extid=1f1fbecb9413cdbfbef8
So this approach avoids the buffer nesting issue, but re-introduces the following warning:
[ 363.968103][T21257] DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt))
[ 363.968922][T21257] WARNING: CPU: 0 PID: 21257 at kernel/softirq.c:176 __local_bh_disable_ip+0x3d9/0x540
[ 363.969046][T21257] Modules linked in:
[ 363.969176][T21257] Call Trace:
[ 363.969181][T21257] <TASK>
[ 363.969186][T21257] ? __local_bh_disable_ip+0xa1/0x540
[ 363.969197][T21257] ? sock_map_delete_elem+0xa2/0x170
[ 363.969209][T21257] ? preempt_schedule_common+0x83/0xd0
[ 363.969252][T21257] ? rt_spin_unlock+0x161/0x200
[ 363.969269][T21257] sock_map_delete_elem+0xaf/0x170
[ 363.969280][T21257] bpf_prog_464bc2be3fc7c272+0x43/0x47
[ 363.969289][T21257] bpf_flow_dissect+0x22b/0x750
[ 363.969299][T21257] bpf_prog_test_run_flow_dissector+0x37c/0x5c0
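My understanding of the trace: on PREEMPT_RT, local_bh_disable() (reached here
from sock_map_delete_elem() taking a _bh spinlock) normally acquires the
per-CPU softirq_ctrl lock, which is a sleeping lock, so it cannot be entered
with preemption already disabled; in that case __local_bh_disable_ip() instead
asserts that BH is already disabled on this CPU, which is the
DEBUG_LOCKS_WARN_ON above. A rough sketch of the ordering (illustrative only,
not actual kernel code):

	preempt_disable();      /* approach 1, around bpf_prog_run_pin_on_cpu() */
	    /* BPF prog calls sock_map_delete_elem(), which disables BH */
	    local_bh_disable(); /* on RT this wants the per-CPU sleeping
	                         * softirq_ctrl lock; with preemption disabled
	                         * it trips
	                         * DEBUG_LOCKS_WARN_ON(softirq_ctrl.cnt) */
	    ...
	    local_bh_enable();
	preempt_enable();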
2. preempt_disable() inside bpf_try_get_buffers() and bpf_put_buffers()
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 8eb117c52817..bc8630833a94 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -777,12 +777,14 @@ int bpf_try_get_buffers(struct bpf_bprintf_buffers **bufs)
 {
 	int nest_level;
 
+	preempt_disable();
 	nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
 	if (WARN_ON_ONCE(nest_level > MAX_BPRINTF_NEST_LEVEL)) {
 		this_cpu_dec(bpf_bprintf_nest_level);
 		return -EBUSY;
 	}
 	*bufs = this_cpu_ptr(&bpf_bprintf_bufs[nest_level - 1]);
+	preempt_enable();
 
 	return 0;
 }
@@ -791,7 +793,10 @@ void bpf_put_buffers(void)
 {
 	if (WARN_ON_ONCE(this_cpu_read(bpf_bprintf_nest_level) == 0))
 		return;
+
+	preempt_disable();
 	this_cpu_dec(bpf_bprintf_nest_level);
+	preempt_enable();
 }
This *still* reproduces the original syz issue, so the protection needs to be
placed around the entire program run, not inside the helper itself as in the
experiment above.
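To spell out my understanding of why: the per-CPU buffer and nest level stay
claimed for the whole program run, not just for the helper call, so tasks can
still stack up on one CPU, e.g.:

	task A (CPU 0)                          task B (CPU 0)
	bpf_try_get_buffers()  -> nest_level 1
	... formatting into buffer 0 ...
	<preempted before bpf_put_buffers()>
	                                        bpf_try_get_buffers()  -> nest_level 2
	                                        <preempted before bpf_put_buffers()>

A few more of these on the same CPU and the next caller sees
nest_level > MAX_BPRINTF_NEST_LEVEL and hits the WARN_ON_ONCE again.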
3. Using a per-CPU local_lock
Finally, I tested with a per-CPU local_lock around bpf_prog_run_pin_on_cpu():
+struct bpf_cpu_lock {
+	local_lock_t lock;
+};
+
+static DEFINE_PER_CPU(struct bpf_cpu_lock, bpf_cpu_lock) = {
+	.lock = INIT_LOCAL_LOCK(lock),
+};

@@ -1021,7 +1030,9 @@ u32 bpf_flow_dissect(struct bpf_prog *prog, struct bpf_flow_dissector *ctx,
 			     (int)FLOW_DISSECTOR_F_STOP_AT_ENCAP);
 	flow_keys->flags = flags;
 
+	local_lock(&bpf_cpu_lock.lock);
 	result = bpf_prog_run_pin_on_cpu(prog, ctx);
+	local_unlock(&bpf_cpu_lock.lock);
This approach avoids the warning on both RT and non-RT builds when tested with
the syz reproducer. The intention of introducing the per-CPU local_lock is to
maintain consistent per-CPU execution semantics between RT and non-RT kernels.
On non-RT builds, local_lock maps to preempt_disable()/enable(),
which provides the same semantics as before.
On RT builds, it maps to an RT-safe per-CPU spinlock, avoiding the
softirq_ctrl.cnt issue.
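For reference, this matches my reading of include/linux/local_lock_internal.h
(paraphrased from memory, so the details may differ slightly between kernel
versions):

	/* !CONFIG_PREEMPT_RT: essentially preempt_disable() plus lockdep annotation */
	#define __local_lock(lock)				\
		do {						\
			preempt_disable();			\
			local_lock_acquire(this_cpu_ptr(lock));	\
		} while (0)

	/* CONFIG_PREEMPT_RT: local_lock_t is a spinlock_t taken with migration disabled */
	#define __local_lock(__lock)				\
		do {						\
			migrate_disable();			\
			spin_lock(this_cpu_ptr((__lock)));	\
		} while (0)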
Let me know if you’d like me to run some more experiments on this.