Arnaud Lecomte
Syzkaller reported a KASAN slab-out-of-bounds write in __bpf_get_stack()
during stack trace copying.
The issue occurs when the callchain entry (stored as a per-cpu variable)
grows between collection and buffer copy, causing trace->nr to exceed
max_depth, the depth the destination buffer was initially sized for.
The callchain collection intentionally avoids locking for performance
reasons, but this creates a window where concurrent modifications can
occur during the copy operation.
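For illustration only (not part of the patch), a standalone userspace
sketch of the arithmetic in that window; the sizes are made up and the
concurrent growth of the per-cpu entry is simulated by bumping nr:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t elem_size = sizeof(uint64_t);
	uint32_t size = 64;                    /* caller's buffer, in bytes */
	uint32_t max_depth = size / elem_size; /* depth the buffer was sized for */
	uint32_t nr = max_depth;               /* trace->nr right after collection */

	nr += 4;  /* simulated concurrent growth during the copy window */

	uint32_t copy_len = nr * elem_size;    /* what the unclamped path copies */
	printf("copy_len=%u buffer=%u -> %s\n", copy_len, size,
	       copy_len > size ? "out-of-bounds write" : "ok");
	return 0;
}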
To prevent this, we clamp the trace length to max_depth, which was
initially calculated from the buffer size and the size of a single
trace entry.
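Extending the same made-up numbers, the clamp restores the invariant
that copy_len never exceeds the sized buffer (again just a sketch, not
the kernel code path itself):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t elem_size = sizeof(uint64_t), size = 64;
	uint32_t max_depth = size / elem_size;  /* 8 */
	uint32_t nr = max_depth + 4;            /* grew past the sized depth */

	/* min(trace->nr, max_depth), as in the patch */
	uint32_t trace_nr = nr < max_depth ? nr : max_depth;

	printf("copy_len=%u <= buffer=%u\n", trace_nr * elem_size, size);
	return 0;
}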
Reported-by: syzbot+d1b7fa...@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/691231dc.a70a022...@google.com/T/
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Tested-by: syzbot+d1b7fa...@syzkaller.appspotmail.com
Cc: Brahmajit Das <lis...@listout.xyz>
Signed-off-by: Arnaud Lecomte <con...@arnaud-lcm.com>
---
Thanks to Brahmajit Das for the initial fix he proposed; I tweaked it
with a corrected justification and, in my opinion, a better
implementation.
---
kernel/bpf/stackmap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index da3d328f5c15..e56752a9a891 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -465,7 +465,6 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 
 	if (trace_in) {
 		trace = trace_in;
-		trace->nr = min_t(u32, trace->nr, max_depth);
 	} else if (kernel && task) {
 		trace = get_callchain_entry_for_task(task, max_depth);
 	} else {
@@ -479,7 +478,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 		goto err_fault;
 	}
 
-	trace_nr = trace->nr - skip;
+	trace_nr = min(trace->nr, max_depth);
+	trace_nr = trace_nr - skip;
 	copy_len = trace_nr * elem_size;
 	ips = trace->ip + skip;
 
--
2.43.0