Hello,
syzbot found the following issue on:
HEAD commit: 29e53a5b1c4f Linux 5.15.194
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1217a1e2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=e1bb6d24ef2164eb
dashboard link: https://syzkaller.appspot.com/bug?extid=3f46bf226b6ddefd5156
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/8cd4e3ff852c/disk-29e53a5b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/21e1f773622c/vmlinux-29e53a5b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/8f418c0a25cc/bzImage-29e53a5b.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+3f46bf...@syzkaller.appspotmail.com
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 1-...!: (1 GPs behind) idle=6e1/1/0x4000000000000000 softirq=44287/44288 fqs=53
(detected by 0, t=10502 jiffies, g=63729, q=312)
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 14055 Comm: syz.0.2002 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
RIP: 0010:__lock_acquire+0x5a03/0x7c60 kernel/locking/lockdep.c:5039
Code: 00 00 eb 12 c7 44 24 68 00 00 00 00 49 b8 00 00 00 00 00 fc ff df 48 c7 84 24 20 01 00 00 0e 36 e0 45 48 8b 84 24 e8 00 00 00 <42> c7 04 00 00 00 00 00 4a c7 44 00 0b 00 00 00 00 42 c7 44 00 17
RSP: 0018:ffffc90000dd08c0 EFLAGS: 00000097
RAX: 1ffff920001ba13c RBX: ffff88807d356428 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8ffbd0c0
RBP: ffffc90000dd0b10 R08: dffffc0000000000 R09: fffffbfff1ff7a19
R10: fffffbfff1ff7a19 R11: 1ffffffff1ff7a18 R12: 9be2421a50ef4013
R13: ffff88807d355940 R14: ffff88807d356420 R15: ffff88807d356478
FS: 00007f9807b806c0(0000) GS:ffff8880b9100000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000002fdf1000 CR4: 00000000003506e0
DR0: ffffffffffffffff DR1: 00000000000001f8 DR2: 0000000000000083
DR3: ffffffffefffff15 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:312
rcu_read_lock include/linux/rcupdate.h:739 [inline]
advance_sched+0x6ca/0x940 net/sched/sch_taprio.c:769
__run_hrtimer kernel/time/hrtimer.c:1685 [inline]
__hrtimer_run_queues+0x53d/0xc40 kernel/time/hrtimer.c:1749
hrtimer_interrupt+0x3bb/0x8d0 kernel/time/hrtimer.c:1811
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097 [inline]
__sysvec_apic_timer_interrupt+0x137/0x4a0 arch/x86/kernel/apic/apic.c:1114
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0x9b/0xc0 arch/x86/kernel/apic/apic.c:1108
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0xa5/0x100 kernel/locking/spinlock.c:194
Code: 74 05 e8 ae 4b cb f7 48 c7 44 24 20 00 00 00 00 9c 8f 44 24 20 f6 44 24 21 02 75 4b f7 c3 00 02 00 00 74 01 fb bf 01 00 00 00 <e8> 36 d5 9e f7 65 8b 05 17 d3 4f 76 85 c0 74 3c 48 c7 04 24 0e 36
RSP: 0018:ffffc900035bf580 EFLAGS: 00000206
RAX: f6b7dcfe363bd400 RBX: 0000000000000a02 RCX: f6b7dcfe363bd400
RDX: dffffc0000000000 RSI: ffffffff8a0b1820 RDI: 0000000000000001
RBP: ffffc900035bf618 R08: dffffc0000000000 R09: fffffbfff1ff7a2e
R10: fffffbfff1ff7a2e R11: 1ffffffff1ff7a2d R12: dffffc0000000000
R13: 1ffff920006b7edb R14: ffff88807b1b0ec0 R15: 1ffff920006b7eb0
spin_unlock_irqrestore include/linux/spinlock.h:418 [inline]
prepare_to_wait_exclusive+0xc5/0x220 kernel/sched/wait.c:288
unix_wait_for_peer+0xf8/0x2e0 net/unix/af_unix.c:1306
unix_dgram_sendmsg+0x106f/0x1890 net/unix/af_unix.c:1910
sock_sendmsg_nosec net/socket.c:704 [inline]
__sock_sendmsg net/socket.c:716 [inline]
____sys_sendmsg+0x5a2/0x8c0 net/socket.c:2436
___sys_sendmsg+0x1f0/0x260 net/socket.c:2490
__sys_sendmmsg+0x27c/0x4a0 net/socket.c:2576
__do_sys_sendmmsg net/socket.c:2605 [inline]
__se_sys_sendmmsg net/socket.c:2602 [inline]
__x64_sys_sendmmsg+0x9c/0xb0 net/socket.c:2602
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f9809918ec9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f9807b80038 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 00007f9809b6ffa0 RCX: 00007f9809918ec9
RDX: 0000000000000651 RSI: 0000200000000000 RDI: 0000000000000004
RBP: 00007f980999bf91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f9809b70038 R14: 00007f9809b6ffa0 R15: 00007fff549f4188
</TASK>
rcu: rcu_preempt kthread starved for 10316 jiffies! g63729 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:28032 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_timeout+0x15c/0x280 kernel/time/timer.c:1914
rcu_gp_fqs_loop+0x29e/0x11b0 kernel/rcu/tree.c:1972
rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
NMI backtrace for cpu 0
CPU: 0 PID: 14073 Comm: syz.4.2007 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
<IRQ>
dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x397/0x3d0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
trigger_single_cpu_backtrace include/linux/nmi.h:166 [inline]
rcu_check_gp_kthread_starvation+0x1cd/0x250 kernel/rcu/tree_stall.h:487
print_other_cpu_stall+0x10c8/0x1220 kernel/rcu/tree_stall.h:592
check_cpu_stall kernel/rcu/tree_stall.h:745 [inline]
rcu_pending kernel/rcu/tree.c:3936 [inline]
rcu_sched_clock_irq+0x831/0x1110 kernel/rcu/tree.c:2619
update_process_times+0x193/0x200 kernel/time/timer.c:1818
tick_sched_handle kernel/time/tick-sched.c:254 [inline]
tick_sched_timer+0x37d/0x560 kernel/time/tick-sched.c:1473
__run_hrtimer kernel/time/hrtimer.c:1685 [inline]
__hrtimer_run_queues+0x4fe/0xc40 kernel/time/hrtimer.c:1749
hrtimer_interrupt+0x3bb/0x8d0 kernel/time/hrtimer.c:1811
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097 [inline]
__sysvec_apic_timer_interrupt+0x137/0x4a0 arch/x86/kernel/apic/apic.c:1114
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0x9b/0xc0 arch/x86/kernel/apic/apic.c:1108
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:csd_lock_wait kernel/smp.c:440 [inline]
RIP: 0010:smp_call_function_many_cond+0xb88/0xd70 kernel/smp.c:969
Code: b6 44 05 00 84 c0 0f 85 9d 00 00 00 41 8b 1c 24 89 de 83 e6 01 31 ff e8 66 49 0b 00 83 e3 01 75 07 e8 fc 45 0b 00 eb 43 f3 90 <48> b8 00 00 00 00 00 fc ff df 41 0f b6 44 05 00 84 c0 75 11 41 f7
RSP: 0018:ffffc90003a97a20 EFLAGS: 00000246
RAX: ffffffff816c847b RBX: 0000000000000001 RCX: 0000000000080000
RDX: ffffc9000db59000 RSI: 000000000007ffff RDI: 0000000000080000
RBP: ffffc90003a97b60 R08: dffffc0000000000 R09: fffffbfff1ff7a19
R10: fffffbfff1ff7a19 R11: 1ffffffff1ff7a18 R12: ffff8880b91405c8
R13: 1ffff110172280b9 R14: ffff8880b903b3c0 R15: 0000000000000001
on_each_cpu_cond_mask+0x3b/0x80 kernel/smp.c:1135
on_each_cpu include/linux/smp.h:71 [inline]
text_poke_sync arch/x86/kernel/alternative.c:1442 [inline]
text_poke_bp_batch+0x2a9/0x7c0 arch/x86/kernel/alternative.c:1642
text_poke_flush arch/x86/kernel/alternative.c:1833 [inline]
text_poke_finish+0x16/0x30 arch/x86/kernel/alternative.c:1840
arch_jump_label_transform_apply+0x13/0x20 arch/x86/kernel/jump_label.c:146
static_key_enable_cpuslocked+0x11f/0x240 kernel/jump_label.c:177
static_key_enable+0x16/0x20 kernel/jump_label.c:190
__sched_core_enable kernel/sched/core.c:307 [inline]
sched_core_get+0x7d/0x1d0 kernel/sched/core.c:331
sched_core_alloc_cookie+0x71/0xa0 kernel/sched/core_sched.c:21
sched_core_share_pid+0x2d6/0x710 kernel/sched/core_sched.c:178
__do_sys_prctl kernel/sys.c:2564 [inline]
__se_sys_prctl+0x15f/0xfd0 kernel/sys.c:2298
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f90d8d19ec9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f90d6f81038 EFLAGS: 00000246 ORIG_RAX: 000000000000009d
RAX: ffffffffffffffda RBX: 00007f90d8f70fa0 RCX: 00007f90d8d19ec9
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 000000000000003e
RBP: 00007f90d8d9cf91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000002 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f90d8f71038 R14: 00007f90d8f70fa0 R15: 00007ffc481c5978
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See: https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup