[v5.15] INFO: rcu detected stall in sys_symlinkat (4)


syzbot

Dec 31, 2025, 6:56:26 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 68efe5a6c16a Linux 5.15.197
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=169b4afc580000
kernel config: https://syzkaller.appspot.com/x/.config?x=7e6ed99963d6ee1d
dashboard link: https://syzkaller.appspot.com/bug?extid=ca21c0beb57345f1b8e5
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/998e4247f9d0/disk-68efe5a6.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/4ef4066c91ec/vmlinux-68efe5a6.xz
kernel image: https://storage.googleapis.com/syzbot-assets/922e7b640c48/bzImage-68efe5a6.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ca21c0...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
(detected by 1, t=10502 jiffies, g=35853, q=51)
rcu: All QSes seen, last rcu_preempt kthread activity 10502 (4294976113-4294965611), jiffies_till_next_fqs=1, root ->qsmask 0x0
rcu: rcu_preempt kthread starved for 10502 jiffies! g35853 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27904 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_timeout+0x15c/0x280 kernel/time/timer.c:1914
rcu_gp_fqs_loop+0x29e/0x11b0 kernel/rcu/tree.c:1972
rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
NMI backtrace for cpu 1
CPU: 1 PID: 10462 Comm: syz-executor Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<IRQ>
dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x397/0x3d0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
trigger_single_cpu_backtrace include/linux/nmi.h:166 [inline]
rcu_check_gp_kthread_starvation+0x1cd/0x250 kernel/rcu/tree_stall.h:487
print_other_cpu_stall+0x10c8/0x1220 kernel/rcu/tree_stall.h:592
check_cpu_stall kernel/rcu/tree_stall.h:745 [inline]
rcu_pending kernel/rcu/tree.c:3936 [inline]
rcu_sched_clock_irq+0x831/0x1110 kernel/rcu/tree.c:2619
update_process_times+0x193/0x200 kernel/time/timer.c:1818
tick_sched_handle kernel/time/tick-sched.c:254 [inline]
tick_sched_timer+0x37d/0x560 kernel/time/tick-sched.c:1473
__run_hrtimer kernel/time/hrtimer.c:1685 [inline]
__hrtimer_run_queues+0x4fe/0xc40 kernel/time/hrtimer.c:1749
hrtimer_interrupt+0x3bb/0x8d0 kernel/time/hrtimer.c:1811
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097 [inline]
__sysvec_apic_timer_interrupt+0x137/0x4a0 arch/x86/kernel/apic/apic.c:1114
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0x9b/0xc0 arch/x86/kernel/apic/apic.c:1108
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
RIP: 0010:arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
RIP: 0010:kvm_wait+0x141/0x190 arch/x86/kernel/kvm.c:918
Code: 89 df 48 89 f8 48 c1 e8 03 42 0f b6 04 30 84 c0 75 47 0f b6 1f e8 1f e2 49 00 44 38 e3 75 10 66 90 0f 00 2d c1 68 d4 08 fb f4 <e9> 33 ff ff ff fb e9 2d ff ff ff e8 4f df 6d 08 89 f9 80 e1 07 38
RSP: 0018:ffffc9000353e720 EFLAGS: 00000246
RAX: 861c880f76958000 RBX: 0000000000000003 RCX: 861c880f76958000
RDX: dffffc0000000000 RSI: ffffffff8a0b1be0 RDI: ffffffff8a59e800
RBP: ffffc9000353e7d0 R08: dffffc0000000000 R09: fffffbfff1ff5435
R10: fffffbfff1ff5435 R11: 1ffffffff1ff5434 R12: 0000000000000003
R13: ffff8880b913b154 R14: dffffc0000000000 R15: 1ffff920006a7ce4
pv_wait arch/x86/include/asm/paravirt.h:597 [inline]
pv_wait_head_or_lock kernel/locking/qspinlock_paravirt.h:470 [inline]
__pv_queued_spin_lock_slowpath+0x60f/0x9c0 kernel/locking/qspinlock.c:508
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:585 [inline]
queued_spin_lock_slowpath+0x43/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:85 [inline]
do_raw_spin_lock+0x217/0x280 kernel/locking/spinlock_debug.c:115
spin_lock include/linux/spinlock.h:364 [inline]
get_swap_pages+0x1cc/0xe20 mm/swapfile.c:1069
refill_swap_slots_cache mm/swap_slots.c:265 [inline]
get_swap_page+0x4c4/0x740 mm/swap_slots.c:335
shmem_writepage+0x8d8/0x13d0 mm/shmem.c:1419
pageout mm/vmscan.c:1082 [inline]
shrink_page_list+0x437a/0x6b60 mm/vmscan.c:1669
shrink_inactive_list mm/vmscan.c:2223 [inline]
shrink_list mm/vmscan.c:2450 [inline]
shrink_lruvec+0x1170/0x24a0 mm/vmscan.c:2769
shrink_node_memcgs mm/vmscan.c:2956 [inline]
shrink_node+0x10a7/0x2610 mm/vmscan.c:3079
shrink_zones mm/vmscan.c:3285 [inline]
do_try_to_free_pages+0x5da/0x15c0 mm/vmscan.c:3340
try_to_free_mem_cgroup_pages+0x2f6/0x780 mm/vmscan.c:3654
try_charge_memcg+0x3de/0x14a0 mm/memcontrol.c:2654
obj_cgroup_charge_pages+0x87/0x190 mm/memcontrol.c:3018
obj_cgroup_charge+0x1a0/0x310 mm/memcontrol.c:3299
memcg_slab_pre_alloc_hook mm/slab.h:287 [inline]
slab_pre_alloc_hook+0x9f/0xc0 mm/slab.h:497
slab_alloc_node mm/slub.c:3139 [inline]
slab_alloc mm/slub.c:3233 [inline]
kmem_cache_alloc+0x3d/0x290 mm/slub.c:3238
__d_alloc+0x2a/0x6f0 fs/dcache.c:1749
d_alloc+0x4a/0x250 fs/dcache.c:1828
lookup_one_qstr_excl+0xc6/0x240 fs/namei.c:1567
filename_create+0x21e/0x450 fs/namei.c:3844
do_symlinkat+0xb3/0x6c0 fs/namei.c:4456
__do_sys_symlinkat fs/namei.c:4483 [inline]
__se_sys_symlinkat fs/namei.c:4480 [inline]
__x64_sys_symlinkat+0x95/0xa0 fs/namei.c:4480
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f9f513e0cc7
Code: 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 0a 01 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe9ad99ef8 EFLAGS: 00000206 ORIG_RAX: 000000000000010a
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f9f513e0cc7
RDX: 00007f9f514678c3 RSI: 00000000ffffff9c RDI: 00007f9f51466390
RBP: 00007ffe9ad99f3c R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000206 R12: 00000000000000d7
R13: 00000000000927c0 R14: 0000000000044f31 R15: 00007ffe9ad99f90
</TASK>
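
Reading the two traces together: the symlinkat() call allocates a dentry (filename_create() -> d_alloc() -> kmem_cache_alloc()), the memcg charge for that slab allocation hits its limit and falls into direct reclaim, reclaim pushes shmem pages out to swap via get_swap_page(), and get_swap_pages() then contends on a swap spinlock. In the paravirt qspinlock slowpath the vCPU is caught halted in kvm_wait() waiting for the lock, so CPU 1 makes no forward progress and the rcu_preempt kthread (runnable but starved, per the first trace) never gets CPU time, stalling the grace period.

For reference, the userspace side of the trace is an ordinary symlinkat() call; RSI in the register dump is 0xffffff9c, i.e. AT_FDCWD (-100), so the link path was resolved relative to the caller's working directory. A minimal sketch of a call with that shape (the path names are illustrative, not recovered from the executor, and this is not a reproducer):

#include <fcntl.h>   /* AT_FDCWD */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* symlinkat(target, newdirfd, linkpath): with AT_FDCWD the link
	 * path is resolved relative to the CWD. The dentry allocation in
	 * filename_create() is where the kernel trace above enters the
	 * slab allocator and, under memcg pressure, direct reclaim. */
	if (symlinkat("some-target", AT_FDCWD, "some-link") != 0)
		perror("symlinkat");
	return 0;
}

The stall itself needs the surrounding memcg/swap pressure that the (so far missing) reproducer would set up; the call above only shows the syscall entry point named in the report subject.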


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup