[v5.15] INFO: rcu detected stall in kernfs_fop_open

syzbot

Jun 6, 2023, 6:25:04 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d7af3e5ba454 Linux 5.15.115
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1405bfd1280000
kernel config: https://syzkaller.appspot.com/x/.config?x=1b527a384742ac24
dashboard link: https://syzkaller.appspot.com/bug?extid=da2a7936d785c1b78db8
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/05a22ee14f2f/disk-d7af3e5b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b062fea2e542/vmlinux-d7af3e5b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/14b4cea9b6c8/bzImage-d7af3e5b.xz
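For anyone triaging, the symbol+offset frames in the trace below can be resolved to file:line against the vmlinux asset; a minimal sketch, assuming an x86_64 host and a linux-5.15.y checkout for scripts/faddr2line:

  # decompress the provided vmlinux (filename taken from the asset URL above)
  $ xz -d vmlinux-d7af3e5b.xz
  # resolve one frame from the stall trace to a source location
  $ ./scripts/faddr2line vmlinux-d7af3e5b 'kernfs_fop_open+0x3b5/0xbc0'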

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+da2a79...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P24397/1:b..l
(detected by 0, t=10502 jiffies, g=162485, q=122)
task:kworker/u4:17 state:R running task stack:22392 pid:24397 ppid: 2 flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
preempt_schedule_irq+0xf7/0x1c0 kernel/sched/core.c:6776
irqentry_exit+0x53/0x80 kernel/entry/common.c:426
asm_sysvec_reschedule_ipi+0x16/0x20 arch/x86/include/asm/idtentry.h:643
RIP: 0010:lock_acquire+0x252/0x4f0 kernel/locking/lockdep.c:5626
Code: 2b 00 74 08 4c 89 f7 e8 4c e4 66 00 f6 44 24 61 02 0f 85 84 01 00 00 41 f7 c7 00 02 00 00 74 01 fb 48 c7 44 24 40 0e 36 e0 45 <4b> c7 44 25 00 00 00 00 00 43 c7 44 25 09 00 00 00 00 43 c7 44 25
RSP: 0018:ffffc90003f0f8e0 EFLAGS: 00000206
RAX: 0000000000000001 RBX: 1ffff920007e1f28 RCX: 1ffff920007e1ec8
RDX: dffffc0000000000 RSI: ffffffff8a8b0f00 RDI: ffffffff8ad85f80
RBP: ffffc90003f0fa40 R08: dffffc0000000000 R09: fffffbfff1f79221
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff920007e1f24
R13: dffffc0000000000 R14: ffffc90003f0f940 R15: 0000000000000246
rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:269
rcu_read_lock include/linux/rcupdate.h:696 [inline]
inet_twsk_purge+0x11e/0xa20 net/ipv4/inet_timewait_sock.c:268
ops_exit_list net/core/net_namespace.c:174 [inline]
cleanup_net+0x763/0xb60 net/core/net_namespace.c:596
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2307
worker_thread+0xaca/0x1280 kernel/workqueue.c:2454
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
rcu: rcu_preempt kthread timer wakeup didn't happen for 10498 jiffies! g162485 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
rcu: Possible timer handling issue on cpu=1 timer-softirq=98969
rcu: rcu_preempt kthread starved for 10499 jiffies! g162485 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:I stack:27000 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
schedule+0x11b/0x1f0 kernel/sched/core.c:6455
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1884
rcu_gp_fqs_loop+0x2af/0xf70 kernel/rcu/tree.c:1959
rcu_gp_kthread+0xa4/0x360 kernel/rcu/tree.c:2132
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 2960 Comm: udevd Not tainted 5.15.115-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
RIP: 0010:native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
RIP: 0010:arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
RIP: 0010:kvm_wait+0x1b4/0x200 arch/x86/kernel/kvm.c:918
Code: e0 48 c1 e8 03 42 0f b6 04 28 84 c0 75 42 45 0f b6 34 24 e8 fe 96 4e 00 44 3a 74 24 1c 75 10 66 90 0f 00 2d 3e 84 50 09 fb f4 <e9> c8 fe ff ff fb e9 c2 fe ff ff 44 89 e1 80 e1 07 38 c1 0f 8c 54
RSP: 0018:ffffc90000dd0840 EFLAGS: 00000246
RAX: 7520a8be48888a00 RBX: 1ffff920001ba10c RCX: ffffffff8162db08
RDX: dffffc0000000000 RSI: ffffffff8a8afc60 RDI: ffffffff8ad85f80
RBP: ffffc90000dd0910 R08: dffffc0000000000 R09: fffffbfff1f79239
R10: 0000000000000000 R11: dffffc0000000001 R12: ffff88801dbdd8f0
R13: dffffc0000000000 R14: 0000000000000003 R15: ffffc90000dd0880
FS: 00007f4b3bc6ec80(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f2228153000 CR3: 0000000020b8e000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
pv_wait arch/x86/include/asm/paravirt.h:597 [inline]
pv_wait_head_or_lock kernel/locking/qspinlock_paravirt.h:470 [inline]
__pv_queued_spin_lock_slowpath+0x6bc/0xc40 kernel/locking/qspinlock.c:508
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:585 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:85 [inline]
do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
spin_lock include/linux/spinlock.h:363 [inline]
fq_pie_timer+0x87/0x260 net/sched/sch_fq_pie.c:380
call_timer_fn+0x16d/0x560 kernel/time/timer.c:1421
expire_timers kernel/time/timer.c:1466 [inline]
__run_timers+0x67c/0x890 kernel/time/timer.c:1737
run_timer_softirq+0x63/0xf0 kernel/time/timer.c:1750
__do_softirq+0x3b3/0x93a kernel/softirq.c:558
invoke_softirq kernel/softirq.c:432 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:636
irq_exit_rcu+0x5/0x20 kernel/softirq.c:648
sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1096
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
RIP: 0010:stack_depot_save+0x0/0x440 lib/stackdepot.c:261
Code: 48 8d 4c 18 18 49 89 0e 8b 44 18 0c eb 02 31 c0 5b 41 5e 41 5f c3 48 c7 c7 00 18 f2 8c 4c 89 fe e8 d5 5b 00 00 eb c3 0f 1f 00 <55> 41 57 41 56 41 55 41 54 53 48 83 ec 28 65 48 8b 04 25 28 00 00
RSP: 0018:ffffc90002c3f0d8 EFLAGS: 00000246
RAX: 0000000000000010 RBX: ffff88807d57a340 RCX: 7520a8be48888a00
RDX: 0000000000002800 RSI: 0000000000000010 RDI: ffffc90002c3f120
RBP: ffffc90002c3f208 R08: 000000000000000f R09: ffffc90002c3f050
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 1ffff92000587e20 R14: ffffc90002c3f120 R15: 1ffff1100faaf468
save_stack+0x104/0x1e0 mm/page_owner.c:120
__reset_page_owner+0x52/0x180 mm/page_owner.c:140
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1340 [inline]
free_pcp_prepare mm/page_alloc.c:1391 [inline]
free_unref_page_prepare+0xc34/0xcf0 mm/page_alloc.c:3317
free_unref_page+0x95/0x2d0 mm/page_alloc.c:3396
free_slab mm/slub.c:2015 [inline]
discard_slab mm/slub.c:2021 [inline]
__unfreeze_partials+0x1b7/0x210 mm/slub.c:2507
put_cpu_partial+0x132/0x1a0 mm/slub.c:2587
do_slab_free mm/slub.c:3487 [inline]
___cache_free+0xe3/0x100 mm/slub.c:3506
qlist_free_all+0x36/0x90 mm/kasan/quarantine.c:176
kasan_quarantine_reduce+0x162/0x180 mm/kasan/quarantine.c:283
__kasan_slab_alloc+0x2f/0xc0 mm/kasan/common.c:444
kasan_slab_alloc include/linux/kasan.h:254 [inline]
slab_post_alloc_hook+0x53/0x380 mm/slab.h:519
slab_alloc_node mm/slub.c:3220 [inline]
slab_alloc mm/slub.c:3228 [inline]
kmem_cache_alloc_trace+0xfb/0x290 mm/slub.c:3245
kmalloc include/linux/slab.h:591 [inline]
kzalloc include/linux/slab.h:721 [inline]
kernfs_fop_open+0x3b5/0xbc0 fs/kernfs/file.c:628
do_dentry_open+0x807/0xfb0 fs/open.c:826
do_open fs/namei.c:3538 [inline]
path_openat+0x2702/0x2f20 fs/namei.c:3672
do_filp_open+0x21c/0x460 fs/namei.c:3699
do_sys_openat2+0x13b/0x500 fs/open.c:1211
do_sys_open fs/open.c:1227 [inline]
__do_sys_openat fs/open.c:1243 [inline]
__se_sys_openat fs/open.c:1238 [inline]
__x64_sys_openat+0x243/0x290 fs/open.c:1238
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f4b3bd999a4
Code: 24 20 48 8d 44 24 30 48 89 44 24 28 64 8b 04 25 18 00 00 00 85 c0 75 2c 44 89 e2 48 89 ee bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 76 60 48 8b 15 55 a4 0d 00 f7 d8 64 89 02 48 83
RSP: 002b:00007ffe9c2453e0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f4b3bd999a4
RDX: 0000000000080241 RSI: 00007ffe9c245818 RDI: 00000000ffffff9c
RBP: 00007ffe9c245818 R08: 0000000000000004 R09: 0000000000000001
R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
R13: 000055c0a9d8372e R14: 0000000000000001 R15: 0000000000000000
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Sep 14, 2023, 6:24:47 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while; there is no reproducer and no recent activity.