[v6.1] INFO: rcu detected stall in kernfs_fop_write_iter


syzbot

Jul 26, 2023, 6:12:45 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 5302e81aa209 Linux 6.1.41
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=151a0216a80000
kernel config: https://syzkaller.appspot.com/x/.config?x=773f0dbbc4999ad5
dashboard link: https://syzkaller.appspot.com/bug?extid=d7201ee266c133a641d6
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/0d4c5b598789/disk-5302e81a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/e6fc722fbbe4/vmlinux-5302e81a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a18ebb82b176/bzImage-5302e81a.xz
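
To inspect the traces below locally, a minimal sketch of how these assets are typically used (assuming a checkout of the linux-6.1.y tree at the HEAD commit above, and that the report text is saved as report.txt, a name chosen here for illustration; the asset file name matches the vmlinux link above):

    # decompress the kernel with debug info
    xz -d vmlinux-5302e81a.xz
    # annotate the stack traces with file/line information using the kernel's helper script
    ./scripts/decode_stacktrace.sh ./vmlinux-5302e81a < report.txt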

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+d7201e...@syzkaller.appspotmail.com
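
For reference, the Reported-by line is a standard commit trailer placed at the end of the fix's commit message, roughly as sketched below (the title, description, and Signed-off-by line are placeholders; only the Reported-by address comes from this report):

    <subsystem>: <one-line summary of the fix>

    <description of the change>

    Reported-by: syzbot+d7201e...@syzkaller.appspotmail.com
    Signed-off-by: Developer Name <developer@example.org>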

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
(detected by 0, t=10502 jiffies, g=126333, q=73 ncpus=2)
rcu: All QSes seen, last rcu_preempt kthread activity 10502 (4295148608-4295138106), jiffies_till_next_fqs=1, root ->qsmask 0x0
rcu: rcu_preempt kthread starved for 10502 jiffies! g126333 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:25880 pid:16 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5241 [inline]
__schedule+0x132c/0x4330 kernel/sched/core.c:6554
schedule+0xbf/0x180 kernel/sched/core.c:6630
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1935
rcu_gp_fqs_loop+0x2c2/0x1010 kernel/rcu/tree.c:1661
rcu_gp_kthread+0xa3/0x3a0 kernel/rcu/tree.c:1860
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 3000 Comm: udevd Not tainted 6.1.41-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2023
RIP: 0010:__lock_acquire+0xf9e/0x1f80
Code: 59 76 00 48 0f a3 1d 81 7a bf 0e 0f 83 bb 01 00 00 48 8d 04 5b 48 c1 e0 06 48 8d 98 20 c1 f8 8f 48 bf 00 00 00 00 00 fc ff df <e9> cd 01 00 00 e8 98 7d c7 02 31 f6 85 c0 0f 84 51 05 00 00 48 c7
RSP: 0018:ffffc900001e0a60 EFLAGS: 00000006
RAX: 00000000000102c0 RBX: ffffffff8ff9c3e0 RCX: ffffffff8169e797
RDX: 0000000000000000 RSI: 0000000000000008 RDI: dffffc0000000000
RBP: 07da8986a778f33b R08: dffffc0000000000 R09: fffffbfff2052c4a
R10: 0000000000000000 R11: dffffc0000000001 R12: ffff88807c738ad8
R13: ffff88807c738000 R14: 0000000000000001 R15: 1ffff1100f8e716a
FS: 00007f9badcb9c80(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020006ac0 CR3: 000000001e968000 CR4: 00000000003506e0
DR0: 00000000ffff070c DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
spin_lock include/linux/spinlock.h:350 [inline]
advance_sched+0x47/0x940 net/sched/sch_taprio.c:700
__run_hrtimer kernel/time/hrtimer.c:1685 [inline]
__hrtimer_run_queues+0x5e5/0xe50 kernel/time/hrtimer.c:1749
hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1811
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
__sysvec_apic_timer_interrupt+0x156/0x580 arch/x86/kernel/apic/apic.c:1112
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1106
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:cmpxchg_double_slab+0x125/0x310 mm/slub.c:552
Code: 02 00 00 0f 84 8c 00 00 00 fb e9 86 00 00 00 41 8d 46 20 a8 0f 0f 85 60 01 00 00 4c 89 f9 4c 89 e2 4c 89 e8 f0 49 0f c7 4e 20 <b0> 01 74 67 eb 61 49 8b 46 08 a8 01 0f 85 73 01 00 00 0f 1f 44 00
RSP: 0018:ffffc900031bfa28 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ffff88807bb3eb40 RCX: 00000000800c000b
RDX: 00000000000c000c RSI: ffffea0001eecf80 RDI: ffff888141f4d3c0
RBP: 00000000800c000b R08: ffff88807bb3eb40 R09: 00000000800c000b
R10: 0000000000000000 R11: dffffc0000000001 R12: 00000000000c000c
R13: 0000000000000000 R14: ffffea0001eecf80 R15: 00000000800c000b
__slab_free+0x9c/0x280 mm/slub.c:3520
qlist_free_all+0x22/0x60 mm/kasan/quarantine.c:187
kasan_quarantine_reduce+0x162/0x180 mm/kasan/quarantine.c:294
__kasan_slab_alloc+0x1f/0x70 mm/kasan/common.c:305
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook+0x50/0x370 mm/slab.h:737
slab_alloc_node mm/slub.c:3398 [inline]
__kmem_cache_alloc_node+0x137/0x260 mm/slub.c:3437
__do_kmalloc_node mm/slab_common.c:954 [inline]
__kmalloc+0xa1/0x230 mm/slab_common.c:968
kmalloc include/linux/slab.h:558 [inline]
kernfs_fop_write_iter+0x157/0x4f0 fs/kernfs/file.c:307
call_write_iter include/linux/fs.h:2205 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x7ae/0xba0 fs/read_write.c:584
ksys_write+0x19c/0x2c0 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f9bad916bf2
Code: 89 c7 48 89 44 24 08 e8 7b 34 fa ff 48 8b 44 24 08 48 83 c4 28 c3 c3 64 8b 04 25 18 00 00 00 85 c0 75 20 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 76 6f 48 8b 15 07 a2 0d 00 f7 d8 64 89 02 48 83
RSP: 002b:00007ffd94754068 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000564d50f76fd0 RCX: 00007f9bad916bf2
RDX: 0000000000000007 RSI: 0000564d50f87210 RDI: 000000000000000c
RBP: 0000000000000007 R08: 0000564d50f87210 R09: 0000000000000020
R10: 000000000000010f R11: 0000000000000246 R12: 0000000000000007
R13: 0000564d50f87210 R14: 00007ffd94754448 R15: 0000000000000000
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Nov 3, 2023, 6:13:14 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, there is no reproducer, and there has been no further activity.