[v6.1] INFO: rcu detected stall in blk_mq_run_hw_queues


syzbot

Feb 7, 2024, 5:13:23 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: f1bb70486c9c Linux 6.1.77
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13cd37c4180000
kernel config: https://syzkaller.appspot.com/x/.config?x=39447811cb133e7e
dashboard link: https://syzkaller.appspot.com/bug?extid=2d7f51769877141a8efe
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f93cb7e9dad2/disk-f1bb7048.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/22703d1d86ee/vmlinux-f1bb7048.xz
kernel image: https://storage.googleapis.com/syzbot-assets/4129725af309/bzImage-f1bb7048.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+2d7f51...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 1-...!: (1 GPs behind) idle=cc5c/1/0x4000000000000000 softirq=292149/292150 fqs=1
(detected by 0, t=10506 jiffies, g=423749, q=24 ncpus=2)
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 120 Comm: kworker/1:1H Not tainted 6.1.77-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: kblockd blk_mq_requeue_work
RIP: 0010:arch_atomic64_read arch/x86/include/asm/atomic64_64.h:22 [inline]
RIP: 0010:atomic64_read include/linux/atomic/atomic-instrumented.h:647 [inline]
RIP: 0010:taprio_set_budget net/sched/sch_taprio.c:551 [inline]
RIP: 0010:advance_sched+0x618/0x970 net/sched/sch_taprio.c:745
Code: 00 4c 69 f0 e8 03 00 00 4c 8b 6c 24 48 49 8d 6d a0 48 89 ef be 08 00 00 00 e8 f4 0b 2d f9 48 89 e8 48 c1 e8 03 42 80 3c 20 00 <4c> 8b 64 24 50 74 08 48 89 ef e8 49 0a 2d f9 4c 89 f0 31 d2 48 f7
RSP: 0000:ffffc900001e0cc0 EFLAGS: 00000046
RAX: 1ffff110030dec5c RBX: ffff88803ddc5c20 RCX: ffffffff88b4ddfc
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffff8880186f62e0
RBP: ffff8880186f62e0 R08: dffffc0000000000 R09: ffffed10030dec5d
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: ffff8880186f6340 R14: 0000000fa0000000 R15: 17b96c59d0000000
FS: 0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f069966d000 CR3: 0000000033114000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000008976 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
__run_hrtimer kernel/time/hrtimer.c:1685 [inline]
__hrtimer_run_queues+0x5e5/0xe50 kernel/time/hrtimer.c:1749
hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1811
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
__sysvec_apic_timer_interrupt+0x156/0x580 arch/x86/kernel/apic/apic.c:1112
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1106
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
RIP: 0010:xas_find+0x0/0xaa0 lib/xarray.c:1240
Code: d6 f5 5a f7 e9 fe fa ff ff e8 bc 88 03 f7 48 c7 c7 20 ee 5c 8e 48 89 ee 48 89 da e8 3a f9 db f9 e9 7b fa ff ff 0f 1f 44 00 00 <55> 41 57 41 56 41 55 41 54 53 48 83 ec 40 48 89 74 24 38 49 89 fc
RSP: 0000:ffffc900025b7978 EFLAGS: 00000293
RAX: ffffffff8a875ae8 RBX: 0000000000000008 RCX: ffff888013f4d940
RDX: 0000000000000000 RSI: ffffffffffffffff RDI: ffffc900025b79e0
RBP: ffffc900025b7a90 R08: ffffffff8a875aba R09: fffffbfff2092245
R10: 0000000000000000 R11: dffffc0000000001 R12: ffffffffffffffff
R13: ffffc900025b79e0 R14: 0000000000000001 R15: 1ffff920004b6f3f
xa_find+0x263/0x420 lib/xarray.c:2024
blk_mq_run_hw_queues+0x1d8/0x360 block/blk-mq.c:2350
blk_mq_requeue_work+0x73d/0x780 block/blk-mq.c:1461
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
rcu: rcu_preempt kthread starved for 10500 jiffies! g423749 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:26680 pid:16 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1935
rcu_gp_fqs_loop+0x2d2/0x1120 kernel/rcu/tree.c:1706
rcu_gp_kthread+0xa3/0x3a0 kernel/rcu/tree.c:1905
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 0 PID: 14957 Comm: syz-executor.1 Not tainted 6.1.77-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
RIP: 0010:csd_lock_wait kernel/smp.c:424 [inline]
RIP: 0010:smp_call_function_many_cond+0x1fb0/0x3460 kernel/smp.c:998
Code: 2f 44 89 ee 83 e6 01 31 ff e8 ec 42 0b 00 41 83 e5 01 49 bd 00 00 00 00 00 fc ff df 75 0a e8 77 3f 0b 00 e9 1b ff ff ff f3 90 <42> 0f b6 04 2b 84 c0 75 14 41 f7 07 01 00 00 00 0f 84 fe fe ff ff
RSP: 0018:ffffc90003c7f3c0 EFLAGS: 00000246
RAX: ffffffff817f3bcb RBX: 1ffff11017328031 RCX: 0000000000040000
RDX: ffffc900038e1000 RSI: 000000000003ffff RDI: 0000000000040000
RBP: ffffc90003c7f7a0 R08: ffffffff817f3b94 R09: fffffbfff209224b
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000800000000
R13: dffffc0000000000 R14: 0000000000000001 R15: ffff8880b9940188
FS: 00007f06a3b656c0(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055555702b938 CR3: 0000000033114000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000008976 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
</IRQ>
<TASK>
on_each_cpu_cond_mask+0x3b/0x80 kernel/smp.c:1166
__flush_tlb_multi arch/x86/include/asm/paravirt.h:87 [inline]
flush_tlb_multi arch/x86/mm/tlb.c:924 [inline]
flush_tlb_mm_range+0x353/0x590 arch/x86/mm/tlb.c:1010
tlb_flush arch/x86/include/asm/tlb.h:20 [inline]
tlb_flush_mmu_tlbonly+0x1ab/0x410 include/asm-generic/tlb.h:430
tlb_flush_mmu+0x28/0x210 mm/mmu_gather.c:260
tlb_finish_mmu+0xce/0x1f0 mm/mmu_gather.c:361
unmap_region+0x29f/0x2f0 mm/mmap.c:2332
do_mas_align_munmap+0xec8/0x15f0 mm/mmap.c:2582
do_mas_munmap+0x246/0x2b0 mm/mmap.c:2640
__vm_munmap+0x268/0x370 mm/mmap.c:2917
__do_sys_munmap mm/mmap.c:2942 [inline]
__se_sys_munmap mm/mmap.c:2939 [inline]
__x64_sys_munmap+0x5c/0x70 mm/mmap.c:2939
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f06a2e7de37
Code: 00 00 00 48 c7 c2 b0 ff ff ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 b8 0b 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f06a3b64ea8 EFLAGS: 00000246 ORIG_RAX: 000000000000000b
RAX: ffffffffffffffda RBX: 0000000000080000 RCX: 00007f06a2e7de37
RDX: 0000000000000000 RSI: 0000000008400000 RDI: 00007f06995ff000
RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000005d5
R10: 00000000000003c8 R11: 0000000000000246 R12: 0000000000000003
R13: 00007f06a3b64f80 R14: 00007f06a3b64f40 R15: 00007f06995ff000
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup