[v6.1] INFO: rcu detected stall in generic_file_write_iter

syzbot
Apr 21, 2024, 12:23:20 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 6741e066ec76 Linux 6.1.87
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13e98e73180000
kernel config: https://syzkaller.appspot.com/x/.config?x=3fc2f61bd0ae457
dashboard link: https://syzkaller.appspot.com/bug?extid=879b1eed75b2b0b43b9b
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/b606a22ddf4b/disk-6741e066.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/e31c21737449/vmlinux-6741e066.xz
kernel image: https://storage.googleapis.com/syzbot-assets/ee0cb8c049e9/bzImage-6741e066.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+879b1e...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 1-...!: (1 ticks this GP) idle=d5a4/1/0x4000000000000000 softirq=19030/19030 fqs=3
(detected by 0, t=10503 jiffies, g=23233, q=207 ncpus=2)
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 6324 Comm: syz-executor.4 Not tainted 6.1.87-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:task_irq_context kernel/locking/lockdep.c:4558 [inline]
RIP: 0010:__lock_acquire+0x3d5/0x1f80 kernel/locking/lockdep.c:4987
Code: 89 5c 24 78 48 89 03 65 8b 05 0b 08 98 7e 31 db 85 c0 0f 95 c3 01 db 49 8d ad c4 0a 00 00 48 89 e8 48 c1 e8 03 48 89 44 24 58 <0f> b6 04 10 84 c0 0f 85 1d 13 00 00 31 c0 48 89 6c 24 10 83 7d 00
RSP: 0018:ffffc900001e09e0 EFLAGS: 00000803
RAX: 1ffff11002b7c510 RBX: 0000000000000002 RCX: ffff888015be1dc0
RDX: dffffc0000000000 RSI: ffff888015be28a0 RDI: ffffffff91ea85e0
RBP: ffff888015be2884 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000000
R13: ffff888015be1dc0 R14: ffffffff91ea85e0 R15: 0000000000000000
FS: 00007f9d6c30b6c0(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b31b26000 CR3: 0000000018e59000 CR4: 00000000003526e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
debug_object_activate+0x68/0x4e0 lib/debugobjects.c:697
debug_hrtimer_activate kernel/time/hrtimer.c:420 [inline]
debug_activate kernel/time/hrtimer.c:475 [inline]
enqueue_hrtimer+0x30/0x410 kernel/time/hrtimer.c:1084
__run_hrtimer kernel/time/hrtimer.c:1703 [inline]
__hrtimer_run_queues+0x728/0xe50 kernel/time/hrtimer.c:1750
hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1812
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
__sysvec_apic_timer_interrupt+0x156/0x580 arch/x86/kernel/apic/apic.c:1112
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1106
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0xd4/0x130 kernel/locking/spinlock.c:194
Code: 9c 8f 44 24 20 42 80 3c 23 00 74 08 4c 89 f7 e8 f2 3b 4d f7 f6 44 24 21 02 75 4e 41 f7 c7 00 02 00 00 74 01 fb bf 01 00 00 00 <e8> d7 88 c9 f6 65 8b 05 78 a4 6d 75 85 c0 74 3f 48 c7 04 24 0e 36
RSP: 0018:ffffc90005606ea0 EFLAGS: 00000206
RAX: 82923ecddf9f8d00 RBX: 1ffff92000ac0dd8 RCX: ffffffff816ad11a
RDX: dffffc0000000000 RSI: ffffffff8aec01c0 RDI: 0000000000000001
RBP: ffffc90005606f30 R08: dffffc0000000000 R09: fffffbfff2093445
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 1ffff92000ac0dd4 R14: ffffc90005606ec0 R15: 0000000000000246
spin_unlock_irqrestore include/linux/spinlock.h:406 [inline]
rmqueue_bulk mm/page_alloc.c:3146 [inline]
__rmqueue_pcplist+0x2023/0x2310 mm/page_alloc.c:3749
rmqueue_pcplist mm/page_alloc.c:3791 [inline]
rmqueue mm/page_alloc.c:3834 [inline]
get_page_from_freelist+0x86c/0x3320 mm/page_alloc.c:4276
__alloc_pages+0x28d/0x770 mm/page_alloc.c:5547
__folio_alloc+0xf/0x30 mm/page_alloc.c:5579
vma_alloc_folio+0x486/0x990 mm/mempolicy.c:2243
shmem_alloc_folio mm/shmem.c:1589 [inline]
shmem_alloc_and_acct_folio+0x5a8/0xd50 mm/shmem.c:1613
shmem_get_folio_gfp+0x13f0/0x3470 mm/shmem.c:1941
shmem_get_folio mm/shmem.c:2072 [inline]
shmem_write_begin+0x16e/0x4e0 mm/shmem.c:2559
generic_perform_write+0x2fc/0x5e0 mm/filemap.c:3817
__generic_file_write_iter+0x176/0x400 mm/filemap.c:3945
generic_file_write_iter+0xab/0x310 mm/filemap.c:3977
call_write_iter include/linux/fs.h:2265 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x7ae/0xba0 fs/read_write.c:584
ksys_write+0x19c/0x2c0 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f9d6b67cbef
Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 b9 80 02 00 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 0c 81 02 00 48
RSP: 002b:00007f9d6c30ae80 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000001000000 RCX: 00007f9d6b67cbef
RDX: 0000000001000000 RSI: 00007f9d61df7000 RDI: 0000000000000004
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000005597
R10: 0000000000000002 R11: 0000000000000293 R12: 0000000000000004
R13: 00007f9d6c30af80 R14: 00007f9d6c30af40 R15: 00007f9d61df7000
</TASK>
rcu: rcu_preempt kthread starved for 10490 jiffies! g23233 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27032 pid:16 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1965
rcu_gp_fqs_loop+0x2d2/0x1150 kernel/rcu/tree.c:1706
rcu_gp_kthread+0xa3/0x3b0 kernel/rcu/tree.c:1905
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 0 PID: 6318 Comm: syz-executor.0 Not tainted 6.1.87-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:__sanitizer_cov_trace_pc+0x58/0x60 kernel/kcov.c:225
Code: f8 15 00 00 83 fa 02 75 21 48 8b 91 00 16 00 00 48 8b 32 48 8d 7e 01 8b 89 fc 15 00 00 48 39 cf 73 08 48 89 3a 48 89 44 f2 08 <c3> 0f 1f 80 00 00 00 00 4c 8b 04 24 65 48 8b 15 74 df 77 7e 65 8b
RSP: 0018:ffffc900036cf618 EFLAGS: 00000293
RAX: ffffffff817f05ca RBX: 1ffff920006d9ee1 RCX: ffff8880475b3b80
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: ffffc900036cf990 R08: ffffffff817f05a4 R09: fffff520006d9e8d
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 0000000000000001 R14: ffffc900036cf708 R15: 0000000000000000
FS: 0000555555a30480(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b31c22000 CR3: 0000000078339000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
</IRQ>
<TASK>
csd_lock_wait kernel/smp.c:424 [inline]
smp_call_function_single+0x4ba/0x2680 kernel/smp.c:787
loaded_vmcs_clear arch/x86/kvm/vmx/vmx.c:750 [inline]
vmx_vcpu_load_vmcs+0x12f/0x7d0 arch/x86/kvm/vmx/vmx.c:1354
vmx_vcpu_load+0x19/0x80 arch/x86/kvm/vmx/vmx.c:1421
kvm_arch_vcpu_load+0x19c/0x7d0 arch/x86/kvm/x86.c:4739
vcpu_load+0x4e/0x80 arch/x86/kvm/../../../virt/kvm/kvm_main.c:226
kvm_unload_vcpu_mmu arch/x86/kvm/x86.c:12528 [inline]
kvm_unload_vcpu_mmus arch/x86/kvm/x86.c:12540 [inline]
kvm_arch_destroy_vm+0x1a3/0x430 arch/x86/kvm/x86.c:12647
kvm_destroy_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:1336 [inline]
kvm_put_kvm+0xcfc/0x18a0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:1370
kvm_vcpu_release+0x53/0x60 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3856
__fput+0x3b7/0x890 fs/file_table.c:320
task_work_run+0x246/0x300 kernel/task_work.c:179
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xde/0x100 kernel/entry/common.c:177
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:292 [inline]
syscall_exit_to_user_mode+0x60/0x270 kernel/entry/common.c:303
do_syscall_64+0x47/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f3a45a7cd9a
Code: 48 3d 00 f0 ff ff 77 48 c3 0f 1f 80 00 00 00 00 48 83 ec 18 89 7c 24 0c e8 03 7f 02 00 8b 7c 24 0c 89 c2 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 36 89 d7 89 44 24 0c e8 63 7f 02 00 8b 44 24
RSP: 002b:00007ffd25b304b0 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 0000000000000007 RCX: 00007f3a45a7cd9a
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000006
RBP: 00007f3a45bad980 R08: 00007f3a45a00000 R09: 0000000000000001
R10: 0000000000000001 R11: 0000000000000293 R12: 0000000000026869
R13: 0000000000026837 R14: 00007ffd25b30670 R15: 00007f3a45a34cb0
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup