[v6.1] INFO: rcu detected stall in pipe_write


syzbot

Jun 15, 2024, 9:17:29 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: ae9f2a70d69e Linux 6.1.93
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=115cbbe2980000
kernel config: https://syzkaller.appspot.com/x/.config?x=1d7ac9fef66f54aa
dashboard link: https://syzkaller.appspot.com/bug?extid=e2c2114a8d90f6bed906
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/b6a79d8f90ec/disk-ae9f2a70.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/f1b3e75b81f1/vmlinux-ae9f2a70.xz
kernel image: https://storage.googleapis.com/syzbot-assets/80fc2213e3e5/bzImage-ae9f2a70.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e2c211...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 1-...!: (1 GPs behind) idle=c66c/1/0x4000000000000000 softirq=39775/39776 fqs=0
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P12074/1:b..l
(detected by 0, t=10505 jiffies, g=52205, q=92 ncpus=2)
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 12076 Comm: syz-executor.3 Not tainted 6.1.93-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
RIP: 0010:lookup_chain_cache_add kernel/locking/lockdep.c:3738 [inline]
RIP: 0010:validate_chain+0x17e/0x5950 kernel/locking/lockdep.c:3793
Code: 8d 1c c5 a0 d1 35 90 48 89 d8 48 c1 e8 03 48 89 44 24 58 42 80 3c 20 00 74 08 48 89 df e8 7a f2 76 00 48 89 5c 24 28 48 8b 1b <48> 85 db 74 48 48 83 c3 f8 74 42 4c 8d 7b 18 4c 89 f8 48 c1 e8 03
RSP: 0018:ffffc900001e0720 EFLAGS: 00000046
RAX: 1ffffffff206c7f8 RBX: ffffffff904ad568 RCX: ffffffff816b2002
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff9049d220
RBP: ffffc900001e09d0 R08: dffffc0000000000 R09: fffffbfff2093a45
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: ffff888023138b78 R14: 2e4489edb596d9f6 R15: 1ffff1100462716f
FS: 00007ff27af3b6c0(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000201bb000 CR3: 0000000015b1e000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
debug_object_activate+0x68/0x4e0 lib/debugobjects.c:697
debug_hrtimer_activate kernel/time/hrtimer.c:420 [inline]
debug_activate kernel/time/hrtimer.c:475 [inline]
enqueue_hrtimer+0x30/0x410 kernel/time/hrtimer.c:1084
__run_hrtimer kernel/time/hrtimer.c:1703 [inline]
__hrtimer_run_queues+0x728/0xe50 kernel/time/hrtimer.c:1750
hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1812
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
__sysvec_apic_timer_interrupt+0x156/0x580 arch/x86/kernel/apic/apic.c:1112
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1106
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
RIP: 0010:__raw_spin_unlock_irq include/linux/spinlock_api_smp.h:160 [inline]
RIP: 0010:_raw_spin_unlock_irq+0x25/0x40 kernel/locking/spinlock.c:202
Code: b1 bc f5 ff 90 53 48 89 fb 48 83 c7 18 48 8b 74 24 08 e8 ee 20 d5 f6 48 89 df e8 56 5e d6 f6 e8 71 ea fb f6 fb bf 01 00 00 00 <e8> c6 f6 c8 f6 65 8b 05 f7 03 6d 75 85 c0 74 02 5b c3 e8 a4 27 6b
RSP: 0018:ffffc900038c6e10 EFLAGS: 00000282
RAX: 84350e99419d4400 RBX: ffff8880172b3008 RCX: ffffffff816ad3ca
RDX: dffffc0000000000 RSI: ffffffff8aec0240 RDI: 0000000000000001
RBP: ffffc900038c6f90 R08: dffffc0000000000 R09: fffffbfff2093a4b
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000cc0
R13: 1ffff92000718ddc R14: 0000000000000000 R15: 0000000000000201
spin_unlock_irq include/linux/spinlock.h:401 [inline]
add_to_swap_cache+0xd96/0x1da0 mm/swap_state.c:124
__read_swap_cache_async+0x58c/0xab0 mm/swap_state.c:490
swap_cluster_readahead+0x3b2/0x780 mm/swap_state.c:641
swapin_readahead+0x10d/0xa50 mm/swap_state.c:855
do_swap_page+0x3e1/0x3e00 mm/memory.c:3868
handle_pte_fault mm/memory.c:5017 [inline]
__handle_mm_fault mm/memory.c:5155 [inline]
handle_mm_fault+0x2051/0x5340 mm/memory.c:5276
do_user_addr_fault arch/x86/mm/fault.c:1340 [inline]
handle_page_fault arch/x86/mm/fault.c:1431 [inline]
exc_page_fault+0x26f/0x620 arch/x86/mm/fault.c:1487
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0010:copy_user_enhanced_fast_string+0xa/0x40 arch/x86/lib/copy_user_64.S:166
Code: ff c9 75 f2 89 d1 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 31 c0 0f 01 ca c3 8d 0c ca 89 ca eb 20 0f 01 cb 83 fa 40 72 38 89 d1 <f3> a4 31 c0 0f 01 ca c3 89 ca eb 0a 66 2e 0f 1f 84 00 00 00 00 00
RSP: 0018:ffffc900038c7938 EFLAGS: 00050206
RAX: ffffffff84364201 RBX: 0000000000001000 RCX: 0000000000001000
RDX: 0000000000001000 RSI: 00000000201bb000 RDI: ffff88801f45b000
RBP: ffffc900038c7a98 R08: dffffc0000000000 R09: ffffed1003e8b800
R10: 0000000000000000 R11: dffffc0000000001 R12: ffff88801f45b000
R13: 1ffff92000718fb3 R14: 0000000000001000 R15: dffffc0000000000
copy_user_generic arch/x86/include/asm/uaccess_64.h:37 [inline]
raw_copy_from_user arch/x86/include/asm/uaccess_64.h:52 [inline]
copyin lib/iov_iter.c:183 [inline]
_copy_from_iter+0x2c2/0xff0 lib/iov_iter.c:631
copy_page_from_iter+0x76/0x100 lib/iov_iter.c:752
pipe_write+0x857/0x1af0 fs/pipe.c:537
call_write_iter include/linux/fs.h:2265 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x7ae/0xba0 fs/read_write.c:584
ksys_write+0x19c/0x2c0 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7ff27a27cea9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ff27af3b0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007ff27a3b4050 RCX: 00007ff27a27cea9
RDX: 00000000fffffdef RSI: 0000000020000000 RDI: 0000000000000000
RBP: 00007ff27a2ebff4 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007ff27a3b4050 R15: 00007ffffa85f338
</TASK>
task:syz-executor.3 state:R running task stack:24512 pid:12074 ppid:5582 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
preempt_schedule_notrace+0xf8/0x140 kernel/sched/core.c:6820
preempt_schedule_notrace_thunk+0x16/0x18 arch/x86/entry/thunk_64.S:35
rcu_lockdep_current_cpu_online+0xfe/0x120 kernel/rcu/tree.c:778
rcu_read_lock_held_common kernel/rcu/update.c:112 [inline]
rcu_read_lock_sched_held+0x74/0x130 kernel/rcu/update.c:123
task_css include/linux/cgroup.h:509 [inline]
blkcg_css+0x6d/0x1c0 block/blk-cgroup.c:77
blk_cgroup_congested+0xb9/0x220 block/blk-cgroup.c:1998
__cgroup_throttle_swaprate+0x6d/0x1b0 mm/swapfile.c:3671
cgroup_throttle_swaprate include/linux/swap.h:662 [inline]
wp_page_copy+0x461/0x18c0 mm/memory.c:3160
handle_pte_fault mm/memory.c:5031 [inline]
__handle_mm_fault mm/memory.c:5155 [inline]
handle_mm_fault+0x2525/0x5340 mm/memory.c:5276
do_user_addr_fault arch/x86/mm/fault.c:1340 [inline]
handle_page_fault arch/x86/mm/fault.c:1431 [inline]
exc_page_fault+0x26f/0x620 arch/x86/mm/fault.c:1487
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0010:copy_user_enhanced_fast_string+0xa/0x40 arch/x86/lib/copy_user_64.S:166
Code: ff c9 75 f2 89 d1 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 31 c0 0f 01 ca c3 8d 0c ca 89 ca eb 20 0f 01 cb 83 fa 40 72 38 89 d1 <f3> a4 31 c0 0f 01 ca c3 89 ca eb 0a 66 2e 0f 1f 84 00 00 00 00 00
RSP: 0018:ffffc90003b2f970 EFLAGS: 00050206
RAX: ffffffff84362501 RBX: 00007fffffffe000 RCX: 0000000000000e80
RDX: 0000000000001000 RSI: ffff88800fd87180 RDI: 000000002059e000
RBP: ffffc90003b2faf8 R08: dffffc0000000000 R09: ffffed1001fb1000
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000001000
R13: 0000000000000000 R14: 000000002059de80 R15: ffff88800fd87000
copy_user_generic arch/x86/include/asm/uaccess_64.h:37 [inline]
raw_copy_to_user arch/x86/include/asm/uaccess_64.h:58 [inline]
copyout+0xd8/0x120 lib/iov_iter.c:170
_copy_to_iter+0x4a6/0x1000 lib/iov_iter.c:527
copy_page_to_iter+0xac/0x170 lib/iov_iter.c:725
process_vm_rw_pages mm/process_vm_access.c:45 [inline]
process_vm_rw_single_vec mm/process_vm_access.c:117 [inline]
process_vm_rw_core mm/process_vm_access.c:215 [inline]
process_vm_rw+0x886/0xcc0 mm/process_vm_access.c:283
__do_sys_process_vm_readv mm/process_vm_access.c:295 [inline]
__se_sys_process_vm_readv mm/process_vm_access.c:291 [inline]
__x64_sys_process_vm_readv+0xdc/0xf0 mm/process_vm_access.c:291
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7ff27a27cea9
RSP: 002b:00007ff27af5c0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000136
RAX: ffffffffffffffda RBX: 00007ff27a3b3f80 RCX: 00007ff27a27cea9
RDX: 0000000000000002 RSI: 0000000020008400 RDI: 0000000000000576
RBP: 00007ff27a2ebff4 R08: 0000000000000286 R09: 0000000000000000
R10: 0000000020008640 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007ff27a3b3f80 R15: 00007ffffa85f338
</TASK>
rcu: rcu_preempt kthread starved for 10505 jiffies! g52205 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27064 pid:16 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1965
rcu_gp_fqs_loop+0x2d2/0x1150 kernel/rcu/tree.c:1706
rcu_gp_kthread+0xa3/0x3b0 kernel/rcu/tree.c:1905
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 0 PID: 3927 Comm: kworker/u4:13 Not tainted 6.1.93-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:check_kcov_mode kernel/kcov.c:175 [inline]
RIP: 0010:__sanitizer_cov_trace_pc+0x2c/0x60 kernel/kcov.c:207
Code: 04 24 65 48 8b 0d 24 dc 77 7e 65 8b 15 25 dc 77 7e f7 c2 00 01 ff 00 74 11 f7 c2 00 01 00 00 74 35 83 b9 1c 16 00 00 00 74 2c <8b> 91 f8 15 00 00 83 fa 02 75 21 48 8b 91 00 16 00 00 48 8b 32 48
RSP: 0018:ffffc90004d07598 EFLAGS: 00000246
RAX: ffffffff817f53ab RBX: 1ffff110173281b1 RCX: ffff888027a55940
RDX: 0000000000000001 RSI: 0000000000000001 RDI: 0000000000000000
RBP: ffffc90004d07980 R08: ffffffff817f5374 R09: fffffbfff2093a45
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000800000000
R13: dffffc0000000000 R14: 0000000000000001 R15: ffff8880b9940d88
FS: 0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f5258181198 CR3: 000000000ce8e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
</IRQ>
<TASK>
csd_lock_wait kernel/smp.c:424 [inline]
smp_call_function_many_cond+0x1fcb/0x3460 kernel/smp.c:998
on_each_cpu_cond_mask+0x3b/0x80 kernel/smp.c:1166
on_each_cpu include/linux/smp.h:71 [inline]
text_poke_sync arch/x86/kernel/alternative.c:1334 [inline]
text_poke_bp_batch+0x2bb/0x940 arch/x86/kernel/alternative.c:1534
text_poke_flush arch/x86/kernel/alternative.c:1725 [inline]
text_poke_finish+0x16/0x30 arch/x86/kernel/alternative.c:1732
arch_jump_label_transform_apply+0x13/0x20 arch/x86/kernel/jump_label.c:146
static_key_enable_cpuslocked+0x12e/0x250 kernel/jump_label.c:177
static_key_enable+0x16/0x20 kernel/jump_label.c:190
toggle_allocation_gate+0xbf/0x480 mm/kfence/core.c:804
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup