[v5.15] INFO: rcu detected stall in sys_sendmmsg (2)


syzbot

May 9, 2024, 7:17:23 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 284087d4f7d5 Linux 5.15.158
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1005e85c980000
kernel config: https://syzkaller.appspot.com/x/.config?x=b0dd54e4b5171ebc
dashboard link: https://syzkaller.appspot.com/bug?extid=76cd2cc068fbaa0ef5c1
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/c2e33c1db6bf/disk-284087d4.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d9f77284af1d/vmlinux-284087d4.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a600323dd149/bzImage-284087d4.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+76cd2c...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 0-...!: (1 GPs behind) idle=9ad/1/0x4000000000000000 softirq=138965/138966 fqs=6
(detected by 1, t=10505 jiffies, g=217737, q=231)
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 7807 Comm: syz-executor.4 Not tainted 5.15.158-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
RIP: 0010:preempt_count_add+0x5f/0x180 kernel/sched/core.c:5493
Code: 1f 0f ab 7e 65 01 1d 18 0f ab 7e 48 c7 c0 c0 ef 3e 91 48 c1 e8 03 42 0f b6 04 38 84 c0 0f 85 d9 00 00 00 83 3d f1 8e e7 0f 00 <75> 11 65 8b 05 f0 0e ab 7e 0f b6 c0 3d f5 00 00 00 73 59 65 8b 05
RSP: 0018:ffffc90000007ca0 EFLAGS: 00000046
RAX: 0000000000000004 RBX: 0000000000000001 RCX: ffffffff913eef03
RDX: 0000000080010002 RSI: ffffffff8ad8f5e0 RDI: 0000000000000001
RBP: ffffc90000007d50 R08: ffffffff816f665e R09: fffffbfff1bc8c56
R10: 0000000000000000 R11: dffffc0000000001 R12: ffff8880b9a2a200
R13: 1ffff92000000f98 R14: ffffc90000007ce0 R15: dffffc0000000000
FS: 00007fec3b5e56c0(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b3292c000 CR3: 000000005fa36000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 000000001d623346 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
__raw_spin_lock_irq include/linux/spinlock_api_smp.h:127 [inline]
_raw_spin_lock_irq+0xb3/0x110 kernel/locking/spinlock.c:170
__run_hrtimer kernel/time/hrtimer.c:1690 [inline]
__hrtimer_run_queues+0x662/0xcf0 kernel/time/hrtimer.c:1750
hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1812
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0xd4/0x130 kernel/locking/spinlock.c:194
Code: 9c 8f 44 24 20 42 80 3c 23 00 74 08 4c 89 f7 e8 02 e8 a2 f7 f6 44 24 21 02 75 4e 41 f7 c7 00 02 00 00 74 01 fb bf 01 00 00 00 <e8> f7 14 30 f7 65 8b 05 c8 22 db 75 85 c0 74 3f 48 c7 04 24 0e 36
RSP: 0018:ffffc900030a7540 EFLAGS: 00000206
RAX: d386d3ae39353600 RBX: 1ffff92000614eac RCX: ffffffff81631938
RDX: dffffc0000000000 RSI: ffffffff8a8b2980 RDI: 0000000000000001
RBP: ffffc900030a75d0 R08: dffffc0000000000 R09: fffffbfff1f7ee2d
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 1ffff92000614ea8 R14: ffffc900030a7560 R15: 0000000000000246
spin_unlock_irqrestore include/linux/spinlock.h:418 [inline]
prepare_to_wait_exclusive+0xc5/0x220 kernel/sched/wait.c:288
unix_wait_for_peer+0x15d/0x330 net/unix/af_unix.c:1304
unix_dgram_sendmsg+0x1441/0x2090 net/unix/af_unix.c:1911
sock_sendmsg_nosec net/socket.c:704 [inline]
__sock_sendmsg net/socket.c:716 [inline]
____sys_sendmsg+0x59e/0x8f0 net/socket.c:2431
___sys_sendmsg+0x252/0x2e0 net/socket.c:2485
__sys_sendmmsg+0x2bf/0x560 net/socket.c:2571
__do_sys_sendmmsg net/socket.c:2600 [inline]
__se_sys_sendmmsg net/socket.c:2597 [inline]
__x64_sys_sendmmsg+0x9c/0xb0 net/socket.c:2597
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7fec3d072d69
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fec3b5e50c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 00007fec3d1a0f80 RCX: 00007fec3d072d69
RDX: 0000000000000651 RSI: 0000000020000000 RDI: 0000000000000007
RBP: 00007fec3d0bf49e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000004d R14: 00007fec3d1a0f80 R15: 00007ffd1b36fe88
</TASK>
rcu: rcu_preempt kthread starved for 10475 jiffies! g217737 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:26584 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1914
rcu_gp_fqs_loop+0x2bf/0x1080 kernel/rcu/tree.c:1972
rcu_gp_kthread+0xa4/0x360 kernel/rcu/tree.c:2145
kthread+0x3f6/0x4f0 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:300
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
NMI backtrace for cpu 1
CPU: 1 PID: 7830 Comm: syz-executor.1 Not tainted 5.15.158-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
<IRQ>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2d0 lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_single_cpu_backtrace include/linux/nmi.h:166 [inline]
rcu_check_gp_kthread_starvation+0x1d2/0x240 kernel/rcu/tree_stall.h:487
print_other_cpu_stall+0x137a/0x14d0 kernel/rcu/tree_stall.h:592
check_cpu_stall kernel/rcu/tree_stall.h:745 [inline]
rcu_pending kernel/rcu/tree.c:3932 [inline]
rcu_sched_clock_irq+0xa38/0x1150 kernel/rcu/tree.c:2619
update_process_times+0x196/0x200 kernel/time/timer.c:1818
tick_sched_handle kernel/time/tick-sched.c:254 [inline]
tick_sched_timer+0x386/0x550 kernel/time/tick-sched.c:1473
__run_hrtimer kernel/time/hrtimer.c:1686 [inline]
__hrtimer_run_queues+0x55b/0xcf0 kernel/time/hrtimer.c:1750
hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1812
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
RIP: 0010:csd_lock_wait kernel/smp.c:440 [inline]
RIP: 0010:smp_call_function_many_cond+0xa93/0xd90 kernel/smp.c:969
Code: 04 03 84 c0 0f 85 84 00 00 00 45 8b 7d 00 44 89 fe 83 e6 01 31 ff e8 4c cf 0b 00 41 83 e7 01 75 07 e8 e1 cb 0b 00 eb 41 f3 90 <48> b8 00 00 00 00 00 fc ff df 0f b6 04 03 84 c0 75 11 41 f7 45 00
RSP: 0018:ffffc900030f7220 EFLAGS: 00000246
RAX: ffffffff81749104 RBX: 1ffff11017348509 RCX: 0000000000040000
RDX: ffffc9000db42000 RSI: 000000000003ffff RDI: 0000000000040000
RBP: ffffc900030f7360 R08: ffffffff817490d4 R09: fffffbfff1f7ee1a
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000000
R13: ffff8880b9a42848 R14: ffff8880b9b3b3c0 R15: 0000000000000001
on_each_cpu_cond_mask+0x3b/0x80 kernel/smp.c:1135
__purge_vmap_area_lazy+0x294/0x1740 mm/vmalloc.c:1683
_vm_unmap_aliases+0x453/0x4e0 mm/vmalloc.c:2107
change_page_attr_set_clr+0x308/0x1050 arch/x86/mm/pat/set_memory.c:1740
change_page_attr_clear arch/x86/mm/pat/set_memory.c:1797 [inline]
set_memory_ro+0xa1/0xe0 arch/x86/mm/pat/set_memory.c:1943
bpf_jit_binary_lock_ro include/linux/filter.h:891 [inline]
bpf_int_jit_compile+0xbf57/0xc6e0 arch/x86/net/bpf_jit_comp.c:2372
bpf_prog_select_runtime+0x6e2/0x9b0 kernel/bpf/core.c:1930
bpf_prog_load+0x131c/0x1b60 kernel/bpf/syscall.c:2357
__sys_bpf+0x343/0x670 kernel/bpf/syscall.c:4651
__do_sys_bpf kernel/bpf/syscall.c:4755 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4753 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4753
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f677e6a2d69
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f677cc150c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f677e7d0f80 RCX: 00007f677e6a2d69
RDX: 0000000000000045 RSI: 0000000020000200 RDI: 0000000000000005
RBP: 00007f677e6ef49e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f677e7d0f80 R15: 00007fffcd12efb8
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup