[v6.1] INFO: rcu detected stall in sys_sendmmsg


syzbot

Jun 8, 2024, 12:23:23 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 88690811da69 Linux 6.1.92
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12dba80a980000
kernel config: https://syzkaller.appspot.com/x/.config?x=ee57a613e7f5bf6c
dashboard link: https://syzkaller.appspot.com/bug?extid=c3a620e9f8895b68b6b0
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/8b45ba80e02a/disk-88690811.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ca769d644800/vmlinux-88690811.xz
kernel image: https://storage.googleapis.com/syzbot-assets/26a1d8aecbf6/bzImage-88690811.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c3a620...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P4069/2:b..l
(detected by 1, t=10502 jiffies, g=10713, q=60 ncpus=2)
task:syz-executor.3 state:R running task stack:24216 pid:4069 ppid:3585 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
preempt_schedule_irq+0xf7/0x1c0 kernel/sched/core.c:6870
irqentry_exit+0x53/0x80 kernel/entry/common.c:439
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
RIP: 0010:__sanitizer_cov_trace_cmp8+0x0/0x80 kernel/kcov.c:284
Code: 39 c8 77 22 89 f8 89 f6 49 ff c2 4c 89 11 48 c7 44 0a 08 04 00 00 00 48 89 44 0a 10 48 89 74 0a 18 4c 89 44 0a 20 c3 0f 1f 00 <4c> 8b 04 24 65 48 8b 0d 04 e0 77 7e 65 8b 05 05 e0 77 7e a9 00 01
RSP: 0018:ffffc90005e06e28 EFLAGS: 00000297
RAX: 0000000000000002 RBX: 0000000000000000 RCX: ffff888027419dc0
RDX: ffff888027419dc0 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000000 R08: ffffffff88c40ed7 R09: fffffbfff1ce712e
R10: 0000000000000000 R11: dffffc0000000001 R12: ffffc90005e071e0
R13: ffff88807412f550 R14: 00000000010000e0 R15: 00000000010000e0
nf_inet_addr_cmp include/linux/netfilter.h:31 [inline]
__nf_ct_tuple_dst_equal include/net/netfilter/nf_conntrack_tuple.h:135 [inline]
nf_ct_tuple_equal include/net/netfilter/nf_conntrack_tuple.h:144 [inline]
nf_ct_key_equal+0x232/0x680 net/netfilter/nf_conntrack_core.c:709
____nf_conntrack_find net/netfilter/nf_conntrack_core.c:769 [inline]
__nf_conntrack_find_get+0x2a3/0x760 net/netfilter/nf_conntrack_core.c:795
resolve_normal_ct net/netfilter/nf_conntrack_core.c:1850 [inline]
nf_conntrack_in+0x88d/0x1d10 net/netfilter/nf_conntrack_core.c:2017
nf_hook_entry_hookfn include/linux/netfilter.h:142 [inline]
nf_hook_slow+0xae/0x1e0 net/netfilter/core.c:614
nf_hook+0x2c0/0x450 include/linux/netfilter.h:257
__ip_local_out+0x38b/0x4a0 net/ipv4/ip_output.c:115
ip_local_out net/ipv4/ip_output.c:124 [inline]
ip_send_skb+0x49/0x1a0 net/ipv4/ip_output.c:1596
udp_send_skb+0xa33/0x1420 net/ipv4/udp.c:988
udp_sendmsg+0x1d10/0x2ad0 net/ipv4/udp.c:1276
sock_sendmsg_nosec net/socket.c:718 [inline]
__sock_sendmsg net/socket.c:730 [inline]
____sys_sendmsg+0x5a5/0x8f0 net/socket.c:2514
___sys_sendmsg net/socket.c:2568 [inline]
__sys_sendmmsg+0x3ab/0x730 net/socket.c:2654
__do_sys_sendmmsg net/socket.c:2683 [inline]
__se_sys_sendmmsg net/socket.c:2680 [inline]
__x64_sys_sendmmsg+0x9c/0xb0 net/socket.c:2680
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fe71ba7cf69
RSP: 002b:00007fe71c7f60c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 00007fe71bbb4050 RCX: 00007fe71ba7cf69
RDX: 000000000800001d RSI: 0000000020007fc0 RDI: 0000000000000007
RBP: 00007fe71bada6fe R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007fe71bbb4050 R15: 00007fff0c53b158
</TASK>
rcu: rcu_preempt kthread starved for 10538 jiffies! g10713 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:26816 pid:16 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1965
rcu_gp_fqs_loop+0x2d2/0x1150 kernel/rcu/tree.c:1706
rcu_gp_kthread+0xa3/0x3b0 kernel/rcu/tree.c:1905
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 0 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_safe_halt drivers/acpi/processor_idle.c:112 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_idle_do_entry+0x10f/0x340 drivers/acpi/processor_idle.c:572
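
For reference, since no reproducer is available yet: ORIG_RAX 0x133 is __NR_sendmmsg on x86_64, and the userspace register dump above decodes to sendmmsg(fd=7 /* RDI */, msgvec=0x20007fc0 /* RSI */, vlen=0x800001d /* RDX */, flags=0 /* R10 */), i.e. one sendmmsg() call asked to push a very large batch of UDP datagrams, each of which traverses nf_conntrack_in() on the output path shown in the trace. The program below is purely illustrative and is not a syzbot reproducer; the destination address, port, message count, and payload size are assumptions chosen only to show the shape of that syscall path.

#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

#define NMSG 64  /* illustrative; the trace suggests a far larger vlen (0x800001d) */

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return 1;

	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(12345),               /* assumed port, not from the report */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};

	char payload[32] = "x";
	struct iovec iov[NMSG];
	struct mmsghdr msgs[NMSG];
	memset(msgs, 0, sizeof(msgs));

	for (int i = 0; i < NMSG; i++) {
		iov[i].iov_base = payload;
		iov[i].iov_len = sizeof(payload);
		msgs[i].msg_hdr.msg_name = &dst;
		msgs[i].msg_hdr.msg_namelen = sizeof(dst);
		msgs[i].msg_hdr.msg_iov = &iov[i];
		msgs[i].msg_hdr.msg_iovlen = 1;
	}

	/* Each datagram goes through udp_sendmsg() -> ip_send_skb() and, with
	 * conntrack loaded, nf_conntrack_in() -- the path in the stall trace. */
	sendmmsg(fd, msgs, NMSG, 0);

	close(fd);
	return 0;
}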


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup