[v5.15] INFO: rcu detected stall in ip_list_rcv (2)
syzbot

Oct 23, 2025, 6:28:30 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: ac56c046adf4 Linux 5.15.195
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=171c6d42580000
kernel config: https://syzkaller.appspot.com/x/.config?x=e1bb6d24ef2164eb
dashboard link: https://syzkaller.appspot.com/bug?extid=2c9b629ebe11f977a423
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/098286e78a2b/disk-ac56c046.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/34cbdaabbc45/vmlinux-ac56c046.xz
kernel image: https://storage.googleapis.com/syzbot-assets/ac636b4df380/bzImage-ac56c046.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+2c9b62...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
(detected by 1, t=10502 jiffies, g=58489, q=552)
rcu: All QSes seen, last rcu_preempt kthread activity 10503 (4294993665-4294983162), jiffies_till_next_fqs=1, root ->qsmask 0x0
rcu: rcu_preempt kthread starved for 10504 jiffies! g58489 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27496 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_timeout+0x15c/0x280 kernel/time/timer.c:1914
rcu_gp_fqs_loop+0x29e/0x11b0 kernel/rcu/tree.c:1972
rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 15225 Comm: syz.4.2973 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
RIP: 0010:lock_acquire+0x86/0x3f0 kernel/locking/lockdep.c:5591
Code: 8b 48 c7 44 24 70 70 f8 5b 81 4c 8d 6c 24 60 49 c1 ed 03 48 b8 f1 f1 f1 f1 00 f2 f2 f2 4b 89 44 3d 00 66 43 c7 44 3d 09 f3 f3 <43> c6 44 3d 0b f3 e9 ab 02 00 00 65 44 8b 35 4f 1c a6 7e 41 83 fe
RSP: 0018:ffffc900000067c0 EFLAGS: 00000802
RAX: f2f2f200f1f1f1f1 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff962cdbf0
RBP: ffffc900000068c8 R08: 0000000000000001 R09: 0000000000000000
R10: fffffbfff1ad334e R11: 1ffffffff1ad334d R12: ffffffff962cdbf0
R13: 1ffff92000000d04 R14: 0000000000000802 R15: dffffc0000000000
FS: 00007f05485d76c0(0000) GS:ffff8880b9000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f05481b4d58 CR3: 000000005eb7e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: fffffffffffffffc DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Call Trace:
<IRQ>
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
debug_object_deactivate+0x63/0x340 lib/debugobjects.c:749
debug_hrtimer_deactivate kernel/time/hrtimer.c:415 [inline]
debug_deactivate+0x1d/0x1c0 kernel/time/hrtimer.c:471
__run_hrtimer kernel/time/hrtimer.c:1653 [inline]
__hrtimer_run_queues+0x2db/0xc40 kernel/time/hrtimer.c:1749
hrtimer_interrupt+0x3bb/0x8d0 kernel/time/hrtimer.c:1811
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097 [inline]
__sysvec_apic_timer_interrupt+0x137/0x4a0 arch/x86/kernel/apic/apic.c:1114
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0x4d/0xc0 arch/x86/kernel/apic/apic.c:1108
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:seqcount_lockdep_reader_access+0x17b/0x1c0 include/linux/seqlock.h:106
Code: f9 4d 85 e4 75 16 e8 44 53 61 f9 eb 15 e8 3d 53 61 f9 e8 88 d6 92 01 4d 85 e4 74 ea e8 2e 53 61 f9 fb 48 c7 04 24 0e 36 e0 45 <4b> c7 04 3e 00 00 00 00 66 43 c7 44 3e 09 00 00 43 c6 44 3e 0b 00
RSP: 0018:ffffc90000006da0 EFLAGS: 00000246
RAX: ffffffff88167762 RBX: 0000000000000000 RCX: ffff88807dae1dc0
RDX: 0000000000000100 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90000006e50 R08: dffffc0000000000 R09: fffffbfff1ff7a32
R10: fffffbfff1ff7a32 R11: 1ffffffff1ff7a31 R12: 0000000000000200
R13: 1ffffffff1607df8 R14: 1ffff92000000db4 R15: dffffc0000000000
nf_conntrack_get_ht include/net/netfilter/nf_conntrack.h:338 [inline]
____nf_conntrack_find net/netfilter/nf_conntrack_core.c:785 [inline]
__nf_conntrack_find_get+0x146/0x620 net/netfilter/nf_conntrack_core.c:823
resolve_normal_ct net/netfilter/nf_conntrack_core.c:1793 [inline]
nf_conntrack_in+0x6a3/0x16f0 net/netfilter/nf_conntrack_core.c:1960
nf_hook_entry_hookfn include/linux/netfilter.h:142 [inline]
nf_hook_slow net/netfilter/core.c:584 [inline]
nf_hook_slow_list+0x26a/0x5a0 net/netfilter/core.c:623
NF_HOOK_LIST include/linux/netfilter.h:338 [inline]
ip_sublist_rcv+0xbd8/0xce0 net/ipv4/ip_input.c:634
ip_list_rcv+0x3df/0x430 net/ipv4/ip_input.c:671
__netif_receive_skb_list_ptype net/core/dev.c:5568 [inline]
__netif_receive_skb_list_core+0x574/0x740 net/core/dev.c:5616
__netif_receive_skb_list net/core/dev.c:5668 [inline]
netif_receive_skb_list_internal+0x871/0xb90 net/core/dev.c:5759
gro_normal_list net/core/dev.c:5913 [inline]
gro_normal_one net/core/dev.c:5926 [inline]
napi_skb_finish net/core/dev.c:6263 [inline]
napi_gro_receive+0x4ef/0xa60 net/core/dev.c:6293
receive_buf+0x3793/0x5780 drivers/net/virtio_net.c:1240
virtnet_receive drivers/net/virtio_net.c:1504 [inline]
virtnet_poll+0x546/0xef0 drivers/net/virtio_net.c:1617
__napi_poll+0xc0/0x430 net/core/dev.c:7075
napi_poll net/core/dev.c:7142 [inline]
net_rx_action+0x4a8/0x9c0 net/core/dev.c:7232
handle_softirqs+0x328/0x820 kernel/softirq.c:576
__do_softirq kernel/softirq.c:610 [inline]
invoke_softirq kernel/softirq.c:450 [inline]
__irq_exit_rcu+0x12f/0x220 kernel/softirq.c:659
irq_exit_rcu+0x5/0x20 kernel/softirq.c:671
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0xa0/0xc0 arch/x86/kernel/apic/apic.c:1108
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:preempt_schedule_irq+0xac/0x150 kernel/sched/core.c:6799
Code: 44 24 20 f6 44 24 21 02 74 0b 0f 0b 48 f7 03 08 00 00 00 74 70 bf 01 00 00 00 e8 cf d7 9f f7 e8 9a 4f cc f7 fb bf 01 00 00 00 <e8> ef b5 ff ff 48 c7 44 24 40 00 00 00 00 9c 8f 44 24 40 8b 44 24
RSP: 0018:ffffc9000339f1c0 EFLAGS: 00000282
RAX: 621e6505d367b000 RBX: 0000000000000000 RCX: 621e6505d367b000
RDX: dffffc0000000000 RSI: ffffffff8a0b18a0 RDI: 0000000000000001
RBP: ffffc9000339f260 R08: dffffc0000000000 R09: fffffbfff1ff7a32
R10: fffffbfff1ff7a32 R11: 1ffffffff1ff7a31 R12: 0000000000000000
R13: 0000000000000000 R14: dffffc0000000000 R15: 1ffff92000673e38
irqentry_exit+0x63/0x70 kernel/entry/common.c:432
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:rmqueue_pcplist mm/page_alloc.c:3711 [inline]
RIP: 0010:rmqueue mm/page_alloc.c:3742 [inline]
RIP: 0010:get_page_from_freelist+0x156c/0x1c60 mm/page_alloc.c:4189
Code: 01 00 00 00 00 00 00 9c 8f 84 24 80 01 00 00 f6 84 24 81 01 00 00 02 0f 85 dc 02 00 00 41 f7 c6 00 02 00 00 74 01 fb 4d 85 e4 <48> b9 00 00 00 00 00 fc ff df 4c 8b 74 24 38 0f 84 b9 01 00 00 4c
RSP: 0018:ffffc9000339f320 EFLAGS: 00000282
RAX: 621e6505d367b000 RBX: ffffffff8bbc3820 RCX: 621e6505d367b000
RDX: dffffc0000000000 RSI: ffffffff8a0b18a0 RDI: ffffffff8a59a880
RBP: ffffc9000339f530 R08: dffffc0000000000 R09: fffffbfff1ff7a32
R10: fffffbfff1ff7a32 R11: 1ffffffff1ff7a31 R12: ffffea000188da80
R13: 0000000000000000 R14: 0000000000000202 R15: ffffc9000339f588
__alloc_pages+0x1e1/0x470 mm/page_alloc.c:5487
__get_free_pages+0x8/0x30 mm/page_alloc.c:5524
genradix_alloc_node lib/generic-radix-tree.c:83 [inline]
__genradix_ptr_alloc+0xda/0x350 lib/generic-radix-tree.c:122
__genradix_prealloc+0x3e/0x80 lib/generic-radix-tree.c:225
sctp_stream_alloc_out net/sctp/stream.c:104 [inline]
sctp_stream_init+0x139/0x400 net/sctp/stream.c:149
sctp_association_init net/sctp/associola.c:233 [inline]
sctp_association_new+0x10db/0x24a0 net/sctp/associola.c:298
sctp_connect_new_asoc+0x2bb/0x690 net/sctp/socket.c:1089
sctp_sendmsg_new_asoc net/sctp/socket.c:1691 [inline]
sctp_sendmsg+0x15e0/0x2950 net/sctp/socket.c:2005
sock_sendmsg_nosec net/socket.c:704 [inline]
__sock_sendmsg net/socket.c:716 [inline]
____sys_sendmsg+0x5a2/0x8c0 net/socket.c:2436
___sys_sendmsg+0x1f0/0x260 net/socket.c:2490
__sys_sendmsg net/socket.c:2519 [inline]
__do_sys_sendmsg net/socket.c:2528 [inline]
__se_sys_sendmsg+0x190/0x250 net/socket.c:2526
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f054a3b1fc9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f05485d7038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f054a609180 RCX: 00007f054a3b1fc9
RDX: 00000000000000fc RSI: 0000200000000600 RDI: 000000000000000a
RBP: 00007f054a434f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f054a609218 R14: 00007f054a609180 R15: 00007ffe29401a18
</TASK>
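
A quick sanity check of the stall arithmetic quoted above: the jiffies timestamps in parentheses on the "All QSes seen" line are the raw values, and their difference is the reported inactivity period of the rcu_preempt kthread (the conversion to seconds is omitted here, since the report's kernel config would have to be consulted for HZ). A minimal sketch, using only the numbers from the report:

```python
# Jiffies values taken verbatim from the rcu stall report above.
now = 4294993665            # jiffies at detection time
last_activity = 4294983162  # jiffies when rcu_preempt kthread last ran

# Difference matches the "last rcu_preempt kthread activity 10503" figure;
# the "starved for 10504 jiffies" line is one tick later.
inactive_for = now - last_activity
print(inactive_for)  # → 10503
```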


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup