[v6.6] INFO: rcu detected stall in sys_clone (2)


syzbot

Dec 9, 2025, 4:13:26 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 5fa4793a2d2d Linux 6.6.119
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1330aec2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=691a6769a86ac817
dashboard link: https://syzkaller.appspot.com/bug?extid=7919861b650883ef99d9
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/857fe583cdec/disk-5fa4793a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ed4d3c1402bc/vmlinux-5fa4793a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/40296d968c3d/bzImage-5fa4793a.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+791986...@syzkaller.appspotmail.com

bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P5782/1:b..l
rcu: (detected by 0, t=10503 jiffies, g=18237, q=368 ncpus=2)
task:udevd state:R running task stack:26696 pid:5782 ppid:5138 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
preempt_schedule_common+0x82/0xc0 kernel/sched/core.c:6866
preempt_schedule+0xab/0xc0 kernel/sched/core.c:6890
preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
__raw_spin_unlock include/linux/spinlock_api_smp.h:143 [inline]
_raw_spin_unlock+0x3a/0x40 kernel/locking/spinlock.c:186
spin_unlock include/linux/spinlock.h:391 [inline]
zap_pte_range mm/memory.c:1523 [inline]
zap_pmd_range mm/memory.c:1571 [inline]
zap_pud_range mm/memory.c:1600 [inline]
zap_p4d_range mm/memory.c:1621 [inline]
unmap_page_range+0x236f/0x2fe0 mm/memory.c:1642
unmap_vmas+0x25e/0x3a0 mm/memory.c:1732
exit_mmap+0x200/0xb50 mm/mmap.c:3302
__mmput+0x118/0x3c0 kernel/fork.c:1355
exit_mm+0x1da/0x2c0 kernel/exit.c:569
do_exit+0x88e/0x23c0 kernel/exit.c:870
do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
__do_sys_exit_group kernel/exit.c:1035 [inline]
__se_sys_exit_group kernel/exit.c:1033 [inline]
__x64_sys_exit_group+0x3f/0x40 kernel/exit.c:1033
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f16d44f16c5
RSP: 002b:00007ffdedc61a28 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 0000557c44433fb0 RCX: 00007f16d44f16c5
RDX: 00000000000000e7 RSI: fffffffffffffe68 RDI: 0000000000000000
RBP: 0000557c44433910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffdedc61a70 R14: 0000000000000000 R15: 0000000000000000
</TASK>
rcu: rcu_preempt kthread starved for 1498 jiffies! g18237 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27368 pid:17 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_timeout+0x160/0x280 kernel/time/timer.c:2168
rcu_gp_fqs_loop+0x302/0x1560 kernel/rcu/tree.c:1667
rcu_gp_kthread+0x99/0x380 kernel/rcu/tree.c:1866
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 0 PID: 6958 Comm: syz.1.225 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0xa9/0x110 kernel/locking/spinlock.c:194
Code: 74 05 e8 0a d9 1a f7 48 c7 44 24 20 00 00 00 00 9c 8f 44 24 20 f6 44 24 21 02 75 4b f7 c3 00 02 00 00 74 01 fb bf 01 00 00 00 <e8> e2 53 ea f6 65 8b 05 d3 9a 92 75 85 c0 74 3c 48 c7 04 24 0e 36
RSP: 0018:ffffc90000006740 EFLAGS: 00000206
RAX: fe8aa5e64cec6800 RBX: 0000000000000a06 RCX: fe8aa5e64cec6800
RDX: dffffc0000000000 RSI: ffffffff8aaabce0 RDI: 0000000000000001
RBP: ffffc900000067c0 R08: ffffffff90d9452f R09: 1ffffffff21b28a5
R10: dffffc0000000000 R11: fffffbfff21b28a6 R12: dffffc0000000000
R13: ffff88807a4b7200 R14: ffffffff97113980 R15: 1ffff92000000ce8
FS: 00007f07052356c0(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f07051156c0 CR3: 0000000060403000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Call Trace:
<IRQ>
__debug_check_no_obj_freed lib/debugobjects.c:999 [inline]
debug_check_no_obj_freed+0x51f/0x540 lib/debugobjects.c:1020
slab_free_hook mm/slub.c:1786 [inline]
slab_free_freelist_hook+0xd2/0x1b0 mm/slub.c:1837
slab_free mm/slub.c:3830 [inline]
kmem_cache_free+0xf8/0x280 mm/slub.c:3852
skb_ext_del include/linux/skbuff.h:4786 [inline]
nf_bridge_info_free net/bridge/br_netfilter_hooks.c:152 [inline]
br_nf_dev_queue_xmit+0x48f/0x1b90 net/bridge/br_netfilter_hooks.c:-1
NF_HOOK+0x613/0x6a0 include/linux/netfilter.h:304
br_nf_post_routing+0xb41/0xfb0 net/bridge/br_netfilter_hooks.c:977
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_slow+0xbd/0x200 net/netfilter/core.c:626
nf_hook include/linux/netfilter.h:259 [inline]
NF_HOOK+0x213/0x3d0 include/linux/netfilter.h:302
br_forward_finish+0xd3/0x130 net/bridge/br_forward.c:66
br_nf_hook_thresh net/bridge/br_netfilter_hooks.c:-1 [inline]
br_nf_forward_finish+0xa33/0xe50 net/bridge/br_netfilter_hooks.c:684
NF_HOOK+0x613/0x6a0 include/linux/netfilter.h:304
br_nf_forward_ip+0xcc1/0x1110 net/bridge/br_netfilter_hooks.c:754
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_slow+0xbd/0x200 net/netfilter/core.c:626
nf_hook include/linux/netfilter.h:259 [inline]
NF_HOOK+0x213/0x3d0 include/linux/netfilter.h:302
__br_forward+0x41f/0x600 net/bridge/br_forward.c:115
deliver_clone net/bridge/br_forward.c:131 [inline]
maybe_deliver+0xb5/0x150 net/bridge/br_forward.c:191
br_flood+0x31b/0x680 net/bridge/br_forward.c:237
br_handle_frame_finish+0x143d/0x1950 net/bridge/br_input.c:215
br_nf_hook_thresh+0x3b6/0x480 net/bridge/br_netfilter_hooks.c:1184
br_nf_pre_routing_finish_ipv6+0x9e3/0xd90 net/bridge/br_netfilter_ipv6.c:-1
NF_HOOK include/linux/netfilter.h:304 [inline]
br_nf_pre_routing_ipv6+0x34d/0x680 net/bridge/br_netfilter_ipv6.c:184
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
br_handle_frame+0x957/0x14c0 net/bridge/br_input.c:424
__netif_receive_skb_core+0xf6b/0x3ac0 net/core/dev.c:5502
__netif_receive_skb_one_core net/core/dev.c:5606 [inline]
__netif_receive_skb+0x74/0x290 net/core/dev.c:5722
process_backlog+0x380/0x6e0 net/core/dev.c:6050
__napi_poll+0xc0/0x460 net/core/dev.c:6612
napi_poll net/core/dev.c:6679 [inline]
net_rx_action+0x5ea/0xbf0 net/core/dev.c:6815
handle_softirqs+0x280/0x820 kernel/softirq.c:578
__do_softirq kernel/softirq.c:612 [inline]
invoke_softirq kernel/softirq.c:452 [inline]
__irq_exit_rcu+0xc7/0x190 kernel/softirq.c:661
irq_exit_rcu+0x9/0x20 kernel/softirq.c:673
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0xa4/0xc0 arch/x86/kernel/apic/apic.c:1088
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:find_stack lib/stackdepot.c:349 [inline]
RIP: 0010:__stack_depot_save+0x149/0x630 lib/stackdepot.c:390
Code: e2 c1 c2 04 29 d1 31 c8 c1 c1 0e 29 c8 41 31 c4 c1 c0 18 41 29 c4 48 8b 35 fc e1 c8 12 8b 2d f2 e1 c8 12 44 21 e5 4c 8b 2c ee <4d> 85 ed 74 33 44 89 f0 eb 09 4d 8b 6d 00 4d 85 ed 74 25 45 39 65
RSP: 0018:ffffc90005357148 EFLAGS: 00000202
RAX: 00000000251f38aa RBX: ffffc900053571e0 RCX: 000000004cca028f
RDX: 000000005bf1bf11 RSI: ffff88823b400000 RDI: 0000000000002800
RBP: 000000000008792a R08: 0000000007a9d424 R09: 00000000457e3612
R10: 0000000000000004 R11: 0000000000000002 R12: 00000000e568792a
R13: 0000000000000000 R14: 0000000000000010 R15: 0000000000000001
save_stack+0x105/0x1f0 mm/page_owner.c:129
__reset_page_owner+0x4e/0x190 mm/page_owner.c:149
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1154 [inline]
free_unref_page_prepare+0x7ce/0x8e0 mm/page_alloc.c:2336
free_unref_page+0x32/0x2e0 mm/page_alloc.c:2429
discard_slab mm/slub.c:2127 [inline]
__unfreeze_partials+0x1cf/0x210 mm/slub.c:2667
put_cpu_partial+0x17c/0x250 mm/slub.c:2743
__slab_free+0x31d/0x410 mm/slub.c:3700
qlink_free mm/kasan/quarantine.c:166 [inline]
qlist_free_all+0x75/0xe0 mm/kasan/quarantine.c:185
kasan_quarantine_reduce+0x143/0x160 mm/kasan/quarantine.c:292
__kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:305
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
slab_alloc_node mm/slub.c:3495 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x11e/0x2e0 mm/slub.c:3519
kmem_cache_zalloc include/linux/slab.h:711 [inline]
__proc_create+0x370/0x8d0 fs/proc/generic.c:447
proc_create_reg fs/proc/generic.c:574 [inline]
proc_create_data+0xa6/0x190 fs/proc/generic.c:588
nfsd_net_init+0x1a2/0x1c0 fs/nfsd/nfsctl.c:1537
ops_init+0x397/0x640 net/core/net_namespace.c:139
setup_net+0x3a5/0xa00 net/core/net_namespace.c:343
copy_net_ns+0x36d/0x5e0 net/core/net_namespace.c:520
create_new_namespaces+0x3d3/0x6f0 kernel/nsproxy.c:110
copy_namespaces+0x430/0x4a0 kernel/nsproxy.c:179
copy_process+0x1700/0x3d70 kernel/fork.c:2509
kernel_clone+0x21b/0x840 kernel/fork.c:2914
__do_sys_clone kernel/fork.c:3057 [inline]
__se_sys_clone kernel/fork.c:3041 [inline]
__x64_sys_clone+0x18c/0x1e0 kernel/fork.c:3041
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f070438f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f0705234fe8 EFLAGS: 00000206 ORIG_RAX: 0000000000000038
RAX: ffffffffffffffda RBX: 00007f07045e5fa0 RCX: 00007f070438f749
RDX: 0000000000000000 RSI: 00002000000012b0 RDI: 0000000040000200
RBP: 00007f0704413f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000000000
R13: 00007f07045e6038 R14: 00007f07045e5fa0 R15: 00007ffdd28fcfd8
</TASK>
net_ratelimit: 4430 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:56:30:15:7e:9c:2b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:56:30:15:7e:9c:2b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:56:30:15:7e:9c:2b, vlan:0)
net_ratelimit: 10331 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:56:30:15:7e:9c:2b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:56:30:15:7e:9c:2b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:56:30:15:7e:9c:2b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup