Hello,
syzbot found the following issue on:
HEAD commit: 80de0a958133 Linux 6.6.133
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1738534e580000
kernel config: https://syzkaller.appspot.com/x/.config?x=c5b35c4db8465904
dashboard link: https://syzkaller.appspot.com/bug?extid=73c5f39e178c7ad1bb1d
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3a04f1ef21aa/disk-80de0a95.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/081f2fcfce9b/vmlinux-80de0a95.xz
kernel image: https://storage.googleapis.com/syzbot-assets/3ef5904d5301/bzImage-80de0a95.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+73c5f3...@syzkaller.appspotmail.com
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P5763/1:b..l P5768/1:b..l
rcu: (detected by 0, t=10503 jiffies, g=11929, q=277 ncpus=2)
task:syz-executor state:R running task stack:21448 pid:5768 ppid:5765 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
preempt_schedule_irq+0xbf/0x150 kernel/sched/core.c:7010
irqentry_exit+0x67/0x70 kernel/entry/common.c:438
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:preempt_count arch/x86/include/asm/preempt.h:27 [inline]
RIP: 0010:rcu_lockdep_current_cpu_online+0x4/0x120 kernel/rcu/tree.c:4222
Code: 6e 00 4c 85 3b 0f 95 c0 5b 41 5e 41 5f c3 89 fe 89 fb 48 c7 c7 40 6f 13 8d e8 28 17 ec 02 89 df e9 53 ff ff ff 90 f3 0f 1e fa <41> 57 41 56 53 65 8b 0d 08 82 92 7e b0 01 f7 c1 00 00 f0 00 0f 85
RSP: 0018:ffffc9000451f3d0 EFLAGS: 00000202
RAX: 0000000000000001 RBX: 0000000000000000 RCX: bb41a50c69a43000
RDX: 0000000000000000 RSI: ffffffff8b1c8dc0 RDI: ffffffff8b1c8d80
RBP: 0000000000000001 R08: dffffc0000000000 R09: 1ffffffff22388a0
R10: dffffc0000000000 R11: fffffbfff22388a1 R12: dffffc0000000000
R13: 0000000000000500 R14: 0000000000145e80 R15: ffff88813aa00000
rcu_read_lock_held_common kernel/rcu/update.c:112 [inline]
rcu_read_lock_held+0x1e/0x40 kernel/rcu/update.c:348
lookup_page_ext mm/page_ext.c:240 [inline]
page_ext_get+0x193/0x2b0 mm/page_ext.c:509
__reset_page_owner+0x2e/0x190 mm/page_owner.c:145
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1181 [inline]
free_unref_page_prepare+0x7b2/0x8c0 mm/page_alloc.c:2365
free_unref_page+0x32/0x2e0 mm/page_alloc.c:2458
__slab_free+0x35a/0x400 mm/slub.c:3736
qlink_free mm/kasan/quarantine.c:166 [inline]
qlist_free_all+0x75/0xd0 mm/kasan/quarantine.c:185
kasan_quarantine_reduce+0x143/0x160 mm/kasan/quarantine.c:292
____kasan_kmalloc mm/kasan/common.c:341 [inline]
__kasan_kmalloc+0x22/0xa0 mm/kasan/common.c:384
kasan_kmalloc include/linux/kasan.h:198 [inline]
__do_kmalloc_node mm/slab_common.c:1007 [inline]
__kmalloc+0xb4/0x230 mm/slab_common.c:1020
kmalloc_array include/linux/slab.h:637 [inline]
memcg_list_lru_alloc+0x1cc/0xb10 mm/list_lru.c:487
memcg_slab_pre_alloc_hook mm/slab.h:501 [inline]
slab_pre_alloc_hook+0x2b2/0x310 mm/slab.h:719
slab_alloc_node mm/slub.c:3477 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc_lru+0x4d/0x2d0 mm/slub.c:3526
__d_alloc+0x31/0x730 fs/dcache.c:1773
d_alloc_anon fs/dcache.c:1874 [inline]
d_alloc_cursor+0x44/0xd0 fs/dcache.c:1880
dcache_dir_open+0x41/0x80 fs/libfs.c:83
do_dentry_open+0x8c6/0x1500 fs/open.c:929
do_open fs/namei.c:3640 [inline]
path_openat+0x27f1/0x3230 fs/namei.c:3797
do_filp_open+0x1f5/0x430 fs/namei.c:3824
do_sys_openat2+0x134/0x1d0 fs/open.c:1421
do_sys_open fs/open.c:1436 [inline]
__do_sys_openat fs/open.c:1452 [inline]
__se_sys_openat fs/open.c:1447 [inline]
__x64_sys_openat+0x139/0x160 fs/open.c:1447
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fdd8579bb3c
RSP: 002b:00007ffd3baafab0 EFLAGS: 00000206 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007fdd8579bb3c
RDX: 0000000000090800 RSI: 00007fdd858325d3 RDI: 00000000ffffff9c
RBP: 00007ffd3baafb5c R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000000000
R13: 0000000000000000 R14: 000000000001a199 R15: 00007ffd3baafbb0
</TASK>
task:syz-executor state:R running task stack:24232 pid:5763 ppid:5756 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
preempt_schedule_common+0x82/0xc0 kernel/sched/core.c:6867
preempt_schedule+0xc0/0xd0 kernel/sched/core.c:6891
preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
__raw_spin_unlock include/linux/spinlock_api_smp.h:143 [inline]
_raw_spin_unlock+0x3a/0x40 kernel/locking/spinlock.c:186
spin_unlock include/linux/spinlock.h:391 [inline]
zap_pte_range mm/memory.c:1523 [inline]
zap_pmd_range mm/memory.c:1571 [inline]
zap_pud_range mm/memory.c:1600 [inline]
zap_p4d_range mm/memory.c:1621 [inline]
unmap_page_range+0x2315/0x3000 mm/memory.c:1642
unmap_vmas+0x286/0x3f0 mm/memory.c:1732
exit_mmap+0x238/0xb90 mm/mmap.c:3302
__mmput+0x118/0x3c0 kernel/fork.c:1355
exit_mm+0x1f2/0x2c0 kernel/exit.c:569
do_exit+0x8dd/0x2460 kernel/exit.c:870
do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
get_signal+0x12fc/0x13f0 kernel/signal.c:2902
arch_do_signal_or_restart+0xc2/0x800 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xa0 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fbc47357997
RSP: 002b:00007ffc8bd800c0 EFLAGS: 00000202 ORIG_RAX: 000000000000003d
RAX: fffffffffffffe00 RBX: 0000555576bb3500 RCX: 00007fbc47357997
RDX: 0000000040000000 RSI: 00007ffc8bd8011c RDI: ffffffffffffffff
RBP: 00007ffc8bd8011c R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000008
R13: 0000000000000003 R14: 00007ffc8bd80378 R15: 0000000000000000
</TASK>
rcu: rcu_preempt kthread starved for 9009 jiffies! g11929 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27080 pid:17 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
schedule+0xbd/0x170 kernel/sched/core.c:6774
schedule_timeout+0x188/0x2d0 kernel/time/timer.c:2168
rcu_gp_fqs_loop+0x313/0x1590 kernel/rcu/tree.c:1667
rcu_gp_kthread+0x9d/0x3b0 kernel/rcu/tree.c:1866
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 0 PID: 5781 Comm: kworker/u5:6 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Workqueue: hci2 hci_cmd_work
RIP: 0010:lock_is_held_type+0x13e/0x190 kernel/locking/lockdep.c:5830
Code: 75 40 48 c7 04 24 00 00 00 00 9c 8f 04 24 f7 04 24 00 02 00 00 75 46 41 f7 c5 00 02 00 00 74 01 fb 65 48 8b 04 25 28 00 00 00 <48> 3b 44 24 08 75 3c 89 e8 48 83 c4 10 5b 41 5c 41 5d 41 5e 41 5f
RSP: 0018:ffffc90000006728 EFLAGS: 00000206
RAX: a648ac147f982d00 RBX: ffff88802a400000 RCX: a648ac147f982d00
RDX: 0000000000000100 RSI: ffffffff8acadb60 RDI: ffffffff8b1c8de0
RBP: 0000000000000000 R08: ffffc90000006a80 R09: ffffc90000006a90
R10: ffffc900000068e0 R11: fffff52000000d1e R12: 0000000000000008
R13: 0000000000000246 R14: ffffffff8e3c21c8 R15: ffff88802a400c20
FS: 0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe6349456b8 CR3: 0000000061054000 CR4: 00000000003506f0
Call Trace:
<IRQ>
lock_is_held include/linux/lockdep.h:288 [inline]
lockdep_rtnl_is_held+0x1b/0x30 net/core/rtnetlink.c:176
__in6_dev_get include/net/addrconf.h:323 [inline]
ip6_ignore_linkdown include/net/addrconf.h:414 [inline]
find_match+0xd0/0xc80 net/ipv6/route.c:782
__find_rr_leaf+0x249/0x760 net/ipv6/route.c:870
find_rr_leaf net/ipv6/route.c:891 [inline]
rt6_select net/ipv6/route.c:935 [inline]
fib6_table_lookup+0x3b5/0xa80 net/ipv6/route.c:2232
ip6_pol_route+0x239/0x1230 net/ipv6/route.c:2268
pol_lookup_func include/net/ip6_fib.h:641 [inline]
fib6_rule_lookup+0x20c/0x570 net/ipv6/fib6_rules.c:116
ip6_route_input_lookup net/ipv6/route.c:2337 [inline]
ip6_route_input+0x732/0xa50 net/ipv6/route.c:2633
ip6_rcv_finish+0x143/0x230 net/ipv6/ip6_input.c:77
ip_sabotage_in+0x1f4/0x280 net/bridge/br_netfilter_hooks.c:1001
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_slow+0xbd/0x200 net/netfilter/core.c:626
nf_hook include/linux/netfilter.h:259 [inline]
NF_HOOK+0x21d/0x3b0 include/linux/netfilter.h:302
__netif_receive_skb_one_core net/core/dev.c:5634 [inline]
__netif_receive_skb+0xcc/0x290 net/core/dev.c:5748
netif_receive_skb_internal net/core/dev.c:5834 [inline]
netif_receive_skb+0x1bc/0x720 net/core/dev.c:5893
NF_HOOK+0x9e/0x3a0 include/linux/netfilter.h:304
br_handle_frame_finish+0x13e5/0x18f0 net/bridge/br_input.c:221
br_nf_hook_thresh+0x3cd/0x4a0 net/bridge/br_netfilter_hooks.c:1184
br_nf_pre_routing_finish_ipv6+0x9dc/0xd00 net/bridge/br_netfilter_ipv6.c:-1
NF_HOOK include/linux/netfilter.h:304 [inline]
br_nf_pre_routing_ipv6+0x349/0x6b0 net/bridge/br_netfilter_ipv6.c:184
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
br_handle_frame+0x1245/0x14d0 net/bridge/br_input.c:424
__netif_receive_skb_core+0xfab/0x3af0 net/core/dev.c:5528
__netif_receive_skb_one_core net/core/dev.c:5632 [inline]
__netif_receive_skb+0x74/0x290 net/core/dev.c:5748
process_backlog+0x391/0x6f0 net/core/dev.c:6076
__napi_poll+0xc0/0x460 net/core/dev.c:6638
napi_poll net/core/dev.c:6705 [inline]
net_rx_action+0x616/0xc40 net/core/dev.c:6841
handle_softirqs+0x280/0x820 kernel/softirq.c:578
__do_softirq kernel/softirq.c:612 [inline]
invoke_softirq kernel/softirq.c:452 [inline]
__irq_exit_rcu+0xd3/0x190 kernel/softirq.c:661
irq_exit_rcu+0x9/0x20 kernel/softirq.c:673
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0xa4/0xc0 arch/x86/kernel/apic/apic.c:1088
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:orc_find arch/x86/kernel/unwind_orc.c:218 [inline]
RIP: 0010:unwind_next_frame+0x25c/0x2970 arch/x86/kernel/unwind_orc.c:494
Code: 48 bb 00 00 00 00 00 fc ff df 0f b6 04 18 84 c0 0f 85 de 21 00 00 45 8b 6d 00 44 89 f8 ff c0 48 8d 2c 85 5c 5f b2 8f 48 89 e8 <48> c1 e8 03 0f b6 04 18 84 c0 0f 85 d8 21 00 00 8b 6d 00 ff c5 4b
RSP: 0018:ffffc900046bf478 EFLAGS: 00000206
RAX: ffffffff8fb60188 RBX: dffffc0000000000 RCX: 0000000000000000
RDX: ffff88802a400000 RSI: 000000000000e88a RDI: 000000000009c000
RBP: ffffffff8fb60188 R08: ffffc900046bf610 R09: 0000000000000000
R10: 0000000000000004 R11: 0000000000000000 R12: ffffc900046bf548
R13: 000000000002f757 R14: ffffc900046bf57d R15: 000000000000e88a
arch_stack_walk+0x144/0x190 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0xaa/0x100 kernel/stacktrace.c:122
save_stack+0x125/0x230 mm/page_owner.c:128
__reset_page_owner+0x4e/0x190 mm/page_owner.c:149
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1181 [inline]
free_unref_page_prepare+0x7b2/0x8c0 mm/page_alloc.c:2365
free_unref_page+0x32/0x2e0 mm/page_alloc.c:2458
discard_slab mm/slub.c:2127 [inline]
__unfreeze_partials+0x1cf/0x210 mm/slub.c:2667
put_cpu_partial+0x17c/0x250 mm/slub.c:2743
__slab_free+0x319/0x400 mm/slub.c:3700
qlink_free mm/kasan/quarantine.c:166 [inline]
qlist_free_all+0x75/0xd0 mm/kasan/quarantine.c:185
kasan_quarantine_reduce+0x143/0x160 mm/kasan/quarantine.c:292
__kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:306
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4b0 mm/slab.h:767
slab_alloc_node mm/slub.c:3495 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x11a/0x2d0 mm/slub.c:3519
skb_clone+0x1eb/0x370 net/core/skbuff.c:1915
hci_send_cmd_sync net/bluetooth/hci_core.c:4051 [inline]
hci_cmd_work+0xe0/0x650 net/bluetooth/hci_core.c:4087
process_one_work kernel/workqueue.c:2653 [inline]
process_scheduled_works+0xa5d/0x15d0 kernel/workqueue.c:2730
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2811
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup