Hello,
syzbot found the following issue on:
HEAD commit: bb9c90ab9c5a Linux 6.6.102
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15f2e462580000
kernel config: https://syzkaller.appspot.com/x/.config?x=7b989b97a0687b0a
dashboard link: https://syzkaller.appspot.com/bug?extid=65b7d080de37a0d45cc4
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/44e2224124c3/disk-bb9c90ab.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/33e0aef7ff42/vmlinux-bb9c90ab.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0515bc436228/bzImage-bb9c90ab.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+65b7d0...@syzkaller.appspotmail.com
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P5146/1:b..l
rcu: (detected by 0, t=10503 jiffies, g=104725, q=932153 ncpus=2)
task:klogd state:R running task stack:23144 pid:5146 ppid:1 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
preempt_schedule_irq+0xb5/0x140 kernel/sched/core.c:7009
irqentry_exit+0x67/0x70 kernel/entry/common.c:438
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:lock_acquire+0x1f2/0x410 kernel/locking/lockdep.c:5758
Code: 00 9c 8f 84 24 80 00 00 00 f6 84 24 81 00 00 00 02 0f 85 f5 00 00 00 41 f7 c6 00 02 00 00 74 01 fb 48 c7 44 24 60 0e 36 e0 45 <4b> c7 04 3c 00 00 00 00 66 43 c7 44 3c 09 00 00 43 c6 44 3c 0b 00
RSP: 0018:ffffc900032474a0 EFLAGS: 00000206
RAX: 0000000000000001 RBX: 0000000000000000 RCX: ce5af1ab07852200
RDX: 0000000000000000 RSI: ffffffff8aaacb40 RDI: ffffffff8afc66c0
RBP: ffffc900032475a8 R08: dffffc0000000000 R09: 1ffffffff21b46a0
R10: dffffc0000000000 R11: fffffbfff21b46a1 R12: 1ffff92000648ea0
R13: ffffffff8cd2fbe0 R14: 0000000000000246 R15: dffffc0000000000
rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
rcu_read_lock include/linux/rcupdate.h:786 [inline]
page_ext_get+0x3e/0x2b0 mm/page_ext.c:508
__page_table_check_zero+0x138/0x4b0 mm/page_table_check.c:148
page_table_check_free include/linux/page_table_check.h:41 [inline]
free_pages_prepare mm/page_alloc.c:1155 [inline]
free_unref_page_prepare+0x7e1/0x8e0 mm/page_alloc.c:2336
free_unref_page+0x32/0x2e0 mm/page_alloc.c:2429
__slab_free+0x35e/0x410 mm/slub.c:3722
qlink_free mm/kasan/quarantine.c:166 [inline]
qlist_free_all+0x75/0xe0 mm/kasan/quarantine.c:185
kasan_quarantine_reduce+0x143/0x160 mm/kasan/quarantine.c:292
____kasan_kmalloc mm/kasan/common.c:340 [inline]
__kasan_kmalloc+0x22/0xa0 mm/kasan/common.c:383
kasan_kmalloc include/linux/kasan.h:198 [inline]
__do_kmalloc_node mm/slab_common.c:1007 [inline]
__kmalloc_node_track_caller+0xb2/0x230 mm/slab_common.c:1027
kmalloc_reserve+0x117/0x260 net/core/skbuff.c:581
__alloc_skb+0x138/0x2c0 net/core/skbuff.c:650
alloc_skb include/linux/skbuff.h:1284 [inline]
alloc_skb_with_frags+0xca/0x7c0 net/core/skbuff.c:6328
sock_alloc_send_pskb+0x857/0x990 net/core/sock.c:2799
unix_dgram_sendmsg+0x5a1/0x1720 net/unix/af_unix.c:2001
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg net/socket.c:745 [inline]
__sys_sendto+0x46a/0x620 net/socket.c:2201
__do_sys_sendto net/socket.c:2213 [inline]
__se_sys_sendto net/socket.c:2209 [inline]
__x64_sys_sendto+0xde/0xf0 net/socket.c:2209
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f1a7bfd2407
RSP: 002b:00007ffc5afa8080 EFLAGS: 00000202 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007f1a7be82c80 RCX: 00007f1a7bfd2407
RDX: 000000000000005f RSI: 00007ffc5afa81c0 RDI: 0000000000000003
RBP: 00007ffc5afa85f0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000004000 R11: 0000000000000202 R12: 00007ffc5afa8608
R13: 00007ffc5afa81c0 R14: 0000000000000044 R15: 00007ffc5afa81c0
</TASK>
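
The stalled task above matters because it is an RCU reader: page_ext_get() opens an rcu_read_lock() section, and klogd was preempted right inside it (the lock_acquire frame at the RIP), which is why it shows up as the blocked reader P5146 holding up the grace period. A minimal sketch of that pattern, paraphrased from memory of the 6.6 mm/page_ext.c rather than quoted verbatim:

	/* Pins the page's page_ext under RCU; the caller releases it
	 * with page_ext_put(), which does the matching rcu_read_unlock(). */
	struct page_ext *page_ext_get(const struct page *page)
	{
		struct page_ext *page_ext;

		rcu_read_lock();	/* klogd was preempted while taking this */
		page_ext = lookup_page_ext(page);
		if (!page_ext)
			rcu_read_unlock();
		return page_ext;
	}

Under CONFIG_PREEMPT_RCU a reader may legally be preempted here; the grace period simply cannot complete until the task runs again, which is exactly what the stall message reports.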
rcu: rcu_preempt kthread starved for 3038 jiffies! g104725 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27656 pid:17 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_timeout+0x160/0x280 kernel/time/timer.c:2167
rcu_gp_fqs_loop+0x302/0x1560 kernel/rcu/tree.c:1667
rcu_gp_kthread+0x99/0x380 kernel/rcu/tree.c:1866
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
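
This second trace is the other half of the problem: the grace-period kthread is runnable (state R) yet got no CPU for 3038 jiffies, so its force-quiescent-state (FQS) loop never ran to push the blocked reader through. Roughly, the loop it is parked in looks like the following (a heavily simplified sketch of rcu_gp_fqs_loop() in kernel/rcu/tree.c; the real function carries far more bookkeeping):

	for (;;) {
		/* Sleep for up to one FQS interval; this wait is the
		 * schedule_timeout() frame in the stack above. */
		(void)swait_event_idle_timeout_exclusive(rcu_state.gp_wq,
				rcu_gp_fqs_check_wake(&gf), j);
		/* If no CPU and no preempted task still owes a quiescent
		 * state, the grace period can end. */
		if (!READ_ONCE(rnp->qsmask) &&
		    !rcu_preempt_blocked_readers_cgp(rnp))
			break;
		rcu_gp_fqs(first_gp_fqs);	/* nudge the holdouts */
	}

If the kthread never gets scheduled when its timeout fires, callbacks pile up (note q=932153 above) and, as the message says, OOM becomes expected behavior.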
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 14341 Comm: kworker/u4:11 Not tainted 6.6.102-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:unwind_get_return_address+0x71/0xc0 arch/x86/kernel/unwind_orc.c:369
Code: 80 3c 37 00 74 08 48 89 df e8 db 30 a2 00 48 8b 3b e8 f3 d5 1d 00 89 c5 31 ff 89 c6 e8 48 ed 4a 00 85 ed 74 20 e8 8f e9 4a 00 <43> 80 3c 37 00 74 08 48 89 df e8 b0 30 a2 00 48 8b 03 eb 0e e8 76
RSP: 0018:ffffc900001efe60 EFLAGS: 00000246
RAX: ffffffff813aa191 RBX: ffffc900001efed0 RCX: ffff88802d94da00
RDX: 0000000000000100 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 0000000000000001 R08: ffff88802d94da00 R09: 0000000000000003
R10: 0000000000000004 R11: 0000000000000100 R12: ffffffff88e87b03
R13: ffffc900001f02a0 R14: dffffc0000000000 R15: 1ffff9200003dfda
FS: 0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000020000001a000 CR3: 000000000cb30000 CR4: 00000000003506e0
Call Trace:
<IRQ>
arch_stack_walk+0x11d/0x190 arch/x86/kernel/stacktrace.c:26
stack_trace_save+0x9c/0xe0 kernel/stacktrace.c:122
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
kasan_save_free_info+0x2e/0x50 mm/kasan/generic.c:522
____kasan_slab_free+0x126/0x1e0 mm/kasan/common.c:236
kasan_slab_free include/linux/kasan.h:164 [inline]
slab_free_hook mm/slub.c:1806 [inline]
slab_free_freelist_hook+0x130/0x1b0 mm/slub.c:1832
slab_free mm/slub.c:3816 [inline]
kmem_cache_free+0xf8/0x280 mm/slub.c:3838
nft_synproxy_eval_v4+0x377/0x560 net/netfilter/nft_synproxy.c:60
nft_synproxy_do_eval+0x342/0x570 net/netfilter/nft_synproxy.c:141
expr_call_ops_eval net/netfilter/nf_tables_core.c:240 [inline]
nft_do_chain+0x3f9/0x1580 net/netfilter/nf_tables_core.c:288
nft_do_chain_inet+0x25b/0x330 net/netfilter/nft_chain_filter.c:161
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_slow+0xbd/0x200 net/netfilter/core.c:626
nf_hook include/linux/netfilter.h:259 [inline]
NF_HOOK+0x204/0x390 include/linux/netfilter.h:302
NF_HOOK+0x303/0x390 include/linux/netfilter.h:304
__netif_receive_skb_one_core net/core/dev.c:5596 [inline]
__netif_receive_skb+0xcc/0x290 net/core/dev.c:5710
process_backlog+0x380/0x6e0 net/core/dev.c:6038
__napi_poll+0xc0/0x460 net/core/dev.c:6600
napi_poll net/core/dev.c:6667 [inline]
net_rx_action+0x5ea/0xbf0 net/core/dev.c:6803
handle_softirqs+0x280/0x820 kernel/softirq.c:578
__do_softirq kernel/softirq.c:612 [inline]
invoke_softirq kernel/softirq.c:452 [inline]
__irq_exit_rcu+0xc7/0x190 kernel/softirq.c:661
irq_exit_rcu+0x9/0x20 kernel/softirq.c:673
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0xa4/0xc0 arch/x86/kernel/apic/apic.c:1088
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:csd_lock_wait kernel/smp.c:311 [inline]
RIP: 0010:smp_call_function_many_cond+0xddf/0x1130 kernel/smp.c:855
Code: 45 8b 2c 24 44 89 ee 83 e6 01 31 ff e8 da d6 0a 00 41 83 e5 01 49 bd 00 00 00 00 00 fc ff df 75 07 e8 15 d3 0a 00 eb 38 f3 90 <42> 0f b6 04 2b 84 c0 75 11 41 f7 04 24 01 00 00 00 74 1e e8 f9 d2
RSP: 0018:ffffc90003937780 EFLAGS: 00000293
RAX: ffffffff817ab827 RBX: 1ffff110171c87d9 RCX: ffff88802d94da00
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: ffffc90003937900 R08: ffffffff90da3507 R09: 1ffffffff21b46a0
R10: dffffc0000000000 R11: fffffbfff21b46a1 R12: ffff8880b8e43ec8
R13: dffffc0000000000 R14: ffff8880b8f3d588 R15: 0000000000000000
on_each_cpu_cond_mask+0x3f/0x80 kernel/smp.c:1023
on_each_cpu include/linux/smp.h:71 [inline]
text_poke_sync arch/x86/kernel/alternative.c:2222 [inline]
text_poke_bp_batch+0x318/0x930 arch/x86/kernel/alternative.c:2432
text_poke_flush arch/x86/kernel/alternative.c:2623 [inline]
text_poke_finish+0x30/0x50 arch/x86/kernel/alternative.c:2630
arch_jump_label_transform_apply+0x1c/0x30 arch/x86/kernel/jump_label.c:146
static_key_enable_cpuslocked+0x123/0x240 kernel/jump_label.c:207
static_key_enable+0x1a/0x20 kernel/jump_label.c:220
toggle_allocation_gate+0xaa/0x250 mm/kfence/core.c:831
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
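
The NMI backtrace shows where CPU 1's time is going: in task context the kworker is toggling the KFENCE allocation gate, which flips a static branch and must then spin in csd_lock_wait() until every CPU acknowledges a serializing IPI, while the interrupted context is churning through netfilter work in softirq. The x86 synchronization it is spinning on is essentially the following (paraphrased from arch/x86/kernel/alternative.c):

	static void do_sync_core(void *info)
	{
		sync_core();	/* serialize the instruction stream after the poke */
	}

	static void text_poke_sync(void)
	{
		/* wait=1: spin in csd_lock_wait() (the inlined frame at the
		 * RIP above) until every online CPU has run do_sync_core(). */
		on_each_cpu(do_sync_core, NULL, 1);
	}

With CPU 1 alternating between this busy-wait and the long softirq run above, it is plausible that neither klogd nor the rcu_preempt kthread could be scheduled, consistent with this being a CPU-starvation stall rather than a lockup inside RCU itself.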
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See: https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup