Hello,
syzbot found the following issue on:
HEAD commit: 68efe5a6c16a Linux 5.15.197
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17f36e1a580000
kernel config: https://syzkaller.appspot.com/x/.config?x=7e6ed99963d6ee1d
dashboard link: https://syzkaller.appspot.com/bug?extid=99384689d3d8a957e4e6
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/900f9b9bd850/disk-68efe5a6.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1e089a5019a6/vmlinux-68efe5a6.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b319f477b907/bzImage-68efe5a6.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+993846...@syzkaller.appspotmail.com
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P3559/1:b..l
(detected by 0, t=10503 jiffies, g=15425, q=78)
task:udevd state:R running task stack:25696 pid: 3559 ppid: 1 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
preempt_schedule_common+0x82/0xd0 kernel/sched/core.c:6571
preempt_schedule+0xa7/0xb0 kernel/sched/core.c:6596
preempt_schedule_thunk+0x16/0x18 arch/x86/entry/thunk_64.S:34
__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
_raw_spin_unlock_irqrestore+0xf6/0x100 kernel/locking/spinlock.c:194
spin_unlock_irqrestore include/linux/spinlock.h:419 [inline]
__wake_up_common_lock kernel/sched/wait.c:140 [inline]
__wake_up_sync_key+0x11b/0x180 kernel/sched/wait.c:205
sock_def_readable+0x136/0x250 net/core/sock.c:3101
__netlink_sendskb net/netlink/af_netlink.c:1264 [inline]
netlink_sendskb+0x97/0x130 net/netlink/af_netlink.c:1270
netlink_sendmsg+0x8ab/0xbc0 net/netlink/af_netlink.c:1918
sock_sendmsg_nosec net/socket.c:706 [inline]
__sock_sendmsg net/socket.c:718 [inline]
____sys_sendmsg+0x5a2/0x8c0 net/socket.c:2446
___sys_sendmsg+0x1f0/0x260 net/socket.c:2500
__sys_sendmsg net/socket.c:2529 [inline]
__do_sys_sendmsg net/socket.c:2538 [inline]
__se_sys_sendmsg+0x190/0x250 net/socket.c:2536
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f8dd0d10407
RSP: 002b:00007ffc848f1d10 EFLAGS: 00000202 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f8dd0c22880 RCX: 00007f8dd0d10407
RDX: 0000000000000000 RSI: 00007ffc848f1d70 RDI: 0000000000000004
RBP: 000055896d94b9a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 00000000000000b6
R13: 000055896d9279e0 R14: 0000000000000000 R15: 0000000000000000
</TASK>
rcu: rcu_preempt kthread starved for 10530 jiffies! g15425 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27880 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_timeout+0x15c/0x280 kernel/time/timer.c:1914
rcu_gp_fqs_loop+0x29e/0x11b0 kernel/rcu/tree.c:1972
rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 5545 Comm: f2fs_gc-7:0 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:check_preemption_disabled+0x42/0x110 lib/smp_processor_id.c:55
Code: 65 8b 05 d9 1d 60 76 65 8b 0d 42 78 60 76 f7 c1 ff ff ff 7f 74 1f 65 48 8b 0c 25 28 00 00 00 48 3b 4c 24 08 0f 85 c4 00 00 00 <48> 83 c4 10 5b 41 5e 41 5f 5d c3 48 c7 04 24 00 00 00 00 9c 8f 04
RSP: 0018:ffffc90000dd01e0 EFLAGS: 00000046
RAX: 0000000000000001 RBX: ffff88807b73bb80 RCX: 4dda2c0d7e64b500
RDX: 0000000000000100 RSI: ffffffff8a0b29e0 RDI: ffffffff8a59e800
RBP: 00000000ffffffff R08: ffffc90000dd04e0 R09: ffffc90000dd04f0
R10: fffff520001ba06a R11: 1ffff920001ba068 R12: 1ffff1100bee538a
R13: 0000000000000246 R14: ffffffff8c11c720 R15: 00000000ffffffff
FS: 0000000000000000(0000) GS:ffff8880b9100000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b33e0aff8 CR3: 00000000565e0000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
lockdep_recursion_inc kernel/locking/lockdep.c:431 [inline]
lock_is_held_type+0x75/0x190 kernel/locking/lockdep.c:5665
__find_rr_leaf+0x342/0x6c0 net/ipv6/route.c:797
find_rr_leaf net/ipv6/route.c:853 [inline]
rt6_select net/ipv6/route.c:897 [inline]
fib6_table_lookup+0x39c/0xa70 net/ipv6/route.c:2183
ip6_pol_route+0x209/0x1290 net/ipv6/route.c:2219
pol_lookup_func include/net/ip6_fib.h:582 [inline]
fib6_rule_lookup+0x1d3/0x570 net/ipv6/fib6_rules.c:115
ip6_route_input_lookup net/ipv6/route.c:2289 [inline]
ip6_route_input+0x6bf/0xa40 net/ipv6/route.c:2585
ip6_rcv_finish+0x136/0x240 net/ipv6/ip6_input.c:77
NF_HOOK+0x2d6/0x360 include/linux/netfilter.h:302
__netif_receive_skb_one_core net/core/dev.c:5525 [inline]
__netif_receive_skb+0xcc/0x290 net/core/dev.c:5639
process_backlog+0x364/0x780 net/core/dev.c:6516
__napi_poll+0xc0/0x430 net/core/dev.c:7075
napi_poll net/core/dev.c:7142 [inline]
net_rx_action+0x4a8/0x9c0 net/core/dev.c:7232
handle_softirqs+0x328/0x820 kernel/softirq.c:576
__do_softirq kernel/softirq.c:610 [inline]
invoke_softirq kernel/softirq.c:450 [inline]
__irq_exit_rcu+0x12f/0x220 kernel/softirq.c:659
irq_exit_rcu+0x5/0x20 kernel/softirq.c:671
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0xa0/0xc0 arch/x86/kernel/apic/apic.c:1108
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0xa5/0x100 kernel/locking/spinlock.c:194
Code: 74 05 e8 be 9a d2 f7 48 c7 44 24 20 00 00 00 00 9c 8f 44 24 20 f6 44 24 21 02 75 4b f7 c3 00 02 00 00 74 01 fb bf 01 00 00 00 <e8> d6 22 a6 f7 65 8b 05 67 20 57 76 85 c0 74 3c 48 c7 04 24 0e 36
RSP: 0018:ffffc9000401f560 EFLAGS: 00000206
RAX: 4dda2c0d7e64b500 RBX: 0000000000000a06 RCX: 4dda2c0d7e64b500
RDX: dffffc0000000000 RSI: ffffffff8a0b1be0 RDI: 0000000000000001
RBP: ffffc9000401f5e0 R08: dffffc0000000000 R09: fffffbfff1ff5444
R10: fffffbfff1ff5444 R11: 1ffffffff1ff5443 R12: dffffc0000000000
R13: 0000000000000048 R14: ffffffff8c7343e0 R15: 1ffff92000803eac
stack_depot_save+0x404/0x440 lib/stackdepot.c:331
kasan_save_stack mm/kasan/common.c:40 [inline]
kasan_set_track mm/kasan/common.c:46 [inline]
set_alloc_info mm/kasan/common.c:434 [inline]
__kasan_slab_alloc+0xb3/0xd0 mm/kasan/common.c:467
kasan_slab_alloc include/linux/kasan.h:254 [inline]
slab_post_alloc_hook+0x4c/0x380 mm/slab.h:519
slab_alloc_node mm/slub.c:3225 [inline]
slab_alloc mm/slub.c:3233 [inline]
kmem_cache_alloc+0x100/0x290 mm/slub.c:3238
f2fs_kmem_cache_alloc_nofail fs/f2fs/f2fs.h:2631 [inline]
f2fs_kmem_cache_alloc fs/f2fs/f2fs.h:2641 [inline]
add_free_nid+0xd7/0x6d0 fs/f2fs/node.c:2307
scan_free_nid_bits fs/f2fs/node.c:2458 [inline]
__f2fs_build_free_nids fs/f2fs/node.c:2492 [inline]
f2fs_build_free_nids+0x519/0x1170 fs/f2fs/node.c:2550
f2fs_balance_fs_bg+0x13e/0x720 fs/f2fs/segment.c:550
gc_thread_func+0x1662/0x26c0 fs/f2fs/gc.c:139
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
vkms_vblank_simulate: vblank timer overrun
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See: https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup