Hello,
syzbot found the following issue on:
HEAD commit: de8dfb3f0278 Linux 5.15.206
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13b05a73980000
kernel config:  https://syzkaller.appspot.com/x/.config?x=353ae28c40b35af5
dashboard link: https://syzkaller.appspot.com/bug?extid=d1b2b803464a6cdf072d
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/a3b00346a7a2/disk-de8dfb3f.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/cd8ff348788e/vmlinux-de8dfb3f.xz
kernel image: https://storage.googleapis.com/syzbot-assets/6d03a73b9540/bzImage-de8dfb3f.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+d1b2b8...@syzkaller.appspotmail.com
INFO: task kworker/1:5:4233 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:5 state:D stack:24088 pid: 4233 ppid: 2 flags:0x00004000
Workqueue: events_power_efficient reg_check_chans_work
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11ef/0x43c0 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6537
__mutex_lock_common+0xcfc/0x2400 kernel/locking/mutex.c:669
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
reg_check_chans_work+0x82/0xa80 net/wireless/reg.c:2437
process_one_work+0x85f/0x1010 kernel/workqueue.c:2310
worker_thread+0xaa6/0x1290 kernel/workqueue.c:2457
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
Showing all locks held in the system:
2 locks held by init/1:
#0: ffff88807f4d0f28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807f4d0f28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10e8/0x28b0 mm/page_alloc.c:5128
3 locks held by kworker/1:1/23:
#0: ffff88802b7f9d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc90000ddfd00 ((addr_chk_work).work){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0xa/0x20 net/ipv6/addrconf.c:4654
1 lock held by khungtaskd/27:
#0: ffffffff8c31eb20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by kworker/u4:3/156:
#0: ffff8880b903a358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
#1: ffff8880b9027888 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x499/0x7d0 kernel/sched/psi.c:882
1 lock held by kswapd0/255:
2 locks held by kswapd1/256:
#0: ffff8880b913a358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
#1: ffff8880b9127888 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x499/0x7d0 kernel/sched/psi.c:882
5 locks held by udevd/3561:
2 locks held by dhcpcd/3854:
#0: ffff88802c4bab28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88802c4bab28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de768 (pcpu_drain_mutex){+.+.}-{3:3}, at: __drain_all_pages+0x57/0x720 mm/page_alloc.c:3189
2 locks held by dhcpcd/3855:
#0: ffff88802c4bb928 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88802c4bb928 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10e8/0x28b0 mm/page_alloc.c:5128
2 locks held by dhcpcd/3856:
#0: ffff88802c4b8128 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88802c4b8128 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10e8/0x28b0 mm/page_alloc.c:5128
2 locks held by getty/3946:
#0: ffff88802c5a6098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc900026562e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x5df/0x1a70 drivers/tty/n_tty.c:2158
2 locks held by sshd-session/4173:
#0: ffff88807a0b4028 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807a0b4028 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10e8/0x28b0 mm/page_alloc.c:5128
3 locks held by kworker/1:4/4232:
3 locks held by kworker/1:5/4233:
#0: ffff888016c71938 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc90003ddfd00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: reg_check_chans_work+0x82/0xa80 net/wireless/reg.c:2437
3 locks held by kworker/0:7/4271:
#0: ffff888016c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc90003ecfd00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:74
7 locks held by kworker/u4:13/4757:
#0: ffff888016dcd938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc9000311fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffffffff8d4314d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x148/0xba0 net/core/net_namespace.c:589
#3: ffffffff8d462a28 (devlink_mutex){+.+.}-{3:3}, at: devlink_pernet_pre_exit+0xa4/0x310 net/core/devlink.c:11534
#4: ffff8880609a9658 (&nsim_bus_dev->nsim_bus_reload_lock){+.+.}-{3:3}, at: nsim_dev_reload_up+0xc5/0x820 drivers/net/netdevsim/dev.c:897
#5: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: nsim_init_netdevsim drivers/net/netdevsim/netdev.c:310 [inline]
#5: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: nsim_create+0x2ef/0x3e0 drivers/net/netdevsim/netdev.c:365
#6: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#6: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#6: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10e8/0x28b0 mm/page_alloc.c:5128
1 lock held by syz.2.280/5195:
#0: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:699 [inline]
#0: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3d/0x1b0 drivers/net/tun.c:3440
2 locks held by dhcpcd/5673:
#0: ffff88802c4b8828 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88802c4b8828 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10e8/0x28b0 mm/page_alloc.c:5128
2 locks held by syz.0.541/6284:
#0: ffff888024678120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1744 [inline]
#0: ffff888024678120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_setsockopt+0x7f3/0x1af0 net/packet/af_packet.c:3830
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10e8/0x28b0 mm/page_alloc.c:5128
1 lock held by syz-executor/6477:
#0: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:699 [inline]
#0: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3d/0x1b0 drivers/net/tun.c:3440
3 locks held by kworker/0:9/7406:
#0: ffff888016c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc90004ab7d00 ((work_completion)(&(&vi->refill)->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#2: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#2: ffffffff8c3dea60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10e8/0x28b0 mm/page_alloc.c:5128
2 locks held by kworker/u4:16/7814:
1 lock held by syz-executor/7891:
#0: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:699 [inline]
#0: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3d/0x1b0 drivers/net/tun.c:3440
1 lock held by syz.7.1012/8170:
#0: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:699 [inline]
#0: ffffffff8d43d3c8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3d/0x1b0 drivers/net/tun.c:3440
=============================================
NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0x188/0x250 lib/dump_stack.c:106
nmi_cpu_backtrace+0x3a2/0x3d0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
watchdog+0xe0f/0xe50 kernel/hung_task.c:369
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 4231 Comm: kworker/0:3 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/18/2026
Workqueue: events nsim_dev_trap_report_work
RIP: 0010:io_serial_in+0x73/0xb0 drivers/tty/serial/8250/8250_port.c:461
Code: e8 82 6f 29 fd 44 89 f9 d3 e3 49 83 c6 40 4c 89 f0 48 c1 e8 03 42 80 3c 20 00 74 08 4c 89 f7 e8 33 8e 6e fd 41 03 1e 89 da ec <0f> b6 c0 5b 41 5c 41 5e 41 5f c3 44 89 f9 80 e1 07 38 c1 7c aa 4c
RSP: 0000:ffffc9000359f370 EFLAGS: 00000002
RAX: 1ffffffff2cc0500 RBX: 00000000000003fd RCX: 0000000000000000
RDX: 00000000000003fd RSI: 0000000000000000 RDI: 0000000000000020
RBP: 0000000000000020 R08: 0000000000000003 R09: 0000000000000004
R10: dffffc0000000000 R11: fffff520006b3e64 R12: dffffc0000000000
R13: 1ffffffff2c63d30 R14: ffffffff96602880 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8880b9000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe0cae4c747 CR3: 000000002abd1000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
serial_in drivers/tty/serial/8250/8250.h:116 [inline]
wait_for_xmitr+0x4c/0x260 drivers/tty/serial/8250/8250_port.c:2069
serial8250_console_putchar+0x1a/0x50 drivers/tty/serial/8250/8250_port.c:3315
uart_console_write+0xaa/0x100 drivers/tty/serial/serial_core.c:1981
serial8250_console_write+0xc46/0x1000 drivers/tty/serial/8250/8250_port.c:3392
call_console_drivers kernel/printk/printk.c:-1 [inline]
console_unlock+0xb9a/0x1120 kernel/printk/printk.c:2744
vprintk_emit+0xc0/0x150 kernel/printk/printk.c:2274
_printk+0xda/0x130 kernel/printk/printk.c:2299
slab_out_of_memory+0x9c/0x170 mm/slub.c:2791
___slab_alloc+0xd66/0xdd0 mm/slub.c:3017
__slab_alloc mm/slub.c:3100 [inline]
slab_alloc_node mm/slub.c:3191 [inline]
kmem_cache_alloc_node+0x1c3/0x2d0 mm/slub.c:3261
__alloc_skb+0xf4/0x750 net/core/skbuff.c:415
alloc_skb include/linux/skbuff.h:1162 [inline]
nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:664 [inline]
nsim_dev_trap_report drivers/net/netdevsim/dev.c:721 [inline]
nsim_dev_trap_report_work+0x2a1/0xb40 drivers/net/netdevsim/dev.c:762
process_one_work+0x85f/0x1010 kernel/workqueue.c:2310
worker_thread+0xaa6/0x1290 kernel/workqueue.c:2457
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
cache: skbuff_head_cache, object size: 232, buffer size: 320, default order: 0, min order: 0
node 0: slabs: 8077, objs: 96924, free: 0
node 1: slabs: 3764, objs: 45168, free: 20
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
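For example, once a fix lands, a reply to this thread containing just the
command line below tells syzbot to close the report (the commit title here
is a hypothetical placeholder, not an actual fix for this issue):

    #syz fix: wifi: reg: do not hold rtnl_mutex across channel checks

The command must start at the beginning of a line in the reply body, and the
title must match the fixing commit's subject exactly.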