[v5.15] INFO: task hung in wg_destruct

syzbot
Oct 8, 2025, 7:52:29 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 29e53a5b1c4f Linux 5.15.194
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15e1d458580000
kernel config: https://syzkaller.appspot.com/x/.config?x=e1bb6d24ef2164eb
dashboard link: https://syzkaller.appspot.com/bug?extid=7f8a2f83398a2f851996
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/6cf4d5e6e441/disk-29e53a5b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b332ae2ff099/vmlinux-29e53a5b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/2f344d06d6b9/bzImage-29e53a5b.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+7f8a2f...@syzkaller.appspotmail.com

INFO: task kworker/u4:5:4273 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:5 state:D stack:22152 pid: 4273 ppid: 2 flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_timeout+0x97/0x280 kernel/time/timer.c:1890
do_wait_for_common+0x29a/0x440 kernel/sched/completion.c:85
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x48/0x60 kernel/sched/completion.c:138
kthread_stop+0x16e/0x540 kernel/kthread.c:666
destroy_workqueue+0xf2/0xb20 kernel/workqueue.c:4451
wg_destruct+0x1e8/0x300 drivers/net/wireguard/device.c:241
netdev_run_todo+0x82d/0xa40 net/core/dev.c:10691
default_device_exit_batch+0x33b/0x390 net/core/dev.c:11668
ops_exit_list net/core/net_namespace.c:177 [inline]
cleanup_net+0x77b/0xb80 net/core/net_namespace.c:635
process_one_work+0x863/0x1000 kernel/workqueue.c:2310
worker_thread+0xaa8/0x12a0 kernel/workqueue.c:2457
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
INFO: task wg-crypt-wg2:4652 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:wg-crypt-wg2 state:D stack:29888 pid: 4652 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
percpu_rwsem_wait+0x2d8/0x310 kernel/locking/percpu-rwsem.c:160
__percpu_down_read+0xc9/0x100 kernel/locking/percpu-rwsem.c:174
percpu_down_read include/linux/percpu-rwsem.h:65 [inline]
cgroup_threadgroup_change_begin include/linux/cgroup-defs.h:724 [inline]
exit_signals+0x3e5/0x510 kernel/signal.c:2991
do_exit+0x256/0x20a0 kernel/exit.c:830
kthread_exit+0x11/0x20 kernel/kthread.c:283
kthread+0x454/0x520 kernel/kthread.c:336
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
INFO: task syz-executor:4820 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor state:D stack:20136 pid: 4820 ppid: 4819 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
rwsem_down_write_slowpath+0xc46/0x11f0 kernel/locking/rwsem.c:1165
mmap_write_lock include/linux/mmap_lock.h:71 [inline]
mpol_rebind_mm+0x33/0x2c0 mm/mempolicy.c:381
cpuset_attach+0x330/0x5f0 kernel/cgroup/cpuset.c:2376
cgroup_migrate_execute+0x7eb/0x1010 kernel/cgroup/cgroup.c:2597
cgroup_attach_task+0x562/0x7e0 kernel/cgroup/cgroup.c:2892
__cgroup1_procs_write+0x2e5/0x3f0 kernel/cgroup/cgroup-v1.c:527
cgroup_file_write+0x2f7/0x630 kernel/cgroup/cgroup.c:3966
kernfs_fop_write_iter+0x379/0x4c0 fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2172 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0x712/0xd00 fs/read_write.c:594
ksys_write+0x14d/0x250 fs/read_write.c:647
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7fbccadba97f
RSP: 002b:00007fff1fd870b0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007fbccadba97f
RDX: 0000000000000001 RSI: 00007fff1fd87100 RDI: 0000000000000003
RBP: 00007fff1fd87670 R08: 0000000000000000 R09: 00007fff1fd86f07
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000001
R13: 00007fff1fd87100 R14: 00007fff1fd87630 R15: 00007fff1fd87670
</TASK>

Showing all locks held in the system:
2 locks held by kworker/u4:0/9:
#0: ffff888016879138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
#1: ffffc90000ce7d00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
1 lock held by khungtaskd/27:
#0: ffffffff8c11c660 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by kworker/u4:2/154:
#0: ffff888016879138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
#1: ffffc90001ff7d00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
2 locks held by klogd/3551:
2 locks held by udevd/3562:
#0: ffffffff8c1a67f0 (dup_mmap_sem){.+.+}-{0:0}, at: dup_mmap kernel/fork.c:503 [inline]
#0: ffffffff8c1a67f0 (dup_mmap_sem){.+.+}-{0:0}, at: dup_mm kernel/fork.c:1466 [inline]
#0: ffffffff8c1a67f0 (dup_mmap_sem){.+.+}-{0:0}, at: copy_mm+0x21f/0x1380 kernel/fork.c:1518
#1: ffff88807db45528 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
#1: ffff88807db45528 (&mm->mmap_lock){++++}-{3:3}, at: dup_mmap kernel/fork.c:504 [inline]
#1: ffff88807db45528 (&mm->mmap_lock){++++}-{3:3}, at: dup_mm kernel/fork.c:1466 [inline]
#1: ffff88807db45528 (&mm->mmap_lock){++++}-{3:3}, at: copy_mm+0x238/0x1380 kernel/fork.c:1518
1 lock held by dhcpcd/3855:
#0: ffffffff8c1420f0 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: copy_process+0x2290/0x3e00 kernel/fork.c:2382
2 locks held by getty/3948:
#0: ffff88814cec7098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc90002d032e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x5ba/0x1a30 drivers/tty/n_tty.c:2158
1 lock held by udevd/4175:
#0: ffff88807a85d528 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
#0: ffff88807a85d528 (&mm->mmap_lock){++++}-{3:3}, at: __vm_munmap+0xf3/0x230 mm/mmap.c:2949
4 locks held by kworker/u4:5/4273:
#0: ffff8880169cd938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
#1: ffffc900033afd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
#2: ffffffff8d22c690 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x132/0xb80 net/core/net_namespace.c:589
#3: ffff888077e713e8 (&wg->device_update_lock){+.+.}-{3:3}, at: wg_destruct+0x112/0x300 drivers/net/wireguard/device.c:233
1 lock held by udevd/4543:
#0: ffff88807975e328 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
#0: ffff88807975e328 (&mm->mmap_lock){++++}-{3:3}, at: __vm_munmap+0xf3/0x230 mm/mmap.c:2949
1 lock held by wg-crypt-wg2/4652:
#0: ffffffff8c1420f0 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x256/0x20a0 kernel/exit.c:830
1 lock held by modprobe/4792:
#0: ffff88807ccd4e28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
#0: ffff88807ccd4e28 (&mm->mmap_lock){++++}-{3:3}, at: vm_mmap_pgoff+0x15d/0x2b0 mm/util.c:549
1 lock held by modprobe/4795:
#0: ffff888077d48128 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
#0: ffff888077d48128 (&mm->mmap_lock){++++}-{3:3}, at: vm_mmap_pgoff+0x15d/0x2b0 mm/util.c:549
1 lock held by syz-executor/4797:
#0: ffffffff8c1420f0 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x256/0x20a0 kernel/exit.c:830
2 locks held by syz-executor/4798:
#0: ffffffff8c1a67f0 (dup_mmap_sem){.+.+}-{0:0}, at: dup_mmap kernel/fork.c:503 [inline]
#0: ffffffff8c1a67f0 (dup_mmap_sem){.+.+}-{0:0}, at: dup_mm kernel/fork.c:1466 [inline]
#0: ffffffff8c1a67f0 (dup_mmap_sem){.+.+}-{0:0}, at: copy_mm+0x21f/0x1380 kernel/fork.c:1518
#1: ffff88807ccd7128 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
#1: ffff88807ccd7128 (&mm->mmap_lock){++++}-{3:3}, at: dup_mmap kernel/fork.c:504 [inline]
#1: ffff88807ccd7128 (&mm->mmap_lock){++++}-{3:3}, at: dup_mm kernel/fork.c:1466 [inline]
#1: ffff88807ccd7128 (&mm->mmap_lock){++++}-{3:3}, at: copy_mm+0x238/0x1380 kernel/fork.c:1518
2 locks held by syz-executor/4806:
#0: ffffffff8c1a67f0 (dup_mmap_sem){.+.+}-{0:0}, at: dup_mmap kernel/fork.c:503 [inline]
#0: ffffffff8c1a67f0 (dup_mmap_sem){.+.+}-{0:0}, at: dup_mm kernel/fork.c:1466 [inline]
#0: ffffffff8c1a67f0 (dup_mmap_sem){.+.+}-{0:0}, at: copy_mm+0x21f/0x1380 kernel/fork.c:1518
#1: ffff88807ccd5c28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
#1: ffff88807ccd5c28 (&mm->mmap_lock){++++}-{3:3}, at: dup_mmap kernel/fork.c:504 [inline]
#1: ffff88807ccd5c28 (&mm->mmap_lock){++++}-{3:3}, at: dup_mm kernel/fork.c:1466 [inline]
#1: ffff88807ccd5c28 (&mm->mmap_lock){++++}-{3:3}, at: copy_mm+0x238/0x1380 kernel/fork.c:1518
7 locks held by syz-executor/4820:
#0: ffff88801ab9c460 (sb_writers#11){.+.+}-{0:0}, at: vfs_write+0x28a/0xd00 fs/read_write.c:590
#1: ffff88805f194c88 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1e5/0x4c0 fs/kernfs/file.c:287
#2: ffffffff8c141f08 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0xee/0x230 kernel/cgroup/cgroup.c:1662
#3: ffffffff8bfbadd0 (cpu_hotplug_lock){++++}-{0:0}, at: cgroup_attach_lock kernel/cgroup/cgroup.c:2411 [inline]
#3: ffffffff8bfbadd0 (cpu_hotplug_lock){++++}-{0:0}, at: cgroup_procs_write_start+0x17c/0x580 kernel/cgroup/cgroup.c:2921
#4: ffffffff8c1420f0 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_attach_lock kernel/cgroup/cgroup.c:2413 [inline]
#4: ffffffff8c1420f0 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_procs_write_start+0x192/0x580 kernel/cgroup/cgroup.c:2921
#5: ffffffff8c14fa48 (cpuset_mutex){+.+.}-{3:3}, at: cpuset_attach+0xac/0x5f0 kernel/cgroup/cpuset.c:2348
#6: ffff88801e258128 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock include/linux/mmap_lock.h:71 [inline]
#6: ffff88801e258128 (&mm->mmap_lock){++++}-{3:3}, at: mpol_rebind_mm+0x33/0x2c0 mm/mempolicy.c:381

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x397/0x3d0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
watchdog+0xe0f/0xe50 kernel/hung_task.c:369
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 493 Comm: kworker/u4:3 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Workqueue: phy20 ieee80211_iface_work
RIP: 0010:lookup_object lib/debugobjects.c:197 [inline]
RIP: 0010:lookup_object_or_alloc lib/debugobjects.c:568 [inline]
RIP: 0010:debug_object_activate+0xb8/0x480 lib/debugobjects.c:696
Code: ff c5 48 85 ed 74 3b 4c 8d 7d 18 4c 89 f8 48 c1 e8 03 42 80 3c 30 00 74 08 4c 89 ff e8 21 19 dd fd 49 39 1f 0f 84 f9 00 00 00 <48> 89 e8 48 c1 e8 03 42 80 3c 30 00 74 c3 48 89 ef e8 02 19 dd fd
RSP: 0018:ffffc90002dbf2c8 EFLAGS: 00000083
RAX: 1ffff1100f3e6f37 RBX: ffff888062a52e00 RCX: dffffc0000000000
RDX: 0000000000000001 RSI: 0000000000000004 RDI: ffffc90002dbf1a0
RBP: ffff888079f379a0 R08: 0000000000000004 R09: 0000000000000003
R10: fffff520005b7e34 R11: 1ffff920005b7e34 R12: ffffffff9627a278
R13: 0000000000000010 R14: dffffc0000000000 R15: ffff888079f379b8
FS: 0000000000000000(0000) GS:ffff8880b9100000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f3f6e56eb1b CR3: 000000007e0e4000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
Call Trace:
<TASK>
debug_rcu_head_queue kernel/rcu/rcu.h:176 [inline]
kvfree_call_rcu+0xb5/0x7c0 kernel/rcu/tree.c:3591
cfg80211_update_known_bss+0x178/0xa20 net/wireless/scan.c:-1
cfg80211_bss_update+0x15f/0x2250 net/wireless/scan.c:1836
cfg80211_inform_single_bss_frame_data net/wireless/scan.c:2555 [inline]
cfg80211_inform_bss_frame_data+0x873/0x1f30 net/wireless/scan.c:2588
ieee80211_bss_info_update+0x6c2/0xaa0 net/mac80211/scan.c:190
ieee80211_rx_bss_info net/mac80211/ibss.c:1123 [inline]
ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1614 [inline]
ieee80211_ibss_rx_queued_mgmt+0x16d0/0x29c0 net/mac80211/ibss.c:1643
ieee80211_iface_process_skb net/mac80211/iface.c:1459 [inline]
ieee80211_iface_work+0x70e/0xc60 net/mac80211/iface.c:1513
process_one_work+0x863/0x1000 kernel/workqueue.c:2310
worker_thread+0xaa8/0x12a0 kernel/workqueue.c:2457
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup