Hello,
syzbot found the following issue on:
HEAD commit: c2fda4b3f577 Linux 6.1.156
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=123a1734580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=7eb38bd5021fec61
dashboard link: https://syzkaller.appspot.com/bug?extid=76b10d6ba8e0e930c3c1
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/10e802f3fcb6/disk-c2fda4b3.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/66b88bca821f/vmlinux-c2fda4b3.xz
kernel image: https://storage.googleapis.com/syzbot-assets/5f220b7a420e/bzImage-c2fda4b3.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+76b10d...@syzkaller.appspotmail.com
INFO: task syz.2.1714:11814 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.1714 state:D stack:26240 pid:11814 ppid:4276 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5244 [inline]
__schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
schedule+0xb9/0x180 kernel/sched/core.c:6637
schedule_timeout+0x97/0x280 kernel/time/timer.c:1941
do_wait_for_common kernel/sched/completion.c:85 [inline]
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x2b9/0x590 kernel/sched/completion.c:138
__flush_work+0x912/0xa60 kernel/workqueue.c:3076
__cancel_work_timer+0x3ac/0x520 kernel/workqueue.c:3163
tls_sk_proto_close+0xc4/0x8f0 net/tls/tls_main.c:331
inet_release+0x139/0x180 net/ipv4/af_inet.c:430
__sock_release net/socket.c:654 [inline]
sock_close+0xd5/0x240 net/socket.c:1400
__fput+0x22c/0x920 fs/file_table.c:320
task_work_run+0x1ca/0x250 kernel/task_work.c:203
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xe6/0x110 kernel/entry/common.c:177
exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:292 [inline]
syscall_exit_to_user_mode+0x16/0x40 kernel/entry/common.c:303
do_syscall_64+0x58/0xa0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f04e478efc9
RSP: 002b:00007fffc38bfde8 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 00007f04e49e7da0 RCX: 00007f04e478efc9
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 00007f04e49e7da0 R08: 00000000000066d8 R09: 00000008c38c00df
R10: 00007f04e49e7cb0 R11: 0000000000000246 R12: 00000000000d9185
R13: 00007f04e49e6090 R14: ffffffffffffffff R15: 00007fffc38bff00
</TASK>
Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cb2b570 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cb2bd90 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/27:
#0: ffffffff8cb2abe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#0: ffffffff8cb2abe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#0: ffffffff8cb2abe0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
2 locks held by getty/4028:
#0: ffff88802fd93098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
2 locks held by kworker/u4:8/4386:
#0: ffff8880b8e3aad8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0xa5/0x140 kernel/sched/core.c:545
#1: ffff8880b8f27848 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x398/0x6d0 kernel/sched/psi.c:999
3 locks held by kworker/1:14/7446:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc9000603fd00 ((work_completion)(&(&sw_ctx_tx->tx_work.work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffff88807eccd4d8 (&ctx->tx_lock){+.+.}-{3:3}, at: tx_work_handler+0xfd/0x1f0 net/tls/tls_sw.c:2586
1 lock held by syz.2.1714/11814:
#0: ffff88805486f410 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
#0: ffff88805486f410 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release net/socket.c:653 [inline]
#0: ffff88805486f410 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: sock_close+0x90/0x240 net/socket.c:1400
1 lock held by syz-executor/11982:
#0: ffffffff8cb308b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
#0: ffffffff8cb308b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x455/0x830 kernel/rcu/tree_exp.h:962
=============================================
NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
nmi_cpu_backtrace+0x3f4/0x470 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xeee/0xf30 kernel/hung_task.c:377
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 7124 Comm: kworker/u4:19 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: phy16 ieee80211_iface_work
RIP: 0010:hlock_class kernel/locking/lockdep.c:228 [inline]
RIP: 0010:lookup_chain_cache_add kernel/locking/lockdep.c:3737 [inline]
RIP: 0010:validate_chain kernel/locking/lockdep.c:3793 [inline]
RIP: 0010:__lock_acquire+0x13d2/0x7c50 kernel/locking/lockdep.c:5049
Code: e8 03 25 f8 03 00 00 48 8d b8 40 22 ae 90 be 08 00 00 00 e8 80 f3 6d 00 49 b8 00 00 00 00 00 fc ff df 48 0f a3 1d 4e 17 4b 0f <72> 25 48 c7 c0 40 91 be 96 48 c1 e8 03 42 0f b6 04 00 84 c0 0f 85
RSP: 0018:ffffc90003bf7740 EFLAGS: 00000057
RAX: 0000000000000001 RBX: 00000000000007b3 RCX: ffffffff81630ae0
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff90ae2330
RBP: ffffc90003bf7990 R08: dffffc0000000000 R09: fffffbfff215c467
R10: fffffbfff215c467 R11: 1ffffffff215c466 R12: 17a792a43bd6b2e8
R13: 0000000055d4f9e9 R14: 17a792a43bd6b2e8 R15: ffff8880510de410
FS: 0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fc567bb3ad8 CR3: 0000000020ce1000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
lock_acquire+0x1b4/0x490 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
skb_dequeue+0x2a/0x140 net/core/skbuff.c:3421
ieee80211_iface_work+0x7c2/0xc80 net/mac80211/iface.c:1719
process_one_work+0x898/0x1160 kernel/workqueue.c:2292
worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup