Hello,
syzbot found the following issue on:
HEAD commit: 4a243110dc88 Linux 6.6.114
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1226c7e2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=0372270e0e72337a801e
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/1950ac2cd960/disk-4a243110.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d7dccd93693b/vmlinux-4a243110.xz
kernel image: https://storage.googleapis.com/syzbot-assets/6f93496e2b47/bzImage-4a243110.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+037227...@syzkaller.appspotmail.com
INFO: task syz.1.2294:16283 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.2294 state:D stack:24232 pid:16283 ppid:16121 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
request_wait_answer fs/fuse/dev.c:407 [inline]
__fuse_request_send fs/fuse/dev.c:426 [inline]
fuse_simple_request+0x1195/0x1bb0 fs/fuse/dev.c:513
fuse_flush+0x5b0/0x830 fs/fuse/file.c:522
filp_flush fs/open.c:1531 [inline]
filp_close+0xb1/0x150 fs/open.c:1544
__range_close fs/file.c:698 [inline]
__close_range+0x342/0x630 fs/file.c:755
__do_sys_close_range fs/open.c:1597 [inline]
__se_sys_close_range fs/open.c:1594 [inline]
__x64_sys_close_range+0x7a/0x90 fs/open.c:1594
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f90b5b8efc9
RSP: 002b:00007ffdc5953e68 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: ffffffffffffffda RBX: 00007f90b5de7da0 RCX: 00007f90b5b8efc9
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 00007f90b5de7da0 R08: 00000000000001cc R09: 0000001ac595415f
R10: 00007f90b5de7cb0 R11: 0000000000000246 R12: 000000000012d187
R13: 00007f90b5de6360 R14: ffffffffffffffff R15: 00007ffdc5953f80
</TASK>
Showing all locks held in the system:
1 lock held by khungtaskd/29:
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
3 locks held by kworker/u4:4/59:
#0: ffff888148275d38 ((wq_completion)cfg80211){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888148275d38 ((wq_completion)cfg80211){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900015a7d00 ((work_completion)(&(&rdev->dfs_update_channels_wk)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900015a7d00 ((work_completion)(&(&rdev->dfs_update_channels_wk)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: cfg80211_dfs_channels_update_work+0xb7/0x630 net/wireless/mlme.c:915
6 locks held by kworker/u4:8/2902:
#0: ffff888017873938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017873938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc9000bc27d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000bc27d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfaedd0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x136/0xb90 net/core/net_namespace.c:606
#3: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: ieee80211_unregister_hw+0x55/0x2a0 net/mac80211/main.c:1510
#4: ffff88801ef88768 (&rdev->wiphy.mtx){+.+.}-{3:3}, at: wiphy_lock include/net/cfg80211.h:5777 [inline]
#4: ffff88801ef88768 (&rdev->wiphy.mtx){+.+.}-{3:3}, at: ieee80211_remove_interfaces+0x292/0x680 net/mac80211/iface.c:2332
#5: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#5: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x448/0x830 kernel/rcu/tree_exp.h:1004
3 locks held by kworker/u4:11/2956:
#0: ffff88802c070938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88802c070938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc9000bfb7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000bfb7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd0/0x14e0 net/ipv6/addrconf.c:4158
3 locks held by kworker/u4:12/2973:
1 lock held by klogd/5150:
2 locks held by getty/5551:
#0: ffff88802cea70a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
3 locks held by kworker/0:4/5843:
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900047dfd00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900047dfd00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
3 locks held by kworker/1:7/6717:
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900036a7d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900036a7d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88805c313240 (&data->fib_lock){+.+.}-{3:3}, at: nsim_fib_event_work+0x26c/0x3170 drivers/net/netdevsim/fib.c:1491
1 lock held by syz-executor/17166:
#0: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6472
1 lock held by syz-executor/17183:
#0: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6472
1 lock held by syz-executor/17190:
#0: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6472
2 locks held by kworker/1:2/17225:
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90003707d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90003707d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
1 lock held by syz.7.2472/17244:
#0: ffffffff8dfbbc08 (rtnl_mutex){+.+.}-{3:3}, at: dev_ioctl+0x86a/0x1170 net/core/dev_ioctl.c:810
=============================================
NMI backtrace for cpu 0
CPU: 0 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 17166 Comm: syz-executor Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
RIP: 0010:page_table_check_clear+0x241/0x6a0 mm/page_table_check.c:89
Code: 04 00 00 00 e8 30 37 f5 ff 4c 89 e0 48 c1 e8 03 48 b9 00 00 00 00 00 fc ff df 0f b6 04 08 84 c0 0f 85 ff 00 00 00 41 8b 2c 24 <31> ff 89 ee e8 16 df 9d ff 85 ed 0f 85 8d 01 00 00 49 8d 7c 24 04
RSP: 0018:ffffc9000363f460 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: dffffc0000000000
RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffff88801ac75af0
RBP: 0000000000000000 R08: ffff88801ac75af3 R09: 1ffff1100358eb5e
R10: dffffc0000000000 R11: ffffed100358eb5f R12: ffff88801ac75af0
R13: 0000000000000000 R14: ffff88801ac75ab0 R15: 1ffffffff2de332c
FS: 0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffcda033fbc CR3: 000000000cb30000 CR4: 00000000003506e0
Call Trace:
<TASK>
ptep_get_and_clear_full arch/x86/include/asm/jump_label.h:-1 [inline]
zap_pte_range mm/memory.c:1428 [inline]
zap_pmd_range mm/memory.c:1570 [inline]
zap_pud_range mm/memory.c:1599 [inline]
zap_p4d_range mm/memory.c:1620 [inline]
unmap_page_range+0x1ad1/0x2fe0 mm/memory.c:1641
unmap_vmas+0x25e/0x3a0 mm/memory.c:1731
exit_mmap+0x200/0xb50 mm/mmap.c:3302
__mmput+0x118/0x3c0 kernel/fork.c:1355
exit_mm+0x1da/0x2c0 kernel/exit.c:569
do_exit+0x88e/0x23c0 kernel/exit.c:870
do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
get_signal+0x12fc/0x1400 kernel/signal.c:2902
arch_do_signal_or_restart+0x9c/0x7b0 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
exit_to_user_mode_prepare+0xf6/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f9e35d90e5c
Code: Unable to access opcode bytes at 0x7f9e35d90e32.
RSP: 002b:00007fffe886bc40 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: 000000000000002c RBX: 00007f9e36b14620 RCX: 00007f9e35d90e5c
RDX: 000000000000002c RSI: 00007f9e36b14670 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007fffe886bc94 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007f9e36b14670 R15: 0000000000000000
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See: https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup