Hello,
syzbot found the following issue on:
HEAD commit: 8ce36b2849ef Linux 6.1.163
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=168ecffa580000
kernel config: https://syzkaller.appspot.com/x/.config?x=b1adc0bfde2d8a4a
dashboard link: https://syzkaller.appspot.com/bug?extid=8ff4de422df64236d629
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
userspace arch: arm64
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/e71ac74ccc11/disk-8ce36b28.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/a090f33b3222/vmlinux-8ce36b28.xz
kernel image: https://storage.googleapis.com/syzbot-assets/13fb3beccc73/Image-8ce36b28.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+8ff4de...@syzkaller.appspotmail.com
INFO: task syz.6.765:6984 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.6.765 state:D stack:0 pid:6984 ppid:5626 flags:0x00000005
Call trace:
__switch_to+0x2f4/0x550 arch/arm64/kernel/process.c:555
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0xdd0/0x1b0c kernel/sched/core.c:6562
schedule+0xc4/0x170 kernel/sched/core.c:6638
fuse_set_nowrite+0x198/0x2cc fs/fuse/dir.c:1603
fuse_sync_writes fs/fuse/file.c:503 [inline]
fuse_flush+0x244/0x6d8 fs/fuse/file.c:527
filp_close+0xb0/0x160 fs/open.c:1433
__range_close fs/file.c:703 [inline]
__close_range+0x530/0x6d0 fs/file.c:753
__do_sys_close_range fs/open.c:1478 [inline]
__se_sys_close_range fs/open.c:1475 [inline]
__arm64_sys_close_range+0x7c/0x94 fs/open.c:1475
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b4 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
do_el0_svc+0x58/0x130 arch/arm64/kernel/syscall.c:204
el0_svc+0x58/0x128 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585
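
For orientation: the trace above shows a plain close of a FUSE-backed file getting stuck. close_range() reaches filp_close() -> fuse_flush(), which takes the inode lock (see the lock list below) and then blocks in fuse_set_nowrite() waiting for outstanding writeback to finish. A minimal userspace sketch of that sequence follows; it only illustrates the blocked code path, it is not a reproducer, and the file path on the FUSE mount is hypothetical.

/*
 * Illustration only, not a reproducer: the userspace side of the
 * blocked path in the trace above.  "/mnt/fuse/f" is a hypothetical
 * file on a FUSE mount.  close_range() needs glibc >= 2.34; otherwise
 * use syscall(__NR_close_range, fd, fd, 0).
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/fuse/f", O_RDWR | O_CREAT, 0600);

	if (fd < 0)
		return 1;
	write(fd, "x", 1);	/* dirty the file so the close has something to flush */
	/*
	 * Goes through filp_close() -> fuse_flush(): the inode lock is
	 * taken and fuse_set_nowrite() waits until the FUSE server has
	 * completed the pending writes.
	 */
	close_range(fd, fd, 0);
	return 0;
}

Whether the hang needs an unresponsive FUSE server to keep those writes pending cannot be told from this report alone, since there is no reproducer yet.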
Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffff8000153e7c30 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x40/0xbb4 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffff8000153e8450 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x40/0xbb4 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/28:
#0: ffff8000153e72c0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:349
4 locks held by udevd/3935:
#0: ffff0000de576b08 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xa8/0xc00 fs/seq_file.c:182
#1: ffff0000f51bf888 (&of->mutex){+.+.}-{3:3}, at: kernfs_seq_start+0x58/0x3c0 fs/kernfs/file.c:172
#2: ffff0000ef03d578 (kn->active#4){++++}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
#2: ffff0000ef03d578 (kn->active#4){++++}-{0:0}, at: kernfs_seq_start+0xa4/0x3c0 fs/kernfs/file.c:173
#3: ffff0000e952a0e8 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:840 [inline]
#3: ffff0000e952a0e8 (&dev->mutex){....}-{3:3}, at: uevent_show+0x16c/0x32c drivers/base/core.c:2669
2 locks held by getty/4077:
#0: ffff0000d6a1f098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
#1: ffff80002099b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x2ec/0xfa0 drivers/tty/n_tty.c:2198
1 lock held by syz-executor/4308:
#0: ffff00019f557198 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested kernel/sched/core.c:538 [inline]
#0: ffff00019f557198 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock kernel/sched/sched.h:1362 [inline]
#0: ffff00019f557198 (&rq->__lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1652 [inline]
#0: ffff00019f557198 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x2b8/0x1b0c kernel/sched/core.c:6478
3 locks held by kworker/1:5/4393:
#0: ffff0000c0020938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x6b8/0x13a4 kernel/workqueue.c:2265
#1: ffff800021a27c20 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x6fc/0x13a4 kernel/workqueue.c:2267
#2: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
4 locks held by kworker/u4:5/4406:
#0: ffff0000c0845138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x6b8/0x13a4 kernel/workqueue.c:2265
#1: ffff800021ad7c20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x6fc/0x13a4 kernel/workqueue.c:2267
#2: ffff800017872910 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x134/0xa84 net/core/net_namespace.c:594
#3: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
3 locks held by kworker/u4:7/4632:
#0: ffff0000c0029138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x6b8/0x13a4 kernel/workqueue.c:2265
#1: ffff800021717c20 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x6fc/0x13a4 kernel/workqueue.c:2267
#2: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
1 lock held by syz.6.765/6984:
#0: ffff0000e1d51590 (&sb->s_type->i_mutex_key#23){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
#0: ffff0000e1d51590 (&sb->s_type->i_mutex_key#23){+.+.}-{3:3}, at: fuse_flush+0x23c/0x6d8 fs/fuse/file.c:526
3 locks held by kworker/u4:15/7577:
#0: ffff0000c002a138 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x6b8/0x13a4 kernel/workqueue.c:2265
#1: ffff8000210e7c20 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x6fc/0x13a4 kernel/workqueue.c:2267
#2: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
7 locks held by syz-executor/15345:
#0: ffff0000d8900460 (sb_writers#7){.+.+}-{0:0}, at: vfs_write+0x244/0x7f0 fs/read_write.c:580
#1: ffff0000f6e45c88 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1c0/0x4c0 fs/kernfs/file.c:343
#2: ffff0000cea4a008 (kn->active#51){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
#2: ffff0000cea4a008 (kn->active#51){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x200/0x4c0 fs/kernfs/file.c:344
#3: ffff800016c29208 (nsim_bus_dev_list_lock){+.+.}-{3:3}, at: new_device_store+0x138/0x514 drivers/net/netdevsim/bus.c:160
#4: ffff0000e952a0e8 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:840 [inline]
#4: ffff0000e952a0e8 (&dev->mutex){....}-{3:3}, at: __device_attach+0x8c/0x3dc drivers/base/dd.c:990
#5: ffff0000e95282f8 (&devlink->lock_key#13){+.+.}-{3:3}, at: devl_lock+0x24/0x34 net/devlink/leftover.c:275
#6: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
1 lock held by syz.2.3991/15639:
#0: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x650/0xcdc net/core/rtnetlink.c:6147
1 lock held by syz.5.3999/15661:
#0: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x650/0xcdc net/core/rtnetlink.c:6147
2 locks held by syz.8.4022/15712:
#0: ffff8000178db290 (cb_lock){++++}-{3:3}, at: genl_rcv+0x28/0x50 net/netlink/genetlink.c:860
#1: ffff80001787f1c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
=============================================
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup