Hello,
syzbot found the following issue on:
HEAD commit: f6e38ae624cf Linux 6.1.158
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=162e7692580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=7eb38bd5021fec61
dashboard link: https://syzkaller.appspot.com/bug?extid=0801789b05cd1d16eabf
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2b7003e2661d/disk-f6e38ae6.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/0696cea8d016/vmlinux-f6e38ae6.xz
kernel image: https://storage.googleapis.com/syzbot-assets/6b7a89e636d7/bzImage-f6e38ae6.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0801789b05cd1d16eabf@syzkaller.appspotmail.com
INFO: task kworker/1:1:26 blocked for more than 143 seconds.
Not tainted 6.1.158-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1 state:D stack:22112 pid:26 ppid:2 flags:0x00004000
Workqueue: events_long flush_mdb
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5244 [inline]
__schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
schedule+0xb9/0x180 kernel/sched/core.c:6637
io_schedule+0x7c/0xd0 kernel/sched/core.c:8797
bit_wait_io+0xd/0xc0 kernel/sched/wait_bit.c:209
__wait_on_bit_lock+0xd8/0x580 kernel/sched/wait_bit.c:90
out_of_line_wait_on_bit_lock+0x11f/0x160 kernel/sched/wait_bit.c:117
lock_buffer include/linux/buffer_head.h:397 [inline]
hfs_mdb_commit+0x111/0x1110 fs/hfs/mdb.c:271
process_one_work+0x898/0x1160 kernel/workqueue.c:2292
worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
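
For context, the worker is stuck acquiring the buffer lock on the HFS
Master Directory Block. Below is a minimal sketch of the blocking
pattern, as an illustration of lock_buffer() semantics rather than the
exact fs/hfs/mdb.c code (mdb_commit_sketch is a hypothetical name):

/*
 * Illustrative only, not the real hfs_mdb_commit(). lock_buffer()
 * waits on BH_Lock in uninterruptible (D) state, so if the buffer
 * is never unlocked (or its I/O never completes), the events_long
 * worker running flush_mdb blocks here indefinitely.
 */
#include <linux/buffer_head.h>

static void mdb_commit_sketch(struct buffer_head *mdb_bh)
{
	lock_buffer(mdb_bh);       /* -> bit_wait_io()/io_schedule(), D state */
	/* ... update Master Directory Block fields in the buffer ... */
	mark_buffer_dirty(mdb_bh);
	unlock_buffer(mdb_bh);     /* the lock holder must always do this */
}

The bit_wait_io() and io_schedule() frames in the trace match this wait
on BH_Lock; a path that leaves the mdb buffer_head locked, or a write
that never completes, would produce exactly this hang.
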
Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cb2b630 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cb2be50 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
2 locks held by kworker/1:1/26:
#0: ffff888017471138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90000a1fd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
1 lock held by khungtaskd/28:
#0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
4 locks held by kworker/u4:2/41:
#0: ffff888144e77138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90000b27d00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffff8880b8e3aad8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:537
#3: ffff888070c35e18 (&xfs_nondir_ilock_class){++++}-{3:3}, at: xfs_bmapi_convert_one_delalloc fs/xfs/libxfs/xfs_bmap.c:4576 [inline]
#3: ffff888070c35e18 (&xfs_nondir_ilock_class){++++}-{3:3}, at: xfs_bmapi_convert_delalloc+0x329/0x1480 fs/xfs/libxfs/xfs_bmap.c:4698
2 locks held by getty/4027:
#0: ffff88814d1de098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
2 locks held by syz-executor/4273:
2 locks held by syz-executor/4278:
#0: ffff88807e6e40e0 (&type->s_umount_key#67){++++}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:362
#1: ffff8880247407d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:362 [inline]
#1: ffff8880247407d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x19c/0x9e0 fs/fs-writeback.c:2748
3 locks held by kworker/0:12/4996:
#0: ffff88802eb98538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc9001c387d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xc4/0x14d0 net/ipv6/addrconf.c:4131
1 lock held by syz-executor/5412:
#0: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
#0: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
2 locks held by kworker/1:18/5546:
2 locks held by kworker/1:19/6746:
#0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc9001d42fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by syz.7.1228/8186:
#0: ffff888070f86e10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
#0: ffff888070f86e10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release net/socket.c:653 [inline]
#0: ffff888070f86e10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: sock_close+0x90/0x240 net/socket.c:1400
#1: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: gtp_encap_destroy+0xe/0x20 drivers/net/gtp.c:643
2 locks held by syz.7.1228/8187:
#0: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6147
#1: ffffffff8cb30978
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup