[v6.6] INFO: task hung in __lru_add_drain_all

syzbot

1:31 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 0a805b6ea8cd Linux 6.6.116
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1201c114580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=34e48618c49dfa5a873c
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/13874446b4b2/disk-0a805b6e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/69a5e59e1ade/vmlinux-0a805b6e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/322c0711f976/bzImage-0a805b6e.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+34e486...@syzkaller.appspotmail.com

INFO: task syz-executor:13891 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor state:D stack:21736 pid:13891 ppid:1 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_timeout+0x9b/0x280 kernel/time/timer.c:2143
do_wait_for_common kernel/sched/completion.c:95 [inline]
__wait_for_common kernel/sched/completion.c:116 [inline]
wait_for_common kernel/sched/completion.c:127 [inline]
wait_for_completion+0x2bd/0x590 kernel/sched/completion.c:148
__flush_work+0x895/0x9f0 kernel/workqueue.c:3430
__lru_add_drain_all+0x574/0x5e0 mm/swap.c:889
invalidate_bdev+0x93/0xc0 block/bdev.c:86
ext4_put_super+0x5f9/0xa40 fs/ext4/super.c:1362
generic_shutdown_super+0x134/0x2b0 fs/super.c:693
kill_block_super+0x44/0x90 fs/super.c:1660
ext4_kill_sb+0x68/0xa0 fs/ext4/super.c:7391
deactivate_locked_super+0x97/0x100 fs/super.c:481
cleanup_mnt+0x429/0x4c0 fs/namespace.c:1259
task_work_run+0x1ce/0x250 kernel/task_work.c:239
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xe6/0x110 kernel/entry/common.c:177
exit_to_user_mode_prepare+0xf6/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f18fa3909f7
RSP: 002b:00007ffceb0c0238 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007f18fa411d7d RCX: 00007f18fa3909f7
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007ffceb0c02f0
RBP: 00007ffceb0c02f0 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007ffceb0c1380
R13: 00007f18fa411d7d R14: 0000000000113e0a R15: 00007ffceb0c13c0
</TASK>
INFO: task kworker/u4:19:14026 blocked for more than 144 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:19 state:D stack:23592 pid:14026 ppid:2 flags:0x00004000
Workqueue: events_unbound fsnotify_connector_destroy_workfn
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_timeout+0x9b/0x280 kernel/time/timer.c:2143
do_wait_for_common kernel/sched/completion.c:95 [inline]
__wait_for_common kernel/sched/completion.c:116 [inline]
wait_for_common kernel/sched/completion.c:127 [inline]
wait_for_completion+0x2bd/0x590 kernel/sched/completion.c:148
__synchronize_srcu+0x313/0x3a0 kernel/rcu/srcutree.c:1386
fsnotify_connector_destroy_workfn+0x44/0xa0 fs/notify/mark.c:234
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
INFO: task kworker/u4:0:18165 blocked for more than 145 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:0 state:D stack:22728 pid:18165 ppid:2 flags:0x00004000
Workqueue: events_unbound fsnotify_mark_destroy_workfn
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_timeout+0x9b/0x280 kernel/time/timer.c:2143
do_wait_for_common kernel/sched/completion.c:95 [inline]
__wait_for_common kernel/sched/completion.c:116 [inline]
wait_for_common kernel/sched/completion.c:127 [inline]
wait_for_completion+0x2bd/0x590 kernel/sched/completion.c:148
__synchronize_srcu+0x313/0x3a0 kernel/rcu/srcutree.c:1386
fsnotify_mark_destroy_workfn+0x102/0x2e0 fs/notify/mark.c:924
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/29:
#0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
2 locks held by getty/5547:
#0: ffff88814cdd30a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
2 locks held by syz-executor/13891:
#0: ffff888064b2a0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __super_lock fs/super.c:56 [inline]
#0: ffff888064b2a0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __super_lock_excl fs/super.c:71 [inline]
#0: ffff888064b2a0e0 (&type->s_umount_key#31){++++}-{3:3}, at: deactivate_super+0xa4/0xe0 fs/super.c:513
#1: ffffffff8cddeda8 (lock#3){+.+.}-{3:3}, at: __lru_add_drain_all+0x66/0x5e0 mm/swap.c:844
2 locks held by kworker/u4:19/14026:
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90003a17d00 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90003a17d00 (connector_reaper_work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
2 locks held by kworker/u4:0/18165:
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90004177d00 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90004177d00 ((reaper_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
1 lock held by syz.4.3627/18339:
#0: ffff88802f108b20 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:124 [inline]
#0: ffff88802f108b20 (&mm->mmap_lock){++++}-{3:3}, at: vm_mmap_pgoff+0x168/0x400 mm/util.c:554
1 lock held by syz.4.3627/18343:
1 lock held by syz.5.3628/18348:
5 locks held by syz.7.3635/18375:
5 locks held by syz-executor/18497:
#0: ffff88802eb16418 (sb_writers#11){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ef910 (&type->i_mutex_dir_key#7/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ef910 (&type->i_mutex_dir_key#7/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
#2: ffffffff8cd59848 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_lock include/linux/cgroup.h:369 [inline]
#2: ffffffff8cd59848 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0xf2/0x230 kernel/cgroup/cgroup.c:1677
#3: ffffffff8cbcb1d0 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_css_online+0x4a/0x910 kernel/cgroup/cpuset.c:3275
#4: ffffffff8cd681a8 (cpuset_mutex){+.+.}-{3:3}, at: cpuset_css_online+0x58/0x910 kernel/cgroup/cpuset.c:3276
3 locks held by syz-executor/18509:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
#2: ffffffff8cd59848 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_lock include/linux/cgroup.h:369 [inline]
#2: ffffffff8cd59848 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0xf2/0x230 kernel/cgroup/cgroup.c:1677
2 locks held by syz-executor/18515:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
2 locks held by syz-executor/18528:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
2 locks held by syz-executor/19215:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
2 locks held by syz-executor/19237:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
2 locks held by syz-executor/19249:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
2 locks held by syz-executor/19257:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
2 locks held by syz-executor/19366:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
2 locks held by syz-executor/19377:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890
2 locks held by syz-executor/19384:
#0: ffff88802e458418 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff8880781ec160 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3890

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 18348 Comm: syz.5.3628 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
RIP: 0010:__lock_acquire+0x191f/0x7c80 kernel/locking/lockdep.c:5144
Code: 00 48 8b 84 24 b8 00 00 00 42 80 3c 00 00 48 8b 9c 24 88 00 00 00 74 12 48 89 df e8 8b d4 75 00 49 b8 00 00 00 00 00 fc ff df <4c> 89 23 48 8b 44 24 78 42 0f b6 04 00 84 c0 48 8b 5c 24 40 0f 85
RSP: 0018:ffffc90004a5f720 EFLAGS: 00000046
RAX: 1ffff1100580615a RBX: ffff88802c030ad0 RCX: ffffffff81671100
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff90da8508
RBP: ffffc90004a5f968 R08: dffffc0000000000 R09: 1ffffffff21b50a1
R10: dffffc0000000000 R11: fffffbfff21b50a2 R12: 0ee264dc3338924c
R13: ffff88802c030000 R14: 0000000000000001 R15: ffff88802c030b28
FS: 00007fcf203f66c0(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000020000006b000 CR3: 000000001e268000 CR4: 00000000003506f0
Call Trace:
<TASK>
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
seqcount_lockdep_reader_access+0xca/0x1c0 include/linux/seqlock.h:102
timekeeping_get_delta kernel/time/timekeeping.c:254 [inline]
timekeeping_get_ns kernel/time/timekeeping.c:388 [inline]
ktime_get+0x7f/0x280 kernel/time/timekeeping.c:848
common_hrtimer_rearm+0x61/0x110 kernel/time/posix-timers.c:248
posixtimer_rearm+0x135/0x340 kernel/time/posix-timers.c:268
dequeue_signal+0x1ba/0x4b0 kernel/signal.c:706
get_signal+0x551/0x1400 kernel/signal.c:2782
arch_do_signal_or_restart+0x9c/0x7b0 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
exit_to_user_mode_prepare+0xf6/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fcf2218f6c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fcf203f60e8 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 00007fcf223e5fa8 RCX: 00007fcf2218f6c9
RDX: 00000000000f4240 RSI: 0000000000000081 RDI: 00007fcf223e5fac
RBP: 00007fcf223e5fa0 R08: 0000000000745a2b R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fcf223e6038 R14: 00007fff662bc480 R15: 00007fff662bc568
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup