[syzbot] [kernel?] INFO: task hung in exit_aio (5)

syzbot
12:41 PM
to anna-...@linutronix.de, fred...@kernel.org, linux-...@vger.kernel.org, syzkall...@googlegroups.com, tg...@kernel.org
Hello,

syzbot found the following issue on:

HEAD commit: 1d5dcaa3bd65 Merge tag 'probes-fixes-v7.1-rc3' of git://gi..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1135b7ce580000
kernel config: https://syzkaller.appspot.com/x/.config?x=f2e8ebfec4636d32
dashboard link: https://syzkaller.appspot.com/bug?extid=4c9b421ef4f6c18a174e
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/5b889cdb98bd/disk-1d5dcaa3.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/8953fddf953a/vmlinux-1d5dcaa3.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d6e5ef98b3f1/bzImage-1d5dcaa3.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4c9b42...@syzkaller.appspotmail.com

INFO: task syz.8.6692:28088 blocked for more than 143 seconds.
Tainted: G L syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.6692 state:D stack:24824 pid:28088 tgid:28088 ppid:21547 task_flags:0x40044c flags:0x00080002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5388 [inline]
__schedule+0x16ec/0x5620 kernel/sched/core.c:7189
__schedule_loop kernel/sched/core.c:7268 [inline]
schedule+0x164/0x360 kernel/sched/core.c:7283
schedule_timeout+0xc3/0x2c0 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common kernel/sched/completion.c:121 [inline]
wait_for_common kernel/sched/completion.c:132 [inline]
wait_for_completion+0x2cc/0x5e0 kernel/sched/completion.c:153
exit_aio+0x319/0x3f0 fs/aio.c:981
__mmput+0x68/0x3d0 kernel/fork.c:1175
exit_mm+0x18e/0x250 kernel/exit.c:581
do_exit+0x6a2/0x22c0 kernel/exit.c:963
do_group_exit+0x21b/0x2d0 kernel/exit.c:1118
get_signal+0x125c/0x1310 kernel/signal.c:3037
arch_do_signal_or_restart+0xbc/0x850 arch/x86/kernel/signal.c:337
__exit_to_user_mode_loop kernel/entry/common.c:64 [inline]
exit_to_user_mode_loop kernel/entry/common.c:98 [inline]
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:207 [inline]
irqentry_exit_to_user_mode_prepare include/linux/irq-entry-common.h:252 [inline]
irqentry_exit_to_user_mode include/linux/irq-entry-common.h:323 [inline]
irqentry_exit+0x289/0x760 kernel/entry/common.c:162
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618
RIP: 0033:0x7f80933e1580
RSP: 002b:00007fff9c9cd6f0 EFLAGS: 00010202
RAX: 0000001b2f21a000 RBX: ffffffff8247fdf4 RCX: 0000001b2f219ff8
RDX: 0000001b2ec24220 RSI: 0000000000000008 RDI: 00007f80942b5720
RBP: 00000000000000e2 R08: 00007f8093770000 R09: 00007f8093772000
R10: 000000008247fdf8 R11: 0000000000000012 R12: 00007f8093786038
R13: 000000000000010d R14: ffffffff8247f187 R15: 00007f80942b5720
</TASK>

Showing all locks held in the system:
3 locks held by kworker/u8:1/13:
#0: ffff888032966938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
#0: ffff888032966938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
#1: ffffc90000127c40 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
#1: ffffc90000127c40 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
#2: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
#2: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4746
5 locks held by rcuc/1/28:
1 lock held by khungtaskd/38:
#0: ffffffff8dfc8140 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:300 [inline]
#0: ffffffff8dfc8140 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
#0: ffffffff8dfc8140 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/u8:6/163:
6 locks held by kworker/u8:13/3324:
#0: ffff88801b290938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
#0: ffff88801b290938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
#1: ffffc9000ef87c40 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
#1: ffffc9000ef87c40 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
#2: ffffffff8f3487a0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf4/0x800 net/core/net_namespace.c:673
#3: ffff88803655f160 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1040 [inline]
#3: ffff88803655f160 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:124 [inline]
#3: ffff88803655f160 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x129/0x420 net/devlink/core.c:555
#4: ffff888022394310 (&devlink->lock_key#19){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:292 [inline]
#4: ffff888022394310 (&devlink->lock_key#19){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:125 [inline]
#4: ffff888022394310 (&devlink->lock_key#19){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x142/0x420 net/devlink/core.c:555
#5: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
3 locks held by kworker/u8:14/3390:
#0: ffff88801a074138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
#0: ffff88801a074138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
#1: ffffc9000f1b7c40 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
#1: ffffc9000f1b7c40 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
#2: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:313
2 locks held by getty/5358:
#0: ffff8880371cb0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc90003cc62e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13a0 drivers/tty/n_tty.c:2211
5 locks held by kworker/u8:17/6574:
3 locks held by kworker/u8:21/10044:
#0: ffff88801a074138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
#0: ffff88801a074138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
#1: ffffc90006d47c40 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
#1: ffffc90006d47c40 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
#2: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
7 locks held by kworker/0:2/15913:
1 lock held by syz-executor/27881:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
1 lock held by syz-executor/27889:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
1 lock held by syz-executor/28054:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
1 lock held by syz-executor/28083:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
1 lock held by syz-executor/28102:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
7 locks held by syz-executor/28131:
#0: ffff888037648480 (sb_writers#7){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2724 [inline]
#0: ffff888037648480 (sb_writers#7){.+.+}-{0:0}, at: vfs_write+0x22d/0xba0 fs/read_write.c:684
#1: ffff8880702d1478 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x1df/0x540 fs/kernfs/file.c:343
#2: ffff888029954008 (kn->active#52){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
#2: ffff888029954008 (kn->active#52){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x232/0x540 fs/kernfs/file.c:344
#3: ffffffff8ebcf8d8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd7/0x370 drivers/net/netdevsim/bus.c:234
#4: ffff88803e7b5160 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1040 [inline]
#4: ffff88803e7b5160 (&dev->mutex){....}-{4:4}, at: __device_driver_lock drivers/base/dd.c:1174 [inline]
#4: ffff88803e7b5160 (&dev->mutex){....}-{4:4}, at: device_release_driver_internal+0xb6/0x870 drivers/base/dd.c:1372
#5: ffff888062650310 (&devlink->lock_key#6){+.+.}-{4:4}, at: nsim_drv_remove+0x50/0x160 drivers/net/netdevsim/dev.c:1799
#6: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
2 locks held by syz.0.6716/28241:
#0: ffffffff8eacd008 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:300 [inline]
#0: ffffffff8eacd008 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
#0: ffffffff8eacd008 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
#1: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
2 locks held by syz.0.6716/28244:
#0: ffffffff8ea9ea68 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:300 [inline]
#0: ffffffff8ea9ea68 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
#0: ffffffff8ea9ea68 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
#1: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
1 lock held by syz-executor/28250:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
1 lock held by syz-executor/28262:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
1 lock held by syz-executor/28270:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
1 lock held by syz-executor/28289:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
1 lock held by syz-executor/28301:
#0: ffffffff8dfce2f0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
5 locks held by syz-executor/28401:
2 locks held by syz-executor/28422:
#0: ffffffff8f8918e0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:300 [inline]
#0: ffffffff8f8918e0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
#0: ffffffff8f8918e0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
#1: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
#1: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
#1: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x883/0x1bb0 net/core/rtnetlink.c:4109
2 locks held by syz-executor/28425:
#0: ffffffff8f8918e0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:300 [inline]
#0: ffffffff8f8918e0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
#0: ffffffff8f8918e0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
#1: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
#1: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
#1: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x883/0x1bb0 net/core/rtnetlink.c:4109
2 locks held by syz-executor/28450:
#0: ffffffff8f3487a0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x4f7/0x730 net/core/net_namespace.c:575
#1: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: ip_tunnel_init_net+0x2d7/0x840 net/ipv4/ip_tunnel.c:1146
2 locks held by syz-executor/28458:
#0: ffffffff8f3487a0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x4f7/0x730 net/core/net_namespace.c:575
#1: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
#1: ffffffff8f3574f8 (rtnl_mutex){+.+.}-{4:4}, at: register_netdevice_notifier_net+0x1a/0xa0 net/core/dev.c:2102

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Tainted: G L syzkaller #0 PREEMPT_{RT,(full)}
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
__sys_info lib/sys_info.c:157 [inline]
sys_info+0x135/0x170 lib/sys_info.c:165
check_hung_uninterruptible_tasks kernel/hung_task.c:353 [inline]
watchdog+0xfd3/0x1030 kernel/hung_task.c:561
kthread+0x388/0x470 kernel/kthread.c:436
ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 15913 Comm: kworker/0:2 Tainted: G L syzkaller #0 PREEMPT_{RT,(full)}
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/18/2026
Workqueue: events_long defense_work_handler
RIP: 0010:rt_spin_lock+0x68/0x400 kernel/locking/spinlock_rt.c:56
Code: 41 48 c7 44 24 28 d0 d9 7f 8d 48 c7 44 24 30 20 27 1f 8b 48 8d 5c 24 20 48 c1 eb 03 48 b8 f1 f1 f1 f1 f8 f8 f3 f3 4a 89 04 2b <48> 83 c7 58 31 f6 31 d2 31 c9 41 b8 01 00 00 00 45 31 c9 ff 75 08
RSP: 0018:ffffc90003ecf4a0 EFLAGS: 00000a02
RAX: f3f3f8f8f1f1f1f1 RBX: 1ffff920007d9e98 RCX: ffff8880295c5c40
RDX: 0000000000000100 RSI: 0000000000000000 RDI: ffff8880b883d428
RBP: ffffc90003ecf558 R08: 0000000000000000 R09: 0000000000000100
R10: dffffc0000000000 R11: fffffbfff1f11dff R12: ffff8880b883d660
R13: dffffc0000000000 R14: ffff8880b883d428 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff888126176000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000561f62bced08 CR3: 0000000086984000 CR4: 00000000003526f0
Call Trace:
<TASK>
spin_lock include/linux/spinlock_rt.h:45 [inline]
process_backlog+0x13b/0xc60 net/core/dev.c:6662
__napi_poll+0xab/0x550 net/core/dev.c:7730
napi_poll net/core/dev.c:7793 [inline]
net_rx_action+0x696/0xe00 net/core/dev.c:7950
handle_softirqs+0x1de/0x6d0 kernel/softirq.c:622
__do_softirq kernel/softirq.c:656 [inline]
__local_bh_enable_ip+0x170/0x2b0 kernel/softirq.c:302
local_bh_enable include/linux/bottom_half.h:33 [inline]
update_defense_level+0x91e/0xd70 net/netfilter/ipvs/ip_vs_ctl.c:210
defense_work_handler+0x2d/0xd0 net/netfilter/ipvs/ip_vs_ctl.c:235
process_one_work kernel/workqueue.c:3302 [inline]
process_scheduled_works+0xb5d/0x1860 kernel/workqueue.c:3385
worker_thread+0xa53/0xfc0 kernel/workqueue.c:3466
kthread+0x388/0x470 kernel/kthread.c:436
ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup