[v6.1] INFO: task hung in ip_tunnel_delete_nets


syzbot

May 4, 2023, 3:19:50 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: ca48fc16c493 Linux 6.1.27
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1692cfe4280000
kernel config: https://syzkaller.appspot.com/x/.config?x=47d3bbfdb3b1ddd2
dashboard link: https://syzkaller.appspot.com/bug?extid=79bf35c3a2cc8a770410
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/658765c915fa/disk-ca48fc16.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d69e8a1aff2d/vmlinux-ca48fc16.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0317a9546209/bzImage-ca48fc16.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+79bf35...@syzkaller.appspotmail.com

INFO: task kworker/u4:4:102 blocked for more than 143 seconds.
Not tainted 6.1.27-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:4 state:D stack:21368 pid:102 ppid:2 flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5241 [inline]
__schedule+0x132c/0x4330 kernel/sched/core.c:6554
schedule+0xbf/0x180 kernel/sched/core.c:6630
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6689
__mutex_lock_common+0xe2b/0x2520 kernel/locking/mutex.c:679
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
ip_tunnel_delete_nets+0xc9/0x330 net/ipv4/ip_tunnel.c:1121
ops_exit_list net/core/net_namespace.c:174 [inline]
cleanup_net+0x763/0xb60 net/core/net_namespace.c:601
process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
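
For context, the worker above is blocked acquiring rtnl_mutex right at the top of ip_tunnel_delete_nets() (lock #3 in the listing below). A paraphrased sketch of the v6.1-era locking pattern in net/ipv4/ip_tunnel.c (approximate, not a verbatim copy of the upstream source) shows why the whole netns cleanup batch stalls while another task keeps holding the RTNL lock:

    /* Paraphrased from net/ipv4/ip_tunnel.c (v6.1-era); names and structure
     * are approximate rather than a verbatim copy of the upstream source. */
    void ip_tunnel_delete_nets(struct list_head *net_list, unsigned int id,
                               struct rtnl_link_ops *ops)
    {
            struct ip_tunnel_net *itn;
            struct net *net;
            LIST_HEAD(list);

            rtnl_lock();            /* the trace shows the worker blocked here */
            list_for_each_entry(net, net_list, exit_list) {
                    itn = net_generic(net, id);
                    /* queue every tunnel device of this netns for removal */
                    ip_tunnel_destroy(net, itn, &list, ops);
            }
            unregister_netdevice_many(&list);
            rtnl_unlock();
    }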

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cf273f0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cf27bf0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
#0: ffffffff8cf27220 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
4 locks held by kworker/u4:4/102:
#0: ffff888012606938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc900015d7d20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
#2: ffffffff8e085510 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:563
#3: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_delete_nets+0xc9/0x330 net/ipv4/ip_tunnel.c:1121
2 locks held by getty/3302:
#0: ffff88814b253098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2177
3 locks held by kworker/0:17/5511:
#0: ffff88814b451138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc9000b85fd20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
#2: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4629
3 locks held by kworker/1:0/12070:
#0: ffff88814b451138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc90003b9fd20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
#2: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4629
3 locks held by kworker/1:5/12078:
#0: ffff888012465d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc90003d3fd20 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
#2: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: reg_check_chans_work+0x90/0xe40 net/wireless/reg.c:2493
3 locks held by syz-executor.1/15875:
1 lock held by syz-executor.1/15945:
#0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6088
1 lock held by syz-executor.1/15954:
#0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6088
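
Every rtnl_mutex entry above refers to the same global mutex: rtnl_lock() in net/core/rtnetlink.c (line 74 in the traces) is essentially a mutex_lock() on it, so the addrconf, wireless-reg and rtnetlink paths listed here all serialize against the netns cleanup worker. A minimal paraphrase for reference (not a verbatim copy of the source):

    /* Paraphrased from net/core/rtnetlink.c: RTNL is a single global mutex. */
    static DEFINE_MUTEX(rtnl_mutex);

    void rtnl_lock(void)
    {
            mutex_lock(&rtnl_mutex);
    }

    void rtnl_unlock(void)
    {
            /* the real implementation also flushes deferred netdev work */
            mutex_unlock(&rtnl_mutex);
    }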

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.1.27-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xf18/0xf60 kernel/hung_task.c:377
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 15875 Comm: syz-executor.1 Not tainted 6.1.27-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
RIP: 0010:constant_test_bit arch/x86/include/asm/bitops.h:207 [inline]
RIP: 0010:arch_test_bit arch/x86/include/asm/bitops.h:239 [inline]
RIP: 0010:_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:142 [inline]
RIP: 0010:folio_test_dirty include/linux/page-flags.h:479 [inline]
RIP: 0010:shrink_folio_list+0x26e0/0x9290 mm/vmscan.c:1891
Code: 1e 68 00 00 0f 1f 44 00 00 e8 9c 46 cc ff 4c 89 e7 be 08 00 00 00 e8 2f 8e 22 00 48 b8 00 00 00 00 00 fc ff df 48 8b 4c 24 20 <80> 3c 01 00 74 08 4c 89 e7 e8 82 8c 22 00 49 8b 1c 24 48 89 de 48
RSP: 0018:ffffc9000544dd60 EFLAGS: 00000256
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 1ffffd40002c7a98
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffea000163d4c0
RBP: ffffc9000544e1d0 R08: dffffc0000000000 R09: fffff940002c7a99
R10: 0000000000000000 R11: dffffc0000000001 R12: ffffea000163d4c0
R13: 1ffffd40002c7a9b R14: 0000000000000001 R15: ffffea000163d4d8
FS: 00007f6e17d13700(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c0006a0000 CR3: 0000000048e01000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
evict_folios+0xb42/0x2810 mm/vmscan.c:5017
lru_gen_shrink_lruvec mm/vmscan.c:5201 [inline]
shrink_lruvec+0xdbf/0x4650 mm/vmscan.c:5896
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Aug 23, 2023, 5:02:50 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have occurred for a while, there is no reproducer, and there has been no recent activity.