[v5.15] INFO: task hung in rxrpc_release

syzbot

Apr 2, 2023, 2:39:44 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: c957cbb87315 Linux 5.15.105
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10ae09cdc80000
kernel config: https://syzkaller.appspot.com/x/.config?x=6f83fab0469f5de7
dashboard link: https://syzkaller.appspot.com/bug?extid=689e4e79c96c7386d7a1
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/83f411a78c57/disk-c957cbb8.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/38ddd5d692d0/vmlinux-c957cbb8.xz
kernel image: https://storage.googleapis.com/syzbot-assets/958a1729c4a6/bzImage-c957cbb8.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+689e4e...@syzkaller.appspotmail.com

INFO: task kworker/u4:1:144 blocked for more than 143 seconds.
Not tainted 5.15.105-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:1 state:D stack:22648 pid: 144 ppid: 2 flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
schedule+0x11b/0x1f0 kernel/sched/core.c:6455
schedule_timeout+0xac/0x300 kernel/time/timer.c:1860
do_wait_for_common+0x2d9/0x480 kernel/sched/completion.c:85
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x48/0x60 kernel/sched/completion.c:138
flush_workqueue+0x737/0x1610 kernel/workqueue.c:2878
rxrpc_release_sock net/rxrpc/af_rxrpc.c:887 [inline]
rxrpc_release+0x274/0x430 net/rxrpc/af_rxrpc.c:917
__sock_release net/socket.c:649 [inline]
sock_release+0x7a/0x140 net/socket.c:677
afs_close_socket+0x286/0x310 fs/afs/rxrpc.c:125
afs_net_exit+0x58/0xa0 fs/afs/main.c:158
ops_exit_list net/core/net_namespace.c:169 [inline]
cleanup_net+0x6ce/0xb60 net/core/net_namespace.c:596
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2306
worker_thread+0xaca/0x1280 kernel/workqueue.c:2453
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
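
[Editorial note, not part of the syzbot log: the trace above shows the netns cleanup worker stuck in an uninterruptible wait. cleanup_net() walks the pernet exit handlers, afs_net_exit() tears down the AFS control socket, and rxrpc_release_sock() then calls flush_workqueue(), which sleeps in wait_for_completion() until every queued work item on that workqueue has finished. If one of those work items never completes, the flush never returns and khungtaskd reports the caller once hung_task_timeout_secs elapses. The following is a hypothetical, self-contained module sketch, not the rxrpc code and with every name invented, that reproduces the same class of report.]

/*
 * Hypothetical sketch (NOT the rxrpc code; all names invented): a task calls
 * flush_workqueue() and sleeps uninterruptibly in wait_for_completion() for a
 * work item that never finishes, so khungtaskd flags it after
 * hung_task_timeout_secs.
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static struct workqueue_struct *demo_wq;

static void stuck_work_fn(struct work_struct *work)
{
	/* Stands in for whatever keeps the flushed workqueue busy in the
	 * real report: loops forever, so the flush below never completes. */
	for (;;)
		ssleep(1);
}

static DECLARE_WORK(stuck_work, stuck_work_fn);

static int __init demo_init(void)
{
	demo_wq = alloc_workqueue("demo_wq", 0, 0);
	if (!demo_wq)
		return -ENOMEM;

	queue_work(demo_wq, &stuck_work);

	/* Same frames as in the trace above: flush_workqueue() ->
	 * wait_for_completion() -> schedule(); the caller sits in D state
	 * and the hung-task watchdog reports it. Deliberately never returns. */
	flush_workqueue(demo_wq);
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");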

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8c91b920 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
3 locks held by kworker/u4:1/144:
#0: ffff888011db5138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc9000111fd20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
#2: ffffffff8d9c7d10 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:558
2 locks held by kworker/u4:5/1413:
#0: ffff888011c69138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc9000576fd20 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
2 locks held by getty/3271:
#0: ffff88814af38098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc90002bb32e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1da0 drivers/tty/n_tty.c:2147
3 locks held by syz-executor.3/3626:
3 locks held by kworker/1:5/3671:
#0: ffff888011c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc9000307fd20 ((work_completion)(&aux->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
#2: ffffffff8c9da548 (vmap_purge_lock){+.+.}-{3:3}, at: _vm_unmap_aliases+0x441/0x4e0 mm/vmalloc.c:2105
3 locks held by kworker/1:8/3709:
#0: ffff888011c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc9000315fd20 ((work_completion)(&aux->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
#2: ffffffff8c9da548 (vmap_purge_lock){+.+.}-{3:3}, at: _vm_unmap_aliases+0x441/0x4e0 mm/vmalloc.c:2105
3 locks held by kworker/0:9/3975:
#0: ffff888011c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc9000317fd20 ((work_completion)(&aux->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
#2: ffffffff8c9da548 (vmap_purge_lock){+.+.}-{3:3}, at: _vm_unmap_aliases+0x441/0x4e0 mm/vmalloc.c:2105
2 locks held by kworker/0:13/4911:
#0: ffff888011c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc9000300fd20 ((work_completion)(&aux->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
2 locks held by kworker/u4:7/5572:
#0: ffff888011c69138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc90002d3fd20 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
3 locks held by kworker/1:3/20051:
#0: ffff888011c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffff8880b9b27848 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x53d/0x810 kernel/sched/psi.c:891
#2: ffffffff91576518 (&obj_hash[i].lock){-.-.}-{2:2}, at: __debug_check_no_obj_freed lib/debugobjects.c:987 [inline]
#2: ffffffff91576518 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_check_no_obj_freed+0xc2/0x610 lib/debugobjects.c:1030
2 locks held by syz-executor.4/20250:
2 locks held by dhcpcd/20267:
#0: ffff8880792e6120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1649 [inline]
#0: ffff8880792e6120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2a/0xc90 net/packet/af_packet.c:3159
#1: ffffffff8c91fe68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#1: ffffffff8c91fe68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x280/0x740 kernel/rcu/tree_exp.h:840
2 locks held by kworker/1:7/20283:
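
[Editorial note, not part of the syzbot log: the lock dump shows why this hang is disruptive beyond rxrpc. The stuck worker (kworker/u4:1/144) holds pernet_ops_rwsem, the global rwsem that serializes pernet ops registration against namespace setup and teardown, so other netns work backs up behind the blocked flush. For context, a hypothetical pernet subsystem, with invented names, looks like the sketch below; its ->exit() callback runs from cleanup_net() with pernet_ops_rwsem held, which is where afs_net_exit() sits in the trace.]

/*
 * Hypothetical pernet subsystem sketch (names invented), shown only to place
 * afs_net_exit() in context: cleanup_net() holds pernet_ops_rwsem while it
 * walks every registered ->exit() handler, so an ->exit() that sleeps
 * indefinitely stalls namespace teardown system-wide.
 */
#include <linux/module.h>
#include <net/net_namespace.h>

static int __net_init demo_net_init(struct net *net)
{
	/* Per-namespace setup; nothing to do in this sketch. */
	return 0;
}

static void __net_exit demo_net_exit(struct net *net)
{
	/* Runs from cleanup_net() with pernet_ops_rwsem held, like
	 * afs_net_exit() in the report; anything that blocks here stalls
	 * every other namespace's teardown as well. */
}

static struct pernet_operations demo_net_ops = {
	.init = demo_net_init,
	.exit = demo_net_exit,
};

static int __init demo_init(void)
{
	return register_pernet_subsys(&demo_net_ops);
}

static void __exit demo_exit(void)
{
	unregister_pernet_subsys(&demo_net_ops);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");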

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted 5.15.105-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
watchdog+0xe72/0xeb0 kernel/hung_task.c:295
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.105-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Workqueue: bat_events batadv_nc_worker
RIP: 0010:debug_lockdep_rcu_enabled+0x25/0x30 kernel/rcu/update.c:281
Code: cc cc cc cc cc 31 c0 83 3d 7b 3e cc 03 00 74 1d 83 3d f2 70 cc 03 00 74 14 65 48 8b 0d a4 92 eb 75 31 c0 83 b9 1c 0a 00 00 00 <0f> 94 c0 c3 cc cc cc cc cc cc cc 41 56 53 89 fb e8 d6 0d 00 00 41
RSP: 0018:ffffc90000ce7bf0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff88813fe90000
RDX: 0000000000000000 RSI: ffffffff8ad85120 RDI: ffffffff8ad850e0
RBP: 0000000000000001 R08: ffffffff89f451b1 R09: fffffbfff1f76e15
R10: 0000000000000000 R11: dffffc0000000001 R12: 00000000000000c9
R13: dffffc0000000000 R14: ffff888078a04c80 R15: ffff888076662bc0
FS: 0000000000000000(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe58c8a1058 CR3: 000000005393c000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
rcu_read_unlock include/linux/rcupdate.h:725 [inline]
batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:416 [inline]
batadv_nc_worker+0x1ba/0x5b0 net/batman-adv/network-coding.c:723
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2306
worker_thread+0xaca/0x1280 kernel/workqueue.c:2453
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jul 31, 2023, 2:39:40 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.