INFO: task can't die in p9_client_rpc (4)


syzbot

Oct 25, 2021, 7:35:26 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 3196a52aff93 Add linux-next specific files for 20211021
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=17712730b00000
kernel config: https://syzkaller.appspot.com/x/.config?x=fb0e2a8a3e9b63e2
dashboard link: https://syzkaller.appspot.com/bug?extid=f7825c5626af6ef4558c
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
CC: [asma...@codewreck.org da...@davemloft.net eri...@gmail.com ku...@kernel.org linux-...@vger.kernel.org lu...@ionkov.net net...@vger.kernel.org v9fs-de...@lists.sourceforge.net]

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+f7825c...@syzkaller.appspotmail.com

INFO: task syz-executor.1:13976 can't die for more than 143 seconds.
task:syz-executor.1 state:R running task stack:27328 pid:13976 ppid: 1805 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:4965 [inline]
__schedule+0xa9a/0x4940 kernel/sched/core.c:6246
schedule+0xd2/0x260 kernel/sched/core.c:6319
p9_client_rpc+0x403/0x1240 net/9p/client.c:761
p9_client_read_once+0x22c/0x5f0 net/9p/client.c:1607
p9_client_read+0x13b/0x1a0 net/9p/client.c:1566
v9fs_dir_readdir+0x4ac/0x6d0 fs/9p/vfs_dir.c:112
iterate_dir+0x576/0x700 fs/readdir.c:65
__do_sys_getdents64 fs/readdir.c:369 [inline]
__se_sys_getdents64 fs/readdir.c:354 [inline]
__x64_sys_getdents64+0x13a/0x2c0 fs/readdir.c:354
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f94bc786a39
RSP: 002b:00007f94b9cfc188 EFLAGS: 00000246 ORIG_RAX: 00000000000000d9
RAX: ffffffffffffffda RBX: 00007f94bc889f60 RCX: 00007f94bc786a39
RDX: 0000000000000034 RSI: 0000000000000000 RDI: 0000000000000007
RBP: 00007f94bc7e0c5f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffca511c0ef R14: 00007f94b9cfc300 R15: 0000000000022000
</TASK>
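
For context, the schedule() frame under p9_client_rpc() at net/9p/client.c:761 is the per-request response wait. Assuming the linux-next tree of 20211021 matches v5.15-era mainline at this spot (an assumption; verify against HEAD commit 3196a52aff93 above), the code looks roughly like the sketch below, and the retry path that clears TIF_SIGPENDING is worth noting for a "can't die" report:

    /* net/9p/client.c, p9_client_rpc() -- approximate v5.15-era mainline,
     * quoted as a sketch; check the actual tree at the commit above */
    again:
            /* Wait for the response */
            err = wait_event_killable(req->wq, req->status >= REQ_STATUS_RCVD);
            ...
            if ((err == -ERESTARTSYS) && (c->status == Connected) &&
                (type == P9_TFLUSH)) {
                    sigpending = 1;
                    clear_thread_flag(TIF_SIGPENDING);
                    goto again;
            }

wait_event_killable() should return -ERESTARTSYS when SIGKILL is pending, so a task sitting in state R here for 143+ seconds suggests the wait keeps being re-entered rather than completing, though the log alone does not show which path loops.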

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8bb833a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6458
1 lock held by in:imklog/6248:
2 locks held by rs:main Q:Reg/6249:
#0: ffff88807b7e8370 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:990
#1: ffff88807eda6460 (sb_writers#5){.+.+}-{0:0}, at: ksys_write+0x12d/0x250 fs/read_write.c:647
3 locks held by kworker/u4:10/14679:
2 locks held by agetty/13338:
#0: ffff888076c94098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:252
#1: ffffc9000428b2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2113
2 locks held by syz-executor.5/13954:
2 locks held by syz-executor.4/13965:
#0: ffff888022ff8ff0 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:990
#1: ffff88803b044eb8 (&sb->s_type->i_mutex_key#27){++++}-{3:3}, at: iterate_dir+0xcd/0x700 fs/readdir.c:55
2 locks held by syz-executor.1/13976:
2 locks held by syz-executor.2/13978:
#0: ffff888077652d70 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:990
#1: ffff88803b045510 (&sb->s_type->i_mutex_key#27){++++}-{3:3}, at: iterate_dir+0xcd/0x700 fs/readdir.c:55
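
The lock lists line up with the trace: each getdents64() caller takes f->f_pos_lock in __fdget_pos() and then the directory inode lock in iterate_dir(), so while syz-executor.1 is parked in the 9p RPC it pins both, and the other executors blocked at the same call sites queue behind locks of the same classes. A minimal userspace sketch of that calling pattern follows; it is not a reproducer (syzbot has none for this bug), and the /mnt/9p mount point is hypothetical:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Two threads share one directory fd: each getdents64() call takes
     * f_pos_lock and then the directory inode lock, so if one call blocks
     * in the 9p transport, the other thread queues behind it. */
    static void *reader(void *arg)
    {
            int fd = *(int *)arg;
            char buf[4096];

            for (;;) {
                    /* Raw syscall, matching the __x64_sys_getdents64 frame. */
                    long n = syscall(SYS_getdents64, fd, buf, sizeof(buf));

                    if (n < 0) {
                            perror("getdents64");
                            return NULL;
                    }
                    if (n == 0)
                            lseek(fd, 0, SEEK_SET);  /* end of dir: rewind */
            }
    }

    int main(void)
    {
            /* Hypothetical 9p mount, e.g. mount -t 9p ... /mnt/9p */
            int fd = open("/mnt/9p", O_RDONLY | O_DIRECTORY);
            pthread_t t1, t2;

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            pthread_create(&t1, NULL, reader, &fd);
            pthread_create(&t2, NULL, reader, &fd);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
    }

Build with gcc -pthread; actually blocking as in the report would additionally require a 9p server that stops answering, which this sketch does not arrange.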

=============================================

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Dec 20, 2021, 6:32:14 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while; there is no reproducer and no activity.