INFO: task can't die in gfs2_gl_hash_clear (2)


syzbot

Jan 17, 2021, 9:58:18 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: aa515cdc Add linux-next specific files for 20210113
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=1753b0d0d00000
kernel config: https://syzkaller.appspot.com/x/.config?x=3b9e4063c3f02eca
dashboard link: https://syzkaller.appspot.com/bug?extid=79629401bd610baf168d
compiler: gcc (GCC) 10.1.0-syz 20200507
CC: [agru...@redhat.com cluste...@redhat.com linux-...@vger.kernel.org rpet...@redhat.com]

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+796294...@syzkaller.appspotmail.com

INFO: task syz-executor.1:13544 can't die for more than 143 seconds.
task:syz-executor.1 state:D stack:25656 pid:13544 ppid: 8507 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4373 [inline]
__schedule+0x90c/0x21a0 kernel/sched/core.c:5124
schedule+0xcf/0x270 kernel/sched/core.c:5203
schedule_timeout+0x1d8/0x250 kernel/time/timer.c:1868
do_wait_for_common kernel/sched/completion.c:85 [inline]
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x163/0x260 kernel/sched/completion.c:138
flush_workqueue+0x3ff/0x13e0 kernel/workqueue.c:2838
gfs2_gl_hash_clear+0xc8/0x270 fs/gfs2/glock.c:1984
gfs2_fill_super+0x2073/0x2720 fs/gfs2/ops_fstype.c:1231
get_tree_bdev+0x440/0x760 fs/super.c:1291
gfs2_get_tree+0x4a/0x270 fs/gfs2/ops_fstype.c:1254
vfs_get_tree+0x89/0x2f0 fs/super.c:1496
do_new_mount fs/namespace.c:2889 [inline]
path_mount+0x12ae/0x1e70 fs/namespace.c:3220
do_mount fs/namespace.c:3233 [inline]
__do_sys_mount fs/namespace.c:3441 [inline]
__se_sys_mount fs/namespace.c:3418 [inline]
__x64_sys_mount+0x27f/0x300 fs/namespace.c:3418
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x460c6a
RSP: 002b:00007f60330c4a78 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f60330c4b10 RCX: 0000000000460c6a
RDX: 0000000020000000 RSI: 0000000020000100 RDI: 00007f60330c4ad0
RBP: 00007f60330c4ad0 R08: 00007f60330c4b10 R09: 0000000020000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000020000000
R13: 0000000020000100 R14: 0000000020000200 R15: 0000000020047a20

Showing all locks held in the system:
3 locks held by kworker/u4:3/115:
1 lock held by khungtaskd/1662:
#0: ffffffff8b374160 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6254
3 locks held by kworker/1:1H/2209:
#0: ffff888017165938 ((wq_completion)glock_workqueue){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888017165938 ((wq_completion)glock_workqueue){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
#0: ffff888017165938 ((wq_completion)glock_workqueue){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
#0: ffff888017165938 ((wq_completion)glock_workqueue){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
#0: ffff888017165938 ((wq_completion)glock_workqueue){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
#0: ffff888017165938 ((wq_completion)glock_workqueue){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
#1: ffffc90007f87da8 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
#2: ffff8880672960e0 (&type->s_umount_key#73){+.+.}-{3:3}, at: freeze_super+0x41/0x330 fs/super.c:1663
1 lock held by systemd-journal/4883:
1 lock held by in:imklog/8178:
#0: ffff888024ee1c70 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:947
1 lock held by syz-executor.1/13544:
#0: ffff8880672960e0 (&type->s_umount_key#72/1){+.+.}-{3:3}, at: alloc_super+0x201/0xaf0 fs/super.c:229

=============================================
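
To make the shape of this hang easier to see: the mounting task (syz-executor.1/13544) holds the superblock's s_umount (taken in alloc_super, s_umount_key#72/1) and is waiting in flush_workqueue, via gfs2_gl_hash_clear, for a glock work item to finish, while the lock dump suggests that very work item (kworker/1:1H) is stuck in freeze_super on the same s_umount. Below is a minimal userspace analogue in C of that pattern only; the names (s_umount, glock_work, flush_analogue) are illustrative stand-ins, not the kernel implementation, and the program hangs at the flush by design.

/*
 * Illustrative userspace analogue of the hang above (not kernel code).
 * A "flush" waits for a queued work item, but that work item is blocked
 * on a lock the flushing thread already holds, so the flush never returns.
 * Build with: cc -pthread hang.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t s_umount = PTHREAD_MUTEX_INITIALIZER; /* stand-in for sb->s_umount */
static pthread_mutex_t wq_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wq_idle  = PTHREAD_COND_INITIALIZER;
static int pending_work = 1;

/* stand-in for the glock work item that ends up blocking on s_umount */
static void *glock_work(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&s_umount);   /* blocks: the "mount" thread holds it */
    pthread_mutex_unlock(&s_umount);

    pthread_mutex_lock(&wq_lock);
    pending_work = 0;                /* would mark the work item as done */
    pthread_cond_signal(&wq_idle);
    pthread_mutex_unlock(&wq_lock);
    return NULL;
}

/* stand-in for flush_workqueue(): wait until no work is pending */
static void flush_analogue(void)
{
    pthread_mutex_lock(&wq_lock);
    while (pending_work)
        pthread_cond_wait(&wq_idle, &wq_lock);
    pthread_mutex_unlock(&wq_lock);
}

int main(void)
{
    pthread_t worker;

    pthread_mutex_lock(&s_umount);   /* "mount" path takes s_umount, as alloc_super does */
    pthread_create(&worker, NULL, glock_work, NULL);
    sleep(1);                        /* let the work item start and block */

    fprintf(stderr, "flushing...\n");
    flush_analogue();                /* corresponds to the flush_workqueue call in the trace; never returns */

    pthread_mutex_unlock(&s_umount);
    pthread_join(worker, NULL);
    puts("done");                    /* unreachable */
    return 0;
}
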



---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jul 20, 2021, 11:15:12 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.