Hello,
syzbot found the following issue on:
HEAD commit: 7e89efd3ae1c Linux 5.15.164
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13519dbd980000
kernel config: https://syzkaller.appspot.com/x/.config?x=8e7768447c833306
dashboard link: https://syzkaller.appspot.com/bug?extid=477360bf6c6e20cd1c3a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3d929e236949/disk-7e89efd3.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/8a76a46947c4/vmlinux-7e89efd3.xz
kernel image: https://storage.googleapis.com/syzbot-assets/12f4fa036ad7/Image-7e89efd3.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+477360...@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
5.15.164-syzkaller #0 Not tainted
------------------------------------------------------
kworker/1:1H/227 is trying to acquire lock:
ffff0000c26d60e0 (&type->s_umount_key#62){+.+.}-{3:3}, at: freeze_super+0x5c/0x388 fs/super.c:1682
but task is already holding lock:
ffff80001af47c00 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2285
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}:
process_one_work+0x6d4/0x11b8 kernel/workqueue.c:2286
worker_thread+0x910/0x1034 kernel/workqueue.c:2457
kthread+0x37c/0x45c kernel/kthread.c:334
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:870
-> #1 ((wq_completion)glock_workqueue){+.+.}-{0:0}:
flush_workqueue+0x14c/0x11c4 kernel/workqueue.c:2830
gfs2_gl_hash_clear+0xbc/0x2f4 fs/gfs2/glock.c:2180
gfs2_put_super+0x5e4/0x684 fs/gfs2/super.c:624
generic_shutdown_super+0x130/0x29c fs/super.c:475
kill_block_super+0x70/0xdc fs/super.c:1414
gfs2_kill_sb+0xc0/0xd4
deactivate_locked_super+0xb8/0x13c fs/super.c:335
deactivate_super+0x108/0x128 fs/super.c:366
cleanup_mnt+0x3c0/0x474 fs/namespace.c:1143
__cleanup_mnt+0x20/0x30 fs/namespace.c:1150
task_work_run+0x130/0x1e4 kernel/task_work.c:164
tracehook_notify_resume include/linux/tracehook.h:189 [inline]
do_notify_resume+0x262c/0x32b8 arch/arm64/kernel/signal.c:946
prepare_exit_to_user_mode arch/arm64/kernel/entry-common.c:133 [inline]
exit_to_user_mode arch/arm64/kernel/entry-common.c:138 [inline]
el0_svc+0xfc/0x1f0 arch/arm64/kernel/entry-common.c:609
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
-> #0 (&type->s_umount_key#62){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x32d4/0x7638 kernel/locking/lockdep.c:5012
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5623
down_write+0xbc/0x12c kernel/locking/rwsem.c:1551
freeze_super+0x5c/0x388 fs/super.c:1682
freeze_go_sync+0x128/0x31c fs/gfs2/glops.c:587
do_xmote+0x304/0x1054 fs/gfs2/glock.c:742
run_queue+0x3f8/0x6bc fs/gfs2/glock.c:872
glock_work_func+0x27c/0x470 fs/gfs2/glock.c:1039
process_one_work+0x790/0x11b8 kernel/workqueue.c:2310
worker_thread+0x910/0x1034 kernel/workqueue.c:2457
kthread+0x37c/0x45c kernel/kthread.c:334
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:870
other info that might help us debug this:
Chain exists of:
&type->s_umount_key#62 --> (wq_completion)glock_workqueue --> (work_completion)(&(&gl->gl_work)->work)
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&(&gl->gl_work)->work));
                               lock((wq_completion)glock_workqueue);
                               lock((work_completion)(&(&gl->gl_work)->work));
  lock(&type->s_umount_key#62);

 *** DEADLOCK ***
2 locks held by kworker/1:1H/227:
#0: ffff0000c7260938 ((wq_completion)glock_workqueue){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2283
#1: ffff80001af47c00 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2285
stack backtrace:
CPU: 1 PID: 227 Comm: kworker/1:1H Not tainted 5.15.164-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/27/2024
Workqueue: glock_workqueue glock_work_func
Call trace:
dump_backtrace+0x0/0x530 arch/arm64/kernel/stacktrace.c:152
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:216
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2011
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x32d4/0x7638 kernel/locking/lockdep.c:5012
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5623
down_write+0xbc/0x12c kernel/locking/rwsem.c:1551
freeze_super+0x5c/0x388 fs/super.c:1682
freeze_go_sync+0x128/0x31c fs/gfs2/glops.c:587
do_xmote+0x304/0x1054 fs/gfs2/glock.c:742
run_queue+0x3f8/0x6bc fs/gfs2/glock.c:872
glock_work_func+0x27c/0x470 fs/gfs2/glock.c:1039
process_one_work+0x790/0x11b8 kernel/workqueue.c:2310
worker_thread+0x910/0x1034 kernel/workqueue.c:2457
kthread+0x37c/0x45c kernel/kthread.c:334
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:870
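
To summarize the cycle in the lockdep output above: the unmount path (gfs2_put_super) holds sb->s_umount while flush_workqueue(glock_workqueue) waits for pending glock work, and a glock work item (glock_work_func -> do_xmote -> freeze_go_sync) can call freeze_super, which blocks on that same s_umount. Below is a minimal userspace sketch of the cycle, not kernel code: all names are illustrative, two pthread mutexes stand in for the s_umount rwsem and for "the work item is running", and the wq_completion/work_completion pair is collapsed into a single lock for brevity.

/* deadlock-sketch.c: userspace analogue of the reported cycle.
 * Build: cc -pthread deadlock-sketch.c
 * Illustrative only; names mirror the kernel report but nothing here
 * is a kernel API.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t s_umount = PTHREAD_MUTEX_INITIALIZER; /* sb->s_umount */
static pthread_mutex_t gl_work = PTHREAD_MUTEX_INITIALIZER;  /* glock work running */

/* Analogue of glock_work_func -> do_xmote -> freeze_go_sync -> freeze_super */
static void *glock_worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&gl_work);   /* the work item starts executing */
	sleep(1);                       /* widen the race window */
	pthread_mutex_lock(&s_umount);  /* freeze_super(): down_write(&sb->s_umount) */
	pthread_mutex_unlock(&s_umount);
	pthread_mutex_unlock(&gl_work);
	return NULL;
}

/* Analogue of gfs2_put_super -> gfs2_gl_hash_clear -> flush_workqueue */
static void *unmounter(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&s_umount);  /* held across generic_shutdown_super() */
	sleep(1);
	pthread_mutex_lock(&gl_work);   /* flush_workqueue(): wait for running work */
	pthread_mutex_unlock(&gl_work);
	pthread_mutex_unlock(&s_umount);
	return NULL;
}

int main(void)
{
	pthread_t w, u;

	pthread_create(&w, NULL, glock_worker, NULL);
	pthread_create(&u, NULL, unmounter, NULL);
	/* If both sleeps are hit, neither thread can make progress: the
	 * worker owns gl_work and waits for s_umount, while the unmounter
	 * owns s_umount and waits for gl_work. */
	pthread_join(w, NULL);
	pthread_join(u, NULL);
	puts("no deadlock this run");
	return 0;
}

The usual way out of this pattern is to avoid taking s_umount from work running on a workqueue that is flushed while s_umount is held; whether that is the right shape of fix for gfs2 here is for the maintainers to judge.
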
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See: https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup