[moderation] [ceph?] [fs?] INFO: task hung in ceph_mdsc_destroy

syzbot

Feb 9, 2024, 9:57:19 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 076d56d74f17 Add linux-next specific files for 20240202
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=12a48eafe80000
kernel config: https://syzkaller.appspot.com/x/.config?x=428086ff1c010d9f
dashboard link: https://syzkaller.appspot.com/bug?extid=be62847e659b2b618be8
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
CC: [ceph-...@vger.kernel.org idry...@gmail.com jla...@kernel.org linux-...@vger.kernel.org linux-...@vger.kernel.org xiu...@redhat.com]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/dece45d1a4b5/disk-076d56d7.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/4921e269b178/vmlinux-076d56d7.xz
kernel image: https://storage.googleapis.com/syzbot-assets/2a9156da9091/bzImage-076d56d7.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+be6284...@syzkaller.appspotmail.com

INFO: task syz-executor.1:14725 blocked for more than 143 seconds.
Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1 state:D stack:26672 pid:14725 tgid:14720 ppid:5093 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5400 [inline]
__schedule+0x17df/0x4a40 kernel/sched/core.c:6727
__schedule_loop kernel/sched/core.c:6804 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6819
schedule_timeout+0xb0/0x310 kernel/time/timer.c:2159
do_wait_for_common kernel/sched/completion.c:95 [inline]
__wait_for_common kernel/sched/completion.c:116 [inline]
wait_for_common kernel/sched/completion.c:127 [inline]
wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
__flush_workqueue+0x730/0x1630 kernel/workqueue.c:3617
ceph_mdsc_destroy+0x57/0x300 fs/ceph/mds_client.c:5712
destroy_fs_client+0x112/0x270 fs/ceph/super.c:893
deactivate_locked_super+0xc4/0x130 fs/super.c:477
ceph_get_tree+0x9a9/0x17b0 fs/ceph/super.c:1361
vfs_get_tree+0x90/0x2a0 fs/super.c:1784
vfs_cmd_create+0xe4/0x230 fs/fsopen.c:230
__do_sys_fsconfig fs/fsopen.c:476 [inline]
__se_sys_fsconfig+0x967/0xec0 fs/fsopen.c:349
do_syscall_64+0xfb/0x240
entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7fc1aee7dda9
RSP: 002b:00007fc1afb630c8 EFLAGS: 00000246 ORIG_RAX: 00000000000001af
RAX: ffffffffffffffda RBX: 00007fc1aefabf80 RCX: 00007fc1aee7dda9
RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000000000003
RBP: 00007fc1aeeca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fc1aefabf80 R15: 00007ffd0e913e58
</TASK>
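
[The register dump identifies the blocked syscall: ORIG_RAX 0x1af is 431, fsconfig(2) on x86-64, and RSI 6 is FSCONFIG_CMD_CREATE, matching the __do_sys_fsconfig and vfs_cmd_create frames. The deactivate_locked_super frame inside ceph_get_tree indicates the mount attempt failed and the hang is in its error path. A hypothetical userspace sketch of the triggering sequence follows; the mount options syzkaller set before the create step are not in the report.]

#define _GNU_SOURCE
#include <linux/mount.h>   /* FSCONFIG_CMD_CREATE */
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* fsopen("ceph", 0) returns a filesystem-context fd (RDI was 3). */
    int fsfd = syscall(SYS_fsopen, "ceph", 0);
    if (fsfd < 0) {
        perror("fsopen");
        return 1;
    }

    /* FSCONFIG_SET_STRING calls ("mon_addr", "fsid", ...) would go here;
     * the report does not show which options were used. */

    /* FSCONFIG_CMD_CREATE runs vfs_cmd_create() -> vfs_get_tree() ->
     * ceph_get_tree(); per the trace above, its failure path then blocks
     * in ceph_mdsc_destroy(). */
    if (syscall(SYS_fsconfig, fsfd, FSCONFIG_CMD_CREATE, NULL, NULL, 0) < 0)
        perror("fsconfig");

    close(fsfd);
    return 0;
}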

Showing all locks held in the system:
1 lock held by khungtaskd/29:
#0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
#0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
#0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
2 locks held by getty/4827:
#0: ffff88802a7f60a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc900031332f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
1 lock held by syz-executor.4/5092:
#0: ffffffff8e1360f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
#0: ffffffff8e1360f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x39a/0x820 kernel/rcu/tree_exp.h:939
2 locks held by kworker/u4:8/9898:
1 lock held by syz-executor.1/14725:
#0: ffff88802eb72c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
#0: ffff88802eb72c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
1 lock held by udevd/16111:
#0: ffff88801d09ce40 (mapping.invalidate_lock#2){.+.+}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:857 [inline]
#0: ffff88801d09ce40 (mapping.invalidate_lock#2){.+.+}-{3:3}, at: page_cache_ra_unbounded+0xf2/0x7c0 mm/readahead.c:225
4 locks held by syz-executor.2/17518:
3 locks held by syz-executor.3/17531:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xfb0/0xff0 kernel/hung_task.c:379
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:242
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
NMI backtrace for cpu 0 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:86 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_safe_halt+0x21/0x30 drivers/acpi/processor_idle.c:112
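
[For context on the hang itself: the ceph_mdsc_destroy() frame calls into __flush_workqueue(), plausibly via an inlined ceph_msgr_flush(), and a workqueue flush sleeps in wait_for_completion() until every previously queued work item has run. One work item that never returns is enough to leave the flushing task in uninterruptible sleep, which is what khungtaskd reports after 143 seconds. Below is a deliberately self-deadlocking kernel-module sketch of that generic pattern; it is illustrative only, not the ceph code.]

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static struct workqueue_struct *sketch_wq;

/* Stand-in for a work item that blocks forever, e.g. waiting on an
 * unreachable MDS. */
static void stuck_work_fn(struct work_struct *work)
{
    for (;;)
        ssleep(1);
}
static DECLARE_WORK(stuck_work, stuck_work_fn);

static int __init sketch_init(void)
{
    sketch_wq = alloc_workqueue("sketch_wq", 0, 0);
    if (!sketch_wq)
        return -ENOMEM;

    queue_work(sketch_wq, &stuck_work);

    /* Never returns: the flush waits for stuck_work to finish. After
     * ~2 minutes khungtaskd prints an "INFO: task ... blocked for more
     * than N seconds" report like the one above. Do not load this on a
     * machine you care about. */
    flush_workqueue(sketch_wq);
    return 0;
}
module_init(sketch_init);

MODULE_LICENSE("GPL");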


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Apr 2, 2024, 7:48:17 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while; there is no reproducer and no recent activity.