[v6.1] INFO: task hung in vfs_unlink (3)


syzbot

Mar 19, 2024, 8:40:30 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d7543167affd Linux 6.1.82
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=11fa66be180000
kernel config: https://syzkaller.appspot.com/x/.config?x=59059e181681c079
dashboard link: https://syzkaller.appspot.com/bug?extid=56b3aec222f3a8f4d18b
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/88220954516a/disk-d7543167.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/c9062e074717/vmlinux-d7543167.xz
kernel image: https://storage.googleapis.com/syzbot-assets/70391b45a752/bzImage-d7543167.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+56b3ae...@syzkaller.appspotmail.com

INFO: task syz-fuzzer:3558 blocked for more than 143 seconds.
Not tainted 6.1.82-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-fuzzer state:D stack:22120 pid:3558 ppid:3537 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
rwsem_down_write_slowpath+0xea1/0x14b0 kernel/locking/rwsem.c:1189
inode_lock include/linux/fs.h:758 [inline]
vfs_unlink+0xe0/0x5f0 fs/namei.c:4313
do_unlinkat+0x4a5/0x820 fs/namei.c:4392
__do_sys_unlinkat fs/namei.c:4435 [inline]
__se_sys_unlinkat fs/namei.c:4428 [inline]
__x64_sys_unlinkat+0xca/0xf0 fs/namei.c:4428
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x40720e
RSP: 002b:000000c0028bb738 EFLAGS: 00000202 ORIG_RAX: 0000000000000107
RAX: ffffffffffffffda RBX: 0000000000000016 RCX: 000000000040720e
RDX: 0000000000000000 RSI: 000000c00ec44510 RDI: 0000000000000016
RBP: 000000c0028bb778 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000051
R13: 000000c00a5f3c00 R14: 000000c0003ebd40 R15: 000000c00eb91800
</TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8d12ab10 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8d12b310 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by khungtaskd/28:
#0: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:319 [inline]
#0: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:760 [inline]
#0: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6494
2 locks held by getty/3305:
#0: ffff88814ba6a098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2188
3 locks held by syz-fuzzer/3558:
#0: ffff88814c1a8460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:393
#1: ffff8880af769810 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:793 [inline]
#1: ffff8880af769810 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: do_unlinkat+0x266/0x820 fs/namei.c:4375
#2: ffff88807f428400 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
#2: ffff88807f428400 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: vfs_unlink+0xe0/0x5f0 fs/namei.c:4313
2 locks held by syz-executor.1/26382:
2 locks held by syz-executor.0/26487:
5 locks held by syz-executor.4/2878:
4 locks held by syz-executor.2/2980:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.1.82-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xf88/0xfd0 kernel/hung_task.c:377
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 29171 Comm: kworker/u4:17 Not tainted 6.1.82-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Workqueue: phy52 ieee80211_iface_work
RIP: 0010:lockdep_hardirqs_off+0x7b/0x100 kernel/locking/lockdep.c:4417
Code: 65 8b 05 d8 eb 78 75 85 c0 74 53 65 48 8b 1d 5c e5 78 75 48 c7 c7 40 ee eb 8a e8 90 17 00 00 65 c7 05 b5 eb 78 75 00 00 00 00 <4c> 89 b3 88 0a 00 00 8b 83 78 0a 00 00 ff c0 89 83 78 0a 00 00 89
RSP: 0018:ffffc90005def0e8 EFLAGS: 00000086
RAX: 0000000000000001 RBX: ffff8880a22a9dc0 RCX: 0000000000006e40
RDX: 0000000000000000 RSI: ffffffff8aebee40 RDI: ffffffff8b3d2b40
RBP: ffffc90005def1b0 R08: ffffffff89e8afac R09: ffffffff89e94925
R10: 0000000000000002 R11: ffff8880a22a9dc0 R12: 0000000000000246
R13: 1ffff92000bbde24 R14: ffffffff8a93c7ac R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055e939a5a028 CR3: 000000007f92b000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
trace_hardirqs_off+0xe/0x40 kernel/trace/trace_preemptirq.c:76
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:108 [inline]
_raw_spin_lock_irqsave+0xac/0x120 kernel/locking/spinlock.c:162
debug_object_activate+0x68/0x4e0 lib/debugobjects.c:697
debug_rcu_head_queue kernel/rcu/rcu.h:189 [inline]
kvfree_call_rcu+0xb4/0x8c0 kernel/rcu/tree.c:3391
cfg80211_update_known_bss+0x16b/0x9e0
cfg80211_bss_update+0x187/0x21e0 net/wireless/scan.c:1773
cfg80211_inform_single_bss_frame_data net/wireless/scan.c:2434 [inline]
cfg80211_inform_bss_frame_data+0xae4/0x1680 net/wireless/scan.c:2467
ieee80211_bss_info_update+0x847/0xf00 net/mac80211/scan.c:190
ieee80211_rx_bss_info net/mac80211/ibss.c:1120 [inline]
ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1609 [inline]
ieee80211_ibss_rx_queued_mgmt+0x1962/0x2dd0 net/mac80211/ibss.c:1638
ieee80211_iface_process_skb net/mac80211/iface.c:1632 [inline]
ieee80211_iface_work+0x7aa/0xce0 net/mac80211/iface.c:1686
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup