[v5.15] INFO: task hung in vfs_setxattr (3)


syzbot

Apr 28, 2024, 12:46:29 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: b925f60c6ee7 Linux 5.15.157
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1162a8a7180000
kernel config: https://syzkaller.appspot.com/x/.config?x=2a1cb0d51cbb9dfb
dashboard link: https://syzkaller.appspot.com/bug?extid=d64441c9cb66600339d4
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/07f51426f82f/disk-b925f60c.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d41cc73399aa/vmlinux-b925f60c.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0f542c09e64b/bzImage-b925f60c.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+d64441...@syzkaller.appspotmail.com

INFO: task syz-executor.1:4770 blocked for more than 143 seconds.
Not tainted 5.15.157-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1 state:D stack:27520 pid: 4770 ppid: 3516 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
rwsem_down_write_slowpath+0xf0c/0x16a0 kernel/locking/rwsem.c:1165
inode_lock include/linux/fs.h:789 [inline]
vfs_setxattr+0x1dd/0x420 fs/xattr.c:302
do_setxattr fs/xattr.c:588 [inline]
setxattr+0x27e/0x2e0 fs/xattr.c:611
path_setxattr+0x1bc/0x2a0 fs/xattr.c:630
__do_sys_lsetxattr fs/xattr.c:653 [inline]
__se_sys_lsetxattr fs/xattr.c:649 [inline]
__x64_sys_lsetxattr+0xb4/0xd0 fs/xattr.c:649
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f724aed6ea9
RSP: 002b:00007f72494280c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000bd
RAX: ffffffffffffffda RBX: 00007f724b005050 RCX: 00007f724aed6ea9
RDX: 0000000020000340 RSI: 00000000200000c0 RDI: 0000000020000080
RBP: 00007f724af234a4 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000104 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f724b005050 R15: 00007ffe9a2ef688
</TASK>
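
For reference, ORIG_RAX 0xbd in the register dump above is lsetxattr(2) on
x86_64, matching the __x64_sys_lsetxattr frame in the trace. Below is a
minimal user-space sketch of that call; it is only an illustration of the
syscall the blocked task issued, not a reproducer (syzbot has none yet),
and the path, attribute name and value are hypothetical. The 0x104-byte
value size mirrors R10 in the dump.

/*
 * Illustration only, not the syzbot reproducer (none exists yet).
 * The target path and attribute name are hypothetical; the 0x104
 * (260) byte value size mirrors R10 in the register dump above.
 */
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

int main(void)
{
	const char *path = "./file-on-f2fs";   /* hypothetical target file */
	const char *name = "user.syz";         /* hypothetical attribute name */
	char value[0x104];

	memset(value, 'A', sizeof(value));

	/*
	 * lsetxattr(2) reaches vfs_setxattr(), which takes the inode rwsem
	 * for write via inode_lock(); in the report that acquisition never
	 * completes and the task is flagged as hung after 143 seconds.
	 */
	if (lsetxattr(path, name, value, sizeof(value), 0) != 0)
		perror("lsetxattr");

	return 0;
}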

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8c91fb20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
4 locks held by kworker/u4:3/410:
#0: ffff8881423f4938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90002e07d20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffff8881417880e0 (&type->s_umount_key#53){++++}-{3:3}, at: trylock_super+0x1b/0xf0 fs/super.c:418
#3: ffff888074199108 (&sbi->gc_lock){+.+.}-{3:3}, at: f2fs_balance_fs+0x4d4/0x6a0 fs/f2fs/segment.c:528
1 lock held by udevd/2963:
#0: ffff88801adbd118 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x14d/0xa50 block/bdev.c:817
2 locks held by getty/3258:
#0: ffff888023d66098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc9000250b2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1db0 drivers/tty/n_tty.c:2158
5 locks held by syz-executor.1/4736:
2 locks held by syz-executor.1/4770:
#0: ffff888141788460 (sb_writers#15){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:377
#1: ffff888054c9e9d0 (&sb->s_type->i_mutex_key#23){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:789 [inline]
#1: ffff888054c9e9d0 (&sb->s_type->i_mutex_key#23){+.+.}-{3:3}, at: vfs_setxattr+0x1dd/0x420 fs/xattr.c:302
1 lock held by syz-executor.1/8124:
#0: ffff888075f87410 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:789 [inline]
#0: ffff888075f87410 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:648 [inline]
#0: ffff888075f87410 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x98/0x230 net/socket.c:1336
2 locks held by syz-executor.3/8214:
#0: ffff88801adbd118 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xfb/0x790 block/bdev.c:912
#1: ffff88801adb3468 (&lo->lo_mutex){+.+.}-{3:3}, at: lo_release+0x4d/0x1f0 drivers/block/loop.c:2070
2 locks held by syz-executor.0/8217:
#0: ffff88801accf918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xfb/0x790 block/bdev.c:912
#1: ffff888147db2468 (&lo->lo_mutex){+.+.}-{3:3}, at: lo_release+0x4d/0x1f0 drivers/block/loop.c:2070
2 locks held by syz-executor.4/8224:
#0: ffff88801ae78918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xfb/0x790 block/bdev.c:912
#1: ffff88801adb6468 (&lo->lo_mutex){+.+.}-{3:3}, at: lo_release+0x4d/0x1f0 drivers/block/loop.c:2070

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 5.15.157-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2d0 lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
watchdog+0xe72/0xeb0 kernel/hung_task.c:295
kthread+0x3f6/0x4f0 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:300
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 6748 Comm: kworker/u4:9 Not tainted 5.15.157-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: bat_events batadv_tt_purge
RIP: 0010:__kasan_check_read+0x6/0x10 mm/kasan/shadow.c:31
Code: 41 5e 41 5f 5d c3 48 c7 c7 23 af 16 8c eb 0a 48 c7 c7 5b af 16 8c 4c 89 fe e8 56 6c 52 08 31 db eb d7 cc cc 89 f6 48 8b 0c 24 <31> d2 e9 63 ef ff ff 0f 1f 00 89 f6 48 8b 0c 24 ba 01 00 00 00 e9
RSP: 0018:ffffc90003677950 EFLAGS: 00000002
RAX: 000000000000001f RBX: 00000000000007db RCX: ffffffff81631958
RDX: 0000000000000002 RSI: 0000000000000008 RDI: ffffffff8fbf71b8
RBP: 0000000000000002 R08: dffffc0000000000 R09: fffffbfff1f7ee36
R10: 0000000000000000 R11: dffffc0000000001 R12: ffff8880226dc6b8
R13: dffffc0000000000 R14: 0000000000000004 R15: ffff8880226dc698
FS: 0000000000000000(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c00007302c CR3: 000000007980e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
instrument_atomic_read include/linux/instrumented.h:71 [inline]
test_bit include/asm-generic/bitops/instrumented-non-atomic.h:134 [inline]
hlock_class kernel/locking/lockdep.c:197 [inline]
mark_lock+0x98/0x340 kernel/locking/lockdep.c:4569
mark_held_locks kernel/locking/lockdep.c:4193 [inline]
__trace_hardirqs_on_caller kernel/locking/lockdep.c:4211 [inline]
lockdep_hardirqs_on_prepare+0x27d/0x7a0 kernel/locking/lockdep.c:4278
trace_hardirqs_on+0x67/0x80 kernel/trace/trace_preemptirq.c:49
__local_bh_enable_ip+0x164/0x1f0 kernel/softirq.c:388
spin_unlock_bh include/linux/spinlock.h:408 [inline]
batadv_tt_local_purge+0x2a0/0x340 net/batman-adv/translation-table.c:1356
batadv_tt_purge+0x31/0xa40 net/batman-adv/translation-table.c:3560
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:300
</TASK>
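
As the warning at the top of the log notes, the hung-task watchdog
(khungtaskd, kernel/hung_task.c in the backtrace above) can be silenced by
writing 0 to /proc/sys/kernel/hung_task_timeout_secs. The sketch below is
just the C equivalent of that echo command, assuming root and
CONFIG_DETECT_HUNG_TASK; it only suppresses the report and does nothing
about the blocked rwsem acquisition itself.

/*
 * Sketch only: prints the current hung-task timeout and then disables
 * the warning, equivalent to
 *   echo 0 > /proc/sys/kernel/hung_task_timeout_secs
 * from the log message. Requires root; it hides the message, it does
 * not fix the stalled vfs_setxattr() task.
 */
#include <stdio.h>

#define HUNG_TASK_SYSCTL "/proc/sys/kernel/hung_task_timeout_secs"

int main(void)
{
	unsigned long secs;
	FILE *f = fopen(HUNG_TASK_SYSCTL, "r");

	if (!f || fscanf(f, "%lu", &secs) != 1) {
		perror(HUNG_TASK_SYSCTL);
		return 1;
	}
	printf("current timeout: %lu seconds\n", secs);
	fclose(f);

	f = fopen(HUNG_TASK_SYSCTL, "w");
	if (!f) {
		perror(HUNG_TASK_SYSCTL);
		return 1;
	}
	fputs("0\n", f);   /* 0 disables the "task blocked" warning */
	fclose(f);
	return 0;
}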


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup