[v6.1] INFO: task hung in ext4_evict_ea_inode


syzbot

Jun 20, 2023, 10:02:02 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: ca87e77a2ef8 Linux 6.1.34
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=14dee2db280000
kernel config: https://syzkaller.appspot.com/x/.config?x=c188e92022a334b
dashboard link: https://syzkaller.appspot.com/bug?extid=ea74bb6149f03627ee55
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f48d514c343c/disk-ca87e77a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/563336f1f216/vmlinux-ca87e77a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/2254afa3642b/bzImage-ca87e77a.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ea74bb...@syzkaller.appspotmail.com

INFO: task syz-executor.1:23152 blocked for more than 145 seconds.
Not tainted 6.1.34-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1 state:D stack:21656 pid:23152 ppid:19975 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5241 [inline]
__schedule+0x132c/0x4330 kernel/sched/core.c:6554
schedule+0xbf/0x180 kernel/sched/core.c:6630
mb_cache_entry_wait_unused+0x162/0x240 fs/mbcache.c:148
ext4_evict_ea_inode+0x146/0x2e0 fs/ext4/xattr.c:444
ext4_evict_inode+0x19d/0x1150 fs/ext4/inode.c:181
evict+0x2a4/0x620 fs/inode.c:664
ext4_xattr_set_entry+0x1d32/0x3ec0 fs/ext4/xattr.c:1806
ext4_xattr_block_set+0x698/0x3630 fs/ext4/xattr.c:1906
ext4_xattr_set_handle+0xdac/0x1560 fs/ext4/xattr.c:2392
ext4_xattr_set+0x231/0x3d0 fs/ext4/xattr.c:2494
__vfs_setxattr+0x3e7/0x420 fs/xattr.c:182
__vfs_setxattr_noperm+0x12a/0x5e0 fs/xattr.c:216
vfs_setxattr+0x21d/0x420 fs/xattr.c:309
do_setxattr fs/xattr.c:594 [inline]
setxattr+0x250/0x2b0 fs/xattr.c:617
path_setxattr+0x1bc/0x2a0 fs/xattr.c:636
__do_sys_setxattr fs/xattr.c:652 [inline]
__se_sys_setxattr fs/xattr.c:648 [inline]
__x64_sys_setxattr+0xb7/0xd0 fs/xattr.c:648
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7feaafc8c389
RSP: 002b:00007feab0ac7168 EFLAGS: 00000246 ORIG_RAX: 00000000000000bc
RAX: ffffffffffffffda RBX: 00007feaafdabf80 RCX: 00007feaafc8c389
RDX: 0000000000000000 RSI: 00000000200001c0 RDI: 0000000020000140
RBP: 00007feaafcd7493 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fffef83a51f R14: 00007feab0ac7300 R15: 0000000000022000
</TASK>
INFO: task syz-executor.1:23164 blocked for more than 150 seconds.
Not tainted 6.1.34-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1 state:D stack:23968 pid:23164 ppid:19975 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5241 [inline]
__schedule+0x132c/0x4330 kernel/sched/core.c:6554
schedule+0xbf/0x180 kernel/sched/core.c:6630
__wait_on_freeing_inode fs/inode.c:2199 [inline]
find_inode_fast+0x315/0x450 fs/inode.c:950
iget_locked+0xc7/0x830 fs/inode.c:1273
__ext4_iget+0x25d/0x3ef0 fs/ext4/inode.c:4818
ext4_xattr_inode_cache_find fs/ext4/xattr.c:1493 [inline]
ext4_xattr_inode_lookup_create fs/ext4/xattr.c:1528 [inline]
ext4_xattr_set_entry+0x2213/0x3ec0 fs/ext4/xattr.c:1669
ext4_xattr_block_set+0xb0e/0x3630 fs/ext4/xattr.c:1975
ext4_xattr_set_handle+0xdac/0x1560 fs/ext4/xattr.c:2392
ext4_xattr_set+0x231/0x3d0 fs/ext4/xattr.c:2494
__vfs_setxattr+0x3e7/0x420 fs/xattr.c:182
__vfs_setxattr_noperm+0x12a/0x5e0 fs/xattr.c:216
vfs_setxattr+0x21d/0x420 fs/xattr.c:309
do_setxattr fs/xattr.c:594 [inline]
setxattr+0x250/0x2b0 fs/xattr.c:617
path_setxattr+0x1bc/0x2a0 fs/xattr.c:636
__do_sys_setxattr fs/xattr.c:652 [inline]
__se_sys_setxattr fs/xattr.c:648 [inline]
__x64_sys_setxattr+0xb7/0xd0 fs/xattr.c:648
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7feaafc8c389
RSP: 002b:00007feab0aa6168 EFLAGS: 00000246 ORIG_RAX: 00000000000000bc
RAX: ffffffffffffffda RBX: 00007feaafdac050 RCX: 00007feaafc8c389
RDX: 00000000200005c0 RSI: 0000000020000180 RDI: 00000000200000c0
RBP: 00007feaafcd7493 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000002000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fffef83a51f R14: 00007feab0aa6300 R15: 0000000000022000
</TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cf27470 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cf27c70 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/27:
#0: ffffffff8cf272a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by getty/3306:
#0: ffff88814b519098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2177
1 lock held by syz-executor.5/3580:
#0: ffffffff8cf4f890 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: copy_process+0x23e9/0x4020 kernel/fork.c:2363
4 locks held by kworker/u4:12/3889:
2 locks held by kworker/1:10/4197:
#0: ffff888012466538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc9000b657d20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
1 lock held by syz-executor.0/10130:
#0: ffffffff8cf4f890 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: copy_process+0x23e9/0x4020 kernel/fork.c:2363
3 locks held by syz-executor.1/23152:
#0: ffff888147362460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:393
#1: ffff88807a19f258 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff88807a19f258 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: vfs_setxattr+0x1dd/0x420 fs/xattr.c:308
#2: ffff88807a19ef20 (&ei->xattr_sem){++++}-{3:3}, at: ext4_write_lock_xattr fs/ext4/xattr.h:155 [inline]
#2: ffff88807a19ef20 (&ei->xattr_sem){++++}-{3:3}, at: ext4_xattr_set_handle+0x270/0x1560 fs/ext4/xattr.c:2307
3 locks held by syz-executor.1/23164:
#0: ffff888147362460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:393
#1: ffff8880739f5440 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff8880739f5440 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: vfs_setxattr+0x1dd/0x420 fs/xattr.c:308
#2: ffff8880739f5108 (&ei->xattr_sem){++++}-{3:3}, at: ext4_write_lock_xattr fs/ext4/xattr.h:155 [inline]
#2: ffff8880739f5108 (&ei->xattr_sem){++++}-{3:3}, at: ext4_xattr_set_handle+0x270/0x1560 fs/ext4/xattr.c:2307
1 lock held by syz-executor.2/24662:
#0: ffffffff8cf4f890 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x32d/0x2300 kernel/exit.c:825
1 lock held by syz-executor.2/24666:
#0: ffffffff8cf4f890 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x32d/0x2300 kernel/exit.c:825
1 lock held by syz-executor.2/24667:
#0: ffffffff8cf4f890 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x32d/0x2300 kernel/exit.c:825
5 locks held by kvm-nx-lpage-re/24665:
#0: ffffffff8cf4f6a8 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_attach_task_all+0x23/0xe0 kernel/cgroup/cgroup-v1.c:61
#1: ffffffff8cdc4970 (cpu_hotplug_lock){++++}-{0:0}, at: cgroup_attach_lock+0xd/0x30 kernel/cgroup/cgroup.c:2429
#2: ffffffff8cf4f890 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_attach_task_all+0x2d/0xe0 kernel/cgroup/cgroup-v1.c:62
#3: ffffffff8cf5bfd0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_can_attach+0x208/0x4e0 kernel/cgroup/cpuset.c:2482
#4: ffffffff8cf2c878 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#4: ffffffff8cf2c878 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x479/0x8a0 kernel/rcu/tree_exp.h:950

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 6.1.34-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xf18/0xf60 kernel/hung_task.c:377
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 3789 Comm: kworker/u4:11 Not tainted 6.1.34-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
Workqueue: phy23 ieee80211_iface_work
RIP: 0010:lookup_chain_cache kernel/locking/lockdep.c:3725 [inline]
RIP: 0010:lookup_chain_cache_add kernel/locking/lockdep.c:3745 [inline]
RIP: 0010:validate_chain+0x16f/0x58e0 kernel/locking/lockdep.c:3800
Code: b8 eb 83 b5 80 46 86 c8 61 49 0f af c6 48 c1 e8 2f 48 8d 1c c5 a0 b1 14 90 48 89 d8 48 c1 e8 03 48 89 44 24 58 42 80 3c 20 00 <74> 08 48 89 df e8 27 92 75 00 48 89 5c 24 20 48 8b 1b 48 85 db 74
RSP: 0018:ffffc900058bf600 EFLAGS: 00000046
RAX: 1ffffffff203b210 RBX: ffffffff901d9080 RCX: ffffffff816a8bc5
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff9028b308
RBP: ffffc900058bf8b0 R08: dffffc0000000000 R09: fffffbfff2051662
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: ffff888038f8a910 R14: bb91437071f9b968 R15: 1ffff110071f1522
FS: 0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f2596b4a718 CR3: 00000000332fc000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
skb_dequeue+0x29/0x140 net/core/skbuff.c:3411
ieee80211_iface_work+0x195/0xce0 net/mac80211/iface.c:1680
process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Sep 28, 2023, 10:02:34 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.