[v6.1] INFO: task hung in reiserfs_dirty_inode


syzbot

Jun 24, 2023, 5:53:02 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: e84a4e368abe Linux 6.1.35
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1058c723280000
kernel config: https://syzkaller.appspot.com/x/.config?x=f77b603a569f81a7
dashboard link: https://syzkaller.appspot.com/bug?extid=58cd75dddb85bbd0196a
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/30886a06f234/disk-e84a4e36.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/84ab10f3c74b/vmlinux-e84a4e36.xz
kernel image: https://storage.googleapis.com/syzbot-assets/ea73248cc405/Image-e84a4e36.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+58cd75...@syzkaller.appspotmail.com

INFO: task syz-executor.3:10616 blocked for more than 143 seconds.
Not tainted 6.1.35-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3 state:D stack:0 pid:10616 ppid:4265 flags:0x00000009
Call trace:
__switch_to+0x320/0x754 arch/arm64/kernel/process.c:553
context_switch kernel/sched/core.c:5241 [inline]
__schedule+0xee4/0x1c98 kernel/sched/core.c:6554
schedule+0xc4/0x170 kernel/sched/core.c:6630
schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6689
__mutex_lock_common+0xbd8/0x21a0 kernel/locking/mutex.c:679
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
reiserfs_write_lock+0x7c/0xe8 fs/reiserfs/lock.c:27
reiserfs_dirty_inode+0xe4/0x204 fs/reiserfs/super.c:704
__mark_inode_dirty+0x2f8/0x1354 fs/fs-writeback.c:2411
generic_update_time fs/inode.c:1859 [inline]
inode_update_time fs/inode.c:1872 [inline]
touch_atime+0x5f0/0x8ec fs/inode.c:1944
file_accessed include/linux/fs.h:2535 [inline]
generic_file_mmap+0xb0/0x11c mm/filemap.c:3455
call_mmap include/linux/fs.h:2210 [inline]
mmap_region+0xdd0/0x1a98 mm/mmap.c:2663
do_mmap+0xa00/0x1108 mm/mmap.c:1411
vm_mmap_pgoff+0x1a4/0x2b4 mm/util.c:520
ksys_mmap_pgoff+0x3c8/0x5b0 mm/mmap.c:1457
__do_sys_mmap arch/arm64/kernel/sys.c:28 [inline]
__se_sys_mmap arch/arm64/kernel/sys.c:21 [inline]
__arm64_sys_mmap+0xf8/0x110 arch/arm64/kernel/sys.c:21
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

Showing all locks held in the system:
3 locks held by kworker/u4:1/11:
1 lock held by rcu_tasks_kthre/12:
#0: ffff800015794df0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
#0: ffff8000157955f0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
#0: ffff800015794c20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:305
1 lock held by udevd/3839:
2 locks held by getty/3982:
#0: ffff0000d636c098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
#1: ffff80001ba202f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1210 drivers/tty/n_tty.c:2177
1 lock held by syz-executor.1/4270:
2 locks held by syz-executor.5/4272:
#0: ffff0000dc53c0e0 (&type->s_umount_key#84){+.+.}-{3:3}, at: deactivate_super+0xe8/0x110 fs/super.c:362
#1: ffff80001579a1f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#1: ffff80001579a1f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3e0/0x768 kernel/rcu/tree_exp.h:950
2 locks held by kworker/0:7/5761:
#0: ffff0000c0021d38 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x664/0x1404 kernel/workqueue.c:2262
#1: ffff800021ae7c20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x6a8/0x1404 kernel/workqueue.c:2264
4 locks held by kworker/0:13/6429:
#0: ffff0000c0021538 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x664/0x1404 kernel/workqueue.c:2262
#1: ffff800021c17c20 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_one_work+0x6a8/0x1404 kernel/workqueue.c:2264
#2: ffff00012fee00e0 (&type->s_umount_key#82){++++}-{3:3}, at: flush_old_commits+0xcc/0x2b8 fs/reiserfs/super.c:97
#3: ffff00012a3c5090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x7c/0xe8 fs/reiserfs/lock.c:27
1 lock held by syz-executor.3/10585:
#0: ffff0000c4352b48 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock include/linux/mmap_lock.h:117 [inline]
#0: ffff0000c4352b48 (&mm->mmap_lock){++++}-{3:3}, at: exit_mm+0x74/0x244 kernel/exit.c:539
4 locks held by syz-executor.3/10586:
3 locks held by syz-executor.3/10616:
#0: ffff0000c4352b48 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
#0: ffff0000c4352b48 (&mm->mmap_lock){++++}-{3:3}, at: vm_mmap_pgoff+0x15c/0x2b4 mm/util.c:518
#1: ffff00012fee0460 (sb_writers#23){.+.+}-{0:0}, at: file_accessed include/linux/fs.h:2535 [inline]
#1: ffff00012fee0460 (sb_writers#23){.+.+}-{0:0}, at: generic_file_mmap+0xb0/0x11c mm/filemap.c:3455
#2: ffff00012a3c5090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x7c/0xe8 fs/reiserfs/lock.c:27
1 lock held by syz-executor.3/11490:
#0: ffff00012fee00e0 (&type->s_umount_key#82){++++}-{3:3}, at: user_get_super+0xd8/0x240 fs/super.c:876
4 locks held by syz-executor.0/11556:

=============================================



---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Oct 2, 2023, 5:52:48 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.