[v6.1] INFO: task hung in hfs_mdb_commit (2)


syzbot

Apr 7, 2024, 6:21:20 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 347385861c50 Linux 6.1.84
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17f4679d180000
kernel config: https://syzkaller.appspot.com/x/.config?x=c6572e59ce99583c
dashboard link: https://syzkaller.appspot.com/bug?extid=e8f67c5a2ba608c1f263
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/101d17869fc3/disk-34738586.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1fd47dc155f3/vmlinux-34738586.xz
kernel image: https://storage.googleapis.com/syzbot-assets/3a82e916ef21/Image-34738586.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e8f67c...@syzkaller.appspotmail.com

INFO: task kworker/1:5:4286 blocked for more than 143 seconds.
Not tainted 6.1.84-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:5 state:D stack:0 pid:4286 ppid:2 flags:0x00000008
Workqueue: events_long flush_mdb
Call trace:
__switch_to+0x320/0x754 arch/arm64/kernel/process.c:553
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0xee4/0x1c98 kernel/sched/core.c:6558
schedule+0xc4/0x170 kernel/sched/core.c:6634
io_schedule+0x8c/0x188 kernel/sched/core.c:8786
bit_wait_io+0x1c/0xac kernel/sched/wait_bit.c:209
__wait_on_bit_lock+0xcc/0x1e8 kernel/sched/wait_bit.c:90
out_of_line_wait_on_bit_lock+0x194/0x21c kernel/sched/wait_bit.c:117
wait_on_bit_lock_io include/linux/wait_bit.h:208 [inline]
__lock_buffer+0x78/0xac fs/buffer.c:69
lock_buffer include/linux/buffer_head.h:397 [inline]
hfs_mdb_commit+0x140/0xf2c fs/hfs/mdb.c:271
flush_mdb+0x6c/0x9c fs/hfs/super.c:66
process_one_work+0x7ac/0x1404 kernel/workqueue.c:2292
worker_thread+0x8e4/0xfec kernel/workqueue.c:2439
kthread+0x250/0x2d8 kernel/kthread.c:376
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:864

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffff800015a14e70 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:516
1 lock held by rcu_tasks_trace/13:
#0: ffff800015a15670 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:516
1 lock held by khungtaskd/28:
#0: ffff800015a14ca0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:349
1 lock held by klogd/3831:
2 locks held by getty/3989:
#0: ffff0000d7ee0098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
#1: ffff80001bcd02f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1214 drivers/tty/n_tty.c:2188
1 lock held by syz-executor.0/4238:
2 locks held by kworker/1:5/4286:
#0: ffff0000c0021138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x664/0x1404 kernel/workqueue.c:2265
#1: ffff80001eb67c20 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x6a8/0x1404 kernel/workqueue.c:2267

=============================================



---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Apr 8, 2024, 8:18:25 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 9465fef4ae35 Linux 5.15.153
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1681d975180000
kernel config: https://syzkaller.appspot.com/x/.config?x=176c746ee3348b33
dashboard link: https://syzkaller.appspot.com/bug?extid=fe315034987e78af3fc1
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2962c02652ce/disk-9465fef4.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d0f5a1ce082d/vmlinux-9465fef4.xz
kernel image: https://storage.googleapis.com/syzbot-assets/86b5b1eea636/bzImage-9465fef4.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+fe3150...@syzkaller.appspotmail.com

INFO: task syz-executor.2:14267 blocked for more than 143 seconds.
Not tainted 5.15.153-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.2 state:D stack:20704 pid:14267 ppid: 1 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
io_schedule+0x88/0x100 kernel/sched/core.c:8484
bit_wait_io+0xe/0xc0 kernel/sched/wait_bit.c:209
__wait_on_bit_lock+0xbf/0x1a0 kernel/sched/wait_bit.c:90
out_of_line_wait_on_bit_lock+0x1d0/0x250 kernel/sched/wait_bit.c:117
lock_buffer include/linux/buffer_head.h:402 [inline]
hfs_mdb_commit+0xafb/0xfd0 fs/hfs/mdb.c:325
hfs_sync_fs+0x11/0x20 fs/hfs/super.c:35
sync_filesystem+0xe8/0x220 fs/sync.c:56
generic_shutdown_super+0x6e/0x2c0 fs/super.c:448
kill_block_super+0x7a/0xe0 fs/super.c:1414
deactivate_locked_super+0xa0/0x110 fs/super.c:335
cleanup_mnt+0x44e/0x500 fs/namespace.c:1143
task_work_run+0x129/0x1a0 kernel/task_work.c:164
tracehook_notify_resume include/linux/tracehook.h:189 [inline]
exit_to_user_mode_loop+0x106/0x130 kernel/entry/common.c:175
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:208
__syscall_exit_to_user_mode_work kernel/entry/common.c:290 [inline]
syscall_exit_to_user_mode+0x5d/0x250 kernel/entry/common.c:301
do_syscall_64+0x49/0xb0 arch/x86/entry/common.c:86
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f2bcec21197
RSP: 002b:00007ffc0e0b3b78 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f2bcec21197
RDX: 0000000000000000 RSI: 000000000000000a RDI: 00007ffc0e0b3c30
RBP: 00007ffc0e0b3c30 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007ffc0e0b4cf0
R13: 00007f2bcec6b3b9 R14: 0000000000077567 R15: 000000000000000f
</TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by syslogd/2945:
1 lock held by klogd/2952:
2 locks held by getty/3262:
#0: ffff888024ee6098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc9000249b2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1db0 drivers/tty/n_tty.c:2158
2 locks held by kworker/u4:10/5516:
#0: ffff8880b9a3a318 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
#1: ffff8880b9a27848 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x53d/0x810 kernel/sched/psi.c:891
2 locks held by kworker/1:27/11010:
#0: ffff888011c71138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90004bf7d20 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
1 lock held by syz-executor.2/14267:
#0: ffff8880629fe0e0 (&type->s_umount_key#49){+.+.}-{3:3}, at: deactivate_super+0xa9/0xe0 fs/super.c:365

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 5.15.153-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
watchdog+0xe72/0xeb0 kernel/hung_task.c:295
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 2952 Comm: klogd Not tainted 5.15.153-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:35 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:70 [inline]
RIP: 0010:arch_local_irq_save arch/x86/include/asm/irqflags.h:106 [inline]
RIP: 0010:lock_acquire+0x177/0x4f0 kernel/locking/lockdep.c:5619
Code: 4c 89 fb 48 c1 eb 03 42 80 3c 2b 00 74 08 4c 89 ff e8 6d 6c 67 00 48 c7 84 24 80 00 00 00 00 00 00 00 9c 8f 84 24 80 00 00 00 <42> 80 3c 2b 00 74 08 4c 89 ff e8 ca 6b 67 00 48 8d 5c 24 60 4c 8b
RSP: 0018:ffffc900000079a0 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 1ffff92000000f44 RCX: ffffffff81628c1c
RDX: 0000000000000000 RSI: ffffffff8ad88fa0 RDI: ffffffff8ad88f60
RBP: ffffc90000007af0 R08: dffffc0000000000 R09: fffffbfff1bc72a6
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff92000000f3c
R13: dffffc0000000000 R14: 0000000000000000 R15: ffffc90000007a20
FS: 00007f51d5bfe380(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c000eb6000 CR3: 0000000023b62000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x56d/0xd00
call_timer_fn+0x16d/0x560 kernel/time/timer.c:1421
expire_timers kernel/time/timer.c:1461 [inline]
__run_timers+0x6a8/0x890 kernel/time/timer.c:1737
run_timer_softirq+0x63/0xf0 kernel/time/timer.c:1750
__do_softirq+0x3b3/0x93a kernel/softirq.c:558
invoke_softirq kernel/softirq.c:432 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:637
irq_exit_rcu+0x5/0x20 kernel/softirq.c:649
sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1096
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
RIP: 0010:free_unref_page+0x213/0x2d0 mm/page_alloc.c:3418
Code: 2b 00 74 08 4c 89 ff e8 9b c5 0a 00 f6 44 24 41 02 0f 85 83 00 00 00 41 f7 c4 00 02 00 00 74 01 fb 48 c7 44 24 20 0e 36 e0 45 <4b> c7 44 35 00 00 00 00 00 66 43 c7 44 35 09 00 00 43 c6 44 35 0b
RSP: 0018:ffffc900024d7360 EFLAGS: 00000206
RAX: 56e21b59325b4200 RBX: 1ffff9200049ae74 RCX: ffffffff913c7f03
RDX: dffffc0000000000 RSI: ffffffff8a8b1500 RDI: ffffffff8ad88fc0
RBP: ffffc900024d7438 R08: ffffffff8186b7c0 R09: fffffbfff1bc72a6
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000246
R13: dffffc0000000000 R14: 1ffff9200049ae70 R15: ffffc900024d73a0
free_slab mm/slub.c:2015 [inline]
discard_slab mm/slub.c:2021 [inline]
__unfreeze_partials+0x1b7/0x210 mm/slub.c:2507
put_cpu_partial+0x132/0x1a0 mm/slub.c:2587
do_slab_free mm/slub.c:3487 [inline]
___cache_free+0xe3/0x100 mm/slub.c:3506
qlist_free_all+0x36/0x90 mm/kasan/quarantine.c:176
kasan_quarantine_reduce+0x162/0x180 mm/kasan/quarantine.c:283
__kasan_slab_alloc+0x2f/0xc0 mm/kasan/common.c:444
kasan_slab_alloc include/linux/kasan.h:254 [inline]
slab_post_alloc_hook+0x53/0x380 mm/slab.h:519
slab_alloc_node mm/slub.c:3220 [inline]
__kmalloc_node_track_caller+0x14d/0x390 mm/slub.c:4958
kmalloc_reserve net/core/skbuff.c:356 [inline]
__alloc_skb+0x12c/0x590 net/core/skbuff.c:427
alloc_skb include/linux/skbuff.h:1167 [inline]
alloc_skb_with_frags+0xa3/0x780 net/core/skbuff.c:6133
sock_alloc_send_pskb+0x915/0xa50 net/core/sock.c:2530
unix_dgram_sendmsg+0x6fd/0x2090 net/unix/af_unix.c:1810
sock_sendmsg_nosec net/socket.c:704 [inline]
__sock_sendmsg net/socket.c:716 [inline]
__sys_sendto+0x564/0x720 net/socket.c:2058
__do_sys_sendto net/socket.c:2070 [inline]
__se_sys_sendto net/socket.c:2066 [inline]
__x64_sys_sendto+0xda/0xf0 net/socket.c:2066
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f51d5d609b5
Code: 8b 44 24 08 48 83 c4 28 48 98 c3 48 98 c3 41 89 ca 64 8b 04 25 18 00 00 00 85 c0 75 26 45 31 c9 45 31 c0 b8 2c 00 00 00 0f 05 <48> 3d 00 f0 ff ff 76 7a 48 8b 15 44 c4 0c 00 f7 d8 64 89 02 48 83
RSP: 002b:00007fffebea4a58 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f51d5d609b5
RDX: 0000000000000039 RSI: 0000563efdf1e4e0 RDI: 0000000000000003
RBP: 0000563efdf15910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000004000 R11: 0000000000000246 R12: 0000000000000013
R13: 00007f51d5eee212 R14: 00007fffebea4b58 R15: 0000000000000000
</TASK>