Hello,
syzbot found the following issue on:
HEAD commit: 43bb85222e53 Linux 5.15.193
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1534df12580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=e1bb6d24ef2164eb
dashboard link: https://syzkaller.appspot.com/bug?extid=3f9c4b2dba08733065d3
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/aa8fda38f146/disk-43bb8522.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5cfcd43783fc/vmlinux-43bb8522.xz
kernel image: https://storage.googleapis.com/syzbot-assets/582ede77e278/bzImage-43bb8522.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+3f9c4b...@syzkaller.appspotmail.com
INFO: task syz.5.211:5482 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.5.211 state:D stack:25136 pid: 5482 ppid: 4455 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6537
__mutex_lock_common+0xc71/0x2390 kernel/locking/mutex.c:669
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
blk_trace_setup+0xac/0x1d0 kernel/trace/blktrace.c:616
sg_ioctl_common drivers/scsi/sg.c:1118 [inline]
sg_ioctl+0xe8a/0x2000 drivers/scsi/sg.c:1160
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl+0xfa/0x170 fs/ioctl.c:860
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f1b4b28bec9
RSP: 002b:00007f1b494f3038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f1b4b4e2fa0 RCX: 00007f1b4b28bec9
RDX: 0000200000000200 RSI: 00000000c0481273 RDI: 0000000000000003
RBP: 00007f1b4b30ef91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f1b4b4e3038 R14: 00007f1b4b4e2fa0 R15: 00007fff6c6ec618
</TASK>
Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8c11c660 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by kworker/u4:2/154:
#0: ffff888016879138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
#1: ffffc90001fa7d00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
2 locks held by syslogd/3543:
#0: ffff88802aebe328 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88802aebe328 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1298
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x104c/0x2790 mm/page_alloc.c:5114
2 locks held by klogd/3550:
#0: ffff88807e2a9628 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807e2a9628 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1298
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x104c/0x2790 mm/page_alloc.c:5114
2 locks held by udevd/3561:
2 locks held by dhcpcd/3853:
2 locks held by getty/3951:
#0: ffff88814d802098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc90002cf62e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x5ba/0x1a30 drivers/tty/n_tty.c:2158
2 locks held by syz-executor/4172:
#0: ffff88807b016a28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807b016a28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1298
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x104c/0x2790 mm/page_alloc.c:5114
2 locks held by udevd/4176:
2 locks held by syz-executor/4191:
#0: ffff88801afa1628 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88801afa1628 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1298
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x104c/0x2790 mm/page_alloc.c:5114
2 locks held by syz-executor/4192:
#0: ffff88801afa1d28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88801afa1d28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1298
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x104c/0x2790 mm/page_alloc.c:5114
3 locks held by syz.1.105/4896:
#0: ffff888147d34d48 (&q->debugfs_mutex){+.+.}-{3:3}, at: blk_trace_setup+0xac/0x1d0 kernel/trace/blktrace.c:616
#1: ffffffff8c15ac68 (relay_channels_mutex){+.+.}-{3:3}, at: relay_open+0x314/0x8e0 kernel/relay.c:518
#2: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#2: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#2: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x104c/0x2790 mm/page_alloc.c:5114
5 locks held by syz-executor/5168:
1 lock held by syz.5.211/5482:
#0: ffff888147d34d48 (&q->debugfs_mutex){+.+.}-{3:3}, at: blk_trace_setup+0xac/0x1d0 kernel/trace/blktrace.c:616
2 locks held by udevd/6263:
#0: ffff888029854b58 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:842 [inline]
#0: ffff888029854b58 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_fault+0x84b/0x13b0 mm/filemap.c:3096
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x104c/0x2790 mm/page_alloc.c:5114
2 locks held by syz.4.532/6887:
#0: ffff88801afa5c28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88801afa5c28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1298
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x104c/0x2790 mm/page_alloc.c:5114
2 locks held by modprobe/6905:
2 locks held by syz.2.543/6927:
#0: ffff88807e2ab228 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807e2ab228 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1298
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c1dc520 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x104c/0x2790 mm/page_alloc.c:5114
=============================================
NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x397/0x3d0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
watchdog+0xe0f/0xe50 kernel/hung_task.c:369
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 5168 Comm: syz-executor Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
RIP: 0010:check_kcov_mode kernel/kcov.c:174 [inline]
RIP: 0010:write_comp_data kernel/kcov.c:227 [inline]
RIP: 0010:__sanitizer_cov_trace_switch+0x82/0xe0 kernel/kcov.c:329
Code: 77 4e 8b 54 ce 10 65 44 8b 1d e2 9f 8a 7e 41 81 e3 00 01 ff 00 74 13 41 81 fb 00 01 00 00 75 d9 41 83 b8 34 16 00 00 00 74 cf <45> 8b 98 10 16 00 00 41 83 fb 03 75 c2 4d 8b 98 18 16 00 00 45 8b
RSP: 0000:ffffc90002e0ec68 EFLAGS: 00000246
RAX: 0000000000000012 RBX: 0000000000000000 RCX: 0000000000000001
RDX: ffffffff83fdbb0e RSI: ffffffff8c73b140 RDI: 0000000000000000
RBP: ffffffff8a0bce21 R08: ffff888026359dc0 R09: 000000000000000d
R10: 000000000000000d R11: 0000000000000000 R12: 0000000000000000
R13: ffffc90002e0ef60 R14: 0000000000000001 R15: ffffffff8a0bce20
FS: 0000555560e91500(0000) GS:ffff8880b9000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ff9858476a8 CR3: 0000000059e56000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
vsnprintf+0x10e/0x1a00 lib/vsprintf.c:2767
sprintf+0xd6/0x120 lib/vsprintf.c:3013
print_time kernel/printk/printk.c:1274 [inline]
info_print_prefix+0x152/0x300 kernel/printk/printk.c:1300
record_print_text kernel/printk/printk.c:1349 [inline]
console_unlock+0x7f5/0x1200 kernel/printk/printk.c:2725
vprintk_emit+0xc0/0x150 kernel/printk/printk.c:2274
_printk+0xcc/0x110 kernel/printk/printk.c:2299
dump_unreclaimable_slab+0x10e/0x140 mm/slab_common.c:1156
dump_header+0x359/0x770 mm/oom_kill.c:476
oom_kill_process+0x20e/0x3d0 mm/oom_kill.c:1016
out_of_memory+0xf5f/0x11e0 mm/oom_kill.c:1135
__alloc_pages_may_oom mm/page_alloc.c:4359 [inline]
__alloc_pages_slowpath+0x1d07/0x2790 mm/page_alloc.c:5163
__alloc_pages+0x332/0x470 mm/page_alloc.c:5500
alloc_pages_vma+0x393/0x7c0 mm/mempolicy.c:2146
__read_swap_cache_async+0x1b5/0xa70 mm/swap_state.c:459
read_swap_cache_async mm/swap_state.c:525 [inline]
swap_cluster_readahead+0x6a2/0x7c0 mm/swap_state.c:661
swapin_readahead+0xf3/0xaa0 mm/swap_state.c:854
do_swap_page+0x4b6/0x1f40 mm/memory.c:3622
handle_pte_fault mm/memory.c:4654 [inline]
__handle_mm_fault mm/memory.c:4785 [inline]
handle_mm_fault+0x1ada/0x43c0 mm/memory.c:4883
do_user_addr_fault+0x489/0xc80 arch/x86/mm/fault.c:1357
handle_page_fault arch/x86/mm/fault.c:1445 [inline]
exc_page_fault+0x60/0x100 arch/x86/mm/fault.c:1501
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:606
RIP: 0033:0x7fa208cfd640
Code: Unable to access opcode bytes at RIP 0x7fa208cfd616.
RSP: 002b:00007ffd445929b8 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00007ffd44592a10 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000002 RDI: 00007ffd44592a50
RBP: 00007ffd445929fc R08: 000000000000000a R09: 00007ffd44592707
R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000044
R13: 0000555560ea4590 R14: 00000000000488e0 R15: 00007ffd44592a50
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup