[v5.15] INFO: rcu detected stall in filemap_fault (3)

syzbot
Sep 23, 2025, 10:26:35 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 43bb85222e53 Linux 5.15.193
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1145627c580000
kernel config: https://syzkaller.appspot.com/x/.config?x=e1bb6d24ef2164eb
dashboard link: https://syzkaller.appspot.com/bug?extid=6be7686d69b96cbb9a22
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/aa8fda38f146/disk-43bb8522.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5cfcd43783fc/vmlinux-43bb8522.xz
kernel image: https://storage.googleapis.com/syzbot-assets/582ede77e278/bzImage-43bb8522.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6be768...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P5511/1:b..l P5519/1:b..l
(detected by 1, t=10502 jiffies, g=9793, q=23)
task:modprobe state:R running task stack:27200 pid: 5519 ppid: 4437 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
preempt_schedule_irq+0xb1/0x150 kernel/sched/core.c:6799
irqentry_exit+0x63/0x70 kernel/entry/common.c:432
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:xa_entry include/linux/xarray.h:1197 [inline]
RIP: 0010:xas_descend+0xba/0x3b0 lib/xarray.c:204
Code: 8b 2c 24 44 89 f8 83 e0 3f 48 8b 0c 24 4c 8d 74 c1 28 4c 89 f0 48 c1 e8 03 80 3c 28 00 74 08 4c 89 f7 e8 89 b8 bc fd 4d 8b 36 <e8> 61 79 a9 05 89 c5 31 ff 89 c6 e8 c6 53 78 fd 85 ed 74 27 49 83
RSP: 0000:ffffc9000317f970 EFLAGS: 00000246
RAX: 1ffff110041787b5 RBX: 1ffff9200062ff4c RCX: ffff888020bc3c80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000040
RBP: dffffc0000000000 R08: dffffc0000000000 R09: fffffbfff1ff7819
R10: fffffbfff1ff7819 R11: 1ffffffff1ff7818 R12: ffffc9000317fa60
R13: ffff888020ab8ec8 R14: ffffea00003b2940 R15: 0000000000000060
xas_load+0xba/0x140 lib/xarray.c:240
mapping_get_entry mm/filemap.c:1826 [inline]
pagecache_get_page+0x19d/0xef0 mm/filemap.c:1894
find_get_page include/linux/pagemap.h:351 [inline]
filemap_fault+0x18a/0x13b0 mm/filemap.c:3072
__do_fault+0x141/0x330 mm/memory.c:3928
do_cow_fault mm/memory.c:4293 [inline]
do_fault mm/memory.c:4394 [inline]
handle_pte_fault mm/memory.c:4650 [inline]
__handle_mm_fault mm/memory.c:4785 [inline]
handle_mm_fault+0x25ce/0x43c0 mm/memory.c:4883
do_user_addr_fault+0x489/0xc80 arch/x86/mm/fault.c:1357
handle_page_fault arch/x86/mm/fault.c:1445 [inline]
exc_page_fault+0x60/0x100 arch/x86/mm/fault.c:1501
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:606
RIP: 0033:0x7f7159958070
RSP: 002b:00007ffc8d176c08 EFLAGS: 00010246
RAX: 00007f715962b010 RBX: 0000000000000004 RCX: 00007f715962b018
RDX: 0000000000000008 RSI: 0000000000000000 RDI: 00007f715962b010
RBP: 00007ffc8d176f90 R08: 00007f715962b010 R09: 0000000000000003
R10: 0000000000000812 R11: 00007ffc8d177078 R12: 00007ffc8d176cb8
R13: 00007f71596366b0 R14: 00007ffc8d177030 R15: 00007f715962b018
</TASK>
task:syz.0.257 state:R running task stack:23872 pid: 5511 ppid: 4182 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
preempt_schedule_irq+0xb1/0x150 kernel/sched/core.c:6799
irqentry_exit+0x63/0x70 kernel/entry/common.c:432
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:task_css include/linux/cgroup.h:496 [inline]
RIP: 0010:mem_cgroup_from_task+0x73/0x110 mm/memcontrol.c:935
Code: 85 c0 75 4d 48 c7 c7 08 1f 14 8c be ff ff ff ff e8 e2 81 e8 07 85 c0 75 38 48 c7 c7 78 1f 14 8c be ff ff ff ff e8 cd 81 e8 07 <85> c0 75 23 49 83 c6 2c 4c 89 f0 48 c1 e8 03 42 0f b6 04 38 84 c0
RSP: 0018:ffffc900031ceb90 EFLAGS: 00000282
RAX: 0000000000000000 RBX: ffff888147c4d800 RCX: 11fbf0d8062ef100
RDX: 0000000000000000 RSI: ffffffff8a0b2ac0 RDI: ffffffff8a59a480
RBP: 0000000001112cca R08: dffffc0000000000 R09: fffffbfff1ff7819
R10: fffffbfff1ff7819 R11: 1ffffffff1ff7818 R12: dffffc0000000000
R13: 1ffff1100f32cddb R14: ffff88802c268000 R15: dffffc0000000000
get_mem_cgroup_from_mm+0xaf/0x260 mm/memcontrol.c:988
__mem_cgroup_charge+0x11/0x80 mm/memcontrol.c:6800
mem_cgroup_charge include/linux/memcontrol.h:700 [inline]
__add_to_page_cache_locked+0x9cf/0xf60 mm/filemap.c:892
add_to_page_cache_lru+0x150/0x4a0 mm/filemap.c:984
page_cache_ra_unbounded+0x410/0x930 mm/readahead.c:222
page_cache_async_readahead include/linux/pagemap.h:856 [inline]
do_async_mmap_readahead mm/filemap.c:3023 [inline]
filemap_fault+0x5ff/0x13b0 mm/filemap.c:3079
__do_fault+0x141/0x330 mm/memory.c:3928
do_read_fault mm/memory.c:4264 [inline]
do_fault mm/memory.c:4392 [inline]
handle_pte_fault mm/memory.c:4650 [inline]
__handle_mm_fault mm/memory.c:4785 [inline]
handle_mm_fault+0x2949/0x43c0 mm/memory.c:4883
faultin_page mm/gup.c:976 [inline]
__get_user_pages+0x93e/0x11c0 mm/gup.c:1197
__get_user_pages_locked mm/gup.c:1382 [inline]
get_dump_page+0x188/0x670 mm/gup.c:1838
dump_user_range+0x54/0x340 fs/coredump.c:1013
elf_core_dump+0x2ff7/0x3530 fs/binfmt_elf.c:2285
do_coredump+0x1419/0x2960 fs/coredump.c:894
get_signal+0x40a/0x12c0 kernel/signal.c:2886
arch_do_signal_or_restart+0xc1/0x1300 arch/x86/kernel/signal.c:867
handle_signal_work kernel/entry/common.c:154 [inline]
exit_to_user_mode_loop+0x9e/0x130 kernel/entry/common.c:178
exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:214
irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:320
exc_page_fault+0x88/0x100 arch/x86/mm/fault.c:1504
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:606
RIP: 0033:0x7b3a
RSP: 002b:0000200000000048 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 00007ff712287fa0 RCX: 00007ff712030ec9
RDX: 0000000000000000 RSI: 0000200000000040 RDI: 0000000000020000
RBP: 00007ff7120b3f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000000000
R13: 00007ff712288038 R14: 00007ff712287fa0 R15: 00007ffde58282b8
</TASK>
rcu: rcu_preempt kthread starved for 10570 jiffies! g9793 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:28032 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11bb/0x4390 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_timeout+0x15c/0x280 kernel/time/timer.c:1914
rcu_gp_fqs_loop+0x29e/0x11b0 kernel/rcu/tree.c:1972
rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
NMI backtrace for cpu 1
CPU: 1 PID: 0 Comm: swapper/1 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
<IRQ>
dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x397/0x3d0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
trigger_single_cpu_backtrace include/linux/nmi.h:166 [inline]
rcu_check_gp_kthread_starvation+0x1cd/0x250 kernel/rcu/tree_stall.h:487
print_other_cpu_stall+0x10c8/0x1220 kernel/rcu/tree_stall.h:592
check_cpu_stall kernel/rcu/tree_stall.h:745 [inline]
rcu_pending kernel/rcu/tree.c:3936 [inline]
rcu_sched_clock_irq+0x831/0x1110 kernel/rcu/tree.c:2619
update_process_times+0x193/0x200 kernel/time/timer.c:1818
tick_sched_handle kernel/time/tick-sched.c:254 [inline]
tick_sched_timer+0x37d/0x560 kernel/time/tick-sched.c:1473
__run_hrtimer kernel/time/hrtimer.c:1690 [inline]
__hrtimer_run_queues+0x4fe/0xc40 kernel/time/hrtimer.c:1754
hrtimer_interrupt+0x3bb/0x8d0 kernel/time/hrtimer.c:1816
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097 [inline]
__sysvec_apic_timer_interrupt+0x137/0x4a0 arch/x86/kernel/apic/apic.c:1114
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0x9b/0xc0 arch/x86/kernel/apic/apic.c:1108
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:default_idle+0xb/0x10 arch/x86/kernel/process.c:730
Code: bf 48 89 df e8 86 f5 09 f8 eb b5 e8 3f b0 f6 ff 00 00 cc cc 00 00 cc cc 00 00 cc cc 00 00 cc 66 90 0f 00 2d f7 83 53 00 fb f4 <c3> 0f 1f 40 00 41 57 41 56 53 49 be 00 00 00 00 00 fc ff df 65 48
RSP: 0018:ffffc90000d67d48 EFLAGS: 000002c2
RAX: 96798397104c9800 RBX: ffff88813fe40000 RCX: 96798397104c9800
RDX: 0000000000000001 RSI: ffffffff8a0b1820 RDI: ffffffff8a59a480
RBP: ffffc90000d67e80 R08: dffffc0000000000 R09: ffffed1017227662
R10: ffffed1017227662 R11: 1ffff11017227661 R12: ffffffff8d6991e8
R13: 0000000000000001 R14: 0000000000000001 R15: 1ffff11027fc8000
default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
cpuidle_idle_call kernel/sched/idle.c:194 [inline]
do_idle+0x21b/0x5b0 kernel/sched/idle.c:306
cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
start_secondary+0x31f/0x430 arch/x86/kernel/smpboot.c:281
secondary_startup_64_no_verify+0xb1/0xbb
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup