Hello,
syzbot found the following issue on:
HEAD commit: c79648372d02 Linux 5.15.189
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=14cac834580000
kernel config: https://syzkaller.appspot.com/x/.config?x=9e47345638b85bd0
dashboard link: https://syzkaller.appspot.com/bug?extid=b4e33091cc766060d675
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
userspace arch: arm64
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/7ab49562317a/disk-c7964837.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/e847ec1e38d3/vmlinux-c7964837.xz
kernel image: https://storage.googleapis.com/syzbot-assets/263158c6371c/Image-c7964837.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+b4e330...@syzkaller.appspotmail.com
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P4252/1:b..l P4029/1:b..l
(detected by 0, t=10502 jiffies, g=4777, q=111)
task:udevd state:R running task stack: 0 pid: 4029 ppid: 3654 flags:0x0000000c
Call trace:
__switch_to+0x2f4/0x558 arch/arm64/kernel/process.c:521
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0xe00/0x1c0c kernel/sched/core.c:6376
preempt_schedule_notrace+0xc4/0x168 kernel/sched/core.c:6631
rcu_is_watching+0xf4/0x134 kernel/rcu/tree.c:1124
rcu_read_lock include/linux/rcupdate.h:740 [inline]
percpu_ref_put_many include/linux/percpu-refcount.h:317 [inline]
percpu_ref_put+0x30/0x234 include/linux/percpu-refcount.h:338
css_put include/linux/cgroup.h:405 [inline]
uncharge_page+0x39c/0x500 mm/memcontrol.c:6968
__mem_cgroup_uncharge_list+0x7c/0xd4 mm/memcontrol.c:7004
mem_cgroup_uncharge_list include/linux/memcontrol.h:720 [inline]
release_pages+0x13c0/0x16e0 mm/swap.c:962
__pagevec_release+0x84/0xf8 mm/swap.c:983
pagevec_release include/linux/pagevec.h:81 [inline]
shmem_undo_range+0x48c/0x1234 mm/shmem.c:964
shmem_truncate_range mm/shmem.c:1063 [inline]
shmem_evict_inode+0x1c0/0x838 mm/shmem.c:1145
evict+0x3c8/0x810 fs/inode.c:647
iput_final fs/inode.c:1769 [inline]
iput+0x6c4/0x77c fs/inode.c:1795
dentry_unlink_inode+0x360/0x438 fs/dcache.c:380
__dentry_kill+0x320/0x598 fs/dcache.c:586
dentry_kill+0xc8/0x248 fs/dcache.c:-1
dput+0x23c/0x458 fs/dcache.c:893
__fput+0x494/0x7f8 fs/file_table.c:319
____fput+0x20/0x30 fs/file_table.c:339
task_work_run+0x12c/0x1e0 kernel/task_work.c:188
tracehook_notify_resume include/linux/tracehook.h:189 [inline]
do_notify_resume+0x24b4/0x3128 arch/arm64/kernel/signal.c:949
prepare_exit_to_user_mode arch/arm64/kernel/entry-common.c:133 [inline]
exit_to_user_mode arch/arm64/kernel/entry-common.c:138 [inline]
el0_svc+0xf0/0x1e0 arch/arm64/kernel/entry-common.c:609
el0t_64_sync_handler+0xcc/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
task:udevadm state:R running task stack: 0 pid: 4252 ppid: 4179 flags:0x00000004
Call trace:
__switch_to+0x2f4/0x558 arch/arm64/kernel/process.c:521
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0xe00/0x1c0c kernel/sched/core.c:6376
preempt_schedule_irq+0x90/0x214 kernel/sched/core.c:6780
arm64_preempt_schedule_irq+0x14c/0x21c arch/arm64/kernel/entry-common.c:260
el1_interrupt+0x40/0x58 arch/arm64/kernel/entry-common.c:463
el1h_64_irq_handler+0x18/0x24 arch/arm64/kernel/entry-common.c:470
el1h_64_irq+0x78/0x7c arch/arm64/kernel/entry.S:522
arch_local_irq_restore arch/arm64/include/asm/irqflags.h:122 [inline]
seqcount_lockdep_reader_access include/linux/seqlock.h:105 [inline]
read_seqbegin+0x21c/0x304 include/linux/seqlock.h:897
d_alloc_parallel+0x2c4/0x1104 fs/dcache.c:2592
__lookup_slow+0x104/0x380 fs/namei.c:1648
lookup_slow+0x5c/0x80 fs/namei.c:1680
walk_component+0x2b0/0x3a8 fs/namei.c:1976
lookup_last fs/namei.c:2431 [inline]
path_lookupat+0x13c/0x3d0 fs/namei.c:2455
filename_lookup+0x180/0x414 fs/namei.c:2484
user_path_at_empty+0x5c/0x1a0 fs/namei.c:2883
do_readlinkat+0xd4/0x3e0 fs/stat.c:442
__do_sys_readlinkat fs/stat.c:469 [inline]
__se_sys_readlinkat fs/stat.c:466 [inline]
__arm64_sys_readlinkat+0x9c/0xb8 fs/stat.c:466
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x78/0x1e0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0xcc/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
rcu: rcu_preempt kthread starved for 10483 jiffies! g4777 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack: 0 pid: 15 ppid: 2 flags:0x00000008
Call trace:
__switch_to+0x2f4/0x558 arch/arm64/kernel/process.c:521
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0xe00/0x1c0c kernel/sched/core.c:6376
schedule+0x11c/0x1c8 kernel/sched/core.c:6459
schedule_timeout+0x180/0x2c8 kernel/time/timer.c:1914
rcu_gp_fqs_loop+0x25c/0x11f0 kernel/rcu/tree.c:1972
rcu_gp_kthread+0xc4/0x2a8 kernel/rcu/tree.c:2145
kthread+0x374/0x454 kernel/kthread.c:334
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:855
rcu: Stack dump where RCU GP kthread last ran:
Task dump for CPU 1:
task:syz.4.20 state:R running task stack: 0 pid: 4251 ppid: 4045 flags:0x00000000
Call trace:
__switch_to+0x2f4/0x558 arch/arm64/kernel/process.c:521
0xffff80001f4c7540
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup