[v5.15] INFO: rcu detected stall in sys_getdents64


syzbot

May 26, 2024, 12:10:27 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: c61bd26ae81a Linux 5.15.160
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=165f9b3f180000
kernel config: https://syzkaller.appspot.com/x/.config?x=235f0e81ca937c17
dashboard link: https://syzkaller.appspot.com/bug?extid=9d11a2f0a6e27d8ff531
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/d61a97eef8b9/disk-c61bd26a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ab4908b4b59b/vmlinux-c61bd26a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d818fd46802b/bzImage-c61bd26a.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+9d11a2...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 0-...!: (1 GPs behind) idle=abb/1/0x4000000000000000 softirq=41777/41778 fqs=0
(detected by 1, t=10505 jiffies, g=62553, q=18)
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 5733 Comm: syz-executor.4 Not tainted 5.15.160-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
RIP: 0010:_raw_spin_unlock_irqrestore+0x11/0x130 kernel/locking/spinlock.c:193
Code: bd 03 db 75 85 c0 74 02 5b c3 e8 2a 22 d9 f6 5b c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 41 57 41 56 41 55 41 54 53 48 83 e4 e0 <48> 83 ec 60 49 89 f7 48 89 fb 65 48 8b 04 25 28 00 00 00 48 89 44
RSP: 0018:ffffc90000007c80 EFLAGS: 00000082
RAX: 0000000000000000 RBX: ffff8880191f9f88 RCX: 0000000000000001
RDX: dffffc0000000000 RSI: 0000000000000046 RDI: ffffffff916b0e20
RBP: ffffc90000007cb8 R08: dffffc0000000000 R09: 0000000000000003
R10: ffffffffffffffff R11: dffffc0000000001 R12: ffffffff916b0e18
R13: ffff888076b85340 R14: ffff8880191f9f98 R15: dffffc0000000000
FS: 00007fa7354be6c0(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2e824000 CR3: 0000000061464000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
debug_object_activate+0x2f4/0x4e0 lib/debugobjects.c:711
debug_hrtimer_activate kernel/time/hrtimer.c:420 [inline]
debug_activate kernel/time/hrtimer.c:475 [inline]
enqueue_hrtimer+0x30/0x390 kernel/time/hrtimer.c:1084
__run_hrtimer kernel/time/hrtimer.c:1703 [inline]
__hrtimer_run_queues+0x6b6/0xcf0 kernel/time/hrtimer.c:1750
hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1812
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
RIP: 0010:__getblk_gfp+0x48/0xaf0 fs/buffer.c:1335
Code: ff 48 89 ef 4c 89 fe 44 89 ea e8 33 ce ff ff 48 89 c3 48 c7 c7 80 6a 97 8a be 36 05 00 00 31 d2 e8 0d 71 73 ff 2e 2e 2e 31 c0 <48> 85 db 74 0a e8 4e af 9a ff e9 a6 09 00 00 4c 89 3c 24 44 89 74
RSP: 0018:ffffc9000482f768 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff888021e63b80
RDX: ffff888021e63b80 RSI: ffffffff8a8b3c20 RDI: ffffffff8ad8f6c0
RBP: ffff88801b068bc0 R08: ffffffff81e58346 R09: fffff940002b301f
R10: 0000000000000000 R11: dffffc0000000001 R12: ffff88805d08c018
R13: 0000000000000400 R14: 0000000000000008 R15: 00000000003186d1
__bread_gfp+0x2a/0x390 fs/buffer.c:1381
sb_bread include/linux/buffer_head.h:337 [inline]
get_branch+0x2c3/0x6e0 fs/sysv/itree.c:101
get_block+0x16c/0x1790 fs/sysv/itree.c:221
block_read_full_page+0x2f9/0xde0 fs/buffer.c:2290
do_read_cache_page+0x752/0x1040
read_mapping_page include/linux/pagemap.h:515 [inline]
dir_get_page fs/sysv/dir.c:58 [inline]
sysv_readdir+0x19b/0x820 fs/sysv/dir.c:83
iterate_dir+0x224/0x570
__do_sys_getdents64 fs/readdir.c:369 [inline]
__se_sys_getdents64+0x209/0x4f0 fs/readdir.c:354
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7fa736f4aee9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fa7354be0c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d9
RAX: ffffffffffffffda RBX: 00007fa737079f80 RCX: 00007fa736f4aee9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00007fa736f9749e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fa737079f80 R15: 00007ffe9eb2ba78
</TASK>
rcu: rcu_preempt kthread starved for 10505 jiffies! g62553 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27000 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1914
rcu_gp_fqs_loop+0x2bf/0x1080 kernel/rcu/tree.c:1972
rcu_gp_kthread+0xa4/0x360 kernel/rcu/tree.c:2145
kthread+0x3f6/0x4f0 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:300
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
NMI backtrace for cpu 1
CPU: 1 PID: 9136 Comm: syz-executor.4 Not tainted 5.15.160-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
<IRQ>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2d0 lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_single_cpu_backtrace include/linux/nmi.h:166 [inline]
rcu_check_gp_kthread_starvation+0x1d2/0x240 kernel/rcu/tree_stall.h:487
print_other_cpu_stall+0x137a/0x14d0 kernel/rcu/tree_stall.h:592
check_cpu_stall kernel/rcu/tree_stall.h:745 [inline]
rcu_pending kernel/rcu/tree.c:3932 [inline]
rcu_sched_clock_irq+0xa38/0x1150 kernel/rcu/tree.c:2619
update_process_times+0x196/0x200 kernel/time/timer.c:1818
tick_sched_handle kernel/time/tick-sched.c:254 [inline]
tick_sched_timer+0x386/0x550 kernel/time/tick-sched.c:1473
__run_hrtimer kernel/time/hrtimer.c:1686 [inline]
__hrtimer_run_queues+0x55b/0xcf0 kernel/time/hrtimer.c:1750
hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1812
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
RIP: 0010:csd_lock_wait kernel/smp.c:440 [inline]
RIP: 0010:smp_call_function_many_cond+0xa93/0xd90 kernel/smp.c:969
Code: 04 03 84 c0 0f 85 84 00 00 00 45 8b 7d 00 44 89 fe 83 e6 01 31 ff e8 4c cf 0b 00 41 83 e7 01 75 07 e8 e1 cb 0b 00 eb 41 f3 90 <48> b8 00 00 00 00 00 fc ff df 0f b6 04 03 84 c0 75 11 41 f7 45 00
RSP: 0018:ffffc90003107200 EFLAGS: 00000246
RAX: ffffffff81749104 RBX: 1ffff11017348509 RCX: 0000000000040000
RDX: ffffc9000ae2a000 RSI: 000000000003ffff RDI: 0000000000040000
RBP: ffffc90003107330 R08: ffffffff817490d4 R09: fffffbfff1f7f01a
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000000
R13: ffff8880b9a42848 R14: ffff8880b9b3b3c0 R15: 0000000000000001
on_each_cpu_cond_mask+0x3b/0x80 kernel/smp.c:1135
on_each_cpu include/linux/smp.h:71 [inline]
flush_tlb_kernel_range+0x197/0x230 arch/x86/mm/tlb.c:1033
__purge_vmap_area_lazy+0x294/0x1740 mm/vmalloc.c:1683
_vm_unmap_aliases+0x453/0x4e0 mm/vmalloc.c:2107
change_page_attr_set_clr+0x308/0x1050 arch/x86/mm/pat/set_memory.c:1740
change_page_attr_clear arch/x86/mm/pat/set_memory.c:1797 [inline]
set_memory_ro+0xa1/0xe0 arch/x86/mm/pat/set_memory.c:1943
bpf_jit_binary_lock_ro include/linux/filter.h:891 [inline]
bpf_int_jit_compile+0xbf57/0xc6e0 arch/x86/net/bpf_jit_comp.c:2372
bpf_prog_select_runtime+0x6e2/0x9b0 kernel/bpf/core.c:1930
bpf_prog_load+0x131c/0x1b60 kernel/bpf/syscall.c:2357
__sys_bpf+0x343/0x670 kernel/bpf/syscall.c:4651
__do_sys_bpf kernel/bpf/syscall.c:4755 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4753 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4753
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7fe4d238aee9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe4d08fe0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fe4d24b9f80 RCX: 00007fe4d238aee9
RDX: 0000000000000090 RSI: 00000000200000c0 RDI: 0000000000000005
RBP: 00007fe4d23d749e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fe4d24b9f80 R15: 00007ffcef7bf248
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup