[v5.15] INFO: rcu detected stall in sys_bind


syzbot

Jul 30, 2024, 7:03:28 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 7e89efd3ae1c Linux 5.15.164
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=11ce3703980000
kernel config: https://syzkaller.appspot.com/x/.config?x=ef3d6716c47442fa
dashboard link: https://syzkaller.appspot.com/bug?extid=c92ccaffd4ff71d074e5
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/39cf3e32ff98/disk-7e89efd3.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5832802d8e4b/vmlinux-7e89efd3.xz
kernel image: https://storage.googleapis.com/syzbot-assets/fe6dc074e0ff/bzImage-7e89efd3.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c92cca...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P5318/1:b..l
(detected by 1, t=10502 jiffies, g=67145, q=248)
task:syz.0.452 state:R running task stack:19896 pid: 5318 ppid: 4610 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
preempt_schedule_irq+0xf7/0x1c0 kernel/sched/core.c:6780
irqentry_exit+0x53/0x80 kernel/entry/common.c:432
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
RIP: 0010:lock_acquire+0x252/0x4f0 kernel/locking/lockdep.c:5627
Code: 2b 00 74 08 4c 89 f7 e8 cc 86 67 00 f6 44 24 61 02 0f 85 84 01 00 00 41 f7 c7 00 02 00 00 74 01 fb 48 c7 44 24 40 0e 36 e0 45 <4b> c7 44 25 00 00 00 00 00 43 c7 44 25 09 00 00 00 00 43 c7 44 25
RSP: 0018:ffffc90002eb65a0 EFLAGS: 00000206
RAX: 0000000000000001 RBX: 1ffff920005d6cc0 RCX: 1ffff920005d6c60
RDX: dffffc0000000000 RSI: ffffffff8a8b3ca0 RDI: ffffffff8ad8f800
RBP: ffffc90002eb66e8 R08: dffffc0000000000 R09: fffffbfff1f8e019
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff920005d6cbc
R13: dffffc0000000000 R14: ffffc90002eb6600 R15: 0000000000000246
rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:312
rcu_read_lock include/linux/rcupdate.h:739 [inline]
next_demotion_node+0x13/0x190 mm/migrate.c:1177
demote_page_list mm/vmscan.c:1338 [inline]
shrink_page_list+0x6428/0x7540 mm/vmscan.c:1798
shrink_inactive_list mm/vmscan.c:2216 [inline]
shrink_list mm/vmscan.c:2443 [inline]
shrink_lruvec+0x14ae/0x2b90 mm/vmscan.c:2762
shrink_node_memcgs mm/vmscan.c:2949 [inline]
shrink_node+0x10b3/0x26d0 mm/vmscan.c:3072
shrink_zones mm/vmscan.c:3278 [inline]
do_try_to_free_pages+0x697/0x17b0 mm/vmscan.c:3333
try_to_free_mem_cgroup_pages+0x44c/0xa60 mm/vmscan.c:3647
try_charge_memcg+0x4f4/0x1530 mm/memcontrol.c:2651
try_charge mm/memcontrol.c:2776 [inline]
charge_memcg+0x10b/0x340 mm/memcontrol.c:6742
__mem_cgroup_charge+0x23/0x80 mm/memcontrol.c:6778
mem_cgroup_charge include/linux/memcontrol.h:700 [inline]
__add_to_page_cache_locked+0xbdb/0x11a0 mm/filemap.c:892
add_to_page_cache_lru+0x1b3/0x560 mm/filemap.c:984
do_read_cache_page+0x205/0x1040 mm/filemap.c:3460
read_mapping_page include/linux/pagemap.h:515 [inline]
dir_get_page fs/sysv/dir.c:58 [inline]
sysv_find_entry+0x1b0/0x650 fs/sysv/dir.c:146
sysv_inode_by_name+0x9e/0x3f0 fs/sysv/dir.c:360
sysv_lookup+0x63/0xe0 fs/sysv/namei.c:38
lookup_one_qstr_excl+0x117/0x240 fs/namei.c:1563
filename_create+0x293/0x530 fs/namei.c:3836
kern_path_create+0x35/0x180 fs/namei.c:3876
unix_bind_bsd net/unix/af_unix.c:1073 [inline]
unix_bind+0x3b9/0x920 net/unix/af_unix.c:1171
__sys_bind+0x388/0x400 net/socket.c:1715
__do_sys_bind net/socket.c:1726 [inline]
__se_sys_bind net/socket.c:1724 [inline]
__x64_sys_bind+0x76/0x80 net/socket.c:1724
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f58a4ff93b9
RSP: 002b:00007f58a3478048 EFLAGS: 00000246 ORIG_RAX: 0000000000000031
RAX: ffffffffffffffda RBX: 00007f58a5187f80 RCX: 00007f58a4ff93b9
RDX: 000000000000006e RSI: 0000000020003000 RDI: 0000000000000004
RBP: 00007f58a50668e6 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f58a5187f80 R15: 00007ffd94919fb8
</TASK>
rcu: rcu_preempt kthread starved for 9514 jiffies! g67145 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27064 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1914
rcu_gp_fqs_loop+0x2bf/0x1080 kernel/rcu/tree.c:1972
rcu_gp_kthread+0xa4/0x360 kernel/rcu/tree.c:2145
kthread+0x3f6/0x4f0 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 0 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_safe_halt drivers/acpi/processor_idle.c:108 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_idle_do_entry+0x10f/0x340 drivers/acpi/processor_idle.c:562


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Nov 7, 2024, 6:03:27 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, there is no reproducer, and there has been no activity.