[moderation] [kernel?] KCSAN: data-race in pcpu_balance_workfn / pcpu_nr_pages

syzbot

Nov 23, 2024, 2:22:19 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 06afb0f36106 Merge tag 'trace-v6.13' of git://git.kernel.o..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1430975f980000
kernel config: https://syzkaller.appspot.com/x/.config?x=593659e26cb0b41c
dashboard link: https://syzkaller.appspot.com/bug?extid=0318b96abc6766bdcfa8
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
CC: [b...@alien8.de dave....@linux.intel.com h...@zytor.com linux-...@vger.kernel.org mi...@redhat.com tg...@linutronix.de x...@kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2e3c069dd3d0/disk-06afb0f3.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1d49196df83b/vmlinux-06afb0f3.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a785eb7108ec/bzImage-06afb0f3.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0318b9...@syzkaller.appspotmail.com

==================================================================
BUG: KCSAN: data-race in pcpu_balance_workfn / pcpu_nr_pages

read-write to 0xffffffff88bff2f8 of 8 bytes by task 3394 on cpu 1:
pcpu_chunk_depopulated mm/percpu.c:1553 [inline]
pcpu_reclaim_populated mm/percpu.c:2164 [inline]
pcpu_balance_workfn+0x2c0/0xa60 mm/percpu.c:2211
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0x483/0x9a0 kernel/workqueue.c:3310
worker_thread+0x51d/0x6f0 kernel/workqueue.c:3391
kthread+0x1d1/0x210 kernel/kthread.c:389
ret_from_fork+0x4b/0x60 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

read to 0xffffffff88bff2f8 of 8 bytes by task 7492 on cpu 0:
pcpu_nr_pages+0x16/0x40 mm/percpu.c:3393
meminfo_proc_show+0x937/0x9b0 fs/proc/meminfo.c:133
seq_read_iter+0x2d1/0x930 fs/seq_file.c:230
proc_reg_read_iter+0xec/0x190 fs/proc/inode.c:295
copy_splice_read+0x3a0/0x5d0 fs/splice.c:365
do_splice_read fs/splice.c:985 [inline]
splice_direct_to_actor+0x269/0x670 fs/splice.c:1089
do_splice_direct_actor fs/splice.c:1207 [inline]
do_splice_direct+0xd7/0x150 fs/splice.c:1233
do_sendfile+0x398/0x660 fs/read_write.c:1363
__do_sys_sendfile64 fs/read_write.c:1424 [inline]
__se_sys_sendfile64 fs/read_write.c:1410 [inline]
__x64_sys_sendfile64+0x110/0x150 fs/read_write.c:1410
x64_sys_call+0xfbd/0x2dc0 arch/x86/include/generated/asm/syscalls_64.h:41
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xc9/0x1c0 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

value changed: 0x00000000000033c6 -> 0x00000000000033c5

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 UID: 0 PID: 7492 Comm: syz.0.6196 Not tainted 6.12.0-syzkaller-07834-g06afb0f36106 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/30/2024
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Jan 18, 2025, 2:22:19 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while; there is no reproducer and no recent activity.