[moderation] [fs?] KCSAN: data-race in dentry_lru_isolate_shrink / lookup_fast (3)

syzbot

Dec 28, 2023, 7:10:24 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 505e701c0b2c Merge tag 'kbuild-fixes-v6.7-2' of git://git...
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=17964465e80000
kernel config: https://syzkaller.appspot.com/x/.config?x=4da1e2da456c3a7d
dashboard link: https://syzkaller.appspot.com/bug?extid=73270fa3bdc66aaa444c
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
CC: [bra...@kernel.org linux-...@vger.kernel.org linux-...@vger.kernel.org vi...@zeniv.linux.org.uk]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/c54197e3c3d3/disk-505e701c.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/9af44559bead/vmlinux-505e701c.xz
kernel image: https://storage.googleapis.com/syzbot-assets/93492afabdac/bzImage-505e701c.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+73270f...@syzkaller.appspotmail.com

==================================================================
BUG: KCSAN: data-race in dentry_lru_isolate_shrink / lookup_fast

write to 0xffff8881004aa9c0 of 4 bytes by task 9314 on cpu 1:
d_lru_shrink_move fs/dcache.c:480 [inline]
dentry_lru_isolate_shrink+0x7f/0x100 fs/dcache.c:1300
__list_lru_walk_one+0x180/0x3b0 mm/list_lru.c:231
list_lru_walk_one mm/list_lru.c:276 [inline]
list_lru_walk_node+0x7f/0x1f0 mm/list_lru.c:304
list_lru_walk include/linux/list_lru.h:214 [inline]
shrink_dcache_sb+0xc5/0x290 fs/dcache.c:1319
reconfigure_super+0x3ef/0x580 fs/super.c:1121
do_remount fs/namespace.c:2884 [inline]
path_mount+0x969/0xb30 fs/namespace.c:3656
do_mount fs/namespace.c:3677 [inline]
__do_sys_mount fs/namespace.c:3886 [inline]
__se_sys_mount+0x27f/0x2d0 fs/namespace.c:3863
__x64_sys_mount+0x67/0x80 fs/namespace.c:3863
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x44/0x110 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b

read to 0xffff8881004aa9c0 of 4 bytes by task 9316 on cpu 0:
d_revalidate fs/namei.c:861 [inline]
lookup_fast+0xd9/0x290 fs/namei.c:1643
walk_component fs/namei.c:1998 [inline]
link_path_walk+0x3f4/0x7e0 fs/namei.c:2329
path_openat+0x1a0/0x1d70 fs/namei.c:3775
do_filp_open+0xf6/0x200 fs/namei.c:3809
do_sys_openat2+0xab/0x110 fs/open.c:1437
do_sys_open fs/open.c:1452 [inline]
__do_sys_openat fs/open.c:1468 [inline]
__se_sys_openat fs/open.c:1463 [inline]
__x64_sys_openat+0xf3/0x120 fs/open.c:1463
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x44/0x110 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b

value changed: 0x00680000 -> 0x00008000

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 9316 Comm: modprobe Not tainted 6.7.0-rc7-syzkaller-00027-g505e701c0b2c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
==================================================================
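
For context: in the write stack, d_lru_shrink_move() does a plain read-modify-write of dentry->d_flags under the LRU list lock while moving the dentry onto a shrink list; in the read stack, d_revalidate() appears to test a d_flags bit with a plain, lockless load on the lookup path. The sketch below models that access pattern in user space. It is an illustration only, not the kernel sources: the struct and the flag bit values are invented for the example.

/*
 * Minimal user-space model of the flagged pattern: one thread mutates a
 * shared flags word under a lock while another reads it locklessly,
 * which is the kind of plain concurrent access KCSAN reports.
 */
#include <pthread.h>
#include <stdio.h>

#define OP_REVALIDATE 0x00000004u /* placeholder flag bits, not the */
#define SHRINK_LIST   0x00008000u /* real DCACHE_* values           */

struct dentry_model {
	pthread_mutex_t lock;
	unsigned int d_flags;
};

static struct dentry_model d = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.d_flags = OP_REVALIDATE,
};

/* writer: like d_lru_shrink_move(), mutates d_flags under a lock */
static void *shrink_writer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&d.lock);
	d.d_flags |= SHRINK_LIST; /* plain RMW on the shared word */
	pthread_mutex_unlock(&d.lock);
	return NULL;
}

/* reader: like d_revalidate(), tests a d_flags bit without the lock */
static void *lookup_reader(void *arg)
{
	(void)arg;
	/*
	 * Plain load racing with the writer above; this pair is what a
	 * race detector flags. The usual kernel-side remediation for such
	 * findings is to annotate the lockless access (READ_ONCE() in the
	 * kernel; an atomic or relaxed load in user space).
	 */
	if (d.d_flags & OP_REVALIDATE)
		puts("would call ->d_revalidate()");
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, shrink_writer, NULL);
	pthread_create(&r, NULL, lookup_reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Built with cc -pthread and run under -fsanitize=thread, ThreadSanitizer flags the same read/write pair on d_flags, which is the user-space analogue of what KCSAN reports here.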


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Feb 1, 2024, 7:10:22 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, there is no reproducer, and there has been no activity.