[fs?] KCSAN: data-race in ___d_drop / __d_lookup_rcu


syzbot

Jun 21, 2023, 10:04:49 AM6/21/23
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 1639fae5132b Merge tag 'drm-fixes-2023-06-17' of git://ano..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1467f2cf280000
kernel config: https://syzkaller.appspot.com/x/.config?x=78c1b724e055b4d3
dashboard link: https://syzkaller.appspot.com/bug?extid=35b9c31800ec55a56860
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
CC: [bra...@kernel.org linux-...@vger.kernel.org linux-...@vger.kernel.org vi...@zeniv.linux.org.uk]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/86241ed81f2c/disk-1639fae5.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b8b19729d4ee/vmlinux-1639fae5.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e365d835f9f6/bzImage-1639fae5.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+35b9c3...@syzkaller.appspotmail.com

==================================================================
BUG: KCSAN: data-race in ___d_drop / __d_lookup_rcu

read to 0xffff8881024b3610 of 8 bytes by task 28486 on cpu 1:
hlist_bl_unhashed include/linux/list_bl.h:54 [inline]
d_unhashed include/linux/dcache.h:335 [inline]
__d_lookup_rcu+0x120/0x290 fs/dcache.c:2400
lookup_fast+0x8e/0x290 fs/namei.c:1625
walk_component fs/namei.c:1994 [inline]
link_path_walk+0x3f4/0x7e0 fs/namei.c:2325
path_lookupat+0x72/0x2a0 fs/namei.c:2478
filename_lookup+0x126/0x300 fs/namei.c:2508
vfs_statx+0xa9/0x300 fs/stat.c:238
vfs_fstatat fs/stat.c:276 [inline]
__do_sys_newfstatat fs/stat.c:446 [inline]
__se_sys_newfstatat+0x8a/0x2a0 fs/stat.c:440
__x64_sys_newfstatat+0x55/0x60 fs/stat.c:440
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

write to 0xffff8881024b3610 of 8 bytes by task 3070 on cpu 0:
__hlist_bl_del include/linux/list_bl.h:128 [inline]
___d_drop+0x106/0x220 fs/dcache.c:500
__d_drop fs/dcache.c:507 [inline]
__dentry_kill+0x147/0x4a0 fs/dcache.c:602
dentry_kill+0x8d/0x1e0
dput+0x118/0x1f0 fs/dcache.c:913
handle_mounts fs/namei.c:1551 [inline]
step_into+0x21a/0x800 fs/namei.c:1836
open_last_lookups fs/namei.c:3583 [inline]
path_openat+0x10af/0x1d00 fs/namei.c:3788
do_filp_open+0xf6/0x200 fs/namei.c:3818
do_sys_openat2+0xb5/0x2a0 fs/open.c:1356
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0xf3/0x120 fs/open.c:1383
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 3070 Comm: syz-executor.2 Not tainted 6.4.0-rc6-syzkaller-00242-g1639fae5132b #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Jul 22, 2023, 9:59:38 AM7/22/23
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have occurred for a while, there is no reproducer, and there has been no activity.