[moderation] [fs?] [mm?] KCSAN: data-race in __filemap_add_folio / invalidate_bdev (6)


syzbot

Mar 24, 2024, 5:19:18 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 70293240c5ce Merge tag 'timers-urgent-2024-03-23' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=15d767b9180000
kernel config: https://syzkaller.appspot.com/x/.config?x=5bd3d8ca9a9838e3
dashboard link: https://syzkaller.appspot.com/bug?extid=54d98583ab10d08b0b47
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
CC: [ak...@linux-foundation.org linux-...@vger.kernel.org linux-...@vger.kernel.org linu...@kvack.org wi...@infradead.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/048466107418/disk-70293240.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/6817d7370564/vmlinux-70293240.xz
kernel image: https://storage.googleapis.com/syzbot-assets/16e8b44bfb95/bzImage-70293240.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+54d985...@syzkaller.appspotmail.com

EXT4-fs (loop0): Invalid log block size: 2049831
==================================================================
BUG: KCSAN: data-race in __filemap_add_folio / invalidate_bdev

read-write to 0xffff8881004c0bb8 of 8 bytes by task 3867 on cpu 0:
__filemap_add_folio+0x492/0x700 mm/filemap.c:912
filemap_add_folio+0x70/0x160 mm/filemap.c:947
page_cache_ra_unbounded+0x15f/0x2e0 mm/readahead.c:250
do_page_cache_ra mm/readahead.c:299 [inline]
force_page_cache_ra+0x18e/0x1d0 mm/readahead.c:330
page_cache_sync_ra+0xcc/0xf0 mm/readahead.c:684
page_cache_sync_readahead include/linux/pagemap.h:1300 [inline]
filemap_get_pages+0x252/0xfb0 mm/filemap.c:2505
filemap_read+0x21c/0x690 mm/filemap.c:2601
blkdev_read_iter+0x217/0x2c0 block/fops.c:754
call_read_iter include/linux/fs.h:2102 [inline]
new_sync_read fs/read_write.c:395 [inline]
vfs_read+0x5bc/0x6b0 fs/read_write.c:476
ksys_read+0xeb/0x1b0 fs/read_write.c:619
__do_sys_read fs/read_write.c:629 [inline]
__se_sys_read fs/read_write.c:627 [inline]
__x64_sys_read+0x42/0x50 fs/read_write.c:627
do_syscall_64+0xd3/0x1d0
entry_SYSCALL_64_after_hwframe+0x6d/0x75

read to 0xffff8881004c0bb8 of 8 bytes by task 8816 on cpu 1:
invalidate_bdev+0x32/0x80 block/bdev.c:93
__ext4_fill_super fs/ext4/super.c:5674 [inline]
ext4_fill_super+0x1788/0x39d0 fs/ext4/super.c:5699
get_tree_bdev+0x253/0x2e0 fs/super.c:1632
ext4_get_tree+0x1c/0x30 fs/ext4/super.c:5731
vfs_get_tree+0x56/0x1d0 fs/super.c:1797
do_new_mount+0x227/0x690 fs/namespace.c:3352
path_mount+0x49b/0xb30 fs/namespace.c:3679
do_mount fs/namespace.c:3692 [inline]
__do_sys_mount fs/namespace.c:3898 [inline]
__se_sys_mount+0x27f/0x2d0 fs/namespace.c:3875
__x64_sys_mount+0x67/0x80 fs/namespace.c:3875
do_syscall_64+0xd3/0x1d0
entry_SYSCALL_64_after_hwframe+0x6d/0x75

value changed: 0x0000000000000009 -> 0x000000000000000a

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 8816 Comm: syz-executor.0 Not tainted 6.8.0-syzkaller-13213-g70293240c5ce #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
==================================================================
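
For anyone triaging: the value change 0x9 -> 0xa is consistent with an
unannotated increment of a page-cache counter, likely mapping->nrpages
given the two stacks (__filemap_add_folio bumps the count while
invalidate_bdev checks it locklessly). KCSAN races of this shape are
commonly silenced by annotating the lockless read with READ_ONCE().
Below is a minimal sketch, assuming the racy field really is
mapping->nrpages; this is an illustration, not a confirmed or tested fix.

/* block/bdev.c (sketch): mark the lockless read so KCSAN treats the
 * race as intentional. Assumes the writer side, which updates nrpages
 * under the mapping's i_pages lock, is left unchanged. */
void invalidate_bdev(struct block_device *bdev)
{
	struct address_space *mapping = bdev->bd_inode->i_mapping;

	/* was: if (mapping->nrpages) */
	if (READ_ONCE(mapping->nrpages)) {
		lru_add_drain_all();
		invalidate_mapping_pages(mapping, 0, -1);
	}
}

The annotated read can still observe a slightly stale count, which
appears acceptable here since it only serves as a cheap "is there
anything to invalidate" check before the heavier calls.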


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup