[moderation] [block?] KCSAN: data-race in bdev_set_nr_sectors / block_read_full_folio

syzbot

Dec 7, 2023, 5:01:31 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: bee0e7762ad2 Merge tag 'for-linus-iommufd' of git://git.ke..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=14b5878ce80000
kernel config: https://syzkaller.appspot.com/x/.config?x=ac34c1f29a8029df
dashboard link: https://syzkaller.appspot.com/bug?extid=9afba386f14b01b1a40c
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
CC: [ax...@kernel.dk linux...@vger.kernel.org linux-...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/233be5f65dd2/disk-bee0e776.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/94423738a289/vmlinux-bee0e776.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0b977463fa9a/bzImage-bee0e776.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+9afba3...@syzkaller.appspotmail.com

BUG: KCSAN: data-race in bdev_set_nr_sectors / block_read_full_folio

write to 0xffff8881006316b0 of 8 bytes by task 8705 on cpu 0:
i_size_write include/linux/fs.h:932 [inline]
bdev_set_nr_sectors+0x42/0x70 block/bdev.c:421
set_capacity block/genhd.c:61 [inline]
set_capacity_and_notify+0x6e/0x170 block/genhd.c:74
loop_set_size+0x2e/0x70 drivers/block/loop.c:237
loop_configure+0xaf9/0xca0 drivers/block/loop.c:1100
lo_ioctl+0x682/0x12e0
blkdev_ioctl+0x375/0x460 block/ioctl.c:633
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:871 [inline]
__se_sys_ioctl+0xcf/0x140 fs/ioctl.c:857
__x64_sys_ioctl+0x43/0x50 fs/ioctl.c:857
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x44/0x110 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x63/0x6b

read to 0xffff8881006316b0 of 8 bytes by task 23252 on cpu 1:
i_size_read include/linux/fs.h:910 [inline]
block_read_full_folio+0x71/0x710 fs/buffer.c:2371
blkdev_read_folio+0x1c/0x20 block/fops.c:420
filemap_read_folio mm/filemap.c:2323 [inline]
filemap_update_page mm/filemap.c:2407 [inline]
filemap_get_pages+0xcfd/0xf90 mm/filemap.c:2521
filemap_read+0x214/0x680 mm/filemap.c:2593
blkdev_read_iter+0x217/0x2c0 block/fops.c:742
call_read_iter include/linux/fs.h:2014 [inline]
new_sync_read fs/read_write.c:389 [inline]
vfs_read+0x3c0/0x590 fs/read_write.c:470
ksys_read+0xeb/0x1a0 fs/read_write.c:613
__do_sys_read fs/read_write.c:623 [inline]
__se_sys_read fs/read_write.c:621 [inline]
__x64_sys_read+0x42/0x50 fs/read_write.c:621
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x44/0x110 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x63/0x6b

value changed: 0x0000000000000000 -> 0x000000000005f800

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 23252 Comm: udevd Not tainted 6.7.0-rc4-syzkaller-00009-gbee0e7762ad2 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
==================================================================
I/O error, dev loop3, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
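
For context (not part of the syzbot report): the racing accesses are set_capacity() updating the loop device inode's size through i_size_write() while block_read_full_folio() concurrently reads it through i_size_read(); the report shows an 8-byte write racing with an 8-byte read of the same inode->i_size field. The sketch below is a minimal, self-contained illustration of that pattern and of the usual way such intentional lockless accesses are annotated with READ_ONCE()/WRITE_ONCE(). struct fake_inode and the racy_/marked_ helpers are made-up names, and the macro definitions are simplified stand-ins, not the kernel's real struct inode or i_size helpers.

/*
 * Illustration only: simplified stand-ins, not the kernel's
 * struct inode or its i_size_read()/i_size_write() helpers.
 */
#define READ_ONCE(x)      (*(volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v)  (*(volatile typeof(x) *)&(x) = (v))

struct fake_inode {
	long long i_size;	/* read and written concurrently */
};

/* The pattern KCSAN complains about: a plain load racing with a plain store. */
static long long racy_size_read(struct fake_inode *inode)
{
	return inode->i_size;		/* plain 8-byte load */
}

static void racy_size_write(struct fake_inode *inode, long long size)
{
	inode->i_size = size;		/* plain 8-byte store */
}

/*
 * Typical annotation for an intentional lockless access: marking both
 * sides prevents load/store tearing by the compiler and documents that
 * the concurrency is deliberate, which also silences KCSAN.
 */
static long long marked_size_read(struct fake_inode *inode)
{
	return READ_ONCE(inode->i_size);
}

static void marked_size_write(struct fake_inode *inode, long long size)
{
	WRITE_ONCE(inode->i_size, size);
}

For reference, on 32-bit SMP kernels i_size_read()/i_size_write() already guard inode->i_size with a seqcount, so the accesses flagged here are most likely the 64-bit fast path; whether the right response is marking those accesses or treating this as a known benign race is a call for the block layer maintainers.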


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Jan 9, 2024, 5:45:19 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while, and there is no reproducer and no recent activity.