KCSAN: data-race in wbt_inflight_cb / wbt_wait (4)

syzbot

Dec 7, 2020, 7:07:11 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: b6505459 Linux 5.10-rc6
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10b8cd8b500000
kernel config: https://syzkaller.appspot.com/x/.config?x=c949fed53798f819
dashboard link: https://syzkaller.appspot.com/bug?extid=d727355e0b6375535d72
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project.git 913f6005669cfb590c99865a90bc51ed0983d09d)
CC: [ax...@kernel.dk linux...@vger.kernel.org linux-...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+d72735...@syzkaller.appspotmail.com

==================================================================
BUG: KCSAN: data-race in wbt_inflight_cb / wbt_wait

write to 0xffff888103814a40 of 8 bytes by task 16195 on cpu 1:
wb_timestamp block/blk-wbt.c:89 [inline]
wbt_wait+0x12b/0x2b0 block/blk-wbt.c:579
__rq_qos_throttle+0x39/0x70 block/blk-rq-qos.c:72
rq_qos_throttle block/blk-rq-qos.h:182 [inline]
blk_mq_submit_bio+0x233/0x1020 block/blk-mq.c:2174
__submit_bio_noacct_mq block/blk-core.c:1026 [inline]
submit_bio_noacct+0x77d/0x930 block/blk-core.c:1059
submit_bio+0x1f3/0x360 block/blk-core.c:1129
iomap_dio_submit_bio fs/iomap/direct-io.c:76 [inline]
iomap_dio_bio_actor+0x82d/0xa60 fs/iomap/direct-io.c:312
iomap_dio_actor+0x266/0x3a0 fs/iomap/direct-io.c:387
iomap_apply+0x1e1/0x4a0 fs/iomap/apply.c:84
__iomap_dio_rw+0x448/0x9b0 fs/iomap/direct-io.c:517
iomap_dio_rw+0x30/0x70 fs/iomap/direct-io.c:605
ext4_dio_read_iter fs/ext4/file.c:77 [inline]
ext4_file_read_iter+0x3bd/0x420 fs/ext4/file.c:129
call_read_iter include/linux/fs.h:1897 [inline]
generic_file_splice_read+0x22a/0x310 fs/splice.c:311
do_splice_to fs/splice.c:788 [inline]
splice_direct_to_actor+0x2aa/0x650 fs/splice.c:867
do_splice_direct+0xf5/0x170 fs/splice.c:976
do_sendfile+0x5db/0xca0 fs/read_write.c:1257
__do_sys_sendfile64 fs/read_write.c:1318 [inline]
__se_sys_sendfile64 fs/read_write.c:1304 [inline]
__x64_sys_sendfile64+0xf2/0x130 fs/read_write.c:1304
do_syscall_64+0x39/0x80 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9

read to 0xffff888103814a40 of 8 bytes by task 22267 on cpu 0:
close_io block/blk-wbt.c:444 [inline]
get_limit block/blk-wbt.c:474 [inline]
wbt_inflight_cb+0x19a/0x220 block/blk-wbt.c:495
rq_qos_wait+0xac/0x210 block/blk-rq-qos.c:266
__wbt_wait block/blk-wbt.c:518 [inline]
wbt_wait+0x1bb/0x2b0 block/blk-wbt.c:583
__rq_qos_throttle+0x39/0x70 block/blk-rq-qos.c:72
rq_qos_throttle block/blk-rq-qos.h:182 [inline]
blk_mq_submit_bio+0x233/0x1020 block/blk-mq.c:2174
__submit_bio_noacct_mq block/blk-core.c:1026 [inline]
submit_bio_noacct+0x77d/0x930 block/blk-core.c:1059
submit_bio+0x1f3/0x360 block/blk-core.c:1129
ext4_io_submit+0xcd/0xf0 fs/ext4/page-io.c:382
ext4_writepages+0x68e/0x1e30 fs/ext4/inode.c:2750
do_writepages+0x7b/0x150 mm/page-writeback.c:2352
__writeback_single_inode+0x84/0x560 fs/fs-writeback.c:1461
writeback_sb_inodes+0x6a0/0x1020 fs/fs-writeback.c:1721
wb_writeback+0x27d/0x660 fs/fs-writeback.c:1894
wb_do_writeback+0x101/0x5d0 fs/fs-writeback.c:2039
wb_workfn+0xb8/0x410 fs/fs-writeback.c:2080
process_one_work+0x3e1/0x950 kernel/workqueue.c:2272
worker_thread+0x635/0xb90 kernel/workqueue.c:2418
kthread+0x1fd/0x220 kernel/kthread.c:292
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 22267 Comm: kworker/u4:7 Not tainted 5.10.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: writeback wb_workfn (flush-8:0)
==================================================================
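The racy pair KCSAN flagged above is a plain 8-byte store (the timestamp update in wb_timestamp()/wbt_wait) racing with a plain load of the same word (close_io()/get_limit() on the other CPU). Reports of this shape are commonly resolved by marking the accesses with WRITE_ONCE()/READ_ONCE() so the compiler cannot tear or re-fuse them; whether that was the eventual fix here is not stated in this thread. The sketch below is a minimal user-space analogue of that pattern using C11 relaxed atomics, which give the same no-tearing guarantee. All names (last_issue, writer, reader) are illustrative, not taken from blk-wbt.c.

```c
/* User-space analogue of the blk-wbt race: one thread keeps
 * updating a "last issue" timestamp, another polls it concurrently.
 * Relaxed atomics model the kernel's WRITE_ONCE()/READ_ONCE()
 * annotations: single-variable modification order plus read-read
 * coherence means the reader can never observe the counter going
 * backwards, and no access can be torn.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long last_issue; /* analogue of rwb->last_issue */

static void *writer(void *arg)
{
	for (unsigned long t = 1; t <= 100000; t++)
		/* analogue of WRITE_ONCE(rwb->last_issue, jiffies) */
		atomic_store_explicit(&last_issue, t, memory_order_relaxed);
	return NULL;
}

static void *reader(void *arg)
{
	unsigned long seen = 0;

	for (int i = 0; i < 100000; i++) {
		/* analogue of READ_ONCE(rwb->last_issue) */
		unsigned long v =
			atomic_load_explicit(&last_issue, memory_order_relaxed);
		if (v < seen)   /* coherence forbids this */
			fprintf(stderr, "non-monotonic read\n");
		else
			seen = v;
	}
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	printf("final last_issue=%lu\n",
	       atomic_load_explicit(&last_issue, memory_order_relaxed));
	return 0;
}
```

Compile with `-pthread`. With plain (non-atomic) accesses in place of the atomic_store/atomic_load pair, a tool like KCSAN (or TSan in user space) would flag exactly the write/read pair shown in the report above.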


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jan 4, 2021, 10:23:17 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.