[moderation] [block?] KCSAN: data-race in __blk_mq_requeue_request / bt_tags_for_each (3)


syzbot
Dec 10, 2023, 7:04:30 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: e8f60209d6cf Merge tag 'pmdomain-v6.7-rc2' of git://git.ke..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=16f3499ae80000
kernel config: https://syzkaller.appspot.com/x/.config?x=585869067cd7ce59
dashboard link: https://syzkaller.appspot.com/bug?extid=c7b19bfba39d201a07b6
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
CC: [ax...@kernel.dk linux...@vger.kernel.org linux-...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/d70d3690767e/disk-e8f60209.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/7602f84ba538/vmlinux-e8f60209.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e9fe9a5875ca/bzImage-e8f60209.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c7b19b...@syzkaller.appspotmail.com

==================================================================
BUG: KCSAN: data-race in __blk_mq_requeue_request / bt_tags_for_each

write to 0xffff8881028aed60 of 4 bytes by task 29827 on cpu 0:
__blk_mq_put_driver_tag block/blk-mq.h:339 [inline]
blk_mq_put_driver_tag block/blk-mq.h:347 [inline]
__blk_mq_requeue_request+0x130/0x2a0 block/blk-mq.c:1432
blk_mq_handle_dev_resource block/blk-mq.c:1905 [inline]
blk_mq_dispatch_rq_list+0x858/0x1090 block/blk-mq.c:2046
__blk_mq_do_dispatch_sched block/blk-mq-sched.c:170 [inline]
blk_mq_do_dispatch_sched block/blk-mq-sched.c:184 [inline]
__blk_mq_sched_dispatch_requests+0x5ec/0xd20 block/blk-mq-sched.c:309
blk_mq_sched_dispatch_requests+0x99/0x100 block/blk-mq-sched.c:333
blk_mq_run_hw_queue+0x2a4/0x4c0 block/blk-mq.c:2252
blk_mq_get_tag+0x479/0x590 block/blk-mq-tag.c:170
__blk_mq_alloc_requests+0x642/0x9e0 block/blk-mq.c:501
blk_mq_get_new_requests block/blk-mq.c:2872 [inline]
blk_mq_submit_bio+0x468/0xd90 block/blk-mq.c:2970
__submit_bio+0x11c/0x350 block/blk-core.c:599
__submit_bio_noacct_mq block/blk-core.c:678 [inline]
submit_bio_noacct_nocheck+0x449/0x5e0 block/blk-core.c:707
submit_bio_noacct+0x71c/0x8c0 block/blk-core.c:801
submit_bio+0xb7/0xc0 block/blk-core.c:834
ext4_io_submit+0x8a/0xa0 fs/ext4/page-io.c:378
ext4_do_writepages+0xb3a/0x2100 fs/ext4/inode.c:2705
ext4_writepages+0x15e/0x2e0 fs/ext4/inode.c:2774
do_writepages+0x1c2/0x340 mm/page-writeback.c:2553
filemap_fdatawrite_wbc+0xdb/0xf0 mm/filemap.c:387
__filemap_fdatawrite_range mm/filemap.c:420 [inline]
__filemap_fdatawrite mm/filemap.c:426 [inline]
filemap_flush+0x95/0xc0 mm/filemap.c:453
ext4_alloc_da_blocks+0x50/0x130 fs/ext4/inode.c:3077
ext4_release_file+0x5f/0x1c0 fs/ext4/file.c:169
__fput+0x299/0x630 fs/file_table.c:394
____fput+0x15/0x20 fs/file_table.c:422
task_work_run+0x135/0x1a0 kernel/task_work.c:180
exit_task_work include/linux/task_work.h:38 [inline]
do_exit+0x604/0x16d0 kernel/exit.c:871
do_group_exit+0x101/0x150 kernel/exit.c:1021
get_signal+0xf4e/0x10a0 kernel/signal.c:2904
arch_do_signal_or_restart+0x95/0x4b0 arch/x86/kernel/signal.c:309
exit_to_user_mode_loop+0x6f/0xe0 kernel/entry/common.c:168
exit_to_user_mode_prepare+0x6c/0xb0 kernel/entry/common.c:204
irqentry_exit_to_user_mode+0x9/0x20 kernel/entry/common.c:309
irqentry_exit+0x12/0x40 kernel/entry/common.c:412
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:570

read to 0xffff8881028aed60 of 4 bytes by task 252 on cpu 1:
blk_mq_find_and_get_req block/blk-mq-tag.c:260 [inline]
bt_tags_iter block/blk-mq-tag.c:356 [inline]
__sbitmap_for_each_set include/linux/sbitmap.h:281 [inline]
sbitmap_for_each_set include/linux/sbitmap.h:302 [inline]
bt_tags_for_each+0x2e2/0x500 block/blk-mq-tag.c:391
__blk_mq_all_tag_iter block/blk-mq-tag.c:402 [inline]
blk_mq_tagset_busy_iter+0x114/0x150 block/blk-mq-tag.c:446
scsi_host_busy+0x4f/0x80 drivers/scsi/hosts.c:604
scsi_host_queue_ready drivers/scsi/scsi_lib.c:1341 [inline]
scsi_queue_rq+0x310/0x1a30 drivers/scsi/scsi_lib.c:1735
blk_mq_dispatch_rq_list+0x2d9/0x1090 block/blk-mq.c:2037
__blk_mq_sched_dispatch_requests+0x1ce/0xd20 block/blk-mq-sched.c:301
blk_mq_sched_dispatch_requests+0x99/0x100 block/blk-mq-sched.c:333
blk_mq_run_hw_queue+0x2a4/0x4c0 block/blk-mq.c:2252
blk_mq_run_hw_queues+0x161/0x1e0 block/blk-mq.c:2301
blk_mq_requeue_work+0x408/0x430 block/blk-mq.c:1498
process_one_work kernel/workqueue.c:2630 [inline]
process_scheduled_works+0x5b8/0xa30 kernel/workqueue.c:2703
worker_thread+0x525/0x730 kernel/workqueue.c:2784
kthread+0x1d7/0x210 kernel/kthread.c:388
ret_from_fork+0x48/0x60 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242

value changed: 0x00001b0a -> 0xffffffff

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 252 Comm: kworker/1:1H Not tainted 6.7.0-rc3-syzkaller-00048-ge8f60209d6cf #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
Workqueue: kblockd blk_mq_requeue_work
==================================================================
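
The racing accesses are both to the request's driver tag: the write side is __blk_mq_put_driver_tag() (block/blk-mq.h:339), which presumably resets rq->tag to BLK_MQ_NO_TAG, and the read side appears to be the rq->tag check in blk_mq_find_and_get_req() (block/blk-mq-tag.c:260), reached from blk_mq_tagset_busy_iter() while scsi_host_busy() counts in-flight requests. The reported value change 0x00001b0a -> 0xffffffff is consistent with the tag being cleared to BLK_MQ_NO_TAG (~0U) while the iterator was reading it. Below is a minimal, self-contained userspace sketch of that access pattern, not kernel code; all names in it (fake_request, NO_TAG, put_tag, find_req) are made up to mirror the shape of the report:

#include <pthread.h>
#include <stdio.h>

#define NO_TAG ((unsigned int)-1)	/* stands in for BLK_MQ_NO_TAG */

struct fake_request {
	unsigned int tag;
};

static struct fake_request rq = { .tag = 0x1b0a };

/* Writer: models __blk_mq_put_driver_tag() clearing the driver tag. */
static void *put_tag(void *arg)
{
	(void)arg;
	rq.tag = NO_TAG;		/* plain store: one side of the race */
	return NULL;
}

/* Reader: models the tag check in blk_mq_find_and_get_req(). */
static void *find_req(void *arg)
{
	unsigned int bitnr = 0x1b0a;

	(void)arg;
	if (rq.tag != bitnr)		/* plain load: the other side */
		printf("tag changed under us: 0x%x\n", rq.tag);
	return NULL;
}

int main(void)
{
	pthread_t writer, reader;

	pthread_create(&writer, NULL, put_tag, NULL);
	pthread_create(&reader, NULL, find_req, NULL);
	pthread_join(writer, NULL);
	pthread_join(reader, NULL);
	return 0;
}

Built with -fsanitize=thread, this pattern is flagged by ThreadSanitizer, the userspace analogue of what KCSAN reports here. If the race is judged benign, such reports are typically silenced by marking the accesses with READ_ONCE()/WRITE_ONCE() or wrapping the reader in data_race(); whether that is appropriate for this path is a call for the block maintainers.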


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot
Jan 5, 2024, 5:53:18 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, there is no reproducer, and there has been no activity.