[moderation] [block?] KCSAN: data-race in dd_insert_requests / ll_back_merge_fn (2)

syzbot

Dec 9, 2023, 7:09:34 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: bee0e7762ad2 Merge tag 'for-linus-iommufd' of git://git.ke..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=157d8754e80000
kernel config: https://syzkaller.appspot.com/x/.config?x=ac34c1f29a8029df
dashboard link: https://syzkaller.appspot.com/bug?extid=e621c9fe4266358d53d6
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
CC: [ax...@kernel.dk linux...@vger.kernel.org linux-...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/233be5f65dd2/disk-bee0e776.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/94423738a289/vmlinux-bee0e776.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0b977463fa9a/bzImage-bee0e776.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e621c9...@syzkaller.appspotmail.com

==================================================================
BUG: KCSAN: data-race in dd_insert_requests / ll_back_merge_fn

write to 0xffff888101764390 of 8 bytes by task 7657 on cpu 1:
dd_insert_request block/mq-deadline.c:837 [inline]
dd_insert_requests+0x4b1/0x670 block/mq-deadline.c:878
blk_mq_dispatch_plug_list block/blk-mq.c:2762 [inline]
blk_mq_flush_plug_list+0x643/0xdc0 block/blk-mq.c:2812
__blk_flush_plug+0x210/0x260 block/blk-core.c:1150
blk_finish_plug+0x47/0x60 block/blk-core.c:1174
swap_cluster_readahead+0x416/0x4c0 mm/swap_state.c:669
swapin_readahead+0xe9/0x7f0 mm/swap_state.c:878
do_swap_page+0x4a0/0x1670 mm/memory.c:3883
handle_pte_fault mm/memory.c:5041 [inline]
__handle_mm_fault mm/memory.c:5179 [inline]
handle_mm_fault+0xa36/0x2dd0 mm/memory.c:5344
do_user_addr_fault arch/x86/mm/fault.c:1364 [inline]
handle_page_fault arch/x86/mm/fault.c:1505 [inline]
exc_page_fault+0x3ff/0x6c0 arch/x86/mm/fault.c:1561
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:570

read to 0xffff888101764390 of 8 bytes by task 7659 on cpu 0:
req_set_nomerge block/blk.h:351 [inline]
ll_back_merge_fn+0x29c/0x4a0 block/blk-merge.c:647
bio_attempt_back_merge+0x58/0x500 block/blk-merge.c:982
blk_attempt_bio_merge+0x43d/0x480 block/blk-merge.c:1068
blk_attempt_plug_merge+0xb4/0xf0 block/blk-merge.c:1115
blk_mq_attempt_bio_merge block/blk-mq.c:2853 [inline]
blk_mq_get_new_requests block/blk-mq.c:2873 [inline]
blk_mq_submit_bio+0x3ab/0xd90 block/blk-mq.c:2982
__submit_bio+0x11c/0x350 block/blk-core.c:607
__submit_bio_noacct_mq block/blk-core.c:686 [inline]
submit_bio_noacct_nocheck+0x449/0x5e0 block/blk-core.c:715
submit_bio_noacct+0x70c/0x8c0 block/blk-core.c:809
submit_bio+0xb7/0xc0 block/blk-core.c:842
swap_writepage_bdev_async mm/page_io.c:368 [inline]
__swap_writepage+0x69d/0xdb0 mm/page_io.c:386
swap_writepage+0x6e/0x120 mm/page_io.c:204
shmem_writepage+0x7a6/0x970 mm/shmem.c:1500
pageout mm/vmscan.c:654 [inline]
shrink_folio_list+0x1952/0x2540 mm/vmscan.c:1315
shrink_inactive_list mm/vmscan.c:1913 [inline]
shrink_list mm/vmscan.c:2154 [inline]
shrink_lruvec+0xd80/0x17a0 mm/vmscan.c:5626
shrink_node_memcgs mm/vmscan.c:5812 [inline]
shrink_node+0xab3/0x15c0 mm/vmscan.c:5847
shrink_zones mm/vmscan.c:6086 [inline]
do_try_to_free_pages+0x43d/0xce0 mm/vmscan.c:6148
try_to_free_mem_cgroup_pages+0x1e2/0x480 mm/vmscan.c:6463
try_charge_memcg+0x280/0xd30 mm/memcontrol.c:2742
obj_cgroup_charge_pages+0xab/0x130 mm/memcontrol.c:3255
__memcg_kmem_charge_page+0x9c/0x170 mm/memcontrol.c:3281
__alloc_pages+0x1bb/0x340 mm/page_alloc.c:4585
alloc_pages_mpol+0xb1/0x1d0 mm/mempolicy.c:2133
alloc_pages+0xe0/0x100 mm/mempolicy.c:2204
vm_area_alloc_pages mm/vmalloc.c:3063 [inline]
__vmalloc_area_node mm/vmalloc.c:3139 [inline]
__vmalloc_node_range+0x6d2/0xea0 mm/vmalloc.c:3320
kvmalloc_node+0x121/0x160 mm/util.c:642
kvmalloc include/linux/slab.h:738 [inline]
xt_alloc_table_info+0x3d/0x80 net/netfilter/x_tables.c:1192
do_replace net/ipv4/netfilter/arp_tables.c:970 [inline]
do_arpt_set_ctl+0x634/0x13b0 net/ipv4/netfilter/arp_tables.c:1421
nf_setsockopt+0x18d/0x1b0 net/netfilter/nf_sockopt.c:101
ip_setsockopt+0xe6/0x100 net/ipv4/ip_sockglue.c:1426
tcp_setsockopt+0x90/0xa0 net/ipv4/tcp.c:3704
sock_common_setsockopt+0x61/0x70 net/core/sock.c:3711
do_sock_setsockopt net/socket.c:2311 [inline]
__sys_setsockopt+0x1d4/0x240 net/socket.c:2334
__do_sys_setsockopt net/socket.c:2343 [inline]
__se_sys_setsockopt net/socket.c:2340 [inline]
__x64_sys_setsockopt+0x66/0x80 net/socket.c:2340
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x44/0x110 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x63/0x6b

value changed: 0x0000000000000000 -> 0xffff88810263d680

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 7659 Comm: syz-executor.4 Tainted: G W 6.7.0-rc4-syzkaller-00009-gbee0e7762ad2 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
==================================================================
syz-executor.4 (7659) used greatest stack depth: 8024 bytes left
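
For context on the race itself: the write side in dd_insert_request() (block/mq-deadline.c:837) appears to publish a request as the queue's last_merge hint, while the read side in req_set_nomerge() (block/blk.h:351) concurrently compares against and clears that same pointer; this matches the reported 0x0 -> 0xffff88810263d680 value change. When such a lockless hint is deemed acceptable, the usual KCSAN-friendly treatment is to mark the accesses with READ_ONCE()/WRITE_ONCE() (or wrap a known-benign access in data_race()), since KCSAN in its default configuration does not report races in which both accesses are marked. Below is a minimal, self-contained userspace sketch of that marking pattern, using illustrative stand-in types and thread bodies; it is not a proposed kernel patch.

/* Userspace analogue of the kernel's READ_ONCE()/WRITE_ONCE() pattern
 * for a lockless "hint" pointer such as q->last_merge. Illustrative
 * only; the struct and function names here are made up for the sketch. */
#include <pthread.h>
#include <stdio.h>

/* Volatile-cast accessors, modeled on the kernel macros: they force a
 * single untorn load/store and mark the access as intentionally racy. */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

struct request { int id; };

struct queue {
	struct request *last_merge;	/* shared lockless merge hint */
};

static struct queue q;
static struct request rq1 = { .id = 1 };

/* Writer side, loosely modeled on dd_insert_request(): publish a hint. */
static void *insert_side(void *arg)
{
	(void)arg;
	if (!READ_ONCE(q.last_merge))
		WRITE_ONCE(q.last_merge, &rq1);
	return NULL;
}

/* Reader side, loosely modeled on req_set_nomerge(): consume/clear it. */
static void *nomerge_side(void *arg)
{
	struct request *req = arg;

	if (req == READ_ONCE(q.last_merge))
		WRITE_ONCE(q.last_merge, NULL);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, insert_side, NULL);
	pthread_create(&t2, NULL, nomerge_side, &rq1);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("last_merge = %p\n", (void *)q.last_merge);
	return 0;
}

Builds with "gcc -pthread". In the kernel the analogous change would be confined to marking the q->last_merge accesses at the two reported lines (or a data_race() wrapper if the race is benign); whether the race is in fact harmless is for the block maintainers to judge.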


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Feb 7, 2024, 1:53:19 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have occurred for a while, and there is no reproducer and no recent activity.