[syzbot] [mm?] KASAN: slab-use-after-free Read in mas_next_slot (2)


syzbot

Jul 16, 2025, 1:55:37 PM
to Liam.H...@oracle.com, ak...@linux-foundation.org, ja...@google.com, linux-...@vger.kernel.org, linu...@kvack.org, lorenzo...@oracle.com, pfal...@suse.de, syzkall...@googlegroups.com, vba...@suse.cz
Hello,

syzbot found the following issue on:

HEAD commit: 0be23810e32e Add linux-next specific files for 20250714
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=11a9a7d4580000
kernel config: https://syzkaller.appspot.com/x/.config?x=adc3ea2bfe31343b
dashboard link: https://syzkaller.appspot.com/bug?extid=ebfd0e44b5c11034e1eb
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11d0658c580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=15dd858c580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/13b5be5048fe/disk-0be23810.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3d2b3b2ceddf/vmlinux-0be23810.xz
kernel image: https://storage.googleapis.com/syzbot-assets/c7e5fbf3efa6/bzImage-0be23810.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ebfd0e...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in ma_dead_node lib/maple_tree.c:575 [inline]
BUG: KASAN: slab-use-after-free in mas_rewalk_if_dead lib/maple_tree.c:4415 [inline]
BUG: KASAN: slab-use-after-free in mas_next_slot+0x185/0xcf0 lib/maple_tree.c:4697
Read of size 8 at addr ffff8880755dc600 by task syz.0.656/6830

CPU: 1 UID: 0 PID: 6830 Comm: syz.0.656 Not tainted 6.16.0-rc6-next-20250714-syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0xca/0x230 mm/kasan/report.c:480
kasan_report+0x118/0x150 mm/kasan/report.c:593
ma_dead_node lib/maple_tree.c:575 [inline]
mas_rewalk_if_dead lib/maple_tree.c:4415 [inline]
mas_next_slot+0x185/0xcf0 lib/maple_tree.c:4697
mas_find+0xb0e/0xd30 lib/maple_tree.c:6062
vma_find include/linux/mm.h:855 [inline]
remap_move mm/mremap.c:1819 [inline]
do_mremap mm/mremap.c:1904 [inline]
__do_sys_mremap mm/mremap.c:1968 [inline]
__se_sys_mremap+0xaff/0xef0 mm/mremap.c:1936
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4fecf8e929
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff93ea4718 EFLAGS: 00000246 ORIG_RAX: 0000000000000019
RAX: ffffffffffffffda RBX: 00007f4fed1b5fa0 RCX: 00007f4fecf8e929
RDX: 0000000000600002 RSI: 0000000000600002 RDI: 0000200000000000
RBP: 00007f4fed010b39 R08: 0000200000a00000 R09: 0000000000000000
R10: 0000000000000007 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f4fed1b5fa0 R14: 00007f4fed1b5fa0 R15: 0000000000000005
</TASK>

Allocated by task 6830:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:319 [inline]
__kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:345
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4180 [inline]
slab_alloc_node mm/slub.c:4229 [inline]
kmem_cache_alloc_noprof+0x1c1/0x3c0 mm/slub.c:4236
mt_alloc_one lib/maple_tree.c:176 [inline]
mas_alloc_nodes+0x2e9/0x8e0 lib/maple_tree.c:1255
mas_node_count_gfp lib/maple_tree.c:1337 [inline]
mas_preallocate+0x3ad/0x6f0 lib/maple_tree.c:5537
vma_iter_prealloc mm/vma.h:463 [inline]
__split_vma+0x2fa/0xa00 mm/vma.c:528
vms_gather_munmap_vmas+0x2de/0x12b0 mm/vma.c:1359
__mmap_prepare mm/vma.c:2361 [inline]
__mmap_region mm/vma.c:2653 [inline]
mmap_region+0x724/0x20c0 mm/vma.c:2741
do_mmap+0xc45/0x10d0 mm/mmap.c:561
vm_mmap_pgoff+0x2a6/0x4d0 mm/util.c:579
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 23:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:68
kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:576
poison_slab_object mm/kasan/common.c:247 [inline]
__kasan_slab_free+0x62/0x70 mm/kasan/common.c:264
kasan_slab_free include/linux/kasan.h:233 [inline]
slab_free_hook mm/slub.c:2417 [inline]
slab_free mm/slub.c:4680 [inline]
kmem_cache_free+0x18f/0x400 mm/slub.c:4782
rcu_do_batch kernel/rcu/tree.c:2584 [inline]
rcu_core+0xca8/0x1710 kernel/rcu/tree.c:2840
handle_softirqs+0x283/0x870 kernel/softirq.c:579
run_ksoftirqd+0x9b/0x100 kernel/softirq.c:968
smpboot_thread_fn+0x53f/0xa60 kernel/smpboot.c:160
kthread+0x70e/0x8a0 kernel/kthread.c:463
ret_from_fork+0x3f9/0x770 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Last potentially related work creation:
kasan_save_stack+0x3e/0x60 mm/kasan/common.c:47
kasan_record_aux_stack+0xbd/0xd0 mm/kasan/generic.c:548
__call_rcu_common kernel/rcu/tree.c:3102 [inline]
call_rcu+0x157/0x9c0 kernel/rcu/tree.c:3222
mas_wr_node_store lib/maple_tree.c:3893 [inline]
mas_wr_store_entry+0x1f1b/0x25b0 lib/maple_tree.c:4104
mas_store_prealloc+0xb00/0xf60 lib/maple_tree.c:5510
vma_iter_store_new mm/vma.h:509 [inline]
vma_complete+0x224/0xae0 mm/vma.c:354
__split_vma+0x8a6/0xa00 mm/vma.c:568
vms_gather_munmap_vmas+0x2de/0x12b0 mm/vma.c:1359
do_vmi_align_munmap+0x25d/0x420 mm/vma.c:1527
do_vmi_munmap+0x253/0x2e0 mm/vma.c:1584
do_munmap+0xe1/0x140 mm/mmap.c:1071
mremap_to+0x304/0x7b0 mm/mremap.c:1367
remap_move mm/mremap.c:1861 [inline]
do_mremap mm/mremap.c:1904 [inline]
__do_sys_mremap mm/mremap.c:1968 [inline]
__se_sys_mremap+0xa0b/0xef0 mm/mremap.c:1936
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff8880755dc600
which belongs to the cache maple_node of size 256
The buggy address is located 0 bytes inside of
freed 256-byte region [ffff8880755dc600, ffff8880755dc700)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x755dc
head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000040(head|node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000040 ffff88801a491000 dead000000000122 0000000000000000
raw: 0000000000000000 0000000080100010 00000000f5000000 0000000000000000
head: 00fff00000000040 ffff88801a491000 dead000000000122 0000000000000000
head: 0000000000000000 0000000080100010 00000000f5000000 0000000000000000
head: 00fff00000000001 ffffea0001d57701 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000002
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 1, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 6828, tgid 6828 (cmp), ts 120765032919, free_ts 112542256570
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x240/0x2a0 mm/page_alloc.c:1851
prep_new_page mm/page_alloc.c:1859 [inline]
get_page_from_freelist+0x21e4/0x22c0 mm/page_alloc.c:3858
__alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5148
alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2416
alloc_slab_page mm/slub.c:2487 [inline]
allocate_slab+0x8a/0x370 mm/slub.c:2655
new_slab mm/slub.c:2709 [inline]
___slab_alloc+0xbeb/0x1410 mm/slub.c:3891
__slab_alloc mm/slub.c:3981 [inline]
__slab_alloc_node mm/slub.c:4056 [inline]
slab_alloc_node mm/slub.c:4217 [inline]
kmem_cache_alloc_noprof+0x283/0x3c0 mm/slub.c:4236
mt_alloc_one lib/maple_tree.c:176 [inline]
mas_alloc_nodes+0x2e9/0x8e0 lib/maple_tree.c:1255
mas_node_count_gfp lib/maple_tree.c:1337 [inline]
mas_preallocate+0x3ad/0x6f0 lib/maple_tree.c:5537
vma_iter_prealloc mm/vma.h:463 [inline]
commit_merge+0x1fd/0x700 mm/vma.c:753
vma_expand+0x40c/0x7e0 mm/vma.c:1158
vma_merge_new_range+0x6a3/0x860 mm/vma.c:1095
__mmap_region mm/vma.c:2666 [inline]
mmap_region+0xd46/0x20c0 mm/vma.c:2741
do_mmap+0xc45/0x10d0 mm/mmap.c:561
vm_mmap_pgoff+0x2a6/0x4d0 mm/util.c:579
ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:607
page last free pid 5955 tgid 5955 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1395 [inline]
__free_frozen_pages+0xbc4/0xd30 mm/page_alloc.c:2895
__slab_free+0x303/0x3c0 mm/slub.c:4591
qlink_free mm/kasan/quarantine.c:163 [inline]
qlist_free_all+0x97/0x140 mm/kasan/quarantine.c:179
kasan_quarantine_reduce+0x148/0x160 mm/kasan/quarantine.c:286
__kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:329
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4180 [inline]
slab_alloc_node mm/slub.c:4229 [inline]
kmem_cache_alloc_noprof+0x1c1/0x3c0 mm/slub.c:4236
getname_flags+0xb8/0x540 fs/namei.c:146
getname include/linux/fs.h:2914 [inline]
do_sys_openat2+0xbc/0x1c0 fs/open.c:1429
do_sys_open fs/open.c:1450 [inline]
__do_sys_openat fs/open.c:1466 [inline]
__se_sys_openat fs/open.c:1461 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1461
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
ffff8880755dc500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff8880755dc580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff8880755dc600: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff8880755dc680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8880755dc700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

Lorenzo Stoakes

Jul 16, 2025, 2:27:46 PM
to syzbot, Liam.H...@oracle.com, ak...@linux-foundation.org, ja...@google.com, linux-...@vger.kernel.org, linu...@kvack.org, pfal...@suse.de, syzkall...@googlegroups.com, vba...@suse.cz
Thanks for the report.

This is due to an older version of the series being in -next, which incorrectly
allowed MREMAP_DONTUNMAP for the move operation.

Andrew - I guess you will merge the newer version to linux-next soon?

In any event, this report is therefore bogus.

Cheers, Lorenzo

Lorenzo Stoakes

Jul 16, 2025, 2:32:55 PM
to syzbot, Liam.H...@oracle.com, ak...@linux-foundation.org, ja...@google.com, linux-...@vger.kernel.org, linu...@kvack.org, pfal...@suse.de, syzkall...@googlegroups.com, vba...@suse.cz
Sorry, I'm operating on not much sleep here.

Disregard below, this is valid, we currently permit MREMAP_DONTUNMAP as
long as MREMAP_FIXED is specified.

Sigh.

The repro doesn't reproduce, of course, and there's no bisect. The dashboard
also references reports unrelated to this change.

So this is rather a painful one.

It'd be good to get some indication of reproducibility and how long things
took to reproduce.

Let me look into it.

Lorenzo Stoakes

Jul 16, 2025, 3:04:14 PM
to syzbot, Liam.H...@oracle.com, ak...@linux-foundation.org, ja...@google.com, linux-...@vger.kernel.org, linu...@kvack.org, pfal...@suse.de, syzkall...@googlegroups.com, vba...@suse.cz
OK, it looks very much like the removal in v2 of the resets on unmap was a mistake.

Working on a fix for this.


syzbot

Jul 16, 2025, 3:11:05 PM
to Liam.H...@oracle.com, ak...@linux-foundation.org, ja...@google.com, liam.h...@oracle.com, linux-...@vger.kernel.org, linu...@kvack.org, lorenzo...@oracle.com, pfal...@suse.de, syzkall...@googlegroups.com, vba...@suse.cz
syzbot has bisected this issue to:

commit ef69a41567549aa8ba7deb350ab1f3f55011591d
Author: Lorenzo Stoakes <lorenzo...@oracle.com>
Date: Fri Jul 11 11:38:23 2025 +0000

mm/mremap: permit mremap() move of multiple VMAs

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=170f458c580000
start commit: 0be23810e32e Add linux-next specific files for 20250714
git tree: linux-next
final oops: https://syzkaller.appspot.com/x/report.txt?x=148f458c580000
console output: https://syzkaller.appspot.com/x/log.txt?x=108f458c580000
Reported-by: syzbot+ebfd0e...@syzkaller.appspotmail.com
Fixes: ef69a4156754 ("mm/mremap: permit mremap() move of multiple VMAs")

For information about bisection process see: https://goo.gl/tpsmEJ#bisection

Lorenzo Stoakes

Jul 16, 2025, 3:38:56 PM
to syzbot, Liam.H...@oracle.com, ak...@linux-foundation.org, ja...@google.com, linux-...@vger.kernel.org, linu...@kvack.org, pfal...@suse.de, syzkall...@googlegroups.com, vba...@suse.cz
On Wed, Jul 16, 2025 at 08:04:03PM +0100, Lorenzo Stoakes wrote:
> OK looks very much like the removal in v2 of the resets on unmap were a mistake.
>
> Working on a fix for this.

Fix at https://lore.kernel.org/linux-mm/4fbf4271-6ab9-49c0...@lucifer.local/

This will get squashed into the commit so I didn't include the tags below as
they'd be eliminated anyway.

Note that I was able to make the reproducer more reliable by introducing an
rcu_barrier() after unmap, as suggested by Liam.

Cheers, Lorenzo

Hillf Danton

Jul 16, 2025, 9:46:38 PM
to syzbot, Liam.H...@oracle.com, ak...@linux-foundation.org, linux-...@vger.kernel.org, linu...@kvack.org, lorenzo...@oracle.com, syzkall...@googlegroups.com, vba...@suse.cz
> Date: Wed, 16 Jul 2025 10:55:35 -0700
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 0be23810e32e Add linux-next specific files for 20250714
> git tree: linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=11a9a7d4580000
> kernel config: https://syzkaller.appspot.com/x/.config?x=adc3ea2bfe31343b
> dashboard link: https://syzkaller.appspot.com/bug?extid=ebfd0e44b5c11034e1eb
Test Lorenzo's patch

#syz test

--- x/mm/mremap.c
+++ y/mm/mremap.c
@@ -1112,6 +1112,7 @@ static void unmap_source_vma(struct vma_

err = do_vmi_munmap(&vmi, mm, addr, len, vrm->uf_unmap, /* unlock= */false);
vrm->vma = NULL; /* Invalidated. */
+ vrm->vmi_needs_reset = true;
if (err) {
/* OOM: unable to split vma, just get accounts right */
vm_acct_memory(len >> PAGE_SHIFT);
@@ -1367,6 +1368,7 @@ static unsigned long mremap_to(struct vm
err = do_munmap(mm, vrm->new_addr, vrm->new_len,
vrm->uf_unmap_early);
vrm->vma = NULL; /* Invalidated. */
+ vrm->vmi_needs_reset = true;
if (err)
return err;

--

syzbot

Jul 16, 2025, 11:55:04 PM
to ak...@linux-foundation.org, hda...@sina.com, liam.h...@oracle.com, linux-...@vger.kernel.org, linu...@kvack.org, lorenzo...@oracle.com, syzkall...@googlegroups.com, vba...@suse.cz
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: task hung in exit_mm

INFO: task syz.0.16:6665 blocked for more than 143 seconds.
Not tainted 6.16.0-rc6-next-20250716-syzkaller-ge8352908bdcd-dirty #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.16 state:D stack:26920 pid:6665 tgid:6665 ppid:6577 task_flags:0x40044c flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5314 [inline]
__schedule+0x16fd/0x4cf0 kernel/sched/core.c:6697
__schedule_loop kernel/sched/core.c:6775 [inline]
schedule+0x165/0x360 kernel/sched/core.c:6790
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6847
rwsem_down_read_slowpath+0x5fd/0x8f0 kernel/locking/rwsem.c:1088
__down_read_common kernel/locking/rwsem.c:1263 [inline]
__down_read kernel/locking/rwsem.c:1276 [inline]
down_read+0x98/0x2e0 kernel/locking/rwsem.c:1541
mmap_read_lock include/linux/mmap_lock.h:423 [inline]
exit_mm+0xcc/0x2c0 kernel/exit.c:557
do_exit+0x648/0x2300 kernel/exit.c:947
do_group_exit+0x21c/0x2d0 kernel/exit.c:1100
get_signal+0x1286/0x1340 kernel/signal.c:3034
arch_do_signal_or_restart+0x9a/0x750 arch/x86/kernel/signal.c:337
exit_to_user_mode_loop+0x75/0x110 kernel/entry/common.c:40
exit_to_user_mode_prepare include/linux/irq-entry-common.h:208 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
do_syscall_64+0x2bd/0x3b0 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f524bb8e963
RSP: 002b:00007ffc99164708 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: fffffffffffffffc RBX: 00007f524b5ff6c0 RCX: 00007f524bb8e963
RDX: 0000000000000000 RSI: 0000000000021000 RDI: 0000000000000000
RBP: 0000000000000000 R08: 00000000ffffffff R09: 0000000000000000
R10: 0000000000020022 R11: 0000000000000246 R12: 00007ffc99164860
R13: ffffffffffffffc0 R14: 0000000000001000 R15: 0000000000000000
</TASK>
INFO: task syz.1.17:6807 blocked for more than 144 seconds.
Not tainted 6.16.0-rc6-next-20250716-syzkaller-ge8352908bdcd-dirty #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.17 state:D stack:26920 pid:6807 tgid:6807 ppid:6787 task_flags:0x40044c flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5314 [inline]
__schedule+0x16fd/0x4cf0 kernel/sched/core.c:6697
__schedule_loop kernel/sched/core.c:6775 [inline]
schedule+0x165/0x360 kernel/sched/core.c:6790
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6847
rwsem_down_read_slowpath+0x5fd/0x8f0 kernel/locking/rwsem.c:1088
__down_read_common kernel/locking/rwsem.c:1263 [inline]
__down_read kernel/locking/rwsem.c:1276 [inline]
down_read+0x98/0x2e0 kernel/locking/rwsem.c:1541
mmap_read_lock include/linux/mmap_lock.h:423 [inline]
exit_mm+0xcc/0x2c0 kernel/exit.c:557
do_exit+0x648/0x2300 kernel/exit.c:947
do_group_exit+0x21c/0x2d0 kernel/exit.c:1100
get_signal+0x1286/0x1340 kernel/signal.c:3034
arch_do_signal_or_restart+0x9a/0x750 arch/x86/kernel/signal.c:337
exit_to_user_mode_loop+0x75/0x110 kernel/entry/common.c:40
exit_to_user_mode_prepare include/linux/irq-entry-common.h:208 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
do_syscall_64+0x2bd/0x3b0 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7efc6b58e963
RSP: 002b:00007ffe5b639e88 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: fffffffffffffffc RBX: 00007efc6afff6c0 RCX: 00007efc6b58e963
RDX: 0000000000000000 RSI: 0000000000021000 RDI: 0000000000000000
RBP: 0000000000000000 R08: 00000000ffffffff R09: 0000000000000000
R10: 0000000000020022 R11: 0000000000000246 R12: 00007ffe5b639fe0
R13: ffffffffffffffc0 R14: 0000000000001000 R15: 0000000000000000
</TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
#0: ffffffff8e13e2e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#0: ffffffff8e13e2e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
#0: ffffffff8e13e2e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6770
3 locks held by kworker/0:3/981:
3 locks held by kworker/u8:9/3028:
#0: ffff8880b8739f98 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:606
#1: ffff8880b8724008 (per_cpu_ptr(&psi_seq, cpu)){-.-.}-{0:0}, at: psi_task_switch+0x53/0x880 kernel/sched/psi.c:937
#2: ffff8880b8725918 (&base->lock){-.-.}-{2:2}, at: lock_timer_base kernel/time/timer.c:1004 [inline]
#2: ffff8880b8725918 (&base->lock){-.-.}-{2:2}, at: __mod_timer+0x1ae/0xf30 kernel/time/timer.c:1085
2 locks held by getty/5607:
#0: ffff88814df960a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000332e2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
1 lock held by syz.0.16/6665:
#0: ffff8880242d4260 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:423 [inline]
#0: ffff8880242d4260 (&mm->mmap_lock){++++}-{4:4}, at: exit_mm+0xcc/0x2c0 kernel/exit.c:557
1 lock held by syz.0.16/6666:
1 lock held by syz.1.17/6807:
#0: ffff88807b8c57e0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:423 [inline]
#0: ffff88807b8c57e0 (&mm->mmap_lock){++++}-{4:4}, at: exit_mm+0xcc/0x2c0 kernel/exit.c:557
1 lock held by syz.1.17/6808:
1 lock held by syz.2.18/6831:
#0: ffff88807e36c260 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:423 [inline]
#0: ffff88807e36c260 (&mm->mmap_lock){++++}-{4:4}, at: exit_mm+0xcc/0x2c0 kernel/exit.c:557
1 lock held by syz.2.18/6832:
1 lock held by syz.3.19/6858:
#0: ffff88807b8c2ce0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:423 [inline]
#0: ffff88807b8c2ce0 (&mm->mmap_lock){++++}-{4:4}, at: exit_mm+0xcc/0x2c0 kernel/exit.c:557
3 locks held by syz.3.19/6859:
1 lock held by syz.4.20/6888:
#0: ffff88801a476d60 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:423 [inline]
#0: ffff88801a476d60 (&mm->mmap_lock){++++}-{4:4}, at: exit_mm+0xcc/0x2c0 kernel/exit.c:557
1 lock held by syz.4.20/6889:
1 lock held by syz.5.21/6925:
#0: ffff88801a472220 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:423 [inline]
#0: ffff88801a472220 (&mm->mmap_lock){++++}-{4:4}, at: exit_mm+0xcc/0x2c0 kernel/exit.c:557
1 lock held by syz.5.21/6926:
1 lock held by syz.6.22/6955:
#0: ffff88807f93b7a0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:423 [inline]
#0: ffff88807f93b7a0 (&mm->mmap_lock){++++}-{4:4}, at: exit_mm+0xcc/0x2c0 kernel/exit.c:557
1 lock held by syz.6.22/6956:
1 lock held by syz.7.24/6990:
#0: ffff88807c9ec260 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:423 [inline]
#0: ffff88807c9ec260 (&mm->mmap_lock){++++}-{4:4}, at: exit_mm+0xcc/0x2c0 kernel/exit.c:557
1 lock held by syz.7.24/6992:
2 locks held by dhcpcd/6995:
#0: ffff88805e42b808 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
#0: ffff88805e42b808 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: __sock_release net/socket.c:648 [inline]
#0: ffff88805e42b808 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: sock_close+0x9b/0x240 net/socket.c:1439
#1: ffffffff8e143e38 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:311 [inline]
#1: ffffffff8e143e38 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x2f6/0x730 kernel/rcu/tree_exp.h:967
1 lock held by dhcpcd/6996:
#0: ffff88805e42ca08 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
#0: ffff88805e42ca08 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: __sock_release net/socket.c:648 [inline]
#0: ffff88805e42ca08 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: sock_close+0x9b/0x240 net/socket.c:1439
1 lock held by dhcpcd/6997:
#0: ffff888078933208 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
#0: ffff888078933208 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: __sock_release net/socket.c:648 [inline]
#0: ffff888078933208 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: sock_close+0x9b/0x240 net/socket.c:1439
2 locks held by dhcpcd/6998:
#0: ffff888078930208 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
#0: ffff888078930208 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: __sock_release net/socket.c:648 [inline]
#0: ffff888078930208 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: sock_close+0x9b/0x240 net/socket.c:1439
#1: ffffffff8e143e38 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:343 [inline]
#1: ffffffff8e143e38 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:967

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc6-next-20250716-syzkaller-ge8352908bdcd-dirty #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:328 [inline]
watchdog+0xfee/0x1030 kernel/hung_task.c:491
kthread+0x70e/0x8a0 kernel/kthread.c:463
ret_from_fork+0x3f9/0x770 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 12 Comm: kworker/u8:0 Not tainted 6.16.0-rc6-next-20250716-syzkaller-ge8352908bdcd-dirty #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:__this_cpu_preempt_check+0xe/0x20 lib/smp_processor_id.c:64
Code: 66 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 48 89 fe 48 c7 c7 00 65 e3 8b <e9> bd fe ff ff cc cc cc cc cc cc cc cc cc cc cc cc cc 90 90 90 90
RSP: 0018:ffffc90000a08bc8 EFLAGS: 00000002
RAX: 0000000000000001 RBX: ffffffff822479bd RCX: da4b2af8b834fd00
RDX: ffff888029254d90 RSI: ffffffff8d994444 RDI: ffffffff8be36500
RBP: ffffc90000a08ed0 R08: 00000000c506ef33 R09: 00000000624b5ae2
R10: 000000000000000e R11: ffffffff81ac3010 R12: 0000000000000000
R13: ffffffff81a7e844 R14: ffff88801cecda00 R15: 0000000000000286
FS: 0000000000000000(0000) GS:ffff888125ce2000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055a7ec2b0660 CR3: 000000005fa88000 CR4: 00000000003526f0
Call Trace:
<IRQ>
lockdep_hardirqs_off+0x74/0x110 kernel/locking/lockdep.c:4514
trace_hardirqs_off+0x12/0x40 kernel/trace/trace_preemptirq.c:104
kasan_quarantine_put+0x3d/0x220 mm/kasan/quarantine.c:207
kasan_slab_free include/linux/kasan.h:233 [inline]
slab_free_hook mm/slub.c:2417 [inline]
slab_free_after_rcu_debug+0x129/0x2a0 mm/slub.c:4730
rcu_do_batch kernel/rcu/tree.c:2584 [inline]
rcu_core+0xca5/0x1710 kernel/rcu/tree.c:2840
handle_softirqs+0x286/0x870 kernel/softirq.c:579
do_softirq+0xec/0x180 kernel/softirq.c:480
</IRQ>
<TASK>
__local_bh_enable_ip+0x17d/0x1c0 kernel/softirq.c:407
spin_unlock_bh include/linux/spinlock.h:396 [inline]
nsim_dev_trap_report drivers/net/netdevsim/dev.c:833 [inline]
nsim_dev_trap_report_work+0x7c7/0xb80 drivers/net/netdevsim/dev.c:864
process_one_work kernel/workqueue.c:3239 [inline]
process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3322
worker_thread+0x8a0/0xda0 kernel/workqueue.c:3403
kthread+0x70e/0x8a0 kernel/kthread.c:463
ret_from_fork+0x3f9/0x770 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>


Tested on:

commit: e8352908 Add linux-next specific files for 20250716
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=1523c58c580000
kernel config: https://syzkaller.appspot.com/x/.config?x=2594af20939db736
dashboard link: https://syzkaller.appspot.com/bug?extid=ebfd0e44b5c11034e1eb
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
patch: https://syzkaller.appspot.com/x/patch.diff?x=10776382580000

Lorenzo Stoakes

Jul 17, 2025, 12:18:27 AM
to syzbot, ak...@linux-foundation.org, hda...@sina.com, liam.h...@oracle.com, linux-...@vger.kernel.org, linu...@kvack.org, syzkall...@googlegroups.com, vba...@suse.cz
This looks to be unrelated to my patch, and rather to be some issue on the
syzbot side (it's doing some odd injection there).

As I said, I have tested the change with the reproducer locally and it fixes
the issue; I have been able to observe that reliably (and without any of the
below happening).

Thanks

Lorenzo Stoakes

Jul 17, 2025, 12:06:43 PM
to syzbot, ak...@linux-foundation.org, hda...@sina.com, liam.h...@oracle.com, linux-...@vger.kernel.org, linu...@kvack.org, syzkall...@googlegroups.com, vba...@suse.cz
OK, on second thought, there is one additional thing we need to do on each
loop iteration to avoid observing the same VMA: either the prior logic of
checking directly, or a vma_next().

So this may be a consequence of that.

I will respin the series to make life easier...
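Illustratively, the loop shape being described is something like the
following non-compilable kernel-style sketch. Apart from vma_find()/vma_next(),
the names here are hypothetical; the actual fix is whatever lands in the
respun series:

```c
/* Sketch only: after moving a VMA (which may invalidate or rewind the
 * VMA iterator), the loop must explicitly advance past the VMA it just
 * handled -- e.g. via vma_next() -- or it can observe the same VMA
 * again. move_one_vma() is a hypothetical placeholder. */
for (vma = vma_find(&vmi, end); vma; ) {
	err = move_one_vma(vrm, vma);	/* may reset the iterator */
	if (err)
		break;
	vma = vma_next(&vmi);		/* skip the VMA just moved */
}
```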

Hillf Danton

Jul 17, 2025, 7:42:26 PM
to Lorenzo Stoakes, syzbot, ak...@linux-foundation.org, liam.h...@oracle.com, linux-...@vger.kernel.org, linu...@kvack.org, syzkall...@googlegroups.com, vba...@suse.cz
On Thu, 17 Jul 2025 17:06:34 +0100 Lorenzo Stoakes <lorenzo...@oracle.com> wrote:

Top-posting is not encouraged, lad.

> OK on second thoughts, there is one additional thing we need to do on each
> loop to avoid observing the same VMA, either the prior logic of checking
> directly or a vma_next().
>
> So this may be a consequence of that.
>
> I will respin the series to make life easier...
>
Better to do so after syzbot gives you a Tested-by.

Lorenzo Stoakes

Jul 18, 2025, 7:08:52 AM
to Hillf Danton, syzbot, ak...@linux-foundation.org, liam.h...@oracle.com, linux-...@vger.kernel.org, linu...@kvack.org, syzkall...@googlegroups.com, vba...@suse.cz
Go away Hillf.

Hillf Danton

Jul 18, 2025, 8:57:11 AM
to Lorenzo Stoakes, syzbot, ak...@linux-foundation.org, liam.h...@oracle.com, linux-...@vger.kernel.org, linu...@kvack.org, syzkall...@googlegroups.com, vba...@suse.cz
On Fri, 18 Jul 2025 12:08:44 +0100 Lorenzo Stoakes wrote:
>
> Go away Hillf.
>
Are you paid much more than thought, lad, to do so?

syzbot

Oct 24, 2025, 2:43:17 AM
to syzkall...@googlegroups.com
Auto-closing this bug as obsolete.
No recent activity; existing reproducers are no longer triggering the issue.