On Fri, Nov 28, 2025 at 7:55 PM Andrew Morton <ak...@linux-foundation.org> wrote:
>
>
> The patch titled
> Subject: mm/kasan: fix incorrect unpoisoning in vrealloc for KASAN
> has been added to the -mm mm-hotfixes-unstable branch. Its filename is
> mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan.patch
>
> This patch will shortly appear at
> https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan.patch
>
> This patch will later appear in the mm-hotfixes-unstable branch at
> git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
>
> Before you just go and hit "reply", please:
> a) Consider who else should be cc'ed
> b) Prefer to cc a suitable mailing list as well
> c) Ideally: find the original patch on the mailing list and do a
> reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
>
> The -mm tree is included into linux-next via the mm-everything
> branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
> and is updated there every 2-3 working days
>
> ------------------------------------------------------
> From: Jiayuan Chen <jiayua...@linux.dev>
> Subject: mm/kasan: fix incorrect unpoisoning in vrealloc for KASAN
> Date: Fri, 28 Nov 2025 19:15:14 +0800
Hi Jiayuan,

Please CC kasa...@googlegroups.com when sending KASAN patches.
>
> Syzkaller reported a memory out-of-bounds bug [1]. This patch fixes two
> issues:
>
> 1. In vrealloc, we were missing the KASAN_VMALLOC_VM_ALLOC flag when
> unpoisoning the extended region. This flag is required to correctly
> associate the allocation with KASAN's vmalloc tracking.
>
> Note: In contrast, vzalloc (via __vmalloc_node_range_noprof) explicitly
> sets KASAN_VMALLOC_VM_ALLOC and calls kasan_unpoison_vmalloc() with it.
> vrealloc must behave consistently — especially when reusing existing
> vmalloc regions — to ensure KASAN can track allocations correctly.
>
> 2. When vrealloc reuses an existing vmalloc region (without allocating new
> pages), KASAN previously generated a new tag, which broke tag-based
> memory access tracking. We now add a 'reuse_tag' parameter to
> __kasan_unpoison_vmalloc() to preserve the original tag in such cases.
I think we actually could assign a new tag to detect accesses through
the old pointer; we would just need to retag the whole region with the
new tag. But that is a separate matter; I filed
https://bugzilla.kernel.org/show_bug.cgi?id=220829 for it.
>
> A new helper kasan_unpoison_vrealloc() is introduced to handle this reuse
> scenario, ensuring consistent tag behavior during reallocation.
>
>
> Link: https://lkml.kernel.org/r/20251128111516.244...@linux.dev
> Link: https://syzkaller.appspot.com/bug?extid=997752115a851cb0cf36 [1]
> Fixes: a0309faf1cb0 ("mm: vmalloc: support more granular vrealloc() sizing")
> Signed-off-by: Jiayuan Chen <jiayua...@linux.dev>
> Reported-by: syzbot+997752...@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/68e243a2.050a022...@google.com/T/
> Cc: Alexander Potapenko <gli...@google.com>
> Cc: Andrey Konovalov <andre...@gmail.com>
> Cc: Andrey Ryabinin <ryabin...@gmail.com>
> Cc: Danilo Krummrich <da...@kernel.org>
> Cc: Dmitriy Vyukov <dvy...@google.com>
> Cc: Kees Cook <ke...@kernel.org>
> Cc: "Uladzislau Rezki (Sony)" <ure...@gmail.com>
> Cc: Vincenzo Frascino <vincenzo...@arm.com>
> Cc: <sta...@vger.kernel.org>
> Signed-off-by: Andrew Morton <ak...@linux-foundation.org>
> ---
>
> include/linux/kasan.h | 21 +++++++++++++++++++--
> mm/kasan/hw_tags.c | 4 ++--
> mm/kasan/shadow.c | 6 ++++--
> mm/vmalloc.c | 4 ++--
> 4 files changed, 27 insertions(+), 8 deletions(-)
>
> --- a/include/linux/kasan.h~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
> +++ a/include/linux/kasan.h
> @@ -596,13 +596,23 @@ static inline void kasan_release_vmalloc
> #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
>
> void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
> - kasan_vmalloc_flags_t flags);
> + kasan_vmalloc_flags_t flags, bool reuse_tag);
> +
> +static __always_inline void *kasan_unpoison_vrealloc(const void *start,
> + unsigned long size,
> + kasan_vmalloc_flags_t flags)
> +{
> + if (kasan_enabled())
> + return __kasan_unpoison_vmalloc(start, size, flags, true);
> + return (void *)start;
> +}
> +
> static __always_inline void *kasan_unpoison_vmalloc(const void *start,
> unsigned long size,
> kasan_vmalloc_flags_t flags)
> {
> if (kasan_enabled())
> - return __kasan_unpoison_vmalloc(start, size, flags);
> + return __kasan_unpoison_vmalloc(start, size, flags, false);
> return (void *)start;
> }
>
> @@ -629,6 +639,13 @@ static inline void kasan_release_vmalloc
> unsigned long free_region_end,
> unsigned long flags) { }
>
> +static inline void *kasan_unpoison_vrealloc(const void *start,
> + unsigned long size,
> + kasan_vmalloc_flags_t flags)
> +{
> + return (void *)start;
> +}
> +
> static inline void *kasan_unpoison_vmalloc(const void *start,
> unsigned long size,
> kasan_vmalloc_flags_t flags)
> --- a/mm/kasan/hw_tags.c~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
> +++ a/mm/kasan/hw_tags.c
> @@ -317,7 +317,7 @@ static void init_vmalloc_pages(const voi
> }
>
> void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
> - kasan_vmalloc_flags_t flags)
> + kasan_vmalloc_flags_t flags, bool reuse_tag)
> {
> u8 tag;
> unsigned long redzone_start, redzone_size;
> @@ -361,7 +361,7 @@ void *__kasan_unpoison_vmalloc(const voi
> return (void *)start;
> }
>
> - tag = kasan_random_tag();
> + tag = reuse_tag ? get_tag(start) : kasan_random_tag();
> start = set_tag(start, tag);
>
> /* Unpoison and initialize memory up to size. */
> --- a/mm/kasan/shadow.c~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
> +++ a/mm/kasan/shadow.c
> @@ -625,7 +625,7 @@ void kasan_release_vmalloc(unsigned long
> }
>
> void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
> - kasan_vmalloc_flags_t flags)
> + kasan_vmalloc_flags_t flags, bool reuse_tag)
Since we already have kasan_vmalloc_flags_t, I think it makes sense to
add reuse_tag as another flag.
> {
> /*
> * Software KASAN modes unpoison both VM_ALLOC and non-VM_ALLOC
> @@ -648,7 +648,9 @@ void *__kasan_unpoison_vmalloc(const voi
> !(flags & KASAN_VMALLOC_PROT_NORMAL))
> return (void *)start;
>
> - start = set_tag(start, kasan_random_tag());
> + if (!reuse_tag)
> + start = set_tag(start, kasan_random_tag());
The HW_TAGS mode presumably needs this fix as well. Please build it (the
build should be failing with your patch as-is), boot it, and run the
KASAN tests; then do the same for the other modes.
It would also be good to have tests for vrealloc; I filed
https://bugzilla.kernel.org/show_bug.cgi?id=220830 for that.
> +
> kasan_unpoison(start, size, false);
> return (void *)start;
> }
> --- a/mm/vmalloc.c~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
> +++ a/mm/vmalloc.c
> @@ -4175,8 +4175,8 @@ void *vrealloc_node_align_noprof(const v
> * We already have the bytes available in the allocation; use them.
> */
> if (size <= alloced_size) {
> - kasan_unpoison_vmalloc(p + old_size, size - old_size,
> - KASAN_VMALLOC_PROT_NORMAL);
> + kasan_unpoison_vrealloc(p, size,
> + KASAN_VMALLOC_PROT_NORMAL | KASAN_VMALLOC_VM_ALLOC);
Orthogonal to this series, but is it allowed to call vrealloc on
executable mappings? If so, we should set KASAN_VMALLOC_PROT_NORMAL
only for non-executable mappings. kasan_poison_vmalloc() should also
not be called for them (so we likely need to pass a protection flag to
it rather than exposing this logic to callers).
Kees, I see you worked on vrealloc annotations; do you happen to know?
> /*
> * No need to zero memory here, as unused memory will have
> * already been zeroed at initial allocation time or during
> _
>
> Patches currently in -mm which might be from jiayua...@linux.dev are
>
> mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan.patch
> mm-vmscan-skip-increasing-kswapd_failures-when-reclaim-was-boosted.patch
>