[PATCH 00/31] kasan, vmalloc, arm64: add vmalloc tagging support for SW/HW_TAGS


andrey.k...@linux.dev

Nov 30, 2021, 4:40:39 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Hi,

This patchset adds vmalloc tagging support for SW_TAGS and HW_TAGS
KASAN modes.

About half of the patches are cleanups I went for along the way. None of
them seem to be important enough to go through stable, so I decided
not to split them out into separate patches/series.

I'll keep the patchset based on the mainline for now. Once the
high-level issues are resolved, I'll rebase onto mm - there might be
a few conflicts right now.

The patchset is partially based on an early version of the HW_TAGS
patchset by Vincenzo that had vmalloc support. Thus, I added a
Co-developed-by tag to a few patches.

SW_TAGS vmalloc tagging support is straightforward. It reuses all of
the generic KASAN machinery, but uses shadow memory to store tags
instead of magic values. Naturally, vmalloc tagging requires adding
a few kasan_reset_tag() annotations to the vmalloc code.

HW_TAGS vmalloc tagging support stands out. HW_TAGS KASAN is based on
Arm MTE, which can only assign tags to physical memory. As a result,
HW_TAGS KASAN only tags vmalloc() allocations, which are backed by
page_alloc memory. It ignores vmap() and other mappings that are not
backed by page_alloc.
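
To illustrate (a hedged sketch, not part of the series): with vmalloc
tagging enabled, a tag-based mode returns a tagged pointer from vmalloc()
and tracks the tag in shadow memory (SW_TAGS) or in MTE memory tags
(HW_TAGS):

	char *p = vmalloc(PAGE_SIZE);	/* pointer carries a KASAN tag in its top byte */

	p[0] = 'x';			/* in-bounds access via the tagged pointer: OK */
	vfree(p);
	/*
	 * Under SW_TAGS, the freed region's shadow is marked
	 * KASAN_VMALLOC_INVALID, so a later access through a dangling
	 * pointer to this mapping is expected to be reported.
	 */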

There are two things about the patchset that might be questionable, and
I'd like to get input on them:

1. In this version of the patchset, if both HW_TAGS KASAN and memory
initialization are enabled, the memory for vmalloc() allocations is
initialized by page_alloc, while the tags are assigned in vmalloc.
Initially, I thought that moving memory initialization into vmalloc
would be confusing, but I don't have any good arguments to support
that. So unless anyone has objections, I will move memory
initialization for HW_TAGS KASAN into vmalloc in v2.

2. In this version of the patchset, when VMAP_STACK is enabled, pointer
tags of stacks allocated via vmalloc() are reset, see the "kasan,
fork: don't tag stacks allocated with vmalloc" patch. However,
allowing sp to be tagged works just fine in my testing setup. Does
anyone have an idea of why having a tagged sp in the kernel could be
bad? If not, I can drop the mentioned patch.

Thanks!

Andrey Konovalov (31):
kasan, page_alloc: deduplicate should_skip_kasan_poison
kasan, page_alloc: move tag_clear_highpage out of
kernel_init_free_pages
kasan, page_alloc: merge kasan_free_pages into free_pages_prepare
kasan, page_alloc: simplify kasan_poison_pages call site
kasan, page_alloc: init memory of skipped pages on free
mm: clarify __GFP_ZEROTAGS comment
kasan: only apply __GFP_ZEROTAGS when memory is zeroed
kasan, page_alloc: refactor init checks in post_alloc_hook
kasan, page_alloc: merge kasan_alloc_pages into post_alloc_hook
kasan, page_alloc: combine tag_clear_highpage calls in post_alloc_hook
kasan, page_alloc: move SetPageSkipKASanPoison in post_alloc_hook
kasan, page_alloc: move kernel_init_free_pages in post_alloc_hook
kasan, page_alloc: simplify kasan_unpoison_pages call site
kasan: clean up metadata byte definitions
kasan: define KASAN_VMALLOC_INVALID for SW_TAGS
kasan, x86, arm64, s390: rename functions for modules shadow
kasan, vmalloc: drop outdated VM_KASAN comment
kasan: reorder vmalloc hooks
kasan: add wrappers for vmalloc hooks
kasan, vmalloc: reset tags in vmalloc functions
kasan, fork: don't tag stacks allocated with vmalloc
kasan, vmalloc: add vmalloc support to SW_TAGS
kasan, arm64: allow KASAN_VMALLOC with SW_TAGS
kasan, vmalloc, arm64: mark vmalloc mappings as pgprot_tagged
kasan, vmalloc: don't unpoison VM_ALLOC pages before mapping
kasan, page_alloc: allow skipping unpoisoning for HW_TAGS
kasan, vmalloc: add vmalloc support to HW_TAGS
kasan: add kasan.vmalloc command line flag
kasan, arm64: allow KASAN_VMALLOC with HW_TAGS
kasan: documentation updates
kasan: improve vmalloc tests

Documentation/dev-tools/kasan.rst | 17 ++-
arch/arm64/Kconfig | 2 +-
arch/arm64/include/asm/vmalloc.h | 10 ++
arch/arm64/kernel/module.c | 2 +-
arch/s390/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/gfp.h | 17 ++-
include/linux/kasan.h | 90 +++++++++------
include/linux/vmalloc.h | 18 ++-
kernel/fork.c | 1 +
lib/Kconfig.kasan | 20 ++--
lib/test_kasan.c | 181 +++++++++++++++++++++++++++++-
mm/kasan/common.c | 4 +-
mm/kasan/hw_tags.c | 142 +++++++++++++++++++----
mm/kasan/kasan.h | 16 ++-
mm/kasan/shadow.c | 54 +++++----
mm/page_alloc.c | 138 +++++++++++++++--------
mm/vmalloc.c | 65 +++++++++--
18 files changed, 597 insertions(+), 184 deletions(-)

--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 4:40:40 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Currently, should_skip_kasan_poison() has two definitions: one for when
CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, one for when it's not.
Instead of duplicating the checks, add a deferred_pages_enabled()
helper and use it in a single should_skip_kasan_poison() definition.

Also move should_skip_kasan_poison() closer to its caller and clarify
all conditions in the comment.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 55 +++++++++++++++++++++++++++++--------------------
1 file changed, 33 insertions(+), 22 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40..c99566a3b67e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -375,25 +375,9 @@ int page_group_by_mobility_disabled __read_mostly;
*/
static DEFINE_STATIC_KEY_TRUE(deferred_pages);

-/*
- * Calling kasan_poison_pages() only after deferred memory initialization
- * has completed. Poisoning pages during deferred memory init will greatly
- * lengthen the process and cause problem in large memory systems as the
- * deferred pages initialization is done with interrupt disabled.
- *
- * Assuming that there will be no reference to those newly initialized
- * pages before they are ever allocated, this should have no effect on
- * KASAN memory tracking as the poison will be properly inserted at page
- * allocation time. The only corner case is when pages are allocated by
- * on-demand allocation and then freed again before the deferred pages
- * initialization is done, but this is not likely to happen.
- */
-static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
+static inline bool deferred_pages_enabled(void)
{
- return static_branch_unlikely(&deferred_pages) ||
- (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
- PageSkipKASanPoison(page);
+ return static_branch_unlikely(&deferred_pages);
}

/* Returns true if the struct page for the pfn is uninitialised */
@@ -444,11 +428,9 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
return false;
}
#else
-static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
+static inline bool deferred_pages_enabled(void)
{
- return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
- PageSkipKASanPoison(page);
+ return false;
}

static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1258,6 +1240,35 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
return ret;
}

+/*
+ * Skip KASAN memory poisoning when either:
+ *
+ * 1. Deferred memory initialization has not yet completed,
+ * see the explanation below.
+ * 2. Skipping poisoning is requested via FPI_SKIP_KASAN_POISON,
+ * see the comment next to it.
+ * 3. Skipping poisoning is requested via __GFP_SKIP_KASAN_POISON,
+ * see the comment next to it.
+ *
+ * Poisoning pages during deferred memory init will greatly lengthen the
+ * process and cause problem in large memory systems as the deferred pages
+ * initialization is done with interrupt disabled.
+ *
+ * Assuming that there will be no reference to those newly initialized
+ * pages before they are ever allocated, this should have no effect on
+ * KASAN memory tracking as the poison will be properly inserted at page
+ * allocation time. The only corner case is when pages are allocated by
+ * on-demand allocation and then freed again before the deferred pages
+ * initialization is done, but this is not likely to happen.
+ */
+static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
+{
+ return deferred_pages_enabled() ||
+ (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
+ PageSkipKASanPoison(page);
+}
+
static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
{
int i;
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 4:40:43 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Currently, kernel_init_free_pages() serves two purposes: it either only
zeroes memory or zeroes both memory and memory tags via a different
code path. As this function has only two callers, each using only one
of the code paths, this behaviour is confusing.

This patch pulls the code that zeroes both memory and tags out of
kernel_init_free_pages().

As a result of this change, the code in free_pages_prepare() starts to
look complicated, but this is improved in the following patches.
Those improvements are not integrated into this patch to make diffs
easier to read.

This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c99566a3b67e..3589333b5b77 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1269,16 +1269,10 @@ static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
PageSkipKASanPoison(page);
}

-static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
+static void kernel_init_free_pages(struct page *page, int numpages)
{
int i;

- if (zero_tags) {
- for (i = 0; i < numpages; i++)
- tag_clear_highpage(page + i);
- return;
- }
-
/* s390's use of memset() could override KASAN redzones. */
kasan_disable_current();
for (i = 0; i < numpages; i++) {
@@ -1372,7 +1366,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
bool init = want_init_on_free();

if (init)
- kernel_init_free_pages(page, 1 << order, false);
+ kernel_init_free_pages(page, 1 << order);
if (!skip_kasan_poison)
kasan_poison_pages(page, order, init);
}
@@ -2415,9 +2409,17 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);

kasan_unpoison_pages(page, order, init);
- if (init)
- kernel_init_free_pages(page, 1 << order,
- gfp_flags & __GFP_ZEROTAGS);
+
+ if (init) {
+ if (gfp_flags & __GFP_ZEROTAGS) {
+ int i;
+
+ for (i = 0; i < 1 << order; i++)
+ tag_clear_highpage(page + i);
+ } else {
+ kernel_init_free_pages(page, 1 << order);
+ }
+ }
}

set_page_owner(page, order, gfp_flags);
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 4:40:43 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Currently, the code responsible for initializing and poisoning memory
in free_pages_prepare() is scattered across two locations:
kasan_free_pages() for HW_TAGS KASAN and free_pages_prepare() itself.
This is confusing.

This and a few following patches combine the code from these two
locations. Along the way, these patches also simplify the performed
checks to make them easier to follow.

This patch replaces the only call to kasan_free_pages() with its
implementation.

As kasan_has_integrated_init() is only true when CONFIG_KASAN_HW_TAGS
is enabled, moving the code causes no functional changes.

This patch is not useful by itself but makes the simplifications in
the following patches easier to follow.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/kasan.h | 8 --------
mm/kasan/common.c | 2 +-
mm/kasan/hw_tags.c | 11 -----------
mm/page_alloc.c | 6 ++++--
4 files changed, 5 insertions(+), 22 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d8783b682669..89a43d8ae4fe 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -95,7 +95,6 @@ static inline bool kasan_hw_tags_enabled(void)
}

void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
-void kasan_free_pages(struct page *page, unsigned int order);

#else /* CONFIG_KASAN_HW_TAGS */

@@ -116,13 +115,6 @@ static __always_inline void kasan_alloc_pages(struct page *page,
BUILD_BUG();
}

-static __always_inline void kasan_free_pages(struct page *page,
- unsigned int order)
-{
- /* Only available for integrated init. */
- BUILD_BUG();
-}
-
#endif /* CONFIG_KASAN_HW_TAGS */

static inline bool kasan_has_integrated_init(void)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8428da2aaf17..66078cc1b4f0 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -387,7 +387,7 @@ static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
}

/*
- * The object will be poisoned by kasan_free_pages() or
+ * The object will be poisoned by kasan_poison_pages() or
* kasan_slab_free_mempool().
*/

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 7355cb534e4f..0b8225add2e4 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -213,17 +213,6 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
}
}

-void kasan_free_pages(struct page *page, unsigned int order)
-{
- /*
- * This condition should match the one in free_pages_prepare() in
- * page_alloc.c.
- */
- bool init = want_init_on_free();
-
- kasan_poison_pages(page, order, init);
-}
-
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_enable_tagging_sync(void)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3589333b5b77..3f3ea41f8c64 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1353,15 +1353,17 @@ static __always_inline bool free_pages_prepare(struct page *page,

/*
* As memory initialization might be integrated into KASAN,
- * kasan_free_pages and kernel_init_free_pages must be
+ * KASAN poisoning and memory initialization code must be
* kept together to avoid discrepancies in behavior.
*
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
if (kasan_has_integrated_init()) {
+ bool init = want_init_on_free();
+
if (!skip_kasan_poison)
- kasan_free_pages(page, order);
+ kasan_poison_pages(page, order, init);
} else {
bool init = want_init_on_free();

--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 4:40:44 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Since commit 7a3b83537188 ("kasan: use separate (un)poison implementation
for integrated init"), when init, kasan_has_integrated_init(), and
skip_kasan_poison are all true, free_pages_prepare() doesn't initialize
the page. This is wrong.

Fix it by remembering whether kasan_poison_pages() performed
initialization, and calling kernel_init_free_pages() if it didn't.

Fixes: 7a3b83537188 ("kasan: use separate (un)poison implementation for integrated init")
Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0673db27dd12..2ada09a58e4b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1360,9 +1360,14 @@ static __always_inline bool free_pages_prepare(struct page *page,
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
- if (!skip_kasan_poison)
+ if (!skip_kasan_poison) {
kasan_poison_pages(page, order, init);
- if (init && !kasan_has_integrated_init())
+
+ /* Memory is already initialized if KASAN did it internally. */
+ if (kasan_has_integrated_init())
+ init = false;
+ }
+ if (init)
kernel_init_free_pages(page, 1 << order);

/*
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 4:40:44 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Simplify the code around calling kasan_poison_pages() in
free_pages_prepare().

Reordering kasan_poison_pages() and kernel_init_free_pages() is OK,
since kernel_init_free_pages() can handle poisoned memory.

This patch makes no functional changes besides reordering the calls.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 18 +++++-------------
1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f3ea41f8c64..0673db27dd12 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1289,6 +1289,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
{
int bad = 0;
bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
+ bool init = want_init_on_free();

VM_BUG_ON_PAGE(PageTail(page), page);

@@ -1359,19 +1360,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
- if (kasan_has_integrated_init()) {
- bool init = want_init_on_free();
-
- if (!skip_kasan_poison)
- kasan_poison_pages(page, order, init);
- } else {
- bool init = want_init_on_free();
-
- if (init)
- kernel_init_free_pages(page, 1 << order);
- if (!skip_kasan_poison)
- kasan_poison_pages(page, order, init);
- }
+ if (!skip_kasan_poison)
+ kasan_poison_pages(page, order, init);
+ if (init && !kasan_has_integrated_init())
+ kernel_init_free_pages(page, 1 << order);

/*
* arch_free_page() can make the page's contents inaccessible. s390
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 4:41:38 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

__GFP_ZEROTAGS is intended as an optimization: if memory is zeroed during
allocation, it's possible to set memory tags at the same time with little
performance impact.

Clarify this intention of __GFP_ZEROTAGS in the comment.
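
As a hedged illustration (not part of this patch; the flag combination is
for example purposes only), a caller that already zeroes the page could
ask for the tags to be zeroed in the same pass:

	/* Zero both the memory and its MTE tags in a single pass. */
	struct page *page = alloc_pages(GFP_HIGHUSER | __GFP_ZERO | __GFP_ZEROTAGS, 0);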

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/gfp.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index b976c4177299..dddd7597689f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -232,8 +232,8 @@ struct vm_area_struct;
*
* %__GFP_ZERO returns a zeroed page on success.
*
- * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if
- * __GFP_ZERO is set.
+ * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
+ * is being zeroed (either via __GFP_ZERO or via init_on_alloc).
*
* %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
* on deallocation. Typically used for userspace pages. Currently only has an
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 4:41:48 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

__GFP_ZEROTAGS should only be effective if memory is being zeroed.
Currently, hardware tag-based KASAN violates this requirement.

Fix this by adding an initialization check alongside the check for
__GFP_ZEROTAGS.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kasan/hw_tags.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 0b8225add2e4..c643740b8599 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -199,11 +199,12 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
* page_alloc.c.
*/
bool init = !want_init_on_free() && want_init_on_alloc(flags);
+ bool init_tags = init && (flags & __GFP_ZEROTAGS);

if (flags & __GFP_SKIP_KASAN_POISON)
SetPageSkipKASanPoison(page);

- if (flags & __GFP_ZEROTAGS) {
+ if (init_tags) {
int i;

for (i = 0; i != 1 << order; ++i)
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 4:41:58 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch separates the code for zeroing memory from the code for
clearing tags in post_alloc_hook().

This patch is not useful by itself but makes the simplifications in
the following patches easier to follow.

This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2ada09a58e4b..0561cdafce36 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2406,19 +2406,21 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
kasan_alloc_pages(page, order, gfp_flags);
} else {
bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+ bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);

kasan_unpoison_pages(page, order, init);

- if (init) {
- if (gfp_flags & __GFP_ZEROTAGS) {
- int i;
+ if (init_tags) {
+ int i;

- for (i = 0; i < 1 << order; i++)
- tag_clear_highpage(page + i);
- } else {
- kernel_init_free_pages(page, 1 << order);
- }
+ for (i = 0; i < 1 << order; i++)
+ tag_clear_highpage(page + i);
+
+ init = false;
}
+
+ if (init)
+ kernel_init_free_pages(page, 1 << order);

andrey.k...@linux.dev

Nov 30, 2021, 4:42:06 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Currently, the code responsible for initializing and poisoning memory in
post_alloc_hook() is scattered across two locations: kasan_alloc_pages()
hook for HW_TAGS KASAN and post_alloc_hook() itself. This is confusing.

This and a few following patches combine the code from these two
locations. Along the way, these patches restructure the performed
checks step by step to make them easier to follow.

This patch replaces the only call to kasan_alloc_pages() with its
implementation.

As kasan_has_integrated_init() is only true when CONFIG_KASAN_HW_TAGS
is enabled, moving the code causes no functional changes.

The patch also moves the init and init_tags variable definitions out
of the kasan_has_integrated_init() clause in post_alloc_hook(), as they
have the same values regardless of what the if condition evaluates to.

This patch is not useful by itself but makes the simplifications in
the following patches easier to follow.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/kasan.h | 9 ---------
mm/kasan/common.c | 2 +-
mm/kasan/hw_tags.c | 22 ----------------------
mm/page_alloc.c | 20 +++++++++++++++-----
4 files changed, 16 insertions(+), 37 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 89a43d8ae4fe..1031070be3f3 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -94,8 +94,6 @@ static inline bool kasan_hw_tags_enabled(void)
return kasan_enabled();
}

-void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
-
#else /* CONFIG_KASAN_HW_TAGS */

static inline bool kasan_enabled(void)
@@ -108,13 +106,6 @@ static inline bool kasan_hw_tags_enabled(void)
return false;
}

-static __always_inline void kasan_alloc_pages(struct page *page,
- unsigned int order, gfp_t flags)
-{
- /* Only available for integrated init. */
- BUILD_BUG();
-}
-
#endif /* CONFIG_KASAN_HW_TAGS */

static inline bool kasan_has_integrated_init(void)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 66078cc1b4f0..d7168bfca61a 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -536,7 +536,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
return NULL;

/*
- * The object has already been unpoisoned by kasan_alloc_pages() for
+ * The object has already been unpoisoned by kasan_unpoison_pages() for
* alloc_pages() or by kasan_krealloc() for krealloc().
*/

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index c643740b8599..76cf2b6229c7 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -192,28 +192,6 @@ void __init kasan_init_hw_tags(void)
kasan_stack_collection_enabled() ? "on" : "off");
}

-void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
-{
- /*
- * This condition should match the one in post_alloc_hook() in
- * page_alloc.c.
- */
- bool init = !want_init_on_free() && want_init_on_alloc(flags);
- bool init_tags = init && (flags & __GFP_ZEROTAGS);
-
- if (flags & __GFP_SKIP_KASAN_POISON)
- SetPageSkipKASanPoison(page);
-
- if (init_tags) {
- int i;
-
- for (i = 0; i != 1 << order; ++i)
- tag_clear_highpage(page + i);
- } else {
- kasan_unpoison_pages(page, order, init);
- }
-}
-
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_enable_tagging_sync(void)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0561cdafce36..2a85aeb45ec1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2384,6 +2384,9 @@ static bool check_new_pages(struct page *page, unsigned int order)
inline void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags)
{
+ bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+ bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
+
set_page_private(page, 0);
set_page_refcounted(page);

@@ -2399,15 +2402,22 @@ inline void post_alloc_hook(struct page *page, unsigned int order,

/*
* As memory initialization might be integrated into KASAN,
- * kasan_alloc_pages and kernel_init_free_pages must be
+ * KASAN unpoisoning and memory initializion code must be
* kept together to avoid discrepancies in behavior.
*/
if (kasan_has_integrated_init()) {
- kasan_alloc_pages(page, order, gfp_flags);
- } else {
- bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
- bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
+ if (gfp_flags & __GFP_SKIP_KASAN_POISON)
+ SetPageSkipKASanPoison(page);
+
+ if (init_tags) {
+ int i;

+ for (i = 0; i != 1 << order; ++i)
+ tag_clear_highpage(page + i);
+ } else {
+ kasan_unpoison_pages(page, order, init);
+ }
+ } else {
kasan_unpoison_pages(page, order, init);

if (init_tags) {
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 4:52:43 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch moves the tag_clear_highpage() loops out of the
kasan_has_integrated_init() clause, combining them into one, as a code
simplification.

This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2a85aeb45ec1..e3e9fefbce43 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2405,30 +2405,30 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
* KASAN unpoisoning and memory initializion code must be
* kept together to avoid discrepancies in behavior.
*/
+
+ /*
+ * If memory tags should be zeroed (which happens only when memory
+ * should be initialized as well).
+ */
+ if (init_tags) {
+ int i;
+
+ /* Initialize both memory and tags. */
+ for (i = 0; i != 1 << order; ++i)
+ tag_clear_highpage(page + i);
+
+ /* Note that memory is already initialized by the loop above. */
+ init = false;
+ }
if (kasan_has_integrated_init()) {
if (gfp_flags & __GFP_SKIP_KASAN_POISON)
SetPageSkipKASanPoison(page);

- if (init_tags) {
- int i;
-
- for (i = 0; i != 1 << order; ++i)
- tag_clear_highpage(page + i);
- } else {
+ if (!init_tags)
kasan_unpoison_pages(page, order, init);
- }
} else {
kasan_unpoison_pages(page, order, init);

- if (init_tags) {
- int i;
-
- for (i = 0; i < 1 << order; i++)
- tag_clear_highpage(page + i);
-
- init = false;
- }
-
if (init)
kernel_init_free_pages(page, 1 << order);
}
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:05:19 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Pull the SetPageSkipKASanPoison() call in post_alloc_hook() out of the
big if clause for better code readability. This also allows for more
simplifications in the following patches.

Also turn the kasan_has_integrated_init() check into the proper
CONFIG_KASAN_HW_TAGS one. These checks evaluate to the same value,
but, logically, skipping KASAN poisoning has nothing to do with
integrated init.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e3e9fefbce43..c78befc4e057 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2421,9 +2421,6 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
init = false;
}
if (kasan_has_integrated_init()) {
- if (gfp_flags & __GFP_SKIP_KASAN_POISON)
- SetPageSkipKASanPoison(page);
-
if (!init_tags)
kasan_unpoison_pages(page, order, init);
} else {
@@ -2432,6 +2429,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
if (init)
kernel_init_free_pages(page, 1 << order);
}
+ /* Propagate __GFP_SKIP_KASAN_POISON to page flags. */
+ if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&
+ (gfp_flags & __GFP_SKIP_KASAN_POISON))
+ SetPageSkipKASanPoison(page);

andrey.k...@linux.dev

Nov 30, 2021, 5:05:24 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Pull the kernel_init_free_pages() call in post_alloc_hook() out of the
big if clause for better code readability. This also allows for more
simplifications in the following patch.

This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c78befc4e057..ba950889f5ea 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2421,14 +2421,18 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
init = false;
}
if (kasan_has_integrated_init()) {
- if (!init_tags)
+ if (!init_tags) {
kasan_unpoison_pages(page, order, init);
+
+ /* Note that memory is already initialized by KASAN. */
+ init = false;
+ }
} else {
kasan_unpoison_pages(page, order, init);
-
- if (init)
- kernel_init_free_pages(page, 1 << order);
}
+ /* If memory is still not initialized, do it now. */
+ if (init)
+ kernel_init_free_pages(page, 1 << order);
/* Propagate __GFP_SKIP_KASAN_POISON to page flags. */
if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&
(gfp_flags & __GFP_SKIP_KASAN_POISON))
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:05:34 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Simplify the checks around kasan_unpoison_pages() call in
post_alloc_hook().

The logical condition for calling this function is:

- If a software KASAN mode is enabled, we need to mark shadow memory.
- Otherwise, HW_TAGS KASAN is enabled, and it only makes sense to
set tags if they haven't already been cleared by tag_clear_highpage(),
which is indicated by init_tags.

This patch concludes the simplifications for post_alloc_hook().

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ba950889f5ea..4eb341351124 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2420,15 +2420,18 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
/* Note that memory is already initialized by the loop above. */
init = false;
}
- if (kasan_has_integrated_init()) {
- if (!init_tags) {
- kasan_unpoison_pages(page, order, init);
+ /*
+ * If either a software KASAN mode is enabled, or,
+ * in the case of hardware tag-based KASAN,
+ * if memory tags have not been cleared via tag_clear_highpage().
+ */
+ if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS) || !init_tags) {
+ /* Mark shadow memory or set memory tags. */
+ kasan_unpoison_pages(page, order, init);

- /* Note that memory is already initialized by KASAN. */
+ /* Note that memory is already initialized by KASAN. */
+ if (kasan_has_integrated_init())
init = false;
- }
- } else {
- kasan_unpoison_pages(page, order, init);
}
/* If memory is still not initialized, do it now. */
if (init)
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:06:39 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Most of the metadata byte values are only used for Generic KASAN.

Remove the KASAN_KMALLOC_FREETRACK definition for the
!CONFIG_KASAN_GENERIC case, and put it, along with the other metadata
values for the Generic mode, under the corresponding ifdef.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kasan/kasan.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index aebd8df86a1f..a50450160638 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -71,15 +71,16 @@ static inline bool kasan_sync_fault_possible(void)
#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
-#define KASAN_KMALLOC_FREETRACK 0xFA /* object was freed and has free track set */
#else
#define KASAN_FREE_PAGE KASAN_TAG_INVALID
#define KASAN_PAGE_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_FREE KASAN_TAG_INVALID
-#define KASAN_KMALLOC_FREETRACK KASAN_TAG_INVALID
#endif

+#ifdef CONFIG_KASAN_GENERIC
+
+#define KASAN_KMALLOC_FREETRACK 0xFA /* object was freed and has free track set */
#define KASAN_GLOBAL_REDZONE 0xF9 /* redzone for global variable */
#define KASAN_VMALLOC_INVALID 0xF8 /* unallocated space in vmapped page */

@@ -110,6 +111,8 @@ static inline bool kasan_sync_fault_possible(void)
#define KASAN_ABI_VERSION 1
#endif

+#endif /* CONFIG_KASAN_GENERIC */
+
/* Metadata layout customization. */
#define META_BYTES_PER_BLOCK 1
#define META_BLOCKS_PER_ROW 16
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:06:47 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

In preparation for adding vmalloc support to SW_TAGS KASAN,
provide a KASAN_VMALLOC_INVALID definition for it.

HW_TAGS KASAN won't be using this value, as it falls back onto
page_alloc for poisoning freed vmalloc() memory.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kasan/kasan.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index a50450160638..0827d74d0d87 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -71,18 +71,19 @@ static inline bool kasan_sync_fault_possible(void)
#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_VMALLOC_INVALID 0xF8 /* unallocated space in vmapped page */
#else
#define KASAN_FREE_PAGE KASAN_TAG_INVALID
#define KASAN_PAGE_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_FREE KASAN_TAG_INVALID
+#define KASAN_VMALLOC_INVALID KASAN_TAG_INVALID /* only for SW_TAGS */
#endif

#ifdef CONFIG_KASAN_GENERIC

#define KASAN_KMALLOC_FREETRACK 0xFA /* object was freed and has free track set */
#define KASAN_GLOBAL_REDZONE 0xF9 /* redzone for global variable */
-#define KASAN_VMALLOC_INVALID 0xF8 /* unallocated space in vmapped page */

/*
* Stack redzone shadow values
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:06:51 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Rename kasan_free_shadow to kasan_free_module_shadow and
kasan_module_alloc to kasan_alloc_module_shadow.

These functions are used to allocate/free shadow memory for kernel
modules when KASAN_VMALLOC is not enabled. The new names better
reflect their purpose.

Also reword the comment next to their declarations to improve clarity.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
arch/arm64/kernel/module.c | 2 +-
arch/s390/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/kasan.h | 14 +++++++-------
mm/kasan/shadow.c | 4 ++--
mm/vmalloc.c | 2 +-
6 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index b5ec010c481f..f8bd5100efb5 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -58,7 +58,7 @@ void *module_alloc(unsigned long size)
PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));

- if (p && (kasan_module_alloc(p, size) < 0)) {
+ if (p && (kasan_alloc_module_shadow(p, size) < 0)) {
vfree(p);
return NULL;
}
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b01ba460b7ca..a753cebedda9 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -44,7 +44,7 @@ void *module_alloc(unsigned long size)
p = __vmalloc_node_range(size, MODULE_ALIGN, MODULES_VADDR, MODULES_END,
GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
- if (p && (kasan_module_alloc(p, size) < 0)) {
+ if (p && (kasan_alloc_module_shadow(p, size) < 0)) {
vfree(p);
return NULL;
}
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 169fb6f4cd2e..dec41d9ba337 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -77,7 +77,7 @@ void *module_alloc(unsigned long size)
MODULES_END, GFP_KERNEL,
PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
- if (p && (kasan_module_alloc(p, size) < 0)) {
+ if (p && (kasan_alloc_module_shadow(p, size) < 0)) {
vfree(p);
return NULL;
}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 1031070be3f3..4eec58e6ef82 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -453,17 +453,17 @@ static inline void kasan_populate_early_vm_area_shadow(void *start,
!defined(CONFIG_KASAN_VMALLOC)

/*
- * These functions provide a special case to support backing module
- * allocations with real shadow memory. With KASAN vmalloc, the special
- * case is unnecessary, as the work is handled in the generic case.
+ * These functions allocate and free shadow memory for kernel modules.
+ * They are only required when KASAN_VMALLOC is not supported, as otherwise
+ * shadow memory is allocated by the generic vmalloc handlers.
*/
-int kasan_module_alloc(void *addr, size_t size);
-void kasan_free_shadow(const struct vm_struct *vm);
+int kasan_alloc_module_shadow(void *addr, size_t size);
+void kasan_free_module_shadow(const struct vm_struct *vm);

#else /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */

-static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
-static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+static inline int kasan_alloc_module_shadow(void *addr, size_t size) { return 0; }
+static inline void kasan_free_module_shadow(const struct vm_struct *vm) {}

#endif /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 4a4929b29a23..585c2bf1073b 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -498,7 +498,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,

#else /* CONFIG_KASAN_VMALLOC */

-int kasan_module_alloc(void *addr, size_t size)
+int kasan_alloc_module_shadow(void *addr, size_t size)
{
void *ret;
size_t scaled_size;
@@ -529,7 +529,7 @@ int kasan_module_alloc(void *addr, size_t size)
return -ENOMEM;
}

-void kasan_free_shadow(const struct vm_struct *vm)
+void kasan_free_module_shadow(const struct vm_struct *vm)
{
if (vm->flags & VM_KASAN)
vfree(kasan_mem_to_shadow(vm->addr));
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..c5235e3e5857 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2524,7 +2524,7 @@ struct vm_struct *remove_vm_area(const void *addr)
va->vm = NULL;
spin_unlock(&vmap_area_lock);

- kasan_free_shadow(vm);
+ kasan_free_module_shadow(vm);
free_unmap_vmap_area(va);

return vm;
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:06:58 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

The comment about VM_KASAN in include/linux/vmalloc.h is outdated.
VM_KASAN is currently only used to mark vm_areas allocated for
kernel modules when CONFIG_KASAN_VMALLOC is disabled.

Drop the comment.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/vmalloc.h | 11 -----------
1 file changed, 11 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 6e022cc712e6..b22369f540eb 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -28,17 +28,6 @@ struct notifier_block; /* in notifier.h */
#define VM_MAP_PUT_PAGES 0x00000200 /* put pages and free array in vfree */
#define VM_NO_HUGE_VMAP 0x00000400 /* force PAGE_SIZE pte mapping */

-/*
- * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC.
- *
- * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after
- * shadow memory has been mapped. It's used to handle allocation errors so that
- * we don't try to poison shadow on free if it was never allocated.
- *
- * Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to
- * determine which allocations need the module shadow freed.
- */
-
/* bits [20..32] reserved for arch specific ioremap internals */

/*
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:07:03 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Group functions that [de]populate shadow memory for vmalloc.
Group functions that [un]poison memory for vmalloc.

This patch makes no functional changes but prepares the KASAN code for
adding vmalloc support to HW_TAGS KASAN.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/kasan.h | 20 +++++++++-----------
mm/kasan/shadow.c | 43 ++++++++++++++++++++++---------------------
2 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4eec58e6ef82..af2dd67d2c0e 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -417,34 +417,32 @@ static inline void kasan_init_hw_tags(void) { }

#ifdef CONFIG_KASAN_VMALLOC

+void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
-void kasan_poison_vmalloc(const void *start, unsigned long size);
-void kasan_unpoison_vmalloc(const void *start, unsigned long size);
void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+void kasan_unpoison_vmalloc(const void *start, unsigned long size);
+void kasan_poison_vmalloc(const void *start, unsigned long size);

#else /* CONFIG_KASAN_VMALLOC */

+static inline void kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size) { }
static inline int kasan_populate_vmalloc(unsigned long start,
unsigned long size)
{
return 0;
}
-
-static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
-{ }
-static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
-{ }
static inline void kasan_release_vmalloc(unsigned long start,
unsigned long end,
unsigned long free_region_start,
- unsigned long free_region_end) {}
+ unsigned long free_region_end) { }

-static inline void kasan_populate_early_vm_area_shadow(void *start,
- unsigned long size)
+static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{ }
+static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }

#endif /* CONFIG_KASAN_VMALLOC */
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 585c2bf1073b..49a3660e111a 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -345,27 +345,6 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
return 0;
}

-/*
- * Poison the shadow for a vmalloc region. Called as part of the
- * freeing process at the time the region is freed.
- */
-void kasan_poison_vmalloc(const void *start, unsigned long size)
-{
- if (!is_vmalloc_or_module_addr(start))
- return;
-
- size = round_up(size, KASAN_GRANULE_SIZE);
- kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
-}
-
-void kasan_unpoison_vmalloc(const void *start, unsigned long size)
-{
- if (!is_vmalloc_or_module_addr(start))
- return;
-
- kasan_unpoison(start, size, false);
-}
-
static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
void *unused)
{
@@ -496,6 +475,28 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

+
+void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{
+ if (!is_vmalloc_or_module_addr(start))
+ return;
+
+ kasan_unpoison(start, size, false);
+}
+
+/*
+ * Poison the shadow for a vmalloc region. Called as part of the
+ * freeing process at the time the region is freed.
+ */
+void kasan_poison_vmalloc(const void *start, unsigned long size)
+{
+ if (!is_vmalloc_or_module_addr(start))
+ return;
+
+ size = round_up(size, KASAN_GRANULE_SIZE);
+ kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
+}
+
#else /* CONFIG_KASAN_VMALLOC */

int kasan_alloc_module_shadow(void *addr, size_t size)
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:07:07 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Add wrappers around the functions that [un]poison memory for vmalloc
allocations. These functions will be used by HW_TAGS KASAN and
therefore need to be disabled when the kasan=off command line argument
is provided.

This patch makes no functional changes for software KASAN modes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/kasan.h | 17 +++++++++++++++--
mm/kasan/shadow.c | 5 ++---
2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index af2dd67d2c0e..ad4798e77f60 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -423,8 +423,21 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void kasan_unpoison_vmalloc(const void *start, unsigned long size);
-void kasan_poison_vmalloc(const void *start, unsigned long size);
+void __kasan_unpoison_vmalloc(const void *start, unsigned long size);
+static __always_inline void kasan_unpoison_vmalloc(const void *start,
+ unsigned long size)
+{
+ if (kasan_enabled())
+ __kasan_unpoison_vmalloc(start, size);
+}
+
+void __kasan_poison_vmalloc(const void *start, unsigned long size);
+static __always_inline void kasan_poison_vmalloc(const void *start,
+ unsigned long size)
+{
+ if (kasan_enabled())
+ __kasan_poison_vmalloc(start, size);
+}

#else /* CONFIG_KASAN_VMALLOC */

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 49a3660e111a..fa0c8a750d09 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -475,8 +475,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

-
-void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+void __kasan_unpoison_vmalloc(const void *start, unsigned long size)
{
if (!is_vmalloc_or_module_addr(start))
return;
@@ -488,7 +487,7 @@ void kasan_unpoison_vmalloc(const void *start, unsigned long size)
* Poison the shadow for a vmalloc region. Called as part of the
* freeing process at the time the region is freed.
*/
-void kasan_poison_vmalloc(const void *start, unsigned long size)
+void __kasan_poison_vmalloc(const void *start, unsigned long size)
{
if (!is_vmalloc_or_module_addr(start))
return;
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:07:27 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Once tag-based KASAN modes start tagging vmalloc() allocations,
kernel stacks will start getting tagged if CONFIG_VMAP_STACK is enabled.

Reset the tag of kernel stack pointers after allocation.

For SW_TAGS KASAN, when CONFIG_KASAN_STACK is enabled, the
instrumentation can't handle the sp register being tagged.

For HW_TAGS KASAN, there are no instrumentation-related issues. However,
the impact of having a tagged SP pointer needs to be properly evaluated,
so keep it non-tagged for now.

Note that the memory for the stack allocation still gets tagged to
catch vmalloc-into-stack out-of-bounds accesses.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
kernel/fork.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index 3244cc56b697..062d1484ef42 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -253,6 +253,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
* so cache the vm_struct.
*/
if (stack) {
+ stack = kasan_reset_tag(stack);
tsk->stack_vm_area = find_vm_area(stack);
tsk->stack = stack;
}
--
2.25.1

andrey.k...@linux.dev

Nov 30, 2021, 5:07:27 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

In preparation for adding vmalloc support to SW/HW_TAGS KASAN,
reset pointer tags in functions that use pointer values in
range checks.

vread() is a special case here. Resetting the pointer tag in its
prologue could technically lead to missing bad accesses to virtual
mappings in its implementation. However, vread() doesn't access the
virtual mappings directly. Instead, it recovers the corresponding
address in the linear mapping via page_address(vmalloc_to_page()) and
accesses that. And as page_address() recovers the pointer tag, the
accesses are checked.
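
A hedged sketch of the access pattern described above (simplified;
buf, offset, and length are illustrative):

	struct page *page = vmalloc_to_page(addr);	/* addr: tag already reset */
	void *kaddr = page_address(page);		/* page_address() restores the tag */
	memcpy(buf, kaddr + offset, length);		/* this access is checked by KASAN */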

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/vmalloc.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c5235e3e5857..a059b3100c0a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -72,7 +72,7 @@ static const bool vmap_allow_huge = false;

bool is_vmalloc_addr(const void *x)
{
- unsigned long addr = (unsigned long)x;
+ unsigned long addr = (unsigned long)kasan_reset_tag(x);

return addr >= VMALLOC_START && addr < VMALLOC_END;
}
@@ -630,7 +630,7 @@ int is_vmalloc_or_module_addr(const void *x)
* just put it in the vmalloc space.
*/
#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
- unsigned long addr = (unsigned long)x;
+ unsigned long addr = (unsigned long)kasan_reset_tag(x);
if (addr >= MODULES_VADDR && addr < MODULES_END)
return 1;
#endif
@@ -804,6 +804,8 @@ static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
struct vmap_area *va = NULL;
struct rb_node *n = vmap_area_root.rb_node;

+ addr = (unsigned long)kasan_reset_tag((void *)addr);
+
while (n) {
struct vmap_area *tmp;

@@ -825,6 +827,8 @@ static struct vmap_area *__find_vmap_area(unsigned long addr)
{
struct rb_node *n = vmap_area_root.rb_node;

+ addr = (unsigned long)kasan_reset_tag((void *)addr);
+
while (n) {
struct vmap_area *va;

@@ -2143,7 +2147,7 @@ EXPORT_SYMBOL_GPL(vm_unmap_aliases);
void vm_unmap_ram(const void *mem, unsigned int count)
{
unsigned long size = (unsigned long)count << PAGE_SHIFT;
- unsigned long addr = (unsigned long)mem;
+ unsigned long addr = (unsigned long)kasan_reset_tag(mem);
struct vmap_area *va;

might_sleep();
@@ -3361,6 +3365,8 @@ long vread(char *buf, char *addr, unsigned long count)
unsigned long buflen = count;
unsigned long n;

+ addr = kasan_reset_tag(addr);
+
/* Don't allow overflow */
if ((unsigned long) addr + count < count)
count = -(unsigned long) addr;
--
2.25.1

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:07:27 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch adds vmalloc tagging support to SW_TAGS KASAN.

The changes include:

- __kasan_unpoison_vmalloc() now assigns a random pointer tag, poisons
the virtual mapping accordingly, and embeds the tag into the returned
pointer.

- __get_vm_area_node() (used by vmalloc() and vmap()) and
pcpu_get_vm_areas() save the tagged pointer into vm_struct->addr
(note: not into vmap_area->addr). This requires putting
kasan_unpoison_vmalloc() after setup_vmalloc_vm[_locked]();
otherwise the latter will overwrite the tagged pointer.
The tagged pointer is then naturally propagated to vmalloc()
and vmap().

- vm_map_ram() returns the tagged pointer directly.
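
As a rough sketch of the resulting behaviour (illustrative only; the
exact offset at which a bug is reported depends on KASAN_GRANULE_SIZE,
see also the vmalloc_oob test updated later in this series):

	char *p = vmalloc(100);

	p[0] = 0;                        /* in-bounds: pointer tag matches */
	p[100 + KASAN_GRANULE_SIZE] = 0; /* lands in a poisoned granule past
	                                  * the allocation: tag mismatch,
	                                  * SW_TAGS KASAN reports it */
	vfree(p);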

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/kasan.h | 17 +++++++++++------
mm/kasan/shadow.c | 6 ++++--
mm/vmalloc.c | 14 ++++++++------
3 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index ad4798e77f60..6a2619759e93 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -423,12 +423,14 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void __kasan_unpoison_vmalloc(const void *start, unsigned long size);
-static __always_inline void kasan_unpoison_vmalloc(const void *start,
- unsigned long size)
+void * __must_check __kasan_unpoison_vmalloc(const void *start,
+ unsigned long size);
+static __always_inline void * __must_check kasan_unpoison_vmalloc(
+ const void *start, unsigned long size)
{
if (kasan_enabled())
- __kasan_unpoison_vmalloc(start, size);
+ return __kasan_unpoison_vmalloc(start, size);
+ return (void *)start;
}

void __kasan_poison_vmalloc(const void *start, unsigned long size);
@@ -453,8 +455,11 @@ static inline void kasan_release_vmalloc(unsigned long start,
unsigned long free_region_start,
unsigned long free_region_end) { }

-static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
-{ }
+static inline void *kasan_unpoison_vmalloc(const void *start,
+ unsigned long size, bool unique)
+{
+ return (void *)start;
+}
static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index fa0c8a750d09..4ca280a96fbc 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -475,12 +475,14 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

-void __kasan_unpoison_vmalloc(const void *start, unsigned long size)
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size)
{
if (!is_vmalloc_or_module_addr(start))
- return;
+ return (void *)start;

+ start = set_tag(start, kasan_random_tag());
kasan_unpoison(start, size, false);
+ return (void *)start;
}

/*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a059b3100c0a..7be18b292679 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2208,7 +2208,7 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
mem = (void *)addr;
}

- kasan_unpoison_vmalloc(mem, size);
+ mem = kasan_unpoison_vmalloc(mem, size);

if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
pages, PAGE_SHIFT) < 0) {
@@ -2441,10 +2441,10 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
return NULL;
}

- kasan_unpoison_vmalloc((void *)va->va_start, requested_size);
-
setup_vmalloc_vm(area, va, flags, caller);

+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+
return area;
}

@@ -3752,9 +3752,6 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
for (area = 0; area < nr_vms; area++) {
if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
goto err_free_shadow;
-
- kasan_unpoison_vmalloc((void *)vas[area]->va_start,
- sizes[area]);
}

/* insert all vm's */
@@ -3767,6 +3764,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
}
spin_unlock(&vmap_area_lock);

+ /* mark allocated areas as accessible */
+ for (area = 0; area < nr_vms; area++)
+ vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
+ vms[area]->size);
+
kfree(vas);
return vms;

--
2.25.1

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:07:54 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

vmalloc support for SW_TAGS KASAN is now complete.

Allow enabling CONFIG_KASAN_VMALLOC.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
arch/arm64/Kconfig | 1 +
lib/Kconfig.kasan | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c4207cf9bb17..c05d7a06276f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -206,6 +206,7 @@ config ARM64
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
select KASAN_VMALLOC if KASAN_GENERIC
+ select KASAN_VMALLOC if KASAN_SW_TAGS
select MODULES_USE_ELF_RELA
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index cdc842d090db..3f144a87f8a3 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -179,7 +179,7 @@ config KASAN_TAGS_IDENTIFY

config KASAN_VMALLOC
bool "Back mappings in vmalloc space with real shadow memory"
- depends on KASAN_GENERIC && HAVE_ARCH_KASAN_VMALLOC
+ depends on (KASAN_GENERIC || KASAN_SW_TAGS) && HAVE_ARCH_KASAN_VMALLOC
help
By default, the shadow region for vmalloc space is the read-only
zero page. This means that KASAN cannot detect errors involving
--
2.25.1

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:08:00 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

HW_TAGS KASAN relies on ARM Memory Tagging Extension (MTE). With MTE,
a memory region must be mapped as MT_NORMAL_TAGGED to allow setting
memory tags via MTE-specific instructions.

This change adds the required protection bits to vmalloc() allocations.
These allocations are always backed by page_alloc pages, so the tags
will actually be set on the corresponding physical memory.
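
A minimal sketch of the intended effect (illustrative, not part of the
diff below):

	pgprot_t prot = PAGE_KERNEL;

	prot = arch_vmalloc_pgprot_modify(prot);
	/*
	 * With CONFIG_KASAN_HW_TAGS, prot is now pgprot_tagged(PAGE_KERNEL):
	 * the vmalloc() mapping is created as MT_NORMAL_TAGGED, so the MTE
	 * tag-setting instructions issued by KASAN take effect on it.
	 */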

Signed-off-by: Andrey Konovalov <andre...@google.com>
Co-developed-by: Vincenzo Frascino <vincenzo...@arm.com>
---
arch/arm64/include/asm/vmalloc.h | 10 ++++++++++
include/linux/vmalloc.h | 7 +++++++
mm/vmalloc.c | 2 ++
3 files changed, 19 insertions(+)

diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index b9185503feae..3d35adf365bf 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -25,4 +25,14 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)

#endif

+#define arch_vmalloc_pgprot_modify arch_vmalloc_pgprot_modify
+static inline pgprot_t arch_vmalloc_pgprot_modify(pgprot_t prot)
+{
+ if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&
+ (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)))
+ prot = pgprot_tagged(prot);
+
+ return prot;
+}
+
#endif /* _ASM_ARM64_VMALLOC_H */
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b22369f540eb..965c4bf475f1 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -108,6 +108,13 @@ static inline int arch_vmap_pte_supported_shift(unsigned long size)
}
#endif

+#ifndef arch_vmalloc_pgprot_modify
+static inline pgprot_t arch_vmalloc_pgprot_modify(pgprot_t prot)
+{
+ return prot;
+}
+#endif
+
/*
* Highlevel APIs for driver use
*/
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7be18b292679..f37d0ed99bf9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3033,6 +3033,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
return NULL;
}

+ prot = arch_vmalloc_pgprot_modify(prot);
+
if (vmap_allow_huge && !(vm_flags & VM_NO_HUGE_VMAP)) {
unsigned long size_per_node;

--
2.25.1

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:08:04 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch makes KASAN unpoison vmalloc mappings after they have been
mapped in, when possible: for vmalloc() (identified via VM_ALLOC)
and vm_map_ram().

The reasons for this are:

- For vmalloc() and vm_map_ram(): pages don't get unpoisoned in case
mapping them fails.
- For vmalloc(): HW_TAGS KASAN needs pages to be mapped to set tags via
kasan_unpoison_vmalloc().
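
Roughly, the resulting ordering looks like this (a simplified sketch of
the diff below, not additional code):

	addr = ...;	/* virtual range reserved, nothing unpoisoned yet */

	if (vmap_pages_range(addr, addr + size, PAGE_KERNEL, pages, PAGE_SHIFT) < 0)
		return NULL;	/* mapping failed: pages were never unpoisoned */

	/* Pages are mapped now, so HW_TAGS KASAN can set their tags. */
	addr = kasan_unpoison_vmalloc(addr, size);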

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/vmalloc.c | 26 ++++++++++++++++++++++----
1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f37d0ed99bf9..82ef1e27e2e4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2208,14 +2208,15 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
mem = (void *)addr;
}

- mem = kasan_unpoison_vmalloc(mem, size);
-
if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
pages, PAGE_SHIFT) < 0) {
vm_unmap_ram(mem, count);
return NULL;
}

+ /* Mark the pages as accessible after they were mapped in. */
+ mem = kasan_unpoison_vmalloc(mem, size);
+
return mem;
}
EXPORT_SYMBOL(vm_map_ram);
@@ -2443,7 +2444,14 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,

setup_vmalloc_vm(area, va, flags, caller);

- area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+ /*
+ * For VM_ALLOC mappings, __vmalloc_node_range() marks the pages as
+ * accessible after they are mapped in.
+ * Otherwise, as the pages can be mapped outside of vmalloc code,
+ * mark them now as a best-effort approach.
+ */
+ if (!(flags & VM_ALLOC))
+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);

return area;
}
@@ -3072,6 +3080,12 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!addr)
goto fail;

+ /*
+ * Mark the pages for VM_ALLOC mappings as accessible after they were
+ * mapped in.
+ */
+ addr = kasan_unpoison_vmalloc(addr, real_size);
+
/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
* flag. It means that vm_struct is not fully initialized.
@@ -3766,7 +3780,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
}
spin_unlock(&vmap_area_lock);

- /* mark allocated areas as accessible */
+ /*
+ * Mark allocated areas as accessible.
+ * As the pages are mapped outside of vmalloc code,
+ * mark them now as a best-effort approach.
+ */
for (area = 0; area < nr_vms; area++)
vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
vms[area]->size);
--
2.25.1

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:08:11 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch adds a new GFP flag __GFP_SKIP_KASAN_UNPOISON that allows
skipping KASAN unpoisoning for page_alloc allocations. The flag is only
effective with HW_TAGS KASAN.

This flag will be used by vmalloc code for page_alloc allocations
backing vmalloc() mappings in the following patch. The reason to skip
KASAN unpoisoning for these pages in page_alloc is that vmalloc code
will be unpoisoning (tagging) them instead.
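
For illustration, a caller would opt in like this (a sketch; the actual
vmalloc-side user is added in the following patch):

	/* Only has an effect with HW_TAGS KASAN: the pages are returned
	 * without KASAN tags set, and the caller assigns tags later. */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_SKIP_KASAN_UNPOISON, order);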

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/gfp.h | 13 +++++++++----
mm/page_alloc.c | 24 +++++++++++++++++-------
2 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index dddd7597689f..a4c8ff3fbed1 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -54,9 +54,10 @@ struct vm_area_struct;
#define ___GFP_THISNODE 0x200000u
#define ___GFP_ACCOUNT 0x400000u
#define ___GFP_ZEROTAGS 0x800000u
-#define ___GFP_SKIP_KASAN_POISON 0x1000000u
+#define ___GFP_SKIP_KASAN_UNPOISON 0x1000000u
+#define ___GFP_SKIP_KASAN_POISON 0x2000000u
#ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP 0x2000000u
+#define ___GFP_NOLOCKDEP 0x4000000u
#else
#define ___GFP_NOLOCKDEP 0
#endif
@@ -235,6 +236,9 @@ struct vm_area_struct;
* %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
* is being zeroed (either via __GFP_ZERO or via init_on_alloc).
*
+ * %__GFP_SKIP_KASAN_UNPOISON skips KASAN unpoisoning on page allocation.
+ * Currently only has an effect in HW tags mode.
+ *
* %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
* on deallocation. Typically used for userspace pages. Currently only has an
* effect in HW tags mode.
@@ -243,13 +247,14 @@ struct vm_area_struct;
#define __GFP_COMP ((__force gfp_t)___GFP_COMP)
#define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS)
-#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)
+#define __GFP_SKIP_KASAN_UNPOISON ((__force gfp_t)___GFP_SKIP_KASAN_UNPOISON)
+#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)

/* Disable lockdep for GFP context tracking */
#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)

/* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (26 + IS_ENABLED(CONFIG_LOCKDEP))
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))

/**
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4eb341351124..3afebc037fcd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2381,6 +2381,21 @@ static bool check_new_pages(struct page *page, unsigned int order)
return false;
}

+static inline bool should_skip_kasan_unpoison(gfp_t flags, bool init_tags)
+{
+ /* Don't skip if a software KASAN mode is enabled. */
+ if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS))
+ return false;
+
+ /*
+ * For hardware tag-based KASAN, skip if either:
+ *
+ * 1. Memory tags have already been cleared via tag_clear_highpage().
+ * 2. Skipping has been requested via __GFP_SKIP_KASAN_UNPOISON.
+ */
+ return init_tags || (flags & __GFP_SKIP_KASAN_UNPOISON);
+}
+
inline void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags)
{
@@ -2420,13 +2435,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
/* Note that memory is already initialized by the loop above. */
init = false;
}
- /*
- * If either a software KASAN mode is enabled, or,
- * in the case of hardware tag-based KASAN,
- * if memory tags have not been cleared via tag_clear_highpage().
- */
- if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS) || !init_tags) {
- /* Mark shadow memory or set memory tags. */
+ if (!should_skip_kasan_unpoison(gfp_flags, init_tags)) {
+ /* Unpoison shadow memory or set memory tags. */
kasan_unpoison_pages(page, order, init);

/* Note that memory is already initialized by KASAN. */
--
2.25.1

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:08:16 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch adds vmalloc tagging support to HW_TAGS KASAN.

The key difference between HW_TAGS and the other two KASAN modes
when it comes to vmalloc is that HW_TAGS KASAN can only assign tags to
physical memory. The other two modes have shadow memory covering
every mapped virtual memory region.

This patch makes __kasan_unpoison_vmalloc() for HW_TAGS KASAN:

- Skip non-VM_ALLOC mappings as HW_TAGS KASAN can only tag a single
mapping of normal physical memory; see the comment in the function.
- Generate a random tag, tag the returned pointer and the allocation.
- Propagate the tag into the page structs to allow accesses through
page_address(vmalloc_to_page()).

The rest of vmalloc-related KASAN hooks are not needed:

- The shadow-related ones are fully skipped.
- __kasan_poison_vmalloc() is kept as a no-op with a comment.

Unpoisoning of the physical pages that back vmalloc() allocations
is skipped in page_alloc via __GFP_SKIP_KASAN_UNPOISON:
__kasan_unpoison_vmalloc() unpoisons them instead.
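
A sketch of the two access paths this enables (illustrative only):

	char *v = vmalloc(PAGE_SIZE);               /* tagged pointer */
	char *p = page_address(vmalloc_to_page(v)); /* linear-map alias */

	v[0] = 0;	/* checked against the MTE tags set by __kasan_unpoison_vmalloc() */
	p[0] = 0;	/* also valid: page_kasan_tag_set() propagated the same tag,
			 * so page_address() returns a matching pointer */
	vfree(v);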

Signed-off-by: Andrey Konovalov <andre...@google.com>
Co-developed-by: Vincenzo Frascino <vincenzo...@arm.com>
---
include/linux/kasan.h | 27 +++++++++++--
mm/kasan/hw_tags.c | 92 +++++++++++++++++++++++++++++++++++++++++++
mm/kasan/shadow.c | 8 +++-
mm/vmalloc.c | 25 +++++++++---
4 files changed, 143 insertions(+), 9 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 6a2619759e93..df1a09fb7623 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -417,19 +417,40 @@ static inline void kasan_init_hw_tags(void) { }

#ifdef CONFIG_KASAN_VMALLOC

+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+
void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

+#else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
+static inline void kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size)
+{ }
+static inline int kasan_populate_vmalloc(unsigned long start,
+ unsigned long size)
+{
+ return 0;
+}
+static inline void kasan_release_vmalloc(unsigned long start,
+ unsigned long end,
+ unsigned long free_region_start,
+ unsigned long free_region_end) { }
+
+#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
void * __must_check __kasan_unpoison_vmalloc(const void *start,
- unsigned long size);
+ unsigned long size,
+ bool vm_alloc);
static __always_inline void * __must_check kasan_unpoison_vmalloc(
- const void *start, unsigned long size)
+ const void *start, unsigned long size,
+ bool vm_alloc)
{
if (kasan_enabled())
- return __kasan_unpoison_vmalloc(start, size);
+ return __kasan_unpoison_vmalloc(start, size, vm_alloc);
return (void *)start;
}

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 76cf2b6229c7..fd3a93dfca42 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -192,6 +192,98 @@ void __init kasan_init_hw_tags(void)
kasan_stack_collection_enabled() ? "on" : "off");
}

+#ifdef CONFIG_KASAN_VMALLOC
+
+static void unpoison_vmalloc_pages(const void *addr, u8 tag)
+{
+ struct vm_struct *area;
+ int i;
+
+ /*
+ * As hardware tag-based KASAN only tags VM_ALLOC vmalloc allocations
+ * (see the comment in __kasan_unpoison_vmalloc), all of the pages
+ * should belong to a single area.
+ */
+ area = find_vm_area((void *)addr);
+ if (WARN_ON(!area))
+ return;
+
+ for (i = 0; i < area->nr_pages; i++) {
+ struct page *page = area->pages[i];
+
+ page_kasan_tag_set(page, tag);
+ }
+}
+
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ bool vm_alloc)
+{
+ u8 tag;
+ unsigned long redzone_start, redzone_size;
+
+ if (!is_vmalloc_or_module_addr(start))
+ return (void *)start;
+
+ /* Unpoisoning and pointer tag assignment are skipped for non-VM_ALLOC
+ * mappings as:
+ *
+ * 1. Unlike the software KASAN modes, hardware tag-based KASAN only
+ * supports tagging physical memory. Therefore, it can only tag a
+ * single mapping of normal physical pages.
+ * 2. Hardware tag-based KASAN can only tag memory mapped with special
+ * mapping protection bits, see arch_vmalloc_pgprot_modify().
+ * As non-VM_ALLOC mappings can be mapped outside of vmalloc code,
+ * providing these bits would require tracking all non-VM_ALLOC
+ * mappers.
+ *
+ * Thus, for VM_ALLOC mappings, hardware tag-based KASAN only tags
+ * the first virtual mapping, which is created by vmalloc().
+ * Tagging the page_alloc memory backing that vmalloc() allocation is
+ * skipped, see ___GFP_SKIP_KASAN_UNPOISON.
+ *
+ * For non-VM_ALLOC allocations, page_alloc memory is tagged as usual.
+ */
+ if (!vm_alloc)
+ return (void *)start;
+
+ tag = kasan_random_tag();
+ start = set_tag(start, tag);
+
+ /*
+ * Unpoison but don't initialize. The pages have already been
+ * initialized by page_alloc.
+ */
+ kasan_unpoison(start, size, false);
+
+ /*
+ * Unlike software KASAN modes, hardware tag-based KASAN doesn't
+ * unpoison memory when populating shadow for vmalloc() space.
+ * Thus, it needs to explicitly poison the in-page vmalloc() redzone.
+ */
+ redzone_start = round_up((unsigned long)start + size, KASAN_GRANULE_SIZE);
+ redzone_size = round_up(redzone_start, PAGE_SIZE) - redzone_start;
+ kasan_poison((void *)redzone_start, redzone_size, KASAN_TAG_INVALID, false);
+
+ /*
+ * Set per-page tag flags to allow accessing physical memory for the
+ * vmalloc() mapping through page_address(vmalloc_to_page()).
+ */
+ unpoison_vmalloc_pages(start, tag);
+
+ return (void *)start;
+}
+
+void __kasan_poison_vmalloc(const void *start, unsigned long size)
+{
+ /*
+ * No tagging here.
+ * The physical pages backing the vmalloc() allocation are poisoned
+ * through the usual page_alloc paths.
+ */
+}
+
+#endif
+
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_enable_tagging_sync(void)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 4ca280a96fbc..f27d48c24166 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -475,8 +475,14 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

-void *__kasan_unpoison_vmalloc(const void *start, unsigned long size)
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ bool vm_alloc)
{
+ /*
+ * As software tag-based KASAN tags both VM_ALLOC and non-VM_ALLOC
+ * mappings, the vm_alloc argument is ignored.
+ */
+
if (!is_vmalloc_or_module_addr(start))
return (void *)start;

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 82ef1e27e2e4..409a289dec81 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2214,8 +2214,12 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
return NULL;
}

- /* Mark the pages as accessible after they were mapped in. */
- mem = kasan_unpoison_vmalloc(mem, size);
+ /*
+ * Mark the pages as accessible after they were mapped in.
+ * With hardware tag-based KASAN, marking is skipped for
+ * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
+ */
+ mem = kasan_unpoison_vmalloc(mem, size, false);

return mem;
}
@@ -2449,9 +2453,12 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
* accessible after they are mapped in.
* Otherwise, as the pages can be mapped outside of vmalloc code,
* mark them now as a best-effort approach.
+ * With hardware tag-based KASAN, marking is skipped for
+ * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
if (!(flags & VM_ALLOC))
- area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size,
+ false);

return area;
}
@@ -2849,6 +2856,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
struct page *page;
int i;

+ /*
+ * Skip page_alloc unpoisoning for pages backing VM_ALLOC mappings,
+ * see __kasan_unpoison_vmalloc. Only effective in HW_TAGS mode.
+ */
+ gfp |= __GFP_SKIP_KASAN_UNPOISON;
+
/*
* For order-0 pages we make use of bulk allocator, if
* the page array is partly or not at all populated due
@@ -3084,7 +3097,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
* Mark the pages for VM_ALLOC mappings as accessible after they were
* mapped in.
*/
- addr = kasan_unpoison_vmalloc(addr, real_size);
+ addr = kasan_unpoison_vmalloc(addr, real_size, true);

/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
@@ -3784,10 +3797,12 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
* Mark allocated areas as accessible.
* As the pages are mapped outside of vmalloc code,
* mark them now as a best-effort approach.
+ * With hardware tag-based KASAN, marking is skipped for
+ * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
for (area = 0; area < nr_vms; area++)
vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
- vms[area]->size);
+ vms[area]->size, false);

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:08:19 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Allow disabling vmalloc() tagging for HW_TAGS KASAN via a kasan.vmalloc
command line switch.

This is a fail-safe switch intended for production systems that enable
HW_TAGS KASAN. If vmalloc() tagging ends up having an issue that was
not detected during testing but manifests in production, kasan.vmalloc
allows turning vmalloc() tagging off while leaving page_alloc/slab
tagging on.
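
For example (an illustrative command line, not taken from this patch),
a production system could boot with:

	kasan=on kasan.vmalloc=off

to keep HW_TAGS KASAN active for page_alloc/slab while turning
vmalloc() tagging off.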

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kasan/hw_tags.c | 46 +++++++++++++++++++++++++++++++++++++++++++++-
mm/kasan/kasan.h | 6 ++++++
2 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index fd3a93dfca42..2da9ad051cdd 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -32,6 +32,12 @@ enum kasan_arg_mode {
KASAN_ARG_MODE_ASYMM,
};

+enum kasan_arg_vmalloc {
+ KASAN_ARG_VMALLOC_DEFAULT,
+ KASAN_ARG_VMALLOC_OFF,
+ KASAN_ARG_VMALLOC_ON,
+};
+
enum kasan_arg_stacktrace {
KASAN_ARG_STACKTRACE_DEFAULT,
KASAN_ARG_STACKTRACE_OFF,
@@ -40,6 +46,7 @@ enum kasan_arg_stacktrace {

static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
+static enum kasan_arg_vmalloc kasan_arg_vmalloc __ro_after_init;
static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init;

/* Whether KASAN is enabled at all. */
@@ -50,6 +57,9 @@ EXPORT_SYMBOL(kasan_flag_enabled);
enum kasan_mode kasan_mode __ro_after_init;
EXPORT_SYMBOL_GPL(kasan_mode);

+/* Whether to enable vmalloc tagging. */
+DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
+
/* Whether to collect alloc/free stack traces. */
DEFINE_STATIC_KEY_FALSE(kasan_flag_stacktrace);

@@ -89,6 +99,23 @@ static int __init early_kasan_mode(char *arg)
}
early_param("kasan.mode", early_kasan_mode);

+/* kasan.vmalloc=off/on */
+static int __init early_kasan_flag_vmalloc(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_vmalloc = KASAN_ARG_VMALLOC_OFF;
+ else if (!strcmp(arg, "on"))
+ kasan_arg_vmalloc = KASAN_ARG_VMALLOC_ON;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
+
/* kasan.stacktrace=off/on */
static int __init early_kasan_flag_stacktrace(char *arg)
{
@@ -174,6 +201,19 @@ void __init kasan_init_hw_tags(void)
break;
}

+ switch (kasan_arg_vmalloc) {
+ case KASAN_ARG_VMALLOC_DEFAULT:
+ /* Default to enabling vmalloc tagging. */
+ static_branch_enable(&kasan_flag_vmalloc);
+ break;
+ case KASAN_ARG_VMALLOC_OFF:
+ /* Do nothing, kasan_flag_vmalloc keeps its default value. */
+ break;
+ case KASAN_ARG_VMALLOC_ON:
+ static_branch_enable(&kasan_flag_vmalloc);
+ break;
+ }
+
switch (kasan_arg_stacktrace) {
case KASAN_ARG_STACKTRACE_DEFAULT:
/* Default to enabling stack trace collection. */
@@ -187,8 +227,9 @@ void __init kasan_init_hw_tags(void)
break;
}

- pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, stacktrace=%s)\n",
+ pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
kasan_mode_info(),
+ kasan_vmalloc_enabled() ? "on" : "off",
kasan_stack_collection_enabled() ? "on" : "off");
}

@@ -221,6 +262,9 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
u8 tag;
unsigned long redzone_start, redzone_size;

+ if (!kasan_vmalloc_enabled())
+ return (void *)start;
+
if (!is_vmalloc_or_module_addr(start))
return (void *)start;

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 0827d74d0d87..b58a4547ec5a 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,6 +12,7 @@
#include <linux/static_key.h>
#include "../slab.h"

+DECLARE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace);

enum kasan_mode {
@@ -22,6 +23,11 @@ enum kasan_mode {

extern enum kasan_mode kasan_mode __ro_after_init;

+static inline bool kasan_vmalloc_enabled(void)
+{
+ return static_branch_likely(&kasan_flag_vmalloc);
+}
+
static inline bool kasan_stack_collection_enabled(void)
{
return static_branch_unlikely(&kasan_flag_stacktrace);
--
2.25.1

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:08:23 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

vmalloc tagging support for HW_TAGS KASAN is now complete.

Allow enabling CONFIG_KASAN_VMALLOC.

Also adjust CONFIG_KASAN_VMALLOC description:

- Mention HW_TAGS support.
- Remove unneeded internal details: they have no place in Kconfig
description and are already explained in the documentation.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
arch/arm64/Kconfig | 3 +--
lib/Kconfig.kasan | 20 ++++++++++----------
2 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c05d7a06276f..5981e5460c51 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -205,8 +205,7 @@ config ARM64
select IOMMU_DMA if IOMMU_SUPPORT
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
- select KASAN_VMALLOC if KASAN_GENERIC
- select KASAN_VMALLOC if KASAN_SW_TAGS
+ select KASAN_VMALLOC
select MODULES_USE_ELF_RELA
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 3f144a87f8a3..7834c35a7964 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -178,17 +178,17 @@ config KASAN_TAGS_IDENTIFY
memory consumption.

config KASAN_VMALLOC
- bool "Back mappings in vmalloc space with real shadow memory"
- depends on (KASAN_GENERIC || KASAN_SW_TAGS) && HAVE_ARCH_KASAN_VMALLOC
+ bool "Check accesses to vmalloc allocations"
+ depends on HAVE_ARCH_KASAN_VMALLOC
help
- By default, the shadow region for vmalloc space is the read-only
- zero page. This means that KASAN cannot detect errors involving
- vmalloc space.
-
- Enabling this option will hook in to vmap/vmalloc and back those
- mappings with real shadow memory allocated on demand. This allows
- for KASAN to detect more sorts of errors (and to support vmapped
- stacks), but at the cost of higher memory usage.
+ This mode makes KASAN check accesses to vmalloc allocations for
+ validity.
+
+ With software KASAN modes, checking is done for all types of vmalloc
+ allocations. Enabling this option leads to higher memory usage.
+
+ With hardware tag-based KASAN, only VM_ALLOC mappings are checked.
+ There is no additional memory usage.

config KASAN_KUNIT_TEST
tristate "KUnit-compatible tests of KASAN bug detection capabilities" if !KUNIT_ALL_TESTS
--
2.25.1

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:08:34 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Update KASAN documentation:

- Bump Clang version requirement for HW_TAGS as ARM64_MTE depends on
AS_HAS_LSE_ATOMICS as of commit 2decad92f4731 ("arm64: mte: Ensure
TIF_MTE_ASYNC_FAULT is set atomically"), which requires Clang 12.
- Add description of the new kasan.vmalloc command line flag.
- Mention that SW_TAGS and HW_TAGS modes now support vmalloc tagging.
- Explicitly say that the "Shadow memory" section is only applicable
to software KASAN modes.
- Mention that shadow-based KASAN_VMALLOC is supported on arm64.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
Documentation/dev-tools/kasan.rst | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 8089c559d339..7614a1fc30fa 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -30,7 +30,7 @@ Software tag-based KASAN mode is only supported in Clang.

The hardware KASAN mode (#3) relies on hardware to perform the checks but
still requires a compiler version that supports memory tagging instructions.
-This mode is supported in GCC 10+ and Clang 11+.
+This mode is supported in GCC 10+ and Clang 12+.

Both software KASAN modes work with SLUB and SLAB memory allocators,
while the hardware tag-based KASAN currently only supports SLUB.
@@ -206,6 +206,9 @@ additional boot parameters that allow disabling KASAN or controlling features:
Asymmetric mode: a bad access is detected synchronously on reads and
asynchronously on writes.

+- ``kasan.vmalloc=off`` or ``=on`` disables or enables tagging of vmalloc
+ allocations (default: ``on``).
+
- ``kasan.stacktrace=off`` or ``=on`` disables or enables alloc and free stack
traces collection (default: ``on``).

@@ -279,8 +282,8 @@ Software tag-based KASAN uses 0xFF as a match-all pointer tag (accesses through
pointers with the 0xFF pointer tag are not checked). The value 0xFE is currently
reserved to tag freed memory regions.

-Software tag-based KASAN currently only supports tagging of slab and page_alloc
-memory.
+Software tag-based KASAN currently only supports tagging of slab, page_alloc,
+and vmalloc memory.

Hardware tag-based KASAN
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -303,8 +306,8 @@ Hardware tag-based KASAN uses 0xFF as a match-all pointer tag (accesses through
pointers with the 0xFF pointer tag are not checked). The value 0xFE is currently
reserved to tag freed memory regions.

-Hardware tag-based KASAN currently only supports tagging of slab and page_alloc
-memory.
+Hardware tag-based KASAN currently only supports tagging of slab, page_alloc,
+and VM_ALLOC-based vmalloc memory.

If the hardware does not support MTE (pre ARMv8.5), hardware tag-based KASAN
will not be enabled. In this case, all KASAN boot parameters are ignored.
@@ -319,6 +322,8 @@ checking gets disabled.
Shadow memory
-------------

+The contents of this section are only applicable to software KASAN modes.
+
The kernel maps memory in several different parts of the address space.
The range of kernel virtual addresses is large: there is not enough real
memory to support a real shadow region for every address that could be
@@ -349,7 +354,7 @@ CONFIG_KASAN_VMALLOC

With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
cost of greater memory usage. Currently, this is supported on x86,
-riscv, s390, and powerpc.
+arm64, riscv, s390, and powerpc.

This works by hooking into vmalloc and vmap and dynamically
allocating real shadow memory to back the mappings.
--
2.25.1

andrey.k...@linux.dev

unread,
Nov 30, 2021, 5:08:40 PM11/30/21
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Update the existing vmalloc_oob() test to account for the specifics
of the tag-based modes. Also add a few new checks and comments.

Add new vmalloc-related tests:

- vmalloc_helpers_tags() to check that exported vmalloc helpers can
handle tagged pointers.
- vmap_tags() to check that SW_TAGS mode properly tags vmap() mappings.
- vm_map_ram_tags() to check that SW_TAGS mode properly tags
vm_map_ram() mappings.
- vmalloc_percpu() to check that SW_TAGS mode tags regions allocated
for __alloc_percpu(). The tagging of per-cpu mappings is best-effort;
proper tagging is tracked in [1].

[1] https://bugzilla.kernel.org/show_bug.cgi?id=215019

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
lib/test_kasan.c | 181 +++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 175 insertions(+), 6 deletions(-)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 0643573f8686..44875356278a 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -1025,21 +1025,174 @@ static void kmalloc_double_kzfree(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, kfree_sensitive(ptr));
}

+static void vmalloc_helpers_tags(struct kunit *test)
+{
+ void *ptr;
+
+ /* This test is intended for tag-based modes. */
+ KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);
+
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
+
+ ptr = vmalloc(PAGE_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
+ /* Check that the returned pointer is tagged. */
+ KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure exported vmalloc helpers handle tagged pointers. */
+ KUNIT_ASSERT_TRUE(test, is_vmalloc_addr(ptr));
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vmalloc_to_page(ptr));
+
+ vfree(ptr);
+}
+
static void vmalloc_oob(struct kunit *test)
{
- void *area;
+ char *v_ptr, *p_ptr;
+ struct page *page;
+ size_t size = PAGE_SIZE / 2 - KASAN_GRANULE_SIZE - 5;

KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);

+ v_ptr = vmalloc(size);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
+
/*
- * We have to be careful not to hit the guard page.
+ * We have to be careful not to hit the guard page in vmalloc tests.
* The MMU will catch that and crash us.
*/
- area = vmalloc(3000);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, area);

- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)area)[3100]);
- vfree(area);
+ /* Make sure in-bounds accesses are valid. */
+ v_ptr[0] = 0;
+ v_ptr[size - 1] = 0;
+
+ /*
+ * An unaligned access past the requested vmalloc size.
+ * Only generic KASAN can precisely detect these.
+ */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
+
+ /* An aligned access into the first out-of-bounds granule. */
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
+
+ /* Check that in-bounds accesses to the physical page are valid. */
+ page = vmalloc_to_page(v_ptr);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page);
+ p_ptr = page_address(page);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
+ p_ptr[0] = 0;
+
+ vfree(v_ptr);
+
+ /*
+ * We can't check for use-after-unmap bugs in this nor in the following
+ * vmalloc tests, as the page might be fully unmapped and accessing it
+ * will crash the kernel.
+ */
+}
+
+static void vmap_tags(struct kunit *test)
+{
+ char *p_ptr, *v_ptr;
+ struct page *p_page, *v_page;
+ size_t order = 1;
+
+ /*
+ * This test is specifically crafted for the software tag-based mode,
+ * the only tag-based mode that poisons vmap mappings.
+ */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
+
+ p_page = alloc_pages(GFP_KERNEL, order);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_page);
+ p_ptr = page_address(p_page);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
+
+ v_ptr = vmap(&p_page, 1 << order, VM_MAP, PAGE_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
+
+ /*
+ * We can't check for out-of-bounds bugs in this nor in the following
+ * vmalloc tests, as allocations have page granularity and accessing
+ * the guard page will crash the kernel.
+ */
+
+ KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure that in-bounds accesses through both pointers work. */
+ *p_ptr = 0;
+ *v_ptr = 0;
+
+ /* Make sure vmalloc_to_page() correctly recovers the page pointer. */
+ v_page = vmalloc_to_page(v_ptr);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_page);
+ KUNIT_EXPECT_PTR_EQ(test, p_page, v_page);
+
+ vunmap(v_ptr);
+ free_pages((unsigned long)p_ptr, order);
+}
+
+static void vm_map_ram_tags(struct kunit *test)
+{
+ char *p_ptr, *v_ptr;
+ struct page *page;
+ size_t order = 1;
+
+ /*
+ * This test is specifically crafted for the software tag-based mode,
+ * the only tag-based mode that poisons vm_map_ram mappings.
+ */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+
+ page = alloc_pages(GFP_KERNEL, order);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page);
+ p_ptr = page_address(page);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
+
+ v_ptr = vm_map_ram(&page, 1 << order, -1);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
+
+ KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure that in-bounds accesses through both pointers work. */
+ *p_ptr = 0;
+ *v_ptr = 0;
+
+ vm_unmap_ram(v_ptr, 1 << order);
+ free_pages((unsigned long)p_ptr, order);
+}
+
+static void vmalloc_percpu(struct kunit *test)
+{
+ char __percpu *ptr;
+ int cpu;
+
+ /*
+ * This test is specifically crafted for the software tag-based mode,
+ * the only tag-based mode that poisons percpu mappings.
+ */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+
+ ptr = __alloc_percpu(PAGE_SIZE, PAGE_SIZE);
+
+ for_each_possible_cpu(cpu) {
+ char *c_ptr = per_cpu_ptr(ptr, cpu);
+
+ KUNIT_EXPECT_GE(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure that in-bounds accesses don't crash the kernel. */
+ *c_ptr = 0;
+ }
+
+ free_percpu(ptr);
}

/*
@@ -1073,6 +1226,18 @@ static void match_all_not_assigned(struct kunit *test)
KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
free_pages((unsigned long)ptr, order);
}
+
+ if (!IS_ENABLED(CONFIG_KASAN_VMALLOC))
+ return;
+
+ for (i = 0; i < 256; i++) {
+ size = (get_random_int() % 1024) + 1;
+ ptr = vmalloc(size);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
+ vfree(ptr);
+ }
}

/* Check that 0xff works as a match-all pointer tag for tag-based modes. */
@@ -1176,7 +1341,11 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kasan_bitops_generic),
KUNIT_CASE(kasan_bitops_tags),
KUNIT_CASE(kmalloc_double_kzfree),
+ KUNIT_CASE(vmalloc_helpers_tags),
KUNIT_CASE(vmalloc_oob),
+ KUNIT_CASE(vmap_tags),
+ KUNIT_CASE(vm_map_ram_tags),
+ KUNIT_CASE(vmalloc_percpu),
KUNIT_CASE(match_all_not_assigned),
KUNIT_CASE(match_all_ptr_tag),
KUNIT_CASE(match_all_mem_tag),
--
2.25.1

Marco Elver

unread,
Dec 1, 2021, 6:35:30 AM12/1/21
to andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 11:08PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> vmalloc tagging support for HW_TAGS KASAN is now complete.
>
> Allow enabling CONFIG_KASAN_VMALLOC.

This actually doesn't "allow" enabling it, it unconditionally enables it
and a user can't disable CONFIG_KASAN_VMALLOC.

I found some background in acc3042d62cb9 why arm64 wants this.

> Also adjust CONFIG_KASAN_VMALLOC description:
>
> - Mention HW_TAGS support.
> - Remove unneeded internal details: they have no place in Kconfig
> description and are already explained in the documentation.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> ---
> arch/arm64/Kconfig | 3 +--
> lib/Kconfig.kasan | 20 ++++++++++----------
> 2 files changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index c05d7a06276f..5981e5460c51 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -205,8 +205,7 @@ config ARM64
> select IOMMU_DMA if IOMMU_SUPPORT
> select IRQ_DOMAIN
> select IRQ_FORCED_THREADING
> - select KASAN_VMALLOC if KASAN_GENERIC
> - select KASAN_VMALLOC if KASAN_SW_TAGS
> + select KASAN_VMALLOC

This produces the following warning when making an arm64 defconfig:

| WARNING: unmet direct dependencies detected for KASAN_VMALLOC
| Depends on [n]: KASAN [=n] && HAVE_ARCH_KASAN_VMALLOC [=y]
| Selected by [y]:
| - ARM64 [=y]
|
| WARNING: unmet direct dependencies detected for KASAN_VMALLOC
| Depends on [n]: KASAN [=n] && HAVE_ARCH_KASAN_VMALLOC [=y]
| Selected by [y]:
| - ARM64 [=y]

To unconditionally select KASAN_VMALLOC, it should probably be

select KASAN_VMALLOC if KASAN

Marco Elver

unread,
Dec 1, 2021, 9:10:05 AM12/1/21
to andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 10:39PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> Simplify the code around calling kasan_poison_pages() in
> free_pages_prepare().
>
> Reordering kasan_poison_pages() and kernel_init_free_pages() is OK,
> since kernel_init_free_pages() can handle poisoned memory.

Why did they have to be reordered?

> This patch does no functional changes besides reordering the calls.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> ---
> mm/page_alloc.c | 18 +++++-------------
> 1 file changed, 5 insertions(+), 13 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3f3ea41f8c64..0673db27dd12 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1289,6 +1289,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
> {
> int bad = 0;
> bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);

skip_kasan_poison is only used once now, so you could remove the
variable -- unless later code will use it in more than once place again.

> + bool init = want_init_on_free();
>
> VM_BUG_ON_PAGE(PageTail(page), page);
>
> @@ -1359,19 +1360,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
> * With hardware tag-based KASAN, memory tags must be set before the
> * page becomes unavailable via debug_pagealloc or arch_free_page.
> */
> - if (kasan_has_integrated_init()) {
> - bool init = want_init_on_free();
> -
> - if (!skip_kasan_poison)
> - kasan_poison_pages(page, order, init);
> - } else {
> - bool init = want_init_on_free();
> -
> - if (init)
> - kernel_init_free_pages(page, 1 << order);
> - if (!skip_kasan_poison)
> - kasan_poison_pages(page, order, init);
> - }
> + if (!skip_kasan_poison)
> + kasan_poison_pages(page, order, init);
> + if (init && !kasan_has_integrated_init())
> + kernel_init_free_pages(page, 1 << order);
>
> /*
> * arch_free_page() can make the page's contents inaccessible. s390
> --
> 2.25.1

Marco Elver

unread,
Dec 2, 2021, 9:17:11 AM12/2/21
to andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 11:07PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> In preparation for adding vmalloc support to SW/HW_TAGS KASAN,
> reset pointer tags in functions that use pointer values in
> range checks.
>
> vread() is a special case here. Resetting the pointer tag in its
> prologue could technically lead to missing bad accesses to virtual
> mappings in its implementation. However, vread() doesn't access the
> virtual mappings cirectly. Instead, it recovers the physical address

s/cirectly/directly/

But this paragraph is a little confusing, because first you point out
that vread() might miss bad accesses, but then say that it does checked
accesses. I think to avoid confusing the reader, maybe just say that
vread() is checked, but hypothetically, should its implementation change
to directly access addr, invalid accesses might be missed.

Did I get this right? Or am I still confused?

Marco Elver

unread,
Dec 2, 2021, 9:28:05 AM12/2/21
to andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 11:07PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> Once tag-based KASAN modes start tagging vmalloc() allocations,
> kernel stacks will start getting tagged if CONFIG_VMAP_STACK is enabled.
>
> Reset the tag of kernel stack pointers after allocation.
>
> For SW_TAGS KASAN, when CONFIG_KASAN_STACK is enabled, the
> instrumentation can't handle the sp register being tagged.
>
> For HW_TAGS KASAN, there's no instrumentation-related issues. However,
> the impact of having a tagged SP pointer needs to be properly evaluated,
> so keep it non-tagged for now.

Don't VMAP_STACK stacks have guards? So some out-of-bounds would already
be caught.

What would be the hypothetical benefit of using a tagged stack pointer?
Perhaps wildly out-of-bounds accesses derived from stack pointers?

I agree that unless we understand the impact of using a tagged stack
pointers, it should remain non-tagged for now.

> Note, that the memory for the stack allocation still gets tagged to
> catch vmalloc-into-stack out-of-bounds accesses.

Will the fact it's tagged cause issues for other code? I think kmemleak
already untags all addresses it scans for pointers. Anything else?

Alexander Potapenko

unread,
Dec 2, 2021, 10:25:33 AM12/2/21
to andrey.k...@linux.dev, Marco Elver, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 10:40 PM <andrey.k...@linux.dev> wrote:
>
> From: Andrey Konovalov <andre...@google.com>
>
> Currently, kernel_init_free_pages() serves two purposes: either only
Nit: "it either"

> zeroes memory or zeroes both memory and memory tags via a different
> code path. As this function has only two callers, each using only one
> code path, this behaviour is confusing.
>
> This patch pulls the code that zeroes both memory and tags out of
> kernel_init_free_pages().
>
> As a result of this change, the code in free_pages_prepare() starts to
> look complicated, but this is improved in the few following patches.
> Those improvements are not integrated into this patch to make diffs
> easier to read.
>
> This patch does no functional changes.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
Reviewed-by: Alexander Potapenko <gli...@google.com>
> ---
> mm/page_alloc.c | 24 +++++++++++++-----------
> 1 file changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c99566a3b67e..3589333b5b77 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1269,16 +1269,10 @@ static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
> PageSkipKASanPoison(page);
> }
>
> -static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
> +static void kernel_init_free_pages(struct page *page, int numpages)
> {
> int i;
>
> - if (zero_tags) {
> - for (i = 0; i < numpages; i++)
> - tag_clear_highpage(page + i);
> - return;
> - }
> -
> /* s390's use of memset() could override KASAN redzones. */
> kasan_disable_current();
> for (i = 0; i < numpages; i++) {
> @@ -1372,7 +1366,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
> bool init = want_init_on_free();
>
> if (init)
> - kernel_init_free_pages(page, 1 << order, false);
> + kernel_init_free_pages(page, 1 << order);
> if (!skip_kasan_poison)
> kasan_poison_pages(page, order, init);
> }
> @@ -2415,9 +2409,17 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
> bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
>
> kasan_unpoison_pages(page, order, init);
> - if (init)
> - kernel_init_free_pages(page, 1 << order,
> - gfp_flags & __GFP_ZEROTAGS);
> +
> + if (init) {
> + if (gfp_flags & __GFP_ZEROTAGS) {
> + int i;
> +
> + for (i = 0; i < 1 << order; i++)
> + tag_clear_highpage(page + i);
> + } else {
> + kernel_init_free_pages(page, 1 << order);
> + }
> + }
> }
>
> set_page_owner(page, order, gfp_flags);
> --
> 2.25.1
>
> --



--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

Alexander Potapenko

unread,
Dec 2, 2021, 10:32:50 AM12/2/21
to andrey.k...@linux.dev, Marco Elver, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 10:40 PM <andrey.k...@linux.dev> wrote:
>
> From: Andrey Konovalov <andre...@google.com>
>
> Currently, the code responsible for initializing and poisoning memory
> in free_pages_prepare() is scattered across two locations:
> kasan_free_pages() for HW_TAGS KASAN and free_pages_prepare() itself.
> This is confusing.
>
> This and a few following patches combine the code from these two
> locations. Along the way, these patches also simplify the performed
> checks to make them easier to follow.
>
> This patch replaces the only caller of kasan_free_pages() with its
> implementation.
>
> As kasan_has_integrated_init() is only true when CONFIG_KASAN_HW_TAGS
> is enabled, moving the code does no functional changes.
>
> This patch is not useful by itself but makes the simplifications in
> the following patches easier to follow.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
Reviewed-by: Alexander Potapenko <gli...@google.com>

> ---
> include/linux/kasan.h | 8 --------
> mm/kasan/common.c | 2 +-
> mm/kasan/hw_tags.c | 11 -----------
> mm/page_alloc.c | 6 ++++--
> 4 files changed, 5 insertions(+), 22 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index d8783b682669..89a43d8ae4fe 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -95,7 +95,6 @@ static inline bool kasan_hw_tags_enabled(void)
> }
>
> void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
> -void kasan_free_pages(struct page *page, unsigned int order);
>
> #else /* CONFIG_KASAN_HW_TAGS */
>
> @@ -116,13 +115,6 @@ static __always_inline void kasan_alloc_pages(struct page *page,
> BUILD_BUG();
> }
>
> -static __always_inline void kasan_free_pages(struct page *page,
> - unsigned int order)
> -{
> - /* Only available for integrated init. */
> - BUILD_BUG();
> -}
> -
> #endif /* CONFIG_KASAN_HW_TAGS */
>
> static inline bool kasan_has_integrated_init(void)
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 8428da2aaf17..66078cc1b4f0 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -387,7 +387,7 @@ static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
> }
>
> /*
> - * The object will be poisoned by kasan_free_pages() or
> + * The object will be poisoned by kasan_poison_pages() or
> * kasan_slab_free_mempool().
> */
>
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 7355cb534e4f..0b8225add2e4 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -213,17 +213,6 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
> }
> }
>
> -void kasan_free_pages(struct page *page, unsigned int order)
> -{
> - /*
> - * This condition should match the one in free_pages_prepare() in
> - * page_alloc.c.
> - */
> - bool init = want_init_on_free();
> -
> - kasan_poison_pages(page, order, init);
> -}
> -
> #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
>
> void kasan_enable_tagging_sync(void)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589333b5b77..3f3ea41f8c64 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1353,15 +1353,17 @@ static __always_inline bool free_pages_prepare(struct page *page,
>
> /*
> * As memory initialization might be integrated into KASAN,
> - * kasan_free_pages and kernel_init_free_pages must be
> + * KASAN poisoning and memory initialization code must be
> * kept together to avoid discrepancies in behavior.
> *
> * With hardware tag-based KASAN, memory tags must be set before the
> * page becomes unavailable via debug_pagealloc or arch_free_page.
> */
> if (kasan_has_integrated_init()) {
> + bool init = want_init_on_free();
> +
> if (!skip_kasan_poison)
> - kasan_free_pages(page, order);
> + kasan_poison_pages(page, order, init);
> } else {
> bool init = want_init_on_free();
>
> --
> 2.25.1
>

Alexander Potapenko

Dec 2, 2021, 10:40:58 AM
to andrey.k...@linux.dev, Marco Elver, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 10:41 PM <andrey.k...@linux.dev> wrote:
>
> From: Andrey Konovalov <andre...@google.com>
>
> __GFP_ZEROTAGS should only be effective if memory is being zeroed.
> Currently, hardware tag-based KASAN violates this requirement.
>
> Fix by including an initialization check along with checking for
> __GFP_ZEROTAGS.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
Reviewed-by: Alexander Potapenko <gli...@google.com>

> ---
> mm/kasan/hw_tags.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 0b8225add2e4..c643740b8599 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -199,11 +199,12 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
> * page_alloc.c.
> */
> bool init = !want_init_on_free() && want_init_on_alloc(flags);
> + bool init_tags = init && (flags & __GFP_ZEROTAGS);
>
> if (flags & __GFP_SKIP_KASAN_POISON)
> SetPageSkipKASanPoison(page);
>
> - if (flags & __GFP_ZEROTAGS) {
> + if (init_tags) {
> int i;
>
> for (i = 0; i != 1 << order; ++i)
> --
> 2.25.1

Alexander Potapenko

Dec 2, 2021, 11:14:12 AM
to andrey.k...@linux.dev, Marco Elver, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 10:41 PM <andrey.k...@linux.dev> wrote:
>
> From: Andrey Konovalov <andre...@google.com>
>
> This patch separates code for zeroing memory from the code clearing tags
> in post_alloc_hook().
>
> This patch is not useful by itself but makes the simplifications in
> the following patches easier to follow.
>
> This patch does no functional changes.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> ---
> mm/page_alloc.c | 18 ++++++++++--------
> 1 file changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2ada09a58e4b..0561cdafce36 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2406,19 +2406,21 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
> kasan_alloc_pages(page, order, gfp_flags);
> } else {
> bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
> + bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
>
> kasan_unpoison_pages(page, order, init);
>
> - if (init) {
> - if (gfp_flags & __GFP_ZEROTAGS) {
> - int i;
> + if (init_tags) {
> + int i;
>
> - for (i = 0; i < 1 << order; i++)
> - tag_clear_highpage(page + i);
> - } else {
> - kernel_init_free_pages(page, 1 << order);
> - }
> + for (i = 0; i < 1 << order; i++)
> + tag_clear_highpage(page + i);
> +
> + init = false;

I find this a bit twisted and prone to breakages.
Maybe just check for (init && !init_tags) below?
> }
> +
> + if (init)
> + kernel_init_free_pages(page, 1 << order);
> }
>
> set_page_owner(page, order, gfp_flags);
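
For illustration, a minimal sketch of the suggested variant, using only the local variables and helpers from the hunk quoted above (this is not the submitted patch, which keeps resetting init, as discussed later in the thread):

	if (init_tags) {
		int i;

		/* tag_clear_highpage() zeroes both the memory and its tags. */
		for (i = 0; i < 1 << order; i++)
			tag_clear_highpage(page + i);
	}

	/* Only zero the memory here if the loop above did not already do it. */
	if (init && !init_tags)
		kernel_init_free_pages(page, 1 << order);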

Marco Elver

Dec 3, 2021, 7:09:22 AM
to andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 11:08PM +0100, andrey.k...@linux.dev wrote:
[...]
> enum kasan_arg_stacktrace {
> KASAN_ARG_STACKTRACE_DEFAULT,
> KASAN_ARG_STACKTRACE_OFF,
> @@ -40,6 +46,7 @@ enum kasan_arg_stacktrace {
>
> static enum kasan_arg kasan_arg __ro_after_init;
> static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
> +static enum kasan_arg_vmalloc kasan_arg_vmalloc __ro_after_init;
> static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init;

It just occurred to me that all of these (except kasan_arg_mode) are
only used by __init functions, so they could actually be marked
__initdata instead of __ro_after_init to free up some bytes after init.

Not sure if you think it's worth it, I leave it to you.
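
As a sketch of what the suggested annotation would look like on the declarations quoted above (only variables that are read exclusively from __init code qualify; which of the flags actually do is clarified further down the thread):

/*
 * Sketch only: flags parsed from the command line and consumed during
 * boot can be dropped together with the rest of the init data.
 */
static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
static enum kasan_arg_stacktrace kasan_arg_stacktrace __initdata;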

[...]
> + switch (kasan_arg_vmalloc) {
> + case KASAN_ARG_VMALLOC_DEFAULT:
> + /* Default to enabling vmalloc tagging. */
> + static_branch_enable(&kasan_flag_vmalloc);
> + break;
> + case KASAN_ARG_VMALLOC_OFF:
> + /* Do nothing, kasan_flag_vmalloc keeps its default value. */
> + break;
> + case KASAN_ARG_VMALLOC_ON:
> + static_branch_enable(&kasan_flag_vmalloc);
> + break;
> + }

The KASAN_ARG_STACKTRACE_DEFAULT and KASAN_ARG_VMALLOC_ON cases can be
combined.
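
For illustration, a sketch of the combined cases, reading the suggestion as referring to KASAN_ARG_VMALLOC_DEFAULT and KASAN_ARG_VMALLOC_ON, which take the same action in the hunk quoted above:

	switch (kasan_arg_vmalloc) {
	case KASAN_ARG_VMALLOC_DEFAULT:
	case KASAN_ARG_VMALLOC_ON:
		/* Vmalloc tagging is enabled by default and when requested. */
		static_branch_enable(&kasan_flag_vmalloc);
		break;
	case KASAN_ARG_VMALLOC_OFF:
		/* Do nothing, kasan_flag_vmalloc keeps its default value. */
		break;
	}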

Marco Elver

Dec 3, 2021, 7:38:04 AM
to andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 11:07PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> vmalloc support for SW_TAGS KASAN is now complete.
>
> Allow enabling CONFIG_KASAN_VMALLOC.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>

This change is small enough that I would have expected the
lib/Kconfig.kasan change to appear in "kasan, vmalloc: add vmalloc
support to SW_TAGS" because that sounds like it would fully unlock
core KASAN support.

However, the arm64 change could be in its own patch, since there may be
conflicts with arm64 tree or during backports, and only dropping that
may be ok.

I've been backporting too many patches lately, so I feel that would help.

Marco Elver

Dec 3, 2021, 7:40:27 AM
to andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 11:08PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> vmalloc tagging support for HW_TAGS KASAN is now complete.
>
> Allow enabling CONFIG_KASAN_VMALLOC.
>
> Also adjust CONFIG_KASAN_VMALLOC description:
>
> - Mention HW_TAGS support.
> - Remove unneeded internal details: they have no place in Kconfig
> description and are already explained in the documentation.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> ---
> arch/arm64/Kconfig | 3 +--
> lib/Kconfig.kasan | 20 ++++++++++----------

Like in the SW_TAGS case, consider moving the lib/Kconfig.kasan change
to the final "kasan, vmalloc: add vmalloc support to HW_TAGS" and only
leave the arm64 in its own patch.

Marco Elver

Dec 3, 2021, 7:41:26 AM
to andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 11:08PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> This patch adds vmalloc tagging support to HW_TAGS KASAN.
>
> The key difference between HW_TAGS and the other two KASAN modes
> when it comes to vmalloc: HW_TAGS KASAN can only assign tags to
> physical memory. The other two modes have shadow memory covering
> every mapped virtual memory region.
>
> This patch makes __kasan_unpoison_vmalloc() for HW_TAGS KASAN:
>
> - Skip non-VM_ALLOC mappings as HW_TAGS KASAN can only tag a single
> mapping of normal physical memory; see the comment in the function.
> - Generate a random tag, tag the returned pointer and the allocation.
> - Propagate the tag into the page structs to allow accesses through
> page_address(vmalloc_to_page()).
>
> The rest of vmalloc-related KASAN hooks are not needed:
>
> - The shadow-related ones are fully skipped.
> - __kasan_poison_vmalloc() is kept as a no-op with a comment.
>
> Poisoning of physical pages that are backing vmalloc() allocations
> is skipped via __GFP_SKIP_KASAN_UNPOISON: __kasan_unpoison_vmalloc()
> poisons them instead.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Co-developed-by: Vincenzo Frascino <vincenzo...@arm.com>

This is missing a Signed-off-by from Vincenzo.

Marco Elver

Dec 3, 2021, 7:42:18 AM
to andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Tue, Nov 30, 2021 at 11:07PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> HW_TAGS KASAN relies on ARM Memory Tagging Extension (MTE). With MTE,
> a memory region must be mapped as MT_NORMAL_TAGGED to allow setting
> memory tags via MTE-specific instructions.
>
> This change adds proper protection bits to vmalloc() allocations.
> These allocations are always backed by page_alloc pages, so the tags
> will actually be getting set on the corresponding physical memory.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Co-developed-by: Vincenzo Frascino <vincenzo...@arm.com>

This is also missing Signed-off-by from Vincenzo.

Andrey Konovalov

Dec 6, 2021, 4:08:02 PM
to Marco Elver, andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov
On Wed, Dec 1, 2021 at 3:10 PM Marco Elver <el...@google.com> wrote:
>
> On Tue, Nov 30, 2021 at 10:39PM +0100, andrey.k...@linux.dev wrote:
> > From: Andrey Konovalov <andre...@google.com>
> >
> > Simplify the code around calling kasan_poison_pages() in
> > free_pages_prepare().
> >
> > Reording kasan_poison_pages() and kernel_init_free_pages() is OK,
> > since kernel_init_free_pages() can handle poisoned memory.
>
> Why did they have to be reordered?

It's for the next patch, I'll move the reordering there in v2.

> > This patch does no functional changes besides reordering the calls.
> >
> > Signed-off-by: Andrey Konovalov <andre...@google.com>
> > ---
> > mm/page_alloc.c | 18 +++++-------------
> > 1 file changed, 5 insertions(+), 13 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3f3ea41f8c64..0673db27dd12 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1289,6 +1289,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
> > {
> > int bad = 0;
> > bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
>
> skip_kasan_poison is only used once now, so you could remove the
> variable -- unless later code will use it in more than once place again.

Will do in v2.

Thanks!

Andrey Konovalov

Dec 6, 2021, 4:08:16 PM
to Marco Elver, andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov
On Thu, Dec 2, 2021 at 3:17 PM Marco Elver <el...@google.com> wrote:
>
> On Tue, Nov 30, 2021 at 11:07PM +0100, andrey.k...@linux.dev wrote:
> > From: Andrey Konovalov <andre...@google.com>
> >
> > In preparation for adding vmalloc support to SW/HW_TAGS KASAN,
> > reset pointer tags in functions that use pointer values in
> > range checks.
> >
> > vread() is a special case here. Resetting the pointer tag in its
> > prologue could technically lead to missing bad accesses to virtual
> > mappings in its implementation. However, vread() doesn't access the
> > virtual mappings cirectly. Instead, it recovers the physical address
>
> s/cirectly/directly/
>
> But this paragraph is a little confusing, because first you point out
> that vread() might miss bad accesses, but then say that it does checked
> accesses. I think to avoid confusing the reader, maybe just say that
> vread() is checked, but hypothetically, should its implementation change
> to directly access addr, invalid accesses might be missed.
>
> Did I get this right? Or am I still confused?

No, you got it right. Will reword in v2.

Thanks!

Andrey Konovalov

Dec 6, 2021, 4:09:11 PM
to Marco Elver, andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov
On Thu, Dec 2, 2021 at 3:28 PM Marco Elver <el...@google.com> wrote:
>
> On Tue, Nov 30, 2021 at 11:07PM +0100, andrey.k...@linux.dev wrote:
> > From: Andrey Konovalov <andre...@google.com>
> >
> > Once tag-based KASAN modes start tagging vmalloc() allocations,
> > kernel stacks will start getting tagged if CONFIG_VMAP_STACK is enabled.
> >
> > Reset the tag of kernel stack pointers after allocation.
> >
> > For SW_TAGS KASAN, when CONFIG_KASAN_STACK is enabled, the
> > instrumentation can't handle the sp register being tagged.
> >
> > For HW_TAGS KASAN, there's no instrumentation-related issues. However,
> > the impact of having a tagged SP pointer needs to be properly evaluated,
> > so keep it non-tagged for now.
>
> Don't VMAP_STACK stacks have guards? So some out-of-bounds would already
> be caught.

True, linear out-of-bounds accesses are already caught.

> What would be the hypothetical benefit of using a tagged stack pointer?
> Perhaps wildly out-of-bounds accesses derived from stack pointers?

Yes, that's the case that comes to mind.

> I agree that unless we understand the impact of using a tagged stack
> pointers, it should remain non-tagged for now.

Ack. I'll file a KASAN bug for this when the series is merged.

> > Note, that the memory for the stack allocation still gets tagged to
> > catch vmalloc-into-stack out-of-bounds accesses.
>
> Will the fact it's tagged cause issues for other code? I think kmemleak
> already untags all addresses it scans for pointers. Anything else?

Tagging stack memory shouldn't cause any stability issues like
conflicts with kmemleak. Tagging the memory but not the pointers is no
worse than leaving memory tags uninitialized/random with regard to
this kind of issue.

Thanks!

Andrey Konovalov

Dec 6, 2021, 4:09:50 PM
to Alexander Potapenko, andrey.k...@linux.dev, Marco Elver, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov
I did it this way deliberately. Check out the code after all the changes:

https://github.com/xairy/linux/blob/up-kasan-vmalloc-tags-v1/mm/page_alloc.c#L2447

It's possible to remove resetting the init variable by expanding the
if (init) check to list all conditions under which init is currently
reset, but that would essentially duplicate the checks. I think
resetting init is clearer.

Please let me know what you think.

Thanks!

Andrey Konovalov

Dec 6, 2021, 4:10:07 PM
to Marco Elver, andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov
On Fri, Dec 3, 2021 at 1:09 PM Marco Elver <el...@google.com> wrote:
>
> On Tue, Nov 30, 2021 at 11:08PM +0100, andrey.k...@linux.dev wrote:
> [...]
> > enum kasan_arg_stacktrace {
> > KASAN_ARG_STACKTRACE_DEFAULT,
> > KASAN_ARG_STACKTRACE_OFF,
> > @@ -40,6 +46,7 @@ enum kasan_arg_stacktrace {
> >
> > static enum kasan_arg kasan_arg __ro_after_init;
> > static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
> > +static enum kasan_arg_vmalloc kasan_arg_vmalloc __ro_after_init;
> > static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init;
>
> It just occurred to me that all of these (except kasan_arg_mode) are
> only used by __init functions, so they could actually be marked
> __initdata instead of __ro_after_init to free up some bytes after init.

*Except kasan_arg_mode and kasan_arg. Both are accessed by
kasan_init_hw_tags_cpu(), which is not __init to support hot-plugged
CPUs.

However, kasan_arg_stacktrace and kasan_arg_vmalloc can indeed be
marked as __initdata, will do in v2.

> [...]
> > + switch (kasan_arg_vmalloc) {
> > + case KASAN_ARG_VMALLOC_DEFAULT:
> > + /* Default to enabling vmalloc tagging. */
> > + static_branch_enable(&kasan_flag_vmalloc);
> > + break;
> > + case KASAN_ARG_VMALLOC_OFF:
> > + /* Do nothing, kasan_flag_vmalloc keeps its default value. */
> > + break;
> > + case KASAN_ARG_VMALLOC_ON:
> > + static_branch_enable(&kasan_flag_vmalloc);
> > + break;
> > + }
>
> The KASAN_ARG_STACKTRACE_DEFAULT and KASAN_ARG_VMALLOC_ON cases can be
> combined.

Andrey Konovalov

Dec 6, 2021, 4:10:21 PM
to Marco Elver, andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov
On Fri, Dec 3, 2021 at 1:38 PM Marco Elver <el...@google.com> wrote:
>
> On Tue, Nov 30, 2021 at 11:07PM +0100, andrey.k...@linux.dev wrote:
> > From: Andrey Konovalov <andre...@google.com>
> >
> > vmalloc support for SW_TAGS KASAN is now complete.
> >
> > Allow enabling CONFIG_KASAN_VMALLOC.
> >
> > Signed-off-by: Andrey Konovalov <andre...@google.com>
>
> This change is small enough that I would have expected the
> lib/Kconfig.kasan change to appear in "kasan, vmalloc: add vmalloc
> support to SW_TAGS" because that sounds like it would fully unlock
> core KASAN support.
>
> However, the arm64 change could be in its own patch, since there may be
> conflicts with arm64 tree or during backports, and only dropping that
> may be ok.
>
> I've been backporting too many patches lately, that I feel that would
> help.

Sounds good, will do in v2. Thanks!

Andrey Konovalov

Dec 6, 2021, 4:10:40 PM
to Marco Elver, andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov
On Wed, Dec 1, 2021 at 12:35 PM Marco Elver <el...@google.com> wrote:
>
> On Tue, Nov 30, 2021 at 11:08PM +0100, andrey.k...@linux.dev wrote:
> > From: Andrey Konovalov <andre...@google.com>
> >
> > vmalloc tagging support for HW_TAGS KASAN is now complete.
> >
> > Allow enabling CONFIG_KASAN_VMALLOC.
>
> This actually doesn't "allow" enabling it, it unconditionally enables it
> and a user can't disable CONFIG_KASAN_VMALLOC.
>
> I found some background in acc3042d62cb9 why arm64 wants this.

Indeed. Will adjust the description in v2.
Will fix in v2.

Thanks!

Andrey Konovalov

Dec 6, 2021, 4:10:51 PM
to Marco Elver, andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov

Andrey Konovalov

Dec 6, 2021, 4:12:33 PM
to Marco Elver, andrey.k...@linux.dev, Alexander Potapenko, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov, Vincenzo Frascino
I didn't add it myself as the patch is significantly modified from its
original version.

I'll ask Vincenzo to review when I send v2.

Thanks!

Andrey Konovalov

Dec 6, 2021, 4:12:40 PM
to Marco Elver, andrey.k...@linux.dev, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov
Same here. Thanks!

andrey.k...@linux.dev

Dec 6, 2021, 4:22:59 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Currently, should_skip_kasan_poison() has two definitions: one for when
CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, one for when it's not.
Instead of duplicating the checks, add a deferred_pages_enabled()
helper and use it in a single should_skip_kasan_poison() definition.

Also move should_skip_kasan_poison() closer to its caller and clarify
all conditions in the comment.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 55 +++++++++++++++++++++++++++++--------------------
1 file changed, 33 insertions(+), 22 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40..c99566a3b67e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -375,25 +375,9 @@ int page_group_by_mobility_disabled __read_mostly;
*/
static DEFINE_STATIC_KEY_TRUE(deferred_pages);

-/*
- * Calling kasan_poison_pages() only after deferred memory initialization
- * has completed. Poisoning pages during deferred memory init will greatly
- * lengthen the process and cause problem in large memory systems as the
- * deferred pages initialization is done with interrupt disabled.
- *
- * Assuming that there will be no reference to those newly initialized
- * pages before they are ever allocated, this should have no effect on
- * KASAN memory tracking as the poison will be properly inserted at page
- * allocation time. The only corner case is when pages are allocated by
- * on-demand allocation and then freed again before the deferred pages
- * initialization is done, but this is not likely to happen.
- */
-static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
+static inline bool deferred_pages_enabled(void)
{
- return static_branch_unlikely(&deferred_pages) ||
- (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
- PageSkipKASanPoison(page);
+ return static_branch_unlikely(&deferred_pages);
}

/* Returns true if the struct page for the pfn is uninitialised */
@@ -444,11 +428,9 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
return false;
}
#else
-static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
+static inline bool deferred_pages_enabled(void)
{
- return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
- PageSkipKASanPoison(page);
+ return false;
}

static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1258,6 +1240,35 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
return ret;
}

+/*
+ * Skip KASAN memory poisoning when either:
+ *
+ * 1. Deferred memory initialization has not yet completed,
+ * see the explanation below.
+ * 2. Skipping poisoning is requested via FPI_SKIP_KASAN_POISON,
+ * see the comment next to it.
+ * 3. Skipping poisoning is requested via __GFP_SKIP_KASAN_POISON,
+ * see the comment next to it.
+ *
+ * Poisoning pages during deferred memory init will greatly lengthen the
+ * process and cause problem in large memory systems as the deferred pages
+ * initialization is done with interrupt disabled.
+ *
+ * Assuming that there will be no reference to those newly initialized
+ * pages before they are ever allocated, this should have no effect on
+ * KASAN memory tracking as the poison will be properly inserted at page
+ * allocation time. The only corner case is when pages are allocated by
+ * on-demand allocation and then freed again before the deferred pages
+ * initialization is done, but this is not likely to happen.
+ */
+static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
+{
+ return deferred_pages_enabled() ||
+ (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
+ PageSkipKASanPoison(page);
+}
+
static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
{
int i;
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:22:59 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Hi,

This patchset adds vmalloc tagging support for SW_TAGS and HW_TAGS
KASAN modes.

The tree with patches is available here:

https://github.com/xairy/linux/tree/up-kasan-vmalloc-tags-v2

About half of patches are cleanups I went for along the way. None of
them seem to be important enough to go through stable, so I decided
not to split them out into separate patches/series.

I'll keep the patchset based on the mainline for now. Once the
high-level issues are resolved, I'll rebase onto mm - there might be
a few conflicts right now.

The patchset is partially based on an early version of the HW_TAGS
patchset by Vincenzo that had vmalloc support. Thus, I added a
Co-developed-by tag into a few patches.

SW_TAGS vmalloc tagging support is straightforward. It reuses all of
the generic KASAN machinery, but uses shadow memory to store tags
instead of magic values. Naturally, vmalloc tagging requires adding
a few kasan_reset_tag() annotations to the vmalloc code.
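
As an illustration (not a hunk from this series), this is the kind of annotation meant here: code that compares a possibly tagged pointer against untagged range bounds has to strip the tag first. The helper name below is made up for the example; kasan_reset_tag() and VMALLOC_START/VMALLOC_END are the existing kernel symbols.

#include <linux/kasan.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical range check: under SW_TAGS the pointer may carry a tag
 * in its top byte, which must be reset before comparing against
 * untagged addresses such as VMALLOC_START/VMALLOC_END.
 */
static bool example_addr_is_vmalloc(const void *addr)
{
	unsigned long a = (unsigned long)kasan_reset_tag(addr);

	return a >= VMALLOC_START && a < VMALLOC_END;
}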

HW_TAGS vmalloc tagging support stands out. HW_TAGS KASAN is based on
Arm MTE, which can only assign tags to physical memory. As a result,
HW_TAGS KASAN only tags vmalloc() allocations, which are backed by
page_alloc memory. It ignores vmap() and others.

Changes in v1->v2:
- Move memory init for vmalloc() into vmalloc code for HW_TAGS KASAN.
- Minor fixes and code reshuffling, see patches for lists of changes.

Thanks!

Andrey Konovalov (34):
kasan, page_alloc: deduplicate should_skip_kasan_poison
kasan, page_alloc: move tag_clear_highpage out of
kernel_init_free_pages
kasan, page_alloc: merge kasan_free_pages into free_pages_prepare
kasan, page_alloc: simplify kasan_poison_pages call site
kasan, page_alloc: init memory of skipped pages on free
kasan: drop skip_kasan_poison variable in free_pages_prepare
mm: clarify __GFP_ZEROTAGS comment
kasan: only apply __GFP_ZEROTAGS when memory is zeroed
kasan, page_alloc: refactor init checks in post_alloc_hook
kasan, page_alloc: merge kasan_alloc_pages into post_alloc_hook
kasan, page_alloc: combine tag_clear_highpage calls in post_alloc_hook
kasan, page_alloc: move SetPageSkipKASanPoison in post_alloc_hook
kasan, page_alloc: move kernel_init_free_pages in post_alloc_hook
kasan, page_alloc: simplify kasan_unpoison_pages call site
kasan: clean up metadata byte definitions
kasan: define KASAN_VMALLOC_INVALID for SW_TAGS
kasan, x86, arm64, s390: rename functions for modules shadow
kasan, vmalloc: drop outdated VM_KASAN comment
kasan: reorder vmalloc hooks
kasan: add wrappers for vmalloc hooks
kasan, vmalloc: reset tags in vmalloc functions
kasan, fork: don't tag stacks allocated with vmalloc
kasan, vmalloc: add vmalloc support to SW_TAGS
kasan, vmalloc, arm64: mark vmalloc mappings as pgprot_tagged
kasan, vmalloc: don't unpoison VM_ALLOC pages before mapping
kasan, page_alloc: allow skipping unpoisoning for HW_TAGS
kasan, page_alloc: allow skipping memory init for HW_TAGS
kasan, vmalloc: add vmalloc support to HW_TAGS
kasan: mark kasan_arg_stacktrace as __initdata
kasan: simplify kasan_init_hw_tags
kasan: add kasan.vmalloc command line flag
arm64: select KASAN_VMALLOC for SW/HW_TAGS modes
kasan: documentation updates
kasan: improve vmalloc tests

Documentation/dev-tools/kasan.rst | 17 ++-
arch/arm64/Kconfig | 2 +-
arch/arm64/include/asm/vmalloc.h | 10 ++
arch/arm64/kernel/module.c | 2 +-
arch/s390/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/gfp.h | 28 +++--
include/linux/kasan.h | 91 +++++++++------
include/linux/vmalloc.h | 18 ++-
kernel/fork.c | 1 +
lib/Kconfig.kasan | 20 ++--
lib/test_kasan.c | 181 +++++++++++++++++++++++++++++-
mm/kasan/common.c | 4 +-
mm/kasan/hw_tags.c | 157 +++++++++++++++++++++-----
mm/kasan/kasan.h | 16 ++-
mm/kasan/shadow.c | 57 ++++++----
mm/page_alloc.c | 150 +++++++++++++++++--------
mm/vmalloc.c | 72 ++++++++++--
18 files changed, 631 insertions(+), 199 deletions(-)

--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:31:46 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Currently, kernel_init_free_pages() serves two purposes: it either only
zeroes memory or zeroes both memory and memory tags via a different
code path. As this function has only two callers, each using only one
code path, this behaviour is confusing.

This patch pulls the code that zeroes both memory and tags out of
kernel_init_free_pages().

As a result of this change, the code in free_pages_prepare() starts to
look complicated, but this is improved in the few following patches.
Those improvements are not integrated into this patch to make diffs
easier to read.

This patch does no functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Reviewed-by: Alexander Potapenko <gli...@google.com>
---
mm/page_alloc.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c99566a3b67e..3589333b5b77 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1269,16 +1269,10 @@ static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
PageSkipKASanPoison(page);
}

-static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
+static void kernel_init_free_pages(struct page *page, int numpages)
{
int i;

- if (zero_tags) {
- for (i = 0; i < numpages; i++)
- tag_clear_highpage(page + i);
- return;
- }
-
/* s390's use of memset() could override KASAN redzones. */
kasan_disable_current();
for (i = 0; i < numpages; i++) {
@@ -1372,7 +1366,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
bool init = want_init_on_free();

if (init)
- kernel_init_free_pages(page, 1 << order, false);
+ kernel_init_free_pages(page, 1 << order);
if (!skip_kasan_poison)
kasan_poison_pages(page, order, init);
}
@@ -2415,9 +2409,17 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);

kasan_unpoison_pages(page, order, init);
- if (init)
- kernel_init_free_pages(page, 1 << order,
- gfp_flags & __GFP_ZEROTAGS);
+
+ if (init) {
+ if (gfp_flags & __GFP_ZEROTAGS) {
+ int i;
+
+ for (i = 0; i < 1 << order; i++)
+ tag_clear_highpage(page + i);
+ } else {
+ kernel_init_free_pages(page, 1 << order);
+ }
+ }
}

andrey.k...@linux.dev

Dec 6, 2021, 4:32:17 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Currently, the code responsible for initializing and poisoning memory
in free_pages_prepare() is scattered across two locations:
kasan_free_pages() for HW_TAGS KASAN and free_pages_prepare() itself.
This is confusing.

This and a few following patches combine the code from these two
locations. Along the way, these patches also simplify the performed
checks to make them easier to follow.

This patch replaces the only caller of kasan_free_pages() with its
implementation.

As kasan_has_integrated_init() is only true when CONFIG_KASAN_HW_TAGS
is enabled, moving the code does no functional changes.

This patch is not useful by itself but makes the simplifications in
the following patches easier to follow.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Reviewed-by: Alexander Potapenko <gli...@google.com>
---
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 7355cb534e4f..0b8225add2e4 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -213,17 +213,6 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
}
}

-void kasan_free_pages(struct page *page, unsigned int order)
-{
- /*
- * This condition should match the one in free_pages_prepare() in
- * page_alloc.c.
- */
- bool init = want_init_on_free();
-
- kasan_poison_pages(page, order, init);
-}
-
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_enable_tagging_sync(void)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3589333b5b77..3f3ea41f8c64 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c

andrey.k...@linux.dev

Dec 6, 2021, 4:44:14 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Simplify the code around calling kasan_poison_pages() in
free_pages_prepare().

This patch does no functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>

---

Changes v1->v2:
- Don't reorder kasan_poison_pages() and free_pages_prepare().
---
mm/page_alloc.c | 18 +++++-------------
1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f3ea41f8c64..15f76bc1fa3e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1289,6 +1289,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
{
int bad = 0;
bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
+ bool init = want_init_on_free();

VM_BUG_ON_PAGE(PageTail(page), page);

@@ -1359,19 +1360,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
- if (kasan_has_integrated_init()) {
- bool init = want_init_on_free();
-
- if (!skip_kasan_poison)
- kasan_poison_pages(page, order, init);
- } else {
- bool init = want_init_on_free();
-
- if (init)
- kernel_init_free_pages(page, 1 << order);
- if (!skip_kasan_poison)
- kasan_poison_pages(page, order, init);
- }
+ if (init && !kasan_has_integrated_init())
+ kernel_init_free_pages(page, 1 << order);
+ if (!skip_kasan_poison)
+ kasan_poison_pages(page, order, init);

/*
* arch_free_page() can make the page's contents inaccessible. s390
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:44:21 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Since commit 7a3b83537188 ("kasan: use separate (un)poison implementation
for integrated init"), when all init, kasan_has_integrated_init(), and
skip_kasan_poison are true, free_pages_prepare() doesn't initialize
the page. This is wrong.

Fix it by remembering whether kasan_poison_pages() performed
initialization, and call kernel_init_free_pages() if it didn't.

Reordering kasan_poison_pages() and kernel_init_free_pages() is OK,
since kernel_init_free_pages() can handle poisoned memory.

Fixes: 7a3b83537188 ("kasan: use separate (un)poison implementation for integrated init")
Signed-off-by: Andrey Konovalov <andre...@google.com>

---

Changes v1->v2:
- Reorder kasan_poison_pages() and free_pages_prepare() in this patch
instead of doing it in the previous one.
---
mm/page_alloc.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 15f76bc1fa3e..2ada09a58e4b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1360,11 +1360,16 @@ static __always_inline bool free_pages_prepare(struct page *page,
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
- if (init && !kasan_has_integrated_init())
- kernel_init_free_pages(page, 1 << order);
- if (!skip_kasan_poison)
+ if (!skip_kasan_poison) {
kasan_poison_pages(page, order, init);

+ /* Memory is already initialized if KASAN did it internally. */
+ if (kasan_has_integrated_init())
+ init = false;
+ }
+ if (init)
+ kernel_init_free_pages(page, 1 << order);
+
/*
* arch_free_page() can make the page's contents inaccessible. s390
* does this. So nothing which can access the page's contents should
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:44:27 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

skip_kasan_poison is only used in a single place.
Call should_skip_kasan_poison() directly for simplicity.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Suggested-by: Marco Elver <el...@google.com>

---

Changes v1->v2:
- Add this patch.
---
mm/page_alloc.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2ada09a58e4b..f70bfa63a374 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1288,7 +1288,6 @@ static __always_inline bool free_pages_prepare(struct page *page,
unsigned int order, bool check_free, fpi_t fpi_flags)
{
int bad = 0;
- bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
bool init = want_init_on_free();

VM_BUG_ON_PAGE(PageTail(page), page);
@@ -1360,7 +1359,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
- if (!skip_kasan_poison) {
+ if (!should_skip_kasan_poison(page, fpi_flags)) {
kasan_poison_pages(page, order, init);

/* Memory is already initialized if KASAN did it internally. */
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:44:34 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

__GFP_ZEROTAGS is intended as an optimization: if memory is zeroed during
allocation, it's possible to set memory tags at the same time with little
performance impact.

Clarify this intention of __GFP_ZEROTAGS in the comment.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/gfp.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index b976c4177299..dddd7597689f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -232,8 +232,8 @@ struct vm_area_struct;
*
* %__GFP_ZERO returns a zeroed page on success.
*
- * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if
- * __GFP_ZERO is set.
+ * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
+ * is being zeroed (either via __GFP_ZERO or via init_on_alloc).
*
* %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
* on deallocation. Typically used for userspace pages. Currently only has an
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:44:40 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

__GFP_ZEROTAGS should only be effective if memory is being zeroed.
Currently, hardware tag-based KASAN violates this requirement.

Fix by including an initialization check along with checking for
__GFP_ZEROTAGS.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Reviewed-by: Alexander Potapenko <gli...@google.com>
---
mm/kasan/hw_tags.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 0b8225add2e4..c643740b8599 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c

andrey.k...@linux.dev

Dec 6, 2021, 4:44:44 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch separates code for zeroing memory from the code clearing tags
in post_alloc_hook().

This patch is not useful by itself but makes the simplifications in
the following patches easier to follow.

This patch does no functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f70bfa63a374..507004a54f2f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2405,19 +2405,21 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
kasan_alloc_pages(page, order, gfp_flags);
} else {
bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+ bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);

kasan_unpoison_pages(page, order, init);

- if (init) {
- if (gfp_flags & __GFP_ZEROTAGS) {
- int i;
+ if (init_tags) {
+ int i;

- for (i = 0; i < 1 << order; i++)
- tag_clear_highpage(page + i);
- } else {
- kernel_init_free_pages(page, 1 << order);
- }
+ for (i = 0; i < 1 << order; i++)
+ tag_clear_highpage(page + i);
+
+ init = false;
}
+
+ if (init)
+ kernel_init_free_pages(page, 1 << order);
}

andrey.k...@linux.dev

Dec 6, 2021, 4:44:48 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Currently, the code responsible for initializing and poisoning memory in
post_alloc_hook() is scattered across two locations: kasan_alloc_pages()
hook for HW_TAGS KASAN and post_alloc_hook() itself. This is confusing.

This and a few following patches combine the code from these two
locations. Along the way, these patches restructure the many performed
checks step by step to make them easier to follow.

This patch replaces the only caller of kasan_alloc_pages() with its
implementation.

As kasan_has_integrated_init() is only true when CONFIG_KASAN_HW_TAGS
is enabled, moving the code does no functional changes.

The patch also moves init and init_tags variables definitions out of
kasan_has_integrated_init() clause in post_alloc_hook(), as they have
the same values regardless of what the if condition evaluates to.

This patch is not useful by itself but makes the simplifications in
the following patches easier to follow.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/kasan.h | 9 ---------
mm/kasan/common.c | 2 +-
mm/kasan/hw_tags.c | 22 ----------------------
mm/page_alloc.c | 20 +++++++++++++++-----
4 files changed, 16 insertions(+), 37 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 89a43d8ae4fe..1031070be3f3 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -94,8 +94,6 @@ static inline bool kasan_hw_tags_enabled(void)
return kasan_enabled();
}

-void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
-
#else /* CONFIG_KASAN_HW_TAGS */

static inline bool kasan_enabled(void)
@@ -108,13 +106,6 @@ static inline bool kasan_hw_tags_enabled(void)
return false;
}

-static __always_inline void kasan_alloc_pages(struct page *page,
- unsigned int order, gfp_t flags)
-{
- /* Only available for integrated init. */
- BUILD_BUG();
-}
-
#endif /* CONFIG_KASAN_HW_TAGS */

static inline bool kasan_has_integrated_init(void)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 66078cc1b4f0..d7168bfca61a 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -536,7 +536,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
return NULL;

/*
- * The object has already been unpoisoned by kasan_alloc_pages() for
+ * The object has already been unpoisoned by kasan_unpoison_pages() for
* alloc_pages() or by kasan_krealloc() for krealloc().
*/

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index c643740b8599..76cf2b6229c7 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -192,28 +192,6 @@ void __init kasan_init_hw_tags(void)
kasan_stack_collection_enabled() ? "on" : "off");
}

-void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
-{
- /*
- * This condition should match the one in post_alloc_hook() in
- * page_alloc.c.
- */
- bool init = !want_init_on_free() && want_init_on_alloc(flags);
- bool init_tags = init && (flags & __GFP_ZEROTAGS);
-
- if (flags & __GFP_SKIP_KASAN_POISON)
- SetPageSkipKASanPoison(page);
-
- if (init_tags) {
- int i;
-
- for (i = 0; i != 1 << order; ++i)
- tag_clear_highpage(page + i);
- } else {
- kasan_unpoison_pages(page, order, init);
- }
-}
-
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_enable_tagging_sync(void)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 507004a54f2f..d33e0b0547be 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2383,6 +2383,9 @@ static bool check_new_pages(struct page *page, unsigned int order)
inline void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags)
{
+ bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+ bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
+
set_page_private(page, 0);
set_page_refcounted(page);

@@ -2398,15 +2401,22 @@ inline void post_alloc_hook(struct page *page, unsigned int order,

/*
* As memory initialization might be integrated into KASAN,
- * kasan_alloc_pages and kernel_init_free_pages must be
+ * KASAN unpoisoning and memory initialization code must be
* kept together to avoid discrepancies in behavior.
*/
if (kasan_has_integrated_init()) {
- kasan_alloc_pages(page, order, gfp_flags);
- } else {
- bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
- bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
+ if (gfp_flags & __GFP_SKIP_KASAN_POISON)
+ SetPageSkipKASanPoison(page);
+
+ if (init_tags) {
+ int i;

+ for (i = 0; i != 1 << order; ++i)
+ tag_clear_highpage(page + i);
+ } else {
+ kasan_unpoison_pages(page, order, init);
+ }
+ } else {
kasan_unpoison_pages(page, order, init);

if (init_tags) {
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:45:02 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

The patch moves tag_clear_highpage() loops out of the
kasan_has_integrated_init() clause as a code simplification.

This patch does no functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d33e0b0547be..781b75563276 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2404,30 +2404,30 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
* KASAN unpoisoning and memory initialization code must be
* kept together to avoid discrepancies in behavior.
*/
+
+ /*
+ * If memory tags should be zeroed (which happens only when memory
+ * should be initialized as well).
+ */
+ if (init_tags) {
+ int i;
+
+ /* Initialize both memory and tags. */
+ for (i = 0; i != 1 << order; ++i)
+ tag_clear_highpage(page + i);
+
+ /* Note that memory is already initialized by the loop above. */
+ init = false;
+ }
if (kasan_has_integrated_init()) {
if (gfp_flags & __GFP_SKIP_KASAN_POISON)
SetPageSkipKASanPoison(page);

- if (init_tags) {
- int i;
-
- for (i = 0; i != 1 << order; ++i)
- tag_clear_highpage(page + i);
- } else {
+ if (!init_tags)
kasan_unpoison_pages(page, order, init);
- }
} else {
kasan_unpoison_pages(page, order, init);

- if (init_tags) {
- int i;
-
- for (i = 0; i < 1 << order; i++)
- tag_clear_highpage(page + i);
-
- init = false;
- }
-
if (init)
kernel_init_free_pages(page, 1 << order);
}
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:45:13 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Pull the SetPageSkipKASanPoison() call in post_alloc_hook() out of the
big if clause for better code readability. This also allows for more
simplifications in the following patches.

Also turn the kasan_has_integrated_init() check into the proper
CONFIG_KASAN_HW_TAGS one. These checks evaluate to the same value,
but logically skipping kasan poisoning has nothing to do with
integrated init.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 781b75563276..cbbaf76db6d9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2420,9 +2420,6 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
init = false;
}
if (kasan_has_integrated_init()) {
- if (gfp_flags & __GFP_SKIP_KASAN_POISON)
- SetPageSkipKASanPoison(page);
-
if (!init_tags)
kasan_unpoison_pages(page, order, init);
} else {
@@ -2431,6 +2428,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
if (init)
kernel_init_free_pages(page, 1 << order);
}
+ /* Propagate __GFP_SKIP_KASAN_POISON to page flags. */
+ if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&
+ (gfp_flags & __GFP_SKIP_KASAN_POISON))
+ SetPageSkipKASanPoison(page);

andrey.k...@linux.dev

Dec 6, 2021, 4:45:20 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Pull the kernel_init_free_pages() call in post_alloc_hook() out of the
big if clause for better code readability. This also allows for more
simplifications in the following patch.

This patch does no functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cbbaf76db6d9..5c346375cff9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2420,14 +2420,18 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
init = false;
}
if (kasan_has_integrated_init()) {
- if (!init_tags)
+ if (!init_tags) {
kasan_unpoison_pages(page, order, init);
+
+ /* Note that memory is already initialized by KASAN. */
+ init = false;
+ }
} else {
kasan_unpoison_pages(page, order, init);
-
- if (init)
- kernel_init_free_pages(page, 1 << order);
}
+ /* If memory is still not initialized, do it now. */
+ if (init)
+ kernel_init_free_pages(page, 1 << order);
/* Propagate __GFP_SKIP_KASAN_POISON to page flags. */
if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&
(gfp_flags & __GFP_SKIP_KASAN_POISON))
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:45:26 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Simplify the checks around the kasan_unpoison_pages() call in
post_alloc_hook().

The logical condition for calling this function is:

- If a software KASAN mode is enabled, we need to mark shadow memory.
- Otherwise, HW_TAGS KASAN is enabled, and it only makes sense to
set tags if they haven't already been cleared by tag_clear_highpage(),
which is indicated by init_tags.

This patch concludes the simplifications for post_alloc_hook().
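
As an illustration of the resulting condition, here is a small compilable
userspace model (not kernel code); IS_ENABLED(CONFIG_KASAN_HW_TAGS) is
reduced to a plain constant:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for IS_ENABLED(CONFIG_KASAN_HW_TAGS); set to 0 to model a
 * software KASAN build. */
#define KASAN_HW_TAGS_BUILD 1

static bool must_unpoison(bool init_tags)
{
	/*
	 * Software modes always need their shadow marked; HW_TAGS only
	 * needs tags set when tag_clear_highpage() has not already done so.
	 */
	return !KASAN_HW_TAGS_BUILD || !init_tags;
}

int main(void)
{
	printf("init_tags=0 -> unpoison=%d\n", must_unpoison(false));
	printf("init_tags=1 -> unpoison=%d\n", must_unpoison(true));
	return 0;
}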

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/page_alloc.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5c346375cff9..73e6500c9767 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2419,15 +2419,18 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
/* Note that memory is already initialized by the loop above. */
init = false;
}
- if (kasan_has_integrated_init()) {
- if (!init_tags) {
- kasan_unpoison_pages(page, order, init);
+ /*
+ * If either a software KASAN mode is enabled, or,
+ * in the case of hardware tag-based KASAN,
+ * if memory tags have not been cleared via tag_clear_highpage().
+ */
+ if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS) || !init_tags) {
+ /* Mark shadow memory or set memory tags. */
+ kasan_unpoison_pages(page, order, init);

- /* Note that memory is already initialized by KASAN. */
+ /* Note that memory is already initialized by KASAN. */
+ if (kasan_has_integrated_init())
init = false;
- }
- } else {
- kasan_unpoison_pages(page, order, init);
}
/* If memory is still not initialized, do it now. */
if (init)
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:45:33 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Most of the metadata byte values are only used for Generic KASAN.

Remove the KASAN_KMALLOC_FREETRACK definition for the
!CONFIG_KASAN_GENERIC case, and put it, along with the other
Generic-mode metadata values, under the corresponding ifdef.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kasan/kasan.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index aebd8df86a1f..a50450160638 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -71,15 +71,16 @@ static inline bool kasan_sync_fault_possible(void)
#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
-#define KASAN_KMALLOC_FREETRACK 0xFA /* object was freed and has free track set */
#else
#define KASAN_FREE_PAGE KASAN_TAG_INVALID
#define KASAN_PAGE_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_FREE KASAN_TAG_INVALID
-#define KASAN_KMALLOC_FREETRACK KASAN_TAG_INVALID
#endif

+#ifdef CONFIG_KASAN_GENERIC
+
+#define KASAN_KMALLOC_FREETRACK 0xFA /* object was freed and has free track set */
#define KASAN_GLOBAL_REDZONE 0xF9 /* redzone for global variable */
#define KASAN_VMALLOC_INVALID 0xF8 /* unallocated space in vmapped page */

@@ -110,6 +111,8 @@ static inline bool kasan_sync_fault_possible(void)
#define KASAN_ABI_VERSION 1
#endif

+#endif /* CONFIG_KASAN_GENERIC */
+
/* Metadata layout customization. */
#define META_BYTES_PER_BLOCK 1
#define META_BLOCKS_PER_ROW 16
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:45:40 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

In preparation for adding vmalloc support to SW_TAGS KASAN,
provide a KASAN_VMALLOC_INVALID definition for it.

HW_TAGS KASAN won't be using this value, as it falls back onto
page_alloc for poisoning freed vmalloc() memory.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kasan/kasan.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index a50450160638..0827d74d0d87 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -71,18 +71,19 @@ static inline bool kasan_sync_fault_possible(void)
#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_VMALLOC_INVALID 0xF8 /* unallocated space in vmapped page */
#else
#define KASAN_FREE_PAGE KASAN_TAG_INVALID
#define KASAN_PAGE_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_FREE KASAN_TAG_INVALID
+#define KASAN_VMALLOC_INVALID KASAN_TAG_INVALID /* only for SW_TAGS */
#endif

#ifdef CONFIG_KASAN_GENERIC

#define KASAN_KMALLOC_FREETRACK 0xFA /* object was freed and has free track set */
#define KASAN_GLOBAL_REDZONE 0xF9 /* redzone for global variable */
-#define KASAN_VMALLOC_INVALID 0xF8 /* unallocated space in vmapped page */

/*
* Stack redzone shadow values
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:45:46 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Rename kasan_free_shadow to kasan_free_module_shadow and
kasan_module_alloc to kasan_alloc_module_shadow.

These functions are used to allocate/free shadow memory for kernel
modules when KASAN_VMALLOC is not enabled. The new names better
reflect their purpose.

Also reword the comment next to their declaration to improve clarity.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
arch/arm64/kernel/module.c | 2 +-
arch/s390/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/kasan.h | 14 +++++++-------
mm/kasan/shadow.c | 4 ++--
mm/vmalloc.c | 2 +-
6 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index b5ec010c481f..f8bd5100efb5 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -58,7 +58,7 @@ void *module_alloc(unsigned long size)
PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));

- if (p && (kasan_module_alloc(p, size) < 0)) {
+ if (p && (kasan_alloc_module_shadow(p, size) < 0)) {
vfree(p);
return NULL;
}
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b01ba460b7ca..a753cebedda9 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -44,7 +44,7 @@ void *module_alloc(unsigned long size)
p = __vmalloc_node_range(size, MODULE_ALIGN, MODULES_VADDR, MODULES_END,
GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
- if (p && (kasan_module_alloc(p, size) < 0)) {
+ if (p && (kasan_alloc_module_shadow(p, size) < 0)) {
vfree(p);
return NULL;
}
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 169fb6f4cd2e..dec41d9ba337 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -77,7 +77,7 @@ void *module_alloc(unsigned long size)
MODULES_END, GFP_KERNEL,
PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
- if (p && (kasan_module_alloc(p, size) < 0)) {
+ if (p && (kasan_alloc_module_shadow(p, size) < 0)) {
vfree(p);
return NULL;
}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 1031070be3f3..4eec58e6ef82 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -453,17 +453,17 @@ static inline void kasan_populate_early_vm_area_shadow(void *start,
!defined(CONFIG_KASAN_VMALLOC)

/*
- * These functions provide a special case to support backing module
- * allocations with real shadow memory. With KASAN vmalloc, the special
- * case is unnecessary, as the work is handled in the generic case.
+ * These functions allocate and free shadow memory for kernel modules.
+ * They are only required when KASAN_VMALLOC is not supported, as otherwise
+ * shadow memory is allocated by the generic vmalloc handlers.
*/
-int kasan_module_alloc(void *addr, size_t size);
-void kasan_free_shadow(const struct vm_struct *vm);
+int kasan_alloc_module_shadow(void *addr, size_t size);
+void kasan_free_module_shadow(const struct vm_struct *vm);

#else /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */

-static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
-static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+static inline int kasan_alloc_module_shadow(void *addr, size_t size) { return 0; }
+static inline void kasan_free_module_shadow(const struct vm_struct *vm) {}

#endif /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 4a4929b29a23..585c2bf1073b 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -498,7 +498,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,

#else /* CONFIG_KASAN_VMALLOC */

-int kasan_module_alloc(void *addr, size_t size)
+int kasan_alloc_module_shadow(void *addr, size_t size)
{
void *ret;
size_t scaled_size;
@@ -529,7 +529,7 @@ int kasan_module_alloc(void *addr, size_t size)
return -ENOMEM;
}

-void kasan_free_shadow(const struct vm_struct *vm)
+void kasan_free_module_shadow(const struct vm_struct *vm)
{
if (vm->flags & VM_KASAN)
vfree(kasan_mem_to_shadow(vm->addr));
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..c5235e3e5857 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2524,7 +2524,7 @@ struct vm_struct *remove_vm_area(const void *addr)
va->vm = NULL;
spin_unlock(&vmap_area_lock);

- kasan_free_shadow(vm);
+ kasan_free_module_shadow(vm);
free_unmap_vmap_area(va);

return vm;
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:45:54 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

The comment about VM_KASAN in include/linux/vmalloc.h is outdated.
VM_KASAN is currently only used to mark vm_areas allocated for
kernel modules when CONFIG_KASAN_VMALLOC is disabled.

Drop the comment.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/vmalloc.h | 11 -----------
1 file changed, 11 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 6e022cc712e6..b22369f540eb 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -28,17 +28,6 @@ struct notifier_block; /* in notifier.h */
#define VM_MAP_PUT_PAGES 0x00000200 /* put pages and free array in vfree */
#define VM_NO_HUGE_VMAP 0x00000400 /* force PAGE_SIZE pte mapping */

-/*
- * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC.
- *
- * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after
- * shadow memory has been mapped. It's used to handle allocation errors so that
- * we don't try to poison shadow on free if it was never allocated.
- *
- * Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to
- * determine which allocations need the module shadow freed.
- */
-
/* bits [20..32] reserved for arch specific ioremap internals */

/*
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:45:59 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Group functions that [de]populate shadow memory for vmalloc.
Group functions that [un]poison memory for vmalloc.

This patch makes no functional changes but prepares the KASAN code for
adding vmalloc support to HW_TAGS KASAN.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/kasan.h | 20 +++++++++-----------
mm/kasan/shadow.c | 43 ++++++++++++++++++++++---------------------
2 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4eec58e6ef82..af2dd67d2c0e 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -417,34 +417,32 @@ static inline void kasan_init_hw_tags(void) { }

#ifdef CONFIG_KASAN_VMALLOC

+void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
-void kasan_poison_vmalloc(const void *start, unsigned long size);
-void kasan_unpoison_vmalloc(const void *start, unsigned long size);
void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+void kasan_unpoison_vmalloc(const void *start, unsigned long size);
+void kasan_poison_vmalloc(const void *start, unsigned long size);

#else /* CONFIG_KASAN_VMALLOC */

+static inline void kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size) { }
static inline int kasan_populate_vmalloc(unsigned long start,
unsigned long size)
{
return 0;
}
-
-static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
-{ }
-static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
-{ }
static inline void kasan_release_vmalloc(unsigned long start,
unsigned long end,
unsigned long free_region_start,
- unsigned long free_region_end) {}
+ unsigned long free_region_end) { }

-static inline void kasan_populate_early_vm_area_shadow(void *start,
- unsigned long size)
+static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{ }
+static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }

#endif /* CONFIG_KASAN_VMALLOC */
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 585c2bf1073b..49a3660e111a 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -345,27 +345,6 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
return 0;
}

-/*
- * Poison the shadow for a vmalloc region. Called as part of the
- * freeing process at the time the region is freed.
- */
-void kasan_poison_vmalloc(const void *start, unsigned long size)
-{
- if (!is_vmalloc_or_module_addr(start))
- return;
-
- size = round_up(size, KASAN_GRANULE_SIZE);
- kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
-}
-
-void kasan_unpoison_vmalloc(const void *start, unsigned long size)
-{
- if (!is_vmalloc_or_module_addr(start))
- return;
-
- kasan_unpoison(start, size, false);
-}
-
static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
void *unused)
{
@@ -496,6 +475,28 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

+
+void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{
+ if (!is_vmalloc_or_module_addr(start))
+ return;
+
+ kasan_unpoison(start, size, false);
+}
+
+/*
+ * Poison the shadow for a vmalloc region. Called as part of the
+ * freeing process at the time the region is freed.
+ */
+void kasan_poison_vmalloc(const void *start, unsigned long size)
+{
+ if (!is_vmalloc_or_module_addr(start))
+ return;
+
+ size = round_up(size, KASAN_GRANULE_SIZE);
+ kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
+}
+
#else /* CONFIG_KASAN_VMALLOC */

int kasan_alloc_module_shadow(void *addr, size_t size)
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:04 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Add wrappers around the functions that [un]poison memory for vmalloc
allocations. These functions will be used by HW_TAGS KASAN and
therefore need to be disabled when the kasan=off command line argument
is provided.

This patch makes no functional changes for software KASAN modes.
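
As an illustration of the wrapper pattern, here is a compilable userspace
sketch; the real kernel code gates on the kasan_enabled() static key, for
which a plain boolean stands in here:

#include <stdbool.h>
#include <stdio.h>

static bool kasan_flag_enabled = true; /* set from the kasan= boot parameter */

static bool kasan_enabled(void)
{
	return kasan_flag_enabled;
}

static void __kasan_poison_vmalloc(const void *start, unsigned long size)
{
	printf("poisoning %lu bytes at %p\n", size, start);
}

static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{
	/* With kasan=off, the hook becomes a no-op. */
	if (kasan_enabled())
		__kasan_poison_vmalloc(start, size);
}

int main(void)
{
	char buf[32];

	kasan_poison_vmalloc(buf, sizeof(buf));	/* prints */
	kasan_flag_enabled = false;		/* model of kasan=off */
	kasan_poison_vmalloc(buf, sizeof(buf));	/* silent */
	return 0;
}

Keeping the check in an inline wrapper means that, with kasan=off, callers
only pay for the static-key branch instead of a function call.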

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/kasan.h | 17 +++++++++++++++--
mm/kasan/shadow.c | 5 ++---
2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index af2dd67d2c0e..ad4798e77f60 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -423,8 +423,21 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void kasan_unpoison_vmalloc(const void *start, unsigned long size);
-void kasan_poison_vmalloc(const void *start, unsigned long size);
+void __kasan_unpoison_vmalloc(const void *start, unsigned long size);
+static __always_inline void kasan_unpoison_vmalloc(const void *start,
+ unsigned long size)
+{
+ if (kasan_enabled())
+ __kasan_unpoison_vmalloc(start, size);
+}
+
+void __kasan_poison_vmalloc(const void *start, unsigned long size);
+static __always_inline void kasan_poison_vmalloc(const void *start,
+ unsigned long size)
+{
+ if (kasan_enabled())
+ __kasan_poison_vmalloc(start, size);
+}

#else /* CONFIG_KASAN_VMALLOC */

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 49a3660e111a..fa0c8a750d09 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -475,8 +475,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

-
-void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+void __kasan_unpoison_vmalloc(const void *start, unsigned long size)
{
if (!is_vmalloc_or_module_addr(start))
return;
@@ -488,7 +487,7 @@ void kasan_unpoison_vmalloc(const void *start, unsigned long size)
* Poison the shadow for a vmalloc region. Called as part of the
* freeing process at the time the region is freed.
*/
-void kasan_poison_vmalloc(const void *start, unsigned long size)
+void __kasan_poison_vmalloc(const void *start, unsigned long size)
{
if (!is_vmalloc_or_module_addr(start))
return;
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:09 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

In preparation for adding vmalloc support to SW/HW_TAGS KASAN,
reset pointer tags in functions that use pointer values in
range checks.

vread() is a special case here. Despite the untagging of the addr
pointer in its prologue, the accesses performed by vread() are checked.

Instead of accessing the virtual mappings through addr directly, vread()
recovers the physical address via page_address(vmalloc_to_page()) and
accesses that. As page_address() recovers the pointer tag, these
accesses still get checked.
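
To see why such range checks need an untagged address, here is a small
compilable userspace sketch; the VMALLOC_START/VMALLOC_END values are made
up for illustration and do not reflect the real arm64 layout:

#include <stdint.h>
#include <stdio.h>

#define VMALLOC_START 0xffff800010000000ULL /* made-up value for illustration */
#define VMALLOC_END   0xfffffbfff0000000ULL /* made-up value for illustration */

static uint64_t reset_tag(uint64_t addr)
{
	return addr | (0xffULL << 56); /* restore the native 0xff top byte */
}

static int is_vmalloc_addr(uint64_t addr)
{
	addr = reset_tag(addr);
	return addr >= VMALLOC_START && addr < VMALLOC_END;
}

int main(void)
{
	uint64_t tagged = (VMALLOC_START + 0x1000) & ~(0xffULL << 56);

	tagged |= 0x2bULL << 56; /* pretend SW_TAGS assigned tag 0x2b */

	/* The raw comparison fails because of the tag in the top byte ... */
	printf("raw range check: %d\n",
	       tagged >= VMALLOC_START && tagged < VMALLOC_END);
	/* ... while resetting the tag first makes it pass. */
	printf("after reset_tag: %d\n", is_vmalloc_addr(tagged));
	return 0;
}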

Signed-off-by: Andrey Konovalov <andre...@google.com>

---

Changes v1->v2:
- Clarified the description of untagging in vread().
---
mm/vmalloc.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c5235e3e5857..a059b3100c0a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -72,7 +72,7 @@ static const bool vmap_allow_huge = false;

bool is_vmalloc_addr(const void *x)
{
- unsigned long addr = (unsigned long)x;
+ unsigned long addr = (unsigned long)kasan_reset_tag(x);

return addr >= VMALLOC_START && addr < VMALLOC_END;
}
@@ -630,7 +630,7 @@ int is_vmalloc_or_module_addr(const void *x)
* just put it in the vmalloc space.
*/
#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
- unsigned long addr = (unsigned long)x;
+ unsigned long addr = (unsigned long)kasan_reset_tag(x);
if (addr >= MODULES_VADDR && addr < MODULES_END)
return 1;
#endif
@@ -804,6 +804,8 @@ static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
struct vmap_area *va = NULL;
struct rb_node *n = vmap_area_root.rb_node;

+ addr = (unsigned long)kasan_reset_tag((void *)addr);
+
while (n) {
struct vmap_area *tmp;

@@ -825,6 +827,8 @@ static struct vmap_area *__find_vmap_area(unsigned long addr)
{
struct rb_node *n = vmap_area_root.rb_node;

+ addr = (unsigned long)kasan_reset_tag((void *)addr);
+
while (n) {
struct vmap_area *va;

@@ -2143,7 +2147,7 @@ EXPORT_SYMBOL_GPL(vm_unmap_aliases);
void vm_unmap_ram(const void *mem, unsigned int count)
{
unsigned long size = (unsigned long)count << PAGE_SHIFT;
- unsigned long addr = (unsigned long)mem;
+ unsigned long addr = (unsigned long)kasan_reset_tag(mem);
struct vmap_area *va;

might_sleep();
@@ -3361,6 +3365,8 @@ long vread(char *buf, char *addr, unsigned long count)
unsigned long buflen = count;
unsigned long n;

+ addr = kasan_reset_tag(addr);
+
/* Don't allow overflow */
if ((unsigned long) addr + count < count)
count = -(unsigned long) addr;
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:13 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Once tag-based KASAN modes start tagging vmalloc() allocations,
kernel stacks will start getting tagged if CONFIG_VMAP_STACK is enabled.

Reset the tag of kernel stack pointers after allocation.

For SW_TAGS KASAN, when CONFIG_KASAN_STACK is enabled, the
instrumentation can't handle the sp register being tagged.

For HW_TAGS KASAN, there are no instrumentation-related issues. However,
the impact of having a tagged SP register needs to be properly evaluated,
so keep it untagged for now.

Note that the memory for the stack allocation still gets tagged to
catch vmalloc-into-stack out-of-bounds accesses.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
kernel/fork.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index 3244cc56b697..062d1484ef42 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -253,6 +253,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
* so cache the vm_struct.
*/
if (stack) {
+ stack = kasan_reset_tag(stack);
tsk->stack_vm_area = find_vm_area(stack);
tsk->stack = stack;
}
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:19 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch adds vmalloc tagging support to SW_TAGS KASAN.

The changes include:

- __kasan_unpoison_vmalloc() now assigns a random pointer tag, marks
the shadow of the virtual mapping with that tag, and embeds the tag
into the returned pointer.

- __get_vm_area_node() (used by vmalloc() and vmap()) and
pcpu_get_vm_areas() save the tagged pointer into vm_struct->addr
(note: not into vmap_area->addr). This requires putting
kasan_unpoison_vmalloc() after setup_vmalloc_vm[_locked]();
otherwise the latter will overwrite the tagged pointer.
The tagged pointer is then naturally propagated to vmalloc()
and vmap().

- vm_map_ram() returns the tagged pointer directly.

- Allow enabling KASAN_VMALLOC with SW_TAGS.
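
As a rough illustration of the mechanism, here is a compilable userspace
model (assuming a 64-bit build); the granule size, shadow layout, fixed tag
value, and helper implementations are simplified stand-ins for the real
SW_TAGS machinery:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define GRANULE 16
#define POOL_SIZE 4096

static unsigned char pool[POOL_SIZE];              /* stands in for vmalloc space */
static unsigned char shadow[POOL_SIZE / GRANULE];  /* one shadow byte per granule */

static void *set_tag(void *addr, uint8_t tag)
{
	return (void *)(((uintptr_t)addr & ~(0xffULL << 56)) |
			((uintptr_t)tag << 56));
}

/* Userspace pointers natively have a zero top byte, so resetting clears it
 * here (the kernel's native tag is 0xff instead). */
static uintptr_t reset_tag(const void *addr)
{
	return (uintptr_t)addr & ~(0xffULL << 56);
}

/* Rough model of __kasan_unpoison_vmalloc(): tag the shadow, tag the pointer. */
static void *unpoison_vmalloc(void *start, size_t size)
{
	uint8_t tag = 0x2b; /* stands in for kasan_random_tag() */
	size_t off = reset_tag(start) - (uintptr_t)pool;

	memset(&shadow[off / GRANULE], tag, (size + GRANULE - 1) / GRANULE);
	return set_tag(start, tag);
}

/* Rough model of a SW_TAGS access check: pointer tag must match the shadow. */
static int access_ok(const void *tagged_addr)
{
	uint8_t ptr_tag = (uintptr_t)tagged_addr >> 56;
	size_t off = reset_tag(tagged_addr) - (uintptr_t)pool;

	return shadow[off / GRANULE] == ptr_tag;
}

int main(void)
{
	void *p = unpoison_vmalloc(pool + 256, 100);

	printf("in-bounds access ok:     %d\n", access_ok(p));
	printf("out-of-bounds access ok: %d\n",
	       access_ok((void *)((uintptr_t)p + 512)));
	return 0;
}

The same tag ends up both in the returned pointer and in the shadow bytes
covering the mapping, which is what allows later accesses through the
tagged pointer to be checked.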

Signed-off-by: Andrey Konovalov <andre...@google.com>

---

Changes v1->v2:
- Allow enabling KASAN_VMALLOC with SW_TAGS in this patch.
---
include/linux/kasan.h | 17 +++++++++++------
lib/Kconfig.kasan | 2 +-
mm/kasan/shadow.c | 6 ++++--
mm/vmalloc.c | 14 ++++++++------
4 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index ad4798e77f60..6a2619759e93 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -423,12 +423,14 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void __kasan_unpoison_vmalloc(const void *start, unsigned long size);
-static __always_inline void kasan_unpoison_vmalloc(const void *start,
- unsigned long size)
+void * __must_check __kasan_unpoison_vmalloc(const void *start,
+ unsigned long size);
+static __always_inline void * __must_check kasan_unpoison_vmalloc(
+ const void *start, unsigned long size)
{
if (kasan_enabled())
- __kasan_unpoison_vmalloc(start, size);
+ return __kasan_unpoison_vmalloc(start, size);
+ return (void *)start;
}

void __kasan_poison_vmalloc(const void *start, unsigned long size);
@@ -453,8 +455,11 @@ static inline void kasan_release_vmalloc(unsigned long start,
unsigned long free_region_start,
unsigned long free_region_end) { }

-static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
-{ }
+static inline void *kasan_unpoison_vmalloc(const void *start,
+ unsigned long size, bool unique)
+{
+ return (void *)start;
+}
static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index cdc842d090db..3f144a87f8a3 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -179,7 +179,7 @@ config KASAN_TAGS_IDENTIFY

config KASAN_VMALLOC
bool "Back mappings in vmalloc space with real shadow memory"
- depends on KASAN_GENERIC && HAVE_ARCH_KASAN_VMALLOC
+ depends on (KASAN_GENERIC || KASAN_SW_TAGS) && HAVE_ARCH_KASAN_VMALLOC
help
By default, the shadow region for vmalloc space is the read-only
zero page. This means that KASAN cannot detect errors involving
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index fa0c8a750d09..4ca280a96fbc 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -475,12 +475,14 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

-void __kasan_unpoison_vmalloc(const void *start, unsigned long size)
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size)
{
if (!is_vmalloc_or_module_addr(start))
- return;
+ return (void *)start;

+ start = set_tag(start, kasan_random_tag());
kasan_unpoison(start, size, false);
+ return (void *)start;
}

/*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a059b3100c0a..7be18b292679 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2208,7 +2208,7 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
mem = (void *)addr;
}

- kasan_unpoison_vmalloc(mem, size);
+ mem = kasan_unpoison_vmalloc(mem, size);

if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
pages, PAGE_SHIFT) < 0) {
@@ -2441,10 +2441,10 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
return NULL;
}

- kasan_unpoison_vmalloc((void *)va->va_start, requested_size);
-
setup_vmalloc_vm(area, va, flags, caller);

+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+
return area;
}

@@ -3752,9 +3752,6 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
for (area = 0; area < nr_vms; area++) {
if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
goto err_free_shadow;
-
- kasan_unpoison_vmalloc((void *)vas[area]->va_start,
- sizes[area]);
}

/* insert all vm's */
@@ -3767,6 +3764,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
}
spin_unlock(&vmap_area_lock);

+ /* mark allocated areas as accessible */
+ for (area = 0; area < nr_vms; area++)
+ vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
+ vms[area]->size);
+
kfree(vas);
return vms;

--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:25 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

HW_TAGS KASAN relies on the Arm Memory Tagging Extension (MTE). With MTE,
a memory region must be mapped as MT_NORMAL_TAGGED to allow setting
memory tags via MTE-specific instructions.

This change adds the proper protection bits to vmalloc() allocations.
These allocations are always backed by page_alloc pages, so the tags
actually end up set on the corresponding physical memory.
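
For illustration, here is a compilable userspace model of the guard added
by this change; the protection values and the tagged-attribute bit are made
up, and only the PAGE_KERNEL-only condition mirrors the patch:

#include <stdio.h>

/* Illustrative stand-ins; the real values are arm64 pgprot_t encodings. */
#define PAGE_KERNEL          0x1UL
#define PAGE_KERNEL_EXEC     0x2UL
#define MT_NORMAL_TAGGED_BIT (1UL << 8) /* stand-in for the MTE memory type */

static unsigned long arch_vmalloc_pgprot_modify(unsigned long prot)
{
	/* Only plain PAGE_KERNEL mappings are switched to the tagged type. */
	if (prot == PAGE_KERNEL)
		prot |= MT_NORMAL_TAGGED_BIT;
	return prot;
}

int main(void)
{
	printf("PAGE_KERNEL      -> 0x%lx\n",
	       arch_vmalloc_pgprot_modify(PAGE_KERNEL));
	printf("PAGE_KERNEL_EXEC -> 0x%lx\n",
	       arch_vmalloc_pgprot_modify(PAGE_KERNEL_EXEC));
	return 0;
}

Mappings that explicitly request other protections keep their attributes
unchanged.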

Signed-off-by: Andrey Konovalov <andre...@google.com>
Co-developed-by: Vincenzo Frascino <vincenzo...@arm.com>
---
arch/arm64/include/asm/vmalloc.h | 10 ++++++++++
include/linux/vmalloc.h | 7 +++++++
mm/vmalloc.c | 2 ++
3 files changed, 19 insertions(+)

diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index b9185503feae..3d35adf365bf 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -25,4 +25,14 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)

#endif

+#define arch_vmalloc_pgprot_modify arch_vmalloc_pgprot_modify
+static inline pgprot_t arch_vmalloc_pgprot_modify(pgprot_t prot)
+{
+ if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&
+ (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)))
+ prot = pgprot_tagged(prot);
+
+ return prot;
+}
+
#endif /* _ASM_ARM64_VMALLOC_H */
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b22369f540eb..965c4bf475f1 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -108,6 +108,13 @@ static inline int arch_vmap_pte_supported_shift(unsigned long size)
}
#endif

+#ifndef arch_vmalloc_pgprot_modify
+static inline pgprot_t arch_vmalloc_pgprot_modify(pgprot_t prot)
+{
+ return prot;
+}
+#endif
+
/*
* Highlevel APIs for driver use
*/
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7be18b292679..f37d0ed99bf9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3033,6 +3033,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
return NULL;
}

+ prot = arch_vmalloc_pgprot_modify(prot);
+
if (vmap_allow_huge && !(vm_flags & VM_NO_HUGE_VMAP)) {
unsigned long size_per_node;

--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:31 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch makes KASAN unpoison vmalloc mappings after they have been
mapped in, when possible: for vmalloc() (identified via VM_ALLOC)
and vm_map_ram().

The reasons for this are:

- For vmalloc() and vm_map_ram(): pages don't get unpoisoned in case
mapping them fails.
- For vmalloc(): HW_TAGS KASAN needs pages to be mapped to set tags via
kasan_unpoison_vmalloc().

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/vmalloc.c | 26 ++++++++++++++++++++++----
1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f37d0ed99bf9..82ef1e27e2e4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2208,14 +2208,15 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
mem = (void *)addr;
}

- mem = kasan_unpoison_vmalloc(mem, size);
-
if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
pages, PAGE_SHIFT) < 0) {
vm_unmap_ram(mem, count);
return NULL;
}

+ /* Mark the pages as accessible after they were mapped in. */
+ mem = kasan_unpoison_vmalloc(mem, size);
+
return mem;
}
EXPORT_SYMBOL(vm_map_ram);
@@ -2443,7 +2444,14 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,

setup_vmalloc_vm(area, va, flags, caller);

- area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+ /*
+ * For VM_ALLOC mappings, __vmalloc_node_range() marks the pages as
+ * accessible after they are mapped in.
+ * Otherwise, as the pages can be mapped outside of vmalloc code,
+ * mark them now as a best-effort approach.
+ */
+ if (!(flags & VM_ALLOC))
+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);

return area;
}
@@ -3072,6 +3080,12 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!addr)
goto fail;

+ /*
+ * Mark the pages for VM_ALLOC mappings as accessible after they were
+ * mapped in.
+ */
+ addr = kasan_unpoison_vmalloc(addr, real_size);
+
/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
* flag. It means that vm_struct is not fully initialized.
@@ -3766,7 +3780,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
}
spin_unlock(&vmap_area_lock);

- /* mark allocated areas as accessible */
+ /*
+ * Mark allocated areas as accessible.
+ * As the pages are mapped outside of vmalloc code,
+ * mark them now as a best-effort approach.
+ */
for (area = 0; area < nr_vms; area++)
vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
vms[area]->size);
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:37 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch adds a new GFP flag __GFP_SKIP_KASAN_UNPOISON that allows
skipping KASAN unpoisoning for page_alloc allocations. The flag is only
effective with HW_TAGS KASAN.

This flag will be used by vmalloc code for page_alloc allocations
backing vmalloc() mappings in a following patch. The reason to skip
KASAN unpoisoning for these pages in page_alloc is that vmalloc code
will be unpoisoning them instead.

This patch also rewords the comment for __GFP_SKIP_KASAN_POISON.
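
As an illustration of how the flag is meant to flow, here is a compilable
userspace model; the flag value, the HW_TAGS switch, and the calling
convention are stand-ins, and only the skip decision mirrors the patch:

#include <stdbool.h>
#include <stdio.h>

#define __GFP_SKIP_KASAN_UNPOISON 0x1000000u
static const bool config_kasan_hw_tags = true; /* pretend a HW_TAGS build */

static bool should_skip_kasan_unpoison(unsigned int flags, bool init_tags)
{
	if (!config_kasan_hw_tags)
		return false; /* software modes always unpoison in page_alloc */

	/* HW_TAGS: skip if tags are already set or skipping was requested. */
	return init_tags || (flags & __GFP_SKIP_KASAN_UNPOISON);
}

static void post_alloc_hook(unsigned int gfp_flags)
{
	if (!should_skip_kasan_unpoison(gfp_flags, false))
		printf("page_alloc: unpoisoning the pages\n");
	else
		printf("page_alloc: leaving the pages for the caller to tag\n");
}

int main(void)
{
	post_alloc_hook(0);                         /* an ordinary allocation */
	post_alloc_hook(__GFP_SKIP_KASAN_UNPOISON); /* pages backing vmalloc() */
	return 0;
}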

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/gfp.h | 18 +++++++++++-------
mm/page_alloc.c | 24 +++++++++++++++++-------
2 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index dddd7597689f..8a3083d4cbbe 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -54,9 +54,10 @@ struct vm_area_struct;
#define ___GFP_THISNODE 0x200000u
#define ___GFP_ACCOUNT 0x400000u
#define ___GFP_ZEROTAGS 0x800000u
-#define ___GFP_SKIP_KASAN_POISON 0x1000000u
+#define ___GFP_SKIP_KASAN_UNPOISON 0x1000000u
+#define ___GFP_SKIP_KASAN_POISON 0x2000000u
#ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP 0x2000000u
+#define ___GFP_NOLOCKDEP 0x4000000u
#else
#define ___GFP_NOLOCKDEP 0
#endif
@@ -235,21 +236,24 @@ struct vm_area_struct;
* %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
* is being zeroed (either via __GFP_ZERO or via init_on_alloc).
*
- * %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
- * on deallocation. Typically used for userspace pages. Currently only has an
- * effect in HW tags mode.
+ * %__GFP_SKIP_KASAN_UNPOISON makes KASAN skip unpoisoning on page allocation.
+ * Only effective in HW_TAGS mode.
+ *
+ * %__GFP_SKIP_KASAN_POISON makes KASAN skip poisoning on page deallocation.
+ * Typically, used for userspace pages. Only effective in HW_TAGS mode.
*/
#define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN)
#define __GFP_COMP ((__force gfp_t)___GFP_COMP)
#define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS)
-#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)
+#define __GFP_SKIP_KASAN_UNPOISON ((__force gfp_t)___GFP_SKIP_KASAN_UNPOISON)
+#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)

/* Disable lockdep for GFP context tracking */
#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)

/* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (26 + IS_ENABLED(CONFIG_LOCKDEP))
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))

/**
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 73e6500c9767..7065d0e763e9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2380,6 +2380,21 @@ static bool check_new_pages(struct page *page, unsigned int order)
return false;
}

+static inline bool should_skip_kasan_unpoison(gfp_t flags, bool init_tags)
+{
+ /* Don't skip if a software KASAN mode is enabled. */
+ if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS))
+ return false;
+
+ /*
+ * For hardware tag-based KASAN, skip if either:
+ *
+ * 1. Memory tags have already been cleared via tag_clear_highpage().
+ * 2. Skipping has been requested via __GFP_SKIP_KASAN_UNPOISON.
+ */
+ return init_tags || (flags & __GFP_SKIP_KASAN_UNPOISON);
+}
+
inline void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags)
{
@@ -2419,13 +2434,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
/* Note that memory is already initialized by the loop above. */
init = false;
}
- /*
- * If either a software KASAN mode is enabled, or,
- * in the case of hardware tag-based KASAN,
- * if memory tags have not been cleared via tag_clear_highpage().
- */
- if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS) || !init_tags) {
- /* Mark shadow memory or set memory tags. */
+ if (!should_skip_kasan_unpoison(gfp_flags, init_tags)) {
+ /* Unpoison shadow memory or set memory tags. */
kasan_unpoison_pages(page, order, init);

/* Note that memory is already initialized by KASAN. */
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:43 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch adds a new GFP flag __GFP_SKIP_ZERO that allows skipping
memory initialization. The flag is only effective with HW_TAGS KASAN.

This flag will be used by vmalloc code for page_alloc allocations
backing vmalloc() mappings in a following patch. The reason to skip
memory initialization for these pages in page_alloc is that vmalloc
code will be initializing them instead.

With the current implementation, when __GFP_SKIP_ZERO is provided,
__GFP_ZEROTAGS is ignored. This doesn't matter, as these two flags are
never provided at the same time. However, if this is changed in the
future, this particular implementation detail can be changed as well.
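
For illustration, a compilable userspace model of the resulting init
decision; want_init_on_alloc()/want_init_on_free() are reduced to a single
boolean and the flag values are illustrative:

#include <stdbool.h>
#include <stdio.h>

#define __GFP_ZEROTAGS  0x800000u
#define __GFP_SKIP_ZERO 0x1000000u
static const bool config_kasan_hw_tags = true; /* pretend a HW_TAGS build */
static const bool init_on_alloc = true;        /* as with init_on_alloc=1 */

static bool should_skip_init(unsigned int flags)
{
	if (!config_kasan_hw_tags)
		return false; /* software modes never skip initialization here */
	return flags & __GFP_SKIP_ZERO;
}

static void decide(unsigned int gfp_flags)
{
	bool init = init_on_alloc && !should_skip_init(gfp_flags);
	bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);

	printf("flags=0x%x -> init=%d init_tags=%d\n", gfp_flags, init, init_tags);
}

int main(void)
{
	decide(0);              /* page_alloc zeroes the memory */
	decide(__GFP_ZEROTAGS); /* zeroing combined with tag clearing */
	decide(__GFP_SKIP_ZERO);/* vmalloc will initialize instead */
	return 0;
}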

Signed-off-by: Andrey Konovalov <andre...@google.com>

---

Changes v1->v2:
- This is a new patch.
---
include/linux/gfp.h | 16 +++++++++++-----
mm/page_alloc.c | 13 ++++++++++++-
2 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 8a3083d4cbbe..5dbde04e8e7b 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -54,10 +54,11 @@ struct vm_area_struct;
#define ___GFP_THISNODE 0x200000u
#define ___GFP_ACCOUNT 0x400000u
#define ___GFP_ZEROTAGS 0x800000u
-#define ___GFP_SKIP_KASAN_UNPOISON 0x1000000u
-#define ___GFP_SKIP_KASAN_POISON 0x2000000u
+#define ___GFP_SKIP_ZERO 0x1000000u
+#define ___GFP_SKIP_KASAN_UNPOISON 0x2000000u
+#define ___GFP_SKIP_KASAN_POISON 0x4000000u
#ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP 0x4000000u
+#define ___GFP_NOLOCKDEP 0x8000000u
#else
#define ___GFP_NOLOCKDEP 0
#endif
@@ -234,7 +235,11 @@ struct vm_area_struct;
* %__GFP_ZERO returns a zeroed page on success.
*
* %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
- * is being zeroed (either via __GFP_ZERO or via init_on_alloc).
+ * is being zeroed (either via __GFP_ZERO or via init_on_alloc, provided that
+ * __GFP_SKIP_ZERO is not set).
+ *
+ * %__GFP_SKIP_ZERO makes page_alloc skip zeroing memory.
+ * Only effective when HW_TAGS KASAN is enabled.
*
* %__GFP_SKIP_KASAN_UNPOISON makes KASAN skip unpoisoning on page allocation.
* Only effective in HW_TAGS mode.
@@ -246,6 +251,7 @@ struct vm_area_struct;
#define __GFP_COMP ((__force gfp_t)___GFP_COMP)
#define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS)
+#define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO)
#define __GFP_SKIP_KASAN_UNPOISON ((__force gfp_t)___GFP_SKIP_KASAN_UNPOISON)
#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)

@@ -253,7 +259,7 @@ struct vm_area_struct;
#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)

/* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (26 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (27 + IS_ENABLED(CONFIG_LOCKDEP))
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))

/**
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7065d0e763e9..366b08b761ee 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2395,10 +2395,21 @@ static inline bool should_skip_kasan_unpoison(gfp_t flags, bool init_tags)
return init_tags || (flags & __GFP_SKIP_KASAN_UNPOISON);
}

+static inline bool should_skip_init(gfp_t flags)
+{
+ /* Don't skip if a software KASAN mode is enabled. */
+ if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS))
+ return false;
+
+ /* For hardware tag-based KASAN, skip if requested. */
+ return (flags & __GFP_SKIP_ZERO);
+}
+
inline void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags)
{
- bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+ bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) &&
+ !should_skip_init(gfp_flags);
bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);

set_page_private(page, 0);
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:47 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

This patch adds vmalloc tagging support to HW_TAGS KASAN.

The key difference between HW_TAGS and the other two KASAN modes
when it comes to vmalloc is that HW_TAGS KASAN can only assign tags
to physical memory, while the other two modes have shadow memory
covering every mapped virtual memory region.

This patch makes __kasan_unpoison_vmalloc() for HW_TAGS KASAN:

- Skip non-VM_ALLOC mappings as HW_TAGS KASAN can only tag a single
mapping of normal physical memory; see the comment in the function.
- Generate a random tag, tag the returned pointer and the allocation,
and initialize the allocation at the same time.
- Propagate the tag into the page structs to allow accesses through
page_address(vmalloc_to_page()).

The rest of vmalloc-related KASAN hooks are not needed:

- The shadow-related ones are fully skipped.
- __kasan_poison_vmalloc() is kept as a no-op with a comment.

Unpoisoning and zeroing of the physical pages backing vmalloc()
allocations are skipped in page_alloc via __GFP_SKIP_KASAN_UNPOISON and
__GFP_SKIP_ZERO: __kasan_unpoison_vmalloc() does that instead.

This patch allows enabling CONFIG_KASAN_VMALLOC with HW_TAGS
and adjusts the CONFIG_KASAN_VMALLOC description:

- Mention HW_TAGS support.
- Remove unneeded internal details: they have no place in Kconfig
description and are already explained in the documentation.
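
For illustration, here is a compilable userspace sketch (assuming a 64-bit
build) of the in-page redzone that __kasan_unpoison_vmalloc() computes in
this patch; the start address is made up, and 16-byte granules with 4K
pages are the usual arm64 MTE values:

#include <stdio.h>

#define KASAN_GRANULE_SIZE 16UL
#define PAGE_SIZE 4096UL

static unsigned long round_up(unsigned long x, unsigned long align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	unsigned long start = 0xffff800010000000UL; /* made-up vmalloc() address */
	unsigned long size = 100;                   /* requested allocation size */

	unsigned long redzone_start = round_up(start + size, KASAN_GRANULE_SIZE);
	unsigned long redzone_size =
		round_up(redzone_start, PAGE_SIZE) - redzone_start;

	/*
	 * [start, redzone_start) keeps the pointer tag; the rest of the last
	 * page, up to the page boundary, is poisoned with KASAN_TAG_INVALID.
	 */
	printf("redzone starts at 0x%lx, spans %lu bytes\n",
	       redzone_start, redzone_size);
	return 0;
}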

Signed-off-by: Andrey Konovalov <andre...@google.com>
Co-developed-by: Vincenzo Frascino <vincenzo...@arm.com>

---

Changes v1->v2:
- Allow enabling CONFIG_KASAN_VMALLOC with HW_TAGS in this patch.
- Move memory init for page_alloc pages backing vmalloc() into
kasan_unpoison_vmalloc().
---
include/linux/kasan.h | 30 +++++++++++++--
lib/Kconfig.kasan | 20 +++++-----
mm/kasan/hw_tags.c | 89 +++++++++++++++++++++++++++++++++++++++++++
mm/kasan/shadow.c | 11 +++++-
mm/vmalloc.c | 32 +++++++++++++---
5 files changed, 162 insertions(+), 20 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 6a2619759e93..0bdc2b824b9c 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -417,19 +417,40 @@ static inline void kasan_init_hw_tags(void) { }

#ifdef CONFIG_KASAN_VMALLOC

+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+
void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

+#else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
+static inline void kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size)
+{ }
+static inline int kasan_populate_vmalloc(unsigned long start,
+ unsigned long size)
+{
+ return 0;
+}
+static inline void kasan_release_vmalloc(unsigned long start,
+ unsigned long end,
+ unsigned long free_region_start,
+ unsigned long free_region_end) { }
+
+#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
void * __must_check __kasan_unpoison_vmalloc(const void *start,
- unsigned long size);
+ unsigned long size,
+ bool vm_alloc, bool init);
static __always_inline void * __must_check kasan_unpoison_vmalloc(
- const void *start, unsigned long size)
+ const void *start, unsigned long size,
+ bool vm_alloc, bool init)
{
if (kasan_enabled())
- return __kasan_unpoison_vmalloc(start, size);
+ return __kasan_unpoison_vmalloc(start, size, vm_alloc, init);
return (void *)start;
}

@@ -456,7 +477,8 @@ static inline void kasan_release_vmalloc(unsigned long start,
unsigned long free_region_end) { }

static inline void *kasan_unpoison_vmalloc(const void *start,
- unsigned long size, bool unique)
+ unsigned long size,
+ bool vm_alloc, bool init)
{
return (void *)start;
}
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 3f144a87f8a3..7834c35a7964 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -178,17 +178,17 @@ config KASAN_TAGS_IDENTIFY
memory consumption.

config KASAN_VMALLOC
- bool "Back mappings in vmalloc space with real shadow memory"
- depends on (KASAN_GENERIC || KASAN_SW_TAGS) && HAVE_ARCH_KASAN_VMALLOC
+ bool "Check accesses to vmalloc allocations"
+ depends on HAVE_ARCH_KASAN_VMALLOC
help
- By default, the shadow region for vmalloc space is the read-only
- zero page. This means that KASAN cannot detect errors involving
- vmalloc space.
-
- Enabling this option will hook in to vmap/vmalloc and back those
- mappings with real shadow memory allocated on demand. This allows
- for KASAN to detect more sorts of errors (and to support vmapped
- stacks), but at the cost of higher memory usage.
+ This mode makes KASAN check accesses to vmalloc allocations for
+ validity.
+
+ With software KASAN modes, checking is done for all types of vmalloc
+ allocations. Enabling this option leads to higher memory usage.
+
+ With hardware tag-based KASAN, only VM_ALLOC mappings are checked.
+ There is no additional memory usage.

config KASAN_KUNIT_TEST
tristate "KUnit-compatible tests of KASAN bug detection capabilities" if !KUNIT_ALL_TESTS
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 76cf2b6229c7..837c260beec6 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -192,6 +192,95 @@ void __init kasan_init_hw_tags(void)
kasan_stack_collection_enabled() ? "on" : "off");
}

+#ifdef CONFIG_KASAN_VMALLOC
+
+static void unpoison_vmalloc_pages(const void *addr, u8 tag)
+{
+ struct vm_struct *area;
+ int i;
+
+ /*
+ * As hardware tag-based KASAN only tags VM_ALLOC vmalloc allocations
+ * (see the comment in __kasan_unpoison_vmalloc), all of the pages
+ * should belong to a single area.
+ */
+ area = find_vm_area((void *)addr);
+ if (WARN_ON(!area))
+ return;
+
+ for (i = 0; i < area->nr_pages; i++) {
+ struct page *page = area->pages[i];
+
+ page_kasan_tag_set(page, tag);
+ }
+}
+
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ bool vm_alloc, bool init)
+{
+ u8 tag;
+ unsigned long redzone_start, redzone_size;
+
+ if (!is_vmalloc_or_module_addr(start))
+ return (void *)start;
+
+ /* Unpoisoning and pointer tag assignment is skipped for non-VM_ALLOC
+ * mappings as:
+ *
+ * 1. Unlike the software KASAN modes, hardware tag-based KASAN only
+ * supports tagging physical memory. Therefore, it can only tag a
+ * single mapping of normal physical pages.
+ * 2. Hardware tag-based KASAN can only tag memory mapped with special
+ * mapping protection bits, see arch_vmalloc_pgprot_modify().
+ * As non-VM_ALLOC mappings can be mapped outside of vmalloc code,
+ * providing these bits would require tracking all non-VM_ALLOC
+ * mappers.
+ *
+ * Thus, for VM_ALLOC mappings, hardware tag-based KASAN only tags
+ * the first virtual mapping, which is created by vmalloc().
+ * Tagging the page_alloc memory backing that vmalloc() allocation is
+ * skipped, see ___GFP_SKIP_KASAN_UNPOISON.
+ *
+ * For non-VM_ALLOC allocations, page_alloc memory is tagged as usual.
+ */
+ if (!vm_alloc)
+ return (void *)start;
+
+ tag = kasan_random_tag();
+ start = set_tag(start, tag);
+
+ /* Unpoison and initialize memory up to size. */
+ kasan_unpoison(start, size, init);
+
+ /*
+ * Explicitly poison and initialize the in-page vmalloc() redzone.
+ * Unlike software KASAN modes, hardware tag-based KASAN doesn't
+ * unpoison memory when populating shadow for vmalloc() space.
+ */
+ redzone_start = round_up((unsigned long)start + size, KASAN_GRANULE_SIZE);
+ redzone_size = round_up(redzone_start, PAGE_SIZE) - redzone_start;
+ kasan_poison((void *)redzone_start, redzone_size, KASAN_TAG_INVALID, init);
+
+ /*
+ * Set per-page tag flags to allow accessing physical memory for the
+ * vmalloc() mapping through page_address(vmalloc_to_page()).
+ */
+ unpoison_vmalloc_pages(start, tag);
+
+ return (void *)start;
+}
+
+void __kasan_poison_vmalloc(const void *start, unsigned long size)
+{
+ /*
+ * No tagging here.
+ * The physical pages backing the vmalloc() allocation are poisoned
+ * through the usual page_alloc paths.
+ */
+}
+
+#endif
+
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_enable_tagging_sync(void)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 4ca280a96fbc..8600dd925f35 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -475,8 +475,17 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

-void *__kasan_unpoison_vmalloc(const void *start, unsigned long size)
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ bool vm_alloc, bool init)
{
+ /*
+ * Software tag-based KASAN tags both VM_ALLOC and non-VM_ALLOC
+ * mappings, so the vm_alloc argument is ignored.
+ * Software tag-based KASAN can't optimize zeroing memory by combining
+ * it with setting memory tags, so the init argument is ignored;
+ * vmalloc() memory is poisoned via page_alloc.
+ */
+
if (!is_vmalloc_or_module_addr(start))
return (void *)start;

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 82ef1e27e2e4..d48db7cc3358 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2214,8 +2214,12 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
return NULL;
}

- /* Mark the pages as accessible after they were mapped in. */
- mem = kasan_unpoison_vmalloc(mem, size);
+ /*
+ * Mark the pages as accessible after they were mapped in.
+ * With hardware tag-based KASAN, marking is skipped for
+ * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
+ */
+ mem = kasan_unpoison_vmalloc(mem, size, false, false);

return mem;
}
@@ -2449,9 +2453,12 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
* accessible after they are mapped in.
* Otherwise, as the pages can be mapped outside of vmalloc code,
* mark them now as a best-effort approach.
+ * With hardware tag-based KASAN, marking is skipped for
+ * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
if (!(flags & VM_ALLOC))
- area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size,
+ false, false);

return area;
}
@@ -2849,6 +2856,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
struct page *page;
int i;

+ /*
+ * Skip page_alloc poisoning and zeroing for pages backing VM_ALLOC
+ * mappings. Only effective in HW_TAGS mode.
+ */
+ gfp |= __GFP_SKIP_KASAN_UNPOISON | __GFP_SKIP_ZERO;
+
/*
* For order-0 pages we make use of bulk allocator, if
* the page array is partly or not at all populated due
@@ -3027,6 +3040,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
{
struct vm_struct *area;
void *addr;
+ bool init;
unsigned long real_size = size;
unsigned long real_align = align;
unsigned int shift = PAGE_SHIFT;
@@ -3083,8 +3097,13 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
/*
* Mark the pages for VM_ALLOC mappings as accessible after they were
* mapped in.
+ * The init condition should match the one in post_alloc_hook()
+ * (except for the should_skip_init() check) to make sure that memory
+ * is initialized under the same conditions regardless of the enabled
+ * KASAN mode.
*/
- addr = kasan_unpoison_vmalloc(addr, real_size);
+ init = !want_init_on_free() && want_init_on_alloc(gfp_mask);
+ addr = kasan_unpoison_vmalloc(addr, real_size, true, init);

/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
@@ -3784,10 +3803,13 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
* Mark allocated areas as accessible.
* As the pages are mapped outside of vmalloc code,
* mark them now as a best-effort approach.
+ * With hardware tag-based KASAN, marking is skipped for
+ * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
for (area = 0; area < nr_vms; area++)
vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
- vms[area]->size);
+ vms[area]->size,
+ false, false);

andrey.k...@linux.dev

Dec 6, 2021, 4:46:52 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

As kasan_arg_stacktrace is only used in __init functions, mark it as
__initdata instead of __ro_after_init to allow it to be freed after boot.

The other enums for KASAN args are used in kasan_init_hw_tags_cpu(),
which is not marked as __init as a CPU can be hot-plugged after boot.
Clarify this in a comment.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Suggested-by: Marco Elver <el...@google.com>

---

Changes v1->v2:
- Add this patch.
---
mm/kasan/hw_tags.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 837c260beec6..983ae15ed4f0 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -40,7 +40,7 @@ enum kasan_arg_stacktrace {

static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
-static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init;
+static enum kasan_arg_stacktrace kasan_arg_stacktrace __initdata;

/* Whether KASAN is enabled at all. */
DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
@@ -116,7 +116,10 @@ static inline const char *kasan_mode_info(void)
return "sync";
}

-/* kasan_init_hw_tags_cpu() is called for each CPU. */
+/*
+ * kasan_init_hw_tags_cpu() is called for each CPU.
+ * Not marked as __init as a CPU can be hot-plugged after boot.
+ */
void kasan_init_hw_tags_cpu(void)
{
/*
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:46:58 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Simplify kasan_init_hw_tags():

- Remove excessive comments in kasan_arg_mode switch.
- Combine DEFAULT and ON cases in kasan_arg_stacktrace switch.

Signed-off-by: Andrey Konovalov <andre...@google.com>

---

Changes v1->v2:
- Add this patch.
---
mm/kasan/hw_tags.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 983ae15ed4f0..e12f2d195cc9 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -159,20 +159,15 @@ void __init kasan_init_hw_tags(void)

switch (kasan_arg_mode) {
case KASAN_ARG_MODE_DEFAULT:
- /*
- * Default to sync mode.
- */
+ /* Default to sync mode. */
fallthrough;
case KASAN_ARG_MODE_SYNC:
- /* Sync mode enabled. */
kasan_mode = KASAN_MODE_SYNC;
break;
case KASAN_ARG_MODE_ASYNC:
- /* Async mode enabled. */
kasan_mode = KASAN_MODE_ASYNC;
break;
case KASAN_ARG_MODE_ASYMM:
- /* Asymm mode enabled. */
kasan_mode = KASAN_MODE_ASYMM;
break;
}
@@ -180,14 +175,13 @@ void __init kasan_init_hw_tags(void)
switch (kasan_arg_stacktrace) {
case KASAN_ARG_STACKTRACE_DEFAULT:
/* Default to enabling stack trace collection. */
+ fallthrough;
+ case KASAN_ARG_STACKTRACE_ON:
static_branch_enable(&kasan_flag_stacktrace);
break;
case KASAN_ARG_STACKTRACE_OFF:
/* Do nothing, kasan_flag_stacktrace keeps its default value. */
break;
- case KASAN_ARG_STACKTRACE_ON:
- static_branch_enable(&kasan_flag_stacktrace);
- break;
}

pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, stacktrace=%s)\n",
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:47:04 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Allow disabling vmalloc() tagging for HW_TAGS KASAN via a kasan.vmalloc
command line switch.

This is a fail-safe switch intended for production systems that enable
HW_TAGS KASAN. If vmalloc() tagging turns out to have an issue that was
not detected during testing but manifests in production, kasan.vmalloc
allows vmalloc() tagging to be turned off while leaving page_alloc/slab
tagging on.
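
For example, on a production kernel built with HW_TAGS KASAN, the new
flag could be combined with the existing boot parameters roughly like
this (illustrative command line, not taken from this patch):

  kasan=on kasan.mode=sync kasan.vmalloc=off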

Signed-off-by: Andrey Konovalov <andre...@google.com>

---

Changes v1->v2:
- Mark kasan_arg_stacktrace as __initdata instead of __ro_after_init.
- Combine KASAN_ARG_VMALLOC_DEFAULT and KASAN_ARG_VMALLOC_ON switch
cases.
---
mm/kasan/hw_tags.c | 45 ++++++++++++++++++++++++++++++++++++++++++++-
mm/kasan/kasan.h | 6 ++++++
2 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index e12f2d195cc9..5683eeac7348 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -32,6 +32,12 @@ enum kasan_arg_mode {
KASAN_ARG_MODE_ASYMM,
};

+enum kasan_arg_vmalloc {
+ KASAN_ARG_VMALLOC_DEFAULT,
+ KASAN_ARG_VMALLOC_OFF,
+ KASAN_ARG_VMALLOC_ON,
+};
+
enum kasan_arg_stacktrace {
KASAN_ARG_STACKTRACE_DEFAULT,
KASAN_ARG_STACKTRACE_OFF,
@@ -40,6 +46,7 @@ enum kasan_arg_stacktrace {

static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
+static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
static enum kasan_arg_stacktrace kasan_arg_stacktrace __initdata;

/* Whether KASAN is enabled at all. */
@@ -50,6 +57,9 @@ EXPORT_SYMBOL(kasan_flag_enabled);
enum kasan_mode kasan_mode __ro_after_init;
EXPORT_SYMBOL_GPL(kasan_mode);

+/* Whether to enable vmalloc tagging. */
+DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
+
/* Whether to collect alloc/free stack traces. */
DEFINE_STATIC_KEY_FALSE(kasan_flag_stacktrace);

@@ -89,6 +99,23 @@ static int __init early_kasan_mode(char *arg)
}
early_param("kasan.mode", early_kasan_mode);

+/* kasan.vmalloc=off/on */
+static int __init early_kasan_flag_vmalloc(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_vmalloc = KASAN_ARG_VMALLOC_OFF;
+ else if (!strcmp(arg, "on"))
+ kasan_arg_vmalloc = KASAN_ARG_VMALLOC_ON;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
+
/* kasan.stacktrace=off/on */
static int __init early_kasan_flag_stacktrace(char *arg)
{
@@ -172,6 +199,18 @@ void __init kasan_init_hw_tags(void)
break;
}

+ switch (kasan_arg_vmalloc) {
+ case KASAN_ARG_VMALLOC_DEFAULT:
+ /* Default to enabling vmalloc tagging. */
+ fallthrough;
+ case KASAN_ARG_VMALLOC_ON:
+ static_branch_enable(&kasan_flag_vmalloc);
+ break;
+ case KASAN_ARG_VMALLOC_OFF:
+ /* Do nothing, kasan_flag_vmalloc keeps its default value. */
+ break;
+ }
+
switch (kasan_arg_stacktrace) {
case KASAN_ARG_STACKTRACE_DEFAULT:
/* Default to enabling stack trace collection. */
@@ -184,8 +223,9 @@ void __init kasan_init_hw_tags(void)
break;
}

- pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, stacktrace=%s)\n",
+ pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
kasan_mode_info(),
+ kasan_vmalloc_enabled() ? "on" : "off",
kasan_stack_collection_enabled() ? "on" : "off");
}

@@ -218,6 +258,9 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
u8 tag;
unsigned long redzone_start, redzone_size;

+ if (!kasan_vmalloc_enabled())
+ return (void *)start;
+
if (!is_vmalloc_or_module_addr(start))
return (void *)start;

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 0827d74d0d87..b58a4547ec5a 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,6 +12,7 @@
#include <linux/static_key.h>
#include "../slab.h"

+DECLARE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace);

enum kasan_mode {
@@ -22,6 +23,11 @@ enum kasan_mode {

extern enum kasan_mode kasan_mode __ro_after_init;

+static inline bool kasan_vmalloc_enabled(void)
+{
+ return static_branch_likely(&kasan_flag_vmalloc);
+}
+
static inline bool kasan_stack_collection_enabled(void)
{
return static_branch_unlikely(&kasan_flag_stacktrace);
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:47:11 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Generic KASAN already selects KASAN_VMALLOC to allow VMAP_STACK to be
selected unconditionally, see commit acc3042d62cb9 ("arm64: Kconfig:
select KASAN_VMALLOC if KANSAN_GENERIC is enabled").

The same change is needed for SW_TAGS KASAN.

HW_TAGS KASAN does not require enabling KASAN_VMALLOC for VMAP_STACK;
they already work together as is. Still, selecting KASAN_VMALLOC makes
sense to keep vmalloc() always protected. In case any bugs in KASAN's
vmalloc() support are discovered, the command line kasan.vmalloc flag
can be used to disable vmalloc() checking.

This patch selects KASAN_VMALLOC for all KASAN modes for arm64.

Signed-off-by: Andrey Konovalov <andre...@google.com>

---

Changes v1->v2:
- Split out this patch.
---
arch/arm64/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c4207cf9bb17..f0aa434e3b7a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -205,7 +205,7 @@ config ARM64
select IOMMU_DMA if IOMMU_SUPPORT
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
- select KASAN_VMALLOC if KASAN_GENERIC
+ select KASAN_VMALLOC if KASAN
select MODULES_USE_ELF_RELA
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:47:18 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Update KASAN documentation:

- Bump Clang version requirement for HW_TAGS as ARM64_MTE depends on
AS_HAS_LSE_ATOMICS as of commit 2decad92f4731 ("arm64: mte: Ensure
TIF_MTE_ASYNC_FAULT is set atomically"), which requires Clang 12.
- Add description of the new kasan.vmalloc command line flag.
- Mention that SW_TAGS and HW_TAGS modes now support vmalloc tagging.
- Explicitly say that the "Shadow memory" section is only applicable
to software KASAN modes.
- Mention that shadow-based KASAN_VMALLOC is supported on arm64.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
Documentation/dev-tools/kasan.rst | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 8089c559d339..7614a1fc30fa 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -30,7 +30,7 @@ Software tag-based KASAN mode is only supported in Clang.

The hardware KASAN mode (#3) relies on hardware to perform the checks but
still requires a compiler version that supports memory tagging instructions.
-This mode is supported in GCC 10+ and Clang 11+.
+This mode is supported in GCC 10+ and Clang 12+.

Both software KASAN modes work with SLUB and SLAB memory allocators,
while the hardware tag-based KASAN currently only supports SLUB.
@@ -206,6 +206,9 @@ additional boot parameters that allow disabling KASAN or controlling features:
Asymmetric mode: a bad access is detected synchronously on reads and
asynchronously on writes.

+- ``kasan.vmalloc=off`` or ``=on`` disables or enables tagging of vmalloc
+ allocations (default: ``on``).
+
- ``kasan.stacktrace=off`` or ``=on`` disables or enables alloc and free stack
traces collection (default: ``on``).

@@ -279,8 +282,8 @@ Software tag-based KASAN uses 0xFF as a match-all pointer tag (accesses through
pointers with the 0xFF pointer tag are not checked). The value 0xFE is currently
reserved to tag freed memory regions.

-Software tag-based KASAN currently only supports tagging of slab and page_alloc
-memory.
+Software tag-based KASAN currently only supports tagging of slab, page_alloc,
+and vmalloc memory.

Hardware tag-based KASAN
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -303,8 +306,8 @@ Hardware tag-based KASAN uses 0xFF as a match-all pointer tag (accesses through
pointers with the 0xFF pointer tag are not checked). The value 0xFE is currently
reserved to tag freed memory regions.

-Hardware tag-based KASAN currently only supports tagging of slab and page_alloc
-memory.
+Hardware tag-based KASAN currently only supports tagging of slab, page_alloc,
+and VM_ALLOC-based vmalloc memory.

If the hardware does not support MTE (pre ARMv8.5), hardware tag-based KASAN
will not be enabled. In this case, all KASAN boot parameters are ignored.
@@ -319,6 +322,8 @@ checking gets disabled.
Shadow memory
-------------

+The contents of this section are only applicable to software KASAN modes.
+
The kernel maps memory in several different parts of the address space.
The range of kernel virtual addresses is large: there is not enough real
memory to support a real shadow region for every address that could be
@@ -349,7 +354,7 @@ CONFIG_KASAN_VMALLOC

With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
cost of greater memory usage. Currently, this is supported on x86,
-riscv, s390, and powerpc.
+arm64, riscv, s390, and powerpc.

This works by hooking into vmalloc and vmap and dynamically
allocating real shadow memory to back the mappings.
--
2.25.1

andrey.k...@linux.dev

Dec 6, 2021, 4:47:22 PM
to Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Update the existing vmalloc_oob() test to account for the specifics
of the tag-based modes. Also add a few new checks and comments.

Add new vmalloc-related tests:

- vmalloc_helpers_tags() to check that exported vmalloc helpers can
handle tagged pointers.
- vmap_tags() to check that SW_TAGS mode properly tags vmap() mappings.
- vm_map_ram_tags() to check that SW_TAGS mode properly tags
vm_map_ram() mappings.
- vmalloc_percpu() to check that SW_TAGS mode tags regions allocated
for __alloc_percpu(). The tagging of per-cpu mappings is best-effort;
proper tagging is tracked in [1].

[1] https://bugzilla.kernel.org/show_bug.cgi?id=215019

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
lib/test_kasan.c | 181 +++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 175 insertions(+), 6 deletions(-)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 0643573f8686..44875356278a 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -1025,21 +1025,174 @@ static void kmalloc_double_kzfree(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, kfree_sensitive(ptr));
}

+static void vmalloc_helpers_tags(struct kunit *test)
+{
+ void *ptr;
+
+ /* This test is intended for tag-based modes. */
+ KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);
+
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
+
+ ptr = vmalloc(PAGE_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
+ /* Check that the returned pointer is tagged. */
+ KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure exported vmalloc helpers handle tagged pointers. */
+ KUNIT_ASSERT_TRUE(test, is_vmalloc_addr(ptr));
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vmalloc_to_page(ptr));
+
+ vfree(ptr);
+}
+
static void vmalloc_oob(struct kunit *test)
{
- void *area;
+ char *v_ptr, *p_ptr;
+ struct page *page;
+ size_t size = PAGE_SIZE / 2 - KASAN_GRANULE_SIZE - 5;

KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);

+ v_ptr = vmalloc(size);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
+
/*
- * We have to be careful not to hit the guard page.
+ * We have to be careful not to hit the guard page in vmalloc tests.
* The MMU will catch that and crash us.
*/
- area = vmalloc(3000);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, area);

- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)area)[3100]);
- vfree(area);
+ /* Make sure in-bounds accesses are valid. */
+ v_ptr[0] = 0;
+ v_ptr[size - 1] = 0;
+
+ /*
+ * An unaligned access past the requested vmalloc size.
+ * Only generic KASAN can precisely detect these.
+ */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
+
+ /* An aligned access into the first out-of-bounds granule. */
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
+
+ /* Check that in-bounds accesses to the physical page are valid. */
+ page = vmalloc_to_page(v_ptr);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page);
+ p_ptr = page_address(page);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
+ p_ptr[0] = 0;
+
+ vfree(v_ptr);
+
+ /*
+ * We can't check for use-after-unmap bugs in this nor in the following
+ * vmalloc tests, as the page might be fully unmapped and accessing it
+ * will crash the kernel.
+ */
+}
+
+static void vmap_tags(struct kunit *test)
+{
+ char *p_ptr, *v_ptr;
+ struct page *p_page, *v_page;
+ size_t order = 1;
+
+ /*
+ * This test is specifically crafted for the software tag-based mode,
+ * the only tag-based mode that poisons vmap mappings.
+ */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
+
+ p_page = alloc_pages(GFP_KERNEL, order);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_page);
+ p_ptr = page_address(p_page);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
+
+ v_ptr = vmap(&p_page, 1 << order, VM_MAP, PAGE_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
+
+ /*
+ * We can't check for out-of-bounds bugs in this nor in the following
+ * vmalloc tests, as allocations have page granularity and accessing
+ * the guard page will crash the kernel.
+ */
+
+ KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure that in-bounds accesses through both pointers work. */
+ *p_ptr = 0;
+ *v_ptr = 0;
+
+ /* Make sure vmalloc_to_page() correctly recovers the page pointer. */
+ v_page = vmalloc_to_page(v_ptr);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_page);
+ KUNIT_EXPECT_PTR_EQ(test, p_page, v_page);
+
+ vunmap(v_ptr);
+ free_pages((unsigned long)p_ptr, order);
+}
+
+static void vm_map_ram_tags(struct kunit *test)
+{
+ char *p_ptr, *v_ptr;
+ struct page *page;
+ size_t order = 1;
+
+ /*
+ * This test is specifically crafted for the software tag-based mode,
+ * the only tag-based mode that poisons vm_map_ram mappings.
+ */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+
+ page = alloc_pages(GFP_KERNEL, order);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page);
+ p_ptr = page_address(page);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
+
+ v_ptr = vm_map_ram(&page, 1 << order, -1);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
+
+ KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure that in-bounds accesses through both pointers work. */
+ *p_ptr = 0;
+ *v_ptr = 0;
+
+ vm_unmap_ram(v_ptr, 1 << order);
+ free_pages((unsigned long)p_ptr, order);
+}
+
+static void vmalloc_percpu(struct kunit *test)
+{
+ char __percpu *ptr;
+ int cpu;
+
+ /*
+ * This test is specifically crafted for the software tag-based mode,
+ * the only tag-based mode that poisons percpu mappings.
+ */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+
+ ptr = __alloc_percpu(PAGE_SIZE, PAGE_SIZE);
+
+ for_each_possible_cpu(cpu) {
+ char *c_ptr = per_cpu_ptr(ptr, cpu);
+
+ KUNIT_EXPECT_GE(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure that in-bounds accesses don't crash the kernel. */
+ *c_ptr = 0;
+ }
+
+ free_percpu(ptr);
}

/*
@@ -1073,6 +1226,18 @@ static void match_all_not_assigned(struct kunit *test)
KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
free_pages((unsigned long)ptr, order);
}
+
+ if (!IS_ENABLED(CONFIG_KASAN_VMALLOC))
+ return;
+
+ for (i = 0; i < 256; i++) {
+ size = (get_random_int() % 1024) + 1;
+ ptr = vmalloc(size);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
+ vfree(ptr);
+ }
}

/* Check that 0xff works as a match-all pointer tag for tag-based modes. */
@@ -1176,7 +1341,11 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kasan_bitops_generic),
KUNIT_CASE(kasan_bitops_tags),
KUNIT_CASE(kmalloc_double_kzfree),
+ KUNIT_CASE(vmalloc_helpers_tags),
KUNIT_CASE(vmalloc_oob),
+ KUNIT_CASE(vmap_tags),
+ KUNIT_CASE(vm_map_ram_tags),
+ KUNIT_CASE(vmalloc_percpu),
KUNIT_CASE(match_all_not_assigned),
KUNIT_CASE(match_all_ptr_tag),
KUNIT_CASE(match_all_mem_tag),
--
2.25.1

Andrey Konovalov

Dec 6, 2021, 4:49:04 PM
to Vincenzo Frascino, Marco Elver, Alexander Potapenko, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Mark Rutland, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov, andrey.k...@linux.dev
Hi Vincenzo,

This patch is based on an early version of the HW_TAGS series you had.
Could you PTAL and give your sign-off?

Thanks!

Andrey Konovalov

Dec 6, 2021, 4:49:45 PM
to Vincenzo Frascino, Marco Elver, Alexander Potapenko, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Mark Rutland, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov, andrey.k...@linux.dev
On Mon, Dec 6, 2021 at 10:46 PM <andrey.k...@linux.dev> wrote:
>
Hi Vincenzo,

This patch is partially based on an early version of the HW_TAGS
series you had. Could you PTAL and give your sign-off?

Andrey Konovalov

Dec 7, 2021, 12:33:21 PM
to Peter Collingbourne, Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Mark Rutland, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov, andrey.k...@linux.dev
On Mon, Dec 6, 2021 at 10:44 PM <andrey.k...@linux.dev> wrote:
>
> From: Andrey Konovalov <andre...@google.com>
>
> __GFP_ZEROTAGS is intended as an optimization: if memory is zeroed during
> allocation, it's possible to set memory tags at the same time with little
> performance impact.
>
> Clarify this intention of __GFP_ZEROTAGS in the comment.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> ---
> include/linux/gfp.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index b976c4177299..dddd7597689f 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -232,8 +232,8 @@ struct vm_area_struct;
> *
> * %__GFP_ZERO returns a zeroed page on success.
> *
> - * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if
> - * __GFP_ZERO is set.
> + * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
> + * is being zeroed (either via __GFP_ZERO or via init_on_alloc).
> *
> * %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
> * on deallocation. Typically used for userspace pages. Currently only has an
> --
> 2.25.1
>

Hi Peter,

Could you check whether I correctly understood the intention of
__GFP_ZEROTAGS and give your ack on this patch and the next one?
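
For reference, the intended usage pattern reads roughly like the sketch
below; the exact GFP combination is only an illustration based on the
comment above, not something taken from the patch:

	/*
	 * Memory is zeroed via __GFP_ZERO, so __GFP_ZEROTAGS can zero
	 * the tags at the same time with little extra cost.
	 */
	struct page *page = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_ZERO |
				       __GFP_ZEROTAGS);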

Thanks!

Andrey Konovalov

Dec 7, 2021, 2:46:35 PM
to andrey.k...@linux.dev, Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas, Peter Collingbourne, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Andrew Morton, Linux Memory Management List, Will Deacon, Mark Rutland, Linux ARM, Evgenii Stepanov, LKML, Andrey Konovalov
On Mon, Dec 6, 2021 at 10:22 PM <andrey.k...@linux.dev> wrote:
>
> From: Andrey Konovalov <andre...@google.com>
>
> Hi,
>
> This patchset adds vmalloc tagging support for SW_TAGS and HW_TAGS
> KASAN modes.
>
> The tree with patches is available here:
>
> https://github.com/xairy/linux/tree/up-kasan-vmalloc-tags-v2
>
> About half of patches are cleanups I went for along the way. None of
> them seem to be important enough to go through stable, so I decided
> not to split them out into separate patches/series.
>
> I'll keep the patchset based on the mainline for now. Once the
> high-level issues are resolved, I'll rebase onto mm - there might be
> a few conflicts right now.
>
> The patchset is partially based on an early version of the HW_TAGS
> patchset by Vincenzo that had vmalloc support. Thus, I added a
> Co-developed-by tag into a few patches.
>
> SW_TAGS vmalloc tagging support is straightforward. It reuses all of
> the generic KASAN machinery, but uses shadow memory to store tags
> instead of magic values. Naturally, vmalloc tagging requires adding
> a few kasan_reset_tag() annotations to the vmalloc code.
>
> HW_TAGS vmalloc tagging support stands out. HW_TAGS KASAN is based on
> Arm MTE, which can only assign tags to physical memory. As a result,
> HW_TAGS KASAN only tags vmalloc() allocations, which are backed by
> page_alloc memory. It ignores vmap() and others.
>
> Changes in v1->v2:
> - Move memory init for vmalloc() into vmalloc code for HW_TAGS KASAN.
> - Minor fixes and code reshuffling, see patches for lists of changes.
>
> Thanks!

FTR, I found a few issues with a tag propagating to PC (in BPF JIT and
a few other places). Will address them in v3.

Catalin Marinas

Dec 10, 2021, 12:48:57 PM
to andrey.k...@linux.dev, Marco Elver, Alexander Potapenko, Vincenzo Frascino, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Mon, Dec 06, 2021 at 10:43:45PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> __GFP_ZEROTAGS should only be effective if memory is being zeroed.
> Currently, hardware tag-based KASAN violates this requirement.
>
> Fix by including an initialization check along with checking for
> __GFP_ZEROTAGS.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Reviewed-by: Alexander Potapenko <gli...@google.com>
> ---
> mm/kasan/hw_tags.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 0b8225add2e4..c643740b8599 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -199,11 +199,12 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
> * page_alloc.c.
> */
> bool init = !want_init_on_free() && want_init_on_alloc(flags);
> + bool init_tags = init && (flags & __GFP_ZEROTAGS);
>
> if (flags & __GFP_SKIP_KASAN_POISON)
> SetPageSkipKASanPoison(page);
>
> - if (flags & __GFP_ZEROTAGS) {
> + if (init_tags) {

You can probably leave this unchanged but add a WARN_ON_ONCE() if !init.
AFAICT there's only a single place where __GFP_ZEROTAGS is passed.
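
A minimal sketch of that suggestion (illustrative only, not an actual
patch) could look like:

	if (flags & __GFP_ZEROTAGS) {
		/* __GFP_ZEROTAGS only makes sense when memory is zeroed. */
		WARN_ON_ONCE(!init);
		/* ... existing tag-clearing path unchanged ... */
	}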

--
Catalin

Catalin Marinas

Dec 10, 2021, 12:55:45 PM
to andrey.k...@linux.dev, Marco Elver, Alexander Potapenko, Vincenzo Frascino, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Mon, Dec 06, 2021 at 10:43:54PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> Rename kasan_free_shadow to kasan_free_module_shadow and
> kasan_module_alloc to kasan_alloc_module_shadow.
>
> These functions are used to allocate/free shadow memory for kernel
> modules when KASAN_VMALLOC is not enabled. The new names better
> reflect their purpose.
>
> Also reword the comment next to their declaration to improve clarity.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>

For arm64:

Acked-by: Catalin Marinas <catalin...@arm.com>

Catalin Marinas

Dec 10, 2021, 1:04:12 PM
to andrey.k...@linux.dev, Marco Elver, Alexander Potapenko, Vincenzo Frascino, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
On Mon, Dec 06, 2021 at 10:44:09PM +0100, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> Generic KASAN already selects KASAN_VMALLOC to allow VMAP_STACK to be
> selected unconditionally, see commit acc3042d62cb9 ("arm64: Kconfig:
> select KASAN_VMALLOC if KANSAN_GENERIC is enabled").
>
> The same change is needed for SW_TAGS KASAN.
>
> HW_TAGS KASAN does not require enabling KASAN_VMALLOC for VMAP_STACK;
> they already work together as is. Still, selecting KASAN_VMALLOC makes
> sense to keep vmalloc() always protected. In case any bugs in KASAN's
> vmalloc() support are discovered, the command line kasan.vmalloc flag
> can be used to disable vmalloc() checking.
>
> This patch selects KASAN_VMALLOC for all KASAN modes for arm64.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>

Acked-by: Catalin Marinas <catalin...@arm.com>

I also had a look at the rest of the patches and they look fine to me
(even the init_tags comment, feel free to ignore it). I'll poke Vincenzo
next week to look at the patches with his co-developed-by tag.

--
Catalin

Vincenzo Frascino

Dec 13, 2021, 10:17:31 AM
to andrey.k...@linux.dev, Marco Elver, Alexander Potapenko, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
Hi Andrey,

On 12/6/21 9:44 PM, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> HW_TAGS KASAN relies on ARM Memory Tagging Extension (MTE). With MTE,
> a memory region must be mapped as MT_NORMAL_TAGGED to allow setting
> memory tags via MTE-specific instructions.
>
> This change adds proper protection bits to vmalloc() allocations.

Please avoid "this patch/this change" in patch description and use imperative
mode as if you are giving a command to the code base ([1] paragraph 2).

> These allocations are always backed by page_alloc pages, so the tags
> will actually be getting set on the corresponding physical memory.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Co-developed-by: Vincenzo Frascino <vincenzo...@arm.com>

With the change to the commit message:

Signed-off-by: Vincenzo Frascino <vincenzo...@arm.com>
Regards,
Vincenzo

Vincenzo Frascino

Dec 13, 2021, 10:34:11 AM
to andrey.k...@linux.dev, Marco Elver, Alexander Potapenko, Catalin Marinas, Peter Collingbourne, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, Andrew Morton, linu...@kvack.org, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
Hi Andrey,

On 12/6/21 9:44 PM, andrey.k...@linux.dev wrote:
> From: Andrey Konovalov <andre...@google.com>
>
> This patch adds vmalloc tagging support to HW_TAGS KASAN.
>

Can we reorganize the patch description in line with what I commented on patch 24?
Can we replace booleans with enumerations? It should make the code clearer at
the call sites.
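
One possible shape for the enumeration-based interface (the names are
made up for illustration, not taken from this series) might be:

	enum kasan_vmalloc_flags {
		KASAN_VMALLOC_NONE     = 0,
		KASAN_VMALLOC_VM_ALLOC = (1 << 0),
		KASAN_VMALLOC_INIT     = (1 << 1),
	};

	void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
				       unsigned int flags);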

...

With these changes:

Signed-off-by: Vincenzo Frascino <vincenzo...@arm.com>

---

Regards,
Vincenzo

andrey.k...@linux.dev

Dec 13, 2021, 4:52:02 PM
to Marco Elver, Alexander Potapenko, Andrew Morton, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasa...@googlegroups.com, linu...@kvack.org, Vincenzo Frascino, Catalin Marinas, Will Deacon, Mark Rutland, linux-ar...@lists.infradead.org, Peter Collingbourne, Evgenii Stepanov, linux-...@vger.kernel.org, Andrey Konovalov
From: Andrey Konovalov <andre...@google.com>

Hi,

This patchset adds vmalloc tagging support for SW_TAGS and HW_TAGS
KASAN modes.

The tree with patches is available here:

https://github.com/xairy/linux/tree/up-kasan-vmalloc-tags-v3-akpm

About half of patches are cleanups I went for along the way. None of
them seem to be important enough to go through stable, so I decided
not to split them out into separate patches/series.

The patchset is partially based on an early version of the HW_TAGS
patchset by Vincenzo that had vmalloc support. Thus, I added a
Co-developed-by tag into a few patches.

SW_TAGS vmalloc tagging support is straightforward. It reuses all of
the generic KASAN machinery, but uses shadow memory to store tags
instead of magic values. Naturally, vmalloc tagging requires adding
a few kasan_reset_tag() annotations to the vmalloc code.

HW_TAGS vmalloc tagging support stands out. HW_TAGS KASAN is based on
Arm MTE, which can only assign tags to physical memory. As a result,
HW_TAGS KASAN only tags vmalloc() allocations, which are backed by
page_alloc memory. It ignores vmap() and others.

Changes in v2->v3:
- Rebase onto mm.
- New patch: "kasan, arm64: reset pointer tags of vmapped stacks".
- New patch: "kasan, vmalloc: don't tag executable vmalloc allocations".
- New patch: "kasan, arm64: don't tag executable vmalloc allocations".
- Allowing enabling KASAN_VMALLOC with SW/HW_TAGS is moved to
"kasan: allow enabling KASAN_VMALLOC and SW/HW_TAGS", as this can only
be done once executable allocations are no longer tagged.
- Minor fixes, see patches for lists of changes.

Changes in v1->v2:
- Move memory init for vmalloc() into vmalloc code for HW_TAGS KASAN.
- Minor fixes and code reshuffling, see patches for lists of changes.

Thanks!

Andrey Konovalov (38):
kasan, page_alloc: deduplicate should_skip_kasan_poison
kasan, page_alloc: move tag_clear_highpage out of
kernel_init_free_pages
kasan, page_alloc: merge kasan_free_pages into free_pages_prepare
kasan, page_alloc: simplify kasan_poison_pages call site
kasan, page_alloc: init memory of skipped pages on free
kasan: drop skip_kasan_poison variable in free_pages_prepare
mm: clarify __GFP_ZEROTAGS comment
kasan: only apply __GFP_ZEROTAGS when memory is zeroed
kasan, page_alloc: refactor init checks in post_alloc_hook
kasan, page_alloc: merge kasan_alloc_pages into post_alloc_hook
kasan, page_alloc: combine tag_clear_highpage calls in post_alloc_hook
kasan, page_alloc: move SetPageSkipKASanPoison in post_alloc_hook
kasan, page_alloc: move kernel_init_free_pages in post_alloc_hook
kasan, page_alloc: simplify kasan_unpoison_pages call site
kasan: clean up metadata byte definitions
kasan: define KASAN_VMALLOC_INVALID for SW_TAGS
kasan, x86, arm64, s390: rename functions for modules shadow
kasan, vmalloc: drop outdated VM_KASAN comment
kasan: reorder vmalloc hooks
kasan: add wrappers for vmalloc hooks
kasan, vmalloc: reset tags in vmalloc functions
kasan, fork: reset pointer tags of vmapped stacks
kasan, arm64: reset pointer tags of vmapped stacks
kasan, vmalloc: add vmalloc tagging for SW_TAGS
kasan, vmalloc, arm64: mark vmalloc mappings as pgprot_tagged
kasan, vmalloc: don't unpoison VM_ALLOC pages before mapping
kasan, page_alloc: allow skipping unpoisoning for HW_TAGS
kasan, page_alloc: allow skipping memory init for HW_TAGS
kasan, vmalloc: add vmalloc tagging for HW_TAGS
kasan, vmalloc: don't tag executable vmalloc allocations
kasan, arm64: don't tag executable vmalloc allocations
kasan: mark kasan_arg_stacktrace as __initdata
kasan: simplify kasan_init_hw_tags
kasan: add kasan.vmalloc command line flag
kasan: allow enabling KASAN_VMALLOC and SW/HW_TAGS
arm64: select KASAN_VMALLOC for SW/HW_TAGS modes
kasan: documentation updates
kasan: improve vmalloc tests

Documentation/dev-tools/kasan.rst | 17 ++-
arch/arm64/Kconfig | 2 +-
arch/arm64/include/asm/vmalloc.h | 10 ++
arch/arm64/include/asm/vmap_stack.h | 5 +-
arch/arm64/kernel/module.c | 5 +-
arch/arm64/net/bpf_jit_comp.c | 3 +-
arch/s390/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/gfp.h | 28 +++--
include/linux/kasan.h | 97 +++++++++------
include/linux/vmalloc.h | 18 ++-
kernel/fork.c | 1 +
kernel/scs.c | 4 +-
lib/Kconfig.kasan | 20 +--
lib/test_kasan.c | 181 +++++++++++++++++++++++++++-
mm/kasan/common.c | 4 +-
mm/kasan/hw_tags.c | 166 ++++++++++++++++++++-----
mm/kasan/kasan.h | 16 ++-
mm/kasan/shadow.c | 63 ++++++----
mm/page_alloc.c | 150 +++++++++++++++--------
mm/vmalloc.c | 78 ++++++++++--
21 files changed, 668 insertions(+), 204 deletions(-)

--
2.25.1
