[PATCH RFC 0/8] kasan: hardware tag-based mode for production use on arm64

Andrey Konovalov

Oct 14, 2020, 4:44:45 PM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
This patchset is not complete (see the TODOs in the last patch), and I
haven't performed any benchmarking yet, but I would like to start the
discussion now and hear people's opinions regarding the questions mentioned
below.

=== Overview

This patchset adopts the existing hardware tag-based KASAN mode [1] for
use in production as a memory corruption mitigation. Hardware tag-based
KASAN relies on arm64 Memory Tagging Extension (MTE) [2] to perform memory
and pointer tagging. Please see [3] and [4] for detailed analysis of how
MTE helps to fight memory safety problems.

The current plan is to reuse CONFIG_KASAN_HW_TAGS for production, but add
a boot-time switch that allows choosing between a debugging mode, which
includes all KASAN features as they are, and a production mode, which only
includes the essentials like tag checking.

It is essential that switching between these modes doesn't require
rebuilding the kernel with different configs, as required by the Android
GKI initiative [5].

The last patch of this series adds a new boot time parameter called
kasan_mode, which can have the following values:

- "kasan_mode=on" - only production features
- "kasan_mode=debug" - all debug features
- "kasan_mode=off" - no checks at all (not implemented yet)

Currently outlined differences between "on" and "debug":

- "on" doesn't keep track of alloc/free stacks, and therefore doesn't
require the additional memory to store those
- "on" uses asyncronous tag checking (not implemented yet)

=== Questions

The intention with this kind of high-level switch is to hide the
implementation details. Arguably, we could add multiple switches that
separately control each KASAN or MTE feature, but I'm not sure there's
much value in that.

Does this make sense? Any preference regarding the name of the parameter
and its values?

What should be the default when the parameter is not specified? I would
argue that it should be "debug" (for hardware that supports MTE, otherwise
"off"), as it's the implied default for all other KASAN modes.

Should we somehow control whether to panic the kernel on a tag fault?
Another boot time parameter perhaps?

Any ideas as to how to properly estimate the slowdown? As there's no
MTE-enabled hardware yet, the only way to test these patches is to use an
emulator (like QEMU). The delay added by the emulator (for setting and
checking the tags) differs from the hardware delay, which skews the
results.

A question to KASAN maintainers: what would be the best way to support the
"off" mode? I see two potential approaches: add a check into each KASAN
callback (easier to implement, but we'd still call the callbacks, even
though they'd immediately return), or add inline header wrappers that do
the same.
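
For illustration, here's a rough sketch of the wrapper approach (this is
only a sketch to anchor the discussion; the names are tentative and this
series doesn't implement it):

/*
 * Sketch only (not part of this series): an __always_inline wrapper in
 * include/linux/kasan.h that guards the out-of-line callback with a
 * static key, so that "off" costs a single patched branch per callback.
 */
#include <linux/jump_label.h>

struct kmem_cache;

DECLARE_STATIC_KEY_FALSE(kasan_enabled);

bool __kasan_slab_free(struct kmem_cache *cache, void *object,
		       unsigned long ip);

static __always_inline bool kasan_slab_free(struct kmem_cache *cache,
					    void *object, unsigned long ip)
{
	if (static_branch_likely(&kasan_enabled))
		return __kasan_slab_free(cache, object, ip);
	return false;
}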

=== Notes

This patchset is available here:

https://github.com/xairy/linux/tree/up-prod-mte-rfc1

and on Gerrit here:

https://linux-review.googlesource.com/c/linux/kernel/git/torvalds/linux/+/3460

This patchset is based on v5 of the "kasan: add hardware tag-based mode
for arm64" patchset [1].

For testing in QEMU, hardware tag-based KASAN requires:

1. QEMU built from master [6] (use "-machine virt,mte=on -cpu max" arguments
to run).
2. GCC version 10.

[1] https://lore.kernel.org/linux-arm-kernel/cover.160253539...@google.com/
[2] https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/enhancing-memory-safety
[3] https://arxiv.org/pdf/1802.09517.pdf
[4] https://github.com/microsoft/MSRC-Security-Research/blob/master/papers/2020/Security%20analysis%20of%20memory%20tagging.pdf
[5] https://source.android.com/devices/architecture/kernel/generic-kernel-image
[6] https://github.com/qemu/qemu

Andrey Konovalov (8):
kasan: simplify quarantine_put call
kasan: rename get_alloc/free_info
kasan: introduce set_alloc_info
kasan: unpoison stack only with CONFIG_KASAN_STACK
kasan: mark kasan_init_tags as __init
kasan, arm64: move initialization message
arm64: kasan: Add system_supports_tags helper
kasan: add and integrate kasan_mode boot param

arch/arm64/include/asm/memory.h | 1 +
arch/arm64/kernel/sleep.S | 2 +-
arch/arm64/mm/kasan_init.c | 3 ++
arch/x86/kernel/acpi/wakeup_64.S | 2 +-
include/linux/kasan.h | 14 ++---
mm/kasan/common.c | 90 ++++++++++++++++++--------------
mm/kasan/generic.c | 18 ++++---
mm/kasan/hw_tags.c | 63 ++++++++++++++++++++--
mm/kasan/kasan.h | 25 ++++++---
mm/kasan/quarantine.c | 5 +-
mm/kasan/report.c | 22 +++++---
mm/kasan/report_sw_tags.c | 2 +-
mm/kasan/sw_tags.c | 14 +++--
13 files changed, 182 insertions(+), 79 deletions(-)

--
2.28.0.1011.ga647a8990f-goog

Andrey Konovalov

Oct 14, 2020, 4:44:48 PM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Move get_free_info() call into quarantine_put() to simplify the call site.

No functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Iab0f04e7ebf8d83247024b7190c67c3c34c7940f
---
mm/kasan/common.c | 2 +-
mm/kasan/kasan.h | 5 ++---
mm/kasan/quarantine.c | 3 ++-
3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2bb0ef6da6bd..5712c66c11c1 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -308,7 +308,7 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,

kasan_set_free_info(cache, object, tag);

- quarantine_put(get_free_info(cache, object), cache);
+ quarantine_put(cache, object);

return IS_ENABLED(CONFIG_KASAN_GENERIC);
}
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 32ddb18541e3..a3bf60ceb5e1 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -214,12 +214,11 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,

#if defined(CONFIG_KASAN_GENERIC) && \
(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
-void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
+void quarantine_put(struct kmem_cache *cache, void *object);
void quarantine_reduce(void);
void quarantine_remove_cache(struct kmem_cache *cache);
#else
-static inline void quarantine_put(struct kasan_free_meta *info,
- struct kmem_cache *cache) { }
+static inline void quarantine_put(struct kmem_cache *cache, void *object) { }
static inline void quarantine_reduce(void) { }
static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
#endif
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 580ff5610fc1..a0792f0d6d0f 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -161,11 +161,12 @@ static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache)
qlist_init(q);
}

-void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
+void quarantine_put(struct kmem_cache *cache, void *object)
{
unsigned long flags;
struct qlist_head *q;
struct qlist_head temp = QLIST_INIT;
+ struct kasan_free_meta *info = get_free_info(cache, object);

/*
* Note: irq must be disabled until after we move the batch to the
--
2.28.0.1011.ga647a8990f-goog

Andrey Konovalov

Oct 14, 2020, 4:44:50 PM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Rename get_alloc_info() and get_free_info() to kasan_get_alloc_meta()
and kasan_get_free_meta() to better reflect what those do, and avoid
confusion with kasan_set_free_info().

No functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Ib6e4ba61c8b12112b403d3479a9799ac8fff8de1
---
mm/kasan/common.c | 16 ++++++++--------
mm/kasan/generic.c | 12 ++++++------
mm/kasan/hw_tags.c | 4 ++--
mm/kasan/kasan.h | 8 ++++----
mm/kasan/quarantine.c | 4 ++--
mm/kasan/report.c | 12 ++++++------
mm/kasan/report_sw_tags.c | 2 +-
mm/kasan/sw_tags.c | 4 ++--
8 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 5712c66c11c1..8fd04415d8f4 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -175,14 +175,14 @@ size_t kasan_metadata_size(struct kmem_cache *cache)
sizeof(struct kasan_free_meta) : 0);
}

-struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
- const void *object)
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+ const void *object)
{
return (void *)reset_tag(object) + cache->kasan_info.alloc_meta_offset;
}

-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
- const void *object)
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+ const void *object)
{
BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
@@ -259,13 +259,13 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
const void *object)
{
- struct kasan_alloc_meta *alloc_info;
+ struct kasan_alloc_meta *alloc_meta;

if (!(cache->flags & SLAB_KASAN))
return (void *)object;

- alloc_info = get_alloc_info(cache, object);
- __memset(alloc_info, 0, sizeof(*alloc_info));
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ __memset(alloc_meta, 0, sizeof(*alloc_meta));

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
object = set_tag(object, assign_tag(cache, object, true, false));
@@ -345,7 +345,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_KMALLOC_REDZONE);

if (cache->flags & SLAB_KASAN)
- kasan_set_track(&get_alloc_info(cache, object)->alloc_track, flags);
+ kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);

return set_tag(object, tag);
}
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index e1af3b6c53b8..de6b3f03a023 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -331,7 +331,7 @@ void kasan_record_aux_stack(void *addr)
{
struct page *page = kasan_addr_to_page(addr);
struct kmem_cache *cache;
- struct kasan_alloc_meta *alloc_info;
+ struct kasan_alloc_meta *alloc_meta;
void *object;

if (!(page && PageSlab(page)))
@@ -339,13 +339,13 @@ void kasan_record_aux_stack(void *addr)

cache = page->slab_cache;
object = nearest_obj(cache, page, addr);
- alloc_info = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

/*
* record the last two call_rcu() call stacks.
*/
- alloc_info->aux_stack[1] = alloc_info->aux_stack[0];
- alloc_info->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
+ alloc_meta->aux_stack[1] = alloc_meta->aux_stack[0];
+ alloc_meta->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
}

void kasan_set_free_info(struct kmem_cache *cache,
@@ -353,7 +353,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
{
struct kasan_free_meta *free_meta;

- free_meta = get_free_info(cache, object);
+ free_meta = kasan_get_free_meta(cache, object);
kasan_set_track(&free_meta->free_track, GFP_NOWAIT);

/*
@@ -367,5 +367,5 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
{
if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_KMALLOC_FREETRACK)
return NULL;
- return &get_free_info(cache, object)->free_track;
+ return &kasan_get_free_meta(cache, object)->free_track;
}
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 7f0568df2a93..2a38885014e3 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -56,7 +56,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
{
struct kasan_alloc_meta *alloc_meta;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);
kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT);
}

@@ -65,6 +65,6 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
{
struct kasan_alloc_meta *alloc_meta;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);
return &alloc_meta->free_track[0];
}
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index a3bf60ceb5e1..e5b8367a07f2 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -148,10 +148,10 @@ struct kasan_free_meta {
#endif
};

-struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
- const void *object);
-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
- const void *object);
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+ const void *object);
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+ const void *object);

void kasan_poison_memory(const void *address, size_t size, u8 value);

diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index a0792f0d6d0f..0da3d37e1589 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -166,7 +166,7 @@ void quarantine_put(struct kmem_cache *cache, void *object)
unsigned long flags;
struct qlist_head *q;
struct qlist_head temp = QLIST_INIT;
- struct kasan_free_meta *info = get_free_info(cache, object);
+ struct kasan_free_meta *meta = kasan_get_free_meta(cache, object);

/*
* Note: irq must be disabled until after we move the batch to the
@@ -179,7 +179,7 @@ void quarantine_put(struct kmem_cache *cache, void *object)
local_irq_save(flags);

q = this_cpu_ptr(&cpu_quarantine);
- qlist_put(q, &info->quarantine_link, cache->size);
+ qlist_put(q, &meta->quarantine_link, cache->size);
if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
qlist_move_all(q, &temp);

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index f8817d5685a7..dee5350b459c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -162,12 +162,12 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
static void describe_object(struct kmem_cache *cache, void *object,
const void *addr, u8 tag)
{
- struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
+ struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);

if (cache->flags & SLAB_KASAN) {
struct kasan_track *free_track;

- print_track(&alloc_info->alloc_track, "Allocated");
+ print_track(&alloc_meta->alloc_track, "Allocated");
pr_err("\n");
free_track = kasan_get_free_track(cache, object, tag);
if (free_track) {
@@ -176,14 +176,14 @@ static void describe_object(struct kmem_cache *cache, void *object,
}

#ifdef CONFIG_KASAN_GENERIC
- if (alloc_info->aux_stack[0]) {
+ if (alloc_meta->aux_stack[0]) {
pr_err("Last call_rcu():\n");
- print_stack(alloc_info->aux_stack[0]);
+ print_stack(alloc_meta->aux_stack[0]);
pr_err("\n");
}
- if (alloc_info->aux_stack[1]) {
+ if (alloc_meta->aux_stack[1]) {
pr_err("Second to last call_rcu():\n");
- print_stack(alloc_info->aux_stack[1]);
+ print_stack(alloc_meta->aux_stack[1]);
pr_err("\n");
}
#endif
diff --git a/mm/kasan/report_sw_tags.c b/mm/kasan/report_sw_tags.c
index aebc44a29e83..317100fd95b9 100644
--- a/mm/kasan/report_sw_tags.c
+++ b/mm/kasan/report_sw_tags.c
@@ -46,7 +46,7 @@ const char *get_bug_type(struct kasan_access_info *info)
if (page && PageSlab(page)) {
cache = page->slab_cache;
object = nearest_obj(cache, page, (void *)addr);
- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

for (i = 0; i < KASAN_NR_FREE_STACKS; i++)
if (alloc_meta->free_pointer_tag[i] == tag)
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index ccc35a311179..c10863a45775 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -172,7 +172,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
struct kasan_alloc_meta *alloc_meta;
u8 idx = 0;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

#ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
idx = alloc_meta->free_track_idx;
@@ -189,7 +189,7 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
struct kasan_alloc_meta *alloc_meta;
int i = 0;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

#ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
--
2.28.0.1011.ga647a8990f-goog

Andrey Konovalov

Oct 14, 2020, 4:44:52 PM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Add a set_alloc_info() helper and move kasan_set_track() into it. This
will simplify the code for one of the upcoming changes.

No functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I0316193cbb4ecc9b87b7c2eee0dd79f8ec908c1a
---
mm/kasan/common.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8fd04415d8f4..a880e5a547ed 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -318,6 +318,11 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
return __kasan_slab_free(cache, object, ip, true);
}

+static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+{
+ kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
+}
+
static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags, bool keep_tag)
{
@@ -345,7 +350,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_KMALLOC_REDZONE);

if (cache->flags & SLAB_KASAN)
- kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
+ set_alloc_info(cache, (void *)object, flags);

return set_tag(object, tag);
}
--
2.28.0.1011.ga647a8990f-goog

Andrey Konovalov

Oct 14, 2020, 4:44:55 PM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
There's a config option CONFIG_KASAN_STACK that has to be enabled for
KASAN to use stack instrumentation and perform validity checks for
stack variables.

There's no need to unpoison the stack when CONFIG_KASAN_STACK is not
enabled. Only call kasan_unpoison_task_stack[_below]() when
CONFIG_KASAN_STACK is enabled.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/If8a891e9fe01ea543e00b576852685afec0887e3
---
arch/arm64/kernel/sleep.S | 2 +-
arch/x86/kernel/acpi/wakeup_64.S | 2 +-
include/linux/kasan.h | 10 ++++++----
mm/kasan/common.c | 2 ++
4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index ba40d57757d6..bdadfa56b40e 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -133,7 +133,7 @@ SYM_FUNC_START(_cpu_resume)
*/
bl cpu_do_resume

-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
mov x0, sp
bl kasan_unpoison_task_stack_below
#endif
diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
index c8daa92f38dc..5d3a0b8fd379 100644
--- a/arch/x86/kernel/acpi/wakeup_64.S
+++ b/arch/x86/kernel/acpi/wakeup_64.S
@@ -112,7 +112,7 @@ SYM_FUNC_START(do_suspend_lowlevel)
movq pt_regs_r14(%rax), %r14
movq pt_regs_r15(%rax), %r15

-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
/*
* The suspend path may have poisoned some areas deeper in the stack,
* which we now need to unpoison.
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 3f3f541e5d5f..7be9fb9146ac 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -68,8 +68,6 @@ static inline void kasan_disable_current(void) {}

void kasan_unpoison_memory(const void *address, size_t size);

-void kasan_unpoison_task_stack(struct task_struct *task);
-
void kasan_alloc_pages(struct page *page, unsigned int order);
void kasan_free_pages(struct page *page, unsigned int order);

@@ -114,8 +112,6 @@ void kasan_restore_multi_shot(bool enabled);

static inline void kasan_unpoison_memory(const void *address, size_t size) {}

-static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
-
static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}

@@ -167,6 +163,12 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }

#endif /* CONFIG_KASAN */

+#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
+void kasan_unpoison_task_stack(struct task_struct *task);
+#else
+static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
+#endif
+
#ifdef CONFIG_KASAN_GENERIC

void kasan_cache_shrink(struct kmem_cache *cache);
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a880e5a547ed..a3e67d49b893 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -58,6 +58,7 @@ void kasan_disable_current(void)
}
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

+#if CONFIG_KASAN_STACK
static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
{
void *base = task_stack_page(task);
@@ -84,6 +85,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)

kasan_unpoison_memory(base, watermark - base);
}
+#endif /* CONFIG_KASAN_STACK */

void kasan_alloc_pages(struct page *page, unsigned int order)
{
--
2.28.0.1011.ga647a8990f-goog

Andrey Konovalov

Oct 14, 2020, 4:44:56 PM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Similarly to kasan_init(), mark kasan_init_tags() as __init.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I8792e22f1ca5a703c5e979969147968a99312558
---
include/linux/kasan.h | 4 ++--
mm/kasan/hw_tags.c | 2 +-
mm/kasan/sw_tags.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7be9fb9146ac..af8317b416a8 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -185,7 +185,7 @@ static inline void kasan_record_aux_stack(void *ptr) {}

#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)

-void kasan_init_tags(void);
+void __init kasan_init_tags(void);

void *kasan_reset_tag(const void *addr);

@@ -194,7 +194,7 @@ bool kasan_report(unsigned long addr, size_t size,

#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */

-static inline void kasan_init_tags(void) { }
+static inline void __init kasan_init_tags(void) { }

static inline void *kasan_reset_tag(const void *addr)
{
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 2a38885014e3..0128062320d5 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -15,7 +15,7 @@

#include "kasan.h"

-void kasan_init_tags(void)
+void __init kasan_init_tags(void)
{
init_tags(KASAN_TAG_MAX);
}
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index c10863a45775..bf1422282bb5 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -35,7 +35,7 @@

static DEFINE_PER_CPU(u32, prng_state);

-void kasan_init_tags(void)
+void __init kasan_init_tags(void)
{
int cpu;

--
2.28.0.1011.ga647a8990f-goog

Andrey Konovalov

Oct 14, 2020, 4:44:59 PM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Tag-based KASAN modes are initialized with kasan_init_tags(), as opposed
to kasan_init(), which the generic mode uses. Move the initialization
message for tag-based modes into kasan_init_tags().

Also fix pr_fmt() usage for KASAN code: generic mode doesn't need it,
tag-based modes should use "kasan:" instead of KBUILD_MODNAME.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Idfd1e50625ffdf42dfc3dbf7455b11bd200a0a49
---
arch/arm64/mm/kasan_init.c | 3 +++
mm/kasan/generic.c | 2 --
mm/kasan/hw_tags.c | 4 ++++
mm/kasan/sw_tags.c | 4 +++-
4 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index b6b9d55bb72e..8f17fa834b62 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -290,5 +290,8 @@ void __init kasan_init(void)
{
kasan_init_shadow();
kasan_init_depth();
+#if defined(CONFIG_KASAN_GENERIC)
+ /* CONFIG_KASAN_SW/HW_TAGS also requires kasan_init_tags(). */
pr_info("KernelAddressSanitizer initialized\n");
+#endif
}
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index de6b3f03a023..d259e4c3aefd 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -9,8 +9,6 @@
* Andrey Konovalov <andre...@gmail.com>
*/

-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
#include <linux/export.h>
#include <linux/interrupt.h>
#include <linux/init.h>
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 0128062320d5..b372421258c8 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -6,6 +6,8 @@
* Author: Andrey Konovalov <andre...@google.com>
*/

+#define pr_fmt(fmt) "kasan: " fmt
+
#include <linux/kasan.h>
#include <linux/kernel.h>
#include <linux/memory.h>
@@ -18,6 +20,8 @@
void __init kasan_init_tags(void)
{
init_tags(KASAN_TAG_MAX);
+
+ pr_info("KernelAddressSanitizer initialized\n");
}

void *kasan_reset_tag(const void *addr)
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index bf1422282bb5..099af6dc8f7e 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -6,7 +6,7 @@
* Author: Andrey Konovalov <andre...@google.com>
*/

-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define pr_fmt(fmt) "kasan: " fmt

#include <linux/export.h>
#include <linux/interrupt.h>
@@ -41,6 +41,8 @@ void __init kasan_init_tags(void)

for_each_possible_cpu(cpu)
per_cpu(prng_state, cpu) = (u32)get_cycles();
+
+ pr_info("KernelAddressSanitizer initialized\n");
}

/*
--
2.28.0.1011.ga647a8990f-goog

Andrey Konovalov

Oct 14, 2020, 4:45:02 PM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Add a helper that can be called from generic code to check whether the
system supports memory tagging.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Ib4b56a42c57c6293df29a0cdfee334c3ca7bdab4
---
arch/arm64/include/asm/memory.h | 1 +
mm/kasan/kasan.h | 4 ++++
2 files changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b5d6b824c21c..6d2b7c54780e 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -232,6 +232,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
}

#ifdef CONFIG_KASAN_HW_TAGS
+#define arch_system_supports_tags() system_supports_mte()
#define arch_init_tags(max_tag) mte_init_tags(max_tag)
#define arch_get_random_tag() mte_get_random_tag()
#define arch_get_mem_tag(addr) mte_get_mem_tag(addr)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index e5b8367a07f2..47d6074c7958 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -257,6 +257,9 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define reset_tag(addr) ((void *)arch_kasan_reset_tag(addr))
#define get_tag(addr) arch_kasan_get_tag(addr)

+#ifndef arch_system_supports_tags
+#define arch_system_supports_tags() (false)
+#endif
#ifndef arch_init_tags
#define arch_init_tags(max_tag)
#endif
@@ -270,6 +273,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define arch_set_mem_tag_range(addr, size, tag) ((void *)(addr))
#endif

+#define system_supports_tags() arch_system_supports_tags()
#define init_tags(max_tag) arch_init_tags(max_tag)
#define get_random_tag() arch_get_random_tag()
#define get_mem_tag(addr) arch_get_mem_tag(addr)
--
2.28.0.1011.ga647a8990f-goog

Andrey Konovalov

Oct 14, 2020, 4:45:04 PM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
TODO: no meaningful description here yet, please see the cover letter
for this RFC series.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
---
mm/kasan/common.c | 69 +++++++++++++++++++++++++---------------------
mm/kasan/generic.c | 4 +++
mm/kasan/hw_tags.c | 53 +++++++++++++++++++++++++++++++++++
mm/kasan/kasan.h | 8 ++++++
mm/kasan/report.c | 10 +++++--
mm/kasan/sw_tags.c | 4 +++
6 files changed, 115 insertions(+), 33 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a3e67d49b893..d642d5fce1e5 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -135,35 +135,37 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
unsigned int redzone_size;
int redzone_adjust;

- /* Add alloc meta. */
- cache->kasan_info.alloc_meta_offset = *size;
- *size += sizeof(struct kasan_alloc_meta);
-
- /* Add free meta. */
- if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
- cache->object_size < sizeof(struct kasan_free_meta))) {
- cache->kasan_info.free_meta_offset = *size;
- *size += sizeof(struct kasan_free_meta);
- }
-
- redzone_size = optimal_redzone(cache->object_size);
- redzone_adjust = redzone_size - (*size - cache->object_size);
- if (redzone_adjust > 0)
- *size += redzone_adjust;
-
- *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
- max(*size, cache->object_size + redzone_size));
+ if (static_branch_unlikely(&kasan_debug)) {
+ /* Add alloc meta. */
+ cache->kasan_info.alloc_meta_offset = *size;
+ *size += sizeof(struct kasan_alloc_meta);
+
+ /* Add free meta. */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+ cache->object_size < sizeof(struct kasan_free_meta))) {
+ cache->kasan_info.free_meta_offset = *size;
+ *size += sizeof(struct kasan_free_meta);
+ }

- /*
- * If the metadata doesn't fit, don't enable KASAN at all.
- */
- if (*size <= cache->kasan_info.alloc_meta_offset ||
- *size <= cache->kasan_info.free_meta_offset) {
- cache->kasan_info.alloc_meta_offset = 0;
- cache->kasan_info.free_meta_offset = 0;
- *size = orig_size;
- return;
+ redzone_size = optimal_redzone(cache->object_size);
+ redzone_adjust = redzone_size - (*size - cache->object_size);
+ if (redzone_adjust > 0)
+ *size += redzone_adjust;
+
+ *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
+ max(*size, cache->object_size + redzone_size));
+
+ /*
+ * If the metadata doesn't fit, don't enable KASAN at all.
+ */
+ if (*size <= cache->kasan_info.alloc_meta_offset ||
+ *size <= cache->kasan_info.free_meta_offset) {
+ cache->kasan_info.alloc_meta_offset = 0;
+ cache->kasan_info.free_meta_offset = 0;
+ *size = orig_size;
+ return;
+ }
}

*flags |= SLAB_KASAN;
@@ -180,6 +182,7 @@ size_t kasan_metadata_size(struct kmem_cache *cache)
struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
const void *object)
{
+ WARN_ON(!static_branch_unlikely(&kasan_debug));
return (void *)reset_tag(object) + cache->kasan_info.alloc_meta_offset;
}

@@ -187,6 +190,7 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
const void *object)
{
BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
+ WARN_ON(!static_branch_unlikely(&kasan_debug));
return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
}

@@ -266,8 +270,10 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
if (!(cache->flags & SLAB_KASAN))
return (void *)object;

- alloc_meta = kasan_get_alloc_meta(cache, object);
- __memset(alloc_meta, 0, sizeof(*alloc_meta));
+ if (static_branch_unlikely(&kasan_debug)) {
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ __memset(alloc_meta, 0, sizeof(*alloc_meta));
+ }

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
object = set_tag(object, assign_tag(cache, object, true, false));
@@ -305,6 +311,7 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);

if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
+ !static_branch_unlikely(&kasan_debug) ||
unlikely(!(cache->flags & SLAB_KASAN)))
return false;

@@ -351,7 +358,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
KASAN_KMALLOC_REDZONE);

- if (cache->flags & SLAB_KASAN)
+ if (static_branch_unlikely(&kasan_debug) && cache->flags & SLAB_KASAN)
set_alloc_info(cache, (void *)object, flags);

return set_tag(object, tag);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index d259e4c3aefd..9d968eaedc98 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -33,6 +33,10 @@
#include "kasan.h"
#include "../slab.h"

+/* See the comments in hw_tags.c */
+DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
+DEFINE_STATIC_KEY_TRUE_RO(kasan_debug);
+
/*
* All functions below always inlined so compiler could
* perform better optimizations in each of __asan_loadX/__assn_storeX
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index b372421258c8..fc6ab1c8b155 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -8,6 +8,8 @@

#define pr_fmt(fmt) "kasan: " fmt

+#include <linux/init.h>
+#include <linux/jump_label.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
#include <linux/memory.h>
@@ -17,8 +19,57 @@

#include "kasan.h"

+enum kasan_mode {
+ KASAN_MODE_OFF,
+ KASAN_MODE_ON,
+ KASAN_MODE_DEBUG,
+};
+
+static enum kasan_mode kasan_mode __ro_after_init;
+
+/* Whether KASAN is enabled at all. */
+/* TODO: ideally no KASAN callbacks when this is disabled. */
+DEFINE_STATIC_KEY_FALSE_RO(kasan_enabled);
+
+/* Whether to collect debugging info, e.g. alloc/free stack traces. */
+DEFINE_STATIC_KEY_FALSE_RO(kasan_debug);
+
+/* Whether to use syncronous or asynchronous tag checking. */
+static bool kasan_sync __ro_after_init;
+
+static int __init early_kasan_mode(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (strcmp(arg, "on") == 0)
+ kasan_mode = KASAN_MODE_ON;
+ else if (strcmp(arg, "debug") == 0)
+ kasan_mode = KASAN_MODE_DEBUG;
+ return 0;
+}
+early_param("kasan_mode", early_kasan_mode);
+
void __init kasan_init_tags(void)
{
+ /* TODO: system_supports_tags() always returns 0 here, fix. */
+ if (0 /*!system_supports_tags()*/)
+ return;
+
+ switch (kasan_mode) {
+ case KASAN_MODE_OFF:
+ return;
+ case KASAN_MODE_ON:
+ static_branch_enable(&kasan_enabled);
+ break;
+ case KASAN_MODE_DEBUG:
+ static_branch_enable(&kasan_enabled);
+ static_branch_enable(&kasan_debug);
+ kasan_sync = true;
+ break;
+ }
+
+ /* TODO: choose between sync and async based on kasan_sync. */
init_tags(KASAN_TAG_MAX);

pr_info("KernelAddressSanitizer initialized\n");
@@ -60,6 +111,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
{
struct kasan_alloc_meta *alloc_meta;

+ WARN_ON(!static_branch_unlikely(&kasan_debug));
alloc_meta = kasan_get_alloc_meta(cache, object);
kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT);
}
@@ -69,6 +121,7 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
{
struct kasan_alloc_meta *alloc_meta;

+ WARN_ON(!static_branch_unlikely(&kasan_debug));
alloc_meta = kasan_get_alloc_meta(cache, object);
return &alloc_meta->free_track[0];
}
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 47d6074c7958..3712e7a39717 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -279,6 +279,14 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define get_mem_tag(addr) arch_get_mem_tag(addr)
#define set_mem_tag_range(addr, size, tag) arch_set_mem_tag_range((addr), (size), (tag))

+#ifdef CONFIG_KASAN_HW_TAGS
+DECLARE_STATIC_KEY_FALSE(kasan_enabled);
+DECLARE_STATIC_KEY_FALSE(kasan_debug);
+#else
+DECLARE_STATIC_KEY_TRUE(kasan_enabled);
+DECLARE_STATIC_KEY_TRUE(kasan_debug);
+#endif
+
/*
* Exported functions for interfaces called from assembly or from generated
* code. Declarations here to avoid warning about missing declarations.
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index dee5350b459c..ae956a29ad4e 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -159,8 +159,8 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
(void *)(object_addr + cache->object_size));
}

-static void describe_object(struct kmem_cache *cache, void *object,
- const void *addr, u8 tag)
+static void describe_object_stacks(struct kmem_cache *cache, void *object,
+ const void *addr, u8 tag)
{
struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);

@@ -188,7 +188,13 @@ static void describe_object(struct kmem_cache *cache, void *object,
}
#endif
}
+}

+static void describe_object(struct kmem_cache *cache, void *object,
+ const void *addr, u8 tag)
+{
+ if (static_branch_unlikely(&kasan_debug))
+ describe_object_stacks(cache, object, addr, tag);
describe_object_addr(cache, object, addr);
}

diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index 099af6dc8f7e..50e797a16e17 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -33,6 +33,10 @@
#include "kasan.h"
#include "../slab.h"

+/* See the comments in hw_tags.c */
+DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
+DEFINE_STATIC_KEY_TRUE_RO(kasan_debug);
+
static DEFINE_PER_CPU(u32, prng_state);

void __init kasan_init_tags(void)
--
2.28.0.1011.ga647a8990f-goog

Marco Elver

Oct 15, 2020, 6:23:25 AM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
On Wed, 14 Oct 2020 at 22:44, Andrey Konovalov <andre...@google.com> wrote:
>
> Similarly to kasan_init(), mark kasan_init_tags() as __init.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I8792e22f1ca5a703c5e979969147968a99312558
> ---
> include/linux/kasan.h | 4 ++--
> mm/kasan/hw_tags.c | 2 +-
> mm/kasan/sw_tags.c | 2 +-
> 3 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 7be9fb9146ac..af8317b416a8 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -185,7 +185,7 @@ static inline void kasan_record_aux_stack(void *ptr) {}
>
> #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
>
> -void kasan_init_tags(void);
> +void __init kasan_init_tags(void);
>
> void *kasan_reset_tag(const void *addr);
>
> @@ -194,7 +194,7 @@ bool kasan_report(unsigned long addr, size_t size,
>
> #else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
>
> -static inline void kasan_init_tags(void) { }
> +static inline void __init kasan_init_tags(void) { }

Should we mark empty static inline functions __init? __init comes with
a bunch of other attributes, but hopefully they don't interfere with
inlining?
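
For reference, __init expands to roughly the following in
include/linux/init.h (approximate; the exact attribute list is config-
and version-dependent):

#define __init	__section(.init.text) __cold __latent_entropy __noinitretpoline

The section placement is the part that would be a problem if the stub
ever failed to inline.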

Marco Elver

Oct 15, 2020, 9:56:48 AM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
The WARN_ON condition itself should be unlikely, so that would imply
that the static branch here should be likely since you're negating it.
And AFAIK, this function should only be called if kasan_debug is true.

> return (void *)reset_tag(object) + cache->kasan_info.alloc_meta_offset;
> }
>
> @@ -187,6 +190,7 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> const void *object)
> {
> BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
> + WARN_ON(!static_branch_unlikely(&kasan_debug));

Same here.
s/syncronous/synchronous/

> +static int __init early_kasan_mode(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (strcmp(arg, "on") == 0)
> + kasan_mode = KASAN_MODE_ON;
> + else if (strcmp(arg, "debug") == 0)

s/strcmp(..) == 0/!strcmp(..)/ ?
What actually happens if any of these are called with !kasan_debug and
the warning triggers? Is it still valid to execute the below, or
should it bail out? Or possibly even disable KASAN entirely?

Marco Elver

Oct 15, 2020, 10:41:59 AM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
On Wed, 14 Oct 2020 at 22:44, Andrey Konovalov <andre...@google.com> wrote:
KASAN itself used to be a debugging tool only. So introducing an "on"
mode which no longer follows this convention may be confusing.
Instead, maybe the following might be less confusing:

"full" - current "debug", normal KASAN, all debugging help available.
"opt" - current "on", optimized mode for production.
"on" - automatic selection => chooses "full" if CONFIG_DEBUG_KERNEL,
"opt" otherwise.
"off" - as before.

Also, if there is no other kernel boot parameter named "kasan" yet,
maybe it could just be "kasan=..." ?

> What should be the default when the parameter is not specified? I would
> argue that it should be "debug" (for hardware that supports MTE, otherwise
> "off"), as it's the implied default for all other KASAN modes.

Perhaps we could make this dependent on CONFIG_DEBUG_KERNEL as above.
I do not think that having the full/debug KASAN enabled on production
kernels adds any value because for it to be useful requires somebody
to actually look at the stacktraces; I think that choice should be
made explicitly if it's a production kernel. My guess is that we'll
save explaining performance differences and resulting headaches for
ourselves and others that way.

> Should we somehow control whether to panic the kernel on a tag fault?
> Another boot time parameter perhaps?

It already respects panic_on_warn, correct?

> Any ideas as to how to properly estimate the slowdown? As there's no
> MTE-enabled hardware yet, the only way to test these patches is to use an
> emulator (like QEMU). The delay added by the emulator (for setting and
> checking the tags) differs from the hardware delay, which skews the
> results.
>
> A question to KASAN maintainers: what would be the best way to support the
> "off" mode? I see two potential approaches: add a check into each KASAN
> callback (easier to implement, but we'd still call the callbacks, even
> though they'd immediately return), or add inline header wrappers that do
> the same.
[...]

Thanks,
-- Marco

Andrey Konovalov

Oct 16, 2020, 9:04:52 AM
to Marco Elver, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
I think it's a good idea to drop __init, as the function call should
be optimized away anyway.

Thanks!

Andrey Konovalov

Oct 16, 2020, 9:10:55 AM
to Marco Elver, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
On Thu, Oct 15, 2020 at 3:56 PM Marco Elver <el...@google.com> wrote:
>
> On Wed, 14 Oct 2020 at 22:45, Andrey Konovalov <andre...@google.com> wrote:
> >

[...]

> > @@ -180,6 +182,7 @@ size_t kasan_metadata_size(struct kmem_cache *cache)
> > struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
> > const void *object)
> > {
> > + WARN_ON(!static_branch_unlikely(&kasan_debug));
>
> The WARN_ON condition itself should be unlikely, so that would imply
> that the static branch here should be likely since you're negating it.

Here I was thinking that we should optimize for the production use
case, which shouldn't have kasan_debug enabled, hence the unlikely.
But technically this function shouldn't be called in production
anyway, so likely will do fine too.

> And AFAIK, this function should only be called if kasan_debug is true.

Yes, this WARN_ON is to make sure this doesn't happen.

[...]

> > +/* Whether to use syncronous or asynchronous tag checking. */
> > +static bool kasan_sync __ro_after_init;
>
> s/syncronous/synchronous/

Ack.

>
> > +static int __init early_kasan_mode(char *arg)
> > +{
> > + if (!arg)
> > + return -EINVAL;
> > +
> > + if (strcmp(arg, "on") == 0)
> > + kasan_mode = KASAN_MODE_ON;
> > + else if (strcmp(arg, "debug") == 0)
>
> s/strcmp(..) == 0/!strcmp(..)/ ?

Sounds good.

[...]

> > @@ -60,6 +111,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
> > {
> > struct kasan_alloc_meta *alloc_meta;
> >
> > + WARN_ON(!static_branch_unlikely(&kasan_debug));
>
> What actually happens if any of these are called with !kasan_debug and
> the warning triggers? Is it still valid to execute the below, or
> should it bail out? Or possibly even disable KASAN entirely?

It shouldn't happen, but if it does, maybe it indeed makes sense to
disable KASAN here as a failsafe. It might be tricky to disable MTE
though, but I'll see what we can do here.

Thank you!

Andrey Konovalov

Oct 16, 2020, 9:17:46 AM
to Marco Elver, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
Yeah, perhaps "on" is not the best name here.

> Instead, maybe the following might be less confusing:
>
> "full" - current "debug", normal KASAN, all debugging help available.
> "opt" - current "on", optimized mode for production.

How about "prod" here?

> "on" - automatic selection => chooses "full" if CONFIG_DEBUG_KERNEL,
> "opt" otherwise.
> "off" - as before.

It actually makes sense to depend on CONFIG_DEBUG_KERNEL; I like this idea.
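
Something like this, perhaps (illustrative only; the mode names aren't
final):

static enum kasan_mode kasan_mode __ro_after_init =
	IS_ENABLED(CONFIG_DEBUG_KERNEL) ? KASAN_MODE_FULL : KASAN_MODE_PROD;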

>
> Also, if there is no other kernel boot parameter named "kasan" yet,
> maybe it could just be "kasan=..." ?

Sounds good to me too.

> > What should be the default when the parameter is not specified? I would
> > argue that it should be "debug" (for hardware that supports MTE, otherwise
> > "off"), as it's the implied default for all other KASAN modes.
>
> Perhaps we could make this dependent on CONFIG_DEBUG_KERNEL as above.
> I do not think that having the full/debug KASAN enabled on production
> kernels adds any value because for it to be useful requires somebody
> to actually look at the stacktraces; I think that choice should be
> made explicitly if it's a production kernel. My guess is that we'll
> save explaining performance differences and resulting headaches for
> ourselves and others that way.

Ack.

> > Should we somehow control whether to panic the kernel on a tag fault?
> > Another boot time parameter perhaps?
>
> It already respects panic_on_warn, correct?

Yes, but Android is unlikely to enable panic_on_warn, as they have
warnings happening all over. AFAIR Pixel 3/4 kernels actually have a
custom patch that enables kernel panic for KASAN crashes specifically
(even though they obviously don't use KASAN in production), and I
think it's better to provide a similar facility upstream. Maybe call
it panic_on_kasan or something?

Marco Elver

Oct 16, 2020, 9:31:42 AM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
On Fri, 16 Oct 2020 at 15:17, 'Andrey Konovalov' via kasan-dev
<kasa...@googlegroups.com> wrote:
[...]
> > > The intention with this kind of high-level switch is to hide the
> > > implementation details. Arguably, we could add multiple switches that
> > > separately control each KASAN or MTE feature, but I'm not sure there's
> > > much value in that.
> > >
> > > Does this make sense? Any preference regarding the name of the parameter
> > > and its values?
> >
> > KASAN itself used to be a debugging tool only. So introducing an "on"
> > mode which no longer follows this convention may be confusing.
>
> Yeah, perhaps "on" is not the best name here.
>
> > Instead, maybe the following might be less confusing:
> >
> > "full" - current "debug", normal KASAN, all debugging help available.
> > "opt" - current "on", optimized mode for production.
>
> How about "prod" here?

SGTM.

[...]
>
> > > Should we somehow control whether to panic the kernel on a tag fault?
> > > Another boot time parameter perhaps?
> >
> > It already respects panic_on_warn, correct?
>
> Yes, but Android is unlikely to enable panic_on_warn, as they have
> warnings happening all over. AFAIR Pixel 3/4 kernels actually have a
> custom patch that enables kernel panic for KASAN crashes specifically
> (even though they obviously don't use KASAN in production), and I
> think it's better to provide a similar facility upstream. Maybe call
> it panic_on_kasan or something?

Best would be if kasan= can take another option, e.g.
"kasan=prod,panic". I think you can change the strcmp() to a
str_has_prefix() for the checks for full/prod/on/off, and then check
if what comes after it is ",panic".
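
Roughly like this (untested sketch; the mode names and the kasan_panic
flag are illustrative):

static int __init early_kasan_mode(char *arg)
{
	size_t len;

	if (!arg)
		return -EINVAL;

	if ((len = str_has_prefix(arg, "full")))
		kasan_mode = KASAN_MODE_FULL;
	else if ((len = str_has_prefix(arg, "prod")))
		kasan_mode = KASAN_MODE_PROD;
	else if ((len = str_has_prefix(arg, "off")))
		kasan_mode = KASAN_MODE_OFF;
	else
		return -EINVAL;

	/* str_has_prefix() returns the length of the prefix on a match. */
	if (!strcmp(arg + len, ",panic"))
		kasan_panic = true;

	return 0;
}
early_param("kasan", early_kasan_mode);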

Thanks,
-- Marco

Andrey Konovalov

Oct 16, 2020, 11:50:59 AM
to Kostya Serebryany, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov
CC Kostya and Serban.

Andrey Konovalov

Oct 16, 2020, 11:52:28 AM
to Kostya Serebryany, Serban Constantinescu, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML, Marco Elver
On Thu, Oct 15, 2020 at 4:41 PM Marco Elver <el...@google.com> wrote:
>
CC Kostya and Serban.

Andrey Konovalov

Oct 16, 2020, 11:52:54 AM
to Kostya Serebryany, Serban Constantinescu, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML, Marco Elver
CC Kostya and Serban.

Marco Elver

Oct 19, 2020, 8:23:33 AM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML, Serban Constantinescu, Kostya Serebryany
On Wed, 14 Oct 2020 at 22:44, Andrey Konovalov <andre...@google.com> wrote:
[...]
> A question to KASAN maintainers: what would be the best way to support the
> "off" mode? I see two potential approaches: add a check into each KASAN
> callback (easier to implement, but we'd still call the callbacks, even
> though they'd immediately return), or add inline header wrappers that do
> the same.

This is tricky, because we don't know how bad the performance will be
if we keep them as calls. We'd have to measure the impact of keeping
them as calls and decide whether it is acceptable.

Without understanding the performance impact, the only viable option I
see is to add __always_inline kasan_foo() wrappers, which use the
static branch to guard calls to __kasan_foo().

Thanks,
-- Marco

Kostya Serebryany

Oct 19, 2020, 6:51:28 PM
to Andrey Konovalov, Serban Constantinescu, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML, Marco Elver
Hi,
I would like to hear opinions from others in CC on these choices:
* Production use of In-kernel MTE should be based on stripped-down
KASAN, or implemented independently?
* Should we aim at a single boot-time flag (with several values) or
for several independent flags (OFF/SYNC/ASYNC, Stack traces on/off)

Andrey, please give us some idea of the CPU and RAM overheads other
than those coming from MTE
* stack trace collection and storage
* adding redzones to every allocation - not strictly needed for MTE,
but convenient to store the stack trace IDs.

Andrey: with production MTE we should not be using quarantine, which
means storing the stack trace IDs
in the deallocated memory doesn't provide good report quality.
We may need to consider another approach, e.g. the one used in HWASAN
(separate ring buffer, per thread or per core)
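
Very roughly, something along these lines (a sketch of the idea only;
every name here is made up):

#include <linux/percpu.h>
#include <linux/stackdepot.h>

struct kasan_free_record {
	unsigned long object_addr;
	depot_stack_handle_t free_stack;
};

#define KASAN_FREE_RING_SIZE 128

struct kasan_free_ring {
	struct kasan_free_record records[KASAN_FREE_RING_SIZE];
	unsigned int pos;
};

static DEFINE_PER_CPU(struct kasan_free_ring, kasan_free_ring);

/* Called on free instead of writing the stack ID into the object. */
static void kasan_record_free(void *object, depot_stack_handle_t stack)
{
	struct kasan_free_ring *ring = raw_cpu_ptr(&kasan_free_ring);

	ring->records[ring->pos].object_addr = (unsigned long)object;
	ring->records[ring->pos].free_stack = stack;
	ring->pos = (ring->pos + 1) % KASAN_FREE_RING_SIZE;
}

On a tag fault, the ring can then be scanned for a record matching the
faulting address.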

--kcc

Dmitry Vyukov

Oct 20, 2020, 1:20:18 AM
to Marco Elver, Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML, Serban Constantinescu, Kostya Serebryany
This sounds reasonable to me.

Dmitry Vyukov

Oct 20, 2020, 1:34:55 AM
to Kostya Serebryany, Andrey Konovalov, Serban Constantinescu, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML, Marco Elver
On Tue, Oct 20, 2020 at 12:51 AM Kostya Serebryany <k...@google.com> wrote:
>
> Hi,
> I would like to hear opinions from others in CC on these choices:
> * Production use of In-kernel MTE should be based on stripped-down
> KASAN, or implemented independently?

Andrey, what are the fundamental consequences of basing MTE on KASAN?
I would assume that there are none as we can change KASAN code and
special case some code paths as necessary.

> * Should we aim at a single boot-time flag (with several values) or
> for several independent flags (OFF/SYNC/ASYNC, Stack traces on/off)

We won't be able to answer this question for several years until we
have actual hardware/users...
It's definitely safer to aim at multiple options. I would reuse the fs
opt parsing code as we seem to have lots of potential things to
configure so that we can do:
kasan_options=quarantine=off,fault=panic,trap=async
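
A sketch of what that parsing could look like (option names made up to
match the example above):

static int __init early_kasan_options(char *arg)
{
	char *opt;

	while ((opt = strsep(&arg, ",")) != NULL) {
		if (!strcmp(opt, "quarantine=off"))
			kasan_quarantine = false;
		else if (!strcmp(opt, "fault=panic"))
			kasan_fault_panic = true;
		else if (!strcmp(opt, "trap=async"))
			kasan_trap_async = true;
	}
	return 0;
}
early_param("kasan_options", early_kasan_options);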

I am also always confused by the term "debug" when configuring the
kernel. In some cases it's for debugging of the subsystem (for
developers of KASAN), in some cases it adds additional checks to catch
misuses of the subsystem, and in some it just adds more debugging
output on the console. And in this case it's actually neither of these.
But I am not sure what's a better name ("full"?). Even if we split
options into multiple, we can still have some kind of presets that just
flip all other options to reasonable values.

Hillf Danton

Oct 20, 2020, 2:23:02 AM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Hillf Danton, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org
On Wed, 14 Oct 2020 22:44:35 +0200
>
> #ifdef CONFIG_KASAN_HW_TAGS
> +#define arch_system_supports_tags() system_supports_mte()

s/system_supports/support/ in order to make it look more like the sibling of

> #define arch_init_tags(max_tag) mte_init_tags(max_tag)

Andrey Konovalov

Oct 20, 2020, 8:13:16 AM
to Dmitry Vyukov, Kostya Serebryany, Serban Constantinescu, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML, Marco Elver
On Tue, Oct 20, 2020 at 7:34 AM Dmitry Vyukov <dvy...@google.com> wrote:
>
> On Tue, Oct 20, 2020 at 12:51 AM Kostya Serebryany <k...@google.com> wrote:
> >
> > Hi,
> > I would like to hear opinions from others in CC on these choices:
> > * Should production use of in-kernel MTE be based on stripped-down
> > KASAN, or implemented independently?
>
> Andrey, what are the fundamental consequences of basing MTE on KASAN?
> I would assume that there are none as we can change KASAN code and
> special case some code paths as necessary.

The main consequence is psychological and manifests in inheriting the name :)

But generally you're right. As we can change KASAN code, we can do
whatever we want, like adding fast paths for MTE, etc. If we Ctrl+C
Ctrl+V KASAN common code, we could potentially do some
micro-optimizations (like avoiding a couple of checks), but I doubt that
will make any difference.

> > * Should we aim at a single boot-time flag (with several values) or
> > at several independent flags (OFF/SYNC/ASYNC, stack traces on/off)?
>
> We won't be able to answer this question for several years until we
> have actual hardware/users...
> It's definitely safer to aim at multiple options. I would reuse the fs
> opt parsing code as we seem to have lots of potential things to
> configure so that we can do:
> kasan_options=quarantine=off,fault=panic,trap=async
>
> I am also always confused by the term "debug" when configuring the
> kernel. In some cases it's for debugging of the subsystem (for
> developers of KASAN), in some cases it adds additional checks to catch
> misuses of the subsystem, and in some it just adds more debugging output
> on the console. And in this case it's actually none of these. But I am
> not sure what's a better name ("full"?). Even if we split the options
> into multiple, we can still have some kind of presets that just flip all
> the other options to reasonable values.

OK, let me try to incorporate the feedback I've heard so far into the
next version.

>
> > Andrey, please give us some idea of the CPU and RAM overheads other
> > than those coming from MTE:
> > * stack trace collection and storage
> > * adding redzones to every allocation - not strictly needed for MTE,
> > but convenient for storing the stack trace IDs.
> >
> > Andrey: with production MTE we should not be using quarantine, which
> > means storing the stack trace IDs in the deallocated memory doesn't
> > provide good report quality. We may need to consider another approach,
> > e.g. the one used in HWASAN (a separate ring buffer, per thread or per
> > core).

My current priority is cleaning up the mode where stack traces are
disabled and estimating the slowdown from KASAN callbacks. Once done
with that, I'll switch to these.

Andrey Konovalov

Oct 20, 2020, 8:39:37 AM
to Hillf Danton, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
Well, init_tags() does initialize tags, but supports_tags() doesn't
enable support for tags; rather, it returns its status. So using
"support" here would be wrong from the English language standpoint.

Andrey Konovalov

Oct 22, 2020, 9:19:26 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
This patchset is not complete (hence it is sent as an RFC), but I would
like to start the discussion now and hear people's opinions regarding
the questions mentioned below.

=== Overview

This patchset adopts the existing hardware tag-based KASAN mode [1] for
use in production as a memory corruption mitigation. Hardware tag-based
KASAN relies on arm64 Memory Tagging Extension (MTE) [2] to perform memory
and pointer tagging. Please see [3] and [4] for detailed analysis of how
MTE helps to fight memory safety problems.

The current plan is to reuse CONFIG_KASAN_HW_TAGS for production, but to
add a boot-time switch that allows choosing between a debugging mode,
which includes all KASAN features as they are, and a production mode,
which only includes the essentials like tag checking.

It is essential that switching between these modes doesn't require
rebuilding the kernel with different configs, as required by the Android
GKI initiative [5].

The patch titled "kasan: add and integrate kasan boot parameters" of this
series adds a few new boot parameters:

kasan.mode allows choosing one of main three modes:

- kasan.mode=off - no checks at all
- kasan.mode=prod - only essential production features
- kasan.mode=full - all features

Those mode configs provide default values for the three internal configs
listed below. However, it's also possible to override the default values
by providing:

- kasan.stack=off/on - enable stack trace collection
(default: on for mode=full, otherwise off)
- kasan.trap=async/sync - use async or sync MTE mode
(default: sync for mode=full, otherwise async)
- kasan.fault=report/panic - only report an MTE fault or also panic
(default: report)
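
For example, to keep the production defaults but collect stack traces
and panic on the first tag fault, one could boot with something like
(illustrative only):

kasan.mode=prod kasan.stack=on kasan.fault=panic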

=== Benchmarks

For now I've only performed a few simple benchmarks, such as measuring
kernel boot time and slab memory usage after boot. The benchmarks were
performed in QEMU, and the results below exclude the slowdown caused by
QEMU's memory tagging emulation (it's different from the slowdown that
the actual hardware will introduce and is therefore irrelevant).

KASAN_HW_TAGS=y + kasan.mode=off introduces no performance or memory
impact compared to KASAN_HW_TAGS=n.

kasan.mode=prod (without executing the tagging instructions) introduces
a 7% impact on both performance and memory compared to kasan.mode=off.
Note that 4% of the performance impact and all 7% of the memory impact
are caused by the fact that enabling KASAN essentially results in
CONFIG_SLAB_MERGE_DEFAULT being disabled.

The recommended Android config has CONFIG_SLAB_MERGE_DEFAULT disabled
(I assume for security reasons), but Pixel 4 has it enabled. It's
arguable whether "disabling" CONFIG_SLAB_MERGE_DEFAULT introduces any
security benefit on top of MTE. Without MTE it makes exploiting some
heap corruptions harder. With MTE it will only make exploitation harder
provided that the attacker is able to predict allocation tags.

kasan.mode=full has a 40% performance and 30% memory impact over
kasan.mode=prod. Both come from alloc/free stack trace collection.

=== Questions

Any concerns about the boot parameters?

Should we try to deal with the CONFIG_SLAB_MERGE_DEFAULT-like behavior
mentioned above?

=== Notes

This patchset is available here:

https://github.com/xairy/linux/tree/up-prod-mte-rfc2

and on Gerrit here:

https://linux-review.googlesource.com/c/linux/kernel/git/torvalds/linux/+/3707

This patchset is based on v5 of "kasan: add hardware tag-based mode for
arm64" patchset [1] (along with some fixes).
=== History

Changes RFCv1->RFCv2:
- Rework boot parameters.
- Drop __init from empty kasan_init_tags() definition.
- Add cpu_supports_mte() helper that can be used during early boot and
use it in kasan_init_tags().
- Lots of new KASAN optimization commits.

Andrey Konovalov (21):
kasan: simplify quarantine_put call site
kasan: rename get_alloc/free_info
kasan: introduce set_alloc_info
kasan: unpoison stack only with CONFIG_KASAN_STACK
kasan: allow VMAP_STACK for HW_TAGS mode
kasan: mark kasan_init_tags as __init
kasan, arm64: move initialization message
kasan: remove __kasan_unpoison_stack
kasan: inline kasan_reset_tag for tag-based modes
kasan: inline random_tag for HW_TAGS
kasan: inline kasan_poison_memory and check_invalid_free
kasan: inline and rename kasan_unpoison_memory
arm64: kasan: Add cpu_supports_tags helper
kasan: add and integrate kasan boot parameters
kasan: check kasan_enabled in annotations
kasan: optimize poisoning in kmalloc and krealloc
kasan: simplify kasan_poison_kfree
kasan: rename kasan_poison_kfree
kasan: don't round_up too much
kasan: simplify assign_tag and set_tag calls
kasan: clarify comment in __kasan_kfree_large

arch/Kconfig | 2 +-
arch/arm64/include/asm/memory.h | 1 +
arch/arm64/include/asm/mte-kasan.h | 6 +
arch/arm64/kernel/mte.c | 20 +++
arch/arm64/kernel/sleep.S | 2 +-
arch/arm64/mm/kasan_init.c | 3 +
arch/x86/kernel/acpi/wakeup_64.S | 2 +-
include/linux/kasan.h | 225 ++++++++++++++++++-------
include/linux/mm.h | 27 ++-
kernel/fork.c | 2 +-
mm/kasan/common.c | 256 ++++++++++++++++-------------
mm/kasan/generic.c | 19 ++-
mm/kasan/hw_tags.c | 182 +++++++++++++++++---
mm/kasan/kasan.h | 102 ++++++++----
mm/kasan/quarantine.c | 5 +-
mm/kasan/report.c | 26 ++-
mm/kasan/report_sw_tags.c | 2 +-
mm/kasan/shadow.c | 1 +
mm/kasan/sw_tags.c | 20 ++-
mm/mempool.c | 2 +-
mm/slab_common.c | 2 +-
mm/slub.c | 3 +-
22 files changed, 641 insertions(+), 269 deletions(-)

--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:28 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Move get_free_info() call into quarantine_put() to simplify the call site.

No functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Iab0f04e7ebf8d83247024b7190c67c3c34c7940f
---
mm/kasan/common.c | 2 +-
mm/kasan/kasan.h | 5 ++---
mm/kasan/quarantine.c | 3 ++-
3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2bb0ef6da6bd..5712c66c11c1 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -308,7 +308,7 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,

kasan_set_free_info(cache, object, tag);

- quarantine_put(get_free_info(cache, object), cache);
+ quarantine_put(cache, object);

return IS_ENABLED(CONFIG_KASAN_GENERIC);
}
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6850308c798a..5c0116c70579 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -214,12 +214,11 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,

#if defined(CONFIG_KASAN_GENERIC) && \
(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
-void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
+void quarantine_put(struct kmem_cache *cache, void *object);
void quarantine_reduce(void);
void quarantine_remove_cache(struct kmem_cache *cache);
#else
-static inline void quarantine_put(struct kasan_free_meta *info,
- struct kmem_cache *cache) { }
+static inline void quarantine_put(struct kmem_cache *cache, void *object) { }
static inline void quarantine_reduce(void) { }
static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
#endif
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 580ff5610fc1..a0792f0d6d0f 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -161,11 +161,12 @@ static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache)
qlist_init(q);
}

-void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
+void quarantine_put(struct kmem_cache *cache, void *object)
{
unsigned long flags;
struct qlist_head *q;
struct qlist_head temp = QLIST_INIT;
+ struct kasan_free_meta *info = get_free_info(cache, object);

/*
* Note: irq must be disabled until after we move the batch to the
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:31 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Rename get_alloc_info() and get_free_info() to kasan_get_alloc_meta()
and kasan_get_free_meta() to better reflect what those do and avoid
confusion with kasan_set_free_info().

No functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Ib6e4ba61c8b12112b403d3479a9799ac8fff8de1
---
mm/kasan/common.c | 16 ++++++++--------
mm/kasan/generic.c | 12 ++++++------
mm/kasan/hw_tags.c | 4 ++--
mm/kasan/kasan.h | 8 ++++----
mm/kasan/quarantine.c | 4 ++--
mm/kasan/report.c | 12 ++++++------
mm/kasan/report_sw_tags.c | 2 +-
mm/kasan/sw_tags.c | 4 ++--
8 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 5712c66c11c1..8fd04415d8f4 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -175,14 +175,14 @@ size_t kasan_metadata_size(struct kmem_cache *cache)
sizeof(struct kasan_free_meta) : 0);
}

-struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
- const void *object)
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+ const void *object)
{
return (void *)reset_tag(object) + cache->kasan_info.alloc_meta_offset;
}

-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
- const void *object)
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+ const void *object)
{
BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
@@ -259,13 +259,13 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
const void *object)
{
- struct kasan_alloc_meta *alloc_info;
+ struct kasan_alloc_meta *alloc_meta;

if (!(cache->flags & SLAB_KASAN))
return (void *)object;

- alloc_info = get_alloc_info(cache, object);
- __memset(alloc_info, 0, sizeof(*alloc_info));
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ __memset(alloc_meta, 0, sizeof(*alloc_meta));

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
object = set_tag(object, assign_tag(cache, object, true, false));
@@ -345,7 +345,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_KMALLOC_REDZONE);

if (cache->flags & SLAB_KASAN)
- kasan_set_track(&get_alloc_info(cache, object)->alloc_track, flags);
+ kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);

return set_tag(object, tag);
}
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index e1af3b6c53b8..de6b3f03a023 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -331,7 +331,7 @@ void kasan_record_aux_stack(void *addr)
{
struct page *page = kasan_addr_to_page(addr);
struct kmem_cache *cache;
- struct kasan_alloc_meta *alloc_info;
+ struct kasan_alloc_meta *alloc_meta;
void *object;

if (!(page && PageSlab(page)))
@@ -339,13 +339,13 @@ void kasan_record_aux_stack(void *addr)

cache = page->slab_cache;
object = nearest_obj(cache, page, addr);
- alloc_info = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

/*
* record the last two call_rcu() call stacks.
*/
- alloc_info->aux_stack[1] = alloc_info->aux_stack[0];
- alloc_info->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
+ alloc_meta->aux_stack[1] = alloc_meta->aux_stack[0];
+ alloc_meta->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
}

void kasan_set_free_info(struct kmem_cache *cache,
@@ -353,7 +353,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
{
struct kasan_free_meta *free_meta;

- free_meta = get_free_info(cache, object);
+ free_meta = kasan_get_free_meta(cache, object);
kasan_set_track(&free_meta->free_track, GFP_NOWAIT);

/*
@@ -367,5 +367,5 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
{
if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_KMALLOC_FREETRACK)
return NULL;
- return &get_free_info(cache, object)->free_track;
+ return &kasan_get_free_meta(cache, object)->free_track;
}
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 7f0568df2a93..2a38885014e3 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -56,7 +56,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
{
struct kasan_alloc_meta *alloc_meta;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);
kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT);
}

@@ -65,6 +65,6 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
{
struct kasan_alloc_meta *alloc_meta;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);
return &alloc_meta->free_track[0];
}
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 5c0116c70579..456b264e5124 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -148,10 +148,10 @@ struct kasan_free_meta {
#endif
};

-struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
- const void *object);
-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
- const void *object);
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+ const void *object);
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+ const void *object);

void kasan_poison_memory(const void *address, size_t size, u8 value);

diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index a0792f0d6d0f..0da3d37e1589 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -166,7 +166,7 @@ void quarantine_put(struct kmem_cache *cache, void *object)
unsigned long flags;
struct qlist_head *q;
struct qlist_head temp = QLIST_INIT;
- struct kasan_free_meta *info = get_free_info(cache, object);
+ struct kasan_free_meta *meta = kasan_get_free_meta(cache, object);

/*
* Note: irq must be disabled until after we move the batch to the
@@ -179,7 +179,7 @@ void quarantine_put(struct kmem_cache *cache, void *object)
local_irq_save(flags);

q = this_cpu_ptr(&cpu_quarantine);
- qlist_put(q, &info->quarantine_link, cache->size);
+ qlist_put(q, &meta->quarantine_link, cache->size);
if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
qlist_move_all(q, &temp);

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index f8817d5685a7..dee5350b459c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -162,12 +162,12 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
static void describe_object(struct kmem_cache *cache, void *object,
const void *addr, u8 tag)
{
- struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
+ struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);

if (cache->flags & SLAB_KASAN) {
struct kasan_track *free_track;

- print_track(&alloc_info->alloc_track, "Allocated");
+ print_track(&alloc_meta->alloc_track, "Allocated");
pr_err("\n");
free_track = kasan_get_free_track(cache, object, tag);
if (free_track) {
@@ -176,14 +176,14 @@ static void describe_object(struct kmem_cache *cache, void *object,
}

#ifdef CONFIG_KASAN_GENERIC
- if (alloc_info->aux_stack[0]) {
+ if (alloc_meta->aux_stack[0]) {
pr_err("Last call_rcu():\n");
- print_stack(alloc_info->aux_stack[0]);
+ print_stack(alloc_meta->aux_stack[0]);
pr_err("\n");
}
- if (alloc_info->aux_stack[1]) {
+ if (alloc_meta->aux_stack[1]) {
pr_err("Second to last call_rcu():\n");
- print_stack(alloc_info->aux_stack[1]);
+ print_stack(alloc_meta->aux_stack[1]);
pr_err("\n");
}
#endif
diff --git a/mm/kasan/report_sw_tags.c b/mm/kasan/report_sw_tags.c
index aebc44a29e83..317100fd95b9 100644
--- a/mm/kasan/report_sw_tags.c
+++ b/mm/kasan/report_sw_tags.c
@@ -46,7 +46,7 @@ const char *get_bug_type(struct kasan_access_info *info)
if (page && PageSlab(page)) {
cache = page->slab_cache;
object = nearest_obj(cache, page, (void *)addr);
- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

for (i = 0; i < KASAN_NR_FREE_STACKS; i++)
if (alloc_meta->free_pointer_tag[i] == tag)
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index ccc35a311179..c10863a45775 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -172,7 +172,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
struct kasan_alloc_meta *alloc_meta;
u8 idx = 0;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

#ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
idx = alloc_meta->free_track_idx;
@@ -189,7 +189,7 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
struct kasan_alloc_meta *alloc_meta;
int i = 0;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

#ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:33 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Add a set_alloc_info() helper and move kasan_set_track() into it. This
will simplify the code for one of the upcoming changes.

No functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I0316193cbb4ecc9b87b7c2eee0dd79f8ec908c1a
---
mm/kasan/common.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8fd04415d8f4..a880e5a547ed 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -318,6 +318,11 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
return __kasan_slab_free(cache, object, ip, true);
}

+static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+{
+ kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
+}
+
static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags, bool keep_tag)
{
@@ -345,7 +350,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_KMALLOC_REDZONE);

if (cache->flags & SLAB_KASAN)
- kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
+ set_alloc_info(cache, (void *)object, flags);

return set_tag(object, tag);
}
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:35 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
There's a config option CONFIG_KASAN_STACK that has to be enabled for
KASAN to use stack instrumentation and perform validity checks for
stack variables.

There's no need to unpoison the stack when CONFIG_KASAN_STACK is not
enabled. Only call kasan_unpoison_task_stack[_below]() when
CONFIG_KASAN_STACK is enabled.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/If8a891e9fe01ea543e00b576852685afec0887e3
---
arch/arm64/kernel/sleep.S | 2 +-
arch/x86/kernel/acpi/wakeup_64.S | 2 +-
include/linux/kasan.h | 10 ++++++----
mm/kasan/common.c | 2 ++
4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index ba40d57757d6..bdadfa56b40e 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -133,7 +133,7 @@ SYM_FUNC_START(_cpu_resume)
*/
bl cpu_do_resume

-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
mov x0, sp
bl kasan_unpoison_task_stack_below
#endif
diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
index c8daa92f38dc..5d3a0b8fd379 100644
--- a/arch/x86/kernel/acpi/wakeup_64.S
+++ b/arch/x86/kernel/acpi/wakeup_64.S
@@ -112,7 +112,7 @@ SYM_FUNC_START(do_suspend_lowlevel)
movq pt_regs_r14(%rax), %r14
movq pt_regs_r15(%rax), %r15

-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
/*
* The suspend path may have poisoned some areas deeper in the stack,
* which we now need to unpoison.
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 3f3f541e5d5f..7be9fb9146ac 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -68,8 +68,6 @@ static inline void kasan_disable_current(void) {}

void kasan_unpoison_memory(const void *address, size_t size);

-void kasan_unpoison_task_stack(struct task_struct *task);
-
void kasan_alloc_pages(struct page *page, unsigned int order);
void kasan_free_pages(struct page *page, unsigned int order);

@@ -114,8 +112,6 @@ void kasan_restore_multi_shot(bool enabled);

static inline void kasan_unpoison_memory(const void *address, size_t size) {}

-static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
-
static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}

@@ -167,6 +163,12 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }

#endif /* CONFIG_KASAN */

+#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
+void kasan_unpoison_task_stack(struct task_struct *task);
+#else
+static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
+#endif
+
#ifdef CONFIG_KASAN_GENERIC

void kasan_cache_shrink(struct kmem_cache *cache);
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a880e5a547ed..a3e67d49b893 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -58,6 +58,7 @@ void kasan_disable_current(void)
}
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

+#if CONFIG_KASAN_STACK
static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
{
void *base = task_stack_page(task);
@@ -84,6 +85,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)

kasan_unpoison_memory(base, watermark - base);
}
+#endif /* CONFIG_KASAN_STACK */

void kasan_alloc_pages(struct page *page, unsigned int order)
{
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:38 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Even though hardware tag-based mode currently doesn't support checking
vmalloc allocations, it doesn't use shadow memory and works with
VMAP_STACK as is.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I3552cbc12321dec82cd7372676e9372a2eb452ac
---
arch/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index af14a567b493..3caf7bcdcf93 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -868,7 +868,7 @@ config VMAP_STACK
default y
bool "Use a virtually-mapped stack"
depends on HAVE_ARCH_VMAP_STACK
- depends on !KASAN || KASAN_VMALLOC
+ depends on !(KASAN_GENERIC || KASAN_SW_TAGS) || KASAN_VMALLOC
help
Enable this if you want the use virtually-mapped kernel stacks
with guard pages. This causes kernel stack overflows to be
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:40 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Similarly to kasan_init(), mark kasan_init_tags() as __init.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I8792e22f1ca5a703c5e979969147968a99312558
---
include/linux/kasan.h | 2 +-
mm/kasan/hw_tags.c | 2 +-
mm/kasan/sw_tags.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7be9fb9146ac..93d9834b7122 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -185,7 +185,7 @@ static inline void kasan_record_aux_stack(void *ptr) {}

#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)

-void kasan_init_tags(void);
+void __init kasan_init_tags(void);

void *kasan_reset_tag(const void *addr);

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 2a38885014e3..0128062320d5 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -15,7 +15,7 @@

#include "kasan.h"

-void kasan_init_tags(void)
+void __init kasan_init_tags(void)
{
init_tags(KASAN_TAG_MAX);
}
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index c10863a45775..bf1422282bb5 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -35,7 +35,7 @@

static DEFINE_PER_CPU(u32, prng_state);

-void kasan_init_tags(void)
+void __init kasan_init_tags(void)
{
int cpu;

--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:45 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
There's no need for the __kasan_unpoison_stack() helper, as it's
currently only used in a single place. Removing it also removes unneeded
arithmetic.

No functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Ie5ba549d445292fe629b4a96735e4034957bcc50
---
mm/kasan/common.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a3e67d49b893..9008fc6b0810 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -59,18 +59,12 @@ void kasan_disable_current(void)
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

#if CONFIG_KASAN_STACK
-static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
-{
- void *base = task_stack_page(task);
- size_t size = sp - base;
-
- kasan_unpoison_memory(base, size);
-}
-
/* Unpoison the entire stack for a task. */
void kasan_unpoison_task_stack(struct task_struct *task)
{
- __kasan_unpoison_stack(task, task_stack_page(task) + THREAD_SIZE);
+ void *base = task_stack_page(task);
+
+ kasan_unpoison_memory(base, THREAD_SIZE);
}

/* Unpoison the stack for the current task beyond a watermark sp value. */
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:45 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Tag-based KASAN modes are fully initialized with kasan_init_tags(),
while the generic mode only requires kasan_init(). Move the
initialization message for tag-based modes into kasan_init_tags().

Also fix pr_fmt() usage for KASAN code: generic mode doesn't need it,
tag-based modes should use "kasan:" instead of KBUILD_MODNAME.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Idfd1e50625ffdf42dfc3dbf7455b11bd200a0a49
---
arch/arm64/mm/kasan_init.c | 3 +++
mm/kasan/generic.c | 2 --
mm/kasan/hw_tags.c | 4 ++++
mm/kasan/sw_tags.c | 4 +++-
4 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index b6b9d55bb72e..8f17fa834b62 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -290,5 +290,8 @@ void __init kasan_init(void)
{
kasan_init_shadow();
kasan_init_depth();
+#if defined(CONFIG_KASAN_GENERIC)
+ /* CONFIG_KASAN_SW/HW_TAGS also requires kasan_init_tags(). */
pr_info("KernelAddressSanitizer initialized\n");
+#endif
}
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index de6b3f03a023..d259e4c3aefd 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -9,8 +9,6 @@
* Andrey Konovalov <andre...@gmail.com>
*/

-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
#include <linux/export.h>
#include <linux/interrupt.h>
#include <linux/init.h>
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 0128062320d5..b372421258c8 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -6,6 +6,8 @@
* Author: Andrey Konovalov <andre...@google.com>
*/

+#define pr_fmt(fmt) "kasan: " fmt
+
#include <linux/kasan.h>
#include <linux/kernel.h>
#include <linux/memory.h>
@@ -18,6 +20,8 @@
void __init kasan_init_tags(void)
{
init_tags(KASAN_TAG_MAX);
+
+ pr_info("KernelAddressSanitizer initialized\n");
}

void *kasan_reset_tag(const void *addr)
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index bf1422282bb5..099af6dc8f7e 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -6,7 +6,7 @@
* Author: Andrey Konovalov <andre...@google.com>
*/

-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define pr_fmt(fmt) "kasan: " fmt

#include <linux/export.h>
#include <linux/interrupt.h>
@@ -41,6 +41,8 @@ void __init kasan_init_tags(void)

for_each_possible_cpu(cpu)
per_cpu(prng_state, cpu) = (u32)get_cycles();
+
+ pr_info("KernelAddressSanitizer initialized\n");
}

/*
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:47 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Using kasan_reset_tag() currently results in a function call. As it's
called quite often from the allocator code, this leads to a noticeable
slowdown. Move it to include/linux/kasan.h and turn it into a static
inline function.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I4d2061acfe91d480a75df00b07c22d8494ef14b5
---
include/linux/kasan.h | 5 ++++-
mm/kasan/hw_tags.c | 5 -----
mm/kasan/kasan.h | 6 ++----
mm/kasan/sw_tags.c | 5 -----
4 files changed, 6 insertions(+), 15 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 93d9834b7122..6377d7d3a951 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -187,7 +187,10 @@ static inline void kasan_record_aux_stack(void *ptr) {}

void __init kasan_init_tags(void);

-void *kasan_reset_tag(const void *addr);
+static inline void *kasan_reset_tag(const void *addr)
+{
+ return (void *)arch_kasan_reset_tag(addr);
+}

bool kasan_report(unsigned long addr, size_t size,
bool is_write, unsigned long ip);
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index b372421258c8..c3a0e83b5e7a 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -24,11 +24,6 @@ void __init kasan_init_tags(void)
pr_info("KernelAddressSanitizer initialized\n");
}

-void *kasan_reset_tag(const void *addr)
-{
- return reset_tag(addr);
-}
-
void kasan_poison_memory(const void *address, size_t size, u8 value)
{
set_mem_tag_range(reset_tag(address),
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 456b264e5124..0ccbb3c4c519 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -246,15 +246,13 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
return addr;
}
#endif
-#ifndef arch_kasan_reset_tag
-#define arch_kasan_reset_tag(addr) ((void *)(addr))
-#endif
#ifndef arch_kasan_get_tag
#define arch_kasan_get_tag(addr) 0
#endif

+/* kasan_reset_tag() defined in include/linux/kasan.h. */
+#define reset_tag(addr) ((void *)kasan_reset_tag(addr))
#define set_tag(addr, tag) ((void *)arch_kasan_set_tag((addr), (tag)))
-#define reset_tag(addr) ((void *)arch_kasan_reset_tag(addr))
#define get_tag(addr) arch_kasan_get_tag(addr)

#ifndef arch_init_tags
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index 099af6dc8f7e..4db41f274702 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -67,11 +67,6 @@ u8 random_tag(void)
return (u8)(state % (KASAN_TAG_MAX + 1));
}

-void *kasan_reset_tag(const void *addr)
-{
- return reset_tag(addr);
-}
-
bool check_memory_region(unsigned long addr, size_t size, bool write,
unsigned long ret_ip)
{
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:49 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Using random_tag() currently results in a function call. Move its
definition to mm/kasan/kasan.h and turn it into a static inline function
for hardware tag-based mode to avoid an unneeded function call.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Iac5b2faf9a912900e16cca6834d621f5d4abf427
---
mm/kasan/hw_tags.c | 5 -----
mm/kasan/kasan.h | 37 ++++++++++++++++++++-----------------
2 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index c3a0e83b5e7a..4c24bfcfeff9 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -36,11 +36,6 @@ void kasan_unpoison_memory(const void *address, size_t size)
round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
}

-u8 random_tag(void)
-{
- return get_random_tag();
-}
-
bool check_invalid_free(void *addr)
{
u8 ptr_tag = get_tag(addr);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 0ccbb3c4c519..94ba15c2f860 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -188,6 +188,12 @@ static inline bool addr_has_metadata(const void *addr)

#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+void print_tags(u8 addr_tag, const void *addr);
+#else
+static inline void print_tags(u8 addr_tag, const void *addr) { }
+#endif
+
bool check_invalid_free(void *addr);

void *find_first_bad_addr(void *addr, size_t size);
@@ -223,23 +229,6 @@ static inline void quarantine_reduce(void) { }
static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
#endif

-#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
-
-void print_tags(u8 addr_tag, const void *addr);
-
-u8 random_tag(void);
-
-#else
-
-static inline void print_tags(u8 addr_tag, const void *addr) { }
-
-static inline u8 random_tag(void)
-{
- return 0;
-}
-
-#endif
-
#ifndef arch_kasan_set_tag
static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
{
@@ -273,6 +262,20 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define get_mem_tag(addr) arch_get_mem_tag(addr)
#define set_mem_tag_range(addr, size, tag) arch_set_mem_tag_range((addr), (size), (tag))

+#ifdef CONFIG_KASAN_SW_TAGS
+u8 random_tag(void);
+#elif defined(CONFIG_KASAN_HW_TAGS)
+static inline u8 random_tag(void)
+{
+ return get_random_tag();
+}
+#else
+static inline u8 random_tag(void)
+{
+ return 0;
+}
+#endif
+
/*
* Exported functions for interfaces called from assembly or from generated
* code. Declarations here to avoid warning about missing declarations.
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:52 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Using kasan_poison_memory() or check_invalid_free() currently results in
function calls. Move their definitions to mm/kasan/kasan.h and turn them
into static inline functions for hardware tag-based mode to avoid
unneeded function calls.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Ia9d8191024a12d1374675b3d27197f10193f50bb
---
mm/kasan/hw_tags.c | 15 ---------------
mm/kasan/kasan.h | 28 ++++++++++++++++++++++++----
2 files changed, 24 insertions(+), 19 deletions(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 4c24bfcfeff9..f03161f3da19 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -24,27 +24,12 @@ void __init kasan_init_tags(void)
pr_info("KernelAddressSanitizer initialized\n");
}

-void kasan_poison_memory(const void *address, size_t size, u8 value)
-{
- set_mem_tag_range(reset_tag(address),
- round_up(size, KASAN_GRANULE_SIZE), value);
-}
-
void kasan_unpoison_memory(const void *address, size_t size)
{
set_mem_tag_range(reset_tag(address),
round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
}

-bool check_invalid_free(void *addr)
-{
- u8 ptr_tag = get_tag(addr);
- u8 mem_tag = get_mem_tag(addr);
-
- return (mem_tag == KASAN_TAG_INVALID) ||
- (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag);
-}
-
void kasan_set_free_info(struct kmem_cache *cache,
void *object, u8 tag)
{
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 94ba15c2f860..8d84ae6f58f1 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -153,8 +153,6 @@ struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
const void *object);

-void kasan_poison_memory(const void *address, size_t size, u8 value);
-
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)

static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
@@ -194,8 +192,6 @@ void print_tags(u8 addr_tag, const void *addr);
static inline void print_tags(u8 addr_tag, const void *addr) { }
#endif

-bool check_invalid_free(void *addr);
-
void *find_first_bad_addr(void *addr, size_t size);
const char *get_bug_type(struct kasan_access_info *info);
void metadata_fetch_row(char *buffer, void *row);
@@ -276,6 +272,30 @@ static inline u8 random_tag(void)
}
#endif

+#ifdef CONFIG_KASAN_HW_TAGS
+
+static inline void kasan_poison_memory(const void *address, size_t size, u8 value)
+{
+ set_mem_tag_range(reset_tag(address),
+ round_up(size, KASAN_GRANULE_SIZE), value);
+}
+
+static inline bool check_invalid_free(void *addr)
+{
+ u8 ptr_tag = get_tag(addr);
+ u8 mem_tag = get_mem_tag(addr);
+
+ return (mem_tag == KASAN_TAG_INVALID) ||
+ (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag);
+}
+
+#else /* CONFIG_KASAN_HW_TAGS */
+
+void kasan_poison_memory(const void *address, size_t size, u8 value);
+bool check_invalid_free(void *addr);
+
+#endif /* CONFIG_KASAN_HW_TAGS */

Andrey Konovalov

Oct 22, 2020, 9:19:55 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Currently kasan_unpoison_memory() is used both as an external annotation
and as an internal memory poisoning helper. Rename the external
annotation to kasan_unpoison_data() and inline the internal helper for
hardware tag-based mode to avoid unneeded function calls.

There's also the external annotation kasan_unpoison_slab(), which is
currently defined as static inline and uses kasan_unpoison_memory().
With this change it's turned into a function call. Overall, this results
in the same number of calls for hardware tag-based mode, as
kasan_unpoison_memory() is now inlined.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Ia7c8b659f79209935cbaab3913bf7f082cc43a0e
---
include/linux/kasan.h | 16 ++++++----------
kernel/fork.c | 2 +-
mm/kasan/common.c | 10 ++++++++++
mm/kasan/hw_tags.c | 6 ------
mm/kasan/kasan.h | 7 +++++++
mm/slab_common.c | 2 +-
6 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 6377d7d3a951..2b9023224474 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -66,14 +66,15 @@ static inline void kasan_disable_current(void) {}

#ifdef CONFIG_KASAN

-void kasan_unpoison_memory(const void *address, size_t size);
-
void kasan_alloc_pages(struct page *page, unsigned int order);
void kasan_free_pages(struct page *page, unsigned int order);

void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
slab_flags_t *flags);

+void kasan_unpoison_data(const void *address, size_t size);
+void kasan_unpoison_slab(const void *ptr);
+
void kasan_poison_slab(struct page *page);
void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
void kasan_poison_object_data(struct kmem_cache *cache, void *object);
@@ -98,11 +99,6 @@ struct kasan_cache {
int free_meta_offset;
};

-size_t __ksize(const void *);
-static inline void kasan_unpoison_slab(const void *ptr)
-{
- kasan_unpoison_memory(ptr, __ksize(ptr));
-}
size_t kasan_metadata_size(struct kmem_cache *cache);

bool kasan_save_enable_multi_shot(void);
@@ -110,8 +106,6 @@ void kasan_restore_multi_shot(bool enabled);

#else /* CONFIG_KASAN */

-static inline void kasan_unpoison_memory(const void *address, size_t size) {}
-
static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}

@@ -119,6 +113,9 @@ static inline void kasan_cache_create(struct kmem_cache *cache,
unsigned int *size,
slab_flags_t *flags) {}

+static inline void kasan_unpoison_data(const void *address, size_t size) { }
+static inline void kasan_unpoison_slab(const void *ptr) { }
+
static inline void kasan_poison_slab(struct page *page) {}
static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
void *object) {}
@@ -158,7 +155,6 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
return false;
}

-static inline void kasan_unpoison_slab(const void *ptr) { }
static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }

#endif /* CONFIG_KASAN */
diff --git a/kernel/fork.c b/kernel/fork.c
index b41fecca59d7..858d78eee6ec 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -225,7 +225,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
continue;

/* Mark stack accessible for KASAN. */
- kasan_unpoison_memory(s->addr, THREAD_SIZE);
+ kasan_unpoison_data(s->addr, THREAD_SIZE);

/* Clear stale pointers from reused stack. */
memset(s->addr, 0, THREAD_SIZE);
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 9008fc6b0810..1a5e6c279a72 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -184,6 +184,16 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
}

+void kasan_unpoison_data(const void *address, size_t size)
+{
+ kasan_unpoison_memory(address, size);
+}
+
+void kasan_unpoison_slab(const void *ptr)
+{
+ kasan_unpoison_memory(ptr, __ksize(ptr));
+}
+
void kasan_poison_slab(struct page *page)
{
unsigned long i;
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index f03161f3da19..915142da6b57 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -24,12 +24,6 @@ void __init kasan_init_tags(void)
pr_info("KernelAddressSanitizer initialized\n");
}

-void kasan_unpoison_memory(const void *address, size_t size)
-{
- set_mem_tag_range(reset_tag(address),
- round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
-}
-
void kasan_set_free_info(struct kmem_cache *cache,
void *object, u8 tag)
{
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 8d84ae6f58f1..da08b2533d73 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -280,6 +280,12 @@ static inline void kasan_poison_memory(const void *address, size_t size, u8 valu
round_up(size, KASAN_GRANULE_SIZE), value);
}

+static inline void kasan_unpoison_memory(const void *address, size_t size)
+{
+ set_mem_tag_range(reset_tag(address),
+ round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
+}
+
static inline bool check_invalid_free(void *addr)
{
u8 ptr_tag = get_tag(addr);
@@ -292,6 +298,7 @@ static inline bool check_invalid_free(void *addr)
#else /* CONFIG_KASAN_HW_TAGS */

void kasan_poison_memory(const void *address, size_t size, u8 value);
+void kasan_unpoison_memory(const void *address, size_t size);
bool check_invalid_free(void *addr);

#endif /* CONFIG_KASAN_HW_TAGS */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 53d0f8bb57ea..f1b0c4a22f08 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1176,7 +1176,7 @@ size_t ksize(const void *objp)
* We assume that ksize callers could use whole allocated area,
* so we need to unpoison this area.
*/
- kasan_unpoison_memory(objp, size);
+ kasan_unpoison_data(objp, size);
return size;
}
EXPORT_SYMBOL(ksize);
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:57 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Add an arm64 helper called cpu_supports_mte() that exposes information
about whether the CPU supports memory tagging and that can be called
during early boot (unlike system_supports_mte()).

Use that helper to implement a generic cpu_supports_tags() helper that
will be used by hardware tag-based KASAN.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Ib4b56a42c57c6293df29a0cdfee334c3ca7bdab4
---
arch/arm64/include/asm/memory.h | 1 +
arch/arm64/include/asm/mte-kasan.h | 6 ++++++
arch/arm64/kernel/mte.c | 20 ++++++++++++++++++++
mm/kasan/kasan.h | 4 ++++
4 files changed, 31 insertions(+)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b5d6b824c21c..f496abfcf7f5 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -232,6 +232,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
}

#ifdef CONFIG_KASAN_HW_TAGS
+#define arch_cpu_supports_tags() cpu_supports_mte()
#define arch_init_tags(max_tag) mte_init_tags(max_tag)
#define arch_get_random_tag() mte_get_random_tag()
#define arch_get_mem_tag(addr) mte_get_mem_tag(addr)
diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index a4c61b926d4a..4c3f2c6b4fe6 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -9,6 +9,7 @@

#ifndef __ASSEMBLY__

+#include <linux/init.h>
#include <linux/types.h>

/*
@@ -30,6 +31,7 @@ u8 mte_get_random_tag(void);
void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);

void mte_init_tags(u64 max_tag);
+bool __init cpu_supports_mte(void);

#else /* CONFIG_ARM64_MTE */

@@ -54,6 +56,10 @@ static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
static inline void mte_init_tags(u64 max_tag)
{
}
+static inline bool cpu_supports_mte(void)
+{
+ return false;
+}

#endif /* CONFIG_ARM64_MTE */

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index ca8206b7f9a6..8fcd17408515 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -134,6 +134,26 @@ void mte_init_tags(u64 max_tag)
gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
}

+/*
+ * This function can be used during early boot to determine whether the CPU
+ * supports MTE. The alternative that must be used after boot is completed is
+ * system_supports_mte(), but it only works after the cpufeature framework
+ * learns about MTE.
+ */
+bool __init cpu_supports_mte(void)
+{
+ u64 pfr1;
+ u32 val;
+
+ if (!IS_ENABLED(CONFIG_ARM64_MTE))
+ return false;
+
+ pfr1 = read_cpuid(ID_AA64PFR1_EL1);
+ val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_MTE_SHIFT);
+
+ return val >= ID_AA64PFR1_MTE;
+}
+
static void update_sctlr_el1_tcf0(u64 tcf0)
{
/* ISB required for the kernel uaccess routines */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index da08b2533d73..f7ae0c23f023 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -240,6 +240,9 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define set_tag(addr, tag) ((void *)arch_kasan_set_tag((addr), (tag)))
#define get_tag(addr) arch_kasan_get_tag(addr)

+#ifndef arch_cpu_supports_tags
+#define arch_cpu_supports_tags() (false)
+#endif
#ifndef arch_init_tags
#define arch_init_tags(max_tag)
#endif
@@ -253,6 +256,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define arch_set_mem_tag_range(addr, size, tag) ((void *)(addr))
#endif

+#define cpu_supports_tags() arch_cpu_supports_tags()
#define init_tags(max_tag) arch_init_tags(max_tag)
#define get_random_tag() arch_get_random_tag()
#define get_mem_tag(addr) arch_get_mem_tag(addr)
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:19:59 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
TODO: no meaningful description here yet, please see the cover letter
for this RFC series.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
---
mm/kasan/common.c | 92 +++++++++++++-----------
mm/kasan/generic.c | 5 ++
mm/kasan/hw_tags.c | 169 ++++++++++++++++++++++++++++++++++++++++++++-
mm/kasan/kasan.h | 9 +++
mm/kasan/report.c | 14 +++-
mm/kasan/sw_tags.c | 5 ++
6 files changed, 250 insertions(+), 44 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 1a5e6c279a72..cc129ef62ab1 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -129,35 +129,37 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
unsigned int redzone_size;
int redzone_adjust;

- /* Add alloc meta. */
- cache->kasan_info.alloc_meta_offset = *size;
- *size += sizeof(struct kasan_alloc_meta);
-
- /* Add free meta. */
- if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
- cache->object_size < sizeof(struct kasan_free_meta))) {
- cache->kasan_info.free_meta_offset = *size;
- *size += sizeof(struct kasan_free_meta);
- }
-
- redzone_size = optimal_redzone(cache->object_size);
- redzone_adjust = redzone_size - (*size - cache->object_size);
- if (redzone_adjust > 0)
- *size += redzone_adjust;
-
- *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
- max(*size, cache->object_size + redzone_size));
+ if (static_branch_unlikely(&kasan_stack)) {
+ /* Add alloc meta. */
+ cache->kasan_info.alloc_meta_offset = *size;
+ *size += sizeof(struct kasan_alloc_meta);
+
+ /* Add free meta. */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+ cache->object_size < sizeof(struct kasan_free_meta))) {
+ cache->kasan_info.free_meta_offset = *size;
+ *size += sizeof(struct kasan_free_meta);
+ }

- /*
- * If the metadata doesn't fit, don't enable KASAN at all.
- */
- if (*size <= cache->kasan_info.alloc_meta_offset ||
- *size <= cache->kasan_info.free_meta_offset) {
- cache->kasan_info.alloc_meta_offset = 0;
- cache->kasan_info.free_meta_offset = 0;
- *size = orig_size;
- return;
+ redzone_size = optimal_redzone(cache->object_size);
+ redzone_adjust = redzone_size - (*size - cache->object_size);
+ if (redzone_adjust > 0)
+ *size += redzone_adjust;
+
+ *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
+ max(*size, cache->object_size + redzone_size));
+
+ /*
+ * If the metadata doesn't fit, don't enable KASAN at all.
+ */
+ if (*size <= cache->kasan_info.alloc_meta_offset ||
+ *size <= cache->kasan_info.free_meta_offset) {
+ cache->kasan_info.alloc_meta_offset = 0;
+ cache->kasan_info.free_meta_offset = 0;
+ *size = orig_size;
+ return;
+ }
}

*flags |= SLAB_KASAN;
@@ -165,10 +167,12 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,

size_t kasan_metadata_size(struct kmem_cache *cache)
{
- return (cache->kasan_info.alloc_meta_offset ?
- sizeof(struct kasan_alloc_meta) : 0) +
- (cache->kasan_info.free_meta_offset ?
- sizeof(struct kasan_free_meta) : 0);
+ if (static_branch_unlikely(&kasan_stack))
+ return (cache->kasan_info.alloc_meta_offset ?
+ sizeof(struct kasan_alloc_meta) : 0) +
+ (cache->kasan_info.free_meta_offset ?
+ sizeof(struct kasan_free_meta) : 0);
+ return 0;
}

struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
@@ -270,8 +274,10 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
if (!(cache->flags & SLAB_KASAN))
return (void *)object;

- alloc_meta = kasan_get_alloc_meta(cache, object);
- __memset(alloc_meta, 0, sizeof(*alloc_meta));
+ if (static_branch_unlikely(&kasan_stack)) {
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ __memset(alloc_meta, 0, sizeof(*alloc_meta));
+ }

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
object = set_tag(object, assign_tag(cache, object, true, false));
@@ -308,15 +314,19 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);

- if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
- unlikely(!(cache->flags & SLAB_KASAN)))
- return false;
+ if (static_branch_unlikely(&kasan_stack)) {
+ if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
+ unlikely(!(cache->flags & SLAB_KASAN)))
+ return false;
+
+ kasan_set_free_info(cache, object, tag);

- kasan_set_free_info(cache, object, tag);
+ quarantine_put(cache, object);

- quarantine_put(cache, object);
+ return IS_ENABLED(CONFIG_KASAN_GENERIC);
+ }

- return IS_ENABLED(CONFIG_KASAN_GENERIC);
+ return false;
}

bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
@@ -355,7 +365,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
KASAN_KMALLOC_REDZONE);

- if (cache->flags & SLAB_KASAN)
+ if (static_branch_unlikely(&kasan_stack) && (cache->flags & SLAB_KASAN))
set_alloc_info(cache, (void *)object, flags);

return set_tag(object, tag);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index d259e4c3aefd..20a1e753e0c5 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -33,6 +33,11 @@
#include "kasan.h"
#include "../slab.h"

+/* See the comments in hw_tags.c */
+DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
+EXPORT_SYMBOL(kasan_enabled);
+DEFINE_STATIC_KEY_TRUE_RO(kasan_stack);
+
/*
* All functions below always inlined so compiler could
* perform better optimizations in each of __asan_loadX/__assn_storeX
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 915142da6b57..bccd781011ad 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -8,6 +8,8 @@

#define pr_fmt(fmt) "kasan: " fmt

+#include <linux/init.h>
+#include <linux/jump_label.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
#include <linux/memory.h>
@@ -17,10 +19,175 @@

#include "kasan.h"

+enum kasan_arg_mode {
+ KASAN_ARG_MODE_OFF,
+ KASAN_ARG_MODE_PROD,
+ KASAN_ARG_MODE_FULL,
+};
+
+enum kasan_arg_stack {
+ KASAN_ARG_STACK_DEFAULT,
+ KASAN_ARG_STACK_OFF,
+ KASAN_ARG_STACK_ON,
+};
+
+enum kasan_arg_trap {
+ KASAN_ARG_TRAP_DEFAULT,
+ KASAN_ARG_TRAP_ASYNC,
+ KASAN_ARG_TRAP_SYNC,
+};
+
+enum kasan_arg_fault {
+ KASAN_ARG_FAULT_DEFAULT,
+ KASAN_ARG_FAULT_REPORT,
+ KASAN_ARG_FAULT_PANIC,
+};
+
+static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
+static enum kasan_arg_stack kasan_arg_stack __ro_after_init;
+static enum kasan_arg_fault kasan_arg_fault __ro_after_init;
+static enum kasan_arg_trap kasan_arg_trap __ro_after_init;
+
+/* Whether KASAN is enabled at all. */
+DEFINE_STATIC_KEY_FALSE_RO(kasan_enabled);
+EXPORT_SYMBOL(kasan_enabled);
+
+/* Whether to collect alloc/free stack traces. */
+DEFINE_STATIC_KEY_FALSE_RO(kasan_stack);
+
+/* Whether to use synchronous or asynchronous tag checking. */
+static bool kasan_sync __ro_after_init;
+
+/* Whether to panic or disable tag checking on fault. */
+bool kasan_panic __ro_after_init;
+
+/* kasan.mode=off/prod/full */
+static int __init early_kasan_mode(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_mode = KASAN_ARG_MODE_OFF;
+ else if (!strcmp(arg, "prod"))
+ kasan_arg_mode = KASAN_ARG_MODE_PROD;
+ else if (!strcmp(arg, "full"))
+ kasan_arg_mode = KASAN_ARG_MODE_FULL;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.mode", early_kasan_mode);
+
+/* kasan.stack=off/on */
+static int __init early_kasan_stack(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_stack = KASAN_ARG_STACK_OFF;
+ else if (!strcmp(arg, "on"))
+ kasan_arg_stack = KASAN_ARG_STACK_ON;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.stack", early_kasan_stack);
+
+/* kasan.trap=sync/async */
+static int __init early_kasan_trap(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "ASYNC"))
+ kasan_arg_trap = KASAN_ARG_TRAP_ASYNC;
+ else if (!strcmp(arg, "sync"))
+ kasan_arg_trap = KASAN_ARG_TRAP_SYNC;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.trap", early_kasan_trap);
+
+/* kasan.fault=report/panic */
+static int __init early_kasan_fault(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "report"))
+ kasan_arg_fault = KASAN_ARG_FAULT_REPORT;
+ else if (!strcmp(arg, "panic"))
+ kasan_arg_fault = KASAN_ARG_FAULT_PANIC;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.fault", early_kasan_fault);
+
void __init kasan_init_tags(void)
{
- init_tags(KASAN_TAG_MAX);
+ if (!cpu_supports_tags())
+ return;
+
+ /* First, preset values based on the mode. */
+
+ switch (kasan_arg_mode) {
+ case KASAN_ARG_MODE_OFF:
+ return;
+ case KASAN_ARG_MODE_PROD:
+ static_branch_enable(&kasan_enabled);
+ break;
+ case KASAN_ARG_MODE_FULL:
+ static_branch_enable(&kasan_enabled);
+ static_branch_enable(&kasan_stack);
+ kasan_sync = true;
+ break;
+ }
+
+ /* Now, optionally override the presets. */

+ switch (kasan_arg_stack) {
+ case KASAN_ARG_STACK_OFF:
+ static_branch_disable(&kasan_stack);
+ break;
+ case KASAN_ARG_STACK_ON:
+ static_branch_enable(&kasan_stack);
+ break;
+ default:
+ break;
+ }
+
+ switch (kasan_arg_trap) {
+ case KASAN_ARG_TRAP_ASYNC:
+ kasan_sync = false;
+ break;
+ case KASAN_ARG_TRAP_SYNC:
+ kasan_sync = true;
+ break;
+ default:
+ break;
+ }
+
+ switch (kasan_arg_fault) {
+ case KASAN_ARG_FAULT_REPORT:
+ kasan_panic = false;
+ break;
+ case KASAN_ARG_FAULT_PANIC:
+ kasan_panic = true;
+ break;
+ default:
+ break;
+ }
+
+ /* TODO: choose between sync and async based on kasan_sync. */
+ init_tags(KASAN_TAG_MAX);
pr_info("KernelAddressSanitizer initialized\n");
}

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index f7ae0c23f023..00b47bc753aa 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -2,9 +2,18 @@
#ifndef __MM_KASAN_KASAN_H
#define __MM_KASAN_KASAN_H

+#include <linux/jump_label.h>
#include <linux/kasan.h>
#include <linux/stackdepot.h>

+#ifdef CONFIG_KASAN_HW_TAGS
+DECLARE_STATIC_KEY_FALSE(kasan_stack);
+#else
+DECLARE_STATIC_KEY_TRUE(kasan_stack);
+#endif
+
+extern bool kasan_panic __ro_after_init;
+
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
#else
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index dee5350b459c..426dd1962d3c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -97,6 +97,10 @@ static void end_report(unsigned long *flags)
panic_on_warn = 0;
panic("panic_on_warn set ...\n");
}
+#ifdef CONFIG_KASAN_HW_TAGS
+ if (kasan_panic)
+ panic("kasan.fault=panic set ...\n");
+#endif
kasan_enable_current();
}

@@ -159,8 +163,8 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
(void *)(object_addr + cache->object_size));
}

-static void describe_object(struct kmem_cache *cache, void *object,
- const void *addr, u8 tag)
+static void describe_object_stacks(struct kmem_cache *cache, void *object,
+ const void *addr, u8 tag)
{
struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);

@@ -188,7 +192,13 @@ static void describe_object(struct kmem_cache *cache, void *object,
}
#endif
}
+}

+static void describe_object(struct kmem_cache *cache, void *object,
+ const void *addr, u8 tag)
+{
+ if (static_branch_unlikely(&kasan_stack))
+ describe_object_stacks(cache, object, addr, tag);
describe_object_addr(cache, object, addr);
}

diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index 4db41f274702..b6d185adf2c5 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -33,6 +33,11 @@
#include "kasan.h"
#include "../slab.h"

+/* See the comments in hw_tags.c */
+DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
+EXPORT_SYMBOL(kasan_enabled);
+DEFINE_STATIC_KEY_TRUE_RO(kasan_stack);
+
static DEFINE_PER_CPU(u32, prng_state);

void __init kasan_init_tags(void)
--
2.29.0.rc1.297.gfa9743e501-goog
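For reference, given the early_param() handlers in this patch, the
switches would combine on the kernel command line along these lines
(illustrative examples, not from the patch; note that the parser as
posted only accepts the uppercase "ASYNC" spelling, see the discussion
below):

kasan.mode=full kasan.stack=off kasan.trap=sync
kasan.mode=prod kasan.fault=panic
kasan.mode=off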

Andrey Konovalov

Oct 22, 2020, 9:20:01 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Declare the kasan_enabled static key in include/linux/kasan.h and in
include/linux/mm.h, and check it in all KASAN annotations. This avoids
any slowdown caused by function calls when kasan_enabled is disabled.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I2589451d3c96c97abbcbf714baabe6161c6f153e
---
include/linux/kasan.h | 210 ++++++++++++++++++++++++++++++++----------
include/linux/mm.h | 27 ++++--
mm/kasan/common.c | 60 ++++++------
3 files changed, 211 insertions(+), 86 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 2b9023224474..8654275aa62e 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -2,6 +2,7 @@
#ifndef _LINUX_KASAN_H
#define _LINUX_KASAN_H

+#include <linux/jump_label.h>
#include <linux/types.h>

struct kmem_cache;
@@ -66,40 +67,154 @@ static inline void kasan_disable_current(void) {}

#ifdef CONFIG_KASAN

-void kasan_alloc_pages(struct page *page, unsigned int order);
-void kasan_free_pages(struct page *page, unsigned int order);
+struct kasan_cache {
+ int alloc_meta_offset;
+ int free_meta_offset;
+};
+
+#ifdef CONFIG_KASAN_HW_TAGS
+DECLARE_STATIC_KEY_FALSE(kasan_enabled);
+#else
+DECLARE_STATIC_KEY_TRUE(kasan_enabled);
+#endif

-void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
- slab_flags_t *flags);
+void __kasan_alloc_pages(struct page *page, unsigned int order);
+static inline void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_alloc_pages(page, order);
+}

-void kasan_unpoison_data(const void *address, size_t size);
-void kasan_unpoison_slab(const void *ptr);
+void __kasan_free_pages(struct page *page, unsigned int order);
+static inline void kasan_free_pages(struct page *page, unsigned int order)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_free_pages(page, order);
+}

-void kasan_poison_slab(struct page *page);
-void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
-void kasan_poison_object_data(struct kmem_cache *cache, void *object);
-void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
- const void *object);
+void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags);
+static inline void kasan_cache_create(struct kmem_cache *cache,
+ unsigned int *size, slab_flags_t *flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_cache_create(cache, size, flags);
+}

-void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
- gfp_t flags);
-void kasan_kfree_large(void *ptr, unsigned long ip);
-void kasan_poison_kfree(void *ptr, unsigned long ip);
-void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
- size_t size, gfp_t flags);
-void * __must_check kasan_krealloc(const void *object, size_t new_size,
- gfp_t flags);
+size_t __kasan_metadata_size(struct kmem_cache *cache);
+static inline size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_metadata_size(cache);
+ return 0;
+}

-void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
- gfp_t flags);
-bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
+void __kasan_unpoison_data(const void *addr, size_t size);
+static inline void kasan_unpoison_data(const void *addr, size_t size)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_unpoison_data(addr, size);
+}

-struct kasan_cache {
- int alloc_meta_offset;
- int free_meta_offset;
-};
+void __kasan_unpoison_slab(const void *ptr);
+static inline void kasan_unpoison_slab(const void *ptr)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_unpoison_slab(ptr);
+}
+
+void __kasan_poison_slab(struct page *page);
+static inline void kasan_poison_slab(struct page *page)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_poison_slab(page);
+}
+
+void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
+static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_unpoison_object_data(cache, object);
+}
+
+void __kasan_poison_object_data(struct kmem_cache *cache, void *object);
+static inline void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_poison_object_data(cache, object);
+}
+
+void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
+ const void *object);
+static inline void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
+ const void *object)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_init_slab_obj(cache, object);
+ return (void *)object;
+}
+
+bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
+static inline bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_slab_free(s, object, ip);
+ return false;
+}

-size_t kasan_metadata_size(struct kmem_cache *cache);
+void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
+ void *object, gfp_t flags);
+static inline void * __must_check kasan_slab_alloc(struct kmem_cache *s,
+ void *object, gfp_t flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_slab_alloc(s, object, flags);
+ return object;
+}
+
+void * __must_check __kasan_kmalloc(struct kmem_cache *s, const void *object,
+ size_t size, gfp_t flags);
+static inline void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
+ size_t size, gfp_t flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_kmalloc(s, object, size, flags);
+ return (void *)object;
+}
+
+void * __must_check __kasan_kmalloc_large(const void *ptr,
+ size_t size, gfp_t flags);
+static inline void * __must_check kasan_kmalloc_large(const void *ptr,
+ size_t size, gfp_t flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_kmalloc_large(ptr, size, flags);
+ return (void *)ptr;
+}
+
+void * __must_check __kasan_krealloc(const void *object,
+ size_t new_size, gfp_t flags);
+static inline void * __must_check kasan_krealloc(const void *object,
+ size_t new_size, gfp_t flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_krealloc(object, new_size, flags);
+ return (void *)object;
+}
+
+void __kasan_poison_kfree(void *ptr, unsigned long ip);
+static inline void kasan_poison_kfree(void *ptr, unsigned long ip)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_poison_kfree(ptr, ip);
+}
+
+void __kasan_kfree_large(void *ptr, unsigned long ip);
+static inline void kasan_kfree_large(void *ptr, unsigned long ip)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_kfree_large(ptr, ip);
+}

bool kasan_save_enable_multi_shot(void);
void kasan_restore_multi_shot(bool enabled);
@@ -108,14 +223,12 @@ void kasan_restore_multi_shot(bool enabled);

static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}
-
static inline void kasan_cache_create(struct kmem_cache *cache,
unsigned int *size,
slab_flags_t *flags) {}
-
-static inline void kasan_unpoison_data(const void *address, size_t size) { }
-static inline void kasan_unpoison_slab(const void *ptr) { }
-
+static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
+static inline void kasan_unpoison_data(const void *address, size_t size) {}
+static inline void kasan_unpoison_slab(const void *ptr) {}
static inline void kasan_poison_slab(struct page *page) {}
static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
void *object) {}
@@ -126,36 +239,33 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
{
return (void *)object;
}
-
-static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
+static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
+ unsigned long ip)
{
- return ptr;
+ return false;
}
-static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
-static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
-static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
- size_t size, gfp_t flags)
+static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
+ gfp_t flags)
{
- return (void *)object;
+ return object;
}
-static inline void *kasan_krealloc(const void *object, size_t new_size,
- gfp_t flags)
+static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
+ size_t size, gfp_t flags)
{
return (void *)object;
}

-static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
- gfp_t flags)
+static inline void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
{
- return object;
+ return (void *)ptr;
}
-static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
- unsigned long ip)
+static inline void *kasan_krealloc(const void *object, size_t new_size,
+ gfp_t flags)
{
- return false;
+ return (void *)object;
}
-
-static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
+static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
+static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}

#endif /* CONFIG_KASAN */

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a3cac68c737c..701e9d7666d6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1412,22 +1412,36 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
#endif /* CONFIG_NUMA_BALANCING */

#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+
+#ifdef CONFIG_KASAN_HW_TAGS
+DECLARE_STATIC_KEY_FALSE(kasan_enabled);
+#else
+DECLARE_STATIC_KEY_TRUE(kasan_enabled);
+#endif
+
static inline u8 page_kasan_tag(const struct page *page)
{
- return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+ if (static_branch_likely(&kasan_enabled))
+ return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+ return 0xff;
}

static inline void page_kasan_tag_set(struct page *page, u8 tag)
{
- page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
- page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+ if (static_branch_likely(&kasan_enabled)) {
+ page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+ page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+ }
}

static inline void page_kasan_tag_reset(struct page *page)
{
- page_kasan_tag_set(page, 0xff);
+ if (static_branch_likely(&kasan_enabled))
+ page_kasan_tag_set(page, 0xff);
}
-#else
+
+#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
+
static inline u8 page_kasan_tag(const struct page *page)
{
return 0xff;
@@ -1435,7 +1449,8 @@ static inline u8 page_kasan_tag(const struct page *page)

static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
static inline void page_kasan_tag_reset(struct page *page) { }
-#endif
+
+#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */

static inline struct zone *page_zone(const struct page *page)
{
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index cc129ef62ab1..c5ec60e1a4d2 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -81,7 +81,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
}
#endif /* CONFIG_KASAN_STACK */

-void kasan_alloc_pages(struct page *page, unsigned int order)
+void __kasan_alloc_pages(struct page *page, unsigned int order)
{
u8 tag;
unsigned long i;
@@ -95,7 +95,7 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
kasan_unpoison_memory(page_address(page), PAGE_SIZE << order);
}

-void kasan_free_pages(struct page *page, unsigned int order)
+void __kasan_free_pages(struct page *page, unsigned int order)
{
if (likely(!PageHighMem(page)))
kasan_poison_memory(page_address(page),
@@ -122,8 +122,8 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
object_size <= (1 << 16) - 1024 ? 1024 : 2048;
}

-void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
- slab_flags_t *flags)
+void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags)
{
unsigned int orig_size = *size;
unsigned int redzone_size;
@@ -165,7 +165,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
*flags |= SLAB_KASAN;
}

-size_t kasan_metadata_size(struct kmem_cache *cache)
+size_t __kasan_metadata_size(struct kmem_cache *cache)
{
if (static_branch_unlikely(&kasan_stack))
return (cache->kasan_info.alloc_meta_offset ?
@@ -188,17 +188,17 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
}

-void kasan_unpoison_data(const void *address, size_t size)
+void __kasan_unpoison_data(const void *addr, size_t size)
{
- kasan_unpoison_memory(address, size);
+ kasan_unpoison_memory(addr, size);
}

-void kasan_unpoison_slab(const void *ptr)
+void __kasan_unpoison_slab(const void *ptr)
{
kasan_unpoison_memory(ptr, __ksize(ptr));
}

-void kasan_poison_slab(struct page *page)
+void __kasan_poison_slab(struct page *page)
{
unsigned long i;

@@ -208,12 +208,12 @@ void kasan_poison_slab(struct page *page)
KASAN_KMALLOC_REDZONE);
}

-void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
{
kasan_unpoison_memory(object, cache->object_size);
}

-void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
{
kasan_poison_memory(object,
round_up(cache->object_size, KASAN_GRANULE_SIZE),
@@ -266,7 +266,7 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
#endif
}

-void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
+void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
const void *object)
{
struct kasan_alloc_meta *alloc_meta;
@@ -285,7 +285,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
return (void *)object;
}

-static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
+static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
unsigned long ip, bool quarantine)
{
u8 tag;
@@ -329,9 +329,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
return false;
}

-bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
{
- return __kasan_slab_free(cache, object, ip, true);
+ return ____kasan_slab_free(cache, object, ip, true);
}

static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
@@ -339,7 +339,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
}

-static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
+static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags, bool keep_tag)
{
unsigned long redzone_start;
@@ -371,20 +371,20 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
return set_tag(object, tag);
}

-void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
- gfp_t flags)
+void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
+ void *object, gfp_t flags)
{
- return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
+ return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
}

-void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
- size_t size, gfp_t flags)
+void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
+ size_t size, gfp_t flags)
{
- return __kasan_kmalloc(cache, object, size, flags, true);
+ return ____kasan_kmalloc(cache, object, size, flags, true);
}
-EXPORT_SYMBOL(kasan_kmalloc);
+EXPORT_SYMBOL(__kasan_kmalloc);

-void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
+void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
gfp_t flags)
{
struct page *page;
@@ -409,7 +409,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
return (void *)ptr;
}

-void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
+void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
struct page *page;

@@ -419,13 +419,13 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
page = virt_to_head_page(object);

if (unlikely(!PageSlab(page)))
- return kasan_kmalloc_large(object, size, flags);
+ return __kasan_kmalloc_large(object, size, flags);
else
- return __kasan_kmalloc(page->slab_cache, object, size,
+ return ____kasan_kmalloc(page->slab_cache, object, size,
flags, true);
}

-void kasan_poison_kfree(void *ptr, unsigned long ip)
+void __kasan_poison_kfree(void *ptr, unsigned long ip)
{
struct page *page;

@@ -438,11 +438,11 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
}
kasan_poison_memory(ptr, page_size(page), KASAN_FREE_PAGE);
} else {
- __kasan_slab_free(page->slab_cache, ptr, ip, false);
+ ____kasan_slab_free(page->slab_cache, ptr, ip, false);
}
}

-void kasan_kfree_large(void *ptr, unsigned long ip)
+void __kasan_kfree_large(void *ptr, unsigned long ip)
{
if (ptr != page_address(virt_to_head_page(ptr)))
kasan_report_invalid_free(ptr, ip);
--
2.29.0.rc1.297.gfa9743e501-goog
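The pattern this patch applies throughout include/linux/kasan.h is
worth spelling out: each inline wrapper tests a static key, so when the
key is off the call site compiles down to a patched-out branch and the
out-of-line __kasan_*() function is never called. A minimal standalone
sketch of the idiom (illustrative, not from the patch):

/* Out-of-line implementation, only reached when KASAN is enabled. */
void __kasan_hook(void *ptr);

DECLARE_STATIC_KEY_FALSE(kasan_enabled);

static inline void kasan_hook(void *ptr)
{
	/* Compiles to a branch that is patched at runtime when the key flips. */
	if (static_branch_likely(&kasan_enabled))
		__kasan_hook(ptr);
}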

Andrey Konovalov

Oct 22, 2020, 9:20:03 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Since kasan_kmalloc() always follows kasan_slab_alloc(), there's no need
to unpoison the object data again; only the redzone needs to be poisoned.

This requires changing the KASAN annotation for the early SLUB cache to
kasan_slab_alloc(); otherwise kasan_kmalloc() doesn't untag the object.
This doesn't cause any functional changes, as kmem_cache_node->object_size
is equal to sizeof(struct kmem_cache_node).

Similarly for kasan_krealloc(): as it's called after ksize(), which has
already unpoisoned the object, there's no need to do it again.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I4083d3b55605f70fef79bca9b90843c4390296f2
---
mm/kasan/common.c | 31 +++++++++++++++++++++----------
mm/slub.c | 3 +--
2 files changed, 22 insertions(+), 12 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index c5ec60e1a4d2..a581937c2a44 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -360,8 +360,14 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
tag = assign_tag(cache, object, false, keep_tag);

- /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
- kasan_unpoison_memory(set_tag(object, tag), size);
+ /*
+ * Don't unpoison the object when keeping the tag. Tag is kept for:
+ * 1. krealloc(), and then the memory has already been unpoisoned via ksize();
+ * 2. kmalloc(), and then the memory has already been unpoisoned by kasan_kmalloc().
+ * Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS.
+ */
+ if (!keep_tag)
+ kasan_unpoison_memory(set_tag(object, tag), size);
kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
KASAN_KMALLOC_REDZONE);

@@ -384,10 +390,9 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
}
EXPORT_SYMBOL(__kasan_kmalloc);

-void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
- gfp_t flags)
+static void * __must_check ____kasan_kmalloc_large(struct page *page, const void *ptr,
+ size_t size, gfp_t flags, bool realloc)
{
- struct page *page;
unsigned long redzone_start;
unsigned long redzone_end;

@@ -397,18 +402,24 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
if (unlikely(ptr == NULL))
return NULL;

- page = virt_to_page(ptr);
- redzone_start = round_up((unsigned long)(ptr + size),
- KASAN_GRANULE_SIZE);
+ redzone_start = round_up((unsigned long)(ptr + size), KASAN_GRANULE_SIZE);
redzone_end = (unsigned long)ptr + page_size(page);

- kasan_unpoison_memory(ptr, size);
+ /* ksize() in __do_krealloc() already unpoisoned the memory. */
+ if (!realloc)
+ kasan_unpoison_memory(ptr, size);
kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
KASAN_PAGE_REDZONE);

return (void *)ptr;
}

+void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
+ gfp_t flags)
+{
+ return ____kasan_kmalloc_large(virt_to_page(ptr), ptr, size, flags, false);
+}
+
void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
struct page *page;
@@ -419,7 +430,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
page = virt_to_head_page(object);

if (unlikely(!PageSlab(page)))
- return __kasan_kmalloc_large(object, size, flags);
+ return ____kasan_kmalloc_large(page, object, size, flags, true);
else
return ____kasan_kmalloc(page->slab_cache, object, size,
flags, true);
diff --git a/mm/slub.c b/mm/slub.c
index 1d3f2355df3b..afb035b0bf2d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3535,8 +3535,7 @@ static void early_kmem_cache_node_alloc(int node)
init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
init_tracking(kmem_cache_node, n);
#endif
- n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
- GFP_KERNEL);
+ n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
page->freelist = get_freepointer(kmem_cache_node, n);
page->inuse = 1;
page->frozen = 0;
--
2.29.0.rc1.297.gfa9743e501-goog
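To see why kasan_kmalloc() always follows kasan_slab_alloc(), consider
the kmalloc() path in SLUB of this era, roughly (a simplified sketch
with error handling and tracing elided):

void *__kmalloc(size_t size, gfp_t flags)
{
	struct kmem_cache *s = kmalloc_slab(size, flags);
	void *ret;

	ret = slab_alloc(s, flags, _RET_IP_);	/* calls kasan_slab_alloc() */
	/* The object is already unpoisoned here; with this patch,
	 * kasan_kmalloc() only needs to poison the redzone. */
	return kasan_kmalloc(s, ret, size, flags);
}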

Andrey Konovalov

Oct 22, 2020, 9:20:07 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
kasan_poison_kfree() is currently only called for mempool allocations
that are backed by either kmem_cache_alloc() or kmalloc(). Therefore, the
page passed to kasan_poison_kfree() is always PageSlab() and there's no
need to do the check.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/If31f88726745da8744c6bea96fb32584e6c2778c
---
mm/kasan/common.c | 11 +----------
1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a581937c2a44..b82dbae0c5d6 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -441,16 +441,7 @@ void __kasan_poison_kfree(void *ptr, unsigned long ip)
struct page *page;

page = virt_to_head_page(ptr);
-
- if (unlikely(!PageSlab(page))) {
- if (ptr != page_address(page)) {
- kasan_report_invalid_free(ptr, ip);
- return;
- }
- kasan_poison_memory(ptr, page_size(page), KASAN_FREE_PAGE);
- } else {
- ____kasan_slab_free(page->slab_cache, ptr, ip, false);
- }
+ ____kasan_slab_free(page->slab_cache, ptr, ip, false);
}

void __kasan_kfree_large(void *ptr, unsigned long ip)
--
2.29.0.rc1.297.gfa9743e501-goog
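For reference, the guarantee comes from the only call site in
mm/mempool.c (also visible in the next patch), which gates on
slab-backed pools, so the pointer passed in is always a slab object:

if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
	kasan_poison_kfree(element, _RET_IP_);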

Andrey Konovalov

Oct 22, 2020, 9:20:08 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Rename kasan_poison_kfree() to kasan_slab_free_mempool(), as the new
name better reflects what this annotation does.

No functional changes.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I5026f87364e556b506ef1baee725144bb04b8810
---
include/linux/kasan.h | 16 ++++++++--------
mm/kasan/common.c | 16 ++++++++--------
mm/mempool.c | 2 +-
3 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 8654275aa62e..2ae92f295f76 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -162,6 +162,13 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned
return false;
}

+void __kasan_slab_free_mempool(void *ptr, unsigned long ip);
+static inline void kasan_slab_free_mempool(void *ptr, unsigned long ip)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_slab_free_mempool(ptr, ip);
+}
+
void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
void *object, gfp_t flags);
static inline void * __must_check kasan_slab_alloc(struct kmem_cache *s,
@@ -202,13 +209,6 @@ static inline void * __must_check kasan_krealloc(const void *object,
return (void *)object;
}

-void __kasan_poison_kfree(void *ptr, unsigned long ip);
-static inline void kasan_poison_kfree(void *ptr, unsigned long ip)
-{
- if (static_branch_likely(&kasan_enabled))
- __kasan_poison_kfree(ptr, ip);
-}
-
void __kasan_kfree_large(void *ptr, unsigned long ip);
static inline void kasan_kfree_large(void *ptr, unsigned long ip)
{
@@ -244,6 +244,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
{
return false;
}
+static inline void kasan_slab_free_mempool(void *ptr, unsigned long ip) {}
static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
gfp_t flags)
{
@@ -264,7 +265,6 @@ static inline void *kasan_krealloc(const void *object, size_t new_size,
{
return (void *)object;
}
-static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}

#endif /* CONFIG_KASAN */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index b82dbae0c5d6..5622b0ec0907 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -334,6 +334,14 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
return ____kasan_slab_free(cache, object, ip, true);
}

+void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
+{
+ struct page *page;
+
+ page = virt_to_head_page(ptr);
+ ____kasan_slab_free(page->slab_cache, ptr, ip, false);
+}
+
static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
{
kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
@@ -436,14 +444,6 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
flags, true);
}

-void __kasan_poison_kfree(void *ptr, unsigned long ip)
-{
- struct page *page;
-
- page = virt_to_head_page(ptr);
- ____kasan_slab_free(page->slab_cache, ptr, ip, false);
-}
-
void __kasan_kfree_large(void *ptr, unsigned long ip)
{
if (ptr != page_address(virt_to_head_page(ptr)))
diff --git a/mm/mempool.c b/mm/mempool.c
index 79bff63ecf27..0e8d877fbbc6 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -106,7 +106,7 @@ static inline void poison_element(mempool_t *pool, void *element)
static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
{
if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
- kasan_poison_kfree(element, _RET_IP_);
+ kasan_slab_free_mempool(element, _RET_IP_);
if (pool->alloc == mempool_alloc_pages)
kasan_free_pages(element, (unsigned long)pool->pool_data);
}
--
2.29.0.rc1.297.gfa9743e501-goog
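As a usage illustration (an assumed example, not from the series):

static void example(void)
{
	mempool_t *pool = mempool_create_kmalloc_pool(16, 128);
	void *el = mempool_alloc(pool, GFP_KERNEL);

	/*
	 * If the element goes back into the pool's reserve,
	 * kasan_poison_element() poisons it via kasan_slab_free_mempool().
	 */
	mempool_free(el, pool);
	mempool_destroy(pool);
}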

Andrey Konovalov

Oct 22, 2020, 9:20:11 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
For the hardware tag-based mode, kasan_poison_memory() already rounds up
the size. Do the same for the software modes and remove round_up() from
the common code.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/Ib397128fac6eba874008662b4964d65352db4aa4
---
mm/kasan/common.c | 8 ++------
mm/kasan/shadow.c | 1 +
2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 5622b0ec0907..983383ebe32a 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -215,9 +215,7 @@ void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)

void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
{
- kasan_poison_memory(object,
- round_up(cache->object_size, KASAN_GRANULE_SIZE),
- KASAN_KMALLOC_REDZONE);
+ kasan_poison_memory(object, cache->object_size, KASAN_KMALLOC_REDZONE);
}

/*
@@ -290,7 +288,6 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
{
u8 tag;
void *tagged_object;
- unsigned long rounded_up_size;

tag = get_tag(object);
tagged_object = object;
@@ -311,8 +308,7 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
return true;
}

- rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
- kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);
+ kasan_poison_memory(object, cache->object_size, KASAN_KMALLOC_FREE);

if (static_branch_unlikely(&kasan_stack)) {
if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 616ac64c4a21..ab1d39c566b9 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -82,6 +82,7 @@ void kasan_poison_memory(const void *address, size_t size, u8 value)
* addresses to this function.
*/
address = reset_tag(address);
+ size = round_up(size, KASAN_GRANULE_SIZE);

shadow_start = kasan_mem_to_shadow(address);
shadow_end = kasan_mem_to_shadow(address + size);
--
2.29.0.rc1.297.gfa9743e501-goog
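A quick worked example of the rounding, assuming KASAN_GRANULE_SIZE ==
16 as in the software tag-based mode:

/* Object of 13 bytes: the whole granule backing it gets poisoned. */
kasan_poison_memory(object, 13, KASAN_KMALLOC_FREE);
/* shadow.c now computes round_up(13, 16) == 16, i.e. one shadow byte. */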

Andrey Konovalov

Oct 22, 2020, 9:20:13 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
set_tag() already ignores the tag for the generic mode, so it can simply
be called as is. Add a check for the generic mode to assign_tag(), and
simplify its call site in ____kasan_kmalloc().

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I18905ca78fb4a3d60e1a34a4ca00247272480438
---
mm/kasan/common.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 983383ebe32a..3cd56861eb11 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -235,6 +235,9 @@ void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
static u8 assign_tag(struct kmem_cache *cache, const void *object,
bool init, bool keep_tag)
{
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ return 0xff;
+
/*
* 1. When an object is kmalloc()'ed, two hooks are called:
* kasan_slab_alloc() and kasan_kmalloc(). We assign the
@@ -277,8 +280,8 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
__memset(alloc_meta, 0, sizeof(*alloc_meta));
}

- if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
- object = set_tag(object, assign_tag(cache, object, true, false));
+ /* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
+ object = set_tag(object, assign_tag(cache, object, true, false));

return (void *)object;
}
@@ -360,9 +363,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_GRANULE_SIZE);
redzone_end = round_up((unsigned long)object + cache->object_size,
KASAN_GRANULE_SIZE);
-
- if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
- tag = assign_tag(cache, object, false, keep_tag);
+ tag = assign_tag(cache, object, false, keep_tag);

/*
* Don't unpoison the object when keeping the tag. Tag is kept for:
--
2.29.0.rc1.297.gfa9743e501-goog

Andrey Konovalov

Oct 22, 2020, 9:20:15 AM
to Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, Andrey Konovalov
Currently the comment says that the memory gets poisoned by page_alloc
code. Clarify this by naming the specific callback that poisons the
memory.

Signed-off-by: Andrey Konovalov <andre...@google.com>
Link: https://linux-review.googlesource.com/id/I1334dffb69b87d7986fab88a1a039cc3ea764725
---
mm/kasan/common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 3cd56861eb11..54af79aa8d3f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -445,5 +445,5 @@ void __kasan_kfree_large(void *ptr, unsigned long ip)
{
if (ptr != page_address(virt_to_head_page(ptr)))
kasan_report_invalid_free(ptr, ip);
- /* The object will be poisoned by page_alloc. */
+ /* The object will be poisoned by kasan_free_pages(). */
}
--
2.29.0.rc1.297.gfa9743e501-goog

Dmitry Vyukov

Oct 22, 2020, 11:16:08 AM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
For boot parameters I think we are now "safe" in the sense that we
provide maximum possible flexibility and can defer any actual
decisions.

> Should we try to deal with CONFIG_SLAB_MERGE_DEFAULT-like behavor mentioned
> above?

How hard is it to allow KASAN with CONFIG_SLAB_MERGE_DEFAULT? Are
there any fundamental conflicts?
The numbers you provided look quite substantial (on a par with what MTE
itself may introduce). So I would assume that if a vendor does not have
CONFIG_SLAB_MERGE_DEFAULT disabled, it may not want to disable it
because of MTE (as that effectively doubles the overhead).

Andrey Konovalov

Oct 22, 2020, 1:00:56 PM
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
FTR, this only accounts for the slab memory overhead that comes from
the redzones that store stack ids. There's also page_alloc overhead from
the stack traces themselves, which I haven't measured yet.

> >
> > === Questions
> >
> > Any concerns about the boot parameters?
>
> For boot parameters I think we are now "safe" in the sense that we
> provide maximum possible flexibility and can defer any actual
> decisions.

Perfect!

I realized that I actually forgot to think about the default values
when no boot params are specified; I'll fix this in the next version.

> > Should we try to deal with CONFIG_SLAB_MERGE_DEFAULT-like behavor mentioned
> > above?
>
> How hard it is to allow KASAN with CONFIG_SLAB_MERGE_DEFAULT? Are
> there any principal conflicts?

I'll explore this.

> The numbers you provided look quite substantial (on a par of what MTE
> itself may introduce). So I would assume if a vendor does not have
> CONFIG_SLAB_MERGE_DEFAULT disabled, it may not want to disable it
> because of MTE (effectively doubles overhead).

Sounds reasonable.

Thanks!

Kostya Serebryany

Oct 22, 2020, 2:30:12 PM
to Andrey Konovalov, Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
The boot parameters look great!

Do we use redzones in kasan.mode=prod?
(I think we should not)

Please separate the work on improving the stack trace collection from the work
on enabling kasan.mode=prod; the latter is more important IMHO.

Still some notes on stack traces:

> kasan.mode=full has 40% performance and 30% memory impact over
> kasan.mode=prod. Both come from alloc/free stack collection.

This is a lot. Right?
Please provide a more detailed breakdown:
* CPU overhead of collecting stack traces vs overhead of putting them
in a container/depot
* RAM overhead depending on the number of frames stored
* RAM overhead of the storage container (or redzones?)
* How much is 30% in absolute numbers?

Do we perform any stack trace compression?

Can we collect stack traces from the shadow call stack, when it's
available (default on Android)?

As we discussed offline, I think we have a way to compress reasonably
long stack traces into 8 bytes,
but it will take some effort and time to implement:
* collect the stack trace as usual (with shadow stack, when available)
* compute a hash of the top N frames
* store the hash, discard the stack trace. On trap, report the hashes
for allocation/deallocation
* Offline, analyze the binary to reconstruct the call graph, including
the indirect calls
* Perform a DFS from kmalloc/kfree up the call graph to depth N,
compute hashes for all paths, and
report the paths whose hash matches the one in the report.
My preliminary investigation shows that we can do it easily for N <= 10.
The trickiest bit here is to build the call graph for indirect calls,
but we should be able to do it.
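A minimal sketch of the hashing step described above (an assumed
design; nothing like this exists in the series yet):

#define STACK_HASH_FRAMES	10

static u64 stack_trace_hash(const unsigned long *entries, unsigned int nr)
{
	unsigned int i, n = min_t(unsigned int, nr, STACK_HASH_FRAMES);
	u64 hash = 0;

	/* Mix the top N return addresses into a single 8-byte value. */
	for (i = 0; i < n; i++)
		hash = hash * 1000003 + entries[i];

	return hash;
}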

Marco Elver

Oct 22, 2020, 2:50:40 PM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
Why is this "ASYNC" and not "async"?

Andrey Konovalov

Oct 22, 2020, 4:28:54 PM
to Marco Elver, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
Typo, will fix in the next version. Thanks!

Dmitry Vyukov

Oct 27, 2020, 8:40:50 AM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Move get_free_info() call into quarantine_put() to simplify the call site.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/Iab0f04e7ebf8d83247024b7190c67c3c34c7940f

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> mm/kasan/common.c | 2 +-
> mm/kasan/kasan.h | 5 ++---
> mm/kasan/quarantine.c | 3 ++-
> 3 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 2bb0ef6da6bd..5712c66c11c1 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -308,7 +308,7 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
>
> kasan_set_free_info(cache, object, tag);
>
> - quarantine_put(get_free_info(cache, object), cache);
> + quarantine_put(cache, object);
>
> return IS_ENABLED(CONFIG_KASAN_GENERIC);
> }
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 6850308c798a..5c0116c70579 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -214,12 +214,11 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
>
> #if defined(CONFIG_KASAN_GENERIC) && \
> (defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
> -void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
> +void quarantine_put(struct kmem_cache *cache, void *object);
> void quarantine_reduce(void);
> void quarantine_remove_cache(struct kmem_cache *cache);
> #else
> -static inline void quarantine_put(struct kasan_free_meta *info,
> - struct kmem_cache *cache) { }
> +static inline void quarantine_put(struct kmem_cache *cache, void *object) { }
> static inline void quarantine_reduce(void) { }
> static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
> #endif
> diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> index 580ff5610fc1..a0792f0d6d0f 100644
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -161,11 +161,12 @@ static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache)
> qlist_init(q);
> }
>
> -void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
> +void quarantine_put(struct kmem_cache *cache, void *object)
> {
> unsigned long flags;
> struct qlist_head *q;
> struct qlist_head temp = QLIST_INIT;
> + struct kasan_free_meta *info = get_free_info(cache, object);
>
> /*
> * Note: irq must be disabled until after we move the batch to the
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

Oct 27, 2020, 8:41:05 AM
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Rename get_alloc_info() and get_free_info() to kasan_get_alloc_meta()
> and kasan_get_free_meta() to better reflect what those do and avoid
> confusion with kasan_set_free_info().
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/Ib6e4ba61c8b12112b403d3479a9799ac8fff8de1

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> mm/kasan/common.c | 16 ++++++++--------
> mm/kasan/generic.c | 12 ++++++------
> mm/kasan/hw_tags.c | 4 ++--
> mm/kasan/kasan.h | 8 ++++----
> mm/kasan/quarantine.c | 4 ++--
> mm/kasan/report.c | 12 ++++++------
> mm/kasan/report_sw_tags.c | 2 +-
> mm/kasan/sw_tags.c | 4 ++--
> 8 files changed, 31 insertions(+), 31 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 5712c66c11c1..8fd04415d8f4 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -175,14 +175,14 @@ size_t kasan_metadata_size(struct kmem_cache *cache)
> sizeof(struct kasan_free_meta) : 0);
> }
>
> -struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
> - const void *object)
> +struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
> + const void *object)
> {
> return (void *)reset_tag(object) + cache->kasan_info.alloc_meta_offset;
> }
>
> -struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
> - const void *object)
> +struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> + const void *object)
> {
> BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
> return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
> @@ -259,13 +259,13 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
> void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> const void *object)
> {
> - struct kasan_alloc_meta *alloc_info;
> + struct kasan_alloc_meta *alloc_meta;
>
> if (!(cache->flags & SLAB_KASAN))
> return (void *)object;
>
> - alloc_info = get_alloc_info(cache, object);
> - __memset(alloc_info, 0, sizeof(*alloc_info));
> + alloc_meta = kasan_get_alloc_meta(cache, object);
> + __memset(alloc_meta, 0, sizeof(*alloc_meta));
>
> if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> object = set_tag(object, assign_tag(cache, object, true, false));
> @@ -345,7 +345,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> KASAN_KMALLOC_REDZONE);
>
> if (cache->flags & SLAB_KASAN)
> - kasan_set_track(&get_alloc_info(cache, object)->alloc_track, flags);
> + kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
>
> return set_tag(object, tag);
> }
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index e1af3b6c53b8..de6b3f03a023 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -331,7 +331,7 @@ void kasan_record_aux_stack(void *addr)
> {
> struct page *page = kasan_addr_to_page(addr);
> struct kmem_cache *cache;
> - struct kasan_alloc_meta *alloc_info;
> + struct kasan_alloc_meta *alloc_meta;
> void *object;
>
> if (!(page && PageSlab(page)))
> @@ -339,13 +339,13 @@ void kasan_record_aux_stack(void *addr)
>
> cache = page->slab_cache;
> object = nearest_obj(cache, page, addr);
> - alloc_info = get_alloc_info(cache, object);
> + alloc_meta = kasan_get_alloc_meta(cache, object);
>
> /*
> * record the last two call_rcu() call stacks.
> */
> - alloc_info->aux_stack[1] = alloc_info->aux_stack[0];
> - alloc_info->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
> + alloc_meta->aux_stack[1] = alloc_meta->aux_stack[0];
> + alloc_meta->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
> }
>
> void kasan_set_free_info(struct kmem_cache *cache,
> @@ -353,7 +353,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
> {
> struct kasan_free_meta *free_meta;
>
> - free_meta = get_free_info(cache, object);
> + free_meta = kasan_get_free_meta(cache, object);
> kasan_set_track(&free_meta->free_track, GFP_NOWAIT);
>
> /*
> @@ -367,5 +367,5 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
> {
> if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_KMALLOC_FREETRACK)
> return NULL;
> - return &get_free_info(cache, object)->free_track;
> + return &kasan_get_free_meta(cache, object)->free_track;
> }
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 7f0568df2a93..2a38885014e3 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -56,7 +56,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
> {
> struct kasan_alloc_meta *alloc_meta;
>
> - alloc_meta = get_alloc_info(cache, object);
> + alloc_meta = kasan_get_alloc_meta(cache, object);
> kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT);
> }
>
> @@ -65,6 +65,6 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
> {
> struct kasan_alloc_meta *alloc_meta;
>
> - alloc_meta = get_alloc_info(cache, object);
> + alloc_meta = kasan_get_alloc_meta(cache, object);
> return &alloc_meta->free_track[0];
> }
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 5c0116c70579..456b264e5124 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -148,10 +148,10 @@ struct kasan_free_meta {
> #endif
> };
>
> -struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
> - const void *object);
> -struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
> - const void *object);
> +struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
> + const void *object);
> +struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> + const void *object);
>
> void kasan_poison_memory(const void *address, size_t size, u8 value);
>
> diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> index a0792f0d6d0f..0da3d37e1589 100644
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -166,7 +166,7 @@ void quarantine_put(struct kmem_cache *cache, void *object)
> unsigned long flags;
> struct qlist_head *q;
> struct qlist_head temp = QLIST_INIT;
> - struct kasan_free_meta *info = get_free_info(cache, object);
> + struct kasan_free_meta *meta = kasan_get_free_meta(cache, object);
>
> /*
> * Note: irq must be disabled until after we move the batch to the
> @@ -179,7 +179,7 @@ void quarantine_put(struct kmem_cache *cache, void *object)
> local_irq_save(flags);
>
> q = this_cpu_ptr(&cpu_quarantine);
> - qlist_put(q, &info->quarantine_link, cache->size);
> + qlist_put(q, &meta->quarantine_link, cache->size);
> if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
> qlist_move_all(q, &temp);
>
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index f8817d5685a7..dee5350b459c 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -162,12 +162,12 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
> static void describe_object(struct kmem_cache *cache, void *object,
> const void *addr, u8 tag)
> {
> - struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
> + struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);
>
> if (cache->flags & SLAB_KASAN) {
> struct kasan_track *free_track;
>
> - print_track(&alloc_info->alloc_track, "Allocated");
> + print_track(&alloc_meta->alloc_track, "Allocated");
> pr_err("\n");
> free_track = kasan_get_free_track(cache, object, tag);
> if (free_track) {
> @@ -176,14 +176,14 @@ static void describe_object(struct kmem_cache *cache, void *object,
> }
>
> #ifdef CONFIG_KASAN_GENERIC
> - if (alloc_info->aux_stack[0]) {
> + if (alloc_meta->aux_stack[0]) {
> pr_err("Last call_rcu():\n");
> - print_stack(alloc_info->aux_stack[0]);
> + print_stack(alloc_meta->aux_stack[0]);
> pr_err("\n");
> }
> - if (alloc_info->aux_stack[1]) {
> + if (alloc_meta->aux_stack[1]) {
> pr_err("Second to last call_rcu():\n");
> - print_stack(alloc_info->aux_stack[1]);
> + print_stack(alloc_meta->aux_stack[1]);
> pr_err("\n");
> }
> #endif
> diff --git a/mm/kasan/report_sw_tags.c b/mm/kasan/report_sw_tags.c
> index aebc44a29e83..317100fd95b9 100644
> --- a/mm/kasan/report_sw_tags.c
> +++ b/mm/kasan/report_sw_tags.c
> @@ -46,7 +46,7 @@ const char *get_bug_type(struct kasan_access_info *info)
> if (page && PageSlab(page)) {
> cache = page->slab_cache;
> object = nearest_obj(cache, page, (void *)addr);
> - alloc_meta = get_alloc_info(cache, object);
> + alloc_meta = kasan_get_alloc_meta(cache, object);
>
> for (i = 0; i < KASAN_NR_FREE_STACKS; i++)
> if (alloc_meta->free_pointer_tag[i] == tag)
> diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
> index ccc35a311179..c10863a45775 100644
> --- a/mm/kasan/sw_tags.c
> +++ b/mm/kasan/sw_tags.c
> @@ -172,7 +172,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
> struct kasan_alloc_meta *alloc_meta;
> u8 idx = 0;
>
> - alloc_meta = get_alloc_info(cache, object);
> + alloc_meta = kasan_get_alloc_meta(cache, object);
>
> #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
> idx = alloc_meta->free_track_idx;
> @@ -189,7 +189,7 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
> struct kasan_alloc_meta *alloc_meta;
> int i = 0;
>
> - alloc_meta = get_alloc_info(cache, object);
> + alloc_meta = kasan_get_alloc_meta(cache, object);
>
> #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
> for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 27, 2020, 8:41:25 AM10/27/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Add set_alloc_info() helper and move kasan_set_track() into it. This will
> simplify the code for one of the upcoming changes.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I0316193cbb4ecc9b87b7c2eee0dd79f8ec908c1a

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> mm/kasan/common.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 8fd04415d8f4..a880e5a547ed 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -318,6 +318,11 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> return __kasan_slab_free(cache, object, ip, true);
> }
>
> +static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> +{
> + kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> +}
> +
> static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> size_t size, gfp_t flags, bool keep_tag)
> {
> @@ -345,7 +350,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> KASAN_KMALLOC_REDZONE);
>
> if (cache->flags & SLAB_KASAN)
> - kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> + set_alloc_info(cache, (void *)object, flags);
>
> return set_tag(object, tag);
> }
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 27, 2020, 8:44:34 AM10/27/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> There's a config option CONFIG_KASAN_STACK that has to be enabled for
> KASAN to use stack instrumentation and perform validity checks for
> stack variables.
>
> There's no need to unpoison stack when CONFIG_KASAN_STACK is not enabled.
> Only call kasan_unpoison_task_stack[_below]() when CONFIG_KASAN_STACK is
> enabled.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/If8a891e9fe01ea543e00b576852685afec0887e3
> ---
> arch/arm64/kernel/sleep.S | 2 +-
> arch/x86/kernel/acpi/wakeup_64.S | 2 +-
> include/linux/kasan.h | 10 ++++++----
> mm/kasan/common.c | 2 ++
> 4 files changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
> index ba40d57757d6..bdadfa56b40e 100644
> --- a/arch/arm64/kernel/sleep.S
> +++ b/arch/arm64/kernel/sleep.S
> @@ -133,7 +133,7 @@ SYM_FUNC_START(_cpu_resume)
> */
> bl cpu_do_resume
>
> -#ifdef CONFIG_KASAN
> +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
> mov x0, sp
> bl kasan_unpoison_task_stack_below
> #endif
> diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
> index c8daa92f38dc..5d3a0b8fd379 100644
> --- a/arch/x86/kernel/acpi/wakeup_64.S
> +++ b/arch/x86/kernel/acpi/wakeup_64.S
> @@ -112,7 +112,7 @@ SYM_FUNC_START(do_suspend_lowlevel)
> movq pt_regs_r14(%rax), %r14
> movq pt_regs_r15(%rax), %r15
>
> -#ifdef CONFIG_KASAN
> +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
> /*
> * The suspend path may have poisoned some areas deeper in the stack,
> * which we now need to unpoison.
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 3f3f541e5d5f..7be9fb9146ac 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -68,8 +68,6 @@ static inline void kasan_disable_current(void) {}
>
> void kasan_unpoison_memory(const void *address, size_t size);
>
> -void kasan_unpoison_task_stack(struct task_struct *task);
> -
> void kasan_alloc_pages(struct page *page, unsigned int order);
> void kasan_free_pages(struct page *page, unsigned int order);
>
> @@ -114,8 +112,6 @@ void kasan_restore_multi_shot(bool enabled);
>
> static inline void kasan_unpoison_memory(const void *address, size_t size) {}
>
> -static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
> -
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>
> @@ -167,6 +163,12 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
>
> #endif /* CONFIG_KASAN */
>
> +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK

&& defined(CONFIG_KASAN_STACK) for consistency

> +void kasan_unpoison_task_stack(struct task_struct *task);
> +#else
> +static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
> +#endif
> +
> #ifdef CONFIG_KASAN_GENERIC
>
> void kasan_cache_shrink(struct kmem_cache *cache);
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index a880e5a547ed..a3e67d49b893 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -58,6 +58,7 @@ void kasan_disable_current(void)
> }
> #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
>
> +#if CONFIG_KASAN_STACK

#ifdef CONFIG_ is the form used throughout the kernel code
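
For what it's worth, the two forms behave differently when the option is
always defined with value 0 or 1 rather than left undefined when off. A
minimal illustration (hypothetical values, not from this patch):

/* suppose the build system always defines the macro, as 0 or 1 */
#define CONFIG_KASAN_STACK 0

#ifdef CONFIG_KASAN_STACK
/* compiled in even when the value is 0 */
#endif

#if CONFIG_KASAN_STACK
/* compiled in only when the value is nonzero */
#endif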

> static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
> {
> void *base = task_stack_page(task);
> @@ -84,6 +85,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
>
> kasan_unpoison_memory(base, watermark - base);
> }
> +#endif /* CONFIG_KASAN_STACK */
>
> void kasan_alloc_pages(struct page *page, unsigned int order)
> {
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 27, 2020, 8:45:40 AM10/27/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
And similarly here

> > mov x0, sp
> > bl kasan_unpoison_task_stack_below
> > #endif
> > diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
> > index c8daa92f38dc..5d3a0b8fd379 100644
> > --- a/arch/x86/kernel/acpi/wakeup_64.S
> > +++ b/arch/x86/kernel/acpi/wakeup_64.S
> > @@ -112,7 +112,7 @@ SYM_FUNC_START(do_suspend_lowlevel)
> > movq pt_regs_r14(%rax), %r14
> > movq pt_regs_r15(%rax), %r15
> >
> > -#ifdef CONFIG_KASAN
> > +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK

and here

Dmitry Vyukov

unread,
Oct 27, 2020, 8:49:31 AM10/27/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Even though hardware tag-based mode currently doesn't support checking
> vmalloc allocations, it doesn't use shadow memory and works with
> VMAP_STACK as is.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I3552cbc12321dec82cd7372676e9372a2eb452ac
> ---
> arch/Kconfig | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index af14a567b493..3caf7bcdcf93 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -868,7 +868,7 @@ config VMAP_STACK
> default y
> bool "Use a virtually-mapped stack"
> depends on HAVE_ARCH_VMAP_STACK
> - depends on !KASAN || KASAN_VMALLOC
> + depends on !(KASAN_GENERIC || KASAN_SW_TAGS) || KASAN_VMALLOC

I find it a bit simpler to interpret:

depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC

due to simpler structure. But maybe it's just me.
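
(Assuming KASAN always means exactly one of the three modes, the two
forms are equivalent:

  !(KASAN_GENERIC || KASAN_SW_TAGS) || KASAN_VMALLOC
  == !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC

so this is purely a readability choice.)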

> help
> Enable this if you want the use virtually-mapped kernel stacks
> with guard pages. This causes kernel stack overflows to be
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 6:08:12 AM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM 'Andrey Konovalov' via kasan-dev
<kasa...@googlegroups.com> wrote:
>
> Similarly to kasan_init() mark kasan_init_tags() as __init.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I8792e22f1ca5a703c5e979969147968a99312558

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

init_tags itself is not __init, but that's added in a different patch.
I've commented on that patch.


> ---
> include/linux/kasan.h | 2 +-
> mm/kasan/hw_tags.c | 2 +-
> mm/kasan/sw_tags.c | 2 +-
> 3 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 7be9fb9146ac..93d9834b7122 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -185,7 +185,7 @@ static inline void kasan_record_aux_stack(void *ptr) {}
>
> #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
>
> -void kasan_init_tags(void);
> +void __init kasan_init_tags(void);
>
> void *kasan_reset_tag(const void *addr);
>
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 2a38885014e3..0128062320d5 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -15,7 +15,7 @@
>
> #include "kasan.h"
>
> -void kasan_init_tags(void)
> +void __init kasan_init_tags(void)
> {
> init_tags(KASAN_TAG_MAX);
> }
> diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
> index c10863a45775..bf1422282bb5 100644
> --- a/mm/kasan/sw_tags.c
> +++ b/mm/kasan/sw_tags.c
> @@ -35,7 +35,7 @@
>
> static DEFINE_PER_CPU(u32, prng_state);
>
> -void kasan_init_tags(void)
> +void __init kasan_init_tags(void)
> {
> int cpu;
>
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 6:56:09 AM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Tag-based KASAN modes are fully initialized with kasan_init_tags(),
> while the generic mode only requires kasan_init(). Move the
> initialization message for tag-based modes into kasan_init_tags().
>
> Also fix pr_fmt() usage for KASAN code: generic mode doesn't need it,

Why doesn't it need it? What's the difference with tag modes?

> tag-based modes should use "kasan:" instead of KBUILD_MODNAME.

With generic KASAN I currently see:

[ 0.571473][ T0] kasan: KernelAddressSanitizer initialized

So KBUILD_MODNAME somehow works. Is there some difference between files?
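
For context, pr_fmt() just prefixes every pr_*() format string in the
file, and must be defined before the printk includes to override the
default. A minimal sketch (illustrative, not from the patch):

#define pr_fmt(fmt) "kasan: " fmt	/* must come before the includes */
#include <linux/printk.h>

pr_info("KernelAddressSanitizer initialized\n");
/* -> "kasan: KernelAddressSanitizer initialized" */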

> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/Idfd1e50625ffdf42dfc3dbf7455b11bd200a0a49
> ---
> arch/arm64/mm/kasan_init.c | 3 +++
> mm/kasan/generic.c | 2 --
> mm/kasan/hw_tags.c | 4 ++++
> mm/kasan/sw_tags.c | 4 +++-
> 4 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index b6b9d55bb72e..8f17fa834b62 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -290,5 +290,8 @@ void __init kasan_init(void)
> {
> kasan_init_shadow();
> kasan_init_depth();
> +#if defined(CONFIG_KASAN_GENERIC)
> + /* CONFIG_KASAN_SW/HW_TAGS also requires kasan_init_tags(). */

A bit cleaner way may be to introduce kasan_init_early() and
kasan_init_late(). Late() will do tag init and always print the
message.
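
Roughly (a sketch of the suggestion; the names and the exact split are
hypothetical):

void __init kasan_init_early(void)
{
	kasan_init_shadow();
	kasan_init_depth();
}

void __init kasan_init_late(void)
{
#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
	kasan_init_tags();
#endif
	/* the message moves here, out of kasan_init_tags() */
	pr_info("KernelAddressSanitizer initialized\n");
}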

> pr_info("KernelAddressSanitizer initialized\n");
> +#endif
> }
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index de6b3f03a023..d259e4c3aefd 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -9,8 +9,6 @@
> * Andrey Konovalov <andre...@gmail.com>
> */
>
> -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> -
> #include <linux/export.h>
> #include <linux/interrupt.h>
> #include <linux/init.h>
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 0128062320d5..b372421258c8 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -6,6 +6,8 @@
> * Author: Andrey Konovalov <andre...@google.com>
> */
>
> +#define pr_fmt(fmt) "kasan: " fmt
> +
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> #include <linux/memory.h>
> @@ -18,6 +20,8 @@
> void __init kasan_init_tags(void)
> {
> init_tags(KASAN_TAG_MAX);
> +
> + pr_info("KernelAddressSanitizer initialized\n");
> }
>
> void *kasan_reset_tag(const void *addr)
> diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
> index bf1422282bb5..099af6dc8f7e 100644
> --- a/mm/kasan/sw_tags.c
> +++ b/mm/kasan/sw_tags.c
> @@ -6,7 +6,7 @@
> * Author: Andrey Konovalov <andre...@google.com>
> */
>
> -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +#define pr_fmt(fmt) "kasan: " fmt
>
> #include <linux/export.h>
> #include <linux/interrupt.h>
> @@ -41,6 +41,8 @@ void __init kasan_init_tags(void)
>
> for_each_possible_cpu(cpu)
> per_cpu(prng_state, cpu) = (u32)get_cycles();
> +
> + pr_info("KernelAddressSanitizer initialized\n");
> }
>
> /*
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 6:58:06 AM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> There's no need for the __kasan_unpoison_stack() helper, as it's
> currently used in only a single place. Removing it also removes unneeded
> arithmetic.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/Ie5ba549d445292fe629b4a96735e4034957bcc50

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> mm/kasan/common.c | 12 +++---------
> 1 file changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index a3e67d49b893..9008fc6b0810 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -59,18 +59,12 @@ void kasan_disable_current(void)
> #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
>
> #if CONFIG_KASAN_STACK
> -static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
> -{
> - void *base = task_stack_page(task);
> - size_t size = sp - base;
> -
> - kasan_unpoison_memory(base, size);
> -}
> -
> /* Unpoison the entire stack for a task. */
> void kasan_unpoison_task_stack(struct task_struct *task)
> {
> - __kasan_unpoison_stack(task, task_stack_page(task) + THREAD_SIZE);
> + void *base = task_stack_page(task);
> +
> + kasan_unpoison_memory(base, THREAD_SIZE);
> }
>
> /* Unpoison the stack for the current task beyond a watermark sp value. */
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 7:05:34 AM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Using kasan_reset_tag() currently results in a function call. As it's
> called quite often from the allocator code, this leads to a noticeable
> slowdown. Move it to include/linux/kasan.h and turn it into a static
> inline function.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I4d2061acfe91d480a75df00b07c22d8494ef14b5
> ---
> include/linux/kasan.h | 5 ++++-
> mm/kasan/hw_tags.c | 5 -----
> mm/kasan/kasan.h | 6 ++----
> mm/kasan/sw_tags.c | 5 -----
> 4 files changed, 6 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 93d9834b7122..6377d7d3a951 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -187,7 +187,10 @@ static inline void kasan_record_aux_stack(void *ptr) {}
>
> void __init kasan_init_tags(void);
>
> -void *kasan_reset_tag(const void *addr);
> +static inline void *kasan_reset_tag(const void *addr)
> +{
> + return (void *)arch_kasan_reset_tag(addr);

It seems that all implementations already return (void *), so the cast
is not needed.

> +}
>
> bool kasan_report(unsigned long addr, size_t size,
> bool is_write, unsigned long ip);
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index b372421258c8..c3a0e83b5e7a 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -24,11 +24,6 @@ void __init kasan_init_tags(void)
> pr_info("KernelAddressSanitizer initialized\n");
> }
>
> -void *kasan_reset_tag(const void *addr)
> -{
> - return reset_tag(addr);
> -}
> -
> void kasan_poison_memory(const void *address, size_t size, u8 value)
> {
> set_mem_tag_range(reset_tag(address),
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 456b264e5124..0ccbb3c4c519 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -246,15 +246,13 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> return addr;
> }
> #endif
> -#ifndef arch_kasan_reset_tag
> -#define arch_kasan_reset_tag(addr) ((void *)(addr))
> -#endif
> #ifndef arch_kasan_get_tag
> #define arch_kasan_get_tag(addr) 0
> #endif
>
> +/* kasan_reset_tag() defined in include/linux/kasan.h. */
> +#define reset_tag(addr) ((void *)kasan_reset_tag(addr))

The cast is not needed.

I would also now remove reset_tag entirely by replacing it with
kasan_reset_tag. Having 2 names for the same thing does not add
clarity.
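
With the cast dropped and reset_tag() removed, the header helper would
reduce to (a sketch):

static inline void *kasan_reset_tag(const void *addr)
{
	return arch_kasan_reset_tag(addr);
}

and internal callers would spell reset_tag(addr) as kasan_reset_tag(addr)
directly.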


> #define set_tag(addr, tag) ((void *)arch_kasan_set_tag((addr), (tag)))
> -#define reset_tag(addr) ((void *)arch_kasan_reset_tag(addr))
> #define get_tag(addr) arch_kasan_get_tag(addr)
>
> #ifndef arch_init_tags
> diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
> index 099af6dc8f7e..4db41f274702 100644
> --- a/mm/kasan/sw_tags.c
> +++ b/mm/kasan/sw_tags.c
> @@ -67,11 +67,6 @@ u8 random_tag(void)
> return (u8)(state % (KASAN_TAG_MAX + 1));
> }
>
> -void *kasan_reset_tag(const void *addr)
> -{
> - return reset_tag(addr);
> -}
> -
> bool check_memory_region(unsigned long addr, size_t size, bool write,
> unsigned long ret_ip)
> {
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 7:08:35 AM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Using random_tag() currently results in a function call. Move its
> definition to mm/kasan/kasan.h and turn it into a static inline function
> for hardware tag-based mode to avoid an unneeded function call.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/Iac5b2faf9a912900e16cca6834d621f5d4abf427
> ---
> mm/kasan/hw_tags.c | 5 -----
> mm/kasan/kasan.h | 37 ++++++++++++++++++++-----------------
> 2 files changed, 20 insertions(+), 22 deletions(-)
>
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index c3a0e83b5e7a..4c24bfcfeff9 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -36,11 +36,6 @@ void kasan_unpoison_memory(const void *address, size_t size)
> round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
> }
>
> -u8 random_tag(void)
> -{
> - return get_random_tag();
> -}
> -
> bool check_invalid_free(void *addr)
> {
> u8 ptr_tag = get_tag(addr);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 0ccbb3c4c519..94ba15c2f860 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -188,6 +188,12 @@ static inline bool addr_has_metadata(const void *addr)
>
> #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
>
> +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> +void print_tags(u8 addr_tag, const void *addr);
> +#else
> +static inline void print_tags(u8 addr_tag, const void *addr) { }
> +#endif
> +
> bool check_invalid_free(void *addr);
>
> void *find_first_bad_addr(void *addr, size_t size);
> @@ -223,23 +229,6 @@ static inline void quarantine_reduce(void) { }
> static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
> #endif
>
> -#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> -
> -void print_tags(u8 addr_tag, const void *addr);
> -
> -u8 random_tag(void);
> -
> -#else
> -
> -static inline void print_tags(u8 addr_tag, const void *addr) { }
> -
> -static inline u8 random_tag(void)
> -{
> - return 0;
> -}
> -
> -#endif
> -
> #ifndef arch_kasan_set_tag
> static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> {
> @@ -273,6 +262,20 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> #define get_mem_tag(addr) arch_get_mem_tag(addr)
> #define set_mem_tag_range(addr, size, tag) arch_set_mem_tag_range((addr), (size), (tag))
>
> +#ifdef CONFIG_KASAN_SW_TAGS
> +u8 random_tag(void);
> +#elif defined(CONFIG_KASAN_HW_TAGS)
> +static inline u8 random_tag(void)
> +{
> + return get_random_tag();

What's the difference between random_tag() and get_random_tag()? Do we
need both?


> +}
> +#else
> +static inline u8 random_tag(void)
> +{
> + return 0;
> +}
> +#endif
> +
> /*
> * Exported functions for interfaces called from assembly or from generated
> * code. Declarations here to avoid warning about missing declarations.
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 7:29:14 AM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM 'Andrey Konovalov' via kasan-dev
<kasa...@googlegroups.com> wrote:
>
> Using kasan_poison_memory() or check_invalid_free() currently results in
> function calls. Move their definitions to mm/kasan/kasan.h and turn them
> into static inline functions for hardware tag-based mode to avoid unneeded
> function calls.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/Ia9d8191024a12d1374675b3d27197f10193f50bb

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> mm/kasan/hw_tags.c | 15 ---------------
> mm/kasan/kasan.h | 28 ++++++++++++++++++++++++----
> 2 files changed, 24 insertions(+), 19 deletions(-)
>
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 4c24bfcfeff9..f03161f3da19 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -24,27 +24,12 @@ void __init kasan_init_tags(void)
> pr_info("KernelAddressSanitizer initialized\n");
> }
>
> -void kasan_poison_memory(const void *address, size_t size, u8 value)
> -{
> - set_mem_tag_range(reset_tag(address),
> - round_up(size, KASAN_GRANULE_SIZE), value);
> -}
> -
> void kasan_unpoison_memory(const void *address, size_t size)
> {
> set_mem_tag_range(reset_tag(address),
> round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
> }
>
> -bool check_invalid_free(void *addr)
> -{
> - u8 ptr_tag = get_tag(addr);
> - u8 mem_tag = get_mem_tag(addr);
> -
> - return (mem_tag == KASAN_TAG_INVALID) ||
> - (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag);
> -}
> -
> void kasan_set_free_info(struct kmem_cache *cache,
> void *object, u8 tag)
> {
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 94ba15c2f860..8d84ae6f58f1 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -153,8 +153,6 @@ struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
> struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> const void *object);
>
> -void kasan_poison_memory(const void *address, size_t size, u8 value);
> -
> #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>
> static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
> @@ -194,8 +192,6 @@ void print_tags(u8 addr_tag, const void *addr);
> static inline void print_tags(u8 addr_tag, const void *addr) { }
> #endif
>
> -bool check_invalid_free(void *addr);
> -
> void *find_first_bad_addr(void *addr, size_t size);
> const char *get_bug_type(struct kasan_access_info *info);
> void metadata_fetch_row(char *buffer, void *row);
> @@ -276,6 +272,30 @@ static inline u8 random_tag(void)
> }
> #endif
>
> +#ifdef CONFIG_KASAN_HW_TAGS
> +
> +static inline void kasan_poison_memory(const void *address, size_t size, u8 value)
> +{
> + set_mem_tag_range(reset_tag(address),
> + round_up(size, KASAN_GRANULE_SIZE), value);
> +}
> +
> +static inline bool check_invalid_free(void *addr)
> +{
> + u8 ptr_tag = get_tag(addr);
> + u8 mem_tag = get_mem_tag(addr);
> +
> + return (mem_tag == KASAN_TAG_INVALID) ||
> + (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag);
> +}
> +
> +#else /* CONFIG_KASAN_HW_TAGS */
> +
> +void kasan_poison_memory(const void *address, size_t size, u8 value);
> +bool check_invalid_free(void *addr);
> +
> +#endif /* CONFIG_KASAN_HW_TAGS */
> +
> /*
> * Exported functions for interfaces called from assembly or from generated
> * code. Declarations here to avoid warning about missing declarations.
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 7:36:48 AM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Currently kasan_unpoison_memory() is used as both an external annotation
> and as an internal memory poisoning helper. Rename the external annotation
> to kasan_unpoison_data() and inline the internal helper for hardware
> tag-based mode to avoid unneeded function calls.
>
> There's the external annotation kasan_unpoison_slab() that is currently
> defined as static inline and uses kasan_unpoison_memory(). With this
> change it's turned into a function call. Overall, this results in the
> same number of calls for hardware tag-based mode as
> kasan_unpoison_memory() is now inlined.

Can't we leave kasan_unpoison_slab as is? Or are there other reasons
to uninline it?
It seems that uninlining it is orthogonal to the rest of this patch.

> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/Ia7c8b659f79209935cbaab3913bf7f082cc43a0e
> ---
> include/linux/kasan.h | 16 ++++++----------
> kernel/fork.c | 2 +-
> mm/kasan/common.c | 10 ++++++++++
> mm/kasan/hw_tags.c | 6 ------
> mm/kasan/kasan.h | 7 +++++++
> mm/slab_common.c | 2 +-
> 6 files changed, 25 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 6377d7d3a951..2b9023224474 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -66,14 +66,15 @@ static inline void kasan_disable_current(void) {}
>
> #ifdef CONFIG_KASAN
>
> -void kasan_unpoison_memory(const void *address, size_t size);
> -
> void kasan_alloc_pages(struct page *page, unsigned int order);
> void kasan_free_pages(struct page *page, unsigned int order);
>
> void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> slab_flags_t *flags);
>
> +void kasan_unpoison_data(const void *address, size_t size);
> +void kasan_unpoison_slab(const void *ptr);
> +
> void kasan_poison_slab(struct page *page);
> void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> void kasan_poison_object_data(struct kmem_cache *cache, void *object);
> @@ -98,11 +99,6 @@ struct kasan_cache {
> int free_meta_offset;
> };
>
> -size_t __ksize(const void *);
> -static inline void kasan_unpoison_slab(const void *ptr)
> -{
> - kasan_unpoison_memory(ptr, __ksize(ptr));
> -}
> size_t kasan_metadata_size(struct kmem_cache *cache);
>
> bool kasan_save_enable_multi_shot(void);
> @@ -110,8 +106,6 @@ void kasan_restore_multi_shot(bool enabled);
>
> #else /* CONFIG_KASAN */
>
> -static inline void kasan_unpoison_memory(const void *address, size_t size) {}
> -
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>
> @@ -119,6 +113,9 @@ static inline void kasan_cache_create(struct kmem_cache *cache,
> unsigned int *size,
> slab_flags_t *flags) {}
>
> +static inline void kasan_unpoison_data(const void *address, size_t size) { }
> +static inline void kasan_unpoison_slab(const void *ptr) { }
> +
> static inline void kasan_poison_slab(struct page *page) {}
> static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> void *object) {}
> @@ -158,7 +155,6 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> return false;
> }
>
> -static inline void kasan_unpoison_slab(const void *ptr) { }
> static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
>
> #endif /* CONFIG_KASAN */
> diff --git a/kernel/fork.c b/kernel/fork.c
> index b41fecca59d7..858d78eee6ec 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -225,7 +225,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> continue;
>
> /* Mark stack accessible for KASAN. */
> - kasan_unpoison_memory(s->addr, THREAD_SIZE);
> + kasan_unpoison_data(s->addr, THREAD_SIZE);
>
> /* Clear stale pointers from reused stack. */
> memset(s->addr, 0, THREAD_SIZE);
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 9008fc6b0810..1a5e6c279a72 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -184,6 +184,16 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
> }
>
> +void kasan_unpoison_data(const void *address, size_t size)
> +{
> + kasan_unpoison_memory(address, size);
> +}
> +
> +void kasan_unpoison_slab(const void *ptr)
> +{
> + kasan_unpoison_memory(ptr, __ksize(ptr));
> +}
> +
> void kasan_poison_slab(struct page *page)
> {
> unsigned long i;
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index f03161f3da19..915142da6b57 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -24,12 +24,6 @@ void __init kasan_init_tags(void)
> pr_info("KernelAddressSanitizer initialized\n");
> }
>
> -void kasan_unpoison_memory(const void *address, size_t size)
> -{
> - set_mem_tag_range(reset_tag(address),
> - round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
> -}
> -
> void kasan_set_free_info(struct kmem_cache *cache,
> void *object, u8 tag)
> {
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 8d84ae6f58f1..da08b2533d73 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -280,6 +280,12 @@ static inline void kasan_poison_memory(const void *address, size_t size, u8 valu
> round_up(size, KASAN_GRANULE_SIZE), value);
> }
>
> +static inline void kasan_unpoison_memory(const void *address, size_t size)
> +{
> + set_mem_tag_range(reset_tag(address),
> + round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
> +}
> +
> static inline bool check_invalid_free(void *addr)
> {
> u8 ptr_tag = get_tag(addr);
> @@ -292,6 +298,7 @@ static inline bool check_invalid_free(void *addr)
> #else /* CONFIG_KASAN_HW_TAGS */
>
> void kasan_poison_memory(const void *address, size_t size, u8 value);
> +void kasan_unpoison_memory(const void *address, size_t size);
> bool check_invalid_free(void *addr);
>
> #endif /* CONFIG_KASAN_HW_TAGS */
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 53d0f8bb57ea..f1b0c4a22f08 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1176,7 +1176,7 @@ size_t ksize(const void *objp)
> * We assume that ksize callers could use whole allocated area,
> * so we need to unpoison this area.
> */
> - kasan_unpoison_memory(objp, size);
> + kasan_unpoison_data(objp, size);
> return size;
> }
> EXPORT_SYMBOL(ksize);
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 7:38:55 AM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Add an arm64 helper called cpu_supports_mte() that exposes information
> about whether the CPU supports memory tagging and that can be called
> during early boot (unlike system_supports_mte()).
>
> Use that helper to implement a generic cpu_supports_tags() helper, that
> will be used by hardware tag-based KASAN.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/Ib4b56a42c57c6293df29a0cdfee334c3ca7bdab4

Reviewed-by: Dmitry Vyukov <dvy...@google.com>
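
Presumably the consumer (not shown in this patch) is an early gate in
kasan_init_tags(), along the lines of:

void __init kasan_init_tags(void)
{
	/* hypothetical: skip MTE setup on CPUs without tag support */
	if (!cpu_supports_tags())
		return;

	init_tags(KASAN_TAG_MAX);
}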

> ---
> arch/arm64/include/asm/memory.h | 1 +
> arch/arm64/include/asm/mte-kasan.h | 6 ++++++
> arch/arm64/kernel/mte.c | 20 ++++++++++++++++++++
> mm/kasan/kasan.h | 4 ++++
> 4 files changed, 31 insertions(+)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index b5d6b824c21c..f496abfcf7f5 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -232,6 +232,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> }
>
> #ifdef CONFIG_KASAN_HW_TAGS
> +#define arch_cpu_supports_tags() cpu_supports_mte()
> #define arch_init_tags(max_tag) mte_init_tags(max_tag)
> #define arch_get_random_tag() mte_get_random_tag()
> #define arch_get_mem_tag(addr) mte_get_mem_tag(addr)
> diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> index a4c61b926d4a..4c3f2c6b4fe6 100644
> --- a/arch/arm64/include/asm/mte-kasan.h
> +++ b/arch/arm64/include/asm/mte-kasan.h
> @@ -9,6 +9,7 @@
>
> #ifndef __ASSEMBLY__
>
> +#include <linux/init.h>
> #include <linux/types.h>
>
> /*
> @@ -30,6 +31,7 @@ u8 mte_get_random_tag(void);
> void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);
>
> void mte_init_tags(u64 max_tag);
> +bool __init cpu_supports_mte(void);
>
> #else /* CONFIG_ARM64_MTE */
>
> @@ -54,6 +56,10 @@ static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> static inline void mte_init_tags(u64 max_tag)
> {
> }
> +static inline bool cpu_supports_mte(void)
> +{
> + return false;
> +}
>
> #endif /* CONFIG_ARM64_MTE */
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index ca8206b7f9a6..8fcd17408515 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -134,6 +134,26 @@ void mte_init_tags(u64 max_tag)
> gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
> }
>
> +/*
> + * This function can be used during early boot to determine whether the CPU
> + * supports MTE. The alternative that must be used after boot is completed is
> + * system_supports_mte(), but it only works after the cpufeature framework
> + * learns about MTE.
> + */
> +bool __init cpu_supports_mte(void)
> +{
> + u64 pfr1;
> + u32 val;
> +
> + if (!IS_ENABLED(CONFIG_ARM64_MTE))
> + return false;
> +
> + pfr1 = read_cpuid(ID_AA64PFR1_EL1);
> + val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_MTE_SHIFT);
> +
> + return val >= ID_AA64PFR1_MTE;
> +}
> +
> static void update_sctlr_el1_tcf0(u64 tcf0)
> {
> /* ISB required for the kernel uaccess routines */
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index da08b2533d73..f7ae0c23f023 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -240,6 +240,9 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> #define set_tag(addr, tag) ((void *)arch_kasan_set_tag((addr), (tag)))
> #define get_tag(addr) arch_kasan_get_tag(addr)
>
> +#ifndef arch_cpu_supports_tags
> +#define arch_cpu_supports_tags() (false)
> +#endif
> #ifndef arch_init_tags
> #define arch_init_tags(max_tag)
> #endif
> @@ -253,6 +256,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> #define arch_set_mem_tag_range(addr, size, tag) ((void *)(addr))
> #endif
>
> +#define cpu_supports_tags() arch_cpu_supports_tags()
> #define init_tags(max_tag) arch_init_tags(max_tag)
> #define get_random_tag() arch_get_random_tag()
> #define get_mem_tag(addr) arch_get_mem_tag(addr)
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 8:27:30 AM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
>
> TODO: no meaningful description here yet, please see the cover letter
> for this RFC series.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
> ---
> mm/kasan/common.c | 92 +++++++++++++-----------
> mm/kasan/generic.c | 5 ++
> mm/kasan/hw_tags.c | 169 ++++++++++++++++++++++++++++++++++++++++++++-
> mm/kasan/kasan.h | 9 +++
> mm/kasan/report.c | 14 +++-
> mm/kasan/sw_tags.c | 5 ++
> 6 files changed, 250 insertions(+), 44 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 1a5e6c279a72..cc129ef62ab1 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -129,35 +129,37 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> unsigned int redzone_size;
> int redzone_adjust;
>
> - /* Add alloc meta. */
> - cache->kasan_info.alloc_meta_offset = *size;
> - *size += sizeof(struct kasan_alloc_meta);
> -
> - /* Add free meta. */
> - if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> - (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> - cache->object_size < sizeof(struct kasan_free_meta))) {
> - cache->kasan_info.free_meta_offset = *size;
> - *size += sizeof(struct kasan_free_meta);
> - }
> -
> - redzone_size = optimal_redzone(cache->object_size);
> - redzone_adjust = redzone_size - (*size - cache->object_size);
> - if (redzone_adjust > 0)
> - *size += redzone_adjust;
> -
> - *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
> - max(*size, cache->object_size + redzone_size));
> + if (static_branch_unlikely(&kasan_stack)) {

Initially I thought kasan_stack is related to stack instrumentation.
And then wondered why we check it during slab creation.
I suggest giving it a slightly longer and more descriptive name.

... reading code further, it also disables quarantine, right?
Something to mention somewhere.
> struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
> @@ -270,8 +274,10 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> if (!(cache->flags & SLAB_KASAN))
> return (void *)object;
>
> - alloc_meta = kasan_get_alloc_meta(cache, object);
> - __memset(alloc_meta, 0, sizeof(*alloc_meta));
> + if (static_branch_unlikely(&kasan_stack)) {

Interestingly, now SLAB_KASAN is always set when kasan_stack is not
enabled. So it seems to me we can move the SLAB_KASAN check into this
unlikely branch now.
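
I.e., something like (a sketch):

if (static_branch_unlikely(&kasan_stack)) {
	if (unlikely(!(cache->flags & SLAB_KASAN)))
		return (void *)object;

	alloc_meta = kasan_get_alloc_meta(cache, object);
	__memset(alloc_meta, 0, sizeof(*alloc_meta));
}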

> + alloc_meta = kasan_get_alloc_meta(cache, object);
> + __memset(alloc_meta, 0, sizeof(*alloc_meta));
> + }
>
> if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> object = set_tag(object, assign_tag(cache, object, true, false));
> @@ -308,15 +314,19 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
> kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);
>
> - if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> - unlikely(!(cache->flags & SLAB_KASAN)))
> - return false;
> + if (static_branch_unlikely(&kasan_stack)) {
> + if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> + unlikely(!(cache->flags & SLAB_KASAN)))
> + return false;
> +
> + kasan_set_free_info(cache, object, tag);
>
> - kasan_set_free_info(cache, object, tag);
> + quarantine_put(cache, object);
>
> - quarantine_put(cache, object);
> + return IS_ENABLED(CONFIG_KASAN_GENERIC);
> + }
>
> - return IS_ENABLED(CONFIG_KASAN_GENERIC);
> + return false;
> }
>
> bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> @@ -355,7 +365,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
> KASAN_KMALLOC_REDZONE);
>
> - if (cache->flags & SLAB_KASAN)
> + if (static_branch_unlikely(&kasan_stack) && (cache->flags & SLAB_KASAN))
> set_alloc_info(cache, (void *)object, flags);
>
> return set_tag(object, tag);
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index d259e4c3aefd..20a1e753e0c5 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -33,6 +33,11 @@
> #include "kasan.h"
> #include "../slab.h"
>
> +/* See the comments in hw_tags.c */
> +DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
> +EXPORT_SYMBOL(kasan_enabled);
> +DEFINE_STATIC_KEY_TRUE_RO(kasan_stack);
> +
> /*
> * All functions below always inlined so compiler could
> * perform better optimizations in each of __asan_loadX/__assn_storeX
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 915142da6b57..bccd781011ad 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -8,6 +8,8 @@
>
> #define pr_fmt(fmt) "kasan: " fmt
>
> +#include <linux/init.h>
> +#include <linux/jump_label.h>
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> #include <linux/memory.h>
> +
> + return 0;
> +}
> +early_param("kasan.mode", early_kasan_mode);
> +
> +/* kasan.stack=off/on */
> +static int __init early_kasan_stack(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "off"))
> + kasan_arg_stack = KASAN_ARG_STACK_OFF;
> + else if (!strcmp(arg, "on"))
> + kasan_arg_stack = KASAN_ARG_STACK_ON;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.stack", early_kasan_stack);
> +
> +/* kasan.trap=sync/async */
> +static int __init early_kasan_trap(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "ASYNC"))
> + kasan_arg_trap = KASAN_ARG_TRAP_ASYNC;
> + else if (!strcmp(arg, "sync"))
> + kasan_arg_trap = KASAN_ARG_TRAP_SYNC;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.trap", early_kasan_trap);
> +
> +/* kasan.fault=report/panic */
> +static int __init early_kasan_fault(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "report"))
> + kasan_arg_fault = KASAN_ARG_FAULT_REPORT;
> + else if (!strcmp(arg, "panic"))
> + kasan_arg_fault = KASAN_ARG_FAULT_PANIC;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> pr_info("KernelAddressSanitizer initialized\n");
> }
>
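
Taken together, these handlers accept a boot command line like the
following (illustrative; values taken from the comments above):

  kasan.stack=off kasan.trap=async kasan.fault=panic
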
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index f7ae0c23f023..00b47bc753aa 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -2,9 +2,18 @@
> #ifndef __MM_KASAN_KASAN_H
> #define __MM_KASAN_KASAN_H
>
> +#include <linux/jump_label.h>
> #include <linux/kasan.h>
> #include <linux/stackdepot.h>
>
> +#ifdef CONFIG_KASAN_HW_TAGS
> +DECLARE_STATIC_KEY_FALSE(kasan_stack);
> +#else
> +DECLARE_STATIC_KEY_TRUE(kasan_stack);
> +#endif

kasan_stack and kasan_enabled make sense and changed only in hw_tags mode.
It would be cleaner (and faster for other modes) to abstract static keys as:

#ifdef CONFIG_KASAN_HW_TAGS
#include <linux/jump_label.h>
DECLARE_STATIC_KEY_FALSE(kasan_stack);
static inline bool kasan_stack_collection_enabled()
{
return static_branch_unlikely(&kasan_stack);
}
#else
static inline bool kasan_stack_collection_enabled() { return true; }
#endif

This way we don't need to include and define static keys for other modes.
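
Call sites would then read, e.g. (sketch):

if (kasan_stack_collection_enabled() && (cache->flags & SLAB_KASAN))
	set_alloc_info(cache, (void *)object, flags);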

> +extern bool kasan_panic __ro_after_init;
> +
> #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> #define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> #else
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index dee5350b459c..426dd1962d3c 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -97,6 +97,10 @@ static void end_report(unsigned long *flags)
> panic_on_warn = 0;
> panic("panic_on_warn set ...\n");
> }
> +#ifdef CONFIG_KASAN_HW_TAGS
> + if (kasan_panic)
> + panic("kasan.fault=panic set ...\n");
> +#endif
> kasan_enable_current();
> }
>
> @@ -159,8 +163,8 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
> (void *)(object_addr + cache->object_size));
> }
>
> -static void describe_object(struct kmem_cache *cache, void *object,
> - const void *addr, u8 tag)
> +static void describe_object_stacks(struct kmem_cache *cache, void *object,
> + const void *addr, u8 tag)
> {
> struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);
>
> @@ -188,7 +192,13 @@ static void describe_object(struct kmem_cache *cache, void *object,
> }
> #endif
> }
> +}
>
> +static void describe_object(struct kmem_cache *cache, void *object,
> + const void *addr, u8 tag)
> +{
> + if (static_branch_unlikely(&kasan_stack))
> + describe_object_stacks(cache, object, addr, tag);
> describe_object_addr(cache, object, addr);
> }
>
> diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
> index 4db41f274702..b6d185adf2c5 100644
> --- a/mm/kasan/sw_tags.c
> +++ b/mm/kasan/sw_tags.c

Dmitry Vyukov

unread,
Oct 28, 2020, 12:47:52 PM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Declare the kasan_enabled static key in include/linux/kasan.h and in
> include/linux/mm.h and check it in all kasan annotations. This allows to
> avoid any slowdown caused by function calls when kasan_enabled is
> disabled.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I2589451d3c96c97abbcbf714baabe6161c6f153e
> ---
> include/linux/kasan.h | 210 ++++++++++++++++++++++++++++++++----------
> include/linux/mm.h | 27 ++++--
> mm/kasan/common.c | 60 ++++++------
> 3 files changed, 211 insertions(+), 86 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 2b9023224474..8654275aa62e 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -2,6 +2,7 @@
> #ifndef _LINUX_KASAN_H
> #define _LINUX_KASAN_H
>
> +#include <linux/jump_label.h>
> #include <linux/types.h>
>
> struct kmem_cache;
> @@ -66,40 +67,154 @@ static inline void kasan_disable_current(void) {}
>
> #ifdef CONFIG_KASAN
>
> -void kasan_alloc_pages(struct page *page, unsigned int order);
> -void kasan_free_pages(struct page *page, unsigned int order);
> +struct kasan_cache {
> + int alloc_meta_offset;
> + int free_meta_offset;
> +};
> +
> +#ifdef CONFIG_KASAN_HW_TAGS
> +DECLARE_STATIC_KEY_FALSE(kasan_enabled);
> +#else
> +DECLARE_STATIC_KEY_TRUE(kasan_enabled);
> +#endif
>
> -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> - slab_flags_t *flags);
> +void __kasan_alloc_pages(struct page *page, unsigned int order);
> +static inline void kasan_alloc_pages(struct page *page, unsigned int order)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_alloc_pages(page, order);

The patch looks fine per se, but I think with the suggestion in the
previous patch, this should be:

if (kasan_is_enabled())
__kasan_alloc_pages(page, order);

No overhead for other modes and less logic duplication.

> +}
>
> -void kasan_unpoison_data(const void *address, size_t size);
> -void kasan_unpoison_slab(const void *ptr);
> +void __kasan_free_pages(struct page *page, unsigned int order);
> +static inline void kasan_free_pages(struct page *page, unsigned int order)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_free_pages(page, order);
> +}
>
> -void kasan_poison_slab(struct page *page);
> -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> -void kasan_poison_object_data(struct kmem_cache *cache, void *object);
> -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> - const void *object);
> +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> + slab_flags_t *flags);
> +static inline void kasan_cache_create(struct kmem_cache *cache,
> + unsigned int *size, slab_flags_t *flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_cache_create(cache, size, flags);
> +}
>
> -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> - gfp_t flags);
> -void kasan_kfree_large(void *ptr, unsigned long ip);
> -void kasan_poison_kfree(void *ptr, unsigned long ip);
> -void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> - size_t size, gfp_t flags);
> -void * __must_check kasan_krealloc(const void *object, size_t new_size,
> - gfp_t flags);
> +size_t __kasan_metadata_size(struct kmem_cache *cache);
> +static inline size_t kasan_metadata_size(struct kmem_cache *cache)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_metadata_size(cache);
> + return 0;
> +}
>
> -void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
> - gfp_t flags);
> -bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> +void __kasan_unpoison_data(const void *addr, size_t size);
> +static inline void kasan_unpoison_data(const void *addr, size_t size)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_unpoison_data(addr, size);
> +}
>
> -struct kasan_cache {
> - int alloc_meta_offset;
> - int free_meta_offset;
> -};
> +void __kasan_unpoison_slab(const void *ptr);
> +static inline void kasan_unpoison_slab(const void *ptr)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_unpoison_slab(ptr);
> +}
> +
> +void __kasan_poison_slab(struct page *page);
> +static inline void kasan_poison_slab(struct page *page)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_poison_slab(page);
> +}
> +
> +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> +static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_unpoison_object_data(cache, object);
> +}
> +
> +void __kasan_poison_object_data(struct kmem_cache *cache, void *object);
> +static inline void kasan_poison_object_data(struct kmem_cache *cache, void *object)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_poison_object_data(cache, object);
> +}
> +
> +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> + const void *object);
> +static inline void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> + const void *object)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_init_slab_obj(cache, object);
> + return (void *)object;
> +}
> +
> +bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> +static inline bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_slab_free(s, object, ip);
> + return false;
> +}
>
> -size_t kasan_metadata_size(struct kmem_cache *cache);
> +void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
> + void *object, gfp_t flags);
> +static inline void * __must_check kasan_slab_alloc(struct kmem_cache *s,
> + void *object, gfp_t flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_slab_alloc(s, object, flags);
> + return object;
> +}
> +
> +void * __must_check __kasan_kmalloc(struct kmem_cache *s, const void *object,
> + size_t size, gfp_t flags);
> +static inline void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> + size_t size, gfp_t flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_kmalloc(s, object, size, flags);
> + return (void *)object;
> +}
> +
> +void * __must_check __kasan_kmalloc_large(const void *ptr,
> + size_t size, gfp_t flags);
> +static inline void * __must_check kasan_kmalloc_large(const void *ptr,
> + size_t size, gfp_t flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_kmalloc_large(ptr, size, flags);
> + return (void *)ptr;
> +}
> +
> +void * __must_check __kasan_krealloc(const void *object,
> + size_t new_size, gfp_t flags);
> +static inline void * __must_check kasan_krealloc(const void *object,
> + size_t new_size, gfp_t flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_krealloc(object, new_size, flags);
> + return (void *)object;
> +}
> +
> +void __kasan_poison_kfree(void *ptr, unsigned long ip);
> +static inline void kasan_poison_kfree(void *ptr, unsigned long ip)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_poison_kfree(ptr, ip);
> +}
> +
> +void __kasan_kfree_large(void *ptr, unsigned long ip);
> +static inline void kasan_kfree_large(void *ptr, unsigned long ip)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_kfree_large(ptr, ip);
> +}
>
> bool kasan_save_enable_multi_shot(void);
> void kasan_restore_multi_shot(bool enabled);
> @@ -108,14 +223,12 @@ void kasan_restore_multi_shot(bool enabled);
>
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> -
> static inline void kasan_cache_create(struct kmem_cache *cache,
> unsigned int *size,
> slab_flags_t *flags) {}
> -
> -static inline void kasan_unpoison_data(const void *address, size_t size) { }
> -static inline void kasan_unpoison_slab(const void *ptr) { }
> -
> +static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> +static inline void kasan_unpoison_data(const void *address, size_t size) {}
> +static inline void kasan_unpoison_slab(const void *ptr) {}
> static inline void kasan_poison_slab(struct page *page) {}
> static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> void *object) {}
> @@ -126,36 +239,33 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
> {
> return (void *)object;
> }
> -
> -static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
> +static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> + unsigned long ip)
> {
> - return ptr;
> + return false;
> }
> -static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
> -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> -static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
> - size_t size, gfp_t flags)
> +static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> + gfp_t flags)
> {
> - return (void *)object;
> + return object;
> }
> -static inline void *kasan_krealloc(const void *object, size_t new_size,
> - gfp_t flags)
> +static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
> + size_t size, gfp_t flags)
> {
> return (void *)object;
> }
>
> -static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> - gfp_t flags)
> +static inline void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
> {
> - return object;
> + return (void *)ptr;
> }
> -static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> - unsigned long ip)
> +static inline void *kasan_krealloc(const void *object, size_t new_size,
> + gfp_t flags)
> {
> - return false;
> + return (void *)object;
> }
> -
> -static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> +static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> +static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
>
> #endif /* CONFIG_KASAN */
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a3cac68c737c..701e9d7666d6 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1412,22 +1412,36 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
> #endif /* CONFIG_NUMA_BALANCING */
>
> #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> +
> +#ifdef CONFIG_KASAN_HW_TAGS
> +DECLARE_STATIC_KEY_FALSE(kasan_enabled);
> +#else
> +DECLARE_STATIC_KEY_TRUE(kasan_enabled);
> +#endif
> +
> static inline u8 page_kasan_tag(const struct page *page)
> {
> - return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> + if (static_branch_likely(&kasan_enabled))
> + return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> + return 0xff;
> }
>
> static inline void page_kasan_tag_set(struct page *page, u8 tag)
> {
> - page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> - page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> + if (static_branch_likely(&kasan_enabled)) {
> + page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> + page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> + }
> }
>
> static inline void page_kasan_tag_reset(struct page *page)
> {
> - page_kasan_tag_set(page, 0xff);
> + if (static_branch_likely(&kasan_enabled))
> + page_kasan_tag_set(page, 0xff);
> }
> -#else
> +
> +#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> +
> static inline u8 page_kasan_tag(const struct page *page)
> {
> return 0xff;
> @@ -1435,7 +1449,8 @@ static inline u8 page_kasan_tag(const struct page *page)
>
> static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
> static inline void page_kasan_tag_reset(struct page *page) { }
> -#endif
> +
> +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
>
> static inline struct zone *page_zone(const struct page *page)
> {
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index cc129ef62ab1..c5ec60e1a4d2 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -81,7 +81,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
> }
> #endif /* CONFIG_KASAN_STACK */
>
> -void kasan_alloc_pages(struct page *page, unsigned int order)
> +void __kasan_alloc_pages(struct page *page, unsigned int order)
> {
> u8 tag;
> unsigned long i;
> @@ -95,7 +95,7 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
> kasan_unpoison_memory(page_address(page), PAGE_SIZE << order);
> }
>
> -void kasan_free_pages(struct page *page, unsigned int order)
> +void __kasan_free_pages(struct page *page, unsigned int order)
> {
> if (likely(!PageHighMem(page)))
> kasan_poison_memory(page_address(page),
> @@ -122,8 +122,8 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
> object_size <= (1 << 16) - 1024 ? 1024 : 2048;
> }
>
> -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> - slab_flags_t *flags)
> +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> + slab_flags_t *flags)
> {
> unsigned int orig_size = *size;
> unsigned int redzone_size;
> @@ -165,7 +165,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> *flags |= SLAB_KASAN;
> }
>
> -size_t kasan_metadata_size(struct kmem_cache *cache)
> +size_t __kasan_metadata_size(struct kmem_cache *cache)
> {
> if (static_branch_unlikely(&kasan_stack))
> return (cache->kasan_info.alloc_meta_offset ?
> @@ -188,17 +188,17 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
> }
>
> -void kasan_unpoison_data(const void *address, size_t size)
> +void __kasan_unpoison_data(const void *addr, size_t size)
> {
> - kasan_unpoison_memory(address, size);
> + kasan_unpoison_memory(addr, size);
> }
>
> -void kasan_unpoison_slab(const void *ptr)
> +void __kasan_unpoison_slab(const void *ptr)
> {
> kasan_unpoison_memory(ptr, __ksize(ptr));
> }
>
> -void kasan_poison_slab(struct page *page)
> +void __kasan_poison_slab(struct page *page)
> {
> unsigned long i;
>
> @@ -208,12 +208,12 @@ void kasan_poison_slab(struct page *page)
> KASAN_KMALLOC_REDZONE);
> }
>
> -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> {
> kasan_unpoison_memory(object, cache->object_size);
> }
>
> -void kasan_poison_object_data(struct kmem_cache *cache, void *object)
> +void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> {
> kasan_poison_memory(object,
> round_up(cache->object_size, KASAN_GRANULE_SIZE),
> @@ -266,7 +266,7 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
> #endif
> }
>
> -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> const void *object)
> {
> struct kasan_alloc_meta *alloc_meta;
> @@ -285,7 +285,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> return (void *)object;
> }
>
> -static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> +static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> unsigned long ip, bool quarantine)
> {
> u8 tag;
> @@ -329,9 +329,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> return false;
> }
>
> -bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> +bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> {
> - return __kasan_slab_free(cache, object, ip, true);
> + return ____kasan_slab_free(cache, object, ip, true);
> }
>
> static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> @@ -339,7 +339,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> }
>
> -static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> +static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> size_t size, gfp_t flags, bool keep_tag)
> {
> unsigned long redzone_start;
> @@ -371,20 +371,20 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> return set_tag(object, tag);
> }
>
> -void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
> - gfp_t flags)
> +void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> + void *object, gfp_t flags)
> {
> - return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
> + return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
> }
>
> -void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
> - size_t size, gfp_t flags)
> +void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
> + size_t size, gfp_t flags)
> {
> - return __kasan_kmalloc(cache, object, size, flags, true);
> + return ____kasan_kmalloc(cache, object, size, flags, true);
> }
> -EXPORT_SYMBOL(kasan_kmalloc);
> +EXPORT_SYMBOL(__kasan_kmalloc);
>
> -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> gfp_t flags)
> {
> struct page *page;
> @@ -409,7 +409,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> return (void *)ptr;
> }
>
> -void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> +void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
> {
> struct page *page;
>
> @@ -419,13 +419,13 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> page = virt_to_head_page(object);
>
> if (unlikely(!PageSlab(page)))
> - return kasan_kmalloc_large(object, size, flags);
> + return __kasan_kmalloc_large(object, size, flags);
> else
> - return __kasan_kmalloc(page->slab_cache, object, size,
> + return ____kasan_kmalloc(page->slab_cache, object, size,
> flags, true);
> }
>
> -void kasan_poison_kfree(void *ptr, unsigned long ip)
> +void __kasan_poison_kfree(void *ptr, unsigned long ip)
> {
> struct page *page;
>
> @@ -438,11 +438,11 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
> }
> kasan_poison_memory(ptr, page_size(page), KASAN_FREE_PAGE);
> } else {
> - __kasan_slab_free(page->slab_cache, ptr, ip, false);
> + ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> }
> }
>
> -void kasan_kfree_large(void *ptr, unsigned long ip)
> +void __kasan_kfree_large(void *ptr, unsigned long ip)
> {
> if (ptr != page_address(virt_to_head_page(ptr)))
> kasan_report_invalid_free(ptr, ip);
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 12:55:50 PM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Since kasan_kmalloc() always follows kasan_slab_alloc(), there's no need
> to unpoison the object data again, only to poison the redzone.
>
> This requires changing the kasan annotation for the early SLUB cache to
> kasan_slab_alloc(). Otherwise kasan_kmalloc() doesn't untag the object.
> This doesn't introduce any functional changes, as kmem_cache_node->object_size
> is equal to sizeof(struct kmem_cache_node).
>
> Similarly for kasan_krealloc(): as it's called after ksize(), which has
> already unpoisoned the object, there's no need to do it again.

Have you considered doing this the other way around: make krealloc
call __ksize and unpoison in kasan_krealloc?
This has the advantage of more precise poisoning as ksize will
unpoison the whole underlying object.

But then maybe we will need to move first checks in ksize into __ksize
as we may need them in krealloc as well.
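
A rough sketch of that direction, with the unpoisoning moved into the KASAN
hook (purely an illustration of the suggestion, assuming __ksize() is callable
from mm/kasan/common.c; this is not code from the series):

void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
        struct page *page;

        if (unlikely(object == ZERO_SIZE_PTR))
                return (void *)object;

        /* Unpoison the whole underlying object, as ksize() would have. */
        kasan_unpoison_memory(object, __ksize(object));

        page = virt_to_head_page(object);

        /* Delegate redzone poisoning to the existing kmalloc hooks. */
        if (unlikely(!PageSlab(page)))
                return __kasan_kmalloc_large(object, size, flags);
        else
                return ____kasan_kmalloc(page->slab_cache, object, size,
                                         flags, true);
}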





> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I4083d3b55605f70fef79bca9b90843c4390296f2
> ---
> mm/kasan/common.c | 31 +++++++++++++++++++++----------
> mm/slub.c | 3 +--
> 2 files changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index c5ec60e1a4d2..a581937c2a44 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -360,8 +360,14 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> tag = assign_tag(cache, object, false, keep_tag);
>
> - /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
> - kasan_unpoison_memory(set_tag(object, tag), size);
> + /*
> + * Don't unpoison the object when keeping the tag. Tag is kept for:
> + * 1. krealloc(), and then the memory has already been unpoisoned via ksize();
> + * 2. kmalloc(), and then the memory has already been unpoisoned by kasan_kmalloc().
> + * Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS.
> + */
> + if (!keep_tag)
> + kasan_unpoison_memory(set_tag(object, tag), size);
> kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
> KASAN_KMALLOC_REDZONE);
>
> @@ -384,10 +390,9 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
> }
> EXPORT_SYMBOL(__kasan_kmalloc);
>
> -void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> - gfp_t flags)
> +static void * __must_check ____kasan_kmalloc_large(struct page *page, const void *ptr,
> + size_t size, gfp_t flags, bool realloc)
> {
> - struct page *page;
> unsigned long redzone_start;
> unsigned long redzone_end;
>
> @@ -397,18 +402,24 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> if (unlikely(ptr == NULL))
> return NULL;
>
> - page = virt_to_page(ptr);
> - redzone_start = round_up((unsigned long)(ptr + size),
> - KASAN_GRANULE_SIZE);
> + redzone_start = round_up((unsigned long)(ptr + size), KASAN_GRANULE_SIZE);
> redzone_end = (unsigned long)ptr + page_size(page);
>
> - kasan_unpoison_memory(ptr, size);
> + /* ksize() in __do_krealloc() already unpoisoned the memory. */
> + if (!realloc)
> + kasan_unpoison_memory(ptr, size);
> kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
> KASAN_PAGE_REDZONE);
>
> return (void *)ptr;
> }
>
> +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> + gfp_t flags)
> +{
> + return ____kasan_kmalloc_large(virt_to_page(ptr), ptr, size, flags, false);
> +}
> +
> void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
> {
> struct page *page;
> @@ -419,7 +430,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
> page = virt_to_head_page(object);
>
> if (unlikely(!PageSlab(page)))
> - return __kasan_kmalloc_large(object, size, flags);
> + return ____kasan_kmalloc_large(page, object, size, flags, true);
> else
> return ____kasan_kmalloc(page->slab_cache, object, size,
> flags, true);
> diff --git a/mm/slub.c b/mm/slub.c
> index 1d3f2355df3b..afb035b0bf2d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3535,8 +3535,7 @@ static void early_kmem_cache_node_alloc(int node)
> init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
> init_tracking(kmem_cache_node, n);
> #endif
> - n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
> - GFP_KERNEL);
> + n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
> page->freelist = get_freepointer(kmem_cache_node, n);
> page->inuse = 1;
> page->frozen = 0;
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 12:57:45 PM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <andre...@google.com> wrote:
>
> kasan_poison_kfree() is currently only called for mempool allocations
> that are backed by either kmem_cache_alloc() or kmalloc(). Therefore, the
> page passed to kasan_poison_kfree() is always PageSlab() and there's no
> need to do the check.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/If31f88726745da8744c6bea96fb32584e6c2778c

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> mm/kasan/common.c | 11 +----------
> 1 file changed, 1 insertion(+), 10 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index a581937c2a44..b82dbae0c5d6 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -441,16 +441,7 @@ void __kasan_poison_kfree(void *ptr, unsigned long ip)
> struct page *page;
>
> page = virt_to_head_page(ptr);
> -
> - if (unlikely(!PageSlab(page))) {
> - if (ptr != page_address(page)) {
> - kasan_report_invalid_free(ptr, ip);
> - return;
> - }
> - kasan_poison_memory(ptr, page_size(page), KASAN_FREE_PAGE);
> - } else {
> - ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> - }
> + ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> }
>
> void __kasan_kfree_large(void *ptr, unsigned long ip)
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 12:58:18 PM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Rename kasan_poison_kfree() to kasan_slab_free_mempool(), as it better
> reflects what this annotation does.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I5026f87364e556b506ef1baee725144bb04b8810

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> include/linux/kasan.h | 16 ++++++++--------
> mm/kasan/common.c | 16 ++++++++--------
> mm/mempool.c | 2 +-
> 3 files changed, 17 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 8654275aa62e..2ae92f295f76 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -162,6 +162,13 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned
> return false;
> }
>
> +void __kasan_slab_free_mempool(void *ptr, unsigned long ip);
> +static inline void kasan_slab_free_mempool(void *ptr, unsigned long ip)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_slab_free_mempool(ptr, ip);
> +}
> +
> void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
> void *object, gfp_t flags);
> static inline void * __must_check kasan_slab_alloc(struct kmem_cache *s,
> @@ -202,13 +209,6 @@ static inline void * __must_check kasan_krealloc(const void *object,
> return (void *)object;
> }
>
> -void __kasan_poison_kfree(void *ptr, unsigned long ip);
> -static inline void kasan_poison_kfree(void *ptr, unsigned long ip)
> -{
> - if (static_branch_likely(&kasan_enabled))
> - __kasan_poison_kfree(ptr, ip);
> -}
> -
> void __kasan_kfree_large(void *ptr, unsigned long ip);
> static inline void kasan_kfree_large(void *ptr, unsigned long ip)
> {
> @@ -244,6 +244,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> {
> return false;
> }
> +static inline void kasan_slab_free_mempool(void *ptr, unsigned long ip) {}
> static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> gfp_t flags)
> {
> @@ -264,7 +265,6 @@ static inline void *kasan_krealloc(const void *object, size_t new_size,
> {
> return (void *)object;
> }
> -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
>
> #endif /* CONFIG_KASAN */
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index b82dbae0c5d6..5622b0ec0907 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -334,6 +334,14 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> return ____kasan_slab_free(cache, object, ip, true);
> }
>
> +void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
> +{
> + struct page *page;
> +
> + page = virt_to_head_page(ptr);
> + ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> +}
> +
> static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> {
> kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> @@ -436,14 +444,6 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
> flags, true);
> }
>
> -void __kasan_poison_kfree(void *ptr, unsigned long ip)
> -{
> - struct page *page;
> -
> - page = virt_to_head_page(ptr);
> - ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> -}
> -
> void __kasan_kfree_large(void *ptr, unsigned long ip)
> {
> if (ptr != page_address(virt_to_head_page(ptr)))
> diff --git a/mm/mempool.c b/mm/mempool.c
> index 79bff63ecf27..0e8d877fbbc6 100644
> --- a/mm/mempool.c
> +++ b/mm/mempool.c
> @@ -106,7 +106,7 @@ static inline void poison_element(mempool_t *pool, void *element)
> static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
> {
> if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
> - kasan_poison_kfree(element, _RET_IP_);
> + kasan_slab_free_mempool(element, _RET_IP_);
> if (pool->alloc == mempool_alloc_pages)
> kasan_free_pages(element, (unsigned long)pool->pool_data);
> }
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 1:02:02 PM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <andre...@google.com> wrote:
>
> For the hardware tag-based mode, kasan_poison_memory() already rounds up the
> size. Do the same for the software modes and remove round_up() from the
> common code.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/Ib397128fac6eba874008662b4964d65352db4aa4

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> mm/kasan/common.c | 8 ++------
> mm/kasan/shadow.c | 1 +
> 2 files changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 5622b0ec0907..983383ebe32a 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -215,9 +215,7 @@ void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
>
> void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> {
> - kasan_poison_memory(object,
> - round_up(cache->object_size, KASAN_GRANULE_SIZE),
> - KASAN_KMALLOC_REDZONE);
> + kasan_poison_memory(object, cache->object_size, KASAN_KMALLOC_REDZONE);
> }
>
> /*
> @@ -290,7 +288,6 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> {
> u8 tag;
> void *tagged_object;
> - unsigned long rounded_up_size;
>
> tag = get_tag(object);
> tagged_object = object;
> @@ -311,8 +308,7 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> return true;
> }
>
> - rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
> - kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);
> + kasan_poison_memory(object, cache->object_size, KASAN_KMALLOC_FREE);
>
> if (static_branch_unlikely(&kasan_stack)) {
> if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 616ac64c4a21..ab1d39c566b9 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -82,6 +82,7 @@ void kasan_poison_memory(const void *address, size_t size, u8 value)
> * addresses to this function.
> */
> address = reset_tag(address);
> + size = round_up(size, KASAN_GRANULE_SIZE);
>
> shadow_start = kasan_mem_to_shadow(address);
> shadow_end = kasan_mem_to_shadow(address + size);
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 1:03:33 PM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <andre...@google.com> wrote:
>
> set_tag() already ignores the tag for the generic mode, so just call it
> as is. Add a check for the generic mode to assign_tag(), and simplify its
> call in ____kasan_kmalloc().
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I18905ca78fb4a3d60e1a34a4ca00247272480438

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> mm/kasan/common.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 983383ebe32a..3cd56861eb11 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -235,6 +235,9 @@ void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> static u8 assign_tag(struct kmem_cache *cache, const void *object,
> bool init, bool keep_tag)
> {
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> + return 0xff;
> +
> /*
> * 1. When an object is kmalloc()'ed, two hooks are called:
> * kasan_slab_alloc() and kasan_kmalloc(). We assign the
> @@ -277,8 +280,8 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> __memset(alloc_meta, 0, sizeof(*alloc_meta));
> }
>
> - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> - object = set_tag(object, assign_tag(cache, object, true, false));
> + /* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
> + object = set_tag(object, assign_tag(cache, object, true, false));
>
> return (void *)object;
> }
> @@ -360,9 +363,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> KASAN_GRANULE_SIZE);
> redzone_end = round_up((unsigned long)object + cache->object_size,
> KASAN_GRANULE_SIZE);
> -
> - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> - tag = assign_tag(cache, object, false, keep_tag);
> + tag = assign_tag(cache, object, false, keep_tag);
>
> /*
> * Don't unpoison the object when keeping the tag. Tag is kept for:
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Dmitry Vyukov

unread,
Oct 28, 2020, 1:04:29 PM10/28/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <andre...@google.com> wrote:
>
> Currently it says that the memory gets poisoned by page_alloc code.
> Clarify this by mentioning the specific callback that poisons the
> memory.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/I1334dffb69b87d7986fab88a1a039cc3ea764725

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

> ---
> mm/kasan/common.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 3cd56861eb11..54af79aa8d3f 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -445,5 +445,5 @@ void __kasan_kfree_large(void *ptr, unsigned long ip)
> {
> if (ptr != page_address(virt_to_head_page(ptr)))
> kasan_report_invalid_free(ptr, ip);
> - /* The object will be poisoned by page_alloc. */
> + /* The object will be poisoned by kasan_free_pages(). */
> }
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

Andrey Konovalov

unread,
Oct 29, 2020, 3:57:55 PM10/29/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Tue, Oct 27, 2020 at 1:44 PM Dmitry Vyukov <dvy...@google.com> wrote:
>
CONFIG_KASAN_STACK is different from other KASAN configs. It's always
defined, and its value is what controls whether stack instrumentation
is enabled.
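
Concretely, since the option is always defined (to 0 or 1), code tests its
value rather than its presence; a sketch of the resulting pattern
(illustrative, not a quote from the tree):

/* CONFIG_KASAN_STACK is an int, so #if (not #ifdef) is the right test: */
#if CONFIG_KASAN_STACK
/* stack (un)poisoning hooks, e.g. kasan_unpoison_task_stack() */
#endif

and the Makefile can pass the value straight through to the compiler as
asan-stack=0/1.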

Andrey Konovalov

unread,
Oct 29, 2020, 4:00:51 PM10/29/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Tue, Oct 27, 2020 at 1:49 PM Dmitry Vyukov <dvy...@google.com> wrote:
>
> On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
> >
> > Even though hardware tag-based mode currently doesn't support checking
> > vmalloc allocations, it doesn't use shadow memory and works with
> > VMAP_STACK as is.
> >
> > Signed-off-by: Andrey Konovalov <andre...@google.com>
> > Link: https://linux-review.googlesource.com/id/I3552cbc12321dec82cd7372676e9372a2eb452ac
> > ---
> > arch/Kconfig | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/Kconfig b/arch/Kconfig
> > index af14a567b493..3caf7bcdcf93 100644
> > --- a/arch/Kconfig
> > +++ b/arch/Kconfig
> > @@ -868,7 +868,7 @@ config VMAP_STACK
> > default y
> > bool "Use a virtually-mapped stack"
> > depends on HAVE_ARCH_VMAP_STACK
> > - depends on !KASAN || KASAN_VMALLOC
> > + depends on !(KASAN_GENERIC || KASAN_SW_TAGS) || KASAN_VMALLOC
>
> I find it a bit simpler to interpret:
>
> depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC
>
> due to simpler structure. But maybe it's just me.

This looks better, will fix in the next version, thanks!

Andrey Konovalov

unread,
Oct 29, 2020, 4:08:42 PM10/29/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Wed, Oct 28, 2020 at 11:08 AM Dmitry Vyukov <dvy...@google.com> wrote:
>
> On Thu, Oct 22, 2020 at 3:19 PM 'Andrey Konovalov' via kasan-dev
> <kasa...@googlegroups.com> wrote:
> >
> > Similarly to kasan_init() mark kasan_init_tags() as __init.
> >
> > Signed-off-by: Andrey Konovalov <andre...@google.com>
> > Link: https://linux-review.googlesource.com/id/I8792e22f1ca5a703c5e979969147968a99312558
>
> Reviewed-by: Dmitry Vyukov <dvy...@google.com>
>
> init_tags itself is not __init, but that's added in a different patch.
> I've commented on that patch.

Will add that change to this patch! If we combine the two
patch series, we can move this into the other one later. Thanks!

Andrey Konovalov

unread,
Oct 29, 2020, 4:14:25 PM10/29/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Wed, Oct 28, 2020 at 11:56 AM Dmitry Vyukov <dvy...@google.com> wrote:
>
> On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
> >
> > Tag-based KASAN modes are fully initialized with kasan_init_tags(),
> > while the generic mode only requires kasan_init(). Move the
> > initialization message for tag-based modes into kasan_init_tags().
> >
> > Also fix pr_fmt() usage for KASAN code: generic mode doesn't need it,
>
> Why doesn't it need it? What's the difference with tag modes?

I need to reword the patch descriptions: it's not the mode that
doesn't need it, it's the generic.c file, as it doesn't use any pr_*()
functions.

>
> > tag-based modes should use "kasan:" instead of KBUILD_MODNAME.
>
> With generic KASAN I currently see:
>
> [ 0.571473][ T0] kasan: KernelAddressSanitizer initialized
>
> So KBUILD_MODNAME somehow works. Is there some difference between files?

That message is printed from arch/xxx/mm/kasan_init*.c, which has its own
pr_fmt defined.
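
For reference, the usual pattern is a file-local define placed above the
includes (illustrative of the idiom, not a quote from a specific file):

#define pr_fmt(fmt) "kasan: " fmt

which makes every pr_info()/pr_err() in that file prefix its output with
"kasan: " instead of the default.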

>
> > Signed-off-by: Andrey Konovalov <andre...@google.com>
> > Link: https://linux-review.googlesource.com/id/Idfd1e50625ffdf42dfc3dbf7455b11bd200a0a49
> > ---
> > arch/arm64/mm/kasan_init.c | 3 +++
> > mm/kasan/generic.c | 2 --
> > mm/kasan/hw_tags.c | 4 ++++
> > mm/kasan/sw_tags.c | 4 +++-
> > 4 files changed, 10 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > index b6b9d55bb72e..8f17fa834b62 100644
> > --- a/arch/arm64/mm/kasan_init.c
> > +++ b/arch/arm64/mm/kasan_init.c
> > @@ -290,5 +290,8 @@ void __init kasan_init(void)
> > {
> > kasan_init_shadow();
> > kasan_init_depth();
> > +#if defined(CONFIG_KASAN_GENERIC)
> > + /* CONFIG_KASAN_SW/HW_TAGS also requires kasan_init_tags(). */
>
> A slightly cleaner way may be to introduce kasan_init_early() and
> kasan_init_late(). The latter would do the tag init and always print the
> message.

It appears we'll also need kasan_init_even_later() for some
MTE-related stuff. I'll try to figure out some sane naming scheme here
and include it in the next version.

Marco Elver

unread,
Oct 30, 2020, 10:45:38 AM10/30/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
On Thu, 22 Oct 2020 at 15:19, Andrey Konovalov <andre...@google.com> wrote:
>
> TODO: no meaningful description here yet, please see the cover letter
> for this RFC series.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
> ---
> mm/kasan/common.c | 92 +++++++++++++-----------
> mm/kasan/generic.c | 5 ++
> mm/kasan/hw_tags.c | 169 ++++++++++++++++++++++++++++++++++++++++++++-
> mm/kasan/kasan.h | 9 +++
> mm/kasan/report.c | 14 +++-
> mm/kasan/sw_tags.c | 5 ++
> 6 files changed, 250 insertions(+), 44 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 1a5e6c279a72..cc129ef62ab1 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -129,35 +129,37 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> unsigned int redzone_size;
> int redzone_adjust;
>
> - /* Add alloc meta. */
> - cache->kasan_info.alloc_meta_offset = *size;
> - *size += sizeof(struct kasan_alloc_meta);
> -
> - /* Add free meta. */
> - if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> - (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> - cache->object_size < sizeof(struct kasan_free_meta))) {
> - cache->kasan_info.free_meta_offset = *size;
> - *size += sizeof(struct kasan_free_meta);
> - }
> -
> - redzone_size = optimal_redzone(cache->object_size);
> - redzone_adjust = redzone_size - (*size - cache->object_size);
> - if (redzone_adjust > 0)
> - *size += redzone_adjust;
> -
> - *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
> - max(*size, cache->object_size + redzone_size));
> + if (static_branch_unlikely(&kasan_stack)) {

I just looked at this file in your Github repo, and noticed that this
could just be

if (!static_branch_unlikely(&kasan_stack))
return;

since the if-block ends at the function. That might hopefully make the
diff a bit smaller.

Thanks,
-- Marco

Andrey Konovalov

unread,
Oct 30, 2020, 11:48:36 AM10/30/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
Not really. Will simplify this in the next version and give cleaner names.

Andrey Konovalov

unread,
Oct 30, 2020, 12:07:26 PM10/30/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
Actually I think I'll keep both for the next version, but rename
get_random_tag() into hw_get_random_tag() along with other hw-specific
calls. The idea is to have hw_*() calls for things that are
implemented by the hardware for HW_TAGS, and then define random_tag()
based on that for HW_TAGS and based on a software implementation for
SW_TAGS.
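
A small sketch of that layering (hw_get_random_tag() is the proposed name;
this only illustrates the split, it's not final code):

#ifdef CONFIG_KASAN_HW_TAGS
/* Tag generation is backed by the hardware, e.g. MTE's IRG instruction. */
#define random_tag()    hw_get_random_tag()
#else
/* SW_TAGS: tag generation implemented in software, e.g. pseudo-randomly. */
u8 random_tag(void);
#endif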

Andrey Konovalov

unread,
Oct 30, 2020, 12:19:33 PM10/30/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
arch_kasan_reset_tag() (->__tag_reset() -> __untagged_addr())
preserves the type of the argument, so the cast is not needed.
Will remove it.

Andrey Konovalov

unread,
Oct 30, 2020, 12:35:08 PM10/30/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Wed, Oct 28, 2020 at 12:36 PM Dmitry Vyukov <dvy...@google.com> wrote:
>
> On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <andre...@google.com> wrote:
> >
> > Currently kasan_unpoison_memory() is used both as an external annotation
> > and as an internal memory unpoisoning helper. Rename the external
> > annotation to kasan_unpoison_data() and inline the internal helper for the
> > hardware tag-based mode to avoid unneeded function calls.
> >
> > There's also the external annotation kasan_unpoison_slab(), which is
> > currently defined as static inline and uses kasan_unpoison_memory(). With
> > this change it's turned into a function call. Overall, this results in the
> > same number of calls for the hardware tag-based mode, as
> > kasan_unpoison_memory() is now inlined.
>
> Can't we leave kasan_unpoison_slab as is? Or are there other reasons
> to uninline it?

Just to have cleaner kasan.h callback definitions.

> It seems that uninlining it is orthogonal to the rest of this patch.

I can split it out into a separate patch if you think this makes sense?

Dmitry Vyukov

unread,
Oct 30, 2020, 1:46:47 PM10/30/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
I don't have a strong opinion either way.

Andrey Konovalov

unread,
Oct 30, 2020, 3:30:23 PM10/30/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
Will do.

> ... reading the code further, it also disables quarantine, right?
> Something to mention somewhere.

Quarantine is not supported for anything but generic KASAN. Maybe it
makes sense to put this into the documentation...
Yes, will fix. I'll also include a complete rework of SLAB_KASAN in
the next version.
Sounds good, will do.

Andrey Konovalov

unread,
Oct 30, 2020, 3:30:46 PM10/30/20
to Marco Elver, Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux Memory Management List, LKML
Will do, thanks!

Andrey Konovalov

unread,
Oct 30, 2020, 3:47:56 PM10/30/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
Will do, thanks!

Andrey Konovalov

unread,
Nov 2, 2020, 10:17:16 AM11/2/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Wed, Oct 28, 2020 at 5:55 PM Dmitry Vyukov <dvy...@google.com> wrote:
>
> On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <andre...@google.com> wrote:
> >
> > Since kasan_kmalloc() always follows kasan_slab_alloc(), there's no need
> > to unpoison the object data again, only to poison the redzone.
> >
> > This requires changing the kasan annotation for the early SLUB cache to
> > kasan_slab_alloc(). Otherwise kasan_kmalloc() doesn't untag the object.
> > This doesn't introduce any functional changes, as kmem_cache_node->object_size
> > is equal to sizeof(struct kmem_cache_node).
> >
> > Similarly for kasan_krealloc(): as it's called after ksize(), which has
> > already unpoisoned the object, there's no need to do it again.
>
> Have you considered doing this the other way around: make krealloc
> call __ksize and unpoison in kasan_krealloc?
> This has the advantage of more precise poisoning as ksize will
> unpoison the whole underlying object.
>
> But then maybe we will need to move first checks in ksize into __ksize
> as we may need them in krealloc as well.

This might be a good idea. I won't implement this for the next
version, but will look into this after that. Thanks!

Andrey Konovalov

unread,
Nov 3, 2020, 10:34:06 AM11/3/20
to Dmitry Vyukov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
Actually, it looks like some arches already have
kasan_init_early/late() along with kasan_init(). I'd say we'd better
keep those for the generic KASAN mode, and kasan_init_tags() for the
tag-based modes.

Dmitry Vyukov

unread,
Nov 16, 2020, 5:59:41 AM11/16/20
to Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
Not sure why we did this instead of the following, but okay.

config KASAN_STACK
- int
- default 1 if KASAN_STACK_ENABLE || CC_IS_GCC
- default 0
+ bool
+ default y if KASAN_STACK_ENABLE || CC_IS_GCC
+ default n

Marco Elver

unread,
Nov 16, 2020, 6:50:13 AM11/16/20
to Dmitry Vyukov, Andrey Konovalov, Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
I wondered the same, but then looking at scripts/Makefile.kasan I
think it's because we directly pass it to the compiler:
...
$(call cc-param,asan-stack=$(CONFIG_KASAN_STACK)) \
...

Catalin Marinas

unread,
Nov 16, 2020, 7:16:12 AM11/16/20
to Marco Elver, Dmitry Vyukov, Andrey Konovalov, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
Try this instead:

$(call cc-param,asan-stack=$(if $(CONFIG_KASAN_STACK),1,0)) \

--
Catalin

Dmitry Vyukov

unread,
Nov 16, 2020, 7:19:20 AM11/16/20
to Catalin Marinas, Marco Elver, Andrey Konovalov, Will Deacon, Vincenzo Frascino, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
We could have just one config instead of two as well.
For GCC we could have no prompt and a default value of y, and for Clang a
prompt and a default value of n. I think that should do what we need.

Vincenzo Frascino

unread,
Nov 16, 2020, 7:42:17 AM11/16/20
to Dmitry Vyukov, Catalin Marinas, Marco Elver, Andrey Konovalov, Will Deacon, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
I agree with Catalin's proposal since it should simplify things.

Nit: 'default n' is already the default, hence I do not think it needs to be
specified explicitly.

--
Regards,
Vincenzo

Andrey Konovalov

unread,
Nov 16, 2020, 8:50:59 AM11/16/20
to Vincenzo Frascino, Dmitry Vyukov, Marco Elver, Catalin Marinas, Will Deacon, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On Mon, Nov 16, 2020 at 1:42 PM Vincenzo Frascino
<vincenzo...@arm.com> wrote:
>
> >>>>
> >>>> Not sure why we did this instead of the following, but okay.
> >>>>
> >>>> config KASAN_STACK
> >>>> - int
> >>>> - default 1 if KASAN_STACK_ENABLE || CC_IS_GCC
> >>>> - default 0
> >>>> + bool
> >>>> + default y if KASAN_STACK_ENABLE || CC_IS_GCC
> >>>> + default n
> >>>
> >>> I wondered the same, but then looking at scripts/Makefile.kasan I
> >>> think it's because we directly pass it to the compiler:
> >>> ...
> >>> $(call cc-param,asan-stack=$(CONFIG_KASAN_STACK)) \
> >>> ...
> >>
> >> Try this instead:
> >>
> >> $(call cc-param,asan-stack=$(if $(CONFIG_KASAN_STACK),1,0)) \
> >
> >
> > We could have just 1 config instead of 2 as well.
> > For gcc we could do no prompt and default value y, and for clang --
> > prompt and default value n. I think it should do what we need.
> >
>
> I agree with Catalin's proposal since it should simplify things.
>
> Nit: 'default n' is the default hence I do not think it should be required
> explicitly.

Fixing this sounds like a good idea, but perhaps not as part of this
series, so as not to inflate it even further.

I've filed a bug for this: https://bugzilla.kernel.org/show_bug.cgi?id=210221

Vincenzo Frascino

unread,
Nov 16, 2020, 9:47:14 AM11/16/20
to Andrey Konovalov, Dmitry Vyukov, Marco Elver, Catalin Marinas, Will Deacon, Alexander Potapenko, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne, Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML
On 11/16/20 1:50 PM, Andrey Konovalov wrote:
> Fixing this sounds like a good idea, but perhaps not as a part of this
> series, to not overinflate it even further.
>
> I've filed a bug for this: https://bugzilla.kernel.org/show_bug.cgi?id=210221

Fine by me.

--
Regards,
Vincenzo