[PATCH v2 0/3] kasan: tag-based mode fixes


Andrey Konovalov

Jan 2, 2019, 12:36:14 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Hi Andrew,

This patchset includes an updated "kasan, arm64: use ARCH_SLAB_MINALIGN
instead of manual aligning" patch and fixes for two more issues that
were uncovered while testing with a variety of different config options
enabled.

Thanks!

Andrey Konovalov (3):
kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY
kasan: fix krealloc handling for tag-based mode

arch/arm64/include/asm/kasan.h | 4 ++++
include/linux/kasan.h | 14 +++++---------
include/linux/slab.h | 5 +++--
mm/kasan/common.c | 22 ++++++++++++----------
mm/slab.c | 8 ++++----
mm/slab_common.c | 2 +-
mm/slub.c | 12 +++++++-----
7 files changed, 36 insertions(+), 31 deletions(-)

--
2.20.1.415.g653613c723-goog

Andrey Konovalov

Jan 2, 2019, 12:36:15 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
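
For context, the reason the alignment is needed at all: the tag-based mode
keeps one shadow (tag) byte per KASAN_SHADOW_SCALE_SIZE-byte granule, so two
slab objects must never share a granule. A minimal sketch of the relationship,
assuming the usual arm64 shift value of 4:

	#define KASAN_SHADOW_SCALE_SHIFT	4
	#define KASAN_SHADOW_SCALE_SIZE		(1UL << KASAN_SHADOW_SCALE_SHIFT)	/* 16 bytes */

	/*
	 * One tag byte covers one 16-byte granule, so aligning slab objects to
	 * the granule size has the same effect as the rounding of cache->align
	 * that kasan_cache_create() used to do.
	 */
	#define ARCH_SLAB_MINALIGN		(1ULL << KASAN_SHADOW_SCALE_SHIFT)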

Suggested-by: Vincenzo Frascino <vincenzo...@arm.com>
Signed-off-by: Andrey Konovalov <andre...@google.com>
---
arch/arm64/include/asm/kasan.h | 4 ++++
include/linux/slab.h | 1 +
mm/kasan/common.c | 2 --
3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..ba26150d578d 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -36,6 +36,10 @@
#define KASAN_SHADOW_OFFSET (KASAN_SHADOW_END - (1ULL << \
(64 - KASAN_SHADOW_SCALE_SHIFT)))

+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
void kasan_init(void);
void kasan_copy_shadow(pgd_t *pgdir);
asmlinkage void kasan_early_init(void);
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11b45f7ae405..d87f913ab4e8 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -16,6 +16,7 @@
#include <linux/overflow.h>
#include <linux/types.h>
#include <linux/workqueue.h>
+#include <linux/kasan.h>


/*
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 03d5d1374ca7..44390392d4c9 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -298,8 +298,6 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
return;
}

- cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
-
*flags |= SLAB_KASAN;
}

--
2.20.1.415.g653613c723-goog

Andrey Konovalov

Jan 2, 2019, 12:36:17 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
With CONFIG_HARDENED_USERCOPY enabled, __check_heap_object() compares a
potentially tagged pointer against the untagged address of the page that
this pointer belongs to and then subtracts one from the other, which leads
to unexpected behavior.

Untag the pointer in __check_heap_object() before doing any of these
operations.
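
To make the failure mode concrete, here is a small stand-alone model of the
arithmetic (user-space C, not kernel code; the 0xb4 tag and the addresses are
made up, and the canonical top byte of kernel pointers is simplified to zero):

	#include <stdio.h>
	#include <stdint.h>

	#define TAG_SHIFT	56
	#define TAG_MASK	(0xffULL << TAG_SHIFT)

	int main(void)
	{
		uint64_t page_addr = 0x0000800012a000ULL;	/* untagged page address */
		uint64_t object = page_addr + 0x340;		/* object inside that page */
		uint64_t tagged = object | (0xb4ULL << TAG_SHIFT); /* pointer as callers see it */

		/* What __check_heap_object() effectively computed before this fix: */
		uint64_t bogus_offset = tagged - page_addr;

		/* What it computes once the tag is stripped first, which is the
		 * effect of calling kasan_reset_tag() on the pointer: */
		uint64_t offset = (tagged & ~TAG_MASK) - page_addr;

		printf("bogus offset: %#llx\n", (unsigned long long)bogus_offset);
		printf("real offset:  %#llx\n", (unsigned long long)offset);
		return 0;
	}

The enormous "offset" in the first case is what makes the usercopy bounds
check misfire.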

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/slub.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 36c0befeebd8..1e3d0ec4e200 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3846,6 +3846,8 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
unsigned int offset;
size_t object_size;

+ ptr = kasan_reset_tag(ptr);
+
/* Find object and usable object size. */
s = page->slab_cache;

--
2.20.1.415.g653613c723-goog

Andrey Konovalov

Jan 2, 2019, 12:36:19 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Right now tag-based KASAN can retag the memory that is reallocated via
krealloc and return a differently tagged pointer even if the same slab
object gets reused and no reallocation technically happens.

There are a few issues with this approach. One is that krealloc callers
can't rely on comparing the return value with the passed argument to
check whether reallocation happened. Another is that if a caller knows
that no reallocation happened, it may keep accessing the object through
the old pointer, which leads to false positives. See nf_ct_ext_add() for
an example (a simplified sketch follows below).
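
The caller pattern that breaks is roughly the following. It is loosely
modelled on nf_ct_ext_add(); grow_ext() and fixup_references() are made-up
names used only for illustration, not a verbatim quote of that code:

	static void *grow_ext(struct nf_conn *ct, size_t newlen, gfp_t gfp)
	{
		struct nf_ct_ext *old = ct->ext;
		struct nf_ct_ext *new = krealloc(old, newlen, gfp);

		if (!new)
			return NULL;

		if (new != old) {
			/*
			 * The caller assumes a real reallocation happened and
			 * rewrites references to the old allocation. With
			 * per-krealloc retagging this branch is taken even when
			 * the same slab object comes back, and any code that
			 * kept the old (differently tagged) pointer now trips a
			 * tag mismatch, i.e. a false positive.
			 */
			fixup_references(ct, old, new);
		}
		ct->ext = new;
		return new;
	}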

Fix this by keeping the same tag if the memory doesn't actually get
reallocated during krealloc.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
include/linux/kasan.h | 14 +++++---------
include/linux/slab.h | 4 ++--
mm/kasan/common.c | 20 ++++++++++++--------
mm/slab.c | 8 ++++----
mm/slab_common.c | 2 +-
mm/slub.c | 10 +++++-----
6 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b40ea104dd36..7576fff90923 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -57,9 +57,8 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
void kasan_kfree_large(void *ptr, unsigned long ip);
void kasan_poison_kfree(void *ptr, unsigned long ip);
void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
- size_t size, gfp_t flags);
-void * __must_check kasan_krealloc(const void *object, size_t new_size,
- gfp_t flags);
+ size_t size, gfp_t flags, bool krealloc);
+void kasan_krealloc(const void *object, size_t new_size, gfp_t flags);

void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
gfp_t flags);
@@ -118,15 +117,12 @@ static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
- size_t size, gfp_t flags)
-{
- return (void *)object;
-}
-static inline void *kasan_krealloc(const void *object, size_t new_size,
- gfp_t flags)
+ size_t size, gfp_t flags, bool krealloc)
{
return (void *)object;
}
+static inline void kasan_krealloc(const void *object, size_t new_size,
+ gfp_t flags) {}

static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
gfp_t flags)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index d87f913ab4e8..1cd168758c05 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -445,7 +445,7 @@ static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
{
void *ret = kmem_cache_alloc(s, flags);

- ret = kasan_kmalloc(s, ret, size, flags);
+ ret = kasan_kmalloc(s, ret, size, flags, false);
return ret;
}

@@ -456,7 +456,7 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
{
void *ret = kmem_cache_alloc_node(s, gfpflags, node);

- ret = kasan_kmalloc(s, ret, size, gfpflags);
+ ret = kasan_kmalloc(s, ret, size, gfpflags, false);
return ret;
}
#endif /* CONFIG_TRACING */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 44390392d4c9..b6633ab86160 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -392,7 +392,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
gfp_t flags)
{
- return kasan_kmalloc(cache, object, cache->object_size, flags);
+ return kasan_kmalloc(cache, object, cache->object_size, flags, false);
}

static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
@@ -451,7 +451,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
}

void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
- size_t size, gfp_t flags)
+ size_t size, gfp_t flags, bool krealloc)
{
unsigned long redzone_start;
unsigned long redzone_end;
@@ -468,8 +468,12 @@ void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
redzone_end = round_up((unsigned long)object + cache->object_size,
KASAN_SHADOW_SCALE_SIZE);

- if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
- tag = assign_tag(cache, object, false);
+ if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) {
+ if (krealloc)
+ tag = get_tag(object);
+ else
+ tag = assign_tag(cache, object, false);
+ }

/* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
kasan_unpoison_shadow(set_tag(object, tag), size);
@@ -508,19 +512,19 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
return (void *)ptr;
}

-void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
+void kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
struct page *page;

if (unlikely(object == ZERO_SIZE_PTR))
- return (void *)object;
+ return;

page = virt_to_head_page(object);

if (unlikely(!PageSlab(page)))
- return kasan_kmalloc_large(object, size, flags);
+ kasan_kmalloc_large(object, size, flags);
else
- return kasan_kmalloc(page->slab_cache, object, size, flags);
+ kasan_kmalloc(page->slab_cache, object, size, flags, true);
}

void kasan_poison_kfree(void *ptr, unsigned long ip)
diff --git a/mm/slab.c b/mm/slab.c
index 73fe23e649c9..09b54386cf67 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3604,7 +3604,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)

ret = slab_alloc(cachep, flags, _RET_IP_);

- ret = kasan_kmalloc(cachep, ret, size, flags);
+ ret = kasan_kmalloc(cachep, ret, size, flags, false);
trace_kmalloc(_RET_IP_, ret,
size, cachep->size, flags);
return ret;
@@ -3647,7 +3647,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,

ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);

- ret = kasan_kmalloc(cachep, ret, size, flags);
+ ret = kasan_kmalloc(cachep, ret, size, flags, false);
trace_kmalloc_node(_RET_IP_, ret,
size, cachep->size,
flags, nodeid);
@@ -3668,7 +3668,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
if (unlikely(ZERO_OR_NULL_PTR(cachep)))
return cachep;
ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
- ret = kasan_kmalloc(cachep, ret, size, flags);
+ ret = kasan_kmalloc(cachep, ret, size, flags, false);

return ret;
}
@@ -3706,7 +3706,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
return cachep;
ret = slab_alloc(cachep, flags, caller);

- ret = kasan_kmalloc(cachep, ret, size, flags);
+ ret = kasan_kmalloc(cachep, ret, size, flags, false);
trace_kmalloc(caller, ret,
size, cachep->size, flags);

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 81732d05e74a..b55c58178f83 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1507,7 +1507,7 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
ks = ksize(p);

if (ks >= new_size) {
- p = kasan_krealloc((void *)p, new_size, flags);
+ kasan_krealloc((void *)p, new_size, flags);
return (void *)p;
}

diff --git a/mm/slub.c b/mm/slub.c
index 1e3d0ec4e200..20aa0547acbf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2763,7 +2763,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
{
void *ret = slab_alloc(s, gfpflags, _RET_IP_);
trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
- ret = kasan_kmalloc(s, ret, size, gfpflags);
+ ret = kasan_kmalloc(s, ret, size, gfpflags, false);
return ret;
}
EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2791,7 +2791,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
trace_kmalloc_node(_RET_IP_, ret,
size, s->size, gfpflags, node);

- ret = kasan_kmalloc(s, ret, size, gfpflags);
+ ret = kasan_kmalloc(s, ret, size, gfpflags, false);
return ret;
}
EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -3364,7 +3364,7 @@ static void early_kmem_cache_node_alloc(int node)
init_tracking(kmem_cache_node, n);
#endif
n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
- GFP_KERNEL);
+ GFP_KERNEL, false);
page->freelist = get_freepointer(kmem_cache_node, n);
page->inuse = 1;
page->frozen = 0;
@@ -3779,7 +3779,7 @@ void *__kmalloc(size_t size, gfp_t flags)

trace_kmalloc(_RET_IP_, ret, size, s->size, flags);

- ret = kasan_kmalloc(s, ret, size, flags);
+ ret = kasan_kmalloc(s, ret, size, flags, false);

return ret;
}
@@ -3823,7 +3823,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)

trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);

- ret = kasan_kmalloc(s, ret, size, flags);
+ ret = kasan_kmalloc(s, ret, size, flags, false);

return ret;
}
--
2.20.1.415.g653613c723-goog

Andrew Morton

Jan 2, 2019, 3:14:40 PM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan
On Wed, 2 Jan 2019 18:36:06 +0100 Andrey Konovalov <andre...@google.com> wrote:

> Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
> in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
>
> ...
>
> --- a/arch/arm64/include/asm/kasan.h
> +++ b/arch/arm64/include/asm/kasan.h
> @@ -36,6 +36,10 @@
> #define KASAN_SHADOW_OFFSET (KASAN_SHADOW_END - (1ULL << \
> (64 - KASAN_SHADOW_SCALE_SHIFT)))
>
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
> +#endif
> +
> void kasan_init(void);
> void kasan_copy_shadow(pgd_t *pgdir);
> asmlinkage void kasan_early_init(void);
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 11b45f7ae405..d87f913ab4e8 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -16,6 +16,7 @@
> #include <linux/overflow.h>
> #include <linux/types.h>
> #include <linux/workqueue.h>
> +#include <linux/kasan.h>
>

This still seems inadvisable. Like other architectures, arm defines
ARCH_SLAB_MINALIGN in arch/arm/include/asm/cache.h.
arch/arm64/include/asm/cache.h doesn't define ARCH_SLAB_MINALIGN
afaict.

If arch/arm64/include/asm/cache.h later gets a definition of
ARCH_SLAB_MINALIGN then we again face the risk that different .c files
will see different values of ARCH_SLAB_MINALIGN depending on which
headers they include.

So what to say about this? The architecture's ARCH_SLAB_MINALIGN
should be defined in the architecture's cache.h, end of story. Not in
slab.h, not in kasan.h.


kbuild test robot

Jan 2, 2019, 5:15:08 PM
to Andrey Konovalov, kbuil...@01.org, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Hi Andrey,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on next-20190102]
[cannot apply to v4.20]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Andrey-Konovalov/kasan-tag-based-mode-fixes/20190103-050707
config: x86_64-randconfig-x016-201900 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64

All warnings (new ones prefixed by >>):

mm/kasan/common.c: In function 'kasan_krealloc':
>> mm/kasan/common.c:525:3: warning: ignoring return value of 'kasan_kmalloc_large', declared with attribute warn_unused_result [-Wunused-result]
kasan_kmalloc_large(object, size, flags);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> mm/kasan/common.c:527:3: warning: ignoring return value of 'kasan_kmalloc', declared with attribute warn_unused_result [-Wunused-result]
kasan_kmalloc(page->slab_cache, object, size, flags, true);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

vim +/kasan_kmalloc_large +525 mm/kasan/common.c

514
515 void kasan_krealloc(const void *object, size_t size, gfp_t flags)
516 {
517 struct page *page;
518
519 if (unlikely(object == ZERO_SIZE_PTR))
520 return;
521
522 page = virt_to_head_page(object);
523
524 if (unlikely(!PageSlab(page)))
> 525 kasan_kmalloc_large(object, size, flags);
526 else
> 527 kasan_kmalloc(page->slab_cache, object, size, flags, true);
528 }
529

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation

Will Deacon

Jan 3, 2019, 10:52:32 AM
to Andrew Morton, Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan
Agreed. Also, as far as I can tell, this patch isn't actually a fix (unlike
the other two in this series) so it should be harmless to drop it for now.

Will

Andrey Konovalov

Jan 3, 2019, 1:45:26 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Changes in v3:
- Fixed krealloc tag assigning without changing kasan_kmalloc hook and
added more comments to the assign_tag function while at it.
- Moved ARCH_SLAB_MINALIGN definition to arch/arm64/include/asm/cache.h.

Changes in v2:
- Added "kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY"
patch.
- Added "kasan: fix krealloc handling for tag-based mode" patch.

Andrey Konovalov (3):
kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY
kasan: fix krealloc handling for tag-based mode

arch/arm64/include/asm/cache.h | 6 ++++
mm/kasan/common.c | 65 ++++++++++++++++++++++------------
mm/slub.c | 2 ++
3 files changed, 51 insertions(+), 22 deletions(-)

--
2.20.1.415.g653613c723-goog

Andrey Konovalov

Jan 3, 2019, 1:45:28 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.

Suggested-by: Vincenzo Frascino <vincenzo...@arm.com>
Signed-off-by: Andrey Konovalov <andre...@google.com>
---
arch/arm64/include/asm/cache.h | 6 ++++++
mm/kasan/common.c | 2 --
2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index 13dd42c3ad4e..eb43e09c1980 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -58,6 +58,12 @@
*/
#define ARCH_DMA_MINALIGN (128)

+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#else
+#define ARCH_SLAB_MINALIGN __alignof__(unsigned long long)
+#endif
+
#ifndef __ASSEMBLY__

#include <linux/bitops.h>
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 03d5d1374ca7..44390392d4c9 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c

Andrey Konovalov

Jan 3, 2019, 1:45:29 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
With CONFIG_HARDENED_USERCOPY enabled, __check_heap_object() compares a
potentially tagged pointer against the untagged address of the page that
this pointer belongs to and then subtracts one from the other, which leads
to unexpected behavior.

Untag the pointer in __check_heap_object() before doing any of these
operations.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/slub.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 36c0befeebd8..1e3d0ec4e200 100644
--- a/mm/slub.c
+++ b/mm/slub.c

Andrey Konovalov

Jan 3, 2019, 1:45:31 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Right now tag-based KASAN can retag the memory that is reallocated via
krealloc and return a differently tagged pointer even if the same slab
object gets reused and no reallocation technically happens.

There are a few issues with this approach. One is that krealloc callers
can't rely on comparing the return value with the passed argument to
check whether reallocation happened. Another is that if a caller knows
that no reallocation happened, it may keep accessing the object through
the old pointer, which leads to false positives. See nf_ct_ext_add()
for an example.

Fix this by keeping the same tag if the memory doesn't actually get
reallocated during krealloc.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kasan/common.c | 63 ++++++++++++++++++++++++++++++++---------------
1 file changed, 43 insertions(+), 20 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 44390392d4c9..73c9cbfdedf4 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -347,28 +347,43 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
}

/*
- * Since it's desirable to only call object contructors once during slab
- * allocation, we preassign tags to all such objects. Also preassign tags for
- * SLAB_TYPESAFE_BY_RCU slabs to avoid use-after-free reports.
- * For SLAB allocator we can't preassign tags randomly since the freelist is
- * stored as an array of indexes instead of a linked list. Assign tags based
- * on objects indexes, so that objects that are next to each other get
- * different tags.
- * After a tag is assigned, the object always gets allocated with the same tag.
- * The reason is that we can't change tags for objects with constructors on
- * reallocation (even for non-SLAB_TYPESAFE_BY_RCU), because the constructor
- * code can save the pointer to the object somewhere (e.g. in the object
- * itself). Then if we retag it, the old saved pointer will become invalid.
+ * This function assigns a tag to an object considering the following:
+ * 1. A cache might have a constructor, which might save a pointer to a slab
+ * object somewhere (e.g. in the object itself). We preassign a tag for
+ * each object in caches with constructors during slab creation and reuse
+ * the same tag each time a particular object is allocated.
+ * 2. A cache might be SLAB_TYPESAFE_BY_RCU, which means objects can be
+ * accessed after being freed. We preassign tags for objects in these
+ * caches as well.
+ * 3. For SLAB allocator we can't preassign tags randomly since the freelist
+ * is stored as an array of indexes instead of a linked list. Assign tags
+ * based on objects indexes, so that objects that are next to each other
+ * get different tags.
*/
-static u8 assign_tag(struct kmem_cache *cache, const void *object, bool new)
+static u8 assign_tag(struct kmem_cache *cache, const void *object,
+ bool init, bool krealloc)
{
+ /* Reuse the same tag for krealloc'ed objects. */
+ if (krealloc)
+ return get_tag(object);
+
+ /*
+ * If the cache neither has a constructor nor has SLAB_TYPESAFE_BY_RCU
+ * set, assign a tag when the object is being allocated (init == false).
+ */
if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU))
- return new ? KASAN_TAG_KERNEL : random_tag();
+ return init ? KASAN_TAG_KERNEL : random_tag();

+ /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
#ifdef CONFIG_SLAB
+ /* For SLAB assign tags based on the object index in the freelist. */
return (u8)obj_to_index(cache, virt_to_page(object), (void *)object);
#else
- return new ? random_tag() : get_tag(object);
+ /*
+ * For SLUB assign a random tag during slab creation, otherwise reuse
+ * the already assigned tag.
+ */
+ return init ? random_tag() : get_tag(object);
#endif
}

@@ -384,7 +399,8 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
__memset(alloc_info, 0, sizeof(*alloc_info));

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
- object = set_tag(object, assign_tag(cache, object, true));
+ object = set_tag(object,
+ assign_tag(cache, object, true, false));

return (void *)object;
}
@@ -450,8 +466,8 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
return __kasan_slab_free(cache, object, ip, true);
}

-void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
- size_t size, gfp_t flags)
+static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
+ size_t size, gfp_t flags, bool krealloc)
{
unsigned long redzone_start;
unsigned long redzone_end;
@@ -469,7 +485,7 @@ void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_SHADOW_SCALE_SIZE);

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
- tag = assign_tag(cache, object, false);
+ tag = assign_tag(cache, object, false, krealloc);

/* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
kasan_unpoison_shadow(set_tag(object, tag), size);
@@ -481,6 +497,12 @@ void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,

return set_tag(object, tag);
}
+
+void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
+ size_t size, gfp_t flags)
+{
+ return __kasan_kmalloc(cache, object, size, flags, false);
+}
EXPORT_SYMBOL(kasan_kmalloc);

void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
@@ -520,7 +542,8 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
if (unlikely(!PageSlab(page)))
return kasan_kmalloc_large(object, size, flags);
else
- return kasan_kmalloc(page->slab_cache, object, size, flags);
+ return __kasan_kmalloc(page->slab_cache, object, size,
+ flags, true);
}

void kasan_poison_kfree(void *ptr, unsigned long ip)
--
2.20.1.415.g653613c723-goog

Andrey Konovalov

Jan 3, 2019, 1:46:32 PM
to Andrew Morton, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, Vincenzo Frascino, kasan-dev, open list:DOCUMENTATION, LKML, Linux ARM, linux-...@vger.kernel.org, Linux Memory Management List, Linux Kbuild mailing list, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan
Done in v3, thanks!


Vincenzo Frascino

Jan 9, 2019, 5:10:35 AM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-...@vger.kernel.org, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan
On 03/01/2019 18:45, Andrey Konovalov wrote:
> Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
> in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
>
> Suggested-by: Vincenzo Frascino <vincenzo...@arm.com>
> Signed-off-by: Andrey Konovalov <andre...@google.com>
> ---
> arch/arm64/include/asm/cache.h | 6 ++++++
> mm/kasan/common.c | 2 --
> 2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
> index 13dd42c3ad4e..eb43e09c1980 100644
> --- a/arch/arm64/include/asm/cache.h
> +++ b/arch/arm64/include/asm/cache.h
> @@ -58,6 +58,12 @@
> */
> #define ARCH_DMA_MINALIGN (128)
>
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
> +#else
> +#define ARCH_SLAB_MINALIGN __alignof__(unsigned long long)
> +#endif
> +

Could you please remove the "#else" case here, because it is redundant (it is
already defined in linux/slab.h under an #ifndef guard) and could be misleading
in the future?
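
(For reference, the generic fallback that makes the "#else" redundant lives in
include/linux/slab.h and looks roughly like this:

	#ifndef ARCH_SLAB_MINALIGN
	#define ARCH_SLAB_MINALIGN __alignof__(unsigned long long)
	#endif

so an architecture only needs to provide the override when it wants a larger
minimum alignment.)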

> #ifndef __ASSEMBLY__
>
> #include <linux/bitops.h>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 03d5d1374ca7..44390392d4c9 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -298,8 +298,6 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> return;
> }
>
> - cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
> -
> *flags |= SLAB_KASAN;
> }
>
>

--
Regards,
Vincenzo

Andrey Konovalov

Jan 11, 2019, 8:49:04 AM
to Vincenzo Frascino, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, kasan-dev, open list:DOCUMENTATION, LKML, Linux ARM, linux-...@vger.kernel.org, Linux Memory Management List, Linux Kbuild mailing list, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan
Sure, sent a patch. Thanks!

>
> > #ifndef __ASSEMBLY__
> >
> > #include <linux/bitops.h>
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index 03d5d1374ca7..44390392d4c9 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -298,8 +298,6 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > return;
> > }
> >
> > - cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
> > -
> > *flags |= SLAB_KASAN;
> > }
> >
> >
>
> --
> Regards,
> Vincenzo