[PATCH 0/5] kasan: more tag based mode fixes


Andrey Konovalov

Feb 11, 2019, 4:59:59 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
Andrey Konovalov (5):
kasan: fix assigning tags twice
kasan, kmemleak: pass tagged pointers to kmemleak
kmemleak: account for tagged pointers when calculating pointer range
kasan, slub: move kasan_poison_slab hook before page_address
kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED

mm/kasan/common.c | 29 +++++++++++++++++------------
mm/kmemleak.c | 10 +++++++---
mm/slab.h | 6 ++----
mm/slab_common.c | 2 +-
mm/slub.c | 32 +++++++++++++++-----------------
5 files changed, 42 insertions(+), 37 deletions(-)

--
2.20.1.791.gb4d0f1c61a-goog

Andrey Konovalov

Feb 11, 2019, 5:00:04 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
Right now we call kmemleak hooks before assigning tags to pointers in
KASAN hooks. As a result, when an object gets allocated, kmemleak sees
a differently tagged pointer, compared to the one it sees when the object
gets freed. Fix it by calling KASAN hooks before kmemleak's ones.

Reported-by: Qian Cai <c...@lca.pw>
Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/slab.h | 6 ++----
mm/slab_common.c | 2 +-
mm/slub.c | 3 ++-
3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 4190c24ef0e9..638ea1b25d39 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -437,11 +437,9 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,

flags &= gfp_allowed_mask;
for (i = 0; i < size; i++) {
- void *object = p[i];
-
- kmemleak_alloc_recursive(object, s->object_size, 1,
+ p[i] = kasan_slab_alloc(s, p[i], flags);
+ kmemleak_alloc_recursive(p[i], s->object_size, 1,
s->flags, flags);
- p[i] = kasan_slab_alloc(s, object, flags);
}

if (memcg_kmem_enabled())
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 81732d05e74a..fe524c8d0246 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1228,8 +1228,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
flags |= __GFP_COMP;
page = alloc_pages(flags, order);
ret = page ? page_address(page) : NULL;
- kmemleak_alloc(ret, size, 1, flags);
ret = kasan_kmalloc_large(ret, size, flags);
+ kmemleak_alloc(ret, size, 1, flags);
return ret;
}
EXPORT_SYMBOL(kmalloc_order);
diff --git a/mm/slub.c b/mm/slub.c
index 1e3d0ec4e200..4a3d7686902f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1374,8 +1374,9 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
*/
static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
{
+ ptr = kasan_kmalloc_large(ptr, size, flags);
kmemleak_alloc(ptr, size, 1, flags);
- return kasan_kmalloc_large(ptr, size, flags);
+ return ptr;
}

static __always_inline void kfree_hook(void *x)
--
2.20.1.791.gb4d0f1c61a-goog
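
For illustration, a rough sketch of the intended hook ordering (a hypothetical helper, not code from the patch; with CONFIG_KASAN_SW_TAGS, kasan_slab_alloc() may return a pointer whose top byte carries a tag, and kmemleak has to record that same tagged value so that allocation and free report the same pointer):

static void *slab_alloc_hooks_sketch(struct kmem_cache *s, void *object,
				     gfp_t flags)
{
	/* Assign the tag first; the returned pointer may differ from 'object'. */
	object = kasan_slab_alloc(s, object, flags);
	/* Record the tagged pointer so kmemleak sees the same value on free. */
	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
	return object;
}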

Andrey Konovalov

Feb 11, 2019, 5:00:06 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
kmemleak keeps two global variables, min_addr and max_addr, which store
the range of valid (encountered by kmemleak) pointer values, which it
later uses to speed up pointer lookup when scanning blocks.

With tagged pointers this range will get bigger than it needs to be.
This patch makes kmemleak untag pointers before saving them to min_addr
and max_addr and when performing a lookup.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kmemleak.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index f9d9dc250428..707fa5579f66 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -574,6 +574,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
unsigned long flags;
struct kmemleak_object *object, *parent;
struct rb_node **link, *rb_parent;
+ unsigned long untagged_ptr;

object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
if (!object) {
@@ -619,8 +620,9 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,

write_lock_irqsave(&kmemleak_lock, flags);

- min_addr = min(min_addr, ptr);
- max_addr = max(max_addr, ptr + size);
+ untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
+ min_addr = min(min_addr, untagged_ptr);
+ max_addr = max(max_addr, untagged_ptr + size);
link = &object_tree_root.rb_node;
rb_parent = NULL;
while (*link) {
@@ -1333,6 +1335,7 @@ static void scan_block(void *_start, void *_end,
unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER);
unsigned long *end = _end - (BYTES_PER_POINTER - 1);
unsigned long flags;
+ unsigned long untagged_ptr;

read_lock_irqsave(&kmemleak_lock, flags);
for (ptr = start; ptr < end; ptr++) {
@@ -1347,7 +1350,8 @@ static void scan_block(void *_start, void *_end,
pointer = *ptr;
kasan_enable_current();

- if (pointer < min_addr || pointer >= max_addr)
+ untagged_ptr = (unsigned long)kasan_reset_tag((void *)pointer);
+ if (untagged_ptr < min_addr || untagged_ptr >= max_addr)
continue;

/*
--
2.20.1.791.gb4d0f1c61a-goog
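
For illustration, a rough sketch of the lookup logic (a hypothetical helper, not code from the patch; kasan_reset_tag() clears the tag bits in the top byte, so both the saved range and the comparison use canonical, untagged addresses):

static bool kmemleak_ptr_in_range_sketch(unsigned long pointer,
					 unsigned long min_addr,
					 unsigned long max_addr)
{
	unsigned long untagged_ptr;

	/* Strip the KASAN tag before comparing against the untagged range. */
	untagged_ptr = (unsigned long)kasan_reset_tag((void *)pointer);
	return untagged_ptr >= min_addr && untagged_ptr < max_addr;
}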

Andrey Konovalov

Feb 11, 2019, 5:00:07 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
With tag based KASAN page_address() looks at the page flags to see
whether the resulting pointer needs to have a tag set. Since we don't
want to set a tag when page_address() is called on SLAB pages, we call
page_kasan_tag_reset() in kasan_poison_slab(). However in allocate_slab()
page_address() is called before kasan_poison_slab(). Fix it by changing
the order.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/slub.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 4a3d7686902f..ce874a5c9ee7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1642,12 +1642,15 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
if (page_is_pfmemalloc(page))
SetPageSlabPfmemalloc(page);

+ kasan_poison_slab(page);
+
start = page_address(page);

- if (unlikely(s->flags & SLAB_POISON))
+ if (unlikely(s->flags & SLAB_POISON)) {
+ metadata_access_enable();
memset(start, POISON_INUSE, PAGE_SIZE << order);
-
- kasan_poison_slab(page);
+ metadata_access_disable();
+ }

shuffle = shuffle_freelist(s, page);

--
2.20.1.791.gb4d0f1c61a-goog
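
For illustration, a rough sketch of why the ordering matters (a hypothetical helper, not code from the patch; kasan_poison_slab() calls page_kasan_tag_reset(), and with CONFIG_KASAN_SW_TAGS page_address() derives the tag of the returned pointer from the page flags):

static void *slab_start_sketch(struct page *page)
{
	/* Reset the page's KASAN tag before computing the linear address. */
	kasan_poison_slab(page);
	/* The returned pointer now carries the reset (match-all) tag. */
	return page_address(page);
}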

Andrey Konovalov

Feb 11, 2019, 5:00:08 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
CONFIG_SLAB_FREELIST_HARDENED hashes the freelist pointer with the address
at which that pointer gets stored within the object. With tag based KASAN
we don't account for that when building the freelist, as we call
set_freepointer() with an untagged first argument. This patch changes the
code to properly propagate tags throughout the loop.

Reported-by: Qian Cai <c...@lca.pw>
Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/slub.c | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index ce874a5c9ee7..0d32f8d30752 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -303,11 +303,6 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
__p < (__addr) + (__objects) * (__s)->size; \
__p += (__s)->size)

-#define for_each_object_idx(__p, __idx, __s, __addr, __objects) \
- for (__p = fixup_red_left(__s, __addr), __idx = 1; \
- __idx <= __objects; \
- __p += (__s)->size, __idx++)
-
/* Determine object index from a given position */
static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
{
@@ -1655,17 +1650,16 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
shuffle = shuffle_freelist(s, page);

if (!shuffle) {
- for_each_object_idx(p, idx, s, start, page->objects) {
- if (likely(idx < page->objects)) {
- next = p + s->size;
- next = setup_object(s, page, next);
- set_freepointer(s, p, next);
- } else
- set_freepointer(s, p, NULL);
- }
start = fixup_red_left(s, start);
start = setup_object(s, page, start);
page->freelist = start;
+ for (idx = 0, p = start; idx < page->objects - 1; idx++) {
+ next = p + s->size;
+ next = setup_object(s, page, next);
+ set_freepointer(s, p, next);
+ p = next;
+ }
+ set_freepointer(s, p, NULL);
}

page->inuse = page->objects;
--
2.20.1.791.gb4d0f1c61a-goog
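
For context, roughly how CONFIG_SLAB_FREELIST_HARDENED obfuscates a stored freelist pointer (a simplified sketch modeled on freelist_ptr() in mm/slub.c of that time; the XOR with ptr_addr is why the tag bits of the storage address have to match between store and load):

static inline void *freelist_ptr_sketch(const struct kmem_cache *s, void *ptr,
					unsigned long ptr_addr)
{
#ifdef CONFIG_SLAB_FREELIST_HARDENED
	/* The stored value depends on the (possibly tagged) address it lives at. */
	return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
#else
	return ptr;
#endif
}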

Andrey Konovalov

Feb 11, 2019, 5:02:33 PM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
When an object is kmalloc()'ed, two hooks are called: kasan_slab_alloc()
and kasan_kmalloc(). Right now we assign a tag twice, once in each of
the hooks. Fix it by assigning a tag only in the former hook.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kasan/common.c | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 73c9cbfdedf4..09b534fbba17 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -361,10 +361,15 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
* get different tags.
*/
static u8 assign_tag(struct kmem_cache *cache, const void *object,
- bool init, bool krealloc)
+ bool init, bool keep_tag)
{
- /* Reuse the same tag for krealloc'ed objects. */
- if (krealloc)
+ /*
+ * 1. When an object is kmalloc()'ed, two hooks are called:
+ * kasan_slab_alloc() and kasan_kmalloc(). We assign the
+ * tag only in the first one.
+ * 2. We reuse the same tag for krealloc'ed objects.
+ */
+ if (keep_tag)
return get_tag(object);

/*
@@ -405,12 +410,6 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
return (void *)object;
}

-void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
- gfp_t flags)
-{
- return kasan_kmalloc(cache, object, cache->object_size, flags);
-}
-
static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
{
if (IS_ENABLED(CONFIG_KASAN_GENERIC))
@@ -467,7 +466,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
}

static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
- size_t size, gfp_t flags, bool krealloc)
+ size_t size, gfp_t flags, bool keep_tag)
{
unsigned long redzone_start;
unsigned long redzone_end;
@@ -485,7 +484,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_SHADOW_SCALE_SIZE);

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
- tag = assign_tag(cache, object, false, krealloc);
+ tag = assign_tag(cache, object, false, keep_tag);

/* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
kasan_unpoison_shadow(set_tag(object, tag), size);
@@ -498,10 +497,16 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
return set_tag(object, tag);
}

+void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
+ gfp_t flags)
+{
+ return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
+}
+
void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags)
{
- return __kasan_kmalloc(cache, object, size, flags, false);
+ return __kasan_kmalloc(cache, object, size, flags, true);
}
EXPORT_SYMBOL(kasan_kmalloc);

--
2.20.1.791.gb4d0f1c61a-goog
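
For illustration, a rough sketch of the resulting tag policy (a hypothetical helper, not the actual assign_tag(); get_tag() and random_tag() are internal tag-based KASAN helpers):

static u8 pick_tag_sketch(const void *object, bool keep_tag)
{
	/* kasan_kmalloc() and krealloc() pass keep_tag == true. */
	if (keep_tag)
		return get_tag(object);	/* reuse the tag already in the pointer */
	/* kasan_slab_alloc() path: generate a fresh tag for the object. */
	return random_tag();
}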

Qian Cai

Feb 11, 2019, 9:43:37 PM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
Well, this one patch does not work here, as it throws the endless errors
below during boot. It still needs this patch to fix it:

https://marc.info/?l=linux-mm&m=154955366113951&w=2

[ 85.744772] BUG kmemleak_object (Tainted: G B L ): Freepointer corrupt
[ 85.744776]
-----------------------------------------------------------------------------
[ 85.744776]
[ 85.744788] INFO: Allocated in create_object+0x88/0x9c8 age=2564 cpu=153 pid=1
[ 85.744797] kmem_cache_alloc+0x39c/0x4ec
[ 85.744803] create_object+0x88/0x9c8
[ 85.744811] kmemleak_alloc+0xbc/0x180
[ 85.744818] kmem_cache_alloc+0x3ec/0x4ec
[ 85.744825] acpi_ut_create_generic_state+0x64/0xc4
[ 85.744832] acpi_ut_create_pkg_state+0x24/0x1c8
[ 85.744840] acpi_ut_walk_package_tree+0x268/0x564
[ 85.744848] acpi_ns_init_one_package+0x80/0x114
[ 85.744856] acpi_ns_init_one_object+0x214/0x3d8
[ 85.744862] acpi_ns_walk_namespace+0x288/0x384
[ 85.744869] acpi_walk_namespace+0xac/0xe8
[ 85.744877] acpi_ns_initialize_objects+0x50/0x98
[ 85.744883] acpi_load_tables+0xac/0x120
[ 85.744891] acpi_init+0x128/0x850
[ 85.744898] do_one_initcall+0x3ac/0x8c0
[ 85.744906] kernel_init_freeable+0xcdc/0x1104
[ 85.744916] INFO: Freed in free_object_rcu+0x200/0x228 age=3 cpu=153 pid=0
[ 85.744923] free_object_rcu+0x200/0x228
[ 85.744931] rcu_process_callbacks+0xb00/0x12c0
[ 85.744937] __do_softirq+0x644/0xfd0
[ 85.744944] irq_exit+0x29c/0x370
[ 85.744952] __handle_domain_irq+0xe0/0x1c4
[ 85.744958] gic_handle_irq+0x1c4/0x3b0
[ 85.744964] el1_irq+0xb0/0x140
[ 85.744971] arch_cpu_idle+0x26c/0x594
[ 85.744978] default_idle_call+0x44/0x5c
[ 85.744985] do_idle+0x180/0x260
[ 85.744993] cpu_startup_entry+0x24/0x28
[ 85.745001] secondary_start_kernel+0x36c/0x440
[ 85.745009] INFO: Slab 0x(____ptrval____) objects=91 used=0 fp=0x(____ptrval____) flags=0x17ffffffc000200
[ 85.745015] INFO: Object 0x(____ptrval____) @offset=35296 fp=0x(____ptrval____)

[ 84.226750] Redzone (____ptrval____): bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb ................
[ 84.226765] Redzone (____ptrval____): 5a ...
[ 84.226765] ... worker/223:0 Tainted: G B L 5.0.0-rc6+ #36
[ 84.226790] Hardware name: HPE Apollo 70 /C01_APACHE_MB, BIOS L50_5.13_1.0.6 07/10/2018
[ 84.226798] Workqueue: events free_obj_work
[ 84.226802] Call trace:
[ 84.226809] dump_backtrace+0x0/0x450
[ 84.226815] show_stack+0x20/0x2c
[ 84.226822] __dump_stack+0x20/0x28
[ 84.226828] dump_stack+0xa0/0xfc
[ 84.226835] print_trailer+0x1a8/0x1bc
[ 84.226842] object_err+0x40/0x50
[ 84.226848] check_object+0x214/0x2b8
[ 84.226854] __free_slab+0x9c/0x31c
[ 84.226860] discard_slab+0x78/0xa8
[ 84.226866] kmem_cache_free+0x99c/0x9f0
[ 84.226873] free_obj_work+0x92c/0xa44
[ 84.226879] process_one_work+0x894/0x1280
[ 84.226885] worker_thread+0x684/0xa1c
[ 84.226892] kthread+0x2cc/0x2e8
[ 84.226898] ret_from_fork+0x10/0x18
[ 84.229197]

Andrey Konovalov

Feb 12, 2019, 8:27:10 AM
to Qian Cai, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasan-dev, Linux Memory Management List, LKML, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
Hm, did you apply all 6 patches (the one that you sent and these five)?

Qian Cai

Feb 12, 2019, 8:43:58 AM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasan-dev, Linux Memory Management List, LKML, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov


On 2/12/19 8:26 AM, Andrey Konovalov wrote:
> Hm, did you apply all 6 patches (the one that you sent and these five)?
Yes.

Andrey Konovalov

Feb 12, 2019, 9:42:42 AM
to Qian Cai, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasan-dev, Linux Memory Management List, LKML, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
I'm failing to reproduce this in QEMU. You're still using the same
config, right? Could you share the whole dmesg up to the first BUG?

Vincenzo Frascino

Feb 12, 2019, 10:57:04 AM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Kostya Serebryany, Evgeniy Stepanov
On 11/02/2019 21:59, Andrey Konovalov wrote:
> Right now we call kmemleak hooks before assigning tags to pointers in
> KASAN hooks. As a result, when an object gets allocated, kmemleak sees
> a differently tagged pointer, compared to the one it sees when the object
> gets freed. Fix it by calling KASAN hooks before kmemleak's ones.
>

Nit: Could you please add comments to the code? That should prevent the code
from being refactored in the future in a way that reintroduces the same issue.
Regards,
Vincenzo

Qian Cai

Feb 12, 2019, 11:07:41 AM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasan-dev, Linux Memory Management List, LKML, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov

https://git.sr.ht/~cai/linux-debug/tree/master/dmesg

Andrew Morton

Feb 12, 2019, 4:12:53 PM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
On Mon, 11 Feb 2019 22:59:53 +0100 Andrey Konovalov <andre...@google.com> wrote:

> With tag based KASAN page_address() looks at the page flags to see
> whether the resulting pointer needs to have a tag set. Since we don't
> want to set a tag when page_address() is called on SLAB pages, we call
> page_kasan_tag_reset() in kasan_poison_slab(). However in allocate_slab()
> page_address() is called before kasan_poison_slab(). Fix it by changing
> the order.
>
> ...
>
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1642,12 +1642,15 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> if (page_is_pfmemalloc(page))
> SetPageSlabPfmemalloc(page);
>
> + kasan_poison_slab(page);
> +
> start = page_address(page);
>
> - if (unlikely(s->flags & SLAB_POISON))
> + if (unlikely(s->flags & SLAB_POISON)) {
> + metadata_access_enable();
> memset(start, POISON_INUSE, PAGE_SIZE << order);
> -
> - kasan_poison_slab(page);
> + metadata_access_disable();
> + }
>
> shuffle = shuffle_freelist(s, page);

This doesn't compile when CONFIG_SLUB_DEBUG=n. Please review carefully:

--- a/mm/slub.c~kasan-slub-move-kasan_poison_slab-hook-before-page_address-fix
+++ a/mm/slub.c
@@ -1357,6 +1357,14 @@ slab_flags_t kmem_cache_flags(unsigned i

#define disable_higher_order_debug 0

+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
static inline unsigned long slabs_node(struct kmem_cache *s, int node)
{ return 0; }
static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
_

Qian Cai

Feb 12, 2019, 9:15:30 PM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasan-dev, Linux Memory Management List, LKML, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov


On 2/12/19 11:07 AM, Qian Cai wrote:
> https://git.sr.ht/~cai/linux-debug/tree/master/dmesg
>

FYI, I just sent a patch to take care of this.
https://marc.info/?l=linux-mm&m=155002356527913&w=2

Andrey Konovalov

Feb 13, 2019, 8:07:40 AM
to Vincenzo Frascino, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasan-dev, Linux Memory Management List, LKML, Qian Cai, Kostya Serebryany, Evgeniy Stepanov
On Tue, Feb 12, 2019 at 4:57 PM Vincenzo Frascino
<vincenzo...@arm.com> wrote:
>
> On 11/02/2019 21:59, Andrey Konovalov wrote:
> > Right now we call kmemleak hooks before assigning tags to pointers in
> > KASAN hooks. As a result, when an object gets allocated, kmemleak sees
> > a differently tagged pointer, compared to the one it sees when the object
> > gets freed. Fix it by calling KASAN hooks before kmemleak's ones.
> >
>
> Nit: Could you please add comments to the code? That should prevent the code
> from being refactored in the future in a way that reintroduces the same issue.

Sure, I'll send v2 with comments, thanks!

Andrey Konovalov

Feb 13, 2019, 8:25:17 AM
to Andrew Morton, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, kasan-dev, Linux Memory Management List, LKML, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
Sorry, I missed this. I think it makes more sense to move this memset
into a separate function under the CONFIG_SLUB_DEBUG ifdef, since all
the other poisoning code is there as well. I'll send a v2.

Andrey Konovalov

Feb 13, 2019, 8:58:37 AM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
Changes in v2:
- Add comments about kmemleak vs KASAN hooks order.
- Fix compilation error when CONFIG_SLUB_DEBUG is not defined.

Andrey Konovalov (5):
kasan: fix assigning tags twice
kasan, kmemleak: pass tagged pointers to kmemleak
kmemleak: account for tagged pointers when calculating pointer range
kasan, slub: move kasan_poison_slab hook before page_address
kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED

mm/kasan/common.c | 29 +++++++++++++++++------------
mm/kmemleak.c | 10 +++++++---
mm/slab.h | 7 +++----
mm/slab_common.c | 3 ++-
mm/slub.c | 43 +++++++++++++++++++++++++------------------
5 files changed, 54 insertions(+), 38 deletions(-)

--
2.20.1.791.gb4d0f1c61a-goog

Andrey Konovalov

Feb 13, 2019, 8:58:39 AM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
When an object is kmalloc()'ed, two hooks are called: kasan_slab_alloc()
and kasan_kmalloc(). Right now we assign a tag twice, once in each of
the hooks. Fix it by assigning a tag only in the former hook.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---

Andrey Konovalov

Feb 13, 2019, 8:58:41 AM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
Right now we call kmemleak hooks before assigning tags to pointers in
KASAN hooks. As a result, when an object gets allocated, kmemleak sees
a differently tagged pointer, compared to the one it sees when the object
gets freed. Fix it by calling KASAN hooks before kmemleak's ones.

Reported-by: Qian Cai <c...@lca.pw>
Signed-off-by: Andrey Konovalov <andre...@google.com>
---
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1374,8 +1374,9 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
*/
static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
{
+ ptr = kasan_kmalloc_large(ptr, size, flags);
kmemleak_alloc(ptr, size, 1, flags);
- return kasan_kmalloc_large(ptr, size, flags);
+ return ptr;
}

static __always_inline void kfree_hook(void *x)
--
2.20.1.791.gb4d0f1c61a-goog

Andrey Konovalov

Feb 13, 2019, 8:58:42 AM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
kmemleak keeps two global variables, min_addr and max_addr, which store
the range of valid (encountered by kmemleak) pointer values, which it
later uses to speed up pointer lookup when scanning blocks.

With tagged pointers this range will get bigger than it needs to be.
This patch makes kmemleak untag pointers before saving them to min_addr
and max_addr and when performing a lookup.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/kmemleak.c | 10 +++++++---
mm/slab.h | 1 +
mm/slab_common.c | 1 +
mm/slub.c | 1 +
4 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
index 638ea1b25d39..384105318779 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -438,6 +438,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
flags &= gfp_allowed_mask;
for (i = 0; i < size; i++) {
p[i] = kasan_slab_alloc(s, p[i], flags);
+ /* As p[i] might get tagged, call kmemleak hook after KASAN. */
kmemleak_alloc_recursive(p[i], s->object_size, 1,
s->flags, flags);
}
diff --git a/mm/slab_common.c b/mm/slab_common.c
index fe524c8d0246..f9d89c1b5977 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1229,6 +1229,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
page = alloc_pages(flags, order);
ret = page ? page_address(page) : NULL;
ret = kasan_kmalloc_large(ret, size, flags);
+ /* As ret might get tagged, call kmemleak hook after KASAN. */
kmemleak_alloc(ret, size, 1, flags);
return ret;
}
diff --git a/mm/slub.c b/mm/slub.c
index 4a3d7686902f..f5a451c49190 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1375,6 +1375,7 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
{
ptr = kasan_kmalloc_large(ptr, size, flags);
+ /* As ptr might get tagged, call kmemleak hook after KASAN. */
kmemleak_alloc(ptr, size, 1, flags);
return ptr;
}
--
2.20.1.791.gb4d0f1c61a-goog

Andrey Konovalov

Feb 13, 2019, 8:58:44 AM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
With tag based KASAN page_address() looks at the page flags to see
whether the resulting pointer needs to have a tag set. Since we don't
want to set a tag when page_address() is called on SLAB pages, we call
page_kasan_tag_reset() in kasan_poison_slab(). However in allocate_slab()
page_address() is called before kasan_poison_slab(). Fix it by changing
the order.

Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/slub.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f5a451c49190..a7e7c7f719f9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1075,6 +1075,16 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page,
init_tracking(s, object);
}

+static void setup_page_debug(struct kmem_cache *s, void *addr, int order)
+{
+ if (!(s->flags & SLAB_POISON))
+ return;
+
+ metadata_access_enable();
+ memset(addr, POISON_INUSE, PAGE_SIZE << order);
+ metadata_access_disable();
+}
+
static inline int alloc_consistency_checks(struct kmem_cache *s,
struct page *page,
void *object, unsigned long addr)
@@ -1330,6 +1340,8 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
#else /* !CONFIG_SLUB_DEBUG */
static inline void setup_object_debug(struct kmem_cache *s,
struct page *page, void *object) {}
+static inline void setup_page_debug(struct kmem_cache *s,
+ void *addr, int order) {}

static inline int alloc_debug_processing(struct kmem_cache *s,
struct page *page, void *object, unsigned long addr) { return 0; }
@@ -1643,12 +1655,11 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
if (page_is_pfmemalloc(page))
SetPageSlabPfmemalloc(page);

- start = page_address(page);
+ kasan_poison_slab(page);

- if (unlikely(s->flags & SLAB_POISON))
- memset(start, POISON_INUSE, PAGE_SIZE << order);
+ start = page_address(page);

- kasan_poison_slab(page);
+ setup_page_debug(s, start, order);

Andrey Konovalov

Feb 13, 2019, 8:58:46 AM
to Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov, Andrey Konovalov
CONFIG_SLAB_FREELIST_HARDENED hashes the freelist pointer with the address
at which that pointer gets stored within the object. With tag based KASAN
we don't account for that when building the freelist, as we call
set_freepointer() with an untagged first argument. This patch changes the
code to properly propagate tags throughout the loop.

Reported-by: Qian Cai <c...@lca.pw>
Signed-off-by: Andrey Konovalov <andre...@google.com>
---
mm/slub.c | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index a7e7c7f719f9..80da3a40b74d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -303,11 +303,6 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
__p < (__addr) + (__objects) * (__s)->size; \
__p += (__s)->size)

-#define for_each_object_idx(__p, __idx, __s, __addr, __objects) \
- for (__p = fixup_red_left(__s, __addr), __idx = 1; \
- __idx <= __objects; \
- __p += (__s)->size, __idx++)
-
/* Determine object index from a given position */
static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
{
@@ -1664,17 +1659,16 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
shuffle = shuffle_freelist(s, page);

if (!shuffle) {
- for_each_object_idx(p, idx, s, start, page->objects) {
- if (likely(idx < page->objects)) {
- next = p + s->size;
- next = setup_object(s, page, next);
- set_freepointer(s, p, next);
- } else
- set_freepointer(s, p, NULL);
- }
start = fixup_red_left(s, start);
start = setup_object(s, page, start);
page->freelist = start;
+ for (idx = 0, p = start; idx < page->objects - 1; idx++) {
+ next = p + s->size;
+ next = setup_object(s, page, next);
+ set_freepointer(s, p, next);
+ p = next;
+ }
+ set_freepointer(s, p, NULL);
}

page->inuse = page->objects;
--
2.20.1.791.gb4d0f1c61a-goog

Qian Cai

Feb 13, 2019, 10:36:53 AM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
On Wed, 2019-02-13 at 14:58 +0100, Andrey Konovalov wrote:
> kmemleak keeps two global variables, min_addr and max_addr, which store
> the range of valid (encountered by kmemleak) pointer values, which it
> later uses to speed up pointer lookup when scanning blocks.
>
> With tagged pointers this range will get bigger than it needs to be.
> This patch makes kmemleak untag pointers before saving them to min_addr
> and max_addr and when performing a lookup.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>

Tested-by: Qian Cai <c...@lca.pw>

Andrew Morton

Feb 13, 2019, 3:42:01 PM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
On Wed, 13 Feb 2019 14:58:25 +0100 Andrey Konovalov <andre...@google.com> wrote:

> Changes in v2:
> - Add comments about kmemleak vs KASAN hooks order.

I assume this refers to Vincenzo's review of "kasan, kmemleak: pass
tagged pointers to kmemleak". But v2 of that patch is unchanged.

Andrey Konovalov

Feb 13, 2019, 4:28:24 PM
to Andrew Morton, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, kasan-dev, Linux Memory Management List, LKML, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
On Wed, Feb 13, 2019 at 9:42 PM Andrew Morton <ak...@linux-foundation.org> wrote:
>
> On Wed, 13 Feb 2019 14:58:25 +0100 Andrey Konovalov <andre...@google.com> wrote:
>
> > Changes in v2:
> > - Add comments about kmemleak vs KASAN hooks order.
>
> I assume this refers to Vincenzo's review of "kasan, kmemleak: pass
> tagged pointers to kmemleak". But v2 of that patch is unchanged.

I've accidentally squashed this change into commit #3 instead of #2 :(

Catalin Marinas

Feb 15, 2019, 9:05:47 AM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
On Mon, Feb 11, 2019 at 10:59:52PM +0100, Andrey Konovalov wrote:
> kmemleak keeps two global variables, min_addr and max_addr, which store
> the range of valid (encountered by kmemleak) pointer values, which it
> later uses to speed up pointer lookup when scanning blocks.
>
> With tagged pointers this range will get bigger than it needs to be.
> This patch makes kmemleak untag pointers before saving them to min_addr
> and max_addr and when performing a lookup.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>

Acked-by: Catalin Marinas <catalin...@arm.com>

Catalin Marinas

Feb 15, 2019, 9:07:55 AM
to Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kasa...@googlegroups.com, linu...@kvack.org, linux-...@vger.kernel.org, Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov
On Wed, Feb 13, 2019 at 02:58:28PM +0100, Andrey Konovalov wrote:
> kmemleak keeps two global variables, min_addr and max_addr, which store
> the range of valid (encountered by kmemleak) pointer values, which it
> later uses to speed up pointer lookup when scanning blocks.
>
> With tagged pointers this range will get bigger than it needs to be.
> This patch makes kmemleak untag pointers before saving them to min_addr
> and max_addr and when performing a lookup.
>
> Signed-off-by: Andrey Konovalov <andre...@google.com>

I reviewed the old series. This patch also looks fine:

Acked-by: Catalin Marinas <catalin...@arm.com>