[PATCH 0/3] kasan: support backing vmalloc space with real shadow memory


Daniel Axtens

Jul 25, 2019, 1:55:16 AM
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, dvy...@google.com, Daniel Axtens
Currently, vmalloc space is backed by the early shadow page. This
means that kasan is incompatible with VMAP_STACK, and it also provides
a hurdle for architectures that do not have a dedicated module space
(like powerpc64).

This series provides a mechanism to back vmalloc space with real,
dynamically allocated memory. I have only wired up x86, because that's
the only currently supported arch I can work with easily, but it's
very easy to wire up other architectures.

This has been discussed before in the context of VMAP_STACK:
- https://bugzilla.kernel.org/show_bug.cgi?id=202009
- https://lkml.org/lkml/2018/7/22/198

In terms of implementation details:

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate
a backing page the first time a mapping in vmalloc space uses a
particular page of the shadow region. Keep this page around
regardless of whether the mapping is later freed - in the mean time
the page could have become shared by another vmalloc mapping.

This can in theory lead to unbounded memory growth, but the vmalloc
allocator is pretty good at reusing addresses, so the practical memory
usage appears to grow at first but then stay fairly stable.

If we run into practical memory exhaustion issues, I'm happy to
consider hooking into the book-keeping that vmap does, but I am not
convinced that it will be an issue.
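
To make the sharing concrete, here is a small standalone illustration of
the shadow arithmetic (a sketch only: the formula matches the kernel's
kasan_mem_to_shadow(), but the constants and the two addresses below are
made up for the example). Two small vmalloc areas that sit close together
translate to the same 4K page of shadow, so they have to share a single
backing page:

#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 3	/* 8 bytes of memory per shadow byte */
#define KASAN_SHADOW_OFFSET 0xdffffc0000000000ULL
#define PAGE_SIZE 4096ULL

static unsigned long long mem_to_shadow(unsigned long long addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
	/* Two hypothetical small vmalloc areas, a couple of pages apart. */
	unsigned long long a = 0xffffc90000001000ULL;	/* e.g. vmalloc(300)  */
	unsigned long long b = 0xffffc90000003000ULL;	/* e.g. vmalloc(2000) */

	/* Both shadow addresses fall in the same shadow page, so the
	 * backing page allocated for the first mapping has to stick
	 * around for the second. */
	printf("same shadow page: %s\n",
	       mem_to_shadow(a) / PAGE_SIZE == mem_to_shadow(b) / PAGE_SIZE ?
	       "yes" : "no");
	return 0;
}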

Daniel Axtens (3):
kasan: support backing vmalloc space with real shadow memory
fork: support VMAP_STACK with KASAN_VMALLOC
x86/kasan: support KASAN_VMALLOC

Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++++++
arch/Kconfig | 9 ++---
arch/x86/Kconfig | 1 +
arch/x86/mm/fault.c | 13 +++++++
arch/x86/mm/kasan_init_64.c | 10 ++++++
include/linux/kasan.h | 16 +++++++++
kernel/fork.c | 4 +++
lib/Kconfig.kasan | 16 +++++++++
lib/test_kasan.c | 26 ++++++++++++++
mm/kasan/common.c | 51 ++++++++++++++++++++++++++
mm/kasan/generic_report.c | 3 ++
mm/kasan/kasan.h | 1 +
mm/vmalloc.c | 15 +++++++-
13 files changed, 220 insertions(+), 5 deletions(-)

--
2.20.1

Daniel Axtens

Jul 25, 2019, 1:55:21 AM
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, dvy...@google.com, Daniel Axtens
Hook into vmalloc and vmap, and dynamically allocate real shadow
memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate
a backing page the first time a mapping in vmalloc space uses a
particular page of the shadow region. Keep this page around
regardless of whether the mapping is later freed - in the mean time
the page could have become shared by another vmalloc mapping.

This can in theory lead to unbounded memory growth, but the vmalloc
allocator is pretty good at reusing addresses, so the practical memory
usage grows at first but then stays fairly stable.

This requires architecture support to actually use: arches must stop
mapping the read-only zero page over the portion of the shadow region
that covers the vmalloc space and instead leave it unmapped.

This allows KASAN with VMAP_STACK, and will be needed for architectures
that do not have a separate module space (e.g. powerpc64, which I am
currently working on).

Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Signed-off-by: Daniel Axtens <d...@axtens.net>
---
Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++++++
include/linux/kasan.h | 16 +++++++++
lib/Kconfig.kasan | 16 +++++++++
lib/test_kasan.c | 26 ++++++++++++++
mm/kasan/common.c | 51 ++++++++++++++++++++++++++
mm/kasan/generic_report.c | 3 ++
mm/kasan/kasan.h | 1 +
mm/vmalloc.c | 15 +++++++-
8 files changed, 187 insertions(+), 1 deletion(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index b72d07d70239..35fda484a672 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -215,3 +215,63 @@ brk handler is used to print bug reports.
A potential expansion of this mode is a hardware tag-based mode, which would
use hardware memory tagging support instead of compiler instrumentation and
manual shadow memory manipulation.
+
+What memory accesses are sanitised by KASAN?
+--------------------------------------------
+
+The kernel maps memory in a number of different parts of the address
+space. This poses something of a problem for KASAN, which requires
+that all addresses accessed by instrumented code have a valid shadow
+region.
+
+The range of kernel virtual addresses is large: there is not enough
+real memory to support a real shadow region for every address that
+could be accessed by the kernel.
+
+By default
+~~~~~~~~~~
+
+By default, architectures only map real memory over the shadow region
+for the linear mapping (and potentially other small areas). For all
+other areas - such as vmalloc and vmemmap space - a single read-only
+page is mapped over the shadow area. This read-only shadow page
+declares all memory accesses as permitted.
+
+This presents a problem for modules: they do not live in the linear
+mapping, but in a dedicated module space. By hooking in to the module
+allocator, KASAN can temporarily map real shadow memory to cover
+them. This allows detection of invalid accesses to module globals, for
+example.
+
+This also creates an incompatibility with ``VMAP_STACK``: if the stack
+lives in vmalloc space, it will be shadowed by the read-only page, and
+the kernel will fault when trying to set up the shadow data for stack
+variables.
+
+CONFIG_KASAN_VMALLOC
+~~~~~~~~~~~~~~~~~~~~
+
+With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
+cost of greater memory usage. Currently this is only supported on x86.
+
+This works by hooking into vmalloc and vmap, and dynamically
+allocating real shadow memory to back the mappings.
+
+Most mappings in vmalloc space are small, requiring less than a full
+page of shadow space. Allocating a full shadow page per mapping would
+therefore be wasteful. Furthermore, to ensure that different mappings
+use different shadow pages, mappings would have to be aligned to
+``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``.
+
+Instead, we share backing space across multiple mappings. We allocate
+a backing page the first time a mapping in vmalloc space uses a
+particular page of the shadow region. We keep this page around
+regardless of whether the mapping is later freed - in the mean time
+this page could have become shared by another vmalloc mapping.
+
+This can in theory lead to unbounded memory growth, but the vmalloc
+allocator is pretty good at reusing addresses, so the practical memory
+usage grows at first but then stays fairly stable.
+
+This allows ``VMAP_STACK`` support on x86, and enables support of
+architectures that do not have a fixed module region.
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index cc8a03cc9674..fcabc5a03fca 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -70,8 +70,18 @@ struct kasan_cache {
int free_meta_offset;
};

+/*
+ * These functions provide a special case to support backing module
+ * allocations with real shadow memory. With KASAN vmalloc, the special
+ * case is unnecessary, as the work is handled in the generic case.
+ */
+#ifndef CONFIG_KASAN_VMALLOC
int kasan_module_alloc(void *addr, size_t size);
void kasan_free_shadow(const struct vm_struct *vm);
+#else
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+#endif

int kasan_add_zero_shadow(void *start, unsigned long size);
void kasan_remove_zero_shadow(void *start, unsigned long size);
@@ -194,4 +204,10 @@ static inline void *kasan_reset_tag(const void *addr)

#endif /* CONFIG_KASAN_SW_TAGS */

+#ifdef CONFIG_KASAN_VMALLOC
+void kasan_cover_vmalloc(unsigned long requested_size, struct vm_struct *area);
+#else
+static inline void kasan_cover_vmalloc(unsigned long requested_size, struct vm_struct *area) {}
+#endif
+
#endif /* LINUX_KASAN_H */
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 4fafba1a923b..a320dc2e9317 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,9 @@ config HAVE_ARCH_KASAN
config HAVE_ARCH_KASAN_SW_TAGS
bool

+config HAVE_ARCH_KASAN_VMALLOC
+ bool
+
config CC_HAS_KASAN_GENERIC
def_bool $(cc-option, -fsanitize=kernel-address)

@@ -135,6 +138,19 @@ config KASAN_S390_4_LEVEL_PAGING
to 3TB of RAM with KASan enabled). This options allows to force
4-level paging instead.

+config KASAN_VMALLOC
+ bool "Back mappings in vmalloc space with real shadow memory"
+ depends on KASAN && HAVE_ARCH_KASAN_VMALLOC
+ help
+ By default, the shadow region for vmalloc space is the read-only
+ zero page. This means that KASAN cannot detect errors involving
+ vmalloc space.
+
+ Enabling this option will hook in to vmap/vmalloc and back those
+ mappings with real shadow memory allocated on demand. This allows
+ for KASAN to detect more sorts of errors (and to support vmapped
+ stacks), but at the cost of higher memory usage.
+
config TEST_KASAN
tristate "Module for testing KASAN for bug detection"
depends on m && KASAN
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index b63b367a94e8..d375246f5f96 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -18,6 +18,7 @@
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uaccess.h>
+#include <linux/vmalloc.h>

/*
* Note: test functions are marked noinline so that their names appear in
@@ -709,6 +710,30 @@ static noinline void __init kmalloc_double_kzfree(void)
kzfree(ptr);
}

+#ifdef CONFIG_KASAN_VMALLOC
+static noinline void __init vmalloc_oob(void)
+{
+ void *area;
+
+ pr_info("vmalloc out-of-bounds\n");
+
+ /*
+ * We have to be careful not to hit the guard page.
+ * The MMU will catch that and crash us.
+ */
+ area = vmalloc(3000);
+ if (!area) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ((volatile char *)area)[3100];
+ vfree(area);
+}
+#else
+static void __init vmalloc_oob(void) {}
+#endif
+
static int __init kmalloc_tests_init(void)
{
/*
@@ -752,6 +777,7 @@ static int __init kmalloc_tests_init(void)
kasan_strings();
kasan_bitops();
kmalloc_double_kzfree();
+ vmalloc_oob();

kasan_restore_multi_shot(multishot);

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2277b82902d8..a3bb84efccbf 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -568,6 +568,7 @@ void kasan_kfree_large(void *ptr, unsigned long ip)
/* The object will be poisoned by page_alloc. */
}

+#ifndef CONFIG_KASAN_VMALLOC
int kasan_module_alloc(void *addr, size_t size)
{
void *ret;
@@ -603,6 +604,7 @@ void kasan_free_shadow(const struct vm_struct *vm)
if (vm->flags & VM_KASAN)
vfree(kasan_mem_to_shadow(vm->addr));
}
+#endif

extern void __kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip);

@@ -722,3 +724,52 @@ static int __init kasan_memhotplug_init(void)

core_initcall(kasan_memhotplug_init);
#endif
+
+#ifdef CONFIG_KASAN_VMALLOC
+void kasan_cover_vmalloc(unsigned long requested_size, struct vm_struct *area)
+{
+ unsigned long shadow_alloc_start, shadow_alloc_end;
+ unsigned long addr;
+ unsigned long backing;
+ pgd_t *pgdp;
+ p4d_t *p4dp;
+ pud_t *pudp;
+ pmd_t *pmdp;
+ pte_t *ptep;
+ pte_t backing_pte;
+
+ shadow_alloc_start = ALIGN_DOWN(
+ (unsigned long)kasan_mem_to_shadow(area->addr),
+ PAGE_SIZE);
+ shadow_alloc_end = ALIGN(
+ (unsigned long)kasan_mem_to_shadow(area->addr + area->size),
+ PAGE_SIZE);
+
+ addr = shadow_alloc_start;
+ do {
+ pgdp = pgd_offset_k(addr);
+ p4dp = p4d_alloc(&init_mm, pgdp, addr);
+ pudp = pud_alloc(&init_mm, p4dp, addr);
+ pmdp = pmd_alloc(&init_mm, pudp, addr);
+ ptep = pte_alloc_kernel(pmdp, addr);
+
+ /*
+ * we can validly get here if pte is not none: it means we
+ * allocated this page earlier to use part of it for another
+ * allocation
+ */
+ if (pte_none(*ptep)) {
+ backing = __get_free_page(GFP_KERNEL);
+ backing_pte = pfn_pte(PFN_DOWN(__pa(backing)),
+ PAGE_KERNEL);
+ set_pte_at(&init_mm, addr, ptep, backing_pte);
+ }
+ } while (addr += PAGE_SIZE, addr != shadow_alloc_end);
+
+ requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
+ kasan_unpoison_shadow(area->addr, requested_size);
+ kasan_poison_shadow(area->addr + requested_size,
+ area->size - requested_size,
+ KASAN_VMALLOC_INVALID);
+}
+#endif
diff --git a/mm/kasan/generic_report.c b/mm/kasan/generic_report.c
index 36c645939bc9..2d97efd4954f 100644
--- a/mm/kasan/generic_report.c
+++ b/mm/kasan/generic_report.c
@@ -86,6 +86,9 @@ static const char *get_shadow_bug_type(struct kasan_access_info *info)
case KASAN_ALLOCA_RIGHT:
bug_type = "alloca-out-of-bounds";
break;
+ case KASAN_VMALLOC_INVALID:
+ bug_type = "vmalloc-out-of-bounds";
+ break;
}

return bug_type;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 014f19e76247..8b1f2fbc780b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -25,6 +25,7 @@
#endif

#define KASAN_GLOBAL_REDZONE 0xFA /* redzone for global variable */
+#define KASAN_VMALLOC_INVALID 0xF9 /* unallocated space in vmapped page */

/*
* Stack redzone shadow values
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4fa8d84599b0..8cbcb5056c9b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2012,6 +2012,15 @@ static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
va->vm = vm;
va->flags |= VM_VM_AREA;
spin_unlock(&vmap_area_lock);
+
+ /*
+ * If we are in vmalloc space we need to cover the shadow area with
+ * real memory. If we come here through VM_ALLOC, this is done
+ * by a higher level function that has access to the true size,
+ * which might not be a full page.
+ */
+ if (is_vmalloc_addr(vm->addr) && !(vm->flags & VM_ALLOC))
+ kasan_cover_vmalloc(vm->size, vm);
}

static void clear_vm_uninitialized_flag(struct vm_struct *vm)
@@ -2483,6 +2492,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!addr)
return NULL;

+ kasan_cover_vmalloc(real_size, area);
+
/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
* flag. It means that vm_struct is not fully initialized.
@@ -3324,9 +3335,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
spin_unlock(&vmap_area_lock);

/* insert all vm's */
- for (area = 0; area < nr_vms; area++)
+ for (area = 0; area < nr_vms; area++) {
setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
pcpu_get_vm_areas);
+ kasan_cover_vmalloc(sizes[area], vms[area]);
+ }

kfree(vas);
return vms;
--
2.20.1

Daniel Axtens

Jul 25, 2019, 1:55:25 AM
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, dvy...@google.com, Daniel Axtens
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:

- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN

Signed-off-by: Daniel Axtens <d...@axtens.net>
---
arch/Kconfig | 9 +++++----
kernel/fork.c | 4 ++++
2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index a7b57dd42c26..e791196005e1 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -825,16 +825,17 @@ config HAVE_ARCH_VMAP_STACK
config VMAP_STACK
default y
bool "Use a virtually-mapped stack"
- depends on HAVE_ARCH_VMAP_STACK && !KASAN
+ depends on HAVE_ARCH_VMAP_STACK
+ depends on !KASAN || KASAN_VMALLOC
---help---
Enable this if you want the use virtually-mapped kernel stacks
with guard pages. This causes kernel stack overflows to be
caught immediately rather than causing difficult-to-diagnose
corruption.

- This is presently incompatible with KASAN because KASAN expects
- the stack to map directly to the KASAN shadow map using a formula
- that is incorrect if the stack is in vmalloc space.
+ To use this with KASAN, the architecture must support backing
+ virtual mappings with real shadow memory, and KASAN_VMALLOC must
+ be enabled.

config ARCH_OPTIONAL_KERNEL_RWX
def_bool n
diff --git a/kernel/fork.c b/kernel/fork.c
index d8ae0f1b4148..ce3150fe8ff2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -94,6 +94,7 @@
#include <linux/livepatch.h>
#include <linux/thread_info.h>
#include <linux/stackleak.h>
+#include <linux/kasan.h>

#include <asm/pgtable.h>
#include <asm/pgalloc.h>
@@ -215,6 +216,9 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
if (!s)
continue;

+ /* Clear the KASAN shadow of the stack. */
+ kasan_unpoison_shadow(s->addr, THREAD_SIZE);
+
/* Clear stale pointers from reused stack. */
memset(s->addr, 0, THREAD_SIZE);

--
2.20.1

Daniel Axtens

Jul 25, 2019, 1:55:31 AM
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, dvy...@google.com, Daniel Axtens
In the case where KASAN directly allocates memory to back vmalloc
space, don't map the early shadow page over it.

Not mapping the early shadow page over the whole shadow space means
that there are some pgds that are not populated on boot. Allow the
vmalloc fault handler to also fault in vmalloc shadow as needed.

Signed-off-by: Daniel Axtens <d...@axtens.net>
---
arch/x86/Kconfig | 1 +
arch/x86/mm/fault.c | 13 +++++++++++++
arch/x86/mm/kasan_init_64.c | 10 ++++++++++
3 files changed, 24 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 222855cc0158..40562cc3771f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -134,6 +134,7 @@ config X86
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if X86_64
+ select HAVE_ARCH_KASAN_VMALLOC if X86_64
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS if MMU
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 6c46095cd0d9..d722230121c3 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -340,8 +340,21 @@ static noinline int vmalloc_fault(unsigned long address)
pte_t *pte;

/* Make sure we are in vmalloc area: */
+#ifndef CONFIG_KASAN_VMALLOC
if (!(address >= VMALLOC_START && address < VMALLOC_END))
return -1;
+#else
+ /*
+ * Some of the shadow mapping for the vmalloc area lives outside the
+ * pgds populated by kasan init. They are created dynamically and so
+ * we may need to fault them in.
+ *
+ * You can observe this with test_vmalloc's align_shift_alloc_test
+ */
+ if (!((address >= VMALLOC_START && address < VMALLOC_END) ||
+ (address >= KASAN_SHADOW_START && address < KASAN_SHADOW_END)))
+ return -1;
+#endif

/*
* Copy kernel mappings over when needed. This can also
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 296da58f3013..e2fe1c1b805c 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -352,9 +352,19 @@ void __init kasan_init(void)
shadow_cpu_entry_end = (void *)round_up(
(unsigned long)shadow_cpu_entry_end, PAGE_SIZE);

+ /*
+ * If we're in full vmalloc mode, don't back vmalloc space with early
+ * shadow pages.
+ */
+#ifdef CONFIG_KASAN_VMALLOC
+ kasan_populate_early_shadow(
+ kasan_mem_to_shadow((void *)VMALLOC_END+1),
+ shadow_cpu_entry_begin);
+#else
kasan_populate_early_shadow(
kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
shadow_cpu_entry_begin);
+#endif

kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
(unsigned long)shadow_cpu_entry_end, 0);
--
2.20.1

Dmitry Vyukov

Jul 25, 2019, 3:35:54 AM
to Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski
On Thu, Jul 25, 2019 at 7:55 AM Daniel Axtens <d...@axtens.net> wrote:
>
> Hook into vmalloc and vmap, and dynamically allocate real shadow
> memory to back the mappings.
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
>
> Instead, share backing space across multiple mappings. Allocate
> a backing page the first time a mapping in vmalloc space uses a
> particular page of the shadow region. Keep this page around
> regardless of whether the mapping is later freed - in the mean time
> the page could have become shared by another vmalloc mapping.
>
> This can in theory lead to unbounded memory growth, but the vmalloc
> allocator is pretty good at reusing addresses, so the practical memory
> usage grows at first but then stays fairly stable.
>
> This requires architecture support to actually use: arches must stop
> mapping the read-only zero page over portion of the shadow region that
> covers the vmalloc space and instead leave it unmapped.
>
> This allows KASAN with VMAP_STACK, and will be needed for architectures
> that do not have a separate module space (e.g. powerpc64, which I am
> currently working on).
>
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
> Signed-off-by: Daniel Axtens <d...@axtens.net>

Hi Daniel,

This is awesome! Thanks so much for taking over this!
I agree with the memory/simplicity tradeoff. Provided that virtual
addresses are reused, this should be fine (I hope). If we ever need to
optimize memory consumption, I would even consider something like
aligning all vmalloc allocations to PAGE_SIZE*KASAN_SHADOW_SCALE to
make things simpler.

Some comments below.

Page table allocations will be protected by mm->page_table_lock, right?


> + pudp = pud_alloc(&init_mm, p4dp, addr);
> + pmdp = pmd_alloc(&init_mm, pudp, addr);
> + ptep = pte_alloc_kernel(pmdp, addr);
> +
> + /*
> + * we can validly get here if pte is not none: it means we
> + * allocated this page earlier to use part of it for another
> + * allocation
> + */
> + if (pte_none(*ptep)) {
> + backing = __get_free_page(GFP_KERNEL);
> + backing_pte = pfn_pte(PFN_DOWN(__pa(backing)),
> + PAGE_KERNEL);
> + set_pte_at(&init_mm, addr, ptep, backing_pte);
> + }
> + } while (addr += PAGE_SIZE, addr != shadow_alloc_end);
> +
> + requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
> + kasan_unpoison_shadow(area->addr, requested_size);
> + kasan_poison_shadow(area->addr + requested_size,
> + area->size - requested_size,
> + KASAN_VMALLOC_INVALID);


Do I read this correctly that if kernel code does vmalloc(64), it
will have exactly 64 bytes available rather than a full page? To make
sure: vmalloc does not guarantee that the available size is rounded up
to page size? I suspect we will see a wave of new bugs related to
OOBs on vmalloc memory, so I want to make sure that these will indeed
be bugs that we agree need to be fixed.

I am sure there will be bugs where the size is controlled by
user-space, so those are bad bugs under any circumstances. But there
will also probably be OOBs where people will try to "prove" that
they are fine and will work (just based on our previous experiences :)).
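
To make the class of access concrete, here is a hypothetical test in the
style of the lib/test_kasan.c hunk above (the name and the sizes are made
up). If I read the patch correctly, the read below is now reported as
vmalloc-out-of-bounds even though it stays inside the page that vmalloc
mapped:

static noinline void __init vmalloc_small_oob(void)
{
	char *p = vmalloc(64);

	if (!p)
		return;

	/* Past the 64 requested bytes, but well before the guard page:
	 * previously silent, now caught. */
	((volatile char *)p)[100];
	vfree(p);
}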

On the implementation side: kasan_unpoison_shadow seems to be capable
of handling non-KASAN_SHADOW_SCALE_SIZE-aligned sizes exactly in the
way we want. So I think it's better to do:

kasan_unpoison_shadow(area->addr, requested_size);
requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
kasan_poison_shadow(area->addr + requested_size,
		    area->size - requested_size,
		    KASAN_VMALLOC_INVALID);

Dmitry Vyukov

Jul 25, 2019, 3:38:07 AM
to Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski
On Thu, Jul 25, 2019 at 7:55 AM Daniel Axtens <d...@axtens.net> wrote:
>
> Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:
>
> - clear the shadow region of vmapped stacks when swapping them in
> - tweak Kconfig to allow VMAP_STACK to be turned on with KASAN
>
> Signed-off-by: Daniel Axtens <d...@axtens.net>

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

Dmitry Vyukov

Jul 25, 2019, 3:49:18 AM
to Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski
On Thu, Jul 25, 2019 at 7:55 AM Daniel Axtens <d...@axtens.net> wrote:
>
> In the case where KASAN directly allocates memory to back vmalloc
> space, don't map the early shadow page over it.
>
> Not mapping the early shadow page over the whole shadow space means
> that there are some pgds that are not populated on boot. Allow the
> vmalloc fault handler to also fault in vmalloc shadow as needed.
>
> Signed-off-by: Daniel Axtens <d...@axtens.net>


Would it make things simpler if we pre-populate the top level page
tables for the whole vmalloc region? That would be
(16<<40)/4096/512/512*8 = 131072 bytes?
The check in vmalloc_fault is not really a big burden, so I am not
sure. Just bringing it up as an option.
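
(Just restating that arithmetic: 16<<40 bytes of vmalloc address space
is 2^32 4K pages; dividing by 512 twice gives 2^14 = 16384 page-table
entries to pre-populate, and at 8 bytes each that is 131072 bytes,
i.e. 128K.)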

Acked-by: Dmitry Vyukov <dvy...@google.com>

Dmitry Vyukov

Jul 25, 2019, 3:51:18 AM
to Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski, Marco Elver, Mark Rutland
Marco, please test this with your stack overflow test and with
syzkaller (to estimate the number of new OOBs :)). Also, are there any
concerns with performance/memory consumption for us?

Marco Elver

Jul 25, 2019, 6:07:00 AM
to Dmitry Vyukov, Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski, Mark Rutland
It appears that stack overflows are *not* detected when KASAN_VMALLOC
and VMAP_STACK are enabled.

Tested with:
insmod drivers/misc/lkdtm/lkdtm.ko cpoint_name=DIRECT cpoint_type=EXHAUST_STACK

I've also attached the .config. Anything I missed?

Thanks,
-- Marco

Mark Rutland

Jul 25, 2019, 6:11:20 AM
to Marco Elver, Dmitry Vyukov, Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski
Could you elaborate on what exactly happens?

i.e. does the test fail entirely, or is it detected as a fault (but not
reported as a stack overflow)?

If you could post a log, that would be ideal!

Thanks,
Mark.

Marco Elver

Jul 25, 2019, 7:38:56 AM
to Mark Rutland, Dmitry Vyukov, Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski
No fault, system just appears to freeze.

Log:

[ 18.408553] lkdtm: Calling function with 1024 frame size to depth 64 ...
[ 18.409546] lkdtm: loop 64/64 ...
[ 18.410030] lkdtm: loop 63/64 ...
[ 18.410497] lkdtm: loop 62/64 ...
[ 18.410972] lkdtm: loop 61/64 ...
[ 18.411470] lkdtm: loop 60/64 ...
[ 18.411946] lkdtm: loop 59/64 ...
[ 18.412415] lkdtm: loop 58/64 ...
[ 18.412890] lkdtm: loop 57/64 ...
[ 18.413356] lkdtm: loop 56/64 ...
[ 18.413830] lkdtm: loop 55/64 ...
[ 18.414297] lkdtm: loop 54/64 ...
[ 18.414801] lkdtm: loop 53/64 ...
[ 18.415269] lkdtm: loop 52/64 ...
[ 18.415751] lkdtm: loop 51/64 ...
[ 18.416219] lkdtm: loop 50/64 ...
[ 18.416698] lkdtm: loop 49/64 ...
[ 18.417201] lkdtm: loop 48/64 ...
[ 18.417712] lkdtm: loop 47/64 ...
[ 18.418216] lkdtm: loop 46/64 ...
[ 18.418728] lkdtm: loop 45/64 ...
[ 18.419232] lkdtm: loop 44/64 ...
[ 18.419747] lkdtm: loop 43/64 ...
[ 18.420262] lkdtm: loop 42/64 ...
< no further output, system appears unresponsive at this point >

Thanks,
-- Marco

Andy Lutomirski

Jul 25, 2019, 11:08:23 AM
to Dmitry Vyukov, Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski


> On Jul 25, 2019, at 12:49 AM, Dmitry Vyukov <dvy...@google.com> wrote:
>
>> On Thu, Jul 25, 2019 at 7:55 AM Daniel Axtens <d...@axtens.net> wrote:
>>
>> In the case where KASAN directly allocates memory to back vmalloc
>> space, don't map the early shadow page over it.
>>
>> Not mapping the early shadow page over the whole shadow space means
>> that there are some pgds that are not populated on boot. Allow the
>> vmalloc fault handler to also fault in vmalloc shadow as needed.
>>
>> Signed-off-by: Daniel Axtens <d...@axtens.net>
>
>
> Would it make things simpler if we pre-populate the top level page
> tables for the whole vmalloc region? That would be
> (16<<40)/4096/512/512*8 = 131072 bytes?
> The check in vmalloc_fault in not really a big burden, so I am not
> sure. Just brining as an option.

I prefer pre-populating them. In particular, I have already spent far too much time debugging the awful explosions when the stack doesn’t have KASAN backing, and the vmap stack code is very careful to pre-populate the stack pgds — vmalloc_fault fundamentally can’t recover when the stack itself isn’t mapped.

So the vmalloc_fault code, if it stays, needs some careful analysis to make sure it will actually survive all the various context switch cases. Or you can pre-populate it.

Daniel Axtens

Jul 25, 2019, 11:25:09 AM
to Marco Elver, Dmitry Vyukov, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski, Mark Rutland
Hi Marco,

> It appears that stack overflows are *not* detected when KASAN_VMALLOC
> and VMAP_STACK are enabled.
>
> Tested with:
> insmod drivers/misc/lkdtm/lkdtm.ko cpoint_name=DIRECT cpoint_type=EXHAUST_STACK
>
> I've also attached the .config. Anything I missed?
>

Fascinating - it seems to work on my config, a lightly modified
defconfig (attached):

[ 111.287854] lkdtm: loop 46/64 ...
[ 111.287856] lkdtm: loop 45/64 ...
[ 111.287859] lkdtm: loop 44/64 ...
[ 111.287862] lkdtm: loop 43/64 ...
[ 111.287864] lkdtm: loop 42/64 ...
[ 111.287867] lkdtm: loop 41/64 ...
[ 111.287869] lkdtm: loop 40/64 ...
[ 111.288498] BUG: stack guard page was hit at 000000007bf6ef1a (stack is 000000005952e5cc..00000000ba40316c)
[ 111.288499] kernel stack overflow (double-fault): 0000 [#1] SMP KASAN PTI
[ 111.288500] CPU: 0 PID: 767 Comm: modprobe Not tainted 5.3.0-rc1-next-20190723+ #91
[ 111.288501] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
[ 111.288501] RIP: 0010:__lock_acquire+0x43/0x3b50
[ 111.288503] Code: 84 24 90 00 00 00 48 c7 84 24 90 00 00 00 b3 8a b5 41 48 8b 9c 24 28 01 00 00 48 c7 84 24 98 00 00 00 f8
5a a9 84 48 c1 e8 03 <48> 89 44 24 18 48 89 c7 48 b8 00 00 00 00 00 fc ff df 48 c7 84 24
[ 111.288504] RSP: 0018:ffffc90000a37fd8 EFLAGS: 00010802
[ 111.288505] RAX: 1ffff9200014700d RBX: 0000000000000000 RCX: 0000000000000000
[ 111.288506] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff84cf3ff8
[ 111.288507] RBP: ffffffff84cf3ff8 R08: 0000000000000001 R09: 0000000000000001
[ 111.288507] R10: fffffbfff0a440cf R11: ffffffff8522067f R12: 0000000000000000
[ 111.288508] R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000000
[ 111.288509] FS: 00007f97f1f23740(0000) GS:ffff88806c400000(0000) knlGS:0000000000000000
[ 111.288510] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 111.288510] CR2: ffffc90000a37fc8 CR3: 000000006a0fc005 CR4: 0000000000360ef0
[ 111.288511] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 111.288512] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 111.288512] Call Trace:
[ 111.288513] lock_acquire+0x125/0x300
[ 111.288513] ? vprintk_emit+0x6c/0x250
[ 111.288514] _raw_spin_lock+0x20/0x30

I will test with your config and see if I can narrow it down tomorrow.

Regards,
Daniel


Daniel Axtens

Jul 25, 2019, 11:39:44 AM
to Andy Lutomirski, Dmitry Vyukov, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski

>> Would it make things simpler if we pre-populate the top level page
>> tables for the whole vmalloc region? That would be
>> (16<<40)/4096/512/512*8 = 131072 bytes?
>> The check in vmalloc_fault in not really a big burden, so I am not
>> sure. Just brining as an option.
>
> I prefer pre-populating them. In particular, I have already spent far too much time debugging the awful explosions when the stack doesn’t have KASAN backing, and the vmap stack code is very careful to pre-populate the stack pgds — vmalloc_fault fundamentally can’t recover when the stack itself isn’t mapped.
>
> So the vmalloc_fault code, if it stays, needs some careful analysis to make sure it will actually survive all the various context switch cases. Or you can pre-populate it.
>

No worries - I'll have another crack at prepopulating them for v2.

I tried prepopulating them at first, but because I'm really a powerpc
developer rather than an x86 developer (and because I find mm code
confusing at the best of times) I didn't have a lot of luck. I think on
reflection I stuffed up the pgd/p4d stuff and I think I know how to fix
it. So I'll give it another go and ask for help here if I get stuck :)

Regards,
Daniel

Andy Lutomirski

Jul 25, 2019, 12:32:47 PM
to Daniel Axtens, Dmitry Vyukov, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski
On Thu, Jul 25, 2019 at 8:39 AM Daniel Axtens <d...@axtens.net> wrote:
>
>
> >> Would it make things simpler if we pre-populate the top level page
> >> tables for the whole vmalloc region? That would be
> >> (16<<40)/4096/512/512*8 = 131072 bytes?
> >> The check in vmalloc_fault in not really a big burden, so I am not
> >> sure. Just brining as an option.
> >
> > I prefer pre-populating them. In particular, I have already spent far too much time debugging the awful explosions when the stack doesn’t have KASAN backing, and the vmap stack code is very careful to pre-populate the stack pgds — vmalloc_fault fundamentally can’t recover when the stack itself isn’t mapped.
> >
> > So the vmalloc_fault code, if it stays, needs some careful analysis to make sure it will actually survive all the various context switch cases. Or you can pre-populate it.
> >
>
> No worries - I'll have another crack at prepopulating them for v2.
>
> I tried prepopulating them at first, but because I'm really a powerpc
> developer rather than an x86 developer (and because I find mm code
> confusing at the best of times) I didn't have a lot of luck. I think on
> reflection I stuffed up the pgd/p4d stuff and I think I know how to fix
> it. So I'll give it another go and ask for help here if I get stuck :)
>

I looked at this a bit more, and I think the vmalloc_fault approach is
fine with one tweak. In prepare_switch_to(), you'll want to add
something like:

kasan_probe_shadow(next->thread.sp);

where kasan_probe_shadow() is a new function that, depending on kernel
config, either does nothing or reads the shadow associated with the
passed-in address. Also, if you take this approach, I think you
should refactor vmalloc_fault() to push the address check to a new
helper:

static bool is_vmalloc_fault_addr(unsigned long addr)
{
if (addr >= VMALLOC_START && addr < VMALLOC_END)
return true;

#ifdef CONFIG_WHATEVER
if (addr >= whatever && etc)
return true;
#endif

return false;
}

and call that from vmalloc_fault() rather than duplicating the logic.
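
The check at the top of vmalloc_fault() would then be just (sketch,
using the helper above):

	/* Make sure we are in an area the fault handler covers: */
	if (!is_vmalloc_fault_addr(address))
		return -1;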

Also, thanks for doing this series!

Daniel Axtens

Jul 26, 2019, 1:12:02 AM
to Marco Elver, Dmitry Vyukov, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski, Mark Rutland
>> It appears that stack overflows are *not* detected when KASAN_VMALLOC
>> and VMAP_STACK are enabled.
>>
>> Tested with:
>> insmod drivers/misc/lkdtm/lkdtm.ko cpoint_name=DIRECT cpoint_type=EXHAUST_STACK
>>
>> I've also attached the .config. Anything I missed?
>>

So this is a pretty fun bug.

From qemu it seems that CPU#0 is stuck in
queued_spin_lock_slowpath. Some registers contain the address of
logbuf_lock. Looking at a stack in crash, we're printing:

crash> bt -S 0xffffc90000530000 695
PID: 695 TASK: ffff888069933b00 CPU: 0 COMMAND: "modprobe"
#0 [ffffc90000530000] __schedule at ffffffff834832e5
#1 [ffffc900005300d0] vscnprintf at ffffffff83464398
#2 [ffffc900005300f8] vprintk_store at ffffffff8123d9f0
#3 [ffffc90000530160] vprintk_emit at ffffffff8123e2f9
#4 [ffffc900005301b0] vprintk_func at ffffffff8123ff06
#5 [ffffc900005301c8] printk at ffffffff8123efb0
#6 [ffffc90000530278] recursive_loop at ffffffffc0459939 [lkdtm]
#7 [ffffc90000530708] recursive_loop at ffffffffc045994a [lkdtm]
#8 [ffffc90000530b98] recursive_loop at ffffffffc045994a [lkdtm]
...

We seem to be deadlocking on logbuf_lock because we take the stack
overflow inside printk after it takes the lock, as recursive_loop
attempts to print its status. Then we try to printk() some information
about the double-fault, which tries to take the lock again, and blam,
we're deadlocked.

I didn't see it in my build because I happen to just access the stack
differently with lock debugging on - we happen to overflow the stack
while not holding the lock.

So I think this is a generic bug, not related to KASAN_VMALLOC. IIUC,
it's not safe to kill stack-overflowing tasks with die() because they
could be holding arbitrary locks. Instead we should panic() the box.
(panic prints without taking locks.)

The following patch works for me, does it fix things for you?

-----------------------------------------------------

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 4bb0f8447112..bfb0ec667c09 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -301,13 +301,14 @@ __visible void __noreturn handle_stack_overflow(const char *message,
struct pt_regs *regs,
unsigned long fault_address)
{
- printk(KERN_EMERG "BUG: stack guard page was hit at %p (stack is %p..%p)\n",
- (void *)fault_address, current->stack,
- (char *)current->stack + THREAD_SIZE - 1);
- die(message, regs, 0);
+ /*
+ * It's not safe to kill the task, as it's in kernel space and
+ * might be holding important locks. Just panic.
+ */

- /* Be absolutely certain we don't return. */
- panic("%s", message);
+ panic("%s - stack guard page was hit at %p (stack is %p..%p)",
+ message, (void *)fault_address, current->stack,
+ (char *)current->stack + THREAD_SIZE - 1);
}


-----------------------------------------------------


Regards,
Daniel

Marco Elver

Jul 26, 2019, 5:55:15 AM
to Daniel Axtens, Dmitry Vyukov, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski, Mark Rutland
Many thanks for debugging this! Indeed, this seems to fix things for me.

Best Wishes,
-- Marco

Marco Elver

Jul 26, 2019, 6:32:12 AM
to Dmitry Vyukov, Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski, Mark Rutland
On Thu, 25 Jul 2019 at 09:51, Dmitry Vyukov <dvy...@google.com> wrote:
>
FYI: I have been running Syzkaller for a few hours; performance is
fine, no RCU timeouts. AFAIK no new bugs (yet).

Daniel Axtens

Jul 29, 2019, 6:15:26 AM
to Dmitry Vyukov, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski
Hi Dmitry,

Thanks for the feedback!

>> + addr = shadow_alloc_start;
>> + do {
>> + pgdp = pgd_offset_k(addr);
>> + p4dp = p4d_alloc(&init_mm, pgdp, addr);
>
> Page table allocations will be protected by mm->page_table_lock, right?

Yes, each of those alloc functions takes the lock if it ends up in the
slow path that does the actual allocation (e.g. __p4d_alloc()).

As for whether vmalloc rounds the size up: the implementation of
vmalloc will always round it up to full pages. The description of the
function reads, in part:

* Allocate enough pages to cover @size from the page level
* allocator and map them into contiguous kernel virtual space.

So in short it's not quite clear - you could argue that you have a
guarantee that you get full pages, but you could also argue that you've
specifically asked for @size bytes and @size bytes only.

So far it seems that users are well behaved in terms of using the amount
of memory they ask for, but you'll get a better idea than me very
quickly as I only tested with trinity. :)

I also handle vmap - for vmap there's no way to specify sub-page
allocations so you get as many pages as you ask for.

> On impl side: kasan_unpoison_shadow seems to be capable of handling
> non-KASAN_SHADOW_SCALE_SIZE-aligned sizes exactly in the way we want.
> So I think it's better to do:
>
> kasan_unpoison_shadow(area->addr, requested_size);
> requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
> kasan_poison_shadow(area->addr + requested_size,
> area->size - requested_size,
> KASAN_VMALLOC_INVALID);

Will do for v2.

Regards,
Daniel

Dmitry Vyukov

Jul 29, 2019, 6:28:39 AM
to Daniel Axtens, kasan-dev, Linux-MM, the arch/x86 maintainers, Andrey Ryabinin, Alexander Potapenko, Andy Lutomirski
Ack.
Let's try it and see, then. There is always an easy fix -- round the
size up explicitly before vmalloc, which will make the code more
explicit and clear. I can hardly see any potential downside to
rounding the size up explicitly.

> I also handle vmap - for vmap there's no way to specify sub-page
> allocations so you get as many pages as you ask for.
>
> > On impl side: kasan_unpoison_shadow seems to be capable of handling
> > non-KASAN_SHADOW_SCALE_SIZE-aligned sizes exactly in the way we want.
> > So I think it's better to do:
> >
> > kasan_unpoison_shadow(area->addr, requested_size);
> > requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
> > kasan_poison_shadow(area->addr + requested_size,
> > area->size - requested_size,
> > KASAN_VMALLOC_INVALID);
>
> Will do for v2.
>
> Regards,
> Daniel
>

Daniel Axtens

Jul 29, 2019, 10:21:17 AM
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, Daniel Axtens
Currently, vmalloc space is backed by the early shadow page. This
means that kasan is incompatible with VMAP_STACK, and it also provides
a hurdle for architectures that do not have a dedicated module space
(like powerpc64).

This series provides a mechanism to back vmalloc space with real,
dynamically allocated memory. I have only wired up x86, because that's
the only currently supported arch I can work with easily, but it's
very easy to wire up other architectures.

This has been discussed before in the context of VMAP_STACK:
- https://bugzilla.kernel.org/show_bug.cgi?id=202009
- https://lkml.org/lkml/2018/7/22/198
- https://lkml.org/lkml/2019/7/19/822

In terms of implementation details:

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate
a backing page the first time a mapping in vmalloc space uses a
particular page of the shadow region. Keep this page around
regardless of whether the mapping is later freed - in the mean time
the page could have become shared by another vmalloc mapping.

This can in theory lead to unbounded memory growth, but the vmalloc
allocator is pretty good at reusing addresses, so the practical memory
usage appears to grow at first but then stay fairly stable.

If we run into practical memory exhaustion issues, I'm happy to
consider hooking into the book-keeping that vmap does, but I am not
convinced that it will be an issue.

v1: https://lore.kernel.org/linux-mm/2019072505550...@axtens.net/T/
v2: address review comments:
- Patch 1: use kasan_unpoison_shadow's built-in handling of
ranges that do not align to a full shadow byte
- Patch 3: prepopulate pgds rather than faulting things in

Daniel Axtens (3):
kasan: support backing vmalloc space with real shadow memory
fork: support VMAP_STACK with KASAN_VMALLOC
x86/kasan: support KASAN_VMALLOC

Documentation/dev-tools/kasan.rst | 60 ++++++++++++++++++++++++++++++
arch/Kconfig | 9 +++--
arch/x86/Kconfig | 1 +
arch/x86/mm/kasan_init_64.c | 61 +++++++++++++++++++++++++++++++
include/linux/kasan.h | 16 ++++++++
kernel/fork.c | 4 ++
lib/Kconfig.kasan | 16 ++++++++
lib/test_kasan.c | 26 +++++++++++++
mm/kasan/common.c | 51 ++++++++++++++++++++++++++
mm/kasan/generic_report.c | 3 ++
mm/kasan/kasan.h | 1 +
mm/vmalloc.c | 15 +++++++-
12 files changed, 258 insertions(+), 5 deletions(-)

--
2.20.1

Daniel Axtens

Jul 29, 2019, 10:21:22 AM
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, Daniel Axtens
Hook into vmalloc and vmap, and dynamically allocate real shadow
memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate
a backing page the first time a mapping in vmalloc space uses a
particular page of the shadow region. Keep this page around
regardless of whether the mapping is later freed - in the mean time
the page could have become shared by another vmalloc mapping.

This can in theory lead to unbounded memory growth, but the vmalloc
allocator is pretty good at reusing addresses, so the practical memory
usage grows at first but then stays fairly stable.

This requires architecture support to actually use: arches must stop
mapping the read-only zero page over the portion of the shadow region
that covers the vmalloc space and instead leave it unmapped.

This allows KASAN with VMAP_STACK, and will be needed for architectures
that do not have a separate module space (e.g. powerpc64, which I am
currently working on).

Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Signed-off-by: Daniel Axtens <d...@axtens.net>

---

v2: let kasan_unpoison_shadow deal with ranges that do not use a
full shadow byte.
---
Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++++++
include/linux/kasan.h | 16 +++++++++
lib/Kconfig.kasan | 16 +++++++++
lib/test_kasan.c | 26 ++++++++++++++
mm/kasan/common.c | 51 ++++++++++++++++++++++++++
mm/kasan/generic_report.c | 3 ++
mm/kasan/kasan.h | 1 +
mm/vmalloc.c | 15 +++++++-
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2277b82902d8..15d8f4ad581b 100644
+ addr = shadow_alloc_start;
+ do {
+ pgdp = pgd_offset_k(addr);
+ p4dp = p4d_alloc(&init_mm, pgdp, addr);
+ pudp = pud_alloc(&init_mm, p4dp, addr);
+ pmdp = pmd_alloc(&init_mm, pudp, addr);
+ ptep = pte_alloc_kernel(pmdp, addr);
+
+ /*
+ * we can validly get here if pte is not none: it means we
+ * allocated this page earlier to use part of it for another
+ * allocation
+ */
+ if (pte_none(*ptep)) {
+ backing = __get_free_page(GFP_KERNEL);
+ backing_pte = pfn_pte(PFN_DOWN(__pa(backing)),
+ PAGE_KERNEL);
+ set_pte_at(&init_mm, addr, ptep, backing_pte);
+ }
+ } while (addr += PAGE_SIZE, addr != shadow_alloc_end);
+
+ kasan_unpoison_shadow(area->addr, requested_size);
+ requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
+ kasan_poison_shadow(area->addr + requested_size,
+ area->size - requested_size,
+ KASAN_VMALLOC_INVALID);

Daniel Axtens

Jul 29, 2019, 10:21:25 AM
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, Daniel Axtens
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:

- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN

Reviewed-by: Dmitry Vyukov <dvy...@google.com>
Signed-off-by: Daniel Axtens <d...@axtens.net>
---

Daniel Axtens

Jul 29, 2019, 10:21:31 AM
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, Daniel Axtens
In the case where KASAN directly allocates memory to back vmalloc
space, don't map the early shadow page over it.

We prepopulate pgds/p4ds for the range that would otherwise be empty.
This is required to get it synced to hardware on boot, allowing the
lower levels of the page tables to be filled dynamically.

Acked-by: Dmitry Vyukov <dvy...@google.com>
Signed-off-by: Daniel Axtens <d...@axtens.net>

---

v2: move from faulting in shadow pgds to prepopulating
---
arch/x86/Kconfig | 1 +
arch/x86/mm/kasan_init_64.c | 61 +++++++++++++++++++++++++++++++++++++
2 files changed, 62 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 222855cc0158..40562cc3771f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -134,6 +134,7 @@ config X86
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if X86_64
+ select HAVE_ARCH_KASAN_VMALLOC if X86_64
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS if MMU
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 296da58f3013..2f57c4ddff61 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -245,6 +245,52 @@ static void __init kasan_map_early_shadow(pgd_t *pgd)
} while (pgd++, addr = next, addr != end);
}

+static void __init kasan_shallow_populate_p4ds(pgd_t *pgd,
+ unsigned long addr,
+ unsigned long end,
+ int nid)
+{
+ p4d_t *p4d;
+ unsigned long next;
+ void *p;
+
+ p4d = p4d_offset(pgd, addr);
+ do {
+ next = p4d_addr_end(addr, end);
+
+ if (p4d_none(*p4d)) {
+ p = early_alloc(PAGE_SIZE, nid, true);
+ p4d_populate(&init_mm, p4d, p);
+ }
+ } while (p4d++, addr = next, addr != end);
+}
+
+static void __init kasan_shallow_populate_pgds(void *start, void *end)
+{
+ unsigned long addr, next;
+ pgd_t *pgd;
+ void *p;
+ int nid = early_pfn_to_nid((unsigned long)start);
+
+ addr = (unsigned long)start;
+ pgd = pgd_offset_k(addr);
+ do {
+ next = pgd_addr_end(addr, (unsigned long)end);
+
+ if (pgd_none(*pgd)) {
+ p = early_alloc(PAGE_SIZE, nid, true);
+ pgd_populate(&init_mm, pgd, p);
+ }
+
+ /*
+ * we need to populate p4ds to be synced when running in
+ * four level mode - see sync_global_pgds_l4()
+ */
+ kasan_shallow_populate_p4ds(pgd, addr, next, nid);
+ } while (pgd++, addr = next, addr != (unsigned long)end);
+}
+
+
#ifdef CONFIG_KASAN_INLINE
static int kasan_die_handler(struct notifier_block *self,
unsigned long val,
@@ -352,9 +398,24 @@ void __init kasan_init(void)
shadow_cpu_entry_end = (void *)round_up(
(unsigned long)shadow_cpu_entry_end, PAGE_SIZE);

+ /*
+ * If we're in full vmalloc mode, don't back vmalloc space with early
+ * shadow pages. Instead, prepopulate pgds/p4ds so they are synced to
+ * the global table and we can populate the lower levels on demand.
+ */
+#ifdef CONFIG_KASAN_VMALLOC
+ kasan_shallow_populate_pgds(
+ kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
+ kasan_mem_to_shadow((void *)VMALLOC_END));
+
+ kasan_populate_early_shadow(
+ kasan_mem_to_shadow((void *)VMALLOC_END + 1),

Mark Rutland

Jul 29, 2019, 11:44:36 AM
to Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com
Hi Daniel,

On Tue, Jul 30, 2019 at 12:21:06AM +1000, Daniel Axtens wrote:
> Hook into vmalloc and vmap, and dynamically allocate real shadow
> memory to back the mappings.
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
>
> Instead, share backing space across multiple mappings. Allocate
> a backing page the first time a mapping in vmalloc space uses a
> particular page of the shadow region. Keep this page around
> regardless of whether the mapping is later freed - in the mean time
> the page could have become shared by another vmalloc mapping.
>
> This can in theory lead to unbounded memory growth, but the vmalloc
> allocator is pretty good at reusing addresses, so the practical memory
> usage grows at first but then stays fairly stable.
>
> This requires architecture support to actually use: arches must stop
> mapping the read-only zero page over portion of the shadow region that
> covers the vmalloc space and instead leave it unmapped.
>
> This allows KASAN with VMAP_STACK, and will be needed for architectures
> that do not have a separate module space (e.g. powerpc64, which I am
> currently working on).
>
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
> Signed-off-by: Daniel Axtens <d...@axtens.net>

This generally looks good, but I have a few concerns below, mostly
related to concurrency.

[...]

> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 2277b82902d8..15d8f4ad581b 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -568,6 +568,7 @@ void kasan_kfree_large(void *ptr, unsigned long ip)
> /* The object will be poisoned by page_alloc. */
> }
>
> +#ifndef CONFIG_KASAN_VMALLOC
> int kasan_module_alloc(void *addr, size_t size)
> {
> void *ret;
> @@ -603,6 +604,7 @@ void kasan_free_shadow(const struct vm_struct *vm)
> if (vm->flags & VM_KASAN)
> vfree(kasan_mem_to_shadow(vm->addr));
> }
> +#endif

IIUC we can drop MODULE_ALIGN back to PAGE_SIZE in this case, too.

>
> extern void __kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip);
>
> @@ -722,3 +724,52 @@ static int __init kasan_memhotplug_init(void)
>
> core_initcall(kasan_memhotplug_init);
> #endif
> +
> +#ifdef CONFIG_KASAN_VMALLOC
> +void kasan_cover_vmalloc(unsigned long requested_size, struct vm_struct *area)

Nit: I think it would be more consistent to call this
kasan_populate_vmalloc().

> +{
> + unsigned long shadow_alloc_start, shadow_alloc_end;
> + unsigned long addr;
> + unsigned long backing;
> + pgd_t *pgdp;
> + p4d_t *p4dp;
> + pud_t *pudp;
> + pmd_t *pmdp;
> + pte_t *ptep;
> + pte_t backing_pte;

Nit: I think it would be preferable to use 'page' rather than 'backing',
and 'pte' rather than 'backing_pte', since there's no other namespace to
collide with here. Otherwise, using 'shadow' rather than 'backing' would
be consistent with the existing kasan code.

Does anything prevent two threads from racing to allocate the same
shadow page?

AFAICT it's possible for two threads to get down to the ptep, then both
see pte_none(*ptep), then both try to allocate the same page.

I suspect we have to take init_mm::page_table_lock when plumbing this
in, similarly to __pte_alloc().

> + } while (addr += PAGE_SIZE, addr != shadow_alloc_end);
> +
> + kasan_unpoison_shadow(area->addr, requested_size);
> + requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
> + kasan_poison_shadow(area->addr + requested_size,
> + area->size - requested_size,
> + KASAN_VMALLOC_INVALID);

IIUC, this could leave the final portion of an allocated page
unpoisoned.

I think it might make more sense to poison each page when it's
allocated, then plumb it into the page tables, then unpoison the object.

That way, we can rely on any shadow allocated by another thread having
been initialized to KASAN_VMALLOC_INVALID, and only need mutual
exclusion when allocating the shadow, rather than when poisoning
objects.
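
I.e. roughly (just sketching the ordering, not real code):

        page = __get_free_page(GFP_KERNEL);
        memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);   /* poison first */
        /* ... install the pte, with mutual exclusion against other populators ... */
        kasan_unpoison_shadow(area->addr, requested_size);        /* finally, the object */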

Thanks,
Mark.

Daniel Axtens

unread,
Jul 30, 2019, 4:38:53 AM7/30/19
to Mark Rutland, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com
Hi Mark,

Thanks for your email - I'm very new to mm stuff and the feedback is
very helpful.

>> +#ifndef CONFIG_KASAN_VMALLOC
>> int kasan_module_alloc(void *addr, size_t size)
>> {
>> void *ret;
>> @@ -603,6 +604,7 @@ void kasan_free_shadow(const struct vm_struct *vm)
>> if (vm->flags & VM_KASAN)
>> vfree(kasan_mem_to_shadow(vm->addr));
>> }
>> +#endif
>
> IIUC we can drop MODULE_ALIGN back to PAGE_SIZE in this case, too.

Yes, done.

>> core_initcall(kasan_memhotplug_init);
>> #endif
>> +
>> +#ifdef CONFIG_KASAN_VMALLOC
>> +void kasan_cover_vmalloc(unsigned long requested_size, struct vm_struct *area)
>
> Nit: I think it would be more consistent to call this
> kasan_populate_vmalloc().
>

Absolutely. I didn't love the name but just didn't 'click' that populate
would be a better verb.

>> +{
>> + unsigned long shadow_alloc_start, shadow_alloc_end;
>> + unsigned long addr;
>> + unsigned long backing;
>> + pgd_t *pgdp;
>> + p4d_t *p4dp;
>> + pud_t *pudp;
>> + pmd_t *pmdp;
>> + pte_t *ptep;
>> + pte_t backing_pte;
>
> Nit: I think it would be preferable to use 'page' rather than 'backing',
> and 'pte' rather than 'backing_pte', since there's no otehr namespace to
> collide with here. Otherwise, using 'shadow' rather than 'backing' would
> be consistent with the existing kasan code.

Not a problem, done.

> Does anything prevent two threads from racing to allocate the same
> shadow page?
>
> AFAICT it's possible for two threads to get down to the ptep, then both
> see pte_none(*ptep), then both try to allocate the same page.
>
> I suspect we have to take init_mm::page_table_lock when plumbing this
> in, similarly to __pte_alloc().

Good catch. I think you're right, I'll add the lock.

>> + } while (addr += PAGE_SIZE, addr != shadow_alloc_end);
>> +
>> + kasan_unpoison_shadow(area->addr, requested_size);
>> + requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
>> + kasan_poison_shadow(area->addr + requested_size,
>> + area->size - requested_size,
>> + KASAN_VMALLOC_INVALID);
>
> IIUC, this could leave the final portion of an allocated page
> unpoisoned.
>
> I think it might make more sense to poison each page when it's
> allocated, then plumb it into the page tables, then unpoison the object.
>
> That way, we can rely on any shadow allocated by another thread having
> been initialized to KASAN_VMALLOC_INVALID, and only need mutual
> exclusion when allocating the shadow, rather than when poisoning
> objects.

Yes, that makes sense, will do.

Thanks again,
Daniel

Daniel Axtens

unread,
Jul 31, 2019, 2:34:49 AM7/31/19
to Mark Rutland, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com
I've come a bit unstuck on this one. If a vmalloc address range is
reused, we can end up with the following sequence:

- p = vmalloc(PAGE_SIZE) allocates ffffc90000000000

- kasan_populate_vmalloc allocates a shadow page, fills it with
  KASAN_VMALLOC_INVALID, and then unpoisons
  PAGE_SIZE >> KASAN_SHADOW_SCALE_SHIFT bytes of shadow

- vfree(p)

- p = vmalloc(3000) also allocates ffffc90000000000 because of address
reuse in vmalloc.

- Now kasan_populate_vmalloc doesn't allocate a page, so does no
  poisoning.

- kasan_populate_vmalloc unpoisons 3000 >> KASAN_SHADOW_SCALE_SHIFT
  bytes of shadow, but the shadow for the remaining PAGE_SIZE - 3000
  bytes is still unpoisoned from the first allocation, so accesses that
  are out-of-bounds for the 3000 byte allocation are missed (see the
  worked numbers below).
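
In shadow-byte terms (my numbers, assuming KASAN_SHADOW_SCALE_SHIFT == 3
and 4k pages):

  vmalloc(PAGE_SIZE)       -> shadow bytes [0, 512) unpoisoned (4096 / 8)
  vfree(); vmalloc(3000)   -> only shadow bytes [0, 375) unpoisoned again
                              (3000 / 8)
  shadow bytes [375, 512)  -> still hold the stale "accessible" marks, so
                              e.g. a read at offset 3100 (shadow byte 387)
                              goes unreported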

So I think we do need to poison the shadow of the [requested_size,
area->size) region each time. However, I don't think we need mutual
exclusion to be able to do this safely. I think the safety is guaranteed
by vmalloc not giving the same page to multiple allocations. Because no
two threads are going to get overlapping vmalloc/vmap allocations, their
shadow ranges are not going to overlap, and so they're not going to
trample over each other.

I think it's probably still worth poisoning the pages on allocation:
for one thing, you are right that part of the shadow page will not be
poisoned otherwise, and secondly it means you might get a kasan splat
before you get a page-not-present fault if you access beyond an
allocation, at least if the shadow happens to fall helpfully within an
already-allocated page.

v3 to come soon.

Regards,
Daniel

Daniel Axtens

unread,
Jul 31, 2019, 3:15:57 AM7/31/19
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, Daniel Axtens
Currently, vmalloc space is backed by the early shadow page. This
means that kasan is incompatible with VMAP_STACK, and it also provides
a hurdle for architectures that do not have a dedicated module space
(like powerpc64).

This series provides a mechanism to back vmalloc space with real,
dynamically allocated memory. I have only wired up x86, because that's
the only currently supported arch I can work with easily, but it's
very easy to wire up other architectures.

This has been discussed before in the context of VMAP_STACK:
- https://bugzilla.kernel.org/show_bug.cgi?id=202009
- https://lkml.org/lkml/2018/7/22/198
- https://lkml.org/lkml/2019/7/19/822

In terms of implementation details:

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate
a backing page the first time a mapping in vmalloc space uses a
particular page of the shadow region. Keep this page around
regardless of whether the mapping is later freed - in the mean time
the page could have become shared by another vmalloc mapping.

This can in theory lead to unbounded memory growth, but the vmalloc
allocator is pretty good at reusing addresses, so the practical memory
usage appears to grow at first but then stay fairly stable.

If we run into practical memory exhaustion issues, I'm happy to
consider hooking into the book-keeping that vmap does, but I am not
convinced that it will be an issue.

v1: https://lore.kernel.org/linux-mm/2019072505550...@axtens.net/
v2: https://lore.kernel.org/linux-mm/2019072914210...@axtens.net/
Address review comments:
- Patch 1: use kasan_unpoison_shadow's built-in handling of
ranges that do not align to a full shadow byte
- Patch 3: prepopulate pgds rather than faulting things in
v3: Address comments from Mark Rutland:
- kasan_populate_vmalloc is a better name
- handle concurrency correctly
- various nits and cleanups
- relax module alignment in KASAN_VMALLOC case

Daniel Axtens (3):
kasan: support backing vmalloc space with real shadow memory
fork: support VMAP_STACK with KASAN_VMALLOC
x86/kasan: support KASAN_VMALLOC

Documentation/dev-tools/kasan.rst | 60 ++++++++++++++++++++++
arch/Kconfig | 9 ++--
arch/x86/Kconfig | 1 +
arch/x86/mm/kasan_init_64.c | 61 +++++++++++++++++++++++
include/linux/kasan.h | 16 ++++++
include/linux/moduleloader.h | 2 +-
kernel/fork.c | 4 ++
lib/Kconfig.kasan | 16 ++++++
lib/test_kasan.c | 26 ++++++++++
mm/kasan/common.c | 83 +++++++++++++++++++++++++++++++
mm/kasan/generic_report.c | 3 ++
mm/kasan/kasan.h | 1 +
mm/vmalloc.c | 15 +++++-
13 files changed, 291 insertions(+), 6 deletions(-)

--
2.20.1

Daniel Axtens

unread,
Jul 31, 2019, 3:16:01 AM7/31/19
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, Daniel Axtens
Hook into vmalloc and vmap, and dynamically allocate real shadow
memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate
a backing page the first time a mapping in vmalloc space uses a
particular page of the shadow region. Keep this page around
regardless of whether the mapping is later freed - in the mean time
the page could have become shared by another vmalloc mapping.

This can in theory lead to unbounded memory growth, but the vmalloc
allocator is pretty good at reusing addresses, so the practical memory
usage grows at first but then stays fairly stable.

This requires architecture support to actually use: arches must stop
mapping the read-only zero page over the portion of the shadow region that
covers the vmalloc space and instead leave it unmapped.

This allows KASAN with VMAP_STACK, and will be needed for architectures
that do not have a separate module space (e.g. powerpc64, which I am
currently working on). It also allows relaxing the module alignment
back to PAGE_SIZE.
---

v2: let kasan_unpoison_shadow deal with ranges that do not use a
full shadow byte.

v3: relax module alignment
rename to kasan_populate_vmalloc which is a much better name
deal with concurrency correctly
---
Documentation/dev-tools/kasan.rst | 60 ++++++++++++++++++++++
include/linux/kasan.h | 16 ++++++
include/linux/moduleloader.h | 2 +-
lib/Kconfig.kasan | 16 ++++++
lib/test_kasan.c | 26 ++++++++++
mm/kasan/common.c | 83 +++++++++++++++++++++++++++++++
mm/kasan/generic_report.c | 3 ++
mm/kasan/kasan.h | 1 +
mm/vmalloc.c | 15 +++++-
9 files changed, 220 insertions(+), 2 deletions(-)
index cc8a03cc9674..ec81113fcee4 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -70,8 +70,18 @@ struct kasan_cache {
int free_meta_offset;
};

+/*
+ * These functions provide a special case to support backing module
+ * allocations with real shadow memory. With KASAN vmalloc, the special
+ * case is unnecessary, as the work is handled in the generic case.
+ */
+#ifndef CONFIG_KASAN_VMALLOC
int kasan_module_alloc(void *addr, size_t size);
void kasan_free_shadow(const struct vm_struct *vm);
+#else
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+#endif

int kasan_add_zero_shadow(void *start, unsigned long size);
void kasan_remove_zero_shadow(void *start, unsigned long size);
@@ -194,4 +204,10 @@ static inline void *kasan_reset_tag(const void *addr)

#endif /* CONFIG_KASAN_SW_TAGS */

+#ifdef CONFIG_KASAN_VMALLOC
+void kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area);
+#else
+static inline void kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area) {}
+#endif
+
#endif /* LINUX_KASAN_H */
diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
index 5229c18025e9..ca92aea8a6bd 100644
--- a/include/linux/moduleloader.h
+++ b/include/linux/moduleloader.h
@@ -91,7 +91,7 @@ void module_arch_cleanup(struct module *mod);
/* Any cleanup before freeing mod->module_init */
void module_arch_freeing_init(struct module *mod);

-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC)
#include <linux/kasan.h>
#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
#else
+
+ /*
+ * We have to be careful not to hit the guard page.
+ * The MMU will catch that and crash us.
+ */
+ area = vmalloc(3000);
+ if (!area) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ((volatile char *)area)[3100];
+ vfree(area);
+}
+#else
+static void __init vmalloc_oob(void) {}
+#endif
+
static int __init kmalloc_tests_init(void)
{
/*
@@ -752,6 +777,7 @@ static int __init kmalloc_tests_init(void)
kasan_strings();
kasan_bitops();
kmalloc_double_kzfree();
+ vmalloc_oob();

kasan_restore_multi_shot(multishot);

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2277b82902d8..e1a748c3f3db 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -568,6 +568,7 @@ void kasan_kfree_large(void *ptr, unsigned long ip)
/* The object will be poisoned by page_alloc. */
}

+#ifndef CONFIG_KASAN_VMALLOC
int kasan_module_alloc(void *addr, size_t size)
{
void *ret;
@@ -603,6 +604,7 @@ void kasan_free_shadow(const struct vm_struct *vm)
if (vm->flags & VM_KASAN)
vfree(kasan_mem_to_shadow(vm->addr));
}
+#endif

extern void __kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip);

@@ -722,3 +724,84 @@ static int __init kasan_memhotplug_init(void)

core_initcall(kasan_memhotplug_init);
#endif
+
+#ifdef CONFIG_KASAN_VMALLOC
+void kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
+{
+ unsigned long shadow_alloc_start, shadow_alloc_end;
+ unsigned long addr;
+ unsigned long page;
+ pgd_t *pgdp;
+ p4d_t *p4dp;
+ pud_t *pudp;
+ pmd_t *pmdp;
+ pte_t *ptep;
+ pte_t pte;
+
+ shadow_alloc_start = ALIGN_DOWN(
+ (unsigned long)kasan_mem_to_shadow(area->addr),
+ PAGE_SIZE);
+ shadow_alloc_end = ALIGN(
+ (unsigned long)kasan_mem_to_shadow(area->addr + area->size),
+ PAGE_SIZE);
+
+ addr = shadow_alloc_start;
+ do {
+ pgdp = pgd_offset_k(addr);
+ p4dp = p4d_alloc(&init_mm, pgdp, addr);
+ pudp = pud_alloc(&init_mm, p4dp, addr);
+ pmdp = pmd_alloc(&init_mm, pudp, addr);
+ ptep = pte_alloc_kernel(pmdp, addr);
+
+ /*
+ * The pte may not be none if we allocated the page earlier to
+ * use part of it for another allocation.
+ *
+ * Because we only ever add to the vmalloc shadow pages and
+ * never free any, we can optimise here by checking for the pte
+ * presence outside the lock. It's OK to race with another
+ * allocation here because we do the 'real' test under the lock.
+ * This just allows us to save creating/freeing the new shadow
+ * page in the common case.
+ */
+ if (!pte_none(*ptep))
+ continue;
+
+ /*
+ * We're probably going to need to populate the shadow.
+ * Allocate and poison the shadow page now, outside the lock.
+ */
+ page = __get_free_page(GFP_KERNEL);
+ memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
+ pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
+
+ spin_lock(&init_mm.page_table_lock);
+ if (pte_none(*ptep)) {
+ set_pte_at(&init_mm, addr, ptep, pte);
+ page = 0;
+ }
+ spin_unlock(&init_mm.page_table_lock);
+
+ /* catch the case where we raced and don't need the page */
+ if (page)
+ free_page(page);
+ } while (addr += PAGE_SIZE, addr != shadow_alloc_end);
+
+ kasan_unpoison_shadow(area->addr, requested_size);
+
+ /*
+ * We have to poison the remainder of the allocation each time, not
+ * just when the shadow page is first allocated, because vmalloc may
+ * reuse addresses, and an early large allocation would cause us to
+ * miss OOBs in future smaller allocations.
+ *
+ * The alternative is to poison the shadow on vfree()/vunmap(). We
+ * don't because unmapping the virtual addresses should be
+ * sufficient to find most UAFs.
+ */
+ requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
+ kasan_poison_shadow(area->addr + requested_size,
+ area->size - requested_size,
+ KASAN_VMALLOC_INVALID);
index 4fa8d84599b0..406097ff8ced 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2012,6 +2012,15 @@ static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
va->vm = vm;
va->flags |= VM_VM_AREA;
spin_unlock(&vmap_area_lock);
+
+ /*
+ * If we are in vmalloc space we need to cover the shadow area with
+ * real memory. If we come here through VM_ALLOC, this is done
+ * by a higher level function that has access to the true size,
+ * which might not be a full page.
+ */
+ if (is_vmalloc_addr(vm->addr) && !(vm->flags & VM_ALLOC))
+ kasan_populate_vmalloc(vm->size, vm);
}

static void clear_vm_uninitialized_flag(struct vm_struct *vm)
@@ -2483,6 +2492,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!addr)
return NULL;

+ kasan_populate_vmalloc(real_size, area);
+
/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
* flag. It means that vm_struct is not fully initialized.
@@ -3324,9 +3335,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
spin_unlock(&vmap_area_lock);

/* insert all vm's */
- for (area = 0; area < nr_vms; area++)
+ for (area = 0; area < nr_vms; area++) {
setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
pcpu_get_vm_areas);
+ kasan_populate_vmalloc(sizes[area], vms[area]);

Daniel Axtens

unread,
Jul 31, 2019, 3:16:05 AM7/31/19
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, Daniel Axtens
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:

- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN

Reviewed-by: Dmitry Vyukov <dvy...@google.com>
Signed-off-by: Daniel Axtens <d...@axtens.net>
---

Daniel Axtens

unread,
Jul 31, 2019, 3:16:09 AM7/31/19
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, Daniel Axtens
In the case where KASAN directly allocates memory to back vmalloc
space, don't map the early shadow page over it.

We prepopulate pgds/p4ds for the range that would otherwise be empty.
This is required to get it synced to hardware on boot, allowing the
lower levels of the page tables to be filled dynamically.

Acked-by: Dmitry Vyukov <dvy...@google.com>
Signed-off-by: Daniel Axtens <d...@axtens.net>

---

v2: move from faulting in shadow pgds to prepopulating
---
arch/x86/Kconfig | 1 +
+
+ /*
+ * we need to populate p4ds to be synced when running in
+ * four level mode - see sync_global_pgds_l4()
+ */
+ kasan_shallow_populate_p4ds(pgd, addr, next, nid);
+ } while (pgd++, addr = next, addr != (unsigned long)end);
+}
+
+
#ifdef CONFIG_KASAN_INLINE
static int kasan_die_handler(struct notifier_block *self,
unsigned long val,
@@ -352,9 +398,24 @@ void __init kasan_init(void)
shadow_cpu_entry_end = (void *)round_up(
(unsigned long)shadow_cpu_entry_end, PAGE_SIZE);

+ /*

Mark Rutland

unread,
Aug 8, 2019, 9:50:46 AM8/8/19
to Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com
Hi Daniel,

This is looking really good!

I spotted a few more things we need to deal with, so I've suggested some
(not even compile-tested) code for that below. Mostly that's just error
handling, and using helpers to avoid things getting too verbose.

From looking at this for a while, there are a few more things we should
sort out:

* We need to handle allocations failing. I think we can get most of that
by using apply_to_page_range() to allocate the tables for us.

* Between poisoning the page and updating the page table, we need an
smp_wmb() to ensure that the poison is visible to other CPUs, similar
to what __pte_alloc() and friends do when allocating new tables.

* We can use the split pmd locks (used by both x86 and arm64) to
minimize contention on the init_mm ptl. As apply_to_page_range()
doesn't pass the corresponding pmd in, we'll have to re-walk the table
in the callback, but I suspect that's better than having all vmalloc
operations contend on the same ptl.

I think it would make sense to follow the style of the __alloc_p??
functions and factor out the actual initialization into a helper like:

static int __kasan_populate_vmalloc_pte(pmd_t *pmdp, pte_t *ptep)
{
        unsigned long page;
        spinlock_t *ptl;
        pte_t pte;

        page = __get_free_page(GFP_KERNEL);
        if (!page)
                return -ENOMEM;

        memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
        pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

        /*
         * Ensure poisoning is visible before the shadow is made visible
         * to other CPUs.
         */
        smp_wmb();

        ptl = pmd_lock(&init_mm, pmdp);
        if (likely(pte_none(*ptep))) {
                set_pte(ptep, pte);
                page = 0;
        }
        spin_unlock(ptl);
        if (page)
                free_page(page);
        return 0;
}

... with the apply_to_page_range() callback looking a bit like
alloc_p??(), grabbing the pmd for its ptl.

static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr, void *unused)
{
        pgd_t *pgdp;
        p4d_t *p4dp;
        pud_t *pudp;
        pmd_t *pmdp;

        if (likely(!pte_none(*ptep)))
                return 0;

        pgdp = pgd_offset_k(addr);
        p4dp = p4d_offset(pgdp, addr);
        pudp = pud_offset(p4dp, addr);
        pmdp = pmd_offset(pudp, addr);

        return __kasan_populate_vmalloc_pte(pmdp, ptep);
}

... and the main function looking something like:

int kasan_populate_vmalloc(...)
{
        unsigned long shadow_start, shadow_size;
        unsigned long addr;
        int ret;

        // calculate shadow bounds here

        ret = apply_to_page_range(&init_mm, shadow_start, shadow_size,
                                  kasan_populate_vmalloc_pte, NULL);
        if (ret)
                return ret;

        ...

        // unpoison the new allocation here
}

> + kasan_unpoison_shadow(area->addr, requested_size);
> +
> + /*
> + * We have to poison the remainder of the allocation each time, not
> + * just when the shadow page is first allocated, because vmalloc may
> + * reuse addresses, and an early large allocation would cause us to
> + * miss OOBs in future smaller allocations.
> + *
> + * The alternative is to poison the shadow on vfree()/vunmap(). We
> + * don't because unmapping the virtual addresses should be
> + * sufficient to find most UAFs.
> + */
> + requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
> + kasan_poison_shadow(area->addr + requested_size,
> + area->size - requested_size,
> + KASAN_VMALLOC_INVALID);
> +}

Is it painful to do the poisoning in the vfree/vunmap paths? I haven't
looked, so I might have missed something that makes that nasty.

If it's possible, I think it would be preferable to do so. It would be
consistent with the non-vmalloc KASAN cases. IIUC in that case we only
need the requested size here (and not the vmap_area), so we could just
take start and size as arguments.

Thanks,
Mark.

Mark Rutland

unread,
Aug 8, 2019, 1:43:30 PM8/8/19
to Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com
On Thu, Aug 08, 2019 at 02:50:37PM +0100, Mark Rutland wrote:
> Hi Daniel,
>
> This is looking really good!
>
> I spotted a few more things we need to deal with, so I've suggested some
> (not even compile-tested) code for that below. Mostly that's just error
> handling, and using helpers to avoid things getting too verbose.

FWIW, I had a quick go at that, and I've pushed the (corrected) results
to my git repo, along with an initial stab at arm64 support (which is
currently broken):

https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=kasan/vmalloc

Thanks,
Mark.

Mark Rutland

unread,
Aug 9, 2019, 5:54:42 AM8/9/19
to Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com
I've fixed my arm64 patch now, and that appears to work in basic tests
(example below), so I'll throw my arm64 Syzkaller instance at that today
to shake out anything major that we've missed or that I've botched.

I'm very excited to see this!

Are you happy to pick up my modified patch 1 for v4?

Thanks,
Mark.

# echo STACK_GUARD_PAGE_LEADING > DIRECT
[ 107.453162] lkdtm: Performing direct entry STACK_GUARD_PAGE_LEADING
[ 107.454672] lkdtm: attempting bad read from page below current stack
[ 107.456672] ==================================================================
[ 107.457929] BUG: KASAN: vmalloc-out-of-bounds in lkdtm_STACK_GUARD_PAGE_LEADING+0x88/0xb4
[ 107.459398] Read of size 1 at addr ffff20001515ffff by task sh/214
[ 107.460864]
[ 107.461271] CPU: 0 PID: 214 Comm: sh Not tainted 5.3.0-rc3-00004-g84f902ca9396-dirty #7
[ 107.463101] Hardware name: linux,dummy-virt (DT)
[ 107.464407] Call trace:
[ 107.464951] dump_backtrace+0x0/0x1e8
[ 107.465781] show_stack+0x14/0x20
[ 107.466824] dump_stack+0xbc/0xf4
[ 107.467780] print_address_description+0x60/0x33c
[ 107.469221] __kasan_report+0x140/0x1a0
[ 107.470388] kasan_report+0xc/0x18
[ 107.471439] __asan_load1+0x4c/0x58
[ 107.472428] lkdtm_STACK_GUARD_PAGE_LEADING+0x88/0xb4
[ 107.473908] lkdtm_do_action+0x40/0x50
[ 107.475255] direct_entry+0x128/0x1b0
[ 107.476348] full_proxy_write+0x90/0xc8
[ 107.477595] __vfs_write+0x54/0xa8
[ 107.478780] vfs_write+0xd0/0x230
[ 107.479762] ksys_write+0xc4/0x170
[ 107.480738] __arm64_sys_write+0x40/0x50
[ 107.481888] el0_svc_common.constprop.0+0xc0/0x1c0
[ 107.483240] el0_svc_handler+0x34/0x88
[ 107.484211] el0_svc+0x8/0xc
[ 107.484996]
[ 107.485429]
[ 107.485895] Memory state around the buggy address:
[ 107.487107] ffff20001515fe80: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
[ 107.489162] ffff20001515ff00: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
[ 107.491157] >ffff20001515ff80: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
[ 107.493193] ^
[ 107.494973] ffff200015160000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 107.497103] ffff200015160080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 107.498795] ==================================================================
[ 107.500495] Disabling lock debugging due to kernel taint
[ 107.503212] Unable to handle kernel paging request at virtual address ffff20001515ffff
[ 107.505177] Mem abort info:
[ 107.505797] ESR = 0x96000007
[ 107.506554] Exception class = DABT (current EL), IL = 32 bits
[ 107.508031] SET = 0, FnV = 0
[ 107.508547] EA = 0, S1PTW = 0
[ 107.509125] Data abort info:
[ 107.509704] ISV = 0, ISS = 0x00000007
[ 107.510388] CM = 0, WnR = 0
[ 107.511089] swapper pgtable: 4k pages, 48-bit VAs, pgdp=0000000041c65000
[ 107.513221] [ffff20001515ffff] pgd=00000000bdfff003, pud=00000000bdffe003, pmd=00000000aa31e003, pte=0000000000000000
[ 107.515915] Internal error: Oops: 96000007 [#1] PREEMPT SMP
[ 107.517295] Modules linked in:
[ 107.518074] CPU: 0 PID: 214 Comm: sh Tainted: G B 5.3.0-rc3-00004-g84f902ca9396-dirty #7
[ 107.520755] Hardware name: linux,dummy-virt (DT)
[ 107.522208] pstate: 60400005 (nZCv daif +PAN -UAO)
[ 107.523670] pc : lkdtm_STACK_GUARD_PAGE_LEADING+0x88/0xb4
[ 107.525176] lr : lkdtm_STACK_GUARD_PAGE_LEADING+0x88/0xb4
[ 107.526809] sp : ffff200015167b90
[ 107.527856] x29: ffff200015167b90 x28: ffff800002294740
[ 107.529728] x27: 0000000000000000 x26: 0000000000000000
[ 107.531523] x25: ffff200015167df0 x24: ffff2000116e8400
[ 107.533234] x23: ffff200015160000 x22: dfff200000000000
[ 107.534694] x21: ffff040002a2cf7a x20: ffff2000116e9ee0
[ 107.536238] x19: 1fffe40002a2cf7a x18: 0000000000000000
[ 107.537699] x17: 0000000000000000 x16: 0000000000000000
[ 107.539288] x15: 0000000000000000 x14: 0000000000000000
[ 107.540584] x13: 0000000000000000 x12: ffff10000d672bb9
[ 107.541920] x11: 1ffff0000d672bb8 x10: ffff10000d672bb8
[ 107.543438] x9 : 1ffff0000d672bb8 x8 : dfff200000000000
[ 107.545008] x7 : ffff10000d672bb9 x6 : ffff80006b395dc0
[ 107.546570] x5 : 0000000000000001 x4 : dfff200000000000
[ 107.547936] x3 : ffff20001113274c x2 : 0000000000000007
[ 107.549121] x1 : eb957a6c7b3ab400 x0 : 0000000000000000
[ 107.550220] Call trace:
[ 107.551017] lkdtm_STACK_GUARD_PAGE_LEADING+0x88/0xb4
[ 107.552288] lkdtm_do_action+0x40/0x50
[ 107.553302] direct_entry+0x128/0x1b0
[ 107.554290] full_proxy_write+0x90/0xc8
[ 107.555332] __vfs_write+0x54/0xa8
[ 107.556278] vfs_write+0xd0/0x230
[ 107.557000] ksys_write+0xc4/0x170
[ 107.557834] __arm64_sys_write+0x40/0x50
[ 107.558980] el0_svc_common.constprop.0+0xc0/0x1c0
[ 107.560111] el0_svc_handler+0x34/0x88
[ 107.560936] el0_svc+0x8/0xc
[ 107.561580] Code: 91140280 97ded9e3 d10006e0 97e4672e (385ff2e1)
[ 107.563208] ---[ end trace 9e69aa587e1dc0cc ]---

Vasily Gorbik

unread,
Aug 9, 2019, 7:54:13 AM8/9/19
to Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com
On Wed, Jul 31, 2019 at 05:15:48PM +1000, Daniel Axtens wrote:
Acked-by: Vasily Gorbik <g...@linux.ibm.com>

I've added the s390-specific kasan init part and the whole thing looks good!
Unfortunately I also had to make additional changes in s390 code, so the
s390 part will go in later through the s390 tree. But I'm looking forward to
seeing your patch series upstream.

Mark Rutland

unread,
Aug 9, 2019, 8:37:50 AM8/9/19
to Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com
On Thu, Aug 08, 2019 at 02:50:37PM +0100, Mark Rutland wrote:
> From looking at this for a while, there are a few more things we should
> sort out:

> * We can use the split pmd locks (used by both x86 and arm64) to
> minimize contention on the init_mm ptl. As apply_to_page_range()
> doesn't pass the corresponding pmd in, we'll have to re-walk the table
> in the callback, but I suspect that's better than having all vmalloc
> operations contend on the same ptl.

Just to point out: I was wrong about this. We don't initialise the split
pmd locks for the kernel page tables, so we have to use the init_mm ptl.

I've fixed that up in my kasan/vmalloc branch as below, which works for
me on arm64 (with another patch to prevent arm64 from using early shadow
for the vmalloc area).

Thanks,
Mark.

----

static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr, void *unused)
{
        unsigned long page;
        pte_t pte;

        if (likely(!pte_none(*ptep)))
                return 0;

        page = __get_free_page(GFP_KERNEL);
        if (!page)
                return -ENOMEM;

        memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
        pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

        /*
         * Ensure poisoning is visible before the shadow is made visible
         * to other CPUs.
         */
        smp_wmb();

        spin_lock(&init_mm.page_table_lock);
        if (likely(pte_none(*ptep))) {
                set_pte_at(&init_mm, addr, ptep, pte);
                page = 0;
        }
        spin_unlock(&init_mm.page_table_lock);
        if (page)
                free_page(page);
        return 0;
}

int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
{
        unsigned long shadow_start, shadow_end;
        int ret;

        shadow_start = (unsigned long)kasan_mem_to_shadow(area->addr);
        shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
        shadow_end = (unsigned long)kasan_mem_to_shadow(area->addr + area->size);
        shadow_end = ALIGN(shadow_end, PAGE_SIZE);

        ret = apply_to_page_range(&init_mm, shadow_start,
                                  shadow_end - shadow_start,
                                  kasan_populate_vmalloc_pte, NULL);
        if (ret)
                return ret;

        kasan_unpoison_shadow(area->addr, requested_size);

        /*
         * We have to poison the remainder of the allocation each time, not
         * just when the shadow page is first allocated, because vmalloc may
         * reuse addresses, and an early large allocation would cause us to
         * miss OOBs in future smaller allocations.
         *
         * The alternative is to poison the shadow on vfree()/vunmap(). We
         * don't because unmapping the virtual addresses should be
         * sufficient to find most UAFs.
         */
        requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
        kasan_poison_shadow(area->addr + requested_size,
                            area->size - requested_size,
                            KASAN_VMALLOC_INVALID);

        return 0;
}

Daniel Axtens

unread,
Aug 11, 2019, 10:53:34 PM8/11/19
to Mark Rutland, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com
Mark Rutland <mark.r...@arm.com> writes:

> On Thu, Aug 08, 2019 at 06:43:25PM +0100, Mark Rutland wrote:
>> On Thu, Aug 08, 2019 at 02:50:37PM +0100, Mark Rutland wrote:
>> > Hi Daniel,
>> >
>> > This is looking really good!
>> >
>> > I spotted a few more things we need to deal with, so I've suggested some
>> > (not even compile-tested) code for that below. Mostly that's just error
>> > handling, and using helpers to avoid things getting too verbose.
>>
>> FWIW, I had a quick go at that, and I've pushed the (corrected) results
>> to my git repo, along with an initial stab at arm64 support (which is
>> currently broken):
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=kasan/vmalloc
>
> I've fixed my arm64 patch now, and that appears to work in basic tests
> (example below), so I'll throw my arm64 Syzkaller instance at that today
> to shake out anything major that we've missed or that I've botched.
>
> I'm very excited to see this!
>
> Are you happy to pick up my modified patch 1 for v4?

Thanks, I'll do that.

I'll also have a crack at poisoning on free - I know I did that in an
early draft and then dropped it, so I don't think it was painful at all.

Regards,
Daniel

Daniel Axtens

unread,
Aug 14, 2019, 8:16:55 PM8/14/19
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, linuxp...@lists.ozlabs.org, g...@linux.ibm.com, Daniel Axtens
Currently, vmalloc space is backed by the early shadow page. This
means that kasan is incompatible with VMAP_STACK, and it also provides
a hurdle for architectures that do not have a dedicated module space
(like powerpc64).

This series provides a mechanism to back vmalloc space with real,
dynamically allocated memory. I have only wired up x86, because that's
the only currently supported arch I can work with easily, but it's
very easy to wire up other architectures.

This has been discussed before in the context of VMAP_STACK:
- https://bugzilla.kernel.org/show_bug.cgi?id=202009
- https://lkml.org/lkml/2018/7/22/198
- https://lkml.org/lkml/2019/7/19/822

In terms of implementation details:

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate
a backing page the first time a mapping in vmalloc space uses a
particular page of the shadow region. Keep this page around
regardless of whether the mapping is later freed - in the mean time
the page could have become shared by another vmalloc mapping.

This can in theory lead to unbounded memory growth, but the vmalloc
allocator is pretty good at reusing addresses, so the practical memory
usage appears to grow at first but then stay fairly stable.

If we run into practical memory exhaustion issues, I'm happy to
consider hooking into the book-keeping that vmap does, but I am not
convinced that it will be an issue.

v1: https://lore.kernel.org/linux-mm/2019072505550...@axtens.net/
v2: https://lore.kernel.org/linux-mm/2019072914210...@axtens.net/
Address review comments:
- Patch 1: use kasan_unpoison_shadow's built-in handling of
ranges that do not align to a full shadow byte
- Patch 3: prepopulate pgds rather than faulting things in
v3: https://lore.kernel.org/linux-mm/2019073107155...@axtens.net/
Address comments from Mark Rutland:
- kasan_populate_vmalloc is a better name
- handle concurrency correctly
- various nits and cleanups
- relax module alignment in KASAN_VMALLOC case
v4: Changes to patch 1 only:
- Integrate Mark's rework, thanks Mark!
- handle the case where kasan_populate_vmalloc might fail
- poison shadow on free, allowing the alloc path to just
  unpoison memory that it uses

Daniel Axtens (3):
kasan: support backing vmalloc space with real shadow memory
fork: support VMAP_STACK with KASAN_VMALLOC
x86/kasan: support KASAN_VMALLOC

Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++
arch/Kconfig | 9 +++--
arch/x86/Kconfig | 1 +
arch/x86/mm/kasan_init_64.c | 61 ++++++++++++++++++++++++++++
include/linux/kasan.h | 24 +++++++++++
include/linux/moduleloader.h | 2 +-
include/linux/vmalloc.h | 12 ++++++
kernel/fork.c | 4 ++
lib/Kconfig.kasan | 16 ++++++++
lib/test_kasan.c | 26 ++++++++++++
mm/kasan/common.c | 67 +++++++++++++++++++++++++++++++
mm/kasan/generic_report.c | 3 ++
mm/kasan/kasan.h | 1 +
mm/vmalloc.c | 28 ++++++++++++-
14 files changed, 308 insertions(+), 6 deletions(-)

--
2.20.1

Daniel Axtens

unread,
Aug 14, 2019, 8:17:00 PM8/14/19
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, linuxp...@lists.ozlabs.org, g...@linux.ibm.com, Daniel Axtens
Hook into vmalloc and vmap, and dynamically allocate real shadow
memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate
a backing page the first time a mapping in vmalloc space uses a
particular page of the shadow region. Keep this page around
regardless of whether the mapping is later freed - in the mean time
the page could have become shared by another vmalloc mapping.

This can in theory lead to unbounded memory growth, but the vmalloc
allocator is pretty good at reusing addresses, so the practical memory
usage grows at first but then stays fairly stable.

This requires architecture support to actually use: arches must stop
mapping the read-only zero page over the portion of the shadow region that
covers the vmalloc space and instead leave it unmapped.

This allows KASAN with VMAP_STACK, and will be needed for architectures
that do not have a separate module space (e.g. powerpc64, which I am
currently working on). It also allows relaxing the module alignment
back to PAGE_SIZE.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Acked-by: Vasily Gorbik <g...@linux.ibm.com>
Signed-off-by: Daniel Axtens <d...@axtens.net>
[Mark: rework shadow allocation]
Signed-off-by: Mark Rutland <mark.r...@arm.com>

--

v2: let kasan_unpoison_shadow deal with ranges that do not use a
full shadow byte.

v3: relax module alignment
rename to kasan_populate_vmalloc which is a much better name
deal with concurrency correctly

v4: Integrate Mark's rework
Poison pages on vfree
Handle allocation failures. I've tested this by inserting artificial
failures and using test_vmalloc to stress it. I haven't handled the
per-cpu case: it looked like it would require a messy hacking-up of
the function to deal with an OOM failure case in a debug feature.

---
Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++
include/linux/kasan.h | 24 +++++++++++
include/linux/moduleloader.h | 2 +-
include/linux/vmalloc.h | 12 ++++++
lib/Kconfig.kasan | 16 ++++++++
lib/test_kasan.c | 26 ++++++++++++
mm/kasan/common.c | 67 +++++++++++++++++++++++++++++++
mm/kasan/generic_report.c | 3 ++
mm/kasan/kasan.h | 1 +
mm/vmalloc.c | 28 ++++++++++++-
10 files changed, 237 insertions(+), 2 deletions(-)
index cc8a03cc9674..d666748cd378 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -70,8 +70,18 @@ struct kasan_cache {
int free_meta_offset;
};

+/*
+ * These functions provide a special case to support backing module
+ * allocations with real shadow memory. With KASAN vmalloc, the special
+ * case is unnecessary, as the work is handled in the generic case.
+ */
+#ifndef CONFIG_KASAN_VMALLOC
int kasan_module_alloc(void *addr, size_t size);
void kasan_free_shadow(const struct vm_struct *vm);
+#else
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+#endif

int kasan_add_zero_shadow(void *start, unsigned long size);
void kasan_remove_zero_shadow(void *start, unsigned long size);
@@ -194,4 +204,18 @@ static inline void *kasan_reset_tag(const void *addr)

#endif /* CONFIG_KASAN_SW_TAGS */

+#ifdef CONFIG_KASAN_VMALLOC
+int kasan_populate_vmalloc(unsigned long requested_size,
+ struct vm_struct *area);
+void kasan_free_vmalloc(void *start, unsigned long size);
+#else
+static inline int kasan_populate_vmalloc(unsigned long requested_size,
+ struct vm_struct *area)
+{
+ return 0;
+}
+
+static inline void kasan_free_vmalloc(void *start, unsigned long size) {}
+#endif
+
#endif /* LINUX_KASAN_H */
diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
index 5229c18025e9..ca92aea8a6bd 100644
--- a/include/linux/moduleloader.h
+++ b/include/linux/moduleloader.h
@@ -91,7 +91,7 @@ void module_arch_cleanup(struct module *mod);
/* Any cleanup before freeing mod->module_init */
void module_arch_freeing_init(struct module *mod);

-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC)
#include <linux/kasan.h>
#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
#else
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 9b21d0047710..cdc7a60f7d81 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -21,6 +21,18 @@ struct notifier_block; /* in notifier.h */
#define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */
#define VM_NO_GUARD 0x00000040 /* don't add guard page */
#define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */
+
+/*
+ * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC.
+ *
+ * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after
+ * shadow memory has been mapped. It's used to handle allocation errors so that
+ * we don't try to poison shadow on free if it was never allocated.
+ *
+ * Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to
+ * determine which allocations need the module shadow freed.
+ */
+
/*
* Memory with VM_FLUSH_RESET_PERMS cannot be freed in an interrupt or with
* vfree_atomic().
+
+ /*
index 2277b82902d8..b8374e3773cf 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -568,6 +568,7 @@ void kasan_kfree_large(void *ptr, unsigned long ip)
/* The object will be poisoned by page_alloc. */
}

+#ifndef CONFIG_KASAN_VMALLOC
int kasan_module_alloc(void *addr, size_t size)
{
void *ret;
@@ -603,6 +604,7 @@ void kasan_free_shadow(const struct vm_struct *vm)
if (vm->flags & VM_KASAN)
vfree(kasan_mem_to_shadow(vm->addr));
}
+#endif

extern void __kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip);

@@ -722,3 +724,68 @@ static int __init kasan_memhotplug_init(void)

core_initcall(kasan_memhotplug_init);
#endif
+
+#ifdef CONFIG_KASAN_VMALLOC
+static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
+ void *unused)
+{
+ unsigned long page;
+ pte_t pte;
+
+ if (likely(!pte_none(*ptep)))
+ return 0;
+
+ page = __get_free_page(GFP_KERNEL);
+ if (!page)
+ return -ENOMEM;
+
+ memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
+ pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
+
+ /*
+ * Ensure poisoning is visible before the shadow is made visible
+ * to other CPUs.
+ */
+ smp_wmb();
+
+ spin_lock(&init_mm.page_table_lock);
+ if (likely(pte_none(*ptep))) {
+ set_pte_at(&init_mm, addr, ptep, pte);
+ page = 0;
+ }
+ spin_unlock(&init_mm.page_table_lock);
+ if (page)
+ free_page(page);
+ return 0;
+}
+
+int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
+{
+ unsigned long shadow_start, shadow_end;
+ int ret;
+
+ shadow_start = (unsigned long)kasan_mem_to_shadow(area->addr);
+ shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
+ shadow_end = (unsigned long)kasan_mem_to_shadow(
+ area->addr + area->size);
+ shadow_end = ALIGN(shadow_end, PAGE_SIZE);
+
+ ret = apply_to_page_range(&init_mm, shadow_start,
+ shadow_end - shadow_start,
+ kasan_populate_vmalloc_pte, NULL);
+ if (ret)
+ return ret;
+
+ kasan_unpoison_shadow(area->addr, requested_size);
+
+ area->flags |= VM_KASAN;
+
+ return 0;
+}
+
+void kasan_free_vmalloc(void *start, unsigned long size)
+{
+ size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+ kasan_poison_shadow(start, size, KASAN_VMALLOC_INVALID);
index 4fa8d84599b0..c20a7e663004 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2056,6 +2056,22 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,

setup_vmalloc_vm(area, va, flags, caller);

+ /*
+ * For KASAN, if we are in vmalloc space, we need to cover the shadow
+ * area with real memory. If we come here through VM_ALLOC, this is
+ * done by a higher level function that has access to the true size,
+ * which might not be a full page.
+ *
+ * We assume module space comes via VM_ALLOC path.
+ */
+ if (is_vmalloc_addr(area->addr) && !(area->flags & VM_ALLOC)) {
+ if (kasan_populate_vmalloc(area->size, area)) {
+ unmap_vmap_area(va);
+ kfree(area);
+ return NULL;
+ }
+ }
+
return area;
}

@@ -2233,6 +2249,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
debug_check_no_obj_freed(area->addr, get_vm_area_size(area));

+ if (area->flags & VM_KASAN)
+ kasan_free_vmalloc(area->addr, area->size);
+
vm_remove_mappings(area, deallocate_pages);

if (deallocate_pages) {
@@ -2483,6 +2502,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!addr)
return NULL;

+ if (kasan_populate_vmalloc(real_size, area))
+ return NULL;
+
/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
* flag. It means that vm_struct is not fully initialized.
@@ -3324,10 +3346,14 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
spin_unlock(&vmap_area_lock);

/* insert all vm's */
- for (area = 0; area < nr_vms; area++)
+ for (area = 0; area < nr_vms; area++) {
setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
pcpu_get_vm_areas);

+ /* assume success here */

Daniel Axtens

unread,
Aug 14, 2019, 8:17:04 PM8/14/19
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, linuxp...@lists.ozlabs.org, g...@linux.ibm.com, Daniel Axtens
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:

- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN

Reviewed-by: Dmitry Vyukov <dvy...@google.com>
Signed-off-by: Daniel Axtens <d...@axtens.net>
---

Daniel Axtens

unread,
Aug 14, 2019, 8:17:09 PM8/14/19
to kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, linuxp...@lists.ozlabs.org, g...@linux.ibm.com, Daniel Axtens
In the case where KASAN directly allocates memory to back vmalloc
space, don't map the early shadow page over it.

We prepopulate pgds/p4ds for the range that would otherwise be empty.
This is required to get it synced to hardware on boot, allowing the
lower levels of the page tables to be filled dynamically.

Acked-by: Dmitry Vyukov <dvy...@google.com>
Signed-off-by: Daniel Axtens <d...@axtens.net>

---

v2: move from faulting in shadow pgds to prepopulating
---
arch/x86/Kconfig | 1 +
+
+ /*

Mark Rutland

unread,
Aug 15, 2019, 7:28:49 AM8/15/19
to Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com, linuxp...@lists.ozlabs.org, g...@linux.ibm.com
On Thu, Aug 15, 2019 at 10:16:33AM +1000, Daniel Axtens wrote:
> Currently, vmalloc space is backed by the early shadow page. This
> means that kasan is incompatible with VMAP_STACK, and it also provides
> a hurdle for architectures that do not have a dedicated module space
> (like powerpc64).
>
> This series provides a mechanism to back vmalloc space with real,
> dynamically allocated memory. I have only wired up x86, because that's
> the only currently supported arch I can work with easily, but it's
> very easy to wire up other architectures.

I'm happy to send patches for arm64 once we've settled some conflicting
rework going on for 52-bit VA support.

>
> This has been discussed before in the context of VMAP_STACK:
> - https://bugzilla.kernel.org/show_bug.cgi?id=202009
> - https://lkml.org/lkml/2018/7/22/198
> - https://lkml.org/lkml/2019/7/19/822
>
> In terms of implementation details:
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
>
> Instead, share backing space across multiple mappings. Allocate
> a backing page the first time a mapping in vmalloc space uses a
> particular page of the shadow region. Keep this page around
> regardless of whether the mapping is later freed - in the mean time
> the page could have become shared by another vmalloc mapping.
>
> This can in theory lead to unbounded memory growth, but the vmalloc
> allocator is pretty good at reusing addresses, so the practical memory
> usage appears to grow at first but then stay fairly stable.
>
> If we run into practical memory exhaustion issues, I'm happy to
> consider hooking into the book-keeping that vmap does, but I am not
> convinced that it will be an issue.

FWIW, I haven't spotted such memory exhaustion after a week of Syzkaller
fuzzing with the last patchset, across 3 machines, so that sounds fine
to me.

Otherwise, this looks good to me now! For the x86 and fork patch, feel
free to add:

Acked-by: Mark Rutland <mark.r...@arm.com>

Mark.

Christophe Leroy

unread,
Aug 16, 2019, 3:47:08 AM8/16/19
to Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, linuxp...@lists.ozlabs.org, g...@linux.ibm.com


On 15/08/2019 at 02:16, Daniel Axtens wrote:
> Hook into vmalloc and vmap, and dynamically allocate real shadow
> memory to back the mappings.
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
>
> Instead, share backing space across multiple mappings. Allocate
> a backing page the first time a mapping in vmalloc space uses a
> particular page of the shadow region. Keep this page around
> regardless of whether the mapping is later freed - in the mean time
> the page could have become shared by another vmalloc mapping.
>
> This can in theory lead to unbounded memory growth, but the vmalloc
> allocator is pretty good at reusing addresses, so the practical memory
> usage grows at first but then stays fairly stable.

I guess people having gigabytes of memory don't mind, but I'm concerned
about tiny targets with very little memory. I have boards with
as little as 32Mbytes of RAM. The shadow region for the linear space
already takes one eighth of the RAM. I'd rather avoid keeping unused
shadow pages busy.

Each page of shadow memory represents 8 pages of real memory. Could we
use page_ref to count how many pieces of a shadow page are used so that
we can free it when the ref count decreases to 0?
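
Something like this could express the counting side - a completely untested
sketch reusing the apply_to_page_range() callback style from this series,
with a made-up name; the hard part (clearing the pte and flushing the TLB
before the last reference goes away) is only described in the comment:

static int kasan_shadow_page_get(pte_t *ptep, unsigned long addr, void *unused)
{
        /* one more vmalloc mapping overlaps the shadow page behind this pte */
        if (!pte_none(*ptep))
                get_page(pte_page(*ptep));
        return 0;
}

/*
 * On vfree()/vunmap(), walk the same shadow range and put_page() each shadow
 * page; dropping the last reference would also have to clear the pte under
 * init_mm.page_table_lock and flush the TLB before the page is really freed.
 */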

>
> This requires architecture support to actually use: arches must stop
> mapping the read-only zero page over the portion of the shadow region that
> covers the vmalloc space and instead leave it unmapped.

Why 'must' ? Couldn't we switch back and forth from the zero page to
real page on demand ?

If the zero page is not mapped for unused vmalloc space, bad memory
accesses will Oops on the shadow memory access instead of Oopsing on the
real bad access, making it more difficult to locate and identify the issue.

>
> This allows KASAN with VMAP_STACK, and will be needed for architectures
> that do not have a separate module space (e.g. powerpc64, which I am
> currently working on). It also allows relaxing the module alignment
> back to PAGE_SIZE.

Why 'needed'? powerpc32 doesn't have a separate module space and
doesn't need that.

[...]

That's wrong, powerpc32 doesn't have a fixed module region and is
already supported.

[...]

Could we put the testing part in a separate patch?

[...]

Prior to this, the zero shadow area should be mapped, and the test
should be:

if (likely(pte_pfn(*ptep) != PHYS_PFN(__pa(kasan_early_shadow_page))))
        return 0;

Christophe

Christophe Leroy

unread,
Aug 16, 2019, 4:04:36 AM8/16/19
to Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, mark.r...@arm.com, dvy...@google.com, linuxp...@lists.ozlabs.org, g...@linux.ibm.com


On 15/08/2019 at 02:16, Daniel Axtens wrote:
> In the case where KASAN directly allocates memory to back vmalloc
> space, don't map the early shadow page over it.

If the early shadow page is not mapped, any bad memory access will Oops on
the shadow access instead of Oopsing on the real bad access.

You should still map the early shadow page, and replace it with a real page
when needed.

Christophe

Mark Rutland

unread,
Aug 16, 2019, 1:08:21 PM8/16/19
to Christophe Leroy, Daniel Axtens, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com, linuxp...@lists.ozlabs.org, g...@linux.ibm.com
Hi Christophe,

On Fri, Aug 16, 2019 at 09:47:00AM +0200, Christophe Leroy wrote:
> On 15/08/2019 at 02:16, Daniel Axtens wrote:
> > Hook into vmalloc and vmap, and dynamically allocate real shadow
> > memory to back the mappings.
> >
> > Most mappings in vmalloc space are small, requiring less than a full
> > page of shadow space. Allocating a full shadow page per mapping would
> > therefore be wasteful. Furthermore, to ensure that different mappings
> > use different shadow pages, mappings would have to be aligned to
> > KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
> >
> > Instead, share backing space across multiple mappings. Allocate
> > a backing page the first time a mapping in vmalloc space uses a
> > particular page of the shadow region. Keep this page around
> > regardless of whether the mapping is later freed - in the mean time
> > the page could have become shared by another vmalloc mapping.
> >
> > This can in theory lead to unbounded memory growth, but the vmalloc
> > allocator is pretty good at reusing addresses, so the practical memory
> > usage grows at first but then stays fairly stable.
>
> I guess people having gigabytes of memory don't mind, but I'm concerned
> about tiny targets with very little amount of memory. I have boards with as
> little as 32Mbytes of RAM. The shadow region for the linear space already
> takes one eighth of the RAM. I'd rather avoid keeping unused shadow pages
> busy.

I think this depends on how much shadow would be in constant use vs what
would get left unused. If the amount in constant use is sufficiently
large (or the residue is sufficiently small), then it may not be
worthwhile to support KASAN_VMALLOC on such small systems.

> Each page of shadow memory represents 8 pages of real memory. Could we use
> page_ref to count how many pieces of a shadow page are used so that we can
> free it when the ref count decreases to 0?
>
> > This requires architecture support to actually use: arches must stop
> > mapping the read-only zero page over the portion of the shadow region that
> > covers the vmalloc space and instead leave it unmapped.
>
> Why 'must' ? Couldn't we switch back and forth from the zero page to real
> page on demand ?
>
> If the zero page is not mapped for unused vmalloc space, bad memory accesses
> will Oops on the shadow memory access instead of Oopsing on the real bad
> access, making it more difficult to locate and identify the issue.

I agree this isn't nice, though FWIW this can already happen today for
bad addresses that fall outside of the usual kernel address space. We
could make the !KASAN_INLINE checks resilient to this by using
probe_kernel_read() to check the shadow, and treating unmapped shadow as
poison.
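
Roughly something like this (sketch only, not even compile-tested;
kasan_read_shadow_byte() is a name I've just made up, and
memory_is_poisoned_1() is the existing 1-byte outline check in
mm/kasan/generic.c):

/* Read a shadow byte; unmapped shadow reads as fully poisoned. */
static s8 kasan_read_shadow_byte(const void *addr)
{
	s8 shadow;

	if (probe_kernel_read(&shadow, kasan_mem_to_shadow(addr),
			      sizeof(shadow)))
		return (s8)0xff;	/* any negative value == poisoned */

	return shadow;
}

static __always_inline bool memory_is_poisoned_1(unsigned long addr)
{
	s8 shadow_value = kasan_read_shadow_byte((void *)addr);

	if (unlikely(shadow_value)) {
		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;

		return unlikely(last_accessible_byte >= shadow_value);
	}

	return false;
}

(and similarly for the wider accesses).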

It's also worth noting that flipping back and forth isn't generally safe
unless going via an invalid table entry, so there'd still be windows
where a bad access might not have shadow mapped.

We'd need to reuse the common p4d/pud/pmd/pte tables for unallocated
regions, or the tables alone would consume significant amounts of memory
(e.g. ~32GiB for arm64 defconfig), and thus we'd need to be able to
switch all levels between pgd and pte, which is much more complicated.

I strongly suspect that the additional complexity will outweigh the
benefit.

[...]

> > +#ifdef CONFIG_KASAN_VMALLOC
> > +static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> > + void *unused)
> > +{
> > + unsigned long page;
> > + pte_t pte;
> > +
> > + if (likely(!pte_none(*ptep)))
> > + return 0;
>
> Prior to this, the zero shadow area should be mapped, and the test should
> be:
>
> if (likely(pte_pfn(*ptep) != PHYS_PFN(__pa(kasan_early_shadow_page))))
> return 0;

As above, this would need a more comprehensive redesign, so I don't
think it's worth going into that level of nit here. :)

If we do try to use common shadow for unallocated VA ranges, it probably
makes sense to have a common poison page that we can use, so that we can
report vmalloc-out-of-bounds.

Thanks,
Mark.

Andy Lutomirski

unread,
Aug 16, 2019, 1:41:14 PM8/16/19
to Mark Rutland, Christophe Leroy, Daniel Axtens, kasan-dev, Linux-MM, X86 ML, Andrey Ryabinin, Alexander Potapenko, Andrew Lutomirski, LKML, Dmitry Vyukov, linuxppc-dev, Vasily Gorbik
Could we instead modify the page fault handlers to detect this case
and print a useful message?
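
Something like this, maybe (rough sketch, x86-flavoured; the helper name
is made up, and kasan_shadow_to_mem() currently lives in mm/kasan/kasan.h
so it would need exposing or open-coding):

/* Called from the oops path when a fault isn't otherwise handled. */
static void note_kasan_shadow_fault(unsigned long address)
{
	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC))
		return;

	if (address < KASAN_SHADOW_START || address >= KASAN_SHADOW_END)
		return;

	/*
	 * The fault was on the shadow itself, so report which access the
	 * shadow lookup was (probably) being done for.
	 */
	pr_alert("KASAN: fault in shadow; checked access was probably %px\n",
		 kasan_shadow_to_mem((void *)address));
}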

Daniel Axtens

unread,
Aug 18, 2019, 11:59:02 PM8/18/19
to Mark Rutland, Christophe Leroy, kasa...@googlegroups.com, linu...@kvack.org, x...@kernel.org, arya...@virtuozzo.com, gli...@google.com, lu...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com, linuxp...@lists.ozlabs.org, g...@linux.ibm.com

>> > Instead, share backing space across multiple mappings. Allocate
>> > a backing page the first time a mapping in vmalloc space uses a
>> > particular page of the shadow region. Keep this page around
>> > regardless of whether the mapping is later freed - in the mean time
>> > the page could have become shared by another vmalloc mapping.
>> >
>> > This can in theory lead to unbounded memory growth, but the vmalloc
>> > allocator is pretty good at reusing addresses, so the practical memory
>> > usage grows at first but then stays fairly stable.
>>
>> I guess people having gigabytes of memory don't mind, but I'm concerned
>> about tiny targets with very little memory. I have boards with as
>> little as 32Mbytes of RAM. The shadow region for the linear space already
>> takes one eighth of the RAM. I'd rather avoid keeping unused shadow pages
>> busy.
>
> I think this depends on how much shadow would be in constant use vs what
> would get left unused. If the amount in constant use is sufficiently
> large (or the residue is sufficiently small), then it may not be
> worthwhile to support KASAN_VMALLOC on such small systems.

I'm not unsympathetic to the cause of small-memory systems, but this is
useful as-is for x86, especially for VMAP_STACK. arm64 and s390 have
already been able to make use of it as well. So unless the design is
going to make it difficult to extend to small-memory systems - if it
bakes in concepts or APIs that are going to make things harder - I think
it might be worth merging as is (pending the fixes for the documentation
nits etc. that you point out).

>> Each page of shadow memory represents 8 pages of real memory. Could we use
>> page_ref to count how many pieces of a shadow page are used so that we can
>> free it when the ref count decreases to 0.

I'm not sure how much of a difference it will make, but I'll have a look.

>> > This requires architecture support to actually use: arches must stop
>> mapping the read-only zero page over the portion of the shadow region that
>> > covers the vmalloc space and instead leave it unmapped.
>>
>> Why 'must'? Couldn't we switch back and forth from the zero page to a real
>> page on demand?

This code as currently written will not work if the architecture maps
the zero page over the portion of the shadow region that covers the
vmalloc space. So it's an implementation 'must' rather than a
law-of-the-universe 'must'.

We could perhaps map the zero page, but:

- you have to be really careful to get it right. If you accidentally
map the zero page onto memory where you shouldn't, you may permit
memory accesses that you should catch.

We could ameliorate this by taking Mark's suggestion and mapping a
poison page over the vmalloc space instead.

- I'm not sure what benefit is provided by having something mapped vs
leaving a hole, other than making the fault addresses more obvious.

- This gets complex, especially to do the zero-page/real-page swap
correctly with respect to various architectures' quirks (see e.g. 56eecdb912b5 "mm: Use
ptep/pmdp_set_numa() for updating _PAGE_NUMA bit" - ppc64 at least
requires that set_pte_at is never called on a valid PTE).

>> If the zero page is not mapped for unused vmalloc space, bad memory accesses
>> will Oops on the shadow memory access instead of Oopsing on the real bad
>> access, making it more difficult to locate and identify the issue.

I suppose. It's pretty easy on at least x86 and my draft ppc64
implementation to identify when an access falls into the shadow region
and then to reverse engineer the memory access that was being checked
based on the offset. As Andy points out, the fault handler could do this
automatically.

> I agree this isn't nice, though FWIW this can already happen today for
> bad addresses that fall outside of the usual kernel address space. We
> could make the !KASAN_INLINE checks resilient to this by using
> probe_kernel_read() to check the shadow, and treating unmapped shadow as
> poison.
>
> It's also worth noting that flipping back and forth isn't generally safe
> unless going via an invalid table entry, so there'd still be windows
> where a bad access might not have shadow mapped.
>
> We'd need to reuse the common p4d/pud/pmd/pte tables for unallocated
> regions, or the tables alone would consume significant amounts of memory
> (e.g. ~32GiB for arm64 defconfig), and thus we'd need to be able to
> switch all levels between pgd and pte, which is much more complicated.
>
> I strongly suspect that the additional complexity will outweigh the
> benefit.
>

I'm not opposed to this in principle but I am also concerned about the
complexity involved.

Regards,
Daniel

Mark Rutland

unread,
Aug 19, 2019, 6:15:26 AM8/19/19
to Andy Lutomirski, Christophe Leroy, Daniel Axtens, kasan-dev, Linux-MM, X86 ML, Andrey Ryabinin, Alexander Potapenko, LKML, Dmitry Vyukov, linuxppc-dev, Vasily Gorbik
In general we can't know if a bad access was a KASAN shadow lookup (e.g.
since the shadow of NULL falls outside of the shadow region), but we
could always print a message using kasan_shadow_to_mem() for any
unhandled fault to suggest what the "real" address might have been.

Thanks,
Mark.

Andy Lutomirski

unread,
Aug 19, 2019, 6:21:02 PM8/19/19
to Daniel Axtens, Mark Rutland, Christophe Leroy, kasan-dev, Linux-MM, X86 ML, Andrey Ryabinin, Alexander Potapenko, Andrew Lutomirski, LKML, Dmitry Vyukov, linuxppc-dev, Vasily Gorbik
> On Aug 18, 2019, at 8:58 PM, Daniel Axtens <d...@axtens.net> wrote:
>

>>> Each page of shadow memory represents 8 pages of real memory. Could we use
>>> page_ref to count how many pieces of a shadow page are used so that we can
>>> free it when the ref count decreases to 0.
>
> I'm not sure how much of a difference it will make, but I'll have a look.
>

There are a grand total of eight possible pages that could require a
given shadow page. I would suggest that, instead of reference
counting, you just check all eight pages.

Or, better yet, look at the actual vm_area_struct and see where prev
and next point. That should tell you exactly which range can be freed.
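
Roughly, that is (hand-wavy sketch, all names invented here;
prev_end/next_start would be the end of the previous still-allocated
area and the start of the next one):

static void kasan_release_shadow_of(const void *start, const void *end,
				    const void *prev_end,
				    const void *next_start)
{
	unsigned long s = (unsigned long)kasan_mem_to_shadow(start);
	unsigned long e = (unsigned long)kasan_mem_to_shadow(end);

	/* Shadow shared with live neighbours has to stay mapped. */
	s = max(s, (unsigned long)kasan_mem_to_shadow(prev_end));
	e = min(e, (unsigned long)kasan_mem_to_shadow(next_start));

	/* Only whole shadow pages can be released. */
	s = PAGE_ALIGN(s);
	e &= PAGE_MASK;

	if (s < e)
		unmap_shadow_range(s, e);	/* made-up helper */
}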