[PATCH v2 0/6] RISC-V kasan rework


Alexandre Ghiti

Jan 23, 2023, 5:09:55 AM1/23/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
As described in patch 2, our current kasan implementation is intricate,
so I tried to simplify it and to mimic what arm64/x86 are doing.

In addition, it fixes the UEFI boot flow of a kasan kernel with inline
instrumentation: all kasan configurations were tested successfully on a
large Ubuntu kernel config with KASAN_KUNIT_TEST and KASAN_MODULE_TEST.

inline ubuntu config + uefi:
sv39: OK
sv48: OK
sv57: OK

outline ubuntu config + uefi:
sv39: OK
sv48: OK
sv57: OK
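
For reference, the kasan options for those runs boil down to something
like the following .config sketch (the real Ubuntu config is of course
much larger, and KASAN_INLINE/KASAN_OUTLINE were toggled between the
inline and outline runs):

CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y
CONFIG_KASAN_INLINE=y
CONFIG_KASAN_KUNIT_TEST=m
CONFIG_KASAN_MODULE_TEST=m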

One test still always fails with KASAN_KUNIT_TEST and remains to be
investigated:
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1020
KASAN failure expected in "set_bit(nr, addr)", but none occurred
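
For context, the failing expectation boils down to something like this
simplified sketch (not the exact test code): the test allocates a single
long and expects kasan to report an out-of-bounds access when a bitop
touches the first bit past it:

	long *bits = kzalloc(sizeof(*bits), GFP_KERNEL);

	/* set_bit() one bit past the allocation should be reported */
	KUNIT_EXPECT_KASAN_FAIL(test, set_bit(BITS_PER_LONG, bits));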

Note that Palmer recently proposed to remove COMMAND_LINE_SIZE from the
userspace ABI
https://lore.kernel.org/lkml/20221211061358...@rivosinc.com/T/
so that we can finally increase the command line size to fit all the
kasan kernel parameters.

All of this should hopefully fix the syzkaller riscv build that has been
failing for a few months now. Any testing is appreciated, and if I can
help in any way, please ask.

v2:
- Rebase on top of v6.2-rc3
- Patch 4 is now much simpler than it used to be since Ard already moved
the string functions into the EFI stub.

Alexandre Ghiti (6):
riscv: Split early and final KASAN population functions
riscv: Rework kasan population functions
riscv: Move DTB_EARLY_BASE_VA to the kernel address space
riscv: Fix EFI stub usage of KASAN instrumented strcmp function
riscv: Fix ptdump when KASAN is enabled
riscv: Unconditionally select KASAN_VMALLOC if KASAN

arch/riscv/Kconfig | 1 +
arch/riscv/kernel/image-vars.h | 2 -
arch/riscv/mm/init.c | 2 +-
arch/riscv/mm/kasan_init.c | 516 ++++++++++++++++++---------------
arch/riscv/mm/ptdump.c | 24 +-
5 files changed, 298 insertions(+), 247 deletions(-)

--
2.37.2

Alexandre Ghiti

Jan 23, 2023, 5:10:56 AM1/23/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
This is preliminary work that makes the code easier to understand.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/mm/kasan_init.c | 187 +++++++++++++++++++++++--------------
1 file changed, 117 insertions(+), 70 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index e1226709490f..9a5211ca8368 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -95,23 +95,13 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
}

static void __init kasan_populate_pud(pgd_t *pgd,
- unsigned long vaddr, unsigned long end,
- bool early)
+ unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
pud_t *pudp, *base_pud;
unsigned long next;

- if (early) {
- /*
- * We can't use pgd_page_vaddr here as it would return a linear
- * mapping address but it is not mapped yet, but when populating
- * early_pg_dir, we need the physical address and when populating
- * swapper_pg_dir, we need the kernel virtual address so use
- * pt_ops facility.
- */
- base_pud = pt_ops.get_pud_virt(pfn_to_phys(_pgd_pfn(*pgd)));
- } else if (pgd_none(*pgd)) {
+ if (pgd_none(*pgd)) {
base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
memcpy(base_pud, (void *)kasan_early_shadow_pud,
sizeof(pud_t) * PTRS_PER_PUD);
@@ -130,16 +120,10 @@ static void __init kasan_populate_pud(pgd_t *pgd,
next = pud_addr_end(vaddr, end);

if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) {
- if (early) {
- phys_addr = __pa(((uintptr_t)kasan_early_shadow_pmd));
- set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE));
+ phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
+ if (phys_addr) {
+ set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
continue;
- } else {
- phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
- if (phys_addr) {
- set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
- continue;
- }
}
}

@@ -152,35 +136,22 @@ static void __init kasan_populate_pud(pgd_t *pgd,
* it entirely, memblock could allocate a page at a physical address
* where KASAN is not populated yet and then we'd get a page fault.
*/
- if (!early)
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
+ set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
}

static void __init kasan_populate_p4d(pgd_t *pgd,
- unsigned long vaddr, unsigned long end,
- bool early)
+ unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
p4d_t *p4dp, *base_p4d;
unsigned long next;

- if (early) {
- /*
- * We can't use pgd_page_vaddr here as it would return a linear
- * mapping address but it is not mapped yet, but when populating
- * early_pg_dir, we need the physical address and when populating
- * swapper_pg_dir, we need the kernel virtual address so use
- * pt_ops facility.
- */
- base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgd)));
- } else {
- base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
- if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
- base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
- memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
- sizeof(p4d_t) * PTRS_PER_P4D);
- }
- }
+ base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
+ if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
+ base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
+ memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
+ sizeof(p4d_t) * PTRS_PER_P4D);
+ }

p4dp = base_p4d + p4d_index(vaddr);

@@ -188,20 +159,14 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
next = p4d_addr_end(vaddr, end);

if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) && (next - vaddr) >= P4D_SIZE) {
- if (early) {
- phys_addr = __pa(((uintptr_t)kasan_early_shadow_pud));
- set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE));
+ phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
+ if (phys_addr) {
+ set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
continue;
- } else {
- phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
- if (phys_addr) {
- set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
- continue;
- }
}
}

- kasan_populate_pud((pgd_t *)p4dp, vaddr, next, early);
+ kasan_populate_pud((pgd_t *)p4dp, vaddr, next);
} while (p4dp++, vaddr = next, vaddr != end);

/*
@@ -210,8 +175,7 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
* it entirely, memblock could allocate a page at a physical address
* where KASAN is not populated yet and then we'd get a page fault.
*/
- if (!early)
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
+ set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
}

#define kasan_early_shadow_pgd_next (pgtable_l5_enabled ? \
@@ -219,16 +183,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
(pgtable_l4_enabled ? \
(uintptr_t)kasan_early_shadow_pud : \
(uintptr_t)kasan_early_shadow_pmd))
-#define kasan_populate_pgd_next(pgdp, vaddr, next, early) \
+#define kasan_populate_pgd_next(pgdp, vaddr, next) \
(pgtable_l5_enabled ? \
- kasan_populate_p4d(pgdp, vaddr, next, early) : \
+ kasan_populate_p4d(pgdp, vaddr, next) : \
(pgtable_l4_enabled ? \
- kasan_populate_pud(pgdp, vaddr, next, early) : \
+ kasan_populate_pud(pgdp, vaddr, next) : \
kasan_populate_pmd((pud_t *)pgdp, vaddr, next)))

static void __init kasan_populate_pgd(pgd_t *pgdp,
- unsigned long vaddr, unsigned long end,
- bool early)
+ unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
unsigned long next;
@@ -237,11 +200,7 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
next = pgd_addr_end(vaddr, end);

if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) {
- if (early) {
- phys_addr = __pa((uintptr_t)kasan_early_shadow_pgd_next);
- set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE));
- continue;
- } else if (pgd_page_vaddr(*pgdp) ==
+ if (pgd_page_vaddr(*pgdp) ==
(unsigned long)lm_alias(kasan_early_shadow_pgd_next)) {
/*
* pgdp can't be none since kasan_early_init
@@ -258,7 +217,95 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
}
}

- kasan_populate_pgd_next(pgdp, vaddr, next, early);
+ kasan_populate_pgd_next(pgdp, vaddr, next);
+ } while (pgdp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_pud(p4d_t *p4dp,
+ unsigned long vaddr,
+ unsigned long end)
+{
+ pud_t *pudp, *base_pud;
+ phys_addr_t phys_addr;
+ unsigned long next;
+
+ if (!pgtable_l4_enabled) {
+ pudp = (pud_t *)p4dp;
+ } else {
+ base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+ pudp = base_pud + pud_index(vaddr);
+ }
+
+ do {
+ next = pud_addr_end(vaddr, end);
+
+ if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) &&
+ (next - vaddr) >= PUD_SIZE) {
+ phys_addr = __pa((uintptr_t)kasan_early_shadow_pmd);
+ set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE));
+ continue;
+ }
+
+ BUG();
+ } while (pudp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_p4d(pgd_t *pgdp,
+ unsigned long vaddr,
+ unsigned long end)
+{
+ p4d_t *p4dp, *base_p4d;
+ phys_addr_t phys_addr;
+ unsigned long next;
+
+ /*
+ * We can't use pgd_page_vaddr here as it would return a linear
+ * mapping address but it is not mapped yet, but when populating
+ * early_pg_dir, we need the physical address and when populating
+ * swapper_pg_dir, we need the kernel virtual address so use
+ * pt_ops facility.
+ * Note that this test is then completely equivalent to
+ * p4dp = p4d_offset(pgdp, vaddr)
+ */
+ if (!pgtable_l5_enabled) {
+ p4dp = (p4d_t *)pgdp;
+ } else {
+ base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+ p4dp = base_p4d + p4d_index(vaddr);
+ }
+
+ do {
+ next = p4d_addr_end(vaddr, end);
+
+ if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) &&
+ (next - vaddr) >= P4D_SIZE) {
+ phys_addr = __pa((uintptr_t)kasan_early_shadow_pud);
+ set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE));
+ continue;
+ }
+
+ kasan_early_populate_pud(p4dp, vaddr, next);
+ } while (p4dp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_pgd(pgd_t *pgdp,
+ unsigned long vaddr,
+ unsigned long end)
+{
+ phys_addr_t phys_addr;
+ unsigned long next;
+
+ do {
+ next = pgd_addr_end(vaddr, end);
+
+ if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+ (next - vaddr) >= PGDIR_SIZE) {
+ phys_addr = __pa((uintptr_t)kasan_early_shadow_p4d);
+ set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE));
+ continue;
+ }
+
+ kasan_early_populate_p4d(pgdp, vaddr, next);
} while (pgdp++, vaddr = next, vaddr != end);
}

@@ -295,16 +342,16 @@ asmlinkage void __init kasan_early_init(void)
PAGE_TABLE));
}

- kasan_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START),
- KASAN_SHADOW_START, KASAN_SHADOW_END, true);
+ kasan_early_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START),
+ KASAN_SHADOW_START, KASAN_SHADOW_END);

local_flush_tlb_all();
}

void __init kasan_swapper_init(void)
{
- kasan_populate_pgd(pgd_offset_k(KASAN_SHADOW_START),
- KASAN_SHADOW_START, KASAN_SHADOW_END, true);
+ kasan_early_populate_pgd(pgd_offset_k(KASAN_SHADOW_START),
+ KASAN_SHADOW_START, KASAN_SHADOW_END);

local_flush_tlb_all();
}
@@ -314,7 +361,7 @@ static void __init kasan_populate(void *start, void *end)
unsigned long vaddr = (unsigned long)start & PAGE_MASK;
unsigned long vend = PAGE_ALIGN((unsigned long)end);

- kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend, false);
+ kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend);

local_flush_tlb_all();
memset(start, KASAN_SHADOW_INIT, end - start);
--
2.37.2

Alexandre Ghiti

Jan 23, 2023, 5:11:57 AM1/23/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
In the previous implementation, the final kasan shadow region was mapped
with kasan_early_shadow_page because we did not clean up the early
mapping: the kasan region then had to be populated "in-place", which made
the code cumbersome.

So now we clear the early mapping and establish a temporary mapping while
we populate the kasan shadow region with just the kernel regions that
will actually be used.
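
The resulting kasan_init() flow, condensed from the diff below (only the
new steps, details and the population calls omitted):

	/* Install a temporary page table so the early shadow can be torn down. */
	create_tmp_mapping();
	csr_write(CSR_SATP, PFN_DOWN(__pa(tmp_pg_dir)) | satp_mode);

	/* Drop the early shadow mapping entirely... */
	kasan_early_clear_pgd(pgd_offset_k(KASAN_SHADOW_START),
			      KASAN_SHADOW_START, KASAN_SHADOW_END);

	/* ...populate the shadow of the regions actually used (fixmap,
	 * vmalloc, linear mapping, kernel)... */

	/* ...and switch back to the final page table. */
	csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
	local_flush_tlb_all();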

This new version uses the "generic" way of walking a page table whose
levels may be folded at runtime (avoiding the XXX_next macros).
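
Concretely, each level now uses the standard offset helpers, which
degenerate to the upper-level entry when the level is folded, instead of
dispatching on pgtable_l4/l5_enabled by hand; at the p4d level for
instance (see the diff below):

	p4dp = p4d_offset(pgdp, vaddr);	/* == (p4d_t *)pgdp when folded */
	do {
		next = p4d_addr_end(vaddr, end);
		/* try to map a whole huge region here, otherwise recurse */
		kasan_populate_pud(p4dp, vaddr, next);
	} while (p4dp++, vaddr = next, vaddr != end);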

It was successfully tested with outline instrumentation on an Ubuntu
kernel configuration.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/mm/kasan_init.c | 363 +++++++++++++++++++------------------
1 file changed, 184 insertions(+), 179 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 9a5211ca8368..5c7b1d07faf2 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -18,58 +18,48 @@
* For sv39, the region is aligned on PGDIR_SIZE so we only need to populate
* the page global directory with kasan_early_shadow_pmd.
*
- * For sv48 and sv57, the region is not aligned on PGDIR_SIZE so the mapping
- * must be divided as follows:
- * - the first PGD entry, although incomplete, is populated with
- * kasan_early_shadow_pud/p4d
- * - the PGD entries in the middle are populated with kasan_early_shadow_pud/p4d
- * - the last PGD entry is shared with the kernel mapping so populated at the
- * lower levels pud/p4d
- *
- * In addition, when shallow populating a kasan region (for example vmalloc),
- * this region may also not be aligned on PGDIR size, so we must go down to the
- * pud level too.
+ * For sv48 and sv57, the region start is aligned on PGDIR_SIZE whereas the end
+ * region is not and then we have to go down to the PUD level.
*/

extern pgd_t early_pg_dir[PTRS_PER_PGD];
+pgd_t tmp_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+p4d_t tmp_p4d[PTRS_PER_P4D] __page_aligned_bss;
+pud_t tmp_pud[PTRS_PER_PUD] __page_aligned_bss;

static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
- pte_t *ptep, *base_pte;
+ pte_t *ptep, *p;

- if (pmd_none(*pmd))
- base_pte = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE);
- else
- base_pte = (pte_t *)pmd_page_vaddr(*pmd);
+ if (pmd_none(*pmd)) {
+ p = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE);
+ set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(p)), PAGE_TABLE));
+ }

- ptep = base_pte + pte_index(vaddr);
+ ptep = pte_offset_kernel(pmd, vaddr);

do {
if (pte_none(*ptep)) {
phys_addr = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
set_pte(ptep, pfn_pte(PFN_DOWN(phys_addr), PAGE_KERNEL));
+ memset(__va(phys_addr), KASAN_SHADOW_INIT, PAGE_SIZE);
}
} while (ptep++, vaddr += PAGE_SIZE, vaddr != end);
-
- set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(base_pte)), PAGE_TABLE));
}

static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
- pmd_t *pmdp, *base_pmd;
+ pmd_t *pmdp, *p;
unsigned long next;

if (pud_none(*pud)) {
- base_pmd = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
- } else {
- base_pmd = (pmd_t *)pud_pgtable(*pud);
- if (base_pmd == lm_alias(kasan_early_shadow_pmd))
- base_pmd = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
+ p = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
+ set_pud(pud, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
}

- pmdp = base_pmd + pmd_index(vaddr);
+ pmdp = pmd_offset(pud, vaddr);

do {
next = pmd_addr_end(vaddr, end);
@@ -78,43 +68,28 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
phys_addr = memblock_phys_alloc(PMD_SIZE, PMD_SIZE);
if (phys_addr) {
set_pmd(pmdp, pfn_pmd(PFN_DOWN(phys_addr), PAGE_KERNEL));
+ memset(__va(phys_addr), KASAN_SHADOW_INIT, PMD_SIZE);
continue;
}
}

kasan_populate_pte(pmdp, vaddr, next);
} while (pmdp++, vaddr = next, vaddr != end);
-
- /*
- * Wait for the whole PGD to be populated before setting the PGD in
- * the page table, otherwise, if we did set the PGD before populating
- * it entirely, memblock could allocate a page at a physical address
- * where KASAN is not populated yet and then we'd get a page fault.
- */
- set_pud(pud, pfn_pud(PFN_DOWN(__pa(base_pmd)), PAGE_TABLE));
}

-static void __init kasan_populate_pud(pgd_t *pgd,
+static void __init kasan_populate_pud(p4d_t *p4d,
unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
- pud_t *pudp, *base_pud;
+ pud_t *pudp, *p;
unsigned long next;

- if (pgd_none(*pgd)) {
- base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
- memcpy(base_pud, (void *)kasan_early_shadow_pud,
- sizeof(pud_t) * PTRS_PER_PUD);
- } else {
- base_pud = (pud_t *)pgd_page_vaddr(*pgd);
- if (base_pud == lm_alias(kasan_early_shadow_pud)) {
- base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
- memcpy(base_pud, (void *)kasan_early_shadow_pud,
- sizeof(pud_t) * PTRS_PER_PUD);
- }
+ if (p4d_none(*p4d)) {
+ p = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
+ set_p4d(p4d, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
}

- pudp = base_pud + pud_index(vaddr);
+ pudp = pud_offset(p4d, vaddr);

do {
next = pud_addr_end(vaddr, end);
@@ -123,37 +98,28 @@ static void __init kasan_populate_pud(pgd_t *pgd,
phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
if (phys_addr) {
set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
+ memset(__va(phys_addr), KASAN_SHADOW_INIT, PUD_SIZE);
continue;
}
}

kasan_populate_pmd(pudp, vaddr, next);
} while (pudp++, vaddr = next, vaddr != end);
-
- /*
- * Wait for the whole PGD to be populated before setting the PGD in
- * the page table, otherwise, if we did set the PGD before populating
- * it entirely, memblock could allocate a page at a physical address
- * where KASAN is not populated yet and then we'd get a page fault.
- */
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
}

static void __init kasan_populate_p4d(pgd_t *pgd,
unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
- p4d_t *p4dp, *base_p4d;
+ p4d_t *p4dp, *p;
unsigned long next;

- base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
- if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
- base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
- memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
- sizeof(p4d_t) * PTRS_PER_P4D);
- }
+ if (pgd_none(*pgd)) {
+ p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE);
+ set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
+ }

- p4dp = base_p4d + p4d_index(vaddr);
+ p4dp = p4d_offset(pgd, vaddr);

do {
next = p4d_addr_end(vaddr, end);
@@ -162,34 +128,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
if (phys_addr) {
set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
+ memset(__va(phys_addr), KASAN_SHADOW_INIT, P4D_SIZE);
continue;
}
}

- kasan_populate_pud((pgd_t *)p4dp, vaddr, next);
+ kasan_populate_pud(p4dp, vaddr, next);
} while (p4dp++, vaddr = next, vaddr != end);
-
- /*
- * Wait for the whole P4D to be populated before setting the P4D in
- * the page table, otherwise, if we did set the P4D before populating
- * it entirely, memblock could allocate a page at a physical address
- * where KASAN is not populated yet and then we'd get a page fault.
- */
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
}

-#define kasan_early_shadow_pgd_next (pgtable_l5_enabled ? \
- (uintptr_t)kasan_early_shadow_p4d : \
- (pgtable_l4_enabled ? \
- (uintptr_t)kasan_early_shadow_pud : \
- (uintptr_t)kasan_early_shadow_pmd))
-#define kasan_populate_pgd_next(pgdp, vaddr, next) \
- (pgtable_l5_enabled ? \
- kasan_populate_p4d(pgdp, vaddr, next) : \
- (pgtable_l4_enabled ? \
- kasan_populate_pud(pgdp, vaddr, next) : \
- kasan_populate_pmd((pud_t *)pgdp, vaddr, next)))
-
static void __init kasan_populate_pgd(pgd_t *pgdp,
unsigned long vaddr, unsigned long end)
{
@@ -199,25 +146,86 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
do {
next = pgd_addr_end(vaddr, end);

- if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) {
- if (pgd_page_vaddr(*pgdp) ==
- (unsigned long)lm_alias(kasan_early_shadow_pgd_next)) {
- /*
- * pgdp can't be none since kasan_early_init
- * initialized all KASAN shadow region with
- * kasan_early_shadow_pud: if this is still the
- * case, that means we can try to allocate a
- * hugepage as a replacement.
- */
- phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE);
- if (phys_addr) {
- set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL));
- continue;
- }
+ if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+ (next - vaddr) >= PGDIR_SIZE) {
+ phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE);
+ if (phys_addr) {
+ set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL));
+ memset(__va(phys_addr), KASAN_SHADOW_INIT, PGDIR_SIZE);
+ continue;
}
}

- kasan_populate_pgd_next(pgdp, vaddr, next);
+ kasan_populate_p4d(pgdp, vaddr, next);
+ } while (pgdp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_pud(p4d_t *p4dp,
+ unsigned long vaddr, unsigned long end)
+{
+ pud_t *pudp, *base_pud;
+ unsigned long next;
+
+ if (!pgtable_l4_enabled) {
+ pudp = (pud_t *)p4dp;
+ } else {
+ base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+ pudp = base_pud + pud_index(vaddr);
+ }
+
+ do {
+ next = pud_addr_end(vaddr, end);
+
+ if (IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) {
+ pud_clear(pudp);
+ continue;
+ }
+
+ BUG();
+ } while (pudp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_p4d(pgd_t *pgdp,
+ unsigned long vaddr, unsigned long end)
+{
+ p4d_t *p4dp, *base_p4d;
+ unsigned long next;
+
+ if (!pgtable_l5_enabled) {
+ p4dp = (p4d_t *)pgdp;
+ } else {
+ base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+ p4dp = base_p4d + p4d_index(vaddr);
+ }
+
+ do {
+ next = p4d_addr_end(vaddr, end);
+
+ if (pgtable_l4_enabled && IS_ALIGNED(vaddr, P4D_SIZE) &&
+ (next - vaddr) >= P4D_SIZE) {
+ p4d_clear(p4dp);
+ continue;
+ }
+
+ kasan_early_clear_pud(p4dp, vaddr, next);
+ } while (p4dp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_pgd(pgd_t *pgdp,
+ unsigned long vaddr, unsigned long end)
+{
+ unsigned long next;
+
+ do {
+ next = pgd_addr_end(vaddr, end);
+
+ if (pgtable_l5_enabled && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+ (next - vaddr) >= PGDIR_SIZE) {
+ pgd_clear(pgdp);
+ continue;
+ }
+
+ kasan_early_clear_p4d(pgdp, vaddr, next);
} while (pgdp++, vaddr = next, vaddr != end);
}

@@ -362,117 +370,64 @@ static void __init kasan_populate(void *start, void *end)
unsigned long vend = PAGE_ALIGN((unsigned long)end);

kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend);
-
- local_flush_tlb_all();
- memset(start, KASAN_SHADOW_INIT, end - start);
}

-static void __init kasan_shallow_populate_pmd(pgd_t *pgdp,
+static void __init kasan_shallow_populate_pud(p4d_t *p4d,
unsigned long vaddr, unsigned long end)
{
unsigned long next;
- pmd_t *pmdp, *base_pmd;
- bool is_kasan_pte;
-
- base_pmd = (pmd_t *)pgd_page_vaddr(*pgdp);
- pmdp = base_pmd + pmd_index(vaddr);
-
- do {
- next = pmd_addr_end(vaddr, end);
- is_kasan_pte = (pmd_pgtable(*pmdp) == lm_alias(kasan_early_shadow_pte));
-
- if (is_kasan_pte)
- pmd_clear(pmdp);
- } while (pmdp++, vaddr = next, vaddr != end);
-}
-
-static void __init kasan_shallow_populate_pud(pgd_t *pgdp,
- unsigned long vaddr, unsigned long end)
-{
- unsigned long next;
- pud_t *pudp, *base_pud;
- pmd_t *base_pmd;
- bool is_kasan_pmd;
-
- base_pud = (pud_t *)pgd_page_vaddr(*pgdp);
- pudp = base_pud + pud_index(vaddr);
+ void *p;
+ pud_t *pud_k = pud_offset(p4d, vaddr);

do {
next = pud_addr_end(vaddr, end);
- is_kasan_pmd = (pud_pgtable(*pudp) == lm_alias(kasan_early_shadow_pmd));

- if (!is_kasan_pmd)
- continue;
-
- base_pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
- set_pud(pudp, pfn_pud(PFN_DOWN(__pa(base_pmd)), PAGE_TABLE));
-
- if (IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE)
+ if (pud_none(*pud_k)) {
+ p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ set_pud(pud_k, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
continue;
+ }

- memcpy(base_pmd, (void *)kasan_early_shadow_pmd, PAGE_SIZE);
- kasan_shallow_populate_pmd((pgd_t *)pudp, vaddr, next);
- } while (pudp++, vaddr = next, vaddr != end);
+ BUG();
+ } while (pud_k++, vaddr = next, vaddr != end);
}

-static void __init kasan_shallow_populate_p4d(pgd_t *pgdp,
+static void __init kasan_shallow_populate_p4d(pgd_t *pgd,
unsigned long vaddr, unsigned long end)
{
unsigned long next;
- p4d_t *p4dp, *base_p4d;
- pud_t *base_pud;
- bool is_kasan_pud;
-
- base_p4d = (p4d_t *)pgd_page_vaddr(*pgdp);
- p4dp = base_p4d + p4d_index(vaddr);
+ void *p;
+ p4d_t *p4d_k = p4d_offset(pgd, vaddr);

do {
next = p4d_addr_end(vaddr, end);
- is_kasan_pud = (p4d_pgtable(*p4dp) == lm_alias(kasan_early_shadow_pud));
-
- if (!is_kasan_pud)
- continue;
-
- base_pud = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
- set_p4d(p4dp, pfn_p4d(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));

- if (IS_ALIGNED(vaddr, P4D_SIZE) && (next - vaddr) >= P4D_SIZE)
+ if (p4d_none(*p4d_k)) {
+ p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ set_p4d(p4d_k, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
continue;
+ }

- memcpy(base_pud, (void *)kasan_early_shadow_pud, PAGE_SIZE);
- kasan_shallow_populate_pud((pgd_t *)p4dp, vaddr, next);
- } while (p4dp++, vaddr = next, vaddr != end);
+ kasan_shallow_populate_pud(p4d_k, vaddr, end);
+ } while (p4d_k++, vaddr = next, vaddr != end);
}

-#define kasan_shallow_populate_pgd_next(pgdp, vaddr, next) \
- (pgtable_l5_enabled ? \
- kasan_shallow_populate_p4d(pgdp, vaddr, next) : \
- (pgtable_l4_enabled ? \
- kasan_shallow_populate_pud(pgdp, vaddr, next) : \
- kasan_shallow_populate_pmd(pgdp, vaddr, next)))
-
static void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long end)
{
unsigned long next;
void *p;
pgd_t *pgd_k = pgd_offset_k(vaddr);
- bool is_kasan_pgd_next;

do {
next = pgd_addr_end(vaddr, end);
- is_kasan_pgd_next = (pgd_page_vaddr(*pgd_k) ==
- (unsigned long)lm_alias(kasan_early_shadow_pgd_next));

- if (is_kasan_pgd_next) {
+ if (pgd_none(*pgd_k)) {
p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
- }
-
- if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE)
continue;
+ }

- memcpy(p, (void *)kasan_early_shadow_pgd_next, PAGE_SIZE);
- kasan_shallow_populate_pgd_next(pgd_k, vaddr, next);
+ kasan_shallow_populate_p4d(pgd_k, vaddr, next);
} while (pgd_k++, vaddr = next, vaddr != end);
}

@@ -482,7 +437,37 @@ static void __init kasan_shallow_populate(void *start, void *end)
unsigned long vend = PAGE_ALIGN((unsigned long)end);

kasan_shallow_populate_pgd(vaddr, vend);
- local_flush_tlb_all();
+}
+
+void create_tmp_mapping(void)
+{
+ void *ptr;
+ p4d_t *base_p4d;
+
+ /*
+ * We need to clean the early mapping: this is hard to achieve "in-place",
+ * so install a temporary mapping like arm64 and x86 do.
+ */
+ memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(pgd_t) * PTRS_PER_PGD);
+
+ /* Copy the last p4d since it is shared with the kernel mapping. */
+ if (pgtable_l5_enabled) {
+ ptr = (p4d_t *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END));
+ memcpy(tmp_p4d, ptr, sizeof(p4d_t) * PTRS_PER_P4D);
+ set_pgd(&tmp_pg_dir[pgd_index(KASAN_SHADOW_END)],
+ pfn_pgd(PFN_DOWN(__pa(tmp_p4d)), PAGE_TABLE));
+ base_p4d = tmp_p4d;
+ } else {
+ base_p4d = (p4d_t *)tmp_pg_dir;
+ }
+
+ /* Copy the last pud since it is shared with the kernel mapping. */
+ if (pgtable_l4_enabled) {
+ ptr = (pud_t *)p4d_page_vaddr(*(base_p4d + p4d_index(KASAN_SHADOW_END)));
+ memcpy(tmp_pud, ptr, sizeof(pud_t) * PTRS_PER_PUD);
+ set_p4d(&base_p4d[p4d_index(KASAN_SHADOW_END)],
+ pfn_p4d(PFN_DOWN(__pa(tmp_pud)), PAGE_TABLE));
+ }
}

void __init kasan_init(void)
@@ -490,10 +475,27 @@ void __init kasan_init(void)
phys_addr_t p_start, p_end;
u64 i;

- if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
+ create_tmp_mapping();
+ csr_write(CSR_SATP, PFN_DOWN(__pa(tmp_pg_dir)) | satp_mode);
+
+ kasan_early_clear_pgd(pgd_offset_k(KASAN_SHADOW_START),
+ KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+ kasan_populate_early_shadow((void *)kasan_mem_to_shadow((void *)FIXADDR_START),
+ (void *)kasan_mem_to_shadow((void *)VMALLOC_START));
+
+ if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
kasan_shallow_populate(
(void *)kasan_mem_to_shadow((void *)VMALLOC_START),
(void *)kasan_mem_to_shadow((void *)VMALLOC_END));
+ /* Shallow populate modules and BPF which are vmalloc-allocated */
+ kasan_shallow_populate(
+ (void *)kasan_mem_to_shadow((void *)MODULES_VADDR),
+ (void *)kasan_mem_to_shadow((void *)MODULES_END));
+ } else {
+ kasan_populate_early_shadow((void *)kasan_mem_to_shadow((void *)VMALLOC_START),
+ (void *)kasan_mem_to_shadow((void *)VMALLOC_END));
+ }

/* Populate the linear mapping */
for_each_mem_range(i, &p_start, &p_end) {
@@ -506,8 +508,8 @@ void __init kasan_init(void)
kasan_populate(kasan_mem_to_shadow(start), kasan_mem_to_shadow(end));
}

- /* Populate kernel, BPF, modules mapping */
- kasan_populate(kasan_mem_to_shadow((const void *)MODULES_VADDR),
+ /* Populate kernel */
+ kasan_populate(kasan_mem_to_shadow((const void *)MODULES_END),
kasan_mem_to_shadow((const void *)MODULES_VADDR + SZ_2G));

for (i = 0; i < PTRS_PER_PTE; i++)
@@ -518,4 +520,7 @@ void __init kasan_init(void)

memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
init_task.kasan_depth = 0;
+
+ csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
+ local_flush_tlb_all();
}
--
2.37.2

Alexandre Ghiti

Jan 23, 2023, 5:12:57 AM1/23/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
The early virtual address should lie in the kernel address space for
inline kasan instrumentation to succeed: otherwise kasan dereferences an
address that is not mapped at all, since kasan only maps the *kernel*
address space, not userspace.

Simply use the very first address of the kernel address space for the
early fdt mapping.
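
With the macro below on sv39, for instance, PGDIR_SIZE is 1GB and the
kernel half of the address space is covered by PTRS_PER_PGD / 2 = 256 PGD
entries, so the arithmetic gives (sketch, ADDRESS_SPACE_END == -1UL):

	DTB_EARLY_BASE_VA = 0xffffffffffffffff - (256 * 0x40000000) + 1
			  = 0xffffffc000000000

i.e. the very first address of the sv39 kernel address space.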

It allowed an Ubuntu kernel to boot successfully with inline
instrumentation.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/mm/init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 478d6763a01a..87f6a5d475a6 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -57,7 +57,7 @@ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
EXPORT_SYMBOL(empty_zero_page);

extern char _start[];
-#define DTB_EARLY_BASE_VA PGDIR_SIZE
+#define DTB_EARLY_BASE_VA (ADDRESS_SPACE_END - (PTRS_PER_PGD / 2 * PGDIR_SIZE) + 1)
void *_dtb_early_va __initdata;
uintptr_t _dtb_early_pa __initdata;

--
2.37.2

Alexandre Ghiti

Jan 23, 2023, 5:13:59 AM1/23/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti, Alexandre Ghiti
From: Alexandre Ghiti <alex...@alexghiti.eu.rivosinc.com>

The EFI stub must not use any KASAN-instrumented code: when it runs, the
kernel proper has not yet initialized the thread pointer nor the mapping
for the KASAN shadow region.

So avoid the generic, instrumented strcmp and instead use the one
implemented in drivers/firmware/efi/libstub/string.c.
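
For reference, with generic KASAN and inline instrumentation the compiler
emits a shadow check before (almost) every memory access, conceptually
something like this sketch:

	s8 *shadow = (s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT)
			    + KASAN_SHADOW_OFFSET);
	if (unlikely(*shadow))	/* this load requires the shadow mapping */
		__asan_report_load8_noabort(addr);

In the stub the shadow region is not mapped yet, so the *shadow load
itself faults, hence no instrumented code at all can run there.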

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/kernel/image-vars.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/arch/riscv/kernel/image-vars.h b/arch/riscv/kernel/image-vars.h
index 7e2962ef73f9..15616155008c 100644
--- a/arch/riscv/kernel/image-vars.h
+++ b/arch/riscv/kernel/image-vars.h
@@ -23,8 +23,6 @@
* linked at. The routines below are all implemented in assembler in a
* position independent manner
*/
-__efistub_strcmp = strcmp;
-
__efistub__start = _start;
__efistub__start_kernel = _start_kernel;
__efistub__end = _end;
--
2.37.2

Alexandre Ghiti

Jan 23, 2023, 5:15:00 AM1/23/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
The KASAN shadow region was moved next to the kernel mapping, but the
ptdump code was not updated and this breaks the dump of the kernel page
table. Fix this by moving the KASAN shadow region entries to their new
position in ptdump.
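
ptdump walks the kernel page table in ascending address order and only
ever advances through address_markers[], roughly like this sketch (not
the exact riscv code):

	if (addr >= st->marker[1].start_address) {
		st->marker++;
		pt_dump_seq_printf(st->seq, "---[ %s ]---\n", st->marker->name);
	}

so the markers must be sorted by start_address, which was no longer the
case for the KASAN entries once the shadow region moved up next to the
kernel mapping.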

Fixes: f7ae02333d13 ("riscv: Move KASAN mapping next to the kernel mapping")
Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/mm/ptdump.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
index 830e7de65e3a..20a9f991a6d7 100644
--- a/arch/riscv/mm/ptdump.c
+++ b/arch/riscv/mm/ptdump.c
@@ -59,10 +59,6 @@ struct ptd_mm_info {
};

enum address_markers_idx {
-#ifdef CONFIG_KASAN
- KASAN_SHADOW_START_NR,
- KASAN_SHADOW_END_NR,
-#endif
FIXMAP_START_NR,
FIXMAP_END_NR,
PCI_IO_START_NR,
@@ -74,6 +70,10 @@ enum address_markers_idx {
VMALLOC_START_NR,
VMALLOC_END_NR,
PAGE_OFFSET_NR,
+#ifdef CONFIG_KASAN
+ KASAN_SHADOW_START_NR,
+ KASAN_SHADOW_END_NR,
+#endif
#ifdef CONFIG_64BIT
MODULES_MAPPING_NR,
KERNEL_MAPPING_NR,
@@ -82,10 +82,6 @@ enum address_markers_idx {
};

static struct addr_marker address_markers[] = {
-#ifdef CONFIG_KASAN
- {0, "Kasan shadow start"},
- {0, "Kasan shadow end"},
-#endif
{0, "Fixmap start"},
{0, "Fixmap end"},
{0, "PCI I/O start"},
@@ -97,6 +93,10 @@ static struct addr_marker address_markers[] = {
{0, "vmalloc() area"},
{0, "vmalloc() end"},
{0, "Linear mapping"},
+#ifdef CONFIG_KASAN
+ {0, "Kasan shadow start"},
+ {0, "Kasan shadow end"},
+#endif
#ifdef CONFIG_64BIT
{0, "Modules/BPF mapping"},
{0, "Kernel mapping"},
@@ -362,10 +362,6 @@ static int __init ptdump_init(void)
{
unsigned int i, j;

-#ifdef CONFIG_KASAN
- address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START;
- address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END;
-#endif
address_markers[FIXMAP_START_NR].start_address = FIXADDR_START;
address_markers[FIXMAP_END_NR].start_address = FIXADDR_TOP;
address_markers[PCI_IO_START_NR].start_address = PCI_IO_START;
@@ -377,6 +373,10 @@ static int __init ptdump_init(void)
address_markers[VMALLOC_START_NR].start_address = VMALLOC_START;
address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
address_markers[PAGE_OFFSET_NR].start_address = PAGE_OFFSET;
+#ifdef CONFIG_KASAN
+ address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START;
+ address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END;
+#endif
#ifdef CONFIG_64BIT
address_markers[MODULES_MAPPING_NR].start_address = MODULES_VADDR;
address_markers[KERNEL_MAPPING_NR].start_address = kernel_map.virt_addr;
--
2.37.2

Alexandre Ghiti

Jan 23, 2023, 5:16:01 AM1/23/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
If KASAN is enabled, VMAP_STACK depends on KASAN_VMALLOC, so select
KASAN_VMALLOC unconditionally with KASAN so that VMAP_STACK can be
enabled by default.
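
For reference, the dependency comes from the generic VMAP_STACK entry in
arch/Kconfig, which around v6.2 looks like this (quoted from memory):

config VMAP_STACK
	default y
	bool "Use a virtually-mapped stack"
	depends on HAVE_ARCH_VMAP_STACK
	depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC

so with generic KASAN, selecting KASAN_VMALLOC is what keeps VMAP_STACK
available.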

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index e2b656043abf..0f226d3261ca 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -117,6 +117,7 @@ config RISCV
select HAVE_RSEQ
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
+ select KASAN_VMALLOC if KASAN
select MODULES_USE_ELF_RELA if MODULES
select MODULE_SECTIONS if MODULES
select OF
--
2.37.2

Ard Biesheuvel

Jan 23, 2023, 5:19:31 AM1/23/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
On Mon, 23 Jan 2023 at 11:14, Alexandre Ghiti <alex...@rivosinc.com> wrote:
>
> From: Alexandre Ghiti <alex...@alexghiti.eu.rivosinc.com>
>
> The EFI stub must not use any KASAN instrumented code as the kernel
> proper did not initialize the thread pointer and the mapping for the
> KASAN shadow region.
>
> Avoid using the generic strcmp function, instead use the one in
> drivers/firmware/efi/libstub/string.c.
>
> Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>

Acked-by: Ard Biesheuvel <ar...@kernel.org>

Conor Dooley

Jan 23, 2023, 5:15:51 PM1/23/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org
Hey Alex,

FYI this patch has a couple of places with spaces used rather than tabs
for indentation.

Thanks,
Conor.


Alexandre Ghiti

Jan 24, 2023, 3:00:38 AM1/24/23
to Conor Dooley, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org
Hi Conor,

On Mon, Jan 23, 2023 at 11:15 PM Conor Dooley <co...@kernel.org> wrote:
>
> Hey Alex,
>
> FYI this patch has a couple places with spaces used rather than tabs for
> indent.

Damn, I forgot to run checkpatch this time...

Thanks,

Alex

Alexandre Ghiti

Jan 25, 2023, 3:23:44 AM1/25/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
v3:
- Add AB from Ard in patch 4, thanks
- Fix checkpatch issues in patch 1, thanks Conor

Alexandre Ghiti

Jan 25, 2023, 3:24:45 AM1/25/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
This is preliminary work that makes the code easier to understand.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/mm/kasan_init.c | 185 +++++++++++++++++++++++--------------
1 file changed, 116 insertions(+), 69 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index e1226709490f..2a48eba6bd08 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -95,23 +95,13 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
}

static void __init kasan_populate_pud(pgd_t *pgd,
- unsigned long vaddr, unsigned long end,
- bool early)
+ unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
pud_t *pudp, *base_pud;
unsigned long next;

- if (early) {
- /*
- * We can't use pgd_page_vaddr here as it would return a linear
- * mapping address but it is not mapped yet, but when populating
- * early_pg_dir, we need the physical address and when populating
- * swapper_pg_dir, we need the kernel virtual address so use
- * pt_ops facility.
- */
- base_pud = pt_ops.get_pud_virt(pfn_to_phys(_pgd_pfn(*pgd)));
- } else if (pgd_none(*pgd)) {
+ if (pgd_none(*pgd)) {
base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
memcpy(base_pud, (void *)kasan_early_shadow_pud,
sizeof(pud_t) * PTRS_PER_PUD);
@@ -130,16 +120,10 @@ static void __init kasan_populate_pud(pgd_t *pgd,
next = pud_addr_end(vaddr, end);

if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) {
- if (early) {
- phys_addr = __pa(((uintptr_t)kasan_early_shadow_pmd));
- set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE));
+ phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
+ if (phys_addr) {
+ set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
continue;
- } else {
- phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
- if (phys_addr) {
- set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
- continue;
- }
}
}

@@ -152,34 +136,21 @@ static void __init kasan_populate_pud(pgd_t *pgd,
* it entirely, memblock could allocate a page at a physical address
* where KASAN is not populated yet and then we'd get a page fault.
*/
- if (!early)
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
+ set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
}

static void __init kasan_populate_p4d(pgd_t *pgd,
- unsigned long vaddr, unsigned long end,
- bool early)
+ unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
p4d_t *p4dp, *base_p4d;
unsigned long next;

- if (early) {
- /*
- * We can't use pgd_page_vaddr here as it would return a linear
- * mapping address but it is not mapped yet, but when populating
- * early_pg_dir, we need the physical address and when populating
- * swapper_pg_dir, we need the kernel virtual address so use
- * pt_ops facility.
- */
- base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgd)));
- } else {
- base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
- if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
- base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
- memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
- sizeof(p4d_t) * PTRS_PER_P4D);
- }
+ base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
+ if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
+ base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
+ memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
+ sizeof(p4d_t) * PTRS_PER_P4D);
}

p4dp = base_p4d + p4d_index(vaddr);
@@ -188,20 +159,14 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
next = p4d_addr_end(vaddr, end);

if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) && (next - vaddr) >= P4D_SIZE) {
- if (early) {
- phys_addr = __pa(((uintptr_t)kasan_early_shadow_pud));
- set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE));
+ phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
+ if (phys_addr) {
+ set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
continue;
- } else {
- phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
- if (phys_addr) {
- set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
- continue;
- }
}
}

- kasan_populate_pud((pgd_t *)p4dp, vaddr, next, early);
+ kasan_populate_pud((pgd_t *)p4dp, vaddr, next);
} while (p4dp++, vaddr = next, vaddr != end);

/*
@@ -210,8 +175,7 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
* it entirely, memblock could allocate a page at a physical address
* where KASAN is not populated yet and then we'd get a page fault.
*/
- if (!early)
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
+ set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
}

#define kasan_early_shadow_pgd_next (pgtable_l5_enabled ? \
@@ -219,16 +183,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
(pgtable_l4_enabled ? \
(uintptr_t)kasan_early_shadow_pud : \
(uintptr_t)kasan_early_shadow_pmd))
-#define kasan_populate_pgd_next(pgdp, vaddr, next, early) \
+#define kasan_populate_pgd_next(pgdp, vaddr, next) \
(pgtable_l5_enabled ? \
- kasan_populate_p4d(pgdp, vaddr, next, early) : \
+ kasan_populate_p4d(pgdp, vaddr, next) : \
(pgtable_l4_enabled ? \
- kasan_populate_pud(pgdp, vaddr, next, early) : \
+ kasan_populate_pud(pgdp, vaddr, next) : \
kasan_populate_pmd((pud_t *)pgdp, vaddr, next)))

static void __init kasan_populate_pgd(pgd_t *pgdp,
- unsigned long vaddr, unsigned long end,
- bool early)
+ unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
unsigned long next;
@@ -237,11 +200,7 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
next = pgd_addr_end(vaddr, end);

if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) {
- if (early) {
- phys_addr = __pa((uintptr_t)kasan_early_shadow_pgd_next);
- set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE));
- continue;
- } else if (pgd_page_vaddr(*pgdp) ==
+ if (pgd_page_vaddr(*pgdp) ==
(unsigned long)lm_alias(kasan_early_shadow_pgd_next)) {
/*
* pgdp can't be none since kasan_early_init
@@ -258,7 +217,95 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
}
}

- kasan_populate_pgd_next(pgdp, vaddr, next, early);
+ kasan_populate_pgd_next(pgdp, vaddr, next);
+ } while (pgdp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_pud(p4d_t *p4dp,
+ unsigned long vaddr,
+ unsigned long end)
+{
+ pud_t *pudp, *base_pud;
+ phys_addr_t phys_addr;
+ unsigned long next;
+
+ if (!pgtable_l4_enabled) {
+ pudp = (pud_t *)p4dp;
+ } else {
+ base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+ pudp = base_pud + pud_index(vaddr);
+ }
+
+ do {
+ next = pud_addr_end(vaddr, end);
+
+ if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) &&
+ (next - vaddr) >= PUD_SIZE) {
+ phys_addr = __pa((uintptr_t)kasan_early_shadow_pmd);
+ set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE));
+ continue;
+ }
+
+ BUG();
+ } while (pudp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_p4d(pgd_t *pgdp,
+ unsigned long vaddr,
+ unsigned long end)
+{
+ p4d_t *p4dp, *base_p4d;
+ phys_addr_t phys_addr;
+ unsigned long next;
+
+ /*
+ * We can't use pgd_page_vaddr here as it would return a linear
+ * mapping address but it is not mapped yet, but when populating
+ * early_pg_dir, we need the physical address and when populating
+ * swapper_pg_dir, we need the kernel virtual address so use
+ * pt_ops facility.
+ * Note that this test is then completely equivalent to
+ * p4dp = p4d_offset(pgdp, vaddr)
+ */
+ if (!pgtable_l5_enabled) {
+ p4dp = (p4d_t *)pgdp;
+ } else {
+ base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+ p4dp = base_p4d + p4d_index(vaddr);
+ }
+
+ do {
+ next = p4d_addr_end(vaddr, end);
+
+ if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) &&
+ (next - vaddr) >= P4D_SIZE) {
+ phys_addr = __pa((uintptr_t)kasan_early_shadow_pud);
+ set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE));
+ continue;
+ }
+
+ kasan_early_populate_pud(p4dp, vaddr, next);
+ } while (p4dp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_pgd(pgd_t *pgdp,
+ unsigned long vaddr,
+ unsigned long end)
+{
+ phys_addr_t phys_addr;
+ unsigned long next;
+
+ do {
+ next = pgd_addr_end(vaddr, end);
+
+ if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+ (next - vaddr) >= PGDIR_SIZE) {
+ phys_addr = __pa((uintptr_t)kasan_early_shadow_p4d);
+ set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE));
+ continue;
+ }
+
+ kasan_early_populate_p4d(pgdp, vaddr, next);
} while (pgdp++, vaddr = next, vaddr != end);
}

@@ -295,16 +342,16 @@ asmlinkage void __init kasan_early_init(void)
PAGE_TABLE));
}

- kasan_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START),
- KASAN_SHADOW_START, KASAN_SHADOW_END, true);
+ kasan_early_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START),
+ KASAN_SHADOW_START, KASAN_SHADOW_END);

local_flush_tlb_all();
}

void __init kasan_swapper_init(void)
{
- kasan_populate_pgd(pgd_offset_k(KASAN_SHADOW_START),
- KASAN_SHADOW_START, KASAN_SHADOW_END, true);
+ kasan_early_populate_pgd(pgd_offset_k(KASAN_SHADOW_START),
+ KASAN_SHADOW_START, KASAN_SHADOW_END);

local_flush_tlb_all();
}
@@ -314,7 +361,7 @@ static void __init kasan_populate(void *start, void *end)
unsigned long vaddr = (unsigned long)start & PAGE_MASK;
unsigned long vend = PAGE_ALIGN((unsigned long)end);

- kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend, false);
+ kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend);

local_flush_tlb_all();
memset(start, KASAN_SHADOW_INIT, end - start);
--
2.37.2

Alexandre Ghiti

Jan 25, 2023, 3:25:46 AM1/25/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
In the previous implementation, the final kasan shadow region was mapped
with kasan_early_shadow_page because we did not clean up the early
mapping: the kasan region then had to be populated "in-place", which made
the code cumbersome.

So now we clear the early mapping and establish a temporary mapping while
we populate the kasan shadow region with just the kernel regions that
will actually be used.

This new version uses the "generic" way of walking a page table whose
levels may be folded at runtime (avoiding the XXX_next macros).

It was successfully tested with outline instrumentation on an Ubuntu
kernel configuration.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/mm/kasan_init.c | 361 +++++++++++++++++++------------------
1 file changed, 183 insertions(+), 178 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 2a48eba6bd08..5c7b1d07faf2 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -18,58 +18,48 @@
* For sv39, the region is aligned on PGDIR_SIZE so we only need to populate
* the page global directory with kasan_early_shadow_pmd.
*
- * For sv48 and sv57, the region is not aligned on PGDIR_SIZE so the mapping
- * must be divided as follows:
- * - the first PGD entry, although incomplete, is populated with
- * kasan_early_shadow_pud/p4d
- * - the PGD entries in the middle are populated with kasan_early_shadow_pud/p4d
- * - the last PGD entry is shared with the kernel mapping so populated at the
- * lower levels pud/p4d
- *
- * In addition, when shallow populating a kasan region (for example vmalloc),
- * this region may also not be aligned on PGDIR size, so we must go down to the
- * pud level too.
+ * For sv48 and sv57, the region start is aligned on PGDIR_SIZE whereas the end
+ * region is not and then we have to go down to the PUD level.
*/

extern pgd_t early_pg_dir[PTRS_PER_PGD];
+pgd_t tmp_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+p4d_t tmp_p4d[PTRS_PER_P4D] __page_aligned_bss;
+pud_t tmp_pud[PTRS_PER_PUD] __page_aligned_bss;

static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
[...]

-static void __init kasan_populate_pud(pgd_t *pgd,
+static void __init kasan_populate_pud(p4d_t *p4d,
unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
- pud_t *pudp, *base_pud;
+ pud_t *pudp, *p;
unsigned long next;
[...]
} while (pudp++, vaddr = next, vaddr != end);
-
- /*
- * Wait for the whole PGD to be populated before setting the PGD in
- * the page table, otherwise, if we did set the PGD before populating
- * it entirely, memblock could allocate a page at a physical address
- * where KASAN is not populated yet and then we'd get a page fault.
- */
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
}

static void __init kasan_populate_p4d(pgd_t *pgd,
unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
- p4d_t *p4dp, *base_p4d;
+ p4d_t *p4dp, *p;
unsigned long next;

- base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
- if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
- base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
- memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
- sizeof(p4d_t) * PTRS_PER_P4D);
+ if (pgd_none(*pgd)) {
+ p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE);
+ set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
}

- p4dp = base_p4d + p4d_index(vaddr);
+ p4dp = p4d_offset(pgd, vaddr);

do {
next = p4d_addr_end(vaddr, end);
@@ -162,34 +128,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
if (phys_addr) {
set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
+ memset(__va(phys_addr), KASAN_SHADOW_INIT, P4D_SIZE);
continue;
}
}

- kasan_populate_pud((pgd_t *)p4dp, vaddr, next);
+ kasan_populate_pud(p4dp, vaddr, next);
} while (p4dp++, vaddr = next, vaddr != end);
-
- /*
- * Wait for the whole P4D to be populated before setting the P4D in
- * the page table, otherwise, if we did set the P4D before populating
- * it entirely, memblock could allocate a page at a physical address
- * where KASAN is not populated yet and then we'd get a page fault.
- */
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
}

-#define kasan_early_shadow_pgd_next (pgtable_l5_enabled ? \
- (uintptr_t)kasan_early_shadow_p4d : \
- (pgtable_l4_enabled ? \
- (uintptr_t)kasan_early_shadow_pud : \
- (uintptr_t)kasan_early_shadow_pmd))
-#define kasan_populate_pgd_next(pgdp, vaddr, next) \
- (pgtable_l5_enabled ? \
- kasan_populate_p4d(pgdp, vaddr, next) : \
- (pgtable_l4_enabled ? \
- kasan_populate_pud(pgdp, vaddr, next) : \
- kasan_populate_pmd((pud_t *)pgdp, vaddr, next)))
-
static void __init kasan_populate_pgd(pgd_t *pgdp,
unsigned long vaddr, unsigned long end)
{
@@ -199,25 +146,86 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
do {
next = pgd_addr_end(vaddr, end);

- if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) {
- if (pgd_page_vaddr(*pgdp) ==
- (unsigned long)lm_alias(kasan_early_shadow_pgd_next)) {
- /*
- * pgdp can't be none since kasan_early_init
- * initialized all KASAN shadow region with
- * kasan_early_shadow_pud: if this is still the
- * case, that means we can try to allocate a
- * hugepage as a replacement.
- */
- phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE);
- if (phys_addr) {
- set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL));
- continue;
- }
+ if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+ (next - vaddr) >= PGDIR_SIZE) {
+ phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE);
+ if (phys_addr) {
+ set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL));
+ memset(__va(phys_addr), KASAN_SHADOW_INIT, PGDIR_SIZE);
+ continue;
}
}

- kasan_populate_pgd_next(pgdp, vaddr, next);
+ kasan_populate_p4d(pgdp, vaddr, next);
+ } while (pgdp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_pud(p4d_t *p4dp,
+ unsigned long vaddr, unsigned long end)
+{
+ pud_t *pudp, *base_pud;
+ unsigned long next;
+
+ if (!pgtable_l4_enabled) {
+ pudp = (pud_t *)p4dp;
+ } else {
+ base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+ pudp = base_pud + pud_index(vaddr);
+ }
+
+ do {
+ next = pud_addr_end(vaddr, end);
+
+ if (IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) {
+ pud_clear(pudp);
+ continue;
+ }
+
+ BUG();
+ } while (pudp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_p4d(pgd_t *pgdp,
+ unsigned long vaddr, unsigned long end)
+{
+ p4d_t *p4dp, *base_p4d;
+ unsigned long next;
+
+ if (!pgtable_l5_enabled) {
+ p4dp = (p4d_t *)pgdp;
+ } else {
+ base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+ p4dp = base_p4d + p4d_index(vaddr);
+ }
+
+ do {
+ next = p4d_addr_end(vaddr, end);
+
+ if (pgtable_l4_enabled && IS_ALIGNED(vaddr, P4D_SIZE) &&
+ (next - vaddr) >= P4D_SIZE) {
+ p4d_clear(p4dp);
+ continue;
+ }
+
+ kasan_early_clear_pud(p4dp, vaddr, next);
+ } while (p4dp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_pgd(pgd_t *pgdp,
+ unsigned long vaddr, unsigned long end)
+{
+ unsigned long next;
+
+ do {
+ next = pgd_addr_end(vaddr, end);
+
+ if (pgtable_l5_enabled && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+ (next - vaddr) >= PGDIR_SIZE) {
+ pgd_clear(pgdp);
+ continue;
+ }
+
+ kasan_early_clear_p4d(pgdp, vaddr, next);
} while (pgdp++, vaddr = next, vaddr != end);
}

@@ -362,117 +370,64 @@ static void __init kasan_populate(void *start, void *end)
unsigned long vend = PAGE_ALIGN((unsigned long)end);

kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend);
-
- local_flush_tlb_all();
- memset(start, KASAN_SHADOW_INIT, end - start);
-}
-
-static void __init kasan_shallow_populate_pmd(pgd_t *pgdp,
- unsigned long vaddr, unsigned long end)
-{
- unsigned long next;
- pmd_t *pmdp, *base_pmd;
- bool is_kasan_pte;
-
- base_pmd = (pmd_t *)pgd_page_vaddr(*pgdp);
- pmdp = base_pmd + pmd_index(vaddr);
-
- do {
- next = pmd_addr_end(vaddr, end);
- is_kasan_pte = (pmd_pgtable(*pmdp) == lm_alias(kasan_early_shadow_pte));
-
- if (is_kasan_pte)
- pmd_clear(pmdp);
- } while (pmdp++, vaddr = next, vaddr != end);
}

-static void __init kasan_shallow_populate_pud(pgd_t *pgdp,
+static void __init kasan_shallow_populate_pud(p4d_t *p4d,
unsigned long vaddr, unsigned long end)
{
+
+ /*
+ * We need to clean the early mapping: this is hard to achieve "in-place",
+ * so install a temporary mapping like arm64 and x86 do.
+ */
+ memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(pgd_t) * PTRS_PER_PGD);
+
+ /* Copy the last p4d since it is shared with the kernel mapping. */
+ if (pgtable_l5_enabled) {
+ ptr = (p4d_t *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END));
+ memcpy(tmp_p4d, ptr, sizeof(p4d_t) * PTRS_PER_P4D);
+ set_pgd(&tmp_pg_dir[pgd_index(KASAN_SHADOW_END)],
+ pfn_pgd(PFN_DOWN(__pa(tmp_p4d)), PAGE_TABLE));
+ base_p4d = tmp_p4d;
+ } else {

Alexandre Ghiti

unread,
Jan 25, 2023, 3:26:47 AM1/25/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
The early virtual address must lie in the kernel address space for
inline kasan instrumentation to succeed, otherwise kasan tries to
dereference an address that does not exist in the address space (since
kasan only maps the *kernel* address space, not userspace).

Simply use the very first address of the kernel address space for the
early fdt mapping.

This allowed an Ubuntu kernel to boot successfully with inline
instrumentation.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
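A minimal sketch of the shape of the change, assuming the early FDT window
is simply anchored at the first PGD-sized slot of the kernel half of the
address space (the exact expressions are an assumption based on the final
upstream change, not quoted from this thread):

	/* Before: a low virtual address on the userspace side, which has no
	 * KASAN shadow behind it, so inline instrumentation faults on it. */
	#define DTB_EARLY_BASE_VA	PGDIR_SIZE

	/* After (sketch): the very first address of the kernel half of the
	 * address space, which the KASAN shadow does cover. */
	#define DTB_EARLY_BASE_VA	(ADDRESS_SPACE_END - (PTRS_PER_PGD / 2 * PGDIR_SIZE) + 1)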

Alexandre Ghiti

unread,
Jan 25, 2023, 3:27:48 AM1/25/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti

Alexandre Ghiti

unread,
Jan 25, 2023, 3:28:49 AM1/25/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
The KASAN shadow region was moved next to the kernel mapping, but the
ptdump code was not updated, which breaks the dump of the kernel page
table. Fix this by moving the KASAN shadow region accordingly in ptdump.

Fixes: f7ae02333d13 ("riscv: Move KASAN mapping next to the kernel mapping")
Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
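For readers without the diff handy: riscv's ptdump walks a sorted array of
address markers, so once the shadow region moved, its markers have to move
too. A sketch of the resulting layout (marker placement is an assumption
based on the commit message, not the verbatim patch):

	static struct addr_marker address_markers[] = {
		/* ... fixmap, PCI I/O, vmemmap, vmalloc markers elided ... */
		{ 0, "Linear mapping" },
	#ifdef CONFIG_KASAN
		/* Previously near the bottom of the address space; the region
		 * now lives right below the kernel mapping, so move it here. */
		{ 0, "Kasan shadow start" },
		{ 0, "Kasan shadow end" },
	#endif
		{ 0, "Modules/BPF mapping" },
		{ 0, "Kernel mapping" },
		{ -1, NULL },
	};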

Alexandre Ghiti

unread,
Jan 25, 2023, 3:29:50 AM1/25/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
If KASAN is enabled, VMAP_STACK depends on KASAN_VMALLOC; select
KASAN_VMALLOC together with KASAN so that VMAP_STACK can be enabled by
default.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
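The change presumably boils down to a single select in arch/riscv/Kconfig;
a sketch (its placement within the config block is an assumption):

	config RISCV
		# ... other selects elided ...
		select KASAN_VMALLOC if KASAN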

Conor Dooley

unread,
Jan 27, 2023, 10:28:18 AM1/27/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org
Hey Alex,

On Wed, Jan 25, 2023 at 09:23:30AM +0100, Alexandre Ghiti wrote:
> The early virtual address must lie in the kernel address space for
> inline kasan instrumentation to succeed, otherwise kasan tries to
> dereference an address that does not exist in the address space (since
> kasan only maps the *kernel* address space, not userspace).
>
> Simply use the very first address of the kernel address space for the
> early fdt mapping.
>
> This allowed an Ubuntu kernel to boot successfully with inline
> instrumentation.
>
> Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>

Been poking around in this area the last few days trying to hunt down
some bugs... Things look functionally the same w/ this patch and we do
get rid of the odd looking pointer which is nice.
Reviewed-by: Conor Dooley <conor....@microchip.com>

This probably would've made the cause of 50e63dd8ed92 ("riscv: fix reserved
memory setup") more difficult to find, so I'm glad I got that out of the way
well before this patch!

Thanks,
Conor.

kernel test robot

unread,
Jan 31, 2023, 7:16:40 PM1/31/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, oe-kbu...@lists.linux.dev, Alexandre Ghiti
Hi Alexandre,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v6.2-rc6 next-20230131]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Alexandre-Ghiti/riscv-Split-early-and-final-KASAN-population-functions/20230125-163113
patch link: https://lore.kernel.org/r/20230125082333.1577572-3-alexghiti%40rivosinc.com
patch subject: [PATCH v3 2/6] riscv: Rework kasan population functions
config: riscv-randconfig-r006-20230201 (https://download.01.org/0day-ci/archive/20230201/202302010819...@intel.com/config)
compiler: riscv64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/c18726e8d14edbd59ec19854b4eb06d83fff716f
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Alexandre-Ghiti/riscv-Split-early-and-final-KASAN-population-functions/20230125-163113
git checkout c18726e8d14edbd59ec19854b4eb06d83fff716f
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=riscv olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=riscv SHELL=/bin/bash arch/riscv/mm/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <l...@intel.com>

All warnings (new ones prefixed by >>):

>> arch/riscv/mm/kasan_init.c:442:6: warning: no previous prototype for 'create_tmp_mapping' [-Wmissing-prototypes]
442 | void create_tmp_mapping(void)
| ^~~~~~~~~~~~~~~~~~


vim +/create_tmp_mapping +442 arch/riscv/mm/kasan_init.c

441
> 442 void create_tmp_mapping(void)
443 {
444 void *ptr;
445 p4d_t *base_p4d;
446
447 /*
448 * We need to clean the early mapping: this is hard to achieve "in-place",
449 * so install a temporary mapping like arm64 and x86 do.
450 */
451 memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(pgd_t) * PTRS_PER_PGD);
452
453 /* Copy the last p4d since it is shared with the kernel mapping. */
454 if (pgtable_l5_enabled) {
455 ptr = (p4d_t *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END));
456 memcpy(tmp_p4d, ptr, sizeof(p4d_t) * PTRS_PER_P4D);
457 set_pgd(&tmp_pg_dir[pgd_index(KASAN_SHADOW_END)],
458 pfn_pgd(PFN_DOWN(__pa(tmp_p4d)), PAGE_TABLE));
459 base_p4d = tmp_p4d;
460 } else {
461 base_p4d = (p4d_t *)tmp_pg_dir;
462 }
463
464 /* Copy the last pud since it is shared with the kernel mapping. */
465 if (pgtable_l4_enabled) {
466 ptr = (pud_t *)p4d_page_vaddr(*(base_p4d + p4d_index(KASAN_SHADOW_END)));
467 memcpy(tmp_pud, ptr, sizeof(pud_t) * PTRS_PER_PUD);
468 set_p4d(&base_p4d[p4d_index(KASAN_SHADOW_END)],
469 pfn_p4d(PFN_DOWN(__pa(tmp_pud)), PAGE_TABLE));
470 }
471 }
472

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

Alexandre Ghiti

unread,
Feb 2, 2023, 9:00:15 AM2/2/23
to kernel test robot, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, oe-kbu...@lists.linux.dev
Ok, I have to declare this function static to quiet this warning, so
there will be a v4 soon.
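For reference, the fix that lands in v4 (visible in the repost below) is
just the storage class:

	-void create_tmp_mapping(void)
	+static void create_tmp_mapping(void)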

Alexandre Ghiti

unread,
Feb 3, 2023, 2:52:37 AM2/3/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
base-commit-tag: v6.2-rc6

v4:
- Fix build warning reported by the kernel test robot by declaring
  create_tmp_mapping() as static

v3:
- Add AB from Ard in patch 4, thanks
- Fix checkpatch issues in patch 1, thanks Conor

v2:
- Rebase on top of v6.2-rc3
- patch 4 is now way simpler than it used to be since Ard already moved
the string functions into the efistub.

Alexandre Ghiti (6):
riscv: Split early and final KASAN population functions
riscv: Rework kasan population functions

Alexandre Ghiti

unread,
Feb 3, 2023, 2:53:38 AM2/3/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
This is preliminary work that makes the code easier to understand.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/mm/kasan_init.c | 185 +++++++++++++++++++++++--------------
1 file changed, 116 insertions(+), 69 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index e1226709490f..2a48eba6bd08 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -95,23 +95,13 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
}

static void __init kasan_populate_pud(pgd_t *pgd,
- unsigned long vaddr, unsigned long end,
- bool early)
+ unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
pud_t *pudp, *base_pud;
unsigned long next;

- if (early) {
- /*
- * We can't use pgd_page_vaddr here as it would return a linear
- * mapping address but it is not mapped yet, but when populating
- * early_pg_dir, we need the physical address and when populating
- * swapper_pg_dir, we need the kernel virtual address so use
- * pt_ops facility.
- */
- base_pud = pt_ops.get_pud_virt(pfn_to_phys(_pgd_pfn(*pgd)));
- } else if (pgd_none(*pgd)) {
+ if (pgd_none(*pgd)) {
base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
memcpy(base_pud, (void *)kasan_early_shadow_pud,
sizeof(pud_t) * PTRS_PER_PUD);
@@ -130,16 +120,10 @@ static void __init kasan_populate_pud(pgd_t *pgd,
next = pud_addr_end(vaddr, end);

if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) {
- if (early) {
- phys_addr = __pa(((uintptr_t)kasan_early_shadow_pmd));
- set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE));
+ phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
+ if (phys_addr) {
+ set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
continue;
- } else {
- phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
- if (phys_addr) {
- set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
- continue;
- }
}
}

@@ -152,34 +136,21 @@ static void __init kasan_populate_pud(pgd_t *pgd,
* it entirely, memblock could allocate a page at a physical address
* where KASAN is not populated yet and then we'd get a page fault.
*/
- if (!early)
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
+ set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
}

static void __init kasan_populate_p4d(pgd_t *pgd,
- unsigned long vaddr, unsigned long end,
- bool early)
+ unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
p4d_t *p4dp, *base_p4d;
unsigned long next;

- if (early) {
- /*
- * We can't use pgd_page_vaddr here as it would return a linear
- * mapping address but it is not mapped yet, but when populating
- * early_pg_dir, we need the physical address and when populating
- * swapper_pg_dir, we need the kernel virtual address so use
- * pt_ops facility.
- */
- base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgd)));
- } else {
- base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
- if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
- base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
- memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
- sizeof(p4d_t) * PTRS_PER_P4D);
- }
+ base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
+ if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
+ base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
+ memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
+ sizeof(p4d_t) * PTRS_PER_P4D);
}

p4dp = base_p4d + p4d_index(vaddr);
@@ -188,20 +159,14 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
next = p4d_addr_end(vaddr, end);

if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) && (next - vaddr) >= P4D_SIZE) {
- if (early) {
- phys_addr = __pa(((uintptr_t)kasan_early_shadow_pud));
- set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE));
+ phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
+ if (phys_addr) {
+ set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
continue;
- } else {
- phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
- if (phys_addr) {
- set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
- continue;
- }
}
}

- kasan_populate_pud((pgd_t *)p4dp, vaddr, next, early);
+ kasan_populate_pud((pgd_t *)p4dp, vaddr, next);
} while (p4dp++, vaddr = next, vaddr != end);

/*
@@ -210,8 +175,7 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
* it entirely, memblock could allocate a page at a physical address
* where KASAN is not populated yet and then we'd get a page fault.
*/
- if (!early)
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
+ set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
}

#define kasan_early_shadow_pgd_next (pgtable_l5_enabled ? \
@@ -219,16 +183,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
(pgtable_l4_enabled ? \
(uintptr_t)kasan_early_shadow_pud : \
(uintptr_t)kasan_early_shadow_pmd))
-#define kasan_populate_pgd_next(pgdp, vaddr, next, early) \
+#define kasan_populate_pgd_next(pgdp, vaddr, next) \
(pgtable_l5_enabled ? \
- kasan_populate_p4d(pgdp, vaddr, next, early) : \
+ kasan_populate_p4d(pgdp, vaddr, next) : \
(pgtable_l4_enabled ? \
- kasan_populate_pud(pgdp, vaddr, next, early) : \
+ kasan_populate_pud(pgdp, vaddr, next) : \
kasan_populate_pmd((pud_t *)pgdp, vaddr, next)))

static void __init kasan_populate_pgd(pgd_t *pgdp,
- unsigned long vaddr, unsigned long end,
- bool early)
+ unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
unsigned long next;
@@ -237,11 +200,7 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
next = pgd_addr_end(vaddr, end);

if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) {
- if (early) {
- phys_addr = __pa((uintptr_t)kasan_early_shadow_pgd_next);
- set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE));
- continue;
- } else if (pgd_page_vaddr(*pgdp) ==
+ if (pgd_page_vaddr(*pgdp) ==
(unsigned long)lm_alias(kasan_early_shadow_pgd_next)) {
/*
* pgdp can't be none since kasan_early_init
@@ -258,7 +217,95 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
}
}

- kasan_populate_pgd_next(pgdp, vaddr, next, early);
+ kasan_populate_pgd_next(pgdp, vaddr, next);
+ } while (pgdp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_pud(p4d_t *p4dp,
+ unsigned long vaddr,
+ unsigned long end)
+{
+ pud_t *pudp, *base_pud;
+ phys_addr_t phys_addr;
+ unsigned long next;
+
+ if (!pgtable_l4_enabled) {
+ pudp = (pud_t *)p4dp;
+ } else {
+ base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+ pudp = base_pud + pud_index(vaddr);
+ }
+
+ do {
+ next = pud_addr_end(vaddr, end);
+
+ if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) &&
+ (next - vaddr) >= PUD_SIZE) {
+ phys_addr = __pa((uintptr_t)kasan_early_shadow_pmd);
+ set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE));
+ continue;
+ }
+
+ BUG();
+ } while (pudp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_p4d(pgd_t *pgdp,
+ unsigned long vaddr,
+ unsigned long end)
+{
+ p4d_t *p4dp, *base_p4d;
+ phys_addr_t phys_addr;
+ unsigned long next;
+
+ /*
+ * We can't use pgd_page_vaddr here as it would return a linear
+ * mapping address but it is not mapped yet, but when populating
+ * early_pg_dir, we need the physical address and when populating
+ * swapper_pg_dir, we need the kernel virtual address so use
+ * pt_ops facility.
+ * Note that this test is then completely equivalent to
+ * p4dp = p4d_offset(pgdp, vaddr)
+ */
+ if (!pgtable_l5_enabled) {
+ p4dp = (p4d_t *)pgdp;
+ } else {
+ base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+ p4dp = base_p4d + p4d_index(vaddr);
+ }
+
+ do {
+ next = p4d_addr_end(vaddr, end);
+
+ if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) &&
+ (next - vaddr) >= P4D_SIZE) {
+ phys_addr = __pa((uintptr_t)kasan_early_shadow_pud);
+ set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE));
+ continue;
+ }
+
+ kasan_early_populate_pud(p4dp, vaddr, next);
+ } while (p4dp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_pgd(pgd_t *pgdp,
+ unsigned long vaddr,
+ unsigned long end)
+{
+ phys_addr_t phys_addr;
+ unsigned long next;
+
+ do {
+ next = pgd_addr_end(vaddr, end);
+
+ if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+ (next - vaddr) >= PGDIR_SIZE) {
+ phys_addr = __pa((uintptr_t)kasan_early_shadow_p4d);
+ set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE));
+ continue;
+ }
+
+ kasan_early_populate_p4d(pgdp, vaddr, next);
} while (pgdp++, vaddr = next, vaddr != end);
}

@@ -295,16 +342,16 @@ asmlinkage void __init kasan_early_init(void)
PAGE_TABLE));
}

- kasan_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START),
- KASAN_SHADOW_START, KASAN_SHADOW_END, true);
+ kasan_early_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START),
+ KASAN_SHADOW_START, KASAN_SHADOW_END);

local_flush_tlb_all();
}

void __init kasan_swapper_init(void)
{
- kasan_populate_pgd(pgd_offset_k(KASAN_SHADOW_START),
- KASAN_SHADOW_START, KASAN_SHADOW_END, true);
+ kasan_early_populate_pgd(pgd_offset_k(KASAN_SHADOW_START),
+ KASAN_SHADOW_START, KASAN_SHADOW_END);

local_flush_tlb_all();
}
@@ -314,7 +361,7 @@ static void __init kasan_populate(void *start, void *end)
unsigned long vaddr = (unsigned long)start & PAGE_MASK;
unsigned long vend = PAGE_ALIGN((unsigned long)end);

- kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend, false);
+ kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend);

local_flush_tlb_all();
memset(start, KASAN_SHADOW_INIT, end - start);
--
2.37.2
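One pattern worth calling out in the hunks above: kasan_early_populate_pud()
and kasan_early_populate_p4d() have to handle page-table levels that may be
folded at runtime. A standalone restatement of that pattern (the helper name
is made up for illustration; pt_ops, pgtable_l4_enabled and friends are the
kernel symbols used in the patch):

	/*
	 * Sketch: when the pud level is folded (Sv39), the p4d entry *is* the
	 * pud, so just cast; otherwise, since the linear mapping does not
	 * exist yet at this point, translate the table's physical address
	 * through pt_ops instead of using pgd_page_vaddr().
	 */
	static pud_t *early_pud_offset(p4d_t *p4dp, unsigned long vaddr)
	{
		pud_t *base_pud;

		if (!pgtable_l4_enabled)
			return (pud_t *)p4dp;

		base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
		return base_pud + pud_index(vaddr);
	}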

Alexandre Ghiti

unread,
Feb 3, 2023, 2:54:39 AM2/3/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
Our previous kasan population implementation used to have the final kasan
shadow region mapped with kasan_early_shadow_page, because we did not clean
the early mapping, so we had to populate the kasan region "in-place",
which made the code cumbersome.

So now we clear the early mapping and establish a temporary mapping while
we populate the kasan shadow region with just the kernel regions that will
be used.

This new version uses the "generic" way of going through a page table
that may be folded at runtime (avoiding the XXX_next macros).

It was successfully tested with outline instrumentation on an Ubuntu
kernel configuration.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---
arch/riscv/mm/kasan_init.c | 361 +++++++++++++++++++------------------
1 file changed, 183 insertions(+), 178 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 2a48eba6bd08..8fc0efcf905c 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -18,58 +18,48 @@
* For sv39, the region is aligned on PGDIR_SIZE so we only need to populate
* the page global directory with kasan_early_shadow_pmd.
*
- * For sv48 and sv57, the region is not aligned on PGDIR_SIZE so the mapping
- * must be divided as follows:
- * - the first PGD entry, although incomplete, is populated with
- * kasan_early_shadow_pud/p4d
- * - the PGD entries in the middle are populated with kasan_early_shadow_pud/p4d
- * - the last PGD entry is shared with the kernel mapping so populated at the
- * lower levels pud/p4d
- *
- * In addition, when shallow populating a kasan region (for example vmalloc),
- * this region may also not be aligned on PGDIR size, so we must go down to the
- * pud level too.
+ * For sv48 and sv57, the region start is aligned on PGDIR_SIZE whereas the end
+ * region is not and then we have to go down to the PUD level.
*/

extern pgd_t early_pg_dir[PTRS_PER_PGD];
+pgd_t tmp_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+p4d_t tmp_p4d[PTRS_PER_P4D] __page_aligned_bss;
+pud_t tmp_pud[PTRS_PER_PUD] __page_aligned_bss;

static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;

static void __init kasan_populate_pud(pgd_t *pgd,
				      unsigned long vaddr, unsigned long end)
{
	phys_addr_t phys_addr;
- pud_t *pudp, *base_pud;
+ pud_t *pudp, *p;
unsigned long next;
} while (pudp++, vaddr = next, vaddr != end);
-
- /*
- * Wait for the whole PGD to be populated before setting the PGD in
- * the page table, otherwise, if we did set the PGD before populating
- * it entirely, memblock could allocate a page at a physical address
- * where KASAN is not populated yet and then we'd get a page fault.
- */
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
}

static void __init kasan_populate_p4d(pgd_t *pgd,
unsigned long vaddr, unsigned long end)
{
phys_addr_t phys_addr;
- p4d_t *p4dp, *base_p4d;
+ p4d_t *p4dp, *p;
unsigned long next;

- base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
- if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
- base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
- memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
- sizeof(p4d_t) * PTRS_PER_P4D);
+ if (pgd_none(*pgd)) {
+ p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE);
+ set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
}

- p4dp = base_p4d + p4d_index(vaddr);
+ p4dp = p4d_offset(pgd, vaddr);

do {
next = p4d_addr_end(vaddr, end);
@@ -162,34 +128,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
if (phys_addr) {
set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
+ memset(__va(phys_addr), KASAN_SHADOW_INIT, P4D_SIZE);
continue;
}
}

- kasan_populate_pud((pgd_t *)p4dp, vaddr, next);
+ kasan_populate_pud(p4dp, vaddr, next);
} while (p4dp++, vaddr = next, vaddr != end);
-
- /*
- * Wait for the whole P4D to be populated before setting the P4D in
- * the page table, otherwise, if we did set the P4D before populating
- * it entirely, memblock could allocate a page at a physical address
- * where KASAN is not populated yet and then we'd get a page fault.
- */
- set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
}

-#define kasan_early_shadow_pgd_next (pgtable_l5_enabled ? \
- (uintptr_t)kasan_early_shadow_p4d : \
- (pgtable_l4_enabled ? \
- (uintptr_t)kasan_early_shadow_pud : \
- (uintptr_t)kasan_early_shadow_pmd))
-#define kasan_populate_pgd_next(pgdp, vaddr, next) \
- (pgtable_l5_enabled ? \
- kasan_populate_p4d(pgdp, vaddr, next) : \
- (pgtable_l4_enabled ? \
- kasan_populate_pud(pgdp, vaddr, next) : \
- kasan_populate_pmd((pud_t *)pgdp, vaddr, next)))
-
static void __init kasan_populate_pgd(pgd_t *pgdp,
unsigned long vaddr, unsigned long end)
{
@@ -199,25 +146,86 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
do {
next = pgd_addr_end(vaddr, end);

- if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) {
- if (pgd_page_vaddr(*pgdp) ==
- (unsigned long)lm_alias(kasan_early_shadow_pgd_next)) {
- /*
- * pgdp can't be none since kasan_early_init
- * initialized all KASAN shadow region with
- * kasan_early_shadow_pud: if this is still the
- * case, that means we can try to allocate a
- * hugepage as a replacement.
- */
- phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE);
- if (phys_addr) {
- set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL));
- continue;
- }
+ if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+ (next - vaddr) >= PGDIR_SIZE) {
+ phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE);
+ if (phys_addr) {
+ set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL));
+ memset(__va(phys_addr), KASAN_SHADOW_INIT, PGDIR_SIZE);
+ continue;
}
}

- kasan_populate_pgd_next(pgdp, vaddr, next);
+ kasan_populate_p4d(pgdp, vaddr, next);
+ } while (pgdp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_pud(p4d_t *p4dp,
+ unsigned long vaddr, unsigned long end)
+{
+ pud_t *pudp, *base_pud;
+ unsigned long next;
+
+ if (!pgtable_l4_enabled) {
+ pudp = (pud_t *)p4dp;
+ } else {
+ base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+ pudp = base_pud + pud_index(vaddr);
+ }
+
+ do {
+ next = pud_addr_end(vaddr, end);
+
+ if (IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) {
+ pud_clear(pudp);
+ continue;
+ }
+
+ BUG();
+ } while (pudp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_p4d(pgd_t *pgdp,
+ unsigned long vaddr, unsigned long end)
+{
+ p4d_t *p4dp, *base_p4d;
+ unsigned long next;
+
+ if (!pgtable_l5_enabled) {
+ p4dp = (p4d_t *)pgdp;
+ } else {
+ base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+ p4dp = base_p4d + p4d_index(vaddr);
+ }
+
+ do {
+ next = p4d_addr_end(vaddr, end);
+
+ if (pgtable_l4_enabled && IS_ALIGNED(vaddr, P4D_SIZE) &&
+ (next - vaddr) >= P4D_SIZE) {
+ p4d_clear(p4dp);
+ continue;
+ }
+
+ kasan_early_clear_pud(p4dp, vaddr, next);
+ } while (p4dp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_pgd(pgd_t *pgdp,
+ unsigned long vaddr, unsigned long end)
+{
+ unsigned long next;
+
+ do {
+ next = pgd_addr_end(vaddr, end);
+
+ if (pgtable_l5_enabled && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+ (next - vaddr) >= PGDIR_SIZE) {
+ pgd_clear(pgdp);
+ continue;
+ }
+
+ kasan_early_clear_p4d(pgdp, vaddr, next);
} while (pgdp++, vaddr = next, vaddr != end);
}

+static void create_tmp_mapping(void)
+{
+ void *ptr;
+ p4d_t *base_p4d;
+
+ /*
+ * We need to clean the early mapping: this is hard to achieve "in-place",
+ * so install a temporary mapping like arm64 and x86 do.
+ */
+ memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(pgd_t) * PTRS_PER_PGD);
+
+ /* Copy the last p4d since it is shared with the kernel mapping. */
+ if (pgtable_l5_enabled) {
+ ptr = (p4d_t *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END));
+ memcpy(tmp_p4d, ptr, sizeof(p4d_t) * PTRS_PER_P4D);
+ set_pgd(&tmp_pg_dir[pgd_index(KASAN_SHADOW_END)],
+ pfn_pgd(PFN_DOWN(__pa(tmp_p4d)), PAGE_TABLE));
+ base_p4d = tmp_p4d;
+ } else {
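The archive trims the rest of this message; for the overall flow, here is a
sketch of how the temporary mapping is meant to be used (function names are
the ones introduced above; the satp switching is an assumption drawn from
the arm64/x86 approach the commit message refers to):

	static void __init kasan_init_flow(void)	/* sketch, not the real function */
	{
		create_tmp_mapping();
		/* Run on the temporary root while we edit swapper_pg_dir. */
		csr_write(CSR_SATP, PFN_DOWN(__pa(tmp_pg_dir)) | satp_mode);

		/* Tear down the early shadow mapping... */
		kasan_early_clear_pgd(pgd_offset_k(KASAN_SHADOW_START),
				      KASAN_SHADOW_START, KASAN_SHADOW_END);

		/* ...populate shadow for just the regions actually used,
		 * e.g. the linear mapping and the kernel mapping... */

		/* ...then switch back to the now-clean swapper_pg_dir. */
		csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
		local_flush_tlb_all();
	}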

Alexandre Ghiti

unread,
Feb 3, 2023, 2:55:41 AM2/3/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
The early virtual address must lie in the kernel address space for
inline kasan instrumentation to succeed, otherwise kasan tries to
dereference an address that does not exist in the address space (since
kasan only maps the *kernel* address space, not userspace).

Simply use the very first address of the kernel address space for the
early fdt mapping.

This allowed an Ubuntu kernel to boot successfully with inline
instrumentation.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---

Alexandre Ghiti

unread,
Feb 3, 2023, 2:56:41 AM2/3/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti

Alexandre Ghiti

unread,
Feb 3, 2023, 2:57:42 AM2/3/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
The KASAN shadow region was moved next to the kernel mapping, but the
ptdump code was not updated, which breaks the dump of the kernel page
table. Fix this by moving the KASAN shadow region accordingly in ptdump.

Fixes: f7ae02333d13 ("riscv: Move KASAN mapping next to the kernel mapping")
Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---

Atish Patra

unread,
Feb 3, 2023, 2:58:42 AM2/3/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org


Reviewed-by: Atish Patra <ati...@rivosinc.com>

--
Regards,
Atish

Alexandre Ghiti

unread,
Feb 3, 2023, 2:58:43 AM2/3/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
If KASAN is enabled, VMAP_STACK depends on KASAN_VMALLOC; select
KASAN_VMALLOC together with KASAN so that VMAP_STACK can be enabled by
default.

Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
---

Björn Töpel

unread,
Feb 17, 2023, 6:50:30 AM2/17/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
Alexandre Ghiti <alex...@rivosinc.com> writes:

> This is preliminary work that makes the code easier to understand.
>
> Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>
> ---
> arch/riscv/mm/kasan_init.c | 185 +++++++++++++++++++++++--------------
> 1 file changed, 116 insertions(+), 69 deletions(-)

Reviewed-by: Björn Töpel <bj...@rivosinc.com>

Björn Töpel

unread,
Feb 17, 2023, 9:54:10 AM2/17/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
Alexandre Ghiti <alex...@rivosinc.com> writes:

> Our previous kasan population implementation used to have the final kasan
> shadow region mapped with kasan_early_shadow_page, because we did not clean
> the early mapping, so we had to populate the kasan region "in-place",
> which made the code cumbersome.
>
> So now we clear the early mapping and establish a temporary mapping while
> we populate the kasan shadow region with just the kernel regions that will
> be used.
>
> This new version uses the "generic" way of going through a page table
> that may be folded at runtime (avoiding the XXX_next macros).
>
> It was successfully tested with outline instrumentation on an Ubuntu
> kernel configuration.
>
> Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>

(One minor nit, that can be addressed later.)

Reviewed-by: Björn Töpel <bj...@rivosinc.com>

> arch/riscv/mm/kasan_init.c | 361 +++++++++++++++++++------------------
> 1 file changed, 183 insertions(+), 178 deletions(-)


Nit: Maybe add a comment explaining why the sfence.vma is *not* required
here. I tripped over it.


Björn

Björn Töpel

unread,
Feb 17, 2023, 9:55:50 AM2/17/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
Alexandre Ghiti <alex...@rivosinc.com> writes:

> The KASAN shadow region was moved next to the kernel mapping, but the
> ptdump code was not updated, which breaks the dump of the kernel page
> table. Fix this by moving the KASAN shadow region accordingly in ptdump.
>
> Fixes: f7ae02333d13 ("riscv: Move KASAN mapping next to the kernel mapping")
> Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>

Tested-by: Björn Töpel <bj...@rivosinc.com>
Reviewed-by: Björn Töpel <bj...@rivosinc.com>

Björn Töpel

unread,
Feb 17, 2023, 9:57:18 AM2/17/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
Alexandre Ghiti <alex...@rivosinc.com> writes:

> If KASAN is enabled, VMAP_STACK depends on KASAN_VMALLOC; select
> KASAN_VMALLOC together with KASAN so that VMAP_STACK can be enabled by
> default.
>
> Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>

Reviewed-by: Björn Töpel <bj...@rivosinc.com>

Björn Töpel

unread,
Feb 17, 2023, 9:58:52 AM2/17/23
to Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti
Alexandre Ghiti <alex...@rivosinc.com> writes:

> The early virtual address must lie in the kernel address space for
> inline kasan instrumentation to succeed, otherwise kasan tries to
> dereference an address that does not exist in the address space (since
> kasan only maps the *kernel* address space, not userspace).
>
> Simply use the very first address of the kernel address space for the
> early fdt mapping.
>
> This allowed an Ubuntu kernel to boot successfully with inline
> instrumentation.
>
> Signed-off-by: Alexandre Ghiti <alex...@rivosinc.com>

Reviewed-by: Björn Töpel <bj...@rivosinc.com>

patchwork-bo...@kernel.org

unread,
Mar 7, 2023, 10:30:24 PM3/7/23
to Alexandre Ghiti, linux...@lists.infradead.org, paul.w...@sifive.com, pal...@dabbelt.com, a...@eecs.berkeley.edu, ryabin...@gmail.com, gli...@google.com, andre...@gmail.com, dvy...@google.com, vincenzo...@arm.com, ar...@kernel.org, co...@kernel.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org
Hello:

This series was applied to riscv/linux.git (for-next)
by Palmer Dabbelt <pal...@rivosinc.com>:

On Fri, 3 Feb 2023 08:52:26 +0100 you wrote:
> As described in patch 2, our current kasan implementation is intricate,
> so I tried to simplify the implementation and mimic what arm64/x86 are
> doing.
>
> In addition it fixes UEFI bootflow with a kasan kernel and kasan inline
> instrumentation: all kasan configurations were tested on a large ubuntu
> kernel with success with KASAN_KUNIT_TEST and KASAN_MODULE_TEST.
>
> [...]

Here is the summary with links:
- [v4,1/6] riscv: Split early and final KASAN population functions
https://git.kernel.org/riscv/c/70a3bb1e1fd9
- [v4,2/6] riscv: Rework kasan population functions
https://git.kernel.org/riscv/c/fec8e4f66e4d
- [v4,3/6] riscv: Move DTB_EARLY_BASE_VA to the kernel address space
https://git.kernel.org/riscv/c/1cdf594686a3
- [v4,4/6] riscv: Fix EFI stub usage of KASAN instrumented strcmp function
https://git.kernel.org/riscv/c/415e9a115124
- [v4,5/6] riscv: Fix ptdump when KASAN is enabled
https://git.kernel.org/riscv/c/fe0c8624d20d
- [v4,6/6] riscv: Unconditionnally select KASAN_VMALLOC if KASAN
https://git.kernel.org/riscv/c/4cdc06c5c741

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html


Palmer Dabbelt

unread,
Mar 7, 2023, 10:30:26 PM3/7/23
to Albert Ou, Andrey Konovalov, Vincenzo Frascino, linu...@vger.kernel.org, kasa...@googlegroups.com, Paul Walmsley, Alexander Potapenko, Andrey Ryabinin, linux...@lists.infradead.org, Ard Biesheuvel, linux-...@vger.kernel.org, Palmer Dabbelt, Dmitry Vyukov, Conor Dooley, Alexandre Ghiti

On Fri, 3 Feb 2023 08:52:26 +0100, Alexandre Ghiti wrote:
> As described in patch 2, our current kasan implementation is intricate,
> so I tried to simplify the implementation and mimic what arm64/x86 are
> doing.
>
> In addition it fixes UEFI bootflow with a kasan kernel and kasan inline
> instrumentation: all kasan configurations were tested on a large ubuntu
> kernel with success with KASAN_KUNIT_TEST and KASAN_MODULE_TEST.
>
> [...]

Applied, thanks!

[1/6] riscv: Split early and final KASAN population functions
https://git.kernel.org/palmer/c/70a3bb1e1fd9
[2/6] riscv: Rework kasan population functions
https://git.kernel.org/palmer/c/fec8e4f66e4d
[3/6] riscv: Move DTB_EARLY_BASE_VA to the kernel address space
https://git.kernel.org/palmer/c/1cdf594686a3
[4/6] riscv: Fix EFI stub usage of KASAN instrumented strcmp function
https://git.kernel.org/palmer/c/415e9a115124
[5/6] riscv: Fix ptdump when KASAN is enabled
https://git.kernel.org/palmer/c/fe0c8624d20d
[6/6] riscv: Unconditionnally select KASAN_VMALLOC if KASAN
https://git.kernel.org/palmer/c/4cdc06c5c741

Best regards,
--
Palmer Dabbelt <pal...@rivosinc.com>

Palmer Dabbelt

unread,
Mar 7, 2023, 10:45:29 PM3/7/23
to alex...@rivosinc.com, a...@eecs.berkeley.edu, andre...@gmail.com, vincenzo...@arm.com, linu...@vger.kernel.org, kasa...@googlegroups.com, Paul Walmsley, gli...@google.com, ryabin...@gmail.com, linux...@lists.infradead.org, ar...@kernel.org, linux-...@vger.kernel.org, dvy...@google.com, Conor Dooley
Sorry, this one didn't actually get tested -- I'd thought it was in the
queue before I kicked off the run, but it wasn't. It's testing now;
I've dropped it from for-next for a bit as I don't remember if this is
one of the patch sets that had a build/test failure.

Palmer Dabbelt

unread,
Apr 20, 2023, 1:36:46 PM4/20/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti

On Fri, 03 Feb 2023 08:52:26 +0100, Alexandre Ghiti wrote:
> As described in patch 2, our current kasan implementation is intricate,
> so I tried to simplify the implementation and mimic what arm64/x86 are
> doing.
>
> In addition it fixes UEFI bootflow with a kasan kernel and kasan inline
> instrumentation: all kasan configurations were tested on a large ubuntu
> kernel with success with KASAN_KUNIT_TEST and KASAN_MODULE_TEST.
>
> [...]

Applied, thanks!

[1/6] riscv: Split early and final KASAN population functions
https://git.kernel.org/palmer/c/cd0334e1c091
[2/6] riscv: Rework kasan population functions
https://git.kernel.org/palmer/c/96f9d4daf745
[3/6] riscv: Move DTB_EARLY_BASE_VA to the kernel address space
https://git.kernel.org/palmer/c/401e84488800
[4/6] riscv: Fix EFI stub usage of KASAN instrumented strcmp function
https://git.kernel.org/palmer/c/617955ca6e27
[5/6] riscv: Fix ptdump when KASAN is enabled
https://git.kernel.org/palmer/c/ecd7ebaf0b5a
[6/6] riscv: Unconditionnally select KASAN_VMALLOC if KASAN
https://git.kernel.org/palmer/c/864046c512c2

Palmer Dabbelt

unread,
Apr 21, 2023, 2:59:24 PM4/21/23
to Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, Conor Dooley, linux...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, linu...@vger.kernel.org, Alexandre Ghiti

On Fri, 03 Feb 2023 08:52:26 +0100, Alexandre Ghiti wrote:
> As described in patch 2, our current kasan implementation is intricate,
> so I tried to simplify the implementation and mimic what arm64/x86 are
> doing.
>
> In addition it fixes UEFI bootflow with a kasan kernel and kasan inline
> instrumentation: all kasan configurations were tested on a large ubuntu
> kernel with success with KASAN_KUNIT_TEST and KASAN_MODULE_TEST.
>