[PATCH 0/3] arm64: kasan: support CONFIG_KASAN_VMALLOC


Lecopzer Chen

Jan 3, 2021, 12:12:32 PM1/3/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, wi...@kernel.org, catalin...@arm.com, Lecopzer Chen, Lecopzer Chen
Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory").

According to how x86 ported it [1], they allocated the p4d and pgd entries
early; on arm64 I instead mirror how KASAN already supports MODULES_VADDR,
by not populating the vmalloc area's shadow early except for the kernel
image addresses.

Test environment:
4G and 8G QEMU virt machines,
39-bit VA + 4k PAGE_SIZE with a 3-level page table,
tested with lib/test_kasan.ko and lib/test_kasan_module.ko

It also works with KASLR and CONFIG_RANDOMIZE_MODULE_REGION_FULL,
but has not been tested for HW_TAGS (I have no suitable device), so keep
HW_TAGS and KASAN_VMALLOC mutually exclusive until the functionality
is confirmed.
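As background for the series, generic KASAN maps every 8 bytes of address space to one shadow byte. A minimal sketch of that arithmetic follows; the shadow offset is a hypothetical placeholder (not arm64's real build-time value), while the VMALLOC bounds are the ones reported in the test log later in this thread.

```python
# Sketch of generic KASAN's address-to-shadow arithmetic, mirroring
# kasan_mem_to_shadow(): shadow(addr) = (addr >> 3) + KASAN_SHADOW_OFFSET.
# KASAN_SHADOW_OFFSET below is a made-up placeholder, not arm64's real value.

KASAN_SHADOW_SCALE_SHIFT = 3  # 8 bytes of memory per shadow byte

def mem_to_shadow(addr: int, shadow_offset: int) -> int:
    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + shadow_offset

SHADOW_OFFSET = 0xdfffffd000000000  # hypothetical
VMALLOC_START = 0xffffffc010000000  # from the 39-bit VA test log in this thread
VMALLOC_END   = 0xfffffffdf0000000

shadow_start = mem_to_shadow(VMALLOC_START, SHADOW_OFFSET)
shadow_end   = mem_to_shadow(VMALLOC_END, SHADOW_OFFSET)

# The shadow region is 1/8th the size of the region it tracks.
assert (shadow_end - shadow_start) * 8 == VMALLOC_END - VMALLOC_START
```

Backing this vmalloc shadow range with real pages on demand, instead of the read-only zero shadow, is what CONFIG_KASAN_VMALLOC enables.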


[1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>


Lecopzer Chen (3):
arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
arm64: kasan: abstract _text and _end to KERNEL_START/END
arm64: Kconfig: support CONFIG_KASAN_VMALLOC

arch/arm64/Kconfig | 1 +
arch/arm64/mm/kasan_init.c | 29 +++++++++++++++++++++--------
2 files changed, 22 insertions(+), 8 deletions(-)

--
2.25.1

Lecopzer Chen

Jan 3, 2021, 12:12:48 PM1/3/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, wi...@kernel.org, catalin...@arm.com, Lecopzer Chen, Lecopzer Chen
Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory").

Like MODULES_VADDR today, simply avoid early-populating the shadow for the
region between VMALLOC_START and VMALLOC_END. The kernel code mapping,
however, now lies within the vmalloc area, so its shadow must stay populated.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d8e66c78440e..d7ad3f1e9c4d 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
{
u64 kimg_shadow_start, kimg_shadow_end;
u64 mod_shadow_start, mod_shadow_end;
+ u64 vmalloc_shadow_start, vmalloc_shadow_end;
phys_addr_t pa_start, pa_end;
u64 i;

@@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);

+ vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
+ vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
+
/*
* We are going to perform proper setup of shadow memory.
* At first we should unmap early shadow (clear_pgds() call below).
@@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)

kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
(void *)mod_shadow_start);
- kasan_populate_early_shadow((void *)kimg_shadow_end,
- (void *)KASAN_SHADOW_END);
+ if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+ kasan_populate_early_shadow((void *)vmalloc_shadow_end,
+ (void *)KASAN_SHADOW_END);
+ if (vmalloc_shadow_start > mod_shadow_end)
+ kasan_populate_early_shadow((void *)mod_shadow_end,
+ (void *)vmalloc_shadow_start);
+
+ } else {
+ kasan_populate_early_shadow((void *)kimg_shadow_end,
+ (void *)KASAN_SHADOW_END);
+ if (kimg_shadow_start > mod_shadow_end)
+ kasan_populate_early_shadow((void *)mod_shadow_end,
+ (void *)kimg_shadow_start);
+ }

- if (kimg_shadow_start > mod_shadow_end)
- kasan_populate_early_shadow((void *)mod_shadow_end,
- (void *)kimg_shadow_start);

for_each_mem_range(i, &pa_start, &pa_end) {
void *start = (void *)__phys_to_virt(pa_start);
--
2.25.1
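A rough model of the decision the hunk above makes, as a sketch outside the kernel. The numeric ranges are purely illustrative, assuming the arm64 ordering where MODULES_END <= VMALLOC_START.

```python
# Which shadow ranges get backed by the read-only zero shadow at boot.
# With CONFIG_KASAN_VMALLOC, the vmalloc shadow is left unpopulated so
# real shadow pages can be installed at vmalloc() time; the kernel-image
# shadow inside it is still populated separately by kasan_map_populate().

def early_zero_shadow_ranges(kasan_vmalloc, mod_shadow_end,
                             vmalloc_shadow_start, vmalloc_shadow_end,
                             kimg_shadow_start, kimg_shadow_end,
                             kasan_shadow_end):
    ranges = []
    if kasan_vmalloc:
        ranges.append((vmalloc_shadow_end, kasan_shadow_end))
        if vmalloc_shadow_start > mod_shadow_end:
            ranges.append((mod_shadow_end, vmalloc_shadow_start))
    else:
        ranges.append((kimg_shadow_end, kasan_shadow_end))
        if kimg_shadow_start > mod_shadow_end:
            ranges.append((mod_shadow_end, kimg_shadow_start))
    return ranges

# Illustrative shadow offsets (not real addresses):
assert early_zero_shadow_ranges(True, 100, 100, 900, 200, 300, 1000) == \
    [(900, 1000)]
assert early_zero_shadow_ranges(False, 100, 100, 900, 200, 300, 1000) == \
    [(300, 1000), (100, 200)]
```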

Lecopzer Chen

Jan 3, 2021, 12:12:56 PM1/3/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, wi...@kernel.org, catalin...@arm.com, Lecopzer Chen, Lecopzer Chen
arm64 provides the KERNEL_START and KERNEL_END macros, so use these
abstractions instead of _text and _end directly.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/mm/kasan_init.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d7ad3f1e9c4d..acb549951f87 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
phys_addr_t pa_start, pa_end;
u64 i;

- kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
- kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
+ kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
+ kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));

mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
@@ -241,7 +241,7 @@ static void __init kasan_init_shadow(void)
clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);

kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
- early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+ early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START))));

kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
(void *)mod_shadow_start);
--
2.25.1

Lecopzer Chen

Jan 3, 2021, 12:13:03 PM1/3/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, wi...@kernel.org, catalin...@arm.com, Lecopzer Chen, Lecopzer Chen
I currently have no device on which to test HW_TAGS, so keep
KASAN_VMALLOC unselectable in that mode until someone can test it.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 05e17351e4f3..29ab35aab59e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -136,6 +136,7 @@ config ARM64
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+ select HAVE_ARCH_KASAN_VMALLOC if (HAVE_ARCH_KASAN && !KASAN_HW_TAGS)
select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
select HAVE_ARCH_KGDB
--
2.25.1

Andrey Konovalov

Jan 8, 2021, 1:29:47 PM1/8/21
to Lecopzer Chen, LKML, Linux Memory Management List, kasan-dev, Linux ARM, Dan Williams, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, linux-m...@lists.infradead.org, yj.c...@mediatek.com, Will Deacon, Catalin Marinas, Lecopzer Chen
On Sun, Jan 3, 2021 at 6:13 PM Lecopzer Chen <leco...@gmail.com> wrote:
>
> Now I have no device to test for HW_TAG, so keep it not selected
> until someone can test this.
>
> Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
> ---
> arch/arm64/Kconfig | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 05e17351e4f3..29ab35aab59e 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -136,6 +136,7 @@ config ARM64
> select HAVE_ARCH_JUMP_LABEL
> select HAVE_ARCH_JUMP_LABEL_RELATIVE
> select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
> + select HAVE_ARCH_KASAN_VMALLOC if (HAVE_ARCH_KASAN && !KASAN_HW_TAGS)

KASAN_VMALLOC currently "depends on" KASAN_GENERIC. I think we should
either do "HAVE_ARCH_KASAN && KASAN_GENERIC" here as well, or just do
"if HAVE_ARCH_KASAN".

> select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
> select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
> select HAVE_ARCH_KGDB
> --
> 2.25.1
>
> --
> You received this message because you are subscribed to the Google Groups "kasan-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to kasan-dev+...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/kasan-dev/20210103171137.153834-4-lecopzer%40gmail.com.

Andrey Konovalov

Jan 8, 2021, 1:30:45 PM1/8/21
to Lecopzer Chen, Catalin Marinas, Will Deacon, LKML, Linux Memory Management List, kasan-dev, Linux ARM, Dan Williams, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, linux-m...@lists.infradead.org, yj.c...@mediatek.com, Lecopzer Chen
On Sun, Jan 3, 2021 at 6:12 PM Lecopzer Chen <leco...@gmail.com> wrote:
>
> Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> Acroding to how x86 ported it [1], they early allocated p4d and pgd,
> but in arm64 I just simulate how KAsan supports MODULES_VADDR in arm64
> by not to populate the vmalloc area except for kimg address.
>
> Test environment:
> 4G and 8G Qemu virt,
> 39-bit VA + 4k PAGE_SIZE with 3-level page table,
> test by lib/test_kasan.ko and lib/test_kasan_module.ko
>
> It also works in Kaslr with CONFIG_RANDOMIZE_MODULE_REGION_FULL,
> but not test for HW_TAG(I have no proper device), thus keep
> HW_TAG and KASAN_VMALLOC mutual exclusion until confirming
> the functionality.
>
>
> [1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")
>
> Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>

Hi Lecopzer,

Thanks for working on this!

Acked-by: Andrey Konovalov <andre...@google.com>
Tested-by: Andrey Konovalov <andre...@google.com>

for the series along with the other two patches minding the nit in patch #3.

Will, Catalin, could you please take a look at the arm changes?

Thanks!

Andrey Konovalov

Jan 8, 2021, 1:37:20 PM1/8/21
to Lecopzer Chen, LKML, Linux Memory Management List, kasan-dev, Linux ARM, Dan Williams, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, linux-m...@lists.infradead.org, yj.c...@mediatek.com, Will Deacon, Catalin Marinas, Lecopzer Chen
On Sun, Jan 3, 2021 at 6:12 PM Lecopzer Chen <leco...@gmail.com> wrote:
>
> Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> Acroding to how x86 ported it [1], they early allocated p4d and pgd,
> but in arm64 I just simulate how KAsan supports MODULES_VADDR in arm64
> by not to populate the vmalloc area except for kimg address.
>
> Test environment:
> 4G and 8G Qemu virt,
> 39-bit VA + 4k PAGE_SIZE with 3-level page table,
> test by lib/test_kasan.ko and lib/test_kasan_module.ko
>
> It also works in Kaslr with CONFIG_RANDOMIZE_MODULE_REGION_FULL,
> but not test for HW_TAG(I have no proper device), thus keep
> HW_TAG and KASAN_VMALLOC mutual exclusion until confirming
> the functionality.

Re this: it makes sense to introduce vmalloc support one step a time
and add SW_TAGS support before taking on HW_TAGS. SW_TAGS doesn't
require any special hardware. Working on SW_TAGS first will also allow
dealing with potential conflicts between vmalloc and tags without
having MTE in the picture as well. Just FYI, no need to include that
in this change.

Ard Biesheuvel

Jan 8, 2021, 1:41:50 PM1/8/21
to Andrey Konovalov, Lecopzer Chen, Catalin Marinas, Will Deacon, Lecopzer Chen, yj.c...@mediatek.com, linux-m...@lists.infradead.org, LKML, kasan-dev, Linux Memory Management List, Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin, Dan Williams, Andrew Morton, Linux ARM
If vmalloc can now be backed with real shadow memory, we no longer
have to keep the module region in its default location when KASLR and
KASAN are both enabled.

So the check on line 164 in arch/arm64/kernel/kaslr.c should probably
be updated to reflect this change.

Lecopzer Chen

Jan 9, 2021, 2:26:39 AM1/9/21
to andre...@google.com, ak...@linux-foundation.org, arya...@virtuozzo.com, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, kasa...@googlegroups.com, lecopz...@mediatek.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, wi...@kernel.org, yj.c...@mediatek.com
Hi Andrey,

> On Sun, Jan 3, 2021 at 6:13 PM Lecopzer Chen <leco...@gmail.com> wrote:
> >
> > Now I have no device to test for HW_TAG, so keep it not selected
> > until someone can test this.
> >
> > Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
> > ---
> > arch/arm64/Kconfig | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index 05e17351e4f3..29ab35aab59e 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -136,6 +136,7 @@ config ARM64
> > select HAVE_ARCH_JUMP_LABEL
> > select HAVE_ARCH_JUMP_LABEL_RELATIVE
> > select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
> > + select HAVE_ARCH_KASAN_VMALLOC if (HAVE_ARCH_KASAN && !KASAN_HW_TAGS)
>
> KASAN_VMALLOC currently "depends on" KASAN_GENERIC. I think we should
> either do "HAVE_ARCH_KASAN && KASAN_GENERIC" here as well, or just do
> "if HAVE_ARCH_KASAN".

Thanks for the correction; I'll change it to the following in the v2 patch:
"select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN"

Let KASAN_VMALLOC itself depend on the modes it supports; that way we avoid
modifying two places if KASAN_VMALLOC gains support for modes other than
GENERIC in the future.
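The resulting select/depends interaction can be modelled as a small truth table (a sketch of the Kconfig semantics, not the Kconfig tool itself): the arch advertises HAVE_ARCH_KASAN_VMALLOC whenever it has KASAN at all, and KASAN_VMALLOC's own "depends on KASAN_GENERIC" line (as it stood at the time of this thread) keeps the mode restriction in one place.

```python
# Sketch of the Kconfig logic after the proposed v2 change:
#   arm64:         select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
#   KASAN_VMALLOC: depends on KASAN_GENERIC
# (KASAN_VMALLOC only supported the generic mode as of this 2021 thread.)

def kasan_vmalloc_available(have_arch_kasan: bool, kasan_generic: bool) -> bool:
    have_arch_kasan_vmalloc = have_arch_kasan         # the arm64 "select ... if"
    return have_arch_kasan_vmalloc and kasan_generic  # the "depends on"

assert kasan_vmalloc_available(True, True)
assert not kasan_vmalloc_available(True, False)   # SW_TAGS / HW_TAGS modes
assert not kasan_vmalloc_available(False, True)
```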

Lecopzer Chen

Jan 9, 2021, 2:34:42 AM1/9/21
to andre...@google.com, ak...@linux-foundation.org, arya...@virtuozzo.com, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, kasa...@googlegroups.com, lecopz...@mediatek.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, wi...@kernel.org, yj.c...@mediatek.com
Hi Andrey,
Thanks for the information and the suggestion; I'll keep this series
to KASAN_GENERIC support only :)



BRs,
Lecopzer

Lecopzer Chen

Jan 9, 2021, 5:02:10 AM1/9/21
to ar...@kernel.org, ak...@linux-foundation.org, andre...@google.com, arya...@virtuozzo.com, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, kasa...@googlegroups.com, lecopz...@mediatek.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, wi...@kernel.org, yj.c...@mediatek.com
Hi Ard,
I've tested with the module region randomized and it looks fine
in some simple tests (insmod'ing a few modules).

I'll add this to patch v2, thanks for your suggestion.

BRs,
Lecopzer

Lecopzer Chen

Jan 9, 2021, 5:33:08 AM1/9/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, wi...@kernel.org, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, Lecopzer Chen, Lecopzer Chen
Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory").

According to how x86 ported it [1], they allocated the p4d and pgd entries
early; on arm64 I instead mirror how KASAN already supports MODULES_VADDR,
by not populating the vmalloc area's shadow early except for the kernel
image addresses.

Test environment:
4G and 8G QEMU virt machines,
39-bit VA + 4k PAGE_SIZE with a 3-level page table,
tested with lib/test_kasan.ko and lib/test_kasan_module.ko

It also works with KASLR and CONFIG_RANDOMIZE_MODULE_REGION_FULL,
and randomizes the module region inside the vmalloc area.


[1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
Acked-by: Andrey Konovalov <andre...@google.com>
Tested-by: Andrey Konovalov <andre...@google.com>


v1 -> v2:
1. kasan_init.c: tweak indentation
2. change the Kconfig dependency to only HAVE_ARCH_KASAN
3. support a randomized module region

v1:
https://lore.kernel.org/lkml/20210103171137.1...@gmail.com/

Lecopzer Chen (4):
arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
arm64: kasan: abstract _text and _end to KERNEL_START/END
arm64: Kconfig: support CONFIG_KASAN_VMALLOC
arm64: kaslr: support randomized module area with KASAN_VMALLOC

arch/arm64/Kconfig | 1 +
arch/arm64/kernel/kaslr.c | 18 ++++++++++--------
arch/arm64/kernel/module.c | 16 +++++++++-------
arch/arm64/mm/kasan_init.c | 29 +++++++++++++++++++++--------
4 files changed, 41 insertions(+), 23 deletions(-)

--
2.25.1

Lecopzer Chen

Jan 9, 2021, 5:33:20 AM1/9/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, wi...@kernel.org, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, Lecopzer Chen, Lecopzer Chen
Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory").

Like MODULES_VADDR today, simply avoid early-populating the shadow for the
region between VMALLOC_START and VMALLOC_END. The kernel code mapping,
however, now lies within the vmalloc area, so its shadow must stay populated.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d8e66c78440e..39b218a64279 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
{
u64 kimg_shadow_start, kimg_shadow_end;
u64 mod_shadow_start, mod_shadow_end;
+ u64 vmalloc_shadow_start, vmalloc_shadow_end;
phys_addr_t pa_start, pa_end;
u64 i;

@@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);

Lecopzer Chen

Jan 9, 2021, 5:33:28 AM1/9/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, wi...@kernel.org, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, Lecopzer Chen, Lecopzer Chen
arm64 provides the KERNEL_START and KERNEL_END macros, so replace
_text and _end with these abstractions.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/mm/kasan_init.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 39b218a64279..fa8d7ece895d 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
phys_addr_t pa_start, pa_end;
u64 i;

- kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
- kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
+ kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
+ kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));

mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);

Lecopzer Chen

Jan 9, 2021, 5:33:37 AM1/9/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, wi...@kernel.org, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, Lecopzer Chen, Lecopzer Chen
Now that shadow memory in the vmalloc area can be backed with real pages,
support KASAN_VMALLOC in KASAN_GENERIC mode.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 05e17351e4f3..ba03820402ee 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -136,6 +136,7 @@ config ARM64
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+ select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN

Lecopzer Chen

Jan 9, 2021, 5:33:45 AM1/9/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, wi...@kernel.org, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, Lecopzer Chen, Lecopzer Chen
Now that KASAN_VMALLOC works on arm64, we can randomize the module region
into the vmalloc area.

Test:
VMALLOC area ffffffc010000000 fffffffdf0000000

before the patch:
module_alloc_base/end ffffffc008b80000 ffffffc010000000
after the patch:
module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

And insmod'ing some modules works fine.
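As a sanity check on the log above, the randomized base after the patch does land inside the VMALLOC area, while the old base sat below it:

```python
# Addresses taken verbatim from the test log in this patch.
VMALLOC_START = 0xffffffc010000000
VMALLOC_END   = 0xfffffffdf0000000

base_before = 0xffffffc008b80000   # module_alloc_base before the patch
base_after  = 0xffffffdcf4bed000   # module_alloc_base after the patch

assert base_before < VMALLOC_START                 # old: outside vmalloc
assert VMALLOC_START <= base_after < VMALLOC_END   # new: inside vmalloc
```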

Suggested-by: Ard Biesheuvel <ar...@kernel.org>
Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/kernel/kaslr.c | 18 ++++++++++--------
arch/arm64/kernel/module.c | 16 +++++++++-------
2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1c74c45b9494..a2858058e724 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
/* use the top 16 bits to randomize the linear region */
memstart_offset_seed = seed >> 48;

- if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
- IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
+ (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+ IS_ENABLED(CONFIG_KASAN_SW_TAGS)))
/*
- * KASAN does not expect the module region to intersect the
- * vmalloc region, since shadow memory is allocated for each
- * module at load time, whereas the vmalloc region is shadowed
- * by KASAN zero pages. So keep modules out of the vmalloc
- * region if KASAN is enabled, and put the kernel well within
- * 4 GB of the module region.
+ * KASAN without KASAN_VMALLOC does not expect the module region
+ * to intersect the vmalloc region, since shadow memory is
+ * allocated for each module at load time, whereas the vmalloc
+ * region is shadowed by KASAN zero pages. So keep modules
+ * out of the vmalloc region if KASAN is enabled without
+ * KASAN_VMALLOC, and put the kernel well within 4 GB of the
+ * module region.
*/
return offset % SZ_2G;

diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index fe21e0f06492..b5ec010c481f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -40,14 +40,16 @@ void *module_alloc(unsigned long size)
NUMA_NO_NODE, __builtin_return_address(0));

if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
- !IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- !IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ (IS_ENABLED(CONFIG_KASAN_VMALLOC) ||
+ (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ !IS_ENABLED(CONFIG_KASAN_SW_TAGS))))
/*
- * KASAN can only deal with module allocations being served
- * from the reserved module region, since the remainder of
- * the vmalloc region is already backed by zero shadow pages,
- * and punching holes into it is non-trivial. Since the module
- * region is not randomized when KASAN is enabled, it is even
+ * KASAN without KASAN_VMALLOC can only deal with module
+ * allocations being served from the reserved module region,
+ * since the remainder of the vmalloc region is already
+ * backed by zero shadow pages, and punching holes into it
+ * is non-trivial. Since the module region is not randomized
+ * when KASAN is enabled without KASAN_VMALLOC, it is even
* less likely that the module region gets exhausted, so we
* can simply omit this fallback in that case.
*/
--
2.25.1
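The module_alloc() gate rewritten by the hunk above reduces to a simple boolean check; here is a sketch with the config symbols as plain booleans:

```python
# After this patch, module_alloc()'s whole-vmalloc fallback is compiled in when:
#   CONFIG_ARM64_MODULE_PLTS && (KASAN_VMALLOC || (!GENERIC && !SW_TAGS))

def fallback_allowed(plts, kasan_vmalloc, generic, sw_tags):
    return plts and (kasan_vmalloc or (not generic and not sw_tags))

assert fallback_allowed(True, True, True, False)       # newly allowed
assert not fallback_allowed(True, False, True, False)  # unchanged: blocked
assert fallback_allowed(True, False, False, False)     # KASAN off: allowed
```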

Lecopzer Chen

Jan 21, 2021, 5:19:32 AM1/21/21
to leco...@gmail.com, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, wi...@kernel.org, yj.c...@mediatek.com
Dear reviewers and maintainers,


Could we have a chance to get this upstream in 5.12-rc?

That way, if these patches have any problems, I can fix them as soon as
possible before the next -rc comes out.


thanks!

BRs,
Lecopzer

Andrey Konovalov

Jan 21, 2021, 12:44:27 PM1/21/21
to Will Deacon, LKML, Linux Memory Management List, kasan-dev, Linux ARM, Dan Williams, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, moderated list:ARM/Mediatek SoC..., yj.c...@mediatek.com, Catalin Marinas, Ard Biesheuvel, Mark Brown, Guenter Roeck, rp...@kernel.org, tyh...@linux.microsoft.com, Robin Murphy, Vincenzo Frascino, gusta...@kernel.org, Lecopzer Chen, Lecopzer Chen
Hi Will,

Could you PTAL at the arm64 changes?

Thanks!

Will Deacon

Jan 22, 2021, 2:05:09 PM1/22/21
to Andrey Konovalov, LKML, Linux Memory Management List, kasan-dev, Linux ARM, Dan Williams, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, moderated list:ARM/Mediatek SoC..., yj.c...@mediatek.com, Catalin Marinas, Ard Biesheuvel, Mark Brown, Guenter Roeck, rp...@kernel.org, tyh...@linux.microsoft.com, Robin Murphy, Vincenzo Frascino, gusta...@kernel.org, Lecopzer Chen, Lecopzer Chen
Sorry, wanted to get to this today but I ran out of time in the end. On the
list for next week!

Will

Will Deacon

Jan 27, 2021, 6:04:23 PM1/27/21
to Lecopzer Chen, linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, Lecopzer Chen
CONFIG_KASAN_VMALLOC depends on CONFIG_KASAN_GENERIC so why is this
necessary?

Will

Lecopzer Chen

Jan 28, 2021, 3:53:44 AM1/28/21
to wi...@kernel.org, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
CONFIG_KASAN_VMALLOC=y implies CONFIG_KASAN_GENERIC=y,
but CONFIG_KASAN_GENERIC=y does not imply CONFIG_KASAN_VMALLOC=y.

So this if-condition matches only plain KASAN, not
KASAN with KASAN_VMALLOC enabled.
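The kaslr.c condition being discussed, `!IS_ENABLED(CONFIG_KASAN_VMALLOC) && (GENERIC || SW_TAGS)`, can be checked as a truth table:

```python
# kaslr.c keeps modules out of the vmalloc region only when a shadow-using
# KASAN mode is enabled *without* vmalloc shadow support.

def keep_modules_out_of_vmalloc(kasan_vmalloc, generic, sw_tags):
    return (not kasan_vmalloc) and (generic or sw_tags)

assert keep_modules_out_of_vmalloc(False, True, False)       # plain generic
assert not keep_modules_out_of_vmalloc(True, True, False)    # GENERIC+VMALLOC
assert not keep_modules_out_of_vmalloc(False, False, False)  # KASAN off
```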

Please correct me if I'm wrong.

thanks,
Lecopzer





Will Deacon

Jan 28, 2021, 3:26:57 PM1/28/21
to Lecopzer Chen, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
Sorry, you're completely right -- I missed the '!' when I read this
initially.

Will

Ard Biesheuvel

Feb 3, 2021, 1:31:18 PM2/3/21
to Lecopzer Chen, Linux Kernel Mailing List, Linux Memory Management List, kasan-dev, Linux ARM, Dan Williams, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, linux-m...@lists.infradead.org, yj.c...@mediatek.com, Will Deacon, Catalin Marinas, Andrey Konovalov, Mark Brown, Guenter Roeck, Mike Rapoport, Tyler Hicks, Robin Murphy, Vincenzo Frascino, Gustavo A. R. Silva, Lecopzer Chen
I failed to realize that VMAP_STACK and KASAN are currently mutually
exclusive on arm64, and that this series actually fixes that, which is
a big improvement, so it would make sense to call that out.

This builds and runs fine for me on a VM running under KVM.

Tested-by: Ard Biesheuvel <ar...@kernel.org>

Ard Biesheuvel

Feb 3, 2021, 1:37:32 PM2/3/21
to Lecopzer Chen, Linux Kernel Mailing List, Linux Memory Management List, kasan-dev, Linux ARM, Dan Williams, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, linux-m...@lists.infradead.org, yj.c...@mediatek.com, Will Deacon, Catalin Marinas, Andrey Konovalov, Mark Brown, Guenter Roeck, Mike Rapoport, Tyler Hicks, Robin Murphy, Vincenzo Frascino, Gustavo A. R. Silva, Lecopzer Chen
On Sat, 9 Jan 2021 at 11:33, Lecopzer Chen <leco...@gmail.com> wrote:
>
> Linux support KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> Like how the MODULES_VADDR does now, just not to early populate
> the VMALLOC_START between VMALLOC_END.
> similarly, the kernel code mapping is now in the VMALLOC area and
> should keep these area populated.
>
> Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>


This commit log text is a bit hard to follow. You are saying that the
vmalloc region is *not* backed with zero shadow or any default mapping
at all, right, and everything gets allocated on demand, just like is
the case for modules?

> ---
> arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> 1 file changed, 18 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index d8e66c78440e..39b218a64279 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> {
> u64 kimg_shadow_start, kimg_shadow_end;
> u64 mod_shadow_start, mod_shadow_end;
> + u64 vmalloc_shadow_start, vmalloc_shadow_end;
> phys_addr_t pa_start, pa_end;
> u64 i;
>
> @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
>
> + vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> + vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> +


This and the below seems overly complicated, given that VMALLOC_START
== MODULES_END. Can we simplify this?

Lecopzer Chen

Feb 4, 2021, 1:21:42 AM
to ar...@kernel.org, ak...@linux-foundation.org, andre...@google.com, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, wi...@kernel.org, yj.c...@mediatek.com
> On Sat, 9 Jan 2021 at 11:33, Lecopzer Chen <leco...@gmail.com> wrote:
> >
> > Linux support KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > Like how the MODULES_VADDR does now, just not to early populate
> > the VMALLOC_START between VMALLOC_END.
> > similarly, the kernel code mapping is now in the VMALLOC area and
> > should keep these area populated.
> >
> > Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
>
>
> This commit log text is a bit hard to follow. You are saying that the
> vmalloc region is *not* backed with zero shadow or any default mapping
> at all, right, and everything gets allocated on demand, just like is
> the case for modules?

It's much more like:

before:

MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: backed with zero shadow at init

after:

MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: no mapping, no zero shadow at init

So it should be both "not backed with zero shadow" and
"no mapping at all, with everything allocated on demand".

And "not backed with zero shadow" is a subset of "no mapping at all".


Would it be clearer if the commit message were revised to:

----------------------
Like how MODULES_VADDR is handled now, just don't early populate
the region between VMALLOC_START and VMALLOC_END.

Before:

MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: backed with zero shadow at init

After:

VMALLOC_VADDR: no mapping, no zero shadow at init

Thus the mapping will get allocated on demand by the core
KASAN vmalloc code.

Similarly, the kernel code mapping is now in the VMALLOC area, and
we should keep that area populated.
--------------------

Or would you have any suggestion?


Thanks a lot for your review!

BRs,
Lecopzer

Will Deacon

Feb 4, 2021, 7:45:52 AM
to Lecopzer Chen, linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, Lecopzer Chen
Do we really need yet another CONFIG option for KASAN? What's the use-case
for *not* enabling this if you're already enabling one of the KASAN
backends?

> + kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> + (void *)KASAN_SHADOW_END);
> + if (vmalloc_shadow_start > mod_shadow_end)

To echo Ard's concern: when is the above 'if' condition true?

Will

Will Deacon

Feb 4, 2021, 7:47:06 AM
to Lecopzer Chen, linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, Lecopzer Chen
To be honest, I think this whole line is pointless. We should be able to
pass NUMA_NO_NODE now that we're not abusing the vmemmap() allocator to
populate the shadow.

Will

Will Deacon

Feb 4, 2021, 7:49:22 AM
to Lecopzer Chen, linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, Lecopzer Chen
On Sat, Jan 09, 2021 at 06:32:48PM +0800, Lecopzer Chen wrote:
> Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> Acroding to how x86 ported it [1], they early allocated p4d and pgd,
> but in arm64 I just simulate how KAsan supports MODULES_VADDR in arm64
> by not to populate the vmalloc area except for kimg address.

The one thing I've failed to grok from your series is how you deal with
vmalloc allocations where the shadow overlaps with the shadow which has
already been allocated for the kernel image. Please can you explain?

Thanks,

Will

Lecopzer Chen

Feb 4, 2021, 9:46:24 AM
to wi...@kernel.org, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
As far as I know, KASAN_VMALLOC currently only supports KASAN_GENERIC, and
KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of the vmalloc VA).

There may be users who can enable KASAN_GENERIC but can't use VMALLOC
due to memory constraints.

> > + kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> > + (void *)KASAN_SHADOW_END);
> > + if (vmalloc_shadow_start > mod_shadow_end)
>
> To echo Ard's concern: when is the above 'if' condition true?

After reviewing this code,
since VMALLOC_START is defined as MODULES_END,
this if-condition will never be true.

I also tested with it removed, and everything works fine.

I'll remove this in the next version of the patch.
Thanks a lot for pointing this out.

BRs,
Lecopzer

Lecopzer Chen

Feb 4, 2021, 9:51:38 AM
to wi...@kernel.org, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
Do we need to fix this in this series? It seems like a separate topic.
If not, should this patch be dropped from the series?

Thanks,
Lecopzer

Will Deacon

Feb 4, 2021, 9:55:56 AM
to Lecopzer Chen, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
Since you're reposting anyway, you may as well include a patch doing that.
If you don't, then I will.

Will

Will Deacon

Feb 4, 2021, 10:01:10 AM
to Lecopzer Chen, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
The shadow is allocated dynamically though, isn't it?

> There may be users who can enable KASAN_GENERIC but can't use VMALLOC
> due to memory constraints.

That doesn't sound particularly realistic to me. The reason I'm pushing here
is because I would _really_ like to move to VMAP stack unconditionally, and
that would effectively force KASAN_VMALLOC to be set if KASAN is in use.

So unless there's a really good reason not to do that, please can we make
this unconditional for arm64? Pretty please?

Will

Lecopzer Chen

Feb 4, 2021, 10:53:57 AM
to wi...@kernel.org, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
The key point is that we don't map anything in the vmalloc shadow region,
so we don't care where the kernel image is located inside the vmalloc area.

kasan_map_populate(kimg_shadow_start, kimg_shadow_end,...)

The kernel image is populated with a real mapping at its shadow address.
I `bypass' the whole shadow of the vmalloc area; the only place the
vmalloc shadow shows up is
kasan_populate_early_shadow((void *)vmalloc_shadow_end,
(void *)KASAN_SHADOW_END);

----------- vmalloc_shadow_start
| |
| |
| | <= non-mapping
| |
| |
|-----------|
|///////////|<- kimage shadow with page table mapping.
|-----------|
| |
| | <= non-mapping
| |
------------- vmalloc_shadow_end
|00000000000|
|00000000000| <= Zero shadow
|00000000000|
------------- KASAN_SHADOW_END

The vmalloc shadow will be mapped on demand; see kasan_populate_vmalloc()
in mm/vmalloc.c for details.
So the shadow for a vmalloc range is allocated later, when someone uses its VA.


BRs,
Lecopzer


Lecopzer Chen

Feb 4, 2021, 11:06:22 AM
to Will Deacon, Andrew Morton, Andrey Konovalov, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, Catalin Marinas, dan.j.w...@intel.com, Dmitry Vyukov, Alexander Potapenko, gusta...@kernel.org, kasa...@googlegroups.com, Jian-Lin Chen, linux-arm-kernel, Linux Kernel Mailing List, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
I think it would be better to leave this to you, since I'm not
familiar with the relationship between vmemmap() and NUMA_NO_NODE.

So I would just keep this patch in the next version; is that fine with you?


Thanks for your help:)

Lecopzer



Will Deacon <wi...@kernel.org> wrote on Thursday, Feb 4, 2021 at 10:55 PM:

Lecopzer Chen

Feb 4, 2021, 11:37:33 AM
to wi...@kernel.org, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, leco...@gmail.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
Yes, but it's still a cost.

> > There may be users who can enable KASAN_GENERIC but can't use VMALLOC
> > due to memory constraints.
>
> That doesn't sound particularly realistic to me. The reason I'm pushing here
> is because I would _really_ like to move to VMAP stack unconditionally, and
> that would effectively force KASAN_VMALLOC to be set if KASAN is in use.
>
> So unless there's a really good reason not to do that, please can we make
> this unconditional for arm64? Pretty please?

I think it's fine since we have a good reason.
Also, if someone has memory issues with KASAN_VMALLOC,
they can use SW_TAGS, right?

However, SW_TAGS/HW_TAGS don't support VMALLOC yet.
So the code would look like:

if (IS_ENABLED(CONFIG_KASAN_GENERIC))
/* explain the relationship between
* KASAN_GENERIC and KASAN_VMALLOC in arm64
* XXX: because we want VMAP stack....
*/
kasan_populate_early_shadow((void *)vmalloc_shadow_end,
(void *)KASAN_SHADOW_END);
else {
kasan_populate_early_shadow((void *)kimg_shadow_end,
(void *)KASAN_SHADOW_END);
if (kimg_shadow_start > mod_shadow_end)
kasan_populate_early_shadow((void *)mod_shadow_end,
(void *)kimg_shadow_start);
}

and the arch/arm64/Kconfig will add
select KASAN_VMALLOC if KASAN_GENERIC

Is this code same as your thought?

BRs,
Lecopzer

Will Deacon

Feb 4, 2021, 12:57:08 PM
to Lecopzer Chen, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
Indeed, but the question I'm asking is what happens when an on-demand shadow
allocation from vmalloc overlaps with the shadow that we allocated early for
the kernel image?

Sounds like I have to go and read the code...

Will

Lecopzer Chen

unread,
Feb 4, 2021, 1:32:40 PM2/4/21
to Will Deacon, Andrew Morton, Andrey Konovalov, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
Oh, sorry, I misunderstood your question.

FWIW,
I think this won't happen, because it would mean vmalloc() handed out a VA
that was already allocated for the kernel image. As far as I know,
vmalloc_init() inserts the early-allocated VMAs into its vmalloc rb tree,
and those early-allocated VMAs include the kernel image.

After a quick review of the mm init code,
this early VMA allocation happens in map_kernel() in arch/arm64/mm/mmu.c.



BRs
Lecopzer


Lecopzer Chen

Feb 4, 2021, 1:41:46 PM
to Will Deacon, Andrew Morton, Andrey Konovalov, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, Catalin Marinas, dan.j.w...@intel.com, Dmitry Vyukov, Alexander Potapenko, gusta...@kernel.org, kasa...@googlegroups.com, Jian-Lin Chen, linux-arm-kernel, Linux Kernel Mailing List, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com

Will Deacon

Feb 5, 2021, 12:02:23 PM
to Lecopzer Chen, Andrew Morton, Andrey Konovalov, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, Catalin Marinas, dan.j.w...@intel.com, Dmitry Vyukov, Alexander Potapenko, gusta...@kernel.org, kasa...@googlegroups.com, Jian-Lin Chen, linux-arm-kernel, Linux Kernel Mailing List, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
On Fri, Feb 05, 2021 at 12:06:10AM +0800, Lecopzer Chen wrote:
> I think it would be better to leave this to you, since I'm not
> familiar with the relationship between vmemmap() and NUMA_NO_NODE.
>
> So I would just keep this patch in the next version; is that fine with you?

Yes, ok.

Will

Will Deacon

Feb 5, 2021, 12:19:08 PM
to Lecopzer Chen, ak...@linux-foundation.org, andre...@google.com, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, catalin...@arm.com, dan.j.w...@intel.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
Just make this CONFIG_KASAN_VMALLOC, since that depends on KASAN_GENERIC.

> /* explain the relationship between
> * KASAN_GENERIC and KASAN_VMALLOC in arm64
> * XXX: because we want VMAP stack....
> */

I don't understand the relation with SW_TAGS. The VMAP_STACK dependency is:

depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC

which doesn't mention SW_TAGS at all. So that seems to imply that SW_TAGS
and VMAP_STACK are mutually exclusive :(

Will

Andrey Konovalov

Feb 5, 2021, 12:30:56 PM
to Will Deacon, Lecopzer Chen, Andrew Morton, Ard Biesheuvel, Andrey Ryabinin, Mark Brown, Catalin Marinas, Dan Williams, Dmitry Vyukov, Alexander Potapenko, gusta...@kernel.org, kasan-dev, Lecopzer Chen, Linux ARM, LKML, moderated list:ARM/Mediatek SoC..., Linux Memory Management List, Guenter Roeck, Robin Murphy, rp...@kernel.org, tyh...@linux.microsoft.com, Vincenzo Frascino, yj.c...@mediatek.com
This means that VMAP_STACK can be only enabled if KASAN_HW_TAGS=y or
if KASAN_VMALLOC=y for other modes.

>
> which doesn't mention SW_TAGS at all. So that seems to imply that SW_TAGS
> and VMAP_STACK are mutually exclusive :(

SW_TAGS doesn't yet have vmalloc support, so it's not compatible with
VMAP_STACK. Once vmalloc support is added to SW_TAGS, KASAN_VMALLOC
should be allowed to be enabled with SW_TAGS. This series is a step
towards having that support, but doesn't implement it. That will be a
separate effort.

Will Deacon

Feb 5, 2021, 12:43:10 PM
to Andrey Konovalov, Lecopzer Chen, Andrew Morton, Ard Biesheuvel, Andrey Ryabinin, Mark Brown, Catalin Marinas, Dan Williams, Dmitry Vyukov, Alexander Potapenko, gusta...@kernel.org, kasan-dev, Lecopzer Chen, Linux ARM, LKML, moderated list:ARM/Mediatek SoC..., Linux Memory Management List, Guenter Roeck, Robin Murphy, rp...@kernel.org, tyh...@linux.microsoft.com, Vincenzo Frascino, yj.c...@mediatek.com
Ok, thanks. Then I think we should try to invert the dependency here, if
possible, so that the KASAN backends depend on !VMAP_STACK if they don't
support it, rather than silently disabling VMAP_STACK when they are
selected.

Will
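The inversion being proposed would look roughly like this in Kconfig terms (a sketch only; the option names come from the existing tree, but this is not an actual posted patch):

```kconfig
# lib/Kconfig.kasan (hypothetical sketch)
config KASAN_SW_TAGS
	bool "software tag-based mode"
	# SW_TAGS has no vmalloc support yet, so refuse to combine it with
	# VMAP_STACK at configuration time instead of silently disabling
	# VMAP_STACK from arch/arm64/Kconfig.
	depends on !VMAP_STACK
```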

Lecopzer Chen

Feb 5, 2021, 1:11:10 PM
to Will Deacon, Andrew Morton, Andrey Konovalov, ar...@kernel.org, arya...@virtuozzo.com, bro...@kernel.org, Catalin Marinas, dan.j.w...@intel.com, Dmitry Vyukov, Alexander Potapenko, gusta...@kernel.org, kasa...@googlegroups.com, Jian-Lin Chen, linux-arm-kernel, Linux Kernel Mailing List, linux-m...@lists.infradead.org, linu...@kvack.org, li...@roeck-us.net, robin....@arm.com, rp...@kernel.org, tyh...@linux.microsoft.com, vincenzo...@arm.com, yj.c...@mediatek.com
Will Deacon <wi...@kernel.org> wrote on Saturday, Feb 6, 2021 at 1:19 AM:
> > > > As far as I know, KASAN_VMALLOC currently only supports KASAN_GENERIC, and
> > > > KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of the vmalloc VA).
> > >
> > > The shadow is allocated dynamically though, isn't it?
> >
> > Yes, but it's still a cost.
> >
> > > > There may be users who can enable KASAN_GENERIC but can't use VMALLOC
> > > > due to memory constraints.
> > >
> > > That doesn't sound particularly realistic to me. The reason I'm pushing here
> > > is because I would _really_ like to move to VMAP stack unconditionally, and
> > > that would effectively force KASAN_VMALLOC to be set if KASAN is in use.
> > >
> > > So unless there's a really good reason not to do that, please can we make
> > > this unconditional for arm64? Pretty please?
> >
> > I think it's fine since we have a good reason.
> > Also, if someone has memory issues with KASAN_VMALLOC,
> > they can use SW_TAGS, right?
> >
> > However, SW_TAGS/HW_TAGS don't support VMALLOC yet.
> > So the code would look like:
> >
> > if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>
> Just make this CONFIG_KASAN_VMALLOC, since that depends on KASAN_GENERIC.

OK, this also makes sense.
My first thought was that selecting KASAN_GENERIC implying VMALLOC on
arm64 is a special case, so it needs to be well documented.
I'll document this in the commit message of the Kconfig patch to avoid
messing up the code here.

I'm going to send V3 patch, thanks again for your review.

BRs,
Lecopzer

Andrey Konovalov

Feb 5, 2021, 3:51:07 PM
to Will Deacon, Lecopzer Chen, Andrew Morton, Ard Biesheuvel, Andrey Ryabinin, Mark Brown, Catalin Marinas, Dan Williams, Dmitry Vyukov, Alexander Potapenko, gusta...@kernel.org, kasan-dev, Lecopzer Chen, Linux ARM, LKML, moderated list:ARM/Mediatek SoC..., Linux Memory Management List, Guenter Roeck, Robin Murphy, rp...@kernel.org, tyh...@linux.microsoft.com, Vincenzo Frascino, yj.c...@mediatek.com
SGTM. Not sure if I will get to this in the nearest future, so I filed
a bug: https://bugzilla.kernel.org/show_bug.cgi?id=211581

Lecopzer Chen

Feb 6, 2021, 3:36:10 AM
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, wi...@kernel.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, leco...@gmail.com, Lecopzer Chen
Linux supports KASAN for VMALLOC since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory")

Like how MODULES_VADDR is handled now, just don't early populate
the region between VMALLOC_START and VMALLOC_END.

Before:

MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: backed with zero shadow at init

After:

MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: no mapping, no zero shadow at init

Thus the mapping will get allocated on demand by the core function
of KASAN_VMALLOC.

----------- vmalloc_shadow_start
| |
| |
| | <= non-mapping
| |
| |
|-----------|
|///////////|<- kimage shadow with page table mapping.
|-----------|
| |
| | <= non-mapping
| |
------------- vmalloc_shadow_end
|00000000000|
|00000000000| <= Zero shadow
|00000000000|
------------- KASAN_SHADOW_END

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/mm/kasan_init.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d8e66c78440e..20d06008785f 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
{
u64 kimg_shadow_start, kimg_shadow_end;
u64 mod_shadow_start, mod_shadow_end;
+ u64 vmalloc_shadow_end;
phys_addr_t pa_start, pa_end;
u64 i;

@@ -223,6 +224,8 @@ static void __init kasan_init_shadow(void)
mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);

+ vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
+
/*
* We are going to perform proper setup of shadow memory.
* At first we should unmap early shadow (clear_pgds() call below).
@@ -241,12 +244,17 @@ static void __init kasan_init_shadow(void)

kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
(void *)mod_shadow_start);
- kasan_populate_early_shadow((void *)kimg_shadow_end,
- (void *)KASAN_SHADOW_END);

- if (kimg_shadow_start > mod_shadow_end)
- kasan_populate_early_shadow((void *)mod_shadow_end,
- (void *)kimg_shadow_start);
+ if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
+ kasan_populate_early_shadow((void *)vmalloc_shadow_end,
+ (void *)KASAN_SHADOW_END);
+ else {
+ kasan_populate_early_shadow((void *)kimg_shadow_end,
+ (void *)KASAN_SHADOW_END);
+ if (kimg_shadow_start > mod_shadow_end)
+ kasan_populate_early_shadow((void *)mod_shadow_end,
+ (void *)kimg_shadow_start);
+ }

Lecopzer Chen

Feb 6, 2021, 3:36:12 AM
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, wi...@kernel.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, leco...@gmail.com, Lecopzer Chen
Arm64 provides defined macros for KERNEL_START and KERNEL_END,
so replace _text and _end with these abstractions.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/mm/kasan_init.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 20d06008785f..cd2653b7b174 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
phys_addr_t pa_start, pa_end;
u64 i;

- kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
- kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
+ kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
+ kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));

mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
@@ -240,7 +240,7 @@ static void __init kasan_init_shadow(void)
clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);

kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
- early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+ early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START))));

kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
(void *)mod_shadow_start);
--
2.25.1

Lecopzer Chen

Feb 6, 2021, 3:36:13 AM
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, wi...@kernel.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, leco...@gmail.com, Lecopzer Chen

Linux supports KASAN for VMALLOC since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory")

According to how x86 ported it [1], they early allocate the p4d and pgd,
but on arm64 I just mimic how KASAN supports MODULES_VADDR,
by not populating the vmalloc area except for the kimg address.

----------- vmalloc_shadow_start
| |
| |
| | <= non-mapping
| |
| |
|-----------|
|///////////|<- kimage shadow with page table mapping.
|-----------|
| |
| | <= non-mapping
| |
------------- vmalloc_shadow_end
|00000000000|
|00000000000| <= Zero shadow
|00000000000|
------------- KASAN_SHADOW_END


Test environment:
4G and 8G QEMU virt,
39-bit VA + 4k PAGE_SIZE with 3-level page table,
tested with lib/test_kasan.ko and lib/test_kasan_module.ko

It works with KASLR with CONFIG_RANDOMIZE_MODULE_REGION_FULL
and a randomized module region inside the vmalloc area.

Also works with VMAP_STACK; thanks Ard for testing it.


[1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")


Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
Acked-by: Andrey Konovalov <andre...@google.com>
Tested-by: Andrey Konovalov <andre...@google.com>
Tested-by: Ard Biesheuvel <ar...@kernel.org>

---
Thanks Will Deacon, Ard Biesheuvel and Andrey Konovalov
for reviewing and suggestion.

v2 -> v3
rebase on 5.11-rc6
1. remove the always-true condition in kasan_init() and remove the unused
vmalloc_shadow_start.
2. select KASAN_VMALLOC if KASAN_GENERIC is enabled
for VMAP_STACK.
3. tweak commit messages

v1 -> v2
1. kasan_init.c: tweak indentation
2. change Kconfig to depend only on HAVE_ARCH_KASAN
3. support randomized module region.


v2:
https://lkml.org/lkml/2021/1/9/49
v1:
https://lore.kernel.org/lkml/20210103171137.1...@gmail.com/
---
Lecopzer Chen (5):
arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
arm64: kasan: abstract _text and _end to KERNEL_START/END
arm64: Kconfig: support CONFIG_KASAN_VMALLOC
arm64: kaslr: support randomized module area with KASAN_VMALLOC
arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled

arch/arm64/Kconfig | 2 ++
arch/arm64/kernel/kaslr.c | 18 ++++++++++--------
arch/arm64/kernel/module.c | 16 +++++++++-------
arch/arm64/mm/kasan_init.c | 24 ++++++++++++++++--------
4 files changed, 37 insertions(+), 23 deletions(-)

--
2.25.1

Lecopzer Chen

Feb 6, 2021, 3:36:13 AM
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, wi...@kernel.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, leco...@gmail.com, Lecopzer Chen
Now that KASAN_VMALLOC works on arm64, we can randomize the module
region into the vmalloc area.

Test:
VMALLOC area ffffffc010000000 fffffffdf0000000

before the patch:
module_alloc_base/end ffffffc008b80000 ffffffc010000000
after the patch:
module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

And loading some modules with insmod works fine.

Suggested-by: Ard Biesheuvel <ar...@kernel.org>
Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/kernel/kaslr.c | 18 ++++++++++--------
arch/arm64/kernel/module.c | 16 +++++++++-------
2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1c74c45b9494..a2858058e724 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
/* use the top 16 bits to randomize the linear region */
memstart_offset_seed = seed >> 48;

- if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
- IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
+ (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+ IS_ENABLED(CONFIG_KASAN_SW_TAGS)))
/*
- * KASAN does not expect the module region to intersect the
- * vmalloc region, since shadow memory is allocated for each
- * module at load time, whereas the vmalloc region is shadowed
- * by KASAN zero pages. So keep modules out of the vmalloc
- * region if KASAN is enabled, and put the kernel well within
- * 4 GB of the module region.
+ * KASAN without KASAN_VMALLOC does not expect the module region
+ * to intersect the vmalloc region, since shadow memory is
+ * allocated for each module at load time, whereas the vmalloc
+ * region is shadowed by KASAN zero pages. So keep modules
+ * out of the vmalloc region if KASAN is enabled without
+ * KASAN_VMALLOC, and put the kernel well within 4 GB of the
+ * module region.
*/
return offset % SZ_2G;

diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index fe21e0f06492..b5ec010c481f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -40,14 +40,16 @@ void *module_alloc(unsigned long size)
NUMA_NO_NODE, __builtin_return_address(0));

if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
- !IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- !IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ (IS_ENABLED(CONFIG_KASAN_VMALLOC) ||
+ (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ !IS_ENABLED(CONFIG_KASAN_SW_TAGS))))
/*
- * KASAN can only deal with module allocations being served
- * from the reserved module region, since the remainder of
- * the vmalloc region is already backed by zero shadow pages,
- * and punching holes into it is non-trivial. Since the module
- * region is not randomized when KASAN is enabled, it is even
+ * KASAN without KASAN_VMALLOC can only deal with module
+ * allocations being served from the reserved module region,
+ * since the remainder of the vmalloc region is already
+ * backed by zero shadow pages, and punching holes into it
+ * is non-trivial. Since the module region is not randomized
+ * when KASAN is enabled without KASAN_VMALLOC, it is even
* less likely that the module region gets exhausted, so we
* can simply omit this fallback in that case.
*/
--
2.25.1

Lecopzer Chen

Feb 6, 2021, 3:36:35 AM2/6/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, wi...@kernel.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, leco...@gmail.com, Lecopzer Chen
Before this patch, anyone who wants to use VMAP_STACK with
KASAN_GENERIC enabled must explicitly select KASAN_VMALLOC.

From Will's suggestion [1]:
> I would _really_ like to move to VMAP stack unconditionally, and
> that would effectively force KASAN_VMALLOC to be set if KASAN is in use.

Because VMAP_STACK now depends on either HW_TAGS or KASAN_VMALLOC when
KASAN is enabled, we bind KASAN_GENERIC and KASAN_VMALLOC together in
order to make VMAP_STACK selectable unconditionally.

Note that SW_TAGS supports neither VMAP_STACK nor KASAN_VMALLOC now,
so this is the first step to make VMAP_STACK selected unconditionally.

Binding KASAN_GENERIC and KASAN_VMALLOC together is expected to cost more
memory at runtime; the alternative is to use SW_TAGS KASAN instead.

[1]: https://lore.kernel.org/lkml/20210204150100.GE20815@willie-the-truck/

Suggested-by: Will Deacon <wi...@kernel.org>
Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a8f5a9171a85..9be6a57f6447 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -190,6 +190,7 @@ config ARM64
select IOMMU_DMA if IOMMU_SUPPORT
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
+ select KASAN_VMALLOC if KASAN_GENERIC
select MODULES_USE_ELF_RELA
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
--
2.25.1

Lecopzer Chen

Feb 6, 2021, 3:36:36 AM2/6/21
to linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, wi...@kernel.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, catalin...@arm.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, leco...@gmail.com, Lecopzer Chen
Now that we can back shadow memory in the vmalloc area,
make KASAN_VMALLOC selectable.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f39568b28ec1..a8f5a9171a85 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -136,6 +136,7 @@ config ARM64
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+ select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
select HAVE_ARCH_KGDB
--
2.25.1

Catalin Marinas

Mar 19, 2021, 1:38:05 PM3/19/21
to Lecopzer Chen, linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, wi...@kernel.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, leco...@gmail.com
On Sat, Feb 06, 2021 at 04:35:48PM +0800, Lecopzer Chen wrote:
> Linux support KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> Like how the MODULES_VADDR does now, just not to early populate
> the VMALLOC_START between VMALLOC_END.
>
> Before:
>
> MODULE_VADDR: no mapping, no zoreo shadow at init
> VMALLOC_VADDR: backed with zero shadow at init
>
> After:
>
> MODULE_VADDR: no mapping, no zoreo shadow at init
> VMALLOC_VADDR: no mapping, no zoreo shadow at init

s/zoreo/zero/
Not something introduced by this patch but what happens if this
condition is false? It means that kimg_shadow_end < mod_shadow_start and
the above kasan_populate_early_shadow(PAGE_END, mod_shadow_start)
overlaps with the earlier kasan_map_populate(kimg_shadow_start,
kimg_shadow_end).

> + if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
> + kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> + (void *)KASAN_SHADOW_END);
> + else {
> + kasan_populate_early_shadow((void *)kimg_shadow_end,
> + (void *)KASAN_SHADOW_END);
> + if (kimg_shadow_start > mod_shadow_end)
> + kasan_populate_early_shadow((void *)mod_shadow_end,
> + (void *)kimg_shadow_start);
> + }
>
> for_each_mem_range(i, &pa_start, &pa_end) {
> void *start = (void *)__phys_to_virt(pa_start);
> --
> 2.25.1
>

--
Catalin

Catalin Marinas

Mar 19, 2021, 1:41:15 PM3/19/21
to Lecopzer Chen, linux-...@vger.kernel.org, linu...@kvack.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, wi...@kernel.org, dan.j.w...@intel.com, arya...@virtuozzo.com, gli...@google.com, dvy...@google.com, ak...@linux-foundation.org, linux-m...@lists.infradead.org, yj.c...@mediatek.com, ar...@kernel.org, andre...@google.com, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org, leco...@gmail.com
Hi Lecopzer,

On Sat, Feb 06, 2021 at 04:35:47PM +0800, Lecopzer Chen wrote:
> Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> Acroding to how x86 ported it [1], they early allocated p4d and pgd,
> but in arm64 I just simulate how KAsan supports MODULES_VADDR in arm64
> by not to populate the vmalloc area except for kimg address.

Do you plan an update to a newer kernel like 5.12-rc3?

> Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
> Acked-by: Andrey Konovalov <andre...@google.com>
> Tested-by: Andrey Konovalov <andre...@google.com>
> Tested-by: Ard Biesheuvel <ar...@kernel.org>

You could move these to individual patches rather than the cover letter,
assuming that they still stand after the changes you've made. Also note
that Andrey K no longer has the @google.com email address if you cc him
on future patches (replace it with @gmail.com).

Thanks.

--
Catalin

Lecopzer Chen

Mar 20, 2021, 6:58:56 AM3/20/21
to Catalin Marinas, Lecopzer Chen, Linux Kernel Mailing List, linu...@kvack.org, kasa...@googlegroups.com, linux-arm-kernel, Will Deacon, dan.j.w...@intel.com, arya...@virtuozzo.com, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, linux-m...@lists.infradead.org, yj.c...@mediatek.com, ar...@kernel.org, Andrey Konovalov, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org
On Sat, Mar 20, 2021 at 1:41 AM Catalin Marinas <catalin...@arm.com> wrote:
>
> Hi Lecopzer,
>
> On Sat, Feb 06, 2021 at 04:35:47PM +0800, Lecopzer Chen wrote:
> > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > Acroding to how x86 ported it [1], they early allocated p4d and pgd,
> > but in arm64 I just simulate how KAsan supports MODULES_VADDR in arm64
> > by not to populate the vmalloc area except for kimg address.
>
> Do you plan an update to a newer kernel like 5.12-rc3?
>

Yes, of course. I was dealing with some personal matters, so I didn't
update this series last month.

> > Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
> > Acked-by: Andrey Konovalov <andre...@google.com>
> > Tested-by: Andrey Konovalov <andre...@google.com>
> > Tested-by: Ard Biesheuvel <ar...@kernel.org>
>
> You could move these to individual patches rather than the cover letter,
> assuming that they still stand after the changes you've made. Also note
> that Andrey K no longer has the @google.com email address if you cc him
> on future patches (replace it with @gmail.com).
>

Ok thanks for the suggestion.
I will move them to each patch and correct the email address.


Thanks,
Lecopzer

Lecopzer Chen

Mar 20, 2021, 9:01:19 AM3/20/21
to Catalin Marinas, Lecopzer Chen, Linux Kernel Mailing List, linu...@kvack.org, kasa...@googlegroups.com, linux-arm-kernel, Will Deacon, dan.j.w...@intel.com, arya...@virtuozzo.com, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, linux-m...@lists.infradead.org, yj.c...@mediatek.com, ar...@kernel.org, Andrey Konovalov, bro...@kernel.org, li...@roeck-us.net, rp...@kernel.org, tyh...@linux.microsoft.com, robin....@arm.com, vincenzo...@arm.com, gusta...@kernel.org
On Sat, Mar 20, 2021 at 1:38 AM Catalin Marinas <catalin...@arm.com> wrote:
>
> On Sat, Feb 06, 2021 at 04:35:48PM +0800, Lecopzer Chen wrote:
> > Linux support KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > Like how the MODULES_VADDR does now, just not to early populate
> > the VMALLOC_START between VMALLOC_END.
> >
> > Before:
> >
> > MODULE_VADDR: no mapping, no zoreo shadow at init
> > VMALLOC_VADDR: backed with zero shadow at init
> >
> > After:
> >
> > MODULE_VADDR: no mapping, no zoreo shadow at init
> > VMALLOC_VADDR: no mapping, no zoreo shadow at init
>
> s/zoreo/zero/
>

thanks!
In this case, the area between mod_shadow_start and kimg_shadow_end
was already mapped at kasan_init() time.

Thus the corner case is that module_alloc() allocates that range
(the area between mod_shadow_start and kimg_shadow_end) again.


With VMALLOC_KASAN,
module_alloc() ->
... ->
kasan_populate_vmalloc ->
apply_to_page_range()
will check the mapping exists or not and bypass allocating new mapping
if it exists.
So it should be fine in the second allocation.

Without VMALLOC_KASAN,
module_alloc() ->
kasan_module_alloc()
will allocate the range twice: the first time via kasan_map_populate()
and the second time via vmalloc(), and this could be a problem(?).

Now the only possibility for the module area to overlap with the kimage
should be with KASLR on.
I'm not sure if this case really happens under KASLR; it depends on
how __relocate_kernel() calculates the kimage and how kaslr_early_init()
decides module_alloc_base.

Lecopzer Chen

Mar 24, 2021, 12:05:35 AM3/24/21
to linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, catalin...@arm.com, wi...@kernel.org, ryabin...@gmail.com, gli...@google.com, andre...@gmail.com, dvy...@google.com, ak...@linux-foundation.org, tyh...@linux.microsoft.com, m...@kernel.org, rp...@kernel.org, li...@roeck-us.net, gusta...@kernel.org, yj.c...@mediatek.com, Lecopzer Chen

Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory")

According to how x86 ported it [1], they early allocate p4d and pgd,
but on arm64 I just simulate how KAsan supports MODULES_VADDR
by not populating the vmalloc area except for kimg addresses.

----------- vmalloc_shadow_start
| |
| |
| | <= non-mapping
| |
| |
|-----------|
|///////////|<- kimage shadow with page table mapping.
|-----------|
| |
| | <= non-mapping
| |
------------- vmalloc_shadow_end
|00000000000|
|00000000000| <= Zero shadow
|00000000000|
------------- KASAN_SHADOW_END


Test environment:
4G and 8G Qemu virt,
39-bit VA + 4k PAGE_SIZE with 3-level page table,
test by lib/test_kasan.ko and lib/test_kasan_module.ko

It works with KASLR and CONFIG_RANDOMIZE_MODULE_REGION_FULL,
randomizing the module region inside the vmalloc area.

It also works with VMAP_STACK; thanks to Ard for testing it.


[1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")


---
Thanks Will Deacon, Ard Biesheuvel and Andrey Konovalov
for reviewing and suggestion.

v4:
1. rebase on 5.12-rc4
2. tweak commit message

v3:
rebase on 5.11-rc6
1. remove the always-true condition in kasan_init() and remove the
unused vmalloc_shadow_start.
2. select KASAN_VMALLOC if KASAN_GENERIC is enabled
for VMAP_STACK.
3. tweak commit message

v2:
1. kasan_init.c tweak indent
2. change Kconfig depends only on HAVE_ARCH_KASAN
3. support randomized module region.



v3:
https://lore.kernel.org/lkml/20210206083552.243...@mediatek.com/
v2:
https://lkml.org/lkml/2021/1/9/49
v1:
https://lore.kernel.org/lkml/20210103171137.1...@gmail.com/
---
Lecopzer Chen (5):
arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
arm64: kasan: abstract _text and _end to KERNEL_START/END
arm64: Kconfig: support CONFIG_KASAN_VMALLOC
arm64: kaslr: support randomized module area with KASAN_VMALLOC
arm64: Kconfig: select KASAN_VMALLOC if KANSAN_GENERIC is enabled

arch/arm64/Kconfig | 2 ++
arch/arm64/kernel/kaslr.c | 18 ++++++++++--------
arch/arm64/kernel/module.c | 16 +++++++++-------

Lecopzer Chen

Mar 24, 2021, 12:05:36 AM3/24/21
to linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, catalin...@arm.com, wi...@kernel.org, ryabin...@gmail.com, gli...@google.com, andre...@gmail.com, dvy...@google.com, ak...@linux-foundation.org, tyh...@linux.microsoft.com, m...@kernel.org, rp...@kernel.org, li...@roeck-us.net, gusta...@kernel.org, yj.c...@mediatek.com, Lecopzer Chen
Now that the vmalloc area is no longer populated at kasan_init(), we can
back its shadow memory on demand; thus make KASAN_VMALLOC selectable.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
Acked-by: Andrey Konovalov <andre...@gmail.com>
Tested-by: Andrey Konovalov <andre...@gmail.com>
Tested-by: Ard Biesheuvel <ar...@kernel.org>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5656e7aacd69..3e54fa938234 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -138,6 +138,7 @@ config ARM64
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+ select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
select HAVE_ARCH_KFENCE
--
2.25.1

Lecopzer Chen

Mar 24, 2021, 12:05:37 AM3/24/21
to linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, catalin...@arm.com, wi...@kernel.org, ryabin...@gmail.com, gli...@google.com, andre...@gmail.com, dvy...@google.com, ak...@linux-foundation.org, tyh...@linux.microsoft.com, m...@kernel.org, rp...@kernel.org, li...@roeck-us.net, gusta...@kernel.org, yj.c...@mediatek.com, Lecopzer Chen
Linux support KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory")

Like what is done for MODULES_VADDR now, just do not early populate
the area between VMALLOC_START and VMALLOC_END.

Before:

MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: backed with zero shadow at init

After:

MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: no mapping, no zero shadow at init

Thus the mapping will get allocated on demand by the core function
of KASAN_VMALLOC.

----------- vmalloc_shadow_start
| |
| |
| | <= non-mapping
| |
| |
|-----------|
|///////////|<- kimage shadow with page table mapping.
|-----------|
| |
| | <= non-mapping
| |
------------- vmalloc_shadow_end
|00000000000|
|00000000000| <= Zero shadow
|00000000000|
------------- KASAN_SHADOW_END

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
Acked-by: Andrey Konovalov <andre...@gmail.com>
Tested-by: Andrey Konovalov <andre...@gmail.com>
Tested-by: Ard Biesheuvel <ar...@kernel.org>
---

Lecopzer Chen

Mar 24, 2021, 12:05:37 AM3/24/21
to linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, catalin...@arm.com, wi...@kernel.org, ryabin...@gmail.com, gli...@google.com, andre...@gmail.com, dvy...@google.com, ak...@linux-foundation.org, tyh...@linux.microsoft.com, m...@kernel.org, rp...@kernel.org, li...@roeck-us.net, gusta...@kernel.org, yj.c...@mediatek.com, Lecopzer Chen
Arm64 provides the KERNEL_START and KERNEL_END macros,
so use these abstractions instead of _text and _end.

Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
Acked-by: Andrey Konovalov <andre...@gmail.com>
Tested-by: Andrey Konovalov <andre...@gmail.com>
Tested-by: Ard Biesheuvel <ar...@kernel.org>
---
arch/arm64/mm/kasan_init.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 20d06008785f..cd2653b7b174 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
phys_addr_t pa_start, pa_end;
u64 i;

- kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
- kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
+ kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
+ kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));

mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);

Lecopzer Chen

Mar 24, 2021, 12:05:38 AM3/24/21
to linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, catalin...@arm.com, wi...@kernel.org, ryabin...@gmail.com, gli...@google.com, andre...@gmail.com, dvy...@google.com, ak...@linux-foundation.org, tyh...@linux.microsoft.com, m...@kernel.org, rp...@kernel.org, li...@roeck-us.net, gusta...@kernel.org, yj.c...@mediatek.com, Lecopzer Chen
Now that KASAN_VMALLOC works on arm64, we can randomize the module
region into the vmalloc area.

Test:
VMALLOC area ffffffc010000000 fffffffdf0000000

before the patch:
module_alloc_base/end ffffffc008b80000 ffffffc010000000
after the patch:
module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

Inserting modules with insmod also works fine.

Suggested-by: Ard Biesheuvel <ar...@kernel.org>
Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/kernel/kaslr.c | 18 ++++++++++--------
arch/arm64/kernel/module.c | 16 +++++++++-------
2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 27f8939deb1b..341342b207f6 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -128,15 +128,17 @@ u64 __init kaslr_early_init(void)

Lecopzer Chen

Mar 24, 2021, 12:05:41 AM3/24/21
to linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, catalin...@arm.com, wi...@kernel.org, ryabin...@gmail.com, gli...@google.com, andre...@gmail.com, dvy...@google.com, ak...@linux-foundation.org, tyh...@linux.microsoft.com, m...@kernel.org, rp...@kernel.org, li...@roeck-us.net, gusta...@kernel.org, yj.c...@mediatek.com, Lecopzer Chen
Before this patch, anyone who wants to use VMAP_STACK with
KASAN_GENERIC enabled must explicitly select KASAN_VMALLOC.

From Will's suggestion [1]:
> I would _really_ like to move to VMAP stack unconditionally, and
> that would effectively force KASAN_VMALLOC to be set if KASAN is in use

Because VMAP_STACK now depends on either HW_TAGS or KASAN_VMALLOC when
KASAN is enabled, we bind KASAN_GENERIC and KASAN_VMALLOC together in
order to make VMAP_STACK selectable unconditionally.

Note that SW_TAGS supports neither VMAP_STACK nor KASAN_VMALLOC now,
so this is the first step to make VMAP_STACK selected unconditionally.

Binding KASAN_GENERIC and KASAN_VMALLOC together is expected to cost more
memory at runtime; the alternative is to use SW_TAGS KASAN instead.

[1]: https://lore.kernel.org/lkml/20210204150100.GE20815@willie-the-truck/

Suggested-by: Will Deacon <wi...@kernel.org>
Signed-off-by: Lecopzer Chen <lecopz...@mediatek.com>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3e54fa938234..07762359d741 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -195,6 +195,7 @@ config ARM64

Catalin Marinas

Mar 29, 2021, 8:28:42 AM3/29/21
to kasa...@googlegroups.com, wi...@kernel.org, Lecopzer Chen, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, li...@roeck-us.net, ryabin...@gmail.com, andre...@gmail.com, yj.c...@mediatek.com, gusta...@kernel.org, tyh...@linux.microsoft.com, rp...@kernel.org, gli...@google.com, m...@kernel.org, ak...@linux-foundation.org, dvy...@google.com
On Wed, 24 Mar 2021 12:05:17 +0800, Lecopzer Chen wrote:
> Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> Acroding to how x86 ported it [1], they early allocated p4d and pgd,
> but in arm64 I just simulate how KAsan supports MODULES_VADDR in arm64
> by not to populate the vmalloc area except for kimg address.
>
> [...]

Applied to arm64 (for-next/kasan-vmalloc), thanks!

[1/5] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
https://git.kernel.org/arm64/c/9a0732efa774
[2/5] arm64: kasan: abstract _text and _end to KERNEL_START/END
https://git.kernel.org/arm64/c/7d7b88ff5f8f
[3/5] arm64: Kconfig: support CONFIG_KASAN_VMALLOC
https://git.kernel.org/arm64/c/71b613fc0c69
[4/5] arm64: kaslr: support randomized module area with KASAN_VMALLOC
https://git.kernel.org/arm64/c/31d02e7ab008
[5/5] arm64: Kconfig: select KASAN_VMALLOC if KANSAN_GENERIC is enabled
https://git.kernel.org/arm64/c/acc3042d62cb

--
Catalin

Will Deacon

Mar 29, 2021, 8:54:57 AM3/29/21
to Lecopzer Chen, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, catalin...@arm.com, ryabin...@gmail.com, gli...@google.com, andre...@gmail.com, dvy...@google.com, ak...@linux-foundation.org, tyh...@linux.microsoft.com, m...@kernel.org, rp...@kernel.org, li...@roeck-us.net, gusta...@kernel.org, yj.c...@mediatek.com
On Wed, Mar 24, 2021 at 12:05:22PM +0800, Lecopzer Chen wrote:
> Before this patch, someone who wants to use VMAP_STACK when
> KASAN_GENERIC enabled must explicitly select KASAN_VMALLOC.
>
> From Will's suggestion [1]:
> > I would _really_ like to move to VMAP stack unconditionally, and
> > that would effectively force KASAN_VMALLOC to be set if KASAN is in use
>
> Because VMAP_STACK now depends on either HW_TAGS or KASAN_VMALLOC if
> KASAN enabled, in order to make VMAP_STACK selected unconditionally,
> we bind KANSAN_GENERIC and KASAN_VMALLOC together.
>
> Note that SW_TAGS supports neither VMAP_STACK nor KASAN_VMALLOC now,
> so this is the first step to make VMAP_STACK selected unconditionally.

Do you know if anybody is working on this? It's really unfortunate that
we can't move exclusively to VMAP_STACK just because of SW_TAGS KASAN.

That said, what is there to do? As things stand, won't kernel stack
addresses end up using KASAN_TAG_KERNEL?

Will

Lecopzer Chen

Mar 30, 2021, 4:14:22 AM3/30/21
to wi...@kernel.org, ak...@linux-foundation.org, andre...@gmail.com, catalin...@arm.com, dvy...@google.com, gli...@google.com, gusta...@kernel.org, kasa...@googlegroups.com, lecopz...@mediatek.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, li...@roeck-us.net, m...@kernel.org, rp...@kernel.org, ryabin...@gmail.com, tyh...@linux.microsoft.com, yj.c...@mediatek.com, leco...@gmail.com
Hi Andrey,

Do you or any KASAN developers have already had any plan for this?



thanks,
Lecopzer

Andrey Konovalov

Mar 30, 2021, 11:41:14 AM3/30/21
to Lecopzer Chen, Will Deacon, Andrew Morton, Catalin Marinas, Dmitry Vyukov, Alexander Potapenko, gusta...@kernel.org, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, LKML, li...@roeck-us.net, m...@kernel.org, rp...@kernel.org, Andrey Ryabinin, tyh...@linux.microsoft.com, yj.c...@mediatek.com, leco...@gmail.com
On Tue, Mar 30, 2021 at 10:14 AM Lecopzer Chen
<lecopz...@mediatek.com> wrote:
>
> > Do you know if anybody is working on this? It's really unfortunate that
> > we can't move exclusively to VMAP_STACK just because of SW_TAGS KASAN.
> >
> > That said, what is there to do? As things stand, won't kernel stack
> > addresses end up using KASAN_TAG_KERNEL?
>
> Hi Andrey,
>
> Do you or any KASAN developers have already had any plan for this?

Hi Will and Lecopzer,

We have an issue open to track this [1], but no immediate plans to work on this.

Now that we have GENERIC vmalloc support for arm64, there's a chance
that SW_TAGS vmalloc will just work once allowed via configs. However,
I would expect that we'll still need to at least add some
kasan_reset_tag() annotations here and there.

Thanks!

[1] https://bugzilla.kernel.org/show_bug.cgi?id=211777