[PATCH v4 2/3] arm64: Support page mapping percpu first chunk allocator


Kefeng Wang

Sep 10, 2021, 1:30:44 AM
to wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, gre...@linuxfoundation.org, kasa...@googlegroups.com, Kefeng Wang
The percpu embedded first chunk allocator is the first choice, but it
can fail on ARM64, e.g.:
"percpu: max_distance=0x5fcfdc640000 too large for vmalloc space 0x781fefff0000"
"percpu: max_distance=0x600000540000 too large for vmalloc space 0x7dffb7ff0000"
"percpu: max_distance=0x5fff9adb0000 too large for vmalloc space 0x5dffb7ff0000"

We then hit "WARNING: CPU: 15 PID: 461 at vmalloc.c:3087 pcpu_get_vm_areas+0x488/0x838",
and the system may even fail to boot.

Let's implement the page mapping percpu first chunk allocator as a fallback
to the embedding allocator to increase the robustness of the system.

Reviewed-by: Catalin Marinas <catalin...@arm.com>
Signed-off-by: Kefeng Wang <wangkef...@huawei.com>
---
arch/arm64/Kconfig | 4 ++
drivers/base/arch_numa.c | 82 +++++++++++++++++++++++++++++++++++-----
2 files changed, 76 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 077f2ec4eeb2..04cfe1b4e98b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1042,6 +1042,10 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
def_bool y
depends on NUMA

+config NEED_PER_CPU_PAGE_FIRST_CHUNK
+ def_bool y
+ depends on NUMA
+
source "kernel/Kconfig.hz"

config ARCH_SPARSEMEM_ENABLE
diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
index 46c503486e96..995dca9f3254 100644
--- a/drivers/base/arch_numa.c
+++ b/drivers/base/arch_numa.c
@@ -14,6 +14,7 @@
#include <linux/of.h>

#include <asm/sections.h>
+#include <asm/pgalloc.h>

struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
EXPORT_SYMBOL(node_data);
@@ -168,22 +169,83 @@ static void __init pcpu_fc_free(void *ptr, size_t size)
memblock_free_early(__pa(ptr), size);
}

+#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
+static void __init pcpu_populate_pte(unsigned long addr)
+{
+ pgd_t *pgd = pgd_offset_k(addr);
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+
+ p4d = p4d_offset(pgd, addr);
+ if (p4d_none(*p4d)) {
+ pud_t *new;
+
+ new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!new)
+ goto err_alloc;
+ p4d_populate(&init_mm, p4d, new);
+ }
+
+ pud = pud_offset(p4d, addr);
+ if (pud_none(*pud)) {
+ pmd_t *new;
+
+ new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!new)
+ goto err_alloc;
+ pud_populate(&init_mm, pud, new);
+ }
+
+ pmd = pmd_offset(pud, addr);
+ if (!pmd_present(*pmd)) {
+ pte_t *new;
+
+ new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!new)
+ goto err_alloc;
+ pmd_populate_kernel(&init_mm, pmd, new);
+ }
+
+ return;
+
+err_alloc:
+ panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
+}
+#endif
+
void __init setup_per_cpu_areas(void)
{
unsigned long delta;
unsigned int cpu;
- int rc;
+ int rc = -EINVAL;
+
+ if (pcpu_chosen_fc != PCPU_FC_PAGE) {
+ /*
+ * Always reserve area for module percpu variables. That's
+ * what the legacy allocator did.
+ */
+ rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
+ PERCPU_DYNAMIC_RESERVE, PAGE_SIZE,
+ pcpu_cpu_distance,
+ pcpu_fc_alloc, pcpu_fc_free);
+#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
+ if (rc < 0)
+ pr_warn("PERCPU: %s allocator failed (%d), falling back to page size\n",
+ pcpu_fc_names[pcpu_chosen_fc], rc);
+#endif
+ }

- /*
- * Always reserve area for module percpu variables. That's
- * what the legacy allocator did.
- */
- rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
- PERCPU_DYNAMIC_RESERVE, PAGE_SIZE,
- pcpu_cpu_distance,
- pcpu_fc_alloc, pcpu_fc_free);
+#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
+ if (rc < 0)
+ rc = pcpu_page_first_chunk(PERCPU_MODULE_RESERVE,
+ pcpu_fc_alloc,
+ pcpu_fc_free,
+ pcpu_populate_pte);
+#endif
if (rc < 0)
- panic("Failed to initialize percpu areas.");
+ panic("Failed to initialize percpu areas (err=%d).", rc);

delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
for_each_possible_cpu(cpu)
--
2.26.2

Kefeng Wang

Sep 10, 2021, 1:30:44 AM
to wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, gre...@linuxfoundation.org, kasa...@googlegroups.com, Kefeng Wang
The percpu embedded first chunk allocator is the first choice, but it
can fail on ARM64, e.g.:
"percpu: max_distance=0x5fcfdc640000 too large for vmalloc space 0x781fefff0000"
"percpu: max_distance=0x600000540000 too large for vmalloc space 0x7dffb7ff0000"
"percpu: max_distance=0x5fff9adb0000 too large for vmalloc space 0x5dffb7ff0000"

We then hit "WARNING: CPU: 15 PID: 461 at vmalloc.c:3087 pcpu_get_vm_areas+0x488/0x838",
and the system may even fail to boot.

Let's implement the page mapping percpu first chunk allocator as a fallback
to the embedding allocator to increase the robustness of the system.

Also fix a crash when both NEED_PER_CPU_PAGE_FIRST_CHUNK and KASAN_VMALLOC are enabled.

Tested on ARM64 qemu with cmdline "percpu_alloc=page" based on v5.14.

v4:
- add ACK/RB tags
- address Catalin's comments on patch 1
- add Greg and Andrew to the Cc list, as suggested by Catalin

v3:
- search for a range that fits instead of always picking the end of the
  vmalloc area, as suggested by Catalin
- use NUMA_NO_NODE to avoid the "virt_to_phys used for non-linear address"
  issue in arm64 kasan_populate_early_vm_area_shadow()
- add Acked-by: Marco Elver <el...@google.com> to patch 3

v2:
- fix build error when CONFIG_KASAN is disabled, found by l...@intel.com
- drop wrong __weak comment from kasan_populate_early_vm_area_shadow(),
  found by Marco Elver <el...@google.com>

Kefeng Wang (3):
vmalloc: Choose a better start address in vm_area_register_early()
arm64: Support page mapping percpu first chunk allocator
kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC

arch/arm64/Kconfig | 4 ++
arch/arm64/mm/kasan_init.c | 16 ++++++++
drivers/base/arch_numa.c | 82 +++++++++++++++++++++++++++++++++-----
include/linux/kasan.h | 6 +++
mm/kasan/init.c | 5 +++
mm/vmalloc.c | 19 ++++++---
6 files changed, 116 insertions(+), 16 deletions(-)

--
2.26.2

Kefeng Wang

Sep 10, 2021, 1:30:45 AM
to wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, gre...@linuxfoundation.org, kasa...@googlegroups.com, Kefeng Wang
With KASAN_VMALLOC and NEED_PER_CPU_PAGE_FIRST_CHUNK enabled, it crashes:

Unable to handle kernel paging request at virtual address ffff7000028f2000
...
swapper pgtable: 64k pages, 48-bit VAs, pgdp=0000000042440000
[ffff7000028f2000] pgd=000000063e7c0003, p4d=000000063e7c0003, pud=000000063e7c0003, pmd=000000063e7b0003, pte=0000000000000000
Internal error: Oops: 96000007 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 5.13.0-rc4-00003-gc6e6e28f3f30-dirty #62
Hardware name: linux,dummy-virt (DT)
pstate: 200000c5 (nzCv daIF -PAN -UAO -TCO BTYPE=--)
pc : kasan_check_range+0x90/0x1a0
lr : memcpy+0x88/0xf4
sp : ffff80001378fe20
...
Call trace:
kasan_check_range+0x90/0x1a0
pcpu_page_first_chunk+0x3f0/0x568
setup_per_cpu_areas+0xb8/0x184
start_kernel+0x8c/0x328

The vm area used in vm_area_register_early() has no KASAN shadow memory.
Let's add a new kasan_populate_early_vm_area_shadow() function to populate
the vm area's shadow memory and fix the issue.
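
For context, generic KASAN tracks each 8-byte granule of memory with one
shadow byte, so the hook only has to map a small, page-aligned shadow range.
A rough illustration of the address math (the helper name and the offset
parameter are made up for illustration; the real kernel uses
kasan_mem_to_shadow() with a configuration-specific offset):

	/*
	 * Rough sketch only: generic KASAN maps each 8-byte granule of
	 * memory to one shadow byte. shadow_offset stands in for the
	 * configuration-specific KASAN_SHADOW_OFFSET.
	 */
	static inline unsigned long shadow_addr_sketch(unsigned long addr,
						       unsigned long shadow_offset)
	{
		return (addr >> 3) + shadow_offset;	/* 3 == log2(8) */
	}

The new hook then rounds shadow_start down and shadow_end up to page
boundaries and maps that range with kasan_map_populate().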

Acked-by: Marco Elver <el...@google.com> (for KASAN parts)
Acked-by: Andrey Konovalov <andre...@gmail.com> (for KASAN parts)
Signed-off-by: Kefeng Wang <wangkef...@huawei.com>
---
arch/arm64/mm/kasan_init.c | 16 ++++++++++++++++
include/linux/kasan.h | 6 ++++++
mm/kasan/init.c | 5 +++++
mm/vmalloc.c | 1 +
4 files changed, 28 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 61b52a92b8b6..5b996ca4d996 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -287,6 +287,22 @@ static void __init kasan_init_depth(void)
init_task.kasan_depth = 0;
}

+#ifdef CONFIG_KASAN_VMALLOC
+void __init kasan_populate_early_vm_area_shadow(void *start, unsigned long size)
+{
+ unsigned long shadow_start, shadow_end;
+
+ if (!is_vmalloc_or_module_addr(start))
+ return;
+
+ shadow_start = (unsigned long)kasan_mem_to_shadow(start);
+ shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
+ shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
+ shadow_end = ALIGN(shadow_end, PAGE_SIZE);
+ kasan_map_populate(shadow_start, shadow_end, NUMA_NO_NODE);
+}
+#endif
+
void __init kasan_init(void)
{
kasan_init_shadow();
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index dd874a1ee862..859f1e724ee1 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -434,6 +434,8 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

+void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+
#else /* CONFIG_KASAN_VMALLOC */

static inline int kasan_populate_vmalloc(unsigned long start,
@@ -451,6 +453,10 @@ static inline void kasan_release_vmalloc(unsigned long start,
unsigned long free_region_start,
unsigned long free_region_end) {}

+static inline void kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size)
+{ }
+
#endif /* CONFIG_KASAN_VMALLOC */

#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index cc64ed6858c6..d39577d088a1 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
return 0;
}

+void __init __weak kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size)
+{
+}
+
static void kasan_free_pte(pte_t *pte_start, pmd_t *pmd)
{
pte_t *pte;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5ee3cbeffa26..4cb494447910 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2287,6 +2287,7 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align)
vm->addr = (void *)addr;
vm->next = *p;
*p = vm;
+ kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
}

static void vmap_init_free_space(void)
--
2.26.2

Kefeng Wang

Sep 15, 2021, 4:33:14 AM
to wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, gre...@linuxfoundation.org, kasa...@googlegroups.com
Hi Greg and Andrew, as Catalin said, the series touches drivers/ and mm/
but is missing acks from both of you. Could you take a look at this
patchset (patch 1 changes mm/vmalloc.c and patch 2 changes
drivers/base/arch_numa.c)?

And Catalin, are there any other comments? I hope this could be merged
for the next version.

Many thanks to all of you.

Greg KH

Sep 16, 2021, 11:41:42 AM
to Kefeng Wang, wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, kasa...@googlegroups.com
On Wed, Sep 15, 2021 at 04:33:09PM +0800, Kefeng Wang wrote:
> Hi Greg and Andrew, as Catalin said, the series touches drivers/ and mm/
> but is missing acks from both of you. Could you take a look at this
> patchset (patch 1 changes mm/vmalloc.c

What patchset?

> and patch 2 changes drivers/base/arch_numa.c)?

that file is not really owned by anyone it seems :(

Can you provide a link to the real patch please?

thanks,

greg k-h

Kefeng Wang

Sep 16, 2021, 9:11:42 PM
to Greg KH, wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, kasa...@googlegroups.com


On 2021/9/16 23:41, Greg KH wrote:
> On Wed, Sep 15, 2021 at 04:33:09PM +0800, Kefeng Wang wrote:
> > Hi Greg and Andrew, as Catalin said, the series touches drivers/ and mm/
> > but is missing acks from both of you. Could you take a look at this
> > patchset (patch 1 changes mm/vmalloc.c
> What patchset?
>
> that file is not really owned by anyone it seems :(
>
> Can you provide a link to the real patch please?

Yes, arch_numa.c was moved into drivers/base to support RISC-V NUMA; it is
shared by arm64 and riscv. My change (patch 2) only enables
NEED_PER_CPU_PAGE_FIRST_CHUNK on ARM64.

Here is the link:

https://lore.kernel.org/linux-arm-kernel/20210910053354.2672...@huawei.com/

Thanks.


> thanks,
>
> greg k-h
> .

Greg KH

Sep 17, 2021, 2:24:26 AM
to Kefeng Wang, wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, kasa...@googlegroups.com
Why is this a config option at all?

> +
> source "kernel/Kconfig.hz"
>
> config ARCH_SPARSEMEM_ENABLE
> diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
> index 46c503486e96..995dca9f3254 100644
> --- a/drivers/base/arch_numa.c
> +++ b/drivers/base/arch_numa.c
> @@ -14,6 +14,7 @@
> #include <linux/of.h>
>
> #include <asm/sections.h>
> +#include <asm/pgalloc.h>
>
> struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
> EXPORT_SYMBOL(node_data);
> @@ -168,22 +169,83 @@ static void __init pcpu_fc_free(void *ptr, size_t size)
> memblock_free_early(__pa(ptr), size);
> }
>
> +#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK

Ick, no #ifdef in .c files if at all possible please.
That feels harsh, are you sure you want to crash? There's no way to
recover from this? If not, how can this fail in real life?

> +}
> +#endif
> +
> void __init setup_per_cpu_areas(void)
> {
> unsigned long delta;
> unsigned int cpu;
> - int rc;
> + int rc = -EINVAL;
> +
> + if (pcpu_chosen_fc != PCPU_FC_PAGE) {
> + /*
> + * Always reserve area for module percpu variables. That's
> + * what the legacy allocator did.
> + */
> + rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
> + PERCPU_DYNAMIC_RESERVE, PAGE_SIZE,
> + pcpu_cpu_distance,
> + pcpu_fc_alloc, pcpu_fc_free);
> +#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
> + if (rc < 0)
> + pr_warn("PERCPU: %s allocator failed (%d), falling back to page size\n",
> + pcpu_fc_names[pcpu_chosen_fc], rc);
> +#endif

Why only print out a message for a config option? Again, no #ifdef in
.c files if at all possible.
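
For illustration only (an untested sketch, not taken from the posted series;
the wrapper function name is made up): IS_ENABLED() would keep the selection
in plain C and still let the compiler drop the unused path when the option is
off, along these lines:

	/* Untested sketch: same logic as the patch, but using IS_ENABLED()
	 * instead of #ifdef blocks. It assumes the pcpu_page_first_chunk()
	 * declaration is visible and that pcpu_populate_pte() is available
	 * (or stubbed) even when the option is disabled. */
	static int __init pcpu_first_chunk_sketch(void)
	{
		int rc = -EINVAL;

		if (pcpu_chosen_fc != PCPU_FC_PAGE)
			rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
						    PERCPU_DYNAMIC_RESERVE, PAGE_SIZE,
						    pcpu_cpu_distance,
						    pcpu_fc_alloc, pcpu_fc_free);

		if (rc < 0 && IS_ENABLED(CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK)) {
			pr_warn("PERCPU: %s allocator failed (%d), falling back to page size\n",
				pcpu_fc_names[pcpu_chosen_fc], rc);
			rc = pcpu_page_first_chunk(PERCPU_MODULE_RESERVE,
						   pcpu_fc_alloc, pcpu_fc_free,
						   pcpu_populate_pte);
		}

		return rc;
	}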

thanks,

greg k-h

Greg KH

Sep 17, 2021, 2:24:52 AM
to Kefeng Wang, wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, kasa...@googlegroups.com
On Fri, Sep 17, 2021 at 09:11:38AM +0800, Kefeng Wang wrote:
>
> On 2021/9/16 23:41, Greg KH wrote:
> > On Wed, Sep 15, 2021 at 04:33:09PM +0800, Kefeng Wang wrote:
> > > Hi Greg and Andrew, as Catalin said, the series touches drivers/ and mm/
> > > but is missing acks from both of you. Could you take a look at this
> > > patchset (patch 1 changes mm/vmalloc.c
> > What patchset?
>
> [PATCH v4 1/3] vmalloc: Choose a better start address in
> vm_area_register_early() <https://lore.kernel.org/linux-arm-kernel/20210910053354.2672...@huawei.com/>
> [PATCH v4 2/3] arm64: Support page mapping percpu first chunk allocator <https://lore.kernel.org/linux-arm-kernel/20210910053354.2672...@huawei.com/>
> [PATCH v4 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with
> KASAN_VMALLOC <https://lore.kernel.org/linux-arm-kernel/20210910053354.2672...@huawei.com/>
> [PATCH v4 0/3] arm64: support page mapping percpu first chunk allocator <https://lore.kernel.org/linux-arm-kernel/c06faf6c-3d21-04f2...@huawei.com/>
>
> > > and patch 2 changes drivers/base/arch_numa.c)?
> patch 2:
>
> [PATCH v4 2/3] arm64: Support page mapping percpu first chunk allocator <https://lore.kernel.org/linux-arm-kernel/20210910053354.2672...@huawei.com/#r>
>
> > that file is not really owned by anyone it seems :(
> >
> > Can you provide a link to the real patch please?
>
> Yes, arch_numa.c was moved into drivers/base to support RISC-V NUMA; it is
> shared by arm64 and riscv. My change (patch 2) only enables
> NEED_PER_CPU_PAGE_FIRST_CHUNK on ARM64.
>
> Here is the link:
>
> https://lore.kernel.org/linux-arm-kernel/20210910053354.2672...@huawei.com/

Now reviewed.

Kefeng Wang

Sep 17, 2021, 2:55:22 AM
to Greg KH, wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, kasa...@googlegroups.com
The config was introduced by

commit 08fc45806103e59a37418e84719b878f9bb32540
Author: Tejun Heo <t...@kernel.org>
Date:   Fri Aug 14 15:00:49 2009 +0900

    percpu: build first chunk allocators selectively

    There's no need to build unused first chunk allocators in. Define
    CONFIG_NEED_PER_CPU_*_FIRST_CHUNK and let archs enable them
    selectively.

For now, three architectures support both PER_CPU_EMBED_FIRST_CHUNK
and PER_CPU_PAGE_FIRST_CHUNK:

  arch/powerpc/Kconfig:config NEED_PER_CPU_PAGE_FIRST_CHUNK
  arch/sparc/Kconfig:config NEED_PER_CPU_PAGE_FIRST_CHUNK
  arch/x86/Kconfig:config NEED_PER_CPU_PAGE_FIRST_CHUNK

and we have a cmdline option to choose an allocator:

   percpu_alloc=   Select which percpu first chunk allocator to use.
                   Currently supported values are "embed" and "page".
                   Archs may support subset or none of the selections.
                   See comments in mm/percpu.c for details on each
                   allocator.  This parameter is primarily for debugging
                   and performance comparison.

The embed percpu first chunk allocator is the first choice, but it can
fail for some memory layouts (it does occur on ARM64 too), so the page
mapping percpu first chunk allocator serves as a fallback; that is what
this patch does.
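
Roughly, mm/percpu.c records the percpu_alloc= choice in pcpu_chosen_fc via
an early_param hook, and setup_per_cpu_areas() then consults it. A simplified,
paraphrased sketch of that hook (not a verbatim copy of mm/percpu.c; the
function name and structure are approximate):

	/* Paraphrased sketch of the percpu_alloc= handling: record the chosen
	 * first chunk allocator so setup_per_cpu_areas() can honour it. */
	static int __init percpu_alloc_param(char *str)
	{
		if (!str)
			return -EINVAL;

		if (IS_ENABLED(CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK) &&
		    !strcmp(str, "embed"))
			pcpu_chosen_fc = PCPU_FC_EMBED;
		else if (IS_ENABLED(CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK) &&
			 !strcmp(str, "page"))
			pcpu_chosen_fc = PCPU_FC_PAGE;
		else
			pr_warn("percpu_alloc: unknown allocator %s specified\n", str);

		return 0;
	}
	early_param("percpu_alloc", percpu_alloc_param);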

>
>> +
>> source "kernel/Kconfig.hz"
>>
>> config ARCH_SPARSEMEM_ENABLE
>> diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
>> index 46c503486e96..995dca9f3254 100644
>> --- a/drivers/base/arch_numa.c
>> +++ b/drivers/base/arch_numa.c
>> @@ -14,6 +14,7 @@
>> #include <linux/of.h>
>>
>> #include <asm/sections.h>
>> +#include <asm/pgalloc.h>
>>
>> struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
>> EXPORT_SYMBOL(node_data);
>> @@ -168,22 +169,83 @@ static void __init pcpu_fc_free(void *ptr, size_t size)
>> memblock_free_early(__pa(ptr), size);
>> }
>>
>> +#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
> Ick, no #ifdef in .c files if at all possible please.

drivers/base/arch_numa.c is shared by RISC-V and arm64, so I added this
config so that this part does not need to be built on RISC-V.

Yes, if there is no memory the system won't work; panic is the only choice.

Same reason as above.

Thanks for your review.

>
> thanks,
>
> greg k-h
> .
>

Greg KH

Sep 17, 2021, 3:04:09 AM
to Kefeng Wang, wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, kasa...@googlegroups.com
Ok, then you need to get reviews from the mm people as I know nothing
about this at all, sorry. This file ended up in drivers/base/ for some
reason to make it easier for others to use cross-arches, not that it had
much to do with the driver core :(

thanks,

greg k-h

Kefeng Wang

Sep 17, 2021, 3:24:37 AM
to Greg KH, wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, kasa...@googlegroups.com
Ok, I have Cc'ed Andrew and the mm list ;)

Hi Catalin and Will, this patchset mostly changes arm64, and the change
itself is not too big. Could you pick it up via the arm64 tree if there
are no more comments? Many thanks.

>
> thanks,
>
> greg k-h
> .
>

Kefeng Wang

Sep 28, 2021, 3:49:43 AM
to wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, gre...@linuxfoundation.org, kasa...@googlegroups.com
Hi Catalin and Andrew, kindly ping again, any comments, thanks.

On 2021/9/10 13:33, Kefeng Wang wrote:

Kefeng Wang

Oct 8, 2021, 9:33:23 AM
to wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, gre...@linuxfoundation.org, kasa...@googlegroups.com


On 2021/9/28 15:48, Kefeng Wang wrote:
> Hi Catalin and Andrew, kindly ping again, any comments, thanks.

It looks like there are no more comments. Catalin and Andrew, ping
again: could one of you merge this patchset? Many thanks.

Andrew Morton

Oct 10, 2021, 5:36:54 PM
to Kefeng Wang, wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, gre...@linuxfoundation.org, kasa...@googlegroups.com
On Fri, 10 Sep 2021 13:33:51 +0800 Kefeng Wang <wangkef...@huawei.com> wrote:

> The percpu embedded first chunk allocator is the first choice, but it
> can fail on ARM64, e.g.:
> "percpu: max_distance=0x5fcfdc640000 too large for vmalloc space 0x781fefff0000"
> "percpu: max_distance=0x600000540000 too large for vmalloc space 0x7dffb7ff0000"
> "percpu: max_distance=0x5fff9adb0000 too large for vmalloc space 0x5dffb7ff0000"
>
> We then hit "WARNING: CPU: 15 PID: 461 at vmalloc.c:3087 pcpu_get_vm_areas+0x488/0x838",
> and the system may even fail to boot.
>
> Let's implement the page mapping percpu first chunk allocator as a fallback
> to the embedding allocator to increase the robustness of the system.
>
> Also fix a crash when both NEED_PER_CPU_PAGE_FIRST_CHUNK and KASAN_VMALLOC are enabled.

How serious are these problems in real-world situations? Do people
feel that a -stable backport is needed, or is a 5.16-rc1 merge
sufficient?

Kefeng Wang

Oct 10, 2021, 9:10:08 PM
to Andrew Morton, wi...@kernel.org, catalin...@arm.com, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, gre...@linuxfoundation.org, kasa...@googlegroups.com
> .
Thanks, Andrew.

A specific memory layout is required (along with KASAN enabled); we hit
this issue on both qemu and real hardware, and only because KASAN was
enabled, so I think a 5.16-rc1 merge is sufficient.



Catalin Marinas

Oct 12, 2021, 2:17:18 PM
to Kefeng Wang, wi...@kernel.org, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, gre...@linuxfoundation.org, kasa...@googlegroups.com
It looks like I only acked patch 2 previously, so here it is:

Acked-by: Catalin Marinas <catalin...@arm.com>

Kefeng Wang

Oct 12, 2021, 9:09:29 PM
to Catalin Marinas, wi...@kernel.org, ryabin...@gmail.com, andre...@gmail.com, dvy...@google.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, linu...@kvack.org, el...@google.com, ak...@linux-foundation.org, gre...@linuxfoundation.org, kasa...@googlegroups.com
Many thanks, Catalin :)

> .
>