[PATCH 0/8] memblock: improve late freeing of reserved memory


Mike Rapoport

Mar 18, 2026, 6:58:45 AM
to Andrew Morton, Alexander Potapenko, Alexander Viro, Andreas Larsson, Ard Biesheuvel, Borislav Petkov, Brendan Jackman, Christophe Leroy (CS GROUP), Catalin Marinas, Christian Brauner, David S. Miller, Dave Hansen, David Hildenbrand, Dmitry Vyukov, Ilias Apalodimas, Ingo Molnar, Jan Kara, Johannes Weiner, Liam R. Howlett, Lorenzo Stoakes, Madhavan Srinivasan, Marco Elver, Marek Szyprowski, Masami Hiramatsu, Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin, H. Peter Anvin, Rob Herring, Robin Murphy, Saravana Kannan, Suren Baghdasaryan, Thomas Gleixner, Vlastimil Babka, Will Deacon, Zi Yan, devic...@vger.kernel.org, io...@lists.linux.dev, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linu...@vger.kernel.org, linux-...@vger.kernel.org, linux-...@vger.kernel.org, linu...@kvack.org, linux-tra...@vger.kernel.org, linuxp...@lists.ozlabs.org, sparc...@vger.kernel.org, x...@kernel.org
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

Hi,

Following a recent discussion about leaks in x86 EFI [1], I audited the usage of
memblock_free_late() and free_reserved_area() and made some improvements to how
we handle late freeing of memory allocated with memblock.

[1] https://lore.kernel.org/all/ec2aaef14783869b3be6e3c...@kernel.crashing.org/

Mike Rapoport (Microsoft) (8):
powerpc: fadump: pair alloc_pages_exact() with free_pages_exact()
powerpc: opal-core: pair alloc_pages_exact() with free_pages_exact()
mm: move free_reserved_area() to mm/memblock.c
memblock: make free_reserved_area() more robust
memblock: extract page freeing from free_reserved_area() into a helper
memblock: make free_reserved_area() update memblock if ARCH_KEEP_MEMBLOCK=y
memblock, treewide: make memblock_free() handle late freeing
memblock: warn when freeing reserved memory before memory map is
initialized

arch/arm64/mm/init.c | 3 -
arch/powerpc/kernel/fadump.c | 16 +--
arch/powerpc/platforms/powernv/opal-core.c | 9 +-
arch/sparc/kernel/mdesc.c | 4 +-
arch/x86/kernel/setup.c | 2 +-
arch/x86/platform/efi/memmap.c | 5 +-
arch/x86/platform/efi/quirks.c | 2 +-
drivers/firmware/efi/apple-properties.c | 2 +-
drivers/of/kexec.c | 2 +-
include/linux/memblock.h | 2 -
init/initramfs.c | 7 --
kernel/dma/swiotlb.c | 6 +-
lib/bootconfig.c | 2 +-
mm/internal.h | 10 ++
mm/kfence/core.c | 4 +-
mm/memblock.c | 110 ++++++++++++++-------
mm/page_alloc.c | 46 ---------
17 files changed, 102 insertions(+), 130 deletions(-)


base-commit: 1f318b96cc84d7c2ab792fcc0bfd42a7ca890681
--
2.51.0

Mike Rapoport

Mar 18, 2026, 6:58:56 AM
[PATCH 1/8] powerpc: fadump: pair alloc_pages_exact() with free_pages_exact()
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

fadump allocates buffers with alloc_pages_exact(), but then marks them
as reserved and frees them with free_reserved_area().

This is completely unnecessary and the pages allocated with
alloc_pages_exact() can be naturally freed with free_pages_exact().

Replace freeing of memory in fadump_free_buffer() with
free_pages_exact() and simplify allocation code so that it won't mark
allocated pages as reserved.

Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>
---
arch/powerpc/kernel/fadump.c | 16 ++--------------
1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 4ebc333dd786..501d43bf18f3 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -775,24 +775,12 @@ void __init fadump_update_elfcore_header(char *bufp)

static void *__init fadump_alloc_buffer(unsigned long size)
{
- unsigned long count, i;
- struct page *page;
- void *vaddr;
-
- vaddr = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
- if (!vaddr)
- return NULL;
-
- count = PAGE_ALIGN(size) / PAGE_SIZE;
- page = virt_to_page(vaddr);
- for (i = 0; i < count; i++)
- mark_page_reserved(page + i);
- return vaddr;
+ return alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
}

static void fadump_free_buffer(unsigned long vaddr, unsigned long size)
{
- free_reserved_area((void *)vaddr, (void *)(vaddr + size), -1, NULL);
+ free_pages_exact((void *)vaddr, size);
}

s32 __init fadump_setup_cpu_notes_buf(u32 num_cpus)
--
2.51.0

Mike Rapoport

Mar 18, 2026, 6:59:09 AM
[PATCH 2/8] powerpc: opal-core: pair alloc_pages_exact() with free_pages_exact()
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

opal-core allocates buffers with alloc_pages_exact(), but then
marks them as reserved and frees them with free_reserved_area().

This is completely unnecessary and the pages allocated with
alloc_pages_exact() can be naturally freed with free_pages_exact().

Replace freeing of memory in opalcore_cleanup() with
free_pages_exact() and simplify allocation code so that it won't mark
allocated pages as reserved.

Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>
---
arch/powerpc/platforms/powernv/opal-core.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/opal-core.c b/arch/powerpc/platforms/powernv/opal-core.c
index e76e462f55f6..abd99ddbf21f 100644
--- a/arch/powerpc/platforms/powernv/opal-core.c
+++ b/arch/powerpc/platforms/powernv/opal-core.c
@@ -303,7 +303,6 @@ static int __init create_opalcore(void)
struct device_node *dn;
struct opalcore *new;
loff_t opalcore_off;
- struct page *page;
Elf64_Phdr *phdr;
Elf64_Ehdr *elf;
int i, ret;
@@ -329,9 +328,6 @@ static int __init create_opalcore(void)
return -ENOMEM;
}
count = oc_conf->opalcorebuf_sz / PAGE_SIZE;
- page = virt_to_page(oc_conf->opalcorebuf);
- for (i = 0; i < count; i++)
- mark_page_reserved(page + i);

pr_debug("opalcorebuf = 0x%llx\n", (u64)oc_conf->opalcorebuf);

@@ -437,10 +433,7 @@ static void opalcore_cleanup(void)

/* free the buffer used for setting up OPAL core */
if (oc_conf->opalcorebuf) {
- void *end = (void *)((u64)oc_conf->opalcorebuf +
- oc_conf->opalcorebuf_sz);
-
- free_reserved_area(oc_conf->opalcorebuf, end, -1, NULL);
+ free_pages_exact(oc_conf->opalcorebuf, oc_conf->opalcorebuf_sz);
oc_conf->opalcorebuf = NULL;
oc_conf->opalcorebuf_sz = 0;
}
--
2.51.0

Mike Rapoport

Mar 18, 2026, 6:59:20 AM
[PATCH 3/8] mm: move free_reserved_area() to mm/memblock.c
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

free_reserved_area() is related to memblock as it frees reserved memory
back to the buddy allocator, similar to what memblock_free_late() does.

Move free_reserved_area() to mm/memblock.c to prepare for further
consolidation of the functions that free reserved memory.

No functional changes.

Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>
---
mm/memblock.c | 37 ++++++++++++++++++++++++++++++++++++-
mm/page_alloc.c | 36 ------------------------------------
2 files changed, 36 insertions(+), 37 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..8f3010dddc58 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -893,6 +893,42 @@ int __init_memblock memblock_remove(phys_addr_t base, phys_addr_t size)
return memblock_remove_range(&memblock.memory, base, size);
}

+unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
+{
+ void *pos;
+ unsigned long pages = 0;
+
+ start = (void *)PAGE_ALIGN((unsigned long)start);
+ end = (void *)((unsigned long)end & PAGE_MASK);
+ for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
+ struct page *page = virt_to_page(pos);
+ void *direct_map_addr;
+
+ /*
+ * 'direct_map_addr' might be different from 'pos'
+ * because some architectures' virt_to_page()
+ * work with aliases. Getting the direct map
+ * address ensures that we get a _writeable_
+ * alias for the memset().
+ */
+ direct_map_addr = page_address(page);
+ /*
+ * Perform a kasan-unchecked memset() since this memory
+ * has not been initialized.
+ */
+ direct_map_addr = kasan_reset_tag(direct_map_addr);
+ if ((unsigned int)poison <= 0xFF)
+ memset(direct_map_addr, poison, PAGE_SIZE);
+
+ free_reserved_page(page);
+ }
+
+ if (pages && s)
+ pr_info("Freeing %s memory: %ldK\n", s, K(pages));
+
+ return pages;
+}
+
/**
* memblock_free - free boot memory allocation
* @ptr: starting address of the boot memory allocation
@@ -1776,7 +1812,6 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
totalram_pages_inc();
}
}
-
/*
* Remaining API functions
*/
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d4b6f1a554e..df3d61253001 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6234,42 +6234,6 @@ void adjust_managed_page_count(struct page *page, long count)
}
EXPORT_SYMBOL(adjust_managed_page_count);

-unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
-{
- void *pos;
- unsigned long pages = 0;
-
- start = (void *)PAGE_ALIGN((unsigned long)start);
- end = (void *)((unsigned long)end & PAGE_MASK);
- for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
- struct page *page = virt_to_page(pos);
- void *direct_map_addr;
-
- /*
- * 'direct_map_addr' might be different from 'pos'
- * because some architectures' virt_to_page()
- * work with aliases. Getting the direct map
- * address ensures that we get a _writeable_
- * alias for the memset().
- */
- direct_map_addr = page_address(page);
- /*
- * Perform a kasan-unchecked memset() since this memory
- * has not been initialized.
- */
- direct_map_addr = kasan_reset_tag(direct_map_addr);
- if ((unsigned int)poison <= 0xFF)
- memset(direct_map_addr, poison, PAGE_SIZE);
-
- free_reserved_page(page);
- }
-
- if (pages && s)
- pr_info("Freeing %s memory: %ldK\n", s, K(pages));
-
- return pages;
-}
-
void free_reserved_page(struct page *page)
{
clear_page_tag_ref(page);
--
2.51.0

Mike Rapoport

Mar 18, 2026, 6:59:32 AM
[PATCH 4/8] memblock: make free_reserved_area() more robust
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

There are two potential problems in free_reserved_area():
* it may free a page that has no buddy page
* it may be passed a virtual address from an alias mapping that won't
be properly translated by virt_to_page(), for example a symbol on arm64

While the first issue is quite theoretical and the second does not manifest
itself because all the callers currently do the right thing, it is easy to make
free_reserved_area() robust enough to avoid both.

Replace the loop over virtual addresses with a loop over pfns that uses
for_each_valid_pfn(), and use __pa() or __pa_symbol(), depending on the
virtual mapping alias, to correctly determine the loop boundaries.

Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>
---
mm/memblock.c | 34 +++++++++++++++++++++++-----------
1 file changed, 23 insertions(+), 11 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 8f3010dddc58..27d4c9889b59 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -895,21 +895,32 @@ int __init_memblock memblock_remove(phys_addr_t base, phys_addr_t size)

unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
{
- void *pos;
- unsigned long pages = 0;
+ phys_addr_t start_pa, end_pa;
+ unsigned long pages = 0, pfn;

- start = (void *)PAGE_ALIGN((unsigned long)start);
- end = (void *)((unsigned long)end & PAGE_MASK);
- for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
- struct page *page = virt_to_page(pos);
+ /*
+ * end is the first address past the region and it may be beyond what
+ * __pa() or __pa_symbol() can handle.
+ * Use the address included in the range for the conversion and add back
+ * 1 afterwards.
+ */
+ if (__is_kernel((unsigned long)start)) {
+ start_pa = __pa_symbol(start);
+ end_pa = __pa_symbol(end - 1) + 1;
+ } else {
+ start_pa = __pa(start);
+ end_pa = __pa(end - 1) + 1;
+ }
+
+ for_each_valid_pfn(pfn, PFN_UP(start_pa), PFN_DOWN(end_pa)) {
+ struct page *page = pfn_to_page(pfn);
void *direct_map_addr;

/*
- * 'direct_map_addr' might be different from 'pos'
- * because some architectures' virt_to_page()
- * work with aliases. Getting the direct map
- * address ensures that we get a _writeable_
- * alias for the memset().
+ * 'direct_map_addr' might be different from the kernel virtual
+ * address because some architectures use aliases.
+ * Going via physical address, pfn_to_page() and page_address()
+ * ensures that we get a _writeable_ alias for the memset().
*/
direct_map_addr = page_address(page);
/*
@@ -921,6 +932,7 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
memset(direct_map_addr, poison, PAGE_SIZE);

free_reserved_page(page);
+ pages++;
}

if (pages && s)
--
2.51.0

Mike Rapoport

Mar 18, 2026, 6:59:38 AM
[PATCH 5/8] memblock: extract page freeing from free_reserved_area() into a helper
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

There are two functions that release pages to the buddy allocator late in
the boot: free_reserved_area() and memblock_free_late().

Currently they use different underlying functionality:
free_reserved_area() runs each page being freed through
free_reserved_page(), while memblock_free_late() uses
memblock_free_pages() -> __free_pages_core(). In the end, both boil down
to a loop that frees a range page by page.

Extract the loop that frees pages from free_reserved_area() into a helper
and use that helper in memblock_free_late().

Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>
---
mm/memblock.c | 55 +++++++++++++++++++++++++++------------------------
1 file changed, 29 insertions(+), 26 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 27d4c9889b59..87bd200a8cc9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -893,26 +893,12 @@ int __init_memblock memblock_remove(phys_addr_t base, phys_addr_t size)
return memblock_remove_range(&memblock.memory, base, size);
}

-unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
+static unsigned long __free_reserved_area(phys_addr_t start, phys_addr_t end,
+ int poison)
{
- phys_addr_t start_pa, end_pa;
unsigned long pages = 0, pfn;

- /*
- * end is the first address past the region and it may be beyond what
- * __pa() or __pa_symbol() can handle.
- * Use the address included in the range for the conversion and add back
- * 1 afterwards.
- */
- if (__is_kernel((unsigned long)start)) {
- start_pa = __pa_symbol(start);
- end_pa = __pa_symbol(end - 1) + 1;
- } else {
- start_pa = __pa(start);
- end_pa = __pa(end - 1) + 1;
- }
-
- for_each_valid_pfn(pfn, PFN_UP(start_pa), PFN_DOWN(end_pa)) {
+ for_each_valid_pfn(pfn, PFN_UP(start), PFN_DOWN(end)) {
struct page *page = pfn_to_page(pfn);
void *direct_map_addr;

@@ -934,7 +920,29 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
free_reserved_page(page);
pages++;
}
+ return pages;
+}
+
+unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
+{
+ phys_addr_t start_pa, end_pa;
+ unsigned long pages;
+
+ /*
+ * end is the first address past the region and it may be beyond what
+ * __pa() or __pa_symbol() can handle.
+ * Use the address included in the range for the conversion and add back
+ * 1 afterwards.
+ */
+ if (__is_kernel((unsigned long)start)) {
+ start_pa = __pa_symbol(start);
+ end_pa = __pa_symbol(end - 1) + 1;
+ } else {
+ start_pa = __pa(start);
+ end_pa = __pa(end - 1) + 1;
+ }

+ pages = __free_reserved_area(start_pa, end_pa, poison);
if (pages && s)
pr_info("Freeing %s memory: %ldK\n", s, K(pages));

@@ -1810,20 +1818,15 @@ void *__init __memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align,
*/
void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
{
- phys_addr_t cursor, end;
+ phys_addr_t end = base + size - 1;

- end = base + size - 1;
memblock_dbg("%s: [%pa-%pa] %pS\n",
__func__, &base, &end, (void *)_RET_IP_);
- kmemleak_free_part_phys(base, size);
- cursor = PFN_UP(base);
- end = PFN_DOWN(base + size);

- for (; cursor < end; cursor++) {
- memblock_free_pages(cursor, 0);
- totalram_pages_inc();
- }
+ kmemleak_free_part_phys(base, size);
+ __free_reserved_area(base, base + size, -1);
}
+
/*
* Remaining API functions
*/
--
2.51.0

Mike Rapoport

Mar 18, 2026, 6:59:49 AM
[PATCH 6/8] memblock: make free_reserved_area() update memblock if ARCH_KEEP_MEMBLOCK=y
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

On architectures that keep memblock after boot, freeing of reserved memory
with free_reserved_area() is paired with an update of memblock arrays,
usually by a call to memblock_free().

Make free_reserved_area() directly update memblock.reserved when
ARCH_KEEP_MEMBLOCK is enabled.

Remove the now-redundant explicit memblock_free() call from
arm64::free_initmem() and the #ifdef CONFIG_ARCH_KEEP_MEMBLOCK block
from the generic free_initrd_mem().

Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>
---
arch/arm64/mm/init.c | 3 ---
init/initramfs.c | 7 -------
mm/memblock.c | 6 ++++++
3 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 96711b8578fd..07b17c708702 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -385,9 +385,6 @@ void free_initmem(void)
WARN_ON(!IS_ALIGNED((unsigned long)lm_init_begin, PAGE_SIZE));
WARN_ON(!IS_ALIGNED((unsigned long)lm_init_end, PAGE_SIZE));

- /* Delete __init region from memblock.reserved. */
- memblock_free(lm_init_begin, lm_init_end - lm_init_begin);
-
free_reserved_area(lm_init_begin, lm_init_end,
POISON_FREE_INITMEM, "unused kernel");
/*
diff --git a/init/initramfs.c b/init/initramfs.c
index 139baed06589..bca0922b2850 100644
--- a/init/initramfs.c
+++ b/init/initramfs.c
@@ -652,13 +652,6 @@ void __init reserve_initrd_mem(void)

void __weak __init free_initrd_mem(unsigned long start, unsigned long end)
{
-#ifdef CONFIG_ARCH_KEEP_MEMBLOCK
- unsigned long aligned_start = ALIGN_DOWN(start, PAGE_SIZE);
- unsigned long aligned_end = ALIGN(end, PAGE_SIZE);
-
- memblock_free((void *)aligned_start, aligned_end - aligned_start);
-#endif
-
free_reserved_area((void *)start, (void *)end, POISON_FREE_INITMEM,
"initrd");
}
diff --git a/mm/memblock.c b/mm/memblock.c
index 87bd200a8cc9..9f372a8e82f7 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -942,6 +942,12 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
end_pa = __pa(end - 1) + 1;
}

+ if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
+ if (start_pa < end_pa)
+ memblock_remove_range(&memblock.reserved,
+ start_pa, end_pa - start_pa);
+ }
+
pages = __free_reserved_area(start_pa, end_pa, poison);
if (pages && s)
pr_info("Freeing %s memory: %ldK\n", s, K(pages));
--
2.51.0

Mike Rapoport

Mar 18, 2026, 7:00:03 AM
[PATCH 7/8] memblock, treewide: make memblock_free() handle late freeing
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

It shouldn't be the responsibility of memblock users to detect whether they
are freeing memblock-allocated memory late in boot and therefore need
memblock_free_late().

Make memblock_free() and memblock_phys_free() take care of late memory
freeing and drop memblock_free_late().

Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>
---
arch/sparc/kernel/mdesc.c | 4 +--
arch/x86/kernel/setup.c | 2 +-
arch/x86/platform/efi/memmap.c | 5 +---
arch/x86/platform/efi/quirks.c | 2 +-
drivers/firmware/efi/apple-properties.c | 2 +-
drivers/of/kexec.c | 2 +-
include/linux/memblock.h | 2 --
kernel/dma/swiotlb.c | 6 ++--
lib/bootconfig.c | 2 +-
mm/kfence/core.c | 4 +--
mm/memblock.c | 37 +++++++------------------
11 files changed, 22 insertions(+), 46 deletions(-)

diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c
index 30f171b7b00c..ecd6c8ae49c7 100644
--- a/arch/sparc/kernel/mdesc.c
+++ b/arch/sparc/kernel/mdesc.c
@@ -183,14 +183,12 @@ static struct mdesc_handle * __init mdesc_memblock_alloc(unsigned int mdesc_size
static void __init mdesc_memblock_free(struct mdesc_handle *hp)
{
unsigned int alloc_size;
- unsigned long start;

BUG_ON(refcount_read(&hp->refcnt) != 0);
BUG_ON(!list_empty(&hp->list));

alloc_size = PAGE_ALIGN(hp->handle_size);
- start = __pa(hp);
- memblock_free_late(start, alloc_size);
+ memblock_free(hp, alloc_size);
}

static struct mdesc_mem_ops memblock_mdesc_ops = {
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index eebcc9db1a1b..46882ce79c3a 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -426,7 +426,7 @@ int __init ima_free_kexec_buffer(void)
if (!ima_kexec_buffer_size)
return -ENOENT;

- memblock_free_late(ima_kexec_buffer_phys,
+ memblock_phys_free(ima_kexec_buffer_phys,
ima_kexec_buffer_size);

ima_kexec_buffer_phys = 0;
diff --git a/arch/x86/platform/efi/memmap.c b/arch/x86/platform/efi/memmap.c
index 023697c88910..697a9a26a005 100644
--- a/arch/x86/platform/efi/memmap.c
+++ b/arch/x86/platform/efi/memmap.c
@@ -34,10 +34,7 @@ static
void __init __efi_memmap_free(u64 phys, unsigned long size, unsigned long flags)
{
if (flags & EFI_MEMMAP_MEMBLOCK) {
- if (slab_is_available())
- memblock_free_late(phys, size);
- else
- memblock_phys_free(phys, size);
+ memblock_phys_free(phys, size);
} else if (flags & EFI_MEMMAP_SLAB) {
struct page *p = pfn_to_page(PHYS_PFN(phys));
unsigned int order = get_order(size);
diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
index 35caa5746115..a560bbcaa006 100644
--- a/arch/x86/platform/efi/quirks.c
+++ b/arch/x86/platform/efi/quirks.c
@@ -372,7 +372,7 @@ void __init efi_reserve_boot_services(void)
* doesn't make sense as far as the firmware is
* concerned, but it does provide us with a way to tag
* those regions that must not be paired with
- * memblock_free_late().
+ * memblock_phys_free().
*/
md->attribute |= EFI_MEMORY_RUNTIME;
}
diff --git a/drivers/firmware/efi/apple-properties.c b/drivers/firmware/efi/apple-properties.c
index 13ac28754c03..2e525e17fba7 100644
--- a/drivers/firmware/efi/apple-properties.c
+++ b/drivers/firmware/efi/apple-properties.c
@@ -226,7 +226,7 @@ static int __init map_properties(void)
*/
data->len = 0;
memunmap(data);
- memblock_free_late(pa_data + sizeof(*data), data_len);
+ memblock_phys_free(pa_data + sizeof(*data), data_len);

return ret;
}
diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
index c4cf3552c018..512d9be9d513 100644
--- a/drivers/of/kexec.c
+++ b/drivers/of/kexec.c
@@ -175,7 +175,7 @@ int __init ima_free_kexec_buffer(void)
if (ret)
return ret;

- memblock_free_late(addr, size);
+ memblock_phys_free(addr, size);
return 0;
}
#endif
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 6ec5e9ac0699..6f6c5b5c4a4b 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -172,8 +172,6 @@ void __next_mem_range_rev(u64 *idx, int nid, enum memblock_flags flags,
struct memblock_type *type_b, phys_addr_t *out_start,
phys_addr_t *out_end, int *out_nid);

-void memblock_free_late(phys_addr_t base, phys_addr_t size);
-
#ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
static inline void __next_physmem_range(u64 *idx, struct memblock_type *type,
phys_addr_t *out_start,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d8e6f1d889d5..e44e039e00d3 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -546,10 +546,10 @@ void __init swiotlb_exit(void)
free_pages(tbl_vaddr, get_order(tbl_size));
free_pages((unsigned long)mem->slots, get_order(slots_size));
} else {
- memblock_free_late(__pa(mem->areas),
+ memblock_free(mem->areas,
array_size(sizeof(*mem->areas), mem->nareas));
- memblock_free_late(mem->start, tbl_size);
- memblock_free_late(__pa(mem->slots), slots_size);
+ memblock_phys_free(mem->start, tbl_size);
+ memblock_free(mem->slots, slots_size);
}

memset(mem, 0, sizeof(*mem));
diff --git a/lib/bootconfig.c b/lib/bootconfig.c
index 449369a60846..86a75bf636bc 100644
--- a/lib/bootconfig.c
+++ b/lib/bootconfig.c
@@ -64,7 +64,7 @@ static inline void __init xbc_free_mem(void *addr, size_t size, bool early)
if (early)
memblock_free(addr, size);
else if (addr)
- memblock_free_late(__pa(addr), size);
+ memblock_free(addr, size);
}

#else /* !__KERNEL__ */
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 7393957f9a20..5c8268af533e 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -731,10 +731,10 @@ static bool __init kfence_init_pool_early(void)
* fails for the first page, and therefore expect addr==__kfence_pool in
* most failure cases.
*/
- memblock_free_late(__pa(addr), KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool));
+ memblock_free((void *)addr, KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool));
__kfence_pool = NULL;

- memblock_free_late(__pa(kfence_metadata_init), KFENCE_METADATA_SIZE);
+ memblock_free(kfence_metadata_init, KFENCE_METADATA_SIZE);
kfence_metadata_init = NULL;

return false;
diff --git a/mm/memblock.c b/mm/memblock.c
index 9f372a8e82f7..bd5758ff07f2 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -384,26 +384,24 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u
*/
void __init memblock_discard(void)
{
- phys_addr_t addr, size;
+ phys_addr_t size;

if (memblock.reserved.regions != memblock_reserved_init_regions) {
- addr = __pa(memblock.reserved.regions);
size = PAGE_ALIGN(sizeof(struct memblock_region) *
memblock.reserved.max);
if (memblock_reserved_in_slab)
kfree(memblock.reserved.regions);
else
- memblock_free_late(addr, size);
+ memblock_free(memblock.reserved.regions, size);
}

if (memblock.memory.regions != memblock_memory_init_regions) {
- addr = __pa(memblock.memory.regions);
size = PAGE_ALIGN(sizeof(struct memblock_region) *
memblock.memory.max);
if (memblock_memory_in_slab)
kfree(memblock.memory.regions);
else
- memblock_free_late(addr, size);
+ memblock_free(memblock.memory.regions, size);
}

memblock_memory = NULL;
@@ -961,7 +959,8 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
* @size: size of the boot memory block in bytes
*
* Free boot memory block previously allocated by memblock_alloc_xx() API.
- * The freeing memory will not be released to the buddy allocator.
+ * If called after the buddy allocator is available, the memory is released to
+ * the buddy allocator.
*/
void __init_memblock memblock_free(void *ptr, size_t size)
{
@@ -975,7 +974,8 @@ void __init_memblock memblock_free(void *ptr, size_t size)
* @size: size of the boot memory block in bytes
*
* Free boot memory block previously allocated by memblock_phys_alloc_xx() API.
- * The freeing memory will not be released to the buddy allocator.
+ * If called after the buddy allocator is available, the memory is released to
+ * the buddy allocator.
*/
int __init_memblock memblock_phys_free(phys_addr_t base, phys_addr_t size)
{
@@ -985,6 +985,9 @@ int __init_memblock memblock_phys_free(phys_addr_t base, phys_addr_t size)
&base, &end, (void *)_RET_IP_);

kmemleak_free_part_phys(base, size);
+ if (slab_is_available())
+ __free_reserved_area(base, base + size, -1);
+
return memblock_remove_range(&memblock.reserved, base, size);
}

@@ -1813,26 +1816,6 @@ void *__init __memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align,
return addr;
}

-/**
- * memblock_free_late - free pages directly to buddy allocator
- * @base: phys starting address of the boot memory block
- * @size: size of the boot memory block in bytes
- *
- * This is only useful when the memblock allocator has already been torn
- * down, but we are still initializing the system. Pages are released directly
- * to the buddy allocator.
- */
-void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
-{
- phys_addr_t end = base + size - 1;
-
- memblock_dbg("%s: [%pa-%pa] %pS\n",
- __func__, &base, &end, (void *)_RET_IP_);
-
- kmemleak_free_part_phys(base, size);
- __free_reserved_area(base, base + size, -1);
-}
-

Mike Rapoport

Mar 18, 2026, 7:00:18 AM
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, freeing reserved
memory before the memory map is fully initialized in deferred_init_memmap()
causes accesses to uninitialized struct pages and may crash on spurious
list pointers, as was recently discovered during a discussion about memory
leaks in the x86 EFI code [1].

The trace below is from an attempt to call free_reserved_page() before
page_alloc_init_late():

[ 0.076840] BUG: unable to handle page fault for address: ffffce1a005a0788
[ 0.078226] #PF: supervisor read access in kernel mode
[ 0.078226] #PF: error_code(0x0000) - not-present page
[ 0.078226] PGD 0 P4D 0
[ 0.078226] Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI
[ 0.078226] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.12.68-92.123.amzn2023.x86_64 #1
[ 0.078226] Hardware name: Amazon EC2 t3a.nano/, BIOS 1.0 10/16/2017
[ 0.078226] RIP: 0010:__list_del_entry_valid_or_report+0x32/0xb0
...
[ 0.078226] __free_one_page+0x170/0x520
[ 0.078226] free_pcppages_bulk+0x151/0x1e0
[ 0.078226] free_unref_page_commit+0x263/0x320
[ 0.078226] free_unref_page+0x2c8/0x5b0
[ 0.078226] ? srso_return_thunk+0x5/0x5f
[ 0.078226] free_reserved_page+0x1c/0x30
[ 0.078226] memblock_free_late+0x6c/0xc0

Currently there are not many callers of free_reserved_area(), and they all
appear to run at the right time.

Still, to protect against problematic code moves or the addition of new
callers, add a warning that reserved pages cannot be freed until the
memory map is fully initialized.

[1] https://lore.kernel.org/all/e5d5a1105d90ee1e7fe7eaf...@kernel.crashing.org/

Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>
---
mm/internal.h | 10 ++++++++++
mm/memblock.c | 5 +++++
mm/page_alloc.c | 10 ----------
3 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index cb0af847d7d9..f60c1edb2e02 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1233,7 +1233,17 @@ static inline void vunmap_range_noflush(unsigned long start, unsigned long end)
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
DECLARE_STATIC_KEY_TRUE(deferred_pages);

+static inline bool deferred_pages_enabled(void)
+{
+ return static_branch_unlikely(&deferred_pages);
+}
+
bool __init deferred_grow_zone(struct zone *zone, unsigned int order);
+#else
+static inline bool deferred_pages_enabled(void)
+{
+ return false;
+}
#endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */

void init_deferred_page(unsigned long pfn, int nid);
diff --git a/mm/memblock.c b/mm/memblock.c
index bd5758ff07f2..780e70d4971a 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -896,6 +896,11 @@ static unsigned long __free_reserved_area(phys_addr_t start, phys_addr_t end,
{
unsigned long pages = 0, pfn;

+ if (deferred_pages_enabled()) {
+ WARN(1, "Cannot free reserved memory because of deferred initialization of the memory map");
+ return 0;
+ }
+
for_each_valid_pfn(pfn, PFN_UP(start), PFN_DOWN(end)) {
struct page *page = pfn_to_page(pfn);
void *direct_map_addr;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df3d61253001..9ac47bab2ea7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -331,11 +331,6 @@ int page_group_by_mobility_disabled __read_mostly;
*/
DEFINE_STATIC_KEY_TRUE(deferred_pages);

-static inline bool deferred_pages_enabled(void)
-{
- return static_branch_unlikely(&deferred_pages);
-}
-
/*
* deferred_grow_zone() is __init, but it is called from
* get_page_from_freelist() during early boot until deferred_pages permanently
@@ -348,11 +343,6 @@ _deferred_grow_zone(struct zone *zone, unsigned int order)
return deferred_grow_zone(zone, order);
}
#else
-static inline bool deferred_pages_enabled(void)
-{
- return false;
-}
-
static inline bool _deferred_grow_zone(struct zone *zone, unsigned int order)
{
return false;
--
2.51.0

Vlastimil Babka

Mar 18, 2026, 10:17:07 AM
On 3/18/26 11:58, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>
>
> free_reserved_area() is related to memblock as it frees reserved memory
> back to the buddy allocator, similar to what memblock_free_late() does.
>
> Move free_reserved_area() to mm/memblock.c to prepare for further
> consolidation of the functions that free reserved memory.
>
> No functional changes.
>
> Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>

Acked-by: Vlastimil Babka (SUSE) <vba...@kernel.org>

Mike Rapoport

Mar 18, 2026, 4:52:40 PM
From: "Mike Rapoport (Microsoft)" <rp...@kernel.org>

After moving the free_reserved_area() function to mm/memblock.c, the
memblock tests lack stubs for several functions and macros that it calls.

Add them.

Signed-off-by: Mike Rapoport (Microsoft) <rp...@kernel.org>
---
tools/include/linux/mm.h | 1 +
tools/testing/memblock/internal.h | 28 +++++++++++++++++++++++++---
2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h
index 028f3faf46e7..4407d8396108 100644
--- a/tools/include/linux/mm.h
+++ b/tools/include/linux/mm.h
@@ -17,6 +17,7 @@

#define __va(x) ((void *)((unsigned long)(x)))
#define __pa(x) ((unsigned long)(x))
+#define __pa_symbol(x) ((unsigned long)(x))

#define pfn_to_page(pfn) ((void *)((pfn) * PAGE_SIZE))

diff --git a/tools/testing/memblock/internal.h b/tools/testing/memblock/internal.h
index 009b97bbdd22..7ff61172ab24 100644
--- a/tools/testing/memblock/internal.h
+++ b/tools/testing/memblock/internal.h
@@ -11,9 +11,16 @@ static int memblock_debug = 1;

#define pr_warn_ratelimited(fmt, ...) printf(fmt, ##__VA_ARGS__)

+#define K(x) ((x) << (PAGE_SHIFT-10))
+
bool mirrored_kernelcore = false;

struct page {};
+static inline void *page_address(struct page *page)
+{
+ BUG();
+ return page;
+}

void memblock_free_pages(unsigned long pfn, unsigned int order)
{
@@ -23,10 +30,25 @@ static inline void accept_memory(phys_addr_t start, unsigned long size)
{
}

-static inline unsigned long free_reserved_area(void *start, void *end,
- int poison, const char *s)
+unsigned long free_reserved_area(void *start, void *end, int poison, const char *s);
+void free_reserved_page(struct page *page);
+
+static inline bool deferred_pages_enabled(void)
+{
+ return false;
+}
+
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
+ for ((pfn) = (start_pfn); (pfn) < (end_pfn); (pfn)++)
+
+static inline void *kasan_reset_tag(const void *addr)
+{
+ return (void *)addr;
+}
+
+static inline bool __is_kernel(unsigned long addr)
{
- return 0;
+ return false;
}

#endif
--
2.51.0
