[PATCH v4 01/16] slab: Reimplement page_slab()


Matthew Wilcox (Oracle)

Nov 12, 2025, 7:09:41 PM
to Vlastimil Babka, Andrew Morton, Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linu...@kvack.org, Alexander Potapenko, Marco Elver, kasa...@googlegroups.com
In order to separate slabs from folios, we need to convert from any page
in a slab to the slab directly without going through a page to folio
conversion first.

Up to this point, page_slab() has followed the example of other memdesc
converters (page_folio(), page_ptdesc() etc) and just cast the pointer
to the requested type, regardless of whether the pointer is actually a
pointer to the correct type or not.

That changes with this commit; we check that the page actually belongs
to a slab and return NULL if it does not. Other memdesc converters will
adopt this convention in future.
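
For illustration, a caller that used to assume page_slab() always
returned a valid pointer must now handle NULL. The helper below is a
hypothetical example (assuming mm/slab.h is in scope), not anything in
this series:

/*
 * Hypothetical caller, not part of this patch: map a page back to its
 * kmem_cache, or return NULL for non-slab memory such as a large
 * kmalloc page.
 */
static struct kmem_cache *cache_from_page(struct page *page)
{
	struct slab *slab = page_slab(page);

	if (!slab)
		return NULL;	/* not a slab page */
	return slab->slab_cache;
}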

kfence was the only user of page_slab(), so adjust it to the new way
of working. It will need to be touched again when we separate slab
from page.
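
As a sketch of what the other memdesc converters mentioned above might
eventually look like, a checked pgtable converter would follow the same
shape as the new page_slab(). This is purely hypothetical (the name
page_ptdesc_checked() is made up and the real conversions may end up
different):

/* Hypothetical sketch only: NULL-returning converter for pgtable pages. */
static inline struct ptdesc *page_ptdesc_checked(const struct page *page)
{
	unsigned long head = READ_ONCE(page->compound_head);

	if (head & 1)
		page = (struct page *)(head - 1);
	if (data_race(page->page_type >> 24) != PGTY_table)
		page = NULL;	/* not a page table page */

	return (struct ptdesc *)page;
}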

Signed-off-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
Cc: Alexander Potapenko <gli...@google.com>
Cc: Marco Elver <el...@google.com>
Cc: kasa...@googlegroups.com
---
 include/linux/page-flags.h | 14 +-------------
 mm/kfence/core.c           | 14 ++++++++------
 mm/slab.h                  | 28 ++++++++++++++++------------
 3 files changed, 25 insertions(+), 31 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 0091ad1986bf..6d5e44968eab 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -1048,19 +1048,7 @@ PAGE_TYPE_OPS(Table, table, pgtable)
  */
 PAGE_TYPE_OPS(Guard, guard, guard)
 
-FOLIO_TYPE_OPS(slab, slab)
-
-/**
- * PageSlab - Determine if the page belongs to the slab allocator
- * @page: The page to test.
- *
- * Context: Any context.
- * Return: True for slab pages, false for any other kind of page.
- */
-static inline bool PageSlab(const struct page *page)
-{
-	return folio_test_slab(page_folio(page));
-}
+PAGE_TYPE_OPS(Slab, slab, slab)
 
 #ifdef CONFIG_HUGETLB_PAGE
 FOLIO_TYPE_OPS(hugetlb, hugetlb)
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 727c20c94ac5..e62b5516bf48 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -612,14 +612,15 @@ static unsigned long kfence_init_pool(void)
 	 * enters __slab_free() slow-path.
 	 */
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab;
+		struct page *page;
 
 		if (!i || (i % 2))
 			continue;
 
-		slab = page_slab(pfn_to_page(start_pfn + i));
-		__folio_set_slab(slab_folio(slab));
+		page = pfn_to_page(start_pfn + i);
+		__SetPageSlab(page);
 #ifdef CONFIG_MEMCG
+		struct slab *slab = page_slab(page);
 		slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
 				 MEMCG_DATA_OBJEXTS;
 #endif
@@ -665,16 +666,17 @@ static unsigned long kfence_init_pool(void)
 
 reset_slab:
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab;
+		struct page *page;
 
 		if (!i || (i % 2))
 			continue;
 
-		slab = page_slab(pfn_to_page(start_pfn + i));
+		page = pfn_to_page(start_pfn + i);
 #ifdef CONFIG_MEMCG
+		struct slab *slab = page_slab(page);
 		slab->obj_exts = 0;
 #endif
-		__folio_clear_slab(slab_folio(slab));
+		__ClearPageSlab(page);
 	}
 
 	return addr;
diff --git a/mm/slab.h b/mm/slab.h
index f7b8df56727d..18cdb8e85273 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -146,20 +146,24 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
 	struct slab *:		(struct folio *)s))
 
 /**
- * page_slab - Converts from first struct page to slab.
- * @p: The first (either head of compound or single) page of slab.
+ * page_slab - Converts from struct page to its slab.
+ * @page: A page which may or may not belong to a slab.
  *
- * A temporary wrapper to convert struct page to struct slab in situations where
- * we know the page is the compound head, or single order-0 page.
- *
- * Long-term ideally everything would work with struct slab directly or go
- * through folio to struct slab.
- *
- * Return: The slab which contains this page
+ * Return: The slab which contains this page or NULL if the page does
+ * not belong to a slab. This includes pages returned from large kmalloc.
  */
-#define page_slab(p)		(_Generic((p),				\
-	const struct page *:	(const struct slab *)(p),		\
-	struct page *:		(struct slab *)(p)))
+static inline struct slab *page_slab(const struct page *page)
+{
+	unsigned long head;
+
+	head = READ_ONCE(page->compound_head);
+	if (head & 1)
+		page = (struct page *)(head - 1);
+	if (data_race(page->page_type >> 24) != PGTY_slab)
+		page = NULL;
+
+	return (struct slab *)page;
+}
 
 /**
  * slab_page - The first struct page allocated for a slab
--
2.47.2

David Hildenbrand (Red Hat)

Nov 13, 2025, 7:31:21 AM
to Matthew Wilcox (Oracle), Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linu...@kvack.org, Alexander Potapenko, Marco Elver, kasa...@googlegroups.com
On 13.11.25 01:09, Matthew Wilcox (Oracle) wrote:
> In order to separate slabs from folios, we need to convert from any page
> in a slab to the slab directly without going through a page to folio
> conversion first.
>
> Up to this point, page_slab() has followed the example of other memdesc
> converters (page_folio(), page_ptdesc() etc) and just cast the pointer
> to the requested type, regardless of whether the pointer is actually a
> pointer to the correct type or not.
>
> That changes with this commit; we check that the page actually belongs
> to a slab and return NULL if it does not. Other memdesc converters will
> adopt this convention in future.
>
> kfence was the only user of page_slab(), so adjust it to the new way
> of working. It will need to be touched again when we separate slab
> from page.
>
> Signed-off-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
> Cc: Alexander Potapenko <gli...@google.com>
> Cc: Marco Elver <el...@google.com>
> Cc: kasa...@googlegroups.com
> ---

Acked-by: David Hildenbrand (Red Hat) <da...@kernel.org>

--
Cheers

David

Marco Elver

Nov 13, 2025, 9:03:01 AM
to Matthew Wilcox (Oracle), Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linu...@kvack.org, Alexander Potapenko, kasa...@googlegroups.com
On Thu, 13 Nov 2025 at 01:09, Matthew Wilcox (Oracle)
<wi...@infradead.org> wrote:
>
> In order to separate slabs from folios, we need to convert from any page
> in a slab to the slab directly without going through a page to folio
> conversion first.
>
> Up to this point, page_slab() has followed the example of other memdesc
> converters (page_folio(), page_ptdesc() etc) and just cast the pointer
> to the requested type, regardless of whether the pointer is actually a
> pointer to the correct type or not.
>
> That changes with this commit; we check that the page actually belongs
> to a slab and return NULL if it does not. Other memdesc converters will
> adopt this convention in future.
>
> kfence was the only user of page_slab(), so adjust it to the new way
> of working. It will need to be touched again when we separate slab
> from page.
>
> Signed-off-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
> Cc: Alexander Potapenko <gli...@google.com>
> Cc: Marco Elver <el...@google.com>
> Cc: kasa...@googlegroups.com

Ran kfence_test with different test configs:

Tested-by: Marco Elver <el...@google.com>

Harry Yoo

Nov 23, 2025, 9:04:06 PM
to Matthew Wilcox (Oracle), Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, linu...@kvack.org, Alexander Potapenko, Marco Elver, kasa...@googlegroups.com
On Thu, Nov 13, 2025 at 12:09:15AM +0000, Matthew Wilcox (Oracle) wrote:
> In order to separate slabs from folios, we need to convert from any page
> in a slab to the slab directly without going through a page to folio
> conversion first.
>
> Up to this point, page_slab() has followed the example of other memdesc
> converters (page_folio(), page_ptdesc() etc) and just cast the pointer
> to the requested type, regardless of whether the pointer is actually a
> pointer to the correct type or not.
>
> That changes with this commit; we check that the page actually belongs
> to a slab and return NULL if it does not. Other memdesc converters will
> adopt this convention in future.
>
> kfence was the only user of page_slab(), so adjust it to the new way
> of working. It will need to be touched again when we separate slab
> from page.
>
> Signed-off-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
> Cc: Alexander Potapenko <gli...@google.com>
> Cc: Marco Elver <el...@google.com>
> Cc: kasa...@googlegroups.com
> ---

Looks good to me,
Reviewed-by: Harry Yoo <harr...@oracle.com>

--
Cheers,
Harry / Hyeonggon