[RFC PATCH 00/32] Separate struct slab from struct page


Vlastimil Babka

Nov 15, 2021, 7:16:38 PM
to Matthew Wilcox, linu...@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Vlastimil Babka, Alexander Potapenko, Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski, Borislav Petkov, cgr...@vger.kernel.org, Dave Hansen, David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar, io...@lists.linux-foundation.org, Joerg Roedel, Johannes Weiner, Julia Lawall, kasa...@googlegroups.com, Lu Baolu, Luis Chamberlain, Marco Elver, Michal Hocko, Minchan Kim, Nitin Gupta, Peter Zijlstra, Sergey Senozhatsky, Suravee Suthikulpanit, Thomas Gleixner, Vladimir Davydov, Will Deacon, x...@kernel.org
Folks from non-slab subsystems are Cc'd only to patches affecting them, and
this cover letter.

Series also available in git, based on 5.16-rc1:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v1r13

Side note: as with my SLUB PREEMPT_RT series in 5.15, I would prefer to
eventually merge this via a git pull request, as it's also not a small series.
I also wouldn't mind continuing afterwards with a git tree for all slab patches
in general. It was apparently even done that way before:
https://lore.kernel.org/linux-mm/alpine.DEB.2.00.1107221108190.2996@tiger/
What do other slab maintainers think?

Previous version from Matthew Wilcox:
https://lore.kernel.org/all/20211004134650....@infradead.org/

LWN coverage of the above:
https://lwn.net/Articles/871982/

This is originally an offshoot of the folio work by Matthew. Among the more
complex parts of the struct page definition are the parts used by the slab
allocators. It would be good for the MM in general if struct slab were its own
data type, and it also helps to prevent tail pages from slipping in anywhere.
As Matthew requested in his proof-of-concept series, I have taken over the
development of this series, so it's a mix of patches from him (often modified
by me) and my own.

One big difference is the use of coccinelle to perform the less interesting
parts of the conversions automatically and all at once, instead of a larger
number of smaller, incrementally reviewable steps. Thanks to Julia Lawall and
Luis Chamberlain for all their help!

Another notable difference (based also on review feedback) is that I don't
represent large kmalloc allocations with a struct slab, as they are not really
slabs but use the page allocator directly. When going from an object address to
a struct slab, the code first tests the folio slab flag, and only if it's set
does it convert to a struct slab. This makes the struct slab type stronger.

Finally, although Matthew's version didn't use any of the folio work, the
initial folio support has been merged in the meantime, so my version builds on
top of it where appropriate. This eliminates some redundant compound_head()
calls, e.g. when testing the slab flag.

To sum up, after this series, the struct page fields used by slab allocators
are moved to a new struct slab that uses the same physical storage. Which
fields are available further depends on the selected slab allocator
implementation. The advantages include:

- Similar to a plain folio, if the slab is of order > 0, a struct slab is
always guaranteed to correspond to the head page. Additionally it's guaranteed
to be an actual slab page, not a large kmalloc allocation. This removes
uncertainty and potential for bugs.
- It's not possible to accidentally use fields of a slab implementation that's
not actually selected.
- Other subsystems can no longer use slab's fields in struct page (some
existing non-slab usages had to be adjusted in this series), so slab
implementations have more freedom in rearranging them in struct slab.
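
To illustrate the "same physical storage" point, the overlay is enforced with
compile-time checks roughly along these lines (a sketch of the pattern; the
real SLAB_MATCH() macro appears in mm/slab.h in the patches below):

#define SLAB_MATCH(pg, sl)						\
	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))

SLAB_MATCH(flags, __page_flags);	/* shares the word with page->flags */
SLAB_MATCH(compound_head, slab_list);	/* ensure bit 0 stays clear */
SLAB_MATCH(_refcount, __page_refcount);	/* shares the word with page->_refcount */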

Matthew Wilcox (Oracle) (16):
mm: Split slab into its own type
mm: Add account_slab() and unaccount_slab()
mm: Convert virt_to_cache() to use struct slab
mm: Convert __ksize() to struct slab
mm: Use struct slab in kmem_obj_info()
mm: Convert check_heap_object() to use struct slab
mm/slub: Convert detached_freelist to use a struct slab
mm/slub: Convert kfree() to use a struct slab
mm/slub: Convert print_page_info() to print_slab_info()
mm/slub: Convert pfmemalloc_match() to take a struct slab
mm/slob: Convert SLOB to use struct slab
mm/kasan: Convert to struct slab
zsmalloc: Stop using slab fields in struct page
bootmem: Use page->index instead of page->freelist
iommu: Use put_pages_list
mm: Remove slab from struct page

Vlastimil Babka (16):
mm/slab: Dissolve slab_map_pages() in its caller
mm/slub: Make object_err() static
mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
mm/slub: Convert alloc_slab_page() to return a struct slab
mm/slub: Convert __free_slab() to use struct slab
mm/slub: Convert most struct page to struct slab by spatch
mm/slub: Finish struct page to struct slab conversion
mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
mm/slab: Convert most struct page to struct slab by spatch
mm/slab: Finish struct page to struct slab conversion
mm: Convert struct page to struct slab in functions used by other
subsystems
mm/memcg: Convert slab objcgs from struct page to struct slab
mm/kfence: Convert kfence_guarded_alloc() to struct slab
mm/sl*b: Differentiate struct slab fields by sl*b implementations
mm/slub: Simplify struct slab slabs field definition
mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only
when enabled

arch/x86/mm/init_64.c | 2 +-
drivers/iommu/amd/io_pgtable.c | 59 +-
drivers/iommu/dma-iommu.c | 11 +-
drivers/iommu/intel/iommu.c | 89 +--
include/linux/bootmem_info.h | 2 +-
include/linux/iommu.h | 3 +-
include/linux/kasan.h | 9 +-
include/linux/memcontrol.h | 48 --
include/linux/mm_types.h | 38 +-
include/linux/page-flags.h | 37 -
include/linux/slab.h | 8 -
include/linux/slab_def.h | 16 +-
include/linux/slub_def.h | 29 +-
mm/bootmem_info.c | 7 +-
mm/kasan/common.c | 25 +-
mm/kasan/generic.c | 8 +-
mm/kasan/kasan.h | 1 +
mm/kasan/quarantine.c | 2 +-
mm/kasan/report.c | 12 +-
mm/kasan/report_tags.c | 10 +-
mm/kfence/core.c | 17 +-
mm/kfence/kfence_test.c | 6 +-
mm/memcontrol.c | 43 +-
mm/slab.c | 455 ++++++-------
mm/slab.h | 322 ++++++++-
mm/slab_common.c | 8 +-
mm/slob.c | 46 +-
mm/slub.c | 1164 ++++++++++++++++----------------
mm/sparse.c | 2 +-
mm/usercopy.c | 13 +-
mm/zsmalloc.c | 18 +-
31 files changed, 1302 insertions(+), 1208 deletions(-)

--
2.33.1

Vlastimil Babka

Nov 15, 2021, 7:16:42 PM
to Matthew Wilcox, linu...@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Vlastimil Babka, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, kasa...@googlegroups.com
From: "Matthew Wilcox (Oracle)" <wi...@infradead.org>

KASAN accesses some slab-related struct page fields, so we need to convert it
to struct slab. Some places are a bit simplified thanks to kasan_addr_to_slab()
encapsulating the PageSlab flag check through virt_to_slab().

[ vba...@suse.cz: adjust to differences in previous patches ]

Signed-off-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
Signed-off-by: Vlastimil Babka <vba...@suse.cz>
Cc: Andrey Ryabinin <ryabin...@gmail.com>
Cc: Alexander Potapenko <gli...@google.com>
Cc: Andrey Konovalov <andre...@gmail.com>
Cc: Dmitry Vyukov <dvy...@google.com>
Cc: <kasa...@googlegroups.com>
---
include/linux/kasan.h | 9 +++++----
mm/kasan/common.c | 21 +++++++++++----------
mm/kasan/generic.c | 8 ++++----
mm/kasan/kasan.h | 1 +
mm/kasan/quarantine.c | 2 +-
mm/kasan/report.c | 12 ++++++++++--
mm/kasan/report_tags.c | 10 +++++-----
mm/slab.c | 2 +-
mm/slub.c | 2 +-
9 files changed, 39 insertions(+), 28 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d8783b682669..fb78108d694e 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -9,6 +9,7 @@

struct kmem_cache;
struct page;
+struct slab;
struct vm_struct;
struct task_struct;

@@ -193,11 +194,11 @@ static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
return 0;
}

-void __kasan_poison_slab(struct page *page);
-static __always_inline void kasan_poison_slab(struct page *page)
+void __kasan_poison_slab(struct slab *slab);
+static __always_inline void kasan_poison_slab(struct slab *slab)
{
if (kasan_enabled())
- __kasan_poison_slab(page);
+ __kasan_poison_slab(slab);
}

void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
@@ -322,7 +323,7 @@ static inline void kasan_cache_create(struct kmem_cache *cache,
slab_flags_t *flags) {}
static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
-static inline void kasan_poison_slab(struct page *page) {}
+static inline void kasan_poison_slab(struct slab *slab) {}
static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
void *object) {}
static inline void kasan_poison_object_data(struct kmem_cache *cache,
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 6a1cd2d38bff..f0091112a381 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -247,8 +247,9 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
}
#endif

-void __kasan_poison_slab(struct page *page)
+void __kasan_poison_slab(struct slab *slab)
{
+ struct page *page = slab_page(slab);
unsigned long i;

for (i = 0; i < compound_nr(page); i++)
@@ -401,9 +402,9 @@ void __kasan_kfree_large(void *ptr, unsigned long ip)

void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
{
- struct page *page;
+ struct folio *folio;

- page = virt_to_head_page(ptr);
+ folio = page_folio(virt_to_page(ptr));

/*
* Even though this function is only called for kmem_cache_alloc and
@@ -411,12 +412,12 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
* !PageSlab() when the size provided to kmalloc is larger than
* KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc.
*/
- if (unlikely(!PageSlab(page))) {
+ if (unlikely(!folio_test_slab(folio))) {
if (____kasan_kfree_large(ptr, ip))
return;
- kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false);
+ kasan_poison(ptr, folio_size(folio), KASAN_FREE_PAGE, false);
} else {
- ____kasan_slab_free(page->slab_cache, ptr, ip, false, false);
+ ____kasan_slab_free(folio_slab(folio)->slab_cache, ptr, ip, false, false);
}
}

@@ -560,7 +561,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,

void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
- struct page *page;
+ struct slab *slab;

if (unlikely(object == ZERO_SIZE_PTR))
return (void *)object;
@@ -572,13 +573,13 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
*/
kasan_unpoison(object, size, false);

- page = virt_to_head_page(object);
+ slab = virt_to_slab(object);

/* Piggy-back on kmalloc() instrumentation to poison the redzone. */
- if (unlikely(!PageSlab(page)))
+ if (unlikely(!slab))
return __kasan_kmalloc_large(object, size, flags);
else
- return ____kasan_kmalloc(page->slab_cache, object, size, flags);
+ return ____kasan_kmalloc(slab->slab_cache, object, size, flags);
}

bool __kasan_check_byte(const void *address, unsigned long ip)
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 5d0b79416c4e..a25ad4090615 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -330,16 +330,16 @@ DEFINE_ASAN_SET_SHADOW(f8);

static void __kasan_record_aux_stack(void *addr, bool can_alloc)
{
- struct page *page = kasan_addr_to_page(addr);
+ struct slab *slab = kasan_addr_to_slab(addr);
struct kmem_cache *cache;
struct kasan_alloc_meta *alloc_meta;
void *object;

- if (is_kfence_address(addr) || !(page && PageSlab(page)))
+ if (is_kfence_address(addr) || !slab)
return;

- cache = page->slab_cache;
- object = nearest_obj(cache, page_slab(page), addr);
+ cache = slab->slab_cache;
+ object = nearest_obj(cache, slab, addr);
alloc_meta = kasan_get_alloc_meta(cache, object);
if (!alloc_meta)
return;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index aebd8df86a1f..c17fa8d26ffe 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -265,6 +265,7 @@ bool kasan_report(unsigned long addr, size_t size,
void kasan_report_invalid_free(void *object, unsigned long ip);

struct page *kasan_addr_to_page(const void *addr);
+struct slab *kasan_addr_to_slab(const void *addr);

depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index d8ccff4c1275..587da8995f2d 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -117,7 +117,7 @@ static unsigned long quarantine_batch_size;

static struct kmem_cache *qlink_to_cache(struct qlist_node *qlink)
{
- return virt_to_head_page(qlink)->slab_cache;
+ return virt_to_slab(qlink)->slab_cache;
}

static void *qlink_to_object(struct qlist_node *qlink, struct kmem_cache *cache)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index e00999dc6499..7df696c0422c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -149,6 +149,13 @@ struct page *kasan_addr_to_page(const void *addr)
return virt_to_head_page(addr);
return NULL;
}
+struct slab *kasan_addr_to_slab(const void *addr)
+{
+ if ((addr >= (void *)PAGE_OFFSET) &&
+ (addr < high_memory))
+ return virt_to_slab(addr);
+ return NULL;
+}

static void describe_object_addr(struct kmem_cache *cache, void *object,
const void *addr)
@@ -248,8 +255,9 @@ static void print_address_description(void *addr, u8 tag)
pr_err("\n");

if (page && PageSlab(page)) {
- struct kmem_cache *cache = page->slab_cache;
- void *object = nearest_obj(cache, page_slab(page), addr);
+ struct slab *slab = page_slab(page);
+ struct kmem_cache *cache = slab->slab_cache;
+ void *object = nearest_obj(cache, slab, addr);

describe_object(cache, object, addr, tag);
}
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 06c21dd77493..1b41de88c53e 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -12,7 +12,7 @@ const char *kasan_get_bug_type(struct kasan_access_info *info)
#ifdef CONFIG_KASAN_TAGS_IDENTIFY
struct kasan_alloc_meta *alloc_meta;
struct kmem_cache *cache;
- struct page *page;
+ struct slab *slab;
const void *addr;
void *object;
u8 tag;
@@ -20,10 +20,10 @@ const char *kasan_get_bug_type(struct kasan_access_info *info)

tag = get_tag(info->access_addr);
addr = kasan_reset_tag(info->access_addr);
- page = kasan_addr_to_page(addr);
- if (page && PageSlab(page)) {
- cache = page->slab_cache;
- object = nearest_obj(cache, page_slab(page), (void *)addr);
+ slab = kasan_addr_to_slab(addr);
+ if (slab) {
+ cache = slab->slab_cache;
+ object = nearest_obj(cache, slab, (void *)addr);
alloc_meta = kasan_get_alloc_meta(cache, object);

if (alloc_meta) {
diff --git a/mm/slab.c b/mm/slab.c
index adf688d2da64..5aa601c5756a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2605,7 +2605,7 @@ static struct slab *cache_grow_begin(struct kmem_cache *cachep,
* page_address() in the latter returns a non-tagged pointer,
* as it should be for slab pages.
*/
- kasan_poison_slab(slab_page(slab));
+ kasan_poison_slab(slab);

/* Get slab management. */
freelist = alloc_slabmgmt(cachep, slab, offset,
diff --git a/mm/slub.c b/mm/slub.c
index 981e40a88bab..1ff3fa2ab528 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1961,7 +1961,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)

slab->slab_cache = s;

- kasan_poison_slab(slab_page(slab));
+ kasan_poison_slab(slab);

start = slab_address(slab);

--
2.33.1

Vlastimil Babka

Nov 15, 2021, 7:16:42 PM
to Matthew Wilcox, linu...@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Vlastimil Babka, Julia Lawall, Luis Chamberlain, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Marco Elver, Johannes Weiner, Michal Hocko, Vladimir Davydov, kasa...@googlegroups.com, cgr...@vger.kernel.org
KASAN, KFENCE and memcg interact with SLAB or SLUB internals through the
functions nearest_obj(), obj_to_index() and objs_per_slab(), which take a
struct page parameter. This patch converts them to struct slab, including all
callers, through a coccinelle semantic patch.

// Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
// Note: needs coccinelle 1.1.1 to avoid breaking whitespace

@@
@@

-objs_per_slab_page(
+objs_per_slab(
...
)
{ ... }

@@
@@

-objs_per_slab_page(
+objs_per_slab(
...
)

@@
identifier fn =~ "obj_to_index|objs_per_slab";
@@

fn(...,
- const struct page *page
+ const struct slab *slab
,...)
{
<...
(
- page_address(page)
+ slab_address(slab)
|
- page
+ slab
)
...>
}

@@
identifier fn =~ "nearest_obj";
@@

fn(...,
- struct page *page
+ const struct slab *slab
,...)
{
<...
(
- page_address(page)
+ slab_address(slab)
|
- page
+ slab
)
...>
}

@@
identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab";
expression E;
@@

fn(...,
(
- slab_page(E)
+ E
|
- virt_to_page(E)
+ virt_to_slab(E)
|
- virt_to_head_page(E)
+ virt_to_slab(E)
|
- page
+ page_slab(page)
)
,...)

Signed-off-by: Vlastimil Babka <vba...@suse.cz>
Cc: Julia Lawall <julia....@inria.fr>
Cc: Luis Chamberlain <mcg...@kernel.org>
Cc: Andrey Ryabinin <ryabin...@gmail.com>
Cc: Alexander Potapenko <gli...@google.com>
Cc: Andrey Konovalov <andre...@gmail.com>
Cc: Dmitry Vyukov <dvy...@google.com>
Cc: Marco Elver <el...@google.com>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: Michal Hocko <mho...@kernel.org>
Cc: Vladimir Davydov <vdavyd...@gmail.com>
Cc: <kasa...@googlegroups.com>
Cc: <cgr...@vger.kernel.org>
---
include/linux/slab_def.h | 16 ++++++++--------
include/linux/slub_def.h | 18 +++++++++---------
mm/kasan/common.c | 4 ++--
mm/kasan/generic.c | 2 +-
mm/kasan/report.c | 2 +-
mm/kasan/report_tags.c | 2 +-
mm/kfence/kfence_test.c | 4 ++--
mm/memcontrol.c | 4 ++--
mm/slab.c | 10 +++++-----
mm/slab.h | 4 ++--
mm/slub.c | 2 +-
11 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 3aa5e1e73ab6..e24c9aff6fed 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -87,11 +87,11 @@ struct kmem_cache {
struct kmem_cache_node *node[MAX_NUMNODES];
};

-static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
void *x)
{
- void *object = x - (x - page->s_mem) % cache->size;
- void *last_object = page->s_mem + (cache->num - 1) * cache->size;
+ void *object = x - (x - slab->s_mem) % cache->size;
+ void *last_object = slab->s_mem + (cache->num - 1) * cache->size;

if (unlikely(object > last_object))
return last_object;
@@ -106,16 +106,16 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
* reciprocal_divide(offset, cache->reciprocal_buffer_size)
*/
static inline unsigned int obj_to_index(const struct kmem_cache *cache,
- const struct page *page, void *obj)
+ const struct slab *slab, void *obj)
{
- u32 offset = (obj - page->s_mem);
+ u32 offset = (obj - slab->s_mem);
return reciprocal_divide(offset, cache->reciprocal_buffer_size);
}

-static inline int objs_per_slab_page(const struct kmem_cache *cache,
- const struct page *page)
+static inline int objs_per_slab(const struct kmem_cache *cache,
+ const struct slab *slab)
{
- if (is_kfence_address(page_address(page)))
+ if (is_kfence_address(slab_address(slab)))
return 1;
return cache->num;
}
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 8a9c2876ca89..33c5c0e3bd8d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -158,11 +158,11 @@ static inline void sysfs_slab_release(struct kmem_cache *s)

void *fixup_red_left(struct kmem_cache *s, void *p);

-static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
void *x) {
- void *object = x - (x - page_address(page)) % cache->size;
- void *last_object = page_address(page) +
- (page->objects - 1) * cache->size;
+ void *object = x - (x - slab_address(slab)) % cache->size;
+ void *last_object = slab_address(slab) +
+ (slab->objects - 1) * cache->size;
void *result = (unlikely(object > last_object)) ? last_object : object;

result = fixup_red_left(cache, result);
@@ -178,16 +178,16 @@ static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
}

static inline unsigned int obj_to_index(const struct kmem_cache *cache,
- const struct page *page, void *obj)
+ const struct slab *slab, void *obj)
{
if (is_kfence_address(obj))
return 0;
- return __obj_to_index(cache, page_address(page), obj);
+ return __obj_to_index(cache, slab_address(slab), obj);
}

-static inline int objs_per_slab_page(const struct kmem_cache *cache,
- const struct page *page)
+static inline int objs_per_slab(const struct kmem_cache *cache,
+ const struct slab *slab)
{
- return page->objects;
+ return slab->objects;
}
#endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8428da2aaf17..6a1cd2d38bff 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -298,7 +298,7 @@ static inline u8 assign_tag(struct kmem_cache *cache,
/* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
#ifdef CONFIG_SLAB
/* For SLAB assign tags based on the object index in the freelist. */
- return (u8)obj_to_index(cache, virt_to_head_page(object), (void *)object);
+ return (u8)obj_to_index(cache, virt_to_slab(object), (void *)object);
#else
/*
* For SLUB assign a random tag during slab creation, otherwise reuse
@@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
if (is_kfence_address(object))
return false;

- if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
+ if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
object)) {
kasan_report_invalid_free(tagged_object, ip);
return true;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 84a038b07c6f..5d0b79416c4e 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -339,7 +339,7 @@ static void __kasan_record_aux_stack(void *addr, bool can_alloc)
return;

cache = page->slab_cache;
- object = nearest_obj(cache, page, addr);
+ object = nearest_obj(cache, page_slab(page), addr);
alloc_meta = kasan_get_alloc_meta(cache, object);
if (!alloc_meta)
return;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 0bc10f452f7e..e00999dc6499 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -249,7 +249,7 @@ static void print_address_description(void *addr, u8 tag)

if (page && PageSlab(page)) {
struct kmem_cache *cache = page->slab_cache;
- void *object = nearest_obj(cache, page, addr);
+ void *object = nearest_obj(cache, page_slab(page), addr);

describe_object(cache, object, addr, tag);
}
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 8a319fc16dab..06c21dd77493 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -23,7 +23,7 @@ const char *kasan_get_bug_type(struct kasan_access_info *info)
page = kasan_addr_to_page(addr);
if (page && PageSlab(page)) {
cache = page->slab_cache;
- object = nearest_obj(cache, page, (void *)addr);
+ object = nearest_obj(cache, page_slab(page), (void *)addr);
alloc_meta = kasan_get_alloc_meta(cache, object);

if (alloc_meta) {
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index 695030c1fff8..f7276711d7b9 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -291,8 +291,8 @@ static void *test_alloc(struct kunit *test, size_t size, gfp_t gfp, enum allocat
* even for KFENCE objects; these are required so that
* memcg accounting works correctly.
*/
- KUNIT_EXPECT_EQ(test, obj_to_index(s, page, alloc), 0U);
- KUNIT_EXPECT_EQ(test, objs_per_slab_page(s, page), 1);
+ KUNIT_EXPECT_EQ(test, obj_to_index(s, page_slab(page), alloc), 0U);
+ KUNIT_EXPECT_EQ(test, objs_per_slab(s, page_slab(page)), 1);

if (policy == ALLOCATE_ANY)
return alloc;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 781605e92015..c8b53ec074b4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2819,7 +2819,7 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
gfp_t gfp, bool new_page)
{
- unsigned int objects = objs_per_slab_page(s, page);
+ unsigned int objects = objs_per_slab(s, page_slab(page));
unsigned long memcg_data;
void *vec;

@@ -2881,7 +2881,7 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p)
struct obj_cgroup *objcg;
unsigned int off;

- off = obj_to_index(page->slab_cache, page, p);
+ off = obj_to_index(page->slab_cache, page_slab(page), p);
objcg = page_objcgs(page)[off];
if (objcg)
return obj_cgroup_memcg(objcg);
diff --git a/mm/slab.c b/mm/slab.c
index 78ef4d94e3de..adf688d2da64 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1560,7 +1560,7 @@ static void check_poison_obj(struct kmem_cache *cachep, void *objp)
struct slab *slab = virt_to_slab(objp);
unsigned int objnr;

- objnr = obj_to_index(cachep, slab_page(slab), objp);
+ objnr = obj_to_index(cachep, slab, objp);
if (objnr) {
objp = index_to_obj(cachep, slab, objnr - 1);
realobj = (char *)objp + obj_offset(cachep);
@@ -2530,7 +2530,7 @@ static void *slab_get_obj(struct kmem_cache *cachep, struct slab *slab)
static void slab_put_obj(struct kmem_cache *cachep,
struct slab *slab, void *objp)
{
- unsigned int objnr = obj_to_index(cachep, slab_page(slab), objp);
+ unsigned int objnr = obj_to_index(cachep, slab, objp);
#if DEBUG
unsigned int i;

@@ -2717,7 +2717,7 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp,
if (cachep->flags & SLAB_STORE_USER)
*dbg_userword(cachep, objp) = (void *)caller;

- objnr = obj_to_index(cachep, slab_page(slab), objp);
+ objnr = obj_to_index(cachep, slab, objp);

BUG_ON(objnr >= cachep->num);
BUG_ON(objp != index_to_obj(cachep, slab, objnr));
@@ -3663,7 +3663,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
objp = object - obj_offset(cachep);
kpp->kp_data_offset = obj_offset(cachep);
slab = virt_to_slab(objp);
- objnr = obj_to_index(cachep, slab_page(slab), objp);
+ objnr = obj_to_index(cachep, slab, objp);
objp = index_to_obj(cachep, slab, objnr);
kpp->kp_objp = objp;
if (DEBUG && cachep->flags & SLAB_STORE_USER)
@@ -4182,7 +4182,7 @@ void __check_heap_object(const void *ptr, unsigned long n,

/* Find and validate object. */
cachep = slab->slab_cache;
- objnr = obj_to_index(cachep, slab_page(slab), (void *)ptr);
+ objnr = obj_to_index(cachep, slab, (void *)ptr);
BUG_ON(objnr >= cachep->num);

/* Find offset within object. */
diff --git a/mm/slab.h b/mm/slab.h
index d6c993894c02..b07e842b5cfc 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -483,7 +483,7 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
continue;
}

- off = obj_to_index(s, page, p[i]);
+ off = obj_to_index(s, page_slab(page), p[i]);
obj_cgroup_get(objcg);
page_objcgs(page)[off] = objcg;
mod_objcg_state(objcg, page_pgdat(page),
@@ -522,7 +522,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
else
s = s_orig;

- off = obj_to_index(s, page, p[i]);
+ off = obj_to_index(s, page_slab(page), p[i]);
objcg = objcgs[off];
if (!objcg)
continue;
diff --git a/mm/slub.c b/mm/slub.c
index 7759f3dde64b..981e40a88bab 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4342,7 +4342,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
#else
objp = objp0;
#endif
- objnr = obj_to_index(s, slab_page(slab), objp);
+ objnr = obj_to_index(s, slab, objp);
kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp);
objp = base + s->size * objnr;
kpp->kp_objp = objp;
--
2.33.1

Vlastimil Babka

Nov 15, 2021, 7:16:42 PM
to Matthew Wilcox, linu...@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Vlastimil Babka, Alexander Potapenko, Marco Elver, Dmitry Vyukov, kasa...@googlegroups.com
The function sets some fields that are being moved from struct page to struct
slab, so it needs to be converted.

Signed-off-by: Vlastimil Babka <vba...@suse.cz>
Cc: Alexander Potapenko <gli...@google.com>
Cc: Marco Elver <el...@google.com>
Cc: Dmitry Vyukov <dvy...@google.com>
Cc: <kasa...@googlegroups.com>
---
mm/kfence/core.c | 12 ++++++------
mm/kfence/kfence_test.c | 6 +++---
2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 09945784df9e..4eb60cf5ff8b 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -360,7 +360,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
{
struct kfence_metadata *meta = NULL;
unsigned long flags;
- struct page *page;
+ struct slab *slab;
void *addr;

/* Try to obtain a free object. */
@@ -424,13 +424,13 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g

alloc_covered_add(alloc_stack_hash, 1);

- /* Set required struct page fields. */
- page = virt_to_page(meta->addr);
- page->slab_cache = cache;
+ /* Set required slab fields. */
+ slab = virt_to_slab((void *)meta->addr);
+ slab->slab_cache = cache;
if (IS_ENABLED(CONFIG_SLUB))
- page->objects = 1;
+ slab->objects = 1;
if (IS_ENABLED(CONFIG_SLAB))
- page->s_mem = addr;
+ slab->s_mem = addr;

/* Memory initialization. */
for_each_canary(meta, set_canary_byte);
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index f7276711d7b9..a22b1af85577 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -282,7 +282,7 @@ static void *test_alloc(struct kunit *test, size_t size, gfp_t gfp, enum allocat
alloc = kmalloc(size, gfp);

if (is_kfence_address(alloc)) {
- struct page *page = virt_to_head_page(alloc);
+ struct slab *slab = virt_to_slab(alloc);
struct kmem_cache *s = test_cache ?:
kmalloc_caches[kmalloc_type(GFP_KERNEL)][__kmalloc_index(size, false)];

@@ -291,8 +291,8 @@ static void *test_alloc(struct kunit *test, size_t size, gfp_t gfp, enum allocat
* even for KFENCE objects; these are required so that
* memcg accounting works correctly.
*/
- KUNIT_EXPECT_EQ(test, obj_to_index(s, page_slab(page), alloc), 0U);
- KUNIT_EXPECT_EQ(test, objs_per_slab(s, page_slab(page)), 1);
+ KUNIT_EXPECT_EQ(test, obj_to_index(s, slab, alloc), 0U);
+ KUNIT_EXPECT_EQ(test, objs_per_slab(s, slab), 1);

if (policy == ALLOCATE_ANY)
return alloc;
--
2.33.1

Vlastimil Babka

Nov 15, 2021, 7:16:43 PM
to Matthew Wilcox, linu...@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Vlastimil Babka, Alexander Potapenko, Marco Elver, Dmitry Vyukov, kasa...@googlegroups.com
With a struct slab definition separate from struct page, we can go further and
define only the fields that the chosen sl*b implementation uses. This means
everything between the __page_flags and __page_refcount placeholders now
depends on the chosen CONFIG_SL*B. Some fields exist in all implementations
(slab_list), but can be part of a union in some, so it's simpler to repeat them
than to complicate the definition with even more ifdefs.

The patch doesn't change the physical offsets of the fields, although that
could be done later - for example it's now clear that tighter packing in SLOB
could be possible.

This should also prevent accidental use of fields that don't exist in a given
implementation. Before this patch, virt_to_cache() and cache_from_obj() were
visible for SLOB (albeit not used), although they rely on the slab_cache field
that isn't set by SLOB. With this patch it's now a compile error, so these
functions are now hidden behind #ifndef CONFIG_SLOB.

Signed-off-by: Vlastimil Babka <vba...@suse.cz>
Cc: Alexander Potapenko <gli...@google.com> (maintainer:KFENCE)
Cc: Marco Elver <el...@google.com> (maintainer:KFENCE)
Cc: Dmitry Vyukov <dvy...@google.com> (reviewer:KFENCE)
Cc: <kasa...@googlegroups.com>
---
mm/kfence/core.c | 9 +++++----
mm/slab.h | 46 ++++++++++++++++++++++++++++++++++++----------
2 files changed, 41 insertions(+), 14 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 4eb60cf5ff8b..46103a7628a6 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -427,10 +427,11 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
/* Set required slab fields. */
slab = virt_to_slab((void *)meta->addr);
slab->slab_cache = cache;
- if (IS_ENABLED(CONFIG_SLUB))
- slab->objects = 1;
- if (IS_ENABLED(CONFIG_SLAB))
- slab->s_mem = addr;
+#if defined(CONFIG_SLUB)
+ slab->objects = 1;
+#elif defined (CONFIG_SLAB)
+ slab->s_mem = addr;
+#endif

/* Memory initialization. */
for_each_canary(meta, set_canary_byte);
diff --git a/mm/slab.h b/mm/slab.h
index 58b65e5e5d49..10a9ee195249 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -8,9 +8,24 @@
/* Reuses the bits in struct page */
struct slab {
unsigned long __page_flags;
+
+#if defined(CONFIG_SLAB)
+
+ union {
+ struct list_head slab_list;
+ struct rcu_head rcu_head;
+ };
+ struct kmem_cache *slab_cache;
+ void *freelist; /* array of free object indexes */
+ void * s_mem; /* first object */
+ unsigned int active;
+
+#elif defined(CONFIG_SLUB)
+
union {
struct list_head slab_list;
- struct { /* Partial pages */
+ struct rcu_head rcu_head;
+ struct {
struct slab *next;
#ifdef CONFIG_64BIT
int slabs; /* Nr of slabs left */
@@ -18,25 +33,32 @@ struct slab {
short int slabs;
#endif
};
- struct rcu_head rcu_head;
};
- struct kmem_cache *slab_cache; /* not slob */
+ struct kmem_cache *slab_cache;
/* Double-word boundary */
void *freelist; /* first free object */
union {
- void *s_mem; /* slab: first object */
- unsigned long counters; /* SLUB */
- struct { /* SLUB */
+ unsigned long counters;
+ struct {
unsigned inuse:16;
unsigned objects:15;
unsigned frozen:1;
};
};
+ unsigned int __unused;
+
+#elif defined(CONFIG_SLOB)
+
+ struct list_head slab_list;
+ void * __unused_1;
+ void *freelist; /* first free block */
+ void * __unused_2;
+ int units;
+
+#else
+#error "Unexpected slab allocator configured"
+#endif

- union {
- unsigned int active; /* SLAB */
- int units; /* SLOB */
- };
atomic_t __page_refcount;
#ifdef CONFIG_MEMCG
unsigned long memcg_data;
@@ -47,7 +69,9 @@ struct slab {
static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
SLAB_MATCH(flags, __page_flags);
SLAB_MATCH(compound_head, slab_list); /* Ensure bit 0 is clear */
+#ifndef CONFIG_SLOB
SLAB_MATCH(rcu_head, rcu_head);
+#endif
SLAB_MATCH(_refcount, __page_refcount);
#ifdef CONFIG_MEMCG
SLAB_MATCH(memcg_data, memcg_data);
@@ -623,6 +647,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s,
}
#endif /* CONFIG_MEMCG_KMEM */

+#ifndef CONFIG_SLOB
static inline struct kmem_cache *virt_to_cache(const void *obj)
{
struct slab *slab;
@@ -669,6 +694,7 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
print_tracking(cachep, x);
return cachep;
}
+#endif /* CONFIG_SLOB */

static inline size_t slab_ksize(const struct kmem_cache *s)
{
--
2.33.1

Andrey Konovalov

Nov 16, 2021, 8:58:50 AM
to Vlastimil Babka, Matthew Wilcox, Linux Memory Management List, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, kasan-dev
This is a bit confusing: the series, and this patch in particular, is
supposedly about adding struct slab, but here struct folio suddenly
appears. It makes sense to adjust the patch description.

Also, perhaps a virt_to_folio() helper would be handy to replace
virt_to_head_page()?

Please add a line between the functions.

Andrey Konovalov

Nov 16, 2021, 9:02:40 AM
to Vlastimil Babka, Matthew Wilcox, Linux Memory Management List, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Julia Lawall, Luis Chamberlain, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Marco Elver, Johannes Weiner, Michal Hocko, Vladimir Davydov, kasan-dev, cgr...@vger.kernel.org
On Tue, Nov 16, 2021 at 1:16 AM Vlastimil Babka <vba...@suse.cz> wrote:
>
The tab before addr should be a space. checkpatch should probably report this.

Vlastimil Babka

Nov 16, 2021, 11:33:00 AM
to Andrey Konovalov, Matthew Wilcox, Linux Memory Management List, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Julia Lawall, Luis Chamberlain, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Marco Elver, Johannes Weiner, Michal Hocko, Vladimir Davydov, kasan-dev, cgr...@vger.kernel.org
On 11/16/21 15:02, Andrey Konovalov wrote:
>> --- a/mm/kasan/report.c
>> +++ b/mm/kasan/report.c
>> @@ -249,7 +249,7 @@ static void print_address_description(void *addr, u8 tag)
>>
>> if (page && PageSlab(page)) {
>> struct kmem_cache *cache = page->slab_cache;
>> - void *object = nearest_obj(cache, page, addr);
>> + void *object = nearest_obj(cache, page_slab(page), addr);
>
> The tab before addr should be a space. checkpatch should probably report this.

Good catch, thanks. Note the tab is there already before this patch, it just
happened to appear identical to a single space before.

Matthew Wilcox

Nov 16, 2021, 1:17:23 PM
to Vlastimil Babka, linu...@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, kasa...@googlegroups.com
On Tue, Nov 16, 2021 at 01:16:20AM +0100, Vlastimil Babka wrote:
> @@ -411,12 +412,12 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
> * !PageSlab() when the size provided to kmalloc is larger than
> * KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc.
> */
> - if (unlikely(!PageSlab(page))) {
> + if (unlikely(!folio_test_slab(folio))) {
> if (____kasan_kfree_large(ptr, ip))
> return;
> - kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false);
> + kasan_poison(ptr, folio_size(folio), KASAN_FREE_PAGE, false);
> } else {
> - ____kasan_slab_free(page->slab_cache, ptr, ip, false, false);
> + ____kasan_slab_free(folio_slab(folio)->slab_cache, ptr, ip, false, false);

I'd avoid this long line by doing:
struct slab *slab = folio_slab(folio);
____kasan_slab_free(slab->slab_cache, ptr, ip, false, false);

Andrey Konovalov

Nov 16, 2021, 6:04:58 PM
to Vlastimil Babka, Matthew Wilcox, Linux Memory Management List, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Julia Lawall, Luis Chamberlain, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Marco Elver, Johannes Weiner, Michal Hocko, Vladimir Davydov, kasan-dev, cgr...@vger.kernel.org
Ah, indeed. Feel free to keep this as is to not pollute the patch. Thanks!

Vlastimil Babka

Nov 16, 2021, 6:38:01 PM
to Andrey Konovalov, Matthew Wilcox, Linux Memory Management List, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Julia Lawall, Luis Chamberlain, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Marco Elver, Johannes Weiner, Michal Hocko, Vladimir Davydov, kasan-dev, cgr...@vger.kernel.org
I will fix it up in patch 24/32 so that this one can stay purely autogenerated
and there's no extra pre-patch.

Marco Elver

Nov 17, 2021, 2:00:48 AM
to Vlastimil Babka, Matthew Wilcox, linu...@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Alexander Potapenko, Dmitry Vyukov, kasa...@googlegroups.com
On Tue, 16 Nov 2021 at 01:16, Vlastimil Babka <vba...@suse.cz> wrote:
> The function sets some fields that are being moved from struct page to struct
> slab so it needs to be converted.
>
> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
> Cc: Alexander Potapenko <gli...@google.com>
> Cc: Marco Elver <el...@google.com>
> Cc: Dmitry Vyukov <dvy...@google.com>
> Cc: <kasa...@googlegroups.com>

It looks sane. I ran kfence_test with both slab and slub, and all tests pass:

Tested-by: Marco Elver <el...@google.com>

But should there be other major changes, we should re-test.

Thanks,
-- Marco

Marco Elver

Nov 17, 2021, 2:01:17 AM
to Vlastimil Babka, Matthew Wilcox, linu...@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Alexander Potapenko, Dmitry Vyukov, kasa...@googlegroups.com
On Tue, 16 Nov 2021 at 01:16, Vlastimil Babka <vba...@suse.cz> wrote:
> With a struct slab definition separate from struct page, we can go further and
> define only fields that the chosen sl*b implementation uses. This means
> everything between __page_flags and __page_refcount placeholders now depends on
> the chosen CONFIG_SL*B. Some fields exist in all implementations (slab_list)
> but can be part of a union in some, so it's simpler to repeat them than
> complicate the definition with ifdefs even more.
>
> The patch doesn't change physical offsets of the fields, although it could be
> done later - for example it's now clear that tighter packing in SLOB could be
> possible.
>
> This should also prevent accidental use of fields that don't exist in given
> implementation. Before this patch virt_to_cache() and and cache_from_obj() was
> visible for SLOB (albeit not used), although it relies on the slab_cache field
> that isn't set by SLOB. With this patch it's now a compile error, so these
> functions are now hidden behind #ifndef CONFIG_SLOB.
>
> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
> Cc: Alexander Potapenko <gli...@google.com> (maintainer:KFENCE)
> Cc: Marco Elver <el...@google.com> (maintainer:KFENCE)
> Cc: Dmitry Vyukov <dvy...@google.com> (reviewer:KFENCE)
> Cc: <kasa...@googlegroups.com>

Ran kfence_test with both slab and slub, and all tests pass:

Tested-by: Marco Elver <el...@google.com>