Re: [PATCH RFC] mm+net: allow to set kmem_cache create flag for SLAB_NEVER_MERGE

Vlastimil Babka

Jan 18, 2023, 2:36:54 AM
to Jesper Dangaard Brouer, net...@vger.kernel.org, linu...@kvack.org, Christoph Lameter, Andrew Morton, Mel Gorman, Joonsoo Kim, pen...@kernel.org, Jakub Kicinski, David S. Miller, edum...@google.com, pab...@redhat.com, David Rientjes, Hyeonggon Yoo, Roman Gushchin, Alexander Potapenko, Marco Elver, kasan-dev
On 1/17/23 14:40, Jesper Dangaard Brouer wrote:
> Allow API users of kmem_cache_create to specify that they don't want
> any slab merging or aliasing (with similarly sized objects). Use this in
> the network stack and kfence_test.
>
> The SKB (sk_buff) kmem_cache slab is critical for network performance.
> The network stack uses the kmem_cache_{alloc,free}_bulk APIs to gain
> performance by amortising the alloc/free cost.
>
> For the bulk API to perform efficiently, slab fragmentation needs to
> be low. Especially for the SLUB allocator, the efficiency of the bulk
> free API depends on objects belonging to the same slab (page).

Incidentally, would you know if anyone still uses SLAB instead of SLUB
because it would perform better for networking? IIRC in the past discussions
networking was one of the reasons for SLAB to stay. We are looking again
into the possibility of removing it, so it would be good to know if there
are benchmarks where SLUB does worse so it can be looked into.

> When running different network performance microbenchmarks, I started
> to notice that performance was reduced (slightly) when machines had
> longer uptimes. I believe the cause was that 'skbuff_head_cache' got
> aliased/merged into the general slab for 256-byte sized objects (with
> my kernel config, without CONFIG_HARDENED_USERCOPY).

So did things improve with SLAB_NEVER_MERGE?

> For the SKB kmem_cache, the network stack has reasons for not merging,
> but this varies depending on kernel config (e.g. CONFIG_HARDENED_USERCOPY).
> We want to explicitly set SLAB_NEVER_MERGE for this kmem_cache.
>
> Signed-off-by: Jesper Dangaard Brouer <bro...@redhat.com>
> ---
> include/linux/slab.h    |  2 ++
> mm/kfence/kfence_test.c |  7 +++----
> mm/slab.h               |  5 +++--
> mm/slab_common.c        |  8 ++++----
> net/core/skbuff.c       | 13 ++++++++++++-
> 5 files changed, 24 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 45af70315a94..83a89ba7c4be 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -138,6 +138,8 @@
> #define SLAB_SKIP_KFENCE 0
> #endif
>
> +#define SLAB_NEVER_MERGE ((slab_flags_t __force)0x40000000U)

I think there should be an explanation of what this does and when to consider
it. We should discourage blind use / cargo-culting / copy-pasting from
elsewhere resulting in excessive proliferation of the flag.

- very specialized internal things like kfence? ok
- prevent a bad user of another cache corrupting my cache due to merging? no,
use slub_debug to find and fix the root cause
- performance concerns? only after proper evaluation, not prematurely
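
For illustration, a minimal sketch of what an opt-in by an API user would
look like (hypothetical struct/cache names, assuming the flag from this
patch):

  #include <linux/errno.h>
  #include <linux/init.h>
  #include <linux/slab.h>

  /* Illustrative object type; not from the patch. */
  struct my_obj {
          unsigned long id;
          char payload[48];
  };

  static struct kmem_cache *my_cache;

  static int __init my_cache_init(void)
  {
          /* SLAB_NEVER_MERGE opts this cache out of being merged or
           * aliased with other caches of a similar object size. */
          my_cache = kmem_cache_create("my_obj", sizeof(struct my_obj), 0,
                                       SLAB_HWCACHE_ALIGN | SLAB_NEVER_MERGE,
                                       NULL);
          return my_cache ? 0 : -ENOMEM;
  }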

> +
> /* The following flags affect the page allocator grouping pages by mobility */
> /* Objects are reclaimable */
> #ifndef CONFIG_SLUB_TINY
> diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
> index b5d66a69200d..9e83e344ee3c 100644
> --- a/mm/kfence/kfence_test.c
> +++ b/mm/kfence/kfence_test.c
> @@ -191,11 +191,10 @@ static size_t setup_test_cache(struct kunit *test, size_t size, slab_flags_t fla
> kunit_info(test, "%s: size=%zu, ctor=%ps\n", __func__, size, ctor);
>
> /*
> - * Use SLAB_NOLEAKTRACE to prevent merging with existing caches. Any
> - * other flag in SLAB_NEVER_MERGE also works. Use SLAB_ACCOUNT to
> - * allocate via memcg, if enabled.
> + * Use SLAB_NEVER_MERGE to prevent merging with existing caches.
> + * Use SLAB_ACCOUNT to allocate via memcg, if enabled.
> */
> - flags |= SLAB_NOLEAKTRACE | SLAB_ACCOUNT;
> + flags |= SLAB_NEVER_MERGE | SLAB_ACCOUNT;
> test_cache = kmem_cache_create("test", size, 1, flags, ctor);
> KUNIT_ASSERT_TRUE_MSG(test, test_cache, "could not create cache");
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 7cc432969945..be1383176d3e 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -341,11 +341,11 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> #if defined(CONFIG_SLAB)
> #define SLAB_CACHE_FLAGS (SLAB_MEM_SPREAD | SLAB_NOLEAKTRACE | \
> SLAB_RECLAIM_ACCOUNT | SLAB_TEMPORARY | \
> - SLAB_ACCOUNT)
> + SLAB_ACCOUNT | SLAB_NEVER_MERGE)
> #elif defined(CONFIG_SLUB)
> #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
> SLAB_TEMPORARY | SLAB_ACCOUNT | \
> - SLAB_NO_USER_FLAGS | SLAB_KMALLOC)
> + SLAB_NO_USER_FLAGS | SLAB_KMALLOC | SLAB_NEVER_MERGE)
> #else
> #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
> #endif
> @@ -366,6 +366,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> SLAB_TEMPORARY | \
> SLAB_ACCOUNT | \
> SLAB_KMALLOC | \
> + SLAB_NEVER_MERGE | \
> SLAB_NO_USER_FLAGS)
>
> bool __kmem_cache_empty(struct kmem_cache *);
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 1cba98acc486..269f67c5fee6 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -45,9 +45,9 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
> /*
> * Set of flags that will prevent slab merging
> */
> -#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
> +#define SLAB_NEVER_MERGE_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER |\
> SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
> - SLAB_FAILSLAB | kasan_never_merge())
> + SLAB_FAILSLAB | SLAB_NEVER_MERGE | kasan_never_merge())
>
> #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
> SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
> @@ -137,7 +137,7 @@ static unsigned int calculate_alignment(slab_flags_t flags,
> */
> int slab_unmergeable(struct kmem_cache *s)
> {
> - if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE))
> + if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE_FLAGS))
> return 1;
>
> if (s->ctor)
> @@ -173,7 +173,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
> size = ALIGN(size, align);
> flags = kmem_cache_flags(size, flags, name);
>
> - if (flags & SLAB_NEVER_MERGE)
> + if (flags & SLAB_NEVER_MERGE_FLAGS)
> return NULL;
>
> list_for_each_entry_reverse(s, &slab_caches, list) {
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 79c9e795a964..799b9914457b 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -4629,12 +4629,23 @@ static void skb_extensions_init(void)
> static void skb_extensions_init(void) {}
> #endif
>
> +/* The SKB kmem_cache slab is critical for network performance. Never
> + * merge/alias the slab with similar sized objects. This avoids fragmentation
> + * that hurts performance of kmem_cache_{alloc,free}_bulk APIs.
> + */
> +#ifndef CONFIG_SLUB_TINY
> +#define FLAG_SKB_NEVER_MERGE SLAB_NEVER_MERGE
> +#else /* CONFIG_SLUB_TINY - simple loop in kmem_cache_alloc_bulk */
> +#define FLAG_SKB_NEVER_MERGE 0
> +#endif
> +
> void __init skb_init(void)
> {
> skbuff_head_cache = kmem_cache_create_usercopy("skbuff_head_cache",
> sizeof(struct sk_buff),
> 0,
> - SLAB_HWCACHE_ALIGN|SLAB_PANIC,
> + SLAB_HWCACHE_ALIGN|SLAB_PANIC|
> + FLAG_SKB_NEVER_MERGE,
> offsetof(struct sk_buff, cb),
> sizeof_field(struct sk_buff, cb),
> NULL);
>
>

Jesper Dangaard Brouer

Jan 23, 2023, 11:14:27 AM
to Vlastimil Babka, net...@vger.kernel.org, linu...@kvack.org, bro...@redhat.com, Christoph Lameter, Andrew Morton, Mel Gorman, Joonsoo Kim, pen...@kernel.org, Jakub Kicinski, David S. Miller, edum...@google.com, pab...@redhat.com, David Rientjes, Hyeonggon Yoo, Roman Gushchin, Alexander Potapenko, Marco Elver, kasan-dev, Matthew Wilcox


On 18/01/2023 08.36, Vlastimil Babka wrote:
> On 1/17/23 14:40, Jesper Dangaard Brouer wrote:
>> Allow API users of kmem_cache_create to specify that they don't want
>> any slab merging or aliasing (with similarly sized objects). Use this in
>> the network stack and kfence_test.
>>
>> The SKB (sk_buff) kmem_cache slab is critical for network performance.
>> The network stack uses the kmem_cache_{alloc,free}_bulk APIs to gain
>> performance by amortising the alloc/free cost.
>>
>> For the bulk API to perform efficiently, slab fragmentation needs to
>> be low. Especially for the SLUB allocator, the efficiency of the bulk
>> free API depends on objects belonging to the same slab (page).
>
> Incidentally, would you know if anyone still uses SLAB instead of SLUB
> because it would perform better for networking? IIRC in the past discussions
> networking was one of the reasons for SLAB to stay. We are looking again
> into the possibility of removing it, so it would be good to know if there
> are benchmarks where SLUB does worse so it can be looked into.
>

I don't know of anyone using SLAB for network performance reasons.
I've only been benchmarking with SLUB for a long time.
Anyone else on netdev?

Both SLUB and SLAB have the kmem_cache bulk API implemented. It is
used today in the network stack to squeeze extra performance out of
networking for our SKB (sk_buff) metadata structure (which points to
packet data). Details: networking caches up to 64 of these SKBs per CPU
for RX-path NAPI-softirq processing; the cache is repopulated via the
kmem_cache bulk APIs (bulk alloc 16 and bulk free 32).
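
A rough sketch of that scheme (illustrative names and simplified logic,
not the actual net/core/skbuff.c code):

  #include <linux/slab.h>

  #define SKB_CACHE_SIZE  64      /* per-CPU cache capacity */
  #define BULK_ALLOC      16      /* refill batch */
  #define BULK_FREE       32      /* flush batch */

  struct percpu_skb_cache {
          void *objs[SKB_CACHE_SIZE];
          unsigned int count;
  };

  static void *cache_get(struct percpu_skb_cache *c, struct kmem_cache *s)
  {
          if (!c->count) {
                  /* One bulk call amortises the cost of 16 allocs. */
                  c->count = kmem_cache_alloc_bulk(s, GFP_ATOMIC,
                                                   BULK_ALLOC, c->objs);
                  if (!c->count)
                          return NULL;
          }
          return c->objs[--c->count];
  }

  static void cache_put(struct percpu_skb_cache *c, struct kmem_cache *s,
                        void *obj)
  {
          c->objs[c->count++] = obj;
          if (c->count == SKB_CACHE_SIZE) {
                  /* Bulk free is cheapest when the freed objects share
                   * a slab page -- the fragmentation concern above. */
                  kmem_cache_free_bulk(s, BULK_FREE,
                                       &c->objs[SKB_CACHE_SIZE - BULK_FREE]);
                  c->count -= BULK_FREE;
          }
  }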

>> When running different network performance microbenchmarks, I started
>> to notice that performance was reduced (slightly) when machines had
>> longer uptimes. I believe the cause was that 'skbuff_head_cache' got
>> aliased/merged into the general slab for 256-byte sized objects (with
>> my kernel config, without CONFIG_HARDENED_USERCOPY).
>
> So did things improve with SLAB_NEVER_MERGE?

Yes, but only the stability of the results.

The performance tests were microbenchmarks, and as Christoph points out
there might be gains from having more partial slabs when there is more
fragmentation. The "overload" microbenchmark will always do maximum
bulking, while more realistic workloads might be satisfied from the
partial slabs. I would need to do a broader range of benchmarks before
I can conclude whether this is always a win.

>> For the SKB kmem_cache, the network stack has reasons for not merging,
>> but this varies depending on kernel config (e.g. CONFIG_HARDENED_USERCOPY).
>> We want to explicitly set SLAB_NEVER_MERGE for this kmem_cache.
>>

In most distro kernel configs the SKB kmem_cache will already not get
merged/aliased. I was just trying to make this consistent.

>> Signed-off-by: Jesper Dangaard Brouer <bro...@redhat.com>
>> ---
>> include/linux/slab.h    |  2 ++
>> mm/kfence/kfence_test.c |  7 +++----
>> mm/slab.h               |  5 +++--
>> mm/slab_common.c        |  8 ++++----
>> net/core/skbuff.c       | 13 ++++++++++++-
>> 5 files changed, 24 insertions(+), 11 deletions(-)
>>
>> diff --git a/include/linux/slab.h b/include/linux/slab.h
>> index 45af70315a94..83a89ba7c4be 100644
>> --- a/include/linux/slab.h
>> +++ b/include/linux/slab.h
>> @@ -138,6 +138,8 @@
>> #define SLAB_SKIP_KFENCE 0
>> #endif
>>
>> +#define SLAB_NEVER_MERGE ((slab_flags_t __force)0x40000000U)
>
> I think there should be an explanation of what this does and when to consider
> it. We should discourage blind use / cargo-culting / copy-pasting from
> elsewhere resulting in excessive proliferation of the flag.

I agree.

> - very specialized internal things like kfence? ok
> - prevent a bad user of another cache corrupting my cache due to merging? no,
> use slub_debug to find and fix the root cause

Agree, and the comment could point to the slub_debug trick.
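
For reference, the slub_debug trick scoped to a single cache looks like
this (flag letters per Documentation/mm/slub.rst; the cache name is just
an example):

  slub_debug=FZPU,skbuff_head_cache

This enables sanity checks (F), red zoning (Z), poisoning (P) and user
tracking (U) for that cache only; red zoning, poisoning and user tracking
are also in the never-merge flag set, so the cache stays unmerged while
debugging.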

> - performance concerns? only after proper evaluation, not prematurely
>

Yes, and I would need to do more perf eval myself ;-)
I don't have time atm, thus I'll not pursue this RFC patch anytime soon.

--Jesper
