[PATCH v4 15/16] kasan: Remove references to folio in __kasan_mempool_poison_object()

Matthew Wilcox (Oracle)

Nov 12, 2025, 7:09:43 PM
to Vlastimil Babka, Andrew Morton, Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linu...@kvack.org, David Hildenbrand, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasan-dev
In preparation for splitting struct slab from struct page and struct
folio, remove mentions of struct folio from this function. There is a
mild improvement for large kmalloc objects as we will avoid calling
compound_head() for them. We can discard the comment as using
PageLargeKmalloc() rather than !folio_test_slab() makes it obvious.
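
To illustrate the saved work, here is a simplified before/after of the
pointer lookup (a sketch for explanation only, not part of the patch; it
relies on a large kmalloc object starting at the page-aligned head page
of its allocation):

	/* before: virt_to_folio() is virt_to_page() plus compound_head() */
	struct folio *folio = virt_to_folio(ptr);

	/*
	 * after: for a large kmalloc object, ptr is page-aligned and
	 * already points at the head page, so virt_to_page() alone is
	 * enough and the compound_head() call is skipped
	 */
	struct page *page = virt_to_page(ptr);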

Signed-off-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
Acked-by: David Hildenbrand <da...@redhat.com>
Cc: Alexander Potapenko <gli...@google.com>
Cc: Andrey Konovalov <andre...@gmail.com>
Cc: Dmitry Vyukov <dvy...@google.com>
Cc: Vincenzo Frascino <vincenzo...@arm.com>
Cc: kasan-dev <kasa...@googlegroups.com>
---
mm/kasan/common.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 22e5d67ff064..1d27f1bd260b 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -517,24 +517,20 @@ void __kasan_mempool_unpoison_pages(struct page *page, unsigned int order,

bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
{
- struct folio *folio = virt_to_folio(ptr);
+ struct page *page = virt_to_page(ptr);
struct slab *slab;

- /*
- * This function can be called for large kmalloc allocation that get
- * their memory from page_alloc. Thus, the folio might not be a slab.
- */
- if (unlikely(!folio_test_slab(folio))) {
+ if (unlikely(PageLargeKmalloc(page))) {
if (check_page_allocation(ptr, ip))
return false;
- kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
+ kasan_poison(ptr, page_size(page), KASAN_PAGE_FREE, false);
return true;
}

if (is_kfence_address(ptr))
return true;

- slab = folio_slab(folio);
+ slab = page_slab(page);

if (check_slab_allocation(slab->slab_cache, ptr, ip))
return false;
--
2.47.2

Harry Yoo

Nov 24, 2025, 2:03:01 AM
to Matthew Wilcox (Oracle), Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, linu...@kvack.org, David Hildenbrand, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasan-dev
On Thu, Nov 13, 2025 at 12:09:29AM +0000, Matthew Wilcox (Oracle) wrote:
> In preparation for splitting struct slab from struct page and struct
> folio, remove mentions of struct folio from this function. There is a
> mild improvement for large kmalloc objects as we will avoid calling
> compound_head() for them. We can discard the comment as using
> PageLargeKmalloc() rather than !folio_test_slab() makes it obvious.
>
> Signed-off-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
> Acked-by: David Hildenbrand <da...@redhat.com>
> Cc: Alexander Potapenko <gli...@google.com>
> Cc: Andrey Konovalov <andre...@gmail.com>
> Cc: Dmitry Vyukov <dvy...@google.com>
> Cc: Vincenzo Frascino <vincenzo...@arm.com>
> Cc: kasan-dev <kasa...@googlegroups.com>
> ---

Acked-by: Harry Yoo <harr...@oracle.com>

> mm/kasan/common.c | 12 ++++--------
> 1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 22e5d67ff064..1d27f1bd260b 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -517,24 +517,20 @@ void __kasan_mempool_unpoison_pages(struct page *page, unsigned int order,
>
> bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
> {
> - struct folio *folio = virt_to_folio(ptr);
> + struct page *page = virt_to_page(ptr);
> struct slab *slab;
>
> - /*
> - * This function can be called for large kmalloc allocation that get
> - * their memory from page_alloc. Thus, the folio might not be a slab.
> - */
> - if (unlikely(!folio_test_slab(folio))) {
> + if (unlikely(PageLargeKmalloc(page))) {

nit: no strong opinion from me, but maybe the KASAN folks still want to
catch the !PageLargeKmalloc() && !PageSlab() case gracefully, as they
care more about detecting invalid frees than about performance.
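
For illustration, the graceful catch could be one more check after the
large kmalloc branch (an untested sketch; it assumes
kasan_report_invalid_free() with KASAN_REPORT_INVALID_FREE is the right
reporting helper for this path):

	if (unlikely(!PageSlab(page))) {
		/* neither slab nor large kmalloc: report the invalid free */
		kasan_report_invalid_free(ptr, ip, KASAN_REPORT_INVALID_FREE);
		return false;
	}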


> if (check_page_allocation(ptr, ip))
> return false;
> - kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
> + kasan_poison(ptr, page_size(page), KASAN_PAGE_FREE, false);
> return true;
> }
>
> if (is_kfence_address(ptr))
> return true;
>
> - slab = folio_slab(folio);
> + slab = page_slab(page);
>
> if (check_slab_allocation(slab->slab_cache, ptr, ip))
> return false;
> --
> 2.47.2

--
Cheers,
Harry / Hyeonggon