Vlastimil Babka
Oct 27, 2025, 2:39:53 PM
to Matthew Wilcox (Oracle), Andrew Morton, Andrey Ryabinin, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linu...@kvack.org, David Hildenbrand, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasan-dev
On 10/24/25 22:44, Matthew Wilcox (Oracle) wrote:
> In preparation for splitting struct slab from struct page and struct
> folio, remove mentions of struct folio from this function. We can
> discard the comment as using PageLargeKmalloc() rather than
> !folio_test_slab() makes it obvious.
>
> Signed-off-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
> Acked-by: David Hildenbrand <da...@redhat.com>
+Cc KASAN folks
This too could check page_slab() first, but it's not that important. Note
that it should be fine even without marking all tail pages as
PageLargeKmalloc(), as ptr should be a pointer to the head page here.
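Purely to illustrate that ordering, a minimal sketch, assuming page_slab()
returns NULL for pages that are not slab pages (my assumption, not something
this patch establishes); the other helpers are the ones already used in the
hunk below:

bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
{
	struct page *page = virt_to_page(ptr);
	/* Assumption: page_slab() yields NULL for non-slab (large kmalloc) pages. */
	struct slab *slab = page_slab(page);

	if (unlikely(!slab)) {
		/* Large kmalloc memory comes straight from page_alloc. */
		if (check_page_allocation(ptr, ip))
			return false;
		kasan_poison(ptr, page_size(page), KASAN_PAGE_FREE, false);
		return true;
	}

	if (is_kfence_address(ptr))
		return true;

	if (check_slab_allocation(slab->slab_cache, ptr, ip))
		return false;
	/* ... rest unchanged ... */
}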
> ---
> mm/kasan/common.c | 12 ++++--------
> 1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 22e5d67ff064..1d27f1bd260b 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -517,24 +517,20 @@ void __kasan_mempool_unpoison_pages(struct page *page, unsigned int order,
>
> bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
> {
> - struct folio *folio = virt_to_folio(ptr);
> + struct page *page = virt_to_page(ptr);
> struct slab *slab;
>
> - /*
> - * This function can be called for large kmalloc allocation that get
> - * their memory from page_alloc. Thus, the folio might not be a slab.
> - */
> - if (unlikely(!folio_test_slab(folio))) {
> + if (unlikely(PageLargeKmalloc(page))) {
> if (check_page_allocation(ptr, ip))
> return false;
> - kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
> + kasan_poison(ptr, page_size(page), KASAN_PAGE_FREE, false);
> return true;
> }
>
> if (is_kfence_address(ptr))
> return true;
>
> - slab = folio_slab(folio);
> + slab = page_slab(page);
>
> if (check_slab_allocation(slab->slab_cache, ptr, ip))
> return false;