[PATCH v5 0/2] kasan: unify kasan_enabled() and remove arch-specific implementations


Sabyrzhan Tasbolatov

Aug 7, 2025, 3:40:23 PM
to ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, christop...@csgroup.eu, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, snov...@gmail.com
This patch series addresses the fragmentation in KASAN initialization
across architectures by introducing a unified approach that eliminates
duplicate static keys and arch-specific kasan_arch_is_ready()
implementations.

The core issue is that different architectures have inconsistent approaches
to KASAN readiness tracking:
- PowerPC, LoongArch, and UML each implement their own kasan_arch_is_ready()
- Only HW_TAGS mode had a unified static key (kasan_flag_enabled)
- Generic and SW_TAGS modes relied on arch-specific solutions
or always-on behavior

Changes in v5:
- Unified the patches so that each arch (powerpc, UML, loongarch) selects
ARCH_DEFER_KASAN in the first patch, to keep the series bisectable.
So in v5 the series has 2 patches instead of 9.
- Removed kasan_arch_is_ready() completely as it no longer has users
- Removed the __wrappers introduced in v4, keeping only those that are
necessary due to differing implementations

Tested on:
- powerpc - selects ARCH_DEFER_KASAN
Built ppc64_defconfig (PPC_BOOK3S_64) - OK
Booted via qemu-system-ppc64 - OK

In v4 I had not tested powerpc with KASAN disabled.

In v4 arch/powerpc/Kconfig it was:
select ARCH_DEFER_KASAN if PPC_RADIX_MMU

and compiling with ppc64_defconfig caused:
lib/stackdepot.o:(__jump_table+0x8): undefined reference to `kasan_flag_enabled'

I have fixed this in v5 by adding a KASAN condition:
select ARCH_DEFER_KASAN if KASAN && PPC_RADIX_MMU

- um - selects ARCH_DEFER_KASAN

KASAN_GENERIC && KASAN_INLINE && STATIC_LINK
Before:
In file included from mm/kasan/common.c:32:
mm/kasan/kasan.h:550:2: error: #error kasan_arch_is_ready only works in KASAN generic outline mode!
550 | #error kasan_arch_is_ready only works in KASAN generic outline mode

After (with auto-selected ARCH_DEFER_KASAN):
./arch/um/include/asm/kasan.h:29:2: error: #error UML does not work in KASAN_INLINE mode with STATIC_LINK enabled!
29 | #error UML does not work in KASAN_INLINE mode with STATIC_LINK enabled!

KASAN_GENERIC && KASAN_OUTLINE && STATIC_LINK
Before:
./linux boots.

After (with auto-selected ARCH_DEFER_KASAN):
./linux boots.

KASAN_GENERIC && KASAN_OUTLINE && !STATIC_LINK
Before:
./linux boots

After (with ARCH_DEFER_KASAN auto-disabled):
./linux boots

- loongarch - selects ARCH_DEFER_KASAN
Built defconfig with KASAN_GENERIC - OK
Haven't tested the boot; asking LoongArch developers to verify - N/A.
It should be fine, though, since LoongArch, unlike UML, has no special
early "kasan_init()" call: it selects ARCH_DEFER_KASAN and calls
kasan_init() at the end of setup_arch(), after jump_label_init().

Previous v4 thread: https://lore.kernel.org/all/20250805142622.5...@gmail.com/
Previous v3 thread: https://lore.kernel.org/all/20250717142732.2...@gmail.com/
Previous v2 thread: https://lore.kernel.org/all/20250626153147.1...@gmail.com/

Sabyrzhan Tasbolatov (2):
kasan: introduce ARCH_DEFER_KASAN and unify static key across modes
kasan: call kasan_init_generic in kasan_init

arch/arm/mm/kasan_init.c | 2 +-
arch/arm64/mm/kasan_init.c | 4 +---
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/kasan.h | 7 ------
arch/loongarch/mm/kasan_init.c | 8 +++----
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kasan.h | 12 ----------
arch/powerpc/mm/kasan/init_32.c | 2 +-
arch/powerpc/mm/kasan/init_book3e_64.c | 2 +-
arch/powerpc/mm/kasan/init_book3s_64.c | 6 +----
arch/riscv/mm/kasan_init.c | 1 +
arch/s390/kernel/early.c | 3 ++-
arch/um/Kconfig | 1 +
arch/um/include/asm/kasan.h | 5 ++--
arch/um/kernel/mem.c | 10 ++++++--
arch/x86/mm/kasan_init_64.c | 2 +-
arch/xtensa/mm/kasan_init.c | 2 +-
include/linux/kasan-enabled.h | 32 ++++++++++++++++++--------
include/linux/kasan.h | 6 +++++
lib/Kconfig.kasan | 8 +++++++
mm/kasan/common.c | 17 ++++++++++----
mm/kasan/generic.c | 19 +++++++++++----
mm/kasan/hw_tags.c | 9 +-------
mm/kasan/kasan.h | 8 ++++++-
mm/kasan/shadow.c | 12 +++++-----
mm/kasan/sw_tags.c | 1 +
mm/kasan/tags.c | 2 +-
27 files changed, 107 insertions(+), 76 deletions(-)

--
2.34.1

Sabyrzhan Tasbolatov

Aug 7, 2025, 3:40:26 PM
to ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, christop...@csgroup.eu, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, snov...@gmail.com
Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures [1] that need
to defer KASAN initialization until shadow memory is properly set up,
and unify the static key infrastructure across all KASAN modes.

[1] PowerPC, UML, and LoongArch select ARCH_DEFER_KASAN.
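For reference, the opt-in an architecture needs is a single select (FOO_ARCH is a hypothetical placeholder; powerpc additionally guards it with PPC_RADIX_MMU):

```
config FOO_ARCH
	bool
	# Defer KASAN: boot with the static key off and flip it from
	# kasan_init_generic() (or the sw/hw-tags equivalent) once
	# shadow memory is mapped.
	select ARCH_DEFER_KASAN if KASAN
```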

Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snov...@gmail.com>
---
Changes in v5:
- Unified the patches so that each arch (powerpc, UML, loongarch) selects
ARCH_DEFER_KASAN in the first patch, to keep the series bisectable
- Removed kasan_arch_is_ready() completely as it no longer has users
- Removed the __wrappers introduced in v4, keeping only those that are
necessary due to differing implementations

Changes in v4:
- Fixed HW_TAGS static key functionality (was broken in v3)
- Merged configuration and implementation for atomicity
---
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/kasan.h | 7 ------
arch/loongarch/mm/kasan_init.c | 8 +++----
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kasan.h | 12 ----------
arch/powerpc/mm/kasan/init_32.c | 2 +-
arch/powerpc/mm/kasan/init_book3e_64.c | 2 +-
arch/powerpc/mm/kasan/init_book3s_64.c | 6 +----
arch/um/Kconfig | 1 +
arch/um/include/asm/kasan.h | 5 ++--
arch/um/kernel/mem.c | 10 ++++++--
include/linux/kasan-enabled.h | 32 ++++++++++++++++++--------
include/linux/kasan.h | 6 +++++
lib/Kconfig.kasan | 8 +++++++
mm/kasan/common.c | 17 ++++++++++----
mm/kasan/generic.c | 19 +++++++++++----
mm/kasan/hw_tags.c | 9 +-------
mm/kasan/kasan.h | 8 ++++++-
mm/kasan/shadow.c | 12 +++++-----
mm/kasan/sw_tags.c | 1 +
mm/kasan/tags.c | 2 +-
21 files changed, 100 insertions(+), 69 deletions(-)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index f0abc38c40a..cd64b2bc12d 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -9,6 +9,7 @@ config LOONGARCH
select ACPI_PPTT if ACPI
select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
select ARCH_BINFMT_ELF_STATE
+ select ARCH_DEFER_KASAN if KASAN
select ARCH_DISABLE_KASAN_INLINE
select ARCH_ENABLE_MEMORY_HOTPLUG
select ARCH_ENABLE_MEMORY_HOTREMOVE
diff --git a/arch/loongarch/include/asm/kasan.h b/arch/loongarch/include/asm/kasan.h
index 62f139a9c87..0e50e5b5e05 100644
--- a/arch/loongarch/include/asm/kasan.h
+++ b/arch/loongarch/include/asm/kasan.h
@@ -66,7 +66,6 @@
#define XKPRANGE_WC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_WC_KASAN_OFFSET)
#define XKVRANGE_VC_SHADOW_OFFSET (KASAN_SHADOW_START + XKVRANGE_VC_KASAN_OFFSET)

-extern bool kasan_early_stage;
extern unsigned char kasan_early_shadow_page[PAGE_SIZE];

#define kasan_mem_to_shadow kasan_mem_to_shadow
@@ -75,12 +74,6 @@ void *kasan_mem_to_shadow(const void *addr);
#define kasan_shadow_to_mem kasan_shadow_to_mem
const void *kasan_shadow_to_mem(const void *shadow_addr);

-#define kasan_arch_is_ready kasan_arch_is_ready
-static __always_inline bool kasan_arch_is_ready(void)
-{
- return !kasan_early_stage;
-}
-
#define addr_has_metadata addr_has_metadata
static __always_inline bool addr_has_metadata(const void *addr)
{
diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
index d2681272d8f..170da98ad4f 100644
--- a/arch/loongarch/mm/kasan_init.c
+++ b/arch/loongarch/mm/kasan_init.c
@@ -40,11 +40,9 @@ static pgd_t kasan_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
#define __pte_none(early, pte) (early ? pte_none(pte) : \
((pte_val(pte) & _PFN_MASK) == (unsigned long)__pa(kasan_early_shadow_page)))

-bool kasan_early_stage = true;
-
void *kasan_mem_to_shadow(const void *addr)
{
- if (!kasan_arch_is_ready()) {
+ if (!kasan_enabled()) {
return (void *)(kasan_early_shadow_page);
} else {
unsigned long maddr = (unsigned long)addr;
@@ -298,7 +296,8 @@ void __init kasan_init(void)
kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
kasan_mem_to_shadow((void *)KFENCE_AREA_END));

- kasan_early_stage = false;
+ /* Enable KASAN here before kasan_mem_to_shadow(). */
+ kasan_init_generic();

/* Populate the linear mapping */
for_each_mem_range(i, &pa_start, &pa_end) {
@@ -329,5 +328,4 @@ void __init kasan_init(void)

/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized.\n");
}
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 93402a1d9c9..a324dcdb8eb 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -122,6 +122,7 @@ config PPC
# Please keep this list sorted alphabetically.
#
select ARCH_32BIT_OFF_T if PPC32
+ select ARCH_DEFER_KASAN if KASAN && PPC_RADIX_MMU
select ARCH_DISABLE_KASAN_INLINE if PPC_RADIX_MMU
select ARCH_DMA_DEFAULT_COHERENT if !NOT_COHERENT_CACHE
select ARCH_ENABLE_MEMORY_HOTPLUG
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index b5bbb94c51f..957a57c1db5 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -53,18 +53,6 @@
#endif

#ifdef CONFIG_KASAN
-#ifdef CONFIG_PPC_BOOK3S_64
-DECLARE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
-
-static __always_inline bool kasan_arch_is_ready(void)
-{
- if (static_branch_likely(&powerpc_kasan_enabled_key))
- return true;
- return false;
-}
-
-#define kasan_arch_is_ready kasan_arch_is_ready
-#endif

void kasan_early_init(void);
void kasan_mmu_init(void);
diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
index 03666d790a5..1d083597464 100644
--- a/arch/powerpc/mm/kasan/init_32.c
+++ b/arch/powerpc/mm/kasan/init_32.c
@@ -165,7 +165,7 @@ void __init kasan_init(void)

/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KASAN init done\n");
+ kasan_init_generic();
}

void __init kasan_late_init(void)
diff --git a/arch/powerpc/mm/kasan/init_book3e_64.c b/arch/powerpc/mm/kasan/init_book3e_64.c
index 60c78aac0f6..0d3a73d6d4b 100644
--- a/arch/powerpc/mm/kasan/init_book3e_64.c
+++ b/arch/powerpc/mm/kasan/init_book3e_64.c
@@ -127,7 +127,7 @@ void __init kasan_init(void)

/* Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KASAN init done\n");
+ kasan_init_generic();
}

void __init kasan_late_init(void) { }
diff --git a/arch/powerpc/mm/kasan/init_book3s_64.c b/arch/powerpc/mm/kasan/init_book3s_64.c
index 7d959544c07..dcafa641804 100644
--- a/arch/powerpc/mm/kasan/init_book3s_64.c
+++ b/arch/powerpc/mm/kasan/init_book3s_64.c
@@ -19,8 +19,6 @@
#include <linux/memblock.h>
#include <asm/pgalloc.h>

-DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
-
static void __init kasan_init_phys_region(void *start, void *end)
{
unsigned long k_start, k_end, k_cur;
@@ -92,11 +90,9 @@ void __init kasan_init(void)
*/
memset(kasan_early_shadow_page, 0, PAGE_SIZE);

- static_branch_inc(&powerpc_kasan_enabled_key);
-
/* Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KASAN init done\n");
+ kasan_init_generic();
}

void __init kasan_early_init(void) { }
diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 9083bfdb773..a12cc072ab1 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -5,6 +5,7 @@ menu "UML-specific options"
config UML
bool
default y
+ select ARCH_DEFER_KASAN if STATIC_LINK
select ARCH_WANTS_DYNAMIC_TASK_STRUCT
select ARCH_HAS_CACHE_LINE_SIZE
select ARCH_HAS_CPU_FINALIZE_INIT
diff --git a/arch/um/include/asm/kasan.h b/arch/um/include/asm/kasan.h
index f97bb1f7b85..b54a4e937fd 100644
--- a/arch/um/include/asm/kasan.h
+++ b/arch/um/include/asm/kasan.h
@@ -24,10 +24,9 @@

#ifdef CONFIG_KASAN
void kasan_init(void);
-extern int kasan_um_is_ready;

-#ifdef CONFIG_STATIC_LINK
-#define kasan_arch_is_ready() (kasan_um_is_ready)
+#if defined(CONFIG_STATIC_LINK) && defined(CONFIG_KASAN_INLINE)
+#error UML does not work in KASAN_INLINE mode with STATIC_LINK enabled!
#endif
#else
static inline void kasan_init(void) { }
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 76bec7de81b..261fdcd21be 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -21,9 +21,9 @@
#include <os.h>
#include <um_malloc.h>
#include <linux/sched/task.h>
+#include <linux/kasan.h>

#ifdef CONFIG_KASAN
-int kasan_um_is_ready;
void kasan_init(void)
{
/*
@@ -32,7 +32,10 @@ void kasan_init(void)
*/
kasan_map_memory((void *)KASAN_SHADOW_START, KASAN_SHADOW_SIZE);
init_task.kasan_depth = 0;
- kasan_um_is_ready = true;
+ /* Since kasan_init() is called before main(),
+ * KASAN is initialized but the enablement is deferred after
+ * jump_label_init(). See arch_mm_preinit().
+ */
}

static void (*kasan_init_ptr)(void)
@@ -58,6 +61,9 @@ static unsigned long brk_end;

void __init arch_mm_preinit(void)
{
+ /* Safe to call after jump_label_init(). Enables KASAN. */
+ kasan_init_generic();
+
/* clear the zero-page */
memset(empty_zero_page, 0, PAGE_SIZE);

diff --git a/include/linux/kasan-enabled.h b/include/linux/kasan-enabled.h
index 6f612d69ea0..9eca967d852 100644
--- a/include/linux/kasan-enabled.h
+++ b/include/linux/kasan-enabled.h
@@ -4,32 +4,46 @@

#include <linux/static_key.h>

-#ifdef CONFIG_KASAN_HW_TAGS
-
+#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
+/*
+ * Global runtime flag for KASAN modes that need runtime control.
+ * Used by ARCH_DEFER_KASAN architectures and HW_TAGS mode.
+ */
DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);

+/*
+ * Runtime control for shadow memory initialization or HW_TAGS mode.
+ * Uses static key for architectures that need deferred KASAN or HW_TAGS.
+ */
static __always_inline bool kasan_enabled(void)
{
return static_branch_likely(&kasan_flag_enabled);
}

-static inline bool kasan_hw_tags_enabled(void)
+static inline void kasan_enable(void)
{
- return kasan_enabled();
+ static_branch_enable(&kasan_flag_enabled);
}
-
-#else /* CONFIG_KASAN_HW_TAGS */
-
-static inline bool kasan_enabled(void)
+#else
+/* For architectures that can enable KASAN early, use compile-time check. */
+static __always_inline bool kasan_enabled(void)
{
return IS_ENABLED(CONFIG_KASAN);
}

+static inline void kasan_enable(void) {}
+#endif /* CONFIG_ARCH_DEFER_KASAN || CONFIG_KASAN_HW_TAGS */
+
+#ifdef CONFIG_KASAN_HW_TAGS
+static inline bool kasan_hw_tags_enabled(void)
+{
+ return kasan_enabled();
+}
+#else
static inline bool kasan_hw_tags_enabled(void)
{
return false;
}
-
#endif /* CONFIG_KASAN_HW_TAGS */

#endif /* LINUX_KASAN_ENABLED_H */
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 890011071f2..51a8293d1af 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -543,6 +543,12 @@ void kasan_report_async(void);

#endif /* CONFIG_KASAN_HW_TAGS */

+#ifdef CONFIG_KASAN_GENERIC
+void __init kasan_init_generic(void);
+#else
+static inline void kasan_init_generic(void) { }
+#endif
+
#ifdef CONFIG_KASAN_SW_TAGS
void __init kasan_init_sw_tags(void);
#else
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f82889a830f..38456560c85 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -19,6 +19,14 @@ config ARCH_DISABLE_KASAN_INLINE
Disables both inline and stack instrumentation. Selected by
architectures that do not support these instrumentation types.

+config ARCH_DEFER_KASAN
+ bool
+ help
+ Architectures should select this if they need to defer KASAN
+ initialization until shadow memory is properly set up. This
+ enables runtime control via static keys. Otherwise, KASAN uses
+ compile-time constants for better performance.
+
config CC_HAS_KASAN_GENERIC
def_bool $(cc-option, -fsanitize=kernel-address)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 9142964ab9c..d9d389870a2 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -32,6 +32,15 @@
#include "kasan.h"
#include "../slab.h"

+#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
+/*
+ * Definition of the unified static key declared in kasan-enabled.h.
+ * This provides consistent runtime enable/disable across KASAN modes.
+ */
+DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
+EXPORT_SYMBOL(kasan_flag_enabled);
+#endif
+
struct slab *kasan_addr_to_slab(const void *addr)
{
if (virt_addr_valid(addr))
@@ -246,7 +255,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
unsigned long ip)
{
- if (!kasan_arch_is_ready() || is_kfence_address(object))
+ if (is_kfence_address(object))
return false;
return check_slab_allocation(cache, object, ip);
}
@@ -254,7 +263,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
bool still_accessible)
{
- if (!kasan_arch_is_ready() || is_kfence_address(object))
+ if (is_kfence_address(object))
return false;

/*
@@ -293,7 +302,7 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,

static inline bool check_page_allocation(void *ptr, unsigned long ip)
{
- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return false;

if (ptr != page_address(virt_to_head_page(ptr))) {
@@ -522,7 +531,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
return true;
}

- if (is_kfence_address(ptr) || !kasan_arch_is_ready())
+ if (is_kfence_address(ptr))
return true;

slab = folio_slab(folio);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index d54e89f8c3e..b413c46b3e0 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -36,6 +36,17 @@
#include "kasan.h"
#include "../slab.h"

+/*
+ * Initialize Generic KASAN and enable runtime checks.
+ * This should be called from arch kasan_init() once shadow memory is ready.
+ */
+void __init kasan_init_generic(void)
+{
+ kasan_enable();
+
+ pr_info("KernelAddressSanitizer initialized (generic)\n");
+}
+
/*
* All functions below always inlined so compiler could
* perform better optimizations in each of __asan_loadX/__assn_storeX
@@ -165,7 +176,7 @@ static __always_inline bool check_region_inline(const void *addr,
size_t size, bool write,
unsigned long ret_ip)
{
- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return true;

if (unlikely(size == 0))
@@ -193,7 +204,7 @@ bool kasan_byte_accessible(const void *addr)
{
s8 shadow_byte;

- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return true;

shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(addr));
@@ -495,7 +506,7 @@ static void release_alloc_meta(struct kasan_alloc_meta *meta)

static void release_free_meta(const void *object, struct kasan_free_meta *meta)
{
- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return;

/* Check if free meta is valid. */
@@ -562,7 +573,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
kasan_save_track(&alloc_meta->alloc_track, flags);
}

-void kasan_save_free_info(struct kmem_cache *cache, void *object)
+void __kasan_save_free_info(struct kmem_cache *cache, void *object)
{
struct kasan_free_meta *free_meta;

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 9a6927394b5..c8289a3feab 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -45,13 +45,6 @@ static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;

-/*
- * Whether KASAN is enabled at all.
- * The value remains false until KASAN is initialized by kasan_init_hw_tags().
- */
-DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
-EXPORT_SYMBOL(kasan_flag_enabled);
-
/*
* Whether the selected mode is synchronous, asynchronous, or asymmetric.
* Defaults to KASAN_MODE_SYNC.
@@ -260,7 +253,7 @@ void __init kasan_init_hw_tags(void)
kasan_init_tags();

/* KASAN is now initialized, enable it. */
- static_branch_enable(&kasan_flag_enabled);
+ kasan_enable();

pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
kasan_mode_info(),
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e6..8a9d8a6ea71 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -398,7 +398,13 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, depot_flags_t depot_flags);
void kasan_set_track(struct kasan_track *track, depot_stack_handle_t stack);
void kasan_save_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
-void kasan_save_free_info(struct kmem_cache *cache, void *object);
+
+void __kasan_save_free_info(struct kmem_cache *cache, void *object);
+static inline void kasan_save_free_info(struct kmem_cache *cache, void *object)
+{
+ if (kasan_enabled())
+ __kasan_save_free_info(cache, object);
+}

#ifdef CONFIG_KASAN_GENERIC
bool kasan_quarantine_put(struct kmem_cache *cache, void *object);
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb..2e126cb21b6 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -125,7 +125,7 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
{
void *shadow_start, *shadow_end;

- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return;

/*
@@ -150,7 +150,7 @@ EXPORT_SYMBOL_GPL(kasan_poison);
#ifdef CONFIG_KASAN_GENERIC
void kasan_poison_last_granule(const void *addr, size_t size)
{
- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return;

if (size & KASAN_GRANULE_MASK) {
@@ -390,7 +390,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
unsigned long shadow_start, shadow_end;
int ret;

- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return 0;

if (!is_vmalloc_or_module_addr((void *)addr))
@@ -560,7 +560,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long region_start, region_end;
unsigned long size;

- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return;

region_start = ALIGN(start, KASAN_MEMORY_PER_SHADOW_PAGE);
@@ -611,7 +611,7 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
* with setting memory tags, so the KASAN_VMALLOC_INIT flag is ignored.
*/

- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return (void *)start;

if (!is_vmalloc_or_module_addr(start))
@@ -636,7 +636,7 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
*/
void __kasan_poison_vmalloc(const void *start, unsigned long size)
{
- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return;

if (!is_vmalloc_or_module_addr(start))
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index b9382b5b6a3..c75741a7460 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -44,6 +44,7 @@ void __init kasan_init_sw_tags(void)
per_cpu(prng_state, cpu) = (u32)get_cycles();

kasan_init_tags();
+ kasan_enable();

pr_info("KernelAddressSanitizer initialized (sw-tags, stacktrace=%s)\n",
str_on_off(kasan_stack_collection_enabled()));
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index d65d48b85f9..b9f31293622 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -142,7 +142,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
save_stack_info(cache, object, flags, false);
}

-void kasan_save_free_info(struct kmem_cache *cache, void *object)
+void __kasan_save_free_info(struct kmem_cache *cache, void *object)
{
save_stack_info(cache, object, 0, true);
}
--
2.34.1

Sabyrzhan Tasbolatov

Aug 7, 2025, 3:40:30 PM
to ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, christop...@csgroup.eu, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, snov...@gmail.com, Alexandre Ghiti
Call kasan_init_generic(), which handles Generic KASAN initialization.
For architectures that do not select ARCH_DEFER_KASAN, this is a no-op
for the runtime flag, but it still prints the initialization banner.

For the SW_TAGS and HW_TAGS modes, their respective init functions
handle enabling the flag, where implemented.

Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snov...@gmail.com>
Tested-by: Alexandre Ghiti <alex...@rivosinc.com> # riscv
Acked-by: Alexander Gordeev <agor...@linux.ibm.com> # s390
---
Changes in v5:
- Unified arch patches into a single one, where we just call
kasan_init_generic()
- Added Tested-by tag for riscv (tested the same change in v4)
- Added Acked-by tag for s390 (tested the same change in v4)
---
arch/arm/mm/kasan_init.c | 2 +-
arch/arm64/mm/kasan_init.c | 4 +---
arch/riscv/mm/kasan_init.c | 1 +
arch/s390/kernel/early.c | 3 ++-
arch/x86/mm/kasan_init_64.c | 2 +-
arch/xtensa/mm/kasan_init.c | 2 +-
6 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 111d4f70313..c6625e808bf 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -300,6 +300,6 @@ void __init kasan_init(void)
local_flush_tlb_all();

memset(kasan_early_shadow_page, 0, PAGE_SIZE);
- pr_info("Kernel address sanitizer initialized\n");
init_task.kasan_depth = 0;
+ kasan_init_generic();
}
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d541ce45dae..abeb81bf6eb 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -399,14 +399,12 @@ void __init kasan_init(void)
{
kasan_init_shadow();
kasan_init_depth();
-#if defined(CONFIG_KASAN_GENERIC)
+ kasan_init_generic();
/*
* Generic KASAN is now fully initialized.
* Software and Hardware Tag-Based modes still require
* kasan_init_sw_tags() and kasan_init_hw_tags() correspondingly.
*/
- pr_info("KernelAddressSanitizer initialized (generic)\n");
-#endif
}

#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 41c635d6aca..ba2709b1eec 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -530,6 +530,7 @@ void __init kasan_init(void)

memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
init_task.kasan_depth = 0;
+ kasan_init_generic();

csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
local_flush_tlb_all();
diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
index 9adfbdd377d..544e5403dd9 100644
--- a/arch/s390/kernel/early.c
+++ b/arch/s390/kernel/early.c
@@ -21,6 +21,7 @@
#include <linux/kernel.h>
#include <asm/asm-extable.h>
#include <linux/memblock.h>
+#include <linux/kasan.h>
#include <asm/access-regs.h>
#include <asm/asm-offsets.h>
#include <asm/machine.h>
@@ -65,7 +66,7 @@ static void __init kasan_early_init(void)
{
#ifdef CONFIG_KASAN
init_task.kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized\n");
+ kasan_init_generic();
#endif
}

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0539efd0d21..998b6010d6d 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -451,5 +451,5 @@ void __init kasan_init(void)
__flush_tlb_all();

init_task.kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized\n");
+ kasan_init_generic();
}
diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
index f39c4d83173..0524b9ed5e6 100644
--- a/arch/xtensa/mm/kasan_init.c
+++ b/arch/xtensa/mm/kasan_init.c
@@ -94,5 +94,5 @@ void __init kasan_init(void)

/* At this point kasan is fully initialized. Enable error messages. */
current->kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized\n");
+ kasan_init_generic();
}
--
2.34.1

Christophe Leroy

Aug 8, 2025, 1:07:35 AM
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, Alexandre Ghiti
I understood KASAN is really ready to function only once the csr_write()
and local_flush_tlb_all() below are done. Shouldn't kasan_init_generic()
be called after them?

Christophe Leroy

Aug 8, 2025, 1:20:35 AM
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org


Le 07/08/2025 à 21:40, Sabyrzhan Tasbolatov a écrit :
> Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures [1] that need
> to defer KASAN initialization until shadow memory is properly set up,
> and unify the static key infrastructure across all KASAN modes.

That probably deserves more details; maybe copy in information from
the top of the cover letter.

I think there should also be some explanation of
kasan_arch_is_ready() becoming kasan_enabled(), and also of why
kasan_arch_is_ready() disappears completely from mm/kasan/common.c
without being replaced by kasan_enabled().
Instead of adding 'if KASAN' in all users, you could do it in two steps:

Add a symbol ARCH_NEEDS_DEFER_KASAN.

+config ARCH_NEEDS_DEFER_KASAN
+ bool

And then:

+config ARCH_DEFER_KASAN
+ def_bool y
+ depends on KASAN
+ depends on ARCH_NEEDS_DEFER_KASAN
+ help
+ Architectures should select this if they need to defer KASAN
+ initialization until shadow memory is properly set up. This
+ enables runtime control via static keys. Otherwise, KASAN uses
+ compile-time constants for better performance.



No need to also verify KASAN here like powerpc and loongarch?

The comment format standard is different outside networking code, see:
https://docs.kernel.org/process/coding-style.html#commenting

Shouldn't new exports be GPL?

> +#endif
> +
> struct slab *kasan_addr_to_slab(const void *addr)
> {
> if (virt_addr_valid(addr))
> @@ -246,7 +255,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
> bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> unsigned long ip)
> {
> - if (!kasan_arch_is_ready() || is_kfence_address(object))
> + if (is_kfence_address(object))

Here and below, is there no need to replace kasan_arch_is_ready() with
kasan_enabled()?

Sabyrzhan Tasbolatov

unread,
Aug 8, 2025, 2:44:51 AMAug 8
to Christophe Leroy, al...@ghiti.fr, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, Alexandre Ghiti
I will try to test this in v6:

csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
local_flush_tlb_all();
kasan_init_generic();

Alexandre Ghiti said [1] it was not a problem, but I will check.

[1] https://lore.kernel.org/all/20c1e656-512e-4424...@ghiti.fr/

Alexandre Ghiti

unread,
Aug 8, 2025, 3:21:43 AMAug 8
to Sabyrzhan Tasbolatov, Christophe Leroy, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, Alexandre Ghiti
Before setting the final kasan mapping, we still have the early one,
so we won't trap or anything on kasan accesses. But if there is a v6,
I agree it would be cleaner to do it this ^ way.

Thanks,

Alex
> _______________________________________________
> linux-riscv mailing list
> linux...@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv

Sabyrzhan Tasbolatov

unread,
Aug 8, 2025, 3:27:04 AMAug 8
to Christophe Leroy, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org
On Fri, Aug 8, 2025 at 10:03 AM Christophe Leroy
<christop...@csgroup.eu> wrote:
>
>
>
> Le 07/08/2025 à 21:40, Sabyrzhan Tasbolatov a écrit :
> > Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures [1] that need
> > to defer KASAN initialization until shadow memory is properly set up,
> > and unify the static key infrastructure across all KASAN modes.
>
> That probably deserves more details; maybe copy in information from
> the top of the cover letter.
>
> I think there should also be some explanation about
> kasan_arch_is_ready() becoming kasan_enabled(), and also why
> kasan_arch_is_ready() completely disappears from mm/kasan/common.c
> without being replaced by kasan_enabled().

I will try to explain it in detail in the git commit message, and will
copy this part from my cover letter as well. Hopefully the description
below is concise yet informative:

The core issue is that different architectures have inconsistent
approaches to KASAN readiness tracking:
- PowerPC, LoongArch, and UML each implement their own
  kasan_arch_is_ready()
- Only HW_TAGS mode had a unified static key (kasan_flag_enabled)
- Generic and SW_TAGS modes relied on arch-specific solutions
  or always-on behavior

This patch addresses the fragmentation in KASAN initialization
across architectures by introducing a unified approach that eliminates
duplicate static keys and arch-specific kasan_arch_is_ready()
implementations.

Let's replace kasan_arch_is_ready() with existing kasan_enabled() check,
which examines the static key being enabled if arch selects
ARCH_DEFER_KASAN or has HW_TAGS mode support.
For other architectures, kasan_enabled() checks enablement at
compile time.

Now KASAN users can use a single kasan_enabled() check everywhere.
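For illustration, here is a minimal userspace sketch of the behavior described above; it is hypothetical (a plain bool stands in for the kernel's kasan_flag_enabled static key, and the names mirror but are not the real kernel helpers). With ARCH_DEFER_KASAN the check stays false until the arch calls into kasan_init_generic(); without it, the check folds to a compile-time constant:

```c
#include <stdbool.h>

/* Hypothetical stand-in for the Kconfig selection. */
#define CONFIG_ARCH_DEFER_KASAN 1

#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
/*
 * Deferred mode: a runtime flag, modeled as a plain bool here.
 * The kernel instead uses a static key (kasan_flag_enabled) that is
 * patched at runtime so the fast-path check is nearly free.
 */
static bool kasan_flag_enabled;

static inline bool kasan_enabled(void)
{
	return kasan_flag_enabled;
}

static inline void kasan_enable(void)
{
	kasan_flag_enabled = true;
}
#else
/* Non-deferred mode: compile-time constant, no runtime state. */
static inline bool kasan_enabled(void)
{
	return true; /* IS_ENABLED(CONFIG_KASAN) in the kernel */
}

static inline void kasan_enable(void) { }
#endif

/* What kasan_init_generic() would do once shadow memory is ready. */
static void kasan_init_generic(void)
{
	kasan_enable();
}
```

So an arch that defers KASAN leaves kasan_enabled() false during early boot and flips it exactly once, after its shadow setup completes.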
Thanks, will do it in v6 (during weekends though as I'm away from my PC)
unless anyone has objections to it.

FYI, I see that Andrew added yesterday v5 to mm-new:
https://lore.kernel.org/all/2025080722294...@smtp.kernel.org/

Andrey Ryabinin, could you please also review whether all comments are
addressed in v5, so I can work on anything new in v6 over the weekend?
Sorry, I didn't quite understand the question.
I've verified powerpc with KASAN enabled, which selects KASAN_OUTLINE
and GENERIC mode, as far as I remember.

I haven't tested LoongArch booting via QEMU, only tested compilation.
I guess I need to test the boot; I will try to learn how to do it for
qemu-system-loongarch64. It would be helpful if the LoongArch devs in
CC could assist as well.

STATIC_LINK is defined for UML only.
Thanks! Will do in v6.
Hmm, I did it that way because it's currently EXPORT_SYMBOL for HW_TAGS:
https://elixir.bootlin.com/linux/v6.16/source/mm/kasan/hw_tags.c#L53

but I see that in the same HW_TAGS file we have
EXPORT_SYMBOL_GPL(kasan_flag_vmalloc);

So I guess, we should also export kasan_flag_enabled as EXPORT_SYMBOL_GPL.
Will do in v6.

>
> > +#endif
> > +
> > struct slab *kasan_addr_to_slab(const void *addr)
> > {
> > if (virt_addr_valid(addr))
> > @@ -246,7 +255,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
> > bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> > unsigned long ip)
> > {
> > - if (!kasan_arch_is_ready() || is_kfence_address(object))
> > + if (is_kfence_address(object))
>
> Here and below, no need to replace kasan_arch_is_ready() by
> kasan_enabled() ?

Both functions have __wrappers in include/linux/kasan.h [1], where
there's already a kasan_enabled() check. Since we've replaced
kasan_arch_is_ready() with kasan_enabled(), these checks are not
needed here.

[1] https://elixir.bootlin.com/linux/v6.16/source/include/linux/kasan.h#L197
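To make the wrapper pattern concrete, here is an illustrative userspace model (simplified, hypothetical names; the real wrappers live in include/linux/kasan.h and the slow paths in mm/kasan/common.c). The inline kasan_* wrapper performs the kasan_enabled() gate once, so the out-of-line __kasan_* body no longer needs its own readiness check:

```c
#include <stdbool.h>
#include <stddef.h>

/* Models the unified static key; a plain bool for illustration. */
static bool kasan_flag_enabled;
static int slow_path_calls; /* counts entries into the slow path */

static inline bool kasan_enabled(void)
{
	return kasan_flag_enabled;
}

/*
 * Out-of-line slow path: after the series it keeps only its own
 * checks (e.g. the kfence address test in the kernel), not a
 * kasan_arch_is_ready() readiness test.
 */
static bool __kasan_slab_pre_free(void *object)
{
	slow_path_calls++;
	return object != NULL; /* placeholder for the real checks */
}

/* Inline wrapper: the single gate point for all KASAN modes. */
static inline bool kasan_slab_pre_free(void *object)
{
	if (kasan_enabled())
		return __kasan_slab_pre_free(object);
	return false;
}
```

Until the flag is enabled, the wrapper returns early and the slow path is never entered, which is why repeating the check inside the __ function would be redundant.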

Christophe Leroy

unread,
Aug 8, 2025, 3:33:51 AMAug 8
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org


Le 08/08/2025 à 09:26, Sabyrzhan Tasbolatov a écrit :
> On Fri, Aug 8, 2025 at 10:03 AM Christophe Leroy
> <christop...@csgroup.eu> wrote:
>>> diff --git a/arch/um/Kconfig b/arch/um/Kconfig
>>> index 9083bfdb773..a12cc072ab1 100644
>>> --- a/arch/um/Kconfig
>>> +++ b/arch/um/Kconfig
>>> @@ -5,6 +5,7 @@ menu "UML-specific options"
>>> config UML
>>> bool
>>> default y
>>> + select ARCH_DEFER_KASAN if STATIC_LINK
>>
>> No need to also verify KASAN here like powerpc and loongarch ?
>
> Sorry, I didn't quite understand the question.
> I've verified powerpc with KASAN enabled which selects KASAN_OUTLINE,
> as far as I remember, and GENERIC mode.

The question is whether:

select ARCH_DEFER_KASAN if STATIC_LINK

is enough ? Shouldn't it be:

select ARCH_DEFER_KASAN if KASAN && STATIC_LINK

Like for powerpc and loongarch ?

Sabyrzhan Tasbolatov

unread,
Aug 8, 2025, 11:34:07 AMAug 8
to Christophe Leroy, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org
On Fri, Aug 8, 2025 at 10:03 AM Christophe Leroy
<christop...@csgroup.eu> wrote:
>
>
>
Actually, I don't see the benefit of this option. Sorry, I have just
revisited this again.
With the new symbol, an arch (PowerPC, UML, LoongArch) still needs to
select 2 options:

select ARCH_NEEDS_DEFER_KASAN
select ARCH_DEFER_KASAN

and the one-liner with an `if` condition is cleaner:
select ARCH_DEFER_KASAN if KASAN

Christophe Leroy

unread,
Aug 8, 2025, 1:04:00 PMAug 8
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org


Le 08/08/2025 à 17:33, Sabyrzhan Tasbolatov a écrit :
> On Fri, Aug 8, 2025 at 10:03 AM Christophe Leroy
> <christop...@csgroup.eu> wrote:
>>
>>
>>
>> Le 07/08/2025 à 21:40, Sabyrzhan Tasbolatov a écrit :
>>> Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures [1] that need
>>> to defer KASAN initialization until shadow memory is properly set up,
>>> and unify the static key infrastructure across all KASAN modes.
>>
>> That probably desserves more details, maybe copy in informations from
>> the top of cover letter.
>>
>> I think there should also be some exeplanations about
>> kasan_arch_is_ready() becoming kasan_enabled(), and also why
>> kasan_arch_is_ready() completely disappear from mm/kasan/common.c
>> without being replaced by kasan_enabled().
>>
>>>
>>> [1] PowerPC, UML, LoongArch selects ARCH_DEFER_KASAN.
>>>
>>> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Sorry, my mistake: ARCH_DEFER_KASAN has to be 'def_bool y'; the 'y'
was missing. That way it is automatically set to 'y' as long as KASAN
and ARCH_NEEDS_DEFER_KASAN are selected. Should be:

config ARCH_DEFER_KASAN
def_bool y
depends on KASAN
depends on ARCH_NEEDS_DEFER_KASAN


>
> and the oneline with `if` condition is cleaner.
> select ARCH_DEFER_KASAN if KASAN
>

I don't think so because it requires all architectures to add 'if KASAN'
which is not convenient.

Christophe

Sabyrzhan Tasbolatov

unread,
Aug 10, 2025, 3:20:52 AMAug 10
to Christophe Leroy, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.co, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org
On Fri, Aug 8, 2025 at 10:03 PM Christophe Leroy
Hello,

Have just had a chance to test this.

lib/Kconfig.kasan:
config ARCH_NEEDS_DEFER_KASAN
bool

config ARCH_DEFER_KASAN
def_bool y
depends on KASAN
depends on ARCH_NEEDS_DEFER_KASAN

It works for UML defconfig where arch/um/Kconfig is:

config UML
bool
default y
select ARCH_NEEDS_DEFER_KASAN
select ARCH_DEFER_KASAN if STATIC_LINK

But it prints warnings for PowerPC, LoongArch:

config LOONGARCH
bool
...
select ARCH_NEEDS_DEFER_KASAN
select ARCH_DEFER_KASAN

$ make defconfig ARCH=loongarch
*** Default configuration is based on 'loongson3_defconfig'

WARNING: unmet direct dependencies detected for ARCH_DEFER_KASAN
Depends on [n]: KASAN [=n] && ARCH_NEEDS_DEFER_KASAN [=y]
Selected by [y]:
- LOONGARCH [=y]


config PPC
bool
default y
select ARCH_DEFER_KASAN if PPC_RADIX_MMU
select ARCH_NEEDS_DEFER_KASAN

$ make ppc64_defconfig

WARNING: unmet direct dependencies detected for ARCH_DEFER_KASAN
Depends on [n]: KASAN [=n] && ARCH_NEEDS_DEFER_KASAN [=y]
Selected by [y]:
- PPC [=y] && PPC_RADIX_MMU [=y]

Sabyrzhan Tasbolatov

unread,
Aug 10, 2025, 3:32:32 AMAug 10
to Christophe Leroy, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, gli...@google.com, dvy...@google.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, davi...@google.com
Setting Kconfig.kasan without the KASAN dependency works fine for the
3 arches that select ARCH_DEFER_KASAN:

config ARCH_DEFER_KASAN
def_bool y
depends on ARCH_NEEDS_DEFER_KASAN

Going to send v6 soon.

P.S.: Fixed email of David Gow.

Sabyrzhan Tasbolatov

unread,
Aug 10, 2025, 8:58:03 AMAug 10
to ryabin...@gmail.com, christop...@csgroup.eu, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, snov...@gmail.com
This patch series addresses the fragmentation in KASAN initialization
across architectures by introducing a unified approach that eliminates
duplicate static keys and arch-specific kasan_arch_is_ready()
implementations.

The core issue is that different architectures have inconsistent approaches
to KASAN readiness tracking:
- PowerPC, LoongArch, and UML each implement their own kasan_arch_is_ready()
- Only HW_TAGS mode had a unified static key (kasan_flag_enabled)
- Generic and SW_TAGS modes relied on arch-specific solutions
or always-on behavior

Changes in v6:
- Call kasan_init_generic() in arch/riscv _after_ local_flush_tlb_all()
- Added more details in git commit message
- Fixed commenting format per coding style in UML (Christophe Leroy)
- Changed exporting to GPL for kasan_flag_enabled (Christophe Leroy)
- Converted ARCH_DEFER_KASAN to a def_bool depending on KASAN so that
arch users do not need an `if KASAN` condition (Christophe Leroy)
- Added the missing __init for kasan_init in UML (forgotten in v5)

Changes in v5:
- Unified patches where arch (powerpc, UML, loongarch) selects
ARCH_DEFER_KASAN in the first patch not to break
bisectability. So in v5 we have 2 patches now in the series instead of 9.
- Removed kasan_arch_is_ready completely as there is no user
- Removed __wrappers in v4, left only those where it's necessary
due to different implementations

Tested on:
- powerpc - selects ARCH_DEFER_KASAN
Built ppc64_defconfig (PPC_BOOK3S_64) - OK
Booted via qemu-system-ppc64 - OK

I have not tested in v4 powerpc without KASAN enabled.

In v4 arch/powerpc/Kconfig it was:
select ARCH_DEFER_KASAN if PPC_RADIX_MMU

and compiling with ppc64_defconfig caused:
lib/stackdepot.o:(__jump_table+0x8): undefined reference to `kasan_flag_enabled'

I have fixed it in v5 via adding KASAN condition:
select ARCH_DEFER_KASAN if KASAN && PPC_RADIX_MMU

Previous v5 thread: https://lore.kernel.org/all/20250807194012.6...@gmail.com/
Previous v4 thread: https://lore.kernel.org/all/20250805142622.5...@gmail.com/
Previous v3 thread: https://lore.kernel.org/all/20250717142732.2...@gmail.com/
Previous v2 thread: https://lore.kernel.org/all/20250626153147.1...@gmail.com/

Sabyrzhan Tasbolatov (2):
kasan: introduce ARCH_DEFER_KASAN and unify static key across modes
kasan: call kasan_init_generic in kasan_init

arch/arm/mm/kasan_init.c | 2 +-
arch/arm64/mm/kasan_init.c | 4 +---
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/kasan.h | 7 ------
arch/loongarch/mm/kasan_init.c | 8 +++----
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kasan.h | 12 ----------
arch/powerpc/mm/kasan/init_32.c | 2 +-
arch/powerpc/mm/kasan/init_book3e_64.c | 2 +-
arch/powerpc/mm/kasan/init_book3s_64.c | 6 +----
arch/riscv/mm/kasan_init.c | 1 +
arch/s390/kernel/early.c | 3 ++-
arch/um/Kconfig | 1 +
arch/um/include/asm/kasan.h | 5 ++--
arch/um/kernel/mem.c | 13 ++++++++---
arch/x86/mm/kasan_init_64.c | 2 +-
arch/xtensa/mm/kasan_init.c | 2 +-
include/linux/kasan-enabled.h | 32 ++++++++++++++++++--------
include/linux/kasan.h | 6 +++++
lib/Kconfig.kasan | 12 ++++++++++
mm/kasan/common.c | 17 ++++++++++----
mm/kasan/generic.c | 19 +++++++++++----
mm/kasan/hw_tags.c | 9 +-------
mm/kasan/kasan.h | 8 ++++++-
mm/kasan/shadow.c | 12 +++++-----
mm/kasan/sw_tags.c | 1 +
mm/kasan/tags.c | 2 +-
27 files changed, 113 insertions(+), 77 deletions(-)

--
2.34.1

Sabyrzhan Tasbolatov

unread,
Aug 10, 2025, 8:58:09 AMAug 10
to ryabin...@gmail.com, christop...@csgroup.eu, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, snov...@gmail.com
Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures [1] that need
to defer KASAN initialization until shadow memory is properly set up,
and unify the static key infrastructure across all KASAN modes.

[1] PowerPC, UML, LoongArch selects ARCH_DEFER_KASAN.

The core issue is that different architectures have inconsistent
approaches to KASAN readiness tracking:
- PowerPC, LoongArch, and UML each implement their own
  kasan_arch_is_ready()
- Only HW_TAGS mode had a unified static key (kasan_flag_enabled)
- Generic and SW_TAGS modes relied on arch-specific solutions or
  always-on behavior

This patch addresses the fragmentation in KASAN initialization
across architectures by introducing a unified approach that eliminates
duplicate static keys and arch-specific kasan_arch_is_ready()
implementations.

Let's replace kasan_arch_is_ready() with existing kasan_enabled() check,
which examines the static key being enabled if arch selects
ARCH_DEFER_KASAN or has HW_TAGS mode support.
For other architectures, kasan_enabled() checks enablement at compile time.

Now KASAN users can use a single kasan_enabled() check everywhere.

Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snov...@gmail.com>
---
Changes in v6:
- Added more details in git commit message
- Fixed commenting format per coding style in UML (Christophe Leroy)
- Changed exporting to GPL for kasan_flag_enabled (Christophe Leroy)
- Converted ARCH_DEFER_KASAN to a def_bool depending on KASAN so that
arch users do not need an `if KASAN` condition (Christophe Leroy)
- Added the missing __init for kasan_init in UML (forgotten in v5)

Changes in v5:
- Unified patches where arch (powerpc, UML, loongarch) selects
ARCH_DEFER_KASAN in the first patch not to break
bisectability
- Removed kasan_arch_is_ready completely as there is no user
- Removed __wrappers in v4, left only those where it's necessary
due to different implementations

Changes in v4:
- Fixed HW_TAGS static key functionality (was broken in v3)
- Merged configuration and implementation for atomicity
---
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/kasan.h | 7 ------
arch/loongarch/mm/kasan_init.c | 8 +++----
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kasan.h | 12 ----------
arch/powerpc/mm/kasan/init_32.c | 2 +-
arch/powerpc/mm/kasan/init_book3e_64.c | 2 +-
arch/powerpc/mm/kasan/init_book3s_64.c | 6 +----
arch/um/Kconfig | 1 +
arch/um/include/asm/kasan.h | 5 ++--
arch/um/kernel/mem.c | 13 ++++++++---
include/linux/kasan-enabled.h | 32 ++++++++++++++++++--------
include/linux/kasan.h | 6 +++++
lib/Kconfig.kasan | 12 ++++++++++
mm/kasan/common.c | 17 ++++++++++----
mm/kasan/generic.c | 19 +++++++++++----
mm/kasan/hw_tags.c | 9 +-------
mm/kasan/kasan.h | 8 ++++++-
mm/kasan/shadow.c | 12 +++++-----
mm/kasan/sw_tags.c | 1 +
mm/kasan/tags.c | 2 +-
21 files changed, 106 insertions(+), 70 deletions(-)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index f0abc38c40ac..e449e3fcecf9 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -9,6 +9,7 @@ config LOONGARCH
select ACPI_PPTT if ACPI
select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
select ARCH_BINFMT_ELF_STATE
+ select ARCH_NEEDS_DEFER_KASAN
select ARCH_DISABLE_KASAN_INLINE
select ARCH_ENABLE_MEMORY_HOTPLUG
select ARCH_ENABLE_MEMORY_HOTREMOVE
diff --git a/arch/loongarch/include/asm/kasan.h b/arch/loongarch/include/asm/kasan.h
index 62f139a9c87d..0e50e5b5e056 100644
--- a/arch/loongarch/include/asm/kasan.h
+++ b/arch/loongarch/include/asm/kasan.h
@@ -66,7 +66,6 @@
#define XKPRANGE_WC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_WC_KASAN_OFFSET)
#define XKVRANGE_VC_SHADOW_OFFSET (KASAN_SHADOW_START + XKVRANGE_VC_KASAN_OFFSET)

-extern bool kasan_early_stage;
extern unsigned char kasan_early_shadow_page[PAGE_SIZE];

#define kasan_mem_to_shadow kasan_mem_to_shadow
@@ -75,12 +74,6 @@ void *kasan_mem_to_shadow(const void *addr);
#define kasan_shadow_to_mem kasan_shadow_to_mem
const void *kasan_shadow_to_mem(const void *shadow_addr);

-#define kasan_arch_is_ready kasan_arch_is_ready
-static __always_inline bool kasan_arch_is_ready(void)
-{
- return !kasan_early_stage;
-}
-
#define addr_has_metadata addr_has_metadata
static __always_inline bool addr_has_metadata(const void *addr)
{
diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
index d2681272d8f0..170da98ad4f5 100644
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 93402a1d9c9f..4730c676b6bf 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -122,6 +122,7 @@ config PPC
# Please keep this list sorted alphabetically.
#
select ARCH_32BIT_OFF_T if PPC32
+ select ARCH_NEEDS_DEFER_KASAN if PPC_RADIX_MMU
select ARCH_DISABLE_KASAN_INLINE if PPC_RADIX_MMU
select ARCH_DMA_DEFAULT_COHERENT if !NOT_COHERENT_CACHE
select ARCH_ENABLE_MEMORY_HOTPLUG
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index b5bbb94c51f6..957a57c1db58 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -53,18 +53,6 @@
#endif

#ifdef CONFIG_KASAN
-#ifdef CONFIG_PPC_BOOK3S_64
-DECLARE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
-
-static __always_inline bool kasan_arch_is_ready(void)
-{
- if (static_branch_likely(&powerpc_kasan_enabled_key))
- return true;
- return false;
-}
-
-#define kasan_arch_is_ready kasan_arch_is_ready
-#endif

void kasan_early_init(void);
void kasan_mmu_init(void);
diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
index 03666d790a53..1d083597464f 100644
--- a/arch/powerpc/mm/kasan/init_32.c
+++ b/arch/powerpc/mm/kasan/init_32.c
@@ -165,7 +165,7 @@ void __init kasan_init(void)

/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KASAN init done\n");
+ kasan_init_generic();
}

void __init kasan_late_init(void)
diff --git a/arch/powerpc/mm/kasan/init_book3e_64.c b/arch/powerpc/mm/kasan/init_book3e_64.c
index 60c78aac0f63..0d3a73d6d4b0 100644
--- a/arch/powerpc/mm/kasan/init_book3e_64.c
+++ b/arch/powerpc/mm/kasan/init_book3e_64.c
@@ -127,7 +127,7 @@ void __init kasan_init(void)

/* Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KASAN init done\n");
+ kasan_init_generic();
}

void __init kasan_late_init(void) { }
diff --git a/arch/powerpc/mm/kasan/init_book3s_64.c b/arch/powerpc/mm/kasan/init_book3s_64.c
index 7d959544c077..dcafa641804c 100644
--- a/arch/powerpc/mm/kasan/init_book3s_64.c
+++ b/arch/powerpc/mm/kasan/init_book3s_64.c
@@ -19,8 +19,6 @@
#include <linux/memblock.h>
#include <asm/pgalloc.h>

-DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
-
static void __init kasan_init_phys_region(void *start, void *end)
{
unsigned long k_start, k_end, k_cur;
@@ -92,11 +90,9 @@ void __init kasan_init(void)
*/
memset(kasan_early_shadow_page, 0, PAGE_SIZE);

- static_branch_inc(&powerpc_kasan_enabled_key);
-
/* Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KASAN init done\n");
+ kasan_init_generic();
}

void __init kasan_early_init(void) { }
diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 9083bfdb7735..1d4def0db841 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -5,6 +5,7 @@ menu "UML-specific options"
config UML
bool
default y
+ select ARCH_NEEDS_DEFER_KASAN if STATIC_LINK
select ARCH_WANTS_DYNAMIC_TASK_STRUCT
select ARCH_HAS_CACHE_LINE_SIZE
select ARCH_HAS_CPU_FINALIZE_INIT
diff --git a/arch/um/include/asm/kasan.h b/arch/um/include/asm/kasan.h
index f97bb1f7b851..b54a4e937fd1 100644
--- a/arch/um/include/asm/kasan.h
+++ b/arch/um/include/asm/kasan.h
@@ -24,10 +24,9 @@

#ifdef CONFIG_KASAN
void kasan_init(void);
-extern int kasan_um_is_ready;

-#ifdef CONFIG_STATIC_LINK
-#define kasan_arch_is_ready() (kasan_um_is_ready)
+#if defined(CONFIG_STATIC_LINK) && defined(CONFIG_KASAN_INLINE)
+#error UML does not work in KASAN_INLINE mode with STATIC_LINK enabled!
#endif
#else
static inline void kasan_init(void) { }
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 76bec7de81b5..32e3b1972dc1 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -21,10 +21,10 @@
#include <os.h>
#include <um_malloc.h>
#include <linux/sched/task.h>
+#include <linux/kasan.h>

#ifdef CONFIG_KASAN
-int kasan_um_is_ready;
-void kasan_init(void)
+void __init kasan_init(void)
{
/*
* kasan_map_memory will map all of the required address space and
@@ -32,7 +32,11 @@ void kasan_init(void)
*/
kasan_map_memory((void *)KASAN_SHADOW_START, KASAN_SHADOW_SIZE);
init_task.kasan_depth = 0;
- kasan_um_is_ready = true;
+ /*
+ * Since kasan_init() is called before main(),
+ * KASAN is initialized but the enablement is deferred after
+ * jump_label_init(). See arch_mm_preinit().
+ */
}

static void (*kasan_init_ptr)(void)
@@ -58,6 +62,9 @@ static unsigned long brk_end;

void __init arch_mm_preinit(void)
{
+ /* Safe to call after jump_label_init(). Enables KASAN. */
+ kasan_init_generic();
+
/* clear the zero-page */
memset(empty_zero_page, 0, PAGE_SIZE);

diff --git a/include/linux/kasan-enabled.h b/include/linux/kasan-enabled.h
index 6f612d69ea0c..9eca967d8526 100644
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 890011071f2b..51a8293d1af6 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -543,6 +543,12 @@ void kasan_report_async(void);

#endif /* CONFIG_KASAN_HW_TAGS */

+#ifdef CONFIG_KASAN_GENERIC
+void __init kasan_init_generic(void);
+#else
+static inline void kasan_init_generic(void) { }
+#endif
+
#ifdef CONFIG_KASAN_SW_TAGS
void __init kasan_init_sw_tags(void);
#else
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f82889a830fa..a4bb610a7a6f 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -19,6 +19,18 @@ config ARCH_DISABLE_KASAN_INLINE
Disables both inline and stack instrumentation. Selected by
architectures that do not support these instrumentation types.

+config ARCH_NEEDS_DEFER_KASAN
+ bool
+
+config ARCH_DEFER_KASAN
+ def_bool y
+ depends on KASAN && ARCH_NEEDS_DEFER_KASAN
+ help
+ Architectures should select this if they need to defer KASAN
+ initialization until shadow memory is properly set up. This
+ enables runtime control via static keys. Otherwise, KASAN uses
+ compile-time constants for better performance.
+
config CC_HAS_KASAN_GENERIC
def_bool $(cc-option, -fsanitize=kernel-address)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 9142964ab9c9..e3765931a31f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -32,6 +32,15 @@
#include "kasan.h"
#include "../slab.h"

+#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
+/*
+ * Definition of the unified static key declared in kasan-enabled.h.
+ * This provides consistent runtime enable/disable across KASAN modes.
+ */
+DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
+EXPORT_SYMBOL_GPL(kasan_flag_enabled);
+#endif
+
struct slab *kasan_addr_to_slab(const void *addr)
{
if (virt_addr_valid(addr))
@@ -246,7 +255,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
unsigned long ip)
{
- if (!kasan_arch_is_ready() || is_kfence_address(object))
+ if (is_kfence_address(object))
return false;
return check_slab_allocation(cache, object, ip);
}
@@ -254,7 +263,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
bool still_accessible)
{
- if (!kasan_arch_is_ready() || is_kfence_address(object))
+ if (is_kfence_address(object))
return false;

/*
@@ -293,7 +302,7 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,

static inline bool check_page_allocation(void *ptr, unsigned long ip)
{
- if (!kasan_arch_is_ready())
+ if (!kasan_enabled())
return false;

if (ptr != page_address(virt_to_head_page(ptr))) {
@@ -522,7 +531,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
return true;
}

- if (is_kfence_address(ptr) || !kasan_arch_is_ready())
+ if (is_kfence_address(ptr))
return true;

slab = folio_slab(folio);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index d54e89f8c3e7..b413c46b3e04 100644
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 9a6927394b54..c8289a3feabf 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -45,13 +45,6 @@ static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;

-/*
- * Whether KASAN is enabled at all.
- * The value remains false until KASAN is initialized by kasan_init_hw_tags().
- */
-DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
-EXPORT_SYMBOL(kasan_flag_enabled);
-
/*
* Whether the selected mode is synchronous, asynchronous, or asymmetric.
* Defaults to KASAN_MODE_SYNC.
@@ -260,7 +253,7 @@ void __init kasan_init_hw_tags(void)
kasan_init_tags();

/* KASAN is now initialized, enable it. */
- static_branch_enable(&kasan_flag_enabled);
+ kasan_enable();

pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
kasan_mode_info(),
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e64..8a9d8a6ea717 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -398,7 +398,13 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, depot_flags_t depot_flags);
void kasan_set_track(struct kasan_track *track, depot_stack_handle_t stack);
void kasan_save_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
-void kasan_save_free_info(struct kmem_cache *cache, void *object);
+
+void __kasan_save_free_info(struct kmem_cache *cache, void *object);
+static inline void kasan_save_free_info(struct kmem_cache *cache, void *object)
+{
+ if (kasan_enabled())
+ __kasan_save_free_info(cache, object);
+}

#ifdef CONFIG_KASAN_GENERIC
bool kasan_quarantine_put(struct kmem_cache *cache, void *object);
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb1..2e126cb21b68 100644
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index b9382b5b6a37..c75741a74602 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -44,6 +44,7 @@ void __init kasan_init_sw_tags(void)
per_cpu(prng_state, cpu) = (u32)get_cycles();

kasan_init_tags();
+ kasan_enable();

pr_info("KernelAddressSanitizer initialized (sw-tags, stacktrace=%s)\n",
str_on_off(kasan_stack_collection_enabled()));
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index d65d48b85f90..b9f31293622b 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -142,7 +142,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
save_stack_info(cache, object, flags, false);
}

-void kasan_save_free_info(struct kmem_cache *cache, void *object)
+void __kasan_save_free_info(struct kmem_cache *cache, void *object)
{
save_stack_info(cache, object, 0, true);
}
--
2.34.1

Sabyrzhan Tasbolatov

Aug 10, 2025, 8:58:14 AM
to ryabin...@gmail.com, christop...@csgroup.eu, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, snov...@gmail.com
Call kasan_init_generic() which handles Generic KASAN initialization.
For architectures that do not select ARCH_DEFER_KASAN,
this will be a no-op for the runtime flag but will
print the initialization banner.

For SW_TAGS and HW_TAGS modes, their respective init functions will
handle the flag enabling, if they are enabled/implemented.

Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snov...@gmail.com>
Tested-by: Alexandre Ghiti <alex...@rivosinc.com> # riscv
Acked-by: Alexander Gordeev <agor...@linux.ibm.com> # s390
---
Changes in v6:
- Call kasan_init_generic() in arch/riscv _after_ local_flush_tlb_all()
---
arch/arm/mm/kasan_init.c | 2 +-
arch/arm64/mm/kasan_init.c | 4 +---
arch/riscv/mm/kasan_init.c | 1 +
arch/s390/kernel/early.c | 3 ++-
arch/x86/mm/kasan_init_64.c | 2 +-
arch/xtensa/mm/kasan_init.c | 2 +-
6 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 111d4f703136..c6625e808bf8 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -300,6 +300,6 @@ void __init kasan_init(void)
local_flush_tlb_all();

memset(kasan_early_shadow_page, 0, PAGE_SIZE);
- pr_info("Kernel address sanitizer initialized\n");
init_task.kasan_depth = 0;
+ kasan_init_generic();
}
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d541ce45daeb..abeb81bf6ebd 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -399,14 +399,12 @@ void __init kasan_init(void)
{
kasan_init_shadow();
kasan_init_depth();
-#if defined(CONFIG_KASAN_GENERIC)
+ kasan_init_generic();
/*
* Generic KASAN is now fully initialized.
* Software and Hardware Tag-Based modes still require
* kasan_init_sw_tags() and kasan_init_hw_tags() correspondingly.
*/
- pr_info("KernelAddressSanitizer initialized (generic)\n");
-#endif
}

#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 41c635d6aca4..c4a2a9e5586e 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -533,4 +533,5 @@ void __init kasan_init(void)

csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
local_flush_tlb_all();
+ kasan_init_generic();
}
diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
index 9adfbdd377dc..544e5403dd91 100644
--- a/arch/s390/kernel/early.c
+++ b/arch/s390/kernel/early.c
@@ -21,6 +21,7 @@
#include <linux/kernel.h>
#include <asm/asm-extable.h>
#include <linux/memblock.h>
+#include <linux/kasan.h>
#include <asm/access-regs.h>
#include <asm/asm-offsets.h>
#include <asm/machine.h>
@@ -65,7 +66,7 @@ static void __init kasan_early_init(void)
{
#ifdef CONFIG_KASAN
init_task.kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized\n");
+ kasan_init_generic();
#endif
}

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0539efd0d216..998b6010d6d3 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -451,5 +451,5 @@ void __init kasan_init(void)
__flush_tlb_all();

init_task.kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized\n");
+ kasan_init_generic();
}
diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
index f39c4d83173a..0524b9ed5e63 100644
--- a/arch/xtensa/mm/kasan_init.c
+++ b/arch/xtensa/mm/kasan_init.c
@@ -94,5 +94,5 @@ void __init kasan_init(void)

/* At this point kasan is fully initialized. Enable error messages. */
current->kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized\n");
+ kasan_init_generic();
}
--
2.34.1

Christophe Leroy

Aug 11, 2025, 1:38:47 AM
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org


On 10/08/2025 at 14:57, Sabyrzhan Tasbolatov wrote:
> Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures [1] that need
> to defer KASAN initialization until shadow memory is properly set up,
> and unify the static key infrastructure across all KASAN modes.
>
> [1] PowerPC, UML, LoongArch selects ARCH_DEFER_KASAN.
>
> The core issue is that different architectures have inconsistent approaches
> to KASAN readiness tracking:
> - PowerPC, LoongArch, and UML each implement their own
> kasan_arch_is_ready()
> - Only HW_TAGS mode had a unified static key (kasan_flag_enabled)
> - Generic and SW_TAGS modes relied on arch-specific solutions or always-on
> behavior
>
> This patch addresses the fragmentation in KASAN initialization
> across architectures by introducing a unified approach that eliminates
> duplicate static keys and arch-specific kasan_arch_is_ready()
> implementations.
>
> Let's replace kasan_arch_is_ready() with existing kasan_enabled() check,
> which examines the static key being enabled if arch selects
> ARCH_DEFER_KASAN or has HW_TAGS mode support.
> For other arch, kasan_enabled() checks the enablement during compile time.
>
> Now KASAN users can use a single kasan_enabled() check everywhere.
>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
> Signed-off-by: Sabyrzhan Tasbolatov <snov...@gmail.com>

Reviewed-by: Christophe Leroy <christop...@csgroup.eu>

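The config-dependent kasan_enabled() described in the quoted commit message can be sketched as follows. This is a hedged illustration, not the kernel code: the static key is modeled as a plain bool, where the real kernel uses DEFINE_STATIC_KEY_FALSE/static_branch_likely() and IS_ENABLED(CONFIG_KASAN) for the compile-time case.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the unified check: a runtime flag when the architecture
 * defers KASAN init (or HW_TAGS is in use), a compile-time answer
 * otherwise. Names mirror the kernel's; bodies are simplified.
 */
#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
static bool kasan_flag_enabled;

static bool kasan_enabled(void)
{
	/* Runtime check: true only after kasan_enable() ran at init. */
	return kasan_flag_enabled;
}

static void kasan_enable(void)
{
	kasan_flag_enabled = true;
}
#else
static bool kasan_enabled(void)
{
	/* Compile-time answer for arches with no deferred init. */
	return true;
}

static void kasan_enable(void)
{
	/* Nothing to do: KASAN is usable from the start. */
}
#endif
```

With neither config macro defined, the compile-time branch is taken and kasan_enabled() is constant-true, which the compiler can fold away, matching the "no-op for the runtime flag" behavior described for non-deferring architectures.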
Christophe Leroy

Aug 11, 2025, 1:39:07 AM
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org


On 10/08/2025 at 14:57, Sabyrzhan Tasbolatov wrote:
> Call kasan_init_generic() which handles Generic KASAN initialization.
> For architectures that do not select ARCH_DEFER_KASAN,
> this will be a no-op for the runtime flag but will
> print the initialization banner.
>
> For SW_TAGS and HW_TAGS modes, their respective init functions will
> handle the flag enabling, if they are enabled/implemented.
>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
> Signed-off-by: Sabyrzhan Tasbolatov <snov...@gmail.com>
> Tested-by: Alexandre Ghiti <alex...@rivosinc.com> # riscv
> Acked-by: Alexander Gordeev <agor...@linux.ibm.com> # s390

Reviewed-by: Christophe Leroy <christop...@csgroup.eu>

Andrey Konovalov

Sep 3, 2025, 9:01:09 AM
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, christop...@csgroup.eu, b...@redhat.com, h...@linux.ibm.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org
Why is the check removed here and in some other places below? This
needs to be explained in the commit message.
What I meant with these __wrappers was that we should add them for the
KASAN hooks that are called from non-KASAN code (i.e. for the hooks
defined in include/linux/kasan.h). And then move all the
kasan_enabled() checks from mm/kasan/* to where the wrappers are
defined in include/linux/kasan.h (see kasan_unpoison_range() as an
example).
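The wrapper pattern Andrey describes can be sketched like this. This is a hedged, userspace illustration: the kernel's DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled) and static_branch_likely() are modeled with a plain bool, and the out-of-line body is reduced to a counter so the control flow is observable.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Plain-bool stand-in for the kernel's static key. */
static bool kasan_flag_enabled;

static bool kasan_enabled(void)
{
	return kasan_flag_enabled;
}

static int unpoison_calls;

/* Out-of-line implementation; in the kernel this lives in mm/kasan/. */
static void __kasan_unpoison_range(const void *addr, size_t size)
{
	(void)addr;
	(void)size;
	unpoison_calls++;
}

/*
 * Wrapper as it would sit in include/linux/kasan.h: non-KASAN callers
 * go through here, and the kasan_enabled() check keeps the call a
 * no-op until initialization flips the flag.
 */
static void kasan_unpoison_range(const void *addr, size_t size)
{
	if (kasan_enabled())
		__kasan_unpoison_range(addr, size);
}
```

Because the check lives in the inline wrapper, internal mm/kasan/ helpers reached only through such wrappers do not need to repeat it.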

kasan_save_free_info is a KASAN internal function that should need
such a wrapper.

For now, to make these patches simpler, you can keep kasan_enabled()
checks in mm/kasan/*, where they are now. Later we can move them to
include/linux/kasan.h with a separate patch.

Andrey Konovalov

Sep 3, 2025, 9:01:56 AM
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, christop...@csgroup.eu, b...@redhat.com, h...@linux.ibm.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org
On Wed, Sep 3, 2025 at 3:00 PM Andrey Konovalov <andre...@gmail.com> wrote:
>
> > +void __kasan_save_free_info(struct kmem_cache *cache, void *object);
> > +static inline void kasan_save_free_info(struct kmem_cache *cache, void *object)
> > +{
> > + if (kasan_enabled())
> > + __kasan_save_free_info(cache, object);
> > +}
>
> What I meant with these __wrappers was that we should add them for the
> KASAN hooks that are called from non-KASAN code (i.e. for the hooks
> defined in include/linux/kasan.h). And then move all the
> kasan_enabled() checks from mm/kasan/* to where the wrappers are
> defined in include/linux/kasan.h (see kasan_unpoison_range() as an
> example).
>
> kasan_save_free_info is a KASAN internal function that should need
> such a wrapper.

... should _not_ need ...

Ritesh Harjani

Sep 3, 2025, 8:38:55 PM
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, christop...@csgroup.eu, b...@redhat.com, h...@linux.ibm.com, andre...@gmail.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org, snov...@gmail.com
Only book3s64 needs static keys here because of the radix vs. hash mode
selection at runtime. The changes above for powerpc look good to
me. It's a nice cleanup too.

So feel free to take:
Reviewed-by: Ritesh Harjani (IBM) <rites...@gmail.com> #powerpc

However I have few comments below...

...
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 9142964ab9c9..e3765931a31f 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -32,6 +32,15 @@
> #include "kasan.h"
> #include "../slab.h"
>
> +#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
> +/*
> + * Definition of the unified static key declared in kasan-enabled.h.
> + * This provides consistent runtime enable/disable across KASAN modes.
> + */
> +DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
> +EXPORT_SYMBOL_GPL(kasan_flag_enabled);
> +#endif
> +
> struct slab *kasan_addr_to_slab(const void *addr)
> {
> if (virt_addr_valid(addr))
> @@ -246,7 +255,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
> bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> unsigned long ip)
> {
> - if (!kasan_arch_is_ready() || is_kfence_address(object))
> + if (is_kfence_address(object))

For changes in mm/kasan/common.c.. you have removed !kasan_enabled()
check at few places. This seems to be partial revert of commit [1]:

b3c34245756ada "kasan: catch invalid free before SLUB reinitializes the object"

Can you please explain why this needs to be removed?
The explanation should also be added to the commit message.

[1]: https://lore.kernel.org/all/20240809-kasan-tsbr...@google.com/

> return false;
> return check_slab_allocation(cache, object, ip);
> }
> @@ -254,7 +263,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
> bool still_accessible)
> {
> - if (!kasan_arch_is_ready() || is_kfence_address(object))
> + if (is_kfence_address(object))
> return false;
>
> /*
> @@ -293,7 +302,7 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
>
> static inline bool check_page_allocation(void *ptr, unsigned long ip)
> {
> - if (!kasan_arch_is_ready())
> + if (!kasan_enabled())
> return false;
>
> if (ptr != page_address(virt_to_head_page(ptr))) {
> @@ -522,7 +531,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
> return true;
> }
>
> - if (is_kfence_address(ptr) || !kasan_arch_is_ready())
> + if (is_kfence_address(ptr))
> return true;
>
> slab = folio_slab(folio);

-ritesh

Sabyrzhan Tasbolatov

Sep 15, 2025, 12:30:25 AM
to Andrey Konovalov, ryabin...@gmail.com, christop...@csgroup.eu, b...@redhat.com, h...@linux.ibm.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org
kasan_arch_is_ready(), which was unified with kasan_enabled(), was
removed here because __kasan_slab_pre_free() is called from
include/linux/kasan.h [1], where there is already a kasan_enabled()
check.

[1] https://elixir.bootlin.com/linux/v6.16.7/source/include/linux/kasan.h#L198

Please let me know if v7 is required with the change in the git commit
message only.
Yes, I'd like to revisit this in the next separate patch series.

Andrew Morton

Sep 15, 2025, 11:36:56 PM
to Sabyrzhan Tasbolatov, Andrey Konovalov, ryabin...@gmail.com, christop...@csgroup.eu, b...@redhat.com, h...@linux.ibm.com, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org
On Mon, 15 Sep 2025 09:30:03 +0500 Sabyrzhan Tasbolatov <snov...@gmail.com> wrote:

> On Wed, Sep 3, 2025 at 6:01 PM Andrey Konovalov <andre...@gmail.com> wrote:
>

[400+ lines removed - people, please have mercy]

>
> > > @@ -246,7 +255,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
> > > bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> > > unsigned long ip)
> > > {
> > > - if (!kasan_arch_is_ready() || is_kfence_address(object))
> > > + if (is_kfence_address(object))
> > > return false;
> >
> > Why is the check removed here and in some other places below? This
> > need to be explained in the commit message.
>
> kasan_arch_is_ready which was unified with kasan_enabled, was removed
> here because
> __kasan_slab_pre_free is called from include/linux/kasan.h [1] where
> there's already kasan_enabled() check.
>
> [1] https://elixir.bootlin.com/linux/v6.16.7/source/include/linux/kasan.h#L198
>
> Please let me know if v7 is required with the change in the git commit
> message only.

Neither works - please send along the appropriate paragraph and I'll
paste it in, can't get easier than that.

> >
>
> [another ~250 lines snipped]
>

Andrey Konovalov

1:49 PM
to Sabyrzhan Tasbolatov, ryabin...@gmail.com, christop...@csgroup.eu, b...@redhat.com, h...@linux.ibm.com, ak...@linux-foundation.org, zhan...@loongson.cn, chenh...@loongson.cn, davi...@google.com, gli...@google.com, dvy...@google.com, alex...@rivosinc.com, al...@ghiti.fr, agor...@linux.ibm.com, vincenzo...@arm.com, el...@google.com, kasa...@googlegroups.com, linux-ar...@lists.infradead.org, linux-...@vger.kernel.org, loon...@lists.linux.dev, linuxp...@lists.ozlabs.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linu...@lists.infradead.org, linu...@kvack.org
On Mon, Sep 15, 2025 at 6:30 AM Sabyrzhan Tasbolatov
<snov...@gmail.com> wrote:
>
> > Why is the check removed here and in some other places below? This
> > need to be explained in the commit message.
>
> kasan_arch_is_ready which was unified with kasan_enabled, was removed
> here because
> __kasan_slab_pre_free is called from include/linux/kasan.h [1] where
> there's already kasan_enabled() check.
>
> [1] https://elixir.bootlin.com/linux/v6.16.7/source/include/linux/kasan.h#L198
>
> Please let me know if v7 is required with the change in the git commit
> message only.

No need, but next time please add such info into the commit message.

> > What I meant with these __wrappers was that we should add them for the
> > KASAN hooks that are called from non-KASAN code (i.e. for the hooks
> > defined in include/linux/kasan.h). And then move all the
> > kasan_enabled() checks from mm/kasan/* to where the wrappers are
> > defined in include/linux/kasan.h (see kasan_unpoison_range() as an
> > example).
> >
> > kasan_save_free_info is a KASAN internal function that should need
> > such a wrapper.
> >
> > For now, to make these patches simpler, you can keep kasan_enabled()
> > checks in mm/kasan/*, where they are now. Later we can move them to
> > include/linux/kasan.h with a separate patch.
>
> Yes, I'd like to revisit this in the next separate patch series.

Great!

But for now, please send a fix-up patch that removes the
__kasan_save_free_info() wrapper (or a v8? But I see that your series
is now in mm-stable, so I guess a separate fix-up patch is preferred).

I don't think you need a kasan_enabled() check in
kasan_save_free_info() at all. Both the higher level paths
(kasan_slab_free and kasan_mempool_poison_object) already contain this
check.

Thanks!
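Andrey's point — that the single check in the higher-level paths already guards the internal helper — can be sketched as below. This is a hedged stub model, not kernel code: names mirror the kernel's, the static key is a plain bool, and bodies are reduced to counters.

```c
#include <assert.h>
#include <stdbool.h>

/* Plain-bool stand-in for the kernel's static key. */
static bool kasan_flag_enabled;

static bool kasan_enabled(void)
{
	return kasan_flag_enabled;
}

static int saved_free_infos;

/*
 * Internal helper (cf. mm/kasan/tags.c): unconditional, because every
 * caller has already performed the kasan_enabled() check.
 */
static void kasan_save_free_info(void *cache, void *object)
{
	(void)cache;
	(void)object;
	saved_free_infos++;
}

/*
 * Outer hook (modeled on the kasan_slab_free path): the one
 * kasan_enabled() check here covers the whole free path.
 */
static bool kasan_slab_free(void *cache, void *object)
{
	if (!kasan_enabled())
		return false;
	kasan_save_free_info(cache, object);
	return true;
}
```

Duplicating the check inside kasan_save_free_info() would only add a second static-branch test on an already-guarded path, which is why the wrapper is unnecessary.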