[PATCH v4 00/18] kasan: x86: arm64: KASAN tag-based mode for x86

Maciej Wieczor-Retman

Aug 12, 2025, 9:26:49 AM

======= Introduction
The patchset aims to add a KASAN tag-based mode for the x86 architecture
with the help of the new CPU feature called Linear Address Masking
(LAM). The main improvement introduced by the series is 2x lower memory
usage compared to KASAN's generic mode, the only mode currently
available on x86. The tag-based mode may also find errors that the
generic mode can't because of differences in how the two modes operate.
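
As a rough back-of-the-envelope illustration (assuming one shadow byte per
8-byte granule for the generic mode and one per 16-byte granule for the
tag-based mode, i.e. KASAN_SHADOW_SCALE_SHIFT of 3 vs 4 as used later in
the series):

  generic:   shadow ~= RAM / 8  -> ~64 GB of shadow on a 512 GB machine
  tag-based: shadow ~= RAM / 16 -> ~32 GB of shadow on a 512 GB machine

which is consistent with the roughly 2x difference visible in the memory
usage numbers in the benchmarks section below.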

======= How does KASAN's tag-based mode work?
When enabled, memory accesses and allocations are augmented by the
compiler during kernel compilation. Instrumentation functions are added
to each memory allocation and each pointer dereference.

The allocation-related functions generate a random tag and save it in
two places: in the shadow memory that maps to the allocated memory, and
in the top bits of the pointer that points to the allocated memory.
Storing the tag in the top bits of the pointer is possible thanks to
Top-Byte Ignore (TBI) on the arm64 architecture and LAM on x86.

The access-related functions compare the tag stored in the pointer with
the one stored in shadow memory. If the tags don't match, an
out-of-bounds access must have occurred and so an error report is
generated.
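
For illustration, a minimal sketch of what the instrumentation boils down
to (conceptual only; helper names are simplified stand-ins for the actual
mm/kasan code):

  /* allocation side: pick a random tag and store it in two places */
  u8 tag = kasan_random_tag();
  kasan_poison(ptr, size, tag, false);    /* tag the shadow bytes */
  ptr = set_tag(ptr, tag);                /* tag the pointer's top bits */

  /* access side: compare the pointer tag against the shadow byte */
  u8 ptr_tag = get_tag(ptr);
  u8 mem_tag = *(u8 *)kasan_mem_to_shadow(kasan_reset_tag(ptr));
  if (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag)
          /* tags disagree -> generate a KASAN report */;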

The general idea for the tag-based mode is very well explained in the
series with the original implementation [1].

[1] https://lore.kernel.org/all/cover.154409902...@google.com/

======= Differences summary compared to the arm64 tag-based mode
- Tag width:
- Tag width influences the chance of missing an error when two
tags from different allocations happen to have the same value.
The bigger the possible range of tag values, the lower the
chance of that happening.
- Shortening the tag width from 8 bits to 4 can help with memory
usage, but it also increases the chance of not reporting an
error: 4-bit tags have a ~7% chance of such a tag collision.

- Address masking mechanism
- TBI in arm64 allows for storing metadata in the top 8 bits of
the virtual address.
- LAM in x86 allows storing tags in bits [62:57] of the pointer.
To maximize memory savings the tag width is reduced to bits
[60:57].

- Inline mode mismatch reporting
- Arm64 inserts a BRK instruction to pass metadata about a tag
mismatch to the KASAN report.
- On x86 the INT3 instruction is used for the same purpose.

======= Testing
Ran all the KUnit tests for both software tag-based and generic KASAN
after making the changes.

In generic mode the results were:

kasan: pass:59 fail:0 skip:13 total:72
Totals: pass:59 fail:0 skip:13 total:72
ok 1 kasan

and for software tags:

kasan: pass:63 fail:0 skip:9 total:72
Totals: pass:63 fail:0 skip:9 total:72
ok 1 kasan

======= Benchmarks [1]
All tests were run on a Sierra Forest server platform. The only
differences between the test runs were these kernel options:
- CONFIG_KASAN
- CONFIG_KASAN_GENERIC
- CONFIG_KASAN_SW_TAGS
- CONFIG_KASAN_INLINE [1]
- CONFIG_KASAN_OUTLINE

Boot time (until login prompt):
* 02:55 for clean kernel
* 05:42 / 06:32 for generic KASAN (inline/outline)
* 05:58 for tag-based KASAN (outline) [2]

Total memory usage (512 GB present on the system minus MemAvailable
just after boot):
* 12.56 GB for clean kernel
* 81.74 GB for generic KASAN
* 44.39 GB for tag-based KASAN

Kernel size:
* 14 MB for clean kernel
* 24.7 MB / 19.5 MB for generic KASAN (inline/outline)
* 27.1 MB / 18.1 MB for tag-based KASAN (inline/outline)

Compilation time comparison (10 cores):
* 7:27 for clean kernel
* 8:21/7:44 for generic KASAN (inline/outline)
* 8:20/7:41 for tag-based KASAN (inline/outline)

[1] Currently inline mode doesn't work on x86 due to missing support in
the compiler. I have written a patch for clang that seems to fix the
inline mode, and with it I was able to boot and check that all patches
regarding the inline mode work as expected. My hope is to post the patch
to LLVM once this series is completed, and then make inline mode
available in the kernel config.

[2] While I was able to boot the inline tag-based kernel with my
compiler changes in a simulated environment, due to toolchain
difficulties I couldn't get it to boot on the machine I had access to.
Also, the boot time results from the simulation seem too good to be true
for the tag-based case and too poor for the generic case to be
believable. Therefore I'm posting only the results from the physical
server platform.

======= Compilation
Clang was used to compile the series (make LLVM=1) since gcc doesn't
seem to have support for KASAN tag-based compiler instrumentation on
x86.

======= Dependencies
The base branch for the series is the mainline kernel, tag 6.17-rc1.

======= Enabling LAM for testing
Since LASS is needed for LAM and it can't be compiled without it I
applied the LASS series [1] first, then applied my patches.

[1] https://lore.kernel.org/all/20250707080317.37916...@linux.intel.com/

Changes v4:
- Revert the x86 kasan_mem_to_shadow() scheme to the same one used in
generic KASAN. Keep the arithmetic shift idea for KASAN in general
since it makes more sense on arm64 and risc-v.
- Fix inline mode but leave it unavailable until a complementary
compiler patch can be merged.
- Apply Dave Hansen's comments on series formatting, patch style and
code simplifications.

Changes v3:
- Remove the runtime_const patch and setup a unified offset for both 5
and 4 paging levels.
- Add a fix for inline mode on x86 tag-based KASAN. Add a handler for
int3 that is generated on inline tag mismatches.
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
account.
- Made changes to the kasan_non_canonical_hook() according to upstream
discussion.
- Remove patches 2 and 3 since they relate to risc-v and this series
adds only x86 related things.
- Reorder __tag_*() functions so they're before arch_kasan_*(). Remove
CONFIG_KASAN condition from __tag_set().

Changes v2:
- Split the series into one adding KASAN tag-based mode (this one) and
another one that adds the dense mode to KASAN (will post later).
- Removed exporting kasan_poison() and used a wrapper instead in
kasan_init_64.c
- Prepended series with 4 patches from the risc-v series and applied
review comments to the first patch as the rest already are reviewed.

Maciej Wieczor-Retman (16):
kasan: Fix inline mode for x86 tag-based mode
x86: Add arch specific kasan functions
kasan: arm64: x86: Make special tags arch specific
x86: Reset tag for virtual to physical address conversions
mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic
x86: Physical address comparisons in fill_p*d/pte
x86: KASAN raw shadow memory PTE init
x86: LAM compatible non-canonical definition
x86: LAM initialization
x86: Minimal SLAB alignment
kasan: arm64: x86: Handle int3 for inline KASAN reports
kasan: x86: Apply multishot to the inline report handler
kasan: x86: Logical bit shift for kasan_mem_to_shadow
mm: Unpoison pcpu chunks with base address tag
mm: Unpoison vms[area] addresses with a common tag
x86: Make software tag-based kasan available

Samuel Holland (2):
kasan: sw_tags: Use arithmetic shift for shadow computation
kasan: sw_tags: Support tag widths less than 8 bits

Documentation/arch/arm64/kasan-offsets.sh | 8 ++-
Documentation/arch/x86/x86_64/mm.rst | 6 +-
MAINTAINERS | 4 +-
arch/arm64/Kconfig | 10 ++--
arch/arm64/include/asm/kasan-tags.h | 9 +++
arch/arm64/include/asm/kasan.h | 6 +-
arch/arm64/include/asm/memory.h | 14 ++++-
arch/arm64/include/asm/uaccess.h | 1 +
arch/arm64/kernel/traps.c | 17 +-----
arch/arm64/mm/kasan_init.c | 7 ++-
arch/x86/Kconfig | 4 +-
arch/x86/boot/compressed/misc.h | 1 +
arch/x86/include/asm/cache.h | 4 ++
arch/x86/include/asm/kasan-tags.h | 9 +++
arch/x86/include/asm/kasan.h | 71 ++++++++++++++++++++++-
arch/x86/include/asm/page.h | 24 +++++++-
arch/x86/include/asm/page_64.h | 2 +-
arch/x86/kernel/alternative.c | 4 +-
arch/x86/kernel/head_64.S | 3 +
arch/x86/kernel/setup.c | 2 +
arch/x86/kernel/traps.c | 4 ++
arch/x86/mm/Makefile | 2 +
arch/x86/mm/init.c | 3 +
arch/x86/mm/init_64.c | 11 ++--
arch/x86/mm/kasan_init_64.c | 19 +++++-
arch/x86/mm/kasan_inline.c | 26 +++++++++
arch/x86/mm/pat/set_memory.c | 1 +
arch/x86/mm/physaddr.c | 1 +
include/linux/kasan-tags.h | 21 +++++--
include/linux/kasan.h | 51 +++++++++++++++-
include/linux/mm.h | 6 +-
include/linux/mmzone.h | 1 -
include/linux/page-flags-layout.h | 9 +--
lib/Kconfig.kasan | 3 +-
mm/execmem.c | 4 +-
mm/kasan/hw_tags.c | 11 ++++
mm/kasan/report.c | 45 ++++++++++++--
mm/kasan/shadow.c | 18 ++++++
mm/vmalloc.c | 8 +--
scripts/Makefile.kasan | 3 +
scripts/gdb/linux/kasan.py | 5 +-
scripts/gdb/linux/mm.py | 5 +-
42 files changed, 381 insertions(+), 82 deletions(-)
mode change 100644 => 100755 Documentation/arch/arm64/kasan-offsets.sh
create mode 100644 arch/arm64/include/asm/kasan-tags.h
create mode 100644 arch/x86/include/asm/kasan-tags.h
create mode 100644 arch/x86/mm/kasan_inline.c

--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:26:52 AM

From: Samuel Holland <samuel....@sifive.com>

Currently, kasan_mem_to_shadow() uses a logical right shift, which turns
canonical kernel addresses into non-canonical addresses by clearing the
high KASAN_SHADOW_SCALE_SHIFT bits. The value of KASAN_SHADOW_OFFSET is
then chosen so that the addition results in a canonical address for the
shadow memory.

For KASAN_GENERIC, this shift/add combination is ABI with the compiler,
because KASAN_SHADOW_OFFSET is used in compiler-generated inline tag
checks[1], which must only attempt to dereference canonical addresses.

However, for KASAN_SW_TAGS we have some freedom to change the algorithm
without breaking the ABI. Because TBI is enabled for kernel addresses,
the top bits of shadow memory addresses computed during tag checks are
irrelevant, and so likewise are the top bits of KASAN_SHADOW_OFFSET.
This is demonstrated by the fact that LLVM uses a logical right shift
in the tag check fast path[2] but an sbfx (signed bitfield extract)
instruction in the slow path[3] without causing any issues.

Using an arithmetic shift in kasan_mem_to_shadow() provides a number of
benefits:

1) The memory layout doesn't change but is easier to understand.
KASAN_SHADOW_OFFSET becomes a canonical memory address, and the shifted
pointer becomes a negative offset, so KASAN_SHADOW_OFFSET ==
KASAN_SHADOW_END regardless of the shift amount or the size of the
virtual address space.

2) KASAN_SHADOW_OFFSET becomes a simpler constant, requiring only one
instruction to load instead of two. Since it must be loaded in each
function with a tag check, this decreases kernel text size by 0.5%.

3) This shift and the sign extension from kasan_reset_tag() can be
combined into a single sbfx instruction. When this same algorithm change
is applied to the compiler, it removes an instruction from each inline
tag check, further reducing kernel text size by an additional 4.6%.

These benefits extend to other architectures as well. On RISC-V, where
the baseline ISA has neither shifted addition nor an equivalent of the
sbfx instruction, loading KASAN_SHADOW_OFFSET is reduced from 3 to 2
instructions, and kasan_mem_to_shadow(kasan_reset_tag(addr)) similarly
combines two consecutive right shifts.
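
As a worked example (illustration only, using the arm64 SW_TAGS values
from the hunks below: KASAN_SHADOW_SCALE_SHIFT == 4 and the 48-bit VA
KASAN_SHADOW_OFFSET of 0xffff800000000000):

  unsigned long addr   = 0xffff800012345600UL;   /* tag already reset */
  unsigned long shadow = ((long)addr >> 4) + 0xffff800000000000UL;
  /*
   * (long)addr >> 4 sign-extends to 0xfffff80001234560, a negative
   * offset, so shadow == 0xffff780001234560, which lies below
   * KASAN_SHADOW_END == KASAN_SHADOW_OFFSET.
   */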

Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp#L1316 [1]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#L895 [2]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp#L669 [3]
Signed-off-by: Samuel Holland <samuel....@sifive.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4: (Maciej)
- Revert x86 to signed mem_to_shadow mapping.
- Remove last two paragraphs since they were just poorer duplication of
the comments in kasan_non_canonical_hook().

Changelog v3: (Maciej)
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
account.
- Made changes to the kasan_non_canonical_hook() according to upstream
discussion. Settled on overflow on both ranges and separate checks for
x86 and arm.

Changelog v2: (Maciej)
- Correct address range that's checked in kasan_non_canonical_hook().
Adjust the comment inside.
- Remove part of comment from arch/arm64/include/asm/memory.h.
- Append patch message paragraph about the overflow in
kasan_non_canonical_hook().

Documentation/arch/arm64/kasan-offsets.sh | 8 +++--
arch/arm64/Kconfig | 10 +++----
arch/arm64/include/asm/memory.h | 14 ++++++++-
arch/arm64/mm/kasan_init.c | 7 +++--
include/linux/kasan.h | 10 +++++--
mm/kasan/report.c | 36 ++++++++++++++++++++---
scripts/gdb/linux/kasan.py | 3 ++
scripts/gdb/linux/mm.py | 5 ++--
8 files changed, 75 insertions(+), 18 deletions(-)
mode change 100644 => 100755 Documentation/arch/arm64/kasan-offsets.sh

diff --git a/Documentation/arch/arm64/kasan-offsets.sh b/Documentation/arch/arm64/kasan-offsets.sh
old mode 100644
new mode 100755
index 2dc5f9e18039..ce777c7c7804
--- a/Documentation/arch/arm64/kasan-offsets.sh
+++ b/Documentation/arch/arm64/kasan-offsets.sh
@@ -5,8 +5,12 @@

print_kasan_offset () {
printf "%02d\t" $1
- printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
- - (1 << (64 - 32 - $2)) ))
+ if [[ $2 -ne 4 ]]; then
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+ - (1 << (64 - 32 - $2)) ))
+ else
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) ))
+ fi
}

echo KASAN_SHADOW_SCALE_SHIFT = 3
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e9bbfacc35a6..82cbfc7d1233 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -431,11 +431,11 @@ config KASAN_SHADOW_OFFSET
default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
- default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
- default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
- default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
- default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
- default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+ default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
+ default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
+ default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+ default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+ default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
default 0xffffffffffffffff

config UNWIND_TABLES
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 5213248e081b..277d56ceeb01 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -89,7 +89,15 @@
*
* KASAN_SHADOW_END is defined first as the shadow address that corresponds to
* the upper bound of possible virtual kernel memory addresses UL(1) << 64
- * according to the mapping formula.
+ * according to the mapping formula. For Generic KASAN, the address in the
+ * mapping formula is treated as unsigned (part of the compiler's ABI), so the
+ * end of the shadow memory region is at a large positive offset from
+ * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
+ * formula is treated as signed. Since all kernel addresses are negative, they
+ * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
+ * itself the end of the shadow memory region. (User pointers are positive and
+ * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
+ * not allocated for them.)
*
* KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
* memory start must map to the lowest possible kernel virtual memory address
@@ -100,7 +108,11 @@
*/
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_GENERIC
#define KASAN_SHADOW_END ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
+#else
+#define KASAN_SHADOW_END KASAN_SHADOW_OFFSET
+#endif
#define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
#define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual)
#define PAGE_END KASAN_SHADOW_START
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d541ce45daeb..dc2de12c4f26 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
/* The early shadow maps everything to a single page of zeroes */
asmlinkage void __init kasan_early_init(void)
{
- BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
- KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
+ KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ else
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 890011071f2b..b396feca714f 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -61,8 +61,14 @@ int kasan_populate_early_shadow(const void *shadow_start,
#ifndef kasan_mem_to_shadow
static inline void *kasan_mem_to_shadow(const void *addr)
{
- return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
- + KASAN_SHADOW_OFFSET;
+ void *scaled;
+
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+ else
+ scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+
+ return KASAN_SHADOW_OFFSET + scaled;
}
#endif

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 62c01b4527eb..93c6cadb0765 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,11 +642,39 @@ void kasan_non_canonical_hook(unsigned long addr)
const char *bug_type;

/*
- * All addresses that came as a result of the memory-to-shadow mapping
- * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
+ * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+ * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
+ * both x86 and arm64). Thus, the possible shadow addresses (even for
+ * bogus pointers) belong to a single contiguous region that is the
+ * result of kasan_mem_to_shadow() applied to the whole address space.
*/
- if (addr < KASAN_SHADOW_OFFSET)
- return;
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+ if (addr < (u64)kasan_mem_to_shadow((void *)(0UL)) ||
+ addr > (u64)kasan_mem_to_shadow((void *)(~0UL)))
+ return;
+ }
+
+ /*
+ * For Software Tag-Based KASAN, kasan_mem_to_shadow() uses the
+ * arithmetic shift. Normally, this would make checking for a possible
+ * shadow address complicated, as the shadow address computation
+ * operation would overflow only for some memory addresses. However, due
+ * to the chosen KASAN_SHADOW_OFFSET values and the fact that
+ * kasan_mem_to_shadow() only operates on pointers with the tag reset,
+ * the overflow always happens.
+ *
+ * For arm64, the top byte of the pointer gets reset to 0xFF. Thus, the
+ * possible shadow addresses belong to a region that is the result of
+ * kasan_mem_to_shadow() applied to the memory range
+ * [0xFF00000000000000, 0xFFFFFFFFFFFFFFFF]. Despite the overflow, the
+ * resulting possible shadow region is contiguous, as the overflow
+ * happens for both 0xFF00000000000000 and 0xFFFFFFFFFFFFFFFF.
+ */
+ if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) && IS_ENABLED(CONFIG_ARM64)) {
+ if (addr < (u64)kasan_mem_to_shadow((void *)(0xFFUL << 56)) ||
+ addr > (u64)kasan_mem_to_shadow((void *)(~0UL)))
+ return;
+ }

orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);

diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index 56730b3fde0b..fca39968d308 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -8,6 +8,7 @@

import gdb
from linux import constants, mm
+from ctypes import c_int64 as s64

def help():
t = """Usage: lx-kasan_mem_to_shadow [Hex memory addr]
@@ -39,6 +40,8 @@ class KasanMemToShadow(gdb.Command):
else:
help()
def kasan_mem_to_shadow(self, addr):
+ if constants.LX_CONFIG_KASAN_SW_TAGS:
+ addr = s64(addr).value
return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET

KasanMemToShadow()
diff --git a/scripts/gdb/linux/mm.py b/scripts/gdb/linux/mm.py
index 7571aebbe650..2e63f3dedd53 100644
--- a/scripts/gdb/linux/mm.py
+++ b/scripts/gdb/linux/mm.py
@@ -110,12 +110,13 @@ class aarch64_page_ops():
self.KERNEL_END = gdb.parse_and_eval("_end")

if constants.LX_CONFIG_KASAN_GENERIC or constants.LX_CONFIG_KASAN_SW_TAGS:
+ self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
if constants.LX_CONFIG_KASAN_GENERIC:
self.KASAN_SHADOW_SCALE_SHIFT = 3
+ self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
else:
self.KASAN_SHADOW_SCALE_SHIFT = 4
- self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
- self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
+ self.KASAN_SHADOW_END = self.KASAN_SHADOW_OFFSET
self.PAGE_END = self.KASAN_SHADOW_END - (1 << (self.vabits_actual - self.KASAN_SHADOW_SCALE_SHIFT))
else:
self.PAGE_END = self._PAGE_END(self.VA_BITS_MIN)
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:26:55 AM

From: Samuel Holland <samuel....@sifive.com>

Allow architectures to override KASAN_TAG_KERNEL in asm/kasan.h. This
is needed on RISC-V, which supports 57-bit virtual addresses and 7-bit
pointer tags. For consistency, move the arm64 MTE definition of
KASAN_TAG_MIN to asm/kasan.h, since it is also architecture-dependent;
RISC-V's equivalent extension is expected to support 7-bit hardware
memory tags.
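
For illustration, the override pattern this enables looks like the sketch
below (the RISC-V value is hypothetical; only the fallback definitions
come from this patch):

  /* a future arch header, e.g. riscv asm/kasan.h, could provide: */
  #define KASAN_TAG_KERNEL 0x7f   /* hypothetical 7-bit tag value */

  /* linux/kasan-tags.h then only falls back when nothing is defined: */
  #ifndef KASAN_TAG_KERNEL
  #define KASAN_TAG_KERNEL 0xFF
  #endif
  #define KASAN_TAG_INVALID (KASAN_TAG_KERNEL - 1)
  #define KASAN_TAG_MAX     (KASAN_TAG_KERNEL - 2)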

Reviewed-by: Andrey Konovalov <andre...@gmail.com>
Signed-off-by: Samuel Holland <samuel....@sifive.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
arch/arm64/include/asm/kasan.h | 6 ++++--
arch/arm64/include/asm/uaccess.h | 1 +
include/linux/kasan-tags.h | 13 ++++++++-----
3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index e1b57c13f8a4..4ab419df8b93 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -6,8 +6,10 @@

#include <linux/linkage.h>
#include <asm/memory.h>
-#include <asm/mte-kasan.h>
-#include <asm/pgtable-types.h>
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
+#endif

#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
#define arch_kasan_reset_tag(addr) __tag_reset(addr)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5b91803201ef..f890dadc7b4e 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -22,6 +22,7 @@
#include <asm/cpufeature.h>
#include <asm/mmu.h>
#include <asm/mte.h>
+#include <asm/mte-kasan.h>
#include <asm/ptrace.h>
#include <asm/memory.h>
#include <asm/extable.h>
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index 4f85f562512c..e07c896f95d3 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,13 +2,16 @@
#ifndef _LINUX_KASAN_TAGS_H
#define _LINUX_KASAN_TAGS_H

+#include <asm/kasan.h>
+
+#ifndef KASAN_TAG_KERNEL
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
-#define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */
-#define KASAN_TAG_MAX 0xFD /* maximum value for random tags */
+#endif
+
+#define KASAN_TAG_INVALID (KASAN_TAG_KERNEL - 1) /* inaccessible memory tag */
+#define KASAN_TAG_MAX (KASAN_TAG_KERNEL - 2) /* maximum value for random tags */

-#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
-#else
+#ifndef KASAN_TAG_MIN
#define KASAN_TAG_MIN 0x00 /* minimum value for random tags */
#endif

--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:26:59 AM

The LLVM compiler uses the hwasan-instrument-with-calls parameter to
set up inline or outline mode in tag-based KASAN. If zeroed, the
instrumentation implementation is pasted into each relevant location,
along with KASAN related constants, during compilation. If set to one,
all instrumentation is done with function calls instead.

The compiler's default hwasan-instrument-with-calls value for the x86
architecture is "1", which is not the case for other architectures.
Because of this, enabling inline mode in software tag-based KASAN
doesn't work on x86: the kernel script doesn't zero out the parameter
and always sets up the outline mode.

Explicitly zero out hwasan-instrument-with-calls when enabling inline
mode in tag-based KASAN.
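
Roughly speaking (an illustration, not actual compiler output), the
parameter toggles between these two shapes of instrumentation for each
memory access:

  /* hwasan-instrument-with-calls=1 (outline): a call per access */
  __hwasan_load4_noabort(p);
  val = *p;

  /*
   * hwasan-instrument-with-calls=0 (inline): the shadow load and tag
   * compare are emitted in place, using hwasan-mapping-offset as the
   * shadow offset, trapping (INT3 on x86, BRK on arm64) on a mismatch.
   */
  if (get_tag(p) != *(u8 *)kasan_mem_to_shadow(kasan_reset_tag(p)))
          /* trap carrying the report metadata */;
  val = *p;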

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v3:
- Add this patch to the series.

scripts/Makefile.kasan | 3 +++
1 file changed, 3 insertions(+)

diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 693dbbebebba..2c7be96727ac 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -76,8 +76,11 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress
RUSTFLAGS_KASAN := -Zsanitizer=kernel-hwaddress \
-Zsanitizer-recover=kernel-hwaddress

+# LLVM sets hwasan-instrument-with-calls to 1 on x86 by default. Set it to 0
+# when inline mode is enabled.
ifdef CONFIG_KASAN_INLINE
kasan_params += hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
+ kasan_params += hwasan-instrument-with-calls=0
else
kasan_params += hwasan-instrument-with-calls=1
endif
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:27:01 AM

KASAN's software tag-based mode needs multiple macros/functions to
handle tag and pointer interactions - to set, retrieve and reset tags
from the top bits of a pointer.

Mimic functions currently used by arm64 but change the tag's position to
bits [60:57] in the pointer.
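
A usage sketch of the helpers added below (the tag value is just an
example):

  void *p = __tag_set(ptr, 0x3);      /* 0x3 placed in bits [60:57] */
  u8 tag  = __tag_get(p);             /* reads back 0x3 */
  void *q = (void *)__tag_reset(p);   /* sign-extends bit 56 over bits
                                       * [63:57], restoring the canonical
                                       * kernel address */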

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Rewrite __tag_set() without pointless casts and make it more readable.

Changelog v3:
- Reorder functions so that __tag_*() etc are above the
arch_kasan_*() ones.
- Remove CONFIG_KASAN condition from __tag_set()

arch/x86/include/asm/kasan.h | 36 ++++++++++++++++++++++++++++++++++--
1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index d7e33c7f096b..1963eb2fcff3 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -3,6 +3,8 @@
#define _ASM_X86_KASAN_H

#include <linux/const.h>
+#include <linux/kasan-tags.h>
+#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
#define KASAN_SHADOW_SCALE_SHIFT 3

@@ -24,8 +26,37 @@
KASAN_SHADOW_SCALE_SHIFT)))

#ifndef __ASSEMBLER__
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+
+#ifdef CONFIG_KASAN_SW_TAGS
+
+#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
+#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
+#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+#else
+#define __tag_shifted(tag) 0UL
+#define __tag_reset(addr) (addr)
+#define __tag_get(addr) 0
+#endif /* CONFIG_KASAN_SW_TAGS */
+
+static inline void *__tag_set(const void *__addr, u8 tag)
+{
+ u64 addr = (u64)__addr;
+
+ addr &= ~__tag_shifted(KASAN_TAG_MASK);
+ addr |= __tag_shifted(tag);
+
+ return (void *)addr;
+}
+
+#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
+#define arch_kasan_reset_tag(addr) __tag_reset(addr)
+#define arch_kasan_get_tag(addr) __tag_get(addr)

#ifdef CONFIG_KASAN
+
void __init kasan_early_init(void);
void __init kasan_init(void);
void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
@@ -34,8 +65,9 @@ static inline void kasan_early_init(void) { }
static inline void kasan_init(void) { }
static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
int nid) { }
-#endif

-#endif
+#endif /* CONFIG_KASAN */
+
+#endif /* __ASSEMBLER__ */

#endif
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:27:02 AM

KASAN's tag-based mode defines multiple special tag values. They're
reserved for:
- Native kernel value. On arm64 it's 0xFF and it causes an early return
in the tag checking function.
- Invalid value. 0xFE marks an area as freed / unallocated. It's also
the value that is used to initialize regions of shadow memory.
- Max value. 0xFD is the highest value that can be randomly generated
for a new tag.

A metadata macro is also defined:
- Tag width equal to 8.

Tag-based mode on x86 is going to use 4-bit wide tags, so all the above
values need to be changed accordingly.

Make the native kernel tag arch specific for x86 and arm64.

Replace the hardcoded kernel tag value and tag width with macros in
KASAN's non-arch specific code.
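
For reference, with the 4-bit x86 header added below and the derived
definitions from the earlier kasan-tags.h patch, the values work out to:

  KASAN_TAG_KERNEL  = 0xF
  KASAN_TAG_INVALID = KASAN_TAG_KERNEL - 1 = 0xE
  KASAN_TAG_MAX     = KASAN_TAG_KERNEL - 2 = 0xD
  KASAN_TAG_WIDTH   = 4, so KASAN_TAG_MASK = 0xF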

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Move KASAN_TAG_MASK to kasan-tags.h.

Changelog v2:
- Remove risc-v from the patch.

MAINTAINERS | 2 +-
arch/arm64/include/asm/kasan-tags.h | 9 +++++++++
arch/x86/include/asm/kasan-tags.h | 9 +++++++++
include/linux/kasan-tags.h | 10 +++++++++-
include/linux/kasan.h | 4 +++-
include/linux/mm.h | 6 +++---
include/linux/mmzone.h | 1 -
include/linux/page-flags-layout.h | 9 +--------
8 files changed, 35 insertions(+), 15 deletions(-)
create mode 100644 arch/arm64/include/asm/kasan-tags.h
create mode 100644 arch/x86/include/asm/kasan-tags.h

diff --git a/MAINTAINERS b/MAINTAINERS
index fe168477caa4..7ce8c6b86e3d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13166,7 +13166,7 @@ L: kasa...@googlegroups.com
S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kasan.rst
-F: arch/*/include/asm/*kasan.h
+F: arch/*/include/asm/*kasan*.h
F: arch/*/mm/kasan_init*
F: include/linux/kasan*.h
F: lib/Kconfig.kasan
diff --git a/arch/arm64/include/asm/kasan-tags.h b/arch/arm64/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..8cb12ebae57f
--- /dev/null
+++ b/arch/arm64/include/asm/kasan-tags.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH 8
+
+#endif /* ASM_KASAN_TAGS_H */
diff --git a/arch/x86/include/asm/kasan-tags.h b/arch/x86/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..68ba385bc75c
--- /dev/null
+++ b/arch/x86/include/asm/kasan-tags.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL 0xF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH 4
+
+#endif /* ASM_KASAN_TAGS_H */
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index e07c896f95d3..fe80fa8f3315 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,7 +2,15 @@
#ifndef _LINUX_KASAN_TAGS_H
#define _LINUX_KASAN_TAGS_H

-#include <asm/kasan.h>
+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+#include <asm/kasan-tags.h>
+#endif
+
+#ifndef KASAN_TAG_WIDTH
+#define KASAN_TAG_WIDTH 0
+#endif
+
+#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1)

#ifndef KASAN_TAG_KERNEL
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b396feca714f..54481f8c30c5 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -40,7 +40,9 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;

#ifdef CONFIG_KASAN_SW_TAGS
/* This matches KASAN_TAG_INVALID. */
-#define KASAN_SHADOW_INIT 0xFE
+#ifndef KASAN_SHADOW_INIT
+#define KASAN_SHADOW_INIT KASAN_TAG_INVALID
+#endif
#else
#define KASAN_SHADOW_INIT 0
#endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ae97a0b8ec7..bb494cb1d5af 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1692,7 +1692,7 @@ static inline u8 page_kasan_tag(const struct page *page)

if (kasan_enabled()) {
tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
- tag ^= 0xff;
+ tag ^= KASAN_TAG_KERNEL;
}

return tag;
@@ -1705,7 +1705,7 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
if (!kasan_enabled())
return;

- tag ^= 0xff;
+ tag ^= KASAN_TAG_KERNEL;
old_flags = READ_ONCE(page->flags);
do {
flags = old_flags;
@@ -1724,7 +1724,7 @@ static inline void page_kasan_tag_reset(struct page *page)

static inline u8 page_kasan_tag(const struct page *page)
{
- return 0xff;
+ return KASAN_TAG_KERNEL;
}

static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0c5da9141983..c139fb3d862d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1166,7 +1166,6 @@ static inline bool zone_is_empty(struct zone *zone)
#define NODES_MASK ((1UL << NODES_WIDTH) - 1)
#define SECTIONS_MASK ((1UL << SECTIONS_WIDTH) - 1)
#define LAST_CPUPID_MASK ((1UL << LAST_CPUPID_SHIFT) - 1)
-#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1)
#define ZONEID_MASK ((1UL << ZONEID_SHIFT) - 1)

static inline enum zone_type page_zonenum(const struct page *page)
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 760006b1c480..b2cc4cb870e0 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -3,6 +3,7 @@
#define PAGE_FLAGS_LAYOUT_H

#include <linux/numa.h>
+#include <linux/kasan-tags.h>
#include <generated/bounds.h>

/*
@@ -72,14 +73,6 @@
#define NODE_NOT_IN_PAGE_FLAGS 1
#endif

-#if defined(CONFIG_KASAN_SW_TAGS)
-#define KASAN_TAG_WIDTH 8
-#elif defined(CONFIG_KASAN_HW_TAGS)
-#define KASAN_TAG_WIDTH 4
-#else
-#define KASAN_TAG_WIDTH 0
-#endif
-
#ifdef CONFIG_NUMA_BALANCING
#define LAST__PID_SHIFT 8
#define LAST__PID_MASK ((1 << LAST__PID_SHIFT)-1)
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:27:19 AM

Any place where pointer arithmetic is used to convert a virtual address
into a physical one can raise errors if the virtual address is tagged.

Reset the pointer's tag by sign-extending the tag bits in macros that
do pointer arithmetic in address conversions. There will be no change in
compiled code with KASAN disabled since the compiler will optimize
__tag_reset() away.
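
For example (illustrative values, assuming the default 4-level direct map
base 0xffff888000000000 and a tag of 0x3 in bits [60:57]):

  tagged pointer:        0xe7ff888000042000
  after __tag_reset():   0xffff888000042000   /* bit 56 sign-extended
                                                 over bits [63:57] */

Without the reset, __pa()'s subtraction of the direct map base would be
applied to the tagged value and yield a bogus physical address.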

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Simplify page_to_virt() by removing pointless casts.
- Remove change in __is_canonical_address() because it's taken care of
in a later patch due to a LAM compatible definition of canonical.

arch/x86/include/asm/page.h | 14 +++++++++++---
arch/x86/include/asm/page_64.h | 2 +-
arch/x86/mm/physaddr.c | 1 +
3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 9265f2fca99a..15c95e96fd15 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -7,6 +7,7 @@
#ifdef __KERNEL__

#include <asm/page_types.h>
+#include <asm/kasan.h>

#ifdef CONFIG_X86_64
#include <asm/page_64.h>
@@ -41,7 +42,7 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
#define __pa(x) __phys_addr((unsigned long)(x))
#endif

-#define __pa_nodebug(x) __phys_addr_nodebug((unsigned long)(x))
+#define __pa_nodebug(x) __phys_addr_nodebug((unsigned long)(__tag_reset(x)))
/* __pa_symbol should be used for C visible symbols.
This seems to be the official gcc blessed way to do such arithmetic. */
/*
@@ -65,9 +66,16 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
* virt_to_page(kaddr) returns a valid pointer if and only if
* virt_addr_valid(kaddr) returns true.
*/
-#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define page_to_virt(x) ({ \
+ void *__addr = __va(page_to_pfn((struct page *)x) << PAGE_SHIFT); \
+ __tag_set(__addr, page_kasan_tag(x)); \
+})
+#endif
+#define virt_to_page(kaddr) pfn_to_page(__pa((void *)__tag_reset(kaddr)) >> PAGE_SHIFT)
extern bool __virt_addr_valid(unsigned long kaddr);
-#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long) (kaddr))
+#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long)(__tag_reset(kaddr)))

static __always_inline void *pfn_to_kaddr(unsigned long pfn)
{
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 015d23f3e01f..de68ac40dba2 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -33,7 +33,7 @@ static __always_inline unsigned long __phys_addr_nodebug(unsigned long x)
extern unsigned long __phys_addr(unsigned long);
extern unsigned long __phys_addr_symbol(unsigned long);
#else
-#define __phys_addr(x) __phys_addr_nodebug(x)
+#define __phys_addr(x) __phys_addr_nodebug(__tag_reset(x))
#define __phys_addr_symbol(x) \
((unsigned long)(x) - __START_KERNEL_map + phys_base)
#endif
diff --git a/arch/x86/mm/physaddr.c b/arch/x86/mm/physaddr.c
index fc3f3d3e2ef2..7f2b11308245 100644
--- a/arch/x86/mm/physaddr.c
+++ b/arch/x86/mm/physaddr.c
@@ -14,6 +14,7 @@
#ifdef CONFIG_DEBUG_VIRTUAL
unsigned long __phys_addr(unsigned long x)
{
+ x = __tag_reset(x);
unsigned long y = x - __START_KERNEL_map;

/* use the carry flag to determine if x was < __START_KERNEL_map */
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:27:40 AM

ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release.
The related code has multiple spots where page virtual addresses end up
used as arguments in arithmetic operations. Combined with tag-based
KASAN enabled, this can result in pointers that don't point where they
should or in logical operations not giving the expected results.

vm_reset_perms() calculates the range's start and end addresses using
the min() and max() functions. To do that it compares pointers, but some
of them are not tagged - the addr variable is, while the start and end
variables aren't.

within() and within_range() can receive tagged addresses, which get
compared to untagged start and end variables.

Reset the tags in addresses used as function arguments in min(), max(),
within() and within_range().

execmem_cache_add() adds tagged pointers to a maple tree structure,
which are then incorrectly compared when walking the tree. That results
in different pointers being returned later and in page permission
violation errors panicking the kernel.

Reset the tag of the address range inserted into the maple tree inside
execmem_cache_add().
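
For example (illustrative values, tag 0x3 in bits [60:57]):

  tagged:    0xe7ff888000042000
  untagged:  0xffff888000042000

min() and max() treat these as plain unsigned values, so mixing tagged
and untagged addresses produces a range far larger than the pages
actually being reset, and maple tree lookups keyed on such values return
the wrong entries.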

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Add patch to the series.

arch/x86/mm/pat/set_memory.c | 1 +
mm/execmem.c | 4 +++-
mm/vmalloc.c | 4 ++--
3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 8834c76f91c9..1f14a1297db0 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -222,6 +222,7 @@ static inline void cpa_inc_lp_preserved(int level) { }
static inline int
within(unsigned long addr, unsigned long start, unsigned long end)
{
+ addr = (unsigned long)kasan_reset_tag((void *)addr);
return addr >= start && addr < end;
}

diff --git a/mm/execmem.c b/mm/execmem.c
index 0822305413ec..743fa4a8c069 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -191,6 +191,8 @@ static int execmem_cache_add_locked(void *ptr, size_t size, gfp_t gfp_mask)
unsigned long lower, upper;
void *area = NULL;

+ addr = arch_kasan_reset_tag(addr);
+
lower = addr;
upper = addr + size - 1;

@@ -216,7 +218,7 @@ static int execmem_cache_add(void *ptr, size_t size, gfp_t gfp_mask)
static bool within_range(struct execmem_range *range, struct ma_state *mas,
size_t size)
{
- unsigned long addr = mas->index;
+ unsigned long addr = arch_kasan_reset_tag(mas->index);

if (addr >= range->start && addr + size < range->end)
return true;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6dbcdceecae1..83d666e4837a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3328,8 +3328,8 @@ static void vm_reset_perms(struct vm_struct *area)
unsigned long page_size;

page_size = PAGE_SIZE << page_order;
- start = min(addr, start);
- end = max(addr + page_size, end);
+ start = min((unsigned long)arch_kasan_reset_tag(addr), start);
+ end = max((unsigned long)arch_kasan_reset_tag(addr) + page_size, end);
flush_dmap = 1;
}
}
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:28:05 AM

Calculating a page table offset returns a pointer without a tag. When
that calculated offset is compared to a tagged page pointer an error is
raised because the two are never equal.

Change the pointer comparisons to physical address comparisons to avoid
the issues that tagged pointers create for pointer arithmetic. Open code
pte_offset_kernel(), pmd_offset(), pud_offset() and p4d_offset(): since
one parameter is always zero and the rest of each function body is
wrapped in __va(), removing that layer also lowers the complexity of the
final assembly.
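
A short sketch of the mismatch (illustrative only, not part of the patch;
the tag bit range follows the x86 definition used later in the series):

	p4d_t *p4d = (p4d_t *)spp_getpage();	/* tagged: KASAN tag in bits 60:57 */
	p4d_t *ref = p4d_offset(pgd, 0);	/* untagged: rebuilt through __va() */
	/*
	 * p4d != ref even though both refer to the same physical page, so
	 * the sanity check fires. Comparing __pa(p4d) against the PFN held
	 * in the upper-level entry sidesteps the tag entirely.
	 */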

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v2:
- Open code *_offset() to avoid its internal __va().

arch/x86/mm/init_64.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 76e33bd7c556..51a247e258b1 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -251,7 +251,10 @@ static p4d_t *fill_p4d(pgd_t *pgd, unsigned long vaddr)
if (pgd_none(*pgd)) {
p4d_t *p4d = (p4d_t *)spp_getpage();
pgd_populate(&init_mm, pgd, p4d);
- if (p4d != p4d_offset(pgd, 0))
+
+ if (__pa(p4d) != (pgtable_l5_enabled() ?
+ __pa(pgd) :
+ (unsigned long)pgd_val(*pgd) & PTE_PFN_MASK))
printk(KERN_ERR "PAGETABLE BUG #00! %p <-> %p\n",
p4d, p4d_offset(pgd, 0));
}
@@ -263,7 +266,7 @@ static pud_t *fill_pud(p4d_t *p4d, unsigned long vaddr)
if (p4d_none(*p4d)) {
pud_t *pud = (pud_t *)spp_getpage();
p4d_populate(&init_mm, p4d, pud);
- if (pud != pud_offset(p4d, 0))
+ if (__pa(pud) != (p4d_val(*p4d) & p4d_pfn_mask(*p4d)))
printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
pud, pud_offset(p4d, 0));
}
@@ -275,7 +278,7 @@ static pmd_t *fill_pmd(pud_t *pud, unsigned long vaddr)
if (pud_none(*pud)) {
pmd_t *pmd = (pmd_t *) spp_getpage();
pud_populate(&init_mm, pud, pmd);
- if (pmd != pmd_offset(pud, 0))
+ if (__pa(pmd) != (pud_val(*pud) & pud_pfn_mask(*pud)))
printk(KERN_ERR "PAGETABLE BUG #02! %p <-> %p\n",
pmd, pmd_offset(pud, 0));
}
@@ -287,7 +290,7 @@ static pte_t *fill_pte(pmd_t *pmd, unsigned long vaddr)
if (pmd_none(*pmd)) {
pte_t *pte = (pte_t *) spp_getpage();
pmd_populate_kernel(&init_mm, pmd, pte);
- if (pte != pte_offset_kernel(pmd, 0))
+ if (__pa(pte) != (pmd_val(*pmd) & pmd_pfn_mask(*pmd)))
printk(KERN_ERR "PAGETABLE BUG #03!\n");
}
return pte_offset_kernel(pmd, vaddr);
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:28:28 AM
In KASAN's generic mode the default value in shadow memory is zero.
During initialization of shadow memory pages they are allocated and
zeroed.

In KASAN's tag-based mode the default tag on the arm64 architecture is
0xFE, which marks memory that should not be accessed. On x86 (where tags
are 4 bits wide instead of 8) that tag is 0xE, so during initialization
all bytes in the shadow memory pages should be filled with it.

Use memblock_alloc_try_nid_raw() instead of memblock_alloc_try_nid() to
avoid zeroing out the memory so it can be set with the KASAN invalid
tag.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v2:
- Remove dense mode references, use memset() instead of kasan_poison().

arch/x86/mm/kasan_init_64.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0539efd0d216..e8a451cafc8c 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -34,6 +34,18 @@ static __init void *early_alloc(size_t size, int nid, bool should_panic)
return ptr;
}

+static __init void *early_raw_alloc(size_t size, int nid, bool should_panic)
+{
+ void *ptr = memblock_alloc_try_nid_raw(size, size,
+ __pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+
+ if (!ptr && should_panic)
+ panic("%pS: Failed to allocate page, nid=%d from=%lx\n",
+ (void *)_RET_IP_, nid, __pa(MAX_DMA_ADDRESS));
+
+ return ptr;
+}
+
static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
unsigned long end, int nid)
{
@@ -63,8 +75,9 @@ static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
if (!pte_none(*pte))
continue;

- p = early_alloc(PAGE_SIZE, nid, true);
- entry = pfn_pte(PFN_DOWN(__pa(p)), PAGE_KERNEL);
+ p = early_raw_alloc(PAGE_SIZE, nid, true);
+ memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);
+ entry = pfn_pte(PFN_DOWN(__pa_nodebug(p)), PAGE_KERNEL);
set_pte_at(&init_mm, addr, pte, entry);
} while (pte++, addr += PAGE_SIZE, addr != end);
}
@@ -436,7 +449,7 @@ void __init kasan_init(void)
* it may contain some garbage. Now we can clear and write protect it,
* since after the TLB flush no one should write to it.
*/
- memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+ memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
for (i = 0; i < PTRS_PER_PTE; i++) {
pte_t pte;
pgprot_t prot;
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:28:50 AM
For an address to be canonical it has to have its top bits equal to each
other. The number of bits depends on the paging level and whether
they're supposed to be ones or zeroes depends on whether the address
points to kernel or user space.

With Linear Address Masking (LAM) enabled, the definition of linear
address canonicality is modified. Not all of the previously required
bits need to be equal anymore, only the first and the last bit of the
previously checked range. For example, a 5-level paging kernel address
needs to have bits [63] and [56] set.
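
As an illustration, a minimal check for that rule (a hypothetical helper,
assuming LAM_SUP with 5-level paging, i.e. 57 valid address bits):

	/*
	 * Only the outermost bits of the old canonical range are checked:
	 * bit 63 and bit (vaddr_bits - 1); for LA57 kernel addresses that
	 * means bits 63 and 56 must both be set.
	 */
	static inline bool lam_kernel_canonical_la57(u64 vaddr)
	{
		u64 mask = BIT_ULL(63) | BIT_ULL(56);

		return (vaddr & mask) == mask;
	}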

Add separate __canonical_address() implementation for
CONFIG_KASAN_SW_TAGS since it's the only thing right now that enables
LAM for kernel addresses (LAM_SUP bit in CR4).

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Add patch to the series.

arch/x86/include/asm/page.h | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 15c95e96fd15..97de2878f0b3 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -82,10 +82,20 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
return __va(pfn << PAGE_SHIFT);
}

+/*
+ * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
+ */
+#ifdef CONFIG_KASAN_SW_TAGS
+static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
+{
+ return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
+}
+#else
static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
{
return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
}
+#endif

static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
{
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:29:15 AM
To make use of KASAN's tag based mode on x86, Linear Address Masking
(LAM) needs to be enabled. To do that, bit 28 in CR4 has to be set.

Set the bit in early memory initialization.

When launching secondary CPUs, the LAM bit gets lost. To avoid this, add
it to a mask in head_64.S. The bitmask permits some CR4 bits to pass
from the primary CPU to the secondary CPUs without being cleared.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
arch/x86/kernel/head_64.S | 3 +++
arch/x86/mm/init.c | 3 +++
2 files changed, 6 insertions(+)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 3e9b3a3bd039..18ca77daa481 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -209,6 +209,9 @@ SYM_INNER_LABEL(common_startup_64, SYM_L_LOCAL)
* there will be no global TLB entries after the execution."
*/
movl $(X86_CR4_PAE | X86_CR4_LA57), %edx
+#ifdef CONFIG_ADDRESS_MASKING
+ orl $X86_CR4_LAM_SUP, %edx
+#endif
#ifdef CONFIG_X86_MCE
/*
* Preserve CR4.MCE if the kernel will enable #MC support.
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bb57e93b4caf..756bd96c3b8b 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -763,6 +763,9 @@ void __init init_mem_mapping(void)
probe_page_size_mask();
setup_pcid();

+ if (boot_cpu_has(X86_FEATURE_LAM) && IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ cr4_set_bits_and_update_boot(X86_CR4_LAM_SUP);
+
#ifdef CONFIG_X86_64
end = max_pfn << PAGE_SHIFT;
#else
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:29:35 AM
The 8-byte minimal SLAB alignment interferes with KASAN's granularity of
16 bytes and causes a lot of out-of-bounds errors for unaligned 8-byte
allocations.

Compared to a kernel with KASAN disabled, the memory footprint increases
because all kmalloc-8 allocations are now realized as kmalloc-16, which
has twice the object size. More meaningfully, compared to a kernel with
generic KASAN enabled there is no difference: because of the redzones in
generic KASAN, the kmalloc-8 and kmalloc-16 object sizes are the same
(48 bytes). So changing the minimal SLAB alignment for the tag-based
mode has no negative impact relative to the other software KASAN mode.

Adjust x86 minimal SLAB alignment to match KASAN granularity size.
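
For reference, with the scale shift of 4 introduced later in the series
this works out to:

	ARCH_SLAB_MINALIGN = 1 << KASAN_SHADOW_SCALE_SHIFT = 1 << 4 = 16 bytes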

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Extend the patch message with some more context and impact
information.

Changelog v3:
- Fix typo in patch message 4 -> 16.
- Change define location to arch/x86/include/asm/cache.c.

arch/x86/include/asm/cache.h | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
index 69404eae9983..3232583b5487 100644
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -21,4 +21,8 @@
#endif
#endif

+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
#endif /* _ASM_X86_CACHE_H */
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:30:00 AM
Inline KASAN on x86 reports tag mismatches by passing the faulting
address and metadata through the INT3 instruction - a scheme set up in
LLVM's compiler code (specifically HWAddressSanitizer.cpp).

Add a kasan hook to the INT3 handling function.

Disable KASAN in an INT3 core kernel selftest function since it can raise
a false tag mismatch report and potentially panic the kernel.

Make part of that hook - which decides whether to die or recover from a
tag mismatch - arch independent to avoid duplicating a long comment on
both x86 and arm64 architectures.
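
A worked example of the RAX encoding consumed by the new hook (the value
0x33 is made up for illustration; the masks are the ones added below):

	/*
	 * RAX == 0x33:
	 *   0x33 & KASAN_RAX_RECOVER (0x20) != 0  ->  recover after reporting
	 *   0x33 & KASAN_RAX_WRITE   (0x10) != 0  ->  the access was a write
	 *   1 << (0x33 & KASAN_RAX_SIZE_MASK)     ->  1 << 3 = 8-byte access
	 * The faulting address arrives in RDI and the PC is taken from the
	 * trap frame, as kasan_inline_handler() does below.
	 */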

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Make kasan_handler() a stub in a header file. Remove #ifdef from
traps.c.
- Consolidate the "recover" comment into one place.
- Make small changes to the patch message.

MAINTAINERS | 2 +-
arch/arm64/kernel/traps.c | 17 +----------------
arch/x86/include/asm/kasan.h | 26 ++++++++++++++++++++++++++
arch/x86/kernel/alternative.c | 4 +++-
arch/x86/kernel/traps.c | 4 ++++
arch/x86/mm/Makefile | 2 ++
arch/x86/mm/kasan_inline.c | 23 +++++++++++++++++++++++
include/linux/kasan.h | 24 ++++++++++++++++++++++++
8 files changed, 84 insertions(+), 18 deletions(-)
create mode 100644 arch/x86/mm/kasan_inline.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 7ce8c6b86e3d..3daeeaf67546 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13167,7 +13167,7 @@ S: Maintained
F: arch/*/include/asm/*kasan*.h
-F: arch/*/mm/kasan_init*
+F: arch/*/mm/kasan_*
F: include/linux/kasan*.h
F: lib/Kconfig.kasan
F: mm/kasan/
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index f528b6041f6a..b9bdabc14ad1 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -1068,22 +1068,7 @@ int kasan_brk_handler(struct pt_regs *regs, unsigned long esr)

kasan_report(addr, size, write, pc);

- /*
- * The instrumentation allows to control whether we can proceed after
- * a crash was detected. This is done by passing the -recover flag to
- * the compiler. Disabling recovery allows to generate more compact
- * code.
- *
- * Unfortunately disabling recovery doesn't work for the kernel right
- * now. KASAN reporting is disabled in some contexts (for example when
- * the allocator accesses slab object metadata; this is controlled by
- * current->kasan_depth). All these accesses are detected by the tool,
- * even though the reports for them are not printed.
- *
- * This is something that might be fixed at some point in the future.
- */
- if (!recover)
- die("Oops - KASAN", regs, esr);
+ kasan_inline_recover(recover, "Oops - KASAN", regs, esr);

/* If thread survives, skip over the brk instruction and continue: */
arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 1963eb2fcff3..5bf38bb836e1 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -6,7 +6,28 @@
#include <linux/kasan-tags.h>
#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_SW_TAGS
+
+/*
+ * LLVM ABI for reporting tag mismatches in inline KASAN mode.
+ * On x86 the INT3 instruction is used to carry metadata in RAX
+ * to the KASAN report.
+ *
+ * SIZE refers to how many bytes the faulty memory access
+ * requested.
+ * WRITE bit, when set, indicates the access was a write, otherwise
+ * it was a read.
+ * RECOVER bit, when set, should allow the kernel to carry on after
+ * a tag mismatch. Otherwise die() is called.
+ */
+#define KASAN_RAX_RECOVER 0x20
+#define KASAN_RAX_WRITE 0x10
+#define KASAN_RAX_SIZE_MASK 0x0f
+#define KASAN_RAX_SIZE(rax) (1 << ((rax) & KASAN_RAX_SIZE_MASK))
+
+#else
#define KASAN_SHADOW_SCALE_SHIFT 3
+#endif

/*
* Compiler uses shadow offset assuming that addresses start
@@ -35,10 +56,15 @@
#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+bool kasan_inline_handler(struct pt_regs *regs);
#else
#define __tag_shifted(tag) 0UL
#define __tag_reset(addr) (addr)
#define __tag_get(addr) 0
+static inline bool kasan_inline_handler(struct pt_regs *regs)
+{
+ return false;
+}
#endif /* CONFIG_KASAN_SW_TAGS */

static inline void *__tag_set(const void *__addr, u8 tag)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 2a330566e62b..4cb085daad31 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -2228,7 +2228,7 @@ int3_exception_notify(struct notifier_block *self, unsigned long val, void *data
}

/* Must be noinline to ensure uniqueness of int3_selftest_ip. */
-static noinline void __init int3_selftest(void)
+static noinline __no_sanitize_address void __init int3_selftest(void)
{
static __initdata struct notifier_block int3_exception_nb = {
.notifier_call = int3_exception_notify,
@@ -2236,6 +2236,7 @@ static noinline void __init int3_selftest(void)
};
unsigned int val = 0;

+ kasan_disable_current();
BUG_ON(register_die_notifier(&int3_exception_nb));

/*
@@ -2253,6 +2254,7 @@ static noinline void __init int3_selftest(void)

BUG_ON(val != 1);

+ kasan_enable_current();
unregister_die_notifier(&int3_exception_nb);
}

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 0f6f187b1a9e..2a119279980f 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -912,6 +912,10 @@ static bool do_int3(struct pt_regs *regs)
if (kprobe_int3_handler(regs))
return true;
#endif
+
+ if (kasan_inline_handler(regs))
+ return true;
+
res = notify_die(DIE_INT3, "int3", regs, 0, X86_TRAP_BP, SIGTRAP);

return res == NOTIFY_STOP;
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5b9908f13dcf..1dc18090cbe7 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -36,7 +36,9 @@ obj-$(CONFIG_PTDUMP) += dump_pagetables.o
obj-$(CONFIG_PTDUMP_DEBUGFS) += debug_pagetables.o

KASAN_SANITIZE_kasan_init_$(BITS).o := n
+KASAN_SANITIZE_kasan_inline.o := n
obj-$(CONFIG_KASAN) += kasan_init_$(BITS).o
+obj-$(CONFIG_KASAN_SW_TAGS) += kasan_inline.o

KMSAN_SANITIZE_kmsan_shadow.o := n
obj-$(CONFIG_KMSAN) += kmsan_shadow.o
diff --git a/arch/x86/mm/kasan_inline.c b/arch/x86/mm/kasan_inline.c
new file mode 100644
index 000000000000..9f85dfd1c38b
--- /dev/null
+++ b/arch/x86/mm/kasan_inline.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+
+bool kasan_inline_handler(struct pt_regs *regs)
+{
+ int metadata = regs->ax;
+ u64 addr = regs->di;
+ u64 pc = regs->ip;
+ bool recover = metadata & KASAN_RAX_RECOVER;
+ bool write = metadata & KASAN_RAX_WRITE;
+ size_t size = KASAN_RAX_SIZE(metadata);
+
+ if (user_mode(regs))
+ return false;
+
+ if (!kasan_report((void *)addr, size, write, pc))
+ return false;
+
+ kasan_inline_recover(recover, "Oops - KASAN", regs, metadata, die);
+
+ return true;
+}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 54481f8c30c5..8691ad870f3b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -663,4 +663,28 @@ void kasan_non_canonical_hook(unsigned long addr);
static inline void kasan_non_canonical_hook(unsigned long addr) { }
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

+#ifdef CONFIG_KASAN_SW_TAGS
+/*
+ * The instrumentation allows to control whether we can proceed after
+ * a crash was detected. This is done by passing the -recover flag to
+ * the compiler. Disabling recovery allows to generate more compact
+ * code.
+ *
+ * Unfortunately disabling recovery doesn't work for the kernel right
+ * now. KASAN reporting is disabled in some contexts (for example when
+ * the allocator accesses slab object metadata; this is controlled by
+ * current->kasan_depth). All these accesses are detected by the tool,
+ * even though the reports for them are not printed.
+ *
+ * This is something that might be fixed at some point in the future.
+ */
+static inline void kasan_inline_recover(
+ bool recover, char *msg, struct pt_regs *regs, unsigned long err,
+ void die_fn(const char *str, struct pt_regs *regs, long err))
+{
+ if (!recover)
+ die_fn(msg, regs, err);
+}
+#endif
+
#endif /* LINUX_KASAN_H */
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:30:25 AM
KASAN by default reports only one tag mismatch and, based on other
command line parameters, either keeps going or panics. The multishot
mechanism - enabled either through a command line parameter or by
calling the enable/disable functions in kernel code - lifts that
restriction and allows an unlimited number of tag mismatch reports to be
shown.
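
For reference, multishot can be turned on with the existing KASAN boot
switch (not something added by this patch), e.g. by appending to the
kernel command line:

	kasan_multi_shot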

Inline KASAN uses the INT3 instruction to pass metadata to the report
handling function. Currently the "recover" field in that metadata is
broken in the compiler layer and causes every inline tag mismatch to
panic the kernel.

Check the multishot state in the KASAN hook called inside the INT3
handling function.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Add this patch to the series.

arch/x86/mm/kasan_inline.c | 3 +++
include/linux/kasan.h | 3 +++
mm/kasan/report.c | 8 +++++++-
3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kasan_inline.c b/arch/x86/mm/kasan_inline.c
index 9f85dfd1c38b..f837caf32e6c 100644
--- a/arch/x86/mm/kasan_inline.c
+++ b/arch/x86/mm/kasan_inline.c
@@ -17,6 +17,9 @@ bool kasan_inline_handler(struct pt_regs *regs)
if (!kasan_report((void *)addr, size, write, pc))
return false;

+ if (kasan_multi_shot_enabled())
+ return true;
+
kasan_inline_recover(recover, "Oops - KASAN", regs, metadata, die);

return true;
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 8691ad870f3b..7a2527794549 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -663,7 +663,10 @@ void kasan_non_canonical_hook(unsigned long addr);
static inline void kasan_non_canonical_hook(unsigned long addr) { }
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

+bool kasan_multi_shot_enabled(void);
+
#ifdef CONFIG_KASAN_SW_TAGS
+
/*
* The instrumentation allows to control whether we can proceed after
* a crash was detected. This is done by passing the -recover flag to
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 93c6cadb0765..cfa2da0e2985 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -121,6 +121,12 @@ static void report_suppress_stop(void)
#endif
}

+bool kasan_multi_shot_enabled(void)
+{
+ return test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags);
+}
+EXPORT_SYMBOL(kasan_multi_shot_enabled);
+
/*
* Used to avoid reporting more than one KASAN bug unless kasan_multi_shot
* is enabled. Note that KASAN tests effectively enable kasan_multi_shot
@@ -128,7 +134,7 @@ static void report_suppress_stop(void)
*/
static bool report_enabled(void)
{
- if (test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
+ if (kasan_multi_shot_enabled())
return true;
return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags);
}
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:30:55 AM
While tag-based KASAN generally uses an arithmetic bit shift to convert
a memory address to a shadow memory address, that doesn't work for all
cases on x86. Testing different shadow memory offsets showed that either
4- or 5-level paging didn't work correctly or the inline mode ran into
issues. Thus the best working scheme is the logical bit shift and
non-canonical shadow offset that x86 already uses for generic KASAN,
adjusted for the granularity increase from 8 to 16 bytes.

Add an arch specific implementation of kasan_mem_to_shadow() that uses
the logical bit shift.
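
Restating the mapping used here (the 16-byte granule, i.e. a scale shift
of 4, is set by a later patch in the series):

	shadow = ((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;

Because the shift is logical, applying it to the whole address range
[0, ~0UL] produces one contiguous span of shadow addresses starting at
KASAN_SHADOW_OFFSET, which is what the non-canonical hook below relies
on.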

The non-canonical hook tries to determine whether an address could have
come from kasan_mem_to_shadow(). It first checks whether the address
fits into the set of values that the mem-to-shadow function can possibly
produce.

Tie both generic and tag-based x86 KASAN modes to the address range
check associated with generic KASAN.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Add this patch to the series.

arch/x86/include/asm/kasan.h | 8 ++++++++
mm/kasan/report.c | 5 +++--
2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 5bf38bb836e1..f3e34a9754d2 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -53,6 +53,14 @@

#ifdef CONFIG_KASAN_SW_TAGS

+static inline void *__kasan_mem_to_shadow(const void *addr)
+{
+ return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+ + KASAN_SHADOW_OFFSET;
+}
+
+#define kasan_mem_to_shadow(addr) __kasan_mem_to_shadow(addr)
+
#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index cfa2da0e2985..11c8b3ddb4cc 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -648,13 +648,14 @@ void kasan_non_canonical_hook(unsigned long addr)
const char *bug_type;

/*
- * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+ * For Generic KASAN and Software Tag-Based mode on the x86
+ * architecture, kasan_mem_to_shadow() uses the logical right shift
* and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
* both x86 and arm64). Thus, the possible shadow addresses (even for
* bogus pointers) belong to a single contiguous region that is the
* result of kasan_mem_to_shadow() applied to the whole address space.
*/
- if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_X86_64)) {
if (addr < (u64)kasan_mem_to_shadow((void *)(0UL)) ||
addr > (u64)kasan_mem_to_shadow((void *)(~0UL)))
return;
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:31:11 AM
The problem presented here is related to NUMA systems and tag-based
KASAN mode. It can be explained in the following points:

1. There can be more than one virtual memory chunk.
2. Chunk's base address has a tag.
3. The base address points at the first chunk and thus inherits
the tag of the first chunk.
4. The subsequent chunks will be accessed with the tag from the
first chunk.
5. Thus, the subsequent chunks need to have their tag set to
match that of the first chunk.

Refactor code by moving it into a helper in preparation for the actual
fix.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Redo the patch message numbered list.
- Do the refactoring in this patch and move additions to the next new
one.

Changelog v3:
- Remove last version of this patch that just resets the tag on
base_addr and add this patch that unpoisons all areas with the same
tag instead.

include/linux/kasan.h | 10 ++++++++++
mm/kasan/hw_tags.c | 11 +++++++++++
mm/kasan/shadow.c | 10 ++++++++++
mm/vmalloc.c | 4 +---
4 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7a2527794549..3ec432d7df9a 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -613,6 +613,13 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
__kasan_poison_vmalloc(start, size);
}

+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms);
+static __always_inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+ if (kasan_enabled())
+ __kasan_unpoison_vmap_areas(vms, nr_vms);
+}
+
#else /* CONFIG_KASAN_VMALLOC */

static inline void kasan_populate_early_vm_area_shadow(void *start,
@@ -637,6 +644,9 @@ static inline void *kasan_unpoison_vmalloc(const void *start,
static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }

+static inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{ }
+
#endif /* CONFIG_KASAN_VMALLOC */

#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 9a6927394b54..1f569df313c3 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -382,6 +382,17 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
*/
}

+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+ int area;
+
+ for (area = 0 ; area < nr_vms ; area++) {
+ vms[area]->addr = __kasan_unpoison_vmalloc(
+ vms[area]->addr, vms[area]->size,
+ KASAN_VMALLOC_PROT_NORMAL);
+ }
+}
+
#endif

void kasan_enable_hw_tags(void)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb1..b41f74d68916 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -646,6 +646,16 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
}

+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+ int area;
+
+ for (area = 0 ; area < nr_vms ; area++) {
+ kasan_poison(vms[area]->addr, vms[area]->size,
+ arch_kasan_get_tag(vms[area]->addr), false);
+ }
+}
+
#else /* CONFIG_KASAN_VMALLOC */

int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 83d666e4837a..72eecc8b087a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4847,9 +4847,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
* With hardware tag-based KASAN, marking is skipped for
* non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
- for (area = 0; area < nr_vms; area++)
- vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
- vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
+ kasan_unpoison_vmap_areas(vms, nr_vms);

kfree(vas);
return vms;
--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:31:34 AM
The problem presented here is related to NUMA systems and tag-based
KASAN mode. It can be explained in the following points:

1. There can be more than one virtual memory chunk.
2. Chunk's base address has a tag.
3. The base address points at the first chunk and thus inherits
the tag of the first chunk.
4. The subsequent chunks will be accessed with the tag from the
first chunk.
5. Thus, the subsequent chunks need to have their tag set to
match that of the first chunk.

Unpoison all vms[]->addr memory and pointers with the same tag to
resolve the mismatch.
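
To illustrate with made-up tag values (not taken from a real trace):

	/*
	 * pcpu_get_vm_areas() with nr_vms == 2:
	 *   vms[0]->addr carries tag 0x3, chunk 0 shadow is poisoned with 0x3
	 *   vms[1]->addr carries tag 0x7, chunk 1 shadow is poisoned with 0x7
	 * All per-cpu chunk addresses are derived from vms[0]->addr, so
	 * accesses to chunk 1 carry tag 0x3 and mismatch its shadow (0x7).
	 * Repoisoning chunk 1 with vms[0]'s tag and retagging vms[1]->addr,
	 * as done below, keeps the pointers and the shadow consistent.
	 */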

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Move tagging the vms[]->addr to this new patch and leave refactoring
there.
- Comment the fix to provide some context.

mm/kasan/shadow.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index b41f74d68916..ee2488371784 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -646,13 +646,21 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
}

+/*
+ * A tag mismatch happens when calculating per-cpu chunk addresses, because
+ * they all inherit the tag from vms[0]->addr, even when nr_vms is bigger
+ * than 1. This is a problem because all the vms[]->addr come from separate
+ * allocations and have different tags so while the calculated address is
+ * correct the tag isn't.
+ */
void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
{
int area;

for (area = 0 ; area < nr_vms ; area++) {
kasan_poison(vms[area]->addr, vms[area]->size,
- arch_kasan_get_tag(vms[area]->addr), false);
+ arch_kasan_get_tag(vms[0]->addr), false);
+ arch_kasan_set_tag(vms[area]->addr, arch_kasan_get_tag(vms[0]->addr));
}
}

--
2.50.1

Maciej Wieczor-Retman

Aug 12, 2025, 9:31:59 AM
Make CONFIG_KASAN_SW_TAGS available for x86 machines that have
ADDRESS_MASKING (LAM) enabled, as LAM works similarly to the Top-Byte
Ignore (TBI) feature that enables the software tag-based mode on the
arm64 platform.

Set the scale macro based on the KASAN mode: in software tag-based mode
16 bytes of memory map to one shadow byte, and 8 bytes in generic mode.
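
The shadow sizes in the documentation update below follow from that
granularity, assuming the shadow covers the kernel half of the address
space as it does in generic mode:

	4-level paging: 128 TB / 16 = 8 TB of shadow
	5-level paging:  64 PB / 16 = 4 PB of shadow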

Disable CONFIG_KASAN_INLINE and CONFIG_KASAN_STACK when
CONFIG_KASAN_SW_TAGS is enabled on x86 until the appropriate compiler
support is available.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Add x86 specific kasan_mem_to_shadow().
- Revert x86 to the older unsigned KASAN_SHADOW_OFFSET. Do the same to
KASAN_SHADOW_START/END.
- Modify scripts/gdb/linux/kasan.py to keep x86 using unsigned offset.
- Disable inline and stack support when software tags are enabled on
x86.

Changelog v3:
- Remove runtime_const from previous patch and merge the rest here.
- Move scale shift definition back to header file.
- Add new kasan offset for software tag based mode.
- Fix patch message typo 32 -> 16, and 16 -> 8.
- Update lib/Kconfig.kasan with x86 now having software tag-based
support.

Changelog v2:
- Remove KASAN dense code.

Documentation/arch/x86/x86_64/mm.rst | 6 ++++--
arch/x86/Kconfig | 4 +++-
arch/x86/boot/compressed/misc.h | 1 +
arch/x86/include/asm/kasan.h | 1 +
arch/x86/kernel/setup.c | 2 ++
lib/Kconfig.kasan | 3 ++-
scripts/gdb/linux/kasan.py | 4 ++--
7 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/Documentation/arch/x86/x86_64/mm.rst b/Documentation/arch/x86/x86_64/mm.rst
index a6cf05d51bd8..ccbdbb4cda36 100644
--- a/Documentation/arch/x86/x86_64/mm.rst
+++ b/Documentation/arch/x86/x86_64/mm.rst
@@ -60,7 +60,8 @@ Complete virtual memory map with 4-level page tables
ffffe90000000000 | -23 TB | ffffe9ffffffffff | 1 TB | ... unused hole
ffffea0000000000 | -22 TB | ffffeaffffffffff | 1 TB | virtual memory map (vmemmap_base)
ffffeb0000000000 | -21 TB | ffffebffffffffff | 1 TB | ... unused hole
- ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory
+ ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory (generic mode)
+ fffff40000000000 | -8 TB | fffffbffffffffff | 8 TB | KASAN shadow memory (software tag-based mode)
__________________|____________|__________________|_________|____________________________________________________________
|
| Identical layout to the 56-bit one from here on:
@@ -130,7 +131,8 @@ Complete virtual memory map with 5-level page tables
ffd2000000000000 | -11.5 PB | ffd3ffffffffffff | 0.5 PB | ... unused hole
ffd4000000000000 | -11 PB | ffd5ffffffffffff | 0.5 PB | virtual memory map (vmemmap_base)
ffd6000000000000 | -10.5 PB | ffdeffffffffffff | 2.25 PB | ... unused hole
- ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory
+ ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory (generic mode)
+ ffeffc0000000000 | -6 PB | fffffbffffffffff | 4 PB | KASAN shadow memory (software tag-based mode)
__________________|____________|__________________|_________|____________________________________________________________
|
| Identical layout to the 47-bit one from here on:
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b8df57ac0f28..f44fec1190b6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -69,6 +69,7 @@ config X86
select ARCH_CLOCKSOURCE_INIT
select ARCH_CONFIGURES_CPU_MITIGATIONS
select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
+ select ARCH_DISABLE_KASAN_INLINE if X86_64 && KASAN_SW_TAGS
select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64
select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
@@ -199,6 +200,7 @@ config X86
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if X86_64
select HAVE_ARCH_KASAN_VMALLOC if X86_64
+ select HAVE_ARCH_KASAN_SW_TAGS if ADDRESS_MASKING
select HAVE_ARCH_KFENCE
select HAVE_ARCH_KMSAN if X86_64
select HAVE_ARCH_KGDB
@@ -403,7 +405,7 @@ config AUDIT_ARCH

config KASAN_SHADOW_OFFSET
hex
- depends on KASAN
+ default 0xeffffc0000000000 if KASAN_SW_TAGS
default 0xdffffc0000000000

config HAVE_INTEL_TXT
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index db1048621ea2..ded92b439ada 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -13,6 +13,7 @@
#undef CONFIG_PARAVIRT_SPINLOCKS
#undef CONFIG_KASAN
#undef CONFIG_KASAN_GENERIC
+#undef CONFIG_KASAN_SW_TAGS

#define __NO_FORTIFY

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index f3e34a9754d2..385f4e9daab3 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -7,6 +7,7 @@
#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
#ifdef CONFIG_KASAN_SW_TAGS
+#define KASAN_SHADOW_SCALE_SHIFT 4

/*
* LLVM ABI for reporting tag mismatches in inline KASAN mode.
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1b2edd07a3e1..5b819f84f6db 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1207,6 +1207,8 @@ void __init setup_arch(char **cmdline_p)

kasan_init();

+ kasan_init_sw_tags();
+
/*
* Sync back kernel address range.
*
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f82889a830fa..9ddbc6aeb5d5 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -100,7 +100,8 @@ config KASAN_SW_TAGS

Requires GCC 11+ or Clang.

- Supported only on arm64 CPUs and relies on Top Byte Ignore.
+ Supported on arm64 CPUs that support Top Byte Ignore and on x86 CPUs
+ that support Linear Address Masking.

Consumes about 1/16th of available memory at kernel start and
add an overhead of ~20% for dynamic allocations.
diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index fca39968d308..4b86202b155f 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -7,7 +7,7 @@
#

import gdb
-from linux import constants, mm
+from linux import constants, utils, mm
from ctypes import c_int64 as s64

def help():
@@ -40,7 +40,7 @@ class KasanMemToShadow(gdb.Command):
else:
help()
def kasan_mem_to_shadow(self, addr):
- if constants.CONFIG_KASAN_SW_TAGS:
+ if constants.CONFIG_KASAN_SW_TAGS and not utils.is_target_arch('x86'):
addr = s64(addr)
return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET

--
2.50.1

Kiryl Shutsemau

Aug 13, 2025, 4:16:37 AM
On Tue, Aug 12, 2025 at 03:23:36PM +0200, Maciej Wieczor-Retman wrote:
> Compilation time comparison (10 cores):
> * 7:27 for clean kernel
> * 8:21/7:44 for generic KASAN (inline/outline)
> * 8:20/7:41 for tag-based KASAN (inline/outline)

It is not clear if it is compilation time of a kernel with different
config options or compilation time of the same kernel running on machine
with different kernels (KASAN-off/KASAN-generic/KASAN-tagged).

--
Kiryl Shutsemau / Kirill A. Shutemov

Maciej Wieczor-Retman

Aug 13, 2025, 6:40:58 AM
It's the first one, I'll reword this accordingly.

When you said a while ago this would be a good thing to measure, did you mean
the first or the second thing? I thought you meant the first one but now I have
doubts.

>
>--
> Kiryl Shutsemau / Kirill A. Shutemov

--
Kind regards
Maciej Wieczór-Retman

Kiryl Shutsemau

Aug 13, 2025, 7:05:56 AM
to Maciej Wieczor-Retman, …
On Wed, Aug 13, 2025 at 12:39:35PM +0200, Maciej Wieczor-Retman wrote:
> On 2025-08-13 at 09:16:29 +0100, Kiryl Shutsemau wrote:
> >On Tue, Aug 12, 2025 at 03:23:36PM +0200, Maciej Wieczor-Retman wrote:
> >> Compilation time comparison (10 cores):
> >> * 7:27 for clean kernel
> >> * 8:21/7:44 for generic KASAN (inline/outline)
> >> * 8:20/7:41 for tag-based KASAN (inline/outline)
> >
> >It is not clear whether these are compilation times of a kernel built
> >with different config options, or compilation times of the same kernel
> >when built on a machine running different kernels
> >(KASAN-off/KASAN-generic/KASAN-tagged).
>
> It's the first one, I'll reword this accordingly.
>
> When you said a while ago this would be a good thing to measure, did you mean
> the first or the second thing? I thought you meant the first one but now I have
> doubts.

I meant the second. We want to know how slow it is to run a workload
under a kernel with KASAN enabled.

Maciej Wieczor-Retman

Aug 13, 2025, 7:44:44 AM
to Kiryl Shutsemau, …
Okay, thanks for confirming, I'll run these compilations on the system with the
tested kernels and attach results to v5 of the series.

>
>--
> Kiryl Shutsemau / Kirill A. Shutemov

Ada Couprie Diaz

Aug 13, 2025, 10:48:55 AM
to Maciej Wieczor-Retman, …
Hi,
Building CONFIG_KASAN_HW_TAGS with -Werror on arm64 fails here
due to a warning about KASAN_TAG_MIN being redefined.

On my side the error got triggered when compiling
arch/arm64/kernel/asm-offsets.c due to the ordering of some includes:
from <asm/processor.h>, <linux/kasan-tags.h> ends up being included
(by <asm/cpufeatures.h> including <asm/sysreg.h>) before <asm/kasan.h>.
(Build trace at the end for reference)

Adding `#undef KASAN_TAG_MIN` before redefining the arch version
allows building CONFIG_KASAN_HW_TAGS on arm64 without
further issues, but I don't know if this is the most appropriate fix.

Thanks,
Ada

---

CC arch/arm64/kernel/asm-offsets.s
In file included from ./arch/arm64/include/asm/processor.h:42,
from ./include/asm-generic/qrwlock.h:18,
from ./arch/arm64/include/generated/asm/qrwlock.h:1,
from ./arch/arm64/include/asm/spinlock.h:9,
from ./include/linux/spinlock.h:95,
from ./include/linux/mmzone.h:8,
from ./include/linux/gfp.h:7,
from ./include/linux/slab.h:16,
from ./include/linux/resource_ext.h:11,
from ./include/linux/acpi.h:13,
from ./include/acpi/apei.h:9,
from ./include/acpi/ghes.h:5,
from ./include/linux/arm_sdei.h:8,
from ./arch/arm64/kernel/asm-offsets.c:10:
./arch/arm64/include/asm/kasan.h:11: error: "KASAN_TAG_MIN" redefined [-Werror]
11 | #define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
|
In file included from ./arch/arm64/include/asm/sysreg.h:14,
from ./arch/arm64/include/asm/cputype.h:250,
from ./arch/arm64/include/asm/cache.h:43,
from ./include/vdso/cache.h:5,
from ./include/linux/cache.h:6,
from ./include/linux/slab.h:15:
./include/linux/kasan-tags.h:23: note: this is the location of the previous definition
23 | #define KASAN_TAG_MIN 0x00 /* minimum value for random tags */
|
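
For reference, the workaround mentioned above amounts to the following in
arch/arm64/include/asm/kasan.h (a sketch only; I'm not sure it is the most
appropriate fix):

	/* drop the generic default from <linux/kasan-tags.h> before
	 * redefining the arm64 value */
	#undef KASAN_TAG_MIN
	#define KASAN_TAG_MIN	0xF0	/* minimum value for random tags */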

Ada Couprie Diaz

Aug 13, 2025, 10:49:36 AM
to Maciej Wieczor-Retman, …
Hi,

On 12/08/2025 14:23, Maciej Wieczor-Retman wrote:
> [...]
>
> Make part of that hook - which decides whether to die or recover from a
> tag mismatch - arch independent to avoid duplicating a long comment on
> both x86 and arm64 architectures.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
> ---
> [...]
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index f528b6041f6a..b9bdabc14ad1 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -1068,22 +1068,7 @@ int kasan_brk_handler(struct pt_regs *regs, unsigned long esr)
>
> kasan_report(addr, size, write, pc);
>
> - /*
> - * The instrumentation allows to control whether we can proceed after
> - * a crash was detected. This is done by passing the -recover flag to
> - * the compiler. Disabling recovery allows to generate more compact
> - * code.
> - *
> - * Unfortunately disabling recovery doesn't work for the kernel right
> - * now. KASAN reporting is disabled in some contexts (for example when
> - * the allocator accesses slab object metadata; this is controlled by
> - * current->kasan_depth). All these accesses are detected by the tool,
> - * even though the reports for them are not printed.
> - *
> - * This is something that might be fixed at some point in the future.
> - */
> - if (!recover)
> - die("Oops - KASAN", regs, esr);
> + kasan_inline_recover(recover, "Oops - KASAN", regs, esr);
It seems that `die` is missing as the last argument, otherwise
CONFIG_KASAN_SW_TAGS will not build on arm64.
With the fix, it builds fully without further issues.
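Concretely, the call that builds for me looks like this (only the added
`die` argument matters; the exact kasan_inline_recover() signature is
whatever the series defines):

	kasan_inline_recover(recover, "Oops - KASAN", regs, esr, die);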

Thanks,
Ada

Peter Zijlstra

Aug 13, 2025, 11:17:16 AM
to Maciej Wieczor-Retman, …
On Tue, Aug 12, 2025 at 03:23:49PM +0200, Maciej Wieczor-Retman wrote:
> Inline KASAN on x86 does tag mismatch reports by passing the faulty
> address and metadata through the INT3 instruction - scheme that's setup
> in the LLVM's compiler code (specifically HWAddressSanitizer.cpp).
>
> Add a kasan hook to the INT3 handling function.
>
> Disable KASAN in an INT3 core kernel selftest function since it can raise
> a false tag mismatch report and potentially panic the kernel.
>
> Make part of that hook - which decides whether to die or recover from a
> tag mismatch - arch independent to avoid duplicating a long comment on
> both x86 and arm64 architectures.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>

Can we please split this into an arm64 and x86 patch. Also, why use int3
here rather than a #UD trap, which we use for all other such cases?

Mike Rapoport

Aug 14, 2025, 3:15:42 AM
to Maciej Wieczor-Retman, …
Why not reset the tag inside __phys_addr_nodebug() and __phys_addr()?

> /* __pa_symbol should be used for C visible symbols.
> This seems to be the official gcc blessed way to do such arithmetic. */
> /*
> @@ -65,9 +66,16 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
> * virt_to_page(kaddr) returns a valid pointer if and only if
> * virt_addr_valid(kaddr) returns true.
> */
> -#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> +
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define page_to_virt(x) ({ \
> + void *__addr = __va(page_to_pfn((struct page *)x) << PAGE_SHIFT); \
> + __tag_set(__addr, page_kasan_tag(x)); \
> +})
> +#endif
> +#define virt_to_page(kaddr) pfn_to_page(__pa((void *)__tag_reset(kaddr)) >> PAGE_SHIFT)

then virt_to_page() will remain the same, no?

> extern bool __virt_addr_valid(unsigned long kaddr);
> -#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long) (kaddr))
> +#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long)(__tag_reset(kaddr)))

The same here, I think tag_reset() should be inside __virt_addr_valid()
--
Sincerely yours,
Mike.

Mike Rapoport

Aug 14, 2025, 3:26:51 AM
to Maciej Wieczor-Retman, …
Shouldn't this use kasan_reset_tag()?
And the calls below as well?

Also this can be done when addr is initialized

> +
> lower = addr;
> upper = addr + size - 1;
>
> @@ -216,7 +218,7 @@ static int execmem_cache_add(void *ptr, size_t size, gfp_t gfp_mask)
> static bool within_range(struct execmem_range *range, struct ma_state *mas,
> size_t size)
> {
> - unsigned long addr = mas->index;
> + unsigned long addr = arch_kasan_reset_tag(mas->index);

AFAIU, we use plain address without the tag as an index in
execmem_cache_add(), so here mas->index will be a plain address as well

> if (addr >= range->start && addr + size < range->end)
> return true;
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6dbcdceecae1..83d666e4837a 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3328,8 +3328,8 @@ static void vm_reset_perms(struct vm_struct *area)
> unsigned long page_size;
>
> page_size = PAGE_SIZE << page_order;
> - start = min(addr, start);
> - end = max(addr + page_size, end);
> + start = min((unsigned long)arch_kasan_reset_tag(addr), start);
> + end = max((unsigned long)arch_kasan_reset_tag(addr) + page_size, end);
> flush_dmap = 1;
> }
> }
> --
> 2.50.1
>

--
Sincerely yours,
Mike.

Maciej Wieczor-Retman

Aug 18, 2025, 12:26:20 AM
to Ada Couprie Diaz, …
Hi, thanks for pointing it out :).

I'll cross-compile it for arm64 with different KASAN settings and fix any such
errors. I did this a while ago and it went okay then, but there have been so
many rebases in the meantime that I must have missed something.

Kind regards
Maciej Wieczór-Retman

Maciej Wieczor-Retman

Aug 18, 2025, 1:31:48 AM
to Mike Rapoport, …
Hi and thanks for looking at the patches :)
Right, this should be one less line in the changelog and no behavior changes.
I'll fix it.

>
>> /* __pa_symbol should be used for C visible symbols.
>> This seems to be the official gcc blessed way to do such arithmetic. */
>> /*
>> @@ -65,9 +66,16 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
>> * virt_to_page(kaddr) returns a valid pointer if and only if
>> * virt_addr_valid(kaddr) returns true.
>> */
>> -#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>> +
>> +#ifdef CONFIG_KASAN_SW_TAGS
>> +#define page_to_virt(x) ({ \
>> + void *__addr = __va(page_to_pfn((struct page *)x) << PAGE_SHIFT); \
>> + __tag_set(__addr, page_kasan_tag(x)); \
>> +})
>> +#endif
>> +#define virt_to_page(kaddr) pfn_to_page(__pa((void *)__tag_reset(kaddr)) >> PAGE_SHIFT)
>
>then virt_to_page() will remain the same, no?

Oh, yes, that is redundant with __pa() resetting the tag. Thanks!
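
Something along these lines, assuming the reset lands at the top of the
helper (illustrative only, the final patch may differ):

	static __always_inline unsigned long __phys_addr_nodebug(unsigned long x)
	{
		unsigned long y;

		x = __tag_reset(x);	/* strip the KASAN tag once, here */
		y = x - __START_KERNEL_map;

		/* use the carry flag to determine if x was < __START_KERNEL_map */
		x = y + ((x > y) ? phys_base : (__START_KERNEL_map - PAGE_OFFSET));

		return x;
	}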

>
>> extern bool __virt_addr_valid(unsigned long kaddr);
>> -#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long) (kaddr))
>> +#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long)(__tag_reset(kaddr)))
>
>The same here, I think tag_reset() should be inside __virt_addr_valid()

Sure, that does sound better.

Maciej Wieczor-Retman

Aug 18, 2025, 1:49:00 AM
to Mike Rapoport, …
Yes, my mistake, the kernel bot pointed that out for me too :b.

>
>Also this can be done when addr is initialized

Sure, I'll do that there.
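
Roughly like this, assuming addr is derived directly from the ptr argument
(sketch only, the final patch may look different):

	/* reset the tag once, where addr is initialized, so the
	 * lower/upper arithmetic below works on a plain address */
	unsigned long addr = (unsigned long)kasan_reset_tag(ptr);

	lower = addr;
	upper = addr + size - 1;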

>
>> +
>> lower = addr;
>> upper = addr + size - 1;
>>
>> @@ -216,7 +218,7 @@ static int execmem_cache_add(void *ptr, size_t size, gfp_t gfp_mask)
>> static bool within_range(struct execmem_range *range, struct ma_state *mas,
>> size_t size)
>> {
>> - unsigned long addr = mas->index;
>> + unsigned long addr = arch_kasan_reset_tag(mas->index);
>
>AFAIU, we use plain address without the tag as an index in
>execmem_cache_add(), so here mas->index will be a plain address as well

I'll recheck to make sure, but I was getting some nonspecific errors such as
"page permission violation", so I suspected a page address was being picked
incorrectly somewhere due to tagging. After reviewing most places with pointer
arithmetic / comparisons and printing those addresses, I found some were still
tagged in within_range().

But I'll recheck whether my other changes made this line redundant. I added
this first, which fixed some issues, but then I found more which were fixed by
resetting addr in execmem_cache_add_locked().

>
>> if (addr >= range->start && addr + size < range->end)
>> return true;
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 6dbcdceecae1..83d666e4837a 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -3328,8 +3328,8 @@ static void vm_reset_perms(struct vm_struct *area)
>> unsigned long page_size;
>>
>> page_size = PAGE_SIZE << page_order;
>> - start = min(addr, start);
>> - end = max(addr + page_size, end);
>> + start = min((unsigned long)arch_kasan_reset_tag(addr), start);
>> + end = max((unsigned long)arch_kasan_reset_tag(addr) + page_size, end);
>> flush_dmap = 1;
>> }
>> }
>> --
>> 2.50.1
>>
>
>--
>Sincerely yours,
>Mike.

Maciej Wieczor-Retman

Aug 18, 2025, 1:57:34 AM
to Ada Couprie Diaz, …
Oh right, thank you!

Maciej Wieczor-Retman

Aug 18, 2025, 2:28:16 AM
to Peter Zijlstra, …
Sure, two patches seem okay. I'll first add all the new functions and modify the
x86 code, then add the arm64 patch which will replace its die() + comment with
kasan_inline_recover().

About INT3 I'm not sure, it's just how it's written in the LLVM code. I didn't
see any justification for why it's not #UD. My guess is the SDM describes INT3
as an interrupt for debugger purposes while #UD is described as being "for
software testing", so from the documentation's point of view INT3 seems to
have the stronger case.

Does INT3 interfere with something? Or is #UD better just because of
consistency?

Ada Couprie Diaz

Aug 21, 2025, 8:31:37 AM
to Maciej Wieczor-Retman, …
Hi,

On 12/08/2025 14:23, Maciej Wieczor-Retman wrote:
> [...]
> ======= Testing
> Checked all the kunits for both software tags and generic KASAN after
> making changes.
>
> In generic mode the results were:
>
> kasan: pass:59 fail:0 skip:13 total:72
> Totals: pass:59 fail:0 skip:13 total:72
> ok 1 kasan
>
> and for software tags:
>
> kasan: pass:63 fail:0 skip:9 total:72
> Totals: pass:63 fail:0 skip:9 total:72
> ok 1 kasan
I tested the series on arm64 and after fixing the build issues mentioned
I was able to boot without issues and did not observe any regressions
in the KASAN KUnit tests with either generic or software tags.

So this is Tested-by: Ada Couprie Diaz <ada.cou...@arm.com> (For arm64)

I will note that the tests `kmalloc_memmove_negative_size` and
`kmalloc_memmove_invalid_size` seem to be able to corrupt memory
and lead to kernel crashes if `memmove()` is not properly instrumented,
which I discovered while investigating [0].
> [...]
> ======= Compilation
> Clang was used to compile the series (make LLVM=1) since gcc doesn't
> seem to have support for KASAN tag-based compiler instrumentation on
> x86.

Interestingly, while investigating [0], this comment slipped by me and
I managed to compile your series for x86 with software tags using GCC,
though it is a bit hacky.
You need to update the CC_HAS_KASAN_SW_TAGS to pass `-mlam=u48`
or `-mlam=u57`, as it is disabled by default, and pass `-march=arrowlake`
for compilation (the support for software tags depends on the arch).
You could then test with GCC (though the issue in [0] also applies to x86).

Best,
Ada

[0]: https://groups.google.com/g/kasan-dev/c/v1PYeoitg88

> ======= Dependencies
> The base branch for the series is the mainline kernel, tag 6.17-rc1.
>
> ======= Enabling LAM for testing
> Since LASS is needed for LAM and it can't be compiled without it I
> applied the LASS series [1] first, then applied my patches.
>
> [1] https://lore.kernel.org/all/20250707080317.37916...@linux.intel.com/
>
> Changes v4:
> - Revert x86 kasan_mem_to_shadow() scheme to the same on used in generic
> KASAN. Keep the arithmetic shift idea for the KASAN in general since
> it makes more sense for arm64 and in risc-v.
> - Fix inline mode but leave it unavailable until a complementary
> compiler patch can be merged.
> - Apply Dave Hansen's comments on series formatting, patch style and
> code simplifications.
>
> Changes v3:
> - Remove the runtime_const patch and setup a unified offset for both 5
> and 4 paging levels.
> - Add a fix for inline mode on x86 tag-based KASAN. Add a handler for
> int3 that is generated on inline tag mismatches.
> - Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
> reflected there.
> - Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
> account.
> - Made changes to the kasan_non_canonical_hook() according to upstream
> discussion.
> - Remove patches 2 and 3 since they related to risc-v and this series
> adds only x86 related things.
> - Reorder __tag_*() functions so they're before arch_kasan_*(). Remove
> CONFIG_KASAN condition from __tag_set().
>
> Changes v2:
> - Split the series into one adding KASAN tag-based mode (this one) and
> another one that adds the dense mode to KASAN (will post later).
> - Removed exporting kasan_poison() and used a wrapper instead in
> kasan_init_64.c
> - Prepended series with 4 patches from the risc-v series and applied
> review comments to the first patch as the rest already are reviewed.
>
> Maciej Wieczor-Retman (16):
> kasan: Fix inline mode for x86 tag-based mode
> x86: Add arch specific kasan functions
> kasan: arm64: x86: Make special tags arch specific
> x86: Reset tag for virtual to physical address conversions
> mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic
> x86: Physical address comparisons in fill_p*d/pte
> x86: KASAN raw shadow memory PTE init
> x86: LAM compatible non-canonical definition
> x86: LAM initialization
> x86: Minimal SLAB alignment
> kasan: arm64: x86: Handle int3 for inline KASAN reports
> kasan: x86: Apply multishot to the inline report handler
> kasan: x86: Logical bit shift for kasan_mem_to_shadow
> mm: Unpoison pcpu chunks with base address tag
> mm: Unpoison vms[area] addresses with a common tag
> x86: Make software tag-based kasan available
>
> Samuel Holland (2):
> kasan: sw_tags: Use arithmetic shift for shadow computation
> kasan: sw_tags: Support tag widths less than 8 bits
>
> Documentation/arch/arm64/kasan-offsets.sh | 8 ++-
> Documentation/arch/x86/x86_64/mm.rst | 6 +-
> MAINTAINERS | 4 +-
> arch/arm64/Kconfig | 10 ++--
> arch/arm64/include/asm/kasan-tags.h | 9 +++
> arch/arm64/include/asm/kasan.h | 6 +-
> arch/arm64/include/asm/memory.h | 14 ++++-
> arch/arm64/include/asm/uaccess.h | 1 +
> arch/arm64/kernel/traps.c | 17 +-----
> arch/arm64/mm/kasan_init.c | 7 ++-
> arch/x86/Kconfig | 4 +-
> arch/x86/boot/compressed/misc.h | 1 +
> arch/x86/include/asm/cache.h | 4 ++
> arch/x86/include/asm/kasan-tags.h | 9 +++
> arch/x86/include/asm/kasan.h | 71 ++++++++++++++++++++++-
> arch/x86/include/asm/page.h | 24 +++++++-
> arch/x86/include/asm/page_64.h | 2 +-
> arch/x86/kernel/alternative.c | 4 +-
> arch/x86/kernel/head_64.S | 3 +
> arch/x86/kernel/setup.c | 2 +
> arch/x86/kernel/traps.c | 4 ++
> arch/x86/mm/Makefile | 2 +
> arch/x86/mm/init.c | 3 +
> arch/x86/mm/init_64.c | 11 ++--
> arch/x86/mm/kasan_init_64.c | 19 +++++-
> arch/x86/mm/kasan_inline.c | 26 +++++++++
> arch/x86/mm/pat/set_memory.c | 1 +
> arch/x86/mm/physaddr.c | 1 +
> include/linux/kasan-tags.h | 21 +++++--
> include/linux/kasan.h | 51 +++++++++++++++-
> include/linux/mm.h | 6 +-
> include/linux/mmzone.h | 1 -
> include/linux/page-flags-layout.h | 9 +--
> lib/Kconfig.kasan | 3 +-
> mm/execmem.c | 4 +-
> mm/kasan/hw_tags.c | 11 ++++
> mm/kasan/report.c | 45 ++++++++++++--
> mm/kasan/shadow.c | 18 ++++++
> mm/vmalloc.c | 8 +--
> scripts/Makefile.kasan | 3 +
> scripts/gdb/linux/kasan.py | 5 +-
> scripts/gdb/linux/mm.py | 5 +-
> 42 files changed, 381 insertions(+), 82 deletions(-)
> mode change 100644 => 100755 Documentation/arch/arm64/kasan-offsets.sh
> create mode 100644 arch/arm64/include/asm/kasan-tags.h
> create mode 100644 arch/x86/include/asm/kasan-tags.h
> create mode 100644 arch/x86/mm/kasan_inline.c
>

Maciej Wieczor-Retman

Aug 22, 2025, 3:37:40 AM
to Ada Couprie Diaz, …
Hello, and thanks for testing the series!

On 2025-08-21 at 13:30:28 +0100, Ada Couprie Diaz wrote:
>Hi,
>
>On 12/08/2025 14:23, Maciej Wieczor-Retman wrote:
>> [...]
>> ======= Testing
>> Checked all the kunits for both software tags and generic KASAN after
>> making changes.
>>
>> In generic mode the results were:
>>
>> kasan: pass:59 fail:0 skip:13 total:72
>> Totals: pass:59 fail:0 skip:13 total:72
>> ok 1 kasan
>>
>> and for software tags:
>>
>> kasan: pass:63 fail:0 skip:9 total:72
>> Totals: pass:63 fail:0 skip:9 total:72
>> ok 1 kasan
>I tested the series on arm64 and after fixing the build issues mentioned
>I was able to boot without issues and did not observe any regressions
>in the KASAN KUnit tests with either generic or software tags.
>
>So this is Tested-by: Ada Couprie Diaz <ada.cou...@arm.com> (For arm64)

Thank you! I'll try to send the fixed series on Monday/Tuesday.

>I will note that the tests `kmalloc_memmove_negative_size` and
>`kmalloc_memmove_invalid_size` seem to be able to corrupt memory
>and lead to kernel crashes if `memmove()` is not properly instrumented,
>which I discovered while investigating [0].

What do you mean by 'properly instrumented'? Is it the intrinsic prefix thing
for gcc that you mentioned?

>> [...]
>> ======= Compilation
>> Clang was used to compile the series (make LLVM=1) since gcc doesn't
>> seem to have support for KASAN tag-based compiler instrumentation on
>> x86.
>
>Interestingly, while investigating [0], this comment slipped by me and
>I managed to compile your series for x86 with software tags using GCC,
>though it is a bit hacky.
>You need to update the CC_HAS_KASAN_SW_TAGS to pass `-mlam=u48`
>or `-mlam=u57`, as it is disabled by default, and pass `-march=arrowlake`
>for compilation (the support for software tags depends on the arch).
>You could then test with GCC (though the issue in [0] also applies to x86).

Thanks! I'll try it out :)

>
>Best,
>Ada

Maciej Wieczor-Retman

Aug 25, 2025, 4:25:34 PM
to sohil...@intel.com, …
======= Introduction
The patchset aims to add a KASAN tag-based mode for the x86 architecture
with the help of the new CPU feature called Linear Address Masking
(LAM). The main improvement introduced by the series is 2x lower memory
usage compared to KASAN's generic mode, the only currently available
mode on x86. The tag-based mode may also find errors that the generic
mode couldn't because of differences in how these modes operate.

======= How does KASAN's tag-based mode work?
When enabled, memory accesses and allocations are augmented by the
compiler during kernel compilation. Instrumentation functions are added
to each memory allocation and each pointer dereference.

The allocation related functions generate a random tag and save it in
two places: in shadow memory that maps to the allocated memory, and in
the top bits of the pointer that points to the allocated memory. Storing
the tag in the top of the pointer is possible because of Top-Byte Ignore
(TBI) on arm64 architecture and LAM on x86.

The access-related functions perform a comparison between the tag
stored in the pointer and the one stored in shadow memory. If the tags
don't match, an out-of-bounds error must have occurred, so an error
report is generated.
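
For illustration, such a check boils down to something like the following
(simplified; helper names are placeholders rather than the exact kernel
code):

	static void check_tagged_access(const void *addr, size_t size,
					bool write, unsigned long ret_ip)
	{
		u8 ptr_tag = get_tag(addr);		/* tag from the pointer's top bits */
		u8 *shadow = kasan_mem_to_shadow(addr);	/* shadow byte covering addr */

		if (*shadow != ptr_tag)
			kasan_report(addr, size, write, ret_ip);
	}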

The general idea for the tag-based mode is very well explained in the
series with the original implementation [1].

[1] https://lore.kernel.org/all/cover.154409902...@google.com/

======= Differences summary compared to the arm64 tag-based mode
- Tag width:
- Tag width influences the chance that an invalid access goes
undetected because two different allocations happen to get the
same tag value. The bigger the possible range of tag values, the
lower the chance of that happening.
- Shortening the tag width from 8 bits to 4 can help with memory
usage, but it also increases the chance of not reporting an
error: 4-bit tags give a ~7% chance that two allocations share
the same tag, in which case the error goes unreported.

- Address masking mechanism
- TBI in arm64 allows for storing metadata in the top 8 bits of
the virtual address.
- LAM on x86 allows storing tags in bits [62:57] of the pointer.
To maximize memory savings the tag width is reduced to bits
[60:57] (a packing sketch follows this list).

- Inline mode mismatch reporting
- Arm64 inserts a BRK instruction to pass metadata about a tag
mismatch to the KASAN report.
- On x86 the INT3 instruction is used for the same purpose.
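
For illustration, pointer-tag packing under LAM as described above could
look roughly like this (the macro and helper names are placeholders, not
the series' exact definitions; resetting sets the tag bits back to 1s,
their canonical value for kernel addresses):

	#define KASAN_TAG_SHIFT		57
	#define KASAN_TAG_MASK		(0xfUL << KASAN_TAG_SHIFT)

	static inline void *tag_set(void *addr, u8 tag)
	{
		return (void *)(((unsigned long)addr & ~KASAN_TAG_MASK) |
				(((unsigned long)tag & 0xf) << KASAN_TAG_SHIFT));
	}

	static inline void *tag_reset(void *addr)
	{
		return (void *)((unsigned long)addr | KASAN_TAG_MASK);
	}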

======= Testing
Checked all the kunits for both software tags and generic KASAN after
making changes.

In generic mode the results were:

kasan: pass:59 fail:0 skip:13 total:72
Totals: pass:59 fail:0 skip:13 total:72
ok 1 kasan

and for software tags:

kasan: pass:63 fail:0 skip:9 total:72
Totals: pass:63 fail:0 skip:9 total:72
ok 1 kasan

======= Benchmarks [1]
All tests were run on a Sierra Forest server platform. The only
differences between the tests were kernel options:
- CONFIG_KASAN
- CONFIG_KASAN_GENERIC
- CONFIG_KASAN_SW_TAGS
- CONFIG_KASAN_INLINE [1]
- CONFIG_KASAN_OUTLINE

Boot time (until login prompt):
* 02:55 for clean kernel
* 05:42 / 06:32 for generic KASAN (inline/outline)
* 05:58 for tag-based KASAN (outline) [2]

Total memory usage (512GB present on the system - MemAvailable just
after boot):
* 12.56 GB for clean kernel
* 81.74 GB for generic KASAN
* 44.39 GB for tag-based KASAN

Kernel size:
* 14 MB for clean kernel
* 24.7 MB / 19.5 MB for generic KASAN (inline/outline)
* 27.1 MB / 18.1 MB for tag-based KASAN (inline/outline)

Work-under-load time comparison (compiling the mainline kernel, 200 cores):
* 62s for clean kernel
* 171s / 125s for generic KASAN (outline/inline)
* 145s for tag-based KASAN (outline) [2]

[1] Currently inline mode doesn't work on x86 due to missing support in
the compiler. I have written a patch for clang that seems to fix the
inline mode and I was able to boot and check that all patches regarding
the inline mode work as expected. My hope is to post the patch to LLVM
once this series is completed, and then make inline mode available in
the kernel config.

[2] While I was able to boot the inline tag-based kernel with my
compiler changes in a simulated environment, I couldn't get it to boot
on the machine I had access to due to toolchain difficulties.
Also, the boot time results from the simulation seem too good to be true,
and they're much too poor for the generic case to be believable. Therefore
I'm posting only results from the physical server platform.

======= Compilation
Clang was used to compile the series (make LLVM=1) since gcc doesn't
seem to have support for KASAN tag-based compiler instrumentation on
x86.

======= Dependencies
The base branch for the series is the mainline kernel, tag 6.17-rc3.

======= Enabling LAM for testing
Since LASS is needed for LAM and it can't be compiled without it I
applied the LASS series [1] first, then applied my patches.

[1] https://lore.kernel.org/all/20250707080317.37916...@linux.intel.com/

Changes v5:
- Fix a bunch of arm64 compilation errors I didn't catch earlier.
Thank you, Ada, for testing the series!
- Simplify the usage of the tag handling x86 functions (virt_to_page,
phys_addr etc.).
- Remove within() and within_range() from the EXECMEM_ROX patch.
- Count the time it takes to compile a kernel when running kernels with generic
KASAN, tag-based KASAN and a clean kernel. Put the data in the cover letter
benchmark section.

Maciej Wieczor-Retman (17):
kasan: Fix inline mode for x86 tag-based mode
x86: Add arch specific kasan functions
kasan: arm64: x86: Make special tags arch specific
x86: Reset tag for virtual to physical address conversions
mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic
x86: Physical address comparisons in fill_p*d/pte
x86: KASAN raw shadow memory PTE init
x86: LAM compatible non-canonical definition
x86: LAM initialization
x86: Minimal SLAB alignment
kasan: x86: Handle int3 for inline KASAN reports
arm64: Unify software tag-based KASAN inline recovery path
kasan: x86: Apply multishot to the inline report handler
kasan: x86: Logical bit shift for kasan_mem_to_shadow
mm: Unpoison pcpu chunks with base address tag
mm: Unpoison vms[area] addresses with a common tag
x86: Make software tag-based kasan available

Samuel Holland (2):
kasan: sw_tags: Use arithmetic shift for shadow computation
kasan: sw_tags: Support tag widths less than 8 bits

Documentation/arch/arm64/kasan-offsets.sh | 8 ++-
Documentation/arch/x86/x86_64/mm.rst | 6 +-
MAINTAINERS | 4 +-
arch/arm64/Kconfig | 10 ++--
arch/arm64/include/asm/kasan-tags.h | 13 +++++
arch/arm64/include/asm/kasan.h | 2 -
arch/arm64/include/asm/memory.h | 14 ++++-
arch/arm64/include/asm/uaccess.h | 1 +
arch/arm64/kernel/traps.c | 17 +-----
arch/arm64/mm/kasan_init.c | 7 ++-
arch/x86/Kconfig | 4 +-
arch/x86/boot/compressed/misc.h | 1 +
arch/x86/include/asm/cache.h | 4 ++
arch/x86/include/asm/kasan-tags.h | 9 +++
arch/x86/include/asm/kasan.h | 71 ++++++++++++++++++++++-
arch/x86/include/asm/page.h | 18 ++++++
arch/x86/include/asm/page_64.h | 1 +
arch/x86/kernel/alternative.c | 4 +-
arch/x86/kernel/head_64.S | 3 +
arch/x86/kernel/setup.c | 2 +
arch/x86/kernel/traps.c | 4 ++
arch/x86/mm/Makefile | 2 +
arch/x86/mm/init.c | 3 +
arch/x86/mm/init_64.c | 11 ++--
arch/x86/mm/kasan_init_64.c | 19 +++++-
arch/x86/mm/kasan_inline.c | 26 +++++++++
arch/x86/mm/physaddr.c | 2 +
include/linux/kasan-tags.h | 21 +++++--
include/linux/kasan.h | 51 +++++++++++++++-
include/linux/mm.h | 6 +-
include/linux/mmzone.h | 1 -
include/linux/page-flags-layout.h | 9 +--
lib/Kconfig.kasan | 3 +-
mm/execmem.c | 2 +-
mm/kasan/hw_tags.c | 11 ++++
mm/kasan/report.c | 45 ++++++++++++--
mm/kasan/shadow.c | 18 ++++++
mm/vmalloc.c | 6 +-
scripts/Makefile.kasan | 3 +
scripts/gdb/linux/kasan.py | 5 +-
scripts/gdb/linux/mm.py | 5 +-
41 files changed, 375 insertions(+), 77 deletions(-)
mode change 100644 => 100755 Documentation/arch/arm64/kasan-offsets.sh
create mode 100644 arch/arm64/include/asm/kasan-tags.h
create mode 100644 arch/x86/include/asm/kasan-tags.h
create mode 100644 arch/x86/mm/kasan_inline.c

--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:25:57 PM
to sohil...@intel.com, …
From: Samuel Holland <samuel....@sifive.com>

Currently, kasan_mem_to_shadow() uses a logical right shift, which turns
canonical kernel addresses into non-canonical addresses by clearing the
high KASAN_SHADOW_SCALE_SHIFT bits. The value of KASAN_SHADOW_OFFSET is
then chosen so that the addition results in a canonical address for the
shadow memory.

For KASAN_GENERIC, this shift/add combination is ABI with the compiler,
because KASAN_SHADOW_OFFSET is used in compiler-generated inline tag
checks[1], which must only attempt to dereference canonical addresses.

However, for KASAN_SW_TAGS we have some freedom to change the algorithm
without breaking the ABI. Because TBI is enabled for kernel addresses,
the top bits of shadow memory addresses computed during tag checks are
irrelevant, and so likewise are the top bits of KASAN_SHADOW_OFFSET.
This is demonstrated by the fact that LLVM uses a logical right shift
in the tag check fast path[2] but a sbfx (signed bitfield extract)
instruction in the slow path[3] without causing any issues.

Using an arithmetic shift in kasan_mem_to_shadow() provides a number of
benefits:

1) The memory layout doesn't change, but it becomes easier to understand.
KASAN_SHADOW_OFFSET becomes a canonical memory address, and the shifted
pointer becomes a negative offset, so KASAN_SHADOW_OFFSET ==
KASAN_SHADOW_END regardless of the shift amount or the size of the
virtual address space.

2) KASAN_SHADOW_OFFSET becomes a simpler constant, requiring only one
instruction to load instead of two. Since it must be loaded in each
function with a tag check, this decreases kernel text size by 0.5%.

3) This shift and the sign extension from kasan_reset_tag() can be
combined into a single sbfx instruction. When this same algorithm change
is applied to the compiler, it removes an instruction from each inline
tag check, further reducing kernel text size by an additional 4.6%.

These benefits extend to other architectures as well. On RISC-V, where
the baseline ISA has neither shifted addition nor an equivalent of the
sbfx instruction, loading KASAN_SHADOW_OFFSET is reduced from 3 to 2
instructions, and kasan_mem_to_shadow(kasan_reset_tag(addr)) similarly
combines two consecutive right shifts.
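
To make the effect concrete, here is a minimal user-space sketch (not the
kernel implementation): it uses the software tag-based scale shift of 4 and
the 0xffff800000000000 offset from the arm64 Kconfig change below, and shows
that with an arithmetic shift every kernel address maps below the offset, so
KASAN_SHADOW_OFFSET doubles as KASAN_SHADOW_END.

#include <stdio.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 4
#define KASAN_SHADOW_OFFSET 0xffff800000000000ULL	/* VA_BITS_48 SW_TAGS value */

static uint64_t mem_to_shadow_signed(uint64_t addr)
{
	/*
	 * Arithmetic shift: a negative (kernel) address stays negative.
	 * Relies on GCC/Clang performing an arithmetic right shift on
	 * signed values.
	 */
	return KASAN_SHADOW_OFFSET +
	       (uint64_t)((int64_t)addr >> KASAN_SHADOW_SCALE_SHIFT);
}

int main(void)
{
	/* The very top of the address space maps just below the offset ... */
	printf("shadow(~0UL)    = 0x%016llx\n",
	       (unsigned long long)mem_to_shadow_signed(~0ULL));
	/* ... and the lowest kernel address maps the furthest below it. */
	printf("shadow(1UL<<63) = 0x%016llx\n",
	       (unsigned long long)mem_to_shadow_signed(1ULL << 63));
	return 0;
}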

Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp#L1316 [1]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#L895 [2]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp#L669 [3]
Signed-off-by: Samuel Holland <samuel....@sifive.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v5: (Maciej)
- (u64) -> (unsigned long) in report.c

Changelog v4: (Maciej)
- Revert x86 to signed mem_to_shadow mapping.
- Remove last two paragraphs since they were just poorer duplication of
the comments in kasan_non_canonical_hook().

Changelog v3: (Maciej)
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
account.
- Made changes to the kasan_non_canonical_hook() according to upstream
discussion. Settled on overflow on both ranges and separate checks for
x86 and arm.

Changelog v2: (Maciej)
- Correct address range that's checked in kasan_non_canonical_hook().
Adjust the comment inside.
- Remove part of comment from arch/arm64/include/asm/memory.h.
- Append patch message paragraph about the overflow in
kasan_non_canonical_hook().

Documentation/arch/arm64/kasan-offsets.sh | 8 +++--
arch/arm64/Kconfig | 10 +++----
arch/arm64/include/asm/memory.h | 14 ++++++++-
arch/arm64/mm/kasan_init.c | 7 +++--
include/linux/kasan.h | 10 +++++--
mm/kasan/report.c | 36 ++++++++++++++++++++---
scripts/gdb/linux/kasan.py | 3 ++
scripts/gdb/linux/mm.py | 5 ++--
8 files changed, 75 insertions(+), 18 deletions(-)
mode change 100644 => 100755 Documentation/arch/arm64/kasan-offsets.sh

diff --git a/Documentation/arch/arm64/kasan-offsets.sh b/Documentation/arch/arm64/kasan-offsets.sh
old mode 100644
new mode 100755
index 2dc5f9e18039..ce777c7c7804
--- a/Documentation/arch/arm64/kasan-offsets.sh
+++ b/Documentation/arch/arm64/kasan-offsets.sh
@@ -5,8 +5,12 @@

print_kasan_offset () {
printf "%02d\t" $1
- printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
- - (1 << (64 - 32 - $2)) ))
+ if [[ $2 -ne 4 ]]; then
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+ - (1 << (64 - 32 - $2)) ))
+ else
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) ))
+ fi
}

echo KASAN_SHADOW_SCALE_SHIFT = 3
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e9bbfacc35a6..82cbfc7d1233 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -431,11 +431,11 @@ config KASAN_SHADOW_OFFSET
default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
- default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
- default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
- default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
- default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
- default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+ default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
+ default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
+ default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+ default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+ default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
default 0xffffffffffffffff

config UNWIND_TABLES
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 5213248e081b..277d56ceeb01 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -89,7 +89,15 @@
*
* KASAN_SHADOW_END is defined first as the shadow address that corresponds to
* the upper bound of possible virtual kernel memory addresses UL(1) << 64
- * according to the mapping formula.
+ * according to the mapping formula. For Generic KASAN, the address in the
+ * mapping formula is treated as unsigned (part of the compiler's ABI), so the
+ * end of the shadow memory region is at a large positive offset from
+ * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
+ * formula is treated as signed. Since all kernel addresses are negative, they
+ * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
+ * itself the end of the shadow memory region. (User pointers are positive and
+ * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
+ * not allocated for them.)
*
* KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
* memory start must map to the lowest possible kernel virtual memory address
@@ -100,7 +108,11 @@
*/
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_GENERIC
#define KASAN_SHADOW_END ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
+#else
+#define KASAN_SHADOW_END KASAN_SHADOW_OFFSET
+#endif
#define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
#define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual)
#define PAGE_END KASAN_SHADOW_START
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d541ce45daeb..dc2de12c4f26 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
/* The early shadow maps everything to a single page of zeroes */
asmlinkage void __init kasan_early_init(void)
{
- BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
- KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
+ KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ else
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 890011071f2b..b396feca714f 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -61,8 +61,14 @@ int kasan_populate_early_shadow(const void *shadow_start,
#ifndef kasan_mem_to_shadow
static inline void *kasan_mem_to_shadow(const void *addr)
{
- return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
- + KASAN_SHADOW_OFFSET;
+ void *scaled;
+
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+ else
+ scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+
+ return KASAN_SHADOW_OFFSET + scaled;
}
#endif

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 62c01b4527eb..50d487a0687a 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,11 +642,39 @@ void kasan_non_canonical_hook(unsigned long addr)
const char *bug_type;

/*
- * All addresses that came as a result of the memory-to-shadow mapping
- * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
+ * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+ * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
+ * both x86 and arm64). Thus, the possible shadow addresses (even for
+ * bogus pointers) belong to a single contiguous region that is the
+ * result of kasan_mem_to_shadow() applied to the whole address space.
*/
- if (addr < KASAN_SHADOW_OFFSET)
- return;
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+ if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0UL)) ||
+ addr > (unsigned long)kasan_mem_to_shadow((void *)(~0UL)))
+ return;
+ }
+
+ /*
+ * For Software Tag-Based KASAN, kasan_mem_to_shadow() uses the
+ * arithmetic shift. Normally, this would make checking for a possible
+ * shadow address complicated, as the shadow address computation
+ * operation would overflow only for some memory addresses. However, due
+ * to the chosen KASAN_SHADOW_OFFSET values and the fact the
+ * kasan_mem_to_shadow() only operates on pointers with the tag reset,
+ * the overflow always happens.
+ *
+ * For arm64, the top byte of the pointer gets reset to 0xFF. Thus, the
+ * possible shadow addresses belong to a region that is the result of
+ * kasan_mem_to_shadow() applied to the memory range
+ * [0xFF000000000000, 0xFFFFFFFFFFFFFFFF]. Despite the overflow, the
+ * resulting possible shadow region is contiguous, as the overflow
+ * happens for both 0xFF000000000000 and 0xFFFFFFFFFFFFFFFF.
+ */
+ if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) && IS_ENABLED(CONFIG_ARM64)) {
+ if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0xFFUL << 56)) ||
+ addr > (unsigned long)kasan_mem_to_shadow((void *)(~0UL)))
+ return;
+ }

orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);

diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index 56730b3fde0b..fca39968d308 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -8,6 +8,7 @@

import gdb
from linux import constants, mm
+from ctypes import c_int64 as s64

def help():
t = """Usage: lx-kasan_mem_to_shadow [Hex memory addr]
@@ -39,6 +40,8 @@ class KasanMemToShadow(gdb.Command):
else:
help()
def kasan_mem_to_shadow(self, addr):
+ if constants.CONFIG_KASAN_SW_TAGS:
+ addr = s64(addr).value
return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET

KasanMemToShadow()
diff --git a/scripts/gdb/linux/mm.py b/scripts/gdb/linux/mm.py
index 7571aebbe650..2e63f3dedd53 100644
--- a/scripts/gdb/linux/mm.py
+++ b/scripts/gdb/linux/mm.py
@@ -110,12 +110,13 @@ class aarch64_page_ops():
self.KERNEL_END = gdb.parse_and_eval("_end")

if constants.LX_CONFIG_KASAN_GENERIC or constants.LX_CONFIG_KASAN_SW_TAGS:
+ self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
if constants.LX_CONFIG_KASAN_GENERIC:
self.KASAN_SHADOW_SCALE_SHIFT = 3
+ self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
else:
self.KASAN_SHADOW_SCALE_SHIFT = 4
- self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
- self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
+ self.KASAN_SHADOW_END = self.KASAN_SHADOW_OFFSET
self.PAGE_END = self.KASAN_SHADOW_END - (1 << (self.vabits_actual - self.KASAN_SHADOW_SCALE_SHIFT))
else:
self.PAGE_END = self._PAGE_END(self.VA_BITS_MIN)
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:26:19 PM
From: Samuel Holland <samuel....@sifive.com>

Allow architectures to override KASAN_TAG_KERNEL in asm/kasan.h. This
is needed on RISC-V, which supports 57-bit virtual addresses and 7-bit
pointer tags. For consistency, move the arm64 MTE definition of
KASAN_TAG_MIN to asm/kasan.h, since it is also architecture-dependent;
RISC-V's equivalent extension is expected to support 7-bit hardware
memory tags.

Reviewed-by: Andrey Konovalov <andre...@gmail.com>
Signed-off-by: Samuel Holland <samuel....@sifive.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
arch/arm64/include/asm/kasan.h | 6 ++++--
arch/arm64/include/asm/uaccess.h | 1 +
include/linux/kasan-tags.h | 13 ++++++++-----
3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index e1b57c13f8a4..4ab419df8b93 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -6,8 +6,10 @@

#include <linux/linkage.h>
#include <asm/memory.h>
-#include <asm/mte-kasan.h>
-#include <asm/pgtable-types.h>
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
+#endif

#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
#define arch_kasan_reset_tag(addr) __tag_reset(addr)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5b91803201ef..f890dadc7b4e 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -22,6 +22,7 @@
#include <asm/cpufeature.h>
#include <asm/mmu.h>
#include <asm/mte.h>
+#include <asm/mte-kasan.h>
#include <asm/ptrace.h>
#include <asm/memory.h>
#include <asm/extable.h>
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index 4f85f562512c..e07c896f95d3 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,13 +2,16 @@
#ifndef _LINUX_KASAN_TAGS_H
#define _LINUX_KASAN_TAGS_H

+#include <asm/kasan.h>
+
+#ifndef KASAN_TAG_KERNEL
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
-#define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */
-#define KASAN_TAG_MAX 0xFD /* maximum value for random tags */
+#endif
+
+#define KASAN_TAG_INVALID (KASAN_TAG_KERNEL - 1) /* inaccessible memory tag */
+#define KASAN_TAG_MAX (KASAN_TAG_KERNEL - 2) /* maximum value for random tags */

-#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
-#else
+#ifndef KASAN_TAG_MIN
#define KASAN_TAG_MIN 0x00 /* minimum value for random tags */
#endif

--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:26:37 PM
The LLVM compiler uses the hwasan-instrument-with-calls parameter to select
between inline and outline mode in tag-based KASAN. If set to zero, the
instrumentation is emitted inline at each relevant location, together with
the KASAN related constants, during compilation. If set to one, all
instrumentation is done through function calls instead.

The compiler's default hwasan-instrument-with-calls value is "1" on the x86
architecture, unlike on other architectures. Because of this, enabling
inline mode in software tag-based KASAN doesn't work on x86: the kernel
build script never zeroes the parameter and always ends up with outline
mode.

Explicitly zero out hwasan-instrument-with-calls when enabling inline
mode in tag-based KASAN.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v3:
- Add this patch to the series.

scripts/Makefile.kasan | 3 +++
1 file changed, 3 insertions(+)

diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 693dbbebebba..2c7be96727ac 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -76,8 +76,11 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress
RUSTFLAGS_KASAN := -Zsanitizer=kernel-hwaddress \
-Zsanitizer-recover=kernel-hwaddress

+# LLVM sets hwasan-instrument-with-calls to 1 on x86 by default. Set it to 0
+# when inline mode is enabled.
ifdef CONFIG_KASAN_INLINE
kasan_params += hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
+ kasan_params += hwasan-instrument-with-calls=0
else
kasan_params += hwasan-instrument-with-calls=1
endif
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:26:59 PM
KASAN's software tag-based mode needs multiple macros/functions to
handle tag and pointer interactions - to set, retrieve and reset tags
from the top bits of a pointer.

Mimic the functions currently used by arm64, but place the tag in bits
[60:57] of the pointer.
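
The diff below contains the actual macros; as an illustration only, a small
user-space sketch of the same bit layout (4-bit tag in bits [60:57], kernel
pointers sign-extended back from bit 56) could look like this:

#include <stdio.h>
#include <stdint.h>

#define TAG_SHIFT	57
#define TAG_MASK	0xfULL

static uint64_t tag_set(uint64_t addr, uint8_t tag)
{
	addr &= ~(TAG_MASK << TAG_SHIFT);	/* clear bits [60:57] */
	return addr | ((uint64_t)(tag & TAG_MASK) << TAG_SHIFT);
}

static uint8_t tag_get(uint64_t addr)
{
	return (addr >> TAG_SHIFT) & TAG_MASK;
}

static uint64_t tag_reset(uint64_t addr)
{
	/*
	 * Sign-extend from bit 56 so a kernel pointer becomes canonical
	 * again (relies on GCC/Clang arithmetic right shift of signed values).
	 */
	return (uint64_t)((int64_t)(addr << 7) >> 7);
}

int main(void)
{
	uint64_t ptr = 0xffffffff81000000ULL;	/* example kernel address */
	uint64_t tagged = tag_set(ptr, 0x5);

	printf("tagged = 0x%016llx, tag = %#x\n",
	       (unsigned long long)tagged, tag_get(tagged));
	printf("reset  = 0x%016llx\n", (unsigned long long)tag_reset(tagged));
	return 0;
}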

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Rewrite __tag_set() without pointless casts and make it more readable.

Changelog v3:
- Reorder functions so that __tag_*() etc are above the
arch_kasan_*() ones.
- Remove CONFIG_KASAN condition from __tag_set()

arch/x86/include/asm/kasan.h | 36 ++++++++++++++++++++++++++++++++++--
1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index d7e33c7f096b..1963eb2fcff3 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -3,6 +3,8 @@
#define _ASM_X86_KASAN_H

#include <linux/const.h>
+#include <linux/kasan-tags.h>
+#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
#define KASAN_SHADOW_SCALE_SHIFT 3

@@ -24,8 +26,37 @@
KASAN_SHADOW_SCALE_SHIFT)))

#ifndef __ASSEMBLER__
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+
+#ifdef CONFIG_KASAN_SW_TAGS
+
+#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
+#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
+#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+#else
+#define __tag_shifted(tag) 0UL
+#define __tag_reset(addr) (addr)
+#define __tag_get(addr) 0
+#endif /* CONFIG_KASAN_SW_TAGS */
+
+static inline void *__tag_set(const void *__addr, u8 tag)
+{
+ u64 addr = (u64)__addr;
+
+ addr &= ~__tag_shifted(KASAN_TAG_MASK);
+ addr |= __tag_shifted(tag);
+
+ return (void *)addr;
+}
+
+#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
+#define arch_kasan_reset_tag(addr) __tag_reset(addr)
+#define arch_kasan_get_tag(addr) __tag_get(addr)

#ifdef CONFIG_KASAN
+
void __init kasan_early_init(void);
void __init kasan_init(void);
void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
@@ -34,8 +65,9 @@ static inline void kasan_early_init(void) { }
static inline void kasan_init(void) { }
static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
int nid) { }
-#endif

-#endif
+#endif /* CONFIG_KASAN */
+
+#endif /* __ASSEMBLER__ */

#endif
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:27:21 PM
KASAN's tag-based mode defines multiple special tag values. They're
reserved for:
- Native kernel value. On arm64 it's 0xFF and it causes an early return
in the tag checking function.
- Invalid value. 0xFE marks an area as freed / unallocated. It's also
the value that is used to initialize regions of shadow memory.
- Max value. 0xFD is the highest value that can be randomly generated
for a new tag.

A metadata macro is also defined:
- Tag width equal to 8.

The tag-based mode on x86 is going to use 4-bit wide tags, so all the above
values need to be changed accordingly.

Make the native kernel tag arch-specific for x86 and arm64.

Replace the hardcoded kernel tag value and tag width with macros in KASAN's
non-arch-specific code.
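
As an illustration of the derivation (KASAN_TAG_INVALID = KASAN_TAG_KERNEL - 1,
KASAN_TAG_MAX = KASAN_TAG_KERNEL - 2), a small user-space sketch that prints
the resulting values for both tag widths:

#include <stdio.h>

static void print_tags(const char *arch, unsigned int width)
{
	unsigned int kernel = (1U << width) - 1;	/* all-ones native tag */

	printf("%s: width=%u kernel=%#x invalid=%#x max=%#x\n",
	       arch, width, kernel, kernel - 1, kernel - 2);
}

int main(void)
{
	print_tags("arm64", 8);	/* 0xFF / 0xFE / 0xFD */
	print_tags("x86", 4);	/* 0xF  / 0xE  / 0xD  */
	return 0;
}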

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v5:
- Move KASAN_TAG_MIN to the arm64 kasan-tags.h for the hardware KASAN
mode case.

Changelog v4:
- Move KASAN_TAG_MASK to kasan-tags.h.

Changelog v2:
- Remove risc-v from the patch.

MAINTAINERS | 2 +-
arch/arm64/include/asm/kasan-tags.h | 13 +++++++++++++
arch/arm64/include/asm/kasan.h | 4 ----
arch/x86/include/asm/kasan-tags.h | 9 +++++++++
include/linux/kasan-tags.h | 10 +++++++++-
include/linux/kasan.h | 4 +++-
include/linux/mm.h | 6 +++---
include/linux/mmzone.h | 1 -
include/linux/page-flags-layout.h | 9 +--------
9 files changed, 39 insertions(+), 19 deletions(-)
create mode 100644 arch/arm64/include/asm/kasan-tags.h
create mode 100644 arch/x86/include/asm/kasan-tags.h

diff --git a/MAINTAINERS b/MAINTAINERS
index fed6cd812d79..788532771832 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13176,7 +13176,7 @@ L: kasa...@googlegroups.com
S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kasan.rst
-F: arch/*/include/asm/*kasan.h
+F: arch/*/include/asm/*kasan*.h
F: arch/*/mm/kasan_init*
F: include/linux/kasan*.h
F: lib/Kconfig.kasan
diff --git a/arch/arm64/include/asm/kasan-tags.h b/arch/arm64/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..152465d03508
--- /dev/null
+++ b/arch/arm64/include/asm/kasan-tags.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH 8
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
+#endif
+
+#endif /* ASM_KASAN_TAGS_H */
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 4ab419df8b93..d2841e0fb908 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -7,10 +7,6 @@
#include <linux/linkage.h>
#include <asm/memory.h>

-#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
-#endif
-
#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
#define arch_kasan_reset_tag(addr) __tag_reset(addr)
#define arch_kasan_get_tag(addr) __tag_get(addr)
diff --git a/arch/x86/include/asm/kasan-tags.h b/arch/x86/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..68ba385bc75c
--- /dev/null
+++ b/arch/x86/include/asm/kasan-tags.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL 0xF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH 4
+
+#endif /* ASM_KASAN_TAGS_H */
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index e07c896f95d3..fe80fa8f3315 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,7 +2,15 @@
#ifndef _LINUX_KASAN_TAGS_H
#define _LINUX_KASAN_TAGS_H

-#include <asm/kasan.h>
+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+#include <asm/kasan-tags.h>
+#endif
+
+#ifndef KASAN_TAG_WIDTH
+#define KASAN_TAG_WIDTH 0
+#endif
+
+#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1)

#ifndef KASAN_TAG_KERNEL
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b396feca714f..54481f8c30c5 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -40,7 +40,9 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;

#ifdef CONFIG_KASAN_SW_TAGS
/* This matches KASAN_TAG_INVALID. */
-#define KASAN_SHADOW_INIT 0xFE
+#ifndef KASAN_SHADOW_INIT
+#define KASAN_SHADOW_INIT KASAN_TAG_INVALID
+#endif
#else
#define KASAN_SHADOW_INIT 0
#endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ae97a0b8ec7..bb494cb1d5af 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1692,7 +1692,7 @@ static inline u8 page_kasan_tag(const struct page *page)

if (kasan_enabled()) {
tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
- tag ^= 0xff;
+ tag ^= KASAN_TAG_KERNEL;
}

return tag;
@@ -1705,7 +1705,7 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
if (!kasan_enabled())
return;

- tag ^= 0xff;
+ tag ^= KASAN_TAG_KERNEL;
old_flags = READ_ONCE(page->flags);
do {
flags = old_flags;
@@ -1724,7 +1724,7 @@ static inline void page_kasan_tag_reset(struct page *page)

static inline u8 page_kasan_tag(const struct page *page)
{
- return 0xff;
+ return KASAN_TAG_KERNEL;
}

static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0c5da9141983..c139fb3d862d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1166,7 +1166,6 @@ static inline bool zone_is_empty(struct zone *zone)
#define NODES_MASK ((1UL << NODES_WIDTH) - 1)
#define SECTIONS_MASK ((1UL << SECTIONS_WIDTH) - 1)
#define LAST_CPUPID_MASK ((1UL << LAST_CPUPID_SHIFT) - 1)
-#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1)
#define ZONEID_MASK ((1UL << ZONEID_SHIFT) - 1)

static inline enum zone_type page_zonenum(const struct page *page)
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 760006b1c480..b2cc4cb870e0 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -3,6 +3,7 @@
#define PAGE_FLAGS_LAYOUT_H

#include <linux/numa.h>
+#include <linux/kasan-tags.h>
#include <generated/bounds.h>

/*
@@ -72,14 +73,6 @@
#define NODE_NOT_IN_PAGE_FLAGS 1
#endif

-#if defined(CONFIG_KASAN_SW_TAGS)
-#define KASAN_TAG_WIDTH 8
-#elif defined(CONFIG_KASAN_HW_TAGS)
-#define KASAN_TAG_WIDTH 4
-#else
-#define KASAN_TAG_WIDTH 0
-#endif
-
#ifdef CONFIG_NUMA_BALANCING
#define LAST__PID_SHIFT 8
#define LAST__PID_MASK ((1 << LAST__PID_SHIFT)-1)
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:27:40 PM
Any place where pointer arithmetic is used to convert a virtual address
into a physical one can raise errors if the virtual address is tagged.

Reset the pointer's tag by sign-extending the tag bits in the macros that do
pointer arithmetic for address conversions. With KASAN disabled there is no
change in the compiled code, since the compiler optimizes the __tag_reset()
away.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v5:
- Move __tag_reset() calls into __phys_addr_nodebug() and
__virt_addr_valid() instead of calling it on the arguments of higher
level functions.

Changelog v4:
- Simplify page_to_virt() by removing pointless casts.
- Remove change in __is_canonical_address() because it's taken care of
in a later patch due to a LAM compatible definition of canonical.

arch/x86/include/asm/page.h | 8 ++++++++
arch/x86/include/asm/page_64.h | 1 +
arch/x86/mm/physaddr.c | 2 ++
3 files changed, 11 insertions(+)

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 9265f2fca99a..bcf5cad3da36 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -7,6 +7,7 @@
#ifdef __KERNEL__

#include <asm/page_types.h>
+#include <asm/kasan.h>

#ifdef CONFIG_X86_64
#include <asm/page_64.h>
@@ -65,6 +66,13 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
* virt_to_page(kaddr) returns a valid pointer if and only if
* virt_addr_valid(kaddr) returns true.
*/
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define page_to_virt(x) ({ \
+ void *__addr = __va(page_to_pfn((struct page *)x) << PAGE_SHIFT); \
+ __tag_set(__addr, page_kasan_tag(x)); \
+})
+#endif
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
extern bool __virt_addr_valid(unsigned long kaddr);
#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long) (kaddr))
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 015d23f3e01f..b18fef43dd34 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -21,6 +21,7 @@ extern unsigned long direct_map_physmem_end;

static __always_inline unsigned long __phys_addr_nodebug(unsigned long x)
{
+ x = __tag_reset(x);
unsigned long y = x - __START_KERNEL_map;

/* use the carry flag to determine if x was < __START_KERNEL_map */
diff --git a/arch/x86/mm/physaddr.c b/arch/x86/mm/physaddr.c
index fc3f3d3e2ef2..d6aa3589c798 100644
--- a/arch/x86/mm/physaddr.c
+++ b/arch/x86/mm/physaddr.c
@@ -14,6 +14,7 @@
#ifdef CONFIG_DEBUG_VIRTUAL
unsigned long __phys_addr(unsigned long x)
{
+ x = __tag_reset(x);
unsigned long y = x - __START_KERNEL_map;

/* use the carry flag to determine if x was < __START_KERNEL_map */
@@ -46,6 +47,7 @@ EXPORT_SYMBOL(__phys_addr_symbol);

bool __virt_addr_valid(unsigned long x)

Maciej Wieczor-Retman

Aug 25, 2025, 4:28:01 PM
ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release. The
related code has multiple spots where page virtual addresses end up used as
arguments of arithmetic operations. With tag-based KASAN enabled this can
result in pointers that don't point where they should, or in logical
operations that don't give the expected results.

vm_reset_perms() calculates the range's start and end addresses using the
min() and max() functions. To do that it compares pointers, but not all of
them are tagged - the addr variable is, while the start and end variables
aren't.

within() and within_range() can receive tagged addresses, which then get
compared to the untagged start and end variables.

Reset the tags in the addresses used as function arguments in min(), max()
and within().

execmem_cache_add() adds tagged pointers to a maple tree structure, which
are then compared incorrectly when walking the tree. That results in
different pointers being returned later and in page permission violation
errors panicking the kernel.

Reset tag of the address range inserted into the maple tree inside
execmem_cache_add().

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v5:
- Remove the within_range() change.
- arch_kasan_reset_tag -> kasan_reset_tag.

Changelog v4:
- Add patch to the series.

mm/execmem.c | 2 +-
mm/vmalloc.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/execmem.c b/mm/execmem.c
index 0822305413ec..f7b7bdacaec5 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -186,7 +186,7 @@ static DECLARE_WORK(execmem_cache_clean_work, execmem_cache_clean);
static int execmem_cache_add_locked(void *ptr, size_t size, gfp_t gfp_mask)
{
struct maple_tree *free_areas = &execmem_cache.free_areas;
- unsigned long addr = (unsigned long)ptr;
+ unsigned long addr = (unsigned long)kasan_reset_tag(ptr);
MA_STATE(mas, free_areas, addr - 1, addr + 1);
unsigned long lower, upper;
void *area = NULL;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6dbcdceecae1..c93893fb8dd4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3322,7 +3322,7 @@ static void vm_reset_perms(struct vm_struct *area)
* the vm_unmap_aliases() flush includes the direct map.
*/
for (i = 0; i < area->nr_pages; i += 1U << page_order) {
- unsigned long addr = (unsigned long)page_address(area->pages[i]);
+ unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));

if (addr) {
unsigned long page_size;
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:28:23 PM
Calculating a page offset returns a pointer without a tag. When the
calculated offset is compared to a tagged page pointer, an error is raised
because the two are not equal.

Change the pointer comparisons to physical address comparisons so as to
avoid the issues with tagged pointers that pointer arithmetic would create.
Open code pte_offset_kernel(), pmd_offset(), pud_offset() and p4d_offset():
because one parameter is always zero and the rest of each function's body is
wrapped in __va(), removing that layer lowers the complexity of the final
assembly.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v2:
- Open code *_offset() to avoid it's internal __va().

arch/x86/mm/init_64.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 76e33bd7c556..51a247e258b1 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -251,7 +251,10 @@ static p4d_t *fill_p4d(pgd_t *pgd, unsigned long vaddr)
if (pgd_none(*pgd)) {
p4d_t *p4d = (p4d_t *)spp_getpage();
pgd_populate(&init_mm, pgd, p4d);
- if (p4d != p4d_offset(pgd, 0))
+
+ if (__pa(p4d) != (pgtable_l5_enabled() ?
+ ((unsigned long)pgd_val(*pgd) & PTE_PFN_MASK) :
+ __pa(pgd)))
printk(KERN_ERR "PAGETABLE BUG #00! %p <-> %p\n",
p4d, p4d_offset(pgd, 0));
}
@@ -263,7 +266,7 @@ static pud_t *fill_pud(p4d_t *p4d, unsigned long vaddr)
if (p4d_none(*p4d)) {
pud_t *pud = (pud_t *)spp_getpage();
p4d_populate(&init_mm, p4d, pud);
- if (pud != pud_offset(p4d, 0))
+ if (__pa(pud) != (p4d_val(*p4d) & p4d_pfn_mask(*p4d)))
printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
pud, pud_offset(p4d, 0));
}
@@ -275,7 +278,7 @@ static pmd_t *fill_pmd(pud_t *pud, unsigned long vaddr)
if (pud_none(*pud)) {
pmd_t *pmd = (pmd_t *) spp_getpage();
pud_populate(&init_mm, pud, pmd);
- if (pmd != pmd_offset(pud, 0))
+ if (__pa(pmd) != (pud_val(*pud) & pud_pfn_mask(*pud)))
printk(KERN_ERR "PAGETABLE BUG #02! %p <-> %p\n",
pmd, pmd_offset(pud, 0));
}
@@ -287,7 +290,7 @@ static pte_t *fill_pte(pmd_t *pmd, unsigned long vaddr)
if (pmd_none(*pmd)) {
pte_t *pte = (pte_t *) spp_getpage();
pmd_populate_kernel(&init_mm, pmd, pte);
- if (pte != pte_offset_kernel(pmd, 0))
+ if (__pa(pte) != (pmd_val(*pmd) & pmd_pfn_mask(*pmd)))
printk(KERN_ERR "PAGETABLE BUG #03!\n");
}
return pte_offset_kernel(pmd, vaddr);
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:28:41 PM
In KASAN's generic mode the default value in shadow memory is zero: shadow
memory pages are allocated and zeroed during initialization.

In KASAN's tag-based mode the default tag on the arm64 architecture is 0xFE,
which marks memory that should not be accessed. On x86 (where tags are
4 bits wide instead of 8) that tag is 0xE, so during initialization all
bytes of a shadow memory page should be filled with it.
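
A trivial user-space sketch of the intended fill pattern (illustration only;
PAGE_SIZE and the 0xE invalid tag as described above):

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE		4096
#define KASAN_TAG_INVALID	0xE	/* 4-bit x86 invalid tag */

int main(void)
{
	unsigned char shadow[PAGE_SIZE];

	/*
	 * A freshly allocated shadow page is filled with the invalid tag, so
	 * any pointer tag mismatches until the memory is properly allocated
	 * and retagged.
	 */
	memset(shadow, KASAN_TAG_INVALID, sizeof(shadow));
	printf("shadow[0] = %#x\n", (unsigned int)shadow[0]);
	return 0;
}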

Use memblock_alloc_try_nid_raw() instead of memblock_alloc_try_nid() to
avoid zeroing out the memory so it can be set with the KASAN invalid
tag.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v2:
- Remove dense mode references, use memset() instead of kasan_poison().

arch/x86/mm/kasan_init_64.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0539efd0d216..e8a451cafc8c 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -34,6 +34,18 @@ static __init void *early_alloc(size_t size, int nid, bool should_panic)
return ptr;
}

+static __init void *early_raw_alloc(size_t size, int nid, bool should_panic)
+{
+ void *ptr = memblock_alloc_try_nid_raw(size, size,
+ __pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+
+ if (!ptr && should_panic)
+ panic("%pS: Failed to allocate page, nid=%d from=%lx\n",
+ (void *)_RET_IP_, nid, __pa(MAX_DMA_ADDRESS));
+
+ return ptr;
+}
+
static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
unsigned long end, int nid)
{
@@ -63,8 +75,9 @@ static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
if (!pte_none(*pte))
continue;

- p = early_alloc(PAGE_SIZE, nid, true);
- entry = pfn_pte(PFN_DOWN(__pa(p)), PAGE_KERNEL);
+ p = early_raw_alloc(PAGE_SIZE, nid, true);
+ memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);
+ entry = pfn_pte(PFN_DOWN(__pa_nodebug(p)), PAGE_KERNEL);
set_pte_at(&init_mm, addr, pte, entry);
} while (pte++, addr += PAGE_SIZE, addr != end);
}
@@ -436,7 +449,7 @@ void __init kasan_init(void)
* it may contain some garbage. Now we can clear and write protect it,
* since after the TLB flush no one should write to it.
*/
- memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+ memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
for (i = 0; i < PTRS_PER_PTE; i++) {
pte_t pte;
pgprot_t prot;
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:29:04 PM
For an address to be canonical, its top bits have to be equal to each other.
How many bits depends on the paging level, and whether they are supposed to
be ones or zeroes depends on whether the address points to kernel or user
space.

With Linear Address Masking (LAM) enabled, the definition of linear address
canonicality is modified: not all of the previously required bits need to be
equal, only the first and the last bit of the previously equal bitmask. So,
for example, a 5-level paging kernel address only needs to have bits [63]
and [56] set.
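
A user-space sketch of that relaxed rule (illustration only, assuming
LAM_SUP is active and 5-level paging, i.e. 57 address bits; not the kernel
implementation):

#include <stdio.h>
#include <stdint.h>

/*
 * With LAM, only bit 63 and bit (vaddr_bits - 1) of a kernel address are
 * checked; the metadata bits in between (the KASAN tag) are ignored.
 */
static uint64_t canonical_address_lam(uint64_t vaddr, uint8_t vaddr_bits)
{
	return vaddr | (1ULL << 63) | (1ULL << (vaddr_bits - 1));
}

int main(void)
{
	uint8_t vaddr_bits = 57;			/* 5-level paging */
	uint64_t tagged = 0xebffffff81000000ULL;	/* tag 0x5 in bits [60:57] */

	/* Canonical under LAM: forcing bits 63 and 56 changes nothing. */
	printf("canonical: %s\n",
	       canonical_address_lam(tagged, vaddr_bits) == tagged ? "yes" : "no");
	return 0;
}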

Add separate __canonical_address() implementation for
CONFIG_KASAN_SW_TAGS since it's the only thing right now that enables
LAM for kernel addresses (LAM_SUP bit in CR4).

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Add patch to the series.

arch/x86/include/asm/page.h | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index bcf5cad3da36..a83f23a71f35 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -82,10 +82,20 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
return __va(pfn << PAGE_SHIFT);
}

+/*
+ * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
+ */
+#ifdef CONFIG_KASAN_SW_TAGS
+static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
+{
+ return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
+}
+#else
static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
{
return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
}
+#endif

static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
{
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:29:25 PM
To make use of KASAN's tag-based mode on x86, Linear Address Masking (LAM)
needs to be enabled. To do that, bit 28 in CR4 has to be set.

Set the bit during early memory initialization.

When launching secondary CPUs the LAM bit gets lost. To avoid this, add it
to a mask in head_64.S. The bitmask permits some bits of CR4 to pass from
the primary CPU to the secondary CPUs without being cleared.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
arch/x86/kernel/head_64.S | 3 +++
arch/x86/mm/init.c | 3 +++
2 files changed, 6 insertions(+)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 3e9b3a3bd039..18ca77daa481 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -209,6 +209,9 @@ SYM_INNER_LABEL(common_startup_64, SYM_L_LOCAL)
* there will be no global TLB entries after the execution."
*/
movl $(X86_CR4_PAE | X86_CR4_LA57), %edx
+#ifdef CONFIG_ADDRESS_MASKING
+ orl $X86_CR4_LAM_SUP, %edx
+#endif
#ifdef CONFIG_X86_MCE
/*
* Preserve CR4.MCE if the kernel will enable #MC support.
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bb57e93b4caf..756bd96c3b8b 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -763,6 +763,9 @@ void __init init_mem_mapping(void)
probe_page_size_mask();
setup_pcid();

+ if (boot_cpu_has(X86_FEATURE_LAM) && IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ cr4_set_bits_and_update_boot(X86_CR4_LAM_SUP);
+
#ifdef CONFIG_X86_64
end = max_pfn << PAGE_SHIFT;
#else
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:29:47 PM
The 8-byte minimal SLAB alignment interferes with KASAN's 16-byte
granularity and causes a lot of out-of-bounds reports for unaligned
8-byte allocations.

Compared to a kernel with KASAN disabled, the memory footprint increases
because all kmalloc-8 allocations are now served from kmalloc-16, which
has twice the object size. More importantly, compared to a kernel with
generic KASAN enabled, there is no difference: because of generic KASAN's
redzones, the kmalloc-8 and kmalloc-16 object sizes are the same
(48 bytes). So changing the minimal SLAB alignment for the tag-based mode
has no negative impact relative to the other software KASAN mode.

Adjust x86 minimal SLAB alignment to match KASAN granularity size.
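
To illustrate the interference (a standalone sketch, not kernel code;
addresses are made up), two adjacent 8-byte objects packed into one
16-byte granule map to the same shadow byte, so only one of their tags
can be stored there:

/* KASAN software tag-based granule: 16 bytes. */
#define KASAN_GRANULE_SIZE 16

/* Two hypothetical 8-byte objects sharing one granule. */
unsigned long obj_a = 0xffff888000000100;	/* bytes 0x100..0x107 */
unsigned long obj_b = 0xffff888000000108;	/* bytes 0x108..0x10f */

/* Both resolve to the same shadow byte index, so their tags collide. */
unsigned long shadow_idx_a = obj_a / KASAN_GRANULE_SIZE;
unsigned long shadow_idx_b = obj_b / KASAN_GRANULE_SIZE;	/* == shadow_idx_a */

With ARCH_SLAB_MINALIGN raised to 16, every object starts on its own
granule and the collision cannot happen.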

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Extend the patch message with some more context and impact
information.

Changelog v3:
- Fix typo in patch message 4 -> 16.
- Change define location to arch/x86/include/asm/cache.h.

arch/x86/include/asm/cache.h | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
index 69404eae9983..3232583b5487 100644
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -21,4 +21,8 @@
#endif
#endif

+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
#endif /* _ASM_X86_CACHE_H */
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:30:10 PM
Inline KASAN on x86 reports tag mismatches by passing the faulty address
and access metadata through the INT3 instruction - a scheme set up in
LLVM's compiler code (specifically HWAddressSanitizer.cpp).

Add a kasan hook to the INT3 handling function.

Disable KASAN in an INT3 core kernel selftest function since it can raise
a false tag mismatch report and potentially panic the kernel.

Make part of that hook - which decides whether to die or recover from a
tag mismatch - arch independent to avoid duplicating a long comment on
both x86 and arm64 architectures.
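
For context, here is a minimal user-space sketch of how the RAX metadata
encoding added below in asm/kasan.h decodes (the macro values mirror this
patch; the RAX value itself is made up):

#include <stdio.h>

#define KASAN_RAX_RECOVER	0x20
#define KASAN_RAX_WRITE		0x10
#define KASAN_RAX_SIZE_MASK	0x0f
#define KASAN_RAX_SIZE(rax)	(1 << ((rax) & KASAN_RAX_SIZE_MASK))

int main(void)
{
	unsigned long rax = 0x33;	/* hypothetical metadata passed in RAX */

	/* Prints: size=8 write=1 recover=1 */
	printf("size=%d write=%d recover=%d\n",
	       KASAN_RAX_SIZE(rax),
	       !!(rax & KASAN_RAX_WRITE),
	       !!(rax & KASAN_RAX_RECOVER));
	return 0;
}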

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v5:
- Add die to argument list of kasan_inline_recover() in
arch/arm64/kernel/traps.c.

Changelog v4:
- Make kasan_handler() a stub in a header file. Remove #ifdef from
traps.c.
- Consolidate the "recover" comment into one place.
- Make small changes to the patch message.

MAINTAINERS | 2 +-
arch/x86/include/asm/kasan.h | 26 ++++++++++++++++++++++++++
arch/x86/kernel/alternative.c | 4 +++-
arch/x86/kernel/traps.c | 4 ++++
arch/x86/mm/Makefile | 2 ++
arch/x86/mm/kasan_inline.c | 23 +++++++++++++++++++++++
include/linux/kasan.h | 24 ++++++++++++++++++++++++
7 files changed, 83 insertions(+), 2 deletions(-)
create mode 100644 arch/x86/mm/kasan_inline.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 788532771832..f5b1ce242002 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13177,7 +13177,7 @@ S: Maintained
F: arch/*/include/asm/*kasan*.h
-F: arch/*/mm/kasan_init*
+F: arch/*/mm/kasan_*
F: include/linux/kasan*.h
F: lib/Kconfig.kasan
F: mm/kasan/
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 1963eb2fcff3..5bf38bb836e1 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -6,7 +6,28 @@
#include <linux/kasan-tags.h>
#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_SW_TAGS
+
+/*
+ * LLVM ABI for reporting tag mismatches in inline KASAN mode.
+ * On x86 the INT3 instruction is used to carry metadata in RAX
+ * to the KASAN report.
+ *
+ * SIZE refers to how many bytes the faulty memory access
+ * requested.
+ * WRITE bit, when set, indicates the access was a write, otherwise
+ * it was a read.
+ * RECOVER bit, when set, should allow the kernel to carry on after
+ * a tag mismatch. Otherwise die() is called.
+ */
+#define KASAN_RAX_RECOVER 0x20
+#define KASAN_RAX_WRITE 0x10
+#define KASAN_RAX_SIZE_MASK 0x0f
+#define KASAN_RAX_SIZE(rax) (1 << ((rax) & KASAN_RAX_SIZE_MASK))
+
+#else
#define KASAN_SHADOW_SCALE_SHIFT 3
+#endif

/*
* Compiler uses shadow offset assuming that addresses start
@@ -35,10 +56,15 @@
#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+bool kasan_inline_handler(struct pt_regs *regs);
#else
#define __tag_shifted(tag) 0UL
#define __tag_reset(addr) (addr)
#define __tag_get(addr) 0
+static inline bool kasan_inline_handler(struct pt_regs *regs)
+{
+ return false;
+}
#endif /* CONFIG_KASAN_SW_TAGS */

static inline void *__tag_set(const void *__addr, u8 tag)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 2a330566e62b..4cb085daad31 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -2228,7 +2228,7 @@ int3_exception_notify(struct notifier_block *self, unsigned long val, void *data
}

/* Must be noinline to ensure uniqueness of int3_selftest_ip. */
-static noinline void __init int3_selftest(void)
+static noinline __no_sanitize_address void __init int3_selftest(void)
{
static __initdata struct notifier_block int3_exception_nb = {
.notifier_call = int3_exception_notify,
@@ -2236,6 +2236,7 @@ static noinline void __init int3_selftest(void)
};
unsigned int val = 0;

+ kasan_disable_current();
BUG_ON(register_die_notifier(&int3_exception_nb));

/*
@@ -2253,6 +2254,7 @@ static noinline void __init int3_selftest(void)

BUG_ON(val != 1);

+ kasan_enable_current();
unregister_die_notifier(&int3_exception_nb);
}

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 0f6f187b1a9e..2a119279980f 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -912,6 +912,10 @@ static bool do_int3(struct pt_regs *regs)
if (kprobe_int3_handler(regs))
return true;
#endif
+
+ if (kasan_inline_handler(regs))
+ return true;
+
res = notify_die(DIE_INT3, "int3", regs, 0, X86_TRAP_BP, SIGTRAP);

return res == NOTIFY_STOP;
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5b9908f13dcf..1dc18090cbe7 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -36,7 +36,9 @@ obj-$(CONFIG_PTDUMP) += dump_pagetables.o
obj-$(CONFIG_PTDUMP_DEBUGFS) += debug_pagetables.o

KASAN_SANITIZE_kasan_init_$(BITS).o := n
+KASAN_SANITIZE_kasan_inline.o := n
obj-$(CONFIG_KASAN) += kasan_init_$(BITS).o
+obj-$(CONFIG_KASAN_SW_TAGS) += kasan_inline.o

KMSAN_SANITIZE_kmsan_shadow.o := n
obj-$(CONFIG_KMSAN) += kmsan_shadow.o
diff --git a/arch/x86/mm/kasan_inline.c b/arch/x86/mm/kasan_inline.c
new file mode 100644
index 000000000000..9f85dfd1c38b
--- /dev/null
+++ b/arch/x86/mm/kasan_inline.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+
+bool kasan_inline_handler(struct pt_regs *regs)
+{
+ int metadata = regs->ax;
+ u64 addr = regs->di;
+ u64 pc = regs->ip;
+ bool recover = metadata & KASAN_RAX_RECOVER;
+ bool write = metadata & KASAN_RAX_WRITE;
+ size_t size = KASAN_RAX_SIZE(metadata);
+
+ if (user_mode(regs))
+ return false;
+
+ if (!kasan_report((void *)addr, size, write, pc))
+ return false;
+
+ kasan_inline_recover(recover, "Oops - KASAN", regs, metadata, die);
+
+ return true;
+}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 54481f8c30c5..8691ad870f3b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -663,4 +663,28 @@ void kasan_non_canonical_hook(unsigned long addr);
static inline void kasan_non_canonical_hook(unsigned long addr) { }
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

+#ifdef CONFIG_KASAN_SW_TAGS
+/*
+ * The instrumentation allows to control whether we can proceed after
+ * a crash was detected. This is done by passing the -recover flag to
+ * the compiler. Disabling recovery allows to generate more compact
+ * code.
+ *
+ * Unfortunately disabling recovery doesn't work for the kernel right
+ * now. KASAN reporting is disabled in some contexts (for example when
+ * the allocator accesses slab object metadata; this is controlled by
+ * current->kasan_depth). All these accesses are detected by the tool,
+ * even though the reports for them are not printed.
+ *
+ * This is something that might be fixed at some point in the future.
+ */
+static inline void kasan_inline_recover(
+ bool recover, char *msg, struct pt_regs *regs, unsigned long err,
+ void die_fn(const char *str, struct pt_regs *regs, long err))
+{
+ if (!recover)
+ die_fn(msg, regs, err);
+}
+#endif
+
#endif /* LINUX_KASAN_H */
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:30:30 PM
To avoid duplicating, for every architecture that uses the software
tag-based mode, the long comment explaining the intricacies and
shortcomings of the inline KASAN recovery scheme, a unified
kasan_inline_recover() function was added.

Use kasan_inline_recover() in the KASAN brk handler and drop the long
comment, which is now kept in the arch-independent KASAN code.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v5:
- Split arm64 portion of patch 13/18 into this one. (Peter Zijlstra)

arch/arm64/kernel/traps.c | 17 +----------------
1 file changed, 1 insertion(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index f528b6041f6a..fe3c0104fe31 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -1068,22 +1068,7 @@ int kasan_brk_handler(struct pt_regs *regs, unsigned long esr)

kasan_report(addr, size, write, pc);

- /*
- * The instrumentation allows to control whether we can proceed after
- * a crash was detected. This is done by passing the -recover flag to
- * the compiler. Disabling recovery allows to generate more compact
- * code.
- *
- * Unfortunately disabling recovery doesn't work for the kernel right
- * now. KASAN reporting is disabled in some contexts (for example when
- * the allocator accesses slab object metadata; this is controlled by
- * current->kasan_depth). All these accesses are detected by the tool,
- * even though the reports for them are not printed.
- *
- * This is something that might be fixed at some point in the future.
- */
- if (!recover)
- die("Oops - KASAN", regs, esr);
+ kasan_inline_recover(recover, "Oops - KASAN", regs, esr, die);

/* If thread survives, skip over the brk instruction and continue: */
arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:30:52 PM
By default KASAN reports only one tag mismatch and, depending on other
command line parameters, either keeps going or panics. The multishot
mechanism - enabled either through a command line parameter or with
in-kernel enable/disable helper calls - lifts that restriction and allows
an unlimited number of tag mismatch reports to be printed.
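
For reference, a sketch of the existing multishot controls (these helpers
and the boot parameter predate this series; nothing here is added by this
patch):

#include <linux/kasan.h>

/* Boot-time alternative: append "kasan_multi_shot" to the kernel command line. */

static void example_with_multishot(void)
{
	bool old = kasan_save_enable_multi_shot();

	/* ... code that may trigger several tag mismatch reports ... */

	kasan_restore_multi_shot(old);
}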

Inline KASAN uses the INT3 instruction to pass metadata to the report
handling function. Currently the "recover" field in that metadata is
broken in the compiler layer and causes every inline tag mismatch to
panic the kernel.

Check the multishot state in the KASAN hook called inside the INT3
handling function.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Add this patch to the series.

arch/x86/mm/kasan_inline.c | 3 +++
include/linux/kasan.h | 3 +++
mm/kasan/report.c | 8 +++++++-
3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kasan_inline.c b/arch/x86/mm/kasan_inline.c
index 9f85dfd1c38b..f837caf32e6c 100644
--- a/arch/x86/mm/kasan_inline.c
+++ b/arch/x86/mm/kasan_inline.c
@@ -17,6 +17,9 @@ bool kasan_inline_handler(struct pt_regs *regs)
if (!kasan_report((void *)addr, size, write, pc))
return false;

+ if (kasan_multi_shot_enabled())
+ return true;
+
kasan_inline_recover(recover, "Oops - KASAN", regs, metadata, die);

return true;
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 8691ad870f3b..7a2527794549 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -663,7 +663,10 @@ void kasan_non_canonical_hook(unsigned long addr);
static inline void kasan_non_canonical_hook(unsigned long addr) { }
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

+bool kasan_multi_shot_enabled(void);
+
#ifdef CONFIG_KASAN_SW_TAGS
+
/*
* The instrumentation allows to control whether we can proceed after
* a crash was detected. This is done by passing the -recover flag to
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 50d487a0687a..9e830639e1b2 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -121,6 +121,12 @@ static void report_suppress_stop(void)
#endif
}

+bool kasan_multi_shot_enabled(void)
+{
+ return test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags);
+}
+EXPORT_SYMBOL(kasan_multi_shot_enabled);
+
/*
* Used to avoid reporting more than one KASAN bug unless kasan_multi_shot
* is enabled. Note that KASAN tests effectively enable kasan_multi_shot
@@ -128,7 +134,7 @@ static void report_suppress_stop(void)
*/
static bool report_enabled(void)
{
- if (test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
+ if (kasan_multi_shot_enabled())
return true;
return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags);
}
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:31:14 PM
While tag-based KASAN generally uses an arithmetic bit shift to convert a
memory address to a shadow memory address, that doesn't work in all cases
on x86. Testing different shadow memory offsets showed that either 4- or
5-level paging didn't work correctly or the inline mode ran into issues.
Thus the best working scheme is the logical bit shift and non-canonical
shadow offset that x86 already uses for generic KASAN, adjusted for the
granularity increase from 8 to 16 bytes.

Add an arch specific implementation of kasan_mem_to_shadow() that uses
the logical bit shift.
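
As a worked example (using the scale shift of 4 and the shadow offset
0xeffffc0000000000 introduced later in the series for the software
tag-based mode):

	shadow(addr) = (addr >> 4) + 0xeffffc0000000000

	shadow(0xffff800000000000) = 0xfffff40000000000	/* window start */
	shadow(0xffffffffffffffff) = 0xfffffbffffffffff	/* window end   */

which matches the 8 TB shadow window documented later in the series for
4-level paging.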

The non-canonical hook tries to determine whether a faulting address
could have come from kasan_mem_to_shadow(). It first checks whether the
address falls within the range of values that the mem-to-shadow function
can produce.

Tie both generic and tag-based x86 KASAN modes to the address range
check associated with generic KASAN.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Add this patch to the series.

arch/x86/include/asm/kasan.h | 8 ++++++++
mm/kasan/report.c | 5 +++--
2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 5bf38bb836e1..f3e34a9754d2 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -53,6 +53,14 @@

#ifdef CONFIG_KASAN_SW_TAGS

+static inline void *__kasan_mem_to_shadow(const void *addr)
+{
+ return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+ + KASAN_SHADOW_OFFSET;
+}
+
+#define kasan_mem_to_shadow(addr) __kasan_mem_to_shadow(addr)
+
#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 9e830639e1b2..ee440ed1ecd3 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -648,13 +648,14 @@ void kasan_non_canonical_hook(unsigned long addr)
const char *bug_type;

/*
- * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+ * For Generic KASAN and Software Tag-Based mode on the x86
+ * architecture, kasan_mem_to_shadow() uses the logical right shift
* and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
* both x86 and arm64). Thus, the possible shadow addresses (even for
* bogus pointers) belong to a single contiguous region that is the
* result of kasan_mem_to_shadow() applied to the whole address space.
*/
- if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_X86_64)) {
if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0UL)) ||
addr > (unsigned long)kasan_mem_to_shadow((void *)(~0UL)))
return;
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:31:32 PM
The problem presented here is related to NUMA systems and tag-based
KASAN mode. It can be explained in the following points:

1. pcpu_get_vm_areas() can allocate more than one virtual memory chunk.
2. Each chunk's address carries a tag from its own allocation.
3. The returned base address points at the first chunk and thus carries
   the first chunk's tag.
4. Addresses of the subsequent chunks are derived from that base, so
   they are accessed with the first chunk's tag.
5. Thus, the subsequent chunks need to have their tags set to match
   that of the first chunk.

Refactor the code by moving the unpoisoning loop into a helper, in
preparation for the actual fix.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Redo the patch message numbered list.
- Do the refactoring in this patch and move additions to the next new
one.

Changelog v3:
- Remove last version of this patch that just resets the tag on
base_addr and add this patch that unpoisons all areas with the same
tag instead.

include/linux/kasan.h | 10 ++++++++++
mm/kasan/hw_tags.c | 11 +++++++++++
mm/kasan/shadow.c | 10 ++++++++++
mm/vmalloc.c | 4 +---
4 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7a2527794549..3ec432d7df9a 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -613,6 +613,13 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
__kasan_poison_vmalloc(start, size);
}

+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms);
+static __always_inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+ if (kasan_enabled())
+ __kasan_unpoison_vmap_areas(vms, nr_vms);
+}
+
#else /* CONFIG_KASAN_VMALLOC */

static inline void kasan_populate_early_vm_area_shadow(void *start,
@@ -637,6 +644,9 @@ static inline void *kasan_unpoison_vmalloc(const void *start,
static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }

+static inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{ }
+
#endif /* CONFIG_KASAN_VMALLOC */

#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 9a6927394b54..1f569df313c3 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -382,6 +382,17 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
*/
}

+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+ int area;
+
+ for (area = 0 ; area < nr_vms ; area++) {
+ vms[area]->addr = __kasan_unpoison_vmalloc(
+ vms[area]->addr, vms[area]->size,
+ KASAN_VMALLOC_PROT_NORMAL);
+ }
+}
+
#endif

void kasan_enable_hw_tags(void)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb1..b41f74d68916 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -646,6 +646,16 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
}

+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+ int area;
+
+ for (area = 0 ; area < nr_vms ; area++) {
+ kasan_poison(vms[area]->addr, vms[area]->size,
+ arch_kasan_get_tag(vms[area]->addr), false);
+ }
+}
+
#else /* CONFIG_KASAN_VMALLOC */

int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c93893fb8dd4..00be0abcaf60 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4847,9 +4847,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
* With hardware tag-based KASAN, marking is skipped for
* non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
- for (area = 0; area < nr_vms; area++)
- vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
- vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
+ kasan_unpoison_vmap_areas(vms, nr_vms);

kfree(vas);
return vms;
--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:31:55 PM
The problem presented here is related to NUMA systems and tag-based
KASAN mode. It can be explained in the following points:

1. pcpu_get_vm_areas() can allocate more than one virtual memory chunk.
2. Each chunk's address carries a tag from its own allocation.
3. The returned base address points at the first chunk and thus carries
   the first chunk's tag.
4. Addresses of the subsequent chunks are derived from that base, so
   they are accessed with the first chunk's tag.
5. Thus, the subsequent chunks need to have their tags set to match
   that of the first chunk.

Unpoison all the vms[]->addr areas with the first chunk's tag and set the
pointers' tags to match, resolving the mismatch.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Move tagging the vms[]->addr to this new patch and leave refactoring
there.
- Comment the fix to provide some context.

mm/kasan/shadow.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index b41f74d68916..ee2488371784 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -646,13 +646,21 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
}

+/*
+ * A tag mismatch happens when calculating per-cpu chunk addresses, because
+ * they all inherit the tag from vms[0]->addr, even when nr_vms is bigger
+ * than 1. This is a problem because all the vms[]->addr come from separate
+ * allocations and have different tags so while the calculated address is
+ * correct the tag isn't.
+ */
void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
{
int area;

for (area = 0 ; area < nr_vms ; area++) {
kasan_poison(vms[area]->addr, vms[area]->size,
- arch_kasan_get_tag(vms[area]->addr), false);
+ arch_kasan_get_tag(vms[0]->addr), false);
+ arch_kasan_set_tag(vms[area]->addr, arch_kasan_get_tag(vms[0]->addr));
}
}

--
2.50.1

Maciej Wieczor-Retman

Aug 25, 2025, 4:32:16 PM
Make CONFIG_KASAN_SW_TAGS available on x86 when ADDRESS_MASKING (LAM) is
enabled, as LAM works similarly to the Top-Byte Ignore (TBI) feature that
enables the software tag-based mode on arm64.

Set the shadow scale macro based on the KASAN mode: in software tag-based
mode 16 bytes of memory map to one shadow byte, while in generic mode
8 bytes do.
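
As a quick sanity check of the resulting shadow sizes (the numbers match
the documentation update below):

	4-level: 128 TB kernel address space / 16 = 8 TB shadow (tag-based), / 8 = 16 TB (generic)
	5-level:  64 PB kernel address space / 16 = 4 PB shadow (tag-based), / 8 =  8 PB (generic)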

Disable CONFIG_KASAN_INLINE and CONFIG_KASAN_STACK when
CONFIG_KASAN_SW_TAGS is enabled on x86 until the appropriate compiler
support is available.

Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
---
Changelog v4:
- Add x86 specific kasan_mem_to_shadow().
- Revert x86 to the older unsigned KASAN_SHADOW_OFFSET. Do the same to
KASAN_SHADOW_START/END.
- Modify scripts/gdb/linux/kasan.py to keep x86 using unsigned offset.
- Disable inline and stack support when software tags are enabled on
x86.

Changelog v3:
- Remove runtime_const from previous patch and merge the rest here.
- Move scale shift definition back to header file.
- Add new kasan offset for software tag based mode.
- Fix patch message typo 32 -> 16, and 16 -> 8.
- Update lib/Kconfig.kasan with x86 now having software tag-based
support.

Changelog v2:
- Remove KASAN dense code.

Documentation/arch/x86/x86_64/mm.rst | 6 ++++--
arch/x86/Kconfig | 4 +++-
arch/x86/boot/compressed/misc.h | 1 +
arch/x86/include/asm/kasan.h | 1 +
arch/x86/kernel/setup.c | 2 ++
lib/Kconfig.kasan | 3 ++-
scripts/gdb/linux/kasan.py | 4 ++--
7 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/Documentation/arch/x86/x86_64/mm.rst b/Documentation/arch/x86/x86_64/mm.rst
index a6cf05d51bd8..ccbdbb4cda36 100644
--- a/Documentation/arch/x86/x86_64/mm.rst
+++ b/Documentation/arch/x86/x86_64/mm.rst
@@ -60,7 +60,8 @@ Complete virtual memory map with 4-level page tables
ffffe90000000000 | -23 TB | ffffe9ffffffffff | 1 TB | ... unused hole
ffffea0000000000 | -22 TB | ffffeaffffffffff | 1 TB | virtual memory map (vmemmap_base)
ffffeb0000000000 | -21 TB | ffffebffffffffff | 1 TB | ... unused hole
- ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory
+ ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory (generic mode)
+ fffff40000000000 | -8 TB | fffffbffffffffff | 8 TB | KASAN shadow memory (software tag-based mode)
__________________|____________|__________________|_________|____________________________________________________________
|
| Identical layout to the 56-bit one from here on:
@@ -130,7 +131,8 @@ Complete virtual memory map with 5-level page tables
ffd2000000000000 | -11.5 PB | ffd3ffffffffffff | 0.5 PB | ... unused hole
ffd4000000000000 | -11 PB | ffd5ffffffffffff | 0.5 PB | virtual memory map (vmemmap_base)
ffd6000000000000 | -10.5 PB | ffdeffffffffffff | 2.25 PB | ... unused hole
- ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory
+ ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory (generic mode)
+ ffeffc0000000000 | -6 PB | fffffbffffffffff | 4 PB | KASAN shadow memory (software tag-based mode)
__________________|____________|__________________|_________|____________________________________________________________
|
| Identical layout to the 47-bit one from here on:
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b8df57ac0f28..f44fec1190b6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -69,6 +69,7 @@ config X86
select ARCH_CLOCKSOURCE_INIT
select ARCH_CONFIGURES_CPU_MITIGATIONS
select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
+ select ARCH_DISABLE_KASAN_INLINE if X86_64 && KASAN_SW_TAGS
select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64
select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
@@ -199,6 +200,7 @@ config X86
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if X86_64
select HAVE_ARCH_KASAN_VMALLOC if X86_64
+ select HAVE_ARCH_KASAN_SW_TAGS if ADDRESS_MASKING
select HAVE_ARCH_KFENCE
select HAVE_ARCH_KMSAN if X86_64
select HAVE_ARCH_KGDB
@@ -403,7 +405,7 @@ config AUDIT_ARCH

config KASAN_SHADOW_OFFSET
hex
- depends on KASAN
+ default 0xeffffc0000000000 if KASAN_SW_TAGS
default 0xdffffc0000000000

config HAVE_INTEL_TXT
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index db1048621ea2..ded92b439ada 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -13,6 +13,7 @@
#undef CONFIG_PARAVIRT_SPINLOCKS
#undef CONFIG_KASAN
#undef CONFIG_KASAN_GENERIC
+#undef CONFIG_KASAN_SW_TAGS

#define __NO_FORTIFY

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index f3e34a9754d2..385f4e9daab3 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -7,6 +7,7 @@
#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
#ifdef CONFIG_KASAN_SW_TAGS
+#define KASAN_SHADOW_SCALE_SHIFT 4

/*
* LLVM ABI for reporting tag mismatches in inline KASAN mode.
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1b2edd07a3e1..5b819f84f6db 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1207,6 +1207,8 @@ void __init setup_arch(char **cmdline_p)

kasan_init();

+ kasan_init_sw_tags();
+
/*
* Sync back kernel address range.
*
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f82889a830fa..9ddbc6aeb5d5 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -100,7 +100,8 @@ config KASAN_SW_TAGS

Requires GCC 11+ or Clang.

- Supported only on arm64 CPUs and relies on Top Byte Ignore.
+ Supported on arm64 CPUs that support Top Byte Ignore and on x86 CPUs
+ that support Linear Address Masking.

Consumes about 1/16th of available memory at kernel start and
add an overhead of ~20% for dynamic allocations.
diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index fca39968d308..4b86202b155f 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -7,7 +7,7 @@
#

import gdb
-from linux import constants, mm
+from linux import constants, utils, mm
from ctypes import c_int64 as s64

def help():
@@ -40,7 +40,7 @@ class KasanMemToShadow(gdb.Command):
else:
help()
def kasan_mem_to_shadow(self, addr):
- if constants.CONFIG_KASAN_SW_TAGS:
+ if constants.CONFIG_KASAN_SW_TAGS and not utils.is_target_arch('x86'):
addr = s64(addr)
return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET

--
2.50.1

Samuel Holland

Aug 25, 2025, 4:59:55 PM
to Maciej Wieczor-Retman, ...
Hi Maciej,
These two implementations have different semantics. The new function works only
on kernel addresses, whereas the existing one works on user addresses as well.
It looks like at least KVM's use of __is_canonical_address() expects the
function to work with user addresses.

Regards,
Samuel

Dave Hansen

Aug 25, 2025, 5:36:40 PM
to Maciej Wieczor-Retman, ...
On 8/25/25 13:24, Maciej Wieczor-Retman wrote:
> +/*
> + * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
> + */
> +#ifdef CONFIG_KASAN_SW_TAGS
> +static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
> +{
> + return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
> +}
> +#else
> static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
> {
> return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
> }
> +#endif

This is the kind of thing that's bound to break. Could we distill it
down to something simpler, perhaps?

In the end, the canonical enforcement mask is the thing that's changing.
So perhaps it should be all common code except for the mask definition:

#ifdef CONFIG_KASAN_SW_TAGS
#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL(vaddr_bits-1))
#else
#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
#endif

(modulo off-by-one bugs ;)

Then the canonical check itself becomes something like:

unsigned long cmask = CANONICAL_MASK(vaddr_bits);
return (vaddr & cmask) == cmask;

That, to me, is the most straightforward way to do it.
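
Filling that idea in, a rough, untested sketch (kernel-address checks
only, so the user-address case Samuel mentioned earlier in the thread
would still need separate handling) might look like:

#ifdef CONFIG_KASAN_SW_TAGS
/* With LAM, only bit 63 and the top implemented bit are enforced. */
#define CANONICAL_MASK(vaddr_bits)	(BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
#else
#define CANONICAL_MASK(vaddr_bits)	GENMASK_ULL(63, (vaddr_bits) - 1)
#endif

static __always_inline bool __is_canonical_kernel_address(u64 vaddr, u8 vaddr_bits)
{
	u64 cmask = CANONICAL_MASK(vaddr_bits);

	return (vaddr & cmask) == cmask;
}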

I don't see it addressed in the cover letter, but what happens when a
CONFIG_KASAN_SW_TAGS=y kernel is booted on non-LAM hardware?

Maciej Wieczor-Retman

Aug 26, 2025, 4:09:05 AM
to Dave Hansen, ...
Thanks, I'll try something like this. I will also have to investigate the
point Samuel brought up, that KVM possibly wants to pass user addresses to
this function as well.

>
>I don't see it addressed in the cover letter, but what happens when a
>CONFIG_KASAN_SW_TAGS=y kernel is booted on non-LAM hardware?

That's a good point, I need to add it to the cover letter. On non-LAM hardware
the kernel just doesn't boot. Disabling KASAN at runtime on unsupported hardware
isn't that difficult in outline mode, but I'm not sure it can work in inline
mode (where the checks of shadow memory are emitted directly into the code by
the compiler).

Since for now there is no compiler support for the inline mode anyway, I'll try
to disable KASAN on non-LAM hardware at runtime.

Catalin Marinas

Aug 26, 2025, 3:35:12 PM
to Maciej Wieczor-Retman, ...
On Mon, Aug 25, 2025 at 10:24:39PM +0200, Maciej Wieczor-Retman wrote:
> To avoid having a copy of a long comment explaining the intricacies of
> the inline KASAN recovery system and issues for every architecture that
> uses the software tag-based mode, a unified kasan_inline_recover()
> function was added.
>
> Use kasan_inline_recover() in the kasan brk handler to cleanup the long
> comment, that's kept in the non-arch KASAN code.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>

Acked-by: Catalin Marinas <catalin...@arm.com>

Catalin Marinas

Aug 26, 2025, 3:36:03 PM
to Maciej Wieczor-Retman, ...
For the arm64 parts:

Acked-by: Catalin Marinas <catalin...@arm.com>

I wonder whether it's worth keeping the generic KASAN mode for arm64.
We've had the hardware TBI from the start, so the architecture version
is not an issue. The compiler support may differ though.

Anyway, that would be more suitable for a separate cleanup patch.

--
Catalin

Samuel Holland

Aug 26, 2025, 8:46:22 PM
to Maciej Wieczor-Retman, Dave Hansen, ...
Hi Maciej,
On RISC-V at least, I was able to run inline mode with missing hardware support.
The shadow memory is still allocated, so the inline tag checks do not fault. And
with a patch to make kasan_enabled() return false[1], all pointers remain
canonical (they match the MatchAllTag), so the inline tag checks all succeed.

[1]:
https://lore.kernel.org/linux-riscv/20241022015913.3524...@sifive.com/

Regards,
Samuel

Maciej Wieczor-Retman

Aug 27, 2025, 2:09:09 AM
to Samuel Holland, Dave Hansen, ...
Thanks, that should work :)

I'll test it and apply to the series.

>
>Regards,
>Samuel
>
>> Since for now there is no compiler support for the inline mode anyway, I'll try to
>> disable KASAN on non-LAM hardware in runtime.
>>
>

Maciej Wieczor-Retman

Aug 27, 2025, 2:27:56 AM
to Catalin Marinas, ...
Thanks :)

>
>I wonder whether it's worth keeping the generic KASAN mode for arm64.
>We've had the hardware TBI from the start, so the architecture version
>is not an issue. The compiler support may differ though.
>
>Anyway, that would be more suitable for a separate cleanup patch.
>
>--
>Catalin

I want to test it at some point, but I was always under the impression that (at
least in theory) the different modes should be able to catch slightly different
errors. Not a big set, but one example is accessing a wrong address that is
nonetheless allocated memory - generic mode would consider it okay, since its
shadow memory only records whether and how much is allocated, while sw-tags
would report it because the randomized tags would mismatch. I can't think of
any examples the other way around, but I assume there are a few.

Maciej Wieczor-Retman

Aug 27, 2025, 2:32:41 AM
to Samuel Holland, ...
Thanks for noticing that, I'll think of a way to make it work for user addresses
too :)

>
>Regards,
>Samuel
>
>>
>> static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
>> {
>

Mike Rapoport

Aug 28, 2025, 5:50:49 AM
to Maciej Wieczor-Retman
Thinking more about it, we reset the tag in execmem_alloc() anyway and return
an untagged pointer to the caller. Let's just move kasan_reset_tag() to
execmem_vmalloc() so that we always use untagged pointers. Seems more
robust to me.
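
Something along these lines, as a rough sketch only (the real execmem_vmalloc()
signature and body are abbreviated, so treat the names as illustrative):

#include <linux/kasan.h>
#include <linux/vmalloc.h>

/*
 * Rough sketch of the suggestion above, with the real allocation replaced
 * by a stand-in: strip the KASAN tag where the memory is obtained, so
 * execmem_alloc(), the free_areas tracking and vm_reset_perms() only ever
 * see the untagged form of the address.
 */
static void *execmem_vmalloc_sketch(size_t size)
{
	void *p = __vmalloc(size, GFP_KERNEL);	/* stand-in for the real allocation */

	return kasan_reset_tag(p);	/* callers only ever get untagged pointers */
}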

> MA_STATE(mas, free_areas, addr - 1, addr + 1);
> unsigned long lower, upper;
> void *area = NULL;
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6dbcdceecae1..c93893fb8dd4 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3322,7 +3322,7 @@ static void vm_reset_perms(struct vm_struct *area)
> * the vm_unmap_aliases() flush includes the direct map.
> */
> for (i = 0; i < area->nr_pages; i += 1U << page_order) {
> - unsigned long addr = (unsigned long)page_address(area->pages[i]);
> + unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));

This is not strictly related to execmem; there may be other users of
VM_FLUSH_RESET_PERMS.

Regardless, I wonder how this works on arm64 with tags enabled?

Also, it's not the only place in the kernel that does (unsigned
long)page_address(page). Do other sites need to reset the tag as well?

>
> if (addr) {
> unsigned long page_size;
> --
> 2.50.1
>

--
Sincerely yours,
Mike.

Maciej Wieczor-Retman

Aug 28, 2025, 12:23:02 PM
to Mike Rapoport
Sure, I'll test if it works and change it :)

>
>> MA_STATE(mas, free_areas, addr - 1, addr + 1);
>> unsigned long lower, upper;
>> void *area = NULL;
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 6dbcdceecae1..c93893fb8dd4 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -3322,7 +3322,7 @@ static void vm_reset_perms(struct vm_struct *area)
>> * the vm_unmap_aliases() flush includes the direct map.
>> */
>> for (i = 0; i < area->nr_pages; i += 1U << page_order) {
>> - unsigned long addr = (unsigned long)page_address(area->pages[i]);
>> + unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));
>
>This is not strictly related to execmem; there may be other users of
>VM_FLUSH_RESET_PERMS.
>
>Regardless, I wonder how this works on arm64 with tags enabled?

Hmm, good point, I'll check it out in qemu if this function is called on arm64.

However, this issue didn't pop up for me before 6.14, when EXECMEM_ROX was
enabled, so maybe it just didn't hit tagged pages earlier? I'll try to recheck
that on x86 too.

>Also, it's not the only place in the kernel that does (unsigned
>long)page_address(page). Do other sites need to reset the tag as well?

This place is special in the sense that it does "start = min(addr, start)" and
"end = max(addr, end)" just a few lines later, and start and end always seem to
be untagged while addr sometimes gets tagged. So with software KASAN and vmalloc
support enabled, it gets the final start and end values wrong, and a page
permission error then shows up somewhere else.
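
Roughly like this, with made-up addresses just to illustrate the comparison
problem (the tag value and the 0xffff8880... address are examples, not from a
real trace):

#include <linux/minmax.h>
#include <linux/mm.h>

/*
 * Illustrative only: why mixing tagged and untagged addresses breaks the
 * min()/max() range tracking.  With the x86 tag field in bits [60:57],
 * applying tag 0x3 to 0xffff888000001000 gives 0xe7ff888000001000, which
 * compares as a much smaller unsigned value than any untagged kernel
 * address.
 */
static void mixed_tag_range_example(void)
{
	unsigned long untagged = 0xffff888000001000UL;
	unsigned long tagged   = 0xe7ff888000001000UL;	/* same page, tag 0x3 */
	unsigned long start = ULONG_MAX, end = 0;

	start = min(tagged, start);			/* 0xe7ff888000001000 */
	end   = max(untagged + PAGE_SIZE, end);		/* 0xffff888000002000 */

	/*
	 * [start, end) now spans a huge bogus range instead of the pages that
	 * were actually mapped, so the later permission/TLB handling operates
	 * on the wrong addresses.  Resetting the tag before the comparisons
	 * keeps everything in one canonical form.
	 */
}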

I don't think all other page_address(page) sites need resetting, but I'll double
check if there is any pointer arithmetic there.

>
>>
>> if (addr) {
>> unsigned long page_size;
>> --
>> 2.50.1
>>
>
>--
>Sincerely yours,
>Mike.

Andrey Konovalov

Sep 6, 2025, 1:17:51 PM
to Maciej Wieczor-Retman
On Mon, Aug 25, 2025 at 10:26 PM Maciej Wieczor-Retman
<maciej.wie...@intel.com> wrote:
>
> The LLVM compiler uses the hwasan-instrument-with-calls parameter to select
> inline or outline mode in tag-based KASAN. If zeroed, the instrumentation
> is pasted inline into each relevant location, along with the KASAN-related
> constants, during compilation. If set to one, all instrumentation is done
> with function calls instead.
>
> The default hwasan-instrument-with-calls value for the x86 architecture
> in the compiler is "1", which is not true for other architectures.
> Because of this, enabling inline mode in software tag-based KASAN
> doesn't work on x86 as the kernel script doesn't zero out the parameter
> and always sets up the outline mode.
>
> Explicitly zero out hwasan-instrument-with-calls when enabling inline
> mode in tag-based KASAN.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
> ---
> Changelog v3:
> - Add this patch to the series.
>
> scripts/Makefile.kasan | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
> index 693dbbebebba..2c7be96727ac 100644
> --- a/scripts/Makefile.kasan
> +++ b/scripts/Makefile.kasan
> @@ -76,8 +76,11 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress
> RUSTFLAGS_KASAN := -Zsanitizer=kernel-hwaddress \
> -Zsanitizer-recover=kernel-hwaddress
>
> +# LLVM sets hwasan-instrument-with-calls to 1 on x86 by default. Set it to 0
> +# when inline mode is enabled.
> ifdef CONFIG_KASAN_INLINE
> kasan_params += hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
> + kasan_params += hwasan-instrument-with-calls=0
> else
> kasan_params += hwasan-instrument-with-calls=1
> endif
> --
> 2.50.1
>

Reviewed-by: Andrey Konovalov <andre...@gmail.com>

Andrey Konovalov

Sep 6, 2025, 1:17:58 PM
to Maciej Wieczor-Retman
On Mon, Aug 25, 2025 at 10:27 PM Maciej Wieczor-Retman
<maciej.wie...@intel.com> wrote:
>
> KASAN's software tag-based mode needs multiple macros/functions to
> handle tag and pointer interactions - to set, retrieve and reset tags
> from the top bits of a pointer.
>
> Mimic functions currently used by arm64 but change the tag's position to
> bits [60:57] in the pointer.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
> ---
> Changelog v4:
> - Rewrite __tag_set() without pointless casts and make it more readable.
>
> Changelog v3:
> - Reorder functions so that __tag_*() etc are above the
> arch_kasan_*() ones.
> - Remove CONFIG_KASAN condition from __tag_set()
>
> arch/x86/include/asm/kasan.h | 36 ++++++++++++++++++++++++++++++++++--
> 1 file changed, 34 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
> index d7e33c7f096b..1963eb2fcff3 100644
> --- a/arch/x86/include/asm/kasan.h
> +++ b/arch/x86/include/asm/kasan.h
> @@ -3,6 +3,8 @@
> #define _ASM_X86_KASAN_H
>
> #include <linux/const.h>
> +#include <linux/kasan-tags.h>
> +#include <linux/types.h>
> #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
> #define KASAN_SHADOW_SCALE_SHIFT 3
>
> @@ -24,8 +26,37 @@
> KASAN_SHADOW_SCALE_SHIFT)))
>
> #ifndef __ASSEMBLER__
> +#include <linux/bitops.h>
> +#include <linux/bitfield.h>
> +#include <linux/bits.h>
> +
> +#ifdef CONFIG_KASAN_SW_TAGS
> +

Nit: can remove this empty line.

> +#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
> +#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
> +#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
> +#else
> +#define __tag_shifted(tag) 0UL
> +#define __tag_reset(addr) (addr)
> +#define __tag_get(addr) 0
> +#endif /* CONFIG_KASAN_SW_TAGS */
> +
> +static inline void *__tag_set(const void *__addr, u8 tag)
> +{
> + u64 addr = (u64)__addr;
> +
> + addr &= ~__tag_shifted(KASAN_TAG_MASK);
> + addr |= __tag_shifted(tag);
> +
> + return (void *)addr;
> +}
> +
> +#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
> +#define arch_kasan_reset_tag(addr) __tag_reset(addr)
> +#define arch_kasan_get_tag(addr) __tag_get(addr)
>
> #ifdef CONFIG_KASAN
> +
> void __init kasan_early_init(void);
> void __init kasan_init(void);
> void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
> @@ -34,8 +65,9 @@ static inline void kasan_early_init(void) { }
> static inline void kasan_init(void) { }
> static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
> int nid) { }
> -#endif
>
> -#endif
> +#endif /* CONFIG_KASAN */
> +
> +#endif /* __ASSEMBLER__ */
>
> #endif
> --
> 2.50.1
>

Acked-by: Andrey Konovalov <andre...@gmail.com>
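
As a quick sanity check of the helpers above, a round trip with illustrative
values (not part of the patch):

#include <asm/kasan.h>

/*
 * Illustrative only: tag 0x5 lands in bits [60:57], so on a canonical
 * kernel address it clears bit 60 and sets bits 59:57 to 0b101.
 */
static void tag_round_trip(void)
{
	void *p = __tag_set((void *)0xffff888000001000UL, 0x5);
						/* p == (void *)0xebff888000001000 */
	u8 tag = __tag_get(p);			/* 0x5 */
	u64 clean = __tag_reset(p);		/* sign-extended from bit 56 back
						 * to 0xffff888000001000 */

	(void)tag;
	(void)clean;
}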

Andrey Konovalov

Sep 6, 2025, 1:18:47 PM
to Maciej Wieczor-Retman
On Mon, Aug 25, 2025 at 10:27 PM Maciej Wieczor-Retman
<maciej.wie...@intel.com> wrote:
>
> KASAN's tag-based mode defines multiple special tag values. They're
> reserved for:
> - Native kernel value. On arm64 it's 0xFF and it causes an early return
> in the tag checking function.
> - Invalid value. 0xFE marks an area as freed / unallocated. It's also
> the value that is used to initialize regions of shadow memory.
> - Max value. 0xFD is the highest value that can be randomly generated
> for a new tag.
>
> A metadata macro is also defined:
> - Tag width equal to 8.
>
> Tag-based mode on x86 is going to use 4-bit wide tags, so all the above
> values need to be changed accordingly.
>
> Make native kernel tag arch specific for x86 and arm64.
>
> Replace hardcoded kernel tag value and tag width with macros in KASAN's
> non-arch specific code.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
> ---
> Changelog v5:
> - Move KASAN_TAG_MIN to the arm64 kasan-tags.h for the hardware KASAN
> mode case.
>
> Changelog v4:
> - Move KASAN_TAG_MASK to kasan-tags.h.
>
> Changelog v2:
> - Remove risc-v from the patch.
>
> MAINTAINERS | 2 +-
> arch/arm64/include/asm/kasan-tags.h | 13 +++++++++++++
> arch/arm64/include/asm/kasan.h | 4 ----
> arch/x86/include/asm/kasan-tags.h | 9 +++++++++
> include/linux/kasan-tags.h | 10 +++++++++-
> include/linux/kasan.h | 4 +++-
> include/linux/mm.h | 6 +++---
> include/linux/mmzone.h | 1 -
> include/linux/page-flags-layout.h | 9 +--------
> 9 files changed, 39 insertions(+), 19 deletions(-)
> create mode 100644 arch/arm64/include/asm/kasan-tags.h
> create mode 100644 arch/x86/include/asm/kasan-tags.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index fed6cd812d79..788532771832 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -13176,7 +13176,7 @@ L: kasa...@googlegroups.com
> S: Maintained
> B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
> F: Documentation/dev-tools/kasan.rst
> -F: arch/*/include/asm/*kasan.h
> +F: arch/*/include/asm/*kasan*.h
> F: arch/*/mm/kasan_init*
> F: include/linux/kasan*.h
> F: lib/Kconfig.kasan
> diff --git a/arch/arm64/include/asm/kasan-tags.h b/arch/arm64/include/asm/kasan-tags.h
> new file mode 100644
> index 000000000000..152465d03508
> --- /dev/null
> +++ b/arch/arm64/include/asm/kasan-tags.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_KASAN_TAGS_H
> +#define __ASM_KASAN_TAGS_H
> +
> +#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
> +
> +#define KASAN_TAG_WIDTH 8
> +
> +#ifdef CONFIG_KASAN_HW_TAGS
> +#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
> +#endif
> +
> +#endif /* ASM_KASAN_TAGS_H */
> diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
> index 4ab419df8b93..d2841e0fb908 100644
> --- a/arch/arm64/include/asm/kasan.h
> +++ b/arch/arm64/include/asm/kasan.h
> @@ -7,10 +7,6 @@
> #include <linux/linkage.h>
> #include <asm/memory.h>
>
> -#ifdef CONFIG_KASAN_HW_TAGS
> -#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
> -#endif
> -
> #define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
> #define arch_kasan_reset_tag(addr) __tag_reset(addr)
> #define arch_kasan_get_tag(addr) __tag_get(addr)
> diff --git a/arch/x86/include/asm/kasan-tags.h b/arch/x86/include/asm/kasan-tags.h
> new file mode 100644
> index 000000000000..68ba385bc75c
> --- /dev/null
> +++ b/arch/x86/include/asm/kasan-tags.h
> @@ -0,0 +1,9 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_KASAN_TAGS_H
> +#define __ASM_KASAN_TAGS_H
> +
> +#define KASAN_TAG_KERNEL 0xF /* native kernel pointers tag */
> +
> +#define KASAN_TAG_WIDTH 4
> +
> +#endif /* ASM_KASAN_TAGS_H */
> diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
> index e07c896f95d3..fe80fa8f3315 100644
> --- a/include/linux/kasan-tags.h
> +++ b/include/linux/kasan-tags.h
> @@ -2,7 +2,15 @@
> #ifndef _LINUX_KASAN_TAGS_H
> #define _LINUX_KASAN_TAGS_H
>
> -#include <asm/kasan.h>
> +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> +#include <asm/kasan-tags.h>
> +#endif
> +
> +#ifndef KASAN_TAG_WIDTH
> +#define KASAN_TAG_WIDTH 0
> +#endif
> +
> +#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1)
>
> #ifndef KASAN_TAG_KERNEL
> #define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index b396feca714f..54481f8c30c5 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -40,7 +40,9 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
>
> #ifdef CONFIG_KASAN_SW_TAGS
> /* This matches KASAN_TAG_INVALID. */
> -#define KASAN_SHADOW_INIT 0xFE
> +#ifndef KASAN_SHADOW_INIT

Do we need this ifndef?

> +#define KASAN_SHADOW_INIT KASAN_TAG_INVALID
> +#endif
> #else
> #define KASAN_SHADOW_INIT 0
> #endif
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 1ae97a0b8ec7..bb494cb1d5af 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1692,7 +1692,7 @@ static inline u8 page_kasan_tag(const struct page *page)
>
> if (kasan_enabled()) {
> tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> - tag ^= 0xff;
> + tag ^= KASAN_TAG_KERNEL;
> }
>
> return tag;
> @@ -1705,7 +1705,7 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
> if (!kasan_enabled())
> return;
>
> - tag ^= 0xff;
> + tag ^= KASAN_TAG_KERNEL;
> old_flags = READ_ONCE(page->flags);
> do {
> flags = old_flags;
> @@ -1724,7 +1724,7 @@ static inline void page_kasan_tag_reset(struct page *page)
>
> static inline u8 page_kasan_tag(const struct page *page)
> {
> - return 0xff;
> + return KASAN_TAG_KERNEL;
> }
>
> static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 0c5da9141983..c139fb3d862d 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1166,7 +1166,6 @@ static inline bool zone_is_empty(struct zone *zone)
> #define NODES_MASK ((1UL << NODES_WIDTH) - 1)
> #define SECTIONS_MASK ((1UL << SECTIONS_WIDTH) - 1)
> #define LAST_CPUPID_MASK ((1UL << LAST_CPUPID_SHIFT) - 1)
> -#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1)

So we cannot define this here because of include dependencies? Having
this value defined here would look cleaner.

Otherwise, let's add a comment here with a reference to where this
value is defined.

> #define ZONEID_MASK ((1UL << ZONEID_SHIFT) - 1)
>
> static inline enum zone_type page_zonenum(const struct page *page)
> diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
> index 760006b1c480..b2cc4cb870e0 100644
> --- a/include/linux/page-flags-layout.h
> +++ b/include/linux/page-flags-layout.h
> @@ -3,6 +3,7 @@
> #define PAGE_FLAGS_LAYOUT_H
>
> #include <linux/numa.h>
> +#include <linux/kasan-tags.h>
> #include <generated/bounds.h>
>
> /*
> @@ -72,14 +73,6 @@
> #define NODE_NOT_IN_PAGE_FLAGS 1
> #endif
>
> -#if defined(CONFIG_KASAN_SW_TAGS)
> -#define KASAN_TAG_WIDTH 8
> -#elif defined(CONFIG_KASAN_HW_TAGS)
> -#define KASAN_TAG_WIDTH 4

This case is removed here but not added to arch/arm64/include/asm/kasan-tags.h.


> -#else
> -#define KASAN_TAG_WIDTH 0
> -#endif
> -
> #ifdef CONFIG_NUMA_BALANCING
> #define LAST__PID_SHIFT 8
> #define LAST__PID_MASK ((1 << LAST__PID_SHIFT)-1)
> --
> 2.50.1
>

Andrey Konovalov

Sep 6, 2025, 1:18:55 PM
to Maciej Wieczor-Retman
Reviewed-by: Andrey Konovalov <andre...@gmail.com>

Andrey Konovalov

Sep 6, 2025, 1:19:16 PM
to Maciej Wieczor-Retman
Putting this under the else in this patch looks odd; we can move this part
to "x86: Make software tag-based kasan available".

Hm, this part is different from arm64: there, we don't check the
return value.

Do I understand correctly that the return value from this function
controls whether we skip over the int3 instruction and continue the
execution? If so, we should return the same value regardless of
whether the report is suppressed or not. And then you should not need
to explicitly check for KASAN_BIT_MULTI_SHOT in the latter patch.

> +
> + kasan_inline_recover(recover, "Oops - KASAN", regs, metadata, die);

Maybe name this is as kasan_die_unless_recover()?

Andrey Konovalov

Sep 6, 2025, 1:19:21 PM
to Maciej Wieczor-Retman
On Mon, Aug 25, 2025 at 10:30 PM Maciej Wieczor-Retman
<maciej.wie...@intel.com> wrote:
>
It's odd that this is required on x86 but not on arm64; see my comment
on the patch that adds kasan_inline_handler().

Andrey Konovalov

Sep 6, 2025, 1:19:25 PM
to Maciej Wieczor-Retman
Do we need this fix for the HW_TAGS mode too?

Andrey Konovalov

Sep 6, 2025, 1:19:48 PM
to Maciej Wieczor-Retman
Do you think it would make sense to drop the parts of the series that
add int3 handling, since the inline instrumentation does not work yet
anyway?

> select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
> select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64
> select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
> @@ -199,6 +200,7 @@ config X86
> select HAVE_ARCH_JUMP_LABEL_RELATIVE
> select HAVE_ARCH_KASAN if X86_64
> select HAVE_ARCH_KASAN_VMALLOC if X86_64
> + select HAVE_ARCH_KASAN_SW_TAGS if ADDRESS_MASKING
> select HAVE_ARCH_KFENCE
> select HAVE_ARCH_KMSAN if X86_64
> select HAVE_ARCH_KGDB
> @@ -403,7 +405,7 @@ config AUDIT_ARCH
>
> config KASAN_SHADOW_OFFSET
> hex
> - depends on KASAN

Line accidentally removed?
This change seems to belong to the patch that changes how the shadow
memory address is calculated.

Borislav Petkov

Sep 6, 2025, 6:26:00 PM
to Maciej Wieczor-Retman
On Mon, Aug 25, 2025 at 10:24:36PM +0200, Maciej Wieczor-Retman wrote:
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index bb57e93b4caf..756bd96c3b8b 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -763,6 +763,9 @@ void __init init_mem_mapping(void)
> probe_page_size_mask();
> setup_pcid();
>
> + if (boot_cpu_has(X86_FEATURE_LAM) && IS_ENABLED(CONFIG_KASAN_SW_TAGS))

cpu_feature_enabled()

> + cr4_set_bits_and_update_boot(X86_CR4_LAM_SUP);
> +
> #ifdef CONFIG_X86_64
> end = max_pfn << PAGE_SHIFT;
> #else
> --

Also, for all your patches' subjects and text:

Pls read

https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#patch-subject
https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#changelog

and fixup.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Maciej Wieczor-Retman

Sep 8, 2025, 4:31:40 AM
to Borislav Petkov
On 2025-09-07 at 00:24:20 +0200, Borislav Petkov wrote:
>On Mon, Aug 25, 2025 at 10:24:36PM +0200, Maciej Wieczor-Retman wrote:
>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>> index bb57e93b4caf..756bd96c3b8b 100644
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -763,6 +763,9 @@ void __init init_mem_mapping(void)
>> probe_page_size_mask();
>> setup_pcid();
>>
>> + if (boot_cpu_has(X86_FEATURE_LAM) && IS_ENABLED(CONFIG_KASAN_SW_TAGS))
>
>cpu_feature_enabled()
>

Thanks, I'll correct it.
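
For reference, the hunk above with the suggested helper swapped in (just a
sketch of the fixup, same logic otherwise):

	/* Enable LAM for supervisor pointers only when sw-tags KASAN is built in. */
	if (cpu_feature_enabled(X86_FEATURE_LAM) && IS_ENABLED(CONFIG_KASAN_SW_TAGS))
		cr4_set_bits_and_update_boot(X86_CR4_LAM_SUP);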

>> + cr4_set_bits_and_update_boot(X86_CR4_LAM_SUP);
>> +
>> #ifdef CONFIG_X86_64
>> end = max_pfn << PAGE_SHIFT;
>> #else
>> --
>
>Also, for all your patches' subjects and text:
>
>Pls read
>
>https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#patch-subject
>https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#changelog
>
>and fixup.

Thanks, I'll recheck all the patches with that in mind.

>
>Thx.
>
>--
>Regards/Gruss,
> Boris.
>
>https://people.kernel.org/tglx/notes-about-netiquette

Maciej Wieczor-Retman

Sep 8, 2025, 5:07:16 AM
to Andrey Konovalov
Sure, will do, thanks.

Maciej Wieczor-Retman

Sep 8, 2025, 6:13:07 AM
to Andrey Konovalov
I just checked and you're right, it's not needed. I think it might have been a
leftover of my dense mode code.
I'll retest with a couple of configs, but I removed this change and everything
compiles fine. Thanks for noticing that.

>
>> #define ZONEID_MASK ((1UL << ZONEID_SHIFT) - 1)
>>
>> static inline enum zone_type page_zonenum(const struct page *page)
>> diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
>> index 760006b1c480..b2cc4cb870e0 100644
>> --- a/include/linux/page-flags-layout.h
>> +++ b/include/linux/page-flags-layout.h
>> @@ -3,6 +3,7 @@
>> #define PAGE_FLAGS_LAYOUT_H
>>
>> #include <linux/numa.h>
>> +#include <linux/kasan-tags.h>
>> #include <generated/bounds.h>
>>
>> /*
>> @@ -72,14 +73,6 @@
>> #define NODE_NOT_IN_PAGE_FLAGS 1
>> #endif
>>
>> -#if defined(CONFIG_KASAN_SW_TAGS)
>> -#define KASAN_TAG_WIDTH 8
>> -#elif defined(CONFIG_KASAN_HW_TAGS)
>> -#define KASAN_TAG_WIDTH 4
>
>This case is removed here but not added to arch/arm64/include/asm/kasan-tags.h.

Right, I'll correct that.

>
>
>> -#else
>> -#define KASAN_TAG_WIDTH 0
>> -#endif
>> -
>> #ifdef CONFIG_NUMA_BALANCING
>> #define LAST__PID_SHIFT 8
>> #define LAST__PID_MASK ((1 << LAST__PID_SHIFT)-1)
>> --
>> 2.50.1
>>

Maciej Wieczor-Retman

Sep 8, 2025, 6:44:00 AM
to Andrey Konovalov
Sure, will do!
I recall there were some corner cases where this code path got called in outline
mode without an actual mismatch but still died due to the die() below. I'll
recheck and either apply what you wrote above or add a better explanation
to the patch message.

>
>> +
>> + kasan_inline_recover(recover, "Oops - KASAN", regs, metadata, die);
>
>Maybe name this is as kasan_die_unless_recover()?

Sure, sounds good

Maciej Wieczor-Retman

Sep 8, 2025, 8:55:46 AM
to Andrey Konovalov
On 2025-09-08 at 12:38:57 +0200, Maciej Wieczor-Retman wrote:
>On 2025-09-06 at 19:19:01 +0200, Andrey Konovalov wrote:
>>On Mon, Aug 25, 2025 at 10:30 PM Maciej Wieczor-Retman
Okay, so the int3_selftest_ip() is causing a problem in outline mode.

I tried disabling KASAN with kasan_disable_current(), but thinking about it now,
it won't work because the int3 handler will still be called and die() will happen.

What did you mean by "return the same value regardless of kasan_report()"? Then
it would never reach the kasan_inline_recover(), which I assume is needed for
inline mode (once recovery works).

Maciej Wieczor-Retman

Sep 8, 2025, 9:04:46 AM
to Andrey Konovalov
I think this is needed if we want to keep the kasan_inline_recover() below.
Without this patch, kasan_report() will report a mismatch and then die()
will be called, so the multishot setting gets ignored.

I'll check what happens on arm64 when a mismatch happens with inline mode +
multishot.

Maciej Wieczor-Retman

Sep 8, 2025, 9:09:21 AM
to Andrey Konovalov
Sorry, I meant to write that kasan_disable_current() works together with
if (!kasan_report()). Without checking kasan_report()'s return value,
disabling KASAN through kasan_disable_current() has no effect: both in inline
mode and when the int3 is hit in outline mode, kasan_inline_handler() still
ends up in die().
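
In other words, something along these lines (a rough sketch only; the helper
names follow the patch under review, and addr/size/write/recover stand for
values decoded from the int3 metadata):

#include <linux/kasan.h>
#include <asm/ptrace.h>

/*
 * Rough sketch of the int3 hook shape being discussed, not the actual
 * patch: kasan_report() returns false when the report was suppressed
 * (e.g. by kasan_disable_current() around the int3 selftest), and that
 * return value is what keeps the handler from falling through to die().
 */
static bool kasan_mismatch_sketch(struct pt_regs *regs, void *addr,
				  size_t size, bool write, bool recover)
{
	if (!kasan_report(addr, size, write, regs->ip))
		return true;	/* suppressed: skip the int3 and continue */

	/* Otherwise recover if the metadata allows it, or die. */
	kasan_inline_recover(recover, "Oops - KASAN", regs, 0, die);
	return true;
}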

Maciej Wieczor-Retman

Sep 8, 2025, 9:12:35 AM
to Andrey Konovalov
Oh, I suppose it could also affect the hardware mode since this is related to
tagged pointers and NUMA nodes. I'll try to also make it work for HW_TAGS.

Maciej Wieczor-Retman

Sep 8, 2025, 10:11:31 AM
to Andrey Konovalov
I thought we might as well put it into the kernel, so once the compiler side
gets upstreamed only the Kconfig needs to be modified.

But both options are okay. I thought it'd be easier to argue for the changes in
LLVM if the inline mode is already prepared in the kernel.

>
>> select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
>> select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64
>> select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
>> @@ -199,6 +200,7 @@ config X86
>> select HAVE_ARCH_JUMP_LABEL_RELATIVE
>> select HAVE_ARCH_KASAN if X86_64
>> select HAVE_ARCH_KASAN_VMALLOC if X86_64
>> + select HAVE_ARCH_KASAN_SW_TAGS if ADDRESS_MASKING
>> select HAVE_ARCH_KFENCE
>> select HAVE_ARCH_KMSAN if X86_64
>> select HAVE_ARCH_KGDB
>> @@ -403,7 +405,7 @@ config AUDIT_ARCH
>>
>> config KASAN_SHADOW_OFFSET
>> hex
>> - depends on KASAN
>
>Line accidentally removed?

Yes, sorry, I'll put it back in.
Okay, I can move it there.

>
>> addr = s64(addr)
>> return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET
>>
>> --
>> 2.50.1
>>
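
In C, the quoted computation amounts to roughly the following - a minimal
sketch with placeholder constants (a scale shift of 4 and an arbitrary
offset), only meant to show the sign-extending shift that the s64() cast
above provides:

#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 4				/* one shadow byte per 16 bytes */
#define KASAN_SHADOW_OFFSET	 0xdffffc0000000000ull	/* arbitrary example value */

/* Sign-extend before shifting so negative kernel addresses map the same way. */
static inline uint64_t mem_to_shadow(uint64_t addr)
{
	return (uint64_t)((int64_t)addr >> KASAN_SHADOW_SCALE_SHIFT) +
	       KASAN_SHADOW_OFFSET;
}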

Peter Zijlstra

unread,
Sep 8, 2025, 11:41:05 AMSep 8
to Maciej Wieczor-Retman, nat...@kernel.org, ar...@arndb.de, bro...@kernel.org, Liam.H...@oracle.com, ure...@gmail.com, wi...@kernel.org, kales...@google.com, rp...@kernel.org, lei...@debian.org, co...@redhat.com, sur...@google.com, ak...@linux-foundation.org, lu...@kernel.org, jpoi...@kernel.org, chang...@google.com, h...@zytor.com, dvy...@google.com, k...@kernel.org, cor...@lwn.net, vincenzo...@arm.com, smos...@google.com, nick.desau...@gmail.com, mo...@google.com, andre...@gmail.com, alexander...@linux.intel.com, thiago.b...@linaro.org, catalin...@arm.com, ryabin...@gmail.com, jan.k...@siemens.com, jbo...@suse.cz, dan.j.w...@intel.com, joel.g...@kernel.org, bao...@kernel.org, kevin....@arm.com, nicolas...@linux.dev, p...@google.com, andriy.s...@linux.intel.com, wei...@kernel.org, b...@alien8.de, ada.cou...@arm.com, x...@zytor.com, pankaj...@amd.com, vba...@suse.cz, gli...@google.com, jgr...@suse.com, ke...@kernel.org, jhub...@nvidia.com, joey....@arm.com, ar...@kernel.org, th...@redhat.com, pasha.t...@soleen.com, kristina....@arm.com, big...@linutronix.de, lorenzo...@oracle.com, jason....@amd.com, da...@redhat.com, gr...@amazon.com, wangkef...@huawei.com, z...@nvidia.com, mark.r...@arm.com, dave....@linux.intel.com, samuel....@sifive.com, kbin...@kernel.org, trinta...@gmail.com, sc...@os.amperecomputing.com, justi...@google.com, kuan-y...@canonical.com, m...@kernel.org, tg...@linutronix.de, samito...@google.com, mho...@suse.com, nunoda...@linux.microsoft.com, brg...@gmail.com, wi...@infradead.org, ubi...@gmail.com, mi...@redhat.com, sohil...@intel.com, linu...@kvack.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, x...@kernel.org, ll...@lists.linux.dev, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org
On Mon, Aug 18, 2025 at 08:26:11AM +0200, Maciej Wieczor-Retman wrote:
> On 2025-08-13 at 17:17:02 +0200, Peter Zijlstra wrote:
> >On Tue, Aug 12, 2025 at 03:23:49PM +0200, Maciej Wieczor-Retman wrote:
> >> Inline KASAN on x86 does tag mismatch reports by passing the faulty
> >> address and metadata through the INT3 instruction - a scheme that's set up
> >> in LLVM's compiler code (specifically HWAddressSanitizer.cpp).
> >>
> >> Add a kasan hook to the INT3 handling function.
> >>
> >> Disable KASAN in an INT3 core kernel selftest function since it can raise
> >> a false tag mismatch report and potentially panic the kernel.
> >>
> >> Make part of that hook - which decides whether to die or recover from a
> >> tag mismatch - arch independent to avoid duplicating a long comment on
> >> both x86 and arm64 architectures.
> >>
> >> Signed-off-by: Maciej Wieczor-Retman <maciej.wie...@intel.com>
> >
> >Can we please split this into an arm64 and x86 patch. Also, why use int3
> >here rather than a #UD trap, which we use for all other such cases?
>
> Sure, two patches seem okay. I'll first add all the new functions and modify the
> x86 code, then add the arm64 patch which will replace its die() + comment with
> kasan_inline_recover().
>
> About INT3 I'm not sure, it's just how it's written in the LLVM code. I didn't
> see any justification why it's not #UD. My guess is the SDM describes INT3 as an
> interrupt for debugger purposes while #UD is described as "for software
> testing". So from the documentation's point of view INT3 seems to have a stronger case.
>
> Does INT3 interfere with something? Or is #UD better just because of
> consistency?

INT3 from kernel space is already really tricky, since it is used for
self-modifying code.

I suppose we *can* do this, but #UD is already set up to effectively
forward to WARN and friends, and has UBSAN integration. It's just really
weird to have KASAN do something else again.

Andrey Konovalov

unread,
Sep 8, 2025, 4:19:20 PMSep 8
to Maciej Wieczor-Retman, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
On Mon, Sep 8, 2025 at 3:09 PM Maciej Wieczor-Retman
<maciej.wie...@intel.com> wrote:
>
> >>I recall there were some corner cases where this code path got called in outline
> >>mode, didn't have a mismatch but still died due to the die() below. But I'll
> >>recheck and either apply what you wrote above or add a better explanation
> >>to the patch message.
> >
> >Okay, so the int3_selftest_ip() is causing a problem in outline mode.
> >
> >I tried disabling kasan with kasan_disable_current() but thinking of it now it
> >won't work because int3 handler will still be called and die() will happen.
>
> Sorry, I meant to write that kasan_disable_current() works together with
> if (!kasan_report()). Without checking kasan_report()'s return value,
> disabling KASAN through kasan_disable_current() has no effect in inline mode,
> and if int3 is reached in outline mode the kasan_inline_handler will still
> lead to die().

So do I understand correctly, that we have no way to distinguish
whether the int3 was inserted by the KASAN instrumentation or natively
called (like in int3_selftest_ip())?

If so, I think that we need to fix/change the compiler first so that
we can distinguish these cases. And only then introduce
kasan_inline_handler(). (Without kasan_inline_handler(), the outline
instrumentation would then just work, right?)

If we can distinguish them, then we should only call
kasan_inline_handler() for the KASAN-inserted int3's. This is what we
do on arm64 (via brk and KASAN_BRK_IMM). And then int3_selftest_ip()
should not be affected.
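
For reference, the arm64 side looks roughly like this (a condensed sketch of
the pattern in arch/arm64/kernel/traps.c, not a verbatim copy - the point is
that the brk immediate reserved for KASAN routes only KASAN-inserted brk
instructions to the KASAN handler):

#define KASAN_BRK_IMM	0x800	/* brk immediate reserved for KASAN */
#define KASAN_BRK_MASK	0x1ff	/* low bits encode size/write/recover info */

static int kasan_handler(struct pt_regs *regs, unsigned long esr)
{
	/* decode the access info from the esr bits under the mask,
	 * call kasan_report(), then recover or die() */
	return DBG_HOOK_HANDLED;
}

static struct break_hook kasan_break_hook = {
	.fn	= kasan_handler,
	.imm	= KASAN_BRK_IMM,
	.mask	= KASAN_BRK_MASK,
};

/* registered once at boot: register_kernel_break_hook(&kasan_break_hook); */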

> >
> >What did you mean by "return the same value regardless of kasan_report()"? Then
> >it will never reach the kasan_inline_recover() which I assume is needed for
> >inline mode (once recover will work).

I meant that with the recovery always enabled, it should not matter
whether the report is suppressed (kasan_report() returns false) or
printed (returns true). We should always skip over the int3
instruction and continue the execution.
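
As a toy model of that rule (illustrative only - the struct, names and the
one-byte trap size below are made up, not the actual patch):

#include <stdbool.h>
#include <stdint.h>

struct fake_regs { uint64_t ip; };

/* stand-in for kasan_report(): returns false when reporting is suppressed */
static bool report_tag_mismatch(uint64_t addr) { (void)addr; return true; }

#define TRAP_INSN_SIZE 1	/* int3 is one byte; a ud1 form would be longer */

static void handle_tag_mismatch_trap(struct fake_regs *regs, uint64_t addr)
{
	/* print a report unless reporting is currently suppressed... */
	report_tag_mismatch(addr);
	/* ...but recover either way: step over the trap and keep running */
	regs->ip += TRAP_INSN_SIZE;
}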

Andrey Konovalov

unread,
Sep 8, 2025, 4:19:26 PMSep 8
to Maciej Wieczor-Retman, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
On Mon, Sep 8, 2025 at 3:04 PM Maciej Wieczor-Retman
<maciej.wie...@intel.com> wrote:
>
> >> + if (kasan_multi_shot_enabled())
> >> + return true;
> >
> >It's odd that this is required on x86 but not on arm64, see my comment
> >on the patch that adds kasan_inline_handler().
> >
>
> I think this is needed if we want to keep the kasan_inline_recover below.
> Because without this patch, kasan_report() will report a mismatch, and then die()
> will be called. So the multishot gets ignored.

But die() should be called only when recovery is disabled. And
recovery should always be enabled.

But maybe this is the problem with when kasan_inline_handler() gets called, see my
comment on patch #13.

Andrey Konovalov

unread,
Sep 8, 2025, 4:19:29 PMSep 8
to Maciej Wieczor-Retman, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
Ack. I suspect you won't need to provide two separate implementations
for this then.

Also, could you split out this fix into a separate patch with the
Fixes and CC stable tags (or put the fix first in the series)?

Maciej Wieczor-Retman

unread,
Sep 9, 2025, 4:24:52 AMSep 9
to Andrey Konovalov, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
Looking at it again I suppose LLVM does pass a number along with the metadata to the
int3. I didn't notice because no other function checks anything in the x86 int3
handler, compared to how it's done on arm64 with brk.

So right, thanks, after fixing it up it shouldn't affect the int3_selftest_ip().

>
>> >
>> >What did you mean by "return the same value regardless of kasan_report()"? Then
>> >it will never reach the kasan_inline_recover() which I assume is needed for
>> >inline mode (once recover will work).
>
>I meant that with the recovery always enabled, it should not matter
>whether the report is suppressed (kasan_report() returns false) or
>printed (returns true). We should always skip over the int3
>instruction and continue the execution.

Maciej Wieczor-Retman

unread,
Sep 9, 2025, 4:27:11 AMSep 9
to Andrey Konovalov, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
Sure, I'll move it to the beginning of the series.

Peter Zijlstra

unread,
Sep 9, 2025, 4:34:40 AMSep 9
to Maciej Wieczor-Retman, Andrey Konovalov, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
Seriously guys, stop using int3 for this. UBSAN uses UD1, why the heck
would KASAN not do the same?

Peter Zijlstra

unread,
Sep 9, 2025, 4:40:38 AMSep 9
to Maciej Wieczor-Retman, Andrey Konovalov, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
Specifically, look at arch/x86/kernel/traps.h:decode_bug(), UBSan uses
UD1 /0, I would suggest KASAN to use UD1 /1.

Maciej Wieczor-Retman

unread,
Sep 9, 2025, 4:42:53 AMSep 9
to Andrey Konovalov, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
On 2025-09-08 at 22:19:11 +0200, Andrey Konovalov wrote:
>On Mon, Sep 8, 2025 at 3:04 PM Maciej Wieczor-Retman
><maciej.wie...@intel.com> wrote:
>>
>> >> + if (kasan_multi_shot_enabled())
>> >> + return true;
>> >
>> >It's odd this this is required on x86 but not on arm64, see my comment
>> >on the patch that adds kasan_inline_handler().
>> >
>>
>> I think this is needed if we want to keep the kasan_inline_recover below.
>> Because without this patch, kasan_report() will report a mismatch, an then die()
>> will be called. So the multishot gets ignored.
>
>But die() should be called only when recovery is disabled. And
>recovery should always be enabled.

Hmm, when I was testing inline mode last time I thought that recovery was always
disabled. I'll recheck later.

But just looking at the LLVM code, hwasan-recover has init(false), and the kernel
doesn't do anything to change this value in Makefile.kasan. Perhaps it just needs
to be set there?

>But maybe this is the problem with when kasan_inline_handler(), see my
>comment on the the patch #13.

Maciej Wieczor-Retman

unread,
Sep 9, 2025, 4:47:57 AMSep 9
to Peter Zijlstra, nat...@kernel.org, ar...@arndb.de, bro...@kernel.org, Liam.H...@oracle.com, ure...@gmail.com, wi...@kernel.org, kales...@google.com, rp...@kernel.org, lei...@debian.org, co...@redhat.com, sur...@google.com, ak...@linux-foundation.org, lu...@kernel.org, jpoi...@kernel.org, chang...@google.com, h...@zytor.com, dvy...@google.com, k...@kernel.org, cor...@lwn.net, vincenzo...@arm.com, smos...@google.com, nick.desau...@gmail.com, mo...@google.com, andre...@gmail.com, alexander...@linux.intel.com, thiago.b...@linaro.org, catalin...@arm.com, ryabin...@gmail.com, jan.k...@siemens.com, jbo...@suse.cz, dan.j.w...@intel.com, joel.g...@kernel.org, bao...@kernel.org, kevin....@arm.com, nicolas...@linux.dev, p...@google.com, andriy.s...@linux.intel.com, wei...@kernel.org, b...@alien8.de, ada.cou...@arm.com, x...@zytor.com, pankaj...@amd.com, vba...@suse.cz, gli...@google.com, jgr...@suse.com, ke...@kernel.org, jhub...@nvidia.com, joey....@arm.com, ar...@kernel.org, th...@redhat.com, pasha.t...@soleen.com, kristina....@arm.com, big...@linutronix.de, lorenzo...@oracle.com, jason....@amd.com, da...@redhat.com, gr...@amazon.com, wangkef...@huawei.com, z...@nvidia.com, mark.r...@arm.com, dave....@linux.intel.com, samuel....@sifive.com, kbin...@kernel.org, trinta...@gmail.com, sc...@os.amperecomputing.com, justi...@google.com, kuan-y...@canonical.com, m...@kernel.org, tg...@linutronix.de, samito...@google.com, mho...@suse.com, nunoda...@linux.microsoft.com, brg...@gmail.com, wi...@infradead.org, ubi...@gmail.com, mi...@redhat.com, sohil...@intel.com, linu...@kvack.org, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org, x...@kernel.org, ll...@lists.linux.dev, kasa...@googlegroups.com, linu...@vger.kernel.org, linux-...@vger.kernel.org
Ah, I see, the handle_bug(). Then perhaps it's better to move the kasan
handler there and then patch LLVM to use #UD instead. Thanks!

Maciej Wieczor-Retman

unread,
Sep 9, 2025, 4:50:22 AMSep 9
to Peter Zijlstra, Andrey Konovalov, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
Okay, that sounds great, I'll change it in this patchset and write the LLVM
patch later.

Maciej Wieczor-Retman

unread,
Sep 9, 2025, 4:54:18 AMSep 9
to Andrey Konovalov, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
But as Peter Zijlstra noticed, x86 already uses #UD similarly to how BRK is used
on arm64. So I think I'll use that one here, and then change INT3 to UD in
the LLVM patch.

Peter Zijlstra

unread,
Sep 9, 2025, 5:04:12 AMSep 9
to Maciej Wieczor-Retman, Andrey Konovalov, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
On Tue, Sep 09, 2025 at 10:49:53AM +0200, Maciej Wieczor-Retman wrote:

> >Specifically, look at arch/x86/kernel/traps.h:decode_bug(), UBSan uses
> >UD1 /0, I would suggest KASAN to use UD1 /1.
>
> Okay, that sounds great, I'll change it in this patchset and write the LLVM
> patch later.

Thanks! Also note how UBSAN encodes an immediate in the UD1 instruction.
You can use that same scheme to pass through your meta-data thing.

MOD=1 gives you a single byte immediate, and MOD=2 gives you 4 bytes,
eg:

0f b9 49 xx -- ud1 xx(%rcx), %rcx

When poking at LLVM, try and convince the thing to not emit that
'operand address size prefix' byte the way UBSAN does; that's just a waste
of bytes.
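
A self-contained sketch of decoding that form (hypothetical - the "/1"
assignment and reusing the displacement as KASAN metadata are only what is
being proposed in this thread; SIB and prefix bytes are ignored here):

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* ud1 is 0f b9 <ModRM> [disp]; ModRM = mod[7:6] reg[5:3] rm[2:0] */
struct ud1_info {
	uint8_t reg;		/* "/n" form: 0 = UBSAN today, 1 = proposed for KASAN */
	int32_t metadata;	/* displacement reused as the metadata immediate */
	size_t len;		/* instruction length, needed to skip over it */
};

static bool decode_ud1(const uint8_t *insn, struct ud1_info *out)
{
	if (insn[0] != 0x0f || insn[1] != 0xb9)
		return false;

	uint8_t modrm = insn[2];
	uint8_t mod = modrm >> 6;

	if ((modrm & 0x7) == 4)		/* SIB byte follows, not handled in this sketch */
		return false;

	out->reg = (modrm >> 3) & 0x7;

	if (mod == 1) {			/* e.g. 0f b9 49 xx: one-byte displacement */
		out->metadata = (int8_t)insn[3];
		out->len = 4;
	} else if (mod == 2) {		/* four-byte displacement */
		int32_t disp;
		memcpy(&disp, &insn[3], sizeof(disp));
		out->metadata = disp;
		out->len = 7;
	} else {
		return false;		/* register form etc. not handled */
	}
	return true;
}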

Andrey Konovalov

unread,
Sep 9, 2025, 10:45:59 AMSep 9
to Maciej Wieczor-Retman, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
On Tue, Sep 9, 2025 at 10:42 AM Maciej Wieczor-Retman
<maciej.wie...@intel.com> wrote:
>
> On 2025-09-08 at 22:19:11 +0200, Andrey Konovalov wrote:
> >On Mon, Sep 8, 2025 at 3:04 PM Maciej Wieczor-Retman
> ><maciej.wie...@intel.com> wrote:
> >>
> >> >> + if (kasan_multi_shot_enabled())
> >> >> + return true;
> >> >
> >> >It's odd this this is required on x86 but not on arm64, see my comment
> >> >on the patch that adds kasan_inline_handler().
> >> >
> >>
> >> I think this is needed if we want to keep the kasan_inline_recover below.
> >> Because without this patch, kasan_report() will report a mismatch, an then die()
> >> will be called. So the multishot gets ignored.
> >
> >But die() should be called only when recovery is disabled. And
> >recovery should always be enabled.
>
> Hmm I thought when I was testing inline mode last time, that recovery was always
> disabled. I'll recheck later.
>
> But just looking at llvm code, hwasan-recover has init(false). And the kernel
> doesn't do anything to this value in Makefile.kasan. Perhaps it just needs to be
> corrected in the Makefile.kasan?

Recovery should be disabled as the default when
-fsanitize=kernel-hwaddress is used (unless something was
broken/changed); see this patch:

https://github.com/llvm/llvm-project/commit/1ba9d9c6ca1ffeef7e833261ebca463a92adf82f

Andrey Konovalov

unread,
Sep 9, 2025, 10:46:24 AMSep 9
to Maciej Wieczor-Retman, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
On Tue, Sep 9, 2025 at 4:45 PM Andrey Konovalov <andre...@gmail.com> wrote:
>
> On Tue, Sep 9, 2025 at 10:42 AM Maciej Wieczor-Retman
> <maciej.wie...@intel.com> wrote:
> >
> > On 2025-09-08 at 22:19:11 +0200, Andrey Konovalov wrote:
> > >On Mon, Sep 8, 2025 at 3:04 PM Maciej Wieczor-Retman
> > ><maciej.wie...@intel.com> wrote:
> > >>
> > >> >> + if (kasan_multi_shot_enabled())
> > >> >> + return true;
> > >> >
> > >> >It's odd this this is required on x86 but not on arm64, see my comment
> > >> >on the patch that adds kasan_inline_handler().
> > >> >
> > >>
> > >> I think this is needed if we want to keep the kasan_inline_recover below.
> > >> Because without this patch, kasan_report() will report a mismatch, an then die()
> > >> will be called. So the multishot gets ignored.
> > >
> > >But die() should be called only when recovery is disabled. And
> > >recovery should always be enabled.
> >
> > Hmm I thought when I was testing inline mode last time, that recovery was always
> > disabled. I'll recheck later.
> >
> > But just looking at llvm code, hwasan-recover has init(false). And the kernel
> > doesn't do anything to this value in Makefile.kasan. Perhaps it just needs to be
> > corrected in the Makefile.kasan?
>
> Recovery should be disabled as the default when

Eh, enabled, not disabled.

Andrey Konovalov

unread,
Sep 9, 2025, 10:46:31 AMSep 9
to Maciej Wieczor-Retman, sohil...@intel.com, bao...@kernel.org, da...@redhat.com, kbin...@kernel.org, wei...@google.com, Liam.H...@oracle.com, alexandr...@oracle.com, k...@kernel.org, mark.r...@arm.com, trinta...@gmail.com, axelra...@google.com, yua...@google.com, joey....@arm.com, samito...@google.com, joel.g...@kernel.org, gr...@amazon.com, vincenzo...@arm.com, ke...@kernel.org, ar...@kernel.org, thiago.b...@linaro.org, gli...@google.com, th...@redhat.com, kuan-y...@canonical.com, pasha.t...@soleen.com, nick.desau...@gmail.com, vba...@suse.cz, kales...@google.com, justi...@google.com, catalin...@arm.com, alexander...@linux.intel.com, samuel....@sifive.com, dave....@linux.intel.com, cor...@lwn.net, x...@zytor.com, dvy...@google.com, tg...@linutronix.de, sc...@os.amperecomputing.com, jason....@amd.com, mo...@google.com, nat...@kernel.org, lorenzo...@oracle.com, mi...@redhat.com, brg...@gmail.com, kristina....@arm.com, big...@linutronix.de, lu...@kernel.org, jgr...@suse.com, jpoi...@kernel.org, ure...@gmail.com, mho...@suse.com, ada.cou...@arm.com, h...@zytor.com, lei...@debian.org, pet...@infradead.org, wangkef...@huawei.com, sur...@google.com, z...@nvidia.com, smos...@google.com, ryabin...@gmail.com, ubi...@gmail.com, jbo...@suse.cz, bro...@kernel.org, ak...@linux-foundation.org, guoweika...@gmail.com, rp...@kernel.org, p...@google.com, jan.k...@siemens.com, nicolas...@linux.dev, wi...@kernel.org, jhub...@nvidia.com, b...@alien8.de, x...@kernel.org, linu...@vger.kernel.org, linu...@kvack.org, ll...@lists.linux.dev, linux-...@vger.kernel.org, kasa...@googlegroups.com, linux-...@vger.kernel.org, linux-ar...@lists.infradead.org
On Tue, Sep 9, 2025 at 10:54 AM Maciej Wieczor-Retman
<maciej.wie...@intel.com> wrote:
>
> But as Peter Zijlstra noticed, x86 already uses the #UD instruction similarly to
> BRK on arm64. So I think I'll use this one here, and then change INT3 to UD in
> the LLVM patch.

Sound good, thanks!