[PATCH 00/13] kmsan: Enable on powerpc


Nicholas Miehlbradt

Dec 14, 2023, 12:56:33 AM
to gli...@google.com, el...@google.com, dvy...@google.com, ak...@linux-foundation.org, m...@ellerman.id.au, npi...@gmail.com, christop...@csgroup.eu, linu...@kvack.org, kasa...@googlegroups.com, i...@linux.ibm.com, linuxp...@lists.ozlabs.org, linux-...@vger.kernel.org, Nicholas Miehlbradt
This series provides minimal support for the Kernel Memory Sanitizer (KMSAN) on
powerpc pseries little-endian guests. The Kernel Memory Sanitizer is a tool
which detects uses of uninitialized memory. Currently KMSAN is clang-only.

The clang support for powerpc has not yet been merged; the pull request can
be found at [1].

In addition to this series, a number of changes are required in generic
KMSAN code. These changes are already on the mailing lists as part of the
series implementing KMSAN for s390 [2]. This series is intended to be
rebased on top of the s390 series.

In addition, I found a bug in the RTC driver used on powerpc. I have sent
a fix for this in a separate series [3].

With this series and the two series mentioned above, I can successfully
boot a pseries LE defconfig without KMSAN warnings. I have not tested other
powerpc platforms.

[1] https://github.com/llvm/llvm-project/pull/73611
[2] https://lore.kernel.org/linux-mm/20231121220155...@linux.ibm.com/
[3] https://lore.kernel.org/linux-rtc/20231129073647.2...@linux.ibm.com/

Nicholas Miehlbradt (13):
kmsan: Export kmsan_handle_dma
hvc: Fix use of uninitialized array in udbg_hvc_putc
powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory
powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled
powerpc: Unpoison buffers populated by hcalls
powerpc/pseries/nvram: Unpoison buffer populated by rtas_call
powerpc/kprobes: Unpoison instruction in kprobe struct
powerpc: Unpoison pt_regs
powerpc: Disable KMSAN checks on functions which walk the stack
powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
powerpc: Implement architecture specific KMSAN interface
powerpc/string: Add KMSAN support
powerpc: Enable KMSAN on powerpc

arch/powerpc/Kconfig | 3 +-
arch/powerpc/include/asm/book3s/64/pgtable.h | 42 +++++++++++++++
arch/powerpc/include/asm/interrupt.h | 2 +
arch/powerpc/include/asm/kmsan.h | 51 +++++++++++++++++++
arch/powerpc/include/asm/string.h | 18 ++++++-
arch/powerpc/kernel/Makefile | 2 +
arch/powerpc/kernel/irq_64.c | 2 +
arch/powerpc/kernel/kprobes.c | 2 +
arch/powerpc/kernel/module.c | 2 +-
arch/powerpc/kernel/process.c | 6 +--
arch/powerpc/kernel/stacktrace.c | 10 ++--
arch/powerpc/kernel/vdso/Makefile | 1 +
arch/powerpc/lib/Makefile | 2 +
arch/powerpc/lib/mem_64.S | 5 +-
arch/powerpc/lib/memcpy_64.S | 2 +
arch/powerpc/perf/callchain.c | 2 +-
arch/powerpc/platforms/pseries/hvconsole.c | 2 +
arch/powerpc/platforms/pseries/nvram.c | 4 ++
arch/powerpc/purgatory/Makefile | 1 +
arch/powerpc/sysdev/xive/spapr.c | 3 ++
drivers/tty/hvc/hvc_vio.c | 2 +-
mm/kmsan/hooks.c | 1 +
.../selftests/powerpc/copyloops/asm/kmsan.h | 0
.../powerpc/copyloops/linux/export.h | 1 +
24 files changed, 152 insertions(+), 14 deletions(-)
create mode 100644 arch/powerpc/include/asm/kmsan.h
create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h

--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:33 AM
All elements of bounce_buffer are eventually read and passed to the
hypervisor so it should probably be fully initialized.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
drivers/tty/hvc/hvc_vio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
index 736b230f5ec0..1e88bfcdde20 100644
--- a/drivers/tty/hvc/hvc_vio.c
+++ b/drivers/tty/hvc/hvc_vio.c
@@ -227,7 +227,7 @@ static const struct hv_ops hvterm_hvsi_ops = {
static void udbg_hvc_putc(char c)
{
int count = -1;
- unsigned char bounce_buffer[16];
+ unsigned char bounce_buffer[16] = { 0 };

if (!hvterm_privs[0])
return;
--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:34 AM
Functions which walk the stack read parts of the stack which cannot be
instrumented by KMSAN, e.g. the backchain. Disable KMSAN sanitization of
these functions to prevent false positives.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/kernel/process.c | 6 +++---
arch/powerpc/kernel/stacktrace.c | 10 ++++++----
arch/powerpc/perf/callchain.c | 2 +-
3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 392404688cec..3dc88143c3b2 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2276,9 +2276,9 @@ static bool empty_user_regs(struct pt_regs *regs, struct task_struct *tsk)

static int kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;

-void __no_sanitize_address show_stack(struct task_struct *tsk,
- unsigned long *stack,
- const char *loglvl)
+void __no_sanitize_address __no_kmsan_checks show_stack(struct task_struct *tsk,
+ unsigned long *stack,
+ const char *loglvl)
{
unsigned long sp, ip, lr, newsp;
int count = 0;
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index e6a958a5da27..369b8b2a1bcd 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -24,8 +24,9 @@

#include <asm/paca.h>

-void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
- struct task_struct *task, struct pt_regs *regs)
+void __no_sanitize_address __no_kmsan_checks
+ arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+ struct task_struct *task, struct pt_regs *regs)
{
unsigned long sp;

@@ -62,8 +63,9 @@ void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry,
*
* If the task is not 'current', the caller *must* ensure the task is inactive.
*/
-int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
- void *cookie, struct task_struct *task)
+int __no_sanitize_address __no_kmsan_checks
+ arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
+ struct task_struct *task)
{
unsigned long sp;
unsigned long newsp;
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 6b4434dd0ff3..c7610b38e9b8 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -40,7 +40,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
return 0;
}

-void __no_sanitize_address
+void __no_sanitize_address __no_kmsan_checks
perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
unsigned long sp, next_sp;
--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:34 AM
Other sanitizers are disabled for these objects, so disable KMSAN too.

prom_init.o can only reference a limited set of external symbols. KMSAN
adds additional references which are not permitted, so disable
sanitization.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/kernel/Makefile | 2 ++
arch/powerpc/kernel/vdso/Makefile | 1 +
arch/powerpc/purgatory/Makefile | 1 +
3 files changed, 4 insertions(+)

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 2919433be355..78ea441f7e18 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -61,6 +61,8 @@ KCSAN_SANITIZE_btext.o := n
KCSAN_SANITIZE_paca.o := n
KCSAN_SANITIZE_setup_64.o := n

+KMSAN_SANITIZE_prom_init.o := n
+
#ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
# Remove stack protector to avoid triggering unneeded stack canary
# checks due to randomize_kstack_offset.
diff --git a/arch/powerpc/kernel/vdso/Makefile b/arch/powerpc/kernel/vdso/Makefile
index 0c7d82c270c3..86fa6ff1ee51 100644
--- a/arch/powerpc/kernel/vdso/Makefile
+++ b/arch/powerpc/kernel/vdso/Makefile
@@ -52,6 +52,7 @@ KCOV_INSTRUMENT := n
UBSAN_SANITIZE := n
KASAN_SANITIZE := n
KCSAN_SANITIZE := n
+KMSAN_SANITIZE := n

ccflags-y := -fno-common -fno-builtin
ldflags-y := -Wl,--hash-style=both -nostdlib -shared -z noexecstack $(CLANG_FLAGS)
diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
index 78473d69cd2b..4b267061bf84 100644
--- a/arch/powerpc/purgatory/Makefile
+++ b/arch/powerpc/purgatory/Makefile
@@ -2,6 +2,7 @@

KASAN_SANITIZE := n
KCSAN_SANITIZE := n
+KMSAN_SANITIZE := n

targets += trampoline_$(BITS).o purgatory.ro

--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:35 AM
pt_regs is initialized by ppc_save_regs, which is implemented in assembly
and therefore does not mark the struct as initialized. Unpoison it so
that it does not generate false positives.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/include/asm/interrupt.h | 2 ++
arch/powerpc/kernel/irq_64.c | 2 ++
2 files changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index a4196ab1d016..a9bb09633689 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -68,6 +68,7 @@

#include <linux/context_tracking.h>
#include <linux/hardirq.h>
+#include <linux/kmsan.h>
#include <asm/cputime.h>
#include <asm/firmware.h>
#include <asm/ftrace.h>
@@ -170,6 +171,7 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs)
__hard_RI_enable();
}
/* Enable MSR[RI] early, to support kernel SLB and hash faults */
+ kmsan_unpoison_entry_regs(regs);
#endif

if (!arch_irq_disabled_regs(regs))
diff --git a/arch/powerpc/kernel/irq_64.c b/arch/powerpc/kernel/irq_64.c
index 938e66829eae..3d441f1b8c49 100644
--- a/arch/powerpc/kernel/irq_64.c
+++ b/arch/powerpc/kernel/irq_64.c
@@ -45,6 +45,7 @@
#include <linux/vmalloc.h>
#include <linux/pgtable.h>
#include <linux/static_call.h>
+#include <linux/kmsan.h>

#include <linux/uaccess.h>
#include <asm/interrupt.h>
@@ -117,6 +118,7 @@ static __no_kcsan void __replay_soft_interrupts(void)
local_paca->irq_happened |= PACA_IRQ_REPLAYING;

ppc_save_regs(&regs);
+ kmsan_unpoison_entry_regs(&regs);
regs.softe = IRQS_ENABLED;
regs.msr |= MSR_EE;

--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:35 AM
Enable KMSAN in the powerpc Kconfig by selecting HAVE_ARCH_KMSAN on PPC64.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e33e3250c478..71cc7d2a0a72 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -217,6 +217,7 @@ config PPC
select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
select HAVE_ARCH_KCSAN
select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
+ select HAVE_ARCH_KMSAN if PPC64
select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
select HAVE_ARCH_WITHIN_STACK_FRAMES
select HAVE_ARCH_KGDB
--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:39 AM
KMSAN expects functions __mem{set,cpy,move} so add aliases pointing to
the respective functions.

Disable use of the architecture-specific memset{16,32,64}, to ensure that
metadata is correctly updated, as well as strn{cpy,cmp} and mem{chr,cmp},
which are implemented in assembly and therefore cannot be instrumented to
propagate/check metadata.

Alias calls to mem{set,cpy,move} to __msan_mem{set,cpy,move} in
instrumented code to correctly propagate metadata.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/include/asm/kmsan.h | 7 +++++++
arch/powerpc/include/asm/string.h | 18 ++++++++++++++++--
arch/powerpc/lib/Makefile | 2 ++
arch/powerpc/lib/mem_64.S | 5 ++++-
arch/powerpc/lib/memcpy_64.S | 2 ++
.../selftests/powerpc/copyloops/asm/kmsan.h | 0
.../selftests/powerpc/copyloops/linux/export.h | 1 +
7 files changed, 32 insertions(+), 3 deletions(-)
create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h

diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
index bc84f6ff2ee9..fc59dc24e170 100644
--- a/arch/powerpc/include/asm/kmsan.h
+++ b/arch/powerpc/include/asm/kmsan.h
@@ -7,6 +7,13 @@
#ifndef _ASM_POWERPC_KMSAN_H
#define _ASM_POWERPC_KMSAN_H

+#ifdef CONFIG_KMSAN
+#define EXPORT_SYMBOL_KMSAN(fn) SYM_FUNC_ALIAS(__##fn, fn) \
+ EXPORT_SYMBOL(__##fn)
+#else
+#define EXPORT_SYMBOL_KMSAN(fn)
+#endif
+
#ifndef __ASSEMBLY__
#ifndef MODULE

diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index 60ba22770f51..412626ce619b 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -4,7 +4,7 @@

#ifdef __KERNEL__

-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
#define __HAVE_ARCH_STRNCPY
#define __HAVE_ARCH_STRNCMP
#define __HAVE_ARCH_MEMCHR
@@ -56,8 +56,22 @@ void *__memmove(void *to, const void *from, __kernel_size_t n);
#endif /* CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX */
#endif /* CONFIG_KASAN */

+#ifdef CONFIG_KMSAN
+
+void *__memset(void *s, int c, __kernel_size_t count);
+void *__memcpy(void *to, const void *from, __kernel_size_t n);
+void *__memmove(void *to, const void *from, __kernel_size_t n);
+
+#ifdef __SANITIZE_MEMORY__
+#include <linux/kmsan_string.h>
+#define memset __msan_memset
+#define memcpy __msan_memcpy
+#define memmove __msan_memmove
+#endif
+#endif /* CONFIG_KMSAN */
+
#ifdef CONFIG_PPC64
-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
#define __HAVE_ARCH_MEMSET32
#define __HAVE_ARCH_MEMSET64

diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 51ad0397c17a..fc3ea3eebbd6 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -32,9 +32,11 @@ obj-y += code-patching.o feature-fixups.o pmem.o
obj-$(CONFIG_CODE_PATCHING_SELFTEST) += test-code-patching.o

ifndef CONFIG_KASAN
+ifndef CONFIG_KMSAN
obj-y += string.o memcmp_$(BITS).o
obj-$(CONFIG_PPC32) += strlen_32.o
endif
+endif

obj-$(CONFIG_PPC32) += div64.o copy_32.o crtsavres.o

diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
index 6fd06cd20faa..a55f2fac49b3 100644
--- a/arch/powerpc/lib/mem_64.S
+++ b/arch/powerpc/lib/mem_64.S
@@ -9,8 +9,9 @@
#include <asm/errno.h>
#include <asm/ppc_asm.h>
#include <asm/kasan.h>
+#include <asm/kmsan.h>

-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
_GLOBAL(__memset16)
rlwimi r4,r4,16,0,15
/* fall through */
@@ -96,6 +97,7 @@ _GLOBAL_KASAN(memset)
blr
EXPORT_SYMBOL(memset)
EXPORT_SYMBOL_KASAN(memset)
+EXPORT_SYMBOL_KMSAN(memset)

_GLOBAL_TOC_KASAN(memmove)
cmplw 0,r3,r4
@@ -140,3 +142,4 @@ _GLOBAL(backwards_memcpy)
b 1b
EXPORT_SYMBOL(memmove)
EXPORT_SYMBOL_KASAN(memmove)
+EXPORT_SYMBOL_KMSAN(memmove)
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
index b5a67e20143f..1657861618cc 100644
--- a/arch/powerpc/lib/memcpy_64.S
+++ b/arch/powerpc/lib/memcpy_64.S
@@ -8,6 +8,7 @@
#include <asm/asm-compat.h>
#include <asm/feature-fixups.h>
#include <asm/kasan.h>
+#include <asm/kmsan.h>

#ifndef SELFTEST_CASE
/* For big-endian, 0 == most CPUs, 1 == POWER6, 2 == Cell */
@@ -228,3 +229,4 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
#endif
EXPORT_SYMBOL(memcpy)
EXPORT_SYMBOL_KASAN(memcpy)
+EXPORT_SYMBOL_KMSAN(memcpy)
diff --git a/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h b/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/tools/testing/selftests/powerpc/copyloops/linux/export.h b/tools/testing/selftests/powerpc/copyloops/linux/export.h
index e6b80d5fbd14..6379624bbf9b 100644
--- a/tools/testing/selftests/powerpc/copyloops/linux/export.h
+++ b/tools/testing/selftests/powerpc/copyloops/linux/export.h
@@ -2,3 +2,4 @@
#define EXPORT_SYMBOL(x)
#define EXPORT_SYMBOL_GPL(x)
#define EXPORT_SYMBOL_KASAN(x)
+#define EXPORT_SYMBOL_KMSAN(x)
--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:47 AM
rtas_call provides a buffer where the return data should be placed. RTAS
initializes this buffer, which is not visible to KMSAN, so unpoison it
manually.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/platforms/pseries/nvram.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/nvram.c b/arch/powerpc/platforms/pseries/nvram.c
index 8130c37962c0..21a27d459347 100644
--- a/arch/powerpc/platforms/pseries/nvram.c
+++ b/arch/powerpc/platforms/pseries/nvram.c
@@ -14,6 +14,7 @@
#include <linux/ctype.h>
#include <linux/uaccess.h>
#include <linux/of.h>
+#include <linux/kmsan-checks.h>
#include <asm/nvram.h>
#include <asm/rtas.h>
#include <asm/machdep.h>
@@ -41,6 +42,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index)
int done;
unsigned long flags;
char *p = buf;
+ size_t l;


if (nvram_size == 0 || nvram_fetch == RTAS_UNKNOWN_SERVICE)
@@ -53,6 +55,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index)
if (i + count > nvram_size)
count = nvram_size - i;

+ l = count;
spin_lock_irqsave(&nvram_lock, flags);

for (; count != 0; count -= len) {
@@ -73,6 +76,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index)
}

spin_unlock_irqrestore(&nvram_lock, flags);
+ kmsan_unpoison_memory(buf, l);

*index = i;
return p - buf;
--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:47 AM
Split the vmalloc region into four quarters. The first quarter is the new
vmalloc region, the second is used to store shadow metadata and the
third is used to store origin metadata. The fourth quarter is unused.

Do the same for the ioremap region.

Module data is stored in the vmalloc region so alias the modules
metadata addresses to the respective vmalloc metadata addresses. Define
MODULES_VADDR and MODULES_END to the start and end of the vmalloc
region.

Since MODULES_VADDR was previously only defined on ppc32 targets, checks
for whether this macro is defined need to be updated to also include
defined(CONFIG_PPC32).

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 42 ++++++++++++++++++++
arch/powerpc/kernel/module.c | 2 +-
2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb77eddca54b..b3a02b8d96e3 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -249,7 +249,38 @@ enum pgtable_index {
extern unsigned long __vmalloc_start;
extern unsigned long __vmalloc_end;
#define VMALLOC_START __vmalloc_start
+
+#ifndef CONFIG_KMSAN
#define VMALLOC_END __vmalloc_end
+#else
+/*
+ * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
+ * are used to keep the metadata for virtual pages. The memory formerly
+ * belonging to vmalloc area is now laid out as follows:
+ *
+ * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
+ * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
+ * KMSAN_VMALLOC_SHADOW_START+VMALLOC_LEN - vmalloc area shadow
+ * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
+ * KMSAN_VMALLOC_ORIGIN_START+VMALLOC_LEN - vmalloc area origins
+ * 4th quarter: unused
+ */
+#define VMALLOC_LEN ((__vmalloc_end - __vmalloc_start) >> 2)
+#define VMALLOC_END (VMALLOC_START + VMALLOC_LEN)
+
+#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
+#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_END + VMALLOC_LEN)
+
+/*
+ * Module metadata is stored in the corresponding vmalloc metadata regions
+ */
+#define KMSAN_MODULES_SHADOW_START KMSAN_VMALLOC_SHADOW_START
+#define KMSAN_MODULES_ORIGIN_START KMSAN_VMALLOC_ORIGIN_START
+#endif /* CONFIG_KMSAN */
+
+#define MODULES_VADDR VMALLOC_START
+#define MODULES_END VMALLOC_END
+#define MODULES_LEN (MODULES_END - MODULES_VADDR)

static inline unsigned int ioremap_max_order(void)
{
@@ -264,7 +295,18 @@ extern unsigned long __kernel_io_start;
extern unsigned long __kernel_io_end;
#define KERN_VIRT_START __kernel_virt_start
#define KERN_IO_START __kernel_io_start
+#ifndef CONFIG_KMSAN
#define KERN_IO_END __kernel_io_end
+#else
+/*
+ * In KMSAN builds IO space is 4 times smaller, the remaining space is used to
+ * store metadata. See comment for vmalloc regions above.
+ */
+#define KERN_IO_LEN ((__kernel_io_end - __kernel_io_start) >> 2)
+#define KERN_IO_END (KERN_IO_START + KERN_IO_LEN)
+#define KERN_IO_SHADOW_START KERN_IO_END
+#define KERN_IO_ORIGIN_START (KERN_IO_SHADOW_START + KERN_IO_LEN)
+#endif /* !CONFIG_KMSAN */

extern struct page *vmemmap;
extern unsigned long pci_io_base;
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index f6d6ae0a1692..5043b959ad4d 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -107,7 +107,7 @@ __module_alloc(unsigned long size, unsigned long start, unsigned long end, bool

void *module_alloc(unsigned long size)
{
-#ifdef MODULES_VADDR
+#if defined(MODULES_VADDR) && defined(CONFIG_PPC32)
unsigned long limit = (unsigned long)_etext - SZ_32M;
void *ptr = NULL;

--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:47 AM
arch_kmsan_get_meta_or_null finds the metadata addresses for addresses
in the ioremap region, which is mapped separately on powerpc.

kmsan_virt_addr_valid is the same as virt_addr_valid except that it omits
the check that addr is less than high_memory, since this function can be
called on addresses above that limit.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/include/asm/kmsan.h | 44 ++++++++++++++++++++++++++++++++
1 file changed, 44 insertions(+)
create mode 100644 arch/powerpc/include/asm/kmsan.h

diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
new file mode 100644
index 000000000000..bc84f6ff2ee9
--- /dev/null
+++ b/arch/powerpc/include/asm/kmsan.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * powerpc KMSAN support.
+ *
+ */
+
+#ifndef _ASM_POWERPC_KMSAN_H
+#define _ASM_POWERPC_KMSAN_H
+
+#ifndef __ASSEMBLY__
+#ifndef MODULE
+
+#include <linux/mmzone.h>
+#include <asm/page.h>
+#include <asm/book3s/64/pgtable.h>
+
+/*
+ * Functions below are declared in the header to make sure they are inlined.
+ * They all are called from kmsan_get_metadata() for every memory access in
+ * the kernel, so speed is important here.
+ */
+
+/*
+ * No powerpc specific metadata locations
+ */
+static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
+{
+ unsigned long addr64 = (unsigned long)addr, off;
+ if (KERN_IO_START <= addr64 && addr64 < KERN_IO_END) {
+ off = addr64 - KERN_IO_START;
+ return (void *)off + (is_origin ? KERN_IO_ORIGIN_START : KERN_IO_SHADOW_START);
+ } else {
+ return 0;
+ }
+}
+
+static inline bool kmsan_virt_addr_valid(void *addr)
+{
+ return (unsigned long)addr >= PAGE_OFFSET && pfn_valid(virt_to_pfn(addr));
+}
+
+#endif /* !MODULE */
+#endif /* !__ASSEMBLY__ */
+#endif /* _ASM_POWERPC_KMSAN_H */
--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:56:51 AM
KMSAN does not unpoison the ainsn field of a kprobe struct correctly.
Manually unpoison it to prevent false positives.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index b20ee72e873a..1cbec54f2b6a 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -27,6 +27,7 @@
#include <asm/sections.h>
#include <asm/inst.h>
#include <linux/uaccess.h>
+#include <linux/kmsan-checks.h>

DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -179,6 +180,7 @@ int arch_prepare_kprobe(struct kprobe *p)

if (!ret) {
patch_instruction(p->ainsn.insn, insn);
+ kmsan_unpoison_memory(p->ainsn.insn, sizeof(kprobe_opcode_t));
p->opcode = ppc_inst_val(insn);
}

--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 12:57:00 AM
plpar_hcall provides the hypervisor with a buffer where return data should
be placed. The hypervisor initializes the buffer, which is not visible to
KMSAN, so unpoison it manually.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
arch/powerpc/platforms/pseries/hvconsole.c | 2 ++
arch/powerpc/sysdev/xive/spapr.c | 3 +++
2 files changed, 5 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/hvconsole.c b/arch/powerpc/platforms/pseries/hvconsole.c
index 1ac52963e08b..7ad66acd5db8 100644
--- a/arch/powerpc/platforms/pseries/hvconsole.c
+++ b/arch/powerpc/platforms/pseries/hvconsole.c
@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/errno.h>
+#include <linux/kmsan-checks.h>
#include <asm/hvcall.h>
#include <asm/hvconsole.h>
#include <asm/plpar_wrappers.h>
@@ -32,6 +33,7 @@ int hvc_get_chars(uint32_t vtermno, char *buf, int count)
unsigned long *lbuf = (unsigned long *)buf;

ret = plpar_hcall(H_GET_TERM_CHAR, retbuf, vtermno);
+ kmsan_unpoison_memory(retbuf, sizeof(retbuf));
lbuf[0] = be64_to_cpu(retbuf[1]);
lbuf[1] = be64_to_cpu(retbuf[2]);

diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
index e45419264391..a9f48a336e4d 100644
--- a/arch/powerpc/sysdev/xive/spapr.c
+++ b/arch/powerpc/sysdev/xive/spapr.c
@@ -20,6 +20,7 @@
#include <linux/mm.h>
#include <linux/delay.h>
#include <linux/libfdt.h>
+#include <linux/kmsan-checks.h>

#include <asm/machdep.h>
#include <asm/prom.h>
@@ -191,6 +192,8 @@ static long plpar_int_get_source_info(unsigned long flags,
return rc;
}

+ kmsan_unpoison_memory(retbuf, sizeof(retbuf));
+
*src_flags = retbuf[0];
*eoi_page = retbuf[1];
*trig_page = retbuf[2];
--
2.40.1

Nicholas Miehlbradt

Dec 14, 2023, 1:01:04 AM
kmsan_handle_dma is required by virtio drivers. Export kmsan_handle_dma
so that the drivers can be compiled as modules.

Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
---
mm/kmsan/hooks.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 7a30274b893c..3532d9275ca5 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -358,6 +358,7 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
size -= to_go;
}
}
+EXPORT_SYMBOL(kmsan_handle_dma);

void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
enum dma_data_direction dir)
--
2.40.1

Christophe Leroy

Dec 14, 2023, 3:37:00 AM
to Nicholas Miehlbradt, gli...@google.com, el...@google.com, dvy...@google.com, ak...@linux-foundation.org, m...@ellerman.id.au, npi...@gmail.com, linu...@kvack.org, kasa...@googlegroups.com, i...@linux.ibm.com, linuxp...@lists.ozlabs.org, linux-...@vger.kernel.org


On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> All elements of bounce_buffer are eventually read and passed to the
> hypervisor so it should probably be fully initialized.

should or shall ?

>
> Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>

Should there be a Fixes: tag ?

> ---
> drivers/tty/hvc/hvc_vio.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
> index 736b230f5ec0..1e88bfcdde20 100644
> --- a/drivers/tty/hvc/hvc_vio.c
> +++ b/drivers/tty/hvc/hvc_vio.c
> @@ -227,7 +227,7 @@ static const struct hv_ops hvterm_hvsi_ops = {
> static void udbg_hvc_putc(char c)
> {
> int count = -1;
> - unsigned char bounce_buffer[16];
> + unsigned char bounce_buffer[16] = { 0 };

Why 16 while we have a count of 1 in the call to hvterm_raw_put_chars() ?

Christophe Leroy

Dec 14, 2023, 4:00:45 AM


On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> Functions which walk the stack read parts of the stack which cannot be
> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
> these functions to prevent false positives.

Do other architectures have to do it as well ?

I don't see it for show_stack(), is that a specific need for powerpc ?

Christophe Leroy

Dec 14, 2023, 4:17:35 AM


On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> Splits the vmalloc region into four. The first quarter is the new
> vmalloc region, the second is used to store shadow metadata and the
> third is used to store origin metadata. The fourth quarter is unused.
>
> Do the same for the ioremap region.
>
> Module data is stored in the vmalloc region so alias the modules
> metadata addresses to the respective vmalloc metadata addresses. Define
> MODULES_VADDR and MODULES_END to the start and end of the vmalloc
> region.
>
> Since MODULES_VADDR was previously only defined on ppc32 targets, checks
> for whether this macro is defined need to be updated to include
> defined(CONFIG_PPC32).

Why ?

In your case MODULES_VADDR is above PAGE_OFFSET so there should be no
difference.

Christophe

Christophe Leroy

Dec 14, 2023, 4:20:11 AM


On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
Missing blank line.

> + if (KERN_IO_START <= addr64 && addr64 < KERN_IO_END) {

off is only used in that block so it should be declared here, can be
done as a single line (followed by a blank line too):

unsigned long off = addr64 - KERN_IO_START;

Christophe Leroy

Dec 14, 2023, 4:25:49 AM


On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
The same is done for KASAN, can't you reuse it ?

> +#ifdef __SANITIZE_MEMORY__
> +#include <linux/kmsan_string.h>
> +#define memset __msan_memset
> +#define memcpy __msan_memcpy
> +#define memmove __msan_memmove
> +#endif

Will that work as you wish ?
What about the calls to memset() or memcpy() emitted directly by GCC ?

Christophe Leroy

Dec 14, 2023, 4:27:30 AM


On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
> Enable KMSAN in the Kconfig.
>
> Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
> ---
> arch/powerpc/Kconfig | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index e33e3250c478..71cc7d2a0a72 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -217,6 +217,7 @@ config PPC
> select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
> select HAVE_ARCH_KCSAN
> select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
> + select HAVE_ARCH_KMSAN if PPC64

You said in the cover letter you are doing it for "pseries le guests".

Will it also work on BE and also on nohash/64 ?

Naveen N Rao

Dec 15, 2023, 2:59:24 AM
kprobe_opcode_t is u32, but we could be probing a prefixed instruction.
You can pass the instruction length through ppc_inst_len(insn).


- Naveen

Aneesh Kumar K.V

Dec 15, 2023, 4:02:12 AM
Nicholas Miehlbradt <nich...@linux.ibm.com> writes:

> Functions which walk the stack read parts of the stack which cannot be
> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
> these functions to prevent false positives.
>

Is the annotation needed to avoid uninitialized access checks when
reading parts of the stack? Can you provide a false positive example for
the commit message?

-aneesh

Aneesh Kumar K.V

Dec 15, 2023, 4:27:56 AM
Nicholas Miehlbradt <nich...@linux.ibm.com> writes:

> Splits the vmalloc region into four. The first quarter is the new
> vmalloc region, the second is used to store shadow metadata and the
> third is used to store origin metadata. The fourth quarter is unused.
>

Do we support KMSAN for both hash and radix? If hash is not supported,
can we then use radix.h for these changes?

> Do the same for the ioremap region.
>
> Module data is stored in the vmalloc region so alias the modules
> metadata addresses to the respective vmalloc metadata addresses. Define
> MODULES_VADDR and MODULES_END to the start and end of the vmalloc
> region.
>
> Since MODULES_VADDR was previously only defined on ppc32 targets, checks
> for whether this macro is defined need to be updated to include
> defined(CONFIG_PPC32).

-aneesh

Michael Ellerman

Dec 21, 2023, 7:09:56 AM
Because hvterm_raw_put_chars() calls hvc_put_chars(), which requires a 16
byte buffer: it passes the buffer directly to firmware, which expects a 16
byte buffer.

It's a pretty horrible calling convention, but I guess it's to avoid
needing another bounce buffer inside hvc_put_chars().

We should probably do the change below, to at least document the
interface better.

cheers


diff --git a/arch/powerpc/include/asm/hvconsole.h b/arch/powerpc/include/asm/hvconsole.h
index ccb2034506f0..0ee7ed019e23 100644
--- a/arch/powerpc/include/asm/hvconsole.h
+++ b/arch/powerpc/include/asm/hvconsole.h
@@ -22,7 +22,7 @@
* parm is included to conform to put_chars() function pointer template
*/
extern int hvc_get_chars(uint32_t vtermno, char *buf, int count);
-extern int hvc_put_chars(uint32_t vtermno, const char *buf, int count);
+extern int hvc_put_chars(uint32_t vtermno, const char buf[16], int count);

/* Provided by HVC VIO */
void hvc_vio_init_early(void);
diff --git a/arch/powerpc/platforms/pseries/hvconsole.c b/arch/powerpc/platforms/pseries/hvconsole.c
index 1ac52963e08b..c40a82e49d59 100644
--- a/arch/powerpc/platforms/pseries/hvconsole.c
+++ b/arch/powerpc/platforms/pseries/hvconsole.c
@@ -52,7 +52,7 @@ EXPORT_SYMBOL(hvc_get_chars);
* firmware. Must be at least 16 bytes, even if count is less than 16.
* @count: Send this number of characters.
*/
-int hvc_put_chars(uint32_t vtermno, const char *buf, int count)
+int hvc_put_chars(uint32_t vtermno, const char buf[16], int count)
{
unsigned long *lbuf = (unsigned long *) buf;
long ret;
diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
index 736b230f5ec0..011b239a7e52 100644
--- a/drivers/tty/hvc/hvc_vio.c
+++ b/drivers/tty/hvc/hvc_vio.c
@@ -115,7 +115,7 @@ static int hvterm_raw_get_chars(uint32_t vtermno, char *buf, int count)
* you are sending fewer chars.
* @count: number of chars to send.
*/
-static int hvterm_raw_put_chars(uint32_t vtermno, const char *buf, int count)
+static int hvterm_raw_put_chars(uint32_t vtermno, const char buf[16], int count)
{
struct hvterm_priv *pv = hvterm_privs[vtermno];

Nicholas Miehlbradt

Jan 9, 2024, 10:54:51 PM


On 14/12/2023 8:17 pm, Christophe Leroy wrote:
>
>
> On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
>> Splits the vmalloc region into four. The first quarter is the new
>> vmalloc region, the second is used to store shadow metadata and the
>> third is used to store origin metadata. The fourth quarter is unused.
>>
>> Do the same for the ioremap region.
>>
>> Module data is stored in the vmalloc region so alias the modules
>> metadata addresses to the respective vmalloc metadata addresses. Define
>> MODULES_VADDR and MODULES_END to the start and end of the vmalloc
>> region.
>>
>> Since MODULES_VADDR was previously only defined on ppc32 targets, checks
>> for whether this macro is defined need to be updated to include
>> defined(CONFIG_PPC32).
>
> Why ?
>
> In your case MODULES_VADDR is above PAGE_OFFSET so there should be no
> difference.
>
> Christophe
>
On 64 bit builds the BUILD_BUG always triggers since MODULES_VADDR
expands to __vmalloc_start which is defined in a different translation
unit. I can restrict the #ifdef CONFIG_PPC32 to just around the
BUILD_BUG since as you pointed out there is no difference otherwise.

Nicholas Miehlbradt

Jan 9, 2024, 11:09:32 PM
I tried this but I believe it makes the file more disorganised and
difficult to edit since there ends up being a set of definitions for
each intersection of features e.g. the definitions needed for both KASAN
and KMSAN, just KASAN, just KMSAN, etc.

This way it's clearer what each sanitizer needs and changing definitions
for one sanitizer won't require refactors affecting other sanitizers.

>> +#ifdef __SANITIZE_MEMORY__
>> +#include <linux/kmsan_string.h>
>> +#define memset __msan_memset
>> +#define memcpy __msan_memcpy
>> +#define memmove __msan_memmove
>> +#endif
>
> Will that work as you wish ?
> What about the calls to memset() or memcpy() emitted directly by GCC ?
>
These are handled by the compiler instrumentation, which replaces them
with calls to the instrumented equivalents.

Nicholas Miehlbradt

Jan 9, 2024, 11:17:12 PM


On 14/12/2023 8:00 pm, Christophe Leroy wrote:
>
>
> On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
>> Functions which walk the stack read parts of the stack which cannot be
>> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
>> these functions to prevent false positives.
>
> Do other architectures have to do it as well ?
>
> I don't see it for show_stack(), is that a specific need for powerpc ?

Other archs have the annotation on functions called by show_stack(). For
x86 it's on show_trace_log_lvl() and for s390 it's on __unwind_start()
and unwind_next_frame().

Christophe Leroy

Feb 19, 2024, 2:37:21 PM


On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
> kmsan_handle_dma is required by virtio drivers. Export kmsan_handle_dma
> so that the drivers can be compiled as modules.
>
> Signed-off-by: Nicholas Miehlbradt <nich...@linux.ibm.com>
> ---
> mm/kmsan/hooks.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
> index 7a30274b893c..3532d9275ca5 100644
> --- a/mm/kmsan/hooks.c
> +++ b/mm/kmsan/hooks.c
> @@ -358,6 +358,7 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
> size -= to_go;
> }
> }
> +EXPORT_SYMBOL(kmsan_handle_dma);

virtio is GPL and all exports inside virtio are EXPORT_SYMBOL_GPL().
Should this one be _GPL as well ?

Christophe Leroy

Feb 20, 2024, 1:39:33 AM


On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
> This series provides the minimal support for Kernel Memory Sanitizer on
> powerpc pseries le guests. Kernel Memory Sanitizer is a tool which detects
> uses of uninitialized memory. Currently KMSAN is clang only.
>
> The clang support for powerpc has not yet been merged, the pull request can
> be found here [1].

As clang doesn't support it yet, it is probably premature to merge
this into the kernel.

I have opened https://github.com/linuxppc/issues/issues/475 to follow up

In the meantime I am flagging this series as "change requested"; we can
revisit it when the time comes.

Christophe

>
> In addition to this series, there are a number of changes required in
> generic kmsan code. These changes are already on mailing lists as part of
> the series implementing KMSAN for s390 [2]. This series is intended to be
> rebased on top of the s390 series.
>
> In addition, I found a bug in the rtc driver used on powerpc. I have sent
> a fix to this in a separate series [3].
>
> With this series and the two series mentioned above, I can successfully
> boot pseries le defconfig without KMSAN warnings. I have not tested other
> powerpc platforms.
>
> [1] https://github.com/llvm/llvm-project/pull/73611
> [2] https://lore.kernel.org/linux-mm/20231121220155...@linux.ibm.com/
> [3] https://lore.kernel.org/linux-rtc/20231129073647.2...@linux.ibm.com/
>
> Nicholas Miehlbradt (13):
> kmsan: Export kmsan_handle_dma
> hvc: Fix use of uninitialized array in udbg_hvc_putc
> powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory
> powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled
> powerpc: Unpoison buffers populated by hcalls
> powerpc/pseries/nvram: Unpoison buffer populated by rtas_call
> powerpc/kprobes: Unpoison instruction in kprobe struct
> powerpc: Unpoison pt_regs
> powerpc: Disable KMSAN checks on functions which walk the stack
> powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
> powerpc: Implement architecture specific KMSAN interface
> powerpc/string: Add KMSAN support
> powerpc: Enable KMSAN on powerpc
>
> arch/powerpc/Kconfig | 3 +-
> arch/powerpc/include/asm/book3s/64/pgtable.h | 42 +++++++++++++++
> arch/powerpc/include/asm/interrupt.h | 2 +
> arch/powerpc/include/asm/kmsan.h | 51 +++++++++++++++++++
> arch/powerpc/include/asm/string.h | 18 ++++++-
> arch/powerpc/kernel/Makefile | 2 +
> arch/powerpc/kernel/irq_64.c | 2 +
> arch/powerpc/kernel/kprobes.c | 2 +
> arch/powerpc/kernel/module.c | 2 +-
> arch/powerpc/kernel/process.c | 6 +--
> arch/powerpc/kernel/stacktrace.c | 10 ++--
> arch/powerpc/kernel/vdso/Makefile | 1 +
> arch/powerpc/lib/Makefile | 2 +
> arch/powerpc/lib/mem_64.S | 5 +-
> arch/powerpc/lib/memcpy_64.S | 2 +
> arch/powerpc/perf/callchain.c | 2 +-
> arch/powerpc/platforms/pseries/hvconsole.c | 2 +
> arch/powerpc/platforms/pseries/nvram.c | 4 ++
> arch/powerpc/purgatory/Makefile | 1 +
> arch/powerpc/sysdev/xive/spapr.c | 3 ++
> drivers/tty/hvc/hvc_vio.c | 2 +-
> mm/kmsan/hooks.c | 1 +
> .../selftests/powerpc/copyloops/asm/kmsan.h | 0
> .../powerpc/copyloops/linux/export.h | 1 +
> 24 files changed, 152 insertions(+), 14 deletions(-)
> create mode 100644 arch/powerpc/include/asm/kmsan.h
> create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
>