
[PATCH v13 0/5] arm64: add ARCH_HAS_COPY_MC support


Tong Tiangen

Dec 8, 2024, 9:43:54 PM
to Mark Rutland, Jonathan Cameron, Mauro Carvalho Chehab, Catalin Marinas, Will Deacon, Andrew Morton, James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko, Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x...@kernel.org, H. Peter Anvin, Madhavan Srinivasan, linux-ar...@lists.infradead.org, linu...@kvack.org, linuxp...@lists.ozlabs.org, linux-...@vger.kernel.org, kasa...@googlegroups.com, Tong Tiangen, wangkef...@huawei.com, Guohanjun
Problem
=========
As memory capacity and density increase, so does the probability of memory
errors. The growing size and density of server RAM in data centers and clouds
has led to a corresponding rise in uncorrectable memory errors.

At the same time, more and more scenarios can tolerate memory errors, such as
COW[1,2], KSM copy[3], coredump copy[4], khugepaged[5,6], uaccess copy[7],
etc.

Solution
=========

This patch set introduces a new processing framework on arm64 that enables
error recovery in the above scenarios; more scenarios can be added on top of
it in the future.

On arm64, memory errors are handled in do_sea(), which distinguishes two cases:
1. If the memory error is consumed in user mode, the handling is to kill
the user process and isolate the error page.
2. If the memory error is consumed in kernel mode, the handling is to
panic.

For case 2, an unconditional panic is not always the best choice; some of
these errors can be handled more gracefully. For example, if a uaccess fails
due to a memory error, only the current user process is affected, so killing
that process and isolating the page carrying the hardware error is a better
choice than panicking. A simplified sketch of the intended flow follows the
reference list below.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 245f09226893 ("mm: hwpoison: coredump: support recovery from dump_user_range()")
[5] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[6] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
[7] commit 278b917f8cb9 ("x86/mce: Add _ASM_EXTABLE_CPY for copy user access")
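
As a rough illustration, the intended decision flow looks like the following
simplified pseudo-code (not the exact kernel code; see the do_sea() and
do_apei_claim_sea() changes in patch 2):

/* Simplified sketch of the intended SEA handling flow (see patch 2). */
static int handle_sea_sketch(struct pt_regs *regs)
{
        if (apei_claim_sea(regs))
                return -ENOENT;         /* not firmware-first: default handling */

        if (user_mode(regs))
                return 0;               /* user consumed the error: SIGBUS + isolate page */

        /*
         * Kernel consumed the error: recover only if the faulting insn has
         * a memory-error-safe fixup (e.g. uaccess), otherwise still panic.
         */
        if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC) && fixup_exception_me(regs))
                return 0;               /* kill the task, isolate the page */

        return -ENOENT;
}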

------------------
Test results:

1. copy_page() and copy_mc_page() basic function tests pass, and the
disassembled contents remain the same before and after the refactor.

2. copy_to/from_user() accessing a kernel NULL pointer still raises a
translation fault, dumps an error message and then die()s; test passes.

3. Tested the following scenarios: copy_from_user(), get_user(), COW.

Before the patch: triggering a hardware memory error leads to a panic.
After the patch: triggering a hardware memory error no longer panics.

Testing steps:
step 1: start a user process.
step 2: poison (via einj) one of the user process's pages.
step 3: the kernel accesses the poisoned page on behalf of the user process,
which triggers an SEA.
step 4: the kernel does not panic; only the user process is killed and the
poisoned page is isolated. (Before the patch, the kernel panicked in do_sea().)
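
For reference, step 3 of the copy_from_user() case can be exercised with a
user-space program along these lines (hypothetical test sketch; the buffer
address printed in step 1 is the one to translate to a physical address and
poison with einj in step 2):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        /* step 1: allocate and fault in the page that will be poisoned */
        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(buf, 0x5a, 4096);
        printf("buffer at %p - translate to PA, inject with einj, press Enter\n", buf);
        getchar();

        /* step 3: write() makes the kernel read the poisoned page via copy_from_user() */
        int fd = open("/tmp/sink", O_WRONLY | O_CREAT, 0600);
        write(fd, buf, 4096);           /* the process should be killed (SIGBUS) here */
        return 0;
}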

------------------

Benefits
=========
According to statistics from our storage product, memory errors triggered in
kernel mode by the COW and page cache read (uaccess) scenarios account for
more than 50% of the total. With this patch set deployed, all kernel panics
caused by COW and page cache memory errors are eliminated; the other
scenarios, which account for a small proportion, also benefit.

Since v12:
Thanks to suggestions from Jonathan, Mark, and Mauro, the following changes
were made:
1. Rebase to the latest kernel version.
2. Patch 1: add Jonathan's and Mauro's Reviewed-by.
3. Patch 2: modify do_apei_claim_sea() according to Mark's and Jonathan's
suggestions, and improve the commit message according to Mark's suggestions
(add a description of the impact on regular copy_to_user()).
4. Patch 3: improve the commit message according to Mauro's suggestions and
add Jonathan's Reviewed-by.
5. Patch 4: modify copy_mc_user_highpage() and improve the commit message
according to Jonathan's suggestions (no functional changes).
6. Patch 5: improve the commit message according to Mauro's suggestions.
7. Patches 4/5: take FEAT_MOPS into account in the code logic. Currently, no
fixup is performed for the MOPS instructions.
8. Drop patch 6 of v12 according to Jonathan's suggestion.

Since v11:
1. Rebase to latest kernel version 6.9-rc1.
2. Add patch 5, since the problem described in "Since V10, Besides, 3" has
been solved by a50026bdb867 ("iov_iter: get rid of 'copy_mc' flag").
3. Add the benefit of deploying this patch set in our products to the cover
letter (patch 0) description.

Since V10:
According to Mark's suggestions:
1. Merge V10's patch 2 and patch 3 into V11's patch 2.
2. Patch2(V11): use a new fixup type for ld* in copy_to_user(), fixing a
fatal issue (NULL kernel pointer access) that was previously fixed up
incorrectly.
3. Patch2(V11): refactor the logic of do_sea().
4. Patch4(V11): remove duplicated assembly logic and remove do_mte().

Besides:
1. Patch2(V11): remove the fixup for st* insns; stores generally do not
trigger memory errors.
2. Split part of the logic of patch2(V11) into patch5(V11); for details,
see patch5(V11)'s commit message.
3. Remove patch6(v10) "arm64: introduce copy_mc_to_kernel() implementation".
During the rework, some problems were found that cannot be solved in a
short time; the patch will be resent once they are solved.
4. Add test results to this cover letter.
5. Change the patch set title: do not use "machine check" and drop "-next".

Since V9:
1. Rebase to latest kernel version 6.8-rc2.
2. Add patch 6/6 to support copy_mc_to_kernel().

Since V8:
1. Rebase to the latest kernel version and fix typos in some of the patches.
2. Following Catalin's suggestion, I attempted to change the return value of
copy_mc_[user]_highpage() to the number of bytes not copied. During that
work I found it more reasonable to return -EFAULT when a copy error occurs
(see the newly added patch 4).

For arm64, the implementation of copy_mc_[user]_highpage() must consider
MTE. In the case where the data copy succeeds but the MTE tag copy fails,
returning a count of bytes not copied is also not meaningful.
3. Given the recently added machine-check-safe support for multiple
scenarios, update the commit message of patch 5 (patch 4 in V8).

Since V7:
There are now patches supporting recovery from poison consumption in the
COW scenario[1]. Therefore, supporting the COW scenario on arm64 only
requires modifying the relevant code under arch/.
[1]https://lore.kernel.org/lkml/20221031201029.1...@intel.com/

Since V6:
Resend the patches from V6 that were not merged into mainline.

Since V5:
1. Add patches 2/3 to add uaccess assembly helpers.
2. Optimize the implementation logic of arm64_do_kernel_sea() in patch 8.
3. Remove the kernel access fixup in patch 9.
All suggestions are from Mark.

Since V4:
1. According to Michael's suggestion, add patch 5.
2. According to Mark's suggestion, restructure the arm64 extable code, then
adapt the machine-check-safe support on top of it.
3. According to Mark's suggestion, support machine-check-safe handling in
do_mte() for the COW scenario.
4. Two patches from V4 have been merged into -next, so V5 does not resend
them.

Since V3:
1. According to Robin's suggestion, directly modify user_ldst and
user_ldp in asm-uaccess.h and modify mte.S.
2. Add a new macro USER_MC in asm-uaccess.h, used in copy_from_user.S
and copy_to_user.S.
3. According to Robin's suggestion, use macros in copy_page_mc.S to
simplify the code.
4. According to Kefeng's suggestion, modify the powerpc code in patch 1.
5. According to Kefeng's suggestion, modify mm/extable.c and do some code
optimization.

Since V2:
1. According to Mark's suggestion, all uaccess can now be recovered from
memory errors.
2. The page cache reading scenario is also supported as part of uaccess
(copy_to_user()) and the code duplication problem is solved.
Thanks to Robin for the suggestion.
3. According to Mark's suggestion, update the commit message of patch 2/5.
4. According to Borislav's suggestion, update the commit message of patch 1/5.

Since V1:
1. Consistent with PPC/x86, use CONFIG_ARCH_HAS_COPY_MC instead of
ARM64_UCE_KERNEL_RECOVERY.
2. Add two new scenarios: COW and page cache reading.
3. Fix two small bugs (the first two patches).

V1 in here:
https://lore.kernel.org/lkml/20220323033705.396...@huawei.com/

Tong Tiangen (5):
uaccess: add generic fallback version of copy_mc_to_user()
arm64: add support for ARCH_HAS_COPY_MC
mm/hwpoison: return -EFAULT when copy fail in
copy_mc_[user]_highpage()
arm64: support copy_mc_[user]_highpage()
arm64: introduce copy_mc_to_kernel() implementation

arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/asm-extable.h | 31 +++++++--
arch/arm64/include/asm/asm-uaccess.h | 4 ++
arch/arm64/include/asm/extable.h | 1 +
arch/arm64/include/asm/mte.h | 9 +++
arch/arm64/include/asm/page.h | 10 +++
arch/arm64/include/asm/string.h | 5 ++
arch/arm64/include/asm/uaccess.h | 18 +++++
arch/arm64/lib/Makefile | 2 +
arch/arm64/lib/copy_mc_page.S | 37 +++++++++++
arch/arm64/lib/copy_page.S | 62 ++----------------
arch/arm64/lib/copy_page_template.S | 70 ++++++++++++++++++++
arch/arm64/lib/copy_to_user.S | 10 +--
arch/arm64/lib/memcpy_mc.S | 98 ++++++++++++++++++++++++++++
arch/arm64/lib/mte.S | 29 ++++++++
arch/arm64/mm/copypage.c | 75 +++++++++++++++++++++
arch/arm64/mm/extable.c | 19 ++++++
arch/arm64/mm/fault.c | 30 ++++++---
arch/powerpc/include/asm/uaccess.h | 1 +
arch/x86/include/asm/uaccess.h | 1 +
include/linux/highmem.h | 16 +++--
include/linux/uaccess.h | 8 +++
mm/kasan/shadow.c | 12 ++++
mm/khugepaged.c | 4 +-
24 files changed, 472 insertions(+), 81 deletions(-)
create mode 100644 arch/arm64/lib/copy_mc_page.S
create mode 100644 arch/arm64/lib/copy_page_template.S
create mode 100644 arch/arm64/lib/memcpy_mc.S

--
2.25.1

Tong Tiangen

Dec 8, 2024, 9:43:57 PM
[PATCH v13 1/5] uaccess: add generic fallback version of copy_mc_to_user()
x86 and powerpc have their own implementations of copy_mc_to_user(). Add a
generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.
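
For illustration, a generic caller (hypothetical, not part of this patch)
can now use copy_mc_to_user() unconditionally; without ARCH_HAS_COPY_MC it
degrades to a plain copy_to_user():

#include <linux/uaccess.h>

/*
 * Hypothetical helper: copy a kernel buffer that may sit on poisoned
 * memory out to user space. Returns bytes not copied, like copy_to_user().
 */
static unsigned long push_to_user(void __user *dst, const void *src, size_t len)
{
        return copy_mc_to_user(dst, src, len);
}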

Signed-off-by: Tong Tiangen <tongt...@huawei.com>
Acked-by: Michael Ellerman <m...@ellerman.id.au>
Reviewed-by: Mauro Carvalho Chehab <mchehab...@kernel.org>
Reviewed-by: Jonathan Cameron <Jonathan...@huawei.com>
---
arch/powerpc/include/asm/uaccess.h | 1 +
arch/x86/include/asm/uaccess.h | 1 +
include/linux/uaccess.h | 8 ++++++++
3 files changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 4f5a46a77fa2..44476d66ed13 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -403,6 +403,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)

return n;
}
+#define copy_mc_to_user copy_mc_to_user
#endif

extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 3a7755c1a441..3db67f44063b 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);

unsigned long __must_check
copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
#endif

/*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index e9c702c1908d..9d8c9f8082ff 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -239,6 +239,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
}
#endif

+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+ return copy_to_user(dst, src, cnt);
+}
+#endif
+
static __always_inline void pagefault_disabled_inc(void)
{
current->pagefault_disabled++;
--
2.25.1

Tong Tiangen

Dec 8, 2024, 9:43:57 PM
[PATCH v13 2/5] arm64: add support for ARCH_HAS_COPY_MC
When the arm64 kernel handles hardware memory errors reported via synchronous
notification (do_sea()), and the error is consumed within the kernel, the
current handling is to panic. However, this is not optimal.

Take copy_from/to_user() for example: if an ld* instruction triggers a memory
error, even in kernel mode, only the associated process is affected. Killing
the user process and isolating the corrupt page is a better choice.

Add a new fixup type, EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR, to identify insns
that can recover from memory errors triggered by accesses to kernel memory,
and use this fixup type in __arch_copy_to_user(). This makes the regular
copy_to_user() handle kernel memory errors.
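
For illustration, consider a read path along these lines (hypothetical
caller, not part of this patch), where the source is a kernel page-cache
buffer that may carry a hardware memory error:

/* Hypothetical sketch of a page-cache style read path. */
static ssize_t read_cached(char __user *ubuf, const void *kbuf, size_t len)
{
        unsigned long left;

        /*
         * With this patch, an ld* consuming a memory error in kbuf no longer
         * panics: the new fixup runs, copy_to_user() returns the uncopied
         * byte count as usual, and the task is later killed with the
         * poisoned page isolated via the SEA/APEI handling in do_sea().
         */
        left = copy_to_user(ubuf, kbuf, len);

        return len - left;
}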

Signed-off-by: Tong Tiangen <tongt...@huawei.com>
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
arch/arm64/include/asm/asm-uaccess.h | 4 ++++
arch/arm64/include/asm/extable.h | 1 +
arch/arm64/lib/copy_to_user.S | 10 ++++-----
arch/arm64/mm/extable.c | 19 +++++++++++++++++
arch/arm64/mm/fault.c | 30 ++++++++++++++++++++-------
7 files changed, 78 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 100570a048c5..5fa54d31162c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -21,6 +21,7 @@ config ARM64
select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
select ARCH_HAS_CACHE_LINE_SIZE
select ARCH_HAS_CC_PLATFORM
+ select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index b8a5861dc7b7..0f9123efca0a 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -5,11 +5,13 @@
#include <linux/bits.h>
#include <asm/gpr-num.h>

-#define EX_TYPE_NONE 0
-#define EX_TYPE_BPF 1
-#define EX_TYPE_UACCESS_ERR_ZERO 2
-#define EX_TYPE_KACCESS_ERR_ZERO 3
-#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
+#define EX_TYPE_NONE 0
+#define EX_TYPE_BPF 1
+#define EX_TYPE_UACCESS_ERR_ZERO 2
+#define EX_TYPE_KACCESS_ERR_ZERO 3
+#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
+/* kernel access memory error safe */
+#define EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR 5

/* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
#define EX_DATA_REG_ERR_SHIFT 0
@@ -51,6 +53,17 @@
#define _ASM_EXTABLE_UACCESS(insn, fixup) \
_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)

+#define _ASM_EXTABLE_KACCESS_ERR_ZERO_MEM_ERR(insn, fixup, err, zero) \
+ __ASM_EXTABLE_RAW(insn, fixup, \
+ EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR, \
+ ( \
+ EX_DATA_REG(ERR, err) | \
+ EX_DATA_REG(ZERO, zero) \
+ ))
+
+#define _ASM_EXTABLE_KACCESS_MEM_ERR(insn, fixup) \
+ _ASM_EXTABLE_KACCESS_ERR_ZERO_MEM_ERR(insn, fixup, wzr, wzr)
+
/*
* Create an exception table entry for uaccess `insn`, which will branch to `fixup`
* when an unhandled fault is taken.
@@ -69,6 +82,14 @@
.endif
.endm

+/*
+ * Create an exception table entry for kaccess `insn`, which will branch to
+ * `fixup` when an unhandled fault is taken.
+ */
+ .macro _asm_extable_kaccess_mem_err, insn, fixup
+ _ASM_EXTABLE_KACCESS_MEM_ERR(\insn, \fixup)
+ .endm
+
#else /* __ASSEMBLY__ */

#include <linux/stringify.h>
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 5b6efe8abeeb..19aa0180f645 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -57,6 +57,10 @@ alternative_else_nop_endif
.endm
#endif

+#define KERNEL_MEM_ERR(l, x...) \
+9999: x; \
+ _asm_extable_kaccess_mem_err 9999b, l
+
#define USER(l, x...) \
9999: x; \
_asm_extable_uaccess 9999b, l
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..bc49443bc502 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
#endif /* !CONFIG_BPF_JIT */

bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_me(struct pt_regs *regs);
#endif
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..bedab1678431 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
* x0 - bytes not copied
*/
.macro ldrb1 reg, ptr, val
- ldrb \reg, [\ptr], \val
+ KERNEL_MEM_ERR(9998f, ldrb \reg, [\ptr], \val)
.endm

.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
.endm

.macro ldrh1 reg, ptr, val
- ldrh \reg, [\ptr], \val
+ KERNEL_MEM_ERR(9998f, ldrh \reg, [\ptr], \val)
.endm

.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
.endm

.macro ldr1 reg, ptr, val
- ldr \reg, [\ptr], \val
+ KERNEL_MEM_ERR(9998f, ldr \reg, [\ptr], \val)
.endm

.macro str1 reg, ptr, val
@@ -44,7 +44,7 @@
.endm

.macro ldp1 reg1, reg2, ptr, val
- ldp \reg1, \reg2, [\ptr], \val
+ KERNEL_MEM_ERR(9998f, ldp \reg1, \reg2, [\ptr], \val)
.endm

.macro stp1 reg1, reg2, ptr, val
@@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
9997: cmp dst, dstin
b.ne 9998f
// Before being absolutely sure we couldn't copy anything, try harder
- ldrb tmp1w, [srcin]
+KERNEL_MEM_ERR(9998f, ldrb tmp1w, [srcin])
USER(9998f, sttrb tmp1w, [dst])
add dst, dst, #1
9998: sub x0, end, dst // bytes not copied
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..9ad2b6473b60 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
return ex_handler_uaccess_err_zero(ex, regs);
case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
return ex_handler_load_unaligned_zeropad(ex, regs);
+ case EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR:
+ return false;
}

BUG();
}
+
+bool fixup_exception_me(struct pt_regs *regs)
+{
+ const struct exception_table_entry *ex;
+
+ ex = search_exception_tables(instruction_pointer(regs));
+ if (!ex)
+ return false;
+
+ switch (ex->type) {
+ case EX_TYPE_UACCESS_ERR_ZERO:
+ case EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR:
+ return ex_handler_uaccess_err_zero(ex, regs);
+ }
+
+ return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index ef63651099a9..278e67357f49 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -801,21 +801,35 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
return 1; /* "fault" */
}

+/*
+ * APEI claimed this as a firmware-first notification.
+ * Some processing deferred to task_work before ret_to_user().
+ */
+static int do_apei_claim_sea(struct pt_regs *regs)
+{
+ int ret;
+
+ ret = apei_claim_sea(regs);
+ if (ret)
+ return ret;
+
+ if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
+ if (!fixup_exception_me(regs))
+ return -ENOENT;
+ }
+
+ return ret;
+}
+
static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
{
const struct fault_info *inf;
unsigned long siaddr;

- inf = esr_to_fault_info(esr);
-
- if (user_mode(regs) && apei_claim_sea(regs) == 0) {
- /*
- * APEI claimed this as a firmware-first notification.
- * Some processing deferred to task_work before ret_to_user().
- */
+ if (do_apei_claim_sea(regs) == 0)
return 0;
- }

+ inf = esr_to_fault_info(esr);
if (esr & ESR_ELx_FnV) {
siaddr = 0;
} else {
--
2.25.1

Tong Tiangen

Dec 8, 2024, 9:44:02 PM
[PATCH v13 3/5] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
Currently, copy_mc_[user]_highpage() returns zero on success or, in case of
failure, the number of bytes that were not copied.

While tracking the number of bytes not copied works fine for x86 and PPC, it
is difficult to do the same on arm64 because there is no caller-saved
register available in copy_page() (lib/copy_page.S) to hold the "bytes not
copied" count, and the upcoming copy_mc_page() has the same problem.

Since the callers of copy_mc_[user]_highpage() cannot do any processing on
the remaining data (the page has hardware errors) and only check whether the
copy succeeded, make the interface more generic: return -EFAULT when the
copy fails and zero on success.
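
For illustration, callers now simply check for a non-zero return
(hypothetical COW-style helper, not part of this patch):

/*
 * Hypothetical COW-style copy helper showing the new convention:
 * copy_mc_user_highpage() returns 0 on success or -EFAULT if the
 * source page carried a hardware memory error.
 */
static int cow_copy_page(struct page *dst, struct page *src,
                         unsigned long addr, struct vm_area_struct *vma)
{
        if (copy_mc_user_highpage(dst, src, addr, vma))
                return -EHWPOISON;      /* let the caller unwind and retry/kill */

        return 0;
}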

Signed-off-by: Tong Tiangen <tongt...@huawei.com>
Reviewed-by: Jonathan Cameron <Jonathan...@huawei.com>
Reviewed-by: Mauro Carvalho Chehab <mchehab...@kernel.org>
---
include/linux/highmem.h | 8 ++++----
mm/khugepaged.c | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 6e452bd8e7e3..0eb4b9b06837 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -329,8 +329,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
/*
* If architecture supports machine check exception handling, define the
* #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
*/
static inline int copy_mc_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma)
@@ -349,7 +349,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
if (ret)
memory_failure_queue(page_to_pfn(from), 0);

- return ret;
+ return ret ? -EFAULT : 0;
}

static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
if (ret)
memory_failure_queue(page_to_pfn(from), 0);

- return ret;
+ return ret ? -EFAULT : 0;
}
#else
static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6f8d46d107b4..c3cdc0155dcd 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -820,7 +820,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
continue;
}
src_page = pte_page(pteval);
- if (copy_mc_user_highpage(page, src_page, src_addr, vma) > 0) {
+ if (copy_mc_user_highpage(page, src_page, src_addr, vma)) {
result = SCAN_COPY_MC;
break;
}
@@ -2081,7 +2081,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
}

for (i = 0; i < nr_pages; i++) {
- if (copy_mc_highpage(dst, folio_page(folio, i)) > 0) {
+ if (copy_mc_highpage(dst, folio_page(folio, i))) {
result = SCAN_COPY_MC;
goto rollback;
}
--
2.25.1

Tong Tiangen

Dec 8, 2024, 9:44:02 PM
[PATCH v13 4/5] arm64: support copy_mc_[user]_highpage()
Currently, many scenarios in the kernel that can tolerate memory errors when
copying a page are already supported[1~5], all implemented via
copy_mc_[user]_highpage(). arm64 should support this mechanism as well.

Due to MTE, arm64 needs its own architecture implementation of
copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
__HAVE_ARCH_COPY_MC_USER_HIGHPAGE are added to control this.

Add a new helper, copy_mc_page(), which provides a hardware-memory-error-safe
page copy. Its code logic is the same as copy_page(); the main difference is
that the ldp insns of copy_mc_page() carry the fixup type
EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR. The shared logic is therefore extracted
into copy_page_template.S. In addition, no fixup is applied to the MOPS
insns for now.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
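
For illustration, a caller sees the arm64 semantics as follows (hypothetical
helper, not part of this patch); a poison hit in either the data copy or the
MTE tag copy surfaces uniformly as -EFAULT:

/* Hypothetical caller sketch for the arm64 copy_mc_highpage(). */
static int copy_page_or_poison(struct page *dst, struct page *src)
{
        int ret = copy_mc_highpage(dst, src);

        if (ret)        /* -EFAULT: data copy or MTE tag copy hit poison */
                pr_warn("page copy hit a hardware memory error\n");

        return ret;
}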

Signed-off-by: Tong Tiangen <tongt...@huawei.com>
---
arch/arm64/include/asm/mte.h | 9 ++++
arch/arm64/include/asm/page.h | 10 ++++
arch/arm64/lib/Makefile | 2 +
arch/arm64/lib/copy_mc_page.S | 37 ++++++++++++++
arch/arm64/lib/copy_page.S | 62 ++----------------------
arch/arm64/lib/copy_page_template.S | 70 +++++++++++++++++++++++++++
arch/arm64/lib/mte.S | 29 +++++++++++
arch/arm64/mm/copypage.c | 75 +++++++++++++++++++++++++++++
include/linux/highmem.h | 8 +++
9 files changed, 245 insertions(+), 57 deletions(-)
create mode 100644 arch/arm64/lib/copy_mc_page.S
create mode 100644 arch/arm64/lib/copy_page_template.S

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 6567df8ec8ca..efcd850ea2f8 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -98,6 +98,11 @@ static inline bool try_page_mte_tagging(struct page *page)
void mte_zero_clear_page_tags(void *addr);
void mte_sync_tags(pte_t pte, unsigned int nr_pages);
void mte_copy_page_tags(void *kto, const void *kfrom);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
+#endif
+
void mte_thread_init_user(void);
void mte_thread_switch(struct task_struct *next);
void mte_cpu_setup(void);
@@ -134,6 +139,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
static inline void mte_copy_page_tags(void *kto, const void *kfrom)
{
}
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+ return 0;
+}
static inline void mte_thread_init_user(void)
{
}
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
void copy_highpage(struct page *to, struct page *from);
#define __HAVE_ARCH_COPY_HIGHPAGE

+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+ unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
unsigned long vaddr);
#define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 8e882f479d98..78b0e9904689 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -13,6 +13,8 @@ endif

lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o

+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
obj-$(CONFIG_CRC32) += crc32.o crc32-glue.o

obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..51564828c30c
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/assembler.h>
+#include <asm/page.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
+#include <asm/asm-extable.h>
+#include <asm/asm-uaccess.h>
+
+/*
+ * Copy a page from src to dest (both are page aligned) with memory error safe
+ *
+ * Parameters:
+ * x0 - dest
+ * x1 - src
+ * Returns:
+ * x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ * while copying.
+ */
+ .macro ldp1 reg1, reg2, ptr, val
+ KERNEL_MEM_ERR(9998f, ldp \reg1, \reg2, [\ptr, \val])
+ .endm
+
+SYM_FUNC_START(__pi_copy_mc_page)
+#include "copy_page_template.S"
+
+ mov x0, #0
+ ret
+
+9998: mov x0, #-EFAULT
+ ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index e6374e7e5511..d0186bbf99f1 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,65 +17,13 @@
* x0 - dest
* x1 - src
*/
-SYM_FUNC_START(__pi_copy_page)
-#ifdef CONFIG_AS_HAS_MOPS
- .arch_extension mops
-alternative_if_not ARM64_HAS_MOPS
- b .Lno_mops
-alternative_else_nop_endif
-
- mov x2, #PAGE_SIZE
- cpypwn [x0]!, [x1]!, x2!
- cpymwn [x0]!, [x1]!, x2!
- cpyewn [x0]!, [x1]!, x2!
- ret
-.Lno_mops:
-#endif
- ldp x2, x3, [x1]
- ldp x4, x5, [x1, #16]
- ldp x6, x7, [x1, #32]
- ldp x8, x9, [x1, #48]
- ldp x10, x11, [x1, #64]
- ldp x12, x13, [x1, #80]
- ldp x14, x15, [x1, #96]
- ldp x16, x17, [x1, #112]
-
- add x0, x0, #256
- add x1, x1, #128
-1:
- tst x0, #(PAGE_SIZE - 1)

- stnp x2, x3, [x0, #-256]
- ldp x2, x3, [x1]
- stnp x4, x5, [x0, #16 - 256]
- ldp x4, x5, [x1, #16]
- stnp x6, x7, [x0, #32 - 256]
- ldp x6, x7, [x1, #32]
- stnp x8, x9, [x0, #48 - 256]
- ldp x8, x9, [x1, #48]
- stnp x10, x11, [x0, #64 - 256]
- ldp x10, x11, [x1, #64]
- stnp x12, x13, [x0, #80 - 256]
- ldp x12, x13, [x1, #80]
- stnp x14, x15, [x0, #96 - 256]
- ldp x14, x15, [x1, #96]
- stnp x16, x17, [x0, #112 - 256]
- ldp x16, x17, [x1, #112]
-
- add x0, x0, #128
- add x1, x1, #128
-
- b.ne 1b
-
- stnp x2, x3, [x0, #-256]
- stnp x4, x5, [x0, #16 - 256]
- stnp x6, x7, [x0, #32 - 256]
- stnp x8, x9, [x0, #48 - 256]
- stnp x10, x11, [x0, #64 - 256]
- stnp x12, x13, [x0, #80 - 256]
- stnp x14, x15, [x0, #96 - 256]
- stnp x16, x17, [x0, #112 - 256]
+ .macro ldp1 reg1, reg2, ptr, val
+ ldp \reg1, \reg2, [\ptr, \val]
+ .endm

+SYM_FUNC_START(__pi_copy_page)
+#include "copy_page_template.S"
ret
SYM_FUNC_END(__pi_copy_page)
SYM_FUNC_ALIAS(copy_page, __pi_copy_page)
diff --git a/arch/arm64/lib/copy_page_template.S b/arch/arm64/lib/copy_page_template.S
new file mode 100644
index 000000000000..f96c7988c93d
--- /dev/null
+++ b/arch/arm64/lib/copy_page_template.S
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+/*
+ * Copy a page from src to dest (both are page aligned)
+ *
+ * Parameters:
+ * x0 - dest
+ * x1 - src
+ */
+
+#ifdef CONFIG_AS_HAS_MOPS
+ .arch_extension mops
+alternative_if_not ARM64_HAS_MOPS
+ b .Lno_mops
+alternative_else_nop_endif
+
+ mov x2, #PAGE_SIZE
+ cpypwn [x0]!, [x1]!, x2!
+ cpymwn [x0]!, [x1]!, x2!
+ cpyewn [x0]!, [x1]!, x2!
+ ret
+.Lno_mops:
+#endif
+ ldp1 x2, x3, x1, #0
+ ldp1 x4, x5, x1, #16
+ ldp1 x6, x7, x1, #32
+ ldp1 x8, x9, x1, #48
+ ldp1 x10, x11, x1, #64
+ ldp1 x12, x13, x1, #80
+ ldp1 x14, x15, x1, #96
+ ldp1 x16, x17, x1, #112
+
+ add x0, x0, #256
+ add x1, x1, #128
+1:
+ tst x0, #(PAGE_SIZE - 1)
+
+ stnp x2, x3, [x0, #-256]
+ ldp1 x2, x3, x1, #0
+ stnp x4, x5, [x0, #16 - 256]
+ ldp1 x4, x5, x1, #16
+ stnp x6, x7, [x0, #32 - 256]
+ ldp1 x6, x7, x1, #32
+ stnp x8, x9, [x0, #48 - 256]
+ ldp1 x8, x9, x1, #48
+ stnp x10, x11, [x0, #64 - 256]
+ ldp1 x10, x11, x1, #64
+ stnp x12, x13, [x0, #80 - 256]
+ ldp1 x12, x13, x1, #80
+ stnp x14, x15, [x0, #96 - 256]
+ ldp1 x14, x15, x1, #96
+ stnp x16, x17, [x0, #112 - 256]
+ ldp1 x16, x17, x1, #112
+
+ add x0, x0, #128
+ add x1, x1, #128
+
+ b.ne 1b
+
+ stnp x2, x3, [x0, #-256]
+ stnp x4, x5, [x0, #16 - 256]
+ stnp x6, x7, [x0, #32 - 256]
+ stnp x8, x9, [x0, #48 - 256]
+ stnp x10, x11, [x0, #64 - 256]
+ stnp x12, x13, [x0, #80 - 256]
+ stnp x14, x15, [x0, #96 - 256]
+ stnp x16, x17, [x0, #112 - 256]
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..9d4eeb76a838 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
ret
SYM_FUNC_END(mte_copy_page_tags)

+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Copy the tags from the source page to the destination one with memory error safe
+ * x0 - address of the destination page
+ * x1 - address of the source page
+ * Returns:
+ * x0 - Return 0 if copy success, or
+ * -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+ mov x2, x0
+ mov x3, x1
+ multitag_transfer_size x5, x6
+1:
+KERNEL_MEM_ERR(2f, ldgm x4, [x3])
+ stgm x4, [x2]
+ add x2, x2, x5
+ add x3, x3, x5
+ tst x2, #(PAGE_SIZE - 1)
+ b.ne 1b
+
+ mov x0, #0
+ ret
+
+2: mov x0, #-EFAULT
+ ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+#endif
+
/*
* Read tags from a user buffer (one tag per byte) and set the corresponding
* tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a86c897017df..1a369f325ebb 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -67,3 +67,78 @@ void copy_user_highpage(struct page *to, struct page *from,
flush_dcache_page(to);
}
EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying page or mte.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+ void *kto = page_address(to);
+ void *kfrom = page_address(from);
+ struct folio *src = page_folio(from);
+ struct folio *dst = page_folio(to);
+ unsigned int i, nr_pages;
+ int ret;
+
+ ret = copy_mc_page(kto, kfrom);
+ if (ret)
+ return -EFAULT;
+
+ if (kasan_hw_tags_enabled())
+ page_kasan_tag_reset(to);
+
+ if (!system_supports_mte())
+ return 0;
+
+ if (folio_test_hugetlb(src)) {
+ if (!folio_test_hugetlb_mte_tagged(src) ||
+ from != folio_page(src, 0))
+ return 0;
+
+ WARN_ON_ONCE(!folio_try_hugetlb_mte_tagging(dst));
+
+ /*
+ * Populate tags for all subpages.
+ *
+ * Don't assume the first page is head page since
+ * huge page copy may start from any subpage.
+ */
+ nr_pages = folio_nr_pages(src);
+ for (i = 0; i < nr_pages; i++) {
+ kfrom = page_address(folio_page(src, i));
+ kto = page_address(folio_page(dst, i));
+ ret = mte_copy_mc_page_tags(kto, kfrom);
+ if (ret)
+ return -EFAULT;
+ }
+ folio_set_hugetlb_mte_tagged(dst);
+ } else if (page_mte_tagged(from)) {
+ /* It's a new page, shouldn't have been tagged yet */
+ WARN_ON_ONCE(!try_page_mte_tagging(to));
+
+ ret = mte_copy_mc_page_tags(kto, kfrom);
+ if (ret)
+ return -EFAULT;
+ set_page_mte_tagged(to);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+ unsigned long vaddr, struct vm_area_struct *vma)
+{
+ int ret;
+
+ ret = copy_mc_highpage(to, from);
+ if (ret)
+ return ret;
+
+ flush_dcache_page(to);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 0eb4b9b06837..89a6e0fd0b31 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -326,6 +326,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
#endif

#ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
/*
* If architecture supports machine check exception handling, define the
* #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -351,7 +352,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,

return ret ? -EFAULT : 0;
}
+#endif

+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
static inline int copy_mc_highpage(struct page *to, struct page *from)
{
unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)

return ret ? -EFAULT : 0;
}
+#endif
#else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
static inline int copy_mc_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma)
{
copy_user_highpage(to, from, vaddr, vma);
return 0;
}
+#endif

+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
static inline int copy_mc_highpage(struct page *to, struct page *from)
{
copy_highpage(to, from);
return 0;
}
#endif
+#endif

static inline void memcpy_page(struct page *dst_page, size_t dst_off,
struct page *src_page, size_t src_off,
--
2.25.1

Tong Tiangen

Dec 8, 2024, 9:44:03 PM
[PATCH v13 5/5] arm64: introduce copy_mc_to_kernel() implementation
The copy_mc_to_kernel() helper is a memory copy implementation that handles
source exceptions. It can be used in memory copy scenarios that tolerate
hardware memory errors (e.g. pmem_read/dax_copy_to_iter).

Currently only x86 and ppc support this helper. Add it for arm64 as well,
when ARCH_HAS_COPY_MC is defined, by implementing the copy_mc_to_kernel()
and memcpy_mc() functions.

Because no caller-saved GPR is available for saving "bytes not copied" in
memcpy(), memcpy_mc() follows the implementation of copy_from_user(). In
addition, no fixup is applied to the MOPS insns for now.
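
For illustration, a pmem/dax-style read could use the helper as below
(hypothetical sketch, not part of this patch):

/*
 * Hypothetical pmem-style read: the source may be persistent memory
 * carrying a latent error; copy_mc_to_kernel() reports bytes not copied.
 */
static int pmem_style_read(void *dst, const void *pmem_src, size_t len)
{
        unsigned long rem = copy_mc_to_kernel(dst, pmem_src, len);

        if (rem)
                return -EIO;    /* poison consumed after (len - rem) bytes */

        return 0;
}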

Signed-off-by: Tong Tiangen <tongt...@huawei.com>
---
arch/arm64/include/asm/string.h | 5 ++
arch/arm64/include/asm/uaccess.h | 18 ++++++
arch/arm64/lib/Makefile | 2 +-
arch/arm64/lib/memcpy_mc.S | 98 ++++++++++++++++++++++++++++++++
mm/kasan/shadow.c | 12 ++++
5 files changed, 134 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/lib/memcpy_mc.S

diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index 3a3264ff47b9..23eca4fb24fa 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t);
extern void *memcpy(void *, const void *, __kernel_size_t);
extern void *__memcpy(void *, const void *, __kernel_size_t);

+#define __HAVE_ARCH_MEMCPY_MC
+extern int memcpy_mc(void *, const void *, __kernel_size_t);
+extern int __memcpy_mc(void *, const void *, __kernel_size_t);
+
#define __HAVE_ARCH_MEMMOVE
extern void *memmove(void *, const void *, __kernel_size_t);
extern void *__memmove(void *, const void *, __kernel_size_t);
@@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
*/

#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memcpy_mc(dst, src, len) __memcpy_mc(dst, src, len)
#define memmove(dst, src, len) __memmove(dst, src, len)
#define memset(s, c, n) __memset(s, c, n)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5b91803201ef..2a14b732306a 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -542,4 +542,22 @@ static inline void put_user_gcs(unsigned long val, unsigned long __user *addr,

#endif /* CONFIG_ARM64_GCS */

+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/**
+ * copy_mc_to_kernel - memory copy that handles source exceptions
+ *
+ * @to: destination address
+ * @from: source address
+ * @size: number of bytes to copy
+ *
+ * Return 0 for success, or bytes not copied.
+ */
+static inline unsigned long __must_check
+copy_mc_to_kernel(void *to, const void *from, unsigned long size)
+{
+ return memcpy_mc(to, from, size);
+}
+#define copy_mc_to_kernel copy_mc_to_kernel
+#endif
+
#endif /* __ASM_UACCESS_H */
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 78b0e9904689..326d71ba0517 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -13,7 +13,7 @@ endif

lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o

-lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o memcpy_mc.o

obj-$(CONFIG_CRC32) += crc32.o crc32-glue.o

diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S
new file mode 100644
index 000000000000..cb9caaa1ab0b
--- /dev/null
+++ b/arch/arm64/lib/memcpy_mc.S
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2013 ARM Ltd.
+ * Copyright (C) 2013 Linaro.
+ *
+ * This code is based on glibc cortex strings work originally authored by Linaro
+ * be found @
+ *
+ * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
+ * files/head:/src/aarch64/
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/cache.h>
+#include <asm/asm-uaccess.h>
+
+/*
+ * Copy a buffer from src to dest (alignment handled by the hardware)
+ *
+ * Parameters:
+ * x0 - dest
+ * x1 - src
+ * x2 - n
+ * Returns:
+ * x0 - bytes not copied
+ */
+ .macro ldrb1 reg, ptr, val
+ KERNEL_MEM_ERR(9997f, ldrb \reg, [\ptr], \val)
+ .endm
+
+ .macro strb1 reg, ptr, val
+ strb \reg, [\ptr], \val
+ .endm
+
+ .macro ldrh1 reg, ptr, val
+ KERNEL_MEM_ERR(9997f, ldrh \reg, [\ptr], \val)
+ .endm
+
+ .macro strh1 reg, ptr, val
+ strh \reg, [\ptr], \val
+ .endm
+
+ .macro ldr1 reg, ptr, val
+ KERNEL_MEM_ERR(9997f, ldr \reg, [\ptr], \val)
+ .endm
+
+ .macro str1 reg, ptr, val
+ str \reg, [\ptr], \val
+ .endm
+
+ .macro ldp1 reg1, reg2, ptr, val
+ KERNEL_MEM_ERR(9997f, ldp \reg1, \reg2, [\ptr], \val)
+ .endm
+
+ .macro stp1 reg1, reg2, ptr, val
+ stp \reg1, \reg2, [\ptr], \val
+ .endm
+
+end .req x5
+SYM_FUNC_START(__memcpy_mc_generic)
+ add end, x0, x2
+#include "copy_template.S"
+ mov x0, #0 // Nothing to copy
+ ret
+
+ // Exception fixups
+9997: sub x0, end, dst // bytes not copied
+ ret
+SYM_FUNC_END(__memcpy_mc_generic)
+
+#ifdef CONFIG_AS_HAS_MOPS
+ .arch_extension mops
+SYM_FUNC_START(__memcpy_mc)
+alternative_if_not ARM64_HAS_MOPS
+ b __memcpy_mc_generic
+alternative_else_nop_endif
+
+dstin .req x0
+src .req x1
+count .req x2
+dst .req x3
+
+ mov dst, dstin
+ cpyp [dst]!, [src]!, count!
+ cpym [dst]!, [src]!, count!
+ cpye [dst]!, [src]!, count!
+
+ mov x0, #0 // Nothing to copy
+ ret
+SYM_FUNC_END(__memcpy_mc)
+#else
+SYM_FUNC_ALIAS(__memcpy_mc, __memcpy_mc_generic)
+#endif
+
+EXPORT_SYMBOL(__memcpy_mc)
+SYM_FUNC_ALIAS_WEAK(memcpy_mc, __memcpy_mc)
+EXPORT_SYMBOL(memcpy_mc)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 88d1c9dcb507..a12770fb2e9c 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
}
#endif

+#ifdef __HAVE_ARCH_MEMCPY_MC
+#undef memcpy_mc
+int memcpy_mc(void *dest, const void *src, size_t len)
+{
+ if (!kasan_check_range(src, len, false, _RET_IP_) ||
+ !kasan_check_range(dest, len, true, _RET_IP_))
+ return (int)len;
+
+ return __memcpy_mc(dest, src, len);
+}
+#endif
+
void *__asan_memset(void *addr, int c, ssize_t len)
{
if (!kasan_check_range(addr, len, true, _RET_IP_))
--
2.25.1
