[PATCH 1/5] include/linux: Add instrumented.h infrastructure


Marco Elver

Jan 20, 2020, 9:19:42 AM
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
This adds instrumented.h, which provides generic wrappers for memory
access instrumentation that the compiler cannot emit for various
sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
future this will also include KMSAN instrumentation.

Note that copy_{to,from}_user require special instrumentation,
providing hooks before and after the access, since we may need to know
the actual number of bytes accessed (currently this is relevant for
KCSAN, and will also be relevant for KMSAN in future).
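
For illustration, a copy_to_user()-style wrapper would invoke the hooks
roughly as follows (sketch only; the wrapper and its name are
illustrative and not part of this patch):

  /*
   * Sketch: pre hook before the access (size is an upper bound), post
   * hook after it, once the number of bytes not copied is known.
   * raw_copy_to_user() is the arch primitive; this wrapper is not the
   * actual lib/usercopy.c code.
   */
  static __always_inline unsigned long
  example_copy_to_user(void __user *to, const void *from, unsigned long n)
  {
          unsigned long res;

          instrument_copy_to_user_pre(from, n);
          res = raw_copy_to_user(to, from, n);    /* res = bytes not copied */
          instrument_copy_to_user_post(from, n, res);
          return res;
  }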

Suggested-by: Arnd Bergmann <ar...@arndb.de>
Signed-off-by: Marco Elver <el...@google.com>
---
include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
1 file changed, 153 insertions(+)
create mode 100644 include/linux/instrumented.h

diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
new file mode 100644
index 000000000000..9f83c8520223
--- /dev/null
+++ b/include/linux/instrumented.h
@@ -0,0 +1,153 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * This header provides generic wrappers for memory access instrumentation that
+ * the compiler cannot emit for: KASAN, KCSAN.
+ */
+#ifndef _LINUX_INSTRUMENTED_H
+#define _LINUX_INSTRUMENTED_H
+
+#include <linux/compiler.h>
+#include <linux/kasan-checks.h>
+#include <linux/kcsan-checks.h>
+#include <linux/types.h>
+
+/**
+ * instrument_read - instrument regular read access
+ *
+ * Instrument a regular read access. The instrumentation should be inserted
+ * before the actual read happens.
+ *
+ * @v address of access
+ * @size size of access
+ */
+static __always_inline void instrument_read(const volatile void *v, size_t size)
+{
+ kasan_check_read(v, size);
+ kcsan_check_read(v, size);
+}
+
+/**
+ * instrument_write - instrument regular write access
+ *
+ * Instrument a regular write access. The instrumentation should be inserted
+ * before the actual write happens.
+ *
+ * @v address of access
+ * @size size of access
+ */
+static __always_inline void instrument_write(const volatile void *v, size_t size)
+{
+ kasan_check_write(v, size);
+ kcsan_check_write(v, size);
+}
+
+/**
+ * instrument_atomic_read - instrument atomic read access
+ *
+ * Instrument an atomic read access. The instrumentation should be inserted
+ * before the actual read happens.
+ *
+ * @v address of access
+ * @size size of access
+ */
+static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
+{
+ kasan_check_read(v, size);
+ kcsan_check_atomic_read(v, size);
+}
+
+/**
+ * instrument_atomic_write - instrument atomic write access
+ *
+ * Instrument an atomic write access. The instrumentation should be inserted
+ * before the actual write happens.
+ *
+ * @v address of access
+ * @size size of access
+ */
+static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
+{
+ kasan_check_write(v, size);
+ kcsan_check_atomic_write(v, size);
+}
+
+/**
+ * instrument_copy_to_user_pre - instrument reads of copy_to_user
+ *
+ * Instrument reads from kernel memory that are due to copy_to_user (and
+ * variants).
+ *
+ * The instrumentation must be inserted before the accesses. At this point the
+ * actual number of bytes accessed is not yet known.
+ *
+ * @src source address
+ * @size maximum access size
+ */
+static __always_inline void
+instrument_copy_to_user_pre(const volatile void *src, size_t size)
+{
+ /* Check before, to warn before potential memory corruption. */
+ kasan_check_read(src, size);
+}
+
+/**
+ * instrument_copy_to_user_post - instrument reads of copy_to_user
+ *
+ * Instrument reads from kernel memory that are due to copy_to_user (and
+ * variants).
+ *
+ * The instrumentation must be inserted after the accesses. At this point the
+ * actual number of bytes accessed should be known.
+ *
+ * @src source address
+ * @size maximum access size
+ * @left number of bytes left that were not copied
+ */
+static __always_inline void
+instrument_copy_to_user_post(const volatile void *src, size_t size, size_t left)
+{
+ /* Check after, to avoid false positive if memory was not accessed. */
+ kcsan_check_read(src, size - left);
+}
+
+/**
+ * instrument_copy_from_user_pre - instrument writes of copy_from_user
+ *
+ * Instrument writes to kernel memory that are due to copy_from_user (and
+ * variants).
+ *
+ * The instrumentation must be inserted before the accesses. At this point the
+ * actual number of bytes accessed is not yet known.
+ *
+ * @dst destination address
+ * @size maximum access size
+ */
+static __always_inline void
+instrument_copy_from_user_pre(const volatile void *dst, size_t size)
+{
+ /* Check before, to warn before potential memory corruption. */
+ kasan_check_write(dst, size);
+}
+
+/**
+ * instrument_copy_from_user_post - instrument writes of copy_from_user
+ *
+ * Instrument writes to kernel memory that are due to copy_from_user (and
+ * variants).
+ *
+ * The instrumentation must be inserted after the accesses. At this point the
+ * actual number of bytes accessed should be known.
+ *
+ * @dst destination address
+ * @size maximum access size
+ * @left number of bytes left that were not copied
+ */
+static __always_inline void
+instrument_copy_from_user_post(const volatile void *dst, size_t size, size_t left)
+{
+ /* Check after, to avoid false positive if memory was not accessed. */
+ kcsan_check_write(dst, size - left);
+}
+
+#endif /* _LINUX_INSTRUMENTED_H */
--
2.25.0.341.g760bfbb309-goog

Marco Elver

Jan 20, 2020, 9:19:45 AM
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
This switches atomic-instrumented.h to use the generic instrumentation
wrappers provided by instrumented.h.

No functional change intended.

Suggested-by: Arnd Bergmann <ar...@arndb.de>
Signed-off-by: Marco Elver <el...@google.com>
---
include/asm-generic/atomic-instrumented.h | 395 +++++++++++-----------
scripts/atomic/gen-atomic-instrumented.sh | 19 +-
2 files changed, 194 insertions(+), 220 deletions(-)

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 63869ded73ac..379986e40159 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -19,25 +19,12 @@

#include <linux/build_bug.h>
#include <linux/compiler.h>
-#include <linux/kasan-checks.h>
-#include <linux/kcsan-checks.h>
-
-static __always_inline void __atomic_check_read(const volatile void *v, size_t size)
-{
- kasan_check_read(v, size);
- kcsan_check_atomic_read(v, size);
-}
-
-static __always_inline void __atomic_check_write(const volatile void *v, size_t size)
-{
- kasan_check_write(v, size);
- kcsan_check_atomic_write(v, size);
-}
+#include <linux/instrumented.h>

static __always_inline int
atomic_read(const atomic_t *v)
{
- __atomic_check_read(v, sizeof(*v));
+ instrument_atomic_read(v, sizeof(*v));
return arch_atomic_read(v);
}
#define atomic_read atomic_read
@@ -46,7 +33,7 @@ atomic_read(const atomic_t *v)
static __always_inline int
atomic_read_acquire(const atomic_t *v)
{
- __atomic_check_read(v, sizeof(*v));
+ instrument_atomic_read(v, sizeof(*v));
return arch_atomic_read_acquire(v);
}
#define atomic_read_acquire atomic_read_acquire
@@ -55,7 +42,7 @@ atomic_read_acquire(const atomic_t *v)
static __always_inline void
atomic_set(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_set(v, i);
}
#define atomic_set atomic_set
@@ -64,7 +51,7 @@ atomic_set(atomic_t *v, int i)
static __always_inline void
atomic_set_release(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_set_release(v, i);
}
#define atomic_set_release atomic_set_release
@@ -73,7 +60,7 @@ atomic_set_release(atomic_t *v, int i)
static __always_inline void
atomic_add(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_add(i, v);
}
#define atomic_add atomic_add
@@ -82,7 +69,7 @@ atomic_add(int i, atomic_t *v)
static __always_inline int
atomic_add_return(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_return(i, v);
}
#define atomic_add_return atomic_add_return
@@ -92,7 +79,7 @@ atomic_add_return(int i, atomic_t *v)
static __always_inline int
atomic_add_return_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_return_acquire(i, v);
}
#define atomic_add_return_acquire atomic_add_return_acquire
@@ -102,7 +89,7 @@ atomic_add_return_acquire(int i, atomic_t *v)
static __always_inline int
atomic_add_return_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_return_release(i, v);
}
#define atomic_add_return_release atomic_add_return_release
@@ -112,7 +99,7 @@ atomic_add_return_release(int i, atomic_t *v)
static __always_inline int
atomic_add_return_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_return_relaxed(i, v);
}
#define atomic_add_return_relaxed atomic_add_return_relaxed
@@ -122,7 +109,7 @@ atomic_add_return_relaxed(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add(i, v);
}
#define atomic_fetch_add atomic_fetch_add
@@ -132,7 +119,7 @@ atomic_fetch_add(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add_acquire(i, v);
}
#define atomic_fetch_add_acquire atomic_fetch_add_acquire
@@ -142,7 +129,7 @@ atomic_fetch_add_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add_release(i, v);
}
#define atomic_fetch_add_release atomic_fetch_add_release
@@ -152,7 +139,7 @@ atomic_fetch_add_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add_relaxed(i, v);
}
#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
@@ -161,7 +148,7 @@ atomic_fetch_add_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_sub(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_sub(i, v);
}
#define atomic_sub atomic_sub
@@ -170,7 +157,7 @@ atomic_sub(int i, atomic_t *v)
static __always_inline int
atomic_sub_return(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_return(i, v);
}
#define atomic_sub_return atomic_sub_return
@@ -180,7 +167,7 @@ atomic_sub_return(int i, atomic_t *v)
static __always_inline int
atomic_sub_return_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_return_acquire(i, v);
}
#define atomic_sub_return_acquire atomic_sub_return_acquire
@@ -190,7 +177,7 @@ atomic_sub_return_acquire(int i, atomic_t *v)
static __always_inline int
atomic_sub_return_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_return_release(i, v);
}
#define atomic_sub_return_release atomic_sub_return_release
@@ -200,7 +187,7 @@ atomic_sub_return_release(int i, atomic_t *v)
static __always_inline int
atomic_sub_return_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_return_relaxed(i, v);
}
#define atomic_sub_return_relaxed atomic_sub_return_relaxed
@@ -210,7 +197,7 @@ atomic_sub_return_relaxed(int i, atomic_t *v)
static __always_inline int
atomic_fetch_sub(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_sub(i, v);
}
#define atomic_fetch_sub atomic_fetch_sub
@@ -220,7 +207,7 @@ atomic_fetch_sub(int i, atomic_t *v)
static __always_inline int
atomic_fetch_sub_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_sub_acquire(i, v);
}
#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire
@@ -230,7 +217,7 @@ atomic_fetch_sub_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_sub_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_sub_release(i, v);
}
#define atomic_fetch_sub_release atomic_fetch_sub_release
@@ -240,7 +227,7 @@ atomic_fetch_sub_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_sub_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_sub_relaxed(i, v);
}
#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed
@@ -250,7 +237,7 @@ atomic_fetch_sub_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_inc(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_inc(v);
}
#define atomic_inc atomic_inc
@@ -260,7 +247,7 @@ atomic_inc(atomic_t *v)
static __always_inline int
atomic_inc_return(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_return(v);
}
#define atomic_inc_return atomic_inc_return
@@ -270,7 +257,7 @@ atomic_inc_return(atomic_t *v)
static __always_inline int
atomic_inc_return_acquire(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_return_acquire(v);
}
#define atomic_inc_return_acquire atomic_inc_return_acquire
@@ -280,7 +267,7 @@ atomic_inc_return_acquire(atomic_t *v)
static __always_inline int
atomic_inc_return_release(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_return_release(v);
}
#define atomic_inc_return_release atomic_inc_return_release
@@ -290,7 +277,7 @@ atomic_inc_return_release(atomic_t *v)
static __always_inline int
atomic_inc_return_relaxed(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_return_relaxed(v);
}
#define atomic_inc_return_relaxed atomic_inc_return_relaxed
@@ -300,7 +287,7 @@ atomic_inc_return_relaxed(atomic_t *v)
static __always_inline int
atomic_fetch_inc(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_inc(v);
}
#define atomic_fetch_inc atomic_fetch_inc
@@ -310,7 +297,7 @@ atomic_fetch_inc(atomic_t *v)
static __always_inline int
atomic_fetch_inc_acquire(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_inc_acquire(v);
}
#define atomic_fetch_inc_acquire atomic_fetch_inc_acquire
@@ -320,7 +307,7 @@ atomic_fetch_inc_acquire(atomic_t *v)
static __always_inline int
atomic_fetch_inc_release(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_inc_release(v);
}
#define atomic_fetch_inc_release atomic_fetch_inc_release
@@ -330,7 +317,7 @@ atomic_fetch_inc_release(atomic_t *v)
static __always_inline int
atomic_fetch_inc_relaxed(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_inc_relaxed(v);
}
#define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed
@@ -340,7 +327,7 @@ atomic_fetch_inc_relaxed(atomic_t *v)
static __always_inline void
atomic_dec(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_dec(v);
}
#define atomic_dec atomic_dec
@@ -350,7 +337,7 @@ atomic_dec(atomic_t *v)
static __always_inline int
atomic_dec_return(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_return(v);
}
#define atomic_dec_return atomic_dec_return
@@ -360,7 +347,7 @@ atomic_dec_return(atomic_t *v)
static __always_inline int
atomic_dec_return_acquire(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_return_acquire(v);
}
#define atomic_dec_return_acquire atomic_dec_return_acquire
@@ -370,7 +357,7 @@ atomic_dec_return_acquire(atomic_t *v)
static __always_inline int
atomic_dec_return_release(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_return_release(v);
}
#define atomic_dec_return_release atomic_dec_return_release
@@ -380,7 +367,7 @@ atomic_dec_return_release(atomic_t *v)
static __always_inline int
atomic_dec_return_relaxed(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_return_relaxed(v);
}
#define atomic_dec_return_relaxed atomic_dec_return_relaxed
@@ -390,7 +377,7 @@ atomic_dec_return_relaxed(atomic_t *v)
static __always_inline int
atomic_fetch_dec(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_dec(v);
}
#define atomic_fetch_dec atomic_fetch_dec
@@ -400,7 +387,7 @@ atomic_fetch_dec(atomic_t *v)
static __always_inline int
atomic_fetch_dec_acquire(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_dec_acquire(v);
}
#define atomic_fetch_dec_acquire atomic_fetch_dec_acquire
@@ -410,7 +397,7 @@ atomic_fetch_dec_acquire(atomic_t *v)
static __always_inline int
atomic_fetch_dec_release(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_dec_release(v);
}
#define atomic_fetch_dec_release atomic_fetch_dec_release
@@ -420,7 +407,7 @@ atomic_fetch_dec_release(atomic_t *v)
static __always_inline int
atomic_fetch_dec_relaxed(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_dec_relaxed(v);
}
#define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed
@@ -429,7 +416,7 @@ atomic_fetch_dec_relaxed(atomic_t *v)
static __always_inline void
atomic_and(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_and(i, v);
}
#define atomic_and atomic_and
@@ -438,7 +425,7 @@ atomic_and(int i, atomic_t *v)
static __always_inline int
atomic_fetch_and(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_and(i, v);
}
#define atomic_fetch_and atomic_fetch_and
@@ -448,7 +435,7 @@ atomic_fetch_and(int i, atomic_t *v)
static __always_inline int
atomic_fetch_and_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_and_acquire(i, v);
}
#define atomic_fetch_and_acquire atomic_fetch_and_acquire
@@ -458,7 +445,7 @@ atomic_fetch_and_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_and_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_and_release(i, v);
}
#define atomic_fetch_and_release atomic_fetch_and_release
@@ -468,7 +455,7 @@ atomic_fetch_and_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_and_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_and_relaxed(i, v);
}
#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed
@@ -478,7 +465,7 @@ atomic_fetch_and_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_andnot(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_andnot(i, v);
}
#define atomic_andnot atomic_andnot
@@ -488,7 +475,7 @@ atomic_andnot(int i, atomic_t *v)
static __always_inline int
atomic_fetch_andnot(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_andnot(i, v);
}
#define atomic_fetch_andnot atomic_fetch_andnot
@@ -498,7 +485,7 @@ atomic_fetch_andnot(int i, atomic_t *v)
static __always_inline int
atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_andnot_acquire(i, v);
}
#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire
@@ -508,7 +495,7 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_andnot_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_andnot_release(i, v);
}
#define atomic_fetch_andnot_release atomic_fetch_andnot_release
@@ -518,7 +505,7 @@ atomic_fetch_andnot_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_andnot_relaxed(i, v);
}
#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed
@@ -527,7 +514,7 @@ atomic_fetch_andnot_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_or(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_or(i, v);
}
#define atomic_or atomic_or
@@ -536,7 +523,7 @@ atomic_or(int i, atomic_t *v)
static __always_inline int
atomic_fetch_or(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_or(i, v);
}
#define atomic_fetch_or atomic_fetch_or
@@ -546,7 +533,7 @@ atomic_fetch_or(int i, atomic_t *v)
static __always_inline int
atomic_fetch_or_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_or_acquire(i, v);
}
#define atomic_fetch_or_acquire atomic_fetch_or_acquire
@@ -556,7 +543,7 @@ atomic_fetch_or_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_or_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_or_release(i, v);
}
#define atomic_fetch_or_release atomic_fetch_or_release
@@ -566,7 +553,7 @@ atomic_fetch_or_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_or_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_or_relaxed(i, v);
}
#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed
@@ -575,7 +562,7 @@ atomic_fetch_or_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_xor(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_xor(i, v);
}
#define atomic_xor atomic_xor
@@ -584,7 +571,7 @@ atomic_xor(int i, atomic_t *v)
static __always_inline int
atomic_fetch_xor(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_xor(i, v);
}
#define atomic_fetch_xor atomic_fetch_xor
@@ -594,7 +581,7 @@ atomic_fetch_xor(int i, atomic_t *v)
static __always_inline int
atomic_fetch_xor_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_xor_acquire(i, v);
}
#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire
@@ -604,7 +591,7 @@ atomic_fetch_xor_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_xor_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_xor_release(i, v);
}
#define atomic_fetch_xor_release atomic_fetch_xor_release
@@ -614,7 +601,7 @@ atomic_fetch_xor_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_xor_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_xor_relaxed(i, v);
}
#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed
@@ -624,7 +611,7 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v)
static __always_inline int
atomic_xchg(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_xchg(v, i);
}
#define atomic_xchg atomic_xchg
@@ -634,7 +621,7 @@ atomic_xchg(atomic_t *v, int i)
static __always_inline int
atomic_xchg_acquire(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_xchg_acquire(v, i);
}
#define atomic_xchg_acquire atomic_xchg_acquire
@@ -644,7 +631,7 @@ atomic_xchg_acquire(atomic_t *v, int i)
static __always_inline int
atomic_xchg_release(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_xchg_release(v, i);
}
#define atomic_xchg_release atomic_xchg_release
@@ -654,7 +641,7 @@ atomic_xchg_release(atomic_t *v, int i)
static __always_inline int
atomic_xchg_relaxed(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_xchg_relaxed(v, i);
}
#define atomic_xchg_relaxed atomic_xchg_relaxed
@@ -664,7 +651,7 @@ atomic_xchg_relaxed(atomic_t *v, int i)
static __always_inline int
atomic_cmpxchg(atomic_t *v, int old, int new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_cmpxchg(v, old, new);
}
#define atomic_cmpxchg atomic_cmpxchg
@@ -674,7 +661,7 @@ atomic_cmpxchg(atomic_t *v, int old, int new)
static __always_inline int
atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_cmpxchg_acquire(v, old, new);
}
#define atomic_cmpxchg_acquire atomic_cmpxchg_acquire
@@ -684,7 +671,7 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
static __always_inline int
atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_cmpxchg_release(v, old, new);
}
#define atomic_cmpxchg_release atomic_cmpxchg_release
@@ -694,7 +681,7 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new)
static __always_inline int
atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_cmpxchg_relaxed(v, old, new);
}
#define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed
@@ -704,8 +691,8 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
static __always_inline bool
atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic_try_cmpxchg(v, old, new);
}
#define atomic_try_cmpxchg atomic_try_cmpxchg
@@ -715,8 +702,8 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
static __always_inline bool
atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic_try_cmpxchg_acquire(v, old, new);
}
#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire
@@ -726,8 +713,8 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
static __always_inline bool
atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic_try_cmpxchg_release(v, old, new);
}
#define atomic_try_cmpxchg_release atomic_try_cmpxchg_release
@@ -737,8 +724,8 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
static __always_inline bool
atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic_try_cmpxchg_relaxed(v, old, new);
}
#define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed
@@ -748,7 +735,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
static __always_inline bool
atomic_sub_and_test(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_and_test(i, v);
}
#define atomic_sub_and_test atomic_sub_and_test
@@ -758,7 +745,7 @@ atomic_sub_and_test(int i, atomic_t *v)
static __always_inline bool
atomic_dec_and_test(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_and_test(v);
}
#define atomic_dec_and_test atomic_dec_and_test
@@ -768,7 +755,7 @@ atomic_dec_and_test(atomic_t *v)
static __always_inline bool
atomic_inc_and_test(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_and_test(v);
}
#define atomic_inc_and_test atomic_inc_and_test
@@ -778,7 +765,7 @@ atomic_inc_and_test(atomic_t *v)
static __always_inline bool
atomic_add_negative(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_negative(i, v);
}
#define atomic_add_negative atomic_add_negative
@@ -788,7 +775,7 @@ atomic_add_negative(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add_unless(v, a, u);
}
#define atomic_fetch_add_unless atomic_fetch_add_unless
@@ -798,7 +785,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u)
static __always_inline bool
atomic_add_unless(atomic_t *v, int a, int u)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_unless(v, a, u);
}
#define atomic_add_unless atomic_add_unless
@@ -808,7 +795,7 @@ atomic_add_unless(atomic_t *v, int a, int u)
static __always_inline bool
atomic_inc_not_zero(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_not_zero(v);
}
#define atomic_inc_not_zero atomic_inc_not_zero
@@ -818,7 +805,7 @@ atomic_inc_not_zero(atomic_t *v)
static __always_inline bool
atomic_inc_unless_negative(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_unless_negative(v);
}
#define atomic_inc_unless_negative atomic_inc_unless_negative
@@ -828,7 +815,7 @@ atomic_inc_unless_negative(atomic_t *v)
static __always_inline bool
atomic_dec_unless_positive(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_unless_positive(v);
}
#define atomic_dec_unless_positive atomic_dec_unless_positive
@@ -838,7 +825,7 @@ atomic_dec_unless_positive(atomic_t *v)
static __always_inline int
atomic_dec_if_positive(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_if_positive(v);
}
#define atomic_dec_if_positive atomic_dec_if_positive
@@ -847,7 +834,7 @@ atomic_dec_if_positive(atomic_t *v)
static __always_inline s64
atomic64_read(const atomic64_t *v)
{
- __atomic_check_read(v, sizeof(*v));
+ instrument_atomic_read(v, sizeof(*v));
return arch_atomic64_read(v);
}
#define atomic64_read atomic64_read
@@ -856,7 +843,7 @@ atomic64_read(const atomic64_t *v)
static __always_inline s64
atomic64_read_acquire(const atomic64_t *v)
{
- __atomic_check_read(v, sizeof(*v));
+ instrument_atomic_read(v, sizeof(*v));
return arch_atomic64_read_acquire(v);
}
#define atomic64_read_acquire atomic64_read_acquire
@@ -865,7 +852,7 @@ atomic64_read_acquire(const atomic64_t *v)
static __always_inline void
atomic64_set(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_set(v, i);
}
#define atomic64_set atomic64_set
@@ -874,7 +861,7 @@ atomic64_set(atomic64_t *v, s64 i)
static __always_inline void
atomic64_set_release(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_set_release(v, i);
}
#define atomic64_set_release atomic64_set_release
@@ -883,7 +870,7 @@ atomic64_set_release(atomic64_t *v, s64 i)
static __always_inline void
atomic64_add(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_add(i, v);
}
#define atomic64_add atomic64_add
@@ -892,7 +879,7 @@ atomic64_add(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_add_return(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_return(i, v);
}
#define atomic64_add_return atomic64_add_return
@@ -902,7 +889,7 @@ atomic64_add_return(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_return_acquire(i, v);
}
#define atomic64_add_return_acquire atomic64_add_return_acquire
@@ -912,7 +899,7 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_add_return_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_return_release(i, v);
}
#define atomic64_add_return_release atomic64_add_return_release
@@ -922,7 +909,7 @@ atomic64_add_return_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_return_relaxed(i, v);
}
#define atomic64_add_return_relaxed atomic64_add_return_relaxed
@@ -932,7 +919,7 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add(i, v);
}
#define atomic64_fetch_add atomic64_fetch_add
@@ -942,7 +929,7 @@ atomic64_fetch_add(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add_acquire(i, v);
}
#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire
@@ -952,7 +939,7 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add_release(i, v);
}
#define atomic64_fetch_add_release atomic64_fetch_add_release
@@ -962,7 +949,7 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add_relaxed(i, v);
}
#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
@@ -971,7 +958,7 @@ atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_sub(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_sub(i, v);
}
#define atomic64_sub atomic64_sub
@@ -980,7 +967,7 @@ atomic64_sub(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_sub_return(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_return(i, v);
}
#define atomic64_sub_return atomic64_sub_return
@@ -990,7 +977,7 @@ atomic64_sub_return(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_return_acquire(i, v);
}
#define atomic64_sub_return_acquire atomic64_sub_return_acquire
@@ -1000,7 +987,7 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_sub_return_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_return_release(i, v);
}
#define atomic64_sub_return_release atomic64_sub_return_release
@@ -1010,7 +997,7 @@ atomic64_sub_return_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_return_relaxed(i, v);
}
#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed
@@ -1020,7 +1007,7 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_sub(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_sub(i, v);
}
#define atomic64_fetch_sub atomic64_fetch_sub
@@ -1030,7 +1017,7 @@ atomic64_fetch_sub(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_sub_acquire(i, v);
}
#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire
@@ -1040,7 +1027,7 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_sub_release(i, v);
}
#define atomic64_fetch_sub_release atomic64_fetch_sub_release
@@ -1050,7 +1037,7 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_sub_relaxed(i, v);
}
#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
@@ -1060,7 +1047,7 @@ atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_inc(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_inc(v);
}
#define atomic64_inc atomic64_inc
@@ -1070,7 +1057,7 @@ atomic64_inc(atomic64_t *v)
static __always_inline s64
atomic64_inc_return(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_return(v);
}
#define atomic64_inc_return atomic64_inc_return
@@ -1080,7 +1067,7 @@ atomic64_inc_return(atomic64_t *v)
static __always_inline s64
atomic64_inc_return_acquire(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_return_acquire(v);
}
#define atomic64_inc_return_acquire atomic64_inc_return_acquire
@@ -1090,7 +1077,7 @@ atomic64_inc_return_acquire(atomic64_t *v)
static __always_inline s64
atomic64_inc_return_release(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_return_release(v);
}
#define atomic64_inc_return_release atomic64_inc_return_release
@@ -1100,7 +1087,7 @@ atomic64_inc_return_release(atomic64_t *v)
static __always_inline s64
atomic64_inc_return_relaxed(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_return_relaxed(v);
}
#define atomic64_inc_return_relaxed atomic64_inc_return_relaxed
@@ -1110,7 +1097,7 @@ atomic64_inc_return_relaxed(atomic64_t *v)
static __always_inline s64
atomic64_fetch_inc(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_inc(v);
}
#define atomic64_fetch_inc atomic64_fetch_inc
@@ -1120,7 +1107,7 @@ atomic64_fetch_inc(atomic64_t *v)
static __always_inline s64
atomic64_fetch_inc_acquire(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_inc_acquire(v);
}
#define atomic64_fetch_inc_acquire atomic64_fetch_inc_acquire
@@ -1130,7 +1117,7 @@ atomic64_fetch_inc_acquire(atomic64_t *v)
static __always_inline s64
atomic64_fetch_inc_release(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_inc_release(v);
}
#define atomic64_fetch_inc_release atomic64_fetch_inc_release
@@ -1140,7 +1127,7 @@ atomic64_fetch_inc_release(atomic64_t *v)
static __always_inline s64
atomic64_fetch_inc_relaxed(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_inc_relaxed(v);
}
#define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed
@@ -1150,7 +1137,7 @@ atomic64_fetch_inc_relaxed(atomic64_t *v)
static __always_inline void
atomic64_dec(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_dec(v);
}
#define atomic64_dec atomic64_dec
@@ -1160,7 +1147,7 @@ atomic64_dec(atomic64_t *v)
static __always_inline s64
atomic64_dec_return(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_return(v);
}
#define atomic64_dec_return atomic64_dec_return
@@ -1170,7 +1157,7 @@ atomic64_dec_return(atomic64_t *v)
static __always_inline s64
atomic64_dec_return_acquire(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_return_acquire(v);
}
#define atomic64_dec_return_acquire atomic64_dec_return_acquire
@@ -1180,7 +1167,7 @@ atomic64_dec_return_acquire(atomic64_t *v)
static __always_inline s64
atomic64_dec_return_release(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_return_release(v);
}
#define atomic64_dec_return_release atomic64_dec_return_release
@@ -1190,7 +1177,7 @@ atomic64_dec_return_release(atomic64_t *v)
static __always_inline s64
atomic64_dec_return_relaxed(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_return_relaxed(v);
}
#define atomic64_dec_return_relaxed atomic64_dec_return_relaxed
@@ -1200,7 +1187,7 @@ atomic64_dec_return_relaxed(atomic64_t *v)
static __always_inline s64
atomic64_fetch_dec(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_dec(v);
}
#define atomic64_fetch_dec atomic64_fetch_dec
@@ -1210,7 +1197,7 @@ atomic64_fetch_dec(atomic64_t *v)
static __always_inline s64
atomic64_fetch_dec_acquire(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_dec_acquire(v);
}
#define atomic64_fetch_dec_acquire atomic64_fetch_dec_acquire
@@ -1220,7 +1207,7 @@ atomic64_fetch_dec_acquire(atomic64_t *v)
static __always_inline s64
atomic64_fetch_dec_release(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_dec_release(v);
}
#define atomic64_fetch_dec_release atomic64_fetch_dec_release
@@ -1230,7 +1217,7 @@ atomic64_fetch_dec_release(atomic64_t *v)
static __always_inline s64
atomic64_fetch_dec_relaxed(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_dec_relaxed(v);
}
#define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed
@@ -1239,7 +1226,7 @@ atomic64_fetch_dec_relaxed(atomic64_t *v)
static __always_inline void
atomic64_and(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_and(i, v);
}
#define atomic64_and atomic64_and
@@ -1248,7 +1235,7 @@ atomic64_and(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_and(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_and(i, v);
}
#define atomic64_fetch_and atomic64_fetch_and
@@ -1258,7 +1245,7 @@ atomic64_fetch_and(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_and_acquire(i, v);
}
#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire
@@ -1268,7 +1255,7 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_and_release(i, v);
}
#define atomic64_fetch_and_release atomic64_fetch_and_release
@@ -1278,7 +1265,7 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_and_relaxed(i, v);
}
#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
@@ -1288,7 +1275,7 @@ atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_andnot(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_andnot(i, v);
}
#define atomic64_andnot atomic64_andnot
@@ -1298,7 +1285,7 @@ atomic64_andnot(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_andnot(i, v);
}
#define atomic64_fetch_andnot atomic64_fetch_andnot
@@ -1308,7 +1295,7 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_andnot_acquire(i, v);
}
#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire
@@ -1318,7 +1305,7 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_andnot_release(i, v);
}
#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release
@@ -1328,7 +1315,7 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_andnot_relaxed(i, v);
}
#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
@@ -1337,7 +1324,7 @@ atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_or(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_or(i, v);
}
#define atomic64_or atomic64_or
@@ -1346,7 +1333,7 @@ atomic64_or(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_or(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_or(i, v);
}
#define atomic64_fetch_or atomic64_fetch_or
@@ -1356,7 +1343,7 @@ atomic64_fetch_or(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_or_acquire(i, v);
}
#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire
@@ -1366,7 +1353,7 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_or_release(i, v);
}
#define atomic64_fetch_or_release atomic64_fetch_or_release
@@ -1376,7 +1363,7 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_or_relaxed(i, v);
}
#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed
@@ -1385,7 +1372,7 @@ atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_xor(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_xor(i, v);
}
#define atomic64_xor atomic64_xor
@@ -1394,7 +1381,7 @@ atomic64_xor(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_xor(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_xor(i, v);
}
#define atomic64_fetch_xor atomic64_fetch_xor
@@ -1404,7 +1391,7 @@ atomic64_fetch_xor(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_xor_acquire(i, v);
}
#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire
@@ -1414,7 +1401,7 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_xor_release(i, v);
}
#define atomic64_fetch_xor_release atomic64_fetch_xor_release
@@ -1424,7 +1411,7 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_xor_relaxed(i, v);
}
#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
@@ -1434,7 +1421,7 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_xchg(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_xchg(v, i);
}
#define atomic64_xchg atomic64_xchg
@@ -1444,7 +1431,7 @@ atomic64_xchg(atomic64_t *v, s64 i)
static __always_inline s64
atomic64_xchg_acquire(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_xchg_acquire(v, i);
}
#define atomic64_xchg_acquire atomic64_xchg_acquire
@@ -1454,7 +1441,7 @@ atomic64_xchg_acquire(atomic64_t *v, s64 i)
static __always_inline s64
atomic64_xchg_release(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_xchg_release(v, i);
}
#define atomic64_xchg_release atomic64_xchg_release
@@ -1464,7 +1451,7 @@ atomic64_xchg_release(atomic64_t *v, s64 i)
static __always_inline s64
atomic64_xchg_relaxed(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_xchg_relaxed(v, i);
}
#define atomic64_xchg_relaxed atomic64_xchg_relaxed
@@ -1474,7 +1461,7 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 i)
static __always_inline s64
atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_cmpxchg(v, old, new);
}
#define atomic64_cmpxchg atomic64_cmpxchg
@@ -1484,7 +1471,7 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
static __always_inline s64
atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_cmpxchg_acquire(v, old, new);
}
#define atomic64_cmpxchg_acquire atomic64_cmpxchg_acquire
@@ -1494,7 +1481,7 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
static __always_inline s64
atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_cmpxchg_release(v, old, new);
}
#define atomic64_cmpxchg_release atomic64_cmpxchg_release
@@ -1504,7 +1491,7 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
static __always_inline s64
atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_cmpxchg_relaxed(v, old, new);
}
#define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed
@@ -1514,8 +1501,8 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
static __always_inline bool
atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic64_try_cmpxchg(v, old, new);
}
#define atomic64_try_cmpxchg atomic64_try_cmpxchg
@@ -1525,8 +1512,8 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
static __always_inline bool
atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic64_try_cmpxchg_acquire(v, old, new);
}
#define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg_acquire
@@ -1536,8 +1523,8 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
static __always_inline bool
atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic64_try_cmpxchg_release(v, old, new);
}
#define atomic64_try_cmpxchg_release atomic64_try_cmpxchg_release
@@ -1547,8 +1534,8 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
static __always_inline bool
atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
}
#define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg_relaxed
@@ -1558,7 +1545,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
static __always_inline bool
atomic64_sub_and_test(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_and_test(i, v);
}
#define atomic64_sub_and_test atomic64_sub_and_test
@@ -1568,7 +1555,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v)
static __always_inline bool
atomic64_dec_and_test(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_and_test(v);
}
#define atomic64_dec_and_test atomic64_dec_and_test
@@ -1578,7 +1565,7 @@ atomic64_dec_and_test(atomic64_t *v)
static __always_inline bool
atomic64_inc_and_test(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_and_test(v);
}
#define atomic64_inc_and_test atomic64_inc_and_test
@@ -1588,7 +1575,7 @@ atomic64_inc_and_test(atomic64_t *v)
static __always_inline bool
atomic64_add_negative(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_negative(i, v);
}
#define atomic64_add_negative atomic64_add_negative
@@ -1598,7 +1585,7 @@ atomic64_add_negative(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add_unless(v, a, u);
}
#define atomic64_fetch_add_unless atomic64_fetch_add_unless
@@ -1608,7 +1595,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
static __always_inline bool
atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_unless(v, a, u);
}
#define atomic64_add_unless atomic64_add_unless
@@ -1618,7 +1605,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
static __always_inline bool
atomic64_inc_not_zero(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_not_zero(v);
}
#define atomic64_inc_not_zero atomic64_inc_not_zero
@@ -1628,7 +1615,7 @@ atomic64_inc_not_zero(atomic64_t *v)
static __always_inline bool
atomic64_inc_unless_negative(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_unless_negative(v);
}
#define atomic64_inc_unless_negative atomic64_inc_unless_negative
@@ -1638,7 +1625,7 @@ atomic64_inc_unless_negative(atomic64_t *v)
static __always_inline bool
atomic64_dec_unless_positive(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_unless_positive(v);
}
#define atomic64_dec_unless_positive atomic64_dec_unless_positive
@@ -1648,7 +1635,7 @@ atomic64_dec_unless_positive(atomic64_t *v)
static __always_inline s64
atomic64_dec_if_positive(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_if_positive(v);
}
#define atomic64_dec_if_positive atomic64_dec_if_positive
@@ -1658,7 +1645,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define xchg(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_xchg(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1667,7 +1654,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define xchg_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1676,7 +1663,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define xchg_release(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_xchg_release(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1685,7 +1672,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define xchg_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1694,7 +1681,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1703,7 +1690,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1712,7 +1699,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_release(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1721,7 +1708,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1730,7 +1717,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg64(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1739,7 +1726,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg64_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1748,7 +1735,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg64_release(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1757,7 +1744,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg64_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1765,28 +1752,28 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg_local(__ai_ptr, __VA_ARGS__); \
})

#define cmpxchg64_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \
})

#define sync_cmpxchg(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \
})

#define cmpxchg_double(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \
arch_cmpxchg_double(__ai_ptr, __VA_ARGS__); \
})

@@ -1794,9 +1781,9 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_double_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \
arch_cmpxchg_double_local(__ai_ptr, __VA_ARGS__); \
})

#endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */
-// 7b7e2af0e75c8ecb6f02298a7075f503f30d244c
+// 89bf97f3a7509b740845e51ddf31055b48a81f40
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index fb4222548b22..6afadf73da17 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -20,7 +20,7 @@ gen_param_check()
# We don't write to constant parameters
[ ${type#c} != ${type} ] && rw="read"

- printf "\t__atomic_check_${rw}(${name}, sizeof(*${name}));\n"
+ printf "\tinstrument_atomic_${rw}(${name}, sizeof(*${name}));\n"
}

#gen_param_check(arg...)
@@ -107,7 +107,7 @@ cat <<EOF
#define ${xchg}(ptr, ...) \\
({ \\
typeof(ptr) __ai_ptr = (ptr); \\
- __atomic_check_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
+ instrument_atomic_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
arch_${xchg}(__ai_ptr, __VA_ARGS__); \\
})
EOF
@@ -148,20 +148,7 @@ cat << EOF

#include <linux/build_bug.h>
#include <linux/compiler.h>
-#include <linux/kasan-checks.h>
-#include <linux/kcsan-checks.h>
-
-static __always_inline void __atomic_check_read(const volatile void *v, size_t size)
-{
- kasan_check_read(v, size);
- kcsan_check_atomic_read(v, size);
-}
-
-static __always_inline void __atomic_check_write(const volatile void *v, size_t size)
-{
- kasan_check_write(v, size);
- kcsan_check_atomic_write(v, size);
-}
+#include <linux/instrumented.h>

EOF

--
2.25.0.341.g760bfbb309-goog

Marco Elver

unread,
Jan 20, 2020, 9:19:48 AM1/20/20
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
Add explicit KCSAN checks for bitops.

Note that test_bit() is an atomic bitop, and we instrument it as such,
although it is in the non-atomic header. Currently it cannot be moved:
http://lkml.kernel.org/r/87pnh5d...@dja-thinkpad.axtens.net

Signed-off-by: Marco Elver <el...@google.com>
---
include/asm-generic/bitops/instrumented-atomic.h | 14 +++++++-------
include/asm-generic/bitops/instrumented-lock.h | 10 +++++-----
.../asm-generic/bitops/instrumented-non-atomic.h | 16 ++++++++--------
3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/include/asm-generic/bitops/instrumented-atomic.h b/include/asm-generic/bitops/instrumented-atomic.h
index 18ce3c9e8eec..fb2cb33a4013 100644
--- a/include/asm-generic/bitops/instrumented-atomic.h
+++ b/include/asm-generic/bitops/instrumented-atomic.h
@@ -11,7 +11,7 @@
#ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H
#define _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H

-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>

/**
* set_bit - Atomically set a bit in memory
@@ -25,7 +25,7 @@
*/
static inline void set_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
arch_set_bit(nr, addr);
}

@@ -38,7 +38,7 @@ static inline void set_bit(long nr, volatile unsigned long *addr)
*/
static inline void clear_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
arch_clear_bit(nr, addr);
}

@@ -54,7 +54,7 @@ static inline void clear_bit(long nr, volatile unsigned long *addr)
*/
static inline void change_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
arch_change_bit(nr, addr);
}

@@ -67,7 +67,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr)
*/
static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_set_bit(nr, addr);
}

@@ -80,7 +80,7 @@ static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
*/
static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_clear_bit(nr, addr);
}

@@ -93,7 +93,7 @@ static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
*/
static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_change_bit(nr, addr);
}

diff --git a/include/asm-generic/bitops/instrumented-lock.h b/include/asm-generic/bitops/instrumented-lock.h
index ec53fdeea9ec..b9bec468ae03 100644
--- a/include/asm-generic/bitops/instrumented-lock.h
+++ b/include/asm-generic/bitops/instrumented-lock.h
@@ -11,7 +11,7 @@
#ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H
#define _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H

-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>

/**
* clear_bit_unlock - Clear a bit in memory, for unlock
@@ -22,7 +22,7 @@
*/
static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
arch_clear_bit_unlock(nr, addr);
}

@@ -37,7 +37,7 @@ static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
*/
static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
arch___clear_bit_unlock(nr, addr);
}

@@ -52,7 +52,7 @@ static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
*/
static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_set_bit_lock(nr, addr);
}

@@ -71,7 +71,7 @@ static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
static inline bool
clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_clear_bit_unlock_is_negative_byte(nr, addr);
}
/* Let everybody know we have it. */
diff --git a/include/asm-generic/bitops/instrumented-non-atomic.h b/include/asm-generic/bitops/instrumented-non-atomic.h
index 95ff28d128a1..20f788a25ef9 100644
--- a/include/asm-generic/bitops/instrumented-non-atomic.h
+++ b/include/asm-generic/bitops/instrumented-non-atomic.h
@@ -11,7 +11,7 @@
#ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
#define _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H

-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>

/**
* __set_bit - Set a bit in memory
@@ -24,7 +24,7 @@
*/
static inline void __set_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
arch___set_bit(nr, addr);
}

@@ -39,7 +39,7 @@ static inline void __set_bit(long nr, volatile unsigned long *addr)
*/
static inline void __clear_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
arch___clear_bit(nr, addr);
}

@@ -54,7 +54,7 @@ static inline void __clear_bit(long nr, volatile unsigned long *addr)
*/
static inline void __change_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
arch___change_bit(nr, addr);
}

@@ -68,7 +68,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr)
*/
static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
return arch___test_and_set_bit(nr, addr);
}

@@ -82,7 +82,7 @@ static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
*/
static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
return arch___test_and_clear_bit(nr, addr);
}

@@ -96,7 +96,7 @@ static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
*/
static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
return arch___test_and_change_bit(nr, addr);
}

@@ -107,7 +107,7 @@ static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
*/
static inline bool test_bit(long nr, const volatile unsigned long *addr)
{
- kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_read(addr + BIT_WORD(nr), sizeof(long));
return arch_test_bit(nr, addr);
}

--
2.25.0.341.g760bfbb309-goog

Marco Elver

unread,
Jan 20, 2020, 9:19:51 AM1/20/20
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
This replaces the kasan instrumentation with generic instrumentation,
implicitly adding KCSAN instrumentation support.

For KASAN no functional change is intended.

Suggested-by: Arnd Bergmann <ar...@arndb.de>
Signed-off-by: Marco Elver <el...@google.com>
---
lib/iov_iter.c | 28 +++++++++++++++++++---------
1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index fb29c02c6a3c..f06f6f1dd686 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -8,6 +8,7 @@
#include <linux/splice.h>
#include <net/checksum.h>
#include <linux/scatterlist.h>
+#include <linux/instrumented.h>

#define PIPE_PARANOIA /* for now */

@@ -137,20 +138,26 @@

static int copyout(void __user *to, const void *from, size_t n)
{
+ size_t res = n;
+
if (access_ok(to, n)) {
- kasan_check_read(from, n);
- n = raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_pre(from, n);
+ res = raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_post(from, n, res);
}
- return n;
+ return res;
}

static int copyin(void *to, const void __user *from, size_t n)
{
+ size_t res = n;
+
if (access_ok(from, n)) {
- kasan_check_write(to, n);
- n = raw_copy_from_user(to, from, n);
+ instrument_copy_from_user_pre(to, n);
+ res = raw_copy_from_user(to, from, n);
+ instrument_copy_from_user_post(to, n, res);
}
- return n;
+ return res;
}

static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
@@ -638,11 +645,14 @@ EXPORT_SYMBOL(_copy_to_iter);
#ifdef CONFIG_ARCH_HAS_UACCESS_MCSAFE
static int copyout_mcsafe(void __user *to, const void *from, size_t n)
{
+ size_t res = n;
+
if (access_ok(to, n)) {
- kasan_check_read(from, n);
- n = copy_to_user_mcsafe((__force void *) to, from, n);
+ instrument_copy_to_user_pre(from, n);
+ res = copy_to_user_mcsafe((__force void *) to, from, n);
+ instrument_copy_to_user_post(from, n, res);
}
- return n;
+ return res;
}

static unsigned long memcpy_mcsafe_to_page(struct page *page, size_t offset,
--
2.25.0.341.g760bfbb309-goog

Marco Elver

unread,
Jan 20, 2020, 9:19:54 AM1/20/20
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
This replaces the KASAN instrumentation with generic instrumentation,
implicitly adding KCSAN instrumentation support.

For KASAN no functional change is intended.

Suggested-by: Arnd Bergmann <ar...@arndb.de>
Signed-off-by: Marco Elver <el...@google.com>
---
include/linux/uaccess.h | 46 +++++++++++++++++++++++++++++------------
lib/usercopy.c | 14 ++++++++-----
2 files changed, 42 insertions(+), 18 deletions(-)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 67f016010aad..d3f2d9a8cae3 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -2,9 +2,9 @@
#ifndef __LINUX_UACCESS_H__
#define __LINUX_UACCESS_H__

+#include <linux/instrumented.h>
#include <linux/sched.h>
#include <linux/thread_info.h>
-#include <linux/kasan-checks.h>

#define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS)

@@ -58,18 +58,26 @@
static __always_inline __must_check unsigned long
__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
{
- kasan_check_write(to, n);
+ unsigned long res;
+
check_object_size(to, n, false);
- return raw_copy_from_user(to, from, n);
+ instrument_copy_from_user_pre(to, n);
+ res = raw_copy_from_user(to, from, n);
+ instrument_copy_from_user_post(to, n, res);
+ return res;
}

static __always_inline __must_check unsigned long
__copy_from_user(void *to, const void __user *from, unsigned long n)
{
+ unsigned long res;
+
might_fault();
- kasan_check_write(to, n);
check_object_size(to, n, false);
- return raw_copy_from_user(to, from, n);
+ instrument_copy_from_user_pre(to, n);
+ res = raw_copy_from_user(to, from, n);
+ instrument_copy_from_user_post(to, n, res);
+ return res;
}

/**
@@ -88,18 +96,26 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
static __always_inline __must_check unsigned long
__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
{
- kasan_check_read(from, n);
+ unsigned long res;
+
check_object_size(from, n, true);
- return raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_pre(from, n);
+ res = raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_post(from, n, res);
+ return res;
}

static __always_inline __must_check unsigned long
__copy_to_user(void __user *to, const void *from, unsigned long n)
{
+ unsigned long res;
+
might_fault();
- kasan_check_read(from, n);
check_object_size(from, n, true);
- return raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_pre(from, n);
+ res = raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_post(from, n, res);
+ return res;
}

#ifdef INLINE_COPY_FROM_USER
@@ -109,8 +125,9 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
unsigned long res = n;
might_fault();
if (likely(access_ok(from, n))) {
- kasan_check_write(to, n);
+ instrument_copy_from_user_pre(to, n);
res = raw_copy_from_user(to, from, n);
+ instrument_copy_from_user_post(to, n, res);
}
if (unlikely(res))
memset(to + (n - res), 0, res);
@@ -125,12 +142,15 @@ _copy_from_user(void *, const void __user *, unsigned long);
static inline __must_check unsigned long
_copy_to_user(void __user *to, const void *from, unsigned long n)
{
+ unsigned long res = n;
+
might_fault();
if (access_ok(to, n)) {
- kasan_check_read(from, n);
- n = raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_pre(from, n);
+ res = raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_post(from, n, res);
}
- return n;
+ return res;
}
#else
extern __must_check unsigned long
diff --git a/lib/usercopy.c b/lib/usercopy.c
index cbb4d9ec00f2..1c20d4423b86 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
-#include <linux/uaccess.h>
#include <linux/bitops.h>
+#include <linux/instrumented.h>
+#include <linux/uaccess.h>

/* out-of-line parts */

@@ -10,8 +11,9 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
unsigned long res = n;
might_fault();
if (likely(access_ok(from, n))) {
- kasan_check_write(to, n);
+ instrument_copy_from_user_pre(to, n);
res = raw_copy_from_user(to, from, n);
+ instrument_copy_from_user_post(to, n, res);
}
if (unlikely(res))
memset(to + (n - res), 0, res);
@@ -23,12 +25,14 @@ EXPORT_SYMBOL(_copy_from_user);
#ifndef INLINE_COPY_TO_USER
unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
{
+ unsigned long res = n;
might_fault();
if (likely(access_ok(to, n))) {
- kasan_check_read(from, n);
- n = raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_pre(from, n);
+ res = raw_copy_to_user(to, from, n);
+ instrument_copy_to_user_post(from, n, res);
}
- return n;
+ return res;
}
EXPORT_SYMBOL(_copy_to_user);
#endif
--
2.25.0.341.g760bfbb309-goog

Alexander Potapenko

unread,
Jan 20, 2020, 9:25:36 AM1/20/20
to Marco Elver, pau...@kernel.org, Andrey Konovalov, Dmitriy Vyukov, kasan-dev, LKML, Mark Rutland, wi...@kernel.org, Peter Zijlstra, boqun...@gmail.com, Arnd Bergmann, Al Viro, christop...@c-s.fr, Daniel Axtens, m...@ellerman.id.au, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, Kees Cook, linux...@vger.kernel.org
On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <el...@google.com> wrote:
>
> This adds instrumented.h, which provides generic wrappers for memory
> access instrumentation that the compiler cannot emit for various
> sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> future this will also include KMSAN instrumentation.
>
> Note that, copy_{to,from}_user require special instrumentation,
> providing hooks before and after the access, since we may need to know
> the actual bytes accessed (currently this is relevant for KCSAN, and is
> also relevant in future for KMSAN).
>
> Suggested-by: Arnd Bergmann <ar...@arndb.de>
> Signed-off-by: Marco Elver <el...@google.com>
Acked-by: Alexander Potapenko <gli...@google.com>

Dmitry Vyukov

unread,
Jan 20, 2020, 9:34:58 AM1/20/20
to Marco Elver, pau...@kernel.org, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <el...@google.com> wrote:
>
> This adds instrumented.h, which provides generic wrappers for memory
> access instrumentation that the compiler cannot emit for various
> sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> future this will also include KMSAN instrumentation.
>
> Note that, copy_{to,from}_user require special instrumentation,
> providing hooks before and after the access, since we may need to know
> the actual bytes accessed (currently this is relevant for KCSAN, and is
> also relevant in future for KMSAN).

What will KMSAN instrumentation look like?
Why don't we check the full range?
A kernel that intends to copy something racy to user space already
looks like a bug to me, even if user space has that page unmapped.
User space can always make the full range succeed. What am I missing?

Peter Zijlstra

unread,
Jan 20, 2020, 9:41:12 AM1/20/20
to Marco Elver, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
On Mon, Jan 20, 2020 at 03:19:25PM +0100, Marco Elver wrote:
> Add explicit KCSAN checks for bitops.
>
> Note that test_bit() is an atomic bitop, and we instrument it as such,

Well, it is 'atomic' in the same way that atomic_read() is. Both are
very much not atomic ops, but are part of an interface that facilitates
atomic operations.

Dmitry Vyukov

unread,
Jan 20, 2020, 9:46:10 AM1/20/20
to Marco Elver, pau...@kernel.org, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <el...@google.com> wrote:
>
Based on offline discussion, that's what we add for KMSAN:

> +static __always_inline void instrument_read(const volatile void *v, size_t size)
> +{
> + kasan_check_read(v, size);
> + kcsan_check_read(v, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_write - instrument regular write access
> + *
> + * Instrument a regular write access. The instrumentation should be inserted
> + * before the actual write happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_write(const volatile void *v, size_t size)
> +{
> + kasan_check_write(v, size);
> + kcsan_check_write(v, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_atomic_read - instrument atomic read access
> + *
> + * Instrument an atomic read access. The instrumentation should be inserted
> + * before the actual read happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
> +{
> + kasan_check_read(v, size);
> + kcsan_check_atomic_read(v, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_atomic_write - instrument atomic write access
> + *
> + * Instrument an atomic write access. The instrumentation should be inserted
> + * before the actual write happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
> +{
> + kasan_check_write(v, size);
> + kcsan_check_atomic_write(v, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_copy_to_user_pre - instrument reads of copy_to_user
> + *
> + * Instrument reads from kernel memory, that are due to copy_to_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted before the accesses. At this point the
> + * actual number of bytes accessed is not yet known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + */
> +static __always_inline void
> +instrument_copy_to_user_pre(const volatile void *src, size_t size)
> +{
> + /* Check before, to warn before potential memory corruption. */
> + kasan_check_read(src, size);

KMSAN: check that (src,size) is initialized

> +}
> +
> +/**
> + * instrument_copy_to_user_post - instrument reads of copy_to_user
> + *
> + * Instrument reads from kernel memory, that are due to copy_to_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted after the accesses. At this point the
> + * actual number of bytes accessed should be known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + * @left number of bytes left that were not copied
> + */
> +static __always_inline void
> +instrument_copy_to_user_post(const volatile void *src, size_t size, size_t left)
> +{
> + /* Check after, to avoid false positive if memory was not accessed. */
> + kcsan_check_read(src, size - left);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_copy_from_user_pre - instrument writes of copy_from_user
> + *
> + * Instrument writes to kernel memory, that are due to copy_from_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted before the accesses. At this point the
> + * actual number of bytes accessed is not yet known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + */
> +static __always_inline void
> +instrument_copy_from_user_pre(const volatile void *dst, size_t size)
> +{
> + /* Check before, to warn before potential memory corruption. */
> + kasan_check_write(dst, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_copy_from_user_post - instrument writes of copy_from_user
> + *
> + * Instrument writes to kernel memory, that are due to copy_from_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted after the accesses. At this point the
> + * actual number of bytes accessed should be known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + * @left number of bytes left that were not copied
> + */
> +static __always_inline void
> +instrument_copy_from_user_post(const volatile void *dst, size_t size, size_t left)
> +{
> + /* Check after, to avoid false positive if memory was not accessed. */
> + kcsan_check_write(dst, size - left);

KMSAN: mark (dst, size-left) as initialized
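
For illustration, here is a minimal sketch of what two of the hooks
might look like once KMSAN lands, following the annotations above.
kmsan_check_memory() ("check that the range is initialized") and
kmsan_unpoison_memory() ("mark the range as initialized") are assumed
placeholder names and signatures, since KMSAN is not in any tree yet:

static __always_inline void
instrument_copy_to_user_pre(const volatile void *src, size_t size)
{
	/* Check before, to warn before potential memory corruption. */
	kasan_check_read(src, size);
	/* KMSAN: report if any of the @size bytes at @src are uninitialized. */
	kmsan_check_memory(src, size);
}

static __always_inline void
instrument_copy_from_user_post(const volatile void *dst, size_t size, size_t left)
{
	/* Check after, to avoid false positive if memory was not accessed. */
	kcsan_check_write(dst, size - left);
	/* KMSAN: mark the bytes that were actually copied as initialized. */
	kmsan_unpoison_memory(dst, size - left);
}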

Dmitry Vyukov

unread,
Jan 20, 2020, 9:52:02 AM1/20/20
to Marco Elver, pau...@kernel.org, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <el...@google.com> wrote:
>
There is also something called strncpy_from_user() that has kasan
instrumentation now:
https://elixir.bootlin.com/linux/v5.5-rc6/source/lib/strncpy_from_user.c#L117

Dmitry Vyukov

unread,
Jan 20, 2020, 9:58:57 AM1/20/20
to Marco Elver, pau...@kernel.org, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
KMSAN also has instrumentation in
copy_to_user_page/copy_from_user_page. Do we need to do anything for
KASAN/KCSAN for these functions?

Marco Elver

unread,
Jan 20, 2020, 10:05:55 AM1/20/20
to Dmitry Vyukov, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
Yes, however, I think it's a special case for KASAN. The
implementation is already instrumented by the compiler. The original
commit (1771c6e1a567e) says:

"Note: Unlike others strncpy_from_user() is written mostly in C and KASAN
sees memory accesses in it. However, it makes sense to add explicit
check for all @count bytes that *potentially* could be written to the
kernel."

I don't think we want unconditional double-instrumentation here. Let
me know if you think otherwise.

Thanks,
-- Marco

Dmitry Vyukov

unread,
Jan 20, 2020, 10:09:59 AM1/20/20
to Marco Elver, pau...@kernel.org, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
There is also copy_user_highpage.

And ioread/write8/16/32_rep: do we need any instrumentation there? It
seems we want both KASAN and KCSAN there too. One may argue that KCSAN
instrumentation there is not super critical at this point, but KASAN
instrumentation is important, if anything to prevent silent memory
corruption. How do we instrument there? I don't see how it maps to
any of the existing instrumentation functions.

There are also kmsan_check_skb/kmsan_handle_dma/kmsan_handle_urb,
which do not seem to map to any of the instrumentation functions.

Marco Elver

unread,
Jan 20, 2020, 10:40:55 AM1/20/20
to Dmitry Vyukov, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
copy_to_user_page()/copy_from_user_page() can be instrumented with
instrument_copy_{to,from}_user_{pre,post}(), e.g. as sketched below. I
prefer keeping this series with no functional change intended for
KASAN, at least.
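
As a rough sketch (not part of this series, and assuming the
asm-generic/cacheflush.h definition; arch-specific versions would need
the same treatment), copy_to_user_page() could gain the hooks in the
same way copy_to_user() does:

#define copy_to_user_page(vma, page, vaddr, dst, src, len)		\
	do {								\
		/* Reads @len bytes of kernel memory at @src. */	\
		instrument_copy_to_user_pre(src, len);			\
		memcpy(dst, src, len);					\
		/* memcpy() cannot fault, so all @len bytes were accessed. */ \
		instrument_copy_to_user_post(src, len, 0);		\
		flush_icache_user_range(vma, page, vaddr, len);		\
	} while (0)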

> There is also copy_user_highpage.
>
> And ioread/write8/16/32_rep: do we need any instrumentation there? It
> seems we want both KASAN and KCSAN there too. One may argue that KCSAN
> instrumentation there is not super critical at this point, but KASAN
> instrumentation is important, if anything to prevent silent memory
> corruption. How do we instrument there? I don't see how it maps to
> any of the existing instrumentation functions.

These should be able to use the regular instrument_{read,write}. I
prefer keeping this series with no functional change intended for
KASAN at least.

> There is also kmsan_check_skb/kmsan_handle_dma/kmsan_handle_urb that
> does not seem to map to any of the instrumentation functions.

For now, I would rather have some one-off special instrumentation,
like for KMSAN. Coming up with a unified interface here, without the
use-cases even settled, seems hard to justify. Once instrumentation
for these has settled, unifying the interface would have better
justification.

This patch series is merely supposed to introduce instrumented.h and
replace the kasan_checks (also implicitly introducing kcsan_checks
there), however, with no further functional change intended.

I propose that adding entirely new instrumentation for both KASAN and
KCSAN, we should send a separate patch-series.

Thanks,
-- Marco

Marco Elver

unread,
Jan 20, 2020, 10:53:54 AM1/20/20
to Dmitry Vyukov, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
Fair enough. I can move this into the pre-hooks in v2.

However, note that this leaves us with a bunch of empty post-hooks in
the patch. While this will probably change when we get KMSAN, is it
reasonable to keep them empty for now?

Thanks,
-- Marco

Dmitry Vyukov

unread,
Jan 20, 2020, 11:06:29 AM1/20/20
to Marco Elver, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
instrument_{read,write} will not contain any KMSAN instrumentation,
which means we will effectively remove KMSAN instrumentation, which is
weird because we instrumented these functions because of KMSAN in the
first place...

> > There is also kmsan_check_skb/kmsan_handle_dma/kmsan_handle_urb that
> > does not seem to map to any of the instrumentation functions.
>
> For now, I would rather that there are some one-off special
> instrumentation, like for KMSAN. Coming up with a unified interface
> here that, without the use-cases even settled, seems hard to justify.
> Once instrumentation for these have settled, unifying the interface
> would have better justification.

I would assume they may also require an annotation that checks the
memory region under all 3 tools, and we don't have such an annotation
(same as the previous case and, effectively, copy_to_user). I would
expect such an annotation to be used in more places once we start
looking for more opportunities.

Marco Elver

unread,
Jan 20, 2020, 11:25:44 AM1/20/20
to Dmitry Vyukov, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
Agreed, I'm certainly not against adding these. We may need to
introduce 'instrument_dma_' etc. However, would it be reasonable to do
this in a separate follow-up patch-series, to avoid stalling bitops
instrumentation? Assuming that the 8 hooks in instrumented.h right
now are reasonable, and such future changes add new hooks, I think
that would be the more pragmatic approach.

Thanks,
-- Marco

Paul E. McKenney

unread,
Jan 20, 2020, 11:27:27 AM1/20/20
to Peter Zijlstra, Marco Elver, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
True, but they are all either inline assembly or have an implicit or
explicit cast to volatile, so they could be treated the same as
atomic_read(), correct? If not, what am I missing?

(There is one exception, but it is in arch/x86/boot/bitops.h,
which is UP-only, correct?)

Thanx, Paul

Dmitry Vyukov

unread,
Jan 20, 2020, 11:39:51 AM1/20/20
to Marco Elver, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
I think that would be the wrong direction, just like this change does
not introduce all of instrument_test_and_set_bit,
instrument___clear_bit_unlock, instrument_copyin,
instrument_copyout_mcsafe, instrument_atomic_andnot, etc. All of these
can be grouped into a very small set of cases with respect to what
type of memory access they do from the point of view of the
sanitizers, and we introduce instrumentation for these _types_ of
accesses, rather than for application functions (we don't care much
whether the access is for atomic operations, copy to/from user, usb,
dma, skb or something else). It seems that our set of instrumentation
annotations can't handle some very basic cases...

Peter Zijlstra

unread,
Jan 20, 2020, 11:52:48 AM1/20/20
to Paul E. McKenney, Marco Elver, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
On Mon, Jan 20, 2020 at 08:27:25AM -0800, Paul E. McKenney wrote:
> On Mon, Jan 20, 2020 at 03:40:48PM +0100, Peter Zijlstra wrote:
> > On Mon, Jan 20, 2020 at 03:19:25PM +0100, Marco Elver wrote:
> > > Add explicit KCSAN checks for bitops.
> > >
> > > Note that test_bit() is an atomic bitop, and we instrument it as such,
> >
> > Well, it is 'atomic' in the same way that atomic_read() is. Both are
> > very much not atomic ops, but are part of an interface that facilitates
> > atomic operations.
>
> True, but they all are either inline assembly or have either an
> implicit or explicit cast to volatile, so they could be treated
> the same as atomic_read(), correct? If not, what am I missing?

Sure, but that is due to instrumentation requirements, not anything
else.

Also note the distinct lack of __test_bit(), to mirror the non-atomic
__set_bit() and __clear_bit().

Paul E. McKenney

unread,
Jan 20, 2020, 3:24:00 PM1/20/20
to Peter Zijlstra, Marco Elver, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
OK, I will bite. ;-)

We also don't have __atomic_read() and __atomic_set(), yet atomic_read()
and atomic_set() are considered to be non-racy, right?

Thanx, Paul

Peter Zijlstra

unread,
Jan 21, 2020, 4:15:52 AM1/21/20
to Paul E. McKenney, Marco Elver, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
On Mon, Jan 20, 2020 at 12:23:59PM -0800, Paul E. McKenney wrote:
> We also don't have __atomic_read() and __atomic_set(), yet atomic_read()
> and atomic_set() are considered to be non-racy, right?

What is racy? :-) You can make data races with atomic_{read,set}() just
fine.

Anyway, traditionally we call the read-modify-write stuff atomic, not
the trivial load-store stuff. The only reason we care about the
load-store stuff in the first place is because C compilers are shit.

atomic_read() / test_bit() are just a load, all we need is the C
compiler not to be an ass and split it. Yes, we've invented the term
single-copy atomicity for that, but that doesn't make it more or less of
a load.

And exactly because it is just a load, there is no __test_bit(), which
would be the exact same load.
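
For illustration only (the generic implementations rely on the
volatile-qualified pointer rather than an explicit READ_ONCE(), so
this is a sketch rather than the exact code), both boil down to a
plain single-copy-atomic load:

static __always_inline int atomic_read_sketch(const atomic_t *v)
{
	/* One indivisible load of the counter word. */
	return READ_ONCE(v->counter);
}

static __always_inline bool test_bit_sketch(long nr, const volatile unsigned long *addr)
{
	/* One indivisible load of the word containing bit @nr. */
	return 1UL & (READ_ONCE(addr[BIT_WORD(nr)]) >> (nr & (BITS_PER_LONG - 1)));
}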

Marco Elver

unread,
Jan 21, 2020, 4:44:19 AM1/21/20
to Dmitry Vyukov, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
I missed this. Yes, you're right.
With the ioread/write, dma, skb, urb, user-copy cases in mind, it just
appears to me that attempting to find a minimal unifying set of
instrumentation hooks might lead us in circles, given we only have the
following options:

1. Do not introduce 'instrumented.h', and drop this series. With KMSAN
in mind, this is what I mentioned I preferred in the first place: just
add a few dozen lines in each place we need to instrument. Yes, yes,
it's not as convenient, but at least we'll know it'll be correct
without having to reason about all cases and all sanitizers at once
(especially with KMSAN not yet in any kernel tree).

2. This patch series, keeping 'instrumented.h', but keeping only what
we use right now. This means accepting that we'll likely have to add a
number of special cases (user-copy, ioread/write, etc.) for now. Again,
KASAN/KCSAN probably want the same thing, but I don't know how much
conflict there will be with KMSAN after all is said and done. We will
incrementally add what is required, and unify things later. This will
also satisfy Arnd's constraint of no empty functions:
http://lkml.kernel.org/r/CAK8P3a32sVU4umk2FLnWnMGM...@mail.gmail.com

3. Try to find a minimal set of instrumentation hooks that cater to
all tools (KASAN, KCSAN, KMSAN). Without even having all
instrumentation (without the 'instrumented.h' infrastructure) in
place, I feel this will not be too successful. I think we can do this
once we have instrumentation for all tools, in all places. Then
unifying all of them should be a non-functional-change refactor.
Essentially, this option depends on (1).

However, now we have some constraints which are difficult to satisfy
all at once:
1. Essentially we were told to avoid (1), based on Arnd's suggestion
to simplify the instrumentation. Therefore we thought (2) would be a
good idea.
2. Now that we know what (2) looks like, it seems you prefer (3),
because we should also cater to KMSAN with this patch series.
3. No unused hooks.

Given we have a KMSAN<-(1)<-(3) dependency, but we were told to avoid
(1) and empty functions, and KMSAN hasn't yet landed, we can't
reasonably do (3). Since you dislike (2), we're stuck.

Any options I missed?

Thanks,
-- Marco

Dmitry Vyukov

unread,
Jan 21, 2020, 8:01:27 AM1/21/20
to Marco Elver, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
On Mon, Jan 20, 2020 at 3:45 PM Dmitry Vyukov <dvy...@google.com> wrote:
>
One detail I noticed for KMSAN is that kmsan_copy_to_user() has a
special case when the @to address is in kernel space (compat syscalls
doing tricky things); in that case it only copies metadata. We can't
handle this with the existing annotations.


* actually copied to ensure there was no information leak. If @to belongs to
* the kernel space (which is possible for compat syscalls), KMSAN just copies
* the metadata.
*/
void kmsan_copy_to_user(const void *to, const void *from, size_t
to_copy, size_t left);
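
As a hedged sketch of where this could go (not in the current series),
a unified hook could also check the destination when @to turns out to
be a kernel address, which is the compat-syscall case above.
is_kernel_addr() below is a hypothetical stand-in for the architecture
check, and KMSAN would additionally copy its metadata at this point:

static __always_inline void
instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
{
	/* Reads @n bytes of kernel memory at @from. */
	kasan_check_read(from, n);
	kcsan_check_read(from, n);
	/* Compat syscalls may pass a kernel-space destination. */
	if (is_kernel_addr((unsigned long)to)) {
		kasan_check_write((__force void *)to, n);
		kcsan_check_write((__force void *)to, n);
	}
}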

Paul E. McKenney

unread,
Jan 21, 2020, 9:21:10 AM1/21/20
to Peter Zijlstra, Marco Elver, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
On Tue, Jan 21, 2020 at 10:15:01AM +0100, Peter Zijlstra wrote:
> On Mon, Jan 20, 2020 at 12:23:59PM -0800, Paul E. McKenney wrote:
> > We also don't have __atomic_read() and __atomic_set(), yet atomic_read()
> > and atomic_set() are considered to be non-racy, right?
>
> What is racy? :-) You can make data races with atomic_{read,set}() just
> fine.

Like "fairness", lots of definitions of "racy". ;-)

> Anyway, traditionally we call the read-modify-write stuff atomic, not
> the trivial load-store stuff. The only reason we care about the
> load-store stuff in the first place is because C compilers are shit.
>
> atomic_read() / test_bit() are just a load, all we need is the C
> compiler not to be an ass and split it. Yes, we've invented the term
> single-copy atomicity for that, but that doesn't make it more or less of
> a load.
>
> And exactly because it is just a load, there is no __test_bit(), which
> would be the exact same load.

Very good! Shouldn't KCSAN then define test_bit() as non-racy just as
for atomic_read()?

Thanx, Paul

Peter Zijlstra

unread,
Jan 21, 2020, 9:48:04 AM1/21/20
to Paul E. McKenney, Marco Elver, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
Sure it does; but my comment was aimed at the gripe that test_bit()
lives in the non-atomic bitops header. That is arguably entirely
correct.

Marco Elver

unread,
Jan 21, 2020, 10:08:09 AM1/21/20
to Peter Zijlstra, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov, kasan-dev, LKML, Mark Rutland, Will Deacon, Boqun Feng, Arnd Bergmann, Al Viro, christophe leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
I will also point out that test_bit() is listed in
Documentation/atomic_bitops.txt. What I gather from atomic_bitops.txt
is that the semantics of test_bit() are simply those of an unordered
atomic operation: the interface promises that the load will be
executed as one indivisible step, i.e. (single-copy) atomically (after
compiler optimizations etc.).

Although at this point probably not too important, I checked Alpha's
implementation of test_bit(), and there is no
smp_read_barrier_depends(). Is it safe to say that test_bit() should
then be weaker in terms of ordering than READ_ONCE()?

My assumption was that test_bit() is as strong as READ_ONCE().

Thanks,
-- Marco

Marco Elver

unread,
Jan 21, 2020, 11:05:27 AM1/21/20
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, d...@axtens.net, christop...@c-s.fr, m...@ellerman.id.au, mhir...@kernel.org, ros...@goodmis.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, kees...@chromium.org, cyp...@cyphar.com, linux...@vger.kernel.org
This adds instrumented.h, which provides generic wrappers for memory
access instrumentation that the compiler cannot emit for various
sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
future this will also include KMSAN instrumentation.

Note that copy_{to,from}_user should use special instrumentation,
since we should be able to instrument both the source and destination
memory accesses if both are kernel memory.

The current patch only instruments the memory access where the address
is always in kernel space; however, both may in fact be kernel
addresses when a compat syscall passes an argument allocated in the
kernel to a real syscall. In a future change, both KASAN and KCSAN
should check both addresses in such cases, and KMSAN will make use of
both addresses as well. [It made more sense to provide the completed
function signature, rather than updating it and changing all locations
again at a later time.]

Suggested-by: Arnd Bergmann <ar...@arndb.de>
Signed-off-by: Marco Elver <el...@google.com>
Acked-by: Alexander Potapenko <gli...@google.com>
---
v2:
* Simplify header, since we currently do not need pre/post user-copy
distinction.
* Make instrument_copy_{to,from}_user function arguments match
copy_{to,from}_user and update rationale in commit message.
---
include/linux/instrumented.h | 109 +++++++++++++++++++++++++++++++++++
1 file changed, 109 insertions(+)
create mode 100644 include/linux/instrumented.h

diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
new file mode 100644
index 000000000000..43e6ea591975
--- /dev/null
+++ b/include/linux/instrumented.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * This header provides generic wrappers for memory access instrumentation that
+ * the compiler cannot emit for: KASAN, KCSAN.
+ */
+#ifndef _LINUX_INSTRUMENTED_H
+#define _LINUX_INSTRUMENTED_H
+
+#include <linux/compiler.h>
+#include <linux/kasan-checks.h>
+#include <linux/kcsan-checks.h>
+#include <linux/types.h>
+
+/**
+ * instrument_read - instrument regular read access
+ *
+ * Instrument a regular read access. The instrumentation should be inserted
+ * before the actual read happens.
+ *
+ * @ptr address of access
+ * @size size of access
+ */
+static __always_inline void instrument_read(const volatile void *v, size_t size)
+{
+ kasan_check_read(v, size);
+ kcsan_check_read(v, size);
+}
+
+/**
+ * instrument_write - instrument regular write access
+ *
+ * Instrument a regular write access. The instrumentation should be inserted
+ * before the actual write happens.
+ *
+ * @ptr address of access
+ * @size size of access
+ */
+static __always_inline void instrument_write(const volatile void *v, size_t size)
+{
+ kasan_check_write(v, size);
+ kcsan_check_write(v, size);
+}
+
+/**
+ * instrument_atomic_read - instrument atomic read access
+ *
+ * Instrument an atomic read access. The instrumentation should be inserted
+ * before the actual read happens.
+ *
+ * @ptr address of access
+ * @size size of access
+ */
+static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
+{
+ kasan_check_read(v, size);
+ kcsan_check_atomic_read(v, size);
+}
+
+/**
+ * instrument_atomic_write - instrument atomic write access
+ *
+ * Instrument an atomic write access. The instrumentation should be inserted
+ * before the actual write happens.
+ *
+ * @ptr address of access
+ * @size size of access
+ */
+static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
+{
+ kasan_check_write(v, size);
+ kcsan_check_atomic_write(v, size);
+}
+
+/**
+ * instrument_copy_to_user - instrument reads of copy_to_user
+ *
+ * Instrument reads from kernel memory that are due to copy_to_user (and
+ * variants). The instrumentation must be inserted before the accesses.
+ *
+ * @to destination address
+ * @from source address
+ * @n number of bytes to copy
+ */
+static __always_inline void
+instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+ kasan_check_read(from, n);
+ kcsan_check_read(from, n);
+}
+
+/**
+ * instrument_copy_from_user - instrument writes of copy_from_user
+ *
+ * Instrument writes to kernel memory that are due to copy_from_user (and
+ * variants). The instrumentation should be inserted before the accesses.
+ *
+ * @to destination address
+ * @from source address
+ * @n number of bytes to copy
+ */
+static __always_inline void
+instrument_copy_from_user(const void *to, const void __user *from, unsigned long n)
+{
+ kasan_check_write(to, n);
+ kcsan_check_write(to, n);
+}
+
+#endif /* _LINUX_INSTRUMENTED_H */

Marco Elver

Jan 21, 2020, 11:05:31 AM
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, d...@axtens.net, christop...@c-s.fr, m...@ellerman.id.au, mhir...@kernel.org, ros...@goodmis.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, kees...@chromium.org, cyp...@cyphar.com, linux...@vger.kernel.org
This switches atomic-instrumented.h to use the generic instrumentation
wrappers provided by instrumented.h.

No functional change intended.
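
For a single generated wrapper the substitution is purely mechanical;
roughly (the full diff below is authoritative):

  /* Before */
  static __always_inline int
  atomic_read(const atomic_t *v)
  {
          __atomic_check_read(v, sizeof(*v));
          return arch_atomic_read(v);
  }

  /* After */
  static __always_inline int
  atomic_read(const atomic_t *v)
  {
          instrument_atomic_read(v, sizeof(*v));
          return arch_atomic_read(v);
  }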

Suggested-by: Arnd Bergmann <ar...@arndb.de>
Signed-off-by: Marco Elver <el...@google.com>
Acked-by: Arnd Bergmann <ar...@arndb.de>
---
include/asm-generic/atomic-instrumented.h | 395 +++++++++++-----------
scripts/atomic/gen-atomic-instrumented.sh | 19 +-
2 files changed, 194 insertions(+), 220 deletions(-)

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 63869ded73ac..379986e40159 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -19,25 +19,12 @@

#include <linux/build_bug.h>
#include <linux/compiler.h>
-#include <linux/kasan-checks.h>
-#include <linux/kcsan-checks.h>
-
-static __always_inline void __atomic_check_read(const volatile void *v, size_t size)
-{
- kasan_check_read(v, size);
- kcsan_check_atomic_read(v, size);
-}
-
-static __always_inline void __atomic_check_write(const volatile void *v, size_t size)
-{
- kasan_check_write(v, size);
- kcsan_check_atomic_write(v, size);
-}
+#include <linux/instrumented.h>

static __always_inline int
atomic_read(const atomic_t *v)
{
- __atomic_check_read(v, sizeof(*v));
+ instrument_atomic_read(v, sizeof(*v));
return arch_atomic_read(v);
}
#define atomic_read atomic_read
@@ -46,7 +33,7 @@ atomic_read(const atomic_t *v)
static __always_inline int
atomic_read_acquire(const atomic_t *v)
{
- __atomic_check_read(v, sizeof(*v));
+ instrument_atomic_read(v, sizeof(*v));
return arch_atomic_read_acquire(v);
}
#define atomic_read_acquire atomic_read_acquire
@@ -55,7 +42,7 @@ atomic_read_acquire(const atomic_t *v)
static __always_inline void
atomic_set(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_set(v, i);
}
#define atomic_set atomic_set
@@ -64,7 +51,7 @@ atomic_set(atomic_t *v, int i)
static __always_inline void
atomic_set_release(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_set_release(v, i);
}
#define atomic_set_release atomic_set_release
@@ -73,7 +60,7 @@ atomic_set_release(atomic_t *v, int i)
static __always_inline void
atomic_add(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_add(i, v);
}
#define atomic_add atomic_add
@@ -82,7 +69,7 @@ atomic_add(int i, atomic_t *v)
static __always_inline int
atomic_add_return(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_return(i, v);
}
#define atomic_add_return atomic_add_return
@@ -92,7 +79,7 @@ atomic_add_return(int i, atomic_t *v)
static __always_inline int
atomic_add_return_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_return_acquire(i, v);
}
#define atomic_add_return_acquire atomic_add_return_acquire
@@ -102,7 +89,7 @@ atomic_add_return_acquire(int i, atomic_t *v)
static __always_inline int
atomic_add_return_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_return_release(i, v);
}
#define atomic_add_return_release atomic_add_return_release
@@ -112,7 +99,7 @@ atomic_add_return_release(int i, atomic_t *v)
static __always_inline int
atomic_add_return_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_return_relaxed(i, v);
}
#define atomic_add_return_relaxed atomic_add_return_relaxed
@@ -122,7 +109,7 @@ atomic_add_return_relaxed(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add(i, v);
}
#define atomic_fetch_add atomic_fetch_add
@@ -132,7 +119,7 @@ atomic_fetch_add(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add_acquire(i, v);
}
#define atomic_fetch_add_acquire atomic_fetch_add_acquire
@@ -142,7 +129,7 @@ atomic_fetch_add_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add_release(i, v);
}
#define atomic_fetch_add_release atomic_fetch_add_release
@@ -152,7 +139,7 @@ atomic_fetch_add_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add_relaxed(i, v);
}
#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
@@ -161,7 +148,7 @@ atomic_fetch_add_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_sub(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_sub(i, v);
}
#define atomic_sub atomic_sub
@@ -170,7 +157,7 @@ atomic_sub(int i, atomic_t *v)
static __always_inline int
atomic_sub_return(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_return(i, v);
}
#define atomic_sub_return atomic_sub_return
@@ -180,7 +167,7 @@ atomic_sub_return(int i, atomic_t *v)
static __always_inline int
atomic_sub_return_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_return_acquire(i, v);
}
#define atomic_sub_return_acquire atomic_sub_return_acquire
@@ -190,7 +177,7 @@ atomic_sub_return_acquire(int i, atomic_t *v)
static __always_inline int
atomic_sub_return_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_return_release(i, v);
}
#define atomic_sub_return_release atomic_sub_return_release
@@ -200,7 +187,7 @@ atomic_sub_return_release(int i, atomic_t *v)
static __always_inline int
atomic_sub_return_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_return_relaxed(i, v);
}
#define atomic_sub_return_relaxed atomic_sub_return_relaxed
@@ -210,7 +197,7 @@ atomic_sub_return_relaxed(int i, atomic_t *v)
static __always_inline int
atomic_fetch_sub(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_sub(i, v);
}
#define atomic_fetch_sub atomic_fetch_sub
@@ -220,7 +207,7 @@ atomic_fetch_sub(int i, atomic_t *v)
static __always_inline int
atomic_fetch_sub_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_sub_acquire(i, v);
}
#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire
@@ -230,7 +217,7 @@ atomic_fetch_sub_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_sub_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_sub_release(i, v);
}
#define atomic_fetch_sub_release atomic_fetch_sub_release
@@ -240,7 +227,7 @@ atomic_fetch_sub_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_sub_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_sub_relaxed(i, v);
}
#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed
@@ -250,7 +237,7 @@ atomic_fetch_sub_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_inc(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_inc(v);
}
#define atomic_inc atomic_inc
@@ -260,7 +247,7 @@ atomic_inc(atomic_t *v)
static __always_inline int
atomic_inc_return(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_return(v);
}
#define atomic_inc_return atomic_inc_return
@@ -270,7 +257,7 @@ atomic_inc_return(atomic_t *v)
static __always_inline int
atomic_inc_return_acquire(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_return_acquire(v);
}
#define atomic_inc_return_acquire atomic_inc_return_acquire
@@ -280,7 +267,7 @@ atomic_inc_return_acquire(atomic_t *v)
static __always_inline int
atomic_inc_return_release(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_return_release(v);
}
#define atomic_inc_return_release atomic_inc_return_release
@@ -290,7 +277,7 @@ atomic_inc_return_release(atomic_t *v)
static __always_inline int
atomic_inc_return_relaxed(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_return_relaxed(v);
}
#define atomic_inc_return_relaxed atomic_inc_return_relaxed
@@ -300,7 +287,7 @@ atomic_inc_return_relaxed(atomic_t *v)
static __always_inline int
atomic_fetch_inc(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_inc(v);
}
#define atomic_fetch_inc atomic_fetch_inc
@@ -310,7 +297,7 @@ atomic_fetch_inc(atomic_t *v)
static __always_inline int
atomic_fetch_inc_acquire(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_inc_acquire(v);
}
#define atomic_fetch_inc_acquire atomic_fetch_inc_acquire
@@ -320,7 +307,7 @@ atomic_fetch_inc_acquire(atomic_t *v)
static __always_inline int
atomic_fetch_inc_release(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_inc_release(v);
}
#define atomic_fetch_inc_release atomic_fetch_inc_release
@@ -330,7 +317,7 @@ atomic_fetch_inc_release(atomic_t *v)
static __always_inline int
atomic_fetch_inc_relaxed(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_inc_relaxed(v);
}
#define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed
@@ -340,7 +327,7 @@ atomic_fetch_inc_relaxed(atomic_t *v)
static __always_inline void
atomic_dec(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_dec(v);
}
#define atomic_dec atomic_dec
@@ -350,7 +337,7 @@ atomic_dec(atomic_t *v)
static __always_inline int
atomic_dec_return(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_return(v);
}
#define atomic_dec_return atomic_dec_return
@@ -360,7 +347,7 @@ atomic_dec_return(atomic_t *v)
static __always_inline int
atomic_dec_return_acquire(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_return_acquire(v);
}
#define atomic_dec_return_acquire atomic_dec_return_acquire
@@ -370,7 +357,7 @@ atomic_dec_return_acquire(atomic_t *v)
static __always_inline int
atomic_dec_return_release(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_return_release(v);
}
#define atomic_dec_return_release atomic_dec_return_release
@@ -380,7 +367,7 @@ atomic_dec_return_release(atomic_t *v)
static __always_inline int
atomic_dec_return_relaxed(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_return_relaxed(v);
}
#define atomic_dec_return_relaxed atomic_dec_return_relaxed
@@ -390,7 +377,7 @@ atomic_dec_return_relaxed(atomic_t *v)
static __always_inline int
atomic_fetch_dec(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_dec(v);
}
#define atomic_fetch_dec atomic_fetch_dec
@@ -400,7 +387,7 @@ atomic_fetch_dec(atomic_t *v)
static __always_inline int
atomic_fetch_dec_acquire(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_dec_acquire(v);
}
#define atomic_fetch_dec_acquire atomic_fetch_dec_acquire
@@ -410,7 +397,7 @@ atomic_fetch_dec_acquire(atomic_t *v)
static __always_inline int
atomic_fetch_dec_release(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_dec_release(v);
}
#define atomic_fetch_dec_release atomic_fetch_dec_release
@@ -420,7 +407,7 @@ atomic_fetch_dec_release(atomic_t *v)
static __always_inline int
atomic_fetch_dec_relaxed(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_dec_relaxed(v);
}
#define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed
@@ -429,7 +416,7 @@ atomic_fetch_dec_relaxed(atomic_t *v)
static __always_inline void
atomic_and(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_and(i, v);
}
#define atomic_and atomic_and
@@ -438,7 +425,7 @@ atomic_and(int i, atomic_t *v)
static __always_inline int
atomic_fetch_and(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_and(i, v);
}
#define atomic_fetch_and atomic_fetch_and
@@ -448,7 +435,7 @@ atomic_fetch_and(int i, atomic_t *v)
static __always_inline int
atomic_fetch_and_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_and_acquire(i, v);
}
#define atomic_fetch_and_acquire atomic_fetch_and_acquire
@@ -458,7 +445,7 @@ atomic_fetch_and_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_and_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_and_release(i, v);
}
#define atomic_fetch_and_release atomic_fetch_and_release
@@ -468,7 +455,7 @@ atomic_fetch_and_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_and_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_and_relaxed(i, v);
}
#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed
@@ -478,7 +465,7 @@ atomic_fetch_and_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_andnot(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_andnot(i, v);
}
#define atomic_andnot atomic_andnot
@@ -488,7 +475,7 @@ atomic_andnot(int i, atomic_t *v)
static __always_inline int
atomic_fetch_andnot(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_andnot(i, v);
}
#define atomic_fetch_andnot atomic_fetch_andnot
@@ -498,7 +485,7 @@ atomic_fetch_andnot(int i, atomic_t *v)
static __always_inline int
atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_andnot_acquire(i, v);
}
#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire
@@ -508,7 +495,7 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_andnot_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_andnot_release(i, v);
}
#define atomic_fetch_andnot_release atomic_fetch_andnot_release
@@ -518,7 +505,7 @@ atomic_fetch_andnot_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_andnot_relaxed(i, v);
}
#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed
@@ -527,7 +514,7 @@ atomic_fetch_andnot_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_or(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_or(i, v);
}
#define atomic_or atomic_or
@@ -536,7 +523,7 @@ atomic_or(int i, atomic_t *v)
static __always_inline int
atomic_fetch_or(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_or(i, v);
}
#define atomic_fetch_or atomic_fetch_or
@@ -546,7 +533,7 @@ atomic_fetch_or(int i, atomic_t *v)
static __always_inline int
atomic_fetch_or_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_or_acquire(i, v);
}
#define atomic_fetch_or_acquire atomic_fetch_or_acquire
@@ -556,7 +543,7 @@ atomic_fetch_or_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_or_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_or_release(i, v);
}
#define atomic_fetch_or_release atomic_fetch_or_release
@@ -566,7 +553,7 @@ atomic_fetch_or_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_or_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_or_relaxed(i, v);
}
#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed
@@ -575,7 +562,7 @@ atomic_fetch_or_relaxed(int i, atomic_t *v)
static __always_inline void
atomic_xor(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic_xor(i, v);
}
#define atomic_xor atomic_xor
@@ -584,7 +571,7 @@ atomic_xor(int i, atomic_t *v)
static __always_inline int
atomic_fetch_xor(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_xor(i, v);
}
#define atomic_fetch_xor atomic_fetch_xor
@@ -594,7 +581,7 @@ atomic_fetch_xor(int i, atomic_t *v)
static __always_inline int
atomic_fetch_xor_acquire(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_xor_acquire(i, v);
}
#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire
@@ -604,7 +591,7 @@ atomic_fetch_xor_acquire(int i, atomic_t *v)
static __always_inline int
atomic_fetch_xor_release(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_xor_release(i, v);
}
#define atomic_fetch_xor_release atomic_fetch_xor_release
@@ -614,7 +601,7 @@ atomic_fetch_xor_release(int i, atomic_t *v)
static __always_inline int
atomic_fetch_xor_relaxed(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_xor_relaxed(i, v);
}
#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed
@@ -624,7 +611,7 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v)
static __always_inline int
atomic_xchg(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_xchg(v, i);
}
#define atomic_xchg atomic_xchg
@@ -634,7 +621,7 @@ atomic_xchg(atomic_t *v, int i)
static __always_inline int
atomic_xchg_acquire(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_xchg_acquire(v, i);
}
#define atomic_xchg_acquire atomic_xchg_acquire
@@ -644,7 +631,7 @@ atomic_xchg_acquire(atomic_t *v, int i)
static __always_inline int
atomic_xchg_release(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_xchg_release(v, i);
}
#define atomic_xchg_release atomic_xchg_release
@@ -654,7 +641,7 @@ atomic_xchg_release(atomic_t *v, int i)
static __always_inline int
atomic_xchg_relaxed(atomic_t *v, int i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_xchg_relaxed(v, i);
}
#define atomic_xchg_relaxed atomic_xchg_relaxed
@@ -664,7 +651,7 @@ atomic_xchg_relaxed(atomic_t *v, int i)
static __always_inline int
atomic_cmpxchg(atomic_t *v, int old, int new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_cmpxchg(v, old, new);
}
#define atomic_cmpxchg atomic_cmpxchg
@@ -674,7 +661,7 @@ atomic_cmpxchg(atomic_t *v, int old, int new)
static __always_inline int
atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_cmpxchg_acquire(v, old, new);
}
#define atomic_cmpxchg_acquire atomic_cmpxchg_acquire
@@ -684,7 +671,7 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
static __always_inline int
atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_cmpxchg_release(v, old, new);
}
#define atomic_cmpxchg_release atomic_cmpxchg_release
@@ -694,7 +681,7 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new)
static __always_inline int
atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_cmpxchg_relaxed(v, old, new);
}
#define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed
@@ -704,8 +691,8 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
static __always_inline bool
atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic_try_cmpxchg(v, old, new);
}
#define atomic_try_cmpxchg atomic_try_cmpxchg
@@ -715,8 +702,8 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
static __always_inline bool
atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic_try_cmpxchg_acquire(v, old, new);
}
#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire
@@ -726,8 +713,8 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
static __always_inline bool
atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic_try_cmpxchg_release(v, old, new);
}
#define atomic_try_cmpxchg_release atomic_try_cmpxchg_release
@@ -737,8 +724,8 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
static __always_inline bool
atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic_try_cmpxchg_relaxed(v, old, new);
}
#define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed
@@ -748,7 +735,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
static __always_inline bool
atomic_sub_and_test(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_sub_and_test(i, v);
}
#define atomic_sub_and_test atomic_sub_and_test
@@ -758,7 +745,7 @@ atomic_sub_and_test(int i, atomic_t *v)
static __always_inline bool
atomic_dec_and_test(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_and_test(v);
}
#define atomic_dec_and_test atomic_dec_and_test
@@ -768,7 +755,7 @@ atomic_dec_and_test(atomic_t *v)
static __always_inline bool
atomic_inc_and_test(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_and_test(v);
}
#define atomic_inc_and_test atomic_inc_and_test
@@ -778,7 +765,7 @@ atomic_inc_and_test(atomic_t *v)
static __always_inline bool
atomic_add_negative(int i, atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_negative(i, v);
}
#define atomic_add_negative atomic_add_negative
@@ -788,7 +775,7 @@ atomic_add_negative(int i, atomic_t *v)
static __always_inline int
atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_fetch_add_unless(v, a, u);
}
#define atomic_fetch_add_unless atomic_fetch_add_unless
@@ -798,7 +785,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u)
static __always_inline bool
atomic_add_unless(atomic_t *v, int a, int u)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_add_unless(v, a, u);
}
#define atomic_add_unless atomic_add_unless
@@ -808,7 +795,7 @@ atomic_add_unless(atomic_t *v, int a, int u)
static __always_inline bool
atomic_inc_not_zero(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_not_zero(v);
}
#define atomic_inc_not_zero atomic_inc_not_zero
@@ -818,7 +805,7 @@ atomic_inc_not_zero(atomic_t *v)
static __always_inline bool
atomic_inc_unless_negative(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_inc_unless_negative(v);
}
#define atomic_inc_unless_negative atomic_inc_unless_negative
@@ -828,7 +815,7 @@ atomic_inc_unless_negative(atomic_t *v)
static __always_inline bool
atomic_dec_unless_positive(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_unless_positive(v);
}
#define atomic_dec_unless_positive atomic_dec_unless_positive
@@ -838,7 +825,7 @@ atomic_dec_unless_positive(atomic_t *v)
static __always_inline int
atomic_dec_if_positive(atomic_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic_dec_if_positive(v);
}
#define atomic_dec_if_positive atomic_dec_if_positive
@@ -847,7 +834,7 @@ atomic_dec_if_positive(atomic_t *v)
static __always_inline s64
atomic64_read(const atomic64_t *v)
{
- __atomic_check_read(v, sizeof(*v));
+ instrument_atomic_read(v, sizeof(*v));
return arch_atomic64_read(v);
}
#define atomic64_read atomic64_read
@@ -856,7 +843,7 @@ atomic64_read(const atomic64_t *v)
static __always_inline s64
atomic64_read_acquire(const atomic64_t *v)
{
- __atomic_check_read(v, sizeof(*v));
+ instrument_atomic_read(v, sizeof(*v));
return arch_atomic64_read_acquire(v);
}
#define atomic64_read_acquire atomic64_read_acquire
@@ -865,7 +852,7 @@ atomic64_read_acquire(const atomic64_t *v)
static __always_inline void
atomic64_set(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_set(v, i);
}
#define atomic64_set atomic64_set
@@ -874,7 +861,7 @@ atomic64_set(atomic64_t *v, s64 i)
static __always_inline void
atomic64_set_release(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_set_release(v, i);
}
#define atomic64_set_release atomic64_set_release
@@ -883,7 +870,7 @@ atomic64_set_release(atomic64_t *v, s64 i)
static __always_inline void
atomic64_add(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_add(i, v);
}
#define atomic64_add atomic64_add
@@ -892,7 +879,7 @@ atomic64_add(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_add_return(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_return(i, v);
}
#define atomic64_add_return atomic64_add_return
@@ -902,7 +889,7 @@ atomic64_add_return(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_return_acquire(i, v);
}
#define atomic64_add_return_acquire atomic64_add_return_acquire
@@ -912,7 +899,7 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_add_return_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_return_release(i, v);
}
#define atomic64_add_return_release atomic64_add_return_release
@@ -922,7 +909,7 @@ atomic64_add_return_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_return_relaxed(i, v);
}
#define atomic64_add_return_relaxed atomic64_add_return_relaxed
@@ -932,7 +919,7 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add(i, v);
}
#define atomic64_fetch_add atomic64_fetch_add
@@ -942,7 +929,7 @@ atomic64_fetch_add(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add_acquire(i, v);
}
#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire
@@ -952,7 +939,7 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add_release(i, v);
}
#define atomic64_fetch_add_release atomic64_fetch_add_release
@@ -962,7 +949,7 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add_relaxed(i, v);
}
#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
@@ -971,7 +958,7 @@ atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_sub(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_sub(i, v);
}
#define atomic64_sub atomic64_sub
@@ -980,7 +967,7 @@ atomic64_sub(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_sub_return(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_return(i, v);
}
#define atomic64_sub_return atomic64_sub_return
@@ -990,7 +977,7 @@ atomic64_sub_return(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_return_acquire(i, v);
}
#define atomic64_sub_return_acquire atomic64_sub_return_acquire
@@ -1000,7 +987,7 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_sub_return_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_return_release(i, v);
}
#define atomic64_sub_return_release atomic64_sub_return_release
@@ -1010,7 +997,7 @@ atomic64_sub_return_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_return_relaxed(i, v);
}
#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed
@@ -1020,7 +1007,7 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_sub(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_sub(i, v);
}
#define atomic64_fetch_sub atomic64_fetch_sub
@@ -1030,7 +1017,7 @@ atomic64_fetch_sub(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_sub_acquire(i, v);
}
#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire
@@ -1040,7 +1027,7 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_sub_release(i, v);
}
#define atomic64_fetch_sub_release atomic64_fetch_sub_release
@@ -1050,7 +1037,7 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_sub_relaxed(i, v);
}
#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
@@ -1060,7 +1047,7 @@ atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_inc(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_inc(v);
}
#define atomic64_inc atomic64_inc
@@ -1070,7 +1057,7 @@ atomic64_inc(atomic64_t *v)
static __always_inline s64
atomic64_inc_return(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_return(v);
}
#define atomic64_inc_return atomic64_inc_return
@@ -1080,7 +1067,7 @@ atomic64_inc_return(atomic64_t *v)
static __always_inline s64
atomic64_inc_return_acquire(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_return_acquire(v);
}
#define atomic64_inc_return_acquire atomic64_inc_return_acquire
@@ -1090,7 +1077,7 @@ atomic64_inc_return_acquire(atomic64_t *v)
static __always_inline s64
atomic64_inc_return_release(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_return_release(v);
}
#define atomic64_inc_return_release atomic64_inc_return_release
@@ -1100,7 +1087,7 @@ atomic64_inc_return_release(atomic64_t *v)
static __always_inline s64
atomic64_inc_return_relaxed(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_return_relaxed(v);
}
#define atomic64_inc_return_relaxed atomic64_inc_return_relaxed
@@ -1110,7 +1097,7 @@ atomic64_inc_return_relaxed(atomic64_t *v)
static __always_inline s64
atomic64_fetch_inc(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_inc(v);
}
#define atomic64_fetch_inc atomic64_fetch_inc
@@ -1120,7 +1107,7 @@ atomic64_fetch_inc(atomic64_t *v)
static __always_inline s64
atomic64_fetch_inc_acquire(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_inc_acquire(v);
}
#define atomic64_fetch_inc_acquire atomic64_fetch_inc_acquire
@@ -1130,7 +1117,7 @@ atomic64_fetch_inc_acquire(atomic64_t *v)
static __always_inline s64
atomic64_fetch_inc_release(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_inc_release(v);
}
#define atomic64_fetch_inc_release atomic64_fetch_inc_release
@@ -1140,7 +1127,7 @@ atomic64_fetch_inc_release(atomic64_t *v)
static __always_inline s64
atomic64_fetch_inc_relaxed(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_inc_relaxed(v);
}
#define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed
@@ -1150,7 +1137,7 @@ atomic64_fetch_inc_relaxed(atomic64_t *v)
static __always_inline void
atomic64_dec(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_dec(v);
}
#define atomic64_dec atomic64_dec
@@ -1160,7 +1147,7 @@ atomic64_dec(atomic64_t *v)
static __always_inline s64
atomic64_dec_return(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_return(v);
}
#define atomic64_dec_return atomic64_dec_return
@@ -1170,7 +1157,7 @@ atomic64_dec_return(atomic64_t *v)
static __always_inline s64
atomic64_dec_return_acquire(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_return_acquire(v);
}
#define atomic64_dec_return_acquire atomic64_dec_return_acquire
@@ -1180,7 +1167,7 @@ atomic64_dec_return_acquire(atomic64_t *v)
static __always_inline s64
atomic64_dec_return_release(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_return_release(v);
}
#define atomic64_dec_return_release atomic64_dec_return_release
@@ -1190,7 +1177,7 @@ atomic64_dec_return_release(atomic64_t *v)
static __always_inline s64
atomic64_dec_return_relaxed(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_return_relaxed(v);
}
#define atomic64_dec_return_relaxed atomic64_dec_return_relaxed
@@ -1200,7 +1187,7 @@ atomic64_dec_return_relaxed(atomic64_t *v)
static __always_inline s64
atomic64_fetch_dec(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_dec(v);
}
#define atomic64_fetch_dec atomic64_fetch_dec
@@ -1210,7 +1197,7 @@ atomic64_fetch_dec(atomic64_t *v)
static __always_inline s64
atomic64_fetch_dec_acquire(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_dec_acquire(v);
}
#define atomic64_fetch_dec_acquire atomic64_fetch_dec_acquire
@@ -1220,7 +1207,7 @@ atomic64_fetch_dec_acquire(atomic64_t *v)
static __always_inline s64
atomic64_fetch_dec_release(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_dec_release(v);
}
#define atomic64_fetch_dec_release atomic64_fetch_dec_release
@@ -1230,7 +1217,7 @@ atomic64_fetch_dec_release(atomic64_t *v)
static __always_inline s64
atomic64_fetch_dec_relaxed(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_dec_relaxed(v);
}
#define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed
@@ -1239,7 +1226,7 @@ atomic64_fetch_dec_relaxed(atomic64_t *v)
static __always_inline void
atomic64_and(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_and(i, v);
}
#define atomic64_and atomic64_and
@@ -1248,7 +1235,7 @@ atomic64_and(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_and(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_and(i, v);
}
#define atomic64_fetch_and atomic64_fetch_and
@@ -1258,7 +1245,7 @@ atomic64_fetch_and(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_and_acquire(i, v);
}
#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire
@@ -1268,7 +1255,7 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_and_release(i, v);
}
#define atomic64_fetch_and_release atomic64_fetch_and_release
@@ -1278,7 +1265,7 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_and_relaxed(i, v);
}
#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
@@ -1288,7 +1275,7 @@ atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_andnot(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_andnot(i, v);
}
#define atomic64_andnot atomic64_andnot
@@ -1298,7 +1285,7 @@ atomic64_andnot(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_andnot(i, v);
}
#define atomic64_fetch_andnot atomic64_fetch_andnot
@@ -1308,7 +1295,7 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_andnot_acquire(i, v);
}
#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire
@@ -1318,7 +1305,7 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_andnot_release(i, v);
}
#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release
@@ -1328,7 +1315,7 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_andnot_relaxed(i, v);
}
#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
@@ -1337,7 +1324,7 @@ atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_or(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_or(i, v);
}
#define atomic64_or atomic64_or
@@ -1346,7 +1333,7 @@ atomic64_or(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_or(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_or(i, v);
}
#define atomic64_fetch_or atomic64_fetch_or
@@ -1356,7 +1343,7 @@ atomic64_fetch_or(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_or_acquire(i, v);
}
#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire
@@ -1366,7 +1353,7 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_or_release(i, v);
}
#define atomic64_fetch_or_release atomic64_fetch_or_release
@@ -1376,7 +1363,7 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_or_relaxed(i, v);
}
#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed
@@ -1385,7 +1372,7 @@ atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
static __always_inline void
atomic64_xor(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
arch_atomic64_xor(i, v);
}
#define atomic64_xor atomic64_xor
@@ -1394,7 +1381,7 @@ atomic64_xor(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_xor(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_xor(i, v);
}
#define atomic64_fetch_xor atomic64_fetch_xor
@@ -1404,7 +1391,7 @@ atomic64_fetch_xor(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_xor_acquire(i, v);
}
#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire
@@ -1414,7 +1401,7 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_xor_release(i, v);
}
#define atomic64_fetch_xor_release atomic64_fetch_xor_release
@@ -1424,7 +1411,7 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_xor_relaxed(i, v);
}
#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
@@ -1434,7 +1421,7 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_xchg(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_xchg(v, i);
}
#define atomic64_xchg atomic64_xchg
@@ -1444,7 +1431,7 @@ atomic64_xchg(atomic64_t *v, s64 i)
static __always_inline s64
atomic64_xchg_acquire(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_xchg_acquire(v, i);
}
#define atomic64_xchg_acquire atomic64_xchg_acquire
@@ -1454,7 +1441,7 @@ atomic64_xchg_acquire(atomic64_t *v, s64 i)
static __always_inline s64
atomic64_xchg_release(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_xchg_release(v, i);
}
#define atomic64_xchg_release atomic64_xchg_release
@@ -1464,7 +1451,7 @@ atomic64_xchg_release(atomic64_t *v, s64 i)
static __always_inline s64
atomic64_xchg_relaxed(atomic64_t *v, s64 i)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_xchg_relaxed(v, i);
}
#define atomic64_xchg_relaxed atomic64_xchg_relaxed
@@ -1474,7 +1461,7 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 i)
static __always_inline s64
atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_cmpxchg(v, old, new);
}
#define atomic64_cmpxchg atomic64_cmpxchg
@@ -1484,7 +1471,7 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
static __always_inline s64
atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_cmpxchg_acquire(v, old, new);
}
#define atomic64_cmpxchg_acquire atomic64_cmpxchg_acquire
@@ -1494,7 +1481,7 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
static __always_inline s64
atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_cmpxchg_release(v, old, new);
}
#define atomic64_cmpxchg_release atomic64_cmpxchg_release
@@ -1504,7 +1491,7 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
static __always_inline s64
atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_cmpxchg_relaxed(v, old, new);
}
#define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed
@@ -1514,8 +1501,8 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
static __always_inline bool
atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic64_try_cmpxchg(v, old, new);
}
#define atomic64_try_cmpxchg atomic64_try_cmpxchg
@@ -1525,8 +1512,8 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
static __always_inline bool
atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic64_try_cmpxchg_acquire(v, old, new);
}
#define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg_acquire
@@ -1536,8 +1523,8 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
static __always_inline bool
atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic64_try_cmpxchg_release(v, old, new);
}
#define atomic64_try_cmpxchg_release atomic64_try_cmpxchg_release
@@ -1547,8 +1534,8 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
static __always_inline bool
atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
- __atomic_check_write(v, sizeof(*v));
- __atomic_check_write(old, sizeof(*old));
+ instrument_atomic_write(v, sizeof(*v));
+ instrument_atomic_write(old, sizeof(*old));
return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
}
#define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg_relaxed
@@ -1558,7 +1545,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
static __always_inline bool
atomic64_sub_and_test(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_sub_and_test(i, v);
}
#define atomic64_sub_and_test atomic64_sub_and_test
@@ -1568,7 +1555,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v)
static __always_inline bool
atomic64_dec_and_test(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_and_test(v);
}
#define atomic64_dec_and_test atomic64_dec_and_test
@@ -1578,7 +1565,7 @@ atomic64_dec_and_test(atomic64_t *v)
static __always_inline bool
atomic64_inc_and_test(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_and_test(v);
}
#define atomic64_inc_and_test atomic64_inc_and_test
@@ -1588,7 +1575,7 @@ atomic64_inc_and_test(atomic64_t *v)
static __always_inline bool
atomic64_add_negative(s64 i, atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_negative(i, v);
}
#define atomic64_add_negative atomic64_add_negative
@@ -1598,7 +1585,7 @@ atomic64_add_negative(s64 i, atomic64_t *v)
static __always_inline s64
atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_fetch_add_unless(v, a, u);
}
#define atomic64_fetch_add_unless atomic64_fetch_add_unless
@@ -1608,7 +1595,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
static __always_inline bool
atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_add_unless(v, a, u);
}
#define atomic64_add_unless atomic64_add_unless
@@ -1618,7 +1605,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
static __always_inline bool
atomic64_inc_not_zero(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_not_zero(v);
}
#define atomic64_inc_not_zero atomic64_inc_not_zero
@@ -1628,7 +1615,7 @@ atomic64_inc_not_zero(atomic64_t *v)
static __always_inline bool
atomic64_inc_unless_negative(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_inc_unless_negative(v);
}
#define atomic64_inc_unless_negative atomic64_inc_unless_negative
@@ -1638,7 +1625,7 @@ atomic64_inc_unless_negative(atomic64_t *v)
static __always_inline bool
atomic64_dec_unless_positive(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_unless_positive(v);
}
#define atomic64_dec_unless_positive atomic64_dec_unless_positive
@@ -1648,7 +1635,7 @@ atomic64_dec_unless_positive(atomic64_t *v)
static __always_inline s64
atomic64_dec_if_positive(atomic64_t *v)
{
- __atomic_check_write(v, sizeof(*v));
+ instrument_atomic_write(v, sizeof(*v));
return arch_atomic64_dec_if_positive(v);
}
#define atomic64_dec_if_positive atomic64_dec_if_positive
@@ -1658,7 +1645,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define xchg(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_xchg(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1667,7 +1654,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define xchg_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1676,7 +1663,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define xchg_release(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_xchg_release(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1685,7 +1672,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define xchg_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1694,7 +1681,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1703,7 +1690,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1712,7 +1699,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_release(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1721,7 +1708,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1730,7 +1717,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg64(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1739,7 +1726,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg64_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1748,7 +1735,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg64_release(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1757,7 +1744,7 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg64_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \
})
#endif
@@ -1765,28 +1752,28 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg_local(__ai_ptr, __VA_ARGS__); \
})

#define cmpxchg64_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \
})

#define sync_cmpxchg(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \
})

#define cmpxchg_double(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \
arch_cmpxchg_double(__ai_ptr, __VA_ARGS__); \
})

@@ -1794,9 +1781,9 @@ atomic64_dec_if_positive(atomic64_t *v)
#define cmpxchg_double_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
- __atomic_check_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \
+ instrument_atomic_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \
arch_cmpxchg_double_local(__ai_ptr, __VA_ARGS__); \
})

#endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */
-// 7b7e2af0e75c8ecb6f02298a7075f503f30d244c
+// 89bf97f3a7509b740845e51ddf31055b48a81f40
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index fb4222548b22..6afadf73da17 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -20,7 +20,7 @@ gen_param_check()
# We don't write to constant parameters
[ ${type#c} != ${type} ] && rw="read"

- printf "\t__atomic_check_${rw}(${name}, sizeof(*${name}));\n"
+ printf "\tinstrument_atomic_${rw}(${name}, sizeof(*${name}));\n"
}

#gen_param_check(arg...)
@@ -107,7 +107,7 @@ cat <<EOF
#define ${xchg}(ptr, ...) \\
({ \\
typeof(ptr) __ai_ptr = (ptr); \\
- __atomic_check_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
+ instrument_atomic_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
arch_${xchg}(__ai_ptr, __VA_ARGS__); \\
})
EOF
@@ -148,20 +148,7 @@ cat << EOF

#include <linux/build_bug.h>
#include <linux/compiler.h>
-#include <linux/kasan-checks.h>
-#include <linux/kcsan-checks.h>
-
-static __always_inline void __atomic_check_read(const volatile void *v, size_t size)
-{
- kasan_check_read(v, size);
- kcsan_check_atomic_read(v, size);
-}
-
-static __always_inline void __atomic_check_write(const volatile void *v, size_t size)
-{
- kasan_check_write(v, size);
- kcsan_check_atomic_write(v, size);
-}
+#include <linux/instrumented.h>

EOF

--
2.25.0.341.g760bfbb309-goog

Marco Elver

unread,
Jan 21, 2020, 11:05:34 AM
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, d...@axtens.net, christop...@c-s.fr, m...@ellerman.id.au, mhir...@kernel.org, ros...@goodmis.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, kees...@chromium.org, cyp...@cyphar.com, linux...@vger.kernel.org
Add explicit KCSAN checks for bitops.
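
For quick reference, the net effect of the hunks below can be condensed as follows (reconstructed from the diff; this is not additional code in the patch): atomic bitops take the atomic write hook, the double-underscore variants take the plain write hook, and test_bit() is instrumented as an atomic read.

static inline void set_bit(long nr, volatile unsigned long *addr)
{
	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
	arch_set_bit(nr, addr);
}

static inline void __set_bit(long nr, volatile unsigned long *addr)
{
	instrument_write(addr + BIT_WORD(nr), sizeof(long));
	arch___set_bit(nr, addr);
}

static inline bool test_bit(long nr, const volatile unsigned long *addr)
{
	instrument_atomic_read(addr + BIT_WORD(nr), sizeof(long));
	return arch_test_bit(nr, addr);
}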

Signed-off-by: Marco Elver <el...@google.com>
Acked-by: Arnd Bergmann <ar...@arndb.de>
---
include/asm-generic/bitops/instrumented-atomic.h | 14 +++++++-------
include/asm-generic/bitops/instrumented-lock.h | 10 +++++-----
.../asm-generic/bitops/instrumented-non-atomic.h | 16 ++++++++--------
3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/include/asm-generic/bitops/instrumented-atomic.h b/include/asm-generic/bitops/instrumented-atomic.h
index 18ce3c9e8eec..fb2cb33a4013 100644
--- a/include/asm-generic/bitops/instrumented-atomic.h
+++ b/include/asm-generic/bitops/instrumented-atomic.h
@@ -11,7 +11,7 @@
#ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H
#define _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H

-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>

/**
* set_bit - Atomically set a bit in memory
@@ -25,7 +25,7 @@
*/
static inline void set_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
arch_set_bit(nr, addr);
}

@@ -38,7 +38,7 @@ static inline void set_bit(long nr, volatile unsigned long *addr)
*/
static inline void clear_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
arch_clear_bit(nr, addr);
}

@@ -54,7 +54,7 @@ static inline void clear_bit(long nr, volatile unsigned long *addr)
*/
static inline void change_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
arch_change_bit(nr, addr);
}

@@ -67,7 +67,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr)
*/
static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_set_bit(nr, addr);
}

@@ -80,7 +80,7 @@ static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
*/
static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_clear_bit(nr, addr);
}

@@ -93,7 +93,7 @@ static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
*/
static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_change_bit(nr, addr);
}

diff --git a/include/asm-generic/bitops/instrumented-lock.h b/include/asm-generic/bitops/instrumented-lock.h
index ec53fdeea9ec..b9bec468ae03 100644
--- a/include/asm-generic/bitops/instrumented-lock.h
+++ b/include/asm-generic/bitops/instrumented-lock.h
@@ -11,7 +11,7 @@
#ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H
#define _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H

-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>

/**
* clear_bit_unlock - Clear a bit in memory, for unlock
@@ -22,7 +22,7 @@
*/
static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
arch_clear_bit_unlock(nr, addr);
}

@@ -37,7 +37,7 @@ static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
*/
static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
arch___clear_bit_unlock(nr, addr);
}

@@ -52,7 +52,7 @@ static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
*/
static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_set_bit_lock(nr, addr);
}

@@ -71,7 +71,7 @@ static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
static inline bool
clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_clear_bit_unlock_is_negative_byte(nr, addr);
}
/* Let everybody know we have it. */
diff --git a/include/asm-generic/bitops/instrumented-non-atomic.h b/include/asm-generic/bitops/instrumented-non-atomic.h
index 95ff28d128a1..20f788a25ef9 100644
--- a/include/asm-generic/bitops/instrumented-non-atomic.h
+++ b/include/asm-generic/bitops/instrumented-non-atomic.h
@@ -11,7 +11,7 @@
#ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
#define _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H

-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>

/**
* __set_bit - Set a bit in memory
@@ -24,7 +24,7 @@
*/
static inline void __set_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
arch___set_bit(nr, addr);
}

@@ -39,7 +39,7 @@ static inline void __set_bit(long nr, volatile unsigned long *addr)
*/
static inline void __clear_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
arch___clear_bit(nr, addr);
}

@@ -54,7 +54,7 @@ static inline void __clear_bit(long nr, volatile unsigned long *addr)
*/
static inline void __change_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
arch___change_bit(nr, addr);
}

@@ -68,7 +68,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr)
*/
static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
return arch___test_and_set_bit(nr, addr);
}

@@ -82,7 +82,7 @@ static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
*/
static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
return arch___test_and_clear_bit(nr, addr);
}

@@ -96,7 +96,7 @@ static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
*/
static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
{
- kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
return arch___test_and_change_bit(nr, addr);
}

@@ -107,7 +107,7 @@ static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
*/
static inline bool test_bit(long nr, const volatile unsigned long *addr)
{
- kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
+ instrument_atomic_read(addr + BIT_WORD(nr), sizeof(long));
return arch_test_bit(nr, addr);
}

--
2.25.0.341.g760bfbb309-goog

Marco Elver

unread,
Jan 21, 2020, 11:05:37 AM
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, d...@axtens.net, christop...@c-s.fr, m...@ellerman.id.au, mhir...@kernel.org, ros...@goodmis.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, kees...@chromium.org, cyp...@cyphar.com, linux...@vger.kernel.org
This replaces the KASAN instrumentation with generic instrumentation,
implicitly adding KCSAN instrumentation support.

For KASAN no functional change is intended.
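
The copyout()/copyin() hunks below rely on the user-copy hooks added by instrumented.h, which are not reproduced in this message. As a rough sketch (assuming the v2 form without post-hooks), they check only the kernel-side buffer of each copy:

static __always_inline void
instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
{
	/* The kernel-side source buffer is read. */
	kasan_check_read(from, n);
	kcsan_check_read(from, n);
}

static __always_inline void
instrument_copy_from_user(void *to, const void __user *from, unsigned long n)
{
	/* The kernel-side destination buffer is written. */
	kasan_check_write(to, n);
	kcsan_check_write(to, n);
}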

Suggested-by: Arnd Bergmann <ar...@arndb.de>
Signed-off-by: Marco Elver <el...@google.com>
---
v2:
* Use updated instrumented.h, removing post-hooks for user-copies.
---
lib/iov_iter.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index fb29c02c6a3c..614b6999d2da 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -8,6 +8,7 @@
#include <linux/splice.h>
#include <net/checksum.h>
#include <linux/scatterlist.h>
+#include <linux/instrumented.h>

#define PIPE_PARANOIA /* for now */

@@ -138,7 +139,7 @@
static int copyout(void __user *to, const void *from, size_t n)
{
if (access_ok(to, n)) {
- kasan_check_read(from, n);
+ instrument_copy_to_user(to, from, n);
n = raw_copy_to_user(to, from, n);
}
return n;
@@ -147,7 +148,7 @@ static int copyout(void __user *to, const void *from, size_t n)
static int copyin(void *to, const void __user *from, size_t n)
{
if (access_ok(from, n)) {
- kasan_check_write(to, n);
+ instrument_copy_from_user(to, from, n);
n = raw_copy_from_user(to, from, n);
}
return n;
@@ -639,7 +640,7 @@ EXPORT_SYMBOL(_copy_to_iter);
static int copyout_mcsafe(void __user *to, const void *from, size_t n)
{
if (access_ok(to, n)) {
- kasan_check_read(from, n);
+ instrument_copy_to_user(to, from, n);
n = copy_to_user_mcsafe((__force void *) to, from, n);
}
return n;
--
2.25.0.341.g760bfbb309-goog

Marco Elver

unread,
Jan 21, 2020, 11:05:39 AM
to el...@google.com, pau...@kernel.org, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, pet...@infradead.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, d...@axtens.net, christop...@c-s.fr, m...@ellerman.id.au, mhir...@kernel.org, ros...@goodmis.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, kees...@chromium.org, cyp...@cyphar.com, linux...@vger.kernel.org
This replaces the KASAN instrumentation with generic instrumentation,
implicitly adding KCSAN instrumentation support.

For KASAN no functional change is intended.
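
As a hypothetical illustration of what the switch buys us (not part of the patch; the names below are made up): because instrument_copy_to_user() also performs a KCSAN read check on the kernel-side source buffer, an unsynchronized concurrent writer to that buffer becomes a reportable data race.

#include <linux/uaccess.h>

static char stats[64];	/* hypothetical buffer updated from elsewhere */

static long read_stats(char __user *buf)
{
	/*
	 * If another CPU updates stats[] without synchronization while
	 * this copy runs, KCSAN can now flag the race via the read check
	 * in instrument_copy_to_user().
	 */
	if (copy_to_user(buf, stats, sizeof(stats)))
		return -EFAULT;
	return sizeof(stats);
}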

Suggested-by: Arnd Bergmann <ar...@arndb.de>
Signed-off-by: Marco Elver <el...@google.com>
---
v2:
* Use updated instrumented.h, removing post-hooks for user-copies.
---
include/linux/uaccess.h | 14 +++++++-------
lib/usercopy.c | 7 ++++---
2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 67f016010aad..8a215c5c1aed 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -2,9 +2,9 @@
#ifndef __LINUX_UACCESS_H__
#define __LINUX_UACCESS_H__

+#include <linux/instrumented.h>
#include <linux/sched.h>
#include <linux/thread_info.h>
-#include <linux/kasan-checks.h>

#define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS)

@@ -58,7 +58,7 @@
static __always_inline __must_check unsigned long
__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
{
- kasan_check_write(to, n);
+ instrument_copy_from_user(to, from, n);
check_object_size(to, n, false);
return raw_copy_from_user(to, from, n);
}
@@ -67,7 +67,7 @@ static __always_inline __must_check unsigned long
__copy_from_user(void *to, const void __user *from, unsigned long n)
{
might_fault();
- kasan_check_write(to, n);
+ instrument_copy_from_user(to, from, n);
check_object_size(to, n, false);
return raw_copy_from_user(to, from, n);
}
@@ -88,7 +88,7 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
static __always_inline __must_check unsigned long
__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
{
- kasan_check_read(from, n);
+ instrument_copy_to_user(to, from, n);
check_object_size(from, n, true);
return raw_copy_to_user(to, from, n);
}
@@ -97,7 +97,7 @@ static __always_inline __must_check unsigned long
__copy_to_user(void __user *to, const void *from, unsigned long n)
{
might_fault();
- kasan_check_read(from, n);
+ instrument_copy_to_user(to, from, n);
check_object_size(from, n, true);
return raw_copy_to_user(to, from, n);
}
@@ -109,7 +109,7 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
unsigned long res = n;
might_fault();
if (likely(access_ok(from, n))) {
- kasan_check_write(to, n);
+ instrument_copy_from_user(to, from, n);
res = raw_copy_from_user(to, from, n);
}
if (unlikely(res))
@@ -127,7 +127,7 @@ _copy_to_user(void __user *to, const void *from, unsigned long n)
{
might_fault();
if (access_ok(to, n)) {
- kasan_check_read(from, n);
+ instrument_copy_to_user(to, from, n);
n = raw_copy_to_user(to, from, n);
}
return n;
diff --git a/lib/usercopy.c b/lib/usercopy.c
index cbb4d9ec00f2..4bb1c5e7a3eb 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
-#include <linux/uaccess.h>
#include <linux/bitops.h>
+#include <linux/instrumented.h>
+#include <linux/uaccess.h>

/* out-of-line parts */

@@ -10,7 +11,7 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
unsigned long res = n;
might_fault();
if (likely(access_ok(from, n))) {
- kasan_check_write(to, n);
+ instrument_copy_from_user(to, from, n);
res = raw_copy_from_user(to, from, n);
}
if (unlikely(res))
@@ -25,7 +26,7 @@ unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
{
might_fault();
if (likely(access_ok(to, n))) {
- kasan_check_read(from, n);
+ instrument_copy_to_user(to, from, n);
n = raw_copy_to_user(to, from, n);
}
return n;
--
2.25.0.341.g760bfbb309-goog

Marco Elver

unread,
Jan 21, 2020, 11:14:18 AM
to Dmitry Vyukov, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens, Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar, Christian Brauner, Daniel Borkmann, cyp...@cyphar.com, Kees Cook, linux-arch
Sent v2: http://lkml.kernel.org/r/20200121160512...@google.com
I hope it'll satisfy our various constraints for now.

Thanks,
-- Marco

Paul E. McKenney

unread,
Jan 21, 2020, 11:16:33 AM
to Peter Zijlstra, Marco Elver, andre...@google.com, gli...@google.com, dvy...@google.com, kasa...@googlegroups.com, linux-...@vger.kernel.org, mark.r...@arm.com, wi...@kernel.org, boqun...@gmail.com, ar...@arndb.de, vi...@zeniv.linux.org.uk, christop...@c-s.fr, d...@axtens.net, m...@ellerman.id.au, ros...@goodmis.org, mhir...@kernel.org, mi...@kernel.org, christia...@ubuntu.com, dan...@iogearbox.net, cyp...@cyphar.com, kees...@chromium.org, linux...@vger.kernel.org
Fair enough!

Thanx, Paul

Dmitry Vyukov

unread,
Jan 24, 2020, 6:23:09 AM
to Marco Elver, Paul E. McKenney, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Daniel Axtens, Christophe Leroy, Michael Ellerman, Masami Hiramatsu, Steven Rostedt, Ingo Molnar, Christian Brauner, Daniel Borkmann, Kees Cook, cyp...@cyphar.com, linux-arch
On Tue, Jan 21, 2020 at 5:05 PM 'Marco Elver' via kasan-dev
<kasa...@googlegroups.com> wrote:
>
> This adds instrumented.h, which provides generic wrappers for memory
> access instrumentation that the compiler cannot emit for various
> sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> future this will also include KMSAN instrumentation.
>
> Note that, copy_{to,from}_user should use special instrumentation, since
> we should be able to instrument both source and destination memory
> accesses if both are kernel memory.
>
> The current patch only instruments the memory access where the address
> is always in kernel space, however, both may in fact be kernel addresses
> when a compat syscall passes an argument allocated in the kernel to a
> real syscall. In a future change, both KASAN and KCSAN should check both
> addresses in such cases, as well as KMSAN will make use of both
> addresses. [It made more sense to provide the completed function
> signature, rather than updating it and changing all locations again at a
> later time.]

Reviewed-by: Dmitry Vyukov <dvy...@google.com>

Paul E. McKenney

unread,
Jan 29, 2020, 2:13:49 PM
to Dmitry Vyukov, Marco Elver, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng, Arnd Bergmann, Al Viro, Daniel Axtens, Christophe Leroy, Michael Ellerman, Masami Hiramatsu, Steven Rostedt, Ingo Molnar, Christian Brauner, Daniel Borkmann, Kees Cook, cyp...@cyphar.com, linux-arch
On Fri, Jan 24, 2020 at 12:22:56PM +0100, Dmitry Vyukov wrote:
> On Tue, Jan 21, 2020 at 5:05 PM 'Marco Elver' via kasan-dev
> <kasa...@googlegroups.com> wrote:
> >
> > This adds instrumented.h, which provides generic wrappers for memory
> > access instrumentation that the compiler cannot emit for various
> > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > future this will also include KMSAN instrumentation.
> >
> > Note that, copy_{to,from}_user should use special instrumentation, since
> > we should be able to instrument both source and destination memory
> > accesses if both are kernel memory.
> >
> > The current patch only instruments the memory access where the address
> > is always in kernel space, however, both may in fact be kernel addresses
> > when a compat syscall passes an argument allocated in the kernel to a
> > real syscall. In a future change, both KASAN and KCSAN should check both
> > addresses in such cases, as well as KMSAN will make use of both
> > addresses. [It made more sense to provide the completed function
> > signature, rather than updating it and changing all locations again at a
> > later time.]
>
> Reviewed-by: Dmitry Vyukov <dvy...@google.com>

I have applied this and the other four with Dmitry's Reviewed-by.

Thank you all!

Thanx, Paul