[PATCH 0/7] x86: remove always-defined CONFIG_AS_* options


Masahiro Yamada

Mar 22, 2020, 10:09:37 PM
to x...@kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, linux-...@vger.kernel.org, Jason A . Donenfeld, Masahiro Yamada, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-bu...@googlegroups.com, linux-...@vger.kernel.org
arch/x86/Makefile tests whether the assembler supports certain
instructions with $(call as-instr, ...).

Some of these checks are very old.
For example, the check for CONFIG_AS_CFI dates back to 2006.

We raise the minimum supported GCC version from time to time, and we
clean old code away. The same policy applies to binutils.

The current minimum supported version of binutils is 2.21.

This is new enough to recognize the instructions in most of the
as-instr calls.
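For reference, the probe that as-instr performs can be sketched as a tiny shell function; lib/raid6/test/Makefile open-codes essentially the same pattern with gcc. The ${CC:-cc} driver and the CONFIG_AS_FOO flag below are illustrative stand-ins, not the exact Kbuild invocation:

```shell
# Rough sketch of Kbuild's as-instr: assemble a single instruction read from
# stdin, then emit the second argument on success or the third on failure.
# ${CC:-cc} and CONFIG_AS_FOO are stand-ins, not the kernel's real flags.
probe_instr() {
    printf '%s\n' "$1" | ${CC:-cc} -c -x assembler -o /dev/null - 2>/dev/null \
        && printf '%s\n' "$2" || printf '%s\n' "$3"
}

# Any assembler the kernel supports accepts a plain nop:
probe_instr 'nop' '-DCONFIG_AS_FOO=1' ''
```

When the probe fails (or no compiler is available at all), the function falls back to the alternative value, just as as-instr does.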



Masahiro Yamada (7):
x86: remove unneeded defined(__ASSEMBLY__) check from asm/dwarf2.h
x86: remove always-defined CONFIG_AS_CFI
x86: remove always-defined CONFIG_AS_CFI_SIGNAL_FRAME
x86: remove always-defined CONFIG_AS_CFI_SECTIONS
x86: remove always-defined CONFIG_AS_SSSE3
x86: remove always-defined CONFIG_AS_AVX
x86: add comments about the binutils version to support code in
as-instr

arch/x86/Makefile | 21 +++------
arch/x86/crypto/Makefile | 32 ++++++--------
arch/x86/crypto/aesni-intel_avx-x86_64.S | 3 --
arch/x86/crypto/aesni-intel_glue.c | 14 +-----
arch/x86/crypto/blake2s-core.S | 2 -
arch/x86/crypto/poly1305-x86_64-cryptogams.pl | 8 ----
arch/x86/crypto/poly1305_glue.c | 6 +--
arch/x86/crypto/sha1_ssse3_asm.S | 4 --
arch/x86/crypto/sha1_ssse3_glue.c | 9 +---
arch/x86/crypto/sha256-avx-asm.S | 3 --
arch/x86/crypto/sha256_ssse3_glue.c | 8 +---
arch/x86/crypto/sha512-avx-asm.S | 2 -
arch/x86/crypto/sha512_ssse3_glue.c | 7 +--
arch/x86/include/asm/dwarf2.h | 43 -------------------
arch/x86/include/asm/xor_avx.h | 9 ----
lib/raid6/algos.c | 2 -
lib/raid6/recov_ssse3.c | 6 ---
lib/raid6/test/Makefile | 3 --
18 files changed, 26 insertions(+), 156 deletions(-)

--
2.17.1

Masahiro Yamada

Mar 22, 2020, 10:09:50 PM
to x...@kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, linux-...@vger.kernel.org, Jason A . Donenfeld, Masahiro Yamada, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, clang-bu...@googlegroups.com, linux-...@vger.kernel.org
CONFIG_AS_AVX was introduced by commit ea4d26ae24e5 ("raid5: add AVX
optimized RAID5 checksumming").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by LLVM integrated assembler.

Remove CONFIG_AS_AVX, which is always defined.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
---

arch/x86/Makefile | 5 ++-
arch/x86/crypto/Makefile | 32 +++++++------------
arch/x86/crypto/aesni-intel_avx-x86_64.S | 3 --
arch/x86/crypto/aesni-intel_glue.c | 14 +-------
arch/x86/crypto/poly1305-x86_64-cryptogams.pl | 8 -----
arch/x86/crypto/poly1305_glue.c | 6 ++--
arch/x86/crypto/sha1_ssse3_asm.S | 4 ---
arch/x86/crypto/sha1_ssse3_glue.c | 9 +-----
arch/x86/crypto/sha256-avx-asm.S | 3 --
arch/x86/crypto/sha256_ssse3_glue.c | 8 +----
arch/x86/crypto/sha512-avx-asm.S | 2 --
arch/x86/crypto/sha512_ssse3_glue.c | 7 +---
arch/x86/include/asm/xor_avx.h | 9 ------
13 files changed, 21 insertions(+), 89 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 94f89612e024..f32ef7b8d5ca 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,15 +178,14 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
-avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=1)
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index 8c2e9eadee8a..1a044908d42d 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -5,7 +5,6 @@

OBJECT_FILES_NON_STANDARD := y

-avx_supported := $(call as-instr,vpxor %xmm0$(comma)%xmm0$(comma)%xmm0,yes,no)
avx2_supported := $(call as-instr,vpgatherdd %ymm0$(comma)(%eax$(comma)%ymm1\
$(comma)4)$(comma)%ymm2,yes,no)
avx512_supported :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,yes,no)
@@ -47,15 +46,12 @@ ifeq ($(adx_supported),yes)
endif

# These modules require assembler to support AVX.
-ifeq ($(avx_supported),yes)
- obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64) += \
- camellia-aesni-avx-x86_64.o
- obj-$(CONFIG_CRYPTO_CAST5_AVX_X86_64) += cast5-avx-x86_64.o
- obj-$(CONFIG_CRYPTO_CAST6_AVX_X86_64) += cast6-avx-x86_64.o
- obj-$(CONFIG_CRYPTO_TWOFISH_AVX_X86_64) += twofish-avx-x86_64.o
- obj-$(CONFIG_CRYPTO_SERPENT_AVX_X86_64) += serpent-avx-x86_64.o
- obj-$(CONFIG_CRYPTO_BLAKE2S_X86) += blake2s-x86_64.o
-endif
+obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64) += camellia-aesni-avx-x86_64.o
+obj-$(CONFIG_CRYPTO_CAST5_AVX_X86_64) += cast5-avx-x86_64.o
+obj-$(CONFIG_CRYPTO_CAST6_AVX_X86_64) += cast6-avx-x86_64.o
+obj-$(CONFIG_CRYPTO_TWOFISH_AVX_X86_64) += twofish-avx-x86_64.o
+obj-$(CONFIG_CRYPTO_SERPENT_AVX_X86_64) += serpent-avx-x86_64.o
+obj-$(CONFIG_CRYPTO_BLAKE2S_X86) += blake2s-x86_64.o

# These modules require assembler to support AVX2.
ifeq ($(avx2_supported),yes)
@@ -83,16 +79,12 @@ ifneq ($(CONFIG_CRYPTO_POLY1305_X86_64),)
targets += poly1305-x86_64-cryptogams.S
endif

-ifeq ($(avx_supported),yes)
- camellia-aesni-avx-x86_64-y := camellia-aesni-avx-asm_64.o \
- camellia_aesni_avx_glue.o
- cast5-avx-x86_64-y := cast5-avx-x86_64-asm_64.o cast5_avx_glue.o
- cast6-avx-x86_64-y := cast6-avx-x86_64-asm_64.o cast6_avx_glue.o
- twofish-avx-x86_64-y := twofish-avx-x86_64-asm_64.o \
- twofish_avx_glue.o
- serpent-avx-x86_64-y := serpent-avx-x86_64-asm_64.o \
- serpent_avx_glue.o
-endif
+camellia-aesni-avx-x86_64-y := camellia-aesni-avx-asm_64.o \
+ camellia_aesni_avx_glue.o
+cast5-avx-x86_64-y := cast5-avx-x86_64-asm_64.o cast5_avx_glue.o
+cast6-avx-x86_64-y := cast6-avx-x86_64-asm_64.o cast6_avx_glue.o
+twofish-avx-x86_64-y := twofish-avx-x86_64-asm_64.o twofish_avx_glue.o
+serpent-avx-x86_64-y := serpent-avx-x86_64-asm_64.o serpent_avx_glue.o

ifeq ($(avx2_supported),yes)
camellia-aesni-avx2-y := camellia-aesni-avx2-asm_64.o camellia_aesni_avx2_glue.o
diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S
index bfa1c0b3e5b4..cc56ee43238b 100644
--- a/arch/x86/crypto/aesni-intel_avx-x86_64.S
+++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S
@@ -886,7 +886,6 @@ _less_than_8_bytes_left_\@:
_partial_block_done_\@:
.endm # PARTIAL_BLOCK

-#ifdef CONFIG_AS_AVX
###############################################################################
# GHASH_MUL MACRO to implement: Data*HashKey mod (128,127,126,121,0)
# Input: A and B (128-bits each, bit-reflected)
@@ -1869,8 +1868,6 @@ key_256_finalize:
ret
SYM_FUNC_END(aesni_gcm_finalize_avx_gen2)

-#endif /* CONFIG_AS_AVX */
-
#ifdef CONFIG_AS_AVX2
###############################################################################
# GHASH_MUL MACRO to implement: Data*HashKey mod (128,127,126,121,0)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index bbbebbd35b5d..e0f54e00edfd 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -185,7 +185,6 @@ static const struct aesni_gcm_tfm_s aesni_gcm_tfm_sse = {
.finalize = &aesni_gcm_finalize,
};

-#ifdef CONFIG_AS_AVX
asmlinkage void aes_ctr_enc_128_avx_by8(const u8 *in, u8 *iv,
void *keys, u8 *out, unsigned int num_bytes);
asmlinkage void aes_ctr_enc_192_avx_by8(const u8 *in, u8 *iv,
@@ -234,8 +233,6 @@ static const struct aesni_gcm_tfm_s aesni_gcm_tfm_avx_gen2 = {
.finalize = &aesni_gcm_finalize_avx_gen2,
};

-#endif
-
#ifdef CONFIG_AS_AVX2
/*
* asmlinkage void aesni_gcm_init_avx_gen4()
@@ -476,7 +473,6 @@ static void ctr_crypt_final(struct crypto_aes_ctx *ctx,
crypto_inc(ctrblk, AES_BLOCK_SIZE);
}

-#ifdef CONFIG_AS_AVX
static void aesni_ctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out,
const u8 *in, unsigned int len, u8 *iv)
{
@@ -493,7 +489,6 @@ static void aesni_ctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out,
else
aes_ctr_enc_256_avx_by8(in, iv, (void *)ctx, out, len);
}
-#endif

static int ctr_crypt(struct skcipher_request *req)
{
@@ -715,10 +710,8 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
if (left < AVX_GEN4_OPTSIZE && gcm_tfm == &aesni_gcm_tfm_avx_gen4)
gcm_tfm = &aesni_gcm_tfm_avx_gen2;
#endif
-#ifdef CONFIG_AS_AVX
if (left < AVX_GEN2_OPTSIZE && gcm_tfm == &aesni_gcm_tfm_avx_gen2)
gcm_tfm = &aesni_gcm_tfm_sse;
-#endif

/* Linearize assoc, if not already linear */
if (req->src->length >= assoclen && req->src->length &&
@@ -1082,24 +1075,19 @@ static int __init aesni_init(void)
aesni_gcm_tfm = &aesni_gcm_tfm_avx_gen4;
} else
#endif
-#ifdef CONFIG_AS_AVX
if (boot_cpu_has(X86_FEATURE_AVX)) {
pr_info("AVX version of gcm_enc/dec engaged.\n");
aesni_gcm_tfm = &aesni_gcm_tfm_avx_gen2;
- } else
-#endif
- {
+ } else {
pr_info("SSE version of gcm_enc/dec engaged.\n");
aesni_gcm_tfm = &aesni_gcm_tfm_sse;
}
aesni_ctr_enc_tfm = aesni_ctr_enc;
-#ifdef CONFIG_AS_AVX
if (boot_cpu_has(X86_FEATURE_AVX)) {
/* optimize performance of ctr mode encryption transform */
aesni_ctr_enc_tfm = aesni_ctr_enc_avx_tfm;
pr_info("AES CTR mode by8 optimization enabled\n");
}
-#endif
#endif

err = crypto_register_alg(&aesni_cipher_alg);
diff --git a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl
index 7a6b5380a46f..5bac2d533104 100644
--- a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl
+++ b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl
@@ -404,10 +404,6 @@ ___
&end_function("poly1305_emit_x86_64");
if ($avx) {

-if($kernel) {
- $code .= "#ifdef CONFIG_AS_AVX\n";
-}
-
########################################################################
# Layout of opaque area is following.
#
@@ -1516,10 +1512,6 @@ $code.=<<___;
___
&end_function("poly1305_emit_avx");

-if ($kernel) {
- $code .= "#endif\n";
-}
-
if ($avx>1) {

if ($kernel) {
diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c
index 79bb58737d52..4a6226e1d15e 100644
--- a/arch/x86/crypto/poly1305_glue.c
+++ b/arch/x86/crypto/poly1305_glue.c
@@ -94,7 +94,7 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
BUILD_BUG_ON(PAGE_SIZE < POLY1305_BLOCK_SIZE ||
PAGE_SIZE % POLY1305_BLOCK_SIZE);

- if (!IS_ENABLED(CONFIG_AS_AVX) || !static_branch_likely(&poly1305_use_avx) ||
+ if (!static_branch_likely(&poly1305_use_avx) ||
(len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) ||
!crypto_simd_usable()) {
convert_to_base2_64(ctx);
@@ -123,7 +123,7 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
const u32 nonce[4])
{
- if (!IS_ENABLED(CONFIG_AS_AVX) || !static_branch_likely(&poly1305_use_avx))
+ if (!static_branch_likely(&poly1305_use_avx))
poly1305_emit_x86_64(ctx, mac, nonce);
else
poly1305_emit_avx(ctx, mac, nonce);
@@ -261,7 +261,7 @@ static struct shash_alg alg = {

static int __init poly1305_simd_mod_init(void)
{
- if (IS_ENABLED(CONFIG_AS_AVX) && boot_cpu_has(X86_FEATURE_AVX) &&
+ if (boot_cpu_has(X86_FEATURE_AVX) &&
cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL))
static_branch_enable(&poly1305_use_avx);
if (IS_ENABLED(CONFIG_AS_AVX2) && boot_cpu_has(X86_FEATURE_AVX) &&
diff --git a/arch/x86/crypto/sha1_ssse3_asm.S b/arch/x86/crypto/sha1_ssse3_asm.S
index 12e2d19d7402..d25668d2a1e9 100644
--- a/arch/x86/crypto/sha1_ssse3_asm.S
+++ b/arch/x86/crypto/sha1_ssse3_asm.S
@@ -467,8 +467,6 @@ W_PRECALC_SSSE3
*/
SHA1_VECTOR_ASM sha1_transform_ssse3

-#ifdef CONFIG_AS_AVX
-
.macro W_PRECALC_AVX

.purgem W_PRECALC_00_15
@@ -553,5 +551,3 @@ W_PRECALC_AVX
* const u8 *data, int blocks);
*/
SHA1_VECTOR_ASM sha1_transform_avx
-
-#endif
diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index d70b40ad594c..275b65dd30c9 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -114,7 +114,6 @@ static void unregister_sha1_ssse3(void)
crypto_unregister_shash(&sha1_ssse3_alg);
}

-#ifdef CONFIG_AS_AVX
asmlinkage void sha1_transform_avx(struct sha1_state *state,
const u8 *data, int blocks);

@@ -175,13 +174,7 @@ static void unregister_sha1_avx(void)
crypto_unregister_shash(&sha1_avx_alg);
}

-#else /* CONFIG_AS_AVX */
-static inline int register_sha1_avx(void) { return 0; }
-static inline void unregister_sha1_avx(void) { }
-#endif /* CONFIG_AS_AVX */
-
-
-#if defined(CONFIG_AS_AVX2) && (CONFIG_AS_AVX)
+#if defined(CONFIG_AS_AVX2)
#define SHA1_AVX2_BLOCK_OPTSIZE 4 /* optimal 4*64 bytes of SHA1 blocks */

asmlinkage void sha1_transform_avx2(struct sha1_state *state,
diff --git a/arch/x86/crypto/sha256-avx-asm.S b/arch/x86/crypto/sha256-avx-asm.S
index fcbc30f58c38..4739cd31b9db 100644
--- a/arch/x86/crypto/sha256-avx-asm.S
+++ b/arch/x86/crypto/sha256-avx-asm.S
@@ -47,7 +47,6 @@
# This code schedules 1 block at a time, with 4 lanes per block
########################################################################

-#ifdef CONFIG_AS_AVX
#include <linux/linkage.h>

## assume buffers not aligned
@@ -498,5 +497,3 @@ _SHUF_00BA:
# shuffle xDxC -> DC00
_SHUF_DC00:
.octa 0x0b0a090803020100FFFFFFFFFFFFFFFF
-
-#endif
diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index 03ad657c04bd..8bdc3be31f64 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -144,7 +144,6 @@ static void unregister_sha256_ssse3(void)
ARRAY_SIZE(sha256_ssse3_algs));
}

-#ifdef CONFIG_AS_AVX
asmlinkage void sha256_transform_avx(struct sha256_state *state,
const u8 *data, int blocks);

@@ -221,12 +220,7 @@ static void unregister_sha256_avx(void)
ARRAY_SIZE(sha256_avx_algs));
}

-#else
-static inline int register_sha256_avx(void) { return 0; }
-static inline void unregister_sha256_avx(void) { }
-#endif
-
-#if defined(CONFIG_AS_AVX2) && defined(CONFIG_AS_AVX)
+#if defined(CONFIG_AS_AVX2)
asmlinkage void sha256_transform_rorx(struct sha256_state *state,
const u8 *data, int blocks);

diff --git a/arch/x86/crypto/sha512-avx-asm.S b/arch/x86/crypto/sha512-avx-asm.S
index 90ea945ba5e6..63470fd6ae32 100644
--- a/arch/x86/crypto/sha512-avx-asm.S
+++ b/arch/x86/crypto/sha512-avx-asm.S
@@ -47,7 +47,6 @@
#
########################################################################

-#ifdef CONFIG_AS_AVX
#include <linux/linkage.h>

.text
@@ -424,4 +423,3 @@ K512:
.quad 0x3c9ebe0a15c9bebc,0x431d67c49c100d4c
.quad 0x4cc5d4becb3e42b6,0x597f299cfc657e2a
.quad 0x5fcb6fab3ad6faec,0x6c44198c4a475817
-#endif
diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c
index 1c444f41037c..75214982a633 100644
--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -142,7 +142,6 @@ static void unregister_sha512_ssse3(void)
ARRAY_SIZE(sha512_ssse3_algs));
}

-#ifdef CONFIG_AS_AVX
asmlinkage void sha512_transform_avx(struct sha512_state *state,
const u8 *data, int blocks);
static bool avx_usable(void)
@@ -218,12 +217,8 @@ static void unregister_sha512_avx(void)
crypto_unregister_shashes(sha512_avx_algs,
ARRAY_SIZE(sha512_avx_algs));
}
-#else
-static inline int register_sha512_avx(void) { return 0; }
-static inline void unregister_sha512_avx(void) { }
-#endif

-#if defined(CONFIG_AS_AVX2) && defined(CONFIG_AS_AVX)
+#if defined(CONFIG_AS_AVX2)
asmlinkage void sha512_transform_rorx(struct sha512_state *state,
const u8 *data, int blocks);

diff --git a/arch/x86/include/asm/xor_avx.h b/arch/x86/include/asm/xor_avx.h
index d61ddf3d052b..0c4e5b5e3852 100644
--- a/arch/x86/include/asm/xor_avx.h
+++ b/arch/x86/include/asm/xor_avx.h
@@ -11,8 +11,6 @@
* Based on Ingo Molnar and Zach Brown's respective MMX and SSE routines
*/

-#ifdef CONFIG_AS_AVX
-
#include <linux/compiler.h>
#include <asm/fpu/api.h>

@@ -170,11 +168,4 @@ do { \
#define AVX_SELECT(FASTEST) \
(boot_cpu_has(X86_FEATURE_AVX) && boot_cpu_has(X86_FEATURE_OSXSAVE) ? &xor_block_avx : FASTEST)

-#else
-
-#define AVX_XOR_SPEED {}
-
-#define AVX_SELECT(FASTEST) (FASTEST)
-
-#endif
#endif
--
2.17.1

Masahiro Yamada

Mar 22, 2020, 10:09:50 PM
to x...@kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, linux-...@vger.kernel.org, Jason A . Donenfeld, Masahiro Yamada, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-bu...@googlegroups.com, linux-...@vger.kernel.org
CONFIG_AS_SSSE3 was introduced by commit 75aaf4c3e6a4 ("x86/raid6:
correctly check for assembler capabilities").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by LLVM integrated assembler.

Remove CONFIG_AS_SSSE3, which is always defined.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
---

arch/x86/Makefile | 5 ++---
arch/x86/crypto/blake2s-core.S | 2 --
lib/raid6/algos.c | 2 --
lib/raid6/recov_ssse3.c | 6 ------
lib/raid6/test/Makefile | 3 ---
5 files changed, 2 insertions(+), 16 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index e4a062313bb0..94f89612e024 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,7 +178,6 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
-asinstr += $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1)
avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
@@ -186,8 +185,8 @@ sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

diff --git a/arch/x86/crypto/blake2s-core.S b/arch/x86/crypto/blake2s-core.S
index 24910b766bdd..2ca79974f819 100644
--- a/arch/x86/crypto/blake2s-core.S
+++ b/arch/x86/crypto/blake2s-core.S
@@ -46,7 +46,6 @@ SIGMA2:
#endif /* CONFIG_AS_AVX512 */

.text
-#ifdef CONFIG_AS_SSSE3
SYM_FUNC_START(blake2s_compress_ssse3)
testq %rdx,%rdx
je .Lendofloop
@@ -174,7 +173,6 @@ SYM_FUNC_START(blake2s_compress_ssse3)
.Lendofloop:
ret
SYM_FUNC_END(blake2s_compress_ssse3)
-#endif /* CONFIG_AS_SSSE3 */

#ifdef CONFIG_AS_AVX512
SYM_FUNC_START(blake2s_compress_avx512)
diff --git a/lib/raid6/algos.c b/lib/raid6/algos.c
index bf1b4765c8f6..77457ea5a239 100644
--- a/lib/raid6/algos.c
+++ b/lib/raid6/algos.c
@@ -103,9 +103,7 @@ const struct raid6_recov_calls *const raid6_recov_algos[] = {
#ifdef CONFIG_AS_AVX2
&raid6_recov_avx2,
#endif
-#ifdef CONFIG_AS_SSSE3
&raid6_recov_ssse3,
-#endif
#ifdef CONFIG_S390
&raid6_recov_s390xc,
#endif
diff --git a/lib/raid6/recov_ssse3.c b/lib/raid6/recov_ssse3.c
index 1de97d2405d0..4bfa3c6b60de 100644
--- a/lib/raid6/recov_ssse3.c
+++ b/lib/raid6/recov_ssse3.c
@@ -3,8 +3,6 @@
* Copyright (C) 2012 Intel Corporation
*/

-#ifdef CONFIG_AS_SSSE3
-
#include <linux/raid/pq.h>
#include "x86.h"

@@ -328,7 +326,3 @@ const struct raid6_recov_calls raid6_recov_ssse3 = {
#endif
.priority = 1,
};
-
-#else
-#warning "your version of binutils lacks SSSE3 support"
-#endif
diff --git a/lib/raid6/test/Makefile b/lib/raid6/test/Makefile
index 3ab8720aa2f8..79777645cac9 100644
--- a/lib/raid6/test/Makefile
+++ b/lib/raid6/test/Makefile
@@ -34,9 +34,6 @@ endif

ifeq ($(IS_X86),yes)
OBJS += mmx.o sse1.o sse2.o avx2.o recov_ssse3.o recov_avx2.o avx512.o recov_avx512.o
- CFLAGS += $(shell echo "pshufb %xmm0, %xmm0" | \
- gcc -c -x assembler - >&/dev/null && \
- rm ./-.o && echo -DCONFIG_AS_SSSE3=1)
CFLAGS += $(shell echo "vpbroadcastb %xmm0, %ymm1" | \
gcc -c -x assembler - >&/dev/null && \
rm ./-.o && echo -DCONFIG_AS_AVX2=1)
--
2.17.1

Jason A. Donenfeld

Mar 23, 2020, 12:08:06 AM
to Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-bu...@googlegroups.com, Linux Crypto Mailing List
Hey Masahiro,

Thanks for this series. I'll rebase my recent RFC on top of these
changes, which makes the work I was doing slightly easier, as there
are now fewer flags to deal with.

Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>

Jason

Jason A. Donenfeld

Mar 23, 2020, 12:12:26 AM
to Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-bu...@googlegroups.com, Linux Crypto Mailing List

Jason A. Donenfeld

Mar 23, 2020, 12:28:22 AM
to Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-bu...@googlegroups.com, Linux Crypto Mailing List
Hi again,

I've consolidated your patches and rebased mine on top, and
incorporated your useful binutils comments. The result lives here:

https://git.zx2c4.com/linux-dev/log/?h=jd/kconfig-assembler-support

I can submit all of those to the list, if you want, or maybe you can
just pull them out of there, include them in your v2, and put them in
your tree for 5.7? However you want is fine with me.

Jason

Masahiro Yamada

Mar 23, 2020, 2:36:59 AM
to Jason A. Donenfeld, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
Hi Jason,
Your series does not work correctly.

I will comment why later.




--
Best Regards
Masahiro Yamada

Jason A. Donenfeld

Mar 23, 2020, 2:53:24 AM
to Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
Bummer, okay. Looking forward to learning what's up. Also, if some
parts of it are useful (like the resorting and organizing of
arch/x86/crypto/Makefile), feel free to cannibalize it, keeping what's
useful and discarding what's not.

Jason

Sedat Dilek

Mar 23, 2020, 5:53:03 AM
to Jason A. Donenfeld, Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
Hi Jason,

I have your patches on my radar.

I have not yet checked whether your Kconfig changes really work;
in particular, I looked at [2] and have a comment on it.

I would have expected your arch/x86/Kconfig.assembler file to live at
arch/x86/crypto/Kconfig (the source include in arch/x86/Kconfig would
need to be adapted accordingly).
What about a commit subject like "x86: crypto: Probe assembler options
via Kconfig instead of makefile"?
I am not sure whether the commit message needs some massaging.

Maybe this will all be irrelevant once Masahiro has commented.

Thanks.

Regards,
- Sedat -

[1] https://git.kernel.org/pub/scm/linux/kernel/git/zx2c4/linux.git/log/?h=jd/kconfig-assembler-support
[2] https://git.kernel.org/pub/scm/linux/kernel/git/zx2c4/linux.git/commit/?h=jd/kconfig-assembler-support&id=ac483ff6fb4c785cd0b10d9756b71696829cd117

Jason A. Donenfeld

Mar 23, 2020, 2:06:38 PM
to Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
On Sun, Mar 22, 2020 at 8:10 PM Masahiro Yamada <masa...@kernel.org> wrote:
> diff --git a/lib/raid6/algos.c b/lib/raid6/algos.c
> index bf1b4765c8f6..77457ea5a239 100644
> --- a/lib/raid6/algos.c
> +++ b/lib/raid6/algos.c
> @@ -103,9 +103,7 @@ const struct raid6_recov_calls *const raid6_recov_algos[] = {
> #ifdef CONFIG_AS_AVX2
> &raid6_recov_avx2,
> #endif
> -#ifdef CONFIG_AS_SSSE3
> &raid6_recov_ssse3,
> -#endif
> #ifdef CONFIG_S390
> &raid6_recov_s390xc,
> #endif

algos.c is compiled on all platforms, so you'll need to ifdef that x86
section where SSSE3 is no longer guarding it. The pattern in the rest
of the file, if you want to follow it, is "#if defined(__x86_64__) &&
!defined(__arch_um__)". That seems ugly and like there are better
ways, but in the interest of uniformity and a lack of desire to
rewrite all the raid6 code, I went with that in this cleanup:

https://git.zx2c4.com/linux-dev/commit/?h=jd/kconfig-assembler-support&id=512a00ddebbe5294a88487dcf1dc845cf56703d9
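The effect described above can be sketched with the preprocessor alone. Everything here is illustrative (entry_ssse3 and entry_generic are made-up markers standing in for table entries, and ${CC:-cc} for the configured compiler): preprocessing the guard once normally and once with -D__arch_um__ shows the x86-only entry being dropped for a UML-style build.

```shell
# A stand-in for the raid6_recov_algos[] table: one entry guarded the way
# the rest of algos.c does it, and one entry that is always present.
src='
#if defined(__x86_64__) && !defined(__arch_um__)
entry_ssse3
#endif
entry_generic
'

# Native preprocess: entry_ssse3 survives only when targeting x86-64.
printf '%s\n' "$src" | ${CC:-cc} -E -x c - 2>/dev/null | grep '^entry_' || true
echo '---'
# Mimic a UML build: the guard now drops entry_ssse3 on every target.
printf '%s\n' "$src" | ${CC:-cc} -E -D__arch_um__ -x c - 2>/dev/null | grep '^entry_' || true
```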

Nick Desaulniers

Mar 23, 2020, 3:46:09 PM
to Masahiro Yamada, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT), Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Jason A . Donenfeld, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, open list:HARDWARE RANDOM NUMBER GENERATOR CORE
On Sun, Mar 22, 2020 at 7:09 PM Masahiro Yamada <masa...@kernel.org> wrote:
>
> arch/x86/Makefile tests instruction code by $(call as-instr, ...)
>
> Some of them are very old.
> For example, the check for CONFIG_AS_CFI dates back to 2006.
>
> We raise GCC versions from time to time, and we clean old code away.
> The same policy applied to binutils.
>
> The current minimal supported version of binutils is 2.21
>
> This is new enough to recognize the instruction in most of
> as-instr calls.

I'm quite happy to see this series; a few weekends ago I was playing
around with adding dwarf-5 support to the Linux kernel, and was
looking at these noticing there was quite a bit of cruft.
Unfortunately, I got detoured filing bugs against GNU as for dwarf-5
bugs, but the developers were very responsive and fixed them all. I
should go find and dust off that patchset. In the meantime, I'll try
to help review these patches. Thank you for sending them.
> --
> You received this message because you are subscribed to the Google Groups "Clang Built Linux" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to clang-built-li...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/clang-built-linux/20200323020844.17064-1-masahiroy%40kernel.org.



--
Thanks,
~Nick Desaulniers

Jason A. Donenfeld

Mar 23, 2020, 3:50:15 PM
to sedat...@gmail.com, Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
On Mon, Mar 23, 2020 at 3:53 AM Sedat Dilek <sedat...@gmail.com> wrote:
> Hi Jason,
> I have not checked your Kconfig changes are really working, especially
> I looked at [2] and comment on this.
>
> I would have expected your arch/x86/Kconfig.assembler file as
> arch/x86/crypto/Kconfig (source include needs to be adapted in
> arch/x86/Kconfig).

CONFIG_AS_* is required for more than just the crypto code.

> What about a commit subject like "x86: crypto: Probe assembler options
> via Kconfig instead of makefile"?

Thanks. Will fix silly verbiage and update branch.

Jason

Masahiro Yamada

Mar 23, 2020, 4:45:38 PM
to Jason A. Donenfeld, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
Thanks for the pointer,
but I think guarding with CONFIG_X86 makes more sense.

raid6_recov_ssse3 is defined in lib/raid6/recov_ssse3.c,
which is guarded like this:

raid6_pq-$(CONFIG_X86) += recov_ssse3.o recov_avx2.o mmx.o sse1.o
sse2.o avx2.o avx512.o recov_avx512.o


Indeed,

#if defined(__x86_64__) && !defined(__arch_um__)

is ugly.


I wonder why the code was written like that.

I would rather check a single CONFIG option.
Please see the attached patch.




0001-x86-replace-arch-macros-from-compiler-with-CONFIG_X8.patch

Jason A. Donenfeld

Mar 23, 2020, 4:48:51 PM
to Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
Seems better indeed. Looks like you've cleaned up multiple cases.

Now if you could only tell me what is wrong with my series... "Your
series does not work correctly. I will comment why later." I've been
at the edge of my seat, Fermat's last theorem style. :)

By the way, it looks like 5.7 will be raising the minimum binutils to
2.23: https://lore.kernel.org/lkml/20200316160...@zn.tnic/ In
light of this, I'll place another patch on top of my branch handling
that transition.

Jason

Jason A. Donenfeld

Mar 23, 2020, 5:01:55 PM
to Masahiro Yamada, Kees Cook, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
On Mon, Mar 23, 2020 at 2:48 PM Jason A. Donenfeld <Ja...@zx2c4.com> wrote:
> By the way, it looks like 5.7 will be raising the minimum binutils to
> 2.23: https://lore.kernel.org/lkml/20200316160...@zn.tnic/ In
> light of this, I'll place another patch on top of my branch handling
> that transition.

That now lives at the top of the usual branch:
https://git.zx2c4.com/linux-dev/log/?h=jd/kconfig-assembler-support

Masahiro Yamada

Mar 23, 2020, 6:04:58 PM
to Jason A. Donenfeld, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
Hi Jason,
The answer is mostly in my previous reply to Linus:
https://lkml.org/lkml/2020/3/13/27


I think this problem would happen
for CONFIG_AS_CFI and CONFIG_AS_ADX,
since the registers in the probed instructions
depend on the machine word size.

The former is OK since we are planning to
remove it.

We need to handle -m64 for the latter.
Otherwise, a problem like commit
3a7c733165a4799fa1 would happen.


So, I think we should merge this
https://lore.kernel.org/patchwork/patch/1214332/
then fix up CONFIG_AS_ADX on top of it.

(Or, if we do not need to rush,
we can delete CONFIG_AS_ADX entirely after
we bump the binutils version to 2.23)

Thanks.

Jason A. Donenfeld

Mar 23, 2020, 6:11:04 PM
to Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
Oh, gotcha. The easiest thing to do for that case would actually be
passing 32-bit registers to adox, which are valid. I'll fix that up in
my tree.

And then indeed it looks like the binutils bump is coming anyway for 5.7.

Your flags patch looks fine and potentially useful for other things
down the line though. I suppose you had in mind something like:

def_bool $(as-instr,...,-m64) if 64BIT
def_bool $(as-instr,...,-m32) if !64BIT

Anyway, I'll fix up the ADX code to be biarch like the AVX test code.
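
Spelled out a bit more, the Kconfig-based probe being sketched here might look like the following. This is purely illustrative: it assumes the proposed flags patch that lets as-instr take extra assembler flags, and the symbol and operand choices are examples, not the final patch:

```kconfig
# Hypothetical sketch: probe the instruction with the word-size flag
# matching the target, so a biarch toolchain is tested in the right mode.
config AS_ADX
	def_bool $(as-instr,adox %r10$(comma)%r10,-m64) if 64BIT
	def_bool $(as-instr,adox %eax$(comma)%eax,-m32) if !64BIT
```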

Jason

Masahiro Yamada

Mar 23, 2020, 8:14:38 PM
to x...@kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, linux-...@vger.kernel.org, linux-...@vger.kernel.org, Jason A . Donenfeld, Masahiro Yamada, David S. Miller, Herbert Xu, Ingo Molnar, Jim Kukunas, NeilBrown, Yuanhan Liu, clang-bu...@googlegroups.com

arch/x86/Makefile tests instruction code by $(call as-instr, ...)

Some of them are very old.
For example, the check for CONFIG_AS_CFI dates back to 2006.

We raise GCC versions from time to time, and we clean old code away.
The same policy applies to binutils.

The current minimal supported version of binutils is 2.21.

This is new enough to recognize the instructions in most of the
as-instr calls.
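
For reference, $(call as-instr,...) amounts to piping a single instruction into the assembler and testing the exit status, much like the hand-rolled check in lib/raid6/test/Makefile. A rough standalone sketch of that probe (the instruction and -D flag names are only examples, and the host compiler stands in for $(CC)):

```shell
# Minimal sketch of the as-instr probe: assemble one instruction and
# print the -D flag only if the assembler accepts it. Names are examples.
as_instr() {
    instr=$1
    flag=$2
    if printf '%s\n' "$instr" | ${CC:-cc} -c -x assembler -o /dev/null - 2>/dev/null; then
        printf '%s\n' "$flag"
    fi
}

as_instr 'nop' '-DCONFIG_AS_NOP=1'               # any assembler accepts nop
as_instr 'bogus_insn %zz9' '-DCONFIG_AS_BOGUS=1' # rejected: prints nothing
```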

If this series looks good, how should it be merged?
Via the x86 tree, or maybe crypto?


Changes in v2:
- New patch
- Remove CFI_SIGNAL_FRAME entirely (per Nick)
- add ifdef CONFIG_X86 to fix build errors on non-x86 arches

Masahiro Yamada (9):
lib/raid6/test: fix build on distros whose /bin/sh is not bash
x86: remove unneeded defined(__ASSEMBLY__) check from asm/dwarf2.h
x86: remove always-defined CONFIG_AS_CFI
x86: remove unneeded (CONFIG_AS_)CFI_SIGNAL_FRAME
x86: remove always-defined CONFIG_AS_CFI_SECTIONS
x86: remove always-defined CONFIG_AS_SSSE3
x86: remove always-defined CONFIG_AS_AVX
x86: add comments about the binutils version to support code in
as-instr
x86: replace arch macros from compiler with CONFIG_X86_{32,64}

arch/x86/Makefile | 21 +++------
arch/x86/crypto/Makefile | 32 +++++---------
arch/x86/crypto/aesni-intel_avx-x86_64.S | 3 --
arch/x86/crypto/aesni-intel_glue.c | 14 +-----
arch/x86/crypto/blake2s-core.S | 2 -
arch/x86/crypto/poly1305-x86_64-cryptogams.pl | 8 ----
arch/x86/crypto/poly1305_glue.c | 6 +--
arch/x86/crypto/sha1_ssse3_asm.S | 4 --
arch/x86/crypto/sha1_ssse3_glue.c | 9 +---
arch/x86/crypto/sha256-avx-asm.S | 3 --
arch/x86/crypto/sha256_ssse3_glue.c | 8 +---
arch/x86/crypto/sha512-avx-asm.S | 2 -
arch/x86/crypto/sha512_ssse3_glue.c | 7 +--
arch/x86/include/asm/dwarf2.h | 44 -------------------
arch/x86/include/asm/xor_avx.h | 9 ----
kernel/signal.c | 2 +-
lib/raid6/algos.c | 6 +--
lib/raid6/recov_ssse3.c | 6 ---
lib/raid6/test/Makefile | 8 ++--
19 files changed, 33 insertions(+), 161 deletions(-)

--
2.17.1

Masahiro Yamada

Mar 23, 2020, 8:14:39 PM
to x...@kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, linux-...@vger.kernel.org, linux-...@vger.kernel.org, Jason A . Donenfeld, Masahiro Yamada, Ingo Molnar, clang-bu...@googlegroups.com
CONFIG_AS_CFI was introduced by commit e2414910f212 ("[PATCH] x86:
Detect CFI support in the assembler at runtime"), and extended by
commit f0f12d85af85 ("x86_64: Check for .cfi_rel_offset in CFI probe").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by LLVM integrated assembler.

Remove CONFIG_AS_CFI, which is always defined.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
Reviewed-by: Nick Desaulniers <ndesau...@google.com>
---

If this series is applied,
I can hard-code the assembler code, and delete CFI_* macros entirely.


Changes in v2: None

arch/x86/Makefile | 10 ++--------
arch/x86/include/asm/dwarf2.h | 36 -----------------------------------
2 files changed, 2 insertions(+), 44 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 513a55562d75..72f8f744ebd7 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -177,12 +177,6 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
KBUILD_CFLAGS += $(call cc-option,-maccumulate-outgoing-args,)
endif

-# Stackpointer is addressed different for 32 bit and 64 bit x86
-sp-$(CONFIG_X86_32) := esp
-sp-$(CONFIG_X86_64) := rsp
-
-# do binutils support CFI?
-cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
# is .cfi_signal_frame supported too?
cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
cfi-sections := $(call as-instr,.cfi_sections .debug_frame,-DCONFIG_AS_CFI_SECTIONS=1)
@@ -196,8 +190,8 @@ sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

diff --git a/arch/x86/include/asm/dwarf2.h b/arch/x86/include/asm/dwarf2.h
index 5a0502212bc5..90807583cad7 100644
--- a/arch/x86/include/asm/dwarf2.h
+++ b/arch/x86/include/asm/dwarf2.h
@@ -6,15 +6,6 @@
#warning "asm/dwarf2.h should be only included in pure assembly files"
#endif

-/*
- * Macros for dwarf2 CFI unwind table entries.
- * See "as.info" for details on these pseudo ops. Unfortunately
- * they are only supported in very new binutils, so define them
- * away for older version.
- */
-
-#ifdef CONFIG_AS_CFI
-
#define CFI_STARTPROC .cfi_startproc
#define CFI_ENDPROC .cfi_endproc
#define CFI_DEF_CFA .cfi_def_cfa
@@ -55,31 +46,4 @@
#endif
#endif

-#else
-
-/*
- * Due to the structure of pre-exisiting code, don't use assembler line
- * comment character # to ignore the arguments. Instead, use a dummy macro.
- */
-.macro cfi_ignore a=0, b=0, c=0, d=0
-.endm
-
-#define CFI_STARTPROC cfi_ignore
-#define CFI_ENDPROC cfi_ignore
-#define CFI_DEF_CFA cfi_ignore
-#define CFI_DEF_CFA_REGISTER cfi_ignore
-#define CFI_DEF_CFA_OFFSET cfi_ignore
-#define CFI_ADJUST_CFA_OFFSET cfi_ignore
-#define CFI_OFFSET cfi_ignore
-#define CFI_REL_OFFSET cfi_ignore
-#define CFI_REGISTER cfi_ignore
-#define CFI_RESTORE cfi_ignore
-#define CFI_REMEMBER_STATE cfi_ignore
-#define CFI_RESTORE_STATE cfi_ignore
-#define CFI_UNDEFINED cfi_ignore
-#define CFI_ESCAPE cfi_ignore
-#define CFI_SIGNAL_FRAME cfi_ignore
-
-#endif
-
#endif /* _ASM_X86_DWARF2_H */
--
2.17.1

Masahiro Yamada

Mar 23, 2020, 8:14:44 PM
to x...@kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, linux-...@vger.kernel.org, linux-...@vger.kernel.org, Jason A . Donenfeld, Masahiro Yamada, Ingo Molnar, clang-bu...@googlegroups.com
We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

We need to keep these as-instr checks because binutils 2.21 does not
support them.

I hope this will be a good hint as to which ones can be dropped when we
bump the minimal binutils version next time.

As for Clang/LLVM builds, we require a very new LLVM version,
so the LLVM integrated assembler supports all of them.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---

Changes in v2: None

arch/x86/Makefile | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index f32ef7b8d5ca..4c57cb3018fb 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,10 +178,15 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
+# binutils >= 2.22
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
+# binutils >= 2.25
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
+# binutils >= 2.24
sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=1)
+# binutils >= 2.24
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
+# binutils >= 2.23
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

KBUILD_AFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
--
2.17.1

Masahiro Yamada

Mar 23, 2020, 8:14:45 PM
to x...@kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, linux-...@vger.kernel.org, linux-...@vger.kernel.org, Jason A . Donenfeld, Masahiro Yamada, David S. Miller, Herbert Xu, Ingo Molnar, clang-bu...@googlegroups.com
CONFIG_AS_AVX was introduced by commit ea4d26ae24e5 ("raid5: add AVX
optimized RAID5 checksumming").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by LLVM integrated assembler.

Remove CONFIG_AS_AVX, which is always defined.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---

Changes in v2: None

arch/x86/Makefile | 5 ++-
arch/x86/crypto/Makefile | 32 +++++++------------
arch/x86/crypto/aesni-intel_avx-x86_64.S | 3 --
arch/x86/crypto/aesni-intel_glue.c | 14 +-------
arch/x86/crypto/poly1305-x86_64-cryptogams.pl | 8 -----
arch/x86/crypto/poly1305_glue.c | 6 ++--
arch/x86/crypto/sha1_ssse3_asm.S | 4 ---
arch/x86/crypto/sha1_ssse3_glue.c | 9 +-----
arch/x86/crypto/sha256-avx-asm.S | 3 --
arch/x86/crypto/sha256_ssse3_glue.c | 8 +----
arch/x86/crypto/sha512-avx-asm.S | 2 --
arch/x86/crypto/sha512_ssse3_glue.c | 7 +---
arch/x86/include/asm/xor_avx.h | 9 ------
13 files changed, 21 insertions(+), 89 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 94f89612e024..f32ef7b8d5ca 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,15 +178,14 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
-avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=1)
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

Masahiro Yamada

Mar 23, 2020, 8:14:45 PM
to x...@kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, linux-...@vger.kernel.org, linux-...@vger.kernel.org, Jason A . Donenfeld, Masahiro Yamada, David S. Miller, Herbert Xu, Ingo Molnar, clang-bu...@googlegroups.com
CONFIG_AS_SSSE3 was introduced by commit 75aaf4c3e6a4 ("x86/raid6:
correctly check for assembler capabilities").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by LLVM integrated assembler.

Remove CONFIG_AS_SSSE3, which is always defined.

I added ifdef CONFIG_X86 to lib/raid6/algos.c to avoid link errors
on non-x86 architectures.

lib/raid6/algos.c is built not only for the kernel but also for
testing the library code from userspace. I added -DCONFIG_X86 to
lib/raid6/test/Makefile to cater to this use case.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---

Changes in v2:
- add ifdef CONFIG_X86 to fix build errors on non-x86 arches

arch/x86/Makefile | 5 ++---
arch/x86/crypto/blake2s-core.S | 2 --
lib/raid6/algos.c | 2 +-
lib/raid6/recov_ssse3.c | 6 ------
lib/raid6/test/Makefile | 4 +---
5 files changed, 4 insertions(+), 15 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index e4a062313bb0..94f89612e024 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,7 +178,6 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
-asinstr += $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1)
avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
@@ -186,8 +185,8 @@ sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

diff --git a/arch/x86/crypto/blake2s-core.S b/arch/x86/crypto/blake2s-core.S
index 24910b766bdd..2ca79974f819 100644
--- a/arch/x86/crypto/blake2s-core.S
+++ b/arch/x86/crypto/blake2s-core.S
@@ -46,7 +46,6 @@ SIGMA2:
#endif /* CONFIG_AS_AVX512 */

.text
-#ifdef CONFIG_AS_SSSE3
SYM_FUNC_START(blake2s_compress_ssse3)
testq %rdx,%rdx
je .Lendofloop
@@ -174,7 +173,6 @@ SYM_FUNC_START(blake2s_compress_ssse3)
.Lendofloop:
ret
SYM_FUNC_END(blake2s_compress_ssse3)
-#endif /* CONFIG_AS_SSSE3 */

#ifdef CONFIG_AS_AVX512
SYM_FUNC_START(blake2s_compress_avx512)
diff --git a/lib/raid6/algos.c b/lib/raid6/algos.c
index bf1b4765c8f6..df08664d3432 100644
--- a/lib/raid6/algos.c
+++ b/lib/raid6/algos.c
@@ -97,13 +97,13 @@ void (*raid6_datap_recov)(int, size_t, int, void **);
EXPORT_SYMBOL_GPL(raid6_datap_recov);

const struct raid6_recov_calls *const raid6_recov_algos[] = {
+#ifdef CONFIG_X86
#ifdef CONFIG_AS_AVX512
&raid6_recov_avx512,
#endif
#ifdef CONFIG_AS_AVX2
&raid6_recov_avx2,
#endif
-#ifdef CONFIG_AS_SSSE3
&raid6_recov_ssse3,
#endif
#ifdef CONFIG_S390
diff --git a/lib/raid6/recov_ssse3.c b/lib/raid6/recov_ssse3.c
index 1de97d2405d0..4bfa3c6b60de 100644
--- a/lib/raid6/recov_ssse3.c
+++ b/lib/raid6/recov_ssse3.c
@@ -3,8 +3,6 @@
* Copyright (C) 2012 Intel Corporation
*/

-#ifdef CONFIG_AS_SSSE3
-
#include <linux/raid/pq.h>
#include "x86.h"

@@ -328,7 +326,3 @@ const struct raid6_recov_calls raid6_recov_ssse3 = {
#endif
.priority = 1,
};
-
-#else
-#warning "your version of binutils lacks SSSE3 support"
-#endif
diff --git a/lib/raid6/test/Makefile b/lib/raid6/test/Makefile
index b9e6c3648be1..60021319ac78 100644
--- a/lib/raid6/test/Makefile
+++ b/lib/raid6/test/Makefile
@@ -34,9 +34,7 @@ endif

ifeq ($(IS_X86),yes)
OBJS += mmx.o sse1.o sse2.o avx2.o recov_ssse3.o recov_avx2.o avx512.o recov_avx512.o
- CFLAGS += $(shell echo "pshufb %xmm0, %xmm0" | \
- gcc -c -x assembler - >/dev/null 2>&1 && \
- rm ./-.o && echo -DCONFIG_AS_SSSE3=1)
+ CFLAGS += -DCONFIG_X86
CFLAGS += $(shell echo "vpbroadcastb %xmm0, %ymm1" | \
gcc -c -x assembler - >/dev/null 2>&1 && \

Jason A. Donenfeld

Mar 23, 2020, 8:29:54 PM
to Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, Linux Crypto Mailing List, LKML, David S. Miller, Herbert Xu, Ingo Molnar, Jim Kukunas, NeilBrown, Yuanhan Liu, clang-built-linux
On Mon, Mar 23, 2020 at 6:15 PM Masahiro Yamada <masa...@kernel.org> wrote:
>
>
> arch/x86/Makefile tests instruction code by $(call as-instr, ...)
>
> Some of them are very old.
> For example, the check for CONFIG_AS_CFI dates back to 2006.
>
> We raise GCC versions from time to time, and we clean old code away.
> The same policy applies to binutils.
>
> The current minimal supported version of binutils is 2.21.
>
> This is new enough to recognize the instructions in most of the
> as-instr calls.
>
> If this series looks good, how should it be merged?
> Via the x86 tree, or maybe crypto?

This series looks fine, but why is it still incomplete? That is, it's
missing your drm commit plus the 4 I layered on top for moving to a
Kconfig-based approach and accounting for the bump to binutils 2.23.
Everything is now rebased here:
https://git.zx2c4.com/linux-dev/log/?h=jd/kconfig-assembler-support

Would you be up for resubmitting those all together so we can handle
this in one go?

Jason

Masahiro Yamada

Mar 23, 2020, 8:53:17 PM
to Jason A. Donenfeld, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, Linux Crypto Mailing List, LKML, David S. Miller, Herbert Xu, Ingo Molnar, Jim Kukunas, NeilBrown, Yuanhan Liu, clang-built-linux
The drm one was independent of the others,
so I just sent it to the drm ML separately.

As for your 4 patches, I just thought you would
send a fixed version.

But folding everything into one series will clarify
the patch dependency.
OK, I will do it.
Which ML should I send it to?

Jason A. Donenfeld

Mar 23, 2020, 9:29:44 PM
to Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, Linux Crypto Mailing List, LKML, David S. Miller, Herbert Xu, Ingo Molnar, Jim Kukunas, NeilBrown, Yuanhan Liu, clang-built-linux
On Mon, Mar 23, 2020 at 6:53 PM Masahiro Yamada <masa...@kernel.org> wrote:
> The drm one was independent of the others,
> so I just sent it to drm ML separately.
> As for your 4, I just thought you would
> send a fixed version.
> But, folding everything in a series will clarify
> the patch dependency.
> OK, I will do it.

Great, thanks. The ones in that branch now are ready to go, so grab
them out of there.

> Who/which ML should I send it to?

This seems to make sense, IMHO, for x86, or just as a pull to Linus
(i.e. via the kbuild mailing list, in which case you'd send a pull from
your tree).

Jason

Sedat Dilek

Mar 24, 2020, 4:46:42 AM
to Jason A. Donenfeld, Masahiro Yamada, X86 ML, Ingo Molnar, Thomas Gleixner, Borislav Petkov, H . Peter Anvin, LKML, Allison Randal, Armijn Hemel, David S. Miller, Greg Kroah-Hartman, Herbert Xu, Ingo Molnar, Kate Stewart, Song Liu, Zhengyuan Liu, clang-built-linux, Linux Crypto Mailing List
On Mon, Mar 23, 2020 at 8:50 PM Jason A. Donenfeld <Ja...@zx2c4.com> wrote:

> > I would have expected your arch/x86/Kconfig.assembler file as
> > arch/x86/crypto/Kconfig (source include needs to be adapted in
> > arch/x86/Kconfig).
>
> CONFIG_AS_* is required for more than just the crypto.
>

OK. I was not aware of this.

> > What about a commit subject like "x86: crypto: Probe assembler options
> > via Kconfig instead of makefile"?
>
> Thanks. Will fix silly verbiage and update branch.
>

Just looked at what is new in [0].

Would you mind adding the patch

"Documentation/changes: Raise minimum supported binutils version to 2.23"

from [1] to your series, please?

For the meantime and for clarification - you can drop it later (with
a Link tag added to [1]) once it has landed in the tip Git tree [2],
where I am not seeing it yet.

Thanks for taking care, to you and Masahiro :-).

- Sedat -

[0] https://git.zx2c4.com/linux-dev/log/?h=jd/kconfig-assembler-support
[1] https://lore.kernel.org/lkml/20200316160...@zn.tnic/
[2] https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/

Masahiro Yamada

Mar 24, 2020, 4:49:37 AM
to linux-...@vger.kernel.org, David S . Miller, Linus Torvalds, Kees Cook, clang-bu...@googlegroups.com, Herbert Xu, linux-...@vger.kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, x...@kernel.org, linux-...@vger.kernel.org, Masahiro Yamada
CONFIG_AS_CFI was introduced by commit e2414910f212 ("[PATCH] x86:
Detect CFI support in the assembler at runtime"), and extended by
commit f0f12d85af85 ("x86_64: Check for .cfi_rel_offset in CFI probe").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by LLVM integrated assembler.

Remove CONFIG_AS_CFI, which is always defined.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
Reviewed-by: Nick Desaulniers <ndesau...@google.com>
---

If this series is applied,
we can hard-code the assembler code, and delete CFI_* macros entirely.


arch/x86/Makefile | 10 ++--------
arch/x86/include/asm/dwarf2.h | 36 -----------------------------------
2 files changed, 2 insertions(+), 44 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 513a55562d75..72f8f744ebd7 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -177,12 +177,6 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
KBUILD_CFLAGS += $(call cc-option,-maccumulate-outgoing-args,)
endif

-# Stackpointer is addressed different for 32 bit and 64 bit x86
-sp-$(CONFIG_X86_32) := esp
-sp-$(CONFIG_X86_64) := rsp
-
-# do binutils support CFI?
-cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
# is .cfi_signal_frame supported too?
cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
cfi-sections := $(call as-instr,.cfi_sections .debug_frame,-DCONFIG_AS_CFI_SECTIONS=1)
@@ -196,8 +190,8 @@ sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

Masahiro Yamada

Mar 24, 2020, 4:49:37 AM
to linux-...@vger.kernel.org, David S . Miller, Linus Torvalds, Kees Cook, clang-bu...@googlegroups.com, Herbert Xu, linux-...@vger.kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, x...@kernel.org, linux-...@vger.kernel.org, Masahiro Yamada
CONFIG_AS_SSSE3 was introduced by commit 75aaf4c3e6a4 ("x86/raid6:
correctly check for assembler capabilities").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by LLVM integrated assembler.

Remove CONFIG_AS_SSSE3, which is always defined.

I added ifdef CONFIG_X86 to lib/raid6/algos.c to avoid link errors
on non-x86 architectures.

lib/raid6/algos.c is built not only for the kernel but also for
testing the library code from userspace. I added -DCONFIG_X86 to
lib/raid6/test/Makefile to cater to this use case.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---

arch/x86/Makefile | 5 ++---
arch/x86/crypto/blake2s-core.S | 2 --
lib/raid6/algos.c | 2 +-
lib/raid6/recov_ssse3.c | 6 ------
lib/raid6/test/Makefile | 4 +---
5 files changed, 4 insertions(+), 15 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index e4a062313bb0..94f89612e024 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,7 +178,6 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
-asinstr += $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1)
avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
@@ -186,8 +185,8 @@ sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

Masahiro Yamada

Mar 24, 2020, 4:49:37 AM
to linux-...@vger.kernel.org, David S . Miller, Linus Torvalds, Kees Cook, clang-bu...@googlegroups.com, Herbert Xu, linux-...@vger.kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, x...@kernel.org, linux-...@vger.kernel.org, Masahiro Yamada
We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

We need to keep these as-instr checks because binutils 2.21 does not
support them.

I hope this will be a good hint as to which ones can be dropped when we
bump the minimal binutils version next time.

As for Clang/LLVM builds, we require a very new LLVM version,
so the LLVM integrated assembler supports all of them.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---

arch/x86/Makefile | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index f32ef7b8d5ca..4c57cb3018fb 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,10 +178,15 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
+# binutils >= 2.22
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
+# binutils >= 2.25
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
+# binutils >= 2.24
sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=1)
+# binutils >= 2.24
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
+# binutils >= 2.23
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

KBUILD_AFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
--
2.17.1

Masahiro Yamada

Mar 24, 2020, 4:49:38 AM
to linux-...@vger.kernel.org, David S . Miller, Linus Torvalds, Kees Cook, clang-bu...@googlegroups.com, Herbert Xu, linux-...@vger.kernel.org, Ingo Molnar, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, x...@kernel.org, linux-...@vger.kernel.org, Masahiro Yamada
CONFIG_AS_AVX was introduced by commit ea4d26ae24e5 ("raid5: add AVX
optimized RAID5 checksumming").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by LLVM integrated assembler.

Remove CONFIG_AS_AVX, which is always defined.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---

arch/x86/Makefile | 5 ++-
arch/x86/crypto/Makefile | 32 +++++++------------
arch/x86/crypto/aesni-intel_avx-x86_64.S | 3 --
arch/x86/crypto/aesni-intel_glue.c | 14 +-------
arch/x86/crypto/poly1305-x86_64-cryptogams.pl | 8 -----
arch/x86/crypto/poly1305_glue.c | 6 ++--
arch/x86/crypto/sha1_ssse3_asm.S | 4 ---
arch/x86/crypto/sha1_ssse3_glue.c | 9 +-----
arch/x86/crypto/sha256-avx-asm.S | 3 --
arch/x86/crypto/sha256_ssse3_glue.c | 8 +----
arch/x86/crypto/sha512-avx-asm.S | 2 --
arch/x86/crypto/sha512_ssse3_glue.c | 7 +---
arch/x86/include/asm/xor_avx.h | 9 ------
13 files changed, 21 insertions(+), 89 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 94f89612e024..f32ef7b8d5ca 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,15 +178,14 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
-avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=1)
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

Nick Desaulniers

Mar 24, 2020, 12:56:31 PM3/24/20
to Masahiro Yamada, LKML, David S . Miller, Linus Torvalds, Kees Cook, clang-built-linux, Herbert Xu, open list:HARDWARE RANDOM NUMBER GENERATOR CORE, Ingo Molnar, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT), Linux Kbuild mailing list
On Tue, Mar 24, 2020 at 1:49 AM Masahiro Yamada <masa...@kernel.org> wrote:
>
> CONFIG_AS_SSSE3 was introduced by commit 75aaf4c3e6a4 ("x86/raid6:
> correctly check for assembler capabilities").
>
> We raise the minimal supported binutils version from time to time.
> The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
> required binutils version to 2.21").

Looks like binutils gained SSE3 support in 2005; 2.21 was released in 2010.
Reviewed-by: Nick Desaulniers <ndesau...@google.com>
> --

--
Thanks,
~Nick Desaulniers

Masahiro Yamada

Mar 26, 2020, 4:02:21 AM3/26/20
to linux-...@vger.kernel.org, Thomas Gleixner, Nick Desaulniers, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, x...@kernel.org, Jason A . Donenfeld, clang-bu...@googlegroups.com, Masahiro Yamada, Ingo Molnar, linux-...@vger.kernel.org
We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

We need to keep these as-instr checks because binutils 2.21 does not
support them.

I hope this will be a good hint about which ones can be dropped when
we bump the minimal binutils version next time.

As for Clang/LLVM builds, we require a very new LLVM version, so the
LLVM integrated assembler supports all of them.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---

Changes in v2:
- Change the patch order and rebase

arch/x86/Kconfig.assembler | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/arch/x86/Kconfig.assembler b/arch/x86/Kconfig.assembler
index 91230bf11a14..a5a1d2766b3a 100644
--- a/arch/x86/Kconfig.assembler
+++ b/arch/x86/Kconfig.assembler
@@ -3,15 +3,25 @@

config AS_AVX2
def_bool $(as-instr,vpbroadcastb %xmm0$(comma)%ymm1)
+ help
+ Supported by binutils >= 2.22 and LLVM integrated assembler

config AS_AVX512
def_bool $(as-instr,vpmovm2b %k1$(comma)%zmm5)
+ help
+ Supported by binutils >= 2.25 and LLVM integrated assembler

config AS_SHA1_NI
def_bool $(as-instr,sha1msg1 %xmm0$(comma)%xmm1)
+ help
+ Supported by binutils >= 2.24 and LLVM integrated assembler

config AS_SHA256_NI
def_bool $(as-instr,sha256msg1 %xmm0$(comma)%xmm1)
+ help
+ Supported by binutils >= 2.24 and LLVM integrated assembler

config AS_ADX
def_bool $(as-instr,adox %eax$(comma)%eax)
+ help
+ Supported by binutils >= 2.23 and LLVM integrated assembler
--
2.17.1

Masahiro Yamada

Mar 26, 2020, 4:02:21 AM3/26/20
to linux-...@vger.kernel.org, Thomas Gleixner, Nick Desaulniers, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, x...@kernel.org, Jason A . Donenfeld, clang-bu...@googlegroups.com, Masahiro Yamada, David S. Miller, Herbert Xu, Ingo Molnar, linux-...@vger.kernel.org, linux-...@vger.kernel.org
CONFIG_AS_AVX was introduced by commit ea4d26ae24e5 ("raid5: add AVX
optimized RAID5 checksumming").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by the LLVM integrated assembler.

Remove CONFIG_AS_AVX, which is always defined.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---

Changes in v2: None

arch/x86/Makefile | 5 ++-
arch/x86/crypto/Makefile | 32 +++++++------------
arch/x86/crypto/aesni-intel_avx-x86_64.S | 3 --
arch/x86/crypto/aesni-intel_glue.c | 14 +-------
arch/x86/crypto/poly1305-x86_64-cryptogams.pl | 8 -----
arch/x86/crypto/poly1305_glue.c | 6 ++--
arch/x86/crypto/sha1_ssse3_asm.S | 4 ---
arch/x86/crypto/sha1_ssse3_glue.c | 9 +-----
arch/x86/crypto/sha256-avx-asm.S | 3 --
arch/x86/crypto/sha256_ssse3_glue.c | 8 +----
arch/x86/crypto/sha512-avx-asm.S | 2 --
arch/x86/crypto/sha512_ssse3_glue.c | 7 +---
arch/x86/include/asm/xor_avx.h | 9 ------
13 files changed, 21 insertions(+), 89 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 94f89612e024..f32ef7b8d5ca 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,15 +178,14 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
-avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=1)
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

Masahiro Yamada

Mar 26, 2020, 4:02:21 AM3/26/20
to linux-...@vger.kernel.org, Thomas Gleixner, Nick Desaulniers, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, x...@kernel.org, Jason A . Donenfeld, clang-bu...@googlegroups.com, Masahiro Yamada, David S. Miller, Herbert Xu, Ingo Molnar, linux-...@vger.kernel.org, linux-...@vger.kernel.org
CONFIG_AS_SSSE3 was introduced by commit 75aaf4c3e6a4 ("x86/raid6:
correctly check for assembler capabilities").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by the LLVM integrated assembler.

Remove CONFIG_AS_SSSE3, which is always defined.

I added an #ifdef CONFIG_X86 guard to lib/raid6/algos.c to avoid link
errors on non-x86 architectures.

lib/raid6/algos.c is built not only for the kernel but also for
testing the library code from userspace. I added -DCONFIG_X86 to
lib/raid6/test/Makefile to cater to this use case.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
Reviewed-by: Nick Desaulniers <ndesau...@google.com>
---

Changes in v2: None

arch/x86/Makefile | 5 ++---
arch/x86/crypto/blake2s-core.S | 2 --
lib/raid6/algos.c | 2 +-
lib/raid6/recov_ssse3.c | 6 ------
lib/raid6/test/Makefile | 4 +---
5 files changed, 4 insertions(+), 15 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index e4a062313bb0..94f89612e024 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -178,7 +178,6 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
endif

# does binutils support specific instructions?
-asinstr += $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1)
avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
avx512_instr :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,-DCONFIG_AS_AVX512=1)
@@ -186,8 +185,8 @@ sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

Masahiro Yamada

Mar 26, 2020, 4:02:21 AM3/26/20
to linux-...@vger.kernel.org, Thomas Gleixner, Nick Desaulniers, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, x...@kernel.org, Jason A . Donenfeld, clang-bu...@googlegroups.com, Masahiro Yamada, Ingo Molnar, linux-...@vger.kernel.org
CONFIG_AS_CFI was introduced by commit e2414910f212 ("[PATCH] x86:
Detect CFI support in the assembler at runtime"), and extended by
commit f0f12d85af85 ("x86_64: Check for .cfi_rel_offset in CFI probe").

We raise the minimal supported binutils version from time to time.
The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
required binutils version to 2.21").

I confirmed the code in $(call as-instr,...) can be assembled by the
binutils 2.21 assembler and also by the LLVM integrated assembler.

Remove CONFIG_AS_CFI, which is always defined.

Signed-off-by: Masahiro Yamada <masa...@kernel.org>
Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>
Reviewed-by: Nick Desaulniers <ndesau...@google.com>
---

If this series is applied, I can hard-code the assembler directives
and delete the CFI_* macros entirely.


Changes in v2: None

arch/x86/Makefile | 10 ++--------
arch/x86/include/asm/dwarf2.h | 36 -----------------------------------
2 files changed, 2 insertions(+), 44 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 513a55562d75..72f8f744ebd7 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -177,12 +177,6 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
KBUILD_CFLAGS += $(call cc-option,-maccumulate-outgoing-args,)
endif

-# Stackpointer is addressed different for 32 bit and 64 bit x86
-sp-$(CONFIG_X86_32) := esp
-sp-$(CONFIG_X86_64) := rsp
-
-# do binutils support CFI?
-cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
# is .cfi_signal_frame supported too?
cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
cfi-sections := $(call as-instr,.cfi_sections .debug_frame,-DCONFIG_AS_CFI_SECTIONS=1)
@@ -196,8 +190,8 @@ sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=
sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)
adx_instr := $(call as-instr,adox %r10$(comma)%r10,-DCONFIG_AS_ADX=1)

-KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
-KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_AFLAGS += $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)
+KBUILD_CFLAGS += $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(avx512_instr) $(sha1_ni_instr) $(sha256_ni_instr) $(adx_instr)

KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)

Nick Desaulniers

Mar 26, 2020, 1:50:31 PM3/26/20
to Masahiro Yamada, Linux Kbuild mailing list, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, H . Peter Anvin, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT), Jason A . Donenfeld, clang-built-linux, Ingo Molnar, LKML
On Thu, Mar 26, 2020 at 1:02 AM Masahiro Yamada <masa...@kernel.org> wrote:
>
> We raise the minimal supported binutils version from time to time.
> The last bump was commit 1fb12b35e5ff ("kbuild: Raise the minimum
> required binutils version to 2.21").
>
> We need to keep these as-instr checks because binutils 2.21 does not
> support them.
>
> I hope this will be a good hint about which ones can be dropped when
> we bump the minimal binutils version next time.
>
> As for Clang/LLVM builds, we require a very new LLVM version, so the
> LLVM integrated assembler supports all of them.
>
> Signed-off-by: Masahiro Yamada <masa...@kernel.org>
> Acked-by: Jason A. Donenfeld <Ja...@zx2c4.com>

Acked-by: Nick Desaulniers <ndesau...@google.com>
--
Thanks,
~Nick Desaulniers