[go] cmd/compile: on AMD64 add an already-zero argument for SETcc to write into


Jorropo (Gerrit)

Dec 20, 2025, 7:28:30 AM
to goph...@pubsubhelper.golang.org, golang-co...@googlegroups.com

Jorropo has uploaded the change for review

Commit message

cmd/compile: on AMD64 add an already-zero argument for SETcc to write into

Fixes #76066

goos: linux
goarch: amd64
pkg: crypto/subtle
cpu: AMD Ryzen 5 3600 6-Core Processor
│ /tmp/old.logs │ /tmp/new.logs │
│ sec/op │ sec/op vs base │
ConstantTimeSelect-12 0.5172n ± 1% 0.5162n ± 1% ~ (p=0.343 n=10)
ConstantTimeByteEq-12 0.5171n ± 1% 0.5150n ± 0% -0.41% (p=0.007 n=10)
ConstantTimeEq-12 0.7739n ± 0% 0.5157n ± 0% -33.36% (p=0.000 n=10)
ConstantTimeLessOrEq-12 0.7734n ± 1% 0.5166n ± 1% -33.21% (p=0.000 n=10)
geomean 0.6326n 0.5159n -18.45%

This adds an always-zero argument to the SSA SETcc ops.
In AMD64.rules we inject it as a const 0.
The ops are configured with resultInArg0, which lets us leverage
regalloc & flagalloc to set up a zero register for SETcc to write into.
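
As an illustrative sketch (not part of the CL; the function name is made
up), this is the kind of Go source that exercises the lowering. A
comparison whose boolean result is materialized in a register becomes
CMPQ + SETEQ on amd64:

    package demo

    // eq returns whether a == b. On amd64 the materialized boolean
    // lowers to CMPQ + SETEQ; with this CL the SETEQ op carries an
    // always-zero first argument, so regalloc hands it a pre-zeroed
    // destination register to write into.
    func eq(a, b uint64) bool {
        return a == b
    }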

I only implemented the integer variants: the float SETccF ops have a
different pattern, so I couldn't mechanically handle them the same way.

Another CL could handle the float SETs if someone feels like doing them.

This is a performance boost in two ways:
1. Sub-32-bit writes have a dependency on the previous instruction
   that touched the register.
   This is a consequence of register result merging and of dispatchers
   not taking the register width of an instruction into account.
2. If we need to use the result of SETcc in an operation wider than
   8 bits, we can skip extending the result since it's already 64 bits.
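
A sketch of point 2 (again illustrative, not from the CL): widening the
byte-sized SETcc result used to require a MOVBQZX; with a destination
register that is known to be zero, the extension is redundant, which is
what the MOVBQZX/MOVBQSX rules added below exploit:

    package demo

    // eqMask returns 1 if a == b, else 0. Widening the SETEQ result
    // to uint64 no longer needs a separate zero-extension instruction:
    // the destination register was zeroed before SETEQ wrote its low
    // byte, so the upper 56 bits are already zero.
    func eqMask(a, b uint64) uint64 {
        if a == b {
            return 1
        }
        return 0
    }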

The generated code looks funky; other compilers handle this with:
XORL AX, AX
CMPQ BX, CX
SETNE AX

But we will do:
CMPQ BX, CX
MOVL $0, AX
SETNE AX

This incurs an encoding overhead (MOVL $0, AX is 5 bytes versus 2 for
XORL AX, AX) but executes just the same.

This is because we translate MOVLconst [0] & MOVQconst [0] → XORL
extremely late (while generating assembly).

This is way past flagalloc, regalloc & schedule, and as far as they
(currently) know, the second construction is more optimal because it
incurs no register-pressure overhead, unlike the first one.

But then, when cmd/compile/internal/amd64 tries to change MOVL $0, AX
into XORL AX, AX, it cannot, since the flags are live at the MOVL and
XORL would clobber them.

That is fine: this CL is better than what we were doing before in both
latency and throughput, at the cost of 3 more encoded bytes.

Change-Id: Ia53052aaa04a7613cad453a5a59109267685cda8

Change diff

diff --git a/src/cmd/compile/internal/ssa/_gen/AMD64.rules b/src/cmd/compile/internal/ssa/_gen/AMD64.rules
index 38ca44f..e32a3ff 100644
--- a/src/cmd/compile/internal/ssa/_gen/AMD64.rules
+++ b/src/cmd/compile/internal/ssa/_gen/AMD64.rules
@@ -16,7 +16,7 @@

(Select0 (Mul64uover x y)) => (Select0 <typ.UInt64> (MULQU x y))
(Select0 (Mul32uover x y)) => (Select0 <typ.UInt32> (MULLU x y))
-(Select1 (Mul(64|32)uover x y)) => (SETO (Select1 <types.TypeFlags> (MUL(Q|L)U x y)))
+(Select1 (Mul(64|32)uover x y)) => (SETO <typ.UInt64> (Const64 <typ.UInt64> [0]) (Select1 <types.TypeFlags> (MUL(Q|L)U x y)))

(Hmul(64|32) ...) => (HMUL(Q|L) ...)
(Hmul(64|32)u ...) => (HMUL(Q|L)U ...)
@@ -233,12 +233,12 @@
(Rsh8x(64|32|16|8) x y) && shiftIsBounded(v) => (SARB x y)

// Lowering integer comparisons
-(Less(64|32|16|8) x y) => (SETL (CMP(Q|L|W|B) x y))
-(Less(64|32|16|8)U x y) => (SETB (CMP(Q|L|W|B) x y))
-(Leq(64|32|16|8) x y) => (SETLE (CMP(Q|L|W|B) x y))
-(Leq(64|32|16|8)U x y) => (SETBE (CMP(Q|L|W|B) x y))
-(Eq(Ptr|64|32|16|8|B) x y) => (SETEQ (CMP(Q|Q|L|W|B|B) x y))
-(Neq(Ptr|64|32|16|8|B) x y) => (SETNE (CMP(Q|Q|L|W|B|B) x y))
+(Less(64|32|16|8) x y) => (SETL <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMP(Q|L|W|B) x y))
+(Less(64|32|16|8)U x y) => (SETB <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMP(Q|L|W|B) x y))
+(Leq(64|32|16|8) x y) => (SETLE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMP(Q|L|W|B) x y))
+(Leq(64|32|16|8)U x y) => (SETBE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMP(Q|L|W|B) x y))
+(Eq(Ptr|64|32|16|8|B) x y) => (SETEQ <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMP(Q|Q|L|W|B|B) x y))
+(Neq(Ptr|64|32|16|8|B) x y) => (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMP(Q|Q|L|W|B|B) x y))

// Lowering floating point comparisons
// Note Go assembler gets UCOMISx operand order wrong, but it is right here
@@ -390,12 +390,18 @@
// If the condition is a SETxx, we can just run a CMOV from the comparison that was
// setting the flags.
// Legend: HI=unsigned ABOVE, CS=unsigned BELOW, CC=unsigned ABOVE EQUAL, LS=unsigned BELOW EQUAL
-(CondSelect <t> x y (SET(EQ|NE|L|G|LE|GE|A|B|AE|BE|EQF|NEF|GF|GEF) cond)) && (is64BitInt(t) || isPtr(t))
- => (CMOVQ(EQ|NE|LT|GT|LE|GE|HI|CS|CC|LS|EQF|NEF|GTF|GEF) y x cond)
-(CondSelect <t> x y (SET(EQ|NE|L|G|LE|GE|A|B|AE|BE|EQF|NEF|GF|GEF) cond)) && is32BitInt(t)
- => (CMOVL(EQ|NE|LT|GT|LE|GE|HI|CS|CC|LS|EQF|NEF|GTF|GEF) y x cond)
-(CondSelect <t> x y (SET(EQ|NE|L|G|LE|GE|A|B|AE|BE|EQF|NEF|GF|GEF) cond)) && is16BitInt(t)
- => (CMOVW(EQ|NE|LT|GT|LE|GE|HI|CS|CC|LS|EQF|NEF|GTF|GEF) y x cond)
+(CondSelect <t> x y (SET(EQ|NE|L|G|LE|GE|A|B|AE|BE) _ cond)) && (is64BitInt(t) || isPtr(t))
+ => (CMOVQ(EQ|NE|LT|GT|LE|GE|HI|CS|CC|LS) y x cond)
+(CondSelect <t> x y (SET(EQF|NEF|GF|GEF) cond)) && (is64BitInt(t) || isPtr(t))
+ => (CMOVQ(EQF|NEF|GTF|GEF) y x cond)
+(CondSelect <t> x y (SET(EQ|NE|L|G|LE|GE|A|B|AE|BE) _ cond)) && is32BitInt(t)
+ => (CMOVL(EQ|NE|LT|GT|LE|GE|HI|CS|CC|LS) y x cond)
+(CondSelect <t> x y (SET(EQF|NEF|GF|GEF) cond)) && is32BitInt(t)
+ => (CMOVL(EQF|NEF|GTF|GEF) y x cond)
+(CondSelect <t> x y (SET(EQ|NE|L|G|LE|GE|A|B|AE|BE) _ cond)) && is16BitInt(t)
+ => (CMOVW(EQ|NE|LT|GT|LE|GE|HI|CS|CC|LS) y x cond)
+(CondSelect <t> x y (SET(EQF|NEF|GF|GEF) cond)) && is16BitInt(t)
+ => (CMOVW(EQF|NEF|GTF|GEF) y x cond)

(CondSelect <t> x y check) && !check.Type.IsFlags() && check.Type.Size() == 8 && (is64BitInt(t) || isPtr(t))
=> (CMOVQNE y x (CMPQconst [0] check))
@@ -443,43 +449,43 @@
(CMOV(QEQ|QGT|QGE|QCS|QLS|LEQ|LGT|LGE|LCS|LLS|WEQ|WGT|WGE|WCS|WLS) y _ (FlagLT_UGT)) => y

// Miscellaneous
-(IsNonNil p) => (SETNE (TESTQ p p))
-(IsInBounds idx len) => (SETB (CMPQ idx len))
-(IsSliceInBounds idx len) => (SETBE (CMPQ idx len))
+(IsNonNil p) => (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (TESTQ p p))
+(IsInBounds idx len) => (SETB <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ idx len))
+(IsSliceInBounds idx len) => (SETBE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ idx len))
(NilCheck ...) => (LoweredNilCheck ...)
(GetG mem) && v.Block.Func.OwnAux.Fn.ABI() != obj.ABIInternal => (LoweredGetG mem) // only lower in old ABI. in new ABI we have a G register.
(GetClosurePtr ...) => (LoweredGetClosurePtr ...)
(GetCallerPC ...) => (LoweredGetCallerPC ...)
(GetCallerSP ...) => (LoweredGetCallerSP ...)

-(HasCPUFeature {s}) => (SETNE (CMPLconst [0] (LoweredHasCPUFeature {s})))
+(HasCPUFeature {s}) => (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPLconst [0] (LoweredHasCPUFeature {s})))
(Addr {sym} base) => (LEAQ {sym} base)
(LocalAddr <t> {sym} base mem) && t.Elem().HasPointers() => (LEAQ {sym} (SPanchored base mem))
(LocalAddr <t> {sym} base _) && !t.Elem().HasPointers() => (LEAQ {sym} base)

-(MOVBstore [off] {sym} ptr y:(SETL x) mem) && y.Uses == 1 => (SETLstore [off] {sym} ptr x mem)
-(MOVBstore [off] {sym} ptr y:(SETLE x) mem) && y.Uses == 1 => (SETLEstore [off] {sym} ptr x mem)
-(MOVBstore [off] {sym} ptr y:(SETG x) mem) && y.Uses == 1 => (SETGstore [off] {sym} ptr x mem)
-(MOVBstore [off] {sym} ptr y:(SETGE x) mem) && y.Uses == 1 => (SETGEstore [off] {sym} ptr x mem)
-(MOVBstore [off] {sym} ptr y:(SETEQ x) mem) && y.Uses == 1 => (SETEQstore [off] {sym} ptr x mem)
-(MOVBstore [off] {sym} ptr y:(SETNE x) mem) && y.Uses == 1 => (SETNEstore [off] {sym} ptr x mem)
-(MOVBstore [off] {sym} ptr y:(SETB x) mem) && y.Uses == 1 => (SETBstore [off] {sym} ptr x mem)
-(MOVBstore [off] {sym} ptr y:(SETBE x) mem) && y.Uses == 1 => (SETBEstore [off] {sym} ptr x mem)
-(MOVBstore [off] {sym} ptr y:(SETA x) mem) && y.Uses == 1 => (SETAstore [off] {sym} ptr x mem)
-(MOVBstore [off] {sym} ptr y:(SETAE x) mem) && y.Uses == 1 => (SETAEstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETL _ x) mem) && y.Uses == 1 => (SETLstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETLE _ x) mem) && y.Uses == 1 => (SETLEstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETG _ x) mem) && y.Uses == 1 => (SETGstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETGE _ x) mem) && y.Uses == 1 => (SETGEstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETEQ _ x) mem) && y.Uses == 1 => (SETEQstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETNE _ x) mem) && y.Uses == 1 => (SETNEstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETB _ x) mem) && y.Uses == 1 => (SETBstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETBE _ x) mem) && y.Uses == 1 => (SETBEstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETA _ x) mem) && y.Uses == 1 => (SETAstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr y:(SETAE _ x) mem) && y.Uses == 1 => (SETAEstore [off] {sym} ptr x mem)

// block rewrites
-(If (SETL cmp) yes no) => (LT cmp yes no)
-(If (SETLE cmp) yes no) => (LE cmp yes no)
-(If (SETG cmp) yes no) => (GT cmp yes no)
-(If (SETGE cmp) yes no) => (GE cmp yes no)
-(If (SETEQ cmp) yes no) => (EQ cmp yes no)
-(If (SETNE cmp) yes no) => (NE cmp yes no)
-(If (SETB cmp) yes no) => (ULT cmp yes no)
-(If (SETBE cmp) yes no) => (ULE cmp yes no)
-(If (SETA cmp) yes no) => (UGT cmp yes no)
-(If (SETAE cmp) yes no) => (UGE cmp yes no)
-(If (SETO cmp) yes no) => (OS cmp yes no)
+(If (SETL _ cmp) yes no) => (LT cmp yes no)
+(If (SETLE _ cmp) yes no) => (LE cmp yes no)
+(If (SETG _ cmp) yes no) => (GT cmp yes no)
+(If (SETGE _ cmp) yes no) => (GE cmp yes no)
+(If (SETEQ _ cmp) yes no) => (EQ cmp yes no)
+(If (SETNE _ cmp) yes no) => (NE cmp yes no)
+(If (SETB _ cmp) yes no) => (ULT cmp yes no)
+(If (SETBE _ cmp) yes no) => (ULE cmp yes no)
+(If (SETA _ cmp) yes no) => (UGT cmp yes no)
+(If (SETAE _ cmp) yes no) => (UGE cmp yes no)
+(If (SETO _ cmp) yes no) => (OS cmp yes no)

// Special case for floating point - LF/LEF not generated
(If (SETGF cmp) yes no) => (UGT cmp yes no)
@@ -552,37 +558,37 @@
// TODO: Should the optimizations be a separate pass?

// Fold boolean tests into blocks
-(NE (TESTB (SETL cmp) (SETL cmp)) yes no) => (LT cmp yes no)
-(NE (TESTB (SETLE cmp) (SETLE cmp)) yes no) => (LE cmp yes no)
-(NE (TESTB (SETG cmp) (SETG cmp)) yes no) => (GT cmp yes no)
-(NE (TESTB (SETGE cmp) (SETGE cmp)) yes no) => (GE cmp yes no)
-(NE (TESTB (SETEQ cmp) (SETEQ cmp)) yes no) => (EQ cmp yes no)
-(NE (TESTB (SETNE cmp) (SETNE cmp)) yes no) => (NE cmp yes no)
-(NE (TESTB (SETB cmp) (SETB cmp)) yes no) => (ULT cmp yes no)
-(NE (TESTB (SETBE cmp) (SETBE cmp)) yes no) => (ULE cmp yes no)
-(NE (TESTB (SETA cmp) (SETA cmp)) yes no) => (UGT cmp yes no)
-(NE (TESTB (SETAE cmp) (SETAE cmp)) yes no) => (UGE cmp yes no)
-(NE (TESTB (SETO cmp) (SETO cmp)) yes no) => (OS cmp yes no)
+(NE (TESTB (SETL _ cmp) (SETL _ cmp)) yes no) => (LT cmp yes no)
+(NE (TESTB (SETLE _ cmp) (SETLE _ cmp)) yes no) => (LE cmp yes no)
+(NE (TESTB (SETG _ cmp) (SETG _ cmp)) yes no) => (GT cmp yes no)
+(NE (TESTB (SETGE _ cmp) (SETGE _ cmp)) yes no) => (GE cmp yes no)
+(NE (TESTB (SETEQ _ cmp) (SETEQ _ cmp)) yes no) => (EQ cmp yes no)
+(NE (TESTB (SETNE _ cmp) (SETNE _ cmp)) yes no) => (NE cmp yes no)
+(NE (TESTB (SETB _ cmp) (SETB _ cmp)) yes no) => (ULT cmp yes no)
+(NE (TESTB (SETBE _ cmp) (SETBE _ cmp)) yes no) => (ULE cmp yes no)
+(NE (TESTB (SETA _ cmp) (SETA _ cmp)) yes no) => (UGT cmp yes no)
+(NE (TESTB (SETAE _ cmp) (SETAE _ cmp)) yes no) => (UGE cmp yes no)
+(NE (TESTB (SETO _ cmp) (SETO _ cmp)) yes no) => (OS cmp yes no)

// Unsigned comparisons to 0/1
(ULT (TEST(Q|L|W|B) x x) yes no) => (First no yes)
(UGE (TEST(Q|L|W|B) x x) yes no) => (First yes no)
-(SETB (TEST(Q|L|W|B) x x)) => (ConstBool [false])
-(SETAE (TEST(Q|L|W|B) x x)) => (ConstBool [true])
+(SETB _ (TEST(Q|L|W|B) x x)) => (ConstBool [false])
+(SETAE _ (TEST(Q|L|W|B) x x)) => (ConstBool [true])

// x & 1 != 0 -> x & 1
-(SETNE (TEST(B|W)const [1] x)) => (AND(L|L)const [1] x)
-(SETB (BT(L|Q)const [0] x)) => (AND(L|Q)const [1] x)
+(SETNE _ (TEST(B|W)const [1] x)) => (AND(L|L)const [1] x)
+(SETB _ (BT(L|Q)const [0] x)) => (AND(L|Q)const [1] x)
// x & 1 == 0 -> (x & 1) ^ 1
-(SETAE (BT(L|Q)const [0] x)) => (XORLconst [1] (ANDLconst <typ.Bool> [1] x))
+(SETAE _ (BT(L|Q)const [0] x)) => (XORLconst [1] (ANDLconst <typ.Bool> [1] x))

// Shorten compare by rewriting x < 128 as x <= 127, which can be encoded in a single-byte immediate on x86.
-(SETL c:(CMP(Q|L)const [128] x)) && c.Uses == 1 => (SETLE (CMP(Q|L)const [127] x))
-(SETB c:(CMP(Q|L)const [128] x)) && c.Uses == 1 => (SETBE (CMP(Q|L)const [127] x))
+(SETL zero c:(CMP(Q|L)const [128] x)) && c.Uses == 1 => (SETLE zero (CMP(Q|L)const [127] x))
+(SETB zero c:(CMP(Q|L)const [128] x)) && c.Uses == 1 => (SETBE zero (CMP(Q|L)const [127] x))

// x >= 128 -> x > 127
-(SETGE c:(CMP(Q|L)const [128] x)) && c.Uses == 1 => (SETG (CMP(Q|L)const [127] x))
-(SETAE c:(CMP(Q|L)const [128] x)) && c.Uses == 1 => (SETA (CMP(Q|L)const [127] x))
+(SETGE zero c:(CMP(Q|L)const [128] x)) && c.Uses == 1 => (SETG zero (CMP(Q|L)const [127] x))
+(SETAE zero c:(CMP(Q|L)const [128] x)) && c.Uses == 1 => (SETA zero (CMP(Q|L)const [127] x))

(CMOVQLT x y c:(CMP(Q|L)const [128] z)) && c.Uses == 1 => (CMOVQLE x y (CMP(Q|L)const [127] z))
(CMOVLLT x y c:(CMP(Q|L)const [128] z)) && c.Uses == 1 => (CMOVLLE x y (CMP(Q|L)const [127] z))
@@ -604,14 +610,14 @@
=> ((ULT|UGE) (BTQconst [int8(log32u(uint32(c)))] x))
((NE|EQ) (TESTQ (MOVQconst [c]) x)) && isUnsignedPowerOfTwo(uint64(c))
=> ((ULT|UGE) (BTQconst [int8(log64u(uint64(c)))] x))
-(SET(NE|EQ) (TESTL (SHLL (MOVLconst [1]) x) y)) => (SET(B|AE) (BTL x y))
-(SET(NE|EQ) (TESTQ (SHLQ (MOVQconst [1]) x) y)) => (SET(B|AE) (BTQ x y))
-(SET(NE|EQ) (TESTLconst [c] x)) && isUnsignedPowerOfTwo(uint32(c))
- => (SET(B|AE) (BTLconst [int8(log32u(uint32(c)))] x))
-(SET(NE|EQ) (TESTQconst [c] x)) && isUnsignedPowerOfTwo(uint64(c))
- => (SET(B|AE) (BTQconst [int8(log32u(uint32(c)))] x))
-(SET(NE|EQ) (TESTQ (MOVQconst [c]) x)) && isUnsignedPowerOfTwo(uint64(c))
- => (SET(B|AE) (BTQconst [int8(log64u(uint64(c)))] x))
+(SET(NE|EQ) zero (TESTL (SHLL (MOVLconst [1]) x) y)) => (SET(B|AE) zero (BTL x y))
+(SET(NE|EQ) zero (TESTQ (SHLQ (MOVQconst [1]) x) y)) => (SET(B|AE) zero (BTQ x y))
+(SET(NE|EQ) zero (TESTLconst [c] x)) && isUnsignedPowerOfTwo(uint32(c))
+ => (SET(B|AE) zero (BTLconst [int8(log32u(uint32(c)))] x))
+(SET(NE|EQ) zero (TESTQconst [c] x)) && isUnsignedPowerOfTwo(uint64(c))
+ => (SET(B|AE) zero (BTQconst [int8(log32u(uint32(c)))] x))
+(SET(NE|EQ) zero (TESTQ (MOVQconst [c]) x)) && isUnsignedPowerOfTwo(uint64(c))
+ => (SET(B|AE) zero (BTQconst [int8(log64u(uint64(c)))] x))
// SET..store variant
(SET(NE|EQ)store [off] {sym} ptr (TESTL (SHLL (MOVLconst [1]) x) y) mem)
=> (SET(B|AE)store [off] {sym} ptr (BTL x y) mem)
@@ -637,9 +643,9 @@

// Rewrite a & 1 != 1 into a & 1 == 0.
// Among other things, this lets us turn (a>>b)&1 != 1 into a bit test.
-(SET(NE|EQ) (CMPLconst [1] s:(ANDLconst [1] _))) => (SET(EQ|NE) (CMPLconst [0] s))
+(SET(NE|EQ) zero (CMPLconst [1] s:(ANDLconst [1] _))) => (SET(EQ|NE) zero (CMPLconst [0] s))
(SET(NE|EQ)store [off] {sym} ptr (CMPLconst [1] s:(ANDLconst [1] _)) mem) => (SET(EQ|NE)store [off] {sym} ptr (CMPLconst [0] s) mem)
-(SET(NE|EQ) (CMPQconst [1] s:(ANDQconst [1] _))) => (SET(EQ|NE) (CMPQconst [0] s))
+(SET(NE|EQ) zero (CMPQconst [1] s:(ANDQconst [1] _))) => (SET(EQ|NE) zero (CMPQconst [0] s))
(SET(NE|EQ)store [off] {sym} ptr (CMPQconst [1] s:(ANDQconst [1] _)) mem) => (SET(EQ|NE)store [off] {sym} ptr (CMPQconst [0] s) mem)

// Recognize bit setting (a |= 1<<b) and toggling (a ^= 1<<b)
@@ -675,29 +681,41 @@
=> (BTRQconst [63] x)

// Special case testing first/last bit (with double-shift generated by generic.rules)
-((SETNE|SETEQ|NE|EQ) (TESTQ z1:(SHLQconst [63] (SHRQconst [63] x)) z2)) && z1==z2
- => ((SETB|SETAE|ULT|UGE) (BTQconst [63] x))
-((SETNE|SETEQ|NE|EQ) (TESTL z1:(SHLLconst [31] (SHRQconst [31] x)) z2)) && z1==z2
- => ((SETB|SETAE|ULT|UGE) (BTQconst [31] x))
+((SETNE|SETEQ) zero (TESTQ z1:(SHLQconst [63] (SHRQconst [63] x)) z2)) && z1==z2
+ => ((SETB|SETAE) zero (BTQconst [63] x))
+((SETNE|SETEQ) zero (TESTL z1:(SHLLconst [31] (SHRQconst [31] x)) z2)) && z1==z2
+ => ((SETB|SETAE) zero (BTQconst [31] x))
+((NE|EQ) (TESTQ z1:(SHLQconst [63] (SHRQconst [63] x)) z2)) && z1==z2
+ => ((ULT|UGE) (BTQconst [63] x))
+((NE|EQ) (TESTL z1:(SHLLconst [31] (SHRQconst [31] x)) z2)) && z1==z2
+ => ((ULT|UGE) (BTQconst [31] x))
(SET(NE|EQ)store [off] {sym} ptr (TESTQ z1:(SHLQconst [63] (SHRQconst [63] x)) z2) mem) && z1==z2
=> (SET(B|AE)store [off] {sym} ptr (BTQconst [63] x) mem)
(SET(NE|EQ)store [off] {sym} ptr (TESTL z1:(SHLLconst [31] (SHRLconst [31] x)) z2) mem) && z1==z2
=> (SET(B|AE)store [off] {sym} ptr (BTLconst [31] x) mem)

-((SETNE|SETEQ|NE|EQ) (TESTQ z1:(SHRQconst [63] (SHLQconst [63] x)) z2)) && z1==z2
- => ((SETB|SETAE|ULT|UGE) (BTQconst [0] x))
-((SETNE|SETEQ|NE|EQ) (TESTL z1:(SHRLconst [31] (SHLLconst [31] x)) z2)) && z1==z2
- => ((SETB|SETAE|ULT|UGE) (BTLconst [0] x))
+((SETNE|SETEQ) zero (TESTQ z1:(SHRQconst [63] (SHLQconst [63] x)) z2)) && z1==z2
+ => ((SETB|SETAE) zero (BTQconst [0] x))
+((SETNE|SETEQ) zero (TESTL z1:(SHRLconst [31] (SHLLconst [31] x)) z2)) && z1==z2
+ => ((SETB|SETAE) zero (BTLconst [0] x))
+((NE|EQ) (TESTQ z1:(SHRQconst [63] (SHLQconst [63] x)) z2)) && z1==z2
+ => ((ULT|UGE) (BTQconst [0] x))
+((NE|EQ) (TESTL z1:(SHRLconst [31] (SHLLconst [31] x)) z2)) && z1==z2
+ => ((ULT|UGE) (BTLconst [0] x))
(SET(NE|EQ)store [off] {sym} ptr (TESTQ z1:(SHRQconst [63] (SHLQconst [63] x)) z2) mem) && z1==z2
=> (SET(B|AE)store [off] {sym} ptr (BTQconst [0] x) mem)
(SET(NE|EQ)store [off] {sym} ptr (TESTL z1:(SHRLconst [31] (SHLLconst [31] x)) z2) mem) && z1==z2
=> (SET(B|AE)store [off] {sym} ptr (BTLconst [0] x) mem)

// Special-case manually testing last bit with "a>>63 != 0" (without "&1")
-((SETNE|SETEQ|NE|EQ) (TESTQ z1:(SHRQconst [63] x) z2)) && z1==z2
- => ((SETB|SETAE|ULT|UGE) (BTQconst [63] x))
-((SETNE|SETEQ|NE|EQ) (TESTL z1:(SHRLconst [31] x) z2)) && z1==z2
- => ((SETB|SETAE|ULT|UGE) (BTLconst [31] x))
+((SETNE|SETEQ) zero (TESTQ z1:(SHRQconst [63] x) z2)) && z1==z2
+ => ((SETB|SETAE) zero (BTQconst [63] x))
+((SETNE|SETEQ) zero (TESTL z1:(SHRLconst [31] x) z2)) && z1==z2
+ => ((SETB|SETAE) zero (BTLconst [31] x))
+((NE|EQ) (TESTQ z1:(SHRQconst [63] x) z2)) && z1==z2
+ => ((ULT|UGE) (BTQconst [63] x))
+((NE|EQ) (TESTL z1:(SHRLconst [31] x) z2)) && z1==z2
+ => ((ULT|UGE) (BTLconst [31] x))
(SET(NE|EQ)store [off] {sym} ptr (TESTQ z1:(SHRQconst [63] x) z2) mem) && z1==z2
=> (SET(B|AE)store [off] {sym} ptr (BTQconst [63] x) mem)
(SET(NE|EQ)store [off] {sym} ptr (TESTL z1:(SHRLconst [31] x) z2) mem) && z1==z2
@@ -710,16 +728,16 @@
(BTRQconst [c] (BTCQconst [c] x)) => (BTRQconst [c] x)

// Fold boolean negation into SETcc.
-(XORLconst [1] (SETNE x)) => (SETEQ x)
-(XORLconst [1] (SETEQ x)) => (SETNE x)
-(XORLconst [1] (SETL x)) => (SETGE x)
-(XORLconst [1] (SETGE x)) => (SETL x)
-(XORLconst [1] (SETLE x)) => (SETG x)
-(XORLconst [1] (SETG x)) => (SETLE x)
-(XORLconst [1] (SETB x)) => (SETAE x)
-(XORLconst [1] (SETAE x)) => (SETB x)
-(XORLconst [1] (SETBE x)) => (SETA x)
-(XORLconst [1] (SETA x)) => (SETBE x)
+(XORLconst [1] (SETNE zero x)) => (SETEQ zero x)
+(XORLconst [1] (SETEQ zero x)) => (SETNE zero x)
+(XORLconst [1] (SETL zero x)) => (SETGE zero x)
+(XORLconst [1] (SETGE zero x)) => (SETL zero x)
+(XORLconst [1] (SETLE zero x)) => (SETG zero x)
+(XORLconst [1] (SETG zero x)) => (SETLE zero x)
+(XORLconst [1] (SETB zero x)) => (SETAE zero x)
+(XORLconst [1] (SETAE zero x)) => (SETB zero x)
+(XORLconst [1] (SETBE zero x)) => (SETA zero x)
+(XORLconst [1] (SETA zero x)) => (SETBE zero x)

// Special case for floating point - LF/LEF not generated
(NE (TESTB (SETGF cmp) (SETGF cmp)) yes no) => (UGT cmp yes no)
@@ -858,6 +876,11 @@
// adverse interactions with other passes.
// (ANDQconst [0xFFFFFFFF] x) => (MOVLQZX x)

+// SETcc ops now take an already-zeroed register to write into, so the
+// result is already all zeros; never bother zero- or sign-extending it.
+(MOVBQZX set:(SET(EQ|NE|L|G|LE|GE|A|B|AE|BE) _ _)) => set
+(MOVBQSX set:(SET(EQ|NE|L|G|LE|GE|A|B|AE|BE) _ _)) => set
+
// strength reduction
(MUL(Q|L)const [ 0] _) => (MOV(Q|L)const [0])
(MUL(Q|L)const [ 1] x) => x
@@ -912,21 +935,21 @@
(SHL(Q|L)const [c] (ADD(Q|L) x x)) => (SHL(Q|L)const [c+1] x)

// reverse ordering of compare instruction
-(SETL (InvertFlags x)) => (SETG x)
-(SETG (InvertFlags x)) => (SETL x)
-(SETB (InvertFlags x)) => (SETA x)
-(SETA (InvertFlags x)) => (SETB x)
-(SETLE (InvertFlags x)) => (SETGE x)
-(SETGE (InvertFlags x)) => (SETLE x)
-(SETBE (InvertFlags x)) => (SETAE x)
-(SETAE (InvertFlags x)) => (SETBE x)
-(SETEQ (InvertFlags x)) => (SETEQ x)
-(SETNE (InvertFlags x)) => (SETNE x)
+(SETL zero (InvertFlags x)) => (SETG zero x)
+(SETG zero (InvertFlags x)) => (SETL zero x)
+(SETB zero (InvertFlags x)) => (SETA zero x)
+(SETA zero (InvertFlags x)) => (SETB zero x)
+(SETLE zero (InvertFlags x)) => (SETGE zero x)
+(SETGE zero (InvertFlags x)) => (SETLE zero x)
+(SETBE zero (InvertFlags x)) => (SETAE zero x)
+(SETAE zero (InvertFlags x)) => (SETBE zero x)
+(SETEQ zero (InvertFlags x)) => (SETEQ zero x)
+(SETNE zero (InvertFlags x)) => (SETNE zero x)

-(SETLstore [off] {sym} ptr (InvertFlags x) mem) => (SETGstore [off] {sym} ptr x mem)
-(SETGstore [off] {sym} ptr (InvertFlags x) mem) => (SETLstore [off] {sym} ptr x mem)
-(SETBstore [off] {sym} ptr (InvertFlags x) mem) => (SETAstore [off] {sym} ptr x mem)
-(SETAstore [off] {sym} ptr (InvertFlags x) mem) => (SETBstore [off] {sym} ptr x mem)
+(SETLstore [off] {sym} ptr (InvertFlags x) mem) => (SETGstore [off] {sym} ptr x mem)
+(SETGstore [off] {sym} ptr (InvertFlags x) mem) => (SETLstore [off] {sym} ptr x mem)
+(SETBstore [off] {sym} ptr (InvertFlags x) mem) => (SETAstore [off] {sym} ptr x mem)
+(SETAstore [off] {sym} ptr (InvertFlags x) mem) => (SETBstore [off] {sym} ptr x mem)
(SETLEstore [off] {sym} ptr (InvertFlags x) mem) => (SETGEstore [off] {sym} ptr x mem)
(SETGEstore [off] {sym} ptr (InvertFlags x) mem) => (SETLEstore [off] {sym} ptr x mem)
(SETBEstore [off] {sym} ptr (InvertFlags x) mem) => (SETAEstore [off] {sym} ptr x mem)
@@ -1214,16 +1237,16 @@
((EQ|LT|LE|ULT|ULE) (FlagGT_UGT) yes no) => (First no yes)

// Absorb flag constants into SETxx ops.
-((SETEQ|SETLE|SETGE|SETBE|SETAE) (FlagEQ)) => (MOVLconst [1])
-((SETNE|SETL|SETG|SETB|SETA) (FlagEQ)) => (MOVLconst [0])
-((SETNE|SETL|SETLE|SETB|SETBE) (FlagLT_ULT)) => (MOVLconst [1])
-((SETEQ|SETG|SETGE|SETA|SETAE) (FlagLT_ULT)) => (MOVLconst [0])
-((SETNE|SETL|SETLE|SETA|SETAE) (FlagLT_UGT)) => (MOVLconst [1])
-((SETEQ|SETG|SETGE|SETB|SETBE) (FlagLT_UGT)) => (MOVLconst [0])
-((SETNE|SETG|SETGE|SETB|SETBE) (FlagGT_ULT)) => (MOVLconst [1])
-((SETEQ|SETL|SETLE|SETA|SETAE) (FlagGT_ULT)) => (MOVLconst [0])
-((SETNE|SETG|SETGE|SETA|SETAE) (FlagGT_UGT)) => (MOVLconst [1])
-((SETEQ|SETL|SETLE|SETB|SETBE) (FlagGT_UGT)) => (MOVLconst [0])
+((SETEQ|SETLE|SETGE|SETBE|SETAE) _ (FlagEQ)) => (MOVLconst [1])
+((SETNE|SETL|SETG|SETB|SETA) _ (FlagEQ)) => (MOVLconst [0])
+((SETNE|SETL|SETLE|SETB|SETBE) _ (FlagLT_ULT)) => (MOVLconst [1])
+((SETEQ|SETG|SETGE|SETA|SETAE) _ (FlagLT_ULT)) => (MOVLconst [0])
+((SETNE|SETL|SETLE|SETA|SETAE) _ (FlagLT_UGT)) => (MOVLconst [1])
+((SETEQ|SETG|SETGE|SETB|SETBE) _ (FlagLT_UGT)) => (MOVLconst [0])
+((SETNE|SETG|SETGE|SETB|SETBE) _ (FlagGT_ULT)) => (MOVLconst [1])
+((SETEQ|SETL|SETLE|SETA|SETAE) _ (FlagGT_ULT)) => (MOVLconst [0])
+((SETNE|SETG|SETGE|SETA|SETAE) _ (FlagGT_UGT)) => (MOVLconst [1])
+((SETEQ|SETL|SETLE|SETB|SETBE) _ (FlagGT_UGT)) => (MOVLconst [0])

(SETEQstore [off] {sym} ptr (FlagEQ) mem) => (MOVBstore [off] {sym} ptr (MOVLconst <typ.UInt8> [1]) mem)
(SETEQstore [off] {sym} ptr (FlagLT_ULT) mem) => (MOVBstore [off] {sym} ptr (MOVLconst <typ.UInt8> [0]) mem)
@@ -1618,11 +1641,11 @@
(XOR(Q|L) x (ADD(Q|L)const [-1] x)) && buildcfg.GOAMD64 >= 3 => (BLSMSK(Q|L) x)
(AND(Q|L) <t> x (ADD(Q|L)const [-1] x)) && buildcfg.GOAMD64 >= 3 => (Select0 <t> (BLSR(Q|L) x))
// eliminate TEST instruction in classical "isPowerOfTwo" check
-(SETEQ (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s)) => (SETEQ (Select1 <types.TypeFlags> blsr))
+(SETEQ zero (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s)) => (SETEQ zero (Select1 <types.TypeFlags> blsr))
(CMOVQEQ x y (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s)) => (CMOVQEQ x y (Select1 <types.TypeFlags> blsr))
(CMOVLEQ x y (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s)) => (CMOVLEQ x y (Select1 <types.TypeFlags> blsr))
(EQ (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s) yes no) => (EQ (Select1 <types.TypeFlags> blsr) yes no)
-(SETNE (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s)) => (SETNE (Select1 <types.TypeFlags> blsr))
+(SETNE zero (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s)) => (SETNE zero (Select1 <types.TypeFlags> blsr))
(CMOVQNE x y (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s)) => (CMOVQNE x y (Select1 <types.TypeFlags> blsr))
(CMOVLNE x y (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s)) => (CMOVLNE x y (Select1 <types.TypeFlags> blsr))
(NE (TEST(Q|L) s:(Select0 blsr:(BLSR(Q|L) _)) s) yes no) => (NE (Select1 <types.TypeFlags> blsr) yes no)
@@ -1728,7 +1751,7 @@
(StoreMasked64 {t} ptr mask val mem) && t.Size() == 32 => (VPMASK64store256 ptr mask val mem)

// Misc
-(IsZeroVec x) => (SETEQ (VPTEST x x))
+(IsZeroVec x) => (SETEQ <typ.UInt64> (Const64 <typ.UInt64> [0]) (VPTEST x x))

// SIMD vector K-masked loads and stores

@@ -1809,10 +1832,10 @@
(VMOVDQUstore(128|256|512) [off1] {sym1} x:(LEAQ [off2] {sym2} base) val mem) && is32Bit(int64(off1)+int64(off2)) && canMergeSym(sym1, sym2) => (VMOVDQUstore(128|256|512) [off1+off2] {mergeSym(sym1, sym2)} base val mem)

// 2-op VPTEST optimizations
-(SETEQ (VPTEST x:(VPAND(128|256) j k) y)) && x == y && x.Uses == 2 => (SETEQ (VPTEST j k))
-(SETEQ (VPTEST x:(VPAND(D|Q)512 j k) y)) && x == y && x.Uses == 2 => (SETEQ (VPTEST j k))
-(SETEQ (VPTEST x:(VPANDN(128|256) j k) y)) && x == y && x.Uses == 2 => (SETB (VPTEST k j)) // AndNot has swapped its operand order
-(SETEQ (VPTEST x:(VPANDN(D|Q)512 j k) y)) && x == y && x.Uses == 2 => (SETB (VPTEST k j)) // AndNot has swapped its operand order
+(SETEQ zero (VPTEST x:(VPAND(128|256) j k) y)) && x == y && x.Uses == 2 => (SETEQ zero (VPTEST j k))
+(SETEQ zero (VPTEST x:(VPAND(D|Q)512 j k) y)) && x == y && x.Uses == 2 => (SETEQ zero (VPTEST j k))
+(SETEQ zero (VPTEST x:(VPANDN(128|256) j k) y)) && x == y && x.Uses == 2 => (SETB zero (VPTEST k j)) // AndNot has swapped its operand order
+(SETEQ zero (VPTEST x:(VPANDN(D|Q)512 j k) y)) && x == y && x.Uses == 2 => (SETB zero (VPTEST k j)) // AndNot has swapped its operand order
(EQ (VPTEST x:(VPAND(128|256) j k) y) yes no) && x == y && x.Uses == 2 => (EQ (VPTEST j k) yes no)
(EQ (VPTEST x:(VPAND(D|Q)512 j k) y) yes no) && x == y && x.Uses == 2 => (EQ (VPTEST j k) yes no)
(EQ (VPTEST x:(VPANDN(128|256) j k) y) yes no) && x == y && x.Uses == 2 => (ULT (VPTEST k j) yes no) // AndNot has swapped its operand order
diff --git a/src/cmd/compile/internal/ssa/_gen/AMD64Ops.go b/src/cmd/compile/internal/ssa/_gen/AMD64Ops.go
index 2fb4fdf..932cad6 100644
--- a/src/cmd/compile/internal/ssa/_gen/AMD64Ops.go
+++ b/src/cmd/compile/internal/ssa/_gen/AMD64Ops.go
@@ -175,8 +175,6 @@
gp11flags = regInfo{inputs: []regMask{gp}, outputs: []regMask{gp, 0}}
gp1flags1flags = regInfo{inputs: []regMask{gp, 0}, outputs: []regMask{gp, 0}}

- readflags = regInfo{inputs: nil, outputs: gponly}
-
gpload = regInfo{inputs: []regMask{gpspsbg, 0}, outputs: gponly}
gp21load = regInfo{inputs: []regMask{gp, gpspsbg, 0}, outputs: gponly}
gploadidx = regInfo{inputs: []regMask{gpspsbg, gpsp, 0}, outputs: gponly}
@@ -819,17 +817,18 @@
{name: "SBBLcarrymask", argLength: 1, reg: flagsgp, asm: "SBBL"}, // (int32)(-1) if carry is set, 0 if carry is clear.
// Note: SBBW and SBBB are subsumed by SBBL

- {name: "SETEQ", argLength: 1, reg: readflags, asm: "SETEQ"}, // extract == condition from arg0
- {name: "SETNE", argLength: 1, reg: readflags, asm: "SETNE"}, // extract != condition from arg0
- {name: "SETL", argLength: 1, reg: readflags, asm: "SETLT"}, // extract signed < condition from arg0
- {name: "SETLE", argLength: 1, reg: readflags, asm: "SETLE"}, // extract signed <= condition from arg0
- {name: "SETG", argLength: 1, reg: readflags, asm: "SETGT"}, // extract signed > condition from arg0
- {name: "SETGE", argLength: 1, reg: readflags, asm: "SETGE"}, // extract signed >= condition from arg0
- {name: "SETB", argLength: 1, reg: readflags, asm: "SETCS"}, // extract unsigned < condition from arg0
- {name: "SETBE", argLength: 1, reg: readflags, asm: "SETLS"}, // extract unsigned <= condition from arg0
- {name: "SETA", argLength: 1, reg: readflags, asm: "SETHI"}, // extract unsigned > condition from arg0
- {name: "SETAE", argLength: 1, reg: readflags, asm: "SETCC"}, // extract unsigned >= condition from arg0
- {name: "SETO", argLength: 1, reg: readflags, asm: "SETOS"}, // extract if overflow flag is set from arg0
+ // The first argument must be a zero register for dependency breaking;
+ // the flags being tested are arg1.
+ {name: "SETEQ", argLength: 2, reg: gp11, asm: "SETEQ", resultInArg0: true}, // extract == condition from arg1
+ {name: "SETNE", argLength: 2, reg: gp11, asm: "SETNE", resultInArg0: true}, // extract != condition from arg1
+ {name: "SETL", argLength: 2, reg: gp11, asm: "SETLT", resultInArg0: true}, // extract signed < condition from arg1
+ {name: "SETLE", argLength: 2, reg: gp11, asm: "SETLE", resultInArg0: true}, // extract signed <= condition from arg1
+ {name: "SETG", argLength: 2, reg: gp11, asm: "SETGT", resultInArg0: true}, // extract signed > condition from arg1
+ {name: "SETGE", argLength: 2, reg: gp11, asm: "SETGE", resultInArg0: true}, // extract signed >= condition from arg1
+ {name: "SETB", argLength: 2, reg: gp11, asm: "SETCS", resultInArg0: true}, // extract unsigned < condition from arg1
+ {name: "SETBE", argLength: 2, reg: gp11, asm: "SETLS", resultInArg0: true}, // extract unsigned <= condition from arg1
+ {name: "SETA", argLength: 2, reg: gp11, asm: "SETHI", resultInArg0: true}, // extract unsigned > condition from arg1
+ {name: "SETAE", argLength: 2, reg: gp11, asm: "SETCC", resultInArg0: true}, // extract unsigned >= condition from arg1
+ {name: "SETO", argLength: 2, reg: gp11, asm: "SETOS", resultInArg0: true}, // extract if overflow flag is set from arg1
// Variants that store result to memory
{name: "SETEQstore", argLength: 3, reg: gpstoreconst, asm: "SETEQ", aux: "SymOff", typ: "Mem", faultOnNilArg0: true, symEffect: "Write"}, // extract == condition from arg1 to arg0+auxint+aux, arg2=mem
{name: "SETNEstore", argLength: 3, reg: gpstoreconst, asm: "SETNE", aux: "SymOff", typ: "Mem", faultOnNilArg0: true, symEffect: "Write"}, // extract != condition from arg1 to arg0+auxint+aux, arg2=mem
diff --git a/src/cmd/compile/internal/ssa/opGen.go b/src/cmd/compile/internal/ssa/opGen.go
index 9ba5767..e32807e 100644
--- a/src/cmd/compile/internal/ssa/opGen.go
+++ b/src/cmd/compile/internal/ssa/opGen.go
@@ -16347,110 +16347,154 @@
},
},
{
- name: "SETEQ",
- argLen: 1,
- asm: x86.ASETEQ,
+ name: "SETEQ",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETEQ,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETNE",
- argLen: 1,
- asm: x86.ASETNE,
+ name: "SETNE",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETNE,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETL",
- argLen: 1,
- asm: x86.ASETLT,
+ name: "SETL",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETLT,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETLE",
- argLen: 1,
- asm: x86.ASETLE,
+ name: "SETLE",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETLE,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETG",
- argLen: 1,
- asm: x86.ASETGT,
+ name: "SETG",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETGT,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETGE",
- argLen: 1,
- asm: x86.ASETGE,
+ name: "SETGE",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETGE,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETB",
- argLen: 1,
- asm: x86.ASETCS,
+ name: "SETB",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETCS,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETBE",
- argLen: 1,
- asm: x86.ASETLS,
+ name: "SETBE",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETLS,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETA",
- argLen: 1,
- asm: x86.ASETHI,
+ name: "SETA",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETHI,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETAE",
- argLen: 1,
- asm: x86.ASETCC,
+ name: "SETAE",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETCC,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
},
},
{
- name: "SETO",
- argLen: 1,
- asm: x86.ASETOS,
+ name: "SETO",
+ argLen: 2,
+ resultInArg0: true,
+ asm: x86.ASETOS,
reg: regInfo{
+ inputs: []inputInfo{
+ {0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
+ },
outputs: []outputInfo{
{0, 49135}, // AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R15
},
diff --git a/src/cmd/compile/internal/ssa/rewriteAMD64.go b/src/cmd/compile/internal/ssa/rewriteAMD64.go
index 35e9516..7a83721 100644
--- a/src/cmd/compile/internal/ssa/rewriteAMD64.go
+++ b/src/cmd/compile/internal/ssa/rewriteAMD64.go
@@ -15128,6 +15128,106 @@
func rewriteValueAMD64_OpAMD64MOVBQSX(v *Value) bool {
v_0 := v.Args[0]
b := v.Block
+ // match: (MOVBQSX set:(SETEQ _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETEQ {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQSX set:(SETNE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETNE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQSX set:(SETL _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETL {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQSX set:(SETG _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETG {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQSX set:(SETLE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETLE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQSX set:(SETGE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETGE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQSX set:(SETA _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETA {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQSX set:(SETB _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETB {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQSX set:(SETAE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETAE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQSX set:(SETBE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETBE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
// match: (MOVBQSX x:(MOVBload [off] {sym} ptr mem))
// cond: x.Uses == 1 && clobber(x)
// result: @x.Block (MOVBQSXload <v.Type> [off] {sym} ptr mem)
@@ -15314,6 +15414,106 @@
func rewriteValueAMD64_OpAMD64MOVBQZX(v *Value) bool {
v_0 := v.Args[0]
b := v.Block
+ // match: (MOVBQZX set:(SETEQ _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETEQ {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQZX set:(SETNE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETNE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQZX set:(SETL _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETL {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQZX set:(SETG _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETG {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQZX set:(SETLE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETLE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQZX set:(SETGE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETGE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQZX set:(SETA _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETA {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQZX set:(SETB _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETB {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQZX set:(SETAE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETAE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
+ // match: (MOVBQZX set:(SETBE _ _))
+ // result: set
+ for {
+ set := v_0
+ if set.Op != OpAMD64SETBE {
+ break
+ }
+ v.copyOf(set)
+ return true
+ }
// match: (MOVBQZX x:(MOVBload [off] {sym} ptr mem))
// cond: x.Uses == 1 && clobber(x)
// result: @x.Block (MOVBload <v.Type> [off] {sym} ptr mem)
@@ -15566,7 +15766,7 @@
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
- // match: (MOVBstore [off] {sym} ptr y:(SETL x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETL _ x) mem)
// cond: y.Uses == 1
// result: (SETLstore [off] {sym} ptr x mem)
for {
@@ -15577,7 +15777,7 @@
if y.Op != OpAMD64SETL {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -15588,7 +15788,7 @@
v.AddArg3(ptr, x, mem)
return true
}
- // match: (MOVBstore [off] {sym} ptr y:(SETLE x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETLE _ x) mem)
// cond: y.Uses == 1
// result: (SETLEstore [off] {sym} ptr x mem)
for {
@@ -15599,7 +15799,7 @@
if y.Op != OpAMD64SETLE {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -15610,7 +15810,7 @@
v.AddArg3(ptr, x, mem)
return true
}
- // match: (MOVBstore [off] {sym} ptr y:(SETG x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETG _ x) mem)
// cond: y.Uses == 1
// result: (SETGstore [off] {sym} ptr x mem)
for {
@@ -15621,7 +15821,7 @@
if y.Op != OpAMD64SETG {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -15632,7 +15832,7 @@
v.AddArg3(ptr, x, mem)
return true
}
- // match: (MOVBstore [off] {sym} ptr y:(SETGE x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETGE _ x) mem)
// cond: y.Uses == 1
// result: (SETGEstore [off] {sym} ptr x mem)
for {
@@ -15643,7 +15843,7 @@
if y.Op != OpAMD64SETGE {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -15654,7 +15854,7 @@
v.AddArg3(ptr, x, mem)
return true
}
- // match: (MOVBstore [off] {sym} ptr y:(SETEQ x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETEQ _ x) mem)
// cond: y.Uses == 1
// result: (SETEQstore [off] {sym} ptr x mem)
for {
@@ -15665,7 +15865,7 @@
if y.Op != OpAMD64SETEQ {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -15676,7 +15876,7 @@
v.AddArg3(ptr, x, mem)
return true
}
- // match: (MOVBstore [off] {sym} ptr y:(SETNE x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETNE _ x) mem)
// cond: y.Uses == 1
// result: (SETNEstore [off] {sym} ptr x mem)
for {
@@ -15687,7 +15887,7 @@
if y.Op != OpAMD64SETNE {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -15698,7 +15898,7 @@
v.AddArg3(ptr, x, mem)
return true
}
- // match: (MOVBstore [off] {sym} ptr y:(SETB x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETB _ x) mem)
// cond: y.Uses == 1
// result: (SETBstore [off] {sym} ptr x mem)
for {
@@ -15709,7 +15909,7 @@
if y.Op != OpAMD64SETB {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -15720,7 +15920,7 @@
v.AddArg3(ptr, x, mem)
return true
}
- // match: (MOVBstore [off] {sym} ptr y:(SETBE x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETBE _ x) mem)
// cond: y.Uses == 1
// result: (SETBEstore [off] {sym} ptr x mem)
for {
@@ -15731,7 +15931,7 @@
if y.Op != OpAMD64SETBE {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -15742,7 +15942,7 @@
v.AddArg3(ptr, x, mem)
return true
}
- // match: (MOVBstore [off] {sym} ptr y:(SETA x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETA _ x) mem)
// cond: y.Uses == 1
// result: (SETAstore [off] {sym} ptr x mem)
for {
@@ -15753,7 +15953,7 @@
if y.Op != OpAMD64SETA {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -15764,7 +15964,7 @@
v.AddArg3(ptr, x, mem)
return true
}
- // match: (MOVBstore [off] {sym} ptr y:(SETAE x) mem)
+ // match: (MOVBstore [off] {sym} ptr y:(SETAE _ x) mem)
// cond: y.Uses == 1
// result: (SETAEstore [off] {sym} ptr x mem)
for {
@@ -15775,7 +15975,7 @@
if y.Op != OpAMD64SETAE {
break
}
- x := y.Args[0]
+ x := y.Args[1]
mem := v_2
if !(y.Uses == 1) {
break
@@ -21581,62 +21781,64 @@
return false
}
func rewriteValueAMD64_OpAMD64SETA(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
- // match: (SETA (InvertFlags x))
- // result: (SETB x)
+ // match: (SETA zero (InvertFlags x))
+ // result: (SETB zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETB)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETA (FlagEQ))
+ // match: (SETA _ (FlagEQ))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETA (FlagLT_ULT))
+ // match: (SETA _ (FlagLT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETA (FlagLT_UGT))
+ // match: (SETA _ (FlagLT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETA (FlagGT_ULT))
+ // match: (SETA _ (FlagGT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETA (FlagGT_UGT))
+ // match: (SETA _ (FlagGT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
@@ -21646,72 +21848,73 @@
return false
}
func rewriteValueAMD64_OpAMD64SETAE(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
- // match: (SETAE (TESTQ x x))
+ // match: (SETAE _ (TESTQ x x))
// result: (ConstBool [true])
for {
- if v_0.Op != OpAMD64TESTQ {
+ if v_1.Op != OpAMD64TESTQ {
break
}
- x := v_0.Args[1]
- if x != v_0.Args[0] {
+ x := v_1.Args[1]
+ if x != v_1.Args[0] {
break
}
v.reset(OpConstBool)
v.AuxInt = boolToAuxInt(true)
return true
}
- // match: (SETAE (TESTL x x))
+ // match: (SETAE _ (TESTL x x))
// result: (ConstBool [true])
for {
- if v_0.Op != OpAMD64TESTL {
+ if v_1.Op != OpAMD64TESTL {
break
}
- x := v_0.Args[1]
- if x != v_0.Args[0] {
+ x := v_1.Args[1]
+ if x != v_1.Args[0] {
break
}
v.reset(OpConstBool)
v.AuxInt = boolToAuxInt(true)
return true
}
- // match: (SETAE (TESTW x x))
+ // match: (SETAE _ (TESTW x x))
// result: (ConstBool [true])
for {
- if v_0.Op != OpAMD64TESTW {
+ if v_1.Op != OpAMD64TESTW {
break
}
- x := v_0.Args[1]
- if x != v_0.Args[0] {
+ x := v_1.Args[1]
+ if x != v_1.Args[0] {
break
}
v.reset(OpConstBool)
v.AuxInt = boolToAuxInt(true)
return true
}
- // match: (SETAE (TESTB x x))
+ // match: (SETAE _ (TESTB x x))
// result: (ConstBool [true])
for {
- if v_0.Op != OpAMD64TESTB {
+ if v_1.Op != OpAMD64TESTB {
break
}
- x := v_0.Args[1]
- if x != v_0.Args[0] {
+ x := v_1.Args[1]
+ if x != v_1.Args[0] {
break
}
v.reset(OpConstBool)
v.AuxInt = boolToAuxInt(true)
return true
}
- // match: (SETAE (BTLconst [0] x))
+ // match: (SETAE _ (BTLconst [0] x))
// result: (XORLconst [1] (ANDLconst <typ.Bool> [1] x))
for {
- if v_0.Op != OpAMD64BTLconst || auxIntToInt8(v_0.AuxInt) != 0 {
+ if v_1.Op != OpAMD64BTLconst || auxIntToInt8(v_1.AuxInt) != 0 {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64XORLconst)
v.AuxInt = int32ToAuxInt(1)
v0 := b.NewValue0(v.Pos, OpAMD64ANDLconst, typ.Bool)
@@ -21720,13 +21923,13 @@
v.AddArg(v0)
return true
}
- // match: (SETAE (BTQconst [0] x))
+ // match: (SETAE _ (BTQconst [0] x))
// result: (XORLconst [1] (ANDLconst <typ.Bool> [1] x))
for {
- if v_0.Op != OpAMD64BTQconst || auxIntToInt8(v_0.AuxInt) != 0 {
+ if v_1.Op != OpAMD64BTQconst || auxIntToInt8(v_1.AuxInt) != 0 {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64XORLconst)
v.AuxInt = int32ToAuxInt(1)
v0 := b.NewValue0(v.Pos, OpAMD64ANDLconst, typ.Bool)
@@ -21735,11 +21938,12 @@
v.AddArg(v0)
return true
}
- // match: (SETAE c:(CMPQconst [128] x))
+ // match: (SETAE zero c:(CMPQconst [128] x))
// cond: c.Uses == 1
- // result: (SETA (CMPQconst [127] x))
+ // result: (SETA zero (CMPQconst [127] x))
for {
- c := v_0
+ zero := v_0
+ c := v_1
if c.Op != OpAMD64CMPQconst || auxIntToInt32(c.AuxInt) != 128 {
break
}
@@ -21751,14 +21955,15 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(127)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETAE c:(CMPLconst [128] x))
+ // match: (SETAE zero c:(CMPLconst [128] x))
// cond: c.Uses == 1
- // result: (SETA (CMPLconst [127] x))
+ // result: (SETA zero (CMPLconst [127] x))
for {
- c := v_0
+ zero := v_0
+ c := v_1
if c.Op != OpAMD64CMPLconst || auxIntToInt32(c.AuxInt) != 128 {
break
}
@@ -21770,64 +21975,65 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(127)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETAE (InvertFlags x))
- // result: (SETBE x)
+ // match: (SETAE zero (InvertFlags x))
+ // result: (SETBE zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETBE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETAE (FlagEQ))
+ // match: (SETAE _ (FlagEQ))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETAE (FlagLT_ULT))
+ // match: (SETAE _ (FlagLT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETAE (FlagLT_UGT))
+ // match: (SETAE _ (FlagLT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETAE (FlagGT_ULT))
+ // match: (SETAE _ (FlagGT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETAE (FlagGT_UGT))
+ // match: (SETAE _ (FlagGT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
@@ -22157,93 +22363,95 @@
return false
}
func rewriteValueAMD64_OpAMD64SETB(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
- // match: (SETB (TESTQ x x))
+ // match: (SETB _ (TESTQ x x))
// result: (ConstBool [false])
for {
- if v_0.Op != OpAMD64TESTQ {
+ if v_1.Op != OpAMD64TESTQ {
break
}
- x := v_0.Args[1]
- if x != v_0.Args[0] {
+ x := v_1.Args[1]
+ if x != v_1.Args[0] {
break
}
v.reset(OpConstBool)
v.AuxInt = boolToAuxInt(false)
return true
}
- // match: (SETB (TESTL x x))
+ // match: (SETB _ (TESTL x x))
// result: (ConstBool [false])
for {
- if v_0.Op != OpAMD64TESTL {
+ if v_1.Op != OpAMD64TESTL {
break
}
- x := v_0.Args[1]
- if x != v_0.Args[0] {
+ x := v_1.Args[1]
+ if x != v_1.Args[0] {
break
}
v.reset(OpConstBool)
v.AuxInt = boolToAuxInt(false)
return true
}
- // match: (SETB (TESTW x x))
+ // match: (SETB _ (TESTW x x))
// result: (ConstBool [false])
for {
- if v_0.Op != OpAMD64TESTW {
+ if v_1.Op != OpAMD64TESTW {
break
}
- x := v_0.Args[1]
- if x != v_0.Args[0] {
+ x := v_1.Args[1]
+ if x != v_1.Args[0] {
break
}
v.reset(OpConstBool)
v.AuxInt = boolToAuxInt(false)
return true
}
- // match: (SETB (TESTB x x))
+ // match: (SETB _ (TESTB x x))
// result: (ConstBool [false])
for {
- if v_0.Op != OpAMD64TESTB {
+ if v_1.Op != OpAMD64TESTB {
break
}
- x := v_0.Args[1]
- if x != v_0.Args[0] {
+ x := v_1.Args[1]
+ if x != v_1.Args[0] {
break
}
v.reset(OpConstBool)
v.AuxInt = boolToAuxInt(false)
return true
}
- // match: (SETB (BTLconst [0] x))
+ // match: (SETB _ (BTLconst [0] x))
// result: (ANDLconst [1] x)
for {
- if v_0.Op != OpAMD64BTLconst || auxIntToInt8(v_0.AuxInt) != 0 {
+ if v_1.Op != OpAMD64BTLconst || auxIntToInt8(v_1.AuxInt) != 0 {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64ANDLconst)
v.AuxInt = int32ToAuxInt(1)
v.AddArg(x)
return true
}
- // match: (SETB (BTQconst [0] x))
+ // match: (SETB _ (BTQconst [0] x))
// result: (ANDQconst [1] x)
for {
- if v_0.Op != OpAMD64BTQconst || auxIntToInt8(v_0.AuxInt) != 0 {
+ if v_1.Op != OpAMD64BTQconst || auxIntToInt8(v_1.AuxInt) != 0 {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64ANDQconst)
v.AuxInt = int32ToAuxInt(1)
v.AddArg(x)
return true
}
- // match: (SETB c:(CMPQconst [128] x))
+ // match: (SETB zero c:(CMPQconst [128] x))
// cond: c.Uses == 1
- // result: (SETBE (CMPQconst [127] x))
+ // result: (SETBE zero (CMPQconst [127] x))
for {
- c := v_0
+ zero := v_0
+ c := v_1
if c.Op != OpAMD64CMPQconst || auxIntToInt32(c.AuxInt) != 128 {
break
}
@@ -22255,14 +22463,15 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(127)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETB c:(CMPLconst [128] x))
+ // match: (SETB zero c:(CMPLconst [128] x))
// cond: c.Uses == 1
- // result: (SETBE (CMPLconst [127] x))
+ // result: (SETBE zero (CMPLconst [127] x))
for {
- c := v_0
+ zero := v_0
+ c := v_1
if c.Op != OpAMD64CMPLconst || auxIntToInt32(c.AuxInt) != 128 {
break
}
@@ -22274,64 +22483,65 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(127)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETB (InvertFlags x))
- // result: (SETA x)
+ // match: (SETB zero (InvertFlags x))
+ // result: (SETA zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETA)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETB (FlagEQ))
+ // match: (SETB _ (FlagEQ))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETB (FlagLT_ULT))
+ // match: (SETB _ (FlagLT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETB (FlagLT_UGT))
+ // match: (SETB _ (FlagLT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETB (FlagGT_ULT))
+ // match: (SETB _ (FlagGT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETB (FlagGT_UGT))
+ // match: (SETB _ (FlagGT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
@@ -22341,62 +22551,64 @@
return false
}
func rewriteValueAMD64_OpAMD64SETBE(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
- // match: (SETBE (InvertFlags x))
- // result: (SETAE x)
+ // match: (SETBE zero (InvertFlags x))
+ // result: (SETAE zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETAE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETBE (FlagEQ))
+ // match: (SETBE _ (FlagEQ))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETBE (FlagLT_ULT))
+ // match: (SETBE _ (FlagLT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETBE (FlagLT_UGT))
+ // match: (SETBE _ (FlagLT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETBE (FlagGT_ULT))
+ // match: (SETBE _ (FlagGT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETBE (FlagGT_UGT))
+ // match: (SETBE _ (FlagGT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
@@ -22726,71 +22938,75 @@
return false
}
func rewriteValueAMD64_OpAMD64SETEQ(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
- // match: (SETEQ (TESTL (SHLL (MOVLconst [1]) x) y))
- // result: (SETAE (BTL x y))
+ // match: (SETEQ zero (TESTL (SHLL (MOVLconst [1]) x) y))
+ // result: (SETAE zero (BTL x y))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- if v_0_0.Op != OpAMD64SHLL {
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ if v_1_0.Op != OpAMD64SHLL {
continue
}
- x := v_0_0.Args[1]
- v_0_0_0 := v_0_0.Args[0]
- if v_0_0_0.Op != OpAMD64MOVLconst || auxIntToInt32(v_0_0_0.AuxInt) != 1 {
+ x := v_1_0.Args[1]
+ v_1_0_0 := v_1_0.Args[0]
+ if v_1_0_0.Op != OpAMD64MOVLconst || auxIntToInt32(v_1_0_0.AuxInt) != 1 {
continue
}
- y := v_0_1
+ y := v_1_1
v.reset(OpAMD64SETAE)
v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (TESTQ (SHLQ (MOVQconst [1]) x) y))
- // result: (SETAE (BTQ x y))
+ // match: (SETEQ zero (TESTQ (SHLQ (MOVQconst [1]) x) y))
+ // result: (SETAE zero (BTQ x y))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- if v_0_0.Op != OpAMD64SHLQ {
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ if v_1_0.Op != OpAMD64SHLQ {
continue
}
- x := v_0_0.Args[1]
- v_0_0_0 := v_0_0.Args[0]
- if v_0_0_0.Op != OpAMD64MOVQconst || auxIntToInt64(v_0_0_0.AuxInt) != 1 {
+ x := v_1_0.Args[1]
+ v_1_0_0 := v_1_0.Args[0]
+ if v_1_0_0.Op != OpAMD64MOVQconst || auxIntToInt64(v_1_0_0.AuxInt) != 1 {
continue
}
- y := v_0_1
+ y := v_1_1
v.reset(OpAMD64SETAE)
v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (TESTLconst [c] x))
+ // match: (SETEQ zero (TESTLconst [c] x))
// cond: isUnsignedPowerOfTwo(uint32(c))
- // result: (SETAE (BTLconst [int8(log32u(uint32(c)))] x))
+ // result: (SETAE zero (BTLconst [int8(log32u(uint32(c)))] x))
for {
- if v_0.Op != OpAMD64TESTLconst {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTLconst {
break
}
- c := auxIntToInt32(v_0.AuxInt)
- x := v_0.Args[0]
+ c := auxIntToInt32(v_1.AuxInt)
+ x := v_1.Args[0]
if !(isUnsignedPowerOfTwo(uint32(c))) {
break
}
@@ -22798,18 +23014,19 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(int8(log32u(uint32(c))))
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (TESTQconst [c] x))
+ // match: (SETEQ zero (TESTQconst [c] x))
// cond: isUnsignedPowerOfTwo(uint64(c))
- // result: (SETAE (BTQconst [int8(log32u(uint32(c)))] x))
+ // result: (SETAE zero (BTQconst [int8(log32u(uint32(c)))] x))
for {
- if v_0.Op != OpAMD64TESTQconst {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQconst {
break
}
- c := auxIntToInt32(v_0.AuxInt)
- x := v_0.Args[0]
+ c := auxIntToInt32(v_1.AuxInt)
+ x := v_1.Args[0]
if !(isUnsignedPowerOfTwo(uint64(c))) {
break
}
@@ -22817,25 +23034,26 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(int8(log32u(uint32(c))))
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (TESTQ (MOVQconst [c]) x))
+ // match: (SETEQ zero (TESTQ (MOVQconst [c]) x))
// cond: isUnsignedPowerOfTwo(uint64(c))
- // result: (SETAE (BTQconst [int8(log64u(uint64(c)))] x))
+ // result: (SETAE zero (BTQconst [int8(log64u(uint64(c)))] x))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- if v_0_0.Op != OpAMD64MOVQconst {
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ if v_1_0.Op != OpAMD64MOVQconst {
continue
}
- c := auxIntToInt64(v_0_0.AuxInt)
- x := v_0_1
+ c := auxIntToInt64(v_1_0.AuxInt)
+ x := v_1_1
if !(isUnsignedPowerOfTwo(uint64(c))) {
continue
}
@@ -22843,18 +23061,19 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(int8(log64u(uint64(c))))
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (CMPLconst [1] s:(ANDLconst [1] _)))
- // result: (SETNE (CMPLconst [0] s))
+ // match: (SETEQ zero (CMPLconst [1] s:(ANDLconst [1] _)))
+ // result: (SETNE zero (CMPLconst [0] s))
for {
- if v_0.Op != OpAMD64CMPLconst || auxIntToInt32(v_0.AuxInt) != 1 {
+ zero := v_0
+ if v_1.Op != OpAMD64CMPLconst || auxIntToInt32(v_1.AuxInt) != 1 {
break
}
- s := v_0.Args[0]
+ s := v_1.Args[0]
if s.Op != OpAMD64ANDLconst || auxIntToInt32(s.AuxInt) != 1 {
break
}
@@ -22862,16 +23081,17 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(0)
v0.AddArg(s)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (CMPQconst [1] s:(ANDQconst [1] _)))
- // result: (SETNE (CMPQconst [0] s))
+ // match: (SETEQ zero (CMPQconst [1] s:(ANDQconst [1] _)))
+ // result: (SETNE zero (CMPQconst [0] s))
for {
- if v_0.Op != OpAMD64CMPQconst || auxIntToInt32(v_0.AuxInt) != 1 {
+ zero := v_0
+ if v_1.Op != OpAMD64CMPQconst || auxIntToInt32(v_1.AuxInt) != 1 {
break
}
- s := v_0.Args[0]
+ s := v_1.Args[0]
if s.Op != OpAMD64ANDQconst || auxIntToInt32(s.AuxInt) != 1 {
break
}
@@ -22879,21 +23099,22 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(0)
v0.AddArg(s)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (TESTQ z1:(SHLQconst [63] (SHRQconst [63] x)) z2))
+ // match: (SETEQ zero (TESTQ z1:(SHLQconst [63] (SHRQconst [63] x)) z2))
// cond: z1==z2
- // result: (SETAE (BTQconst [63] x))
+ // result: (SETAE zero (BTQconst [63] x))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHLQconst || auxIntToInt8(z1.AuxInt) != 63 {
continue
}
@@ -22902,7 +23123,7 @@
continue
}
x := z1_0.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -22910,23 +23131,24 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(63)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (TESTL z1:(SHLLconst [31] (SHRQconst [31] x)) z2))
+ // match: (SETEQ zero (TESTL z1:(SHLLconst [31] (SHRQconst [31] x)) z2))
// cond: z1==z2
- // result: (SETAE (BTQconst [31] x))
+ // result: (SETAE zero (BTQconst [31] x))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHLLconst || auxIntToInt8(z1.AuxInt) != 31 {
continue
}
@@ -22935,7 +23157,7 @@
continue
}
x := z1_0.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -22943,23 +23165,24 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(31)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (TESTQ z1:(SHRQconst [63] (SHLQconst [63] x)) z2))
+ // match: (SETEQ zero (TESTQ z1:(SHRQconst [63] (SHLQconst [63] x)) z2))
// cond: z1==z2
- // result: (SETAE (BTQconst [0] x))
+ // result: (SETAE zero (BTQconst [0] x))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHRQconst || auxIntToInt8(z1.AuxInt) != 63 {
continue
}
@@ -22968,7 +23191,7 @@
continue
}
x := z1_0.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -22976,23 +23199,24 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(0)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (TESTL z1:(SHRLconst [31] (SHLLconst [31] x)) z2))
+ // match: (SETEQ zero (TESTL z1:(SHRLconst [31] (SHLLconst [31] x)) z2))
// cond: z1==z2
- // result: (SETAE (BTLconst [0] x))
+ // result: (SETAE zero (BTLconst [0] x))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHRLconst || auxIntToInt8(z1.AuxInt) != 31 {
continue
}
@@ -23001,7 +23225,7 @@
continue
}
x := z1_0.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -23009,28 +23233,29 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(0)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (TESTQ z1:(SHRQconst [63] x) z2))
+ // match: (SETEQ zero (TESTQ z1:(SHRQconst [63] x) z2))
// cond: z1==z2
- // result: (SETAE (BTQconst [63] x))
+ // result: (SETAE zero (BTQconst [63] x))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHRQconst || auxIntToInt8(z1.AuxInt) != 63 {
continue
}
x := z1.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -23038,28 +23263,29 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(63)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (TESTL z1:(SHRLconst [31] x) z2))
+ // match: (SETEQ zero (TESTL z1:(SHRLconst [31] x) z2))
// cond: z1==z2
- // result: (SETAE (BTLconst [31] x))
+ // result: (SETAE zero (BTLconst [31] x))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHRLconst || auxIntToInt8(z1.AuxInt) != 31 {
continue
}
x := z1.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -23067,133 +23293,137 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(31)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (InvertFlags x))
- // result: (SETEQ x)
+ // match: (SETEQ zero (InvertFlags x))
+ // result: (SETEQ zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETEQ)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETEQ (FlagEQ))
+ // match: (SETEQ _ (FlagEQ))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETEQ (FlagLT_ULT))
+ // match: (SETEQ _ (FlagLT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETEQ (FlagLT_UGT))
+ // match: (SETEQ _ (FlagLT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETEQ (FlagGT_ULT))
+ // match: (SETEQ _ (FlagGT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETEQ (FlagGT_UGT))
+ // match: (SETEQ _ (FlagGT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETEQ (TESTQ s:(Select0 blsr:(BLSRQ _)) s))
- // result: (SETEQ (Select1 <types.TypeFlags> blsr))
+ // match: (SETEQ zero (TESTQ s:(Select0 blsr:(BLSRQ _)) s))
+ // result: (SETEQ zero (Select1 <types.TypeFlags> blsr))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- s := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ s := v_1_0
if s.Op != OpSelect0 {
continue
}
blsr := s.Args[0]
- if blsr.Op != OpAMD64BLSRQ || s != v_0_1 {
+ if blsr.Op != OpAMD64BLSRQ || s != v_1_1 {
continue
}
v.reset(OpAMD64SETEQ)
v0 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
v0.AddArg(blsr)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (TESTL s:(Select0 blsr:(BLSRL _)) s))
- // result: (SETEQ (Select1 <types.TypeFlags> blsr))
+ // match: (SETEQ zero (TESTL s:(Select0 blsr:(BLSRL _)) s))
+ // result: (SETEQ zero (Select1 <types.TypeFlags> blsr))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- s := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ s := v_1_0
if s.Op != OpSelect0 {
continue
}
blsr := s.Args[0]
- if blsr.Op != OpAMD64BLSRL || s != v_0_1 {
+ if blsr.Op != OpAMD64BLSRL || s != v_1_1 {
continue
}
v.reset(OpAMD64SETEQ)
v0 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
v0.AddArg(blsr)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETEQ (VPTEST x:(VPAND128 j k) y))
+ // match: (SETEQ zero (VPTEST x:(VPAND128 j k) y))
// cond: x == y && x.Uses == 2
- // result: (SETEQ (VPTEST j k))
+ // result: (SETEQ zero (VPTEST j k))
for {
- if v_0.Op != OpAMD64VPTEST {
+ zero := v_0
+ if v_1.Op != OpAMD64VPTEST {
break
}
- y := v_0.Args[1]
- x := v_0.Args[0]
+ y := v_1.Args[1]
+ x := v_1.Args[0]
if x.Op != OpAMD64VPAND128 {
break
}
@@ -23205,18 +23435,19 @@
v.reset(OpAMD64SETEQ)
v0 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
v0.AddArg2(j, k)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (VPTEST x:(VPAND256 j k) y))
+ // match: (SETEQ zero (VPTEST x:(VPAND256 j k) y))
// cond: x == y && x.Uses == 2
- // result: (SETEQ (VPTEST j k))
+ // result: (SETEQ zero (VPTEST j k))
for {
- if v_0.Op != OpAMD64VPTEST {
+ zero := v_0
+ if v_1.Op != OpAMD64VPTEST {
break
}
- y := v_0.Args[1]
- x := v_0.Args[0]
+ y := v_1.Args[1]
+ x := v_1.Args[0]
if x.Op != OpAMD64VPAND256 {
break
}
@@ -23228,18 +23459,19 @@
v.reset(OpAMD64SETEQ)
v0 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
v0.AddArg2(j, k)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (VPTEST x:(VPANDD512 j k) y))
+ // match: (SETEQ zero (VPTEST x:(VPANDD512 j k) y))
// cond: x == y && x.Uses == 2
- // result: (SETEQ (VPTEST j k))
+ // result: (SETEQ zero (VPTEST j k))
for {
- if v_0.Op != OpAMD64VPTEST {
+ zero := v_0
+ if v_1.Op != OpAMD64VPTEST {
break
}
- y := v_0.Args[1]
- x := v_0.Args[0]
+ y := v_1.Args[1]
+ x := v_1.Args[0]
if x.Op != OpAMD64VPANDD512 {
break
}
@@ -23251,18 +23483,19 @@
v.reset(OpAMD64SETEQ)
v0 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
v0.AddArg2(j, k)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (VPTEST x:(VPANDQ512 j k) y))
+ // match: (SETEQ zero (VPTEST x:(VPANDQ512 j k) y))
// cond: x == y && x.Uses == 2
- // result: (SETEQ (VPTEST j k))
+ // result: (SETEQ zero (VPTEST j k))
for {
- if v_0.Op != OpAMD64VPTEST {
+ zero := v_0
+ if v_1.Op != OpAMD64VPTEST {
break
}
- y := v_0.Args[1]
- x := v_0.Args[0]
+ y := v_1.Args[1]
+ x := v_1.Args[0]
if x.Op != OpAMD64VPANDQ512 {
break
}
@@ -23274,18 +23507,19 @@
v.reset(OpAMD64SETEQ)
v0 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
v0.AddArg2(j, k)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (VPTEST x:(VPANDN128 j k) y))
+ // match: (SETEQ zero (VPTEST x:(VPANDN128 j k) y))
// cond: x == y && x.Uses == 2
- // result: (SETB (VPTEST k j))
+ // result: (SETB zero (VPTEST k j))
for {
- if v_0.Op != OpAMD64VPTEST {
+ zero := v_0
+ if v_1.Op != OpAMD64VPTEST {
break
}
- y := v_0.Args[1]
- x := v_0.Args[0]
+ y := v_1.Args[1]
+ x := v_1.Args[0]
if x.Op != OpAMD64VPANDN128 {
break
}
@@ -23297,18 +23531,19 @@
v.reset(OpAMD64SETB)
v0 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
v0.AddArg2(k, j)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (VPTEST x:(VPANDN256 j k) y))
+ // match: (SETEQ zero (VPTEST x:(VPANDN256 j k) y))
// cond: x == y && x.Uses == 2
- // result: (SETB (VPTEST k j))
+ // result: (SETB zero (VPTEST k j))
for {
- if v_0.Op != OpAMD64VPTEST {
+ zero := v_0
+ if v_1.Op != OpAMD64VPTEST {
break
}
- y := v_0.Args[1]
- x := v_0.Args[0]
+ y := v_1.Args[1]
+ x := v_1.Args[0]
if x.Op != OpAMD64VPANDN256 {
break
}
@@ -23320,18 +23555,19 @@
v.reset(OpAMD64SETB)
v0 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
v0.AddArg2(k, j)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (VPTEST x:(VPANDND512 j k) y))
+ // match: (SETEQ zero (VPTEST x:(VPANDND512 j k) y))
// cond: x == y && x.Uses == 2
- // result: (SETB (VPTEST k j))
+ // result: (SETB zero (VPTEST k j))
for {
- if v_0.Op != OpAMD64VPTEST {
+ zero := v_0
+ if v_1.Op != OpAMD64VPTEST {
break
}
- y := v_0.Args[1]
- x := v_0.Args[0]
+ y := v_1.Args[1]
+ x := v_1.Args[0]
if x.Op != OpAMD64VPANDND512 {
break
}
@@ -23343,18 +23579,19 @@
v.reset(OpAMD64SETB)
v0 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
v0.AddArg2(k, j)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETEQ (VPTEST x:(VPANDNQ512 j k) y))
+ // match: (SETEQ zero (VPTEST x:(VPANDNQ512 j k) y))
// cond: x == y && x.Uses == 2
- // result: (SETB (VPTEST k j))
+ // result: (SETB zero (VPTEST k j))
for {
- if v_0.Op != OpAMD64VPTEST {
+ zero := v_0
+ if v_1.Op != OpAMD64VPTEST {
break
}
- y := v_0.Args[1]
- x := v_0.Args[0]
+ y := v_1.Args[1]
+ x := v_1.Args[0]
if x.Op != OpAMD64VPANDNQ512 {
break
}
@@ -23366,7 +23603,7 @@
v.reset(OpAMD64SETB)
v0 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
v0.AddArg2(k, j)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
return false
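
The SETEQ rewrites above all preserve the zero operand through the usual flags simplifications. The payoff comes when the comparison result is consumed at full width; a minimal Go sketch of such a consumer (the name eqMask is illustrative, and the exact instruction sequence depends on the surrounding code):

    // eqMask is 1 when a == b and 0 otherwise. With SETEQ writing into
    // an already-zeroed 64-bit register, the byte result needs no
    // separate zero-extension before it is returned.
    func eqMask(a, b uint64) uint64 {
        var r uint64
        if a == b {
            r = 1
        }
        return r
    }
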
@@ -23954,62 +24191,64 @@
return false
}
func rewriteValueAMD64_OpAMD64SETG(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
- // match: (SETG (InvertFlags x))
- // result: (SETL x)
+ // match: (SETG zero (InvertFlags x))
+ // result: (SETL zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETL)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETG (FlagEQ))
+ // match: (SETG _ (FlagEQ))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETG (FlagLT_ULT))
+ // match: (SETG _ (FlagLT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETG (FlagLT_UGT))
+ // match: (SETG _ (FlagLT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETG (FlagGT_ULT))
+ // match: (SETG _ (FlagGT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETG (FlagGT_UGT))
+ // match: (SETG _ (FlagGT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
@@ -24019,13 +24258,15 @@
return false
}
func rewriteValueAMD64_OpAMD64SETGE(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
- // match: (SETGE c:(CMPQconst [128] x))
+ // match: (SETGE zero c:(CMPQconst [128] x))
// cond: c.Uses == 1
- // result: (SETG (CMPQconst [127] x))
+ // result: (SETG zero (CMPQconst [127] x))
for {
- c := v_0
+ zero := v_0
+ c := v_1
if c.Op != OpAMD64CMPQconst || auxIntToInt32(c.AuxInt) != 128 {
break
}
@@ -24037,14 +24278,15 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(127)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETGE c:(CMPLconst [128] x))
+ // match: (SETGE zero c:(CMPLconst [128] x))
// cond: c.Uses == 1
- // result: (SETG (CMPLconst [127] x))
+ // result: (SETG zero (CMPLconst [127] x))
for {
- c := v_0
+ zero := v_0
+ c := v_1
if c.Op != OpAMD64CMPLconst || auxIntToInt32(c.AuxInt) != 128 {
break
}
@@ -24056,64 +24298,65 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(127)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETGE (InvertFlags x))
- // result: (SETLE x)
+ // match: (SETGE zero (InvertFlags x))
+ // result: (SETLE zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETLE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETGE (FlagEQ))
+ // match: (SETGE _ (FlagEQ))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETGE (FlagLT_ULT))
+ // match: (SETGE _ (FlagLT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETGE (FlagLT_UGT))
+ // match: (SETGE _ (FlagLT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETGE (FlagGT_ULT))
+ // match: (SETGE _ (FlagGT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETGE (FlagGT_UGT))
+ // match: (SETGE _ (FlagGT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
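
The CMPQconst/CMPLconst [128] rules in SETGE (and the mirrored ones in SETL below) are an immediate-size optimization that survives the new operand unchanged: 128 is the first value that does not fit a signed 8-bit immediate, so x >= 128 is compared as x > 127. A Go sketch of code that hits this rule (hedged, since the rule only fires when the compare has a single use):

    // atLeast128 reports whether x >= 128; the generated compare can use
    // the constant 127, which fits in a signed 8-bit immediate field.
    func atLeast128(x int64) bool {
        return x >= 128
    }
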
@@ -24443,13 +24686,15 @@
return false
}
func rewriteValueAMD64_OpAMD64SETL(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
- // match: (SETL c:(CMPQconst [128] x))
+ // match: (SETL zero c:(CMPQconst [128] x))
// cond: c.Uses == 1
- // result: (SETLE (CMPQconst [127] x))
+ // result: (SETLE zero (CMPQconst [127] x))
for {
- c := v_0
+ zero := v_0
+ c := v_1
if c.Op != OpAMD64CMPQconst || auxIntToInt32(c.AuxInt) != 128 {
break
}
@@ -24461,14 +24706,15 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(127)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETL c:(CMPLconst [128] x))
+ // match: (SETL zero c:(CMPLconst [128] x))
// cond: c.Uses == 1
- // result: (SETLE (CMPLconst [127] x))
+ // result: (SETLE zero (CMPLconst [127] x))
for {
- c := v_0
+ zero := v_0
+ c := v_1
if c.Op != OpAMD64CMPLconst || auxIntToInt32(c.AuxInt) != 128 {
break
}
@@ -24480,64 +24726,65 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(127)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETL (InvertFlags x))
- // result: (SETG x)
+ // match: (SETL zero (InvertFlags x))
+ // result: (SETG zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETG)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETL (FlagEQ))
+ // match: (SETL _ (FlagEQ))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETL (FlagLT_ULT))
+ // match: (SETL _ (FlagLT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETL (FlagLT_UGT))
+ // match: (SETL _ (FlagLT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETL (FlagGT_ULT))
+ // match: (SETL _ (FlagGT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETL (FlagGT_UGT))
+ // match: (SETL _ (FlagGT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
@@ -24547,62 +24794,64 @@
return false
}
func rewriteValueAMD64_OpAMD64SETLE(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
- // match: (SETLE (InvertFlags x))
- // result: (SETGE x)
+ // match: (SETLE zero (InvertFlags x))
+ // result: (SETGE zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETGE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETLE (FlagEQ))
+ // match: (SETLE _ (FlagEQ))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETLE (FlagLT_ULT))
+ // match: (SETLE _ (FlagLT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETLE (FlagLT_UGT))
+ // match: (SETLE _ (FlagLT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETLE (FlagGT_ULT))
+ // match: (SETLE _ (FlagGT_ULT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETLE (FlagGT_UGT))
+ // match: (SETLE _ (FlagGT_UGT))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
@@ -24932,95 +25181,99 @@
return false
}
func rewriteValueAMD64_OpAMD64SETNE(v *Value) bool {
+ v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
- // match: (SETNE (TESTBconst [1] x))
+ // match: (SETNE _ (TESTBconst [1] x))
// result: (ANDLconst [1] x)
for {
- if v_0.Op != OpAMD64TESTBconst || auxIntToInt8(v_0.AuxInt) != 1 {
+ if v_1.Op != OpAMD64TESTBconst || auxIntToInt8(v_1.AuxInt) != 1 {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64ANDLconst)
v.AuxInt = int32ToAuxInt(1)
v.AddArg(x)
return true
}
- // match: (SETNE (TESTWconst [1] x))
+ // match: (SETNE _ (TESTWconst [1] x))
// result: (ANDLconst [1] x)
for {
- if v_0.Op != OpAMD64TESTWconst || auxIntToInt16(v_0.AuxInt) != 1 {
+ if v_1.Op != OpAMD64TESTWconst || auxIntToInt16(v_1.AuxInt) != 1 {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64ANDLconst)
v.AuxInt = int32ToAuxInt(1)
v.AddArg(x)
return true
}
- // match: (SETNE (TESTL (SHLL (MOVLconst [1]) x) y))
- // result: (SETB (BTL x y))
+ // match: (SETNE zero (TESTL (SHLL (MOVLconst [1]) x) y))
+ // result: (SETB zero (BTL x y))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- if v_0_0.Op != OpAMD64SHLL {
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ if v_1_0.Op != OpAMD64SHLL {
continue
}
- x := v_0_0.Args[1]
- v_0_0_0 := v_0_0.Args[0]
- if v_0_0_0.Op != OpAMD64MOVLconst || auxIntToInt32(v_0_0_0.AuxInt) != 1 {
+ x := v_1_0.Args[1]
+ v_1_0_0 := v_1_0.Args[0]
+ if v_1_0_0.Op != OpAMD64MOVLconst || auxIntToInt32(v_1_0_0.AuxInt) != 1 {
continue
}
- y := v_0_1
+ y := v_1_1
v.reset(OpAMD64SETB)
v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (TESTQ (SHLQ (MOVQconst [1]) x) y))
- // result: (SETB (BTQ x y))
+ // match: (SETNE zero (TESTQ (SHLQ (MOVQconst [1]) x) y))
+ // result: (SETB zero (BTQ x y))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- if v_0_0.Op != OpAMD64SHLQ {
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ if v_1_0.Op != OpAMD64SHLQ {
continue
}
- x := v_0_0.Args[1]
- v_0_0_0 := v_0_0.Args[0]
- if v_0_0_0.Op != OpAMD64MOVQconst || auxIntToInt64(v_0_0_0.AuxInt) != 1 {
+ x := v_1_0.Args[1]
+ v_1_0_0 := v_1_0.Args[0]
+ if v_1_0_0.Op != OpAMD64MOVQconst || auxIntToInt64(v_1_0_0.AuxInt) != 1 {
continue
}
- y := v_0_1
+ y := v_1_1
v.reset(OpAMD64SETB)
v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (TESTLconst [c] x))
+ // match: (SETNE zero (TESTLconst [c] x))
// cond: isUnsignedPowerOfTwo(uint32(c))
- // result: (SETB (BTLconst [int8(log32u(uint32(c)))] x))
+ // result: (SETB zero (BTLconst [int8(log32u(uint32(c)))] x))
for {
- if v_0.Op != OpAMD64TESTLconst {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTLconst {
break
}
- c := auxIntToInt32(v_0.AuxInt)
- x := v_0.Args[0]
+ c := auxIntToInt32(v_1.AuxInt)
+ x := v_1.Args[0]
if !(isUnsignedPowerOfTwo(uint32(c))) {
break
}
@@ -25028,18 +25281,19 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(int8(log32u(uint32(c))))
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETNE (TESTQconst [c] x))
+ // match: (SETNE zero (TESTQconst [c] x))
// cond: isUnsignedPowerOfTwo(uint64(c))
- // result: (SETB (BTQconst [int8(log32u(uint32(c)))] x))
+ // result: (SETB zero (BTQconst [int8(log32u(uint32(c)))] x))
for {
- if v_0.Op != OpAMD64TESTQconst {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQconst {
break
}
- c := auxIntToInt32(v_0.AuxInt)
- x := v_0.Args[0]
+ c := auxIntToInt32(v_1.AuxInt)
+ x := v_1.Args[0]
if !(isUnsignedPowerOfTwo(uint64(c))) {
break
}
@@ -25047,25 +25301,26 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(int8(log32u(uint32(c))))
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETNE (TESTQ (MOVQconst [c]) x))
+ // match: (SETNE zero (TESTQ (MOVQconst [c]) x))
// cond: isUnsignedPowerOfTwo(uint64(c))
- // result: (SETB (BTQconst [int8(log64u(uint64(c)))] x))
+ // result: (SETB zero (BTQconst [int8(log64u(uint64(c)))] x))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- if v_0_0.Op != OpAMD64MOVQconst {
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ if v_1_0.Op != OpAMD64MOVQconst {
continue
}
- c := auxIntToInt64(v_0_0.AuxInt)
- x := v_0_1
+ c := auxIntToInt64(v_1_0.AuxInt)
+ x := v_1_1
if !(isUnsignedPowerOfTwo(uint64(c))) {
continue
}
@@ -25073,18 +25328,19 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(int8(log64u(uint64(c))))
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (CMPLconst [1] s:(ANDLconst [1] _)))
- // result: (SETEQ (CMPLconst [0] s))
+ // match: (SETNE zero (CMPLconst [1] s:(ANDLconst [1] _)))
+ // result: (SETEQ zero (CMPLconst [0] s))
for {
- if v_0.Op != OpAMD64CMPLconst || auxIntToInt32(v_0.AuxInt) != 1 {
+ zero := v_0
+ if v_1.Op != OpAMD64CMPLconst || auxIntToInt32(v_1.AuxInt) != 1 {
break
}
- s := v_0.Args[0]
+ s := v_1.Args[0]
if s.Op != OpAMD64ANDLconst || auxIntToInt32(s.AuxInt) != 1 {
break
}
@@ -25092,16 +25348,17 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(0)
v0.AddArg(s)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETNE (CMPQconst [1] s:(ANDQconst [1] _)))
- // result: (SETEQ (CMPQconst [0] s))
+ // match: (SETNE zero (CMPQconst [1] s:(ANDQconst [1] _)))
+ // result: (SETEQ zero (CMPQconst [0] s))
for {
- if v_0.Op != OpAMD64CMPQconst || auxIntToInt32(v_0.AuxInt) != 1 {
+ zero := v_0
+ if v_1.Op != OpAMD64CMPQconst || auxIntToInt32(v_1.AuxInt) != 1 {
break
}
- s := v_0.Args[0]
+ s := v_1.Args[0]
if s.Op != OpAMD64ANDQconst || auxIntToInt32(s.AuxInt) != 1 {
break
}
@@ -25109,21 +25366,22 @@
v0 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v0.AuxInt = int32ToAuxInt(0)
v0.AddArg(s)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
- // match: (SETNE (TESTQ z1:(SHLQconst [63] (SHRQconst [63] x)) z2))
+ // match: (SETNE zero (TESTQ z1:(SHLQconst [63] (SHRQconst [63] x)) z2))
// cond: z1==z2
- // result: (SETB (BTQconst [63] x))
+ // result: (SETB zero (BTQconst [63] x))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHLQconst || auxIntToInt8(z1.AuxInt) != 63 {
continue
}
@@ -25132,7 +25390,7 @@
continue
}
x := z1_0.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -25140,23 +25398,24 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(63)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (TESTL z1:(SHLLconst [31] (SHRQconst [31] x)) z2))
+ // match: (SETNE zero (TESTL z1:(SHLLconst [31] (SHRQconst [31] x)) z2))
// cond: z1==z2
- // result: (SETB (BTQconst [31] x))
+ // result: (SETB zero (BTQconst [31] x))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHLLconst || auxIntToInt8(z1.AuxInt) != 31 {
continue
}
@@ -25165,7 +25424,7 @@
continue
}
x := z1_0.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -25173,23 +25432,24 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(31)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (TESTQ z1:(SHRQconst [63] (SHLQconst [63] x)) z2))
+ // match: (SETNE zero (TESTQ z1:(SHRQconst [63] (SHLQconst [63] x)) z2))
// cond: z1==z2
- // result: (SETB (BTQconst [0] x))
+ // result: (SETB zero (BTQconst [0] x))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHRQconst || auxIntToInt8(z1.AuxInt) != 63 {
continue
}
@@ -25198,7 +25458,7 @@
continue
}
x := z1_0.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -25206,23 +25466,24 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(0)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (TESTL z1:(SHRLconst [31] (SHLLconst [31] x)) z2))
+ // match: (SETNE zero (TESTL z1:(SHRLconst [31] (SHLLconst [31] x)) z2))
// cond: z1==z2
- // result: (SETB (BTLconst [0] x))
+ // result: (SETB zero (BTLconst [0] x))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHRLconst || auxIntToInt8(z1.AuxInt) != 31 {
continue
}
@@ -25231,7 +25492,7 @@
continue
}
x := z1_0.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -25239,28 +25500,29 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(0)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (TESTQ z1:(SHRQconst [63] x) z2))
+ // match: (SETNE zero (TESTQ z1:(SHRQconst [63] x) z2))
// cond: z1==z2
- // result: (SETB (BTQconst [63] x))
+ // result: (SETB zero (BTQconst [63] x))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHRQconst || auxIntToInt8(z1.AuxInt) != 63 {
continue
}
x := z1.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -25268,28 +25530,29 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(63)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (TESTL z1:(SHRLconst [31] x) z2))
+ // match: (SETNE zero (TESTL z1:(SHRLconst [31] x) z2))
// cond: z1==z2
- // result: (SETB (BTLconst [31] x))
+ // result: (SETB zero (BTLconst [31] x))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- z1 := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ z1 := v_1_0
if z1.Op != OpAMD64SHRLconst || auxIntToInt8(z1.AuxInt) != 31 {
continue
}
x := z1.Args[0]
- z2 := v_0_1
+ z2 := v_1_1
if !(z1 == z2) {
continue
}
@@ -25297,120 +25560,123 @@
v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = int8ToAuxInt(31)
v0.AddArg(x)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (InvertFlags x))
- // result: (SETNE x)
+ // match: (SETNE zero (InvertFlags x))
+ // result: (SETNE zero x)
for {
- if v_0.Op != OpAMD64InvertFlags {
+ zero := v_0
+ if v_1.Op != OpAMD64InvertFlags {
break
}
- x := v_0.Args[0]
+ x := v_1.Args[0]
v.reset(OpAMD64SETNE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (SETNE (FlagEQ))
+ // match: (SETNE _ (FlagEQ))
// result: (MOVLconst [0])
for {
- if v_0.Op != OpAMD64FlagEQ {
+ if v_1.Op != OpAMD64FlagEQ {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(0)
return true
}
- // match: (SETNE (FlagLT_ULT))
+ // match: (SETNE _ (FlagLT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_ULT {
+ if v_1.Op != OpAMD64FlagLT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETNE (FlagLT_UGT))
+ // match: (SETNE _ (FlagLT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagLT_UGT {
+ if v_1.Op != OpAMD64FlagLT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETNE (FlagGT_ULT))
+ // match: (SETNE _ (FlagGT_ULT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_ULT {
+ if v_1.Op != OpAMD64FlagGT_ULT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETNE (FlagGT_UGT))
+ // match: (SETNE _ (FlagGT_UGT))
// result: (MOVLconst [1])
for {
- if v_0.Op != OpAMD64FlagGT_UGT {
+ if v_1.Op != OpAMD64FlagGT_UGT {
break
}
v.reset(OpAMD64MOVLconst)
v.AuxInt = int32ToAuxInt(1)
return true
}
- // match: (SETNE (TESTQ s:(Select0 blsr:(BLSRQ _)) s))
- // result: (SETNE (Select1 <types.TypeFlags> blsr))
+ // match: (SETNE zero (TESTQ s:(Select0 blsr:(BLSRQ _)) s))
+ // result: (SETNE zero (Select1 <types.TypeFlags> blsr))
for {
- if v_0.Op != OpAMD64TESTQ {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTQ {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- s := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ s := v_1_0
if s.Op != OpSelect0 {
continue
}
blsr := s.Args[0]
- if blsr.Op != OpAMD64BLSRQ || s != v_0_1 {
+ if blsr.Op != OpAMD64BLSRQ || s != v_1_1 {
continue
}
v.reset(OpAMD64SETNE)
v0 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
v0.AddArg(blsr)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
}
- // match: (SETNE (TESTL s:(Select0 blsr:(BLSRL _)) s))
- // result: (SETNE (Select1 <types.TypeFlags> blsr))
+ // match: (SETNE zero (TESTL s:(Select0 blsr:(BLSRL _)) s))
+ // result: (SETNE zero (Select1 <types.TypeFlags> blsr))
for {
- if v_0.Op != OpAMD64TESTL {
+ zero := v_0
+ if v_1.Op != OpAMD64TESTL {
break
}
- _ = v_0.Args[1]
- v_0_0 := v_0.Args[0]
- v_0_1 := v_0.Args[1]
- for _i0 := 0; _i0 <= 1; _i0, v_0_0, v_0_1 = _i0+1, v_0_1, v_0_0 {
- s := v_0_0
+ _ = v_1.Args[1]
+ v_1_0 := v_1.Args[0]
+ v_1_1 := v_1.Args[1]
+ for _i0 := 0; _i0 <= 1; _i0, v_1_0, v_1_1 = _i0+1, v_1_1, v_1_0 {
+ s := v_1_0
if s.Op != OpSelect0 {
continue
}
blsr := s.Args[0]
- if blsr.Op != OpAMD64BLSRL || s != v_0_1 {
+ if blsr.Op != OpAMD64BLSRL || s != v_1_1 {
continue
}
v.reset(OpAMD64SETNE)
v0 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
v0.AddArg(blsr)
- v.AddArg(v0)
+ v.AddArg2(zero, v0)
return true
}
break
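
SETNE likewise keeps its bit-test strength reductions: a TEST against a single-bit mask still becomes a BT feeding SETB or SETAE, now with the zero operand riding along. A Go sketch that exercises this path (illustrative only):

    // bitSet reports whether bit k of x is set. The mask test can lower
    // to BTQ plus a SETB that writes into a zeroed register.
    func bitSet(x uint64, k uint) bool {
        return x&(1<<k) != 0
    }
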
@@ -65023,114 +65289,124 @@
}
func rewriteValueAMD64_OpAMD64XORLconst(v *Value) bool {
v_0 := v.Args[0]
- // match: (XORLconst [1] (SETNE x))
- // result: (SETEQ x)
+ // match: (XORLconst [1] (SETNE zero x))
+ // result: (SETEQ zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETNE {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETEQ)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (XORLconst [1] (SETEQ x))
- // result: (SETNE x)
+ // match: (XORLconst [1] (SETEQ zero x))
+ // result: (SETNE zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETEQ {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETNE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (XORLconst [1] (SETL x))
- // result: (SETGE x)
+ // match: (XORLconst [1] (SETL zero x))
+ // result: (SETGE zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETL {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETGE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (XORLconst [1] (SETGE x))
- // result: (SETL x)
+ // match: (XORLconst [1] (SETGE zero x))
+ // result: (SETL zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETGE {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETL)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (XORLconst [1] (SETLE x))
- // result: (SETG x)
+ // match: (XORLconst [1] (SETLE zero x))
+ // result: (SETG zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETLE {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETG)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (XORLconst [1] (SETG x))
- // result: (SETLE x)
+ // match: (XORLconst [1] (SETG zero x))
+ // result: (SETLE zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETG {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETLE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (XORLconst [1] (SETB x))
- // result: (SETAE x)
+ // match: (XORLconst [1] (SETB zero x))
+ // result: (SETAE zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETB {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETAE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (XORLconst [1] (SETAE x))
- // result: (SETB x)
+ // match: (XORLconst [1] (SETAE zero x))
+ // result: (SETB zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETAE {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETB)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (XORLconst [1] (SETBE x))
- // result: (SETA x)
+ // match: (XORLconst [1] (SETBE zero x))
+ // result: (SETA zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETBE {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETA)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
- // match: (XORLconst [1] (SETA x))
- // result: (SETBE x)
+ // match: (XORLconst [1] (SETA zero x))
+ // result: (SETBE zero x)
for {
if auxIntToInt32(v.AuxInt) != 1 || v_0.Op != OpAMD64SETA {
break
}
- x := v_0.Args[0]
+ x := v_0.Args[1]
+ zero := v_0.Args[0]
v.reset(OpAMD64SETBE)
- v.AddArg(x)
+ v.AddArg2(zero, x)
return true
}
// match: (XORLconst [c] (XORLconst [d] x))
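
The XORLconst [1] rules show that negating a materialized comparison still folds into the condition code itself; only the flipped SETcc survives, with the zero operand passed through. A sketch (illustrative):

    // notLess can lower to a single SETGE into a zeroed register rather
    // than SETL followed by XORL $1.
    func notLess(a, b int64) bool {
        return !(a < b)
    }
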
@@ -66940,7 +67216,7 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
- // match: (CondSelect <t> x y (SETEQ cond))
+ // match: (CondSelect <t> x y (SETEQ _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQEQ y x cond)
for {
@@ -66950,7 +67226,7 @@
if v_2.Op != OpAMD64SETEQ {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -66958,7 +67234,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETNE cond))
+ // match: (CondSelect <t> x y (SETNE _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQNE y x cond)
for {
@@ -66968,7 +67244,7 @@
if v_2.Op != OpAMD64SETNE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -66976,7 +67252,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETL cond))
+ // match: (CondSelect <t> x y (SETL _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQLT y x cond)
for {
@@ -66986,7 +67262,7 @@
if v_2.Op != OpAMD64SETL {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -66994,7 +67270,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETG cond))
+ // match: (CondSelect <t> x y (SETG _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQGT y x cond)
for {
@@ -67004,7 +67280,7 @@
if v_2.Op != OpAMD64SETG {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -67012,7 +67288,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETLE cond))
+ // match: (CondSelect <t> x y (SETLE _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQLE y x cond)
for {
@@ -67022,7 +67298,7 @@
if v_2.Op != OpAMD64SETLE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -67030,7 +67306,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETGE cond))
+ // match: (CondSelect <t> x y (SETGE _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQGE y x cond)
for {
@@ -67040,7 +67316,7 @@
if v_2.Op != OpAMD64SETGE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -67048,7 +67324,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETA cond))
+ // match: (CondSelect <t> x y (SETA _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQHI y x cond)
for {
@@ -67058,7 +67334,7 @@
if v_2.Op != OpAMD64SETA {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -67066,7 +67342,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETB cond))
+ // match: (CondSelect <t> x y (SETB _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQCS y x cond)
for {
@@ -67076,7 +67352,7 @@
if v_2.Op != OpAMD64SETB {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -67084,7 +67360,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETAE cond))
+ // match: (CondSelect <t> x y (SETAE _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQCC y x cond)
for {
@@ -67094,7 +67370,7 @@
if v_2.Op != OpAMD64SETAE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -67102,7 +67378,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETBE cond))
+ // match: (CondSelect <t> x y (SETBE _ cond))
// cond: (is64BitInt(t) || isPtr(t))
// result: (CMOVQLS y x cond)
for {
@@ -67112,7 +67388,7 @@
if v_2.Op != OpAMD64SETBE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is64BitInt(t) || isPtr(t)) {
break
}
@@ -67192,7 +67468,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETEQ cond))
+ // match: (CondSelect <t> x y (SETEQ _ cond))
// cond: is32BitInt(t)
// result: (CMOVLEQ y x cond)
for {
@@ -67202,7 +67478,7 @@
if v_2.Op != OpAMD64SETEQ {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67210,7 +67486,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETNE cond))
+ // match: (CondSelect <t> x y (SETNE _ cond))
// cond: is32BitInt(t)
// result: (CMOVLNE y x cond)
for {
@@ -67220,7 +67496,7 @@
if v_2.Op != OpAMD64SETNE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67228,7 +67504,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETL cond))
+ // match: (CondSelect <t> x y (SETL _ cond))
// cond: is32BitInt(t)
// result: (CMOVLLT y x cond)
for {
@@ -67238,7 +67514,7 @@
if v_2.Op != OpAMD64SETL {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67246,7 +67522,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETG cond))
+ // match: (CondSelect <t> x y (SETG _ cond))
// cond: is32BitInt(t)
// result: (CMOVLGT y x cond)
for {
@@ -67256,7 +67532,7 @@
if v_2.Op != OpAMD64SETG {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67264,7 +67540,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETLE cond))
+ // match: (CondSelect <t> x y (SETLE _ cond))
// cond: is32BitInt(t)
// result: (CMOVLLE y x cond)
for {
@@ -67274,7 +67550,7 @@
if v_2.Op != OpAMD64SETLE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67282,7 +67558,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETGE cond))
+ // match: (CondSelect <t> x y (SETGE _ cond))
// cond: is32BitInt(t)
// result: (CMOVLGE y x cond)
for {
@@ -67292,7 +67568,7 @@
if v_2.Op != OpAMD64SETGE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67300,7 +67576,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETA cond))
+ // match: (CondSelect <t> x y (SETA _ cond))
// cond: is32BitInt(t)
// result: (CMOVLHI y x cond)
for {
@@ -67310,7 +67586,7 @@
if v_2.Op != OpAMD64SETA {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67318,7 +67594,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETB cond))
+ // match: (CondSelect <t> x y (SETB _ cond))
// cond: is32BitInt(t)
// result: (CMOVLCS y x cond)
for {
@@ -67328,7 +67604,7 @@
if v_2.Op != OpAMD64SETB {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67336,7 +67612,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETAE cond))
+ // match: (CondSelect <t> x y (SETAE _ cond))
// cond: is32BitInt(t)
// result: (CMOVLCC y x cond)
for {
@@ -67346,7 +67622,7 @@
if v_2.Op != OpAMD64SETAE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67354,7 +67630,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETBE cond))
+ // match: (CondSelect <t> x y (SETBE _ cond))
// cond: is32BitInt(t)
// result: (CMOVLLS y x cond)
for {
@@ -67364,7 +67640,7 @@
if v_2.Op != OpAMD64SETBE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is32BitInt(t)) {
break
}
@@ -67444,7 +67720,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETEQ cond))
+ // match: (CondSelect <t> x y (SETEQ _ cond))
// cond: is16BitInt(t)
// result: (CMOVWEQ y x cond)
for {
@@ -67454,7 +67730,7 @@
if v_2.Op != OpAMD64SETEQ {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -67462,7 +67738,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETNE cond))
+ // match: (CondSelect <t> x y (SETNE _ cond))
// cond: is16BitInt(t)
// result: (CMOVWNE y x cond)
for {
@@ -67472,7 +67748,7 @@
if v_2.Op != OpAMD64SETNE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -67480,7 +67756,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETL cond))
+ // match: (CondSelect <t> x y (SETL _ cond))
// cond: is16BitInt(t)
// result: (CMOVWLT y x cond)
for {
@@ -67490,7 +67766,7 @@
if v_2.Op != OpAMD64SETL {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -67498,7 +67774,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETG cond))
+ // match: (CondSelect <t> x y (SETG _ cond))
// cond: is16BitInt(t)
// result: (CMOVWGT y x cond)
for {
@@ -67508,7 +67784,7 @@
if v_2.Op != OpAMD64SETG {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -67516,7 +67792,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETLE cond))
+ // match: (CondSelect <t> x y (SETLE _ cond))
// cond: is16BitInt(t)
// result: (CMOVWLE y x cond)
for {
@@ -67526,7 +67802,7 @@
if v_2.Op != OpAMD64SETLE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -67534,7 +67810,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETGE cond))
+ // match: (CondSelect <t> x y (SETGE _ cond))
// cond: is16BitInt(t)
// result: (CMOVWGE y x cond)
for {
@@ -67544,7 +67820,7 @@
if v_2.Op != OpAMD64SETGE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -67552,7 +67828,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETA cond))
+ // match: (CondSelect <t> x y (SETA _ cond))
// cond: is16BitInt(t)
// result: (CMOVWHI y x cond)
for {
@@ -67562,7 +67838,7 @@
if v_2.Op != OpAMD64SETA {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -67570,7 +67846,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETB cond))
+ // match: (CondSelect <t> x y (SETB _ cond))
// cond: is16BitInt(t)
// result: (CMOVWCS y x cond)
for {
@@ -67580,7 +67856,7 @@
if v_2.Op != OpAMD64SETB {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -67588,7 +67864,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETAE cond))
+ // match: (CondSelect <t> x y (SETAE _ cond))
// cond: is16BitInt(t)
// result: (CMOVWCC y x cond)
for {
@@ -67598,7 +67874,7 @@
if v_2.Op != OpAMD64SETAE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -67606,7 +67882,7 @@
v.AddArg3(y, x, cond)
return true
}
- // match: (CondSelect <t> x y (SETBE cond))
+ // match: (CondSelect <t> x y (SETBE _ cond))
// cond: is16BitInt(t)
// result: (CMOVWLS y x cond)
for {
@@ -67616,7 +67892,7 @@
if v_2.Op != OpAMD64SETBE {
break
}
- cond := v_2.Args[0]
+ cond := v_2.Args[1]
if !(is16BitInt(t)) {
break
}
@@ -68897,15 +69173,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Eq16 x y)
- // result: (SETEQ (CMPW x y))
+ // result: (SETEQ <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPW x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -68913,15 +69193,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Eq32 x y)
- // result: (SETEQ (CMPL x y))
+ // result: (SETEQ <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPL x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -68945,15 +69229,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Eq64 x y)
- // result: (SETEQ (CMPQ x y))
+ // result: (SETEQ <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -68977,15 +69265,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Eq8 x y)
- // result: (SETEQ (CMPB x y))
+ // result: (SETEQ <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPB x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -68993,15 +69285,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (EqB x y)
- // result: (SETEQ (CMPB x y))
+ // result: (SETEQ <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPB x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -69009,15 +69305,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (EqPtr x y)
- // result: (SETEQ (CMPQ x y))
+ // result: (SETEQ <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
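
Aside: every comparison lowering above repeats the same builder shape, so it reads easier once condensed. A minimal hand-written sketch, assuming the cmd/compile/internal/ssa package context (the helper name lowerCmpToSETcc is illustrative, not part of the change):

// lowerCmpToSETcc shows the shape the generated rewrites build:
// (SETcc <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPx x y)).
func lowerCmpToSETcc(v *Value, setOp, cmpOp Op) {
	x, y := v.Args[0], v.Args[1] // capture before reset clears the args
	b := v.Block
	typ := &b.Func.Config.Types
	v.reset(setOp)      // e.g. OpAMD64SETEQ
	v.Type = typ.UInt64 // the result now occupies a full 64-bit register
	zero := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
	zero.AuxInt = int64ToAuxInt(0) // the already-zero destination argument
	cmp := b.NewValue0(v.Pos, cmpOp, types.TypeFlags) // e.g. OpAMD64CMPQ
	cmp.AddArg2(x, y)
	v.AddArg2(zero, cmp) // zero first, flags second
}
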
@@ -70928,16 +71228,19 @@
b := v.Block
typ := &b.Func.Config.Types
// match: (HasCPUFeature {s})
- // result: (SETNE (CMPLconst [0] (LoweredHasCPUFeature {s})))
+ // result: (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPLconst [0] (LoweredHasCPUFeature {s})))
for {
s := auxToSym(v.Aux)
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
- v0.AuxInt = int32ToAuxInt(0)
- v1 := b.NewValue0(v.Pos, OpAMD64LoweredHasCPUFeature, typ.UInt64)
- v1.Aux = symToAux(s)
- v0.AddArg(v1)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
+ v1.AuxInt = int32ToAuxInt(0)
+ v2 := b.NewValue0(v.Pos, OpAMD64LoweredHasCPUFeature, typ.UInt64)
+ v2.Aux = symToAux(s)
+ v1.AddArg(v2)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -70945,15 +71248,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (IsInBounds idx len)
- // result: (SETB (CMPQ idx len))
+ // result: (SETB <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ idx len))
for {
idx := v_0
len := v_1
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(idx, len)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(idx, len)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71052,14 +71359,18 @@
func rewriteValueAMD64_OpIsNonNil(v *Value) bool {
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (IsNonNil p)
- // result: (SETNE (TESTQ p p))
+ // result: (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (TESTQ p p))
for {
p := v_0
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64TESTQ, types.TypeFlags)
- v0.AddArg2(p, p)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64TESTQ, types.TypeFlags)
+ v1.AddArg2(p, p)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71067,29 +71378,37 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (IsSliceInBounds idx len)
- // result: (SETBE (CMPQ idx len))
+ // result: (SETBE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ idx len))
for {
idx := v_0
len := v_1
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(idx, len)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(idx, len)
+ v.AddArg2(v0, v1)
return true
}
}
func rewriteValueAMD64_OpIsZeroVec(v *Value) bool {
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (IsZeroVec x)
- // result: (SETEQ (VPTEST x x))
+ // result: (SETEQ <typ.UInt64> (Const64 <typ.UInt64> [0]) (VPTEST x x))
for {
x := v_0
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
- v0.AddArg2(x, x)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64VPTEST, types.TypeFlags)
+ v1.AddArg2(x, x)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71097,15 +71416,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Leq16 x y)
- // result: (SETLE (CMPW x y))
+ // result: (SETLE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPW x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETLE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71113,15 +71436,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Leq16U x y)
- // result: (SETBE (CMPW x y))
+ // result: (SETBE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPW x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71129,15 +71456,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Leq32 x y)
- // result: (SETLE (CMPL x y))
+ // result: (SETLE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPL x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETLE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71161,15 +71492,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Leq32U x y)
- // result: (SETBE (CMPL x y))
+ // result: (SETBE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPL x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71177,15 +71512,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Leq64 x y)
- // result: (SETLE (CMPQ x y))
+ // result: (SETLE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETLE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71209,15 +71548,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Leq64U x y)
- // result: (SETBE (CMPQ x y))
+ // result: (SETBE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71225,15 +71568,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Leq8 x y)
- // result: (SETLE (CMPB x y))
+ // result: (SETLE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPB x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETLE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71241,15 +71588,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Leq8U x y)
- // result: (SETBE (CMPB x y))
+ // result: (SETBE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPB x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71257,15 +71608,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Less16 x y)
- // result: (SETL (CMPW x y))
+ // result: (SETL <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPW x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETL)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71273,15 +71628,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Less16U x y)
- // result: (SETB (CMPW x y))
+ // result: (SETB <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPW x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71289,15 +71648,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Less32 x y)
- // result: (SETL (CMPL x y))
+ // result: (SETL <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPL x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETL)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71321,15 +71684,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Less32U x y)
- // result: (SETB (CMPL x y))
+ // result: (SETB <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPL x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71337,15 +71704,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Less64 x y)
- // result: (SETL (CMPQ x y))
+ // result: (SETL <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETL)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71369,15 +71740,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Less64U x y)
- // result: (SETB (CMPQ x y))
+ // result: (SETB <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71385,15 +71760,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Less8 x y)
- // result: (SETL (CMPB x y))
+ // result: (SETL <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPB x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETL)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -71401,15 +71780,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Less8U x y)
- // result: (SETB (CMPB x y))
+ // result: (SETB <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPB x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -73477,15 +73860,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Neq16 x y)
- // result: (SETNE (CMPW x y))
+ // result: (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPW x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -73493,15 +73880,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Neq32 x y)
- // result: (SETNE (CMPL x y))
+ // result: (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPL x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -73525,15 +73916,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Neq64 x y)
- // result: (SETNE (CMPQ x y))
+ // result: (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -73557,15 +73952,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (Neq8 x y)
- // result: (SETNE (CMPB x y))
+ // result: (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPB x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -73573,15 +73972,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (NeqB x y)
- // result: (SETNE (CMPB x y))
+ // result: (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPB x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -73589,15 +73992,19 @@
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
+ typ := &b.Func.Config.Types
// match: (NeqPtr x y)
- // result: (SETNE (CMPQ x y))
+ // result: (SETNE <typ.UInt64> (Const64 <typ.UInt64> [0]) (CMPQ x y))
for {
x := v_0
y := v_1
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
- v0.AddArg2(x, y)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
+ v1.AddArg2(x, y)
+ v.AddArg2(v0, v1)
return true
}
}
@@ -75600,7 +76007,7 @@
b := v.Block
typ := &b.Func.Config.Types
// match: (Select1 (Mul64uover x y))
- // result: (SETO (Select1 <types.TypeFlags> (MULQU x y)))
+ // result: (SETO <typ.UInt64> (Const64 <typ.UInt64> [0]) (Select1 <types.TypeFlags> (MULQU x y)))
for {
if v_0.Op != OpMul64uover {
break
@@ -75608,15 +76015,18 @@
y := v_0.Args[1]
x := v_0.Args[0]
v.reset(OpAMD64SETO)
- v0 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
- v1 := b.NewValue0(v.Pos, OpAMD64MULQU, types.NewTuple(typ.UInt64, types.TypeFlags))
- v1.AddArg2(x, y)
- v0.AddArg(v1)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64MULQU, types.NewTuple(typ.UInt64, types.TypeFlags))
+ v2.AddArg2(x, y)
+ v1.AddArg(v2)
+ v.AddArg2(v0, v1)
return true
}
// match: (Select1 (Mul32uover x y))
- // result: (SETO (Select1 <types.TypeFlags> (MULLU x y)))
+ // result: (SETO <typ.UInt64> (Const64 <typ.UInt64> [0]) (Select1 <types.TypeFlags> (MULLU x y)))
for {
if v_0.Op != OpMul32uover {
break
@@ -75624,11 +76034,14 @@
y := v_0.Args[1]
x := v_0.Args[0]
v.reset(OpAMD64SETO)
- v0 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
- v1 := b.NewValue0(v.Pos, OpAMD64MULLU, types.NewTuple(typ.UInt32, types.TypeFlags))
- v1.AddArg2(x, y)
- v0.AddArg(v1)
- v.AddArg(v0)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
+ v0.AuxInt = int64ToAuxInt(0)
+ v1 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64MULLU, types.NewTuple(typ.UInt32, types.TypeFlags))
+ v2.AddArg2(x, y)
+ v1.AddArg(v2)
+ v.AddArg2(v0, v1)
return true
}
// match: (Select1 (Add64carry x y c))
@@ -78131,91 +78544,91 @@
return true
}
case BlockIf:
- // match: (If (SETL cmp) yes no)
+ // match: (If (SETL _ cmp) yes no)
// result: (LT cmp yes no)
for b.Controls[0].Op == OpAMD64SETL {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64LT, cmp)
return true
}
- // match: (If (SETLE cmp) yes no)
+ // match: (If (SETLE _ cmp) yes no)
// result: (LE cmp yes no)
for b.Controls[0].Op == OpAMD64SETLE {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64LE, cmp)
return true
}
- // match: (If (SETG cmp) yes no)
+ // match: (If (SETG _ cmp) yes no)
// result: (GT cmp yes no)
for b.Controls[0].Op == OpAMD64SETG {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64GT, cmp)
return true
}
- // match: (If (SETGE cmp) yes no)
+ // match: (If (SETGE _ cmp) yes no)
// result: (GE cmp yes no)
for b.Controls[0].Op == OpAMD64SETGE {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64GE, cmp)
return true
}
- // match: (If (SETEQ cmp) yes no)
+ // match: (If (SETEQ _ cmp) yes no)
// result: (EQ cmp yes no)
for b.Controls[0].Op == OpAMD64SETEQ {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64EQ, cmp)
return true
}
- // match: (If (SETNE cmp) yes no)
+ // match: (If (SETNE _ cmp) yes no)
// result: (NE cmp yes no)
for b.Controls[0].Op == OpAMD64SETNE {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64NE, cmp)
return true
}
- // match: (If (SETB cmp) yes no)
+ // match: (If (SETB _ cmp) yes no)
// result: (ULT cmp yes no)
for b.Controls[0].Op == OpAMD64SETB {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64ULT, cmp)
return true
}
- // match: (If (SETBE cmp) yes no)
+ // match: (If (SETBE _ cmp) yes no)
// result: (ULE cmp yes no)
for b.Controls[0].Op == OpAMD64SETBE {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64ULE, cmp)
return true
}
- // match: (If (SETA cmp) yes no)
+ // match: (If (SETA _ cmp) yes no)
// result: (UGT cmp yes no)
for b.Controls[0].Op == OpAMD64SETA {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64UGT, cmp)
return true
}
- // match: (If (SETAE cmp) yes no)
+ // match: (If (SETAE _ cmp) yes no)
// result: (UGE cmp yes no)
for b.Controls[0].Op == OpAMD64SETAE {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64UGE, cmp)
return true
}
- // match: (If (SETO cmp) yes no)
+ // match: (If (SETO _ cmp) yes no)
// result: (OS cmp yes no)
for b.Controls[0].Op == OpAMD64SETO {
v_0 := b.Controls[0]
- cmp := v_0.Args[0]
+ cmp := v_0.Args[1]
b.resetWithControl(BlockAMD64OS, cmp)
return true
}
@@ -78393,7 +78806,7 @@
return true
}
case BlockAMD64NE:
- // match: (NE (TESTB (SETL cmp) (SETL cmp)) yes no)
+ // match: (NE (TESTB (SETL _ cmp) (SETL _ cmp)) yes no)
// result: (LT cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78402,15 +78815,19 @@
if v_0_0.Op != OpAMD64SETL {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETL || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETL {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64LT, cmp)
return true
}
- // match: (NE (TESTB (SETLE cmp) (SETLE cmp)) yes no)
+ // match: (NE (TESTB (SETLE _ cmp) (SETLE _ cmp)) yes no)
// result: (LE cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78419,15 +78836,19 @@
if v_0_0.Op != OpAMD64SETLE {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETLE || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETLE {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64LE, cmp)
return true
}
- // match: (NE (TESTB (SETG cmp) (SETG cmp)) yes no)
+ // match: (NE (TESTB (SETG _ cmp) (SETG _ cmp)) yes no)
// result: (GT cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78436,15 +78857,19 @@
if v_0_0.Op != OpAMD64SETG {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETG || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETG {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64GT, cmp)
return true
}
- // match: (NE (TESTB (SETGE cmp) (SETGE cmp)) yes no)
+ // match: (NE (TESTB (SETGE _ cmp) (SETGE _ cmp)) yes no)
// result: (GE cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78453,15 +78878,19 @@
if v_0_0.Op != OpAMD64SETGE {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETGE || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETGE {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64GE, cmp)
return true
}
- // match: (NE (TESTB (SETEQ cmp) (SETEQ cmp)) yes no)
+ // match: (NE (TESTB (SETEQ _ cmp) (SETEQ _ cmp)) yes no)
// result: (EQ cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78470,15 +78899,19 @@
if v_0_0.Op != OpAMD64SETEQ {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETEQ || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETEQ {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64EQ, cmp)
return true
}
- // match: (NE (TESTB (SETNE cmp) (SETNE cmp)) yes no)
+ // match: (NE (TESTB (SETNE _ cmp) (SETNE _ cmp)) yes no)
// result: (NE cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78487,15 +78920,19 @@
if v_0_0.Op != OpAMD64SETNE {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETNE || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETNE {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64NE, cmp)
return true
}
- // match: (NE (TESTB (SETB cmp) (SETB cmp)) yes no)
+ // match: (NE (TESTB (SETB _ cmp) (SETB _ cmp)) yes no)
// result: (ULT cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78504,15 +78941,19 @@
if v_0_0.Op != OpAMD64SETB {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETB || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETB {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64ULT, cmp)
return true
}
- // match: (NE (TESTB (SETBE cmp) (SETBE cmp)) yes no)
+ // match: (NE (TESTB (SETBE _ cmp) (SETBE _ cmp)) yes no)
// result: (ULE cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78521,15 +78962,19 @@
if v_0_0.Op != OpAMD64SETBE {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETBE || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETBE {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64ULE, cmp)
return true
}
- // match: (NE (TESTB (SETA cmp) (SETA cmp)) yes no)
+ // match: (NE (TESTB (SETA _ cmp) (SETA _ cmp)) yes no)
// result: (UGT cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78538,15 +78983,19 @@
if v_0_0.Op != OpAMD64SETA {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETA || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETA {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64UGT, cmp)
return true
}
- // match: (NE (TESTB (SETAE cmp) (SETAE cmp)) yes no)
+ // match: (NE (TESTB (SETAE _ cmp) (SETAE _ cmp)) yes no)
// result: (UGE cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78555,15 +79004,19 @@
if v_0_0.Op != OpAMD64SETAE {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETAE || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETAE {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64UGE, cmp)
return true
}
- // match: (NE (TESTB (SETO cmp) (SETO cmp)) yes no)
+ // match: (NE (TESTB (SETO _ cmp) (SETO _ cmp)) yes no)
// result: (OS cmp yes no)
for b.Controls[0].Op == OpAMD64TESTB {
v_0 := b.Controls[0]
@@ -78572,9 +79025,13 @@
if v_0_0.Op != OpAMD64SETO {
break
}
- cmp := v_0_0.Args[0]
+ cmp := v_0_0.Args[1]
v_0_1 := v_0.Args[1]
- if v_0_1.Op != OpAMD64SETO || cmp != v_0_1.Args[0] {
+ if v_0_1.Op != OpAMD64SETO {
+ break
+ }
+ _ = v_0_1.Args[1]
+ if cmp != v_0_1.Args[1] {
break
}
b.resetWithControl(BlockAMD64OS, cmp)
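
Aside: the block rewrites above all change in one mechanical way. Since SETcc gained the always-zero destination as its first argument, the flags-producing comparison moves from Args[0] to Args[1], which is what the underscore in the updated rule comments discards. The (NE (TESTB (SETcc _ cmp) (SETcc _ cmp))) forms come from branching on a bool that was itself produced by a comparison: TESTB re-derives flags from the bool, and the rewrite branches on the original comparison instead. A one-line sketch of the new extraction, assuming the ssa package context (condOf is a hypothetical helper):

// condOf returns the flags value a two-argument SETcc consumes.
func condOf(setcc *Value) *Value {
	// Args[0] is the injected zero destination; Args[1] sets the flags.
	return setcc.Args[1]
}
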
diff --git a/src/crypto/subtle/constant_time_test.go b/src/crypto/subtle/constant_time_test.go
index 9db1140..ca9b488 100644
--- a/src/crypto/subtle/constant_time_test.go
+++ b/src/crypto/subtle/constant_time_test.go
@@ -142,7 +142,7 @@
func BenchmarkConstantTimeByteEq(b *testing.B) {
var x, y uint8

- for i := 0; i < b.N; i++ {
+ for range b.N {
x, y = uint8(ConstantTimeByteEq(x, y)), x
}

@@ -152,7 +152,7 @@
func BenchmarkConstantTimeEq(b *testing.B) {
var x, y int

- for i := 0; i < b.N; i++ {
+ for range b.N {
x, y = ConstantTimeEq(int32(x), int32(y)), x
}

@@ -162,7 +162,7 @@
func BenchmarkConstantTimeLessOrEq(b *testing.B) {
var x, y int

- for i := 0; i < b.N; i++ {
+ for range b.N {
x, y = ConstantTimeLessOrEq(x, y), x
}
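
Aside on the benchmark loops: for range b.N is the Go 1.22+ range-over-integer form and behaves exactly like the old counted loop when the index is unused. A standalone illustration, independent of this change:

package main

import "fmt"

func main() {
	// Go 1.22+: ranging over an integer iterates that many times.
	for range 3 {
		fmt.Println("tick")
	}
}
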

diff --git a/test/codegen/bool.go b/test/codegen/bool.go
index 8fe7a94..8998719 100644
--- a/test/codegen/bool.go
+++ b/test/codegen/bool.go
@@ -328,3 +328,23 @@
}
return 7
}
+
+func compareNoExtensionU(x, y uint64) (r uint64) {
+ // amd64:-"MOVB.[ZS]X"
+ if x <= y {
+ // amd64:-"MOVB.[ZS]X"
+ r = 1
+ }
+ // amd64:-"MOVB.[ZS]X"
+ return
+}
+
+func compareNoExtensionS(x, y uint64) (r int64) {
+ // amd64:-"MOVB.[ZS]X"
+ if x <= y {
+ // amd64:-"MOVB.[ZS]X"
+ r = 1
+ }
+ // amd64:-"MOVB.[ZS]X"
+ return
+}
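
Aside on the new codegen tests: lines of the form // amd64:-"MOVB.[ZS]X" are asmcheck directives from test/codegen. The leading minus is a negative check, asserting that the regexp must not match the amd64 assembly generated here, so the test fails if the compiler reintroduces a byte zero- or sign-extension (MOVBLZX, MOVBQSX, ...) after the SETcc. A positive check uses the bare form; e.g. a hypothetical // amd64:"SETLS" would require that instruction to appear.
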

Change information

Files:
  • M src/cmd/compile/internal/ssa/_gen/AMD64.rules
  • M src/cmd/compile/internal/ssa/_gen/AMD64Ops.go
  • M src/cmd/compile/internal/ssa/opGen.go
  • M src/cmd/compile/internal/ssa/rewriteAMD64.go
  • M src/crypto/subtle/constant_time_test.go
  • M test/codegen/bool.go
Change size: XL
Delta: 6 files changed, 1593 insertions(+), 1050 deletions(-)
Open in Gerrit

Related details

Attention set is empty
Submit Requirements:
  • Code-Review: requirement is not satisfied
  • No-Unresolved-Comments: requirement satisfied
  • No-Wait-Release: requirement is not satisfied
  • Review-Enforcement: requirement is not satisfied
  • TryBots-Pass: requirement is not satisfied
Gerrit-MessageType: newchange
Gerrit-Project: go
Gerrit-Branch: master
Gerrit-Change-Id: Ia53052aaa04a7613cad453a5a59109267685cda8
Gerrit-Change-Number: 731620
Gerrit-PatchSet: 1
Gerrit-Owner: Jorropo <jorro...@gmail.com>
Gerrit-Reviewer: Jorropo <jorro...@gmail.com>

Jorropo (Gerrit)

Dec 20, 2025, 8:24:47 AM
to goph...@pubsubhelper.golang.org, golang-co...@googlegroups.com
Attention needed from Keith Randall, Martin Möhrmann and Roland Shoemaker

Jorropo uploaded new patchset

Jorropo uploaded patch set #2 to this change.
Following approvals got outdated and were removed:
  • TryBots-Pass: LUCI-TryBot-Result+1 by Go LUCI
Open in Gerrit

Related details

Attention is currently required from:
  • Keith Randall
  • Martin Möhrmann
  • Roland Shoemaker
Submit Requirements:
  • Code-Review: requirement is not satisfied
  • No-Unresolved-Comments: requirement satisfied
  • No-Wait-Release: requirement is not satisfied
  • Review-Enforcement: requirement is not satisfied
  • TryBots-Pass: requirement is not satisfied
Gerrit-MessageType: newpatchset
Gerrit-Project: go
Gerrit-Branch: master
Gerrit-Change-Id: Ia53052aaa04a7613cad453a5a59109267685cda8
Gerrit-Change-Number: 731620
Gerrit-PatchSet: 2
Gerrit-Owner: Jorropo <jorro...@gmail.com>
Gerrit-Reviewer: Jorropo <jorro...@gmail.com>
Gerrit-Reviewer: Keith Randall <k...@golang.org>
Gerrit-Reviewer: Martin Möhrmann <moeh...@google.com>
Gerrit-Reviewer: Roland Shoemaker <rol...@golang.org>
Gerrit-CC: Gopher Robot <go...@golang.org>
Gerrit-Attention: Keith Randall <k...@golang.org>
Gerrit-Attention: Roland Shoemaker <rol...@golang.org>
Gerrit-Attention: Martin Möhrmann <moeh...@google.com>

Jorropo (Gerrit)

Dec 20, 2025, 8:33:15 AM
to goph...@pubsubhelper.golang.org, golang-co...@googlegroups.com
Attention needed from Keith Randall, Martin Möhrmann and Roland Shoemaker

Jorropo uploaded new patchset

Jorropo uploaded patch set #3 to this change.
Open in Gerrit

Related details

Attention is currently required from:
  • Keith Randall
  • Martin Möhrmann
  • Roland Shoemaker
Submit Requirements:
  • Code-Review: requirement is not satisfied
  • No-Unresolved-Comments: requirement satisfied
  • No-Wait-Release: requirement is not satisfied
  • Review-Enforcement: requirement is not satisfied
  • TryBots-Pass: requirement is not satisfied
Gerrit-MessageType: newpatchset
Gerrit-Project: go
Gerrit-Branch: master
Gerrit-Change-Id: Ia53052aaa04a7613cad453a5a59109267685cda8
Gerrit-Change-Number: 731620
Gerrit-PatchSet: 3

Jorropo (Gerrit)

Dec 20, 2025, 11:59:42 AM
to goph...@pubsubhelper.golang.org, golang-co...@googlegroups.com
Attention needed from Jorropo, Keith Randall, Martin Möhrmann and Roland Shoemaker

Jorropo uploaded new patchset

Jorropo uploaded patch set #4 to this change.
Following approvals got outdated and were removed:
  • TryBots-Pass: LUCI-TryBot-Result+1 by Go LUCI
Open in Gerrit

Related details

Attention is currently required from:
  • Jorropo
  • Keith Randall
  • Martin Möhrmann
  • Roland Shoemaker
Submit Requirements:
  • Code-Review: requirement is not satisfied
  • No-Unresolved-Comments: requirement satisfied
  • No-Wait-Release: requirement is not satisfied
  • Review-Enforcement: requirement is not satisfied
  • TryBots-Pass: requirement is not satisfied
Gerrit-MessageType: newpatchset
Gerrit-Project: go
Gerrit-Branch: master
Gerrit-Change-Id: Ia53052aaa04a7613cad453a5a59109267685cda8
Gerrit-Change-Number: 731620
Gerrit-PatchSet: 4
Gerrit-Owner: Jorropo <jorro...@gmail.com>
Gerrit-Reviewer: Jorropo <jorro...@gmail.com>
Gerrit-Reviewer: Keith Randall <k...@golang.org>
Gerrit-Reviewer: Martin Möhrmann <moeh...@google.com>
Gerrit-Reviewer: Roland Shoemaker <rol...@golang.org>
Gerrit-CC: Gopher Robot <go...@golang.org>
Gerrit-Attention: Keith Randall <k...@golang.org>
Gerrit-Attention: Roland Shoemaker <rol...@golang.org>
Gerrit-Attention: Jorropo <jorro...@gmail.com>
Gerrit-Attention: Martin Möhrmann <moeh...@google.com>