I recently ran into some interesting issues with the generation of rotate instructions - the details are in the bug tracker (https://bugs.llvm.org/show_bug.cgi?id=37387 and related bugs) for those interested - and it brought up the issue of rotates in the IR again. This proposal has been made (and rejected) several times, but I've been told that this time around we ran into issues not previously considered, and that I should bring the topic up on llvm-dev.
I'll try to summarize the status quo first and then bring up the tricky cases that might tip the balance in favor of rotate intrinsics.
Rotates
=======
Rotates, or circular shifts, are bit shifts where bits "dropped out" on one side of the operand are shifted back in on the other.
They occur less frequently in real-world code than regular (non-circular) bit shifts, but are nevertheless directly supported in most popular ISAs. They are useful primitives for cryptography, non-cryptographic hashing, pseudo-random number generation, bit I/O, and probably other cases as well.
They do not have a direct representation in C/C++, but several compilers support them via intrinsics (including MSVC, which Clang-CL tries to be compatible with), and some languages such as Rust include them explicitly.
The status quo
==============
Rotates by compile-time constant amounts are fairly easy to express in C-like languages by ORing together two opposite-direction shifts. For example, rotating a uint32_t x left by 3 positions in C boils down to
(x << 3) | (x >> 29)
which is simple enough. The backends with rotate support detect this type of pattern at the DAG stage and map it to a rotate instruction.
This, in a nutshell, is the argument against a direct rotate in the IR: the existing primitives are sufficient to model it, and expressing rotates this way lets transforms and analyses such as SimplifyDemandedBits, which already know how to deal with the more common regular shifts, work on bit-rotate expressions as well. Adding a new primitive would mean that a whole bunch of optimization passes would need to be taught about it (or else they might miss optimization opportunities).
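For concreteness, here is the constant-distance idiom wrapped up as a small C helper (the function name is just for illustration):

#include <stdint.h>

/* Constant-distance rotate written as the OR of two opposite shifts.
   Backends with rotate support match this shape at the DAG stage and
   emit a single rotate instruction. */
static inline uint32_t rotl32_by3(uint32_t x) {
    return (x << 3) | (x >> 29);
}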
Variable-distance rotates
=========================
Things become murkier when we consider not just rotates by a compile-time constant, but also rotates where the amount is determined at run time.
For one thing, it's not immediately obvious how to write a well-defined corresponding C expression: the most "natural" version (for a 32-bit left rotate) is
(x << k) | (x >> (32 - k))
but this is undefined for k==0 due to the right-shift of a 32-bit value by 32. That said, both GCC and Clang will generally compile this particular expression into a rotate-left as of this writing, but having UB is unfortunate. A trickier variant that is always well-defined is
(x << (k & 31)) | (x >> (-k & 31))
but this is a substantially more complicated sub-dag than the OR of two shifts you would see for a compile-time constant rotate distance, and there are more opportunities for things to go wrong along the way. Bug 37387 contains a few examples, the simplest of which was given by Sanjay:
void rotateInLoop(unsigned *x, unsigned N, unsigned *a, int b) {
  for (unsigned i = 0; i < N; ++i)
    x[ (a[i] >> b) | (a[i] << (32 - b)) ] = i; // shift amt is loop invariant
}
"32 - b" is loop-invariant and gets hoisted, and what's left of the expression doesn't match a known "rotate" pattern (since the DAG can't see across BB boundaries at the moment), so instead we get two shifts and an OR inside the loop.
The counterargument here was that GlobalISel should eventually fix this particular problem, but nevertheless, as of today, LICM breaks recognition of the rotate pattern in this (very simple) loop, and there is reason to believe that other transforms (existing or forthcoming) might well break the pattern as well; the issues referenced in the linked bug illustrate some of the existing failure modes.
This situation is frustrating for users of variable-distance rotates: right now, there simply is no way to write code in a source language that will reliably turn into rotates; you either have to watch the generated code closely, or just throw up your hands and use platform-dependent inline assembly. (It is assumed that the code in question is in a hot spot.)
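For reference, here is the always-well-defined variant from above wrapped up as a C helper (the name rotl32 is just illustrative); whether this shape actually survives the mid-level transforms is exactly the fragility described above:

#include <stdint.h>

/* Variable-distance rotate-left without UB: masking with 31 keeps both
   shift amounts in range, and (-k & 31) equals ((32 - k) & 31) for all
   unsigned k, so k == 0 is handled correctly. */
static inline uint32_t rotl32(uint32_t x, unsigned k) {
    return (x << (k & 31)) | (x >> (-k & 31));
}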
The proposal
============
The idea would be to add rotate intrinsics, primarily intended to be used for the variable-distance case, which is otherwise fragile.
Frontends can emit them directly (for source language-level intrinsics, or when matched from "known good" rotate patterns) and the backends can match them and select appropriate instructions.
For the constant-distance case, the preferred form would still be the OR-of-shifts variant, which plays nice with SimplifyDemandedBits and is known to existing optimization passes.
SimplifyDemandedBits does not currently do anything with shifts by non-constant amounts anyway; and, for the use cases in question, a rotate intrinsic that other optimization passes leave alone is still preferable to a more complex expression that gets partially optimized but ends up in a form the backend can no longer turn back into a rotate - that is a net loss.
Thoughts?
-Fabian
I'd vote for the "valign" variant that David proposed. It becomes a
rotate when both inputs are the same.
<ty> %result = @llvm.valign.<ty>(<ty> %a0, <ty> %a1, i32 s)
result = truncate (concat(a1,a0) >> s)
Where "concat" concatenates the bits of both inputs to form a value of a
twice as long type, and "truncate" takes the lower half of the bits of
its input.
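A reference model in C for the i32 case, assuming s stays in [0, 31] (how out-of-range amounts should behave is a separate question):

#include <stdint.h>

/* concat(a1, a0) puts a1 in the high 32 bits and a0 in the low 32 bits;
   shifting right by s and truncating keeps the low half. With a0 == a1
   this is a rotate right by s. */
static inline uint32_t valign32_ref(uint32_t a0, uint32_t a1, uint32_t s) {
    uint64_t cat = ((uint64_t)a1 << 32) | a0;
    return (uint32_t)(cat >> s);
}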
-Krzysztof
Given that this is a general problem that occurs with other instruction
sequences as well, wouldn't it make more sense to make the cost model
handle more than one instruction, as suggested in PR31274 [1]?
[1] https://bugs.llvm.org/show_bug.cgi?id=31274
In all these cases (rotate, min, max, abs, fma, add-with-overflow, and probably many more) there's a tradeoff between elaborating them as simpler IR instructions and modelling them as their own instruction / intrinsic. A big disadvantage of introducing new instructions / intrinsics is that all optimizations have to be taught about them, increasing the compiler code base and the maintenance burden. On the other hand, too few instructions can make optimization difficult as well (in theory, one instruction like "subtract and branch if not equal to zero" could emulate all the others, but this wouldn't be very helpful for optimization). Since you put a lot of thought into how to canonicalize IR, can you elaborate more on this tradeoff? Can we find a set of criteria to determine whether an instruction pattern should get an intrinsic or not?
-Manuel
On 2018-05-16 00:34, Sanjay Patel via llvm-dev wrote:
Vectorization goes overboard because the throughput cost model used by the
vectorizers doesn't match the 6 IR instructions that correspond to 1 x86
rotate instruction. Instead, we have:
[...]
The broken cost model also affects unrolling and inlining. Size costs are
overestimated for a target that has a rotate instruction.
This cost problem isn't limited to rotate patterns (it's come up in the
context of min/max/abs/fma too). But it would be simpler if we had a rotate
intrinsic, and the 6-to-1 margin is the biggest I've seen.
> An informal metric might be: if the operation is supported as a
> primitive op or built-in in source languages and it is supported as a
> single target instruction, can we guarantee that 1-to-1 translation
> through optimization?
It seems perfectly reasonable for LLVM users to expect this to happen
reliably.
I'd like to take a look at the other side of the equation: the cost of
adding a new intrinsic in terms of teaching passes to see through it, so
we don't lose optimizations that worked before the intrinsic was added.
For example, clearly ValueTracking needs a few lines added so that
computeKnownBits and friends don't get stopped by a rotate. Anyone have
a reasonably complete list of files that need similar changes?
John
At this point the cost/benefit tradeoff for rotate intrinsics seems
pretty good.
John
The scalar rotate moves bits around within a single value, and that kind of operation doesn't make much sense as a way of moving data across lanes in vectors. I voted for the valign variant originally, but I think that a per-element rotate would be the natural vector version of the scalar operation.
It could still be doing the "double shift", since that is more general; it just shouldn't be called valign. A per-byte cross-lane vector rotate (an actual vector valign) can be implemented using a shuffle, so I don't think that an intrinsic for that is necessary.
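To illustrate the shuffle remark: a cross-lane rotate by whole bytes is just a byte permutation, which is what a shuffle expresses. A scalar C sketch, with the 16-byte width and the direction convention picked arbitrarily for the example:

#include <stdint.h>

/* Rotate the byte lanes of a 16-byte vector; for a constant r this maps
   directly onto a shuffle with a constant mask. */
static void byte_rotate(uint8_t out[16], const uint8_t in[16], unsigned r) {
    for (unsigned i = 0; i < 16; ++i)
        out[i] = in[(i + r) % 16];
}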
-Krzysztof
Agreed. The per-element definition is the correct one.
– Steve
I think the most useful generalization of a vector funnel shift in this
context is lane-wise
result[i] = trunc(concat(a[i], b[i]) >> c[i])
(or the equivalent for a left shift); the special case a==b is a rotate.
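A scalar-loop reference in C for that definition; reducing the shift amount modulo 32 is an assumption here, and is what the following points are about:

#include <stdint.h>

/* Lane-wise funnel shift right over n 32-bit lanes:
   result[i] = trunc(concat(a[i], b[i]) >> c[i]), with a[i] in the high
   half. a == b gives a per-element rotate right. */
static void funnel_shr_ref(uint32_t *result, const uint32_t *a,
                           const uint32_t *b, const uint32_t *c, int n) {
    for (int i = 0; i < n; ++i) {
        uint64_t cat = ((uint64_t)a[i] << 32) | b[i];
        result[i] = (uint32_t)(cat >> (c[i] & 31));
    }
}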
2. For operand sizes that have native rotate instructions, at least x86,
x86-64, ARM A32/T32 and AArch64 A64 agree that rotate distances are
modulo the operand width. I believe PPC and MIPS do the same but am not sure (it's been a while); no clue about other architectures.
It certainly seems the most natural way to define it, since rotates are
cyclic to begin with.
8- and 16-bit rotates will need to be lowered into multi-instruction
sequences on most targets (x86 and x86-64 can do them directly, but
RISC-lineage archs usually don't have rotates at smaller than word
size). Having explicit modulo semantics might end up forcing an extra AND in those cases, so there is some extra cost, but it would certainly be nice to have the rotate definition be total.
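For illustration, this is roughly the shape a target without sub-word rotates ends up producing, written here as portable C with the explicit AND:

#include <stdint.h>

/* 8-bit rotate-left with modulo-8 semantics; the "k &= 7" is the extra
   masking that explicit modulo semantics can force on such targets. */
static inline uint8_t rotl8_ref(uint8_t x, unsigned k) {
    k &= 7;
    return (uint8_t)((x << k) | (x >> ((8 - k) & 7)));
}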
-Fabian
declare i32 @llvm.catshift.i32(i32 %a, i32 %b, i32 %shift_amount)
declare <2 x i32> @llvm.catshift.v2i32(<2 x i32> %a, <2 x i32> %b, <2 x i32> %shift_amount)

For scalars, @llvm.catshift concatenates %a and %b, shifts the concatenated value right by the number of bits specified by %shift_amount modulo the bit-width, and truncates to the original bit-width.
Or just negating, iff the shift amount is defined to be modulo and the
machine is two's complement.
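A tiny check of that identity under the stated assumptions (modulo-32 amount, wrapping unsigned arithmetic):

#include <stdint.h>
#include <assert.h>

/* (-k) & 31 == (32 - k) & 31 for every unsigned k, so the
   "width minus amount" subtraction can be replaced by a plain negate. */
static void check_negate_vs_subtract(void) {
    for (uint32_t k = 0; k < 256; ++k)
        assert(((0u - k) & 31u) == ((32u - k) & 31u));
}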
I'm a bit worried that while modulo is the Obviously Right Thing for
rotates, the situation is less clear for general funnel shifts.
I looked over some of the ISAs I have docs at hand for:
- x86 (32b/64b variants) has SHRD/SHLD, so both right and left variants.
Count is modulo (mod 32 for 32b instruction variants, mod 64 for 64b
instruction variants). As of BMI2, we also get RORX (non-flag-setting
ROR) but no ROLX.
- ARM AArch64 has EXTR, which is a right funnel shift, but shift
distances must be literal constants. EXTR with both source registers
equal disassembles as ROR and is often special-cased in implementations.
(EXTR with source 1 != source 2 often has an extra cycle of latency).
There is RORV which is right rotate by a variable (register) amount;
there is no EXTRV.
- NVPTX has SHF
(https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#logic-and-shift-instructions-shf)
with both left/right shift variants and with both "clamp" (clamps shift
count at 32) and "wrap" (shift count taken mod 32) modes.
- GCN has v_alignbit_b32 which is a right funnel shift, and it seems to
be defined to take shift distances mod 32.
Based on that sampling, modulo behavior seems like a good choice for a
generic IR instruction, and if you're going to pick one direction, right
shifts are the ones to use. Not sure about other ISAs.
-Fabian
Sorry for the late reply to this thread. I'd like to mention that the
existing ISD ROTL/ROTR opcodes currently do not properly assume modulo
behaviour, so that definition would need to be tidied up and made
explicit; the recent legalization code might need fixing as well. Are
you intending to add CONCATSHL/CONCATSRL ISD opcodes as well?
Additionally, the custom SSE lowering that I added doesn't assume modulo
(although I think the vXi8 lowering might work already), and it only
lowers for ROTL at the moment (mainly due to a legacy of how the XOP
instructions work), but adding ROTR support shouldn't be difficult.