I am learning Register Allocation algorithms and I understand that:
* Unlimited VirtRegs (pseudo) -> limited, fixed, or aliased[1] PhysRegs (hard)
* Memory (20 - 100 cycles) is more expensive than a register (1 cycle), but
the allocator has to emit spill code when no PhysReg is available
* Folding spill code into instructions, handling register coalescing,
splitting live ranges, doing rematerialization, and doing shrink wrapping
are harder than RegAlloc itself
* LRA and IRA are the default RA passes in GCC:
$ /opt/gcc-git/bin/gcc hello.c
DEBUG: ../../gcc/lra.c, lra_init_once, line 2441
DEBUG: ../../gcc/ira-build.c, ira_build, line 3409
* Greedy is the default pass for LLVM
But I have some questions; please give me some hints, thanks a lot!
* IRA is a regional register allocator performing graph coloring on a
top-down traversal of nested regions. Is it global? How does it compare
with the local LRA?
* Do the papers by Briggs and Chaitin contradict[2] themselves when one
examines the text of the paper vs. the pseudocode provided?
* Why is the interference graph expensive to build[3]?
And I am first practicing[4] with HEA, developed by Dr. Rhydian Lewis, for
LLVM.
[1] https://reviews.llvm.org/D39712
[2] http://lists.llvm.org/pipermail/llvm-dev/2008-March/012940.html
[3] https://github.com/joaotavio/llvm-register-allocator
[4] https://github.com/xiangzhai/llvm/tree/avr/include/llvm/CodeGen/GCol
--
Regards,
Leslie Zhai - https://reviews.llvm.org/p/xiangzhai/
_______________________________________________
LLVM Developers mailing list
llvm...@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
Thanks for your kind and very detailed response!
On December 15, 2017 at 12:40, Vladimir Makarov wrote:
>
>
> On 12/14/2017 10:18 PM, Leslie Zhai wrote:
>> Hi GCC and LLVM developers,
>>
>> I am learning Register Allocation algorithms and I am clear that:
>>
>> * Unlimited VirtReg (pseudo) -> limited or fixed or alias[1] PhysReg
>> (hard)
>>
>> * Memory (20 - 100 cycles) is more expensive than a register (1 cycle),
>> but the allocator has to emit spill code when no PhysReg is available
>>
> It might be much less than that if the memory value is in the L1 cache.
>> * Folding spill code into instructions, handling register
>> coalescing, splitting live ranges, doing rematerialization, doing
>> shrink wrapping are harder than RegAlloc
>>
> RegAlloc in a wide sense includes all these tasks and more. For
> some architectures, other tasks, like the right live-range splitting,
> might be even more important for generated-code quality than just
> better graph coloring.
>> * LRA and IRA are the default RA passes in GCC:
>>
>> $ /opt/gcc-git/bin/gcc hello.c
>> DEBUG: ../../gcc/lra.c, lra_init_once, line 2441
>> DEBUG: ../../gcc/ira-build.c, ira_build, line 3409
>>
>> * Greedy is default Pass for LLVM
>>
>> But I have some questions, please give me some hint, thanks a lot!
>>
>> * IRA is a regional register allocator performing graph coloring on a
>> top-down traversal of nested regions. Is it global? How does it
>> compare with the local LRA?
> IRA is a global RA. The description of its initial version can be found at
>
> https://vmakarov.fedorapeople.org/vmakarov-submission-cgo2008.pdf
I am reading this paper at present :)
>
>
> LRA is in some ways also a global RA, but it is a very simplified version
> of a global RA (e.g. LRA does not use a conflict graph, and its coloring
> algorithm is closer to priority coloring). LRA does a lot of other
> very complicated things besides RA, for example instruction selection,
> which is quite specific to the GCC machine description. Usually the
> code selection task is a separate pass in other compilers. Generally
> speaking, LRA is more complicated, more machine dependent and more buggy
> than IRA. But fortunately LRA is less complicated than its
> predecessor, the so-called reload pass.
>
> The IRA and LRA names have a long history and no longer correctly
> reflect the current situation.
>
> It would be possible to incorporate the LRA tasks into IRA, but the final
> RA would be much slower, even more complicated and harder to maintain,
> and the generated code would not be much better. So to improve RA
> maintainability, the RA is divided into two parts solving somewhat
> different tasks. This is a typical engineering approach.
I am debugging with printf to become familiar with LRA and IRA.
>
>>
>> * Do the papers by Briggs and Chaitin contradict[2] themselves when
>> one examines the text of the paper vs. the pseudocode provided?
> I have not examined Preston Briggs' work so thoroughly, so I cannot say
> whether that is true. Even so, it is natural that there are
> discrepancies between pseudocode and its description, especially for a
> description of that size.
>
> For me, Preston Briggs is famous for his introduction of optimistic
> coloring.
>>
>> * Why is the interference graph expensive to build[3]?
>>
> That is because it might be an O(N^2) algorithm. There are a lot of
> publications investigating the building of conflict graphs and its cost in RAs.
>> And I am practicing[4] to use HEA, developed by Dr. Rhydian Lewis,
>> for LLVM firstly.
>>
> When I started to work on RAs very long ago, I used about the same
> approach: a lot of tiny transformations directed by a cost function
> and using metaheuristics (I also used tabu search, as HEA does). Nothing
> good came out of it.
Thanks for your lesson! But are there benchmarks from when you used tabu
search, HEA, AntCol, etc., such as
https://pbs.twimg.com/media/DRD-kxcUMAAxZec.jpg
>
>
> If you are interested in RA algorithms and architectures, I'd
> recommend Michael Matz's article
>
> ftp://gcc.gnu.org/pub/gcc/summit/2003/Graph%20Coloring%20Register%20Allocation.pdf
>
>
> as a starting point.
Thanks! I am reading it.
I've read both of these papers many times (in the past) and I don't recall
any contradictions in them. Can you (Dave?) be more specific about what you
think are contradictions?
I do admit that pseudo code in papers can be very terse, to the point that
they don't show all the little details that are needed to actually implement
them, but they definitely shouldn't contradict their written description.
I was very grateful that Preston was more than willing to answer all my many
questions regarding his allocator and the many many details he couldn't
mention in his Ph.D. thesis, let alone a short paper.
Peter
I am trying to build a Dimacs graph with VirtRegs (pseudos); I can count
G.Nodes and edges:
https://github.com/xiangzhai/llvm/blob/avr/lib/CodeGen/RegAllocGraphColoring.cpp#L359
It just translates gCol/HybridEA's inputDimacsGraph to a traversal of
LLVM pseudo registers, but I am not clear what Node1 or Node2 is. Do
they refer to the indices of vertices?
In gCol/HybridEA/graph.txt, for example:
e 2 1
Does it mean there is an edge between Node2 and Node1? If so, it might be
equivalent to LLVM's VirtReg1->overlaps(*VirtReg2).
And following your mind:
Node1 = 2, Node2 = 1;
if (Node1 < 1 || Node1 > G.Nodes || Node2 < 1 || Node2 > G.Nodes)
  errs() << "Node is out of range\n";
Node1--, Node2--; // Why the decrement?
if (G[Node1][Node2] == 0) G.Edges++;
G[Node1][Node2] = 1, G[Node2][Node1] = 1;
Please give me some hint, thanks a lot!
> * Memory (20 - 100 cycles) is more expensive than a register (1 cycle),
> but the allocator has to emit spill code when no PhysReg is available
As Vladimir said, the cache makes this kind of analysis much more
tricky. It's not necessarily the case that memory=bad and
register=good. Since there are tradeoffs involved, one needs to
evaluate different strategies to determine *when* memory is worse than a
register. It may very well be the case that leaving something in memory
frees up a register for something much more important to use it. All of
the register allocation algorithms try to determine this kind of thing
through various heuristics. Which heuristic is most effective is highly
target-dependent.
In my experience, changing heuristics can swing performance 20% or more
in some cases. Today's processors and memory systems are so complex
that 2nd- and even 3rd-order effects become important.
It is very, very wrong on today's machines to use the number of spills as
a metric to determine the "goodness" of an allocation. Determining *what*
to spill is much more important than the raw number of spills. Many
times I have seen code generated with more spills perform much better
than code generated with fewer spills. Almost all of the papers
around the time of Chaitin-Briggs used the number of spills as the
metric. That may have been appropriate at that time, but take those
results with giant grains of salt today. Of course they are still very
good papers for understanding algorithms and heuristics.
The best way I know to determine what's best for your target is to run a
whole bunch of *real* codes on them, trying different allocation
algorithms and heuristics. It is a lot of work, still worthy of a
Ph.D. even today. Register allocation is not a "solved" problem and
probably never will be as architectures continue to change and become
ever more diverse.
> * Folding spill code into instructions, handling register coalescing,
> splitting live ranges, doing rematerialization, doing shrink wrapping
> are harder than RegAlloc
Again, as Vladimir said, they are all part of register allocation.
Sometimes they are implemented as separate passes but logically they all
contribute work to the task of assigning registers as efficiently as
possible. And all of them use heuristics. Choosing when and when not
to, for example, coalesce can be important. Splitting live ranges is
logically the opposite of coalescing. Theoretically one can "undo" a
bad coalescing decision by re-splitting the live range, but it's usually
not that simple, as other decisions made in the interim can make that
tricky if not impossible. It is a delicate balancing act.
> * Do the papers by Briggs and Chaitin contradict[2] themselves when
> one examines the text of the paper vs. the pseudocode provided?
As others have said, I don't recall any inconsistencies but that doesn't
mean there aren't bugs and/or incomplete descriptions open to
interpretation. One of my biggest issues with our computer academic
culture is that we do not value repeatability. It is virtually
impossible to take a paper, implement the algorithm as described (or as
best as possible given ambiguity) and obtain the same results. Heck,
half my Ph.D. dissertation was dissecting a published paper, examining
the sensitivity of the described algorithm to various parts of the
described heuristic that were ambiguous. By interpreting the heuristic
description in different ways I observed very different results. I read
papers for the algorithms, heuristics and ideas in them. I pay no
attention to results because in the real world we have to implement the
ideas and test them ourselves to see if they will help us.
Peter is right to point you to Preston. He is very accessible, friendly
and helpful. I had the good fortune to work with him for a few years
and he taught me a lot. He has much real-world experience on codes
actually used in production. That experience is gold.
Good luck to you! You're entering a world that some computing
professionals think is a waste of time because "we already know how to
do that." Those of us in the trenches know better. :)
-David
Thanks for your teaching!
I am a newbie in the compiler area; I only learned Compiler Principles in
2002 https://www.leetcode.cn/2017/12/ilove-compiler-principle.html
But I like to practice and learn :)
https://github.com/xiangzhai/llvm/blob/avr/lib/CodeGen/RegAllocGraphColoring.cpp#L327
because theory is not always correct, or is misunderstood by people, so I
want to compare solutionByHEA, IRA, Greedy, PBQP and other algorithms.
Thanks for your lessons correcting my mistakes, such as memory=bad
register=good; I need to find the answer to *when* memory is worse than
a register. I am maintaining the AVR target; there are 32 general
registers, 32K flash, 2K sram
http://www.atmel.com/Images/Atmel-42735-8-bit-AVR-Microcontroller-ATmega328-328P_Datasheet.pdf
so perhaps for an MCU, memory might be more expensive than a register? But
what about AMDGPU or VLIW processors? I have no experience with them;
please teach me.
I am reading LLVM's code SpillXXX, LiveRangeXXX, RegisterCoalescer, etc.
to get the whole view of CodeGen.
I am reading Dr. Rhydian Lewis's book, A Guide to Graph Colouring:
Algorithms and Applications
http://www.springer.com/us/book/9783319257280 and other papers. Even
if HEA is not the best solution, I still want to practice and see the
benchmark. I am not a computing professional, I am only 34 years old;
perhaps I have enough time to waste :)
> But I like to practice and learn :)
> https://github.com/xiangzhai/llvm/blob/avr/lib/CodeGen/RegAllocGraphColoring.cpp#L327
> because theory is not always correct, or is misunderstood by people, so I
> want to compare solutionByHEA, IRA, Greedy, PBQP and other algorithms.
That is a very good way to learn. Learn by doing and observing how
results change as parameters vary. You will never stop learning. :)
> Thanks for your lessons to correct my mistakes, such as memory=bad
> register=good, and I need to find the answer *when* memory is worse
> than a register, I am maintaining AVR target, there are 32 general
> registers, 32K flash, 2K sram
> http://www.atmel.com/Images/Atmel-42735-8-bit-AVR-Microcontroller-ATmega328-328P_Datasheet.pdf
> so perhaps for an MCU, memory might be more expensive than a register?
> But what about AMDGPU or VLIW processors? I have no experience with
> them, please teach me.
I do not have much experience with those architectures either. As I
said, the "best" algorithm for register allocation is very
target-dependent. What works well on AVR might work very poorly on a
GPU. The only way to know is to test, test, test. Of course one can
make some educated guesses to narrow the amount of testing. Many times
a "good" allocator is "good enough" on many targets. I work for a
company that tries to squeeze every last bit of performance out of
codes. We're a bit fanatical that way so we try lots of things. Most
places aren't that obsessive. :)
> I am reading LLVM's code SpillXXX, LiveRangeXXX, RegisterCoalescer,
> etc. to get the whole view of CodeGen.
Those are great places to learn about register allocation! They can
also be complicated and a bit daunting. The folks on the LLVM list can
help guide you but you will also do well just making observations,
stepping through with a debugger, etc. I certainly don't claim to
understand all of the nuances in this code. Lots of people have
contributed to it over the years.
> I am reading Dr. Rhydian Lewis's book: A Guide to Graph Colouring:
> Algorithms and Applications
> http://www.springer.com/us/book/9783319257280 and other papers, even
> if HEA is not the best solution, I still want to practice and see the
> benchmark, I am not a computing professional, I am only 34 years old,
> perhaps I have enough time to waste :)
I am not familiar with that book but lots of reading will do you well.
There's an endless supply of papers to look at. And practice, practice,
practice. You seem to be on the right track!
-David
> On December 19, 2017 at 00:41, d...@cray.com wrote:
I suggest adding these 3 papers to your reading list.
Register allocation for programs in SSA-form
Sebastian Hack, Daniel Grund, and Gerhard Goos
http://www.rw.cdl.uni-saarland.de/~grund/papers/cc06-ra_ssa.pdf
Simple and Efficient Construction of Static Single Assignment Form
Matthias Braun , Sebastian Buchwald , Sebastian Hack , Roland Leißa , Christoph Mallon , and Andreas Zwinkau
https://www.info.uni-karlsruhe.de/uploads/publikationen/braun13cc.pdf
Optimal register allocation for SSA-form programs in polynomial time
Sebastian Hack, Gerhard Goos
http://web.cs.ucla.edu/~palsberg/course/cs232/papers/HackGoos-ipl06.pdf
A lot of the earlier literature regarding the register allocation problem describes the general graph colouring problem as NP-complete; however, previous research in register allocation has been in the context of programs that were not in SSA form, i.e. the Chaitin-Briggs paper states that register allocation is NP-complete and proposes an iterative algorithm.
If one reads Hack and Goos, there is the discovery that programs in SSA form have chordal interference graphs, and colouring a chordal graph can be completed in polynomial time. After the cost of building the interference graph, it is possible to perform register allocation in a single pass. The key is in not modifying the graph.
If one has a frequency for each basic block, then one can sort the basic blocks by frequency, allocating the highest-frequency blocks first and, where possible, assigning the same physical register to all virtual registers in each phi node (to avoid copies). At program points where the live set is larger than k, the number of physical registers, one spills the register whose next use is furthest away. At least that is how I am thinking about this problem. I am also a novice with regard to register allocation.
I intend to develop a register allocator for use in this JIT engine:
rv8: a high performance RISC-V to x86 binary translator
Michael Clark, Bruce Hoult
https://carrv.github.io/papers/clark-rv8-carrv2017.pdf
Our JIT already performs almost twice as fast as QEMU, and we are using static register allocation, while QEMU, I believe, has a register allocator. We are mapping a 31-register RISC architecture onto a 16-register CISC architecture. I have statistics on the current slow-down, and it appears to be mainly L1 accesses to spilled registers. I believe that after we have implemented a register allocator in our JIT we will be able to run RISC-V binaries at near-native performance on x86-64. In fact, given that we are performing runtime profile-guided optimisation, it may even be possible for some benchmarks to run faster than with register allocations based on static estimates of loop frequencies.
https://anarch128.org/~mclark/rv8-slides.pdf
https://rv8.io/bench
We’ve still got a long way to go to get to ~1:1 performance for RISC-V on x86-64, but I think it is possible, at least for some codes…
Regards,
Michael.
Thanks for your sharing!
I will read the papers you suggested, and I asked Quantum Computing
professionals about solving the NP-complete problem
https://github.com/epiqc/ScaffCC/issues/14 but GCC's IRA and LRA proved
that there is a solution in SSA form for a classical computer.
Sorting MachineBasicBlocks by frequency first is a good idea, and I have
only worked on a few passes, such as LoopUnroll
http://lists.llvm.org/pipermail/llvm-dev/2017-October/118419.html but I
will practice your idea in my solutionByHEA
https://github.com/xiangzhai/llvm/blob/avr/lib/CodeGen/RegAllocGraphColoring.cpp#L327
I have experience with binary translation from X86 to MIPS; QEMU is
expensive https://www.leetcode.cn/2017/09/tcg-ir-to-llvm-ir.html I will
bookmark your JIT engine on my blog :)
Thanks for your hint!
It is just learning and practicing for me, just like migrating DragonEgg
http://lists.llvm.org/pipermail/llvm-dev/2017-September/117201.html; the
motivation is to learn from GCC and LLVM developers.
On December 19, 2017 at 10:07, Matthias Braun wrote:
On 12/18/2017 07:07 PM, Michael Clark wrote:
I am sceptical about SSA register allocators for high-quality code
generation. But I should acknowledge that, although I have tried a lot
of RA algorithms, I have never tried an SSA one.
One task of RA is to deal with moves, but when you are destroying SSA you
are creating moves or swaps. Of course there are optimizations to
decrease their number when you are going out of SSA. But IMHO a good RA
should see all the moves created before or during its own work.
As I already wrote, coloring is just one out of many tasks of RA. All
these tasks in a good RA should be solved in cooperation. Optimistic
coloring is good and fast enough (although building a conflict graph for
it takes a lot of time). I think better results can be obtained by
improving the other tasks than by improving optimistic coloring.
For example, nobody has mentioned irregular register-file architectures
(with subregisters and register subclasses) like x86/x86-64. Even
regular register-file architectures with caller-/callee-saved hard
registers have irregularity, because that creates register subclasses,
e.g. the class of saved general registers is a subclass of the overall
general register class. Usually hard registers are present in the IR,
and if some pseudos conflict with the hard registers, this also creates
a lot of (intersected) subclasses.
Taking such irregularity into account, e.g. in coloring criteria based
on Kempe's heuristic, can improve the final RA significantly (as I
remember, GCC's generated code for SPECInt2000, even on ppc64 with a
mostly regular register file, was improved by 1% by taking such
subclasses into account during coloring). A classical article about this,
https://www.researchgate.net/publication/2440424_Graph-Coloring_Register_Allocation_for_Irregular_Architectures
could be a starting point for such an implementation.
On Dec 18, 2017, at 19:03, Leslie Zhai <lesli...@llvm.org.cn> wrote:
Guo, J., Garzarán, M. J., & Padua, D. (2004). The Power of Belady's Algorithm in Register Allocation for Long Basic Blocks. In Languages and Compilers for Parallel Computing (Vol. 2958, pp. 374-389). Berlin, Heidelberg: Springer. http://doi.org/10.1007/978-3-540-24644-2_24
Braun, M., & Hack, S. (2009). Register spilling and live-range splitting for SSA-form programs. International Conference on Compiler Construction.
Pereira, F. Q., & Palsberg, J. (2008). Register allocation by puzzle solving.
Thanks for your kind response!
My use case is the AVR and RISCV targets, and I just want to learn and
practice HEA in RA; thanks for your sharing.
On December 21, 2017 at 01:25, Jakob Stoklund Olesen wrote:
Thanks for your sharing!
I am porting GlobalISel to the RISCV target[1], the highest priority in
the TODO list[2]; welcome to contribute to lowRISC. If all the issues are
fixed, then I could try to implement RegAllocGraphColoring with HEA and
write great machine schedulers.
[1] https://github.com/lowRISC/riscv-llvm/issues/19
[2] https://github.com/lowRISC/riscv-llvm/issues