long live interval that has low usage density.
1. Change order of coalescing to join physical registers with virtual
registers first before virtual register intervals become too long.
2. Check size and usage density to determine if it's worthwhile to join.
3. If joining is aborted, set the virtual register live interval's allocation
preference field to the physical register.
4. The register allocator should try to allocate the preferred register
first (if available) to create identity moves that can be eliminated.
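A minimal C++ sketch of steps 2-4 above. The names and thresholds here
(worthJoiningWithPhysReg, AllocPreference, SizeLimit, MinDensity) are
illustrative assumptions, not the coalescer's actual interface:

struct VRegIntervalInfo {
  unsigned LengthInSlots;    // size of the live interval
  unsigned NumUses;          // uses inside the interval
  unsigned AllocPreference;  // 0 = no preferred physical register
};

static bool worthJoiningWithPhysReg(const VRegIntervalInfo &VI) {
  // Step 2: only join when the interval is short or densely used, so a
  // physical register is not tied up across a long, sparsely used range.
  const unsigned SizeLimit = 64;      // assumed threshold
  const float MinDensity = 1.0f / 8;  // assumed threshold
  if (VI.LengthInSlots <= SizeLimit)
    return true;
  return float(VI.NumUses) / float(VI.LengthInSlots) >= MinDensity;
}

static void coalescePhysVirtCopy(unsigned PhysReg, VRegIntervalInfo &VI) {
  if (worthJoiningWithPhysReg(VI)) {
    // ... join the physical and virtual intervals as before ...
  } else {
    // Steps 3-4: abort the join but record the physical register so the
    // allocator tries it first; the copy then becomes an identity move
    // that can be deleted.
    VI.AllocPreference = PhysReg;
  }
}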
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@36218 91177308-0d34-0410-b5e6-96231b3b80d8
of dead def live interval at 1 to avoid multiple defs targeting the same
register. The previous patch missed a case where the source operand is live-in.
In that case, remove the whole interval.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35512 91177308-0d34-0410-b5e6-96231b3b80d8
to be really bad. Once they are joined they are not broken apart. Also, physical
intervals cannot be spilled!
Added a heuristic as a workaround for this. Be careful when coalescing with a
physical register if the virtual register's uses are "far" away: check whether
there are uses in the same loop as the source (the copy instruction), whether
it is in the loop preheader, etc.
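A hedged sketch of that "far uses" check, assuming loop membership has
already been computed per block; the types and names below are stand-ins,
not the real coalescer or loop-analysis interface:

// LoopId of -1 means "not inside any loop".
struct CopyContext {
  int CopyLoopId;        // loop containing the copy instruction
  bool CopyInPreheader;  // true if the copy sits in the preheader of the
                         // loop that contains the uses
};

// Treat the uses as "near" if some use is in the same loop as the copy, or
// the copy feeds that loop from its preheader; if all uses are far, skip
// joining with the physical register.
bool virtRegUsesAreNear(const CopyContext &Ctx, const int *UseLoopIds,
                        unsigned NumUses) {
  if (Ctx.CopyInPreheader)
    return true;
  for (unsigned i = 0; i != NumUses; ++i)
    if (UseLoopIds[i] >= 0 && UseLoopIds[i] == Ctx.CopyLoopId)
      return true;
  return false;
}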
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35134 91177308-0d34-0410-b5e6-96231b3b80d8
entry (0x8b056f0, LLVM BB @0x8b01b30, ID#0):
Live Ins: %r0 %r1 %r2 %r3
%reg1032 = tMOVrr %r3<kill>
%reg1033 = tMOVri8 1
%reg1034 = tMOVri8 0
tCMPi8 %reg1029<kill>, 0
tBcc mbb<entry,0x8b06a10>, 0
Successors according to CFG: 0x8b06980 0x8b06a10
entry (0x8b06980, LLVM BB @0x8b01b30, ID#12):
Predecessors according to CFG: 0x8b056f0
%reg1036 = tMOVrr %reg1034<kill>
Successors according to CFG: 0x8b06a10
entry (0x8b06a10, LLVM BB @0x8b01b30, ID#13):
Predecessors according to CFG: 0x8b056f0 0x8b06980
%reg1024<dead> = tMOVrr %reg1030<kill>
...
reg1030 and r1 have already been joined. When reg1024 and reg1030 are joined,
r1's live range from the function entry to the tMOVrr instruction is dead. Eliminate
r1 from the live-in set of the entry BB, not the BB where the copy is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@34866 91177308-0d34-0410-b5e6-96231b3b80d8
Revert patches that caused the problem. Evan, please investigate and reapply
when you've discovered the problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@34399 91177308-0d34-0410-b5e6-96231b3b80d8
- When coalescing a copy MI, if its destination is "dead", propagate the
property to the source MI's destination if there are no intervening uses.
- Detect dead function live-ins and remove them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@34383 91177308-0d34-0410-b5e6-96231b3b80d8
LiveRanges, creates a new LiveInterval from them. The LiveRanges should
already have existed in another LiveInterval, but have since been removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31780 91177308-0d34-0410-b5e6-96231b3b80d8
by 40%, FreeBench/fourinarow by 20%, and many other programs 10-25%.
On PPC, this speeds up fourinarow by 18%, and probably other things as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31504 91177308-0d34-0410-b5e6-96231b3b80d8
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fallout by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31380 91177308-0d34-0410-b5e6-96231b3b80d8
actually *removes* one of the operands, instead of just assigning both operands
the same register. This makes reasoning about instructions unnecessarily complex,
because you need to know if you are before or after register allocation to match
up operand #'s with the target description file.
Changing this also gets rid of a bunch of hacky code in various places.
This patch also includes changes to fold loads into cmp/test instructions in
the X86 backend, along with a significant simplification to the X86 spill
folding code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30108 91177308-0d34-0410-b5e6-96231b3b80d8
number of copies, potentially defining live ranges that appear to have
differing value numbers that become identical when coalesced. Among other
things, this fixes CodeGen/X86/shift-coalesce.ll and PR687.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29968 91177308-0d34-0410-b5e6-96231b3b80d8
paves the way for future changes, increases coalescing opportunities (in
theory, not witnessed in practice), and eliminates the really expensive
LiveIntervals::overlapsAliases method.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29890 91177308-0d34-0410-b5e6-96231b3b80d8
instructions which define each value#) to simplify and improve the coalescer.
In particular, this patch:
1. Implements iterative coalescing.
2. Reverts an unsafe hack from handlePhysRegDef, superseding it with a
better solution.
3. Implements PR865, "coalescing" away the second copy in code like:
A = B
...
B = A
This also includes changes to symbolically print registers in intervals
when possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29862 91177308-0d34-0410-b5e6-96231b3b80d8
But this is incorrect if the spilled value's live range extends beyond the
current BB.
It is currently controlled by a temporary option -spiller-check-liveout.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28024 91177308-0d34-0410-b5e6-96231b3b80d8
For example, we can now join things like [0-30:0)[31-40:1)[52-59:2)
with [40-60:0) if the 52-59 range is defined by a copy from the 40-60 range.
The resultant range ends up being [0-30:0)[31-60:1).
This fires a lot throughout the test suite (e.g. shrinking bc from
19492 -> 18509 machineinstrs) though most gains are smaller (e.g. about
50 copies eliminated from crafty).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23866 91177308-0d34-0410-b5e6-96231b3b80d8
only add a reload live range once for the instruction. This is one step
towards fixing a regalloc pessimization that Nate noticed, but it is later undone
by the spiller (so no code is changed).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23293 91177308-0d34-0410-b5e6-96231b3b80d8
numbering values in live ranges for physical registers.
The alpha backend currently generates code that looks like this:
vreg = preg
...
preg = vreg
use preg
...
preg = vreg
use preg
etc. Because vreg contains the value of preg coming in, each of the
copies back into preg contains that initial value as well.
In the case of the Alpha, this allows this testcase:
void "foo"(int %blah) {
store int 5, int *%MyVar
store int 12, int* %MyVar2
ret void
}
to compile to:
foo:
ldgp $29, 0($27)
ldiq $0,5
stl $0,MyVar
ldiq $0,12
stl $0,MyVar2
ret $31,($26),1
instead of:
foo:
ldgp $29, 0($27)
bis $29,$29,$0
ldiq $1,5
bis $0,$0,$29
stl $1,MyVar
ldiq $1,12
bis $0,$0,$29
stl $1,MyVar2
ret $31,($26),1
This does not seem to have any noticeable effect on X86 code.
This fixes PR535.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20536 91177308-0d34-0410-b5e6-96231b3b80d8
Make only one print method to avoid overloaded virtual warnings when
compiled with -Woverloaded-virtual
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@18589 91177308-0d34-0410-b5e6-96231b3b80d8
it was a use, def, or both. This allows us to be less pessimistic in our
analysis of them. In practice, this doesn't make a big difference, but it
doesn't hurt either.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16632 91177308-0d34-0410-b5e6-96231b3b80d8
* Add const_iterator stuff
* Add a print method, which means that I can now call dump() from the
debugger.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16612 91177308-0d34-0410-b5e6-96231b3b80d8
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16137 91177308-0d34-0410-b5e6-96231b3b80d8
Regression.CodeGen.Generic.2004-04-09-SameValueCoalescing.llx and the
code size problem.
This bug prevented us from doing most register coalesces.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16031 91177308-0d34-0410-b5e6-96231b3b80d8
same as the PHI use. This is not correct, as the PHI use value is different
depending on which branch is taken. This fixes espresso with aggressive
coalescing, and perhaps others.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15189 91177308-0d34-0410-b5e6-96231b3b80d8
LiveInterval>. This saves some space and removes a level of pointer
indirection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15167 91177308-0d34-0410-b5e6-96231b3b80d8
Interval. This generalizes the isDefinedOnce mechanism that we used before
to help us coalesce ranges that overlap. As part of this, every logical
range with a different value is assigned a different number in the interval.
For example, for code that looks like this:
0 X = ...
4 X += ...
...
N = X
We now generate a live interval that contains two ranges: [2,6:0),[6,?:1)
reflecting the fact that there are two different values in the range at
different positions in the code.
Currently we are not using this information at all, so this just slows down
liveintervals. In the future, this will change.
Note that this change also substantially refactors the joinIntervalsInMachineBB
method to merge the cases for virt-virt and phys-virt joining into a single
case, adds comments, and makes the code a bit easier to follow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15154 91177308-0d34-0410-b5e6-96231b3b80d8
* Inline some functions
* Eliminate some comparisons from the release build
This is good for another .3 on gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15144 91177308-0d34-0410-b5e6-96231b3b80d8
will soon be renamed) into their own file. The new file should not emit
DEBUG output or have other side effects. The LiveInterval class also now
doesn't know whether it's working on registers or some other thing.
In the future we will want to use the LiveInterval class and friends to do
stack packing. In addition to a code simplification, this will allow us to
do it more easily.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15134 91177308-0d34-0410-b5e6-96231b3b80d8
Use an explicit LiveRange class to represent ranges instead of an std::pair.
This is a minor cleanup, but is really intended to make a future patch simpler
and less invasive.
Alkis, could you please take a look at LiveInterval::liveAt? I suspect that
you can add an operator<(unsigned) to LiveRange, allowing us to speed up the
upper_bound call by quite a bit (this would also apply to other callers of
upper/lower_bound). I would do it myself, but I still don't understand that
crazy liveAt function, despite the comment. :)
Basically I would like to see this:
LiveRange dummy(index, index+1);
Ranges::const_iterator r = std::upper_bound(ranges.begin(),
ranges.end(),
dummy);
Turn into:
Ranges::const_iterator r = std::upper_bound(ranges.begin(),
ranges.end(),
index);
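A minimal sketch of that suggestion, assuming the ranges vector is sorted by
start and LiveRange has a start member (the field names here are assumptions):

#include <algorithm>
#include <vector>

struct LiveRange {
  unsigned start;  // first index at which the value is live
  unsigned end;    // one past the last index
};

// Heterogeneous comparison so std::upper_bound can search on a bare index
// without building a dummy LiveRange; upper_bound evaluates
// "value < element", so the index goes on the left.
inline bool operator<(unsigned Index, const LiveRange &LR) {
  return Index < LR.start;
}

// Returns the first range whose start is strictly greater than Index.
std::vector<LiveRange>::const_iterator
firstRangeStartingAfter(const std::vector<LiveRange> &Ranges, unsigned Index) {
  return std::upper_bound(Ranges.begin(), Ranges.end(), Index);
}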
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15130 91177308-0d34-0410-b5e6-96231b3b80d8
interfere. Because these intervals have a single definition, and one of them
is a copy instruction, they are always safe to merge even if their lifetimes
interfere. This slightly reduces the amount of spill code, for example on
252.eon, from:
12837 spiller - Number of loads added
7604 spiller - Number of stores added
5842 spiller - Number of register spills
18155 liveintervals - Number of identity moves eliminated after coalescing
to:
12754 spiller - Number of loads added
7585 spiller - Number of stores added
5803 spiller - Number of register spills
18262 liveintervals - Number of identity moves eliminated after coalescing
The much much bigger win would be to merge intervals with multiple definitions
(aka phi nodes) but this is not that day.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15124 91177308-0d34-0410-b5e6-96231b3b80d8
intervals need not be sorted anymore. Removing this redundant step
improves LiveIntervals running time by 5% on 176.gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15106 91177308-0d34-0410-b5e6-96231b3b80d8
fortunately, they are easy to handle if we know about them. This patch fixes
some serious pessimization of code produced by the linear scan register allocator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15092 91177308-0d34-0410-b5e6-96231b3b80d8
is a simple change, but seems to improve code a little. For example, on
256.bzip2, we went from 75.0s -> 73.33s (2% speedup).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15004 91177308-0d34-0410-b5e6-96231b3b80d8
* vreg <-> vreg joining now works, enable it unconditionally when joining
is enabled (which is the default).
* Fix a serious pessimization of spill code where we were saying that a
spilled DEF operand was live into the subsequent instruction. This allows
for substantially better code when spilling starts to happen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14993 91177308-0d34-0410-b5e6-96231b3b80d8
order, causing the inactive list in the linear-scan allocator to become unsorted,
which basically messed everything up severely.
This seems to fix the joiner, so with more testing I will enable it by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14992 91177308-0d34-0410-b5e6-96231b3b80d8
Heavily refactor handleVirtualRegisterDef, adding comments and making it more
efficient. It is also much easier to follow and convince oneself that it is
correct :)
Add -debug output to the joiner, showing the result of joining the intervals.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14989 91177308-0d34-0410-b5e6-96231b3b80d8
1. LiveIntervals now implement a 4 slot per instruction model. Load,
Use, Def and a Store slot. This is required in order to correctly
represent caller saved register clobbering on function calls,
register reuse in the same instruction (def reuses last use) and
also spill code added later by the allocator. The previous
representation (2 slots per instruction) was insufficient and as a
result was causing subtle bugs.
2. Fixes in spill code generation. This was the major cause of
failures in the test suite.
3. Linear scan now has core support for folding memory operands. This
is untested and not enabled (the live interval update function does
not attempt to fold loads/stores in instructions).
4. Lots of improvements in the debugging output of both live intervals
and linear scan. Give it a try... it is beautiful :-)
In summary the above fixes all the issues with the recent reserved
register elimination changes and get the allocator very close to the
next big step: folding memory operands.
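As a rough illustration of item 1, assuming the four slots are laid out
consecutively per instruction (the slot order and helper below are
assumptions, not the pass's actual numbering):

enum SlotKind { LOAD = 0, USE = 1, DEF = 2, STORE = 3 };

// Instruction i then owns indices [4*i, 4*i + 3]: reloads inserted by the
// spiller land in the LOAD slot before the uses, normal reads in USE,
// results in DEF, and spill stores in STORE after the defs. Keeping USE and
// DEF separate is what lets a def reuse the register of an operand whose
// last use is in the same instruction, and the LOAD/STORE slots give later
// spill code positions of its own.
inline unsigned slotIndex(unsigned InstrNumber, SlotKind Kind) {
  return InstrNumber * 4 + static_cast<unsigned>(Kind);
}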
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11654 91177308-0d34-0410-b5e6-96231b3b80d8
ilist of MachineInstr objects. This allows constant time removal and
insertion of MachineInstr instances from anywhere in each
MachineBasicBlock. It also allows for constant time splicing of
MachineInstrs into or out of MachineBasicBlocks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11340 91177308-0d34-0410-b5e6-96231b3b80d8
slots each. As a consequence they get numbered as 0, 2, 4 and so
on. The first slot is used for operand uses and the second for
defs. Here's an example:
0: A = ...
2: B = ...
4: C = A + B ;; last use of A
The live intervals should look like:
A = [1, 5)
B = [3, x)
C = [5, y)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11141 91177308-0d34-0410-b5e6-96231b3b80d8