the iterator hints we have to speed up overlaps(). This speeds linear
scan up by about 0.2s (out of 8.7s) on 175.vpr for PPC.
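A minimal sketch of the hinted overlap test, in modern C++ with
hypothetical names (not the exact API here): overlaps() is a merge-scan
of two sorted range lists, and hint iterators let a caller that walks
intervals in order resume the scan instead of restarting from begin():

  #include <vector>

  struct LiveRange { unsigned start, end; };  // half-open [start, end)
  using Ranges = std::vector<LiveRange>;

  // Merge-scan from the given hint positions; if the current pair of
  // ranges does not intersect, advance whichever range ends first.
  bool overlapsFrom(const Ranges &A, const Ranges &B,
                    Ranges::const_iterator I,   // hint into A
                    Ranges::const_iterator J) { // hint into B
    while (I != A.end() && J != B.end()) {
      if (I->start < J->end && J->start < I->end)
        return true;
      if (I->end <= J->end) ++I; else ++J;
    }
    return false;
  }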
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17935 91177308-0d34-0410-b5e6-96231b3b80d8
* Eliminate the releaseMemory method; this is not an analysis
* Change the fixed, active, and inactive lists of intervals to maintain an
  iterator for the current position in the interval. This allows us to do
  constant time increments of the iterator instead of having to do a binary
  search to find our LiveRange in our LiveInterval all of the time, which
  substantially speeds up cases where LiveIntervals have many LiveRanges,
  which is very common for physical registers. On targets with many
  physregs, this can make a noticeable difference.
With a release build of LLC for PPC, this halves the time spent in
processInactiveIntervals and processActiveIntervals, from 1.5s to
0.75s. This also lays the groundwork for more to come.
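A minimal sketch of the iterator-caching idea, with hypothetical names:
each list entry pairs the interval with an iterator remembering the
LiveRange we stopped at, so moving to the next program point is an
increment instead of a fresh binary search:

  #include <utility>
  #include <vector>

  struct LiveRange { unsigned start, end; };
  struct LiveInterval { std::vector<LiveRange> ranges; };

  using IntervalPtrs =
      std::vector<std::pair<LiveInterval*,
                            std::vector<LiveRange>::iterator>>;

  void advanceTo(IntervalPtrs &active, unsigned CurPoint) {
    for (auto &Entry : active) {
      auto &I = Entry.second;
      auto E = Entry.first->ranges.end();
      // The cached iterator only moves forward, so the total work over
      // an interval's lifetime is linear in its number of ranges.
      while (I != E && I->end <= CurPoint)
        ++I;
    }
  }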
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17933 91177308-0d34-0410-b5e6-96231b3b80d8
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16137 91177308-0d34-0410-b5e6-96231b3b80d8
lists. Instead of scanning the vector backwards, scan it forward,
swapping each element we want to erase out of the way, and then erase
all removed intervals at once at the end. This doesn't save much:
0.08s out of 4s when compiling 176.gcc.
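A minimal sketch of the idiom with a hypothetical expiredAt predicate;
it is the same pattern std::remove_if captures: kept elements are
compacted into a prefix and everything else is erased in one shot:

  #include <algorithm>
  #include <vector>

  struct Interval { unsigned start, end; };
  bool expiredAt(const Interval &I, unsigned Point) {
    return I.end <= Point;
  }

  void pruneExpired(std::vector<Interval> &active, unsigned Point) {
    active.erase(std::remove_if(active.begin(), active.end(),
                                [Point](const Interval &I) {
                                  return expiredAt(I, Point);
                                }),
                 active.end());
  }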
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16136 91177308-0d34-0410-b5e6-96231b3b80d8
LiveInterval>. This saves some space and removes a level of pointer
indirection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15167 91177308-0d34-0410-b5e6-96231b3b80d8
compilation of gcc:
* Use vectors instead of lists for the intervals sets
* Use a heap for the unhandled set to keep intervals always sorted and
  make insertions back into the heap very fast (compared to scanning a
  list)
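A minimal sketch of the heap-backed unhandled set, with hypothetical
types: a priority queue ordered by start point pops the earliest-
starting interval in O(log n), and pushing a split interval back in is
just as cheap:

  #include <queue>
  #include <vector>

  struct Interval { unsigned start, end; };

  struct StartsLater {
    bool operator()(const Interval *A, const Interval *B) const {
      return A->start > B->start;   // min-heap on the start point
    }
  };

  using UnhandledSet =
      std::priority_queue<Interval*, std::vector<Interval*>,
                          StartsLater>;

  // Usage: Interval *Cur = Unhandled.top(); Unhandled.pop();
  //        ... split Cur, then Unhandled.push(&Rest);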
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15103 91177308-0d34-0410-b5e6-96231b3b80d8
unhandled + handled. So unhandled now includes all fixed intervals,
and the fixed intervals never change while processing a function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12462 91177308-0d34-0410-b5e6-96231b3b80d8
allocator.
The implementation is completely rewritten and now employs several
optimizations not exercised before. For example, for 164.gzip we have
997 loads and 699 stores vs. the 1221 loads and 880 stores we had
before.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11798 91177308-0d34-0410-b5e6-96231b3b80d8
1. LiveIntervals now implement a 4 slot per instruction model: Load,
   Use, Def and Store slots. This is required in order to correctly
   represent caller saved register clobbering on function calls,
   register reuse in the same instruction (def reuses last use) and
   also spill code added later by the allocator (see the sketch after
   this list). The previous representation (2 slots per instruction)
   was insufficient and as a result was causing subtle bugs.
2. Fixes in spill code generation. This was the major cause of
failures in the test suite.
3. Linear scan now has core support for folding memory operands. This
is untested and not enabled (the live interval update function does
not attempt to fold loads/stores in instructions).
4. Lots of improvements in the debugging output of both live intervals
and linear scan. Give it a try... it is beautiful :-)
In summary, the above fixes all the issues with the recent reserved
register elimination changes and gets the allocator very close to the
next big step: folding memory operands.
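A minimal sketch of a four-slots-per-instruction numbering, with
hypothetical helpers (not necessarily the exact encoding used here):
each instruction gets consecutive Load, Use, Def, and Store slots, so
a call's clobber can start precisely at its Def slot and spill code
added later still has indices to land on:

  enum SlotKind { LOAD = 0, USE = 1, DEF = 2, STORE = 3 };

  unsigned slotIndex(unsigned InstrNum, SlotKind K) {
    return InstrNum * 4 + K;   // four consecutive slots per instruction
  }
  unsigned instrOfSlot(unsigned Slot) { return Slot / 4; }
  SlotKind kindOfSlot(unsigned Slot)  { return SlotKind(Slot % 4); }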
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11654 91177308-0d34-0410-b5e6-96231b3b80d8
ilist of MachineInstr objects. This allows constant time removal and
insertion of MachineInstr instances from anywhere in each
MachineBasicBlock. It also allows for constant time splicing of
MachineInstrs into or out of MachineBasicBlocks.
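A minimal sketch of why an intrusive list gives these guarantees, with
a hypothetical node type (not LLVM's ilist itself): every instruction
carries its own prev/next links, so removal needs no search and
splicing is a handful of pointer updates:

  struct InstrNode {
    InstrNode *prev = nullptr, *next = nullptr;
  };

  void remove(InstrNode *N) {                // O(1): unlink in place
    if (N->prev) N->prev->next = N->next;
    if (N->next) N->next->prev = N->prev;
    N->prev = N->next = nullptr;
  }

  void insertAfter(InstrNode *Pos, InstrNode *N) {  // O(1)
    N->prev = Pos;
    N->next = Pos->next;
    if (Pos->next) Pos->next->prev = N;
    Pos->next = N;
  }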
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11340 91177308-0d34-0410-b5e6-96231b3b80d8
registers (not as the max number of registers).
Change toSpill from a std::set to a std::vector<bool>.
Use the reverse iterator adapter to do a reverse scan of allocatable
registers.
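A minimal sketch with hypothetical names: toSpill becomes a dense flag
per register number, and std::reverse_iterator (via rbegin/rend) walks
the allocatable registers backwards:

  #include <vector>

  // toSpill must be sized to the number of registers up front.
  void markSpillsReversed(const std::vector<unsigned> &allocatable,
                          std::vector<bool> &toSpill) {
    for (auto RI = allocatable.rbegin(), RE = allocatable.rend();
         RI != RE; ++RI)
      toSpill[*RI] = true;     // O(1) test and set, no tree lookups
  }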
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11061 91177308-0d34-0410-b5e6-96231b3b80d8
is a move between two registers, at least one of the registers is
virtual and the two live intervals do not overlap.
This results in about a 40% reduction in intervals, a 30% decrease in
the register allocator's running time, and a 20% increase in peephole
optimizations (mainly move eliminations).
The option can be enabled by passing -join-liveintervals where
appropriate.
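A minimal sketch of the join test described above, with hypothetical
names and register numbering: coalesce only when at least one side is
virtual and the two intervals are disjoint, so merging them cannot
create a conflict:

  #include <vector>

  struct LiveRange { unsigned start, end; };
  struct LiveInterval {
    std::vector<LiveRange> ranges;    // sorted, non-overlapping
    bool overlaps(const LiveInterval &O) const {
      auto I = ranges.begin(), J = O.ranges.begin();
      while (I != ranges.end() && J != O.ranges.end()) {
        if (I->start < J->end && J->start < I->end) return true;
        if (I->end <= J->end) ++I; else ++J;
      }
      return false;
    }
  };

  // Assumption: virtual registers are numbered above the physicals.
  bool isVirtual(unsigned Reg) { return Reg >= 1024; }

  bool canJoin(unsigned SrcReg, unsigned DstReg,
               const LiveInterval &Src, const LiveInterval &Dst) {
    return (isVirtual(SrcReg) || isVirtual(DstReg)) &&
           !Src.overlaps(Dst);
  }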
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10965 91177308-0d34-0410-b5e6-96231b3b80d8
virtReg lives on the stack. Now a virtual register has an entry in the
virtual->physical map or the virtual->stack slot map, but never in
both.
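A minimal sketch of the invariant, with hypothetical maps: a virtual
register appears in the virtual->physical map or in the virtual->stack
slot map, never both, and an assert keeps the two in sync:

  #include <cassert>
  #include <map>

  std::map<unsigned, unsigned> Virt2Phys;      // vreg -> physreg
  std::map<unsigned, int>      Virt2StackSlot; // vreg -> spill slot

  void assignPhys(unsigned VReg, unsigned PReg) {
    assert(!Virt2StackSlot.count(VReg) && "vreg already on the stack");
    Virt2Phys[VReg] = PReg;
  }

  void assignStackSlot(unsigned VReg, int Slot) {
    assert(!Virt2Phys.count(VReg) && "vreg already has a physreg");
    Virt2StackSlot[VReg] = Slot;
  }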
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10958 91177308-0d34-0410-b5e6-96231b3b80d8
of the register allocator as follows:
          before    after
mesa      2.3790    1.5994
vpr       2.6008    1.2078
gcc       1.9840    0.5273
mcf       0.2569    0.0470
eon       1.8468    1.4359
twolf     0.9475    0.2004
burg      1.6807    1.3300
lambda    1.2191    0.3764
Speedups range anywhere from 30% to over 400% :-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10712 91177308-0d34-0410-b5e6-96231b3b80d8
saved register it has a longer free range than ECX (which is defined
every time there is a function call), which makes ECX a better
register to reserve.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10635 91177308-0d34-0410-b5e6-96231b3b80d8
which denotes the register we would like to be assigned to (virtual or
physical). In register allocation, if this hint exists and we can map
it to a physical register (it is either a physical register itself or
a virtual register that has already been assigned a physical one), we
use that register if it is available, instead of an arbitrary one from
the free pool.
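A minimal sketch of the hint lookup, with hypothetical names and
register numbering: map the hint through the current virtual->physical
assignment and prefer it only when the register is free:

  #include <map>
  #include <vector>

  // Assumption: physical registers are numbered below 1024, 0 = none.
  bool isPhysical(unsigned Reg) { return Reg && Reg < 1024; }

  unsigned pickRegister(unsigned HintReg,
                        const std::map<unsigned, unsigned> &Virt2Phys,
                        const std::vector<bool> &RegIsFree,
                        unsigned FallbackReg) {
    unsigned Phys = 0;
    if (isPhysical(HintReg))
      Phys = HintReg;                    // hint is already physical
    else {
      auto It = Virt2Phys.find(HintReg); // hinted vreg, maybe assigned
      if (It != Virt2Phys.end())
        Phys = It->second;
    }
    if (Phys && RegIsFree[Phys])
      return Phys;                       // honour the hint
    return FallbackReg;                  // else any free register
  }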
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10634 91177308-0d34-0410-b5e6-96231b3b80d8
nesting level when computing it. Right now the allocator uses:
w = sum_over_defs_uses( 10 ^ nesting level );
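A minimal sketch of the weight computation, with hypothetical inputs:
each def or use contributes 10 raised to its loop nesting level:

  #include <cmath>
  #include <vector>

  float computeWeight(const std::vector<unsigned> &nestingLevels) {
    float W = 0;
    for (unsigned Level : nestingLevels)   // one entry per def or use
      W += std::pow(10.0f, (float)Level);  // w += 10 ^ nesting level
    return W;
  }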
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10569 91177308-0d34-0410-b5e6-96231b3b80d8
instruction pass. This also fixes all remaining bugs, allowing this
new allocator to pass all tests under test/Programs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10515 91177308-0d34-0410-b5e6-96231b3b80d8
a) remove opIsUse(), opIsDefOnly(), opIsDefAndUse()
b) add isUse(), isDef()
c) rename opHiBits32() to isHiBits32(),
opLoBits32() to isLoBits32(),
opHiBits64() to isHiBits64(),
opLoBits64() to isLoBits64().
This results in much more readable code; for example, compare
"op.opIsDefOnly() || op.opIsDefAndUse()" to "op.isDef()", a pattern
used very often in the code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10461 91177308-0d34-0410-b5e6-96231b3b80d8
bug where spill instructions were added to the next basic block
instead of at the end of the current one when the instruction that
required the spill was the last in the block.
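A minimal sketch of the pitfall, with std::list standing in for the
instruction list (hypothetical types): inserting "after" the last
instruction must clamp to this block's end() rather than walking into
the next block:

  #include <list>
  #include <string>

  using Instr = std::string;
  using Block = std::list<Instr>;

  void insertSpillAfter(Block &BB, Block::iterator Pos,
                        const Instr &Spill) {
    // std::list::insert inserts before the iterator, so step Pos past
    // the instruction needing the spill, clamped to this block's end().
    if (Pos != BB.end()) ++Pos;
    BB.insert(Pos, Spill);
  }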
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10272 91177308-0d34-0410-b5e6-96231b3b80d8