instead of randomly groping about inside its outEdges array.
Make SchedGraph::addDummyEdges() use getNumOutEdges() instead of
outEdges.size().
Get rid of ifdefed-out code in SchedGraph::buildGraph().
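As a rough illustration of the accessor style (the class below is a simplified stand-in, not the real scheduling-graph node):

#include <cstddef>
#include <vector>

// Simplified stand-in for a scheduling-graph node.
class SchedGraphNode {
  std::vector<int> outEdges;   // placeholder edge storage
public:
  // Callers ask the node for its out-edge count instead of poking at the
  // container directly, so the representation can change without touching
  // every caller.
  std::size_t getNumOutEdges() const { return outEdges.size(); }
};

// addDummyEdges() can then spot leaf nodes through the accessor:
bool isLeaf(const SchedGraphNode &N) { return N.getNumOutEdges() == 0; }

int main() { SchedGraphNode N; return isLeaf(N) ? 0 : 1; }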
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11238 91177308-0d34-0410-b5e6-96231b3b80d8
the Virt2PhysRegMap std::map with an std::vector. This speeds up the
register allocator another (almost) 40%, from .72->.45s in a release build
of LLC on 253.perlbmk.
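A minimal sketch of the idea, assuming virtual register numbers are dense and start at a fixed base (FirstVirtualReg below is an illustrative constant, not the real LLVM value):

#include <cassert>
#include <vector>

static const unsigned FirstVirtualReg = 1024;  // illustrative base, assumption

class Virt2PhysMap {
  // Dense storage: index = VirtReg - FirstVirtualReg, value = physical
  // register number (0 = not mapped).  A std::map pays for an RB-tree node
  // and pointer chasing per lookup; this is a single indexed load.
  std::vector<unsigned> Map;
public:
  void grow(unsigned LastVirtReg) {
    Map.resize(LastVirtReg - FirstVirtualReg + 1, 0);
  }
  void assign(unsigned VirtReg, unsigned PhysReg) {
    assert(VirtReg >= FirstVirtualReg && "not a virtual register");
    Map[VirtReg - FirstVirtualReg] = PhysReg;
  }
  unsigned lookup(unsigned VirtReg) const {
    return Map[VirtReg - FirstVirtualReg];
  }
};

int main() {
  Virt2PhysMap VRM;
  VRM.grow(1026);        // room for vregs 1024..1026
  VRM.assign(1025, 3);   // vreg 1025 -> physical register 3
  return VRM.lookup(1025) == 3 ? 0 : 1;
}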
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11219 91177308-0d34-0410-b5e6-96231b3b80d8
This speeds up live variables a lot, from .60/.39s -> .47/.26s in LLC, for
the first/second pass respectively.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11216 91177308-0d34-0410-b5e6-96231b3b80d8
from physical registers, and they are always dense, it makes sense to not have
a ton of RBtree overhead. This change speeds up regalloclocal about 30% on
253.perlbmk, from .35s -> .27s in the JIT (in LLC, it goes from .74 -> .55).
Now live variable analysis is the slowest codegen pass. Of course it doesn't
help that we have to run it twice, because regalloclocal doesn't update it,
but even if it did it would still be the slowest pass (as it is, it's the
slowest pass twice over) :(
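A sketch along the same lines, assuming the map in question is keyed by physical register numbers, which are small and dense, so a flat array can replace the tree (the tracker and constant below are illustrative, not the actual regalloclocal code):

#include <vector>

static const unsigned NumPhysRegs = 256;  // placeholder; targets know the real count

// One slot per physical register instead of std::map nodes: -1 means free,
// otherwise the virtual register currently held there.
struct PhysRegTracker {
  std::vector<int> HeldVirtReg;
  PhysRegTracker() : HeldVirtReg(NumPhysRegs, -1) {}
  bool isFree(unsigned PhysReg) const { return HeldVirtReg[PhysReg] == -1; }
  void assign(unsigned PhysReg, int VirtReg) { HeldVirtReg[PhysReg] = VirtReg; }
  void release(unsigned PhysReg) { HeldVirtReg[PhysReg] = -1; }
};

int main() {
  PhysRegTracker T;
  T.assign(5, 1025);          // physreg 5 now holds vreg 1025
  return T.isFree(5) ? 1 : 0;
}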
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11215 91177308-0d34-0410-b5e6-96231b3b80d8
slots each. As a consequence they get numbered as 0, 2, 4 and so
on. The first slot is used for operand uses and the second for
defs. Here's an example:
0: A = ...
2: B = ...
4: C = A + B ;; last use of A
The live intervals should look like:
A = [1, 5)
B = [3, x)
C = [5, y)
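A tiny sketch of the numbering, assuming instructions are indexed densely from zero (the helper names are illustrative):

#include <iostream>

// Instruction i owns two consecutive slots: an even use slot and the
// following odd def slot.
unsigned useSlot(unsigned InstrIndex) { return 2 * InstrIndex; }
unsigned defSlot(unsigned InstrIndex) { return 2 * InstrIndex + 1; }

int main() {
  // In the example above, "4: C = A + B" is the third instruction (index 2).
  // A is defined by instruction 0 and last used by instruction 2, so its
  // interval is [defSlot(0), useSlot(2) + 1) = [1, 5).
  std::cout << "A = [" << defSlot(0) << ", " << useSlot(2) + 1 << ")\n";
  return 0;
}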
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11141 91177308-0d34-0410-b5e6-96231b3b80d8
registers (not as the max number of registers).
Change toSpill from a std::set into a std::vector<bool>.
Use the reverse iterator adapter to do a reverse scan of allocatable
registers.
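A short sketch of both container changes (the register count and the allocatable set below are made up for illustration):

#include <iostream>
#include <vector>

int main() {
  const unsigned NumRegs = 32;  // placeholder register count

  // toSpill as a flat bit vector: membership is an O(1) index instead of a
  // std::set lookup, and marking a register is a single store.
  std::vector<bool> ToSpill(NumRegs, false);
  ToSpill[5] = true;

  // Reverse scan of the allocatable registers via the reverse iterator
  // adapter, instead of hand-rolled backwards index arithmetic.
  std::vector<unsigned> Allocatable = {3, 5, 8, 12};
  for (std::vector<unsigned>::reverse_iterator I = Allocatable.rbegin(),
                                               E = Allocatable.rend();
       I != E; ++I)
    if (ToSpill[*I])
      std::cout << "would spill register " << *I << "\n";
  return 0;
}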
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11061 91177308-0d34-0410-b5e6-96231b3b80d8
Move Passes.h (which defines the interface to this file) to the top.
Move statistics to the top of the file.
Add a comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11034 91177308-0d34-0410-b5e6-96231b3b80d8
of a linear search to find the first range for comparisons. This cuts
down the linear scan register allocator's running time by a factor of 3
in 254.perlbmk and by a factor of 2.2 in 176.gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11030 91177308-0d34-0410-b5e6-96231b3b80d8
Simplification of LiveIntervals::Interval::overlaps() and addition of
examples to overlaps() and liveAt() to make them clearer.
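For reference, an illustrative re-creation (not the actual source) of what the overlap and liveness tests on half-open ranges boil down to:

#include <cassert>

// A single live range over half-open slot numbers [Start, End).
struct Range {
  unsigned Start, End;
};

// Two half-open ranges overlap exactly when each starts before the other ends.
bool overlaps(const Range &A, const Range &B) {
  return A.Start < B.End && B.Start < A.End;
}

// A range is live at a slot when the slot falls inside [Start, End).
bool liveAt(const Range &R, unsigned Slot) {
  return R.Start <= Slot && Slot < R.End;
}

int main() {
  Range A = {1, 5}, B = {3, 10};
  assert(overlaps(A, B));  // [1,5) and [3,10) share slots 3 and 4
  assert(liveAt(A, 4));    // last slot inside [1,5)
  assert(!liveAt(A, 5));   // the end point itself is not live
  return 0;
}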
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11028 91177308-0d34-0410-b5e6-96231b3b80d8
when joining we need to check if we overlap with the second interval
or any of its aliases.
Also make joining intervals the default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10973 91177308-0d34-0410-b5e6-96231b3b80d8
is a move between two registers, at least one of the registers is
virtual and the two live intervals do not overlap.
This results in about a 40% reduction in intervals, a 30% decrease in the
register allocator's running time, and a 20% increase in peephole
optimizations (mainly move eliminations).
The option can be enabled by passing -join-liveintervals where
appropriate.
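A rough sketch of the joining condition (the virtual-register threshold and the single-range interval are simplifications for illustration, not the real representation):

#include <iostream>

static const unsigned FirstVirtualReg = 1024;  // illustrative, not the real constant
bool isVirtual(unsigned Reg) { return Reg >= FirstVirtualReg; }

// Stand-in for a live interval: a single half-open range here.
struct Interval { unsigned Start, End; };
bool overlaps(const Interval &A, const Interval &B) {
  return A.Start < B.End && B.Start < A.End;
}

// A register-to-register copy is a join candidate when at least one side is
// virtual and the two intervals are disjoint; after joining, the copy turns
// into an identity move that a peephole pass can delete.
bool shouldJoin(unsigned SrcReg, const Interval &SrcInt,
                unsigned DstReg, const Interval &DstInt) {
  return (isVirtual(SrcReg) || isVirtual(DstReg)) && !overlaps(SrcInt, DstInt);
}

int main() {
  Interval A = {1, 5}, B = {5, 9};  // disjoint half-open intervals
  std::cout << shouldJoin(1025, A, 3, B) << "\n";  // prints 1
  return 0;
}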
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10965 91177308-0d34-0410-b5e6-96231b3b80d8
virtReg lives on the stack. Now a virtual register has an entry in the
virtual->physical map or the virtual->stack slot map but never in
both.
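A small sketch of that invariant, using illustrative container names and assert-enforced exclusivity:

#include <cassert>
#include <map>

std::map<unsigned, unsigned> Virt2Phys;    // virtreg -> physical register
std::map<unsigned, int> Virt2StackSlot;    // virtreg -> stack slot

void assignPhys(unsigned VirtReg, unsigned PhysReg) {
  assert(!Virt2StackSlot.count(VirtReg) && "vreg already lives on the stack");
  Virt2Phys[VirtReg] = PhysReg;
}

void assignStackSlot(unsigned VirtReg, int Slot) {
  assert(!Virt2Phys.count(VirtReg) && "vreg already has a physical register");
  Virt2StackSlot[VirtReg] = Slot;
}

int main() {
  assignPhys(1025, 3);       // vreg 1025 -> physical register 3
  assignStackSlot(1026, 0);  // vreg 1026 -> stack slot 0
  return 0;
}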
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10958 91177308-0d34-0410-b5e6-96231b3b80d8