This allows the allocator to free any resources used by the virtual register,
including physical register assignments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127560 91177308-0d34-0410-b5e6-96231b3b80d8
llvm-gcc-i386-linux-selfhost and llvm-x86_64-linux-checks buildbots.
The original log entry:
Remove optimization emitting a reference instead of a label difference, since
it can create more relocations. Removed isBaseAddressKnownZero method,
because it is no longer used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127540 91177308-0d34-0410-b5e6-96231b3b80d8
Live range splitting can create a number of small live ranges containing only a
single real use. Spill these small live ranges along with the large range they
are connected to by copies. This enables memory operand folding and maximizes
the spill-to-fill distance.
Work in progress with known bugs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127529 91177308-0d34-0410-b5e6-96231b3b80d8
There are too many compatibility problems with using mixed types in
std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up
a call to a 10-line function. Binary search is not /that/ hard to implement
correctly.
I tried terminating the binary search with a linear search, but contrary to my
expectation that actually made the algorithm slower. Most live intervals have
fewer than 4 segments. The early test against endIndex() does pay off, and this
version is 25% faster than plain std::upper_bound().
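As a rough illustration of the shape of that search (a standalone sketch with a
hypothetical Segment type, not the actual LiveInterval code): test against the
last end point first, then binary-search the end keys.

  #include <vector>

  struct Segment { unsigned Start, End; };  // hypothetical stand-in for a live range segment

  // Return the index of the first segment whose End is greater than Idx,
  // or Segs.size() if there is none.
  static size_t findSegmentAfter(const std::vector<Segment> &Segs, unsigned Idx) {
    size_t Size = Segs.size();
    if (Size == 0 || Idx >= Segs[Size - 1].End)
      return Size;                          // early test against the interval's end
    size_t Lo = 0, Hi = Size - 1;           // invariant: Segs[Hi].End > Idx
    while (Lo < Hi) {
      size_t Mid = Lo + (Hi - Lo) / 2;
      if (Segs[Mid].End <= Idx)
        Lo = Mid + 1;
      else
        Hi = Mid;
    }
    return Lo;
  }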
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127522 91177308-0d34-0410-b5e6-96231b3b80d8
protector insertion not working correctly with unreachable code. Since that
revision was rolled out, this test doesn't actually fail before this fix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127497 91177308-0d34-0410-b5e6-96231b3b80d8
The existing CompEnd predicate does not define a strict weak ordering as required
by the C++03 standard; therefore, its use as a predicate to std::upper_bound
is invalid. For a discussion of this issue, see
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#270
This patch replaces the asymmetrical comparison with an iterator adaptor that
achieves the same effect while being strictly standard-conforming by ensuring
an apples-to-apples comparison.
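The adaptor idea, sketched with hypothetical types rather than the real
LiveInterval classes: wrap the segment iterator so it dereferences to the
segment's end key, which lets std::upper_bound compare values of one type on
both sides.

  #include <algorithm>
  #include <cstddef>
  #include <iterator>
  #include <vector>

  struct Segment { unsigned Start, End; };  // hypothetical stand-in

  // Random-access adaptor that yields only a segment's End when dereferenced.
  class SegmentEndIterator {
    std::vector<Segment>::const_iterator I;
  public:
    using iterator_category = std::random_access_iterator_tag;
    using value_type = unsigned;
    using difference_type = std::ptrdiff_t;
    using pointer = const unsigned *;
    using reference = unsigned;

    explicit SegmentEndIterator(std::vector<Segment>::const_iterator It) : I(It) {}
    unsigned operator*() const { return I->End; }
    SegmentEndIterator &operator++() { ++I; return *this; }
    SegmentEndIterator &operator+=(difference_type N) { I += N; return *this; }
    friend difference_type operator-(const SegmentEndIterator &A,
                                     const SegmentEndIterator &B) { return A.I - B.I; }
    friend bool operator==(const SegmentEndIterator &A,
                           const SegmentEndIterator &B) { return A.I == B.I; }
    friend bool operator!=(const SegmentEndIterator &A,
                           const SegmentEndIterator &B) { return A.I != B.I; }
    std::vector<Segment>::const_iterator base() const { return I; }
  };

  // Both sides of the comparison are now plain unsigned keys, so the default
  // operator< is trivially a strict weak ordering.
  static size_t firstEndAfter(const std::vector<Segment> &Segs, unsigned Idx) {
    return std::upper_bound(SegmentEndIterator(Segs.begin()),
                            SegmentEndIterator(Segs.end()), Idx).base() -
           Segs.begin();
  }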
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127462 91177308-0d34-0410-b5e6-96231b3b80d8
flexible.
If it returns a register class that's different from the input, then that's the
register class used for cross-register class copies.
If it returns a register class that's the same as the input, then no cross-
register class copies are needed (normal copies would do).
If it returns null, then it's not at all possible to copy registers of the
specified register class.
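A sketch of how a caller might act on those three answers, with stand-in types
and names (not the actual TargetRegisterInfo API):

  enum class CopyKind { Impossible, Direct, CrossClass };

  struct RegClass;  // stand-in for a target register class

  // Interpret the hook's return value as described above. The hook is passed
  // in as a function pointer here purely to keep the sketch self-contained.
  CopyKind classifyCopy(const RegClass *RC,
                        const RegClass *(*crossCopyRegClass)(const RegClass *)) {
    const RegClass *CrossRC = crossCopyRegClass(RC);
    if (!CrossRC)
      return CopyKind::Impossible;   // null: registers of RC cannot be copied at all
    if (CrossRC == RC)
      return CopyKind::Direct;       // same class: normal copies suffice
    return CopyKind::CrossClass;     // different class: copy through CrossRC
  }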
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127368 91177308-0d34-0410-b5e6-96231b3b80d8
The damage done by physreg coalescing only depends on the number of instructions
the extended physreg live range covers. This fixes PR9438.
The heuristic is still luck-based, and physreg coalescing really should be
disabled completely. We need a register allocator with better hinting support
before that is possible.
Convert a test to FileCheck and force spilling by inserting an extra call. The
previous spilling behavior was dependent on misguided physreg coalescing
decisions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127351 91177308-0d34-0410-b5e6-96231b3b80d8
This will be used for keeping register allocator data structures up to date
while LiveRangeEdit is trimming live intervals.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127300 91177308-0d34-0410-b5e6-96231b3b80d8
LiveRangeEdit::eliminateDeadDefs() will eventually be used by coalescing,
splitting, and spilling for dead code elimination. It can delete chains of dead
instructions as long as there are no dependency loops.
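The underlying worklist idea, sketched on a toy instruction representation
rather than the real MachineInstr API: erase dead defs and re-check their
operands, which lets whole chains disappear while dependency loops keep each
other alive.

  #include <vector>

  struct Instr {
    std::vector<Instr *> Operands;  // instructions whose results this one reads
    unsigned NumUses = 0;           // how many instructions read this one's result
    bool HasSideEffects = false;
    bool Erased = false;
  };

  static void eliminateDeadDefs(std::vector<Instr *> Worklist) {
    while (!Worklist.empty()) {
      Instr *I = Worklist.back();
      Worklist.pop_back();
      if (I->Erased || I->HasSideEffects || I->NumUses != 0)
        continue;                    // not dead (members of a loop keep uses > 0)
      I->Erased = true;              // delete the dead definition
      for (Instr *Op : I->Operands)  // each operand loses one use...
        if (--Op->NumUses == 0)
          Worklist.push_back(Op);    // ...and may now be dead itself
    }
  }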
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127287 91177308-0d34-0410-b5e6-96231b3b80d8
with this before since none of the register tracking or nightly tests
had unschedulable nodes.
This should probably be refixed with a special default Node that just
returns some "don't touch me" values.
Fixes PR9427
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127263 91177308-0d34-0410-b5e6-96231b3b80d8
This change uses the MaxReorderWindow for both height and depth, which
tends to limit the negative effects of high register pressure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127203 91177308-0d34-0410-b5e6-96231b3b80d8
In very rare cases, the coalescer can leave overly large live intervals around after
rematerializing cheap-as-a-move instructions.
Linear scan doesn't really care, but live range splitting gets very confused
when a live range is killed by a ghost instruction.
I will fix this properly in the coalescer after 2.9 branches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127096 91177308-0d34-0410-b5e6-96231b3b80d8
regs. This is the only change in this checkin that may affect the
default scheduler. With better register tracking and heuristics, it
doesn't make sense to artificially lower the register limit so much.
Added -sched-high-latency-cycles and X86InstrInfo::isHighLatencyDef to
give the scheduler a way to account for div and sqrt on targets that
don't have an itinerary. It currently defaults to 10 (the actual
number doesn't matter much), but only takes effect on non-default
schedulers: list-hybrid and list-ilp.
Added several heuristics that can be individually disabled for the
non-default sched=list-ilp mode. This helps us determine how much
better we can do on a given benchmark than the default
scheduler. Certain compute-intensive loops run much faster in this
mode with the right set of heuristics, and it doesn't seem to have
much negative impact elsewhere. Not all of the heuristics are needed,
but we still need to experiment to decide which should be disabled by
default for sched=list-ilp.
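A hedged sketch of the shape such a hook might take; the class, opcodes, and
switch here are stand-ins, not the actual X86InstrInfo code:

  namespace {
  enum Opcode { FADD, FMUL, FDIV, FSQRT };

  struct ToyInstrInfo {
    // Defs flagged here are charged -sched-high-latency-cycles (default 10) by
    // the list-hybrid and list-ilp schedulers on targets without an itinerary.
    bool isHighLatencyDef(int Opc) const {
      switch (Opc) {
      case FDIV:
      case FSQRT:
        return true;
      default:
        return false;
      }
    }
  };
  } // end anonymous namespace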
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127067 91177308-0d34-0410-b5e6-96231b3b80d8
The global cost is the sum of block frequencies for spill code that must be
inserted because preferences weren't met.
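As a toy illustration of that cost (assuming a simplified per-block constraint
where the only preference is "live in a register here"; not the actual
allocator code):

  #include <vector>

  struct BlockConstraint {
    float Frequency;   // estimated execution frequency of the basic block
    bool PrefersReg;   // the block wants the value in a register
    bool GetsReg;      // what the candidate assignment actually provides
  };

  static float globalSpillCost(const std::vector<BlockConstraint> &Blocks) {
    float Cost = 0;
    for (const BlockConstraint &BC : Blocks)
      if (BC.PrefersReg && !BC.GetsReg)  // preference not met: spill code goes here
        Cost += BC.Frequency;
    return Cost;
  }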
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127062 91177308-0d34-0410-b5e6-96231b3b80d8
This simplifies the code and makes it faster too.
The interference patterns are saved for each candidate register. They will be
reused when the split is actually executed. Work in progress.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127054 91177308-0d34-0410-b5e6-96231b3b80d8