operator* on the by-operand iterators to return a MachineOperand& rather than
a MachineInstr&. At this point they almost behave like normal iterators!
Again, this requires making some existing loops more verbose, but should pave
the way for the big range-based for-loop cleanups in the future.
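A hedged sketch of the kind of loop this is paving the way for; the range
helper name used here (reg_operands) is illustrative and may not match the
tree at this revision:

  unsigned countDefsOf(const MachineRegisterInfo &MRI, unsigned Reg) {
    unsigned NumDefs = 0;
    // With operator* returning MachineOperand&, a range-based loop over the
    // by-operand iterators reads naturally.
    for (const MachineOperand &MO : MRI.reg_operands(Reg))
      if (MO.isDef())
        ++NumDefs;
    return NumDefs;
  }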
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203865 91177308-0d34-0410-b5e6-96231b3b80d8
This makes the API a bit more natural to use and makes it easier to make
LiveRange's implementation details private.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192394 91177308-0d34-0410-b5e6-96231b3b80d8
LiveRange now just manages a list of segments and a list of value numbers,
as LiveInterval did previously, but without details like spill weight or
a fixed register number.
LiveInterval is now a subclass of LiveRange and simply adds the spill weight
and the register number.
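Roughly, the resulting class shape looks like this (a simplified sketch, not
the actual headers):

  class LiveRange {
  public:
    // Ordered list of segments and the value numbers they carry.
    SmallVector<Segment, 4> segments;
    SmallVector<VNInfo *, 4> valnos;
    // ... queries and mutators shared by all live ranges ...
  };

  class LiveInterval : public LiveRange {
  public:
    const unsigned reg; // The register (or stack slot) this interval is for.
    float weight;       // Spill weight used by the register allocator.
  };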
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192393 91177308-0d34-0410-b5e6-96231b3b80d8
The Segment struct contains a single interval; multiple instances of this struct
are used to construct a live range, but the struct is not a live range by
itself.
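A hedged sketch of the struct's shape (simplified from the header; treat the
helper method as illustrative):

  // A Segment is a single half-open interval [start, end) carrying the value
  // number live across it; a live range strings many of these together.
  struct Segment {
    SlotIndex start;  // Start point of the segment (inclusive).
    SlotIndex end;    // End point of the segment (exclusive).
    VNInfo *valno;    // Identity of the value that is live in this segment.

    bool contains(SlotIndex I) const { return start <= I && I < end; }
  };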
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192392 91177308-0d34-0410-b5e6-96231b3b80d8
Now that the LiveDebugVariables pass is running *after* register
coalescing, the ConnectedVNInfoEqClasses class needs to deal with
DBG_VALUE instructions.
This only comes up when rematerialization during coalescing causes the
remaining live range of a virtual register to separate into two
connected components.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182592 91177308-0d34-0410-b5e6-96231b3b80d8
Rewrite value numbers directly in the 'Other' LiveInterval, which is
moribund anyway. This avoids allocating the OtherAssignments vector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175690 91177308-0d34-0410-b5e6-96231b3b80d8
Adding new segments to large LiveIntervals can be expensive because the
LiveRange objects after the insertion point may need to be moved left or
right. This can cause quadratic behavior when adding a large number of
segments to a live range.
The LiveRangeUpdater class allows the LiveInterval to be in a temporarily
invalid state while segments are being added. It maintains an internal
gap in the LiveInterval when it is shrinking, and it has a spill area
for new segments when the LiveInterval is growing.
The behavior is similar to the existing mergeIntervalRanges() function,
except it allocates less memory for the spill area, and the algorithm is
turned inside out so the loop is driven by the clients.
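A sketch of the client-driven usage pattern, with method names as I believe
they read in the class (treat them as illustrative):

  void addManySegments(LiveInterval *LI, SlotIndex *Starts, SlotIndex *Ends,
                       VNInfo *VNI, unsigned N) {
    LiveRangeUpdater Updater(LI); // LI may be temporarily invalid from here on.
    for (unsigned i = 0; i != N; ++i)
      Updater.add(Starts[i], Ends[i], VNI);
    Updater.flush();              // The destructor also flushes.
  }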
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175644 91177308-0d34-0410-b5e6-96231b3b80d8
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.
Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169131 91177308-0d34-0410-b5e6-96231b3b80d8
The fix is obvious and the only test case I have is horrible, so I am
not including it. The problem shows up when self-hosting clang on i386
with -new-coalescer enabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164793 91177308-0d34-0410-b5e6-96231b3b80d8
The RegisterCoalescer understands overlapping live ranges where one
register is defined as a copy of the other. With this change, register
allocators using LiveRegMatrix can do the same, at least for copies
between physical and virtual registers.
When a physreg is defined by a copy from a virtreg, allow those live
ranges to overlap:
%CL<def> = COPY %vreg11:sub_8bit; GR32_ABCD:%vreg11
%vreg13<def,tied1> = SAR32rCL %vreg13<tied0>, %CL<imp-use,kill>
We can assign %vreg11 to %ECX, overlapping the live range of %CL.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163336 91177308-0d34-0410-b5e6-96231b3b80d8
The 'unused' state of a value number can be represented as an invalid
def SlotIndex. This also exposed code that shouldn't have been looking
at unused value VNInfos.
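In sketch form, the flag becomes a derived property (hedged; the real VNInfo
interface may differ in detail):

  struct VNInfo {
    unsigned id;    // The ID number of this value.
    SlotIndex def;  // Index of the defining instruction; invalid if unused.

    bool isUnused() const { return !def.isValid(); }
    void markUnused() { def = SlotIndex(); }
  };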
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161258 91177308-0d34-0410-b5e6-96231b3b80d8
The only real user of the flag was removeCopyByCommutingDef(), and it
has been switched to LiveIntervals::hasPHIKill().
All the code changed by this patch was only concerned with computing and
propagating the flag.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161255 91177308-0d34-0410-b5e6-96231b3b80d8
When a live range splits into multiple connected components, we would
arbitrarily assign <undef> uses to component 0. This is wrong when the
use is tied to a def that gets assigned to a different component:
%vreg69<def> = ADD8ri %vreg68<undef>, 1
The use and def must get the same virtual register.
Fix this by assigning <undef> uses to the same component as the value
defined by the instruction, if any:
%vreg69<def> = ADD8ri %vreg69<undef>, 1
This fixes PR13402. The PR has a test case which I am not including
because it is unlikely to keep exposing this behavior in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160739 91177308-0d34-0410-b5e6-96231b3b80d8
generalizing its implementation sufficiently to support this value
number scenario as well.
This cuts out another significant performance hit in large functions
(over 10k basic blocks, etc.), especially those with "natural" CFG
structures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160026 91177308-0d34-0410-b5e6-96231b3b80d8
back of it.
I don't have anything even remotely close to a test case for this. It
only broke two build bots, both of them doing bootstrap builds, one of
them a dragonegg bootstrap. It doesn't break for me when I bootstrap
either. It doesn't reproduce every time or on many machines during the
bootstrap. Many thanks to Duncan Sands who got the exact command (and
stage of the bootstrap) which failed on the dragonegg bootstrap and
managed to get it to trigger under valgrind with debug symbols. The fix
was then found by inspection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159993 91177308-0d34-0410-b5e6-96231b3b80d8
quadratic behavior when performing pathological merges. Fixes the core
element of PR12652.
There is only one user of addRangeFrom left: join. I'm hoping to
refactor further in a future patch and have join use this merge
operation as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159982 91177308-0d34-0410-b5e6-96231b3b80d8
of the tricky merge routines. This adds a layer of testing that was
necessary when implementing more efficient (and complex) merge logic for
this data structure.
No functionality changed here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159981 91177308-0d34-0410-b5e6-96231b3b80d8
Don't print out the register number and spill weight, making the TRI
argument unnecessary.
This allows callers to interpret the reg field. It can currently be a
virtual register, a physical register, a spill slot, or a register unit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158031 91177308-0d34-0410-b5e6-96231b3b80d8
Dead copies cause problems because they are trivial to coalesce, but
removing them gives the live range a dangling end point. This patch
enables full dead code elimination which trims live ranges to their uses
so end points don't dangle.
DCE may erase multiple instructions. Put the pointers in an ErasedInstrs
set so we never risk visiting erased instructions in the work list.
There aren't supposed to be any dead copies entering the RegisterCoalescer,
but they do slip by, as evidenced by test/CodeGen/X86/coalescer-dce.ll.
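The bookkeeping amounts to a set lookup before touching anything from the
work list; a hedged sketch:

  SmallPtrSet<MachineInstr*, 8> ErasedInstrs;

  // Dead code elimination may erase several instructions; remember them all.
  void recordErased(ArrayRef<MachineInstr*> Dead) {
    ErasedInstrs.insert(Dead.begin(), Dead.end());
  }

  bool processCopy(MachineInstr *CopyMI) {
    // Never touch an instruction that DCE already erased.
    if (ErasedInstrs.count(CopyMI))
      return false;
    // ... attempt to coalesce the copy ...
    return true;
  }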
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157101 91177308-0d34-0410-b5e6-96231b3b80d8
A live range that has an early clobber tied redef now looks like a
normal tied redef, except the early clobber def uses the early clobber
slot.
This is enough to handle any strange interference problems.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149769 91177308-0d34-0410-b5e6-96231b3b80d8
more than two adjacent ranges needed to be merged. The new version should be
able to handle an arbitrary sequence of adjacent ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149588 91177308-0d34-0410-b5e6-96231b3b80d8
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.
The load and store slots are no longer needed now that the deferred spill
code insertion framework has been deleted.
The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals. In reality, these slots were used to distinguish
early-clobber defs from normal defs.
The new naming scheme also has 4 slots, but the names match how the
slots are really used. This is a purely mechanical renaming, but some
of the code makes a lot more sense now.
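For reference, the renamed slots, roughly as they read in SlotIndexes.h after
this change (comments paraphrased):

  enum Slot {
    Slot_Block,        // Block entry/exit; used by live-in and live-out values.
    Slot_EarlyClobber, // Defs and uses at the early-clobber slot.
    Slot_Register,     // Normal register defs and uses.
    Slot_Dead,         // End point for dead defs.
    Slot_Count
  };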
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144503 91177308-0d34-0410-b5e6-96231b3b80d8
It is conservatively correct to keep the hasPHIKill flags, even after
deleting PHI-defs.
The calculation can be very expensive after taildup has created a
quadratic number of indirectbr edges in the CFG, and the hasPHIKill flag
isn't used for anything after RenumberValues().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139780 91177308-0d34-0410-b5e6-96231b3b80d8
Three out of four clients prefer this interface which is consistent with
extendIntervalEndTo() and LiveRangeCalc::extend().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139604 91177308-0d34-0410-b5e6-96231b3b80d8
There are too many compatibility problems with using mixed types in
std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up
a call to a 10-line function. Binary search is not /that/ hard to implement
correctly.
I tried terminating the binary search with a linear search, but that actually
made the algorithm slower, contrary to my expectation. Most live intervals have
fewer than 4 segments. The early test against endIndex() does pay off, and this
version is 25% faster than plain std::upper_bound().
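An illustrative sketch of the approach (not the in-tree code): test the common
case first, then binary-search on segment ends. It assumes the Segment and
SlotIndex types from LiveInterval.h.

  // Return the first segment whose end is greater than Pos, or End if none.
  const Segment *findSegment(const Segment *Begin, const Segment *End,
                             SlotIndex Pos) {
    size_t Len = End - Begin;
    // Early test: many queries fall past the last segment.
    if (Len == 0 || Pos >= End[-1].end)
      return End;
    const Segment *I = Begin;
    do {
      size_t Mid = Len >> 1;
      if (Pos < I[Mid].end)
        Len = Mid;               // Answer is in the left half, including Mid.
      else {
        I += Mid + 1;            // Answer is strictly to the right of Mid.
        Len -= Mid + 1;
      }
    } while (Len);
    return I;
  }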
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127522 91177308-0d34-0410-b5e6-96231b3b80d8
The existing CompEnd predicate does not define a strict weak order as required
by the C++03 standard; therefore, its use as a predicate to std::upper_bound
is invalid. For a discussion of this issue, see
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#270
This patch replaces the asymmetrical comparison with an iterator adaptor that
achieves the same effect while being strictly standard-conforming by ensuring
an apples-to-apples comparison.
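A hedged sketch of the idea (names are illustrative, not the in-tree adaptor):
wrap the segment iterator so that dereferencing yields the end index, making
std::upper_bound compare SlotIndex against SlotIndex on both sides.

  #include <algorithm>
  #include <cstddef>
  #include <iterator>

  class SegEndIter {
    const Segment *S;
  public:
    typedef std::random_access_iterator_tag iterator_category;
    typedef SlotIndex value_type;
    typedef std::ptrdiff_t difference_type;
    typedef const SlotIndex *pointer;
    typedef const SlotIndex &reference;

    SegEndIter() : S(0) {}
    explicit SegEndIter(const Segment *s) : S(s) {}
    const Segment *base() const { return S; }

    // Only the operations std::upper_bound actually uses are provided.
    const SlotIndex &operator*() const { return S->end; }
    SegEndIter &operator++() { ++S; return *this; }
    SegEndIter &operator+=(std::ptrdiff_t N) { S += N; return *this; }
    SegEndIter operator+(std::ptrdiff_t N) const { return SegEndIter(S + N); }
    std::ptrdiff_t operator-(const SegEndIter &O) const { return S - O.S; }
    bool operator==(const SegEndIter &O) const { return S == O.S; }
    bool operator!=(const SegEndIter &O) const { return S != O.S; }
  };

  // First segment whose end is greater than Pos; SlotIndex's operator< now
  // supplies the strict weak order that upper_bound requires.
  const Segment *upperBoundByEnd(const Segment *B, const Segment *E,
                                 SlotIndex Pos) {
    return std::upper_bound(SegEndIter(B), SegEndIter(E), Pos).base();
  }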
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127462 91177308-0d34-0410-b5e6-96231b3b80d8