32-bit offset jump tables just use real branch instructions and so aren't
marked as data regions. We were still emitting the .end_data_region
marker for them, though, which triggered an assertion.
rdar://11499158
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157221 91177308-0d34-0410-b5e6-96231b3b80d8
This helps compile time when the greedy register allocator splits live
ranges in giant functions. Without the bias, we would try to grow
regions through the giant edge bundles, usually only to find that the
region had become too big and expensive.
If a live range has many uses in blocks near the giant bundle, the small
negative bias doesn't make a big difference, and we still consider
regions including the giant edge bundle.
Giant edge bundles are usually connected to landing pads or indirect
branches.
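Roughly, as a standalone toy (the struct, threshold, and bias value are
invented for illustration, not the actual SpillPlacement code): bundles
that connect to a huge number of blocks get a small negative bias, so
region growth only crosses them when nearby uses clearly pay for it.

    #include <cstdio>
    #include <vector>

    // Hypothetical stand-in for an edge bundle in the spill placement
    // problem; the real data structures are more involved.
    struct EdgeBundle {
      unsigned NumBlocks; // basic blocks connected to this bundle
      float Bias;         // preference for splitting here; negative = avoid
    };

    int main() {
      std::vector<EdgeBundle> Bundles = {{4, 0.0f}, {3000, 0.0f}, {7, 0.0f}};

      // Illustrative threshold: bundles touching very many blocks are
      // usually landing pads or indirect branches, so bias against
      // growing split regions through them.
      const unsigned HugeBundle = 256;
      for (EdgeBundle &B : Bundles)
        if (B.NumBlocks > HugeBundle)
          B.Bias -= 1.0f; // the small negative bias

      for (const EdgeBundle &B : Bundles)
        std::printf("blocks=%u bias=%.1f\n", B.NumBlocks, B.Bias);
    }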
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157174 91177308-0d34-0410-b5e6-96231b3b80d8
With physreg joining out of the way, it is easy to recognize the
instructions that need their kill flags cleared while testing for
interference.
This allows us to skip the final scan of all instructions for an 11%
speedup of the coalescer pass.
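A minimal toy model of the change (the operand and instruction types are
invented; the real code works on MachineInstrs and LiveIntervals): the
interference walk already visits every use of the source register, so
stale kill flags can be cleared right there instead of in a final scan.

    #include <cstdio>
    #include <vector>

    // Invented, simplified operand/instruction types for illustration.
    struct Operand { unsigned Reg; bool Kill; };
    struct Instr { std::vector<Operand> Ops; };

    // While testing SrcReg's uses for interference (checks elided in
    // this sketch), rewrite them to DstReg and clear kill flags on the
    // spot: after joining, DstReg may live past what used to be
    // SrcReg's last use.
    void joinRegs(std::vector<Instr> &Body, unsigned SrcReg, unsigned DstReg) {
      for (Instr &I : Body)
        for (Operand &O : I.Ops)
          if (O.Reg == SrcReg) {
            O.Reg = DstReg;
            O.Kill = false; // no separate pass needed to fix this up
          }
    }

    int main() {
      std::vector<Instr> Body = {{{{1, false}}}, {{{1, true}}}};
      joinRegs(Body, 1, 2);
      for (const Instr &I : Body)
        std::printf("reg=%u kill=%d\n", I.Ops[0].Reg, (int)I.Ops[0].Kill);
    }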
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157169 91177308-0d34-0410-b5e6-96231b3b80d8
may be RAUW'd by the recursive call to LegalizeOps; instead, retrieve
the other operands when calling UpdateNodeOperands. Fixes PR12889.
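The hazard generalizes beyond LegalizeOps; here is a self-contained toy
(the Node type and legalize() are made up, not SelectionDAG) showing why
a cached operand pointer goes stale once a replacement (RAUW) happens,
while re-reading the operand afterwards stays correct.

    #include <cassert>
    #include <vector>

    // Toy DAG node; real SDNodes are far more involved.
    struct Node {
      std::vector<Node *> Ops;
      Node *Replacement = nullptr; // set when this node has been RAUW'd
    };

    // Stand-in for the recursive legalization call: it may replace a
    // node's operands with their RAUW'd successors.
    void legalize(Node &N) {
      for (Node *&Op : N.Ops)
        while (Op->Replacement)
          Op = Op->Replacement;
    }

    int main() {
      Node A, B, N;
      N.Ops = {&A};
      Node *Cached = N.Ops[0]; // bad: cached across a call that can RAUW
      A.Replacement = &B;      // something replaced A with B
      legalize(N);
      assert(N.Ops[0] == &B && Cached == &A); // the cache is stale
      (void)Cached;
    }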
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157162 91177308-0d34-0410-b5e6-96231b3b80d8
There should be no difference in the resulting binary, given a sufficiently
smart compiler. However, we have already seen compiler timeouts on the
generated code in Intrinsics.gen, so this hopefully makes the lives of slow
buildbots a little easier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157161 91177308-0d34-0410-b5e6-96231b3b80d8
X86 has 2-addr instructions with different constraints on the tied def
and use operands. One is GR32, one is GR32_NOSP.
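As a toy illustration of the constraint problem (the enum and helper are
invented; LLVM computes this with register class subclass queries): when
the def's class and the tied use's class differ, the one virtual
register shared by both operands has to satisfy their common subclass.

    #include <cassert>
    #include <cstdio>

    // Invented two-class lattice: GR32_NOSP is GR32 minus the stack
    // pointer, so it is a strict subclass of GR32.
    enum RegClass { GR32, GR32_NOSP };

    RegClass commonSubClass(RegClass A, RegClass B) {
      // With only these two classes, any mixed pair meets at GR32_NOSP.
      return (A == B) ? A : GR32_NOSP;
    }

    int main() {
      // A 2-addr instruction ties a GR32 def to a GR32_NOSP use; the
      // single vreg behind both must avoid the stack pointer.
      RegClass VRegClass = commonSubClass(GR32, GR32_NOSP);
      assert(VRegClass == GR32_NOSP);
      std::printf("tied vreg constrained to GR32_NOSP\n");
    }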
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157149 91177308-0d34-0410-b5e6-96231b3b80d8
This class is meant to be the primary interface for examining a live
range in the vicinity of a given instruction. It avoids all the messy
dealings with iterators and early clobbers.
This is a more abstract interface to live ranges, hiding the
implementation as a vector of segments.
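A self-contained sketch of what such an interface buys (this toy is not
LLVM's actual class; the method names only mirror the idea): callers ask
questions about liveness around one instruction index instead of walking
segment iterators themselves.

    #include <cstdio>
    #include <vector>

    // Toy live range: segments [Start, End) of instruction indices.
    struct Segment { int Start, End; };

    // The query hides the segment vector behind simple questions about
    // the neighborhood of a single instruction index.
    class LiveRangeQuerySketch {
      const Segment *S = nullptr;
      int Idx;
    public:
      LiveRangeQuerySketch(const std::vector<Segment> &LR, int Idx) : Idx(Idx) {
        for (const Segment &Seg : LR)
          if (Seg.Start <= Idx && Idx < Seg.End) { S = &Seg; break; }
      }
      bool valueIn() const { return S && S->Start < Idx; }   // live into Idx
      bool isKill() const { return S && S->End == Idx + 1; } // last use here
    };

    int main() {
      std::vector<Segment> LR = {{0, 5}, {8, 12}};
      LiveRangeQuerySketch Q(LR, 4);
      std::printf("valueIn=%d isKill=%d\n", (int)Q.valueIn(), (int)Q.isKill());
    }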
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157141 91177308-0d34-0410-b5e6-96231b3b80d8
Dead code elimination during coalescing could cause a virtual register
to be split into connected components. The subsequent rewriting would
then be confused by already joined copies that were still present in the
code but had no corresponding value number in the live range.
Erase all joined copies instantly when joining intervals such that the
MI and LiveInterval representations are always in sync.
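A toy model of the invariant (standard containers stand in for the MI
stream and the live interval data): a joined copy is deleted from both
representations at the same moment, so there is never a window where
they disagree.

    #include <cstdio>
    #include <list>
    #include <set>

    struct Instr { int Id; bool IsCopy; };

    int main() {
      // Stand-ins: the instruction stream and per-instruction live info.
      std::list<Instr> Body = {{0, false}, {1, true}, {2, false}};
      std::set<int> LiveInfo = {0, 1, 2};

      // Erase joined copies instantly: both structures are updated
      // together, instead of queueing the copy for a later post pass.
      for (auto I = Body.begin(); I != Body.end();) {
        if (I->IsCopy) {
          LiveInfo.erase(I->Id); // live-range side
          I = Body.erase(I);     // instruction side
        } else {
          ++I;
        }
      }
      std::printf("instrs=%zu liveinfo=%zu\n", Body.size(), LiveInfo.size());
    }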
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157135 91177308-0d34-0410-b5e6-96231b3b80d8
The current code will generate a prologue which starts with something like:
    mflr 0           # copy the link register into r0
    stw 31, -4(1)    # store r31 below the stack pointer: not allowed by the ABI
    stw 0, 4(1)      # save the old LR in the caller's linkage area
    stwu 1, -16(1)   # only now allocate the new 16-byte stack frame
But under the PPC32 SVR4 ABI, access to negative offsets from R1 is not allowed.
This was pointed out by Peter Bergner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157133 91177308-0d34-0410-b5e6-96231b3b80d8
Dead code and joined copies are now eliminated on the fly, and there is
no need for a post pass.
This makes the coalescer work like other modern register allocator
passes: code is changed on the fly, and there is no pending list of
changes to be committed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157132 91177308-0d34-0410-b5e6-96231b3b80d8
The late dead code elimination is no longer necessary.
The test changes are caused by a register hint that can be either %rdi or
%rax. The choice depends on the use list order, which this patch changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157131 91177308-0d34-0410-b5e6-96231b3b80d8
Before rewriting uses of one value in A to register B, check that there
are no tied uses; rewriting a tied use would require multiple values of
A to be rewritten.
This bug can't bite in the current version of the code for a fairly
subtle reason: A tied use would have caused 2-addr to insert a copy
before the use. If the copy has been coalesced, it will be found by the
same loop changed by this patch, and the optimization is aborted.
This was exposed by 400.perlbench and lua after applying a patch that
deletes joined copies aggressively.
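A self-contained sketch of the guard (the operand/instruction types are
invented for illustration; LLVM queries tied operands on the
MachineInstr itself): scan A's uses first, and bail out of the rewrite
if any of them is tied to a def.

    #include <cstdio>
    #include <vector>

    // Invented simplified types for illustration only.
    struct Operand { unsigned Reg; bool IsDef; bool TiedToDef; };
    struct Instr { std::vector<Operand> Ops; };

    // Rewriting uses of A to B is only safe with no tied uses: a tied
    // use forces its def to change too, i.e. multiple values of A.
    bool canRewriteUses(const std::vector<Instr> &Body, unsigned A) {
      for (const Instr &I : Body)
        for (const Operand &O : I.Ops)
          if (!O.IsDef && O.Reg == A && O.TiedToDef)
            return false; // abort the optimization
      return true;
    }

    int main() {
      std::vector<Instr> Body = {{{{1, false, true}, {1, true, false}}}};
      std::printf("can rewrite: %d\n", (int)canRewriteUses(Body, 1));
    }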
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157130 91177308-0d34-0410-b5e6-96231b3b80d8
There is no reason to defer the collection of virtual registers whose
register class may be replaced with a larger class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157125 91177308-0d34-0410-b5e6-96231b3b80d8
Otherwise, just looking up a value in the map requires creating a VH,
adding it to the use lists, and destroying it again.
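A toy demonstration of the cost being avoided (the VH type here is a
simplified imitation, not llvm::ValueHandle): even a read-only lookup
with a handle-typed key registers and unregisters a temporary handle,
whereas keying the map on the raw pointer does not.

    #include <cstdio>
    #include <map>
    #include <set>

    // Simplified imitation of value handles: each handle registers
    // itself with the Value it watches so RAUW/deletion can update it.
    struct Value { std::set<void *> Handles; };

    struct VH {
      Value *V;
      explicit VH(Value *V) : V(V) { V->Handles.insert(this); } // register
      ~VH() { V->Handles.erase(this); }                         // unregister
    };

    int main() {
      Value X;
      std::map<Value *, int> ByPointer; // keyed on the raw pointer
      ByPointer[&X] = 42;

      // A VH-keyed map would pay this on every lookup: construct a
      // temporary handle, touch X's handle list, destroy it again.
      {
        VH Tmp(&X);
        std::printf("handles during lookup: %zu\n", X.Handles.size());
      }
      std::printf("handles after: %zu value: %d\n", X.Handles.size(),
                  ByPointer[&X]);
    }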
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157124 91177308-0d34-0410-b5e6-96231b3b80d8