Coalescing can remove copy-like instructions with sub-register operands
that constrained the register class. Examples are:
x86: GR32_ABCD:sub_8bit_hi -> GR32
arm: DPR_VFP2:ssub0 -> DPR
Recompute the register class of any virtual registers that are used by
fewer instructions after coalescing.
This affects code generation for the Cortex-A8, where we use NEON
instructions for f32 operations; cf. fp_convert.ll:
vadd.f32 d16, d1, d0
vcvt.s32.f32 d0, d16
The register allocator is now free to use d16 for the temporary, and
that comes first in the allocation order because it doesn't interfere
with any s-registers.
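Roughly, the recomputation works like this minimal standalone sketch (hypothetical simplified types and helpers, not the real MachineRegisterInfo interface): the register's class is tightened only by the constraints that remain after coalescing, so it can inflate back to the larger class once the constraining copy is gone.

#include <algorithm>
#include <string>
#include <vector>

// Register classes ordered from most to least constrained; a smaller value
// means a more constrained (smaller) class.
enum class RegClass { GR32_ABCD = 0, GR32 = 1 };

struct UseSite {
  std::string Opcode;
  RegClass Required;   // most general class this use can accept
};

// The recomputed class is the most constrained class still demanded by any
// remaining use; if the constraining use was coalesced away, the register
// inflates back to the larger class.
RegClass recomputeClass(const std::vector<UseSite> &Uses) {
  RegClass Result = RegClass::GR32;          // start from the largest class
  for (const UseSite &U : Uses)
    Result = std::min(Result, U.Required);   // tighten only where required
  return Result;
}

int main() {
  // Before coalescing, a sub_8bit_hi access demanded GR32_ABCD.  After the
  // copy is removed, the only remaining use accepts any GR32 register.
  std::vector<UseSite> RemainingUses = {{"ADD32rr", RegClass::GR32}};
  return recomputeClass(RemainingUses) == RegClass::GR32 ? 0 : 1;
}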
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137133 91177308-0d34-0410-b5e6-96231b3b80d8
This function doesn't have anything to do with spill weights, and MRI
already has functions for manipulating the register class of a virtual
register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137123 91177308-0d34-0410-b5e6-96231b3b80d8
These methods are target-independent since they simply scan the
memory operands. They can live in TargetInstrInfoImpl.
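For illustration, here is a standalone sketch of why such a check is target-independent (the types below are made-up stand-ins, not the TargetInstrInfo API): it only walks an instruction's memory operands looking for a stack-slot reference and never inspects target opcodes.

#include <optional>
#include <vector>

struct MemOperand {
  bool IsFrameIndex;  // true if this operand references a stack slot
  int FrameIndex;     // which slot, when IsFrameIndex is true
  bool IsLoad;        // load vs. store access
};

struct Instruction {
  std::vector<MemOperand> MemOperands;
};

// Returns the stack slot loaded by MI, if any memory operand says so.
std::optional<int> loadedStackSlot(const Instruction &MI) {
  for (const MemOperand &MO : MI.MemOperands)
    if (MO.IsLoad && MO.IsFrameIndex)
      return MO.FrameIndex;
  return std::nullopt;
}

int main() {
  Instruction Reload{{{true, 3, true}}};   // a reload from stack slot #3
  return loadedStackSlot(Reload).value_or(-1) == 3 ? 0 : 1;
}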
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137063 91177308-0d34-0410-b5e6-96231b3b80d8
All new local ranges are marked as RS_New now, so there is no need to
attempt splitting of RS_Spill ranges any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137002 91177308-0d34-0410-b5e6-96231b3b80d8
The local ranges created get to stay in the RS_New stage, just like for
local and region splitting.
This gives tryLocalSplit a bit more freedom the first time it sees one
of these new local ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137001 91177308-0d34-0410-b5e6-96231b3b80d8
These functions are no longer used, and they are easily replaced with a
loop calling shouldSplitSingleBlock and splitSingleBlock.
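Roughly, the replacement loop looks like this sketch (BlockInfo and the two functions are stand-ins with made-up signatures, not the real SplitKit interfaces):

#include <vector>

struct BlockInfo {
  int Number;  // basic block number
  int Uses;    // uses of the register inside the block
};

bool shouldSplitSingleBlock(const BlockInfo &BI) { return BI.Uses > 1; }
void splitSingleBlock(const BlockInfo &BI) { /* insert COPYs around BI */ }

// One pass over the use blocks replaces the old helper functions.
void splitPerBlock(const std::vector<BlockInfo> &UseBlocks) {
  for (const BlockInfo &BI : UseBlocks)
    if (shouldSplitSingleBlock(BI))
      splitSingleBlock(BI);
}

int main() {
  splitPerBlock({{0, 3}, {1, 1}});
  return 0;
}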
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136993 91177308-0d34-0410-b5e6-96231b3b80d8
Drop the use of SplitAnalysis::getMultiUseBlocks, there is no need to go
through a SmallPtrSet any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136992 91177308-0d34-0410-b5e6-96231b3b80d8
Normally, we don't create a live range for a single instruction in a
basic block; the spiller does that anyway. However, when splitting a
live range that belongs to a proper register sub-class, inserting these
extra COPY instructions completely removes the constraints from the
remainder interval, and it may be allocated from the larger super-class.
The spiller will mop up these small live ranges if we end up spilling
anyway. It calls them snippets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136989 91177308-0d34-0410-b5e6-96231b3b80d8
Some instructions require restricted register classes, but most of the
time that doesn't affect register allocation. For example, some
instructions don't work with the stack pointer, but that is a reserved
register anyway.
Sometimes it matters: GR32_ABCD only has 4 allocatable registers. For
such a proper sub-class, the register allocator should try to enable
register class inflation since that makes more registers available for
allocation.
Make sure only legal super-classes are considered. For example, tGPR is
not a proper sub-class in Thumb mode, but in ARM mode it is.
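As a rough standalone illustration (made-up types and illustrative register counts, not the real TargetRegisterInfo interface), the inflation decision boils down to: only move to a legal super-class that offers strictly more allocatable registers.

#include <string>
#include <vector>

struct RegClassInfo {
  std::string Name;
  unsigned NumAllocatable;   // registers the allocator may actually use
  bool LegalOnSubtarget;     // e.g. differs between ARM and Thumb mode
};

// Pick the largest legal super-class; stick with the current class when no
// legal super-class adds registers (then inflation would not help).
const RegClassInfo &inflate(const RegClassInfo &Current,
                            const std::vector<RegClassInfo> &SuperClasses) {
  const RegClassInfo *Best = &Current;
  for (const RegClassInfo &SC : SuperClasses)
    if (SC.LegalOnSubtarget && SC.NumAllocatable > Best->NumAllocatable)
      Best = &SC;
  return *Best;
}

int main() {
  RegClassInfo ABCD{"GR32_ABCD", 4, true};
  std::vector<RegClassInfo> Supers = {{"GR32", 8, true}};
  return inflate(ABCD, Supers).Name == "GR32" ? 0 : 1;
}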
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136981 91177308-0d34-0410-b5e6-96231b3b80d8
The old code would look at kills and defs in one pass over the
instruction operands, causing problems with this code:
%R0<def>, %CPSR<def,dead> = tLSLri %R5<kill>, 2, pred:14, pred:%noreg
%R0<def>, %CPSR<def,dead> = tADDrr %R4<kill>, %R0<kill>, pred:14, pred:%noreg
The last instruction kills and redefines %R0, so it is still live after
the instruction.
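A standalone sketch of the two-pass order this implies (simplified types, not the real RegisterScavenger): kills are applied before defs, so a register that is both killed and redefined stays live after the instruction.

#include <set>
#include <vector>

struct Operand {
  unsigned Reg;
  bool IsDef;
  bool IsKill;
};

using LiveSet = std::set<unsigned>;

void stepForward(const std::vector<Operand> &Operands, LiveSet &Live) {
  // Pass 1: remove registers whose last use (kill) is on this instruction.
  for (const Operand &Op : Operands)
    if (!Op.IsDef && Op.IsKill)
      Live.erase(Op.Reg);
  // Pass 2: add every defined register, re-adding any that pass 1 removed.
  for (const Operand &Op : Operands)
    if (Op.IsDef)
      Live.insert(Op.Reg);
}

int main() {
  // %R0<def> = tADDrr %R4<kill>, %R0<kill>: R0 is killed *and* redefined.
  LiveSet Live = {0 /*R0*/, 4 /*R4*/};
  stepForward({{0, true, false}, {4, false, true}, {0, false, true}}, Live);
  return (Live.count(0) == 1 && Live.count(4) == 0) ? 0 : 1;
}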
This caused a register scavenger crash when compiling 483.xalancbmk for
armv6. I am not including a test case because it requires too much bad
luck to expose this old bug.
First you need to convince the register allocator to use %R0 twice on
the tADDrr instruction, then you have to convince BranchFolding to do
something that causes it to run the register scavenger on the bad block.
<rdar://problem/9898200>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136973 91177308-0d34-0410-b5e6-96231b3b80d8
inlined variable, based on the discussion in PR10542.
This explodes the runtime of several passes down the pipeline due to
a large number of "copies" remaining live across a large function. This
only shows up with both debug and opt, but when it does it creates
a many-minute compile when self-hosting LLVM+Clang. There are several
other cases that show these types of regressions.
All of this is tracked in PR10542, and progress is being made on fixing
the issue. Once it's addressed, this can be re-instated, but until then this
restores the performance for self-hosting and other opt+debug builds.
Devang, let me know if this causes any trouble, or impedes fixing it in
any way, and thanks for working on this!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136953 91177308-0d34-0410-b5e6-96231b3b80d8
It is possible to have multiple DBG_VALUEs for the same variable:
32L TEST32rr %vreg0<kill>, %vreg0, %EFLAGS<imp-def>; GR32:%vreg0
DBG_VALUE 2, 0, !"i"
DBG_VALUE %noreg, %0, !"i"
When that happens, keep the last one instead of the first.
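As a minimal standalone sketch of the keep-the-last rule (a made-up map, not the LiveDebugVariables interval map): later DBG_VALUEs for the same variable at the same point simply overwrite earlier ones.

#include <map>
#include <string>
#include <utility>

using SlotIndex = unsigned;
using Variable = std::string;

struct DebugLoc {
  std::string Value;   // e.g. "2" or "%noreg"
};

using LocMap = std::map<std::pair<SlotIndex, Variable>, DebugLoc>;

void addDbgValue(LocMap &Map, SlotIndex Idx, const Variable &Var,
                 DebugLoc Loc) {
  Map[{Idx, Var}] = std::move(Loc);   // last insertion wins
}

int main() {
  LocMap Map;
  addDbgValue(Map, 32, "i", {"2"});        // DBG_VALUE 2, 0, !"i"
  addDbgValue(Map, 32, "i", {"%noreg"});   // DBG_VALUE %noreg, %0, !"i"
  return Map[{32, "i"}].Value == "%noreg" ? 0 : 1;
}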
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136842 91177308-0d34-0410-b5e6-96231b3b80d8
This helps generate better code in functions with high register
pressure.
The previous version of compact region splitting caused regressions
because the regions were a bit too large. A stronger negative bias
applied in r136832 fixed this problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136836 91177308-0d34-0410-b5e6-96231b3b80d8
Apply twice the negative bias on transparent blocks when computing the
compact regions. This excludes loop backedges from the region when only
one of the loop blocks uses the register.
Previously, we would include the backedge in the region if the loop
preheader and the loop latch both used the register, but the loop header
didn't.
When both the header and latch blocks use the register, we still keep it
live on the backedge.
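Very roughly, and with an entirely made-up cost model (illustrative only, not the SpillPlacement code), the doubled bias means a transparent block's frequency counts twice against including it in the compact region:

#include <vector>

struct Block {
  double Freq;        // block frequency
  bool UsesReg;       // does the block use the virtual register?
};

// Positive score favors keeping the value live through these blocks.
double regionScore(const std::vector<Block> &Blocks) {
  double Score = 0;
  for (const Block &B : Blocks)
    Score += B.UsesReg ? B.Freq       // uses pull the block into the region
                       : -2 * B.Freq; // transparent blocks push it out, 2x
  return Score;
}

int main() {
  // A backedge through a transparent latch is now excluded unless the
  // surrounding use blocks are frequent enough to outweigh the doubled bias.
  return regionScore({{10, true}, {10, false}}) < 0 ? 0 : 1;
}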
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136832 91177308-0d34-0410-b5e6-96231b3b80d8
With a 'FirstDef' field right there, it is very confusing that FirstUse
refers to an instruction that may be a def.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136739 91177308-0d34-0410-b5e6-96231b3b80d8
This is either an invalid SlotIndex, or valno->def for the first value
defined inside the block. PHI values are not counted as defined inside
the block.
The FirstDef field will be used when estimating the cost of spilling
around a block.
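A tiny standalone sketch of the field's semantics (std::optional standing in for an invalid SlotIndex; the types are simplified, not the real SplitAnalysis structures):

#include <optional>

using SlotIndex = unsigned;

struct BlockInfo {
  std::optional<SlotIndex> FirstUse;  // first instruction using the register
  // Empty (invalid) unless a value is defined by a real instruction inside
  // the block; PHI values do not count as defined inside the block.
  std::optional<SlotIndex> FirstDef;
};

int main() {
  BlockInfo LiveThrough;        // live-through block: no def inside
  BlockInfo Redef{16, 16};      // block that redefines the register
  return (!LiveThrough.FirstDef && Redef.FirstDef) ? 0 : 1;
}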
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136736 91177308-0d34-0410-b5e6-96231b3b80d8
The PrefBoth constraint is used for blocks that ideally want a live-in
value both on the stack and in a register. This would be used by a block
that has a use before interference forces a spill.
Secondly, add the ChangesValue flag to BlockConstraint. This tells
SpillPlacement if a live-in value on the stack can be reused as a
live-out stack value for free. If the block redefines the virtual
register, a spill would be required for that.
This extra information will be used by SpillPlacement to more accurately
calculate spill costs when a value can exist both on the stack and in a
register.
The simplest example is a basic block that reads the virtual register,
but doesn't change its value. Spilling around such a block requires a
reload, but no spill in the block.
The spiller already knows this, but the spill placer doesn't. That can
sometimes lead to suboptimal regions.
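A standalone sketch of the two additions, with made-up types modeled on the description above rather than the actual SpillPlacement interface: ChangesValue lets the placer treat an incoming stack value as still valid on exit.

enum BorderConstraint { DontCare, PrefReg, PrefSpill, PrefBoth, MustSpill };

struct BlockConstraint {
  BorderConstraint Entry;
  BorderConstraint Exit;
  bool ChangesValue;   // does the block redefine the virtual register?
};

// If the value arrives on the stack and the block only reads it, the same
// stack value is still good on exit: a reload is needed, but no new spill.
bool exitStackValueIsFree(const BlockConstraint &BC, bool EntryOnStack) {
  return EntryOnStack && !BC.ChangesValue;
}

int main() {
  BlockConstraint ReadOnlyBlock{PrefBoth, DontCare, false};
  BlockConstraint RedefBlock{PrefBoth, DontCare, true};
  return (exitStackValueIsFree(ReadOnlyBlock, true) &&
          !exitStackValueIsFree(RedefBlock, true))
             ? 0
             : 1;
}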
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136731 91177308-0d34-0410-b5e6-96231b3b80d8
The testcase looks extremely fragile, so I'm adding an assertion which should catch any cases like this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136711 91177308-0d34-0410-b5e6-96231b3b80d8