use before rematerializing the load.
This allows us to produce:
addps LCPI0_1(%rip), %xmm2
Instead of:
movaps LCPI0_1(%rip), %xmm3
addps %xmm3, %xmm2
This saves a register and an instruction. The standard spiller already knows how to
do this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122133 91177308-0d34-0410-b5e6-96231b3b80d8
createMachineVerifierPass and MachineFunction::verify.
The banner is printed before the machine code dump, just like the printer pass.
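For example (a minimal sketch; the helper, pass pointer, and banner text are placeholders, and the signature is assumed from this change):

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/Pass.h"

  // Hypothetical helper: verify MF on behalf of pass P, printing the banner
  // ahead of any machine code dump, just like the printer pass does.
  void verifyWithBanner(llvm::MachineFunction &MF, llvm::Pass *P) {
    MF.verify(P, "# After inline spilling");
  }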
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122113 91177308-0d34-0410-b5e6-96231b3b80d8
The spiller should only spill. The register allocator will drive live range
splitting; it has the information it needs about register pressure and
interference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121590 91177308-0d34-0410-b5e6-96231b3b80d8
live ranges for the spill register are also defined at the use slot instead of
the normal def slot.
This fixes PR8612 for the inline spiller. A use was being allocated to the same
register as a spilled early clobber def.
This problem exists in all the spillers. A fix for the standard spiller is
forthcoming.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119182 91177308-0d34-0410-b5e6-96231b3b80d8
benchmarks hitting an assertion.
Adds LiveIntervalUnion::collectInterferingVRegs.
Fixes "late spilling" by checking for any unspillable live vregs among
all physReg aliases.
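Roughly, the check looks like this (a sketch; the helper name and query setup are assumptions, and only collectInterferingVRegs, interferingVRegs, and isSpillable are meant to reflect the in-tree interface):

  #include "llvm/CodeGen/LiveIntervalUnion.h"

  // Q is the interference query for one physical register or one of its
  // aliases. Refuse to spill the interference away if any interfering vreg
  // is itself unspillable; that is the "late spilling" case that asserted.
  bool canSpillInterferences(llvm::LiveIntervalUnion::Query &Q) {
    Q.collectInterferingVRegs();
    for (const llvm::LiveInterval *VirtReg : Q.interferingVRegs())
      if (!VirtReg->isSpillable())
        return false;
    return true;
  }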
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118701 91177308-0d34-0410-b5e6-96231b3b80d8
This way, InlineSpiller does the same amount of splitting as the standard
spiller. Splitting should really be guided by the register allocator, and
doesn't belong in the spiller at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118216 91177308-0d34-0410-b5e6-96231b3b80d8
proper SSA updating.
This doesn't cause MachineDominators to be recomputed, since we already
require MachineLoopInfo, which uses dominators as well.
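For reference, the usage declaration looks roughly like this (the pass name is hypothetical; the analysis classes are the real ones):

  #include "llvm/CodeGen/MachineDominators.h"
  #include "llvm/CodeGen/MachineFunctionPass.h"
  #include "llvm/CodeGen/MachineLoopInfo.h"

  void MySplitterPass::getAnalysisUsage(llvm::AnalysisUsage &AU) const {
    // MachineLoopInfo already depends on the dominator tree, so requiring
    // MachineDominatorTree here does not trigger a recomputation.
    AU.addRequired<llvm::MachineDominatorTree>();
    AU.addRequired<llvm::MachineLoopInfo>();
    llvm::MachineFunctionPass::getAnalysisUsage(AU);
  }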
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117598 91177308-0d34-0410-b5e6-96231b3b80d8
All registers created during splitting or spilling are assigned to the same
stack slot as the parent register.
When splitting or rematting, we may not spill at all. In that case the stack
slot is still assigned, but it will be dead.
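In VirtRegMap terms this amounts to roughly the following (a sketch; the calls are assumed from the existing VirtRegMap interface, and ParentReg/NewReg are placeholders):

  // Give NewReg the same stack slot as the register it was split or
  // rematerialized from, creating the parent's slot on demand. If nothing
  // ends up spilling, the slot is simply dead.
  int SS = VRM.getStackSlot(ParentReg);
  if (SS == llvm::VirtRegMap::NO_STACK_SLOT)
    SS = VRM.assignVirt2StackSlot(ParentReg);
  VRM.assignVirt2StackSlot(NewReg, SS);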
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116546 91177308-0d34-0410-b5e6-96231b3b80d8
splitting or spilling, and to help with rematerialization.
Use LiveRangeEdit in InlineSpiller and SplitKit. This will eventually make it
possible to share remat code between InlineSpiller and SplitKit.
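Conceptually, the class is little more than this (a skeleton, not the real layout):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/CodeGen/LiveInterval.h"

  // One edit = the interval being split or spilled plus every interval
  // created from it; that is the shared state remat needs in both clients.
  struct LiveRangeEditSketch {
    llvm::LiveInterval *Parent;                      // register being edited
    llvm::SmallVector<llvm::LiveInterval *, 8> New;  // registers created from it
  };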
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116543 91177308-0d34-0410-b5e6-96231b3b80d8
never kept after splitting.
Keeping the original interval made sense in the case where the split region
doesn't modify the register and the original is spilled. We can get the same
effect by detecting reloaded values when spilling around copies.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115695 91177308-0d34-0410-b5e6-96231b3b80d8
The earliestStart argument is entirely specific to linear scan allocation, and
can be easily calculated by RegAllocLinearScan.
Replace std::vector with SmallVector.
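The container change is the usual LLVM idiom (element type illustrative):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/CodeGen/LiveInterval.h"
  #include <vector>

  std::vector<llvm::LiveInterval *> Handled;        // before: always heap-allocates
  llvm::SmallVector<llvm::LiveInterval *, 4> Fixed; // after: small cases stay inline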
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111055 91177308-0d34-0410-b5e6-96231b3b80d8
When a live range is contained in a single block, we can split it around
instruction clusters. The current approach is very primitive, splitting before
and after the largest gap between uses.
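The heuristic is simple enough to show standalone (illustrative types; this is not the SplitKit interface):

  #include <cstddef>
  #include <utility>
  #include <vector>

  // Given the sorted positions of a range's uses within one block, find the
  // two adjacent uses with the widest gap between them; the range is then
  // split before and after that gap.
  std::pair<std::size_t, std::size_t>
  largestUseGap(const std::vector<unsigned> &UseSlots) {
    if (UseSlots.size() < 2)
      return std::make_pair(0, 0);         // nothing to split around
    std::size_t Best = 0;
    unsigned BestDist = 0;
    for (std::size_t I = 0; I + 1 < UseSlots.size(); ++I) {
      unsigned Dist = UseSlots[I + 1] - UseSlots[I];
      if (Dist > BestDist) {
        BestDist = Dist;
        Best = I;
      }
    }
    return std::make_pair(Best, Best + 1); // split between these two uses
  }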
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111043 91177308-0d34-0410-b5e6-96231b3b80d8
Before spilling a live range, we split it into a separate range for each basic
block where it is used. That way we get at most one reload per basic block if
the new, smaller ranges can be allocated to registers.
This type of splitting is already present in the standard spiller.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110934 91177308-0d34-0410-b5e6-96231b3b80d8
The live interval may be used for a spill slot as well, and that spill slot
could be shared by split registers. We cannot shrink it, even if we know the
current register won't need the spill slot in that range.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110721 91177308-0d34-0410-b5e6-96231b3b80d8
necessary.
Sometimes, live range splitting doesn't shrink the current interval, but simply
changes some instructions to use a new interval. That makes the original more
suitable for spilling. In this case, we don't need to duplicate the original.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110481 91177308-0d34-0410-b5e6-96231b3b80d8
We are now at a point where we can split around simple single-entry, single-exit
loops, although still with some bugs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110257 91177308-0d34-0410-b5e6-96231b3b80d8
The spillers can pluck the analyses they need from the pass reference.
Switch some never-null pointers to references.
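In practice that looks roughly like this (the spiller type and members are hypothetical; getAnalysis<> is the standard pass mechanism):

  // The caller hands over the pass itself; the spiller pulls the analyses
  // it needs and stores them as references rather than nullable pointers.
  InlineSpillerSketch::InlineSpillerSketch(llvm::MachineFunctionPass &Pass,
                                           llvm::MachineFunction &MF,
                                           llvm::VirtRegMap &VRM)
      : LIS(Pass.getAnalysis<llvm::LiveIntervals>()),
        Loops(Pass.getAnalysis<llvm::MachineLoopInfo>()),
        MF(MF), VRM(VRM) {}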
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108969 91177308-0d34-0410-b5e6-96231b3b80d8
This is a work in progress. So far we have some basic loop analysis to help
determine where it is useful to split a live range around a loop.
The actual loop splitting code from Splitter.cpp is also going to move in here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108842 91177308-0d34-0410-b5e6-96231b3b80d8
inserted in an MBB, and return an already inserted MI.
This target API change is necessary to allow foldMemoryOperand to call
storeToStackSlot and loadFromStackSlot when folding a COPY to a stack slot
reference in a target independent way.
The foldMemoryOperandImpl hook is going to change in the same way, but I'll wait
until COPY folding is actually implemented. Most targets only fold copies and
won't need to specialize this hook at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107991 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to recognize the common case where all uses could be
rematerialized, and no stack slot allocation is necessary.
If some values could be fully rematerialized, remove them from the live range
before allocating a stack slot for the rest.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107492 91177308-0d34-0410-b5e6-96231b3b80d8
LocalRewriter::runOnMachineFunction uses this information to mark dead spill
slots.
This means that InlineSpiller now also works for functions that spill.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107302 91177308-0d34-0410-b5e6-96231b3b80d8
InlineSpiller inserts loads and spills immediately instead of deferring to
VirtRegMap. This is possible now because SlotIndexes allows instructions to be
inserted and renumbered.
This is a work in progress and is mostly a copy of TrivialSpiller so far. It
works very well for functions that don't require spilling.
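A minimal sketch of the mechanism (placeholders for the spiller's state; the call spellings are assumed):

  // UseIt points at the instruction that needs the value. Insert the reload
  // directly in front of it, then register the new instruction with the slot
  // index maps so it gets its own index; nothing is deferred to VirtRegMap.
  TII.loadRegFromStackSlot(*UseIt->getParent(), UseIt, NewVReg, StackSlot,
                           RC, &TRI);
  llvm::MachineInstr &Reload = *std::prev(UseIt);  // the freshly inserted load
  LIS.InsertMachineInstrInMaps(Reload);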
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107227 91177308-0d34-0410-b5e6-96231b3b80d8