Both coalescing and register allocation already check aliases for interference,
so these extra segments are only slowing us down.
This speeds up both linear scan and the greedy register allocator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129283 91177308-0d34-0410-b5e6-96231b3b80d8
When a virtual register has a single value that is defined as a copy of a
reserved register, permit that copy to be joined. These virtual registers are
usually copies of the stack pointer:
%vreg75<def> = COPY %ESP; GR32:%vreg75
MOV32mr %vreg75, 1, %noreg, 0, %noreg, %vreg74<kill>
MOV32mi %vreg75, 1, %noreg, 8, %noreg, 0
MOV32mi %vreg75<kill>, 1, %noreg, 4, %noreg, 0
CALLpcrel32 ...
Coalescing these virtual registers early decreases register pressure.
Previously, they were coalesced by RALinScan::attemptTrivialCoalescing after
register allocation was completed.
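For illustration, the condition amounts to something like the following sketch (names and signatures are approximations of LLVM's APIs, not the actual coalescer change):

// Illustrative sketch only: join a copy when the virtual register carries a
// single value and that value comes from a COPY of a reserved physical
// register such as the stack pointer.
bool isJoinableReservedCopy(const LiveInterval &VirtReg,
                            const MachineRegisterInfo &MRI,
                            const BitVector &ReservedRegs) {
  // The interval must carry exactly one value...
  if (!VirtReg.containsOneValue())
    return false;
  // ...and that value must be defined by a COPY instruction...
  const MachineInstr *DefMI = MRI.getVRegDef(VirtReg.reg);
  if (!DefMI || !DefMI->isCopy())
    return false;
  // ...whose source is a reserved physical register (e.g. ESP above).
  unsigned SrcReg = DefMI->getOperand(1).getReg();
  return TargetRegisterInfo::isPhysicalRegister(SrcReg) && ReservedRegs.test(SrcReg);
}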
The lower register pressure causes the mcinst-lowering-cmp0.ll test case to fail
because it depends on linear scan spilling a particular register.
I am deleting 2008-08-05-SpillerBug.ll because it is counting the number of
instructions emitted, and its revision history shows the 'correct' count being
edited many times.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128845 91177308-0d34-0410-b5e6-96231b3b80d8
The real "powf()" may not be available on some hosts (while it is available on others).
For consistency, std::pow(double,double) may be called instead.
Otherwise, precision differences could cause unstable register allocation and stack coloring results.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128629 91177308-0d34-0410-b5e6-96231b3b80d8
This is not supposed to happen, but I have seen the x86 rematerializer get
confused when rematerializing partial redefs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127857 91177308-0d34-0410-b5e6-96231b3b80d8
I have convinced myself that it can only happen when a phi value dies. When it
happens, allocate new virtual registers for the components.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127827 91177308-0d34-0410-b5e6-96231b3b80d8
We need to wait until we meet a PHIDef in its defining block before resurrecting
PHIKills in the predecessors.
This should unbreak the llvm-gcc-build-x86_64-darwin10-x-mingw32-x-armeabi bot.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126905 91177308-0d34-0410-b5e6-96231b3b80d8
Simplify the spill weight calculation a bit by bypassing
getApproximateInstructionCount() and using LiveInterval::getSize() directly.
This changes the computed spill weights, but only by a constant factor in each
function. It should not affect how spill weights compare against each other, and
so it shouldn't affect code generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125530 91177308-0d34-0410-b5e6-96231b3b80d8
This is a lot easier than trying to get kill flags right during live range
splitting and rematerialization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125113 91177308-0d34-0410-b5e6-96231b3b80d8
After uses of a live range are removed, recompute the live range to only cover
the remaining uses. This is necessary after rematerializing the value before
some (but not all) uses.
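As an illustration of the intended call pattern (the helper names below are assumptions, not necessarily the interface added here):

// Hypothetical sketch: after the value has been rematerialized in front of
// RematUses, point those operands at the new register, then shrink the old
// interval so it only covers the uses that still read OldReg.
void rewriteAndRecompute(LiveIntervals &LIS, unsigned OldReg, unsigned NewReg,
                         ArrayRef<MachineOperand *> RematUses) {
  for (MachineOperand *MO : RematUses)
    MO->setReg(NewReg);                       // these uses no longer read OldReg
  LIS.shrinkToUses(&LIS.getInterval(OldReg)); // assumed shrink-to-uses helper
}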
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125058 91177308-0d34-0410-b5e6-96231b3b80d8
A live range cannot be split everywhere in a basic block. A split must go before
the first terminator, and if the variable is live into a landing pad, the split
must happen before the call that can throw.
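A rough sketch of how such a last legal split point might be located (illustrative code using generic MachineBasicBlock queries, not the actual splitter):

// Illustrative only: the split point must precede the first terminator, and
// if the value is live into a landing-pad successor, it must also precede
// the call that can throw.
MachineBasicBlock::iterator findLastSplitPoint(MachineBasicBlock &MBB,
                                               bool LiveIntoLandingPad) {
  MachineBasicBlock::iterator Limit = MBB.getFirstTerminator();
  if (!LiveIntoLandingPad)
    return Limit;
  // Walk back from the terminators to the call that may throw into the pad.
  for (MachineBasicBlock::iterator I = Limit; I != MBB.begin();) {
    --I;
    if (I->isCall())
      return I; // any split must be inserted before this instruction
  }
  return MBB.begin();
}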
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124894 91177308-0d34-0410-b5e6-96231b3b80d8
These functions no longer assert when passed 0, but simply return false instead.
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123155 91177308-0d34-0410-b5e6-96231b3b80d8
Print virtual registers numbered from 0 instead of the arbitrary
FirstVirtualRegister. The first virtual register is printed as %vreg0.
TRI::NoRegister is printed as %noreg.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123107 91177308-0d34-0410-b5e6-96231b3b80d8
Always spill the full representative register at any point where any subregister
is live.
This fixes PR8620, where the old logic got confused and did not spill anything
at all.
The fundamental problem here is that the coalescer is too aggressive about
physical register coalescing. It sometimes makes it impossible to allocate
registers without these emergency spills.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119375 91177308-0d34-0410-b5e6-96231b3b80d8
The live range of a register defined by an early clobber starts at the use slot,
not the def slot.
The exception is an early clobber tied to a use operand; it starts at the
def slot like a standard def.
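In SlotIndex terms, the rule can be sketched roughly as follows (hypothetical helper; the slot-index calls approximate current LLVM naming):

// Illustrative only: choose the slot where an early-clobber def becomes live.
SlotIndex getEarlyClobberStart(const SlotIndexes &Indexes,
                               const MachineInstr &MI, bool TiedToUse) {
  SlotIndex Idx = Indexes.getInstructionIndex(MI);
  // A plain early clobber starts at the use (early-clobber) slot so that it
  // interferes with the instruction's own uses; one tied to a use operand
  // behaves like a standard def and starts at the def slot.
  return TiedToUse ? Idx.getRegSlot() : Idx.getRegSlot(/*EC=*/true);
}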
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119305 91177308-0d34-0410-b5e6-96231b3b80d8
benchmarks hitting an assertion.
Adds LiveIntervalUnion::collectInterferingVRegs.
Fixes "late spilling" by checking for any unspillable live vregs among
all physReg aliases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118701 91177308-0d34-0410-b5e6-96231b3b80d8
perform initialization without static constructors AND without explicit initialization
by the client. For the moment, passes are required to initialize both their
(potential) dependencies and any passes they preserve. I hope to be able to relax
the latter requirement in the future.
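For example, a pass might declare its dependencies with the initialization macros roughly like this (the pass and dependencies named here are only illustrative):

// Illustrative registration for a made-up pass; each dependency it needs
// (or preserves) gets an explicit INITIALIZE_PASS_DEPENDENCY entry.
INITIALIZE_PASS_BEGIN(MyRegAllocPass, "my-regalloc", "My Register Allocator",
                      false, false)
INITIALIZE_PASS_DEPENDENCY(SlotIndexes)
INITIALIZE_PASS_DEPENDENCY(LiveIntervals)
INITIALIZE_PASS_END(MyRegAllocPass, "my-regalloc", "My Register Allocator",
                    false, false)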
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116334 91177308-0d34-0410-b5e6-96231b3b80d8
When the normalizeSpillWeights function was introduced, I forgot to remove this
normalization.
This change could affect register allocation. Hopefully for the better.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110119 91177308-0d34-0410-b5e6-96231b3b80d8
inserted in an MBB, and return an already inserted MI.
This target API change is necessary to allow foldMemoryOperand to call
storeToStackSlot and loadFromStackSlot when folding a COPY to a stack slot
reference in a target independent way.
The foldMemoryOperandImpl hook is going to change in the same way, but I'll wait
until COPY folding is actually implemented. Most targets only fold copies and
won't need to specialize this hook at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107991 91177308-0d34-0410-b5e6-96231b3b80d8
EXTRACT_SUBREG no longer appears as a machine instruction. Use COPY instead.
Add isCopy() checks in the many places that use isMoveInstr() and isExtractSubreg().
The isMoveInstr hook will be removed later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107879 91177308-0d34-0410-b5e6-96231b3b80d8
INSERT_SUBREG will now only appear in SSA machine instructions.
Fix the handling of partial redefs in ProcessImplicitDefs. This is now relevant
because partial redef COPY instructions can appear.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107726 91177308-0d34-0410-b5e6-96231b3b80d8
This code is transitional; it will soon be possible to eliminate
isExtractSubreg, isInsertSubreg, and isMoveInstr in most places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107547 91177308-0d34-0410-b5e6-96231b3b80d8
The VNInfo.kills vector was almost unused except for all the code keeping it
updated. The few places using it were easily rewritten to check for interval
ends instead.
The two new methods LiveInterval::killedAt and killedInRange are replacements.
This brings us down to 3 independent data structures tracking kills.
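For illustration, call sites can now ask the interval directly (signatures approximate):

// Illustrative: instead of searching a kills vector, query the interval for
// a live range ending at, or within, the slots of interest.
bool diesAtUse(const LiveInterval &LI, SlotIndex UseIdx) {
  return LI.killedAt(UseIdx);            // a live range ends exactly here
}
bool diesInBlock(const LiveInterval &LI, SlotIndex Start, SlotIndex End) {
  return LI.killedInRange(Start, End);   // some live range ends in [Start, End)
}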
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106905 91177308-0d34-0410-b5e6-96231b3b80d8
This method was always a bit too simplistic for the real world. It didn't really
deal with subregisters and such.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106781 91177308-0d34-0410-b5e6-96231b3b80d8