No functional change intended.
Sorry for the churn. The iterator classes are supposed to help avoid
giant commits like this one in the future. The TableGen-produced
register lists are getting quite large, and it may be necessary to
change the table representation.
This makes it possible to do so without changing all clients (again).
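As a rough illustration of the decoupling (hypothetical names, not the
actual TableGen output or MC iterator classes), clients walk a register
list through an iterator so the table encoding can change underneath:

    #include <cstdint>

    // A zero-terminated register list whose encoding may change (for
    // example to a differential encoding) without touching any clients,
    // as long as the iterator is updated to match.
    class RegListIterator {
      const uint16_t *Pos; // current table entry; 0 terminates the list
    public:
      explicit RegListIterator(const uint16_t *Table) : Pos(Table) {}
      bool isValid() const { return *Pos != 0; }
      unsigned operator*() const { return *Pos; }
      RegListIterator &operator++() { ++Pos; return *this; }
    };

    // Clients only depend on the iterator interface:
    //   for (RegListIterator I(Table); I.isValid(); ++I)
    //     use(*I);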
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157854 91177308-0d34-0410-b5e6-96231b3b80d8
It helps compile exotic inline asm. In the test case, normal GR32
virtual registers use up eax-edx so the final GR32_ABCD live range has
no registers left. Since all the live ranges were tiny, we had no way of
prioritizing the smaller register class.
This patch allows tiny unspillable live ranges to be evicted by tiny
unspillable live ranges from a smaller register class.
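A self-contained sketch of the kind of rule this adds (illustrative
names, not the actual RAGreedy eviction code):

    struct TinyRange {
      bool Unspillable;      // too small to be split or spilled further
      unsigned RegClassSize; // number of allocatable registers in its class
    };

    // Allow one tiny, unspillable range to evict another only when the
    // evictor comes from a strictly smaller (more constrained) register
    // class, e.g. a GR32_ABCD range evicting a plain GR32 range.
    static bool canEvictTiny(const TinyRange &Evictor,
                             const TinyRange &Evictee) {
      if (!Evictor.Unspillable || !Evictee.Unspillable)
        return false; // this rule only covers the tiny/unspillable case
      return Evictor.RegClassSize < Evictee.RegClassSize;
    }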
<rdar://problem/11542429>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157715 91177308-0d34-0410-b5e6-96231b3b80d8
Live ranges with a constrained register class may benefit from splitting
around individual uses. It allows the remaining live range to use a
larger register class where it may be allocated. This is like spilling
to a
different register class.
This is only attempted on constrained register classes.
<rdar://problem/11438902>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157354 91177308-0d34-0410-b5e6-96231b3b80d8
This is just the fallback tie-breaker ordering; the main allocation
order is still descending size.
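The tie-break key itself is not spelled out here, so the one below
(virtual register number) is purely hypothetical; the sketch only shows
the two-level ordering:

    #include <algorithm>
    #include <vector>

    struct QueueEntry {
      unsigned Size;    // primary key: largest live ranges come first
      unsigned VirtReg; // hypothetical fallback key for equal sizes
    };

    static void sortAllocationOrder(std::vector<QueueEntry> &Queue) {
      std::sort(Queue.begin(), Queue.end(),
                [](const QueueEntry &A, const QueueEntry &B) {
                  if (A.Size != B.Size)
                    return A.Size > B.Size;     // main order: descending size
                  return A.VirtReg < B.VirtReg; // fallback tie-breaker only
                });
    }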
Patch by Shamil Kurmangaleev!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153904 91177308-0d34-0410-b5e6-96231b3b80d8
Passes after RegAlloc should be able to rely on MRI->getNumVirtRegs() == 0.
This makes sharing code for pre/postRA passes more robust.
Now, to check if a pass is running before the RA pipeline begins, use MRI->isSSA().
To check if a pass is running after the RA pipeline ends, use !MRI->getNumVirtRegs().
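A minimal sketch of those two checks in a shared pre/post-RA pass,
assuming the usual LLVM CodeGen headers (illustrative, not tied to any
specific pass):

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"

    static bool runsBeforeRegAlloc(const llvm::MachineFunction &MF) {
      return MF.getRegInfo().isSSA();               // RA pipeline not started
    }

    static bool runsAfterRegAlloc(const llvm::MachineFunction &MF) {
      return MF.getRegInfo().getNumVirtRegs() == 0; // all vregs are gone
    }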
PEI resets virtual regs when it's done scavenging.
PTX will either have to provide its own PEI pass or assign physregs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151032 91177308-0d34-0410-b5e6-96231b3b80d8
Perform all comparisons at instruction granularity, and make sure
register masks on uses count in both gaps.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150530 91177308-0d34-0410-b5e6-96231b3b80d8
This makes global live range splitting behave identically with and
without register mask operands.
This is not necessarily the best way of using register masks for live
range splitting. It would be more efficient to first split global live
ranges around calls (i.e., register masks), and reserve the fine-grained
per-physreg interference guidance for global live ranges that do not
cross calls.
For now the goal is to produce identical assembly when enabling register
masks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150259 91177308-0d34-0410-b5e6-96231b3b80d8
Creates a configurable regalloc pipeline.
Ensure specific llc options do what they say and nothing more:
-regalloc=... has no effect other than selecting the allocator pass
itself. This patch introduces a new umbrella flag, "-optimize-regalloc",
to enable/disable the optimizing regalloc "superpass". This allows, for
example, testing coalescing and scheduling under -O0 or vice-versa.
When a CodeGen pass requires the MachineFunction to have a particular
property, we need to explicitly define that property so it can be
queried directly rather than by naming a specific Pass. For example, to
check for SSA, use MRI->isSSA, not addRequired<PHIElimination>.
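For illustration (hypothetical helper, assuming the LLVM CodeGen
headers), the property-query style looks like this:

    #include <cassert>
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"

    // Rather than inferring machine-function state from which passes have
    // run (e.g. via AnalysisUsage::addRequired<SomePass>()), query the
    // property the pass actually depends on:
    static void checkSSAInput(const llvm::MachineFunction &MF) {
      assert(MF.getRegInfo().isSSA() && "expected machine code in SSA form");
      (void)MF; // keep release (NDEBUG) builds warning-free
    }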
CodeGen transformation passes are never "required" as an analysis.
ProcessImplicitDefs does not require LiveVariables.
We have a plan to massively simplify some of the early passes within the regalloc superpass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150226 91177308-0d34-0410-b5e6-96231b3b80d8
This only adds the interference checks required for correctness.
We still need to take advantage of register masks for the
interference-driven live range splitting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150191 91177308-0d34-0410-b5e6-96231b3b80d8
The LRE_DidCloneVirtReg callback may be called with virtual registers
that RAGreedy doesn't even know about yet. In that case, there are no
data structures to update.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139702 91177308-0d34-0410-b5e6-96231b3b80d8
SplitKit always computes a complement live range to cover the places
where the original live range was live, but no explicit region has been
allocated.
Currently, the complement live range is created to be as small as
possible - it never overlaps any of the regions. This minimizes
register pressure, but if the complement is going to be spilled anyway,
that is not very important. The spiller will eliminate redundant
spills, and hoist others by making the spill slot live range overlap
some of the regions created by splitting. Stack slots are cheap.
This patch adds the interface to enable spill modes in SplitKit. In
spill mode, SplitKit will assume that the complement is going to spill,
so it will allow it to overlap regions in order to avoid back-copies.
By doing some of the spiller's work early, the complement live range
becomes simpler. In some cases, it can become much simpler because no
extra PHI-defs are required. This will speed up both splitting and
spilling.
This is only the interface to enable spill modes, no implementation yet.
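A hypothetical sketch of the shape such an interface can take
(illustrative names and modes, not necessarily the real SplitKit
declarations):

    // The caller picks how aggressively the complement live range may
    // overlap the regions created by splitting.
    enum ComplementSpillMode {
      SM_Partition, // default: the complement never overlaps any region
      SM_Spill      // assume the complement will spill; allow it to
                    // overlap regions in order to avoid back-copies
    };

    class SplitEditorSketch {
      ComplementSpillMode SpillMode;
    public:
      SplitEditorSketch() : SpillMode(SM_Partition) {}
      // Select the spill mode before any splitting is performed.
      void reset(ComplementSpillMode SM = SM_Partition) { SpillMode = SM; }
      ComplementSpillMode getSpillMode() const { return SpillMode; }
    };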
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139500 91177308-0d34-0410-b5e6-96231b3b80d8
All new local ranges are marked as RS_New now, so there is no need to
attempt splitting of RS_Spill ranges any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137002 91177308-0d34-0410-b5e6-96231b3b80d8
The local ranges created get to stay in the RS_New stage, just like for
local and region splitting.
This gives tryLocalSplit a bit more freedom the first time it sees one
of these new local ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137001 91177308-0d34-0410-b5e6-96231b3b80d8
Drop the use of SplitAnalysis::getMultiUseBlocks; there is no need to go
through a SmallPtrSet any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136992 91177308-0d34-0410-b5e6-96231b3b80d8
Normally, we don't create a live range for a single instruction in a
basic block; the spiller does that anyway. However, when splitting a
live range that belongs to a proper register sub-class, inserting these
extra COPY instructions completely removes the constraints from the
remainder interval, which may then be allocated from the larger
super-class.
The spiller will mop up these small live ranges if we end up spilling
anyway. It calls them snippets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136989 91177308-0d34-0410-b5e6-96231b3b80d8
This helps generate better code in functions with high register
pressure.
The previous version of compact region splitting caused regressions
because the regions were a bit too large. A stronger negative bias
applied in r136832 fixed this problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136836 91177308-0d34-0410-b5e6-96231b3b80d8
Apply twice the negative bias on transparent blocks when computing the
compact regions. This excludes loop backedges from the region when only
one of the loop blocks uses the register.
Previously, we would include the backedge in the region if the loop
preheader and the loop latch both used the register, but the loop header
didn't.
When both the header and latch blocks use the register, we still keep it
live on the backedge.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136832 91177308-0d34-0410-b5e6-96231b3b80d8
With a 'FirstDef' field right there, it is very confusing that FirstUse
refers to an instruction that may be a def.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136739 91177308-0d34-0410-b5e6-96231b3b80d8
There are two conflicting strategies in play:
- Under high register pressure, we want to assign large live ranges
first. Smaller live ranges are easier to place afterwards.
- Live range splitting is guided by interference, so splitting should be
deferred until interference is as realistic as possible.
With the recent changes to the live range stages, and with compact
regions enabled, it is less traumatic to split a live range too early.
If some of the split products were too big, they can often be split
again.
By reversing the RS_Split order, we get this queue order:
1. Normal live ranges, large to small.
2. RS_Split live ranges, large to small.
The large-to-small order improves RAGreedy's puzzle solving skills under
high register pressure. It may cause a bit more iterated splitting, but
we handle that better now.
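A self-contained sketch of that two-tier queue order (illustrative
only; the real priority computation in RAGreedy differs in detail):

    #include <cstdint>

    enum Stage { RS_Normal, RS_Split }; // simplified stage set

    // Higher value pops first from a max-heap: normal ranges sort before
    // RS_Split ranges, and within each group larger ranges come first.
    static uint64_t queuePriority(Stage S, uint32_t Size) {
      uint64_t Group = (S == RS_Split) ? 0 : 1;
      return (Group << 32) | Size;
    }

    // Keying a std::priority_queue<std::pair<uint64_t, unsigned>> on
    // queuePriority(Stage, Size) yields exactly the queue order above:
    //   1. normal live ranges, large to small
    //   2. RS_Split live ranges, large to small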
With this change, -compact-regions is mostly an improvement on SPEC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136388 91177308-0d34-0410-b5e6-96231b3b80d8
When splitting global live ranges, it is now possible to split for
multiple destination intervals at once. Previously, we only had the main
and stack intervals.
Each edge bundle is assigned to a split candidate, and splitAroundRegion
will insert copies between the candidate intervals and the stack
interval as needed.
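Conceptually (hypothetical names; the real implementation works on
LLVM's edge bundle and live interval data structures), the assignment
can be kept as a per-bundle table:

    #include <vector>

    // Index of the split candidate that owns each edge bundle; NoCand
    // marks bundles that stay in the stack interval.
    static const unsigned NoCand = ~0u;

    struct BundleAssignment {
      std::vector<unsigned> BundleToCand; // indexed by edge bundle number

      explicit BundleAssignment(unsigned NumBundles)
          : BundleToCand(NumBundles, NoCand) {}

      void assign(unsigned Bundle, unsigned Cand) {
        BundleToCand[Bundle] = Cand;
      }

      // A copy is needed wherever a block's incoming and outgoing bundles
      // map to different candidates (or one of them maps to the stack).
      bool needsCopy(unsigned InBundle, unsigned OutBundle) const {
        return BundleToCand[InBundle] != BundleToCand[OutBundle];
      }
    };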
The multi-way splitting is used to split around compact regions when
enabled with -compact-regions. The best candidate register still gets
all the bundles it wants, but everything outside the main interval is
first split around compact regions before we create single-block
intervals.
Compact region splitting still causes some regressions, so it is not
enabled by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136186 91177308-0d34-0410-b5e6-96231b3b80d8
When dead code elimination deletes a PHI value, the virtual register may
split into multiple connected components. In that case, revert each
component to the RS_Assign stage.
The new components are guaranteed to be smaller (the original value
numbers are distributed among the components), so this will always be
making progress. The components are now allowed to evict other live
ranges or be split again.
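An illustrative sketch of that handling (hypothetical types; the real
code operates on LiveIntervals and the allocator's priority queue):

    #include <deque>
    #include <vector>

    enum LiveRangeStage { RS_Assign, RS_Split, RS_Spill }; // simplified

    struct Component {
      unsigned VirtReg; // one connected component of the original register
      LiveRangeStage Stage;
    };

    // Each component is strictly smaller than the register it came from,
    // so restarting them at RS_Assign still makes progress: they may be
    // assigned, evict other live ranges, or be split again.
    static void requeueComponents(std::vector<Component> &Components,
                                  std::deque<Component> &Queue) {
      for (Component &C : Components) {
        C.Stage = RS_Assign;
        Queue.push_back(C);
      }
    }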
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136034 91177308-0d34-0410-b5e6-96231b3b80d8
This mechanism already exists, but the RS_Split2 stage makes it clearer.
When live range splitting creates ranges that may not be making
progress, they are marked RS_Split2 instead of RS_New. These ranges may
be split again, but only in a way that can be proven to make progress.
For local ranges, that means they must be split into ranges used by
strictly fewer instructions.
For global ranges, region splitting is bypassed and the RS_Split2
ranges go straight to per-block splitting.
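For local ranges, the progress check amounts to something like the
following self-contained sketch (hypothetical names, not the actual
splitting code):

    enum RangeStage { RS_New, RS_Split2 }; // simplified for illustration

    // An RS_Split2 range may only be split again when every piece it
    // produces is used by strictly fewer instructions than the range
    // being split; otherwise the allocator could loop without progress.
    static bool isProgressSplit(RangeStage Stage, unsigned ParentInstrs,
                                unsigned LargestPieceInstrs) {
      if (Stage != RS_Split2)
        return true; // new ranges are not restricted this way
      return LargestPieceInstrs < ParentInstrs;
    }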
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135912 91177308-0d34-0410-b5e6-96231b3b80d8
The stage is used to control where a live range is going, not where it
is coming from. Live ranges created by splitting will usually be marked
RS_New, but some are marked RS_Spill to avoid wasting time trying to
split them again.
The old RS_Global and RS_Local stages are merged - they are really the
same thing for local and global live ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135911 91177308-0d34-0410-b5e6-96231b3b80d8
This method computes the edge bundles that should be live when splitting
around a compact region. This is independent of interference.
The function returns false if the live range was already a compact
region, or the compact region doesn't have any live bundles - it would
be the same as splitting around basic blocks.
Compact regions are computed using the normal spill placement code. We
pretend there is interference in all live-through blocks that don't use
the live range. This removes all edges from the Hopfield network used
for spill placement, so it converges instantly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135847 91177308-0d34-0410-b5e6-96231b3b80d8
A split candidate can have a null PhysReg which means that it doesn't
map to a real interference pattern. Instead, pretend that all through
blocks have interference.
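Roughly, as an illustrative fragment (hypothetical names, not the real
spill placement query):

    // With a null PhysReg there is no interference pattern to consult, so
    // every live-through block is treated as if it interferes. The region
    // then only covers blocks that actually use the live range.
    static bool throughBlockHasInterference(unsigned PhysReg, unsigned Block,
                                            const bool RealInterference[]) {
      if (PhysReg == 0)
        return true;                  // compact-region candidate: pretend yes
      return RealInterference[Block]; // normal candidate: real pattern
    }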
This makes it possible to generate compact regions where the live range
doesn't go through blocks that don't use it. The live range will still
be live between directly connected blocks with uses.
Splitting around a compact region tends to produce a live range with a
high spill weight, so it may evict a less dense live range.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135845 91177308-0d34-0410-b5e6-96231b3b80d8