When tryEvict() is looking for a cheaper register in the allocation
order, skip the tail of too expensive registers when possible.
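Roughly, the scan is now shaped like this (a simplified stand-in with made-up
names, not the actual tryEvict() code; it assumes the allocation order has
already been arranged by ascending register cost):

  #include <cstdint>
  #include <functional>
  #include <vector>

  // Illustrative stand-in type, not the real RAGreedy data structures.
  struct Candidate {
    unsigned PhysReg;
    uint8_t Cost; // cost of using PhysReg, e.g. a CSR needing a prologue spill
  };

  // Scan an allocation order assumed to be sorted by ascending register cost.
  // We want a register strictly cheaper than CostLimit; once one candidate
  // reaches the limit, every later entry is at least as expensive, so the
  // whole tail can be skipped.
  unsigned findCheaperReg(const std::vector<Candidate> &Order, uint8_t CostLimit,
                          const std::function<bool(unsigned)> &canEvictFor) {
    for (const Candidate &C : Order) {
      if (C.Cost >= CostLimit)
        break; // skip the tail of too-expensive registers
      if (canEvictFor(C.PhysReg))
        return C.PhysReg;
    }
    return 0; // no suitable cheaper register found
  }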
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172281 91177308-0d34-0410-b5e6-96231b3b80d8
Virtual registers with a known preferred register are prioritized by
RAGreedy. This function makes the condition explicit without depending
on getRegAllocPref().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169179 91177308-0d34-0410-b5e6-96231b3b80d8
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.
Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169131 91177308-0d34-0410-b5e6-96231b3b80d8
No functional change, just moved header files.
Targets can inject custom passes between register allocation and
rewriting. This makes it possible to tweak the register allocation
before rewriting, using the full global interference checking available
from LiveRegMatrix.
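For illustration, a target hook with roughly this shape could be used. This
assumes the TargetPassConfig interface (declared in llvm/CodeGen/Passes.h at
the time); the hook name addPreRewrite() and createMyRegTweakPass() are
placeholders rather than names taken from this commit:

  #include "llvm/CodeGen/Passes.h"

  namespace llvm {
  class FunctionPass;
  FunctionPass *createMyRegTweakPass(); // hypothetical target-specific pass
  }

  namespace {
  class MyTargetPassConfig : public llvm::TargetPassConfig {
  public:
    MyTargetPassConfig(llvm::TargetMachine *TM, llvm::PassManagerBase &PM)
        : TargetPassConfig(TM, PM) {}

    // Runs after the allocator has filled in VirtRegMap, but before the
    // rewriter, so LiveRegMatrix-based interference checks still work.
    bool addPreRewrite() override {
      addPass(llvm::createMyRegTweakPass());
      return true;
    }
  };
  } // end anonymous namespace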
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168806 91177308-0d34-0410-b5e6-96231b3b80d8
because LiveStackAnalysis was not preserved by VirtRegRewriter. This caused
a big stack-usage regression in some cases.
rdar://12340383
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164408 91177308-0d34-0410-b5e6-96231b3b80d8
Based on CR feedback from r162301 and Craig Topper's refactoring in r162347
here are a few other places that could use the same API (& in one instance drop
a Function.h dependency).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162367 91177308-0d34-0410-b5e6-96231b3b80d8
Even looking at the revision history I couldn't quite piece together why this
cast was ever written in the first place, but I assume it was because of some
change in the inheritance; perhaps this function was reimplemented in a
derived type & this caller was meant to get the base version (& it wasn't
virtual)?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162301 91177308-0d34-0410-b5e6-96231b3b80d8
Stop depending on the LiveIntervalUnions in RegAllocBase, they are about
to be removed.
The changes are mostly replacing register alias iterators with regunit
iterators, and querying LiveRegMatrix instead of RegAllocBase.
InterferenceCache is converted to work with per-regunit
LiveIntervalUnions, and it checks fixed regunit interference separately,
using the fixed live intervals provided by LiveIntervalAnalysis.
The local splitting helper calcGapWeights() now also considers the fixed
regunit interference, which is kept on the side.
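The query pattern now looks roughly like this (a sketch assuming the
LiveRegMatrix, MCRegUnitIterator and LiveIntervals interfaces; exact header
paths and spellings may differ between versions):

  #include "llvm/CodeGen/LiveIntervalAnalysis.h"
  #include "llvm/CodeGen/LiveRegMatrix.h"
  #include "llvm/Target/TargetRegisterInfo.h"

  using namespace llvm;

  // Instead of walking alias registers and the per-physreg unions in
  // RegAllocBase, walk register units and ask LiveRegMatrix, which also
  // knows about fixed (reserved / physreg) live ranges.
  static bool physRegIsUsable(LiveRegMatrix &Matrix, LiveIntervals &LIS,
                              const TargetRegisterInfo &TRI,
                              LiveInterval &VirtReg, unsigned PhysReg) {
    // One call covers both virtual-register and fixed regunit interference.
    if (Matrix.checkInterference(VirtReg, PhysReg) != LiveRegMatrix::IK_Free)
      return false;

    // Per-regunit detail is still available when a heuristic needs it,
    // e.g. to feed InterferenceCache or calcGapWeights().
    for (MCRegUnitIterator Units(PhysReg, &TRI); Units.isValid(); ++Units) {
      (void)LIS.getRegUnit(*Units); // fixed live range for this regunit
    }
    return true;
  }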
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158867 91177308-0d34-0410-b5e6-96231b3b80d8
This deduplicates some code from the optimizing register allocators, and
it means that it is now possible to change the register allocators'
solutions simply by editing the VirtRegMap between the register
allocator pass and the rewriter.
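For example, a pass inserted between the two could tweak one assignment
roughly like this (assuming the VirtRegMap interface; the helper itself is
illustrative):

  #include "llvm/CodeGen/VirtRegMap.h"

  using namespace llvm;

  // Change the allocator's decision for one virtual register before the
  // rewriter runs; the rewriter only looks at VirtRegMap, so this is all
  // that is needed.
  static void reassign(VirtRegMap &VRM, unsigned VirtReg, unsigned NewPhysReg) {
    if (VRM.hasPhys(VirtReg))
      VRM.clearVirt(VirtReg);                 // drop the old assignment
    VRM.assignVirt2Phys(VirtReg, NewPhysReg); // the rewriter will use this
  }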
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158249 91177308-0d34-0410-b5e6-96231b3b80d8
OK, not really. We don't want to reintroduce the old rewriter hacks.
This patch extracts virtual register rewriting as a separate pass that
runs after the register allocator. This is possible now that
CodeGen/Passes.cpp can configure the full optimizing register allocator
pipeline.
The rewriter pass uses register assignments in VirtRegMap to rewrite
virtual registers to physical registers, and it inserts kill flags based
on live intervals.
These finalization steps are the same for the optimizing register
allocators: RABasic, RAGreedy, and PBQP.
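Conceptually, the rewrite boils down to this (a much-simplified sketch; the
real pass also handles subregister indices, kill/dead flags derived from the
live intervals, and identity-copy cleanup, and API spellings vary between
versions):

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/VirtRegMap.h"
  #include "llvm/Target/TargetRegisterInfo.h"

  using namespace llvm;

  static void rewriteVirtRegs(MachineFunction &MF, VirtRegMap &VRM,
                              const TargetRegisterInfo &TRI) {
    for (MachineBasicBlock &MBB : MF)
      for (MachineInstr &MI : MBB)
        for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
          MachineOperand &MO = MI.getOperand(i);
          if (!MO.isReg() || !TargetRegisterInfo::isVirtualRegister(MO.getReg()))
            continue;
          // Substitute the assigned physical register, folding any subreg
          // index into the choice of physical register.
          MO.substPhysReg(VRM.getPhys(MO.getReg()), TRI);
        }
  }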
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158244 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM is now -Wunused-private-field clean except for
- lib/MC/MCDisassembler/Disassembler.h. Not sure why it keeps all those inaccessible fields.
- gtest.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158096 91177308-0d34-0410-b5e6-96231b3b80d8
No functional change intended.
Sorry for the churn. The iterator classes are supposed to help avoid
giant commits like this one in the future. The TableGen-produced
register lists are getting quite large, and it may be necessary to
change the table representation.
This makes it possible to do so without changing all clients (again).
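A typical client now looks roughly like this (assuming the MCRegisterInfo
iterator classes; the constructor arguments are from memory and may differ
slightly between versions):

  #include "llvm/MC/MCRegisterInfo.h"
  #include <vector>

  using namespace llvm;

  // Walk a register and everything it overlaps through the iterator class
  // instead of reading the TableGen-produced tables directly, so the table
  // representation can change without touching every client again.
  static bool anyAliasIn(const MCRegisterInfo &MRI, unsigned Reg,
                         const std::vector<bool> &RegSet) {
    for (MCRegAliasIterator AI(Reg, &MRI, /*IncludeSelf=*/true); AI.isValid();
         ++AI)
      if (RegSet[*AI])
        return true;
    return false;
  }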
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157854 91177308-0d34-0410-b5e6-96231b3b80d8
It helps compile exotic inline asm. In the test case, normal GR32
virtual registers use up eax-edx so the final GR32_ABCD live range has
no registers left. Since all the live ranges were tiny, we had no way of
prioritizing the smaller register class.
This patch allows tiny unspillable live ranges to be evicted by tiny
unspillable live ranges from a smaller register class.
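The relaxed rule, as a simplified stand-in (illustrative fields, not the
actual RAGreedy eviction code):

  // "Unspillable" means the range is too small to be split or spilled any
  // further (e.g. a tiny inline asm operand range).
  struct LiveRangeInfo {
    bool Unspillable;
    unsigned RegClassSize; // number of allocatable registers in its class
    float Weight;          // spill weight
  };

  // Normally an unspillable range may not be evicted.  The exception added
  // here: an unspillable range from a *smaller* register class may evict an
  // unspillable range from a larger class, because the smaller class has
  // fewer alternatives (GR32_ABCD vs. GR32 in the test case).
  bool mayEvict(const LiveRangeInfo &Evictor, const LiveRangeInfo &Evictee) {
    if (!Evictee.Unspillable)
      return Evictor.Weight > Evictee.Weight; // ordinary weight comparison
    return Evictor.Unspillable &&
           Evictor.RegClassSize < Evictee.RegClassSize;
  }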
<rdar://problem/11542429>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157715 91177308-0d34-0410-b5e6-96231b3b80d8
Live ranges with a constrained register class may benefit from splitting
around individual uses. It allows the remaining live range to use a
larger register class where it may be allocated. This is like spilling to a
different register class.
This is only attempted on constrained register classes.
<rdar://problem/11438902>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157354 91177308-0d34-0410-b5e6-96231b3b80d8
This is just the fallback tie-breaker ordering; the main allocation
order is still descending size.
Patch by Shamil Kurmangaleev!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153904 91177308-0d34-0410-b5e6-96231b3b80d8
Passes after RegAlloc should be able to rely on MRI->getNumVirtRegs() == 0.
This makes sharing code for pre/postRA passes more robust.
Now, to check if a pass is running before the RA pipeline begins, use MRI->isSSA().
To check if a pass is running after the RA pipeline ends, use !MRI->getNumVirtRegs().
PEI resets virtual regs when it's done scavenging.
PTX will either have to provide its own PEI pass or assign physregs.
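In code, the two checks look like this (assuming the MachineRegisterInfo
interface named above):

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"

  using namespace llvm;

  static bool runsBeforeRegAllocPipeline(const MachineFunction &MF) {
    // Still in SSA form => the register allocation pipeline has not started.
    return MF.getRegInfo().isSSA();
  }

  static bool runsAfterRegAllocPipeline(const MachineFunction &MF) {
    // No virtual registers left => the RA pipeline (including rewriting and
    // PEI's reset after scavenging) is done.
    return MF.getRegInfo().getNumVirtRegs() == 0;
  }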
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151032 91177308-0d34-0410-b5e6-96231b3b80d8
Perform all comparisons at instruction granularity, and make sure
register masks on uses count in both gaps.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150530 91177308-0d34-0410-b5e6-96231b3b80d8
This makes global live range splitting behave identically with and
without register mask operands.
This is not necessarily the best way of using register masks for live
range splitting. It would be more efficient to first split global live
ranges around calls (i.e., register masks), and reserve the fine grained
per-physreg interference guidance for global live ranges that do not
cross calls.
For now the goal is to produce identical assembly when enabling register
masks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150259 91177308-0d34-0410-b5e6-96231b3b80d8
Creates a configurable regalloc pipeline.
Ensure specific llc options do what they say and nothing more: -regalloc=... has no effect other than selecting the allocator pass itself. This patch introduces a new umbrella flag, "-optimize-regalloc", to enable/disable the optimizing regalloc "superpass". This allows, for example, testing coalescing and scheduling under -O0, or vice versa.
When a CodeGen pass requires the MachineFunction to have a particular property, we need to explicitly define that property so it can be directly queried rather than naming a specific Pass. For example, to check for SSA, use MRI->isSSA, not addRequired<PHIElimination>.
CodeGen transformation passes are never "required" as an analysis.
ProcessImplicitDefs does not require LiveVariables.
We have a plan to massively simplify some of the early passes within the regalloc superpass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150226 91177308-0d34-0410-b5e6-96231b3b80d8
This only adds the interference checks required for correctness.
We still need to take advantage of register masks for the
interference-driven live range splitting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150191 91177308-0d34-0410-b5e6-96231b3b80d8
The LRE_DidCloneVirtReg callback may be called with virtual registers
that RAGreedy doesn't even know about yet. In that case, there are no
data structures to update.
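The guard amounts to something like this (a simplified model using ordinary
containers, not the actual RAGreedy bookkeeping):

  #include <unordered_map>

  struct RegStageInfo { unsigned Stage = 0; };

  struct GreedyState {
    // Per-virtual-register bookkeeping; registers created by someone else's
    // live range edit may not have an entry yet.
    std::unordered_map<unsigned, RegStageInfo> ExtraRegInfo;

    // LiveRangeEdit delegate: called when OldReg has been cloned into NewReg.
    void LRE_DidCloneVirtReg(unsigned NewReg, unsigned OldReg) {
      auto It = ExtraRegInfo.find(OldReg);
      if (It == ExtraRegInfo.end())
        return;                        // never heard of OldReg: nothing to do
      RegStageInfo Old = It->second;   // copy before the insert can rehash
      ExtraRegInfo[NewReg] = Old;      // the clone inherits OldReg's state
    }
  };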
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139702 91177308-0d34-0410-b5e6-96231b3b80d8
SplitKit always computes a complement live range to cover the places
where the original live range was live, but no explicit region has been
allocated.
Currently, the complement live range is created to be as small as
possible - it never overlaps any of the regions. This minimizes
register pressure, but if the complement is going to be spilled anyway,
that is not very important. The spiller will eliminate redundant
spills, and hoist others by making the spill slot live range overlap
some of the regions created by splitting. Stack slots are cheap.
This patch adds the interface to enable spill modes in SplitKit. In
spill mode, SplitKit will assume that the complement is going to spill,
so it will allow it to overlap regions in order to avoid back-copies.
By doing some of the spiller's work early, the complement live range
becomes simpler. In some cases, it can become much simpler because no
extra PHI-defs are required. This will speed up both splitting and
spilling.
This is only the interface to enable spill modes; there is no implementation yet.
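The shape of the interface, as a simplified model (not the actual SplitKit
header; the mode names and comments are my reading of the description above):

  class SplitEditorModel {
  public:
    enum ComplementSpillMode {
      SM_Partition, // default: the complement never overlaps the new regions
      SM_Size,      // complement will spill: overlap regions to minimize the
                    //   number of inserted copies
      SM_Speed      // overlap regions to minimize the expected execution
                    //   frequency of the inserted copies
    };

    // Callers opt in when they already know the complement will be spilled,
    // allowing it to overlap regions and avoid back-copies and extra PHI-defs.
    void reset(ComplementSpillMode SM = SM_Partition) { SpillMode = SM; }

  private:
    ComplementSpillMode SpillMode = SM_Partition;
  };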
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139500 91177308-0d34-0410-b5e6-96231b3b80d8
All new local ranges are marked as RS_New now, so there is no need to
attempt splitting of RS_Spill ranges any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137002 91177308-0d34-0410-b5e6-96231b3b80d8
The local ranges created get to stay in the RS_New stage, just like for
local and region splitting.
This gives tryLocalSplit a bit more freedom the first time it sees one
of these new local ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137001 91177308-0d34-0410-b5e6-96231b3b80d8
Drop the use of SplitAnalysis::getMultiUseBlocks, there is no need to go
through a SmallPtrSet any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136992 91177308-0d34-0410-b5e6-96231b3b80d8
Normally, we don't create a live range for a single instruction in a
basic block; the spiller does that anyway. However, when splitting a
live range that belongs to a proper register sub-class, inserting these
extra COPY instructions completely removes the constraints from the
remainder interval, and it may be allocated from the larger super-class.
The spiller will mop up these small live ranges if we end up spilling
anyway. It calls them snippets.
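As a simplified stand-in for the decision (illustrative fields, not the actual
splitting code):

  struct SplitCandidate {
    bool SingleInstrRange; // the new local range would cover one instruction
    bool ProperSubClass;   // the original class is a strict sub-class of a
                           //   larger allocatable super-class
  };

  // A range covering a single instruction is normally left to the spiller
  // (it would just become a snippet).  The exception: when the original range
  // lives in a proper sub-class, the COPYs inserted by the split free the
  // remainder to be inflated to the super-class, so the split pays off.
  bool worthSplittingSingleInstr(const SplitCandidate &C) {
    return C.SingleInstrRange && C.ProperSubClass;
  }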
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136989 91177308-0d34-0410-b5e6-96231b3b80d8