This new register allocator is initially identical to RegAllocBasic, but it will
receive all of the tricks that RegAllocBasic won't get.
RegAllocGreedy will eventually replace linear scan.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121234 91177308-0d34-0410-b5e6-96231b3b80d8
This analysis is going to run immediately after LiveIntervals. It will stay
alive during register allocation and keep track of user variables mentioned in
DBG_VALUE instructions.
When the register allocator is moving values between registers and the stack, it
is very hard to keep track of DBG_VALUE instructions. We usually get it wrong.
This analysis maintains a data structure that makes it easy to update DBG_VALUE
instructions.
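To illustrate the idea only (not the actual data structure), the analysis
can be thought of as a map from each user variable to its current machine
location, rewritten whenever the allocator moves a value. A minimal
self-contained sketch with invented names (VarLocMap, Loc):

  // Minimal sketch of tracking user-variable locations across register
  // allocator moves. Names and types are invented for illustration.
  #include <iostream>
  #include <map>
  #include <string>

  struct Loc {
    bool InStack; // true: stack slot, false: physical register
    int Id;       // register number or frame index
  };

  class VarLocMap {
    std::map<std::string, Loc> Locs; // user variable -> current location

  public:
    // Record the location named by a DBG_VALUE for variable V.
    void define(const std::string &V, Loc L) { Locs[V] = L; }

    // The allocator moved a value; rewrite every variable that lived there.
    void moved(Loc From, Loc To) {
      for (auto &KV : Locs)
        if (KV.second.InStack == From.InStack && KV.second.Id == From.Id)
          KV.second = To; // this is where updated DBG_VALUEs would be emitted
    }

    void dump() const {
      for (const auto &KV : Locs)
        std::cout << KV.first << " -> " << (KV.second.InStack ? "fi#" : "reg")
                  << KV.second.Id << "\n";
    }
  };

  int main() {
    VarLocMap M;
    M.define("x", {false, 3});      // DBG_VALUE in reg3 for "x"
    M.define("y", {false, 3});      // two variables sharing a register
    M.moved({false, 3}, {true, 1}); // allocator spills reg3 to frame index 1
    M.dump();                       // both x and y now point at fi#1
  }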
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120385 91177308-0d34-0410-b5e6-96231b3b80d8
framework. Its purpose is not to improve register allocation per se,
but to make it easier to develop powerful live range splitting. I call
it the basic allocator because it is as simple as a global allocator
can be but provides the building blocks for sophisticated register
allocation with live range splitting.
A minimal implementation is provided that trivially spills whenever it
runs out of registers. I'm checking in now to get high-level design
and style feedback. I've only done minimal testing. The next step is
implementing a "greedy" allocation algorithm that does some register
reassignment and makes better splitting decisions.
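For a rough picture of what "trivially spills" means (a sketch with invented
types, not the RegAllocBasic code): pull live intervals off a priority queue
ordered by spill weight, hand each one the first physical register whose
already-assigned intervals don't overlap it, and spill when no register
qualifies.

  // Illustrative priority-driven global allocator that spills whenever no
  // physical register is free; intervals and registers are invented data.
  #include <iostream>
  #include <queue>
  #include <string>
  #include <vector>

  struct Interval {
    std::string VReg;
    int Start, End; // half-open [Start, End)
    float Weight;   // spill weight; heavier intervals allocate first
  };

  static bool overlaps(const Interval &A, const Interval &B) {
    return A.Start < B.End && B.Start < A.End;
  }

  int main() {
    std::vector<Interval> Work = {
        {"v0", 0, 20, 3.0f}, {"v1", 5, 15, 2.0f}, {"v2", 10, 30, 1.0f}};
    const int NumPhysRegs = 2;
    std::vector<std::vector<Interval>> Assigned(NumPhysRegs);

    auto Cmp = [](const Interval &A, const Interval &B) {
      return A.Weight < B.Weight; // max-heap on spill weight
    };
    std::priority_queue<Interval, std::vector<Interval>, decltype(Cmp)> Q(Cmp);
    for (const Interval &I : Work)
      Q.push(I);

    while (!Q.empty()) {
      Interval Cur = Q.top();
      Q.pop();
      int Reg = -1;
      for (int R = 0; R < NumPhysRegs && Reg < 0; ++R) {
        bool Free = true;
        for (const Interval &Other : Assigned[R])
          if (overlaps(Cur, Other)) {
            Free = false;
            break;
          }
        if (Free)
          Reg = R;
      }
      if (Reg >= 0) {
        Assigned[Reg].push_back(Cur);
        std::cout << Cur.VReg << " -> R" << Reg << "\n";
      } else {
        // Trivial spill: no splitting, no eviction of lighter intervals.
        std::cout << Cur.VReg << " -> spill\n";
      }
    }
  }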
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117174 91177308-0d34-0410-b5e6-96231b3b80d8
splitting or spilling, and to help with rematerialization.
Use LiveRangeEdit in InlineSpiller and SplitKit. This will eventually make it
possible to share remat code between InlineSpiller and SplitKit.
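Loosely sketched (invented names, not the LiveRangeEdit interface itself),
the helper just remembers the register being edited plus every new virtual
register created on its behalf, so the spiller or splitter can revisit them
afterwards:

  // Bookkeeping sketch: record virtual registers created while splitting or
  // spilling a parent register. Names and numbering are invented.
  #include <iostream>
  #include <vector>

  class RangeEditSketch {
    unsigned Parent;               // register being split or spilled
    std::vector<unsigned> NewRegs; // registers created on its behalf
    unsigned NextVReg;             // next unused virtual register number

  public:
    RangeEditSketch(unsigned Parent, unsigned FirstFree)
        : Parent(Parent), NextVReg(FirstFree) {}

    // Create a replacement register and remember it for later processing,
    // e.g. recomputing liveness or attempting rematerialization.
    unsigned createFrom() {
      unsigned R = NextVReg++;
      NewRegs.push_back(R);
      return R;
    }

    unsigned parent() const { return Parent; }
    const std::vector<unsigned> &regs() const { return NewRegs; }
  };

  int main() {
    RangeEditSketch Edit(/*Parent=*/100, /*FirstFree=*/200);
    unsigned A = Edit.createFrom(); // e.g. the part kept in a register
    unsigned B = Edit.createFrom(); // e.g. the part reloaded from a stack slot
    std::cout << "editing v" << Edit.parent() << ", created v" << A << " and v"
              << B << "; " << Edit.regs().size() << " new regs\n";
  }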
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116543 91177308-0d34-0410-b5e6-96231b3b80d8
experimental pass that allocates locals relative to one another before
register allocation and then assigns them to actual stack slots as a block
later in PEI. This will eventually allow targets with limited index offset
range to allocate additional base registers (not just FP and SP) to
more efficiently reference locals, as well as handle situations where
locals cannot be referenced via SP or FP at all (dynamic stack realignment
together with variable sized objects, for example). It's currently
incomplete and almost certainly buggy. Work in progress.
Disabled by default and gated via the -enable-local-stack-alloc command
line option.
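As a hedged sketch of the mechanism (the offset range, sizes, and names are
all invented): lay the locals out back to back, address in-range ones off FP,
and materialize an extra base register for the first out-of-range access so
that nearby accesses can reuse it.

  // Sketch: allocate locals relative to one another, then resolve accesses
  // whose offsets exceed a limited index range with extra base registers.
  #include <iostream>
  #include <vector>

  int main() {
    // Sizes of local objects; lay them out back to back ("as a block").
    std::vector<int> Sizes = {16, 256, 8, 512, 4};
    std::vector<int> Offsets;
    int Off = 0;
    for (int S : Sizes) {
      Offsets.push_back(Off);
      Off += S;
    }

    const int MaxIndexOffset = 255; // pretend the ISA encodes offsets 0..255
    int NextBaseReg = 1;
    int BaseRegOffset = -1; // offset covered by the current base register

    for (int I = 0; I < (int)Offsets.size(); ++I) {
      int O = Offsets[I];
      if (O <= MaxIndexOffset) {
        std::cout << "local" << I << ": [FP + " << O << "]\n";
      } else if (BaseRegOffset >= 0 && O - BaseRegOffset <= MaxIndexOffset) {
        // Reuse the base register set up for an earlier out-of-range local.
        std::cout << "local" << I << ": [B" << NextBaseReg - 1 << " + "
                  << O - BaseRegOffset << "]\n";
      } else {
        // Materialize a new base register pointing at this local's offset.
        BaseRegOffset = O;
        std::cout << "B" << NextBaseReg << " = FP + " << O << "; local" << I
                  << ": [B" << NextBaseReg << " + 0]\n";
        ++NextBaseReg;
      }
    }
  }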
rdar://8277890
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111059 91177308-0d34-0410-b5e6-96231b3b80d8
This is a work in progress. So far we have some basic loop analysis to help
determine where it is useful to split a live range around a loop.
The actual loop splitting code from Splitter.cpp is also going to move in here.
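One heuristic such an analysis might support, sketched here with invented
block numbers (this is not the Splitter.cpp code): a range that is live all
the way through a loop but has no uses inside it is a good candidate for
splitting around the loop, since the loop body then stops tying up a
register.

  // Heuristic sketch: split an interval around a loop when it is live
  // through the loop but never used inside it. All data is invented.
  #include <algorithm>
  #include <iostream>
  #include <set>

  int main() {
    std::set<int> LoopBlocks = {2, 3, 4};       // blocks forming the loop
    std::set<int> LiveBlocks = {1, 2, 3, 4, 5}; // blocks where the range lives
    std::set<int> UseBlocks = {1, 5};           // blocks with actual uses

    // Live through the loop: every loop block is in the live set.
    bool LiveThroughLoop = std::includes(LiveBlocks.begin(), LiveBlocks.end(),
                                         LoopBlocks.begin(), LoopBlocks.end());

    // Any uses inside the loop?
    bool UsedInLoop = false;
    for (int B : UseBlocks)
      if (LoopBlocks.count(B))
        UsedInLoop = true;

    if (LiveThroughLoop && !UsedInLoop)
      std::cout << "split around the loop: spill before it, reload after\n";
    else
      std::cout << "keep the range in a register through the loop\n";
  }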
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108842 91177308-0d34-0410-b5e6-96231b3b80d8
InlineSpiller inserts loads and spills immediately instead of deferring to
VirtRegMap. This is possible now because SlotIndexes allows instructions to be
inserted and renumbered.
This is work in progress, and is mostly a copy of TrivialSpiller so far. It
works very well for functions that don't require spilling.
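A rough model of the immediate insertion (the instruction representation is
invented, not the InlineSpiller API): walk the block, put a reload right
before each use of the spilled register and a store right after each def,
instead of recording the rewrite for VirtRegMap to apply later.

  // Sketch: insert reloads/spills for one spilled register directly into
  // the instruction stream; Inst is an invented stand-in for MachineInstr.
  #include <iostream>
  #include <iterator>
  #include <list>
  #include <string>

  struct Inst {
    std::string Text;
    bool UsesSpilled = false; // reads the spilled register
    bool DefsSpilled = false; // writes the spilled register
  };

  int main() {
    std::list<Inst> Block = {
        {"v1 = add v2, v3", false, true}, // def of the spilled register v1
        {"v4 = mul v1, v1", true, false}, // use of v1
        {"store v4, @x", false, false},
    };

    // Insert spill code immediately; renumbering of the surrounding
    // instructions is assumed to be handled elsewhere (SlotIndexes).
    for (auto It = Block.begin(); It != Block.end(); ++It) {
      if (It->UsesSpilled)
        Block.insert(It, Inst{"v1 = reload fi#0"}); // reload before the use
      if (It->DefsSpilled)
        Block.insert(std::next(It), Inst{"spill v1, fi#0"}); // store after def
    }

    for (const Inst &I : Block)
      std::cout << I.Text << "\n";
  }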
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107227 91177308-0d34-0410-b5e6-96231b3b80d8
So far this is just a clone of -regalloc=local that has been lobotomized to run
25% faster. It drops the least-recently-used calculations, and is just plain
stupid when it runs out of registers.
The plan is to make this go even faster for -O0 by taking advantage of the short
live intervals in unoptimized code. It should not be necessary to calculate
liveness when most virtual registers are killed 2-3 instructions after they are
born.
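To give the flavor of how simple this is meant to stay (a toy sketch, not
the pass itself): scan each block forward, hand out free registers at defs,
and when nothing is free just spill whichever mapping comes first instead of
maintaining least-recently-used order.

  // Toy block-local allocator: no liveness computation, no LRU; it evicts
  // an arbitrary live mapping when it runs out of registers.
  #include <iostream>
  #include <map>
  #include <string>
  #include <vector>

  struct Op {
    std::string Def;
    std::vector<std::string> Uses;
  };

  int main() {
    // Virtual registers born and killed a couple of instructions apart,
    // as is typical for unoptimized code.
    std::vector<Op> Block = {
        {"v1", {}}, {"v2", {"v1"}}, {"v3", {"v2"}}, {"", {"v3"}}};
    const int NumRegs = 2;

    std::map<std::string, int> Assign; // vreg -> physical register
    std::vector<bool> Busy(NumRegs, false);

    for (const Op &I : Block) {
      for (const std::string &U : I.Uses)
        if (!Assign.count(U))
          std::cout << "reload " << U << "\n"; // value was spilled earlier
      if (I.Def.empty())
        continue;
      int R = -1;
      for (int i = 0; i < NumRegs; ++i)
        if (!Busy[i]) {
          R = i;
          break;
        }
      if (R < 0) {
        // Out of registers: spill the first mapping found, no LRU tracking.
        auto Victim = Assign.begin();
        R = Victim->second;
        std::cout << "spill " << Victim->first << " from R" << R << "\n";
        Assign.erase(Victim);
      }
      Busy[R] = true;
      Assign[I.Def] = R;
      std::cout << I.Def << " -> R" << R << "\n";
    }
  }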
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102006 91177308-0d34-0410-b5e6-96231b3b80d8
source addition. Apparently the buildbots were wrong about failures.
---
Add some switches helpful for debugging:
-print-before=<Pass Name>
Dump IR before running pass <Pass Name>.
-print-before-all
Dump IR before running each pass.
-print-after-all
Dump IR after running each pass.
These are helpful when tracking down a miscompilation. It is easy to
get IR dumps and do diffs on them, etc.
To make this work well, add a new getPrinterPass API to Pass so that
each kind of pass (ModulePass, FunctionPass, etc.) can create a Pass
suitable for dumping out the kind of object the Pass works on.
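As a toy model of how the printer passes get interleaved (class names are
invented; this is not the PassManager code), the pass manager can schedule a
printer before and/or after each pass according to the flags:

  // Toy model: wrap each scheduled pass with printer passes, mirroring what
  // -print-before/-print-before-all/-print-after-all request.
  #include <iostream>
  #include <memory>
  #include <string>
  #include <vector>

  struct Pass {
    virtual ~Pass() = default;
    virtual std::string name() const = 0;
    virtual void run(std::string &IR) = 0;
  };

  // Stand-in for the printer a real pass would create for its IR unit.
  struct PrinterPass : Pass {
    std::string Banner;
    explicit PrinterPass(std::string B) : Banner(std::move(B)) {}
    std::string name() const override { return "print"; }
    void run(std::string &IR) override {
      std::cout << Banner << "\n" << IR << "\n";
    }
  };

  struct SimplifyPass : Pass {
    std::string name() const override { return "simplify"; }
    void run(std::string &IR) override { IR += "  ; simplified\n"; }
  };

  int main() {
    bool PrintBeforeAll = true, PrintAfterAll = true;

    std::vector<std::unique_ptr<Pass>> PM;
    PM.push_back(std::make_unique<SimplifyPass>());

    std::vector<std::unique_ptr<Pass>> Scheduled;
    for (auto &P : PM) {
      std::string N = P->name();
      if (PrintBeforeAll)
        Scheduled.push_back(
            std::make_unique<PrinterPass>("*** IR before " + N + " ***"));
      Scheduled.push_back(std::move(P));
      if (PrintAfterAll)
        Scheduled.push_back(
            std::make_unique<PrinterPass>("*** IR after " + N + " ***"));
    }

    std::string IR = "define void @f() {\n  ret void\n}\n";
    for (auto &P : Scheduled)
      P->run(IR);
  }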
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100249 91177308-0d34-0410-b5e6-96231b3b80d8
reduce down to a single value. InstCombine already does this transformation
but DAG legalization may introduce new opportunities. This has turned out to
be important for ARM where 64-bit values are split up during type legalization:
InstCombine is not able to remove the PHI cycles on the 64-bit values but
the separate 32-bit values can be optimized. I measured the compile time
impact of this (running llc on 176.gcc) and it was not significant.
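For illustration (the node representation is invented, not the pass's), the
reduction test amounts to chasing every input of a PHI through other PHIs
and checking whether only a single non-PHI source is reachable:

  // Sketch: detect a PHI cycle that reduces to one value by collecting the
  // non-PHI sources reachable from a PHI. The graph encoding is invented.
  #include <iostream>
  #include <set>
  #include <vector>

  struct Node {
    bool IsPhi;
    int Value;               // meaningful when !IsPhi
    std::vector<int> Inputs; // node indices, meaningful when IsPhi
  };

  int main() {
    // Node 0 is the constant 7; nodes 1 and 2 are PHIs feeding each other
    // and node 0, e.g. a value carried around a loop without being changed.
    std::vector<Node> G = {
        {false, 7, {}}, {true, 0, {0, 2}}, {true, 0, {1, 1}}};

    int Root = 1; // the PHI we want to simplify
    std::set<int> Visited, Sources;
    std::vector<int> Work = {Root};
    while (!Work.empty()) {
      int N = Work.back();
      Work.pop_back();
      if (!Visited.insert(N).second)
        continue;
      if (!G[N].IsPhi) {
        Sources.insert(N); // reached a real value
        continue;
      }
      for (int In : G[N].Inputs)
        Work.push_back(In);
    }

    if (Sources.size() == 1)
      std::cout << "PHI cycle reduces to value " << G[*Sources.begin()].Value
                << "\n";
    else
      std::cout << "PHI has " << Sources.size() << " distinct sources\n";
  }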
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95951 91177308-0d34-0410-b5e6-96231b3b80d8
running tail duplication when doing branch folding for if-conversion, and
we also want to be able to run tail duplication earlier to fix some
reg alloc problems. Move the CanFallThrough function from BranchFolding
to MachineBasicBlock so that it can be shared by TailDuplication.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89904 91177308-0d34-0410-b5e6-96231b3b80d8
code in preparation for code generation. The main thing it does
is handle the case when eh.exception calls (and, in a future
patch, eh.selector calls) are far away from landing pads. Right
now in practice you only find eh.exception calls close to landing
pads: either in a landing pad (the common case) or in a landing
pad successor, due to loop passes shifting them about. However
future exception handling improvements will result in calls far
from landing pads:
(1) Inlining of rewinds. Consider the following case:
In function @f:
...
invoke @g to label %normal unwind label %unwinds
...
unwinds:
%ex = call i8* @llvm.eh.exception()
...
In function @g:
...
invoke @something to label %continue unwind label %handler
...
handler:
%ex = call i8* @llvm.eh.exception()
... perform cleanups ...
"rethrow exception"
Now inline @g into @f. Currently this is turned into:
In function @f:
...
invoke @something to label %continue unwind label %handler
...
handler:
%ex = call i8* @llvm.eh.exception()
... perform cleanups ...
invoke "rethrow exception" to label %normal unwind label %unwinds
unwinds:
%ex = call i8* @llvm.eh.exception()
...
However we would like to simplify invoke of "rethrow exception" into
a branch to the %unwinds label. Then %unwinds is no longer a landing
pad, and the eh.exception call there is then far away from any landing
pads.
(2) Using the unwind instruction for cleanups.
It would be nice to have codegen handle the following case:
invoke @something to label %continue unwind label %run_cleanups
...
handler:
... perform cleanups ...
unwind
This requires turning "unwind" into a library call, which
necessarily takes a pointer to the exception as an argument
(this patch also does this unwind lowering). But that means
you are using eh.exception again far from a landing pad.
(3) Bugpoint simplifications. When bugpoint is simplifying
exception handling code it often generates eh.exception calls
far from a landing pad, which then causes codegen to assert.
Bugpoint then latches on to this assertion and loses sight
of the original problem.
Note that it is currently rare for this pass to actually do
anything. And in fact it normally shouldn't do anything at
all given the code coming out of llvm-gcc! But it does fire
a few times in the testsuite. As far as I can see this is
almost always due to the LoopStrengthReduce codegen pass
introducing pointless loop preheader blocks which are landing
pads and only contain a branch to another block. This other
block contains an eh.exception call. So probably by tweaking
LoopStrengthReduce a bit this can be avoided.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72276 91177308-0d34-0410-b5e6-96231b3b80d8
The following is checked:
* Operand counts: All explicit operands must be present.
* Register classes: All physical and virtual register operands must be
compatible with the register class required by the instruction descriptor.
* Register live intervals: Registers must be defined only once, and must be
defined before use.
The machine code verifier is enabled with the command-line option
'-verify-machineinstrs', or by defining the environment variable
LLVM_VERIFY_MACHINEINSTRS to the name of a file that will receive all the
verifier errors.
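For the live-interval checks listed above, a simplified defined-only-once /
defined-before-use walk over a straight-line block might look like this
sketch (the instruction encoding is invented, not the verifier's MachineInstr
walk):

  // Sketch of two checks for virtual registers in a straight-line block:
  // defined exactly once, and defined before any use. Encoding is invented.
  #include <iostream>
  #include <map>
  #include <vector>

  struct MI {
    std::vector<int> Defs; // virtual registers written
    std::vector<int> Uses; // virtual registers read
  };

  int main() {
    std::vector<MI> Code = {
        {{1}, {}},  // %1 = ...
        {{2}, {1}}, // %2 = use %1   (fine)
        {{}, {3}},  // use %3        (error: used before being defined)
        {{1}, {2}}, // %1 = use %2   (error: %1 defined a second time)
    };

    std::map<int, int> DefCount;
    bool Errors = false;
    for (int I = 0; I < (int)Code.size(); ++I) {
      for (int U : Code[I].Uses)
        if (!DefCount.count(U)) {
          std::cout << "error: %" << U << " used before definition at inst "
                    << I << "\n";
          Errors = true;
        }
      for (int D : Code[I].Defs)
        if (++DefCount[D] > 1) {
          std::cout << "error: %" << D << " defined more than once at inst "
                    << I << "\n";
          Errors = true;
        }
    }
    if (!Errors)
      std::cout << "machine code looks consistent\n";
  }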
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71918 91177308-0d34-0410-b5e6-96231b3b80d8
is currently off by default, and can be enabled with
-disable-post-RA-scheduler=false.
This doesn't have a significant impact on most code yet because it doesn't
yet do anything to address anti-dependencies and it doesn't attempt to
disambiguate memory references. Also, several popular targets
don't have pipeline descriptions yet.
The majority of the changes here are splitting the SelectionDAG-specific
code out of ScheduleDAG, so that ScheduleDAG can be moved to
libLLVMCodeGen.a. The interface between ScheduleDAG-using code and
the rest of the scheduling code is somewhat rough and will evolve.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59676 91177308-0d34-0410-b5e6-96231b3b80d8