expensive the most useful interface to this analysis is.
Fun story -- it's also not correct. That's getting fixed in another
patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144523 91177308-0d34-0410-b5e6-96231b3b80d8
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.
The load and store slots are not needed after the deferred spill code
insertion framework was deleted.
The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals. In reality, these slots were used to distinguish
early-clobber defs from normal defs.
The new naming scheme also has 4 slots, but the names match how the
slots are really used. This is a purely mechanical renaming, but some
of the code makes a lot more sense now.
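For reference, a sketch of the renaming (slot names as they ended up in
SlotIndexes.h; the comments are my paraphrase):

  enum Slot {
    Slot_Block,         // was "load": live-ins are defined at this slot
    Slot_EarlyClobber,  // was "use": early-clobber defs go here
    Slot_Register,      // was "def": normal defs and all reads go here
    Slot_Dead,          // was "store": dead defs extend to this slot
  };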
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144503 91177308-0d34-0410-b5e6-96231b3b80d8
RegAllocGreedy has been the default for six months now.
Deleting RegAllocLinearScan makes it possible to also delete
VirtRegRewriter and clean up the spiller code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144475 91177308-0d34-0410-b5e6-96231b3b80d8
When this field is true, it means that the load is from constant memory (run-time or compile-time constant) and so can be hoisted out of loops or moved around other memory accesses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144100 91177308-0d34-0410-b5e6-96231b3b80d8
the mailing list. Suggestions for other statistics to collect would be
awesome. =]
Currently these are implemented as a separate pass guarded by a separate
flag. I'm not thrilled by that, but I wanted to be able to collect the
statistics for the old code placement as well as the new in order to
have a point of comparison. I'm planning on folding them into the single
pass if / when there is only one pass of interest.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@143537 91177308-0d34-0410-b5e6-96231b3b80d8
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms occurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from the
bottom up to maximize benefit, and using an SCC-based walk to lay out
chains of blocks in a profitable ordering without the O(N^2) iterations
the old pass requires.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as is currently done by hand in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142641 91177308-0d34-0410-b5e6-96231b3b80d8
In machine code, you can't just replaceRegWith() the same way you can
replaceAllUsesWith() in IR. Virtual registers may have different
register classes that need to be merged first.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142201 91177308-0d34-0410-b5e6-96231b3b80d8
Most instructions have some requirements for their register operands.
Usually, this is expressed as register class constraints in the
MCInstrDesc, but for inline assembly the constraints are encoded in the
flag words.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141835 91177308-0d34-0410-b5e6-96231b3b80d8
to the landing pad. This will be used by the back-end to generate the jump
tables for dispatching the arriving longjmp in sjlj eh.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141224 91177308-0d34-0410-b5e6-96231b3b80d8
The <undef> flag says that a MachineOperand doesn't read its register,
or doesn't depend on the previous value of its register.
A full register def never depends on the previous register value. A
partial register def may depend on the previous value if it is intended
to update part of a register.
For example:
%vreg10:dsub_0<def,undef> = COPY %vreg1
%vreg10:dsub_1<def> = COPY %vreg2
The first copy instruction defines the full %vreg10 register with the
bits not covered by dsub_0 defined as <undef>. It is not considered a
read of %vreg10.
The second copy modifies part of %vreg10 while preserving the rest. It
has an implicit read of %vreg10.
This patch adds a MachineOperand::readsReg() method to determine if an
operand reads its register.
Previously, this was modelled by adding a full-register <imp-def>
operand to the instruction. The new approach makes it possible to
determine directly from a MachineOperand whether it reads its register;
no scanning of MI operands is required.
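A minimal sketch of the resulting query (the helper name and loop are
illustrative, not part of the patch):

  // Does MI read Reg? Each operand now answers for itself; no scan
  // for a full-register <imp-def> is needed.
  static bool instrReadsReg(const MachineInstr *MI, unsigned Reg) {
    for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
      const MachineOperand &MO = MI->getOperand(i);
      // readsReg() is false for a full def and for <def,undef>, true
      // for a partial def that preserves the remaining bits.
      if (MO.isReg() && MO.getReg() == Reg && MO.readsReg())
        return true;
    }
    return false;
  }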
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141124 91177308-0d34-0410-b5e6-96231b3b80d8
This intrinsic is used to pass the index of the function context to the back-end
for further processing. The back-end is in charge of filling in the rest of the
entries.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140676 91177308-0d34-0410-b5e6-96231b3b80d8
This also enables domain swizzling for AVX code which required a few
trivial test changes.
The pass will be moved to lib/CodeGen shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140659 91177308-0d34-0410-b5e6-96231b3b80d8
The function will refuse to use a register class with fewer registers
than MinNumRegs. This can be used by clients to avoid accidentally
increasing register pressure too much.
The default value of MinNumRegs=0 doesn't affect how constrainRegClass()
works.
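Hypothetical usage sketch (the MinNumRegs value is made up):

  // Constrain Reg to RC, but refuse register classes with fewer than
  // 4 registers. Returns null if the constraint is too tight, leaving
  // Reg's class unchanged.
  const TargetRegisterClass *NewRC =
      MRI->constrainRegClass(Reg, RC, /*MinNumRegs=*/4);
  if (!NewRC) {
    // Fall back: keep the original register class.
  }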
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140339 91177308-0d34-0410-b5e6-96231b3b80d8
The getPrevIndex() function moves to the same slot in the previous
instruction. For getVNInfoBefore(), we just need the previous slot in
the same instruction.
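In other words (a sketch; Idx is an arbitrary SlotIndex):

  VNInfo *Wrong = LI.getVNInfoAt(Idx.getPrevIndex()); // previous instr
  VNInfo *Right = LI.getVNInfoAt(Idx.getPrevSlot());  // previous slot
  // getVNInfoBefore(Idx) is equivalent to the second form.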
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139793 91177308-0d34-0410-b5e6-96231b3b80d8
There is only one legitimate use remaining, in addIntervalsForSpills().
All other calls to hasPHIKill() are only used to update PHIKill flags.
The addIntervalsForSpills() function is part of the old spilling
framework, only used by linearscan.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139783 91177308-0d34-0410-b5e6-96231b3b80d8
It is conservatively correct to keep the hasPHIKill flags, even after
deleting PHI-defs.
The calculation can be very expensive after taildup has created a
quadratic number of indirectbr edges in the CFG, and the hasPHIKill flag
isn't used for anything after RenumberValues().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139780 91177308-0d34-0410-b5e6-96231b3b80d8
An improper SlotIndex->VNInfo lookup was leading to unsafe copy removal.
Fixes PR10920: 401.bzip2 miscompile with no IV rewrite.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139765 91177308-0d34-0410-b5e6-96231b3b80d8
Three out of four clients prefer this interface, which is consistent with
extendIntervalEndTo() and LiveRangeCalc::extend().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139604 91177308-0d34-0410-b5e6-96231b3b80d8
(The fix for the related failures on x86 is going to be nastier because we actually need Acquire memoperands attached to the atomic load instrs, etc.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139221 91177308-0d34-0410-b5e6-96231b3b80d8
with a vector condition); such selects become VSELECT codegen nodes.
This patch also removes VSETCC codegen nodes, unifying them with SETCC
nodes (codegen was actually often using SETCC for vector SETCC already).
This ensures that various DAG combiner optimizations kick in for vector
comparisons. Passes dragonegg bootstrap with no testsuite regressions
(nightly testsuite as well as "make check-all"). Patch mostly by
Nadav Rotem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139159 91177308-0d34-0410-b5e6-96231b3b80d8
init.trampoline and adjust.trampoline intrinsics, into two intrinsics
like in GCC. While having one combined intrinsic is tempting, it is
not natural because typically the trampoline initialization needs to
be done in one function, and the result of adjust.trampoline is needed
in a different (nested) function. To get around this, llvm-gcc hacks the
nested function lowering code to insert an additional parent variable
holding the adjust.trampoline result that can be accessed from the child
function. Dragonegg doesn't have the luxury of tweaking GCC code, so it
stored the result of adjust.trampoline in the memory GCC set aside for
the trampoline itself (this is always available in the child function),
and set up some new memory (using an alloca) to hold the trampoline.
Unfortunately this breaks Go which allocates trampoline memory on the
heap and wants to use it even after the parent has exited (!). Rather
than doing even more hacks to get Go working, it seemed best to just use
two intrinsics like in GCC. Patch mostly by Sanjoy Das.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139140 91177308-0d34-0410-b5e6-96231b3b80d8
The landingpad instruction is lowered into the EXCEPTIONADDR and EHSELECTION
SDNodes. The information from the landingpad instruction is harvested by the
'AddLandingPadInfo' function. The new EH uses the current EH scheme in the
back-end. This will change once we switch over to the new scheme. (Reviewed by
Jakob!)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137880 91177308-0d34-0410-b5e6-96231b3b80d8
This function doesn't have anything to do with spill weights, and MRI
already has functions for manipulating the register class of a virtual
register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137123 91177308-0d34-0410-b5e6-96231b3b80d8
This flag is true from isel to register allocation when the machine
function is required to be in SSA form. The TwoAddressInstructionPass
and PHIElimination passes clear the flag.
The SSA flag will be used by the machine code verifier to check for SSA
form, and eventually an assertion can enforce it in +Asserts builds.
This will catch the common target error of creating machine code with
multiple defs of a virtual register.
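A sketch of the kind of assertion this enables (an illustrative loop,
not the actual verifier code):

  if (MRI->isSSA()) {
    for (unsigned i = 0, e = MRI->getNumVirtRegs(); i != e; ++i) {
      unsigned Reg = TargetRegisterInfo::index2VirtReg(i);
      // In SSA form, a virtual register has at most one def.
      assert(std::distance(MRI->def_begin(Reg), MRI->def_end(Reg)) <= 1
             && "Multiple defs of virtual register in SSA form");
    }
  }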
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136532 91177308-0d34-0410-b5e6-96231b3b80d8
working on x86 (at least for trivial testcases); other architectures will
need more work so that they actually emit the appropriate instructions for
orderings stricter than 'monotonic'. (As far as I can tell, the ARM, PPC,
Mips, and Alpha backends need such changes.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136457 91177308-0d34-0410-b5e6-96231b3b80d8
AddLandingPadInfo takes a landingpad instruction and grabs all of the
information from it that it needs for EH table generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136429 91177308-0d34-0410-b5e6-96231b3b80d8
There is still a bit more refactoring left to do in Targets. But we are now very
close to fixing all the layering issues in MC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135611 91177308-0d34-0410-b5e6-96231b3b80d8
TargetLoweringObjectFileImpl down to MCObjectFileInfo.
TargetAsmInfo is down to one last method. It's *almost* gone!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135569 91177308-0d34-0410-b5e6-96231b3b80d8
to MCRegisterInfo. Also initialize the mapping at construction time.
This patch eliminates TargetRegisterInfo from TargetAsmInfo. It's another step
towards fixing the layering violation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135424 91177308-0d34-0410-b5e6-96231b3b80d8
This gets rid of some of the gory splitting details in RAGreedy and
makes them available to future SplitKit clients.
Slightly generalize the functionality to support multi-way splitting.
Specifically, SplitEditor::splitLiveThroughBlock() supports switching
between different register intervals in a block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135307 91177308-0d34-0410-b5e6-96231b3b80d8
RAGreedy::tryAssign will now evict interference from the preferred
register even when another register is free.
To support this, add the EvictionCost struct that counts how many hints
are broken by an eviction. We don't want to break one hint just to
satisfy another.
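The cost bookkeeping boils down to something like this (a sketch; the
field names are assumed):

  struct EvictionCost {
    unsigned BrokenHints; // Total number of hints broken by eviction.
    float MaxWeight;      // Maximum spill weight among the evictees.
  };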
Rename canEvict to shouldEvict, and add the first bit of eviction policy
that doesn't depend on spill weights: Always make room in the preferred
register as long as the evictees can be split and aren't already
assigned to their preferred register.
Also make the CSR avoidance more accurate. When looking for a cheaper
register it is OK to use a new volatile register. Only CSR aliases that
have never been used before should be avoided.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134735 91177308-0d34-0410-b5e6-96231b3b80d8
hasPredecessorHelper function allows predecessors to be cached to speed up
repeated invocations. This fixes PR10186.
X.isPredecessorOf(Y) now just calls Y.hasPredecessor(X)
Y.hasPredecessor(X) calls Y.hasPredecessorHelper(X, Visited, Worklist) with
empty Visited and Worklist sets (i.e. no caching over invocations).
Y.hasPredecessorHelper(X, Visited, Worklist) caches search state in Visited
and Worklist to speed up repeated calls. The Visited set is searched for X
before going to the worklist to further search the DAG if necessary.
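A sketch of the cached usage pattern (the names are illustrative):

  SmallPtrSet<const SDNode *, 32> Visited;
  SmallVector<const SDNode *, 16> Worklist;
  for (unsigned i = 0, e = Candidates.size(); i != e; ++i) {
    // Visited persists across queries, so parts of the DAG already
    // searched are not walked again.
    if (Y->hasPredecessorHelper(Candidates[i], Visited, Worklist))
      ++NumPreds; // Candidates[i] is a predecessor of Y.
  }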
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134592 91177308-0d34-0410-b5e6-96231b3b80d8
Add a MI->emitError() method that the backend can use to report errors
related to inline assembly. Call it from X86FloatingPoint.cpp when the
constraints are wrong.
This enables proper clang diagnostics from the backend:
$ clang -c pr30848.c
pr30848.c:5:12: error: Inline asm output regs must be last on the x87 stack
__asm__ ("" : "=u" (d)); /* { dg-error "output regs" } */
^
1 error generated.
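On the backend side the report is a one-liner (sketch; the message is
the one shown above):

  // In X86FloatingPoint.cpp, when the constraint check fails:
  MI->emitError("Inline asm output regs must be last on the x87 stack");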
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134307 91177308-0d34-0410-b5e6-96231b3b80d8
sink them into MC layer.
- Added MCInstrInfo, which captures the tablegen generated static data. Changed
TargetInstrInfo so it's based off MCInstrInfo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134021 91177308-0d34-0410-b5e6-96231b3b80d8
BranchProbabilityInfo (except setEdgeWeight, which is not available here).
Branch weights are kept in MachineBasicBlocks. To turn off this analysis,
set -use-mbpi=false.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133184 91177308-0d34-0410-b5e6-96231b3b80d8
suffix (e.g. .xdata$myfunc). The suffix part isn't implemented yet, but
I'll get to it in the next patch.
Fix up all callers of the affected functions. Make them pass said suffix to
the function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132205 91177308-0d34-0410-b5e6-96231b3b80d8
When instructions are deleted, they leave tombstone SlotIndex entries.
The isZeroLength method should ignore these null indexes.
This causes RABasic to sometimes spill a callee-saved register in the
abi-isel.ll test, so don't run that test with -regalloc=basic. Prioritizing
register allocation according to spill weight can cause more registers to be
used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131436 91177308-0d34-0410-b5e6-96231b3b80d8
markers. In some cases a register def is dead on one path, but not on
another.
This is passing Clang self-hosting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131214 91177308-0d34-0410-b5e6-96231b3b80d8
With this, clang will use .debug_frame when compiling with, for example,
clang -g -c -m32 test.c
This matches gcc's behaviour. It looks like .debug_frame is a bit bigger
than .eh_frame, but has the big advantage of not being allocated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131140 91177308-0d34-0410-b5e6-96231b3b80d8
This makes a difference if a live interval is referring to a deleted
instruction. It can be important to insert an instruction before or after a
deleted instruction to avoid interference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130686 91177308-0d34-0410-b5e6-96231b3b80d8
This could happen when trying to use a value that had been eliminated after dead
code elimination and folding loads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130597 91177308-0d34-0410-b5e6-96231b3b80d8
give it a bit more responsibility. Also implement it for MachO.
If hacked to use cfi, 32 bit MachO will produce
.cfi_personality 155, L___gxx_personality_v0$non_lazy_ptr
and 64 bit will produce
.cfi_personality ___gxx_personality_v0
The general idea is that .cfi_personality gets passed the final symbol. It is
up to codegen to produce it if using indirect representation (like 32 bit
MachO), but it is up to MC to decide which relocations to create.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130341 91177308-0d34-0410-b5e6-96231b3b80d8
more callee-saved registers and introduce copies. Only allows it if scheduling
a node above calls would end up lessening register pressure.
Call operands also have added ABI restrictions for register allocation, so be
extra careful with hoisting them above calls.
rdar://9329627
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130245 91177308-0d34-0410-b5e6-96231b3b80d8
This has two effects:
1. We never inflate to a larger register class than what the sub-target can
   handle.
2. Completely unconstrained virtual registers get the largest possible
   register class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130229 91177308-0d34-0410-b5e6-96231b3b80d8
fix bugs exposed by the gcc dejagnu testsuite:
1. The load may actually be used by a dead instruction, which
would cause an assert.
2. The load may not be used by the current chain of instructions,
and we could move it past a side-effecting instruction. Change
how we process uses to define the problem away.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130018 91177308-0d34-0410-b5e6-96231b3b80d8
The basic issue here is that bottom-up isel is matching the branch
and compare, and was failing to fold the load into the branch/compare
combo. Fixing this (by allowing folding into any instruction of a
sequence that is selected) allows us to produce things like:
cmpb $0, 52(%rax)
je LBB4_2
instead of:
movb 52(%rax), %cl
cmpb $0, %cl
je LBB4_2
This makes the generated -O0 code run a bit faster, but also speeds up
compile time by putting less pressure on the register allocator and
generating less code.
This was one of the biggest classes of missing load folding. Implementing
this shrinks 176.gcc's c-decl.s (as a random example) by about 4% in (verbose-asm)
line count.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129656 91177308-0d34-0410-b5e6-96231b3b80d8
Change ELF systems to use CFI for producing the EH tables. This reduces the
size of the clang binary in Debug builds from 690MB to 679MB.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129571 91177308-0d34-0410-b5e6-96231b3b80d8
This is done by pushing physical register definitions close to their
use, which happens to handle flag definitions if they're not glued to
the branch. This seems to be generally a good thing though, so I
didn't need to add a target hook yet.
The primary motivation is to generate code closer to what people
expect and rule out missed opportunities from enabling macro-op
fusion. As a side benefit, we get several 2-5% gains on x86
benchmarks. There is one regression:
SingleSource/Benchmarks/Shootout/lists slows down by 10%. But this is
an independent scheduler bug that will be tracked separately.
See rdar://problem/9283108.
Incidentally, pre-RA scheduling is only half the solution. Fixing the
later passes is tracked by:
<rdar://problem/8932804> [pre-RA-sched] on x86, attempt to schedule CMP/TEST adjacent with condition jump
Fixes:
<rdar://problem/9262453> Scheduler unnecessary break of cmp/jump fusion
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129508 91177308-0d34-0410-b5e6-96231b3b80d8
It is common for large live ranges to have few basic blocks with register uses
and many live-through blocks without any uses. This approach grows the Hopfield
network incrementally around the use blocks, completely avoiding checking
interference for some through blocks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129188 91177308-0d34-0410-b5e6-96231b3b80d8
induction variable. The preRA scheduler is unaware of induction vars,
so we look for potential "virtual register cycles" instead.
Fixes <rdar://problem/8946719> Bad scheduling prevents coalescing
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129100 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to always keep the smaller slot for an instruction which is what
we want when a register has early clobber defines.
Drop the UsingInstrs set and the UsingBlocks map. They are no longer needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128886 91177308-0d34-0410-b5e6-96231b3b80d8
inlined path for the common case.
Most basic blocks don't contain a call that may throw, so the last split point
is simply the first terminator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128874 91177308-0d34-0410-b5e6-96231b3b80d8
Correctly terminate the range of register DBG_VALUEs when the register is
clobbered or when the basic block ends.
The code is now ready to deal with variables that are sometimes in a register
and sometimes on the stack. We just need to teach emitDebugLoc to say 'stack
slot'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128327 91177308-0d34-0410-b5e6-96231b3b80d8
I have convinced myself that it can only happen when a phi value dies. When it
happens, allocate new virtual registers for the components.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127827 91177308-0d34-0410-b5e6-96231b3b80d8
Initially, slot indexes are quad-spaced. There is room for inserting up to 3
new instructions between the original instructions.
When we run out of indexes between two instructions, renumber locally using
double-spaced indexes. The original quad-spacing means that we catch up quickly,
and we only have to renumber a handful of instructions to get a monotonic
sequence. This is much faster than renumbering the whole function as we did
before.
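For illustration (made-up index numbers):

  after three insertions between 0 and 4:  0, 1, 2, 3, 4, 8, 12
  a fourth insertion between 2 and 3 finds no free number, so we
  renumber locally with double spacing:    0, 1, 2, 4, 6, 8, 10, 12

Only a few instructions get new numbers before the sequence rejoins the
old quad-spaced indexes.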
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127023 91177308-0d34-0410-b5e6-96231b3b80d8
it. It's been assumed up until now that it would be in its immediate
successor. However, this isn't necessarily the case. It could be in one of its
successor's successors.
Modify the code to more thoroughly check for an 'eh.selector' call in
successors. It only looks at a successor if we get there as a result of an
unconditional branch.
Testcase ObjC/exceptions-4.m in r126968.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126969 91177308-0d34-0410-b5e6-96231b3b80d8
This is much faster than using a pointer to a ManagedStatic object accessed with
a function call. The greedy register allocator is 5% faster overall just from
the SlotIndex default constructor savings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126925 91177308-0d34-0410-b5e6-96231b3b80d8
The SlotIndex created by the default construction does not represent a position
in the function, and it doesn't make sense to compare it to other indexes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126924 91177308-0d34-0410-b5e6-96231b3b80d8
This method could probably be used by LiveIntervalAnalysis::shrinkToUses, and
now it can use extendIntervalEndTo() which coalesces ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126803 91177308-0d34-0410-b5e6-96231b3b80d8
registers at phis. This enables us to eliminate a lot of pointless zexts during
the DAGCombine phase. This fixes <rdar://problem/8760114>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126380 91177308-0d34-0410-b5e6-96231b3b80d8
share entries. Add a DenseSet to MachineConstantPool for the MachineCPVs that
it owns.
This will hopefully fix the MC/ARM/elf-reloc-01.ll failure on the leaks bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126218 91177308-0d34-0410-b5e6-96231b3b80d8
at phis. This enables us to eliminate a lot of pointless zexts during the DAGCombine
phase. This fixes <rdar://problem/8760114>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126170 91177308-0d34-0410-b5e6-96231b3b80d8
In other words, do not keep track of an argument's location. The debugger
(gdb) is not prepared to see line table entries for arguments; for the
debugger, the second line table entry marks the beginning of the function body.
This requires some coordination with the debugger to get working:
- The debugger needs to be aware of the prolog_end attribute attached to line
  table entries.
- The compiler needs to accurately mark prolog_end in line table entries (at
  -O0 and at -O1+).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126155 91177308-0d34-0410-b5e6-96231b3b80d8
Simplify the spill weight calculation a bit by bypassing
getApproximateInstructionCount() and using LiveInterval::getSize() directly.
This changes the computed spill weights, but only by a constant factor in each
function. It should not affect how spill weights compare against each other, and
so it shouldn't affect code generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125530 91177308-0d34-0410-b5e6-96231b3b80d8
generating i8 shift amounts for things like i1024 types. Add
an assert in getNode to prevent this from occurring in the future,
fix the buggy transformation, revert my previous patch, and
document this gotcha in ISDOpcodes.h
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125465 91177308-0d34-0410-b5e6-96231b3b80d8
This is a lot easier than trying to get kill flags right during live range
splitting and rematerialization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125113 91177308-0d34-0410-b5e6-96231b3b80d8
After uses of a live range are removed, recompute the live range to only cover
the remaining uses. This is necessary after rematerializing the value before
some (but not all) uses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125058 91177308-0d34-0410-b5e6-96231b3b80d8
A live range cannot be split everywhere in a basic block. A split must go before
the first terminator, and if the variable is live into a landing pad, the split
must happen before the call that can throw.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124894 91177308-0d34-0410-b5e6-96231b3b80d8
precisely track pressure on a selection DAG, but we can at least keep
it balanced. This design accounts for various interesting aspects of
selection DAGs: register and subregister copies, glued nodes, dead
nodes, unused registers, etc.
Added SUnit::NumRegDefsLeft and ScheduleDAGSDNodes::RegDefIter.
Note: I disabled PrescheduleNodesWithMultipleUses when register
pressure is enabled, based on no evidence other than I don't think it
makes sense to have both enabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124853 91177308-0d34-0410-b5e6-96231b3b80d8
The greedy register allocator revealed some problems with the value mapping in
SplitKit. We would sometimes start mapping values before all defs were known,
and that could change a value from a simple 1-1 mapping to a multi-def mapping
that requires ssa update.
The new approach collects all defs and register assignments first without
filling in any live intervals. Only when finish() is called, do we compute
liveness and mapped values. At this time we know with certainty which values map
to multiple values in a split range.
This also has the advantage that we can compute live ranges based on the
remaining uses after rematerializing at split points.
The current implementation has many opportunities for compile time optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124765 91177308-0d34-0410-b5e6-96231b3b80d8
default implementation for x86, going through the stack in a similar
fashion to how the codegen implements BUILD_VECTOR. Eventually this
will get matched to VINSERTF128 if AVX is available.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124307 91177308-0d34-0410-b5e6-96231b3b80d8
flags. They are still not enabled in this revision.
Added TargetInstrInfo::isZeroCost() to fix a fundamental problem with
the scheduler's model of operand latency in the selection DAG.
Generalized unit tests to work with sched-cycles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123969 91177308-0d34-0410-b5e6-96231b3b80d8
These functions no longer assert when passed 0, but simply return false instead.
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123155 91177308-0d34-0410-b5e6-96231b3b80d8
when no virtual registers have been allocated.
It was only used to resize IndexedMaps, so provide an IndexedMap::resize()
method such that
Map.grow(MRI.getLastVirtReg());
can be replaced with the simpler
Map.resize(MRI.getNumVirtRegs());
This works correctly when no virtuals are allocated, and it bypasses the to/from
index conversions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123130 91177308-0d34-0410-b5e6-96231b3b80d8
physical register numbers.
This makes the hack used in LiveInterval official, and lets LiveInterval be
oblivious of stack slots.
The isPhysicalRegister() and isVirtualRegister() predicates don't know about
this, so when a variable may contain a stack slot, isStackSlot() should always
be tested first.
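Sketch of the required test order (illustrative):

  // isPhysicalRegister()/isVirtualRegister() don't know about stack
  // slots, so isStackSlot() must be checked first.
  if (TargetRegisterInfo::isStackSlot(Reg)) {
    // handle the spill slot
  } else if (TargetRegisterInfo::isVirtualRegister(Reg)) {
    // handle the virtual register
  } else {
    // physical register
  }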
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123128 91177308-0d34-0410-b5e6-96231b3b80d8
of using a Location class with the same information.
When making a copy of a MachineOperand that was already stored in a
MachineInstr, it is necessary to clear the parent pointer on the copy. Otherwise
the register use-def lists become inconsistent.
Add MachineOperand::clearParent() to do that. An alternative would be a custom
MachineOperand copy constructor that cleared ParentMI. I didn't want to do that
because of the performance impact.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123109 91177308-0d34-0410-b5e6-96231b3b80d8
Provide MRI::getNumVirtRegs() and TRI::index2VirtReg() functions to allow
iteration over virtual registers without depending on the representation of
virtual register numbers.
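The intended iteration pattern (a sketch; the filter is hypothetical):

  for (unsigned i = 0, e = MRI.getNumVirtRegs(); i != e; ++i) {
    unsigned Reg = TargetRegisterInfo::index2VirtReg(i);
    if (MRI.reg_empty(Reg))
      continue; // skip unused numbers
    // process Reg without assuming how virtual registers are numbered
  }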
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123098 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, encode the LLVM IR level property "HasSideEffects" in an operand
(shared with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to
check the operand when the instruction is an INLINEASM.
This allows memory instructions to be moved around INLINEASM instructions.
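Sketch of the check that code motion can now make (illustrative):

  // An INLINEASM without the side-effect bit no longer blocks motion
  // of memory instructions across it.
  if (MI->getOpcode() == TargetOpcode::INLINEASM &&
      !MI->hasUnmodeledSideEffects()) {
    // safe to move loads/stores across, subject to the usual checks
  }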
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123044 91177308-0d34-0410-b5e6-96231b3b80d8
We were never generating any of these nodes with variable indices, and there
was one legalizer function asserting on a non-constant index. If we ever have
a need to support variable indices, we can add this back again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122993 91177308-0d34-0410-b5e6-96231b3b80d8
This pass precomputes CFG block frequency information that can be used by the
register allocator to find optimal spill code placement.
Given an interference pattern, placeSpills() will compute which basic blocks
should have the current variable enter or exit in a register, and which blocks
prefer the stack.
The algorithm is ready to consume block frequencies from profiling data, but for
now it gets by with the static estimates used for spill weights.
This is a work in progress and still not hooked up to RegAllocGreedy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122938 91177308-0d34-0410-b5e6-96231b3b80d8