I've been sitting on this long enough trying to find a test case. I
think the fix should go in now, but I'll keep working on the test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132701 91177308-0d34-0410-b5e6-96231b3b80d8
When local live range splitting creates a live range with the same
number of instructions as the old range, mark it as RS_Local. When such
a range is seen again, require that it be split in a way that reduces
the number of instructions. That guarantees we are making progress while
still being able to perform 3 -> 2+3 splits as required by PR10070.
This also means that the PrevSlot map is no longer needed. This was also
used to estimate new spill weights, but that is no longer necessary
after SlotIndexes::insertMachineInstrInMaps() got the extra Late
insertion argument.
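For illustration, a minimal standalone sketch of the progress rule (the stage
name RS_Local and the instruction-count check come from this change; the rest
is simplified and not LLVM's actual code):
  #include <cassert>

  enum LiveRangeStage { RS_New, RS_Local };

  struct LocalRange {
    LiveRangeStage Stage;
    unsigned NumInstrs; // instructions covered by this local live range
  };

  // After a local split, pick the stage for a new range. A range that did not
  // shrink is marked RS_Local, and re-splitting an RS_Local range must reduce
  // its instruction count so the allocator keeps making progress.
  LiveRangeStage stageAfterLocalSplit(const LocalRange &Old,
                                      unsigned NewNumInstrs) {
    if (Old.Stage == RS_Local)
      assert(NewNumInstrs < Old.NumInstrs &&
             "re-splitting an RS_Local range must shrink it");
    return NewNumInstrs >= Old.NumInstrs ? RS_Local : RS_New;
  }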
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132697 91177308-0d34-0410-b5e6-96231b3b80d8
Only target-dependent hints require callbacks. The RCI allocation order
has CSR aliases last according to their order of appearance in the
getCalleeSavedRegs list. This can depend on the calling convention.
This way, AllocationOrder::next doesn't have to check for reserved
registers, and CSRs are always allocated last, even with weird calling
conventions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132690 91177308-0d34-0410-b5e6-96231b3b80d8
The order of registers returned by getCalleeSavedRegs is used to lay out
the fixed stack slots for CSRs. Some targets like their CSRs used from
one end, and some targets want them used from the other end.
When computing an allocation order, simply preserve the relative
ordering of CSRs that the target specifies in its allocation order.
Reordering CSRs would break some targets, ARM in particular.
We still place volatiles before the CSRs, providing slightly better
results with different calling conventions.
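The idea can be sketched outside LLVM as a stable partition (all names below
are illustrative, not the real RegisterClassInfo API):
  #include <algorithm>
  #include <vector>

  // Build an allocation order from the target's class order: drop reserved
  // registers, keep volatiles first, and move CSR aliases to the end while
  // preserving the relative order the target gave them.
  std::vector<unsigned>
  computeAllocationOrder(const std::vector<unsigned> &ClassOrder,
                         const std::vector<bool> &IsReserved,
                         const std::vector<bool> &IsCSRAlias) {
    std::vector<unsigned> Order;
    for (unsigned Reg : ClassOrder)
      if (!IsReserved[Reg])
        Order.push_back(Reg);
    // stable_partition keeps the relative order within both groups.
    std::stable_partition(Order.begin(), Order.end(),
                          [&](unsigned Reg) { return !IsCSRAlias[Reg]; });
    return Order;
  }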
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132680 91177308-0d34-0410-b5e6-96231b3b80d8
(only happens when using the -promote-elements option).
The correct legalization order is to first try to promote elements. Next, we try
to widen vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132648 91177308-0d34-0410-b5e6-96231b3b80d8
of reserved registers.
Use RegisterClassInfo in RABasic as well. This slightly changes some
allocation orders because RegisterClassInfo puts CSR aliases last.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132581 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, these aliases would be ordered alphabetically. (BH, BL)
Print out the computed allocation orders.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132580 91177308-0d34-0410-b5e6-96231b3b80d8
When compiling a program with lots of small functions like
483.xalancbmk, this makes RAFast 11% faster.
Add some comments to clarify the difference between unallocatable and
reserved registers. It's quite subtle.
The fast register allocator depends on EFLAGS' not being allocatable on
x86. That way it can completely avoid tracking liveness, and it won't
mind when there are multiple uses of a single def.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132514 91177308-0d34-0410-b5e6-96231b3b80d8
Some register classes are only used for instruction operand constraints.
They should never be used for virtual registers. Previously, those
register classes were given an empty allocation order, but now you can
say 'let isAllocatable=0' in the register class definition.
TableGen calculates if a register is part of any allocatable register
class, and makes that information available in TargetRegisterDesc::inAllocatableClass.
The goal here is to eliminate use cases for overriding allocation_order_*
methods.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132508 91177308-0d34-0410-b5e6-96231b3b80d8
I was confused about whether new uint8_t[] would zero-initialize the returned
array, and it seems that gcc-4.0 was too.
This should fix the test failures on darwin 9.
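For reference, the C++ rule in question, shown as a standalone example rather
than the code from this change:
  #include <cstdint>
  #include <cstring>

  int main() {
    uint8_t *A = new uint8_t[16];    // default-initialized: bytes indeterminate
    uint8_t *B = new uint8_t[16]();  // value-initialized: bytes are zero
    std::memset(A, 0, 16);           // or zero explicitly, compiler-independent
    delete[] A;
    delete[] B;
  }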
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132500 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, use a simpler approach and let DBG_VALUE follow its predecessor instruction. After the live debug value analysis pass, all DBG_VALUE instructions are placed in the right place. Thanks Jakob for the hint!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132483 91177308-0d34-0410-b5e6-96231b3b80d8
register classes.
It provides information for each register class that cannot be
determined statically, like:
- The number of allocatable registers in a class after filtering out the
reserved and invalid registers.
- The preferred allocation order with registers that overlap callee-saved
registers last.
- The last callee-saved register that overlaps a given physical register.
This information usually doesn't change between functions, so it is
reused for compiling multiple functions when possible. The many
possible combinations of reserved and callee-saved registers make it
infeasible to compute this information statically in TableGen.
Use RegisterClassInfo to count available registers in various heuristics
in SimpleRegisterCoalescing, making the pass run 4% faster.
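A rough standalone model of the caching idea (simplified, with made-up names;
the real RegisterClassInfo is richer than this):
  #include <cstddef>
  #include <vector>

  struct ClassInfoCache {
    std::vector<bool> Reserved;                 // indexed by physical register
    std::vector<std::vector<unsigned>> Classes; // class id -> raw register list
    std::vector<unsigned> NumAllocatable;       // class id -> usable registers

    // Recompute only when the reserved set changes; between functions with the
    // same reserved registers the cached counts are simply reused.
    void recompute() {
      NumAllocatable.assign(Classes.size(), 0);
      for (size_t RC = 0; RC != Classes.size(); ++RC)
        for (unsigned Reg : Classes[RC])
          if (!Reserved[Reg])
            ++NumAllocatable[RC];
    }
  };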
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132450 91177308-0d34-0410-b5e6-96231b3b80d8
patch we add a flag to enable a new type legalization decision - to promote
integer elements in vectors. Currently, the rest of the codegen does not support
this kind of legalization. This flag will be removed when the transition is
complete.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132394 91177308-0d34-0410-b5e6-96231b3b80d8
For targets with no itinerary (x86) it is a nop by default. For
targets with issue width already expressed in the itinerary (ARM) it
bypasses a scoreboard check but otherwise does not affect the
schedule. It does make the code more consistent and complete and
allows new targets to specify their issue width in an arbitrary way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132385 91177308-0d34-0410-b5e6-96231b3b80d8
turns out that it could cause an infinite loop in some situations. If this code
is triggered and it converts a cleanup into a catchall, but that cleanup was
already inside another cleanup, then the _Unwind_SjLj_Resume could loop forever. I.e.,
the code doesn't consume the exception object and passes it on to
_Unwind_SjLj_Resume. But _USjLjR expects it to be consumed (since it's landing
at a catchall instead of a cleanup). So it uses the values that are presently
there, which are the values that tell it to jump to the fake landing pad.
<rdar://problem/9508402>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132381 91177308-0d34-0410-b5e6-96231b3b80d8
When assigned ranges are evicted, they are put in the RS_Evicted stage and are
not allowed to evict anything else. That automatically prevents eviction loops.
When evicting ranges just to get a cheaper register, use only spill weights to
find the possible candidates. Avoid breaking hints for this purpose, it is not
worth it.
Start implementing more complex eviction heuristics, guarded by the temporary
-complex-eviction flag. The initial version permits a heavier range to be
evicted if it doesn't have any uses where the evicting range is live. This makes
it a good candidate for live range splitting.
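A standalone sketch of that check (illustrative types only, not LLVM's
interference queries):
  #include <algorithm>
  #include <vector>

  struct Segment { unsigned Start, End; }; // half-open slot range [Start, End)

  static bool liveAt(const std::vector<Segment> &Live, unsigned Slot) {
    for (const Segment &S : Live)
      if (Slot >= S.Start && Slot < S.End)
        return true;
    return false;
  }

  // A heavier interfering range may still be evicted if none of its uses fall
  // where the evicting range is live; splitting it around that region is then
  // likely to succeed.
  bool canEvictHeavier(const std::vector<unsigned> &InterfererUseSlots,
                       const std::vector<Segment> &EvictorLive) {
    return std::none_of(InterfererUseSlots.begin(), InterfererUseSlots.end(),
                        [&](unsigned Use) { return liveAt(EvictorLive, Use); });
  }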
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132358 91177308-0d34-0410-b5e6-96231b3b80d8
handler's data area starts with a 4-byte reference to the personality
function, followed by the DWARF LSDA.
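Roughly, the layout being described looks like this (the struct and field names
are made up for illustration):
  #include <cstdint>

  struct LanguageSpecificHandlerData {
    uint32_t PersonalityRef; // 4-byte reference to the personality function
    // The DWARF LSDA bytes follow immediately after this field.
  };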
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132302 91177308-0d34-0410-b5e6-96231b3b80d8
This only affects targets like Mips where branch instructions may kill virtual
registers. Most other targets branch on flag values, so virtual registers are
not involved.
The problem is that MachineBasicBlock::updateTerminator deletes branches and
inserts new ones while LiveVariables keeps a list of pointers to instructions
that kill virtual registers. That list wasn't properly updated in
MBB::SplitCriticalEdge.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132298 91177308-0d34-0410-b5e6-96231b3b80d8
handler.
At this moment, only GCC-style exceptions are supported. Other kinds
of exceptions, including "traditional" SEH and Microsoft Visual C++ exceptions,
need more work--and a compiler exception model that isn't specific to
GCC-style exceptions!
In particular, I imagine that it would be possible to mix "traditional" SEH
with GCC-style EH or Microsoft C++ EH. Currently LLVM has no way (beyond some
target-specific defaults and whole-module compiler switches) of knowing which
scheme to use when.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132283 91177308-0d34-0410-b5e6-96231b3b80d8
This patch does not change the behavior of the type legalizer. The codegen
produces the same code.
This infrastructural change is needed in order to enable complex decisions
for vector types (needed by the vector-select patch).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132263 91177308-0d34-0410-b5e6-96231b3b80d8
transformed by the inliner into a branch to the enclosing landing pad
(when inlined through an invoke). If not so optimized, it is lowered by
DWARF EH preparation into a call to _Unwind_Resume (or _Unwind_SjLj_Resume
as appropriate). Its chief advantage is that it takes both the
exception value and the selector value as arguments, meaning that there
is zero effort in recovering these; however, the frontend is required
to pass these down, which is not actually particularly difficult.
Also document the behavior of landing pads a bit better, and make it
clearer that it's okay that personality functions don't always land at
landing pads. This is just a fact of life. Don't write optimizations that
rely on pushing things over an unwind edge.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132253 91177308-0d34-0410-b5e6-96231b3b80d8
Delete the Kill and Def markers in BlockInfo. They are no longer
necessary when BlockInfo describes a continuous live range.
This only affects the relatively rare kind of basic block where a live
range looks like this:
|---x o---|
Now live range splitting can pretend that it is looking at two blocks:
|---x
o---|
This allows the code to be simplified a bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132245 91177308-0d34-0410-b5e6-96231b3b80d8
It is important that this function returns the same number of live blocks as
countLiveBlocks(CurLI) because live range splitting uses the number of live
blocks to ensure it is making progress.
This is in preparation of supporting duplicate UseBlock entries for basic blocks
that have a virtual register live-in and live-out, but not live-through.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132244 91177308-0d34-0410-b5e6-96231b3b80d8
There was no way to check if a given register/mode pair was valid. We now return
an error code (-2) instead of asserting. If anyone thinks that an assert
at this point is really needed, we can autogen a hasValidDwarfRegNum instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132236 91177308-0d34-0410-b5e6-96231b3b80d8
the Win64 EH mechanism to implement GCC-style exceptions. LLVM supports
hardly anything else at this point!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132234 91177308-0d34-0410-b5e6-96231b3b80d8
subregisters:
When a value is in a subregister, at least report the location as being
the superregister. We should extend the .td files to encode the bit
range so that we can produce a DW_OP_bit_piece.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132224 91177308-0d34-0410-b5e6-96231b3b80d8
This doesn't change functionality (much), but it allows for a more fine-grained
eviction policy. The current policy only compares spill weights, and that is not
always the best thing to do. Spill weights are designed to serve linear scan,
and they don't consider live range splitting.
Add a mechanism so canEvict() can request that a live range be evicted and
split/spilled. This is to avoid infinite eviction loops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132101 91177308-0d34-0410-b5e6-96231b3b80d8
The practical effects here are that x86-64 fast-isel can now handle trunc from i8 to i1, and ARM fast-isel can handle many more constructs involving integers narrower than 32 bits (including loads, stores, and many integer casts).
rdar://9437928 .
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132099 91177308-0d34-0410-b5e6-96231b3b80d8
LTO friendly as we can now correctly merge files compiled with or without
-fasynchronous-unwind-tables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132033 91177308-0d34-0410-b5e6-96231b3b80d8
non-zero.
- Teach X86 cmov optimization to eliminate the cmov from ctlz, cttz extension
when the source of X86ISD::BSR / X86ISD::BSF is proven to be non-zero.
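At the source level the transform corresponds to removing the zero guard once
the operand is known non-zero; a small illustrative example (not from the test
suite):
  #include <cstdint>

  // With an arbitrary input, the zero case needs a compare plus a cmov (or a
  // branch) after the bsr.
  unsigned leadingZeros(uint32_t X) {
    return X ? __builtin_clz(X) : 32;
  }

  // When the input is provably non-zero, the compare and the cmov disappear.
  unsigned leadingZerosNonZero(uint32_t X) {
    return __builtin_clz(X | 1u);
  }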
rdar://9490949
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131948 91177308-0d34-0410-b5e6-96231b3b80d8
similarly for stores. Now "make check" passes with the MachineVerifier forced
on with the VerifyCoalescing option!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131705 91177308-0d34-0410-b5e6-96231b3b80d8
on CodeGen/X86/2007-05-07-InvokeSRet.ll. There is probably a bug here that was
fixed by r128961, but since there is no test or reference to a source file I have
to revert it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131618 91177308-0d34-0410-b5e6-96231b3b80d8
Original log entry:
Refactor getActionType and getTypeToTransformTo; place all of the 'decision'
code in one place.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131536 91177308-0d34-0410-b5e6-96231b3b80d8
LiveInterval::shrinkToUses recomputes the live range from scratch instead of
removing snippets. This should avoid the problem with dangling live ranges.
Leave physreg identity copies alone. They can be created when joining a virtreg
with a physreg. They don't affect register allocation, and they will be removed
by the rewriter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131521 91177308-0d34-0410-b5e6-96231b3b80d8
The greedy register allocator has live range splitting and register class
inflation, so it can actually fully undo this join, including restoring the
original register classes.
We still don't want to do this for long live ranges, mostly because of the high
register pressure when there are many constrained live ranges overlapping.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131466 91177308-0d34-0410-b5e6-96231b3b80d8
When instructions are deleted, they leave tombstone SlotIndex entries.
The isZeroLength method should ignore these null indexes.
This causes RABasic to sometimes spill a callee-saved register in the
abi-isel.ll test, so don't run that test with -regalloc=basic. Prioritizing
register allocation according to spill weight can cause more registers to be
used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131436 91177308-0d34-0410-b5e6-96231b3b80d8
by non-CMP expressions. The executable test case (129821) would test
this as well, if we had an "-O0 -disable-arm-fast-isel" LLVM-GCC
tester. Alas, the ARM assembly would be very difficult to check with
FileCheck.
The thumb2-cbnz.ll test is affected; it generates larger code (tst.w
vs. cmp #0), but I believe the new version is correct.
rdar://problem/9298790
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131261 91177308-0d34-0410-b5e6-96231b3b80d8
markers. In some cases a register def is dead on one path, but not on
another.
This is passing Clang self-hosting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131214 91177308-0d34-0410-b5e6-96231b3b80d8
about to be spilled.
This can only happen when two extra snippet registers are included in the spill,
and there is a copy between them. Hoisting the spill creates problems because
the hoist will mark the copy for later dead code elimination, and spilling the
second register will turn the copy into a spill.
<rdar://problem/9420853>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131192 91177308-0d34-0410-b5e6-96231b3b80d8
If there is a store after the load node, then there is a chain, which means
that there is another user. Thus, checking hasOneUse would fail. Instead we
ask hasNUsesOfValue on the 'data' value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131183 91177308-0d34-0410-b5e6-96231b3b80d8
intrinsic call. This prevents it from being reordered so that it appears
*before* the setjmp intrinsic (thus making it completely useless).
<rdar://problem/9409683>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131174 91177308-0d34-0410-b5e6-96231b3b80d8
at the start of basic blocks to their common predecessor. It's actually quite
common (e.g. about 50 times in JM/lencod) and has been shown to be a nice code size
benefit. e.g.
pushq %rax
testl %edi, %edi
jne LBB0_2
## BB#1:
xorb %al, %al
popq %rdx
ret
LBB0_2:
xorb %al, %al
callq _foo
popq %rdx
ret
=>
pushq %rax
xorb %al, %al
testl %edi, %edi
je LBB0_2
## BB#1:
callq _foo
LBB0_2:
popq %rdx
ret
rdar://9145558
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131172 91177308-0d34-0410-b5e6-96231b3b80d8
test case; I've only seen this on a release branch, and I can't get it
to reproduce on trunk. rdar://problem/7662569
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131152 91177308-0d34-0410-b5e6-96231b3b80d8
this clang will use .debug_frame in, for example,
clang -g -c -m32 test.c
This matches gcc's behaviour. It looks like .debug_frame is a bit bigger
than .eh_frame, but has the big advantage of not being allocated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131140 91177308-0d34-0410-b5e6-96231b3b80d8
The previous invalidation missed the alias interference caches.
Also add a stats counter for the number of repaired ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131133 91177308-0d34-0410-b5e6-96231b3b80d8