give it a bit more responsibility. Also implement it for MachO.
If hacked to use cfi, 32-bit MachO will produce
.cfi_personality 155, L___gxx_personality_v0$non_lazy_ptr
and 64-bit will produce
.cfi_personality ___gxx_personality_v0
The general idea is that .cfi_personality gets passed the final symbol. It is
up to codegen to produce it if using indirect representation (like 32-bit
MachO), but it is up to MC to decide which relocations to create.
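A minimal standalone C++ sketch of that division of labor (personalitySymbol
and UseIndirect are illustrative names, not the actual AsmPrinter/MC code):

  #include <iostream>
  #include <string>

  // Illustrative model: codegen picks the final personality symbol, going
  // through an indirect $non_lazy_ptr stub when the representation requires
  // it; MC later decides which relocations to create for .cfi_personality.
  std::string personalitySymbol(const std::string &Personality,
                                bool UseIndirect) {
    if (UseIndirect)
      return "L" + Personality + "$non_lazy_ptr"; // 32-bit MachO style
    return Personality;                           // 64-bit MachO: direct
  }

  int main() {
    std::cout << personalitySymbol("___gxx_personality_v0", true) << "\n";
    std::cout << personalitySymbol("___gxx_personality_v0", false) << "\n";
  }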
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130341 91177308-0d34-0410-b5e6-96231b3b80d8
Modified LinearFunctionTestReplace to push the condition on the dead
list instead of eagerly deleting it. This can cause unnecessary
IV rewrites, which should have no effect on codegen and will not be an
issue once we stop generating canonical IVs.
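A minimal standalone C++ sketch of the deferred-deletion idea (the helper
names and the Instruction stand-in are illustrative, not the actual
LinearFunctionTestReplace code):

  #include <vector>

  struct Instruction { /* stand-in for an IR instruction */ };

  // Illustrative sketch: the old exit condition goes onto a dead list instead
  // of being erased on the spot; the list is flushed once rewriting is done.
  void retireOldCondition(Instruction *OldCond,
                          std::vector<Instruction *> &DeadList) {
    DeadList.push_back(OldCond); // deferred deletion instead of eager erase
  }

  void flushDeadList(std::vector<Instruction *> &DeadList) {
    for (Instruction *I : DeadList)
      delete I; // safe now: nothing refers to these instructions anymore
    DeadList.clear();
  }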
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130340 91177308-0d34-0410-b5e6-96231b3b80d8
successors) and use inverse depth-first search to traverse the BBs. However,
that doesn't work when the CFG has infinite loops. Simply doing a linear
traversal of all BBs works just fine.
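A standalone C++ sketch of the simpler approach, with toy Block/Function types
standing in for the LLVM classes:

  #include <iostream>
  #include <string>
  #include <vector>

  struct Block { std::string Name; };
  struct Function { std::vector<Block> Blocks; };

  // Illustrative sketch: a plain linear walk visits every block, including
  // blocks inside infinite loops that an inverse DFS from the exits never
  // reaches.
  void visitAllBlocks(Function &F) {
    for (Block &BB : F.Blocks)
      std::cout << "visiting " << BB.Name << "\n";
  }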
rdar://9344645
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130324 91177308-0d34-0410-b5e6-96231b3b80d8
only check arguments with pointer types. Update the documentation
of IntrReadArgMem to reflect this.
While here, add support for TBAA tags on intrinsic calls.
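A simplified standalone C++ sketch of the argument filter (toy types, not the
real AliasAnalysis query, which would also alias-check each pointer argument
against the queried location):

  #include <vector>

  enum class TypeKind { Pointer, Integer, FloatingPoint };
  struct Argument { TypeKind Ty; };

  // Illustrative sketch: an IntrReadArgMem intrinsic only reads memory
  // through its pointer arguments, so non-pointer arguments can be ignored
  // when asking whether the call might read a given location.
  bool mightReadLocationThroughArgs(const std::vector<Argument> &Args) {
    for (const Argument &A : Args)
      if (A.Ty == TypeKind::Pointer)
        return true; // a pointer argument could refer to the location
    return false;    // no pointer arguments: the location cannot be read
  }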
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130317 91177308-0d34-0410-b5e6-96231b3b80d8
We cannot rely on the <imp-def> operands added by LiveIntervals in all cases as
demonstrated by the test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130313 91177308-0d34-0410-b5e6-96231b3b80d8
non-private symbol. This will be used for handling
foo:
.cfi_startproc
...
on OS X, where we have to create a foo.eh symbol.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130305 91177308-0d34-0410-b5e6-96231b3b80d8
effective in avoiding recomputation of LCSSA form; the widespread
use of instsimplify (which looks through phi nodes) means it was
not preserving LCSSA form anyway; and instcombine is no longer
scheduled in the middle of the loop passes so this doesn't matter
anymore.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130301 91177308-0d34-0410-b5e6-96231b3b80d8
Added a type check in ScalarEvolution::computeSCEVAtScope to handle the case in which operands of an
AddRecExpr in the current scope are folded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130271 91177308-0d34-0410-b5e6-96231b3b80d8
an earlier load could be widened to encompass a later load. For example,
if we see:
X = load i8* P, align 4
Y = load i8* (P+3), align 1
and we have a 32-bit native integer type, we can widen the former load
to i32 which then makes the second load redundant. GVN can't actually
do anything with this load/load relation yet, so this isn't testable, but
it is the next step to resolving PR6627, and a fairly general class of
"merge neighboring loads" missed optimizations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130250 91177308-0d34-0410-b5e6-96231b3b80d8
The number of blocks covered by a live range must be strictly decreasing when
splitting, otherwise we can't allow repeated splitting.
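A minimal sketch of that termination argument (canSplitAgain is a hypothetical
helper, not the actual splitter code): demanding a strictly smaller block
count on every split bounds the recursion, since the count can never drop
below one.

  #include <cassert>
  #include <cstddef>

  // Illustrative sketch: only allow a repeated split when it strictly reduces
  // the number of blocks the live range covers, guaranteeing progress.
  bool canSplitAgain(std::size_t BlocksBefore, std::size_t BlocksAfter) {
    assert(BlocksAfter >= 1 && "a live range covers at least one block");
    return BlocksAfter < BlocksBefore;
  }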
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130249 91177308-0d34-0410-b5e6-96231b3b80d8
more callee-saved registers and introduce copies. Only allow it if scheduling
a node above calls would end up lessening register pressure.
Call operands also have added ABI restrictions for register allocation, so be
extra careful with hoisting them above calls.
rdar://9329627
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130245 91177308-0d34-0410-b5e6-96231b3b80d8
when X has multiple uses. This is useful for exposing secondary optimizations,
but the X86 backend isn't ready for this when X has a single use. For example,
this can disable load folding.
This is inching towards resolving PR6627.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130238 91177308-0d34-0410-b5e6-96231b3b80d8
This has two effects:
1. We never inflate to a larger register class than what the sub-target can
   handle.
2. Completely unconstrained virtual registers get the largest possible
   register class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130229 91177308-0d34-0410-b5e6-96231b3b80d8
The hook will be used by the register allocator when recomputing register
classes after removing constraints.
Thumb1 code doesn't allow anything larger than tGPR, and x86 needs to ensure
that the spill size doesn't change.
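A standalone C++ sketch of the kind of constraint the hook enforces (toy
register-class descriptions, not the real TargetRegisterInfo API):

  #include <string>

  struct RegClassInfo {
    std::string Name;
    unsigned SpillSize; // in bytes
  };

  // Illustrative sketch: Thumb1 refuses anything larger than tGPR, and x86
  // rejects any inflation that would change the spill size.
  bool canInflateTo(const RegClassInfo &Cur, const RegClassInfo &Larger,
                    bool IsThumb1) {
    if (IsThumb1 && Larger.Name != "tGPR")
      return false;                           // nothing larger than tGPR
    return Cur.SpillSize == Larger.SpillSize; // spill size must not change
  }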
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130228 91177308-0d34-0410-b5e6-96231b3b80d8
This worked until now because the stars are aligned (i.e. the number of
complex address elements is always 0 or 2+, and when it is 2+ at least two
elements are accessed together).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130225 91177308-0d34-0410-b5e6-96231b3b80d8
translation fails. We were bailing out in some cases that would
cause us to miss GVN'ing some non-local cases away.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130206 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for switch and indirectbr edges. This works by densely numbering
all blocks which have such terminators, and then separately numbering the
possible successors. The predecessors write down a number; the successor knows
its own number (as a ConstantInt) and sends that, along with a pointer to the
number the predecessor wrote down, to the runtime, which looks up the counter
in a per-function table.
Coverage data should now be functional, but I haven't tested it on anything
other than my 2-file synthetic test program for coverage.
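A standalone C++ model of that hand-off (the names are invented, not the
actual GCOV profiling runtime):

  #include <cstdint>
  #include <vector>

  // Illustrative model: predecessors of switch/indirectbr terminators and
  // their possible successors are numbered separately, and the counters live
  // in a per-function table indexed by (predecessor number, successor number).
  struct FunctionEdgeCounters {
    unsigned NumSuccessors = 0;
    std::vector<uint64_t> Counters; // NumPredecessors * NumSuccessors entries
  };

  // The predecessor "writes down" its number before transferring control.
  void noteTakenEdge(uint32_t &PredSlot, uint32_t PredNumber) {
    PredSlot = PredNumber;
  }

  // The successor knows its own number and hands it, plus a pointer to the
  // slot the predecessor wrote, to the runtime, which bumps the counter.
  void countEdge(FunctionEdgeCounters &FC, const uint32_t *PredSlot,
                 uint32_t SuccNumber) {
    FC.Counters[*PredSlot * FC.NumSuccessors + SuccNumber] += 1;
  }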
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130186 91177308-0d34-0410-b5e6-96231b3b80d8
return it as a clobber. This allows GVN to do smart things.
Enhance GVN to be smart about the case when a small load is clobbered
by a larger overlapping load. In this case, forward the value. This
allows us to compile stuff like this:
  int test(void *P) {
    int tmp = *(unsigned int*)P;
    return tmp+*((unsigned char*)P+1);
  }
into:
  _test: ## @test
    movl (%rdi), %ecx
    movzbl %ch, %eax
    addl %ecx, %eax
    ret
which has one load. We already handled the case where the smaller
load was from a must-aliased base pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130180 91177308-0d34-0410-b5e6-96231b3b80d8