alignment of globals to the preferred alignment, but only when
there is no section specified on the global (by far the common
case).
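For illustration only (a hypothetical pair of C++ globals, not from this commit):
the first has no section, so the asm printer may raise its alignment to the
target's preferred alignment; the second names an explicit section, so its
alignment is left exactly as specified.

// Hypothetical example: no section, so alignment may be bumped to the
// preferred alignment for a double on this target.
double plain_global = 1.0;

// An explicit (Mach-O style) section is specified: don't touch the alignment.
__attribute__((section("__DATA,__mydata")))
double sectioned_global = 2.0;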
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102515 91177308-0d34-0410-b5e6-96231b3b80d8
otherwise labels get incorrectly merged. We handled this by emitting a
".byte 0", but this isn't correct on thumb/arm targets where the text segment
needs to be a multiple of 2/4 bytes. Handle this by emitting a noop. This
is more gross than it should be because arm/ppc are not fully mc'ized yet.
This fixes rdar://7908505
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102400 91177308-0d34-0410-b5e6-96231b3b80d8
doesn't dominate the header is needed, don't check whether the increment
expression has computable loop evolution. While the operands of an
addrec are required to be loop-invariant, they're not required to
dominate any part of the loop. This fixes PR6914.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102389 91177308-0d34-0410-b5e6-96231b3b80d8
alignment of globals with a specified alignment, we fix
common variables to obey their alignment. Add a comment
explaining why this behavior is important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102365 91177308-0d34-0410-b5e6-96231b3b80d8
Also, generalize ScalarEvolution's min and max recognition to handle
some new forms of min and max that this change makes more common.
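As a sketch of the classic pattern involved (a made-up example, not one of the
new forms this change adds): ScalarEvolution recognizes a select of a compare
as an smin/smax/umin/umax expression, e.g. when one feeds a loop trip count.

int clamped_sum(int n, int limit) {
  int trip = (n < limit) ? n : limit;  // select of icmp: recognized as smin(n, limit)
  int sum = 0;
  for (int i = 0; i < trip; ++i)       // the trip count becomes an smin expression
    sum += i;
  return sum;
}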
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102234 91177308-0d34-0410-b5e6-96231b3b80d8
alignment to match what's used in clang and GCC for __alignof, rather
than trying to guess what Legalize is going to be doing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102206 91177308-0d34-0410-b5e6-96231b3b80d8
misses an opportunity to fold add operands, but folds them
after LSR has separated them out. This fixes rdar://7886751.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102157 91177308-0d34-0410-b5e6-96231b3b80d8
that appear in the SCC as a result of inlining as candidates
for inlining. Change this so that it *does* consider call
sites that change from being indirect to being direct as a
result of inlining. This allows it to completely
"devirtualize" the testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102146 91177308-0d34-0410-b5e6-96231b3b80d8
Fix RefreshCallGraph to use CGN->replaceCallEdge instead of hand
rolling its own loop. replaceCallEdge properly maintains the
reference counts of the nodes, fixing a crash exposed by the
iterative callgraph stuff.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102120 91177308-0d34-0410-b5e6-96231b3b80d8
optimization for non-leaf functions. This will be hooked up to gcc's
-momit-leaf-frame-pointer option. rdar://7886181
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101984 91177308-0d34-0410-b5e6-96231b3b80d8
before reglist were not properly handled with respect to IT Block. Fix that by
creating a new method ARMBasicMCBuilder::DoPredicateOperands() used by those
instructions for disassembly. Add a test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101974 91177308-0d34-0410-b5e6-96231b3b80d8
we have RefreshCallGraph detect when a function pass devirtualizes
a call, and have CGSCCPassMgr iterate (up to a count) when this
happens. This allows (in the example) GVN to devirtualize the
call in foo, then the inliner to inline it away.
This is not currently enabled because I haven't done any analysis
on the (potentially substantial) code size or performance impact of
doing this, and guess what, it exposes callgraph updating bugs in
various passes. This is progress though, and you can play with it
by passing -max-cg-scc-iterations=5 to opt.
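As a rough sketch of the interplay described above (the names and shape here
are made up, not the actual test): GVN can forward a stored function pointer
to the load feeding an indirect call, turning it into a direct call, and the
next CGSCC iteration lets the inliner finish the job.

static int bar() { return 42; }

struct Handler { int (*fn)(); };

int foo() {
  Handler h;
  h.fn = &bar;    // store of the function pointer
  return h.fn();  // GVN forwards the store to this load, so the call becomes
                  // a direct call to bar, which the inliner can then remove
}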
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101973 91177308-0d34-0410-b5e6-96231b3b80d8
recursive callsites, inlining can reduce the number of calls by
exponential factors, as it does in
MultiSource/Benchmarks/Olden/treeadd. More involved heuristics
will be needed.
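Roughly what the treeadd recursion looks like (a sketch, not the benchmark
source): each invocation makes two recursive calls, so inlining the recursive
call sites to depth d cuts the dynamic call count by about a factor of 2^d,
which is why the payoff is exponential here.

struct Tree { Tree *left, *right; int val; };

int TreeAdd(Tree *t) {
  if (!t)
    return 0;
  // Two recursive call sites: inlining each level of them roughly halves
  // the number of dynamic calls made over the whole tree.
  return TreeAdd(t->left) + TreeAdd(t->right) + t->val;
}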
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101969 91177308-0d34-0410-b5e6-96231b3b80d8
in other types. fix this by only bumping zero-byte globals
up to a single byte if the *entire global* is zero size,
fixing PR6340.
This also fixes empty arrays etc. to be handled correctly,
and only does this on subsections-via-symbols targets (aka
darwin), which is the only place where this matters.
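For illustration (hypothetical globals, using the GNU zero-length-array
extension): only globals that are zero sized in their entirety get padded to
one byte; a global that merely contains a zero-sized member keeps its real
size.

int empty_a[0];  // entire global is zero size: pad it to one byte so that
int empty_b[0];  // empty_a and empty_b don't end up sharing a label/address

struct HasEmptyTail { int value; int tail[0]; };
HasEmptyTail w = { 42 };  // not zero sized overall: leave its size alone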
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101879 91177308-0d34-0410-b5e6-96231b3b80d8
condition we're unswitching on. In this case, don't try to
simplify the second copy of the loop; it may or may not be dead,
but the condition in it is probably a constant now. This fixes PR6879.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101870 91177308-0d34-0410-b5e6-96231b3b80d8
shifts and null vectors. Autoupgrade these to what we'd lower them to.
Add a testcase to exercise this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101851 91177308-0d34-0410-b5e6-96231b3b80d8
Arg promotion was deleting call graph nodes that still had references
from the 'indirect' CGN. Like the inliner, it should only delete the
function if all references are gone.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101845 91177308-0d34-0410-b5e6-96231b3b80d8
the intrinsics. The reason for those i8* types is that the intrinsics are
overloaded on the vector type and we don't have a way to declare an intrinsic
where one argument is an overloaded vector type and another argument is a
pointer to the vector element type. The bitcasts added here will match what
the frontend will typically generate when these intrinsics are used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101840 91177308-0d34-0410-b5e6-96231b3b80d8
just ask ScalarEvolution for it on demand. This helps IVUsers be more robust
in the case of expressions changing underneath it. This fixes PR6862.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101819 91177308-0d34-0410-b5e6-96231b3b80d8
Pseudocode details of conditional, Condition bits '111x' indicate the
instruction is always executed. That is, '1111' is a legal condition field
value, which is now mapped to ARMCC::AL.
Also add a test case for condition field '1111'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101817 91177308-0d34-0410-b5e6-96231b3b80d8
to determine where to place PHIs by iteratively comparing reaching definitions
at each block. That was just plain wrong. This version now computes the
dominator tree within the subset of the CFG where PHIs may need to be placed,
and then places the PHIs in the iterated dominance frontier of each definition.
The rest of the patch is mostly the same, with a few more performance
improvements added in.
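A minimal sketch of the iterated-dominance-frontier idea (not the SSAUpdater
code itself, and assuming a dominance frontier has already been computed for
the relevant subset of the CFG): start from the blocks that define the value,
keep adding the dominance frontier of everything reached, and the fixed point
is exactly the set of blocks that need a PHI.

#include <map>
#include <set>
#include <vector>

struct Block;  // stand-in for a CFG basic block

std::set<Block *> phiPlacement(
    const std::map<Block *, std::set<Block *>> &DomFrontier,
    const std::set<Block *> &DefBlocks) {
  std::set<Block *> NeedsPHI;
  std::vector<Block *> Worklist(DefBlocks.begin(), DefBlocks.end());
  while (!Worklist.empty()) {
    Block *B = Worklist.back();
    Worklist.pop_back();
    auto It = DomFrontier.find(B);
    if (It == DomFrontier.end())
      continue;
    for (Block *F : It->second)
      if (NeedsPHI.insert(F).second)  // newly discovered PHI block:
        Worklist.push_back(F);        // its frontier may need PHIs too
  }
  return NeedsPHI;
}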
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101612 91177308-0d34-0410-b5e6-96231b3b80d8
case. Also, the 0xFF hex literal involved in the shift for ESize64 should be
suffixed "ul" to preserve the shift result.
Implemented printHex*ImmOperand() by copying from ARMAsmPrinter.cpp and added a
test case for DisassembleN1RegModImmFrm()/printHex64ImmOperand().
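A small standalone example of why the suffix matters (not the disassembler
code itself): without it the literal is a 32-bit int, so a shift big enough to
build a 64-bit element mask is undefined; with "ul" the arithmetic happens in
unsigned long, which is 64 bits on typical LP64 hosts.

#include <cstdint>
#include <cstdio>

int main() {
  // uint64_t bad = 0xFF << 56;    // undefined: shifting a 32-bit int by 56

  uint64_t good = 0xFFul << 56;    // 0xff00000000000000: the byte survives
  std::printf("%#llx\n", static_cast<unsigned long long>(good));
  return 0;
}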
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101557 91177308-0d34-0410-b5e6-96231b3b80d8
to the UAL syntax of LDCL<c>, instead.
Add a test case for this change which also tests the removal of assert() from
printAddrMode2OffsetOperand().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101527 91177308-0d34-0410-b5e6-96231b3b80d8
dependent analyses, and increase code size, so doing it profitably would
require more complex heuristics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101471 91177308-0d34-0410-b5e6-96231b3b80d8
callee is expected to be expanded to something else by codegen, so that
normal infinitely recursive calls are still transformed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101468 91177308-0d34-0410-b5e6-96231b3b80d8
This doesn't occur much at all; it only seems to be formed when
the trunc optimization kicks in due to phase ordering. In that
case it saves a few bytes on x86-32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101350 91177308-0d34-0410-b5e6-96231b3b80d8
a load/or/and/store sequence into a narrower store when it is
safe. Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.
This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll
into:
movl %eax, 36(%rdi)
instead of:
movl $4294967295, %eax ## imm = 0xFFFFFFFF
andq 32(%rdi), %rax
shlq $32, %rcx
addq %rax, %rcx
movq %rcx, 32(%rdi)
and each of the testcases into a single store. Each of them used
to compile into craziness like this:
_test4:
movl $65535, %eax ## imm = 0xFFFF
andl (%rdi), %eax
shll $16, %esi
addl %eax, %esi
movl %esi, (%rdi)
ret
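A hypothetical source-level example of the pattern (the struct below is made
up, not the testcase): the front end expands a bitfield assignment into a load
of the containing word, an AND to clear the field, an OR to insert the new
bits, and a store; when the field covers whole, naturally aligned bytes, the
combine collapses all of that into a single narrow store like the movl above.

struct Packed {
  unsigned long long low32  : 32;
  unsigned long long high32 : 32;
};

void setHigh(Packed *p, unsigned v) {
  p->high32 = v;  // load/and/or/store of the 64-bit word becomes a
                  // single 32-bit store to the upper half
}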
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101343 91177308-0d34-0410-b5e6-96231b3b80d8