the optimizer doesn't eliminate objc_retainBlock calls which are needed
for their side effect of copying blocks onto the heap.
This implements rdar://10361249.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148076 91177308-0d34-0410-b5e6-96231b3b80d8
1. Size heuristics changed. We now calculate the number of unswitching
branches only once per loop.
2. Some checks were moved from UnswitchIfProfitable to
processCurrentLoop, since they do not change during the processCurrentLoop
iteration. This allows us to decide to skip some loops at an early stage.
Extended statistics:
- Added total number of instructions analyzed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147935 91177308-0d34-0410-b5e6-96231b3b80d8
with other symbols.
An object in the __cfstring section is supposed to be filled with CFString
objects, which have a pointer to ___CFConstantStringClassReference followed by a
pointer to a __cstring. If we allow the object in the __cstring section to be
merged with another global, then it could end up in any section. Because the
linker is going to remove these symbols in the final executable, we shouldn't
bother to merge them.
<rdar://problem/10564621>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147899 91177308-0d34-0410-b5e6-96231b3b80d8
These heuristics are sufficient for enabling IV chains by
default. Performance analysis has been done for i386, x86_64, and
thumbv7. The optimization is rarely important, but can significantly
speed up certain cases by eliminating spill code within the
loop. Unrolled loops are prime candidates for IV chains. In many
cases, the final code could still be improved with more target
specific optimization following LSR. The goal of this feature is for
LSR to make the best choice of induction variables.
Instruction selection may not completely take advantage of this
feature yet. As a result, there could be cases of slight code size
increase.
Code size can be worse on x86 because it doesn't support postincrement
addressing. In fact, when chains are formed, you may see redundant
address plus stride addition in the addressing mode. GenerateIVChains
tries to compensate for the common cases.
On ARM, code size increase can be mitigated by using postincrement
addressing, but downstream codegen currently misses some opportunities.
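As a rough illustration (hypothetical names, not taken from the analysis above), unrolled loops are where chains help most: the loads below differ only by a constant stride, so the last three can be computed incrementally from the first rather than each being an independent induction-variable expression.
  // sketch of an unrolled-by-4 loop; p, n and sum are made-up names
  for (int i = 0; i < n; i += 4) {
    sum += p[i];      // chain head
    sum += p[i + 1];  // addressable relative to &p[i]
    sum += p[i + 2];
    sum += p[i + 3];  // cheap on targets with postincrement addressing
  }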
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147826 91177308-0d34-0410-b5e6-96231b3b80d8
After collecting chains, check if any should be materialized. If so,
hide the chained IV users from the LSR solver. LSR will only solve for
the head of the chain. GenerateIVChains will then materialize the
chained IV users by computing the IV relative to its previous value in
the chain.
In theory, chained IV users could be exposed to LSR's solver. That
would considerably complicate the implementation and I'm not aware of a
case where we need it. In practice it's more important to
intelligently prune the search space of nontrivial loops before
running the solver; otherwise the solver is often forced to prune the
best solutions. Hiding the chained users does this well, so
that LSR is more likely to find the best IV for the chain as a whole.
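For illustration only (names are made up), materializing a chain has roughly this shape:
  // before: a = base[i]; b = base[i + 1]; c = base[i + 2];
  // after: LSR solves only for the chain head; each chained user is
  // computed relative to the previous value in the chain
  int *p = &base[i];   // chain head
  int a = *p;
  p += 1;              // previous value plus the chain increment
  int b = *p;
  p += 1;
  int c = *p;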
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147801 91177308-0d34-0410-b5e6-96231b3b80d8
This collects a set of IV uses within the loop whose values can be
computed relative to each other in a sequence. Following checkins will
make use of this information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147797 91177308-0d34-0410-b5e6-96231b3b80d8
We still save an instruction when just the "and" part is replaced.
Also change the code to match comments more closely.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147753 91177308-0d34-0410-b5e6-96231b3b80d8
This will be more important as we extend the LSR pass in ways that don't rely on the formula solver. In particular, we need it for constructing IV chains.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147724 91177308-0d34-0410-b5e6-96231b3b80d8
LoopSimplify may not run on some outer loops, e.g. because of indirect
branches. SCEVExpander simply cannot handle outer loops with no preheaders.
Fixes rdar://10655343 SCEVExpander segfault.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147718 91177308-0d34-0410-b5e6-96231b3b80d8
present in the bottom of the CFG triangle, as the transformation isn't
ever valuable if the branch can't be eliminated.
Also, unify some heuristics between SimplifyCFG's multiple
if-converters, for consistency.
This fixes rdar://10627242.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147630 91177308-0d34-0410-b5e6-96231b3b80d8
code can incorrectly move the load across a store. This never
happens in practice today, but only because the current
heuristics accidentally preclude it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147623 91177308-0d34-0410-b5e6-96231b3b80d8
captured. This allows the tracker to look at the specific use, which may be
especially interesting for function calls.
Use this to fix 'nocapture' deduction in FunctionAttrs. The existing one does
not iterate until a fixpoint and does not guarantee that it produces the same
result regardless of iteration order. The new implementation builds up a graph
of how arguments are passed from function to function, and uses a bottom-up walk
on the argument SCCs to assign nocapture. This gets us nocapture more often, and
does so rather efficiently, independently of iteration order.
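A minimal sketch of the kind of case this catches (hypothetical functions, not from the patch):
  static void g(int *q) { *q = 0; }  // q is clearly not captured
  void f(int *p) { g(p); }           // the bottom-up walk over the argument
                                     // SCCs marks p nocapture as well, no
                                     // matter which function is visited first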
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147327 91177308-0d34-0410-b5e6-96231b3b80d8
This was intended to undo the sub canonicalization in cases where it's not profitable, but it also
finds some cases on its own.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147256 91177308-0d34-0410-b5e6-96231b3b80d8
This has the obvious advantage of being commutable and is always a win on x86 because
const - x wastes a register there. On less weird architectures this may lead to
a regression because other arithmetic doesn't fuse with it anymore. I'll address that
problem in a followup.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147254 91177308-0d34-0410-b5e6-96231b3b80d8
performance regressions (both execution-time and compile-time) on our
nightly testers.
Original commit message:
Fix for bug #11429: Wrong behaviour for switches. Small improvement for code
size heuristics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147131 91177308-0d34-0410-b5e6-96231b3b80d8
For example,
  if (a == b) {
    if (a > b) // this is false
Fixes some of the issues on <rdar://problem/10554090>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146822 91177308-0d34-0410-b5e6-96231b3b80d8
"half precision" floating-point with a first-class type.
This patch adds basic IR support (but not codegen support).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146786 91177308-0d34-0410-b5e6-96231b3b80d8
into Analysis as a standalone function, since there's no need for
it to be in VMCore. Also, update it to use isKnownNonZero and
other goodies available in Analysis, making it more precise,
enabling more aggressive optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146610 91177308-0d34-0410-b5e6-96231b3b80d8
This should always be done as a matter of principle. I don't have a
case that exposes the problem. I just noticed this recently while
scanning the code and realized I meant to fix it long ago.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146438 91177308-0d34-0410-b5e6-96231b3b80d8
subdirectories to traverse into.
- Originally I wanted to avoid this and just autoscan, but that has one key
flaw: new subdirectories cannot automatically trigger a rerun of the
llvm-build tool. This is particularly a pain when switching back and forth
between trees where one has added a subdirectory, as the dependencies will
tend to be wrong. This also implicitly eliminates a FIXME.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146436 91177308-0d34-0410-b5e6-96231b3b80d8
detected in the forward-CFG DFS. This prevents the reverse-CFG from
visiting blocks inside loops after blocks that dominate them in the
case where loops have multiple exits.
No testcase, because this fixes a bug which in practice only shows
up in a full optimizer run, due to the use-list order.
This fixes rdar://10422791 and others.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146408 91177308-0d34-0410-b5e6-96231b3b80d8
indicates whether the intrinsic has a defined result for a first
argument equal to zero. This will eventually allow these intrinsics to
accurately model the semantics of GCC's __builtin_ctz and __builtin_clz
and the X86 instructions (prior to AVX) which implement them.
This patch merely sets the stage by extending the signature of these
intrinsics and establishing auto-upgrade logic so that the old spelling
still works both in IR and in bitcode. The upgrade logic preserves the
existing (inefficient) semantics. This patch should not change any
behavior. CodeGen isn't updated because it can use the existing
semantics regardless of the flag's value.
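For reference, the extended form takes the flag as a second i1 argument, e.g. something like "%r = call i32 @llvm.cttz.i32(i32 %x, i1 true)" when a zero input may produce an undefined result; the auto-upgrade logic rewrites old one-argument calls into this form with the flag chosen so that a zero input still has a defined result.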
Note that this will be followed by API updates to Clang and DragonEgg.
Reviewed by Nick Lewycky!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146357 91177308-0d34-0410-b5e6-96231b3b80d8
Since we're not rewriting IVs in other loops, there's not much reason
to consider their stride when generating formulae.
This should reduce the number of useless formulas considered by LSR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146302 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Brendon Cahoon!
This extends the existing LoopUnroll and LoopUnrollPass. Brendon
measured no regressions in the llvm test suite with -unroll-runtime
enabled. This implementation works by using the existing loop
unrolling code to unroll the loop by a power-of-two (default 8). It
generates an if-then-else sequence of code prior to the loop to
execute the extra iterations before entering the unrolled loop.
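Conceptually (a sketch, not the exact code the pass emits), the transformation for an unroll factor of 8 looks like:
  // the real pass emits an if-then-else chain for the remainder; a small
  // loop is used here only to keep the sketch short
  int rem = n % 8;
  int i = 0;
  for (; i < rem; ++i)
    body(i);
  // main loop, unrolled by 8
  for (; i < n; i += 8) {
    body(i);     body(i + 1); body(i + 2); body(i + 3);
    body(i + 4); body(i + 5); body(i + 6); body(i + 7);
  }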
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146245 91177308-0d34-0410-b5e6-96231b3b80d8
- Walking over pred_begin/pred_end is an expensive operation.
- PHINodes contain a value for each predecessor anyway.
- While it may look like we used to save a few iterations with the set,
be aware that getIncomingValueForBlock does a linear search on
the values of the phi node (see the sketch below).
- Another -5% on ARMDisassembler.cpp (Release build). This was the last
entry in the profile that was obviously wasting time.
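A sketch of the replacement pattern (illustrative, not the literal diff): walk the PHI's own incoming list instead of iterating predecessors and calling getIncomingValueForBlock for each one.
  // PN is a llvm::PHINode*; value and block come straight out of the
  // node's operand list, with no linear search per predecessor
  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
    Value *V = PN->getIncomingValue(i);
    BasicBlock *Pred = PN->getIncomingBlock(i);
    // ... use V and Pred ...
  }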
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145937 91177308-0d34-0410-b5e6-96231b3b80d8
It's always good to prune early, but formulae that are unsatisfactory
in their own right need to be removed before running any other pruning
heuristics. We easily avoid generating such formulae, but we need them
as an intermediate basis for forming other good formulae.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145906 91177308-0d34-0410-b5e6-96231b3b80d8
where this would be bad as the backend shouldn't have a problem inlining small
memcpys.
rdar://10510150
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145865 91177308-0d34-0410-b5e6-96231b3b80d8
- Calling getUser in a loop is much more expensive than iterating over a few instructions.
- Use it instead of the open-coded loop in AddrModeMatcher.
- 5% speedup on ARMDisassembler.cpp Release builds.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145810 91177308-0d34-0410-b5e6-96231b3b80d8
The callee is usually smaller than the caller, too. This reduces the compile
time of ARMDisassembler.cpp by 32% (Release build). It still takes ages to
compile though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145690 91177308-0d34-0410-b5e6-96231b3b80d8
weak variable are compiled by different compilers, such as GCC and LLVM. While
LLVM may increase the alignment to the preferred alignment, there is no reason to
think that GCC will use anything more than the ABI alignment. Since it is the
GCC version that might end up in the final program (as the linkage is weak), it
is wrong to increase the alignment of loads from the global up to the preferred
alignment as the alignment might only be the ABI alignment.
Increasing alignment up to the ABI alignment might be OK, but I'm not totally
convinced that it is. It seems better to just leave the alignment of weak
globals alone.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145413 91177308-0d34-0410-b5e6-96231b3b80d8
gcc, though I thought it was older (my gcc 4.4 has it as a local patch. Whoops!)
This fixes PR10589.
Also add some debugging statements.
Remove GcnoFiles, the mapping from CompilationUnit to raw_ostream. Now that we
start by iterating over each CU and descending into them, there's no need to
maintain a mapping.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145208 91177308-0d34-0410-b5e6-96231b3b80d8
The right way to check for a binary operation is
cast<BinaryOperator>. The original check: cast<Instruction> &&
numOperands() == 2 would match phi "instructions", leading to an
infinite loop in an extreme corner case: a useless phi with operands
[self, constant] that prior optimization passes failed to remove,
being used in the loop by another useless phi, in turn being used by an
lshr or udiv.
Fixes PR11350: runaway iteration assertion.
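A sketch of the safer check (illustrative, not the literal patch):
  // dyn_cast<BinaryOperator> rejects PHIs even though a PHI may also happen
  // to have exactly two operands
  if (BinaryOperator *BO = dyn_cast<BinaryOperator>(V)) {
    // genuinely an add/sub/mul/etc.
  }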
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144935 91177308-0d34-0410-b5e6-96231b3b80d8
lead to it trying to re-mark a value marked as a constant with a different value. It also appears to trigger very rarely.
Fixes PR11357.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144352 91177308-0d34-0410-b5e6-96231b3b80d8
The size of the data being pointed to wasn't always checked, so some small writes were killing big writes.
Fixes <rdar://problem/10426753>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144312 91177308-0d34-0410-b5e6-96231b3b80d8
Currently this checks alignment and whether the killing store is on a power-of-2 boundary, as this is likely
to trim the size of the earlier store without breaking large vector stores into scalar ones.
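An illustrative example (made-up code, not from the change):
  // the later 8-byte store is aligned on a power-of-2 boundary and covers
  // part of the earlier 16-byte store, so the earlier store can be shrunk
  // to the bytes that are not overwritten instead of being kept whole
  memset(p, 0, 16);               // earlier, wider store
  *(uint64_t *)(p + 8) = value;   // later, narrower killing store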
Fixes <rdar://problem/10140300>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144239 91177308-0d34-0410-b5e6-96231b3b80d8
This is currently only done if the later store writes to a power-of-2 address or
has the same alignment as the earlier store, as then it is unlikely to break up
large stores into smaller ones.
Fixes <rdar://problem/10140300>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@143630 91177308-0d34-0410-b5e6-96231b3b80d8
We've been hitting asserts in this code due to the many supported
combinations of modes (iv-rewrite/no-iv-rewrite) and IV types. This
second rewrite of the code attempts to deal with these cases systematically.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@143546 91177308-0d34-0410-b5e6-96231b3b80d8
instructions.
This doesn't introduce any optimizations we weren't doing before (except
potentially due to pass ordering issues), but now passes will eliminate them sooner
as part of their own cleanups.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142787 91177308-0d34-0410-b5e6-96231b3b80d8
element types, even though the element extraction code does. It is surprising
that this bug has been here for so long. Fixes <rdar://problem/10318778>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142740 91177308-0d34-0410-b5e6-96231b3b80d8
combining of the landingpad instruction. The ObjC personality function acts
almost identically to the C++ personality function. In particular, it uses
"null" as a "catch-all" value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142256 91177308-0d34-0410-b5e6-96231b3b80d8
possibility that it will span multiple CFG diamonds/triangles which
could have different controlling predicates. rdar://10282956
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142222 91177308-0d34-0410-b5e6-96231b3b80d8
Some code wants to check that *any* call within a function has the 'returns
twice' attribute, not just that the current function has one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142221 91177308-0d34-0410-b5e6-96231b3b80d8
profile metadata at the same time. Use it to preserve metadata attached
to a branch when re-writing it in InstCombine.
Add metadata to the canonicalize_branch InstCombine test, and check that
it is transformed correctly.
Reviewed by Nick Lewycky!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142168 91177308-0d34-0410-b5e6-96231b3b80d8
I rewrote the algorithm a while back so it doesn't require map lookup,
but neglected to change the data structure. This was caught by
llvm-gcc self host, not because there's anything special about
llvm-gcc, but because it is the only test for nondeterminism we
currently have. Unit tests don't work well for everything; we should
always try to have a nondeterminism stress test running.
Fixes PR11133: llvm-gcc self host .o mismatch after enable-iv-rewrite=false
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142036 91177308-0d34-0410-b5e6-96231b3b80d8
Someone more familiar with LSR should double-check that the extra cast is actually doing the right thing in the overflow cases; I'm not completely confident that that's the case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141916 91177308-0d34-0410-b5e6-96231b3b80d8
would have never worked, since the element type of a vector type is never a
vector type. Also fix the conditional to be more direct in checking whether
EltTy is a vector type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141713 91177308-0d34-0410-b5e6-96231b3b80d8
I'm not sure we will need it in the long run, but the option is
currently useful for checking if the output of LSR is "clean".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141634 91177308-0d34-0410-b5e6-96231b3b80d8
IVs.
Indvars previously chose randomly between congruent IVs. Now it will
bias the decision toward IVs that SCEVExpander likes to create. This
was not done to fix any problem; it's just a welcome side effect of
factoring code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141633 91177308-0d34-0410-b5e6-96231b3b80d8
promoting allocas to preferred alignments that exceed the natural
alignment. This avoids some potentially expensive dynamic stack realignments.
The natural stack alignment is set in target data strings via the "S<size>"
option. Size is in bits and must be a multiple of 8. The natural stack alignment
defaults to "unspecified" (represented by a zero value), and the "unspecified"
value does not prevent any alignment promotions. Target maintainers that care
about avoiding promotions should explicitly add the "S<size>" option to their
target data strings.
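For example, a target that wants a 16-byte natural stack alignment would include "S128" (the size is given in bits) in its target data string.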
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141599 91177308-0d34-0410-b5e6-96231b3b80d8
  switch (n) {
  case 27:
    do_something(x);
    ...
  }
the call do_something(x) will be replaced with do_something(27). In
gcc-as-one-big-file this results in the removal of about 500 lines of
bitcode (about 0.02%), so has about 1/10 of the effect of propagating
branch conditions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141360 91177308-0d34-0410-b5e6-96231b3b80d8
While I'm here, fix the related issue with strncmp and add some actual tests for strcmp and strncmp.
Also start using StringRef::compare for constant folding instead of strcmp/strncmp, so that the optimized IR isn't dependent on the host's implementation of strcmp.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141227 91177308-0d34-0410-b5e6-96231b3b80d8
Just pull the instruction name, but don't change the order of anything
else. That keeps --debug happy and non-crashing, but doesn't change
how the worklist gets built.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141210 91177308-0d34-0410-b5e6-96231b3b80d8
When updating the worklist for InstCombine, the Add/AddUsersToWorklist
functions may access the instruction(s) being added, for debug output for
example. If the instructions aren't yet added to the basic block, this
can result in a crash. Finish the instruction transformation before
adjusting the worklist instead.
rdar://10238555
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141203 91177308-0d34-0410-b5e6-96231b3b80d8
branch "br i1 %x, label %if_true, label %if_false" then it replaces
"%x" with "true" in places only reachable via the %if_true arm, and
with "false" in places only reachable via the %if_false arm. Except
that actually it doesn't: if value numbering shows that %y is equal
to %x then, yes, %y will be turned into true/false in this way, but
any occurrences of %x itself are not transformed. Fix this. What's
more, it's often the case that %x is an equality comparison such as
"%x = icmp eq %A, 0", in which case every occurrence of %A that is
only reachable via the %if_true arm can be replaced with 0. Implement
this and a few other variations on this theme. This reduces the number
of lines of LLVM IR in "GCC as one big file" by 0.2%. It has a bigger
impact on Ada code, typically reducing the number of lines of bitcode
by around 0.4% by removing repeated compiler generated checks. Passes
the LLVM nightly testsuite and the Ada ACATS testsuite.
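A minimal C-like illustration (hypothetical code):
  if (A == 0) {        // %x = icmp eq i32 %A, 0 ; br i1 %x, ...
    use(A);            // every use of %A reachable only from here sees 0
    use(A == 0);       // and uses of %x itself become true
  }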
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141177 91177308-0d34-0410-b5e6-96231b3b80d8
it's OK for the false/true destination to have multiple
predecessors as long as the extra ones are dominated by
the branch destination.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141176 91177308-0d34-0410-b5e6-96231b3b80d8
This handles the case in which LSR rewrites an IV user that is a phi and
splits critical edges originating from a switch.
Fixes <rdar://problem/6453893> LSR is not splitting edges "nicely"
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141059 91177308-0d34-0410-b5e6-96231b3b80d8
We want heuristics to be based on accurate data, but more importantly
we don't want llvm to behave randomly. A benign trunc inserted by an
upstream pass should not cause wild swings in optimization
level. See PR11034. It's a general problem with threshold-based
heuristics, but we can make it less bad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140919 91177308-0d34-0410-b5e6-96231b3b80d8
catch or repeated filter clauses. Teach instcombine a bunch
of tricks for simplifying landingpad clauses. Currently the
code only recognizes the GNU C++ and Ada personality functions,
but that doesn't stop it doing a bunch of "generic" transforms
which are hopefully fine for any real-world personality function.
If these "generic" transforms turn out not to be generic, they
can always be conditioned on the personality function. Probably
someone should add the ObjC++ personality function. I didn't as
I don't know anything about it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140852 91177308-0d34-0410-b5e6-96231b3b80d8
objc_retainBlock call is potentially responsible for copying
the block to the heap to extend its lifetime. rdar://10209613.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140814 91177308-0d34-0410-b5e6-96231b3b80d8
Rewriting the entire loop nest now requires -enable-lsr-nested.
See PR11035 for some performance data.
A few unit tests specifically test nested LSR, and are now under a flag.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140762 91177308-0d34-0410-b5e6-96231b3b80d8
Disabling aggressive LSR saves compilation time, and with the new
indvars behavior usually improves performance.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140590 91177308-0d34-0410-b5e6-96231b3b80d8
The minor bug in the heuristic was noticed by inspection. I added the
isLoser/isValid helpers because they will become more
important with subsequent checkins.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140580 91177308-0d34-0410-b5e6-96231b3b80d8
No test case. Noticed by inspection and I doubt it ever affects the
outcome of the overall heuristic, let alone final codegen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140431 91177308-0d34-0410-b5e6-96231b3b80d8
The landing pad must accompany the invoke when it's extracted. However, if it
does, then the loop isn't properly extracted. I.e., the resulting extraction has
a loop in it. The extracted function is then extracted, etc. resulting in an
infinite loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140193 91177308-0d34-0410-b5e6-96231b3b80d8
extract its associated landing pad block as well. However, that landing pad
block may have more than one predecessor. So split the landing pad block so that
individual landing pads have only one predecessor.
This type of transformation may produce a false positive with bugpoint.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140173 91177308-0d34-0410-b5e6-96231b3b80d8
extract the landing pad block. Otherwise, there will be a situation where the
invoke's unwind edge lands on a non-landing pad.
We also forbid the user from extracting the landing pad block by itself. Again,
this is not a valid transformation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140083 91177308-0d34-0410-b5e6-96231b3b80d8
No tests; these changes aren't really interesting in the sense that the logic is the same for volatile and atomic.
I believe this completes all of the changes necessary for the optimizer to handle loads and stores correctly. I'm going to try and come up with some additional testing, though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139533 91177308-0d34-0410-b5e6-96231b3b80d8
better.
Don't immediately give up when an add operation can't be trivially
sign/zero-extended within a loop. If it has NSW/NUW flags, generate a
new expression with sign extended (non-recurrent) operand. As before,
if SCEV says that all sign extends are loop invariant, then we can
widen the operation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139453 91177308-0d34-0410-b5e6-96231b3b80d8
init.trampoline and adjust.trampoline intrinsics, into two intrinsics
like in GCC. While having one combined intrinsic is tempting, it is
not natural because typically the trampoline initialization needs to
be done in one function, and the result of adjust trampoline is needed
in a different (nested) function. To get around this llvm-gcc hacks the
nested function lowering code to insert an additional parent variable
holding the adjust.trampoline result that can be accessed from the child
function. Dragonegg doesn't have the luxury of tweaking GCC code, so it
stored the result of adjust.trampoline in the memory GCC set aside for
the trampoline itself (this is always available in the child function),
and set up some new memory (using an alloca) to hold the trampoline.
Unfortunately this breaks Go which allocates trampoline memory on the
heap and wants to use it even after the parent has exited (!). Rather
than doing even more hacks to get Go working, it seemed best to just use
two intrinsics like in GCC. Patch mostly by Sanjoy Das.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139140 91177308-0d34-0410-b5e6-96231b3b80d8
This changes loop unrolling to use the same mechanism for trip count
computation as indvars. This is a stronger check that tends to unroll
more loops. A very common side-effect is that many single iteration
loops will be removed sooner. The real goal was simply to remove
dependence on canonical IVs.
x86 is break even.
ARM performance changes to expect (+ is good):
External/SPEC/CFP2000/183.equake/183.equake +13%
SingleSource/Benchmarks/Dhrystone/fldry +21%
MultiSource/Applications/spiff/spiff +3%
SingleSource/Benchmarks/Stanford/Puzzle -14%
The Puzzle regression is actually an improvement in loop optimization
that defeats GVN: rdar://problem/10065079.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139009 91177308-0d34-0410-b5e6-96231b3b80d8