Commit Graph

5352 Commits

Author SHA1 Message Date
Chandler Carruth
a403ceb205 [LPM] Fix PR18643, another scary place where loop transforms failed to
preserve loop simplify of enclosing loops.

The problem here starts with LoopRotation which ends up cloning code out
of the latch into the new preheader it is building. This can create
a new edge from the preheader into the exit block of the loop which
breaks LoopSimplify form. The code tries to fix this by splitting the
critical edge between the latch and the exit block to get a new exit
block that only the latch dominates. This sadly isn't sufficient.

The exit block may be an exit block for multiple nested loops. When we
clone an edge from the latch of the inner loop to the new preheader
being built in the outer loop, we create an exiting edge from the outer
loop to this exit block. Despite breaking the LoopSimplify form for the
inner loop, this is fine for the outer loop. However, when we split the
edge from the inner loop to the exit block, we create a new block which
is in neither the inner nor outer loop as the new exit block. This is
a predecessor to the old exit block, and so the split itself takes the
outer loop out of LoopSimplify form. We need to split every edge
entering the exit block from inside a loop nested more deeply than the
exit block in order to preserve all of the loop simplify constraints.

Once we try to do that, a problem with splitting critical edges
surfaces. Previously, we tried a very brute-force approach, re-computing
LoopSimplify form for all exit blocks. We don't need to do this, and that
much re-computation only sometimes overlaps with the LoopRotate bug fix.
Instead, the code needs to specifically handle the
cases which can start to violate LoopSimplify -- they aren't that
common. We need to see if the destination of the split edge was a loop
exit block in simplified form for the loop of the source of the edge.
For this to be true, all the predecessors need to be in the exact same
loop as the source of the edge being split. If the dest block was
originally in this form, we have to split all of the edges back into
this loop to recover it. The old mechanism of doing this was
conservatively correct because at least *one* of the exiting blocks it
rewrote was the DestBB and so the DestBB's predecessors were fixed. But
this is a much more targeted way of doing it. Making it targeted is
important, because ballooning the set of edges touched prevents
LoopRotate from being able to split edges *it* needs to split to
preserve loop simplify in a coherent way -- the critical edge splitting
would sometimes find some of the other edges in need of splitting but not
others.
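
The predecessor check above is the heart of the fix. A minimal sketch of the
idea (not the actual SplitCriticalEdge code; the helper name is invented):

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/CFG.h"
  using namespace llvm;

  // DestBB counts as a loop-simplified exit block for the loop containing
  // the edge's source only if every predecessor of DestBB lives in that
  // exact same loop. If so, splitting must re-split those edges to recover
  // the simplified form.
  static bool wasSimplifiedExitBlock(BasicBlock *DestBB, BasicBlock *SrcBB,
                                     LoopInfo &LI) {
    Loop *SrcLoop = LI.getLoopFor(SrcBB);
    if (!SrcLoop || SrcLoop->contains(DestBB))
      return false;                        // not an exit edge at all
    for (BasicBlock *Pred : predecessors(DestBB))
      if (LI.getLoopFor(Pred) != SrcLoop)
        return false;                      // a predecessor is in another loop
    return true;
  }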

Many, *many* thanks for help from Nick reducing these test cases
mightily. And helping lots with the analysis here as this one was quite
tricky to track down.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200393 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-29 13:16:53 +00:00
Chandler Carruth
6a67a3f3ec [LPM] Fix PR18642, a pretty nasty bug in IndVars that "never mattered"
because of the inside-out run of LoopSimplify in the LoopPassManager and
the fact that LoopSimplify couldn't be "preserved" across two
independent LoopPassManagers.

Anyways, in that case, IndVars wasn't correctly preserving an LCSSA PHI
node because it thought it was rewriting (via SCEV) the incoming value
to a loop invariant value. While it may well be invariant for the
current loop, it may be rewritten in terms of an enclosing loop's
values. This in and of itself is fine, as the LCSSA PHI node in the
enclosing loop for the inner loop value we're rewriting will have its
own LCSSA PHI node if used outside of the enclosing loop. With me so
far?

Well, the current loop and the enclosing loop may share an exiting
block and exit block, and when they do they also share LCSSA PHI nodes.
In this case, it's not valid to RAUW through the LCSSA PHI node.
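
A hedged sketch of that hazard check (illustrative names, not the actual
IndVars code): if the LCSSA PHI also receives values from blocks outside the
current loop, the exit block is shared with an enclosing loop and RAUW
through the PHI is unsafe.

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // True if PN also merges values coming from outside CurLoop, i.e. the
  // exit block is shared and PN is not an LCSSA node for CurLoop alone.
  static bool lcssaPHIIsShared(PHINode *PN, Loop *CurLoop) {
    for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
      if (!CurLoop->contains(PN->getIncomingBlock(i)))
        return true;
    return false;
  }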

Expected crazy test included.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200372 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-29 04:40:19 +00:00
Rafael Espindola
f611ae40fd Fix pr14893.
When simplifycfg moves an instruction, it must drop metadata it doesn't know
is still valid under the changed preconditions. In particular, it must drop
the range and tbaa metadata.

The patch implements this with a utility function that drops all metadata
not in a whitelist.
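
A hedged sketch of such a utility (the helper name is illustrative, not
necessarily the one the patch adds):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/STLExtras.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // Drop every metadata kind on I that is not in the whitelist of kinds
  // known to remain valid; setting a kind to nullptr removes it.
  static void dropUnknownMetadata(Instruction *I, ArrayRef<unsigned> KnownIDs) {
    SmallVector<std::pair<unsigned, MDNode *>, 4> MDs;
    I->getAllMetadataOtherThanDebugLoc(MDs);
    for (const auto &KV : MDs)
      if (!is_contained(KnownIDs, KV.first))
        I->setMetadata(KV.first, nullptr);
  }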

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200322 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-28 16:56:46 +00:00
Chandler Carruth
05d43d8b6f [vectorizer] Completely disable the block frequency guidance of the loop
vectorizer, placing it behind an off-by-default flag.

It turns out that block frequency isn't what we want at all, here or
elsewhere. This has been, I think, a nagging feeling for several of us
working with it, but Arnold has given some really nice simple examples
where the results are so comprehensively wrong that they aren't useful.

I'm planning to email the dev list with a summary of why it's not really
useful and a couple of ideas about how to better structure these types
of heuristics.
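
For illustration, the usual shape of such an off-by-default escape hatch (the
flag name below is invented, not necessarily the one that was committed):

  #include "llvm/Support/CommandLine.h"

  static llvm::cl::opt<bool> EnableBlockFreqGuidance(
      "enable-block-freq-guidance", llvm::cl::init(false), llvm::cl::Hidden,
      llvm::cl::desc("Re-enable block-frequency guidance in the loop "
                     "vectorizer (off by default)"));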

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200294 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-28 09:10:41 +00:00
Reid Kleckner
59bec0e3c0 Update optimization passes to handle inalloca arguments
Summary:
I searched Transforms/ and Analysis/ for 'ByVal' and updated those call
sites to check for inalloca if appropriate.

I added tests for any change that would allow an optimization to fire on
inalloca.

Reviewers: nlewycky

Differential Revision: http://llvm-reviews.chandlerc.com/D2449

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200281 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-28 02:38:36 +00:00
Arnold Schwaighofer
a47aa4b4ef LoopVectorize: Support conditional stores by scalarizing
The vectorizer takes a loop like this and widens all instructions except for the
store. The stores are scalarized/unrolled and hidden behind an "if" block.

  for (i = 0; i < 128; ++i) {
    if (a[i] < 10)
      a[i] += val;
  }

  for (i = 0; i < 128; i+=2) {
    v = a[i:i+1];
    v0 = (extract v, 0) + 10;
    v1 = (extract v, 1) + 10;
    if (v0 < 10)
      a[i] = v0;
    if (v1 < 10)
      a[i+1] = v1;
  }

The vectorizer relies on subsequent optimizations to sink instructions into the
conditional block where they are anticipated.

The flag "vectorize-num-stores-pred" controls whether and how many stores to
handle this way. Vectorization of conditional stores is disabled by default
for now.

This patch also adds a change to the heuristic when the flag
"enable-loadstore-runtime-unroll" is enabled (off by default). It unrolls small
loops until load/store ports are saturated. This heuristic uses TTI's
getMaxUnrollFactor as a measure for load/store ports.

I also added a second flag -enable-cond-stores-vec. It will enable vectorization
of conditional stores. But there is no cost model for vectorization of
conditional stores in place yet, so this will not do much good at the moment.

rdar://15892953

Results for x86-64 -O3 -mavx +/- -mllvm -enable-loadstore-runtime-unroll
-vectorize-num-stores-pred=1 (before the BFI change):

 Performance Regressions:
   Benchmarks/Ptrdist/yacr2/yacr2 7.35% (maze3() is identical but 10% slower)
   Applications/siod/siod         2.18%
 Performance improvements:
   mesa                          -4.42%
   libquantum                    -4.15%

 With a patch that slightly changes the register heuristics (by subtracting the
 induction variable on both sides of the register pressure equation, as the
 induction variable is probably not really unrolled):

 Performance Regressions:
   Benchmarks/Ptrdist/yacr2/yacr2  7.73%
   Applications/siod/siod          1.97%

 Performance Improvements:
   libquantum                    -13.05% (we now also unroll quantum_toffoli)
   mesa                           -4.27%

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200270 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-28 01:01:53 +00:00
Manman Ren
aa6627016f PGO branch weight: keep halving the weights until they can fit into
uint32.

When folding branches to a common destination, the updated branch weights
can exceed uint32 by more than a factor of 2. We should keep halving the
weights until they can fit into uint32.
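
A minimal standalone sketch of the halving loop described above (not the
actual FoldBranchToCommonDest code):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  // Halve all weights together (preserving their ratios) until the largest
  // one fits into a uint32.
  static void fitWeightsIntoUint32(std::vector<uint64_t> &Weights) {
    uint64_t Max = 0;
    for (uint64_t W : Weights)
      Max = std::max(Max, W);
    while (Max > UINT32_MAX) {
      Max >>= 1;
      for (uint64_t &W : Weights)
        W >>= 1;
    }
  }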


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200262 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-27 23:39:03 +00:00
Chandler Carruth
5f61e70eac [vectorize] Initial version of respecting PGO in the vectorizer: treat
cold loops as-if they were being optimized for size.

Nothing fancy here. Simple test case included. The nice thing is that we
can now incrementally build on top of this to drive other heuristics.
All of the infrastructure work is done to get the profile information
into this layer.

The remaining work necessary to make this a fully general purpose loop
unroller for very hot loops is to make it a fully general purpose loop
unroller. Things I know of but am not going to have time to benchmark
and fix in the immediate future:

1) Don't disable the entire pass when the target is lacking vector
   registers. This really doesn't make any sense any more.
2) Teach the unroller at least and the vectorizer potentially to handle
   non-if-converted loops. This is trivial for the unroller but hard for
   the vectorizer.
3) Compute the relative hotness of the loop and thread that down to the
   various places that make cost tradeoffs (very likely only the
   unroller makes sense here, and then only when dealing with loops that
   are small enough for unrolling to not completely blow out the LSD).

I'm still dubious how useful hotness information will be. So far, my
experiments show that if we can get the correct logic for determining
when unrolling actually helps performance, the code size impact is
completely unimportant and we can unroll in all cases. But at least
we'll no longer burn code size on cold code.

One somewhat unrelated idea that I've had forever but not had time to
implement: mark all functions which are only reachable via the global
constructors rigging in the module as optsize. This would also decrease
the impact of any more aggressive heuristics here on code size.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200219 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-27 13:11:50 +00:00
Benjamin Kramer
08aa38d39b ConstantHoisting: We can't insert instructions directly in front of a PHI node.
Insert before the terminating instruction of the dominating block instead.
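
A hedged sketch of the adjustment (illustrative helper, not the actual
ConstantHoisting code):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // PHI nodes must stay grouped at the top of their block, so a materialized
  // constant cannot go directly in front of one; fall back to inserting
  // before the terminator of the dominating block.
  static Instruction *adjustInsertionPoint(Instruction *InsertPt,
                                           BasicBlock *DomBlock) {
    if (isa<PHINode>(InsertPt))
      return DomBlock->getTerminator();
    return InsertPt;
  }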

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200218 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-27 13:11:43 +00:00
Chandler Carruth
1c4746ed70 [vectorizer] Add an override for the target instruction cost and use it
to stabilize a test that really is trying to test generic behavior and
not a specific target's behavior.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200215 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-27 11:41:50 +00:00
Chandler Carruth
424b2b0093 [vectorizer] Teach the loop vectorizer's unroller to only unroll by
powers of two. This is essentially always the correct thing given the
impact on alignment, scaling factors that can be used in addressing
modes, etc. Also, fix the management of the unroll vs. small loop cost
to more accurately model things in this world.
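
A small sketch of the rounding rule (assumed shape; the real change lives in
the vectorizer's unroll-factor selection):

  #include "llvm/Support/MathExtras.h"

  // Clamp a computed interleave/unroll factor down to the nearest power of
  // two, e.g. 3 -> 2 and 5 -> 4.
  static unsigned roundUnrollFactorDown(unsigned UF) {
    return UF < 2 ? 1 : 1u << llvm::Log2_32(UF);
  }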

Enhance a test case to actually exercise more of the unroll machinery if
using synthetic constants rather than a specific target model. Before
this change, with the added flags this test will unroll 3 times instead
of either 2 or 4 (the two sensible answers).

While I don't expect this to make a huge difference, if there are lots
of loops sitting right on the edge of hitting the 'small unroll' factor,
they might change behavior. However, I've benchmarked moving the small
loop cost up and down in many various ways and by a huge factor (2x)
without seeing more than 0.2% code size growth. Small adjustments such
as the series that led up here have led to about 1% improvement on some
benchmarks, but it is very close to the noise floor so I mostly checked
that nothing regressed. Let me know if you see bad behavior on other
targets but I don't expect this to be a sufficiently dramatic change to
trigger anything.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200213 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-27 11:12:24 +00:00
Chandler Carruth
3d69cf57e1 [LPM] Make LCSSA a utility with a FunctionPass that applies it to all
the loops in a function, and teach LICM to work in the presence of
LCSSA.

Previously, LCSSA was a loop pass. That made passes requiring it also be
loop passes and unable to depend on function analysis passes easily. It
also caused outer loops to have a different "canonical" form from inner
loops during analysis. Instead, we go into LCSSA form and preserve it
through the loop pass manager run.

Note that this has the same problem as LoopSimplify that prevents
enabling its verification -- loop passes which run at the end of the loop
pass manager and don't preserve these are valid, but the subsequent loop
pass runs of outer loops that do preserve this pass trigger too much
verification and fail because the inner loop no longer verifies.

The other problem this exposed is that LICM was completely unable to
handle LCSSA form. It didn't preserve it and it actually would give up
on moving instructions in many cases when they were used by an LCSSA phi
node. I've taught LICM to support detecting LCSSA-form PHI nodes and to
hoist and sink around them. This may actually let LICM fire
significantly more because we put everything into LCSSA form to rotate
the loop before running LICM. =/ Now LICM should handle that fine and
preserve it correctly. The down side is that LICM has to require LCSSA
in order to preserve it. This is just a fact of life for LCSSA. It's
entirely possible we should completely remove LCSSA from the optimizer.

The test updates are essentially accommodating LCSSA phi nodes in the
output of LICM, and the fact that we now completely sink every
instruction in ashr-crash below the loop bodies prior to unrolling.

With this change, LCSSA is computed only three times in the pass
pipeline. One of them could be removed (and potentially a SCEV run and
a separate LoopPassManager entirely!) if we had a LoopPass variant of
InstCombine that ran InstCombine on the loop body but refused to combine
away LCSSA PHI nodes. Currently, this also prevents loop unrolling from
being in the same loop pass manager as rotate, LICM, and unswitch.

There is one thing that I *really* don't like -- preserving LCSSA in
LICM is quite expensive. We end up having to re-run LCSSA twice for some
loops after LICM runs because LICM can undo LCSSA both in the current
loop and the parent loop. I don't really see good solutions to this
other than to completely move away from LCSSA and using tools like
SSAUpdater instead.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200067 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-25 04:07:24 +00:00
Benjamin Kramer
d7053be532 InstCombine: Don't try to use aggregate elements of ConstantExprs.
PR18600.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200028 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-24 19:02:37 +00:00
Alp Toker
ae43cab6ba Fix known typos
Sweep the codebase for common typos. Includes some changes to visible function
names that were misspelt.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200018 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-24 17:20:08 +00:00
Benjamin Kramer
c166623dcd InstSimplify: Make shift, select and GEP simplifications vector-aware.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200016 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-24 17:09:53 +00:00
Rafael Espindola
b961ae25b4 Note the PR number.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199932 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-23 20:17:12 +00:00
Rafael Espindola
35b78fd04c Remove tail marker when changing an argument to an alloca.
Argument promotion can replace an argument of a call with an alloca. This
requires clearing the tail marker as it is very likely that the callee is now
using an alloca in the caller.

This fixes pr14710.
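
A hedged sketch of the marker-clearing step (illustrative, not the actual
ArgPromotion code):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Once a promoted argument is backed by an alloca in the caller, a 'tail'
  // marker on the call would be wrong: the callee may now access the
  // caller's stack, so the marker must be dropped.
  static void clearTailMarker(CallInst *CI, bool ArgNowRefersToCallerAlloca) {
    if (ArgNowRefersToCallerAlloca && CI->isTailCall())
      CI->setTailCall(false);
  }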

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199909 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-23 17:19:42 +00:00
Chandler Carruth
aaf44af769 [LPM] Make LoopSimplify no longer a LoopPass and instead both a utility
function and a FunctionPass.

This has many benefits. The motivating use case was to be able to
compute function analysis passes *after* running LoopSimplify (to avoid
invalidating them) and then to run other passes which require
LoopSimplify. Specifically passes like unrolling and vectorization are
critical to wire up to BranchProbabilityInfo and BlockFrequencyInfo so
that they can be profile aware. For the LoopVectorize pass the only
things in the way are LoopSimplify and LCSSA. This fixes LoopSimplify
and LCSSA is next on my list.

There are also a bunch of other benefits of doing this:
- It is now very feasible to make more passes *preserve* LoopSimplify
  because they can simply run it after changing a loop. Because
  subsequent passes can assume LoopSimplify is preserved, we can reduce
  the runs of this pass to the times when we actually mutate a loop
  structure.
- The new pass manager should be able to more easily support loop passes
  factored in this way.
- We can at long, long last observe that LoopSimplify is preserved
  across SCEV. This *halves* the number of times we run LoopSimplify!!!

Now, getting here wasn't trivial. First off, the interfaces used by
LoopSimplify are all over the map regarding how analyses are updated. We
end up with weird "pass" parameters as a consequence. I'll try to clean
at least some of this up later -- I'll have to have it all clean for the
new pass manager.

Next up I discovered a really frustrating bug. LoopUnroll *claims* to
preserve LoopSimplify. That's actually a lie. But the way the
LoopPassManager ends up running the passes, it always ran LoopSimplify
on the unrolled-into loop, rectifying this oversight before any
verification could kick in and point out that in fact nothing was
preserved. So I've added code to the unroller to *actually* simplify the
surrounding loop when it succeeds at unrolling.

The only functional change in the test suite is that we now catch a case
that was previously missed because SCEV and other loop transforms see
their containing loops as simplified and thus don't miss some
opportunities. One test case has been converted to check that we catch
this case rather than checking that we miss it but at least don't get
the wrong answer.

Note that I have #if-ed out all of the verification logic in
LoopSimplify! This is a temporary workaround while extracting these bits
from the LoopPassManager. Currently, there is no way to have a pass in
the LoopPassManager which preserves LoopSimplify along with one which
does not. The LPM will try to verify on each loop in the nest that
LoopSimplify holds but the now-Function-pass cannot distinguish what
loop is being verified and so must try to verify all of them. The
innermost loop is clearly no longer simplified as there is a pass which
didn't even *attempt* to preserve it. =/ Once I get LCSSA out (and maybe
LoopVectorize and some other fixes) I'll be able to re-enable this check
and catch any places where we are still failing to preserve
LoopSimplify. If this causes problems I can back this out and try to
commit *all* of this at once, but so far this seems to work and allow
much more incremental progress.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199884 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-23 11:23:19 +00:00
Matt Arsenault
4c72b6da97 Add CHECK-LABELs
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199846 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-22 22:32:58 +00:00
Matt Arsenault
88a9f0476c Handle an addrspacecast case in memcpyopt
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199836 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-22 21:53:19 +00:00
Owen Anderson
1e1446bf84 Fix all the remaining lost-fast-math-flags bugs I've been able to find. The most important of these are cases in the generic logic for combining BinaryOperators.
This logic hadn't been updated to handle FastMathFlags, and it took me a while to detect it because it doesn't show up in a simple search for CreateFAdd.
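
A minimal sketch of the general rule being fixed (not the actual InstCombine
code): a replacement FP operation built from an existing one must carry the
original fast-math flags forward.

  #include "llvm/IR/InstrTypes.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  static BinaryOperator *createFAddPreservingFMF(Value *A, Value *B,
                                                 Instruction *Orig) {
    BinaryOperator *New = BinaryOperator::CreateFAdd(A, B);
    New->copyFastMathFlags(Orig);   // don't silently drop nnan/ninf/etc.
    return New;                     // caller inserts it where Orig was used
  }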


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199629 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-20 07:44:53 +00:00
Benjamin Kramer
b45edea9b3 InstCombine: Modernize a bunch of cast combines.
Also make them vector-aware.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199608 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-19 20:05:13 +00:00
Benjamin Kramer
c7645e860a InstCombine: Replace a hand-rolled version of isKnownToBeAPowerOfTwo with the real thing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199604 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-19 16:48:41 +00:00
Benjamin Kramer
0487faa97b InstCombine: Teach most integer add/sub/mul/div combines how to deal with vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199602 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-19 15:24:22 +00:00
Benjamin Kramer
3f6a9d705a InstCombine: Refactor fmul/fdiv combines to handle vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199598 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-19 13:36:27 +00:00
Chandler Carruth
e1a5243053 Fix a really nasty SROA bug with how we handled out-of-bounds memcpy
intrinsics.

Reported on the list by Evan with a couple of attempts to fix, but it
took a while to dig down to the root cause. There are two overlapping
bugs here, both centering around the circumstance of discovering
a memcpy operand which is known to be completely outside the bounds of
the alloca.

First, we need to kill the *other* side of the memcpy if it was added to
this alloca. Otherwise we'll factor it into our slicing and try to
rewrite it even though we know for a fact that it is dead. This is made
more tricky because we can visit the sides in either order. So we have
to both kill the other side and skip instructions marked as dead. The
latter really should be goodness in every case, but here it is a matter of
correctness.

Second, we need to actually remove the *uses* of the alloca by the
memcpy when queuing it for later deletion. Otherwise it may still be
using the alloca when we go to promote it (if the rewrite re-uses the
existing alloca instruction). Do this by factoring out the use-clobbering
code used for nixing a Phi argument and re-using it across the operands
of a to-be-deleted instruction.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199590 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-19 12:16:54 +00:00
Arnold Schwaighofer
2becaaf3a1 LoopVectorizer: A reduction that has multiple uses of the reduction value is not
a reduction.

Really. Under certain circumstances (the use list of an instruction has to be
set up right - hence the extra pass in the test case) we would not recognize
when a value in a potential reduction cycle was used multiple times by the
reduction cycle.

Fixes PR18526.
radar://15851149
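
A small illustration (not the commit's actual test case) of the pattern being
rejected:

  // 'sum' has a second use inside the loop body (feeding b[i]), so it is
  // not a plain reduction and must not be vectorized as one.
  void notAReduction(float *a, float *b, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) {
      sum += a[i];
      b[i] = sum;
    }
  }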

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199570 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-19 03:18:31 +00:00
Nick Lewycky
6d2bd95ff1 Don't refuse to transform constexpr(call(arg, ...)) to call(constexpr(arg), ...) just because the function has multiple return values even if their return types are the same. Patch by Eduard Burtescu!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199564 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-18 22:47:12 +00:00
Benjamin Kramer
8e937c39bb InstCombine: Make the (fmul X, -1.0) -> (fsub -0.0, X) transform handle vectors too.
PR18532.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199553 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-18 16:43:14 +00:00
Owen Anderson
774cec5748 Fix more instances of dropped fast math flags when optimizing FADD instructions. All found by inspection (aka grep).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199528 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-18 00:48:14 +00:00
Owen Anderson
5ee5e0c430 Fix two cases where we could lose fast math flags when optimizing FADD expressions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199427 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-16 21:26:02 +00:00
Owen Anderson
5d9450f92f Fix an instance where we would drop fast math flags when performing an fdiv to reciprocal multiply transformation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199425 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-16 21:07:52 +00:00
Owen Anderson
da5e148474 Fix a bug in InstCombine where we failed to preserve fast math flags when optimizing an FMUL expression.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199424 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-16 20:59:41 +00:00
Owen Anderson
a2a8bbb30f Teach InstCombine that (fmul X, -1.0) can be simplified to (fneg X), which LLVM expresses as (fsub -0.0, X).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199420 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-16 20:36:42 +00:00
Andrew Trick
d5a74a754d Fix PR18449: SCEV needs more precise max BECount for multi-exit loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199299 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-15 06:42:11 +00:00
Hans Wennborg
89fa06ba0f Switch-to-lookup tables: set threshold to 3 cases
There has been an old FIXME to find the right cut-off for when it's worth
analyzing and potentially transforming a switch to a lookup table.

The switches always have two or more cases. I could not measure any speed-up
by transforming a switch with two cases. A switch with three cases gets a nice
speed-up, and I couldn't measure any compile-time regression, so I think this
is the right threshold.

In a Clang self-host, this causes 480 new switches to be transformed,
and reduces the final binary size by 8 KB.
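
An illustrative example (not taken from the commit) of what now qualifies:
a dense three-case switch like this can become a constant-array load, roughly
"x < 3 ? Table[x] : -1".

  int threeCases(unsigned x) {
    switch (x) {
      case 0: return 10;
      case 1: return 20;
      case 2: return 30;
      default: return -1;
    }
  }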

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199294 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-15 05:00:27 +00:00
Arnold Schwaighofer
e96fec2e43 LoopVectorize: Only strip casts from integer types when replacing symbolic
strides

Fixes PR18480.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199291 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-15 03:35:46 +00:00
Matt Arsenault
0445dc203b Do pointer cast simplifications on addrspacecast
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199254 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-14 20:00:45 +00:00
Matt Arsenault
60ecc44266 Make nocapture analysis work with addrspacecast
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199246 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-14 19:11:52 +00:00
Hans Wennborg
19236d53eb Switch-to-lookup tables: Don't require a result for the default
case when the lookup table doesn't have any holes.

This means we can build a lookup table for switches like this:

  switch (x) {
    case 0: return 1;
    case 1: return 2;
    case 2: return 3;
    case 3: return 4;
    default: exit(1);
  }

The default case doesn't yield a constant result here, but that doesn't matter,
since a default result is only necessary for filling holes in the lookup table,
and this table doesn't have any holes.
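
A rough illustration (assumed shape, not the generated IR) of the transformed
code: since the table has no holes, the default result is never needed and
only the out-of-range guard remains.

  #include <cstdlib>

  static const int Table[4] = {1, 2, 3, 4};

  int lookup(unsigned x) {
    if (x < 4)
      return Table[x];   // covers cases 0..3, no holes to fill
    std::exit(1);        // the non-constant default path is kept as-is
  }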

This makes us transform 505 more switches in a clang bootstrap, and shaves 164 KB
off the resulting clang binary.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199025 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-12 00:44:41 +00:00
Benjamin Kramer
ccdb9c9483 Fix broken CHECK lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199016 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-11 21:06:00 +00:00
NAKAMURA Takumi
e62f38b6b7 llvm/test/Transforms/SampleProfile/syntax.ll: Eliminate locale-sensitive message check.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199000 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-11 09:23:52 +00:00
Diego Novillo
4b2b2da9c7 Extend and simplify the sample profile input file.
1- Use the line_iterator class to read profile files.

2- Allow comments in profile file. Lines starting with '#'
   are completely ignored while reading the profile.

3- Add parsing support for discriminators and indirect call samples.

   Our external profiler can emit additional profile information that we are
   currently not handling. This patch does not add new functionality to
   support this information, but it allows profile files to provide it.

   I will add actual support later on (for at least one of these
   features, I need support for DWARF discriminators in Clang).

   A sample line may contain the following additional information:

   Discriminator. This is used if the sampled program was compiled with
   DWARF discriminator support
   (http://wiki.dwarfstd.org/index.php?title=Path_Discriminators). This
   is currently only emitted by GCC and we just ignore it.

   Potential call targets and samples. If present, this line contains a
   call instruction. This models both direct and indirect calls. Each
   called target is listed together with the number of samples. For
   example,

                    130: 7  foo:3  bar:2  baz:7

   The above means that at relative line offset 130 there is a call
   instruction that calls one of foo(), bar() and baz(), with baz() being
   the most frequent call target.

   Differential Revision: http://llvm-reviews.chandlerc.com/D2355

4- Simplify format of profile input file.

   This implements earlier suggestions to simplify the format of the
   sample profile file. The symbol table is not necessary and function
   profiles do not need to know the number of samples in advance.

   Differential Revision: http://llvm-reviews.chandlerc.com/D2419

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198973 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-10 23:23:51 +00:00
Diego Novillo
0de8cecb84 Propagation of profile samples through the CFG.
This adds a propagation heuristic to convert instruction samples
into branch weights. It implements a similar heuristic to the one
implemented by Dehao Chen on GCC.

The propagation proceeds in 3 phases:

1- Assignment of block weights. All the basic blocks in the function
   are initially assigned the same weight as their most frequently
   executed instruction.

2- Creation of equivalence classes. Since samples may be missing from
   blocks, we can fill in the gaps by setting the weights of all the
   blocks in the same equivalence class to the same weight. To compute
   the concept of equivalence, we use dominance and loop information.
   Two blocks B1 and B2 are in the same equivalence class if B1
   dominates B2, B2 post-dominates B1 and both are in the same loop.

3- Propagation of block weights into edges. This uses a simple
   propagation heuristic. The following rules are applied to every
   block B in the CFG:

   - If B has a single predecessor/successor, then the weight
     of that edge is the weight of the block.

   - If all the edges are known except one, and the weight of the
     block is already known, the weight of the unknown edge will
     be the weight of the block minus the sum of all the known
     edges. If the sum of all the known edges is larger than B's weight,
     we set the unknown edge weight to zero.

   - If there is a self-referential edge, and the weight of the block is
     known, the weight for that edge is set to the weight of the block
     minus the weight of the other incoming edges to that block (if
     known).
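
A hedged sketch of the "all edges known but one" rule above (illustrative,
not the pass's actual code):

  #include <cstdint>

  // The unknown edge gets the block weight minus the sum of the known
  // edges, clamped at zero when the known edges already exceed it.
  static uint64_t unknownEdgeWeight(uint64_t BlockWeight,
                                    uint64_t SumOfKnownEdges) {
    return SumOfKnownEdges >= BlockWeight ? 0
                                          : BlockWeight - SumOfKnownEdges;
  }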

Since this propagation is not guaranteed to finalize for every CFG, we
only allow it to proceed for a limited number of iterations (controlled
by -sample-profile-max-propagate-iterations). It currently uses the same
GCC default of 100.

Before propagation starts, the pass builds (for each block) a list of
unique predecessors and successors. This is necessary to handle
identical edges in multiway branches. Since we visit all blocks and all
edges of the CFG, it is cleaner to build these lists once at the start
of the pass.

Finally, the patch fixes the computation of relative line locations.
The profiler emits lines relative to the function header. To discover
it, we traverse the compilation unit looking for the subprogram
corresponding to the function. The line number of that subprogram is the
line where the function begins. That becomes line zero for all the
relative locations.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198972 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-10 23:23:46 +00:00
Arnold Schwaighofer
ee3f7de62e LoopVectorizer: Handle strided memory accesses by versioning
for (i = 0; i < N; ++i)
   A[i * Stride1] += B[i * Stride2];

We take loops like this and check that the symbolic strides 'Stride1/2' are one
and drop to the scalar loop if they are not.
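
A rough C-level illustration (not the generated IR) of the versioning: guard
a unit-stride body with a runtime check on the symbolic strides and fall back
to the original scalar loop otherwise.

  void addStrided(float *A, float *B, int N, int Stride1, int Stride2) {
    if (Stride1 == 1 && Stride2 == 1) {
      for (int i = 0; i < N; ++i)     // vectorizable: strides proven to be 1
        A[i] += B[i];
    } else {
      for (int i = 0; i < N; ++i)     // scalar fallback
        A[i * Stride1] += B[i * Stride2];
    }
  }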

This is currently disabled by default and hidden behind the flag
'enable-mem-access-versioning'.

radar://13075509

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198950 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-10 18:20:32 +00:00
Hao Liu
9e0fd27ce7 Fix a bug where an undef operand was generated when optimising shufflevector and insertelement in instruction combine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198730 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-08 03:06:15 +00:00
Andrew Trick
b4e0c9b85d Reapply r198654 "indvars: sink truncates outside the loop."
This doesn't seem to have actually broken anything. It was paranoia
on my part. Trying again now that bots are more stable.

This is a follow up of the r198338 commit that added truncates for
lcssa phi nodes. Sinking the truncates below the phis cleans up the
loop and simplifies subsequent analysis within the indvars pass.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198678 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-07 06:59:12 +00:00
Andrew Trick
2352abb7c6 Revert "indvars: sink truncates outside the loop."
This reverts commit r198654.

One of the bots reported a SciMark failure.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198659 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-07 01:50:58 +00:00
Andrew Trick
ced88c5918 indvars: sink truncates outside the loop.
This is a follow up of the r198338 commit that added truncates for
lcssa phi nodes. Sinking the truncates below the phis cleans up the
loop and simplifies subsequent analysis within the indvars pass.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198654 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-07 01:02:55 +00:00
Andrew Trick
a55aaf7fe6 Reapply r198478 "Fix PR18361: Invalidate LoopDispositions after LoopSimplify hoists things."
Now with a fix for PR18384: ValueHandleBase::ValueIsDeleted.

We need to invalidate SCEV's loop info when we delete a block, even if no values are hoisted.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198631 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-06 19:43:14 +00:00