Commit Graph

211 Commits

Sanjoy Das
148e8c9b8b Add a new pass "inductive range check elimination"
IRCE eliminates range checks of the form

  0 <= A * I + B < Length

by splitting a loop's iteration space into three segments in such a way
that the check is completely redundant in the middle segment.  As an
example, IRCE will convert

  len = < known positive >
  for (i = 0; i < n; i++) {
    if (0 <= i && i < len) {
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }

to

  len = < known positive >
  limit = smin(n, len)
  // no first segment
  for (i = 0; i < limit; i++) {
    if (0 <= i && i < len) { // this check is fully redundant
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }
  for (i = limit; i < n; i++) {
    if (0 <= i && i < len) {
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }


IRCE can deal with multiple range checks in the same loop (it takes
the intersection of the ranges that will make each of them redundant
individually).
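
For instance (a sketch extending the example above, not taken from the
patch), with two checks the middle-segment limit is clamped to the
intersection of both safe ranges:

  for (i = 0; i < n; i++) {
    if (0 <= i && i < len1 && i < len2) {
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }

  // both checks are redundant for i in [0, smin(len1, len2)):
  limit = smin(n, smin(len1, len2))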

Currently IRCE does not do any profitability analysis.  That is a
TODO.

Please note that the status of this pass is *experimental*, and it is
not part of any default pass pipeline.  Having said that, I would love
to get feedback and general input from people interested in trying
this out.

This pass was originally r226201.  It was reverted because it used C++
features not supported by MSVC 2012.

Differential Revision: http://reviews.llvm.org/D6693



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226238 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 01:03:22 +00:00
Sanjoy Das
df1b4f601d Revert r226201 (Add a new pass "inductive range check elimination")
The change used C++11 features not supported by MSVC 2012.  I will fix
the change to use things supported by MSVC 2012 and recommit shortly.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226216 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 22:18:10 +00:00
Sanjoy Das
0170a308ec Add a new pass "inductive range check elimination"
IRCE eliminates range checks of the form

  0 <= A * I + B < Length

by splitting a loop's iteration space into three segments in such a way
that the check is completely redundant in the middle segment.  As an
example, IRCE will convert

  len = < known positive >
  for (i = 0; i < n; i++) {
    if (0 <= i && i < len) {
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }

to

  len = < known positive >
  limit = smin(n, len)
  // no first segment
  for (i = 0; i < limit; i++) {
    if (0 <= i && i < len) { // this check is fully redundant
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }
  for (i = limit; i < n; i++) {
    if (0 <= i && i < len) {
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }


IRCE can deal with multiple range checks in the same loop (it takes
the intersection of the ranges that will make each of them redundant
individually).

Currently IRCE does not do any profitability analysis.  That is a
TODO.

Please note that the status of this pass is *experimental*, and it is
not part of any default pass pipeline.  Having said that, I would love
to get feedback and general input from people interested in trying
this out.

Differential Revision: http://reviews.llvm.org/D6693



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226201 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 20:45:46 +00:00
Hao Liu
eb52f383c2 [SeparateConstOffsetFromGEP] Allow SeparateConstOffsetFromGEP pass to lower GEPs.
If LowerGEP is enabled, it can lower a GEP with multiple indices into GEPs with a single index
or arithmetic operations. Lowering GEPs can always extract structure indices. Lowering GEPs can
also give us more optimization opportunities; it can benefit passes like CSE, LICM and CGP.
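
As a rough illustration (my sketch, not from the patch), lowering a
two-index GEP into an array of structs amounts to address arithmetic in
which each index becomes its own step and the constant structure offset
is exposed separately:

  #include <stddef.h>
  struct S { int x; float f; };
  float *addr(struct S (*a)[8], int i, int j) {
    char *p = (char *)a;
    p += (size_t)i * sizeof(*a);   // single-index step for the outer index
    p += (size_t)j * sizeof(**a);  // single-index step for the inner index
    p += offsetof(struct S, f);    // extracted constant structure offset
    return (float *)p;             // == &a[i][j].f
  }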

Reviewed in http://reviews.llvm.org/D5864


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222328 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-19 06:24:44 +00:00
Jingyue Wu
9cd9e4bb2c [SimplifyCFG] threshold for folding branches with common destination
Summary:
This patch adds a threshold that controls the number of bonus instructions
allowed for folding branches with a common destination. The original code allows
at most one bonus instruction. With this patch, users can customize the
threshold to allow multiple bonus instructions. The default threshold is still
1, so that the code behaves the same as before when users do not specify this
threshold.

The motivation of this change is that tuning this threshold significantly (up
to 25%) improves the performance of some CUDA programs in our internal code
base. In general, branch instructions are very expensive for GPU programs.
Therefore, it is sometimes worth trading more arithmetic computation for a more
straightened control flow. Here's a reduced example:

  __global__ void foo(int a, int b, int c, int d, int e, int n,
                      const int *input, int *output) {
    int sum = 0;
    for (int i = 0; i < n; ++i)
      sum += (((i ^ a) > b) && (((i | c ) ^ d) > e)) ? 0 : input[i];
    *output = sum;
  }

The select statement in the loop body translates to two branch instructions "if
((i ^ a) > b)" and "if (((i | c) ^ d) > e)" which share a common destination.
With the default threshold, SimplifyCFG is unable to fold them, because
computing the condition of the second branch "(i | c) ^ d > e" requires two
bonus instructions. With the threshold increased, SimplifyCFG can fold the two
branches so that the loop body contains only one branch, making the code
conceptually look like:

  sum += (((i ^ a) > b) & (((i | c ) ^ d) > e)) ? 0 : input[i];

Increasing the threshold significantly improves the performance of this
particular example. In the configuration where both conditions are guaranteed
to be true, increasing the threshold from 1 to 2 improves the performance by
18.24%. Even in the configuration where the first condition is false and the
second condition is true, which favors shortcuts, increasing the threshold from
1 to 2 still improves the performance by 4.35%.

We are still looking for a good threshold and maybe a better cost model than
just counting the number of bonus instructions. However, according to the above
numbers, we think it is at least worth adding a threshold to enable more
experiments and tuning. Let me know what you think. Thanks!

Test Plan: Added one test case to check the threshold is in effect

Reviewers: nadav, eliben, meheff, resistor, hfinkel

Reviewed By: hfinkel

Subscribers: hfinkel, llvm-commits

Differential Revision: http://reviews.llvm.org/D5529

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218711 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-30 22:23:38 +00:00
Michael Liao
35fdc092e0 Allow BB duplication threshold to be adjusted through JumpThreading's ctor
- BB duplication may not be desirable on targets where there is little or no
  branch penalty and code duplication needs strict control.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218375 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-24 04:59:06 +00:00
Hal Finkel
1d6c2d717d Add an AlignmentFromAssumptions Pass
This adds a ScalarEvolution-powered transformation that updates load, store and
memory intrinsic pointer alignments based on invariant((a+q) & b == 0)
expressions. Many of the simple cases we can get with ValueTracking, but we
still need something like this for the more complicated cases (such as those
with an offset) that require some algebra. Note that the optional third argument
of gcc's __builtin_assume_aligned provides exactly this kind of 'misalignment'
offset, for which this sort of logic is necessary.
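
For reference, a small C example of the builtin mentioned above (my
example; the third argument asserts that x lies 8 bytes past a 32-byte
boundary, i.e. (uintptr_t)x - 8 is 32-byte aligned):

  #include <stddef.h>
  double sum(const double *x, size_t n) {
    const double *p = (const double *)__builtin_assume_aligned(x, 32, 8);
    double s = 0.0;
    for (size_t i = 0; i < n; ++i)
      s += p[i];
    return s;
  }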

The primary motivation is to fixup alignments for vector loads/stores after
vectorization (and unrolling). This pass is added to the optimization pipeline
just after the SLP vectorizer runs (which, admittedly, does not preserve SE,
although I imagine it could).  Regardless, I actually don't think that the
preservation matters too much in this case: SE computes lazily, and this pass
won't issue any SE queries unless there are any assume intrinsics, so there
should be no real additional cost in the common case (SLP does preserve DT and
LoopInfo).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217344 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-07 20:05:11 +00:00
Gerolf Hoflehner
d94715e273 MergedLoadStoreMotion pass
Merges equivalent loads on both sides of a hammock/diamond
and hoists them into the header.
Merges equivalent stores on both sides of a hammock/diamond
and sinks them to the footer.
This can enable if-conversion and better tolerate load misses
and store operand latencies.
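
A sketch of the shape this targets (illustrative only):

  int f(int c, int *p, int *q) {
    int r;
    if (c) {         // hammock/diamond header
      r = *p + 1;    // load of *p on one side
      *q = r;        // store to *q on one side
    } else {
      r = *p - 1;    // equivalent load on the other side
      *q = r;        // equivalent store on the other side
    }
    return r;        // footer
  }

The load of *p can be hoisted into the header and the store to *q sunk
into the footer (with a phi of the stored values), leaving only the
+1/-1 in the branches.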



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213396 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-18 19:13:09 +00:00
Michael J. Spencer
8bfb46e790 Add LoadCombine pass.
This pass is disabled by default. Use -combine-loads to enable it in -O[1-3].

Differential revision: http://reviews.llvm.org/D3580

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209791 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-29 01:55:07 +00:00
Owen Anderson
866ed7f63f Make the LoopRotate pass's maximum header size configurable both programmatically
and via the command line, mirroring similar functionality in LoopUnroll.  In
situations where clients used custom unrolling thresholds, their intent could
previously be foiled by LoopRotate having a hardcoded threshold.
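
A sketch of the programmatic side; this assumes the post-patch
signature Pass *createLoopRotatePass(int MaxHeaderSize = -1), where -1
means "use the command-line default":

  #include "llvm/Transforms/Scalar.h"

  // Hypothetical client: pair a custom unrolling setup with a matching
  // rotation threshold instead of relying on the hardcoded one.
  llvm::Pass *P = llvm::createLoopRotatePass(/*MaxHeaderSize=*/32);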


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209617 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-26 08:58:51 +00:00
NAKAMURA Takumi
a1b1165f30 Reformat linefeeds.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209609 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-26 00:25:26 +00:00
NAKAMURA Takumi
a9b0742276 Trailing whitespace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209608 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-26 00:25:09 +00:00
Eli Bendersky
167a57ca45 Add an optimization that does CSE in a group of similar GEPs.
This optimization merges the common part of a group of GEPs, so we can compute
each pointer address by adding a simple offset to the common part.

The optimization is currently only enabled for the NVPTX backend, where it has
a large payoff on some benchmarks.
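
Roughly (an illustrative sketch), GEPs that differ only in a trailing
constant share their variable part:

  // &a[i][3] and &a[i][7] share the common part &a[i][0];
  // compute it once, then apply simple constant offsets.
  float g(float (*a)[16], int i) {
    float *base = &a[i][0];    // common part, computed once
    return base[3] + base[7];  // constant-offset accesses off the base
  }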

Review: http://reviews.llvm.org/D3462

Patch by Jingyue Wu.




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207783 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-01 18:38:36 +00:00
Craig Topper
4ba844388c [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206142 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-14 00:51:57 +00:00
Hal Finkel
6bbb01bbf8 Move partial/runtime unrolling late in the pipeline
The generic (concatenation) loop unroller is currently placed early in the
standard optimization pipeline. This is a good place to perform full unrolling,
but not the right place to perform partial/runtime unrolling. However, most
targets don't enable partial/runtime unrolling, so this never mattered.

That said, even some x86 cores benefit from partial/runtime unrolling of very
small loops, and follow-up commits will enable this. First, we need to move
partial/runtime unrolling late in the optimization pipeline (importantly, this
is after SLP and loop vectorization, as vectorization can drastically change
the size of a loop), while keeping the full unrolling where it is now. This
change does just that.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205264 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 23:23:51 +00:00
Mark Seaborn
9bb9615e1f Remove LowerInvoke's obsolete "-enable-correct-eh-support" option
This option caused LowerInvoke to generate code using SJLJ-based
exception handling, but there is no code left that interprets the
jmp_buf stack that the resulting code maintained (llvm.sjljeh.jblist).
This option has been obsolete for a while, and replaced by
SjLjEHPrepare.

This leaves the default behaviour of LowerInvoke, which is to convert
invokes to calls.

Differential Revision: http://llvm-reviews.chandlerc.com/D3136

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204388 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-20 19:54:47 +00:00
Diego Novillo
f05b45fdb2 Pass to emit DWARF path discriminators.
DWARF discriminators are used to distinguish multiple control flow paths
on the same source location. When this happens, instructions across
basic block boundaries will share the same debug location.

This pass detects this situation and creates a new lexical scope for one
of the two instructions. This lexical scope is a child scope of the
original and contains a new discriminator value. This discriminator is
then picked up by MCObjectStreamer::EmitDwarfLocDirective to be
written to the object file.
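
The classic case (an illustrative example) is one source line that
carries two control-flow paths, which need distinct discriminators for
a profile to tell them apart:

  if (c) x = f(); else x = g();  // one line, two basic blocks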

This fixes http://llvm.org/bugs/show_bug.cgi?id=18270.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202752 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-03 20:06:11 +00:00
Quentin Colombet
8048c44580 [CodeGenPrepare] Move CodeGenPrepare into lib/CodeGen.
CodeGenPrepare makes extensive use of TargetLowering, which is part of libLLVMCodeGen.
This is a layering violation that would eventually introduce a dependence on
CodeGen in ScalarOpts.

Move CodeGenPrepare into libLLVMCodeGen to avoid that.

Follow-up of <rdar://problem/15519855>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201912 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-22 00:07:45 +00:00
Juergen Ributzka
943ce55f39 Revert "Revert "Add Constant Hoisting Pass" (r200034)"
This reverts commit r200058 and adds the using directive for
ARMTargetTransformInfo to silence two g++ overload warnings.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200062 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-25 02:02:55 +00:00
Hans Wennborg
503793e834 Revert "Add Constant Hoisting Pass" (r200034)
This commit caused -Woverloaded-virtual warnings. The two new
TargetTransformInfo::getIntImmCost functions were only added to the superclass,
and to the X86 subclass. The other targets were not updated, and the
warning highlighted this by pointing out that e.g. ARMTTI::getIntImmCost was
hiding the two new getIntImmCost variants.

We could pacify the warning by adding "using TargetTransformInfo::getIntImmCost"
to the various subclasses, or turning it off, but I suspect that it's wrong to
leave the functions unimplemented in those targets. The default implementations
return TCC_Free, which I don't think is right e.g. for ARM.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200058 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-25 01:18:18 +00:00
Juergen Ributzka
96172cb4a4 Add Constant Hoisting Pass
Retry commit r200022 with a fix for the build bot errors. Constant expressions
have (unlike instructions) module scope use lists and therefore may have users
in different functions. The fix is to simply ignore these out-of-function uses.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200034 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-24 20:18:00 +00:00
Juergen Ributzka
dc6f9b9a4f Revert "Add Constant Hoisting Pass"
This reverts commit r200022 to unbreak the build bots.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200024 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-24 18:40:30 +00:00
Juergen Ributzka
fb282c68b7 Add Constant Hoisting Pass
This pass identifies expensive constants to hoist and coalesces them to
better prepare the IR for SelectionDAG-based code generation. This works around the
limitations of the basic-block-at-a-time approach.

First it scans all instructions for integer constants and calculates their
cost. If the constant can be folded into the instruction (the cost is
TCC_Free) or the cost is just a simple operation (TCC_Basic), then we don't
consider it expensive and leave it alone. This is the default behavior and
the default implementation of getIntImmCost will always return TCC_Free.

If the cost is more than TCC_Basic, then the integer constant can't be folded
into the instruction and it might be beneficial to hoist the constant.
Similar constants are coalesced to reduce register pressure and
materialization code.

When a constant is hoisted, it is also hidden behind a bitcast to force it to
be live-out of the basic block. Otherwise the constant would be just
duplicated and each basic block would have its own copy in the SelectionDAG.
The SelectionDAG recognizes such constants as opaque and doesn't perform
certain transformations on them, which would create a new expensive constant.

This optimization is only applied to integer constants in instructions and
simple (that is, not nested) constant cast expressions. For example:
%0 = load i64* inttoptr (i64 big_constant to i64*)
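
Conceptually (my sketch, not from the patch), hoisting and coalescing
turn repeated materializations into a single live value:

  // before: each use re-materializes the expensive immediate
  uint64_t a = x + 0x123456789ABCDEF0ULL;
  uint64_t b = y + 0x123456789ABCDEF0ULL;

  // after: materialize once (hidden behind a bitcast in the IR)
  uint64_t big = 0x123456789ABCDEF0ULL;
  uint64_t a2 = x + big;
  uint64_t b2 = y + big;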

Reviewed by Eric

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200022 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-24 18:23:08 +00:00
Richard Sandiford
0f778794c8 Add a Scalarizer pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195471 91177308-0d34-0410-b5e6-96231b3b80d8
2013-11-22 16:58:05 +00:00
Hal Finkel
bebe48dbfe Add a loop rerolling pass
This adds a loop rerolling pass: the opposite of (partial) loop unrolling. The
transformation aims to take loops like this:

for (int i = 0; i < 3200; i += 5) {
  a[i]     += alpha * b[i];
  a[i + 1] += alpha * b[i + 1];
  a[i + 2] += alpha * b[i + 2];
  a[i + 3] += alpha * b[i + 3];
  a[i + 4] += alpha * b[i + 4];
}

and turn them into this:

for (int i = 0; i < 3200; ++i) {
  a[i] += alpha * b[i];
}

and loops like this:

for (int i = 0; i < 500; ++i) {
  x[3*i] = foo(0);
  x[3*i+1] = foo(0);
  x[3*i+2] = foo(0);
}

and turn them into this:

for (int i = 0; i < 1500; ++i) {
  x[i] = foo(0);
}

There are two motivations for this transformation:

  1. Code-size reduction (especially relevant, obviously, when compiling for
code size).

  2. Providing greater choice to the loop vectorizer (and generic unroller) to
choose the unrolling factor (and a better ability to vectorize). The loop
vectorizer can take vector lengths and register pressure into account when
choosing an unrolling factor, for example, and a pre-unrolled loop limits that
choice. This is especially problematic if the manual unrolling was optimized
for a machine different from the current target.

The current implementation is limited to single basic-block loops only. The
rerolling recognition should work regardless of how the loop iterations are
intermixed within the loop body (subject to dependency and side-effect
constraints), but the significant restriction is that the order of the
instructions in each iteration must be identical. This seems sufficient to
capture all current use cases.

This pass is not currently enabled by default at any optimization level.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194939 91177308-0d34-0410-b5e6-96231b3b80d8
2013-11-16 23:59:05 +00:00
Diego Novillo
563b29f8db SampleProfileLoader pass. Initial setup.
This adds a new scalar pass that reads a file with samples generated
by 'perf' at runtime. The samples read from the profile are
incorporated and emitted as IR metadata reflecting that profile.

The profile file is assumed to have been generated by an external
profile source. The profile information is converted into IR metadata,
which is later used by the analysis routines to estimate block
frequencies, edge weights and other related data.

External profile information files have no fixed format, each profiler
is free to define its own. This includes both the on-disk representation
of the profile and the kind of profile information stored in the file.
A common kind of profile is based on sampling (e.g., perf), which
essentially counts how many times each line of the program has been
executed during the run.

The SampleProfileLoader pass is organized as a scalar transformation.
On startup, it reads the file given in -sample-profile-file to
determine what kind of profile it contains.  This file is assumed to
contain profile information for the whole application. The profile
data in the file is read and incorporated into the internal state of
the corresponding profiler.

To facilitate testing, I've organized the profilers to support two file
formats: text and native. The native format is whatever on-disk
representation the profiler wants to support; I think this will mostly
be bitcode files, but it could be anything the profiler wants to
support. To do this, every profiler must implement the
SampleProfile::loadNative() function.

The text format is mostly meant for debugging. Records are separated by
newlines, but each profiler is free to interpret records as it sees fit.
Profilers must implement the SampleProfile::loadText() function.

Finally, the pass will call SampleProfile::emitAnnotations() for each
function in the current translation unit. This function needs to
translate the loaded profile into IR metadata, which the analyzer will
later be able to use.

This patch implements the first steps towards the above design. I've
implemented a sample-based flat profiler. The format of the profile is
fairly simplistic. Each sampled function contains a list of relative
line locations (from the start of the function) together with a count
representing how many samples were collected at that line during
execution. I generate this profile using perf and a separate converter
tool.
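
To make that concrete, a hypothetical profile in the spirit of the
description above (the exact syntax is whatever the text loader
defines; this is not reproduced from the patch):

  foo:12000      # function name : total samples
  1: 100         # relative line 1 -> 100 samples
  4: 8000        # relative line 4 -> 8000 samples (the hot line)
  5: 50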

Currently, I have only implemented a text format for these profiles. I
am interested in initial feedback to the whole approach before I send
the other parts of the implementation for review.

This patch implements:

- The SampleProfileLoader pass.
- The base ExternalProfile class with the core interface.
- A SampleProfile sub-class using the above interface. The profiler
  generates branch weight metadata on every branch instruction that
  matches the profiles.
- A text loader class to assist the implementation of
  SampleProfile::loadText().
- Basic unit tests for the pass.

Additionally, the patch uses profile information to compute branch
weights based on instruction samples. The conversion is fairly
simplistic:

Given a multi-way branch instruction, it calculates the weight of
each branch based on the maximum sample count gathered from each
target basic block.
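
For example (hypothetical counts): if the three targets of a switch
have maximum sample counts of 40, 10 and 0, the branch weights become
40, 10 and 0, marking the first target as the hottest edge.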

Note that this assignment of branch weights is somewhat lossy and can be
misleading. If a basic block has more than one incoming branch, all the
incoming branches will get the same weight. In reality, it may be that
only one of them is the most heavily taken branch.

I will adjust this assignment in subsequent patches.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194566 91177308-0d34-0410-b5e6-96231b3b80d8
2013-11-13 12:22:21 +00:00
Hal Finkel
c88eb08d02 Add a runtime unrolling parameter to the LoopUnroll pass constructor
As with the other loop unrolling parameters (the unrolling threshold, partial
unrolling, etc.) runtime unrolling can now also be controlled via the
constructor. This will be necessary for moving non-trivial unrolling late in
the pass manager (after loop vectorization).

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194027 91177308-0d34-0410-b5e6-96231b3b80d8
2013-11-05 00:08:03 +00:00
Chandler Carruth
3748de6e2d Remove the long, long defunct IR block placement pass.
This pass was based on the previous (essentially unused) profiling
infrastructure and the assumption that by ordering the basic blocks at
the IR level in a particular way, the correct layout would happen in the
end. This sometimes worked, and mostly didn't. It also was a really
naive implementation of the classical paper that dates from when branch
predictors were primarily directional and when loop structure wasn't
commonly available. It also didn't factor non-fallthrough branches and
other machine-level details into the equation.

Anyways, for all of these reasons and more, I wrote
MachineBlockPlacement, which completely supersedes this pass. It both
uses modern profile information infrastructure, and actually works. =]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190748 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-14 09:28:14 +00:00
Richard Sandiford
a8a7099c18 Turn MipsOptimizeMathLibCalls into a target-independent scalar transform
...so that it can be used for z too.  Most of the code is the same.
The only real change is to use TargetTransformInfo to test when a sqrt
instruction is available.

The pass is opt-in because at the moment it only handles sqrt.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189097 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-23 10:27:02 +00:00
Tom Stellard
01d7203ef8 Factor FlattenCFG out from SimplifyCFG
Patch by: Mei Ye

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187764 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-06 02:43:45 +00:00
Tom Stellard
57e6b2d1f3 SimplifyCFG: Use parallel-and and parallel-or mode to consolidate branch conditions
Merge consecutive if-regions if they contain identical statements.
Both transformations reduce the number of branches.  The transformation
is guarded by a target hook, and is currently enabled only for +R600,
but the correctness has been tested on the X86 target using a variety of
CPU benchmarks.
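
A sketch of the shapes involved (illustrative only; the pass verifies
that the conditions are safe to evaluate unconditionally):

  // parallel-or: consecutive if-regions with identical statements
  if (a) { *p = 0; }
  if (b) { *p = 0; }
  // become a single branch:
  if (a | b) { *p = 0; }

  // parallel-and: a nested condition collapses the same way
  if (a) { if (b) { *p = 1; } }
  // becomes:
  if (a & b) { *p = 1; }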

Patch by: Mei Ye

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187278 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-27 00:01:07 +00:00
Meador Inge
be87bce32b Remove the simplify-libcalls pass (finally)
This commit completely removes what is left of the simplify-libcalls
pass.  All of the functionality has now been migrated to the instcombine
and functionattrs passes.  The following C API functions are now NOPs:

  1. LLVMAddSimplifyLibCallsPass
  2. LLVMPassManagerBuilderSetDisableSimplifyLibCalls

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184459 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-20 19:48:07 +00:00
Bill Wendling
f9fd58a44b Access the TargetLoweringInfo from the TargetMachine object instead of caching it. The TLI may change between functions. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184352 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-19 21:07:11 +00:00
Matt Arsenault
ad966ea7a8 Move StructurizeCFG out of R600 to generic Transforms.
Register it with the PassManager.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184343 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-19 20:18:24 +00:00
Nick Lewycky
c14f1c4587 Fix typo in comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176997 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-14 01:26:17 +00:00
Michael Gottesman
24c4898973 Extracted ObjCARC.cpp into its own library libLLVMObjCARCOpts in preparation for refactoring the ARC Optimizer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173647 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-28 01:35:51 +00:00
Chandler Carruth
e4ba75f43e Switch the SCEV expander and LoopStrengthReduce to use
TargetTransformInfo rather than TargetLowering, removing one of the
primary instances of the layering violation of Transforms depending
directly on Target.

This is a really big deal because LSR used to be a "special" pass that
could only be tested fully using llc and by looking at the full output
of it. It also couldn't run with any other loop passes because it had to
be created by the backend. No longer is this true. LSR is now just
a normal pass and we should probably lift the creation of LSR out of
lib/CodeGen/Passes.cpp and into the PassManagerBuilder. =] I've not done
this, or updated all of the tests to use opt and a triple, because
I suspect someone more familiar with LSR would do a better job. This
change should be essentially without functional impact for normal
compilations, and only change behavior of targetless compilations.

The conversion required changing all of the LSR code to refer to the TTI
interfaces, which fortunately are very similar to TargetLowering's
interfaces. However, it also allowed us to *always* expect to have some
implementation around. I've pushed that simplification through the pass,
and leveraged it to simplify code somewhat. It required some test
updates for one of two things: either we used to skip some checks
altogether but now we get the default "no" answer for them, or we used
to have no information about the target and now we do have some.

I've also started the process of removing AddrMode, as the TTI interface
doesn't use it any longer. In some cases this simplifies code, and in
others it adds some complexity, but I think it's not a bad tradeoff even
there. Subsequent patches will try to clean this up even further and use
other (more appropriate) abstractions.

Yet again, almost all of the formatting changes brought to you by
clang-format. =]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171735 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-07 14:41:08 +00:00
Nadav Rotem
a04a4a79ea revert r166264 because the LTO build is still failing
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166340 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-19 21:28:43 +00:00
Nadav Rotem
725f1d1280 recommit the patch that makes LSR and LowerInvoke use the TargetTransform interface.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166264 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-19 04:27:49 +00:00
Bob Wilson
3b9a911efc Temporarily revert the TargetTransform changes.
The TargetTransform changes are breaking LTO bootstraps of clang.  I am
working with Nadav to figure out the problem, but I am reverting it for now
to get our buildbots working.

This reverts svn commits: 165665 165669 165670 165786 165787 165997
and I have also reverted clang svn 165741

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166168 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-18 05:43:52 +00:00
Nadav Rotem
e3d0e86919 Add a new interface to allow IR-level passes to access codegen-specific information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165665 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-10 22:04:55 +00:00
Chandler Carruth
1c8db50a9a Port the SSAUpdater-based promotion logic from the old SROA pass to the
new one, and add support for running the new pass in that mode and in
that slot of the pass manager. With this the new pass can completely
replace the old one within the pipeline.

The strategy for enabling or disabling the SSAUpdater logic is to do it
by making the requirement of the domtree analysis optional. By default,
it is required and we get the standard mem2reg approach. This is usually
the desired strategy when run in stand-alone situations. Within the
CGSCC pass manager, we disable requiring the domtree analysis and
consequently trigger the fallback to the SSAUpdater promotion.

In theory this would allow the pass to re-use a domtree if one happened
to be available even when run in a mode that doesn't require it. In
practice, it lets us have a single pass rather than two which was
simpler for me to wrap my head around.

There is a hidden flag to force the use of the SSAUpdater code path for
the purpose of testing. The primary testing strategy is just to run the
existing tests through that path. One notable difference is that it has
custom code to handle lifetime markers, and one of the tests has been
enhanced to exercise that code.

This has survived a bootstrap and the test suite without serious
correctness issues, however my run of the test suite produced *very*
alarming performance numbers. I don't entirely understand or trust them
though, so more investigation is on-going.

To aid my understanding of the performance impact of the new SROA now
that it runs throughout the optimization pipeline, I'm enabling it by
default in this commit, and will disable it again once the LNT bots have
picked up one iteration with it. I want to get those bots (which are
much more stable) to evaluate the impact of the change before I jump to
any conclusions.

NOTE: Several Clang tests will fail because they run -O3 and check the
result's order of output. They'll go back to passing once I disable it
again.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163965 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-15 11:43:14 +00:00
Chandler Carruth
713aa9431d Introduce a new SROA implementation.
This is essentially a ground up re-think of the SROA pass in LLVM. It
was initially inspired by a few problems with the existing pass:
- It is subject to the bane of my existence in optimizations: arbitrary
  thresholds.
- It is overly conservative about which constructs can be split and
  promoted.
- The vector value replacement aspect is separated from the splitting
  logic, missing many opportunities where splitting and vector value
  formation can work together.
- The splitting is entirely based around the underlying type of the
  alloca, despite this type often having little to do with the reality
  of how that memory is used. This is especially prevalent with unions
  and base classes where we tail-pack derived members.
- When splitting fails (often due to the thresholds), the vector value
  replacement (again because it is separate) can kick in for
  preposterous cases where we simply should have split the value. This
  results in forming i1024 and i2048 integer "bit vectors" that
  tremendously slow down subsequent IR optimizations (due to large
  APInts) and impede the backend's lowering.

The new design takes an approach that fundamentally is not susceptible
to many of these problems. It is the result of a discussion between
myself and Duncan Sands over IRC about how to preemptively avoid these
types of problems and how to do SROA in a more principled way. Since
then, it has evolved and grown, but this remains an important aspect: it
fixes real world problems with the SROA process today.

First, the transform of SROA actually has little to do with replacement.
It has more to do with splitting. The goal is to take an aggregate
alloca and form a composition of scalar allocas which can replace it and
will be most suitable to the eventual replacement by scalar SSA values.
The actual replacement is performed by mem2reg (and in the future
SSAUpdater).

The splitting is divided into four phases. The first phase is an
analysis of the uses of the alloca. This phase recursively walks uses,
building up a dense datastructure representing the ranges of the
alloca's memory actually used and checking for uses which inhibit any
aspects of the transform such as the escape of a pointer.

Once we have a mapping of the ranges of the alloca used by individual
operations, we compute a partitioning of the used ranges. Some uses are
inherently splittable (such as memcpy and memset), while scalar uses are
not splittable. The goal is to build a partitioning that has the minimum
number of splits while placing each unsplittable use in its own
partition. Overlapping unsplittable uses belong to the same partition.
This is the target split of the aggregate alloca, and it maximizes the
number of scalar accesses which become accesses to their own alloca and
candidates for promotion.

Third, we re-walk the uses of the alloca and assign each specific memory
access to all the partitions touched so that we have dense use-lists for
each partition.

Finally, we build a new, smaller alloca for each partition and rewrite
each use of that partition to use the new alloca. During this phase the
pass will also work very hard to transform uses of an alloca into a form
suitable for promotion, including forming vector operations, speculating
loads through PHI nodes and selects, etc.

After splitting is complete, each newly refined alloca that is
a candidate for promotion to a scalar SSA value is run through mem2reg.
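
At the source level the net effect looks like this (an illustrative
sketch):

  struct Pair { int a; int b; };
  int f() {
    struct Pair p;     // one aggregate alloca ...
    p.a = 1;
    p.b = 2;
    return p.a + p.b;  // ... split into two scalar allocas, which
  }                    // mem2reg then promotes to plain SSA values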

There are lots of reasonably detailed comments in the source code about
the design and algorithms, and I'm going to be trying to improve them in
subsequent commits to ensure this is well documented, as the new pass is
in many ways more complex than the old one.

Some of this is still a WIP, but the current state is reasonably stable.
It has passed bootstrap, the nightly test suite, and Duncan has run it
successfully through the ACATS and DragonEgg test suites. That said, it
remains behind a default-off flag until the last few pieces are in
place, and full testing can be done.

Specific areas I'm looking at next:
- Improved comments and some code cleanup from reviews.
- SSAUpdater and enabling this pass inside the CGSCC pass manager.
- Some datastructure tuning and compile-time measurements.
- More aggressive FCA splitting and vector formation.

Many thanks to Duncan Sands for the thorough final review, as well as
Benjamin Kramer for lots of review during the process of writing this
pass, and Daniel Berlin for reviewing the data structures and algorithms
and general theory of the pass. Also, several other people on IRC, over
lunch tables, etc for lots of feedback and advice.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163883 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-14 09:22:59 +00:00
Nuno Lopes
78435f6bb7 move the bounds checking pass to the instrumentation folder, where it belongs. I dunno why in the world I dropped it in the Scalar folder in the first place.
No functionality change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160587 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-20 22:39:33 +00:00
Nadav Rotem
2114a8aaba Add a number of threshold arguments to the SRA pass.
A patch by Tom Stellard with minor changes.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158918 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-21 13:44:31 +00:00
Nuno Lopes
5c525b59d5 add a new pass to instrument loads and stores for run-time bounds checking
move EmitGEPOffset from InstCombine to Transforms/Utils/Local.h

(a draft of this) patch reviewed by Andrew, thanks.
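
Conceptually, the inserted guard looks like this (a sketch; the real
pass computes object sizes and offsets at the IR level):

  #include <stddef.h>
  void store_at(char *obj, size_t obj_size, size_t i) {
    if (i >= obj_size)    // offset + access size exceeds the object
      __builtin_trap();
    obj[i] = 0;
  }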

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157261 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-22 17:19:09 +00:00
Dan Gohman
2f6263c96a Add a new ObjC ARC optimization pass to eliminate unneeded
autorelease push+pop pairs.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148330 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-17 20:52:24 +00:00
Eli Friedman
0572c7d31e Remove reference to dead GEPSplitterPass. PR11506.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146195 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 22:28:17 +00:00
Devang Patel
827454e6e2 svn mv Target/ARM/ARMGlobalMerge.cpp Transforms/Scalar/GlobalMerge.cpp
There is no reason to have a simple IR-level pass in lib/Target.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@142200 91177308-0d34-0410-b5e6-96231b3b80d8
2011-10-17 17:17:43 +00:00
Rafael Espindola
f940a1a869 Remove the old tail duplication pass. It is not used and is unable to update
SSA, so it has to be run really early in the pipeline. Any replacement
should probably use the SSAUpdater.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138841 91177308-0d34-0410-b5e6-96231b3b80d8
2011-08-30 23:03:45 +00:00