Commit Graph

1572 Commits

Author SHA1 Message Date
Chandler Carruth
d6fc26217e Add two statistics to help track how we are computing the inline cost.
Yea, 'NumCallerCallersAnalyzed' isn't a great name, suggestions welcome.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154492 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-11 10:15:10 +00:00
Bill Wendling
3197b4453d Add an option to turn off the expensive GVN load PRE part of GVN.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153902 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-02 22:16:50 +00:00
Chandler Carruth
dafe48e230 Belatedly address some code review from Chris.
As a side note, I really dislike array_pod_sort... Do we really still
care about any STL implementations that get this so wrong? Does libc++?

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153834 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01 10:41:24 +00:00
Chandler Carruth
6052eef8bd Fix a pretty scary bug I introduced into the always inliner with
a single missing character. Somehow, this had gone untested. I've added
tests for returns-twice logic specifically with the always-inliner that
would have caught this, and fixed the bug.

Thanks to Matt for the careful review and spotting this!!! =D

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153832 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01 10:21:05 +00:00
Chandler Carruth
b594a84df5 Give the always-inliner its own custom filter. It shouldn't have to pay
the very high overhead of the complex inline cost analysis when all it
wants to do is detect three patterns which must not be inlined. Comment
the code, clean it up, and leave some hints about possible performance
improvements if this ever shows up on a profile.

Moving this off of the (now more expensive) inline cost analysis is
particularly important because we have to run this inliner even at -O0.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153814 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 13:17:18 +00:00
Chandler Carruth
45de584b4f Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153813 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 12:48:08 +00:00
Chandler Carruth
f2286b0152 Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost. (A sketch
  of this walk follows below.)
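
Here is that sketch. Block, Instr, costOf, and provenDead are hypothetical
stand-ins for the real analysis machinery; it shows the shape of the walk,
not the patch's actual code.

#include <deque>
#include <map>
#include <set>
#include <vector>

struct Instr;                                   // hypothetical instruction
struct Block { std::vector<Instr*> Insts; std::vector<Block*> Succs; };

typedef std::map<Instr*, Instr*> SimpMap;       // per-callsite simplifications

static int costOf(Instr*, const SimpMap&) { return 1; }                  // placeholder
static bool provenDead(Block*, Block*, const SimpMap&) { return false; } // placeholder

// Returns false as soon as the accumulated cost exceeds Threshold.
static bool analyzeCall(Block *Entry, SimpMap &Simplified, int Threshold) {
  int Cost = 0;
  std::deque<Block*> Worklist;                  // FIFO => breadth-first order
  std::set<Block*> Visited;
  Worklist.push_back(Entry);
  Visited.insert(Entry);
  while (!Worklist.empty()) {
    Block *BB = Worklist.front();               // pop from the *front*
    Worklist.pop_front();
    for (Instr *I : BB->Insts) {
      // Re-map I's operands through Simplified and record any new
      // simplification of I back into Simplified (both elided here),
      // then charge its cost.
      Cost += costOf(I, Simplified);
      if (Cost > Threshold)
        return false;                           // bound the analysis
    }
    // Only enqueue successors not proven dead by the simplified terminator.
    for (Block *Succ : BB->Succs)
      if (!provenDead(BB, Succ, Simplified) && Visited.insert(Succ).second)
        Worklist.push_back(Succ);
  }
  return true;
}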

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch; it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153812 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 12:42:41 +00:00
Benjamin Kramer
1955c9cf46 Internalize: Remove the reference to @llvm.noinline; it was replaced with the noinline attribute a long time ago.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153806 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 11:03:47 +00:00
Benjamin Kramer
c1ea16ec43 GlobalOpt: If we have an inbounds GEP from a ConstantAggregateZero global that we just determined to be constant, replace all loads from it with a zero value.
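
A small illustration (mine, not the patch's test): once GlobalOpt proves that
the zero-initialized global below is never stored to and marks it constant,
the load through the inbounds GEP can simply be replaced with 0.

static int Table[64];                              // zeroinitializer, never written
int lookup(unsigned I) { return Table[I & 63]; }   // this load now folds to 0
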
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153576 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-28 14:50:09 +00:00
Chandler Carruth
dacffb6679 Make a seemingly tiny change to the inliner and fix the generated code
size bloat. Unfortunately, I expect this to disable the majority of the
benefit from r152737. I'm hopeful at least that it will fix PR12345. To
explain this requires... quite a bit of backstory I'm afraid.

TL;DR: The change in r152737 actually did The Wrong Thing for
linkonce-odr functions. This change makes it do the right thing. The
benefits we saw were simple luck, not any actual strategy. Benchmark
numbers after a mini-blog-post so that I've written down my thoughts on
why all of this works and doesn't work...

To understand what's going on here, you have to understand how the
"bottom-up" inliner actually works. There are two fundamental modes to
the inliner:

1) Standard fixed-cost bottom-up inlining. This is the mode we usually
   think about. It walks from the bottom of the CFG up to the top,
   looking at callsites, taking information about the callsite and the
   called function and computing the expected cost of inlining into that
   callsite. If the cost is under a fixed threshold, it inlines. It's
   a touch more complicated than that due to all the bonuses, weights,
   etc. Inlining the last callsite to an internal function gets higher
   weight, etc. But essentially, this is the mode of operation.

2) Deferred bottom-up inlining (a term I just made up). This is the
   interesting mode for this patch and r152737. Initially, this works
   just like mode #1, but once we have the cost of inlining into the
   callsite, we don't just compare it with a fixed threshold. First, we
   check something else. Let's give some names to the entities at this
   point, or we'll end up hopelessly confused. We're considering
   inlining a function 'A' into its callsite within a function 'B'. We
   want to check whether 'B' has any callers, and whether it might be
   inlined into those callers. If so, we also check whether inlining 'A'
   into 'B' would block any of the opportunities for inlining 'B' into
   its callers. We take the sum of the costs of inlining 'B' into its
   callers where that inlining would be blocked by inlining 'A' into
   'B', and if that cost is less than the cost of inlining 'A' into 'B',
   then we skip inlining 'A' into 'B'. (A small sketch of this check
   follows below.)
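
   Here is that sketch in a tiny, self-contained form. CallerSite and
   shouldDeferInlining are made-up names, not the inliner's real data
   structures; only the comparison mirrors the description above.

   #include <vector>

   struct CallerSite {
     int CostOfInliningBHere;   // cost of inlining 'B' into this caller of 'B'
     bool BlockedByInliningA;   // would inlining 'A' into 'B' block this one?
   };

   // Return true to skip (defer) inlining 'A' into 'B' for now.
   bool shouldDeferInlining(int CostOfInliningAIntoB,
                            const std::vector<CallerSite> &CallersOfB) {
     int BlockedCost = 0;
     for (const CallerSite &CS : CallersOfB)
       if (CS.BlockedByInliningA)
         BlockedCost += CS.CostOfInliningBHere;
     // Defer when the 'B'-into-caller inlinings we would block are cheaper.
     return BlockedCost < CostOfInliningAIntoB;
   }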

Now, in order for #2 to make sense, we have to have some confidence that
we will actually have the opportunity to inline 'B' into its callers
when cheaper, *and* that we'll be able to revisit the decision and
inline 'A' into 'B' if that ever becomes the correct tradeoff. This
often isn't true for external functions -- we can see very few of their
callers, and we won't be able to re-consider inlining 'A' into 'B' if
'B' is external when we finally see more callers of 'B'. There are two
cases where we believe this to be true for C/C++ code: functions local
to a translation unit, and functions with an inline definition in every
translation unit which uses them. These are represented as internal
linkage and linkonce-odr (resp.) in LLVM. I enabled this logic for
linkonce-odr in r152737.

Unfortunately, when I did that, I also introduced a subtle bug. There
was an implicit assumption that the last caller of the function within
the TU was the last caller of the function in the program. We want to
bonus the last caller of the function in the program by a huge amount
for inlining because inlining that callsite has very little cost.
Unfortunately, the last caller in the TU of a linkonce-odr function is
*not* the last caller in the program, and so we don't want to apply this
bonus. If we do, we can apply it to one callsite *per-TU*. Because of
the way deferred inlining works, when it sees this bonus applied to one
callsite in the TU for 'B', it decides that inlining 'B' is of the
*utmost* importance just so we can get that final bonus. It then
proceeds to essentially force deferred inlining regardless of the actual
cost tradeoff.

The result? PR12345: code bloat, code bloat, code bloat. Another result
is getting *damn* lucky on a few benchmarks, and the over-inlining
exposing critically important optimizations. I would very much like
a list of benchmarks that regress after this change goes in, with
bitcode before and after. This will help me greatly understand what
opportunities the current cost analysis is missing.

Initial benchmark numbers look very good. WebKit files that exhibited
the worst of PR12345 went from growing to shrinking compared to Clang
with r152737 reverted.

- Bootstrapped Clang is 3% smaller with this change.
- Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4%
  faster with this change.

Please let me know about any other performance impact you see. Thanks to
Nico for reporting and urging me to actually fix, Richard Smith, Duncan
Sands, Manuel Klimek, and Benjamin Kramer for talking through the issues
today.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153506 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-27 10:48:28 +00:00
Chandler Carruth
d54f9a4c3b Move the instruction simplification of callsite arguments in the inliner
to instead rely on much more generic and powerful instruction
simplification in the function cloner (and thus inliner).

This teaches the pruning function cloner to use instsimplify rather than
just the constant folder to fold values during cloning. This can
simplify a large number of things that constant folding alone cannot
begin to touch. For example, it will realize that 'or' and 'and'
instructions with certain constant operands actually become constants
regardless of what their other operand is. It also can thread back
through the caller to perform simplifications that are only possible by
looking up a few levels. In particular, GEPs and pointer testing tend to
fold much more heavily with this change.
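
As a small illustration (mine, not the patch's code) of the 'or' case: once
the constant argument is mapped in during cloning, instsimplify folds the
cloned body without needing to know the other operand.

static int setBits(int X, int Mask) {
  return X | Mask;         // with Mask == -1 this is -1 regardless of X
}

int caller(int X) {
  return setBits(X, -1);   // the cloned body simplifies straight to -1
}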

This should (in some cases) have a positive impact on compile times with
optimizations on because the inliner itself will simply avoid cloning
a great deal of code. It already attempted to prune proven-dead code,
but now it will use the stronger simplifications to prove more code
dead.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153403 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-25 04:03:40 +00:00
Kostya Serebryany
1db394921b add EP_OptimizerLast extension point
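
A minimal usage sketch; createMyPass is a hypothetical pass factory, and the
header paths are the ones I believe were current at the time:

#include "llvm/PassManager.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

using namespace llvm;

Pass *createMyPass();   // hypothetical

static void addMyPass(const PassManagerBuilder &, PassManagerBase &PM) {
  PM.add(createMyPass());
}

void configure(PassManagerBuilder &Builder) {
  // Schedules MyPass to run after all the other optimization passes.
  Builder.addExtension(PassManagerBuilder::EP_OptimizerLast, addMyPass);
}
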
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153353 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-23 23:22:59 +00:00
Chandler Carruth
b2442fc760 Rip out support for 'llvm.noinline'. This thing has a strange history...
It was added in 2007 as the first cut at supporting no-inline
attributes, but we didn't have function attributes of any form at the
time. However, it was added without any mention in the LangRef or other
documentation.

Later on, in 2008, Devang added function notes for 'inline=never' and
then turned them into proper function attributes. From that point
onward, as far as I can tell, the world moved on, and no one has touched
'llvm.noinline' in any meaningful way since.

Its time has now come. We have had better mechanisms for doing this for
a long time, all the frontends I'm aware of use them, and this is just
holding back progress. Given that it was never a documented feature of
the IR, I've provided no auto-upgrade support. If people know of real,
in-the-wild bitcode that relies on this, yell at me and I'll add it, but
I *seriously* doubt anyone cares.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152904 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-16 06:10:15 +00:00
Chandler Carruth
f91f5af802 Start removing the use of an ad-hoc 'never inline' set and instead
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.

Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.

The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.

The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern, we were just needlessly paying the uniquing cost on every
insert.
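
In generic form (not the pass's actual code), that sort+unique pattern is just:

#include <algorithm>
#include <vector>

// Insert freely during the pass, then deduplicate once when finalizing,
// instead of paying a set's uniquing cost on every insert.
void uniquify(std::vector<int> &Items) {
  std::sort(Items.begin(), Items.end());
  Items.erase(std::unique(Items.begin(), Items.end()), Items.end());
}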

This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.

I believe there is no functional change here, but yell if you spot one.
None are intended.

Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152903 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-16 06:10:13 +00:00
Chandler Carruth
b16117c368 Change where we enable the heuristic that delays inlining into functions
which are small enough to themselves be inlined. Delaying in this manner
can be harmful if the function is ineligible for inlining in some (or
many) contexts as it pessimizes the code of the function itself in the
event that inlining does not eventually happen.

Previously the check was written to only do this delaying of inlining
for static functions in the hope that they could be entirely deleted and
in the knowledge that all callers of static functions will have the
opportunity to inline if it is in fact profitable. However, with C++ we
get two other important sources of functions where the definition is
always available for inlining: inline functions and templated functions.
This patch generalizes the inliner to allow linkonce-ODR (the linkage
such C++ routines receive) to also qualify for this delay-based
inlining.
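
For example (my illustration, not from the patch), a header-defined inline
function is emitted as a linkonce_odr definition in every translation unit
that uses it, so each caller sees a body it could inline:

// header.h, included from several .cpp files:
inline int twice(int X) { return 2 * X; }   // linkonce_odr in each including TU

// a.cpp (and b.cpp, ...) simply call it:
int useA(int X) { return twice(X); }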

Benchmarking across a range of large real-world applications shows
roughly 2% size increase across the board, but an average speedup of
about 0.5%. Some benchmarks improved by over 2%, and the 'clang' binary
itself (when bootstrapped with this feature) shows a 1% -O0 performance
improvement when run over all Sema, Lex, and Parse source code smashed
into a single file. A clean re-build of Clang+LLVM with a bootstrapped
Clang shows approximately 2% improvement, but that measurement is often
noisy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152737 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-14 20:16:41 +00:00
Dan Gohman
f1ce79f3c3 Teach globalopt how to evaluate an invoke with a non-void return type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152634 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-13 18:01:37 +00:00
Chandler Carruth
6c0b3ac8ea When inlining a function and adding its inner call sites to the
candidate set for subsequent inlining, try to simplify the arguments to
the inner call site now that inlining has been performed.

The goal here is to propagate and fold constants through deeply nested
call chains. Without doing this, we lose the inliner bonus that should
be applied because the arguments don't match the exact pattern the cost
estimator uses.
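
A small illustration (not the patch's test case): once outer() is inlined
into caller(), the argument of the newly exposed call to inner() simplifies
to the constant 3, which is exactly the pattern the cost estimator rewards.

static int inner(int X) { return X * X; }
static int outer(int X) { return inner(X + 1); }
int caller() { return outer(2); }   // after inlining outer(): inner(3)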

Reviewed on IRC by Benjamin Kramer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152556 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-12 11:19:33 +00:00
Stepan Dyatkovskiy
c10fa6c801 Took into account Duncan's comments on r149481, dated 2nd Feb 2012:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html

Implemented CaseIterator; it solves almost all of the described issues: we no longer need to mix operand/case/successor indexing. The base iterator class is implemented as a template since it may be initialized either from "const SwitchInst*" or from "SwitchInst*".

ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and the case value.

Using the iterator lets us remove the resolveXXXX methods entirely. All indexing conversions are done automatically inside the iterator's getters.

The main way to use the iterator looks like this:
SwitchInst *SI = ... // initialize it somehow

for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}

If you want to convert a case number to a TerminatorInst successor index, just use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.

There are also related changes in llvm-clients: klee and clang.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152297 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-08 07:06:20 +00:00
Benjamin Kramer
3bbf2b6548 Plug a memory leak in GlobalOpt.
Found by valgrind.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151525 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-27 12:48:24 +00:00
Chad Rosier
dfba3ad882 Add comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151431 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-25 03:07:57 +00:00
Chad Rosier
fa086f1f00 Add support for disabling llvm.lifetime intrinsics in the AlwaysInliner. These
are optimization hints, but at -O0 we're not optimizing.  This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151429 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-25 02:56:01 +00:00
Chad Rosier
ff16eb64f5 Fix indentation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151420 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-25 01:10:59 +00:00
Duncan Sands
4b794f8191 GCC fails to understand that NextBB is always initialized if EvaluateBlock
returns 'true' and emits a warning.  Help it out.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151242 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-23 08:23:06 +00:00
Nick Lewycky
a641c07828 Use the target-aware constant folder on expressions to improve the chance
they'll be simple enough to simulate, and to reduce the chance we'll encounter
equal but different simple pointer constants.

This removes the symptoms from PR11352 but is not a full fix. A proper fix would
either require a guarantee that two constant objects we simulate are folded
when equal, or a different way of handling equal pointers (i.e., trying a
constantexpr icmp on them to see whether we know they're equal or non-equal or
unsure).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151093 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-21 22:08:06 +00:00
Nick Lewycky
0ef0557ab5 Check for the correct size in the invariant marker.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151003 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-20 23:32:26 +00:00
Nick Lewycky
7fa7677705 Rename class Evaluate to Evaluator and put it in an anonymous namespace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150947 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-20 03:25:59 +00:00
Nick Lewycky
23ec5d7759 Move EvaluateFunction and EvaluateBlock into a class, and make the class store
the information that they pass around between them. No functionality change!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150939 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-19 23:26:27 +00:00
Nick Lewycky
81266c5c93 Add support for invariant.start inside the static constructor evaluator. This is
useful to represent a variable that is const in the source but can't be constant
in the IR because of a non-trivial constructor. If globalopt evaluates the
constructor, and there was an invariant.start with no matching invariant.end
possible, it will mark the global constant afterwards.
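
The motivating shape, as a small C++ illustration of my own (whether a given
frontend actually emits the invariant.start marker here is a separate
question):

#include <string>

// 'Greeting' is const in the source but needs a runtime constructor, so the
// IR global cannot be marked 'constant' up front. An invariant.start after
// the constructor, with no matching invariant.end possible, is what lets
// globalopt mark it constant once it has evaluated the constructor.
const std::string Greeting = "hello";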


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150794 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-17 06:59:21 +00:00
Nick Lewycky
132bd9ce56 Handle InvokeInst in EvaluateBlock. Don't try to support exceptions; it's just
that no optimizations have run yet to convert invokes to calls.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150326 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-12 05:09:35 +00:00
Nick Lewycky
6f160d3d78 false is totally null!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150324 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-12 02:17:18 +00:00
Nick Lewycky
6a7df9aae6 Remove redundant getAnalysis<> calls in GlobalOpt. Add a few Itanium ABI calls
to TargetLibraryInfo and use one of them in GlobalOpt.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150323 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-12 02:15:20 +00:00
Nick Lewycky
6a577f82ba Pass TargetData and TargetLibraryInfo through to the constant folder. Fixes a
few FIXMEs left when TLI was added.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150322 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-12 01:13:18 +00:00
Nick Lewycky
db292a6f7f Fix function name in comment to match actual name. Fix comments that are using
doxy-style on local variables to not do so. Fix one 80-col violation.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150320 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-12 00:52:26 +00:00
Nick Lewycky
8e4ba6b954 Don't traverse the PHI nodes twice. No functionality change!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150319 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-12 00:47:24 +00:00
Benjamin Kramer
c1322a13b5 Tweak comment readability and grammar.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150183 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-09 16:28:15 +00:00
Benjamin Kramer
d469274743 GlobalOpt: Be more aggressive about eliminating side-effect-free static dtors.
GlobalOpt runs early in the pipeline (before inlining) and complex class
hierarchies often introduce bitcasts or GEPs which weren't optimized away.
Teach it to ignore side-effect free instructions instead of depending on
other passes to remove them.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150174 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-09 14:26:06 +00:00
Bill Wendling
aa5abe88d6 [unwind removal] We no longer have 'unwind' instructions being generated, so
remove the code that handles them.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149901 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-06 21:16:41 +00:00
Nick Lewycky
da82fd411e Split part of EvaluateFunction into a new EvaluateBlock method. No functionality
change.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149861 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-06 08:24:44 +00:00
Nick Lewycky
fad4d40f37 Teach GlobalOpt to handle atomic accesses to globals.
* Most of the transforms come through intact by having each transformed load or
store copy the ordering and synchronization scope of the original.
 * The transform that turns a global only accessed in main() into an alloca
(since main is non-recursive) with a store of the initial value uses an
unordered store, since it's guaranteed to be the first thing to happen in main.
(Threads may have started before main (!) but they can't have the address of a
function local before the point in the entry block we insert our code.)
 * The heap-SRoA transforms are disabled in the face of atomic operations. This
can probably be improved; it seems odd to have atomic accesses to an alloca
that doesn't have its address taken.

AnalyzeGlobal keeps track of the strongest ordering found in any use of the
global. This is more information than we need right now, but it's cheap to
compute and likely to be useful.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149847 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-05 19:56:38 +00:00
Nick Lewycky
bc384a1feb Clean up some whitespace and comments. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149845 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-05 19:48:37 +00:00
Stepan Dyatkovskiy
24473120a2 SwitchInst refactoring.
The purpose of this refactoring is to hide the operand roles from the SwitchInst user (programmer). If you want to play with operands directly, you will probably need lower-level methods than the SwitchInst ones (TerminatorInst, or maybe User). After this patch we can reorganize SwitchInst operands and successors as we want.

What was done:

1. Changed the semantics of the index in the getCaseValue method:
getCaseValue(0) means "get the first case", not the condition. Use getCondition() if you want the condition. I propose not mixing SwitchInst case indexing with low-level indexing (TI successor indexing, User operand indexing), since it may be dangerous.
2. For the same reason, findCaseValue(ConstantInt*) returns the actual case number. 0 means the first case, not the default. If there is no case with the given value, ErrorIndex is returned.
3. Added the getCaseSuccessor method. I propose avoiding TerminatorInst::getSuccessor if you want to resolve a case's successor BB. Use getCaseSuccessor instead, since the internal SwitchInst organization of operands/successors is hidden and may change at any moment.
4. Added resolveSuccessorIndex and resolveCaseIndex. The main purpose of these methods is to see how case successors are really mapped in the TerminatorInst.
4.1 "resolveSuccessorIndex" was created for when you need to drop down from SwitchInst to TerminatorInst. It returns the TerminatorInst successor index for a given case successor.
4.2 "resolveCaseIndex" converts a low-level successor index to the case index that corresponds to the given successor. (A short usage sketch follows below.)

Note: There are also related compatibility fix patches for dragonegg, klee, llvm-gcc-4.0, llvm-gcc-4.2, safecode, clang.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149481 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-01 07:49:51 +00:00
Hal Finkel
de5e5ec304 Add a basic-block autovectorization pass.
This is the initial checkin of the basic-block autovectorization pass along with some supporting vectorization infrastructure.
Special thanks to everyone who helped review this code over the last several months (especially Tobias Grosser).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149468 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-01 03:51:43 +00:00
Chris Lattner
a78fa8cc2d continue making the world safe for ConstantDataVector. At this point,
we should (theoretically) optimize and codegen ConstantDataVector as well
as ConstantVector.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149116 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-27 03:08:05 +00:00
Chris Lattner
d59ae907ee Continue improving support for ConstantDataAggregate, and use the
new methods recently added to (sometimes greatly!) simplify code.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149024 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-26 02:32:04 +00:00
Chris Lattner
a1f00f4d48 use Constant::getAggregateElement to simplify a bunch of code.
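
A minimal sketch of the simplification it enables (assuming the era's
llvm/Constant.h header; this is not a specific call site from the patch):

#include "llvm/Constant.h"

using namespace llvm;

// One accessor now works across ConstantArray, ConstantStruct, ConstantVector
// and the ConstantData* classes; it returns null when the index is invalid.
Constant *getElt(Constant *Agg, unsigned i) {
  return Agg->getAggregateElement(i);
}
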
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148934 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-25 06:48:06 +00:00
David Blaikie
4d6ccb5f68 More dead code removal (using -Wunreachable-code)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148578 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-20 21:51:11 +00:00
Dan Gohman
7d4c87ef6e Add a new PassManagerBuilder customization point,
EP_ModuleOptimizerEarly, to allow passes to be added before the
main ModulePass optimizers.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148329 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-17 20:51:32 +00:00
Eli Friedman
e15f421a9a Re-fix the issue Bill fixed in r147899 in a slightly different way, which doesn't abuse the semantics of linker_private. We don't really want to merge any string constant with a weak_odr global.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147971 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 22:06:46 +00:00
Bill Wendling
37b94c6b4e If the global variable is removed by the linker, then don't constant merge it
with other symbols.

An object in the __cfstring section is supposed to be filled with CFString
objects, which have a pointer to ___CFConstantStringClassReference followed by a
pointer to a __cstring. If we allow the object in the __cstring section to be
merged with another global, then it could end up in any section. Because the
linker is going to remove these symbols in the final executable, we shouldn't
bother to merge them.
<rdar://problem/10564621>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147899 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 00:13:08 +00:00
Eli Friedman
fb54ad19e7 PR11705, part 2: globalopt shouldn't put inttoptr/ptrtoint operations into global initializers if there's an implied extension or truncation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147625 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-05 23:03:32 +00:00