Commit Graph

1608 Commits

Author SHA1 Message Date
Benjamin Kramer
8a96348a41 Disable new sroa now that all buildbots have tested it.
What we have so far:
- Some clang test failures (these were known already)

- Perf results are mixed, some big regressions
  http://llvm.org/perf/db_default/v4/nts/3844
  http://llvm.org/perf/db_default/v4/nts/3845

  bullet suffers a lot. matmul is interesting: slower scalar code, faster with -vectorize.

- Some dragonegg selfhost bots crash in SROA during selfhost now
  http://lab.llvm.org:8011/builders/dragonegg-x86_64-linux-gcc-4.6-self-host-checks/builds/1632
  http://lab.llvm.org:8011/builders/dragonegg-x86_64-linux-gcc-4.5-self-host/builds/1891

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163968 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-15 15:11:10 +00:00
Chandler Carruth
1c8db50a9a Port the SSAUpdater-based promotion logic from the old SROA pass to the
new one, and add support for running the new pass in that mode and in
that slot of the pass manager. With this the new pass can completely
replace the old one within the pipeline.

The strategy for enabling or disabling the SSAUpdater logic is to make
the requirement of the domtree analysis optional. By default, it is
required and we get the standard mem2reg approach. This is usually the
desired strategy when run in stand-alone situations. Within the CGSCC
pass manager, we disable requiring of the domtree analysis and
consequently trigger fallback to the SSAUpdater promotion.

In theory this would allow the pass to re-use a domtree if one happened
to be available even when run in a mode that doesn't require it. In
practice, it lets us have a single pass rather than two, which was
simpler for me to wrap my head around.
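
As a minimal sketch of that pattern (2012-era legacy pass manager APIs;
the promoteAllocas driver here is hypothetical):

  #include "llvm/Analysis/Dominators.h"
  #include "llvm/Pass.h"
  using namespace llvm;

  struct SROA : FunctionPass {
    static char ID;
    const bool RequiresDomTree;
    DominatorTree *DT;

    explicit SROA(bool RequiresDomTree = true)
        : FunctionPass(ID), RequiresDomTree(RequiresDomTree), DT(0) {}

    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
      if (RequiresDomTree)
        AU.addRequired<DominatorTree>(); // standard mem2reg path
    }

    virtual bool runOnFunction(Function &F) {
      // Pick up a domtree if one happens to be available even when this
      // instance doesn't require it; a null DT selects the SSAUpdater path.
      DT = getAnalysisIfAvailable<DominatorTree>();
      return promoteAllocas(F, DT);
    }

    bool promoteAllocas(Function &F, DominatorTree *DT); // hypothetical
  };
  char SROA::ID = 0;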

There is a hidden flag to force the use of the SSAUpdater code path for
the purpose of testing. The primary testing strategy is just to run the
existing tests through that path. One notable difference is that it has
custom code to handle lifetime markers, and one of the tests has been
enhanced to exercise that code.

This has survived a bootstrap and the test suite without serious
correctness issues, however my run of the test suite produced *very*
alarming performance numbers. I don't entirely understand or trust them
though, so more investigation is ongoing.

To aid my understanding of the performance impact of the new SROA now
that it runs throughout the optimization pipeline, I'm enabling it by
default in this commit, and will disable it again once the LNT bots have
picked up one iteration with it. I want to get those bots (which are
much more stable) to evaluate the impact of the change before I jump to
any conclusions.

NOTE: Several Clang tests will fail because they run -O3 and check the
result's order of output. They'll go back to passing once I disable it
again.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163965 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-15 11:43:14 +00:00
Chandler Carruth
63db8bebe6 Actually keep the flag default-off for now. =/ That's what I get for
being busy testing this...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163890 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-14 10:18:54 +00:00
Chandler Carruth
713aa9431d Introduce a new SROA implementation.
This is essentially a ground up re-think of the SROA pass in LLVM. It
was initially inspired by a few problems with the existing pass:
- It is subject to the bane of my existence in optimizations: arbitrary
  thresholds.
- It is overly conservative about which constructs can be split and
  promoted.
- The vector value replacement aspect is separated from the splitting
  logic, missing many opportunities where splitting and vector value
  formation can work together.
- The splitting is entirely based around the underlying type of the
  alloca, despite this type often having little to do with the reality
  of how that memory is used. This is especially prevalent with unions
  and base classes where we tail-pack derived members.
- When splitting fails (often due to the thresholds), the vector value
  replacement (again because it is separate) can kick in for
  preposterous cases where we simply should have split the value. This
  results in forming i1024 and i2048 integer "bit vectors" that
  tremendously slow down subsequent IR optimizations (due to large
  APInts) and impede the backend's lowering.

The new design takes an approach that fundamentally is not susceptible
to many of these problems. It is the result of a discussion between
myself and Duncan Sands over IRC about how to preemptively avoid these
types of problems and how to do SROA in a more principled way. Since
then, it has evolved and grown, but this remains an important aspect: it
fixes real world problems with the SROA process today.

First, the transform of SROA actually has little to do with replacement.
It has more to do with splitting. The goal is to take an aggregate
alloca and form a composition of scalar allocas which can replace it and
will be most suitable to the eventual replacement by scalar SSA values.
The actual replacement is performed by mem2reg (and in the future
SSAUpdater).

The splitting is divided into four phases. The first phase is an
analysis of the uses of the alloca. This phase recursively walks uses,
building up a dense datastructure representing the ranges of the
alloca's memory actually used and checking for uses which inhibit any
aspects of the transform such as the escape of a pointer.

Second, once we have a mapping of the ranges of the alloca used by individual
operations, we compute a partitioning of the used ranges. Some uses are
inherently splittable (such as memcpy and memset), while scalar uses are
not splittable. The goal is to build a partitioning that has the minimum
number of splits while placing each unsplittable use in its own
partition. Overlapping unsplittable uses belong to the same partition.
This is the target split of the aggregate alloca, and it maximizes the
number of scalar accesses which become accesses to their own alloca and
candidates for promotion.
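
As a rough illustration of that rule (invented types; the real pass also
splits splittable uses at partition boundaries, which this ignores):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct UseRange { uint64_t Begin, End; bool Splittable; };

  // Sort by start offset, then place overlapping unsplittable uses into
  // the same partition by merging their ranges.
  std::vector<UseRange> partitionRanges(std::vector<UseRange> Ranges) {
    std::sort(Ranges.begin(), Ranges.end(),
              [](const UseRange &A, const UseRange &B) {
                return A.Begin < B.Begin;
              });
    std::vector<UseRange> Parts;
    for (const UseRange &R : Ranges) {
      if (!R.Splittable && !Parts.empty() && !Parts.back().Splittable &&
          R.Begin < Parts.back().End)
        Parts.back().End = std::max(Parts.back().End, R.End);
      else
        Parts.push_back(R);
    }
    return Parts;
  }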

Third, we re-walk the uses of the alloca and assign each specific memory
access to all the partitions touched so that we have dense use-lists for
each partition.

Finally, we build a new, smaller alloca for each partition and rewrite
each use of that partition to use the new alloca. During this phase the
pass will also work very hard to transform uses of an alloca into a form
suitable for promotion, including forming vector operations, speculating
loads through PHI nodes and selects, etc.

After splitting is complete, each newly refined alloca that is
a candidate for promotion to a scalar SSA value is run through mem2reg.

There are lots of reasonably detailed comments in the source code about
the design and algorithms, and I'm going to be trying to improve them in
subsequent commits to ensure this is well documented, as the new pass is
in many ways more complex than the old one.

Some of this is still a WIP, but the current state is reasonably stable.
It has passed bootstrap, the nightly test suite, and Duncan has run it
successfully through the ACATS and DragonEgg test suites. That said, it
remains behind a default-off flag until the last few pieces are in
place, and full testing can be done.

Specific areas I'm looking at next:
- Improved comments and some code cleanup from reviews.
- SSAUpdater and enabling this pass inside the CGSCC pass manager.
- Some datastructure tuning and compile-time measurements.
- More aggressive FCA splitting and vector formation.

Many thanks to Duncan Sands for the thorough final review, as well as
Benjamin Kramer for lots of review during the process of writing this
pass, and Daniel Berlin for reviewing the data structures and algorithms
and general theory of the pass. Also, several other people on IRC, over
lunch tables, etc for lots of feedback and advice.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163883 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-14 09:22:59 +00:00
Nadav Rotem
aa8405811e Fix an 80 char line limit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163808 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-13 16:27:32 +00:00
Benjamin Kramer
8e0d1c03ca Make MemoryBuiltins aware of TargetLibraryInfo.
This disables malloc-specific optimization when -fno-builtin (or -ffreestanding)
is specified. This has been a problem for a long time but became more severe
with the recent memory builtin improvements.

Since the memory builtin functions are used everywhere, this required passing
TLI in many places. This means that functions that now have an optional TLI
argument, like RecursivelyDeleteTriviallyDeadInstructions, won't remove dead
mallocs anymore if the TLI argument is missing. I've updated most passes to do
the right thing.
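
As a sketch of what call sites look like now (2012-era signatures; the
wrapper function is invented):

  #include "llvm/Analysis/MemoryBuiltins.h"
  #include "llvm/Target/TargetLibraryInfo.h"
  #include "llvm/Transforms/Utils/Local.h"
  using namespace llvm;

  bool tryDeleteDeadCode(Instruction *I, const TargetLibraryInfo *TLI) {
    // With TLI == 0 (e.g. under -fno-builtin), malloc is treated as an
    // opaque external call, so a dead malloc will *not* be deleted here.
    return RecursivelyDeleteTriviallyDeadInstructions(I, TLI);
  }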

Fixes PR13694 and probably others.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162841 91177308-0d34-0410-b5e6-96231b3b80d8
2012-08-29 15:32:21 +00:00
Bill Wendling
573e973267 Move the "findUsedStructTypes" functionality outside of the Module class.
The "findUsedStructTypes" method is very expensive to run. It needs to be
optimized so that LTO can run faster. Splitting this method out of the Module
class will help this occur. For instance, it can keep a list of seen objects so
that it doesn't process them over and over again.
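
The "list of seen objects" would be the usual visited-set walk, roughly
(a generic sketch, not the committed code; 2012-era headers):

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/Type.h"
  using namespace llvm;

  struct TypeWalker {
    SmallPtrSet<Type *, 16> Visited;

    void visit(Type *Ty) {
      if (!Visited.insert(Ty)) // already seen: skip reprocessing
        return;
      for (Type::subtype_iterator I = Ty->subtype_begin(),
                                  E = Ty->subtype_end(); I != E; ++I)
        visit(*I);
    }
  };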


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161228 91177308-0d34-0410-b5e6-96231b3b80d8
2012-08-03 00:30:35 +00:00
Nick Lewycky
b8cd66b5d7 It's not safe to blindly remove invoke instructions. This happens when we
encounter an invoke of an allocation function. This should fix the dragonegg
bootstrap. Testcase to follow, later.
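
For the record, an invoke is a terminator, so it cannot simply be erased;
the safe shape keeps the control flow, roughly (sketch, not the committed
code; 2012-era headers):

  #include "llvm/Constants.h"
  #include "llvm/Instructions.h"
  using namespace llvm;

  void removeInvokeKeepingCFG(InvokeInst *II) {
    // Fall through to the normal destination instead of leaving the
    // block terminator-less with a dangling unwind edge.
    BranchInst::Create(II->getNormalDest(), II);
    II->getUnwindDest()->removePredecessor(II->getParent());
    if (!II->use_empty())
      II->replaceAllUsesWith(UndefValue::get(II->getType()));
    II->eraseFromParent();
  }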


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160757 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-25 21:19:40 +00:00
Nick Lewycky
952f5d562c Don't delete one more instruction than we're allowed to. This should fix the
Darwin bootstrap. Testcase exists but isn't fully reduced, I expect to commit
the testcase this evening.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160693 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-24 21:33:00 +00:00
Nick Lewycky
8899d5c6fb Teach globalopt to not nuke all stores to globals. Keep them around if they
might be deliberate "one time" leaks, so that leak checkers can find them.
This is a reapply of r160602 with the fix that this time I'm committing the
code I thought I was committing last time; the I->eraseFromParent() goes
*after* the break out of the loop.
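
In shape (an illustrative sketch, not the committed code):

  #include "llvm/GlobalVariable.h"
  #include "llvm/Instructions.h"
  using namespace llvm;

  void eraseOneStore(GlobalVariable *GV) {
    StoreInst *ToErase = 0;
    for (Value::use_iterator UI = GV->use_begin(), E = GV->use_end();
         UI != E; ++UI)
      if (StoreInst *SI = dyn_cast<StoreInst>(*UI)) {
        ToErase = SI;
        break;                    // leave the loop first...
      }
    if (ToErase)
      ToErase->eraseFromParent(); // ...then erase, as described above
  }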


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160664 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-24 07:21:08 +00:00
Nick Lewycky
c7088c9a9c Revert r160602.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160603 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-21 09:03:15 +00:00
Nick Lewycky
61e2ff8f82 Teach globalopt to play nice with leak checkers. This is a reapplication of
r160529 that was subsequently reverted. The fix was to not call
GV->eraseFromParent() right before the caller does the same. The existing
testcases already caught this bug if run under valgrind.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160602 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-21 08:29:45 +00:00
Nick Lewycky
4a96d0e44b Revert r160529 due to crashes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160532 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-19 23:59:21 +00:00
Nick Lewycky
39e357fafe Don't wipe out global variables that are probably storing pointers to heap
memory. This makes clang play nice with leak checkers.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160529 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-19 22:35:28 +00:00
Benjamin Kramer
b26e2916c9 Replace some explicit compare loops with std::equal.
No functionality change.
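
I.e. the usual transformation (generic example, not one of the actual
call sites):

  #include <algorithm>
  #include <vector>

  bool sameContents(const std::vector<int> &A, const std::vector<int> &B) {
    if (A.size() != B.size())
      return false;
    // Was: for (size_t i = 0; i != A.size(); ++i)
    //        if (A[i] != B[i]) return false;
    return std::equal(A.begin(), A.end(), B.begin());
  }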

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160501 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-19 10:46:05 +00:00
Bill Wendling
56cb229866 Remove tabs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160477 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-19 00:11:40 +00:00
Duncan Sands
b2fe7f183d GlobalOpt forgot to handle bitcast when analyzing globals. Found by inspection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159546 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-02 18:55:39 +00:00
Chandler Carruth
06cb8ed006 Move llvm/Support/IRBuilder.h -> llvm/IRBuilder.h
This was always part of the VMCore library out of necessity -- it deals
entirely in the IR. The .cpp file in fact was already part of the VMCore
library. This is just a mechanical move.

I've tried to go through and re-apply the coding standard's preferred
header sort, but at 40-ish files, I may have gotten some wrong. Please
let me know if so.

I'll be committing the corresponding updates to Clang and Polly, and
Duncan has DragonEgg.

Thanks to Bill and Eric for giving the green light for this bit of cleanup.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159421 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-29 12:38:19 +00:00
Bill Wendling
0bcbd1df7a Move lib/Analysis/DebugInfo.cpp to lib/VMCore/DebugInfo.cpp and
include/llvm/Analysis/DebugInfo.h to include/llvm/DebugInfo.h.

The reasoning is because the DebugInfo module is simply an interface to the
debug info MDNodes and has nothing to do with analysis.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159312 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-28 00:05:13 +00:00
Matt Beaumont-Gay
06b8c285d3 Revert r159136 due to PR13124.
Original commit message:

If a constant or a function has linkonce_odr linkage and unnamed_addr, mark it
hidden. Being linkonce_odr guarantees that it is available in every dso that
needs it. Being a constant/function with unnamed_addr guarantees that the
copies don't have to be merged.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159272 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-27 17:10:33 +00:00
Rafael Espindola
a0706a9ff4 If a constant or a function has linkonce_odr linkage and unnamed_addr, mark it
hidden. Being linkonce_odr guarantees that it is available in every dso that
needs it. Being a constant/function with unnamed_addr guarantees that the
copies don't have to be merged.
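
In terms of the C++ API, the marking amounts to roughly (sketch; 2012-era
header location):

  #include "llvm/GlobalValue.h"
  using namespace llvm;

  void maybeMarkHidden(GlobalValue *GV) {
    // linkonce_odr guarantees a definition in every DSO that needs one;
    // unnamed_addr guarantees the copies need not be merged by address.
    if (GV->getLinkage() == GlobalValue::LinkOnceODRLinkage &&
        GV->hasUnnamedAddr())
      GV->setVisibility(GlobalValue::HiddenVisibility);
  }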

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159136 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-25 14:30:31 +00:00
NAKAMURA Takumi
d5c407d2d0 llvm/lib: [CMake] Add explicit dependency to intrinsics_gen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159112 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-24 13:32:01 +00:00
Nick Lewycky
3eab3c4d40 Tab to spaces. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159104 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-24 04:07:14 +00:00
Hans Wennborg
ce718ff9f4 Extend the IL for selecting TLS models (PR9788)
This allows the user/front-end to specify a model that is better
than what LLVM would choose by default. For example, a variable
might be declared as

  @x = thread_local(initialexec) global i32 42

if it will not be used in a shared library that is dlopen'ed.

If the specified model isn't supported by the target, or if LLVM can
make a better choice, a different model may be used.
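
The same model can be requested through the C++ API, roughly (sketch;
enum and setter names as introduced around this change):

  #include "llvm/GlobalVariable.h"
  using namespace llvm;

  void requestInitialExec(GlobalVariable *X) {
    // Equivalent to thread_local(initialexec) in the IL above.
    X->setThreadLocalMode(GlobalVariable::InitialExecTLSModel);
  }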

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159077 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-23 11:37:03 +00:00
Nuno Lopes
cd88efe516 fix whitespace in my last commit.
sorry for the churn :S  enough for today; going to sleep.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158953 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-22 00:29:58 +00:00
Nuno Lopes
eb7c6865cd remove extractMallocCallFromBitCast, since it was tailor-made for its sole user. Update GlobalOpt accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158952 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-22 00:25:01 +00:00
Rafael Espindola
2f135d4c14 Some optimizations done by globalopt are safe only for internal linkage, not
linkonce linkage. For example, it is not valid to add unnamed_addr.

This also fixes a crash in g++.dg/opt/static5.C.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158528 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-15 18:00:24 +00:00
Rafael Espindola
0397729d3b Implement the isSafeToDiscardIfUnused predicate and use it in globalopt and
globaldce. Globaldce was already removing linkonce globals, but globalopt was
not.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158476 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-14 22:48:13 +00:00
Benjamin Kramer
d9b0b02561 Fix typos found by http://github.com/lyda/misspell-check
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157885 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-02 10:20:22 +00:00
Chris Lattner
d509d0b532 switch AttrListPtr::get to take an ArrayRef, simplifying a lot of clients.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157556 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-28 01:47:44 +00:00
Patrik Hägglund
ab767213fd Fix the inliner so that the optsize function attribute doesn't alter the
inline threshold if the global inline threshold is lower (as for -Oz).
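
In effect (sketch; the constant and names here are invented for
illustration):

  #include <algorithm>

  unsigned pickThreshold(bool CalleeHasOptSize, unsigned GlobalThreshold) {
    const unsigned OptSizeThreshold = 75; // invented value
    // optsize may only lower the threshold; it must never raise it above
    // a global threshold that is already lower (as with -Oz).
    return CalleeHasOptSize ? std::min(GlobalThreshold, OptSizeThreshold)
                            : GlobalThreshold;
  }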

Reviewed by Chandler Carruth and Bill Wendling.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157323 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-23 13:42:57 +00:00
Jay Foad
b7454fd9df Teach Function::hasAddressTaken that BlockAddress doesn't really take
the address of a function.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156703 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-12 08:30:16 +00:00
Chandler Carruth
99650c9088 Move the CodeExtractor utility to a dedicated header file / source file,
and expose it as a utility class rather than as free function wrappers.

The simple free-function interface works well for the bugpoint-specific
pass's uses of code extraction, but in an upcoming patch for more
advanced code extraction, they simply don't expose a rich enough
interface. I need to expose various stages of the process of doing the
code extraction and query information to decide whether or not to
actually complete the extraction or give up.

Rather than build up a new predicate model and pass that into these
functions, just take the class that was actually implementing the
functions and lift it up into a proper interface that can be used to
perform code extraction. The interface is cleaned up and re-documented
to work better in a header. It also is now set up to accept the blocks to
be extracted in the constructor rather than in a method.

In passing this essentially reverts my previous commit here exposing
a block-level query for eligibility of extraction. That is no longer
necessary with the more rich interface as clients can query the
extraction object for eligibility directly. This will reduce the number
of walks of the input basic block sequence by quite a bit which is
useful if this enters the normal optimization pipeline.
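
Usage now looks roughly like this (a sketch against the interface as
described above):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/Transforms/Utils/CodeExtractor.h"
  using namespace llvm;

  Function *tryExtract(ArrayRef<BasicBlock *> Blocks, DominatorTree *DT) {
    CodeExtractor CE(Blocks, DT);  // blocks go to the constructor now
    if (!CE.isEligible())          // query eligibility directly
      return 0;
    return CE.extractCodeRegion(); // performs the actual outlining
  }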

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156163 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-04 10:18:49 +00:00
Bill Wendling
ab3a9193b1 Add a Fixme.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154793 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-16 04:23:52 +00:00
Hal Finkel
064551e94c By default, use Early-CSE instead of GVN for vectorization cleanup.
As has been suggested by Duncan and others, Early-CSE and GVN should
do similar redundancy elimination, but Early-CSE is much less expensive.
Most of my autovectorization benchmarks show a performance regression, but
all of these are < 0.1%, and so I think that it is still worth using
the less expensive pass.
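
In the pass pipeline this amounts to roughly (sketch; the boolean is an
invented stand-in for whatever flag controls the choice):

  #include "llvm/PassManager.h"
  #include "llvm/Transforms/Scalar.h"
  using namespace llvm;

  void addVectorizeCleanup(PassManagerBase &PM, bool UseGVN) {
    // Early-CSE catches most of the same redundancies as GVN here, at a
    // fraction of the compile-time cost.
    if (UseGVN)
      PM.add(createGVNPass());
    else
      PM.add(createEarlyCSEPass());
  }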

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154673 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13 17:15:33 +00:00
Bill Wendling
aab3c0cb07 Code-gen may inject code into the IR before it emits the ASM. The linker
obviously cannot know that this code is present, let alone used. So prevent the
internalize pass from internalizing those global values which code-gen may
insert.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154645 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13 01:06:27 +00:00
Chandler Carruth
d6fc26217e Add two statistics to help track how we are computing the inline cost.
Yea, 'NumCallerCallersAnalyzed' isn't a great name, suggestions welcome.
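
For reference, the STATISTIC idiom (the description string is my
paraphrase, not the committed text):

  #define DEBUG_TYPE "inline-cost"
  #include "llvm/ADT/Statistic.h"

  STATISTIC(NumCallerCallersAnalyzed, "Number of caller-callers analyzed");

  static void noteCallerCaller() {
    ++NumCallerCallersAnalyzed; // shows up in -stats output
  }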

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154492 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-11 10:15:10 +00:00
Bill Wendling
3197b4453d Add an option to turn off the expensive GVN load PRE part of GVN.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153902 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-02 22:16:50 +00:00
Chandler Carruth
dafe48e230 Belatedly address some code review from Chris.
As a side note, I really dislike array_pod_sort... Do we really still
care about any STL implementations that get this so wrong? Does libc++?

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153834 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01 10:41:24 +00:00
Chandler Carruth
6052eef8bd Fix a pretty scary bug I introduced into the always inliner with
a single missing character. Somehow, this had gone untested. I've added
tests for returns-twice logic specifically with the always-inliner that
would have caught this, and fixed the bug.

Thanks to Matt for the careful review and spotting this!!! =D

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153832 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01 10:21:05 +00:00
Chandler Carruth
b594a84df5 Give the always-inliner its own custom filter. It shouldn't have to pay
the very high overhead of the complex inline cost analysis when all it
wants to do is detect three patterns which must not be inlined. Comment
the code, clean it up, and leave some hints about possible performance
improvements if this ever shows up on a profile.

Moving this off of the (now more expensive) inline cost analysis is
particularly important because we have to run this inliner even at -O0.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153814 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 13:17:18 +00:00
Chandler Carruth
45de584b4f Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153813 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 12:48:08 +00:00
Chandler Carruth
f2286b0152 Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.
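
Compressed into code, the outline above looks roughly like this (all
types and helpers are invented stand-ins for the real IR and visitor
machinery):

  #include <deque>
  #include <set>
  #include <vector>

  struct Block; struct Instr;
  std::vector<Instr *> instructionsIn(Block *BB);
  std::vector<Block *> liveSuccessors(Block *BB); // skips proven-dead edges
  void remapOperands(Instr *I);  // apply callsite-argument simplifications
  bool simplifies(Instr *I);     // records folded value in the mapping
  int costOf(Instr *I);

  bool worthInlining(Block *Entry, int Threshold) {
    std::deque<Block *> Worklist; // pop from the front => breadth-first
    std::set<Block *> Visited;
    int Cost = 0;
    Worklist.push_back(Entry);
    while (!Worklist.empty()) {
      Block *BB = Worklist.front();
      Worklist.pop_front();
      if (!Visited.insert(BB).second)
        continue;
      for (Instr *I : instructionsIn(BB)) {
        remapOperands(I);
        if (!simplifies(I))
          Cost += costOf(I);
        if (Cost > Threshold)
          return false; // bail out early to bound analysis cost
      }
      for (Block *Succ : liveSuccessors(BB))
        Worklist.push_back(Succ);
    }
    return true;
  }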

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch, it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153812 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 12:42:41 +00:00
Benjamin Kramer
1955c9cf46 Internalize: Remove reference of @llvm.noinline, it was replaced with the noinline attribute a long time ago.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153806 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 11:03:47 +00:00
Benjamin Kramer
c1ea16ec43 GlobalOpt: If we have an inbounds GEP from a ConstantAggregateZero global that we just determined to be constant, replace all loads from it with a zero value.
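
Roughly (a sketch of the replacement, not the committed code; 2012-era
headers):

  #include "llvm/Constants.h"
  #include "llvm/Instructions.h"
  using namespace llvm;

  void foldLoadFromZeroGlobal(LoadInst *LI) {
    // The global is known to hold a ConstantAggregateZero, so any load
    // through an inbounds GEP of it reads zero bits.
    LI->replaceAllUsesWith(Constant::getNullValue(LI->getType()));
    LI->eraseFromParent();
  }
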
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153576 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-28 14:50:09 +00:00
Chandler Carruth
dacffb6679 Make a seemingly tiny change to the inliner and fix the generated code
size bloat. Unfortunately, I expect this to disable the majority of the
benefit from r152737. I'm hopeful at least that it will fix PR12345. To
explain this requires... quite a bit of backstory I'm afraid.

TL;DR: The change in r152737 actually did The Wrong Thing for
linkonce-odr functions. This change makes it do the right thing. The
benefits we saw were simple luck, not any actual strategy. Benchmark
numbers after a mini-blog-post so that I've written down my thoughts on
why all of this works and doesn't work...

To understand what's going on here, you have to understand how the
"bottom-up" inliner actually works. There are two fundamental modes to
the inliner:

1) Standard fixed-cost bottom-up inlining. This is the mode we usually
   think about. It walks from the bottom of the CFG up to the top,
   looking at callsites, taking information about the callsite and the
   called function and computing the expected cost of inlining into that
   callsite. If the cost is under a fixed threshold, it inlines. It's
   a touch more complicated than that due to all the bonuses, weights,
   etc. Inlining the last callsite to an internal function gets higher
   weight, etc. But essentially, this is the mode of operation.

2) Deferred bottom-up inlining (a term I just made up). This is the
   interesting mode for this patch and r152737. Initially, this works
   just like mode #1, but once we have the cost of inlining into the
   callsite, we don't just compare it with a fixed threshold. First, we
   check something else. Let's give some names to the entities at this
   point, or we'll end up hopelessly confused. We're considering
   inlining a function 'A' into its callsite within a function 'B'. We
   want to check whether 'B' has any callers, and whether it might be
   inlined into those callers. If so, we also check whether inlining 'A'
   into 'B' would block any of the opportunities for inlining 'B' into
   its callers. We take the sum of the costs of inlining 'B' into its
   callers where that inlining would be blocked by inlining 'A' into
   'B', and if that cost is less than the cost of inlining 'A' into 'B',
   then we skip inlining 'A' into 'B'.
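
That check, as C++-flavoured pseudocode (all names invented):

  #include <vector>

  struct Function;
  struct CallSite { Function *Caller, *Callee; };

  int computeInlineCost(const CallSite &CS);
  std::vector<CallSite> callersOf(Function *B);
  bool blockedByInlining(const CallSite &BIntoCaller,
                         const CallSite &AIntoB);

  bool shouldDeferInliningAIntoB(const CallSite &AIntoB, Function *B) {
    int CostAIntoB = computeInlineCost(AIntoB);
    int BlockedCost = 0;
    for (const CallSite &BIntoCaller : callersOf(B))
      if (blockedByInlining(BIntoCaller, AIntoB))
        BlockedCost += computeInlineCost(BIntoCaller);
    // Skip inlining A into B when the blocked B-into-caller inlinings
    // are collectively cheaper than inlining A into B here.
    return BlockedCost < CostAIntoB;
  }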

Now, in order for #2 to make sense, we have to have some confidence that
we will actually have the opportunity to inline 'B' into its callers
when cheaper, *and* that we'll be able to revisit the decision and
inline 'A' into 'B' if that ever becomes the correct tradeoff. This
often isn't true for external functions -- we can see very few of their
callers, and we won't be able to re-consider inlining 'A' into 'B' if
'B' is external when we finally see more callers of 'B'. There are two
cases where we believe this to be true for C/C++ code: functions local
to a translation unit, and functions with an inline definition in every
translation unit which uses them. These are represented as internal
linkage and linkonce-odr (resp.) in LLVM. I enabled this logic for
linkonce-odr in r152737.

Unfortunately, when I did that, I also introduced a subtle bug. There
was an implicit assumption that the last caller of the function within
the TU was the last caller of the function in the program. We want to
bonus the last caller of the function in the program by a huge amount
for inlining because inlining that callsite has very little cost.
Unfortunately, the last caller in the TU of a linkonce-odr function is
*not* the last caller in the program, and so we don't want to apply this
bonus. If we do, we can apply it to one callsite *per-TU*. Because of
the way deferred inlining works, when it sees this bonus applied to one
callsite in the TU for 'B', it decides that inlining 'B' is of the
*utmost* importance just so we can get that final bonus. It then
proceeds to essentially force deferred inlining regardless of the actual
cost tradeoff.

The result? PR12345: code bloat, code bloat, code bloat. Another result
is getting *damn* lucky on a few benchmarks, and the over-inlining
exposing critically important optimizations. I would very much like
a list of benchmarks that regress after this change goes in, with
bitcode before and after. This will help me greatly understand what
opportunities the current cost analysis is missing.

Initial benchmark numbers look very good. WebKit files that exhibited
the worst of PR12345 went from growing to shrinking compared to Clang
with r152737 reverted.

- Bootstrapped Clang is 3% smaller with this change.
- Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4%
  faster with this change.

Please let me know about any other performance impact you see. Thanks to
Nico for reporting and urging me to actually fix, Richard Smith, Duncan
Sands, Manuel Klimek, and Benjamin Kramer for talking through the issues
today.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153506 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-27 10:48:28 +00:00
Chandler Carruth
d54f9a4c3b Move the instruction simplification of callsite arguments in the inliner
to instead rely on much more generic and powerful instruction
simplification in the function cloner (and thus inliner).

This teaches the pruning function cloner to use instsimplify rather than
just the constant folder to fold values during cloning. This can
simplify a large number of things that constant folding alone cannot
begin to touch. For example, it will realize that 'or' and 'and'
instructions with certain constant operands actually become constants
regardless of what their other operand is. It also can thread back
through the caller to perform simplifications that are only possible by
looking up a few levels. In particular, GEPs and pointer testing tend to
fold much more heavily with this change.
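
Concretely, the cloner's fold step becomes roughly this (sketch against
the 2012-era SimplifyInstruction signature):

  #include "llvm/Analysis/InstructionSimplify.h"
  #include "llvm/Instruction.h"
  using namespace llvm;

  // Called on each cloned instruction by the (hypothetical) cloning loop.
  Value *foldCloned(Instruction *NewI, const TargetData *TD) {
    // Unlike plain constant folding, this handles cases like
    // 'or %x, -1' ==> -1 even when %x is not a constant.
    return SimplifyInstruction(NewI, TD);
  }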

This should (in some cases) have a positive impact on compile times with
optimizations on because the inliner itself will simply avoid cloning
a great deal of code. It already attempted to prune proven-dead code,
but now it will use the stronger simplifications to prove more code
dead.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153403 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-25 04:03:40 +00:00
Kostya Serebryany
1db394921b add EP_OptimizerLast extension point
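
This lets tools register passes to run at the very end of the optimizer,
e.g. (sketch):

  #include "llvm/PassManager.h"
  #include "llvm/Transforms/IPO/PassManagerBuilder.h"
  using namespace llvm;

  static void addLatePass(const PassManagerBuilder &Builder,
                          PassManagerBase &PM) {
    // PM.add(...); // whatever should run after the optimizer
  }

  void configure(PassManagerBuilder &Builder) {
    Builder.addExtension(PassManagerBuilder::EP_OptimizerLast, addLatePass);
  }
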
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153353 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-23 23:22:59 +00:00
Chandler Carruth
b2442fc760 Rip out support for 'llvm.noinline'. This thing has a strange history...
It was added in 2007 as the first cut at supporting no-inline
attributes, but we didn't have function attributes of any form at the
time. However, it was added without any mention in the LangRef or other
documentation.

Later on, in 2008, Devang added function notes for 'inline=never' and
then turned them into proper function attributes. From that point
onward, as far as I can tell, the world moved on, and no one has touched
'llvm.noinline' in any meaningful way since.

Its time has now come. We have had better mechanisms for doing this for
a long time, all the frontends I'm aware of use them, and this is just
holding back progress. Given that it was never a documented feature of
the IR, I've provided no auto-upgrade support. If people know of real,
in-the-wild bitcode that relies on this, yell at me and I'll add it, but
I *seriously* doubt anyone cares.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152904 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-16 06:10:15 +00:00
Chandler Carruth
f91f5af802 Start removing the use of an ad-hoc 'never inline' set and instead
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.

Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.

The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.

The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern, we were just needlessly paying the uniquing cost on every
insert.
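
(The sort+unique idiom, for reference:)

  #include <algorithm>
  #include <vector>
  namespace llvm { class Function; }

  // Two-phase pattern: accumulate with cheap push_back, then make the
  // vector set-like in one pass instead of uniquing on every insert.
  void sortAndUnique(std::vector<llvm::Function *> &V) {
    std::sort(V.begin(), V.end());
    V.erase(std::unique(V.begin(), V.end()), V.end());
  }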

This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.

I believe there is no functional change here, but yell if you spot one.
None are intended.

Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152903 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-16 06:10:13 +00:00