Make sure we do not emit index computations with NSW flags so that we don't get an undef value if the GEP overflows.
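A minimal sketch of the idea (an illustrative helper using the IRBuilder API with modern header paths; this is not the actual EmitGEPOffset code): when expanding a GEP into explicit offset arithmetic, emit the plain wrapping multiply rather than the NSW variant.

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/IRBuilder.h"

  // Scale an index by the element size while expanding a GEP offset.
  // CreateMul (no NSW) instead of CreateNSWMul: an overflowing GEP then
  // produces a wrapped offset rather than an undefined value.
  llvm::Value *emitScaledIndex(llvm::IRBuilder<> &B, llvm::Value *Idx,
                               uint64_t ElemSize) {
    llvm::Value *Size = llvm::ConstantInt::get(Idx->getType(), ElemSize);
    return B.CreateMul(Idx, Size, "idx.scaled");
  }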
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160589 91177308-0d34-0410-b5e6-96231b3b80d8
This was always part of the VMCore library out of necessity -- it deals
entirely in the IR. The .cpp file in fact was already part of the VMCore
library. This is just a mechanical move.
I've tried to go through and re-apply the coding standard's preferred
header sort, but at 40-ish files, I may have gotten some wrong. Please
let me know if so.
I'll be committing the corresponding updates to Clang and Polly, and
Duncan has DragonEgg.
Thanks to Bill and Eric for giving the green light for this bit of cleanup.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159421 91177308-0d34-0410-b5e6-96231b3b80d8
The original algorithm only used recursive pair fusion of equal-length
types. This is now extended to allow pairing of any types that share
the same underlying scalar type. Because we would still generally
prefer the 2^n-length types, those are formed first. Then a second
set of iterations form the non-2^n-length types.
Also, a call to SimplifyInstructionsInBlock has been added after each
pairing iteration. This takes care of DCE (and a few other things)
that make the following iterations execute somewhat faster. For the
same reason, some of the simple shuffle-combination cases are now
handled internally.
There is some additional refactoring work to be done, but I've had
many requests for this feature, so additional refactoring will come
soon in future commits (as will additional test cases).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159330 91177308-0d34-0410-b5e6-96231b3b80d8
move EmitGEPOffset from InstCombine to Transforms/Utils/Local.h
(a draft of this) patch reviewed by Andrew, thanks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157261 91177308-0d34-0410-b5e6-96231b3b80d8
of the CodeExtractor utility. This allows speculatively computing input
and output sets to measure the likely size impact of the code
extraction.
These sets cannot be reused sadly -- we mutate the function prior to
forming the final sets used by the actual extraction.
The interface has been revamped slightly to make it easier to use
correctly by making the interface const and sinking the computation of
the number of exit blocks into the full extraction function and away
from the rest of this logic which just computed two output parameters.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156168 91177308-0d34-0410-b5e6-96231b3b80d8
and expose it as a utility class rather than as free function wrappers.
The simple free-function interface works well for the bugpoint-specific
pass's uses of code extraction, but in an upcoming patch for more
advanced code extraction, it simply doesn't expose a rich enough
interface. I need to expose various stages of the process of doing the
code extraction and query information to decide whether or not to
actually complete the extraction or give up.
Rather than build up a new predicate model and pass that into these
functions, just take the class that was actually implementing the
functions and lift it up into a proper interface that can be used to
perform code extraction. The interface is cleaned up and re-documented
to work better in a header. It is also now set up to accept the blocks to
be extracted in the constructor rather than in a method.
In passing this essentially reverts my previous commit here exposing
a block-level query for eligibility of extraction. That is no longer
necessary with the more rich interface as clients can query the
extraction object for eligibility directly. This will reduce the number
of walks of the input basic block sequence by quite a bit which is
useful if this enters the normal optimization pipeline.
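A hedged usage sketch of the class-based interface (method names like isEligible/extractCodeRegion are approximate and parameter lists are elided; treat this as the intended shape rather than the exact API):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Function.h"
  #include "llvm/Transforms/Utils/CodeExtractor.h"

  // Construct the extractor with the candidate blocks, query it before
  // committing, and only then perform the (mutating) extraction.
  llvm::Function *tryExtract(llvm::ArrayRef<llvm::BasicBlock *> Blocks) {
    llvm::CodeExtractor CE(Blocks);
    if (!CE.isEligible())            // cheap query, no IR mutation
      return nullptr;
    return CE.extractCodeRegion();   // outlines the region into a new function
  }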
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156163 91177308-0d34-0410-b5e6-96231b3b80d8
extraction into a public interface. Also clean it up and apply it more
consistently such that we check for landing pads *anywhere* in the
extracted code, not just in single-block extraction.
This will be used to guide decisions in passes that are planning to
eventually perform a round of code extraction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156114 91177308-0d34-0410-b5e6-96231b3b80d8
Allow the "SplitCriticalEdge" function to split the edge to a landing pad. If
the pass is *sure* that it thinks it knows what it's doing, then it may go ahead
and specify that the landing pad can have its critical edge split. The loop
unswitch pass is one of these passes. It will split the critical edges of all
edges coming from a loop to a landing pad not within the loop. Doing so will
retain important loop analysis information, such as loop-simplify form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155817 91177308-0d34-0410-b5e6-96231b3b80d8
of the BBVectorizePass without using a command-line option. As pointed out
by Hal, we can ask the TargetLoweringInfo for the architecture-specific
VectorizeConfig to perform vectorization with architecture-specific
information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154096 91177308-0d34-0410-b5e6-96231b3b80d8
BasicBlock in other passes, e.g. we can call vectorizeBasicBlock in the
loop unroll pass right after the loop is unrolled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154089 91177308-0d34-0410-b5e6-96231b3b80d8
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153813 91177308-0d34-0410-b5e6-96231b3b80d8
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.
This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:
- Build a simplification mapping based on the callsite arguments to the
function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
breadth-first ordering) and walk its instructions using a custom
InstVisitor.
- For each instruction's operands, re-map them based on the
simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
cost metric.
- When the terminator is reached, replace any conditional value in the
terminator with any simplifications from the mapping we have, and add
any successors which are not proven to be dead from these
simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
callsite, stop analyzing the function in order to bound cost.
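A rough sketch of the worklist walk outlined above (the names and types here are hypothetical simplifications, not the actual inline cost implementation):

  #include <deque>
  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/CFG.h"
  #include "llvm/IR/Function.h"

  // Breadth-first walk of potentially-live blocks, stopping as soon as the
  // accumulated cost exceeds the per-callsite threshold.
  bool callIsCheapEnough(llvm::Function &Callee, int Threshold) {
    int Cost = 0;
    std::deque<llvm::BasicBlock *> Worklist;
    llvm::SmallPtrSet<llvm::BasicBlock *, 16> Visited;
    Worklist.push_back(&Callee.getEntryBlock());
    while (!Worklist.empty()) {
      llvm::BasicBlock *BB = Worklist.front(); // pop from the *front*: BFS
      Worklist.pop_front();
      if (!Visited.insert(BB).second)
        continue;
      for (llvm::Instruction &I : *BB) {
        (void)I; // re-map operands, try to fold, compute bonuses (elided)
        Cost += 1;
        if (Cost > Threshold)
          return false;                        // bound the analysis cost
      }
      // The real analysis enqueues only successors not proven dead by the
      // simplified terminator; this sketch enqueues them all.
      for (llvm::BasicBlock *Succ : successors(BB))
        Worklist.push_back(Succ);
    }
    return true;
  }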
The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.
Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.
Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.
I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch; it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.
As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153812 91177308-0d34-0410-b5e6-96231b3b80d8
blocks in the function cloner. This removes the last case of trivially
dead code that I've been seeing in the wild getting inlined, analyzed,
re-inlined, optimized, only to be deleted. Nukes a FIXME from the
cleanup tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153572 91177308-0d34-0410-b5e6-96231b3b80d8
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.
Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.
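A minimal sketch of the direct check (using the modern attribute API spelling, which may differ from the spelling at the time):

  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/InstrTypes.h"

  // Instead of consulting a module-wide set, look at the callee's attribute
  // right at the call site.
  bool shouldAlwaysInline(const llvm::CallBase &Call) {
    const llvm::Function *Callee = Call.getCalledFunction();
    return Callee && !Callee->isDeclaration() &&
           Callee->hasFnAttribute(llvm::Attribute::AlwaysInline);
  }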
The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.
The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern; we were just needlessly paying the uniquing cost on every
insert.
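The sort+unique pattern in question, as a generic sketch:

  #include <algorithm>
  #include <vector>

  // Phase 1: append freely (no per-insert uniquing cost).
  // Phase 2: dedupe once, just before the removal phase runs.
  template <typename T>
  void sortAndUnique(std::vector<T> &Vec) {
    std::sort(Vec.begin(), Vec.end());
    Vec.erase(std::unique(Vec.begin(), Vec.end()), Vec.end());
  }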
This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.
I believe there is no functional change here, but yell if you spot one.
None are intended.
Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152903 91177308-0d34-0410-b5e6-96231b3b80d8
changed since. No one was using it. It is yet another consumer of the
InlineCost interface that I'd like to change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152769 91177308-0d34-0410-b5e6-96231b3b80d8
are optimization hints, but at -O0 we're not optimizing. This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151429 91177308-0d34-0410-b5e6-96231b3b80d8
PHI nodes which were matched, rather than climbing up the
original PHI node's operands to rediscover PHI nodes for
recording, since the PHI nodes found that way are not
necessarily part of the matched set.
This fixes rdar://10589171.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149654 91177308-0d34-0410-b5e6-96231b3b80d8
This is the initial checkin of the basic-block autovectorization pass along with some supporting vectorization infrastructure.
Special thanks to everyone who helped review this code over the last several months (especially Tobias Grosser).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149468 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Brendon Cahoon!
This extends the existing LoopUnroll and LoopUnrollPass. Brendon
measured no regressions in the llvm test suite with -unroll-runtime
enabled. This implementation works by using the existing loop
unrolling code to unroll the loop by a power-of-two (default 8). It
generates an if-then-else sequence of code prior to the loop to
execute the extra iterations before entering the unrolled loop.
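A conceptual source-level illustration of the transformation's shape (not the pass's actual output): the leftover iterations execute before control enters the unrolled-by-8 loop.

  // Runtime unrolling by 8 with a prologue for the remainder iterations.
  void copyUnrolled(int *Dst, const int *Src, unsigned N) {
    unsigned I = 0;
    unsigned Prologue = N % 8;          // trip count not known at compile time
    for (; I < Prologue; ++I)           // emitted as an if-then-else/remainder
      Dst[I] = Src[I];                  // sequence before the main loop
    for (; I < N; I += 8) {             // main loop: exactly (N - Prologue)/8 trips
      Dst[I] = Src[I];         Dst[I + 1] = Src[I + 1];
      Dst[I + 2] = Src[I + 2]; Dst[I + 3] = Src[I + 3];
      Dst[I + 4] = Src[I + 4]; Dst[I + 5] = Src[I + 5];
      Dst[I + 6] = Src[I + 6]; Dst[I + 7] = Src[I + 7];
    }
  }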
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146245 91177308-0d34-0410-b5e6-96231b3b80d8
This handles the case in which LSR rewrites an IV user that is a phi and
splits critical edges originating from a switch.
Fixes <rdar://problem/6453893> LSR is not splitting edges "nicely"
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@141059 91177308-0d34-0410-b5e6-96231b3b80d8
ssa, so it has to be run really early in the pipeline. Any replacement
should probably use the SSAUpdater.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138841 91177308-0d34-0410-b5e6-96231b3b80d8
SplitLandingPadPredecessors is similar to SplitBlockPredecessors in that it
splits the current block and attaches a set of predecessors to the new basic
block. However, it differs from SplitBlockPredecessors in that it's specifically
designed to handle landing pad blocks.
Two new basic blocks are created: one that has the vector of predecessors as
its predecessors and one that has the remaining predecessors as its
predecessors. Those two new blocks then receive a cloned copy of the landingpad
instruction from the original block. The landingpad instructions are joined in a
PHI, etc. Like SplitBlockPredecessors, it updates the LLVM IR, AliasAnalysis,
DominatorTree, DominanceFrontier, LoopInfo, and LCSSA analyses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138014 91177308-0d34-0410-b5e6-96231b3b80d8
Before 3.0, I'd like to add a mechanism for automatically loading a set of plugins from a config file. API suggestions welcome...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137717 91177308-0d34-0410-b5e6-96231b3b80d8
based on ScalarEvolution without changing the induction variable phis.
This utility is the main tool of IndVarSimplifyPass, but the pass also
restructures induction variables in strange ways that are sensitive to
pass ordering. This provides a way for other loop passes to simplify
new uses of induction variables created during transformation. The
utility may be used by any pass that preserves ScalarEvolution. Soon
LoopUnroll will use it.
The net effect in this checkin is to cleanup the IndVarSimplify pass
by factoring out the SimplifyIndVar algorithm into a standalone utility.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137197 91177308-0d34-0410-b5e6-96231b3b80d8
but it solves a layering violation since things in Support are not supposed to
use things in Transforms.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136726 91177308-0d34-0410-b5e6-96231b3b80d8
patch brings numerous advantages to LLVM. One way to look at it
is through diffstat:
109 files changed, 3005 insertions(+), 5906 deletions(-)
Removing almost 3K lines of code is a good thing. Other advantages
include:
1. Value::getType() is a simple load that can be CSE'd, not a mutating
union-find operation.
2. Types are uniqued and never move once created, defining away PATypeHolder.
3. Structs can be "named" now, and their name is part of the identity that
uniques them. This means that the compiler doesn't merge them structurally
which makes the IR much less confusing.
4. Now that there is no way to get a cycle in a type graph without a named
struct type, "upreferences" go away.
5. Type refinement is completely gone, which should make LTO much MUCH faster
in some common cases with C++ code.
6. Types are now generally immutable, so we can use "Type *" instead of
"const Type *" everywhere.
Downsides of this patch are that it removes some functions from the C API,
so people using those will have to upgrade to the (not yet added) new API.
"LLVM 3.0" is the right time to do this.
There are still some cleanups pending after this, this patch is large enough
as-is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134829 91177308-0d34-0410-b5e6-96231b3b80d8
I also changed -simplifycfg, -jump-threading and -codegenprepare to use this to produce slightly better code without any extra cleanup passes (AFAICT this was the only place in -simplifycfg where now-dead conditions of replaced terminators weren't being cleaned up). The only other user of this function is -sccp, but I didn't read that thoroughly enough to figure out whether it might be holding pointers to instructions that could be deleted by this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131855 91177308-0d34-0410-b5e6-96231b3b80d8
instrument the program to emit .gcda.
TODO: we should emit slightly different .gcda files when .gcno emission is off.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129903 91177308-0d34-0410-b5e6-96231b3b80d8
does. Also mostly implement it. Still a work-in-progress, but generates legal
output on crafted test cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129630 91177308-0d34-0410-b5e6-96231b3b80d8
will allow multiple contexts with different loop unroll parameters to run. This is a minor change and has no effect
on existing applications.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129449 91177308-0d34-0410-b5e6-96231b3b80d8
Use debug info in the IR to find the directory/file:line:col. Each time that location changes, bump a counter.
Unlike the existing profiling system, we don't try to look at argv[], and thus don't require main() to be present in the IR. This matches GCC's technique where you specify the profiling flag when producing each .o file.
The runtime library is minimal, currently just calling printf at program shutdown time. The API is designed to make it possible to emit GCOV data later on.
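A rough sketch of the counting idea (assuming the usual DebugLoc accessors; this is not the pass's actual instrumentation code):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/DebugLoc.h"
  #include "llvm/IR/Instruction.h"

  // Walk a block and note each time the attached source location changes;
  // the real pass would bump a runtime counter at each such point.
  unsigned countLocationChanges(const llvm::BasicBlock &BB) {
    unsigned Changes = 0;
    unsigned LastLine = 0, LastCol = 0;
    for (const llvm::Instruction &I : BB) {
      llvm::DebugLoc DL = I.getDebugLoc();
      if (!DL)
        continue;
      if (DL.getLine() != LastLine || DL.getCol() != LastCol) {
        ++Changes;
        LastLine = DL.getLine();
        LastCol = DL.getCol();
      }
    }
    return Changes;
  }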
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129340 91177308-0d34-0410-b5e6-96231b3b80d8
has some bugs. If this is interesting functionality, it should be
reimplemented in the argpromotion pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129314 91177308-0d34-0410-b5e6-96231b3b80d8
after the given instruction; make sure to handle that case correctly.
(It's difficult to trigger; the included testcase involves a dead
block, but I don't think that's a requirement.)
While I'm here, get rid of the unnecessary warning about
SimplifyInstructionsInBlock, since it should work correctly as far as I know.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128782 91177308-0d34-0410-b5e6-96231b3b80d8
itself without going via a phi node then we could return false here in
spite of making a change. Also, tweak the comment because this method
can (and always could) return true without deleting the original phi node.
For example, if the phi node was used by a read-only invoke instruction
which is used by another phi node phi2 which is only used by and only uses
the invoke, then phi2 would be deleted but not the invoke instruction and
not the original phi node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126129 91177308-0d34-0410-b5e6-96231b3b80d8
Modified patch by Adam Preuss.
This builds on the existing framework for block tracing, edge profiling and optimal edge profiling.
See -help-hidden for new flags.
For documentation, see the technical report "Implementation of Path Profiling..." in llvm.org/pubs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124515 91177308-0d34-0410-b5e6-96231b3b80d8
checks enabled:
1) Use '<' to compare integers in a comparison function rather than '<='.
2) Use the uniqued set DefBlocks rather than Info.DefiningBlocks to initialize
the priority queue.
The speedup of scalarrepl on test-suite + SPEC2000 + SPEC2006 is a bit less, at
just under 16% rather than 17%.
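The first fix matters because the STL's ordered containers and algorithms require a strict weak ordering; a generic sketch of the difference:

  #include <queue>
  #include <utility>
  #include <vector>

  // Comparison functor for (dominator-tree level, block) pairs. It must use
  // '<': with '<=', comp(x, x) would be true, violating irreflexivity and
  // triggering undefined behavior (and assertions in checked STL builds).
  struct LessByLevel {
    bool operator()(const std::pair<unsigned, void *> &A,
                    const std::pair<unsigned, void *> &B) const {
      return A.first < B.first;   // strict, never '<='
    }
  };

  using BlockQueue =
      std::priority_queue<std::pair<unsigned, void *>,
                          std::vector<std::pair<unsigned, void *>>, LessByLevel>;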
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123662 91177308-0d34-0410-b5e6-96231b3b80d8
eliminating a potentially quadratic data structure, this also gives a 17%
speedup when running -scalarrepl on test-suite + SPEC2000 + SPEC2006. My initial
experiment gave a greater speedup around 25%, but I moved the dominator tree
level computation from dominator tree construction to PromoteMemToReg.
Since this approach to computing IDFs has a much lower overhead than the old
code using precomputed DFs, it is worth looking at using this new code for the
second scalarrepl pass as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123609 91177308-0d34-0410-b5e6-96231b3b80d8
phi nodes. It is called from MergeBlockIntoPredecessor which is
called from GVN, which claims to preserve these.
I'm skeptical that this is the actual problem behind PR8954, but
this is a stab in the right direction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123222 91177308-0d34-0410-b5e6-96231b3b80d8
1. Take a flags argument instead of a bool. This makes
it more clear to the reader what it is used for.
2. Add a flag that says that "remapping a value not in the
map is ok".
3. Reimplement MapValue to share a bunch of code and be a lot
more efficient. For lookup failures, don't drop null values
into the map.
4. Using the new flag, a bunch of code in LinkModules
and LoopUnswitch can vaporize; kill it.
No functionality change.
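A sketch of the flags-over-bool shape (the flag names here illustrate the pattern and are not guaranteed to match the ones added):

  // A bitmask documents itself at the call site and leaves room for more
  // options later, unlike a bare bool parameter.
  enum RemapFlags {
    RF_None = 0,
    RF_NoModuleLevelChanges = 1 << 0, // skip module-level value/metadata mapping
    RF_IgnoreMissingEntries = 1 << 1, // "remapping a value not in the map is ok"
  };

  inline RemapFlags operator|(RemapFlags LHS, RemapFlags RHS) {
    return RemapFlags(unsigned(LHS) | unsigned(RHS));
  }

  // A call site then reads clearly, e.g.:
  //   MapValue(V, VMap, RF_IgnoreMissingEntries | RF_NoModuleLevelChanges);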
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123058 91177308-0d34-0410-b5e6-96231b3b80d8
of instcombine that is currently in the middle of the loop pass pipeline. This
commit only checks in the pass; it will hopefully be enabled by default later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122719 91177308-0d34-0410-b5e6-96231b3b80d8
it could only be tested indirectly, via instcombine, gvn or some other
pass that makes use of InstructionSimplify, which means that testcases
had to be carefully contrived to dance around any other transformations
that that pass did.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122264 91177308-0d34-0410-b5e6-96231b3b80d8
by my recent GVN improvement. Rather than looking through a single layer of
PHI nodes when attempting to sink GEPs, we need to iteratively
look through arbitrary PHI nests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120202 91177308-0d34-0410-b5e6-96231b3b80d8
threshold given to createFunctionInliningPass().
Both opt -O3 and clang would silently ignore the -inline-threshold option.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118117 91177308-0d34-0410-b5e6-96231b3b80d8
must be called in the pass's constructor. This function uses static dependency declarations to recursively initialize
the pass's dependencies.
Clients that only create passes through the createFooPass() APIs will require no changes. Clients that want to use the
CommandLine options for passes will need to manually call the appropriate initialization functions in PassInitialization.h
before parsing commandline arguments.
I have tested this with all standard configurations of clang and llvm-gcc on Darwin. It is possible that there are problems
with the static dependencies that will only be visible with non-standard options. If you encounter any crash in pass
registration/creation, please send the testcase to me directly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116820 91177308-0d34-0410-b5e6-96231b3b80d8
more careful not to call SimplifyInstructionsInBlock() on an unreachable block, the issue has been fixed at a higher level. Add
a big warning to SimplifyInstructionsInBlock() to hopefully prevent this in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114117 91177308-0d34-0410-b5e6-96231b3b80d8
I'm sure it is harmless. Original commit message:
If PrototypeValue is erased in the middle of using the SSAUpdater
then the SSAUpdater may access freed memory. Instead, simply pass
in the type and name explicitly, which is all that was used anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112810 91177308-0d34-0410-b5e6-96231b3b80d8
then the SSAUpdater may access freed memory. Instead, simply pass
in the type and name explicitly, which is all that was used anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112699 91177308-0d34-0410-b5e6-96231b3b80d8
fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).
This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112190 91177308-0d34-0410-b5e6-96231b3b80d8
which does the same thing. This eliminates redundant code and
handles MDNodes better. MDNode linking still doesn't fully
work yet though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111941 91177308-0d34-0410-b5e6-96231b3b80d8
- Eliminate redundant successors.
- Convert an indirectbr with one successor into a direct branch.
Also, generalize SimplifyCFG to be able to be run on a function entry block.
It knows quite a few simplifications which are applicable to the entry
block, and it only needs a few checks to avoid trouble with the entry block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111060 91177308-0d34-0410-b5e6-96231b3b80d8
exactly what bugpoint expected it to do.
There was also only one user of
BlockExtractorPass(const std::vector<BasicBlock*> &B), so just remove it and
make BlockExtractorPass read BlockFile.
This fixes bugpoint's block extraction.
Nick, please review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109936 91177308-0d34-0410-b5e6-96231b3b80d8
such a way that debug info for symbols is preserved even if the symbols are
optimized away by the optimizer.
Add new special pass to remove debug info for such symbols.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107416 91177308-0d34-0410-b5e6-96231b3b80d8
handled cases where a block had zero predecessors, but failed to detect other
cases like loops with no entries. The SSAUpdater is already doing a forward
traversal through the blocks, so it is not hard to identify the blocks that
were never reached on that traversal. This fixes the crash for ppc on the
stepanov_vector test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103184 91177308-0d34-0410-b5e6-96231b3b80d8
add a version of createLowerInvokePass that allows the client
to specify whether it wants "expensive" or "cheap" lowering.
Patch by Alex Mac!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102402 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call. The improved inliner behavior is still
disabled for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102196 91177308-0d34-0410-b5e6-96231b3b80d8
that appear in the SCC as a result of inlining as candidates
for inlining. Change this so that it *does* consider call
sites that change from being indirect to being direct as a
result of inlining. This allows it to completely
"devirtualize" the testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102146 91177308-0d34-0410-b5e6-96231b3b80d8
arguments are handled with a new InlineFunctionInfo class. This
makes it easier to extend InlineFunction to return more info in the
future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102137 91177308-0d34-0410-b5e6-96231b3b80d8
to determine where to place PHIs by iteratively comparing reaching definitions
at each block. That was just plain wrong. This version now computes the
dominator tree within the subset of the CFG where PHIs may need to be placed,
and then places the PHIs in the iterated dominance frontier of each definition.
The rest of the patch is mostly the same, with a few more performance
improvements added in.
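A textbook sketch of the iterated-dominance-frontier step (generic placeholder types, not the SSAUpdater code itself): PHIs are placed in the IDF of the blocks containing definitions.

  #include <map>
  #include <queue>
  #include <set>

  using Block = int;  // placeholder node id

  // Given each block's dominance frontier DF, compute the iterated dominance
  // frontier of DefBlocks; PHIs are inserted in exactly these blocks.
  std::set<Block> iteratedDF(const std::map<Block, std::set<Block>> &DF,
                             const std::set<Block> &DefBlocks) {
    std::set<Block> PhiBlocks;
    std::queue<Block> Work;
    for (Block B : DefBlocks)
      Work.push(B);
    while (!Work.empty()) {
      Block B = Work.front();
      Work.pop();
      auto It = DF.find(B);
      if (It == DF.end())
        continue;
      for (Block F : It->second)
        if (PhiBlocks.insert(F).second)  // a new PHI is itself a definition
          Work.push(F);
    }
    return PhiBlocks;
  }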
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101612 91177308-0d34-0410-b5e6-96231b3b80d8
to CallGraphSCCPass's instead of passing around a
std::vector<CallGraphNode*>. No functionality change,
but now we have a much tidier interface.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101558 91177308-0d34-0410-b5e6-96231b3b80d8
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100304 91177308-0d34-0410-b5e6-96231b3b80d8
(what was I thinking?) and there's also a problem with LCSSA. I'll try again
later with fixes.
--- Reverse-merging r100263 into '.':
U lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100177 into '.':
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100148 into '.':
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100147 into '.':
U include/llvm/Transforms/Utils/SSAUpdater.h
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100131 into '.':
G include/llvm/Transforms/Utils/SSAUpdater.h
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100130 into '.':
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100126 into '.':
G include/llvm/Transforms/Utils/SSAUpdater.h
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100050 into '.':
D test/Transforms/GVN/2010-03-31-RedundantPHIs.ll
--- Reverse-merging r100047 into '.':
G include/llvm/Transforms/Utils/SSAUpdater.h
G lib/Transforms/Utils/SSAUpdater.cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100264 91177308-0d34-0410-b5e6-96231b3b80d8
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100191 91177308-0d34-0410-b5e6-96231b3b80d8
PHIs. The previous algorithm was unable to reliably detect when existing
PHIs in a cycle can be reused. I'm still working on reducing a testcase.
Radar 7711900.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100047 91177308-0d34-0410-b5e6-96231b3b80d8
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99928 91177308-0d34-0410-b5e6-96231b3b80d8
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.
This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.
This is a more conservative version of r98089 that doesn't break the clang
test CodeGenCXX/temp-order.cpp. That test relies on rather extreme inlining
for constant folding.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98099 91177308-0d34-0410-b5e6-96231b3b80d8
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.
This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98089 91177308-0d34-0410-b5e6-96231b3b80d8
can be used in more places. Add an argument for the TargetData that
most of them need. Update for the getInt8PtrTy() change. Should be
no functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97844 91177308-0d34-0410-b5e6-96231b3b80d8
Initial skeleton and SCEVUnknown lowering implemented;
the rest should come relatively quickly. Move testcase
to new directory.
Move pass to right before SimplifyLibCalls - which is
moved down a bit so we can take advantage of a few opts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95628 91177308-0d34-0410-b5e6-96231b3b80d8
This time it's for real! I am going to hook this up in the frontends as well.
The inliner has some experimental heuristics for dealing with the inline hint.
When given a -respect-inlinehint option, functions marked with the inline
keyword are given a threshold just above the default for -O3.
We need some experiments to determine if that is the right thing to do.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95466 91177308-0d34-0410-b5e6-96231b3b80d8
unconditionally. Besides checking the offset, also check that the underlying
object is aligned as much as the load itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94875 91177308-0d34-0410-b5e6-96231b3b80d8
No functional change except the forgotten test for
InlineLimit.getNumOccurrences() == 0 in the CurrentThreshold2 calculation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94007 91177308-0d34-0410-b5e6-96231b3b80d8
RecursivelyDeleteDeadPHINode, and DeleteDeadPHIs return a flag
indicating whether they made any changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92732 91177308-0d34-0410-b5e6-96231b3b80d8
MergeBlockIntoPredecessor. This makes SimplifyCFG slightly more aggressive,
and makes it unnecessary for LoopUnroll to have its own copy of this code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85667 91177308-0d34-0410-b5e6-96231b3b80d8
Checks on Demand algorithm which looks at arbitrary branches instead of loop
iterations. This is GSoC work by Andre Tavares with only editorial changes
applied!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85382 91177308-0d34-0410-b5e6-96231b3b80d8
Remove LowerAllocations pass.
Update some more passes to treat free calls just like they were treating FreeInst.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85176 91177308-0d34-0410-b5e6-96231b3b80d8
GEPs (more than one non-zero index) into simple GEPs (at most one
non-zero index). In some simple experiments using this it's not
uncommon to see 3% overall code size wins, because it exposes
redundancies that can be eliminated, however it's tricky to use
because instcombine aggressively undoes the work that this pass does.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85144 91177308-0d34-0410-b5e6-96231b3b80d8
Update all analysis passes and transforms to treat free calls just like FreeInst.
Remove RaiseAllocations and all its tests since FreeInst no longer needs to be raised.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84987 91177308-0d34-0410-b5e6-96231b3b80d8
not just at the end. Add a big comment explaining when this could
be useful (which never happens for jump threading).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83741 91177308-0d34-0410-b5e6-96231b3b80d8
constants used in inlining heuristics (especially
those used in more than one file). No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83675 91177308-0d34-0410-b5e6-96231b3b80d8
out of it, and jump threading, condprop and gvn are now getting
most of the benefit. This was approved by Nicholas and Nicolas.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83390 91177308-0d34-0410-b5e6-96231b3b80d8
constants out of loops. These aren't covered by the regular LICM
pass, because in LLVM IR constants don't require separate
instructions. They're not always covered by the MachineLICM pass
either, because it doesn't know how to unfold folded constant-pool
loads. This is somewhat experimental at this point, and off by
default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82076 91177308-0d34-0410-b5e6-96231b3b80d8
that get created during loop unswitching, and fix SplitBlockPredecessors'
LCSSA updating code to create new PHIs instead of trying to just move
existing ones.
Also, optimize Loop::verifyLoop, since it gets called a lot. Use
searches on a sorted list of blocks instead of calling the "contains"
function, as is done in other places in the Loop class, since "contains"
does a linear search. Also, don't call verifyLoop from LoopSimplify or
LCSSA, as the PassManager is already calling verifyLoop as part of
LoopInfo's verifyAnalysis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81221 91177308-0d34-0410-b5e6-96231b3b80d8
that these passes are properly preserved.
Fix several transformation passes that claimed to preserve LoopSimplify
form but weren't.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80926 91177308-0d34-0410-b5e6-96231b3b80d8
argpromotion and structretpromote. Basically, when replacing
a function, they used the 'changeFunction' api which changes
the entry in the function map (and steals/reuses the callgraph
node).
This has some interesting effects: first, the problem is that it doesn't
update the "callee" edges in any callees of the function in the call graph.
Second, this covers for a major problem in all the CGSCC pass stuff, which
is that it is completely broken when functions are deleted if they *don't*
reuse a CGN. (there is a cute little fixme about this though :).
This patch changes the protocol that CGSCC passes must obey: now the CGSCC
pass manager copies the SCC and preincrements its iterator to avoid passes
invalidating it. This allows CGSCC passes to mutate the current SCC. However
multiple passes may be run on that SCC, so if passes do this, they are now
required to *update* the SCC to be current when they return.
Other less interesting parts of this patch are that it makes passes update
the CG more directly, eliminates changeFunction, and requires clients of
replaceCallSite to specify the new callee CGN if they are changing it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80527 91177308-0d34-0410-b5e6-96231b3b80d8
calls into a function and if the calls bring in arrays, try to merge
them together to reduce stack size. For example, in the testcase
we'd previously end up with 4 allocas, now we end up with 2 allocas.
As described in the comments, this is not really the ideal solution
to this problem, but it is surprisingly effective. For example, on
176.gcc, we end up eliminating 67 arrays at "gccas" time and another
24 at "llvm-ld" time.
One piece of concern that I didn't look into: at -O0 -g with
forced inlining this will almost certainly result in worse debug
info. I think this is acceptable though given that this is a case
of "debugging optimized code", and we don't want debug info to
prevent the optimizer from doing things anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80215 91177308-0d34-0410-b5e6-96231b3b80d8