interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153813 91177308-0d34-0410-b5e6-96231b3b80d8
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.
This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to (a rough code sketch follows the list):
- Build a simplification mapping based on the callsite arguments to the
function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
breadth-first ordering) and walk its instructions using a custom
InstVisitor.
- For each instruction's operands, re-map them based on the
simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
cost metric.
- When the terminator is reached, replace any conditional value in the
terminator with any simplifications from the mapping we have, and add
any successors which are not proven to be dead from these
simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
callsite, stop analyzing the function in order to bound cost.
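Roughly, in code (a sketch only -- every name here is a stand-in, not the
actual interface of the analysis):

#include <deque>
#include <functional>
#include <set>
#include <vector>

struct Block;  // opaque stand-in for a basic block

bool underThreshold(Block *Entry, int Threshold,
                    const std::function<int(Block *)> &chargeBlock,
                    const std::function<std::vector<Block *>(Block *)> &liveSuccs) {
  std::deque<Block *> Worklist;
  std::set<Block *> Visited;
  int Cost = 0;
  Worklist.push_back(Entry);
  while (!Worklist.empty()) {
    Block *BB = Worklist.front();   // pop off the *front*: breadth-first
    Worklist.pop_front();
    if (!Visited.insert(BB).second)
      continue;                     // block already analyzed
    Cost += chargeBlock(BB);        // remap operands, simplify, add costs
    if (Cost > Threshold)
      return false;                 // bound the cost of the analysis itself
    for (Block *S : liveSuccs(BB))  // successors not proven dead
      Worklist.push_back(S);
  }
  return true;
}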
The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.
Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.
Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.
I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch; it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.
As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153812 91177308-0d34-0410-b5e6-96231b3b80d8
Original commit message for r153521 (aka r153423):
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153587 91177308-0d34-0410-b5e6-96231b3b80d8
undefined behavior, which Rafael was kind enough to fix.
Original commit message for r153423:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153521 91177308-0d34-0410-b5e6-96231b3b80d8
Original commit message:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153452 91177308-0d34-0410-b5e6-96231b3b80d8
constant-offsets of a common base using the generic GEP-walking logic
I added for computing pointer differences in the same situation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153419 91177308-0d34-0410-b5e6-96231b3b80d8
inbounds GEPs. This isn't really necessary for simplifying pointer
differences, but I'm planning to re-use the same code to simplify
pointer comparisons where it is necessary. Since real code almost
exclusively uses inbounds GEPs, it doesn't seem worth it to support the
extra complexity of turning it on and off. If anyone would like that
back, feel free to shout. Note that instcombine will still catch any of
these patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153418 91177308-0d34-0410-b5e6-96231b3b80d8
spotted by inspection, and I've crafted no test case that triggers it on
my machine, but some of the windows builders are hitting what looks like
memory corruption, so *something* is amiss here.
This patch takes a more generalized approach to eliminating
double-visits. Imagine code such as:
%x = ...
%y = add %x, 1
%z = add %x, %y
You can imagine that if we simplify %x, we would add %y and %z to the
list. If the use-chain order happens to cause us to add them in reverse
order, we could pull %y off first, and simplify it, adding %z to the
list. We now have %z on the list twice, and will reference it after it
is deleted.
Currently, all my test cases happen to not trigger this, likely due to
the use-chain ordering, but there seems no guarantee that such
a situation could not occur, so we should handle it correctly.
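The general shape of the guard this implies, as a minimal sketch
(hypothetical names; the real code operates on instruction worklists):

#include <set>
#include <vector>

// Track what is already enqueued so a value can never appear on the
// worklist twice, even if several of its operands are simplified before
// we get to it.
template <typename T>
void enqueue(std::vector<T *> &Worklist, std::set<T *> &Enqueued, T *V) {
  if (Enqueued.insert(V).second)  // false => already pending, don't re-add
    Worklist.push_back(V);
}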
Again, if anyone knows how to craft a testcase that actually triggers
this, please let me know.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153397 91177308-0d34-0410-b5e6-96231b3b80d8
worklist. This can happen in theory when an instruction uses itself,
such as a PHI node. This was spotted by inspection, and unfortunately
I've not been able to come up with a test case that would trigger it. If
anyone has ideas, let me know...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153396 91177308-0d34-0410-b5e6-96231b3b80d8
bit simpler by handling a common case explicitly.
Also, refactor the implementation to use a worklist based walk of the
recursive users, rather than trying to use value handles to detect and
recover from RAUWs during the recursive descent. This fixes a very
subtle bug in the previous implementation where degenerate control flow
structures could cause mutually recursive instructions (PHI nodes) to
collapse in just such a way that From became equal to To after some
amount of recursion. At that point, we hit the inf-loop that the assert
at the top attempted to guard against. This problem is defined away when
not using value handles in this manner. There are lots of comments
claiming that the WeakVH will protect against just this sort of error,
but they're not accurate about the actual implementation of WeakVHs,
which do still track RAUWs.
I don't have any test case for the bug this fixes because it requires
running the recursive simplification on unreachable phi nodes. I've no
way to either run this or easily write an input that triggers it. It was
found when using instruction simplification inside the inliner when
running over the nightly test-suite.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153393 91177308-0d34-0410-b5e6-96231b3b80d8
the PassManager annoying and should be reimplemented as a decorator
on top of existing passes (as should the timing data).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153305 91177308-0d34-0410-b5e6-96231b3b80d8
not attached to a basic block or function. There are conservatively
correct answers in these cases, and this makes the analysis more useful
in contexts where we have a partially formed bit of IR.
I don't have any way to test this directly... suggestions welcome here,
but I'm not seeing anything sadly. I only found this using a subsequent
patch to the inliner which runs instsimplify on partially inlined
instructions, and even then only on a quite large program. I never got
a reasonable testcase out of it, and anything I do get is likely to be
quite fragile due to requiring an interaction of two different passes,
and the only result being a segfault if it goes wrong.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153176 91177308-0d34-0410-b5e6-96231b3b80d8
instead of skipping the current loop.
My prior fix was incomplete because of an overzealous compile-time optimization:
Better fix for: <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153131 91177308-0d34-0410-b5e6-96231b3b80d8
overflow checking multiply intrinsic as well.
Add a test for this, updating the test from grep to FileCheck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153028 91177308-0d34-0410-b5e6-96231b3b80d8
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.
Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.
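A sketch of the direct query (hedged: the include paths and the hasFnAttr
spelling are from memory of this era's API):

#include "llvm/Attributes.h"
#include "llvm/Function.h"
using namespace llvm;

// Direct query instead of set membership: ask the callee itself whether
// it carries the always-inline attribute.
static bool isAlwaysInline(const Function *Callee) {
  return Callee && Callee->hasFnAttr(Attribute::AlwaysInline);
}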
The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.
The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern; we were just needlessly paying the uniquing cost on every
insert.
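The sort+unique pattern in question, as a generic sketch:

#include <algorithm>
#include <vector>

// Two-phase use: insert freely (duplicates allowed), then pay the
// uniquing cost once before the removal phase instead of on every insert.
template <typename T>
void sortAndUnique(std::vector<T> &V) {
  std::sort(V.begin(), V.end());
  V.erase(std::unique(V.begin(), V.end()), V.end());
}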
This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.
I believe there is no functional change here, but yell if you spot one.
None are intended.
Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152903 91177308-0d34-0410-b5e6-96231b3b80d8
analysis implementation. The header was already separated. Also cleanup
all the comments in the header to follow a nice modern doxygen form.
There is still plenty of cruft here, but some of that will fall out in
subsequent refactorings and this was an easy step in the right
direction. No functionality changed here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152898 91177308-0d34-0410-b5e6-96231b3b80d8
Only record IVUsers that are dominated by simplified loop
headers. Otherwise SCEVExpander will crash while looking for a
preheader.
I previously tried to work around this in LSR itself, but that was
insufficient. This way, LSR can continue to run if some uses are not
in simple loops, as long as we don't attempt to analyze those users.
Fixes <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152892 91177308-0d34-0410-b5e6-96231b3b80d8
theoretical fix since it only matters for types with >= 2^63 bits (!) and also
only matters if pointers have more than 64 bits, which is not supported anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152831 91177308-0d34-0410-b5e6-96231b3b80d8
essentially sorting the pair's arguments. I'd love to actually call sort
here, but I'm just not that crazy. ;]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152764 91177308-0d34-0410-b5e6-96231b3b80d8
This appears to not be the case with dragonegg at least in some
contexts. Hopefully will fix the bootstrap assert failure there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152763 91177308-0d34-0410-b5e6-96231b3b80d8
correlated pairs of pointer arguments at the callsite. This is designed
to recognize the common C++ idiom of begin/end pointer pairs when the
end pointer is a constant offset from the begin pointer. With the
C-based idiom of a pointer and size, the inline cost saw the constant
size calculation, and this provides the same level of information for
begin/end pairs.
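For example (illustrative C++, not from the tree):

// The callee sees two pointer arguments; at this callsite their
// difference is the constant 16, which is exactly the information a
// callee taking (pointer, size) would have seen as a constant size.
void fill(int *begin, int *end);

void caller() {
  int buf[16];
  fill(buf, buf + 16);  // end == begin + constant offset
}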
In order to propagate this information we have to search for candidate
operations on a pair of pointer function arguments (or derived from
them) which would be simplified if the pointers had a known constant
offset. Then the callsite analysis looks for such pointer pairs in the
argument list, and applies the appropriate bonus.
This helps LLVM detect that half of bounds-checked STL algorithms
(such as hash_combine_range, and some hybrid sort implementations)
disappear when inlined with a constant size input. However, it's not
a complete fix due to the inaccuracy of our cost metric for constants in
general. I'm looking into that next.
Benchmarks showed no significant code size change, and very minor
performance changes. However, specific code such as hashing is showing
significantly cleaner inlining decisions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152752 91177308-0d34-0410-b5e6-96231b3b80d8
take a TargetLibraryInfo parameter. Internally, rather than passing TD, TLI
and DT parameters around all over the place, introduce a struct for holding
them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152623 91177308-0d34-0410-b5e6-96231b3b80d8
offset accumulation to use a boring APInt instead of ConstantExprs.
I didn't go all the way to an 'int64_t' because I wanted APInt to handle
any magic required to properly wrap the arithmetic when the pointer
width is <64 bits. If there is a significant penalty from using APInt
here, first off WTF, and secondly let me know and I'll do the math by
hand.
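A sketch of the intent (hypothetical helper; the real code threads this
through the GEP walk):

#include "llvm/ADT/APInt.h"
using namespace llvm;

// Accumulate Index * EltSize into an offset sized to the pointer width;
// APInt arithmetic wraps at that width, matching pointer arithmetic on
// targets with <64-bit pointers.
static APInt addScaledIndex(APInt Offset, uint64_t Index, uint64_t EltSize) {
  unsigned PtrBits = Offset.getBitWidth();
  return Offset + APInt(PtrBits, Index) * APInt(PtrBits, EltSize);
}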
I've left one layer still operating w/ ConstantExpr because it makes the
interface quite a bit simpler, and that one isn't iterative so has much
lower cost.
I suppose this may potentially speed up some strange compilation
situations, but I don't really expect much. It should have no functional
impact either way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152590 91177308-0d34-0410-b5e6-96231b3b80d8
Typically instcombine has handled this, but pointer differences show up
in several contexts where we would like to get constant folding, and
cannot afford to run instcombine. Specifically, I'm working on improving
the constant folding of arguments used in inline cost analysis with
instsimplify.
Doing this in instsimplify implies some algorithm changes. We have to
handle multiple layers of all-constant GEPs because instsimplify cannot
fold them into a single GEP the way instcombine can. Also, we're only
interested in all-constant GEPs. The result is that this doesn't really
replace the instcombine logic; it's just complementary and focused on
constant folding.
Reviewed on IRC by Benjamin Kramer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152555 91177308-0d34-0410-b5e6-96231b3b80d8
Renamed the methods caseBegin, caseEnd, and caseDefault to case_begin, case_end, and case_default.
Added some notes about case iterators.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152532 91177308-0d34-0410-b5e6-96231b3b80d8
The 'CmpInst::isFalseWhenEqual' function returns 'false' for predicates other
than simple equality; for instance, it returns 'false' for <= or >=. This isn't the
correct behavior for this transformation, which is checking for strict equality
and non-equality. It was causing the gcc.c-torture/execute/frame-address.c test
to fail because it would completely (and incorrectly) optimize a whole function
into a 'ret i32 0'.
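What the transformation actually wants is a test for the strict equality
predicates themselves, something like this sketch (ICmpInst::isEquality is
the existing helper with this meaning):

#include "llvm/InstrTypes.h"
using namespace llvm;

// isFalseWhenEqual(pred) is true for <, >, and !=, and false for ==, <=,
// and >=, so it cannot serve as a proxy for "this is a strict equality or
// non-equality test". Check the predicates we mean directly:
static bool isStrictEqualityPredicate(CmpInst::Predicate P) {
  return P == CmpInst::ICMP_EQ || P == CmpInst::ICMP_NE;
}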
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152497 91177308-0d34-0410-b5e6-96231b3b80d8
a common collection of methods on Value, and share their implementation.
We had two variations in two different places already, and I need the
third variation for inline cost estimation.
Reviewed by Duncan Sands on IRC, but further comments here welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152490 91177308-0d34-0410-b5e6-96231b3b80d8
introduced. Specifically, there are cost reductions for all
constant-operand icmp instructions against an alloca, regardless of
whether the alloca will in fact be eligible for SROA. That means we
don't want to abort the icmp reduction computation when we abort the
SROA reduction computation. That in turn frees us from the need to keep
a separate worklist and defer the ICmp calculations.
Use this new-found freedom and some judicious function boundaries to
factor the innards of computing the cost factor of any given instruction
out of the loop over the instructions and into static helper functions.
This greatly simplifies the code, and hopefully makes it more clear what
is happening here.
Reviewed by Eric Christopher. There is some concern that we'd like to
ensure this doesn't get out of hand, and I plan to benchmark the effects
of this change over the next few days along with some further fixes to
the inline cost.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152368 91177308-0d34-0410-b5e6-96231b3b80d8
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html
Implemented CaseIterator, which solves almost all of the described issues: we no longer need to mix operand/case/successor indexing. The base iterator class is implemented as a template since it may be initialized either from "const SwitchInst*" or from "SwitchInst*".
ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and case value.
Using the iterators allows us to remove the resolveXXXX methods entirely. All indexing conversions are done automatically inside the iterator's getters.
The main way to use the iterator looks like this:
SwitchInst *SI = ... // initialize it somehow
for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}
If you want to convert a case number to a TerminatorInst successor index, just use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.
There are also related changes in llvm-clients: klee and clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152297 91177308-0d34-0410-b5e6-96231b3b80d8
analysis to be methods on the cost analysis's function info object
instead of the code metrics object. These really are just users of the
code metrics, they're building the information for the function's
analysis.
This is the first step of growing the amount of information we collect
about a function in order to cope with pair-wise simplifications due to
allocas.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152283 91177308-0d34-0410-b5e6-96231b3b80d8
This could probably be made a lot smarter, but this is a common case and doesn't require LVI to scan a lot
of code. With this change CVP can optimize away the "shift == 0" case in Hashing.h that only gets hit when
"shift" is in a range not containing 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151919 91177308-0d34-0410-b5e6-96231b3b80d8
verifier does. This correctly handles invoke.
Thanks to Duncan, Andrew and Chris for the comments.
Thanks to Joerg for the early testing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151469 91177308-0d34-0410-b5e6-96231b3b80d8
by using llvm::isIdentifiedObject. Also teach it to handle GEPs that have
the same base pointer and constant operands. Fixes PR11238!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151449 91177308-0d34-0410-b5e6-96231b3b80d8
the dominance once the dominates method is fixed and why we can use the builder's
insertion point.
Fixes pr12048.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151125 91177308-0d34-0410-b5e6-96231b3b80d8
know where users will be added. Because of this, it cannot use
Builder.GetInsertPoint at all.
This patch
* removes the FIXME about adding the assert.
* adds a comment explaining why we don't have one.
* removes broken logic that only works for some callers and is not needed
since r150884.
* adds an assert to caller that would have caught the bug fixed by r150884.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151015 91177308-0d34-0410-b5e6-96231b3b80d8
the cast. If we do, we can end up with
inst1
--------------- < Insertion point
dbg inst
new inst
instead of the desired
inst1
new inst
--------------- < Insertion point
dbg inst
Another option would be for InsertNoopCastOfTo (or its callers) to move the
insertion point and we would end up with
inst1
dbg inst
new inst
--------------- < Insertion point
but that complicates the callers. This fixes PR12018 (and firefox's build).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150884 91177308-0d34-0410-b5e6-96231b3b80d8
actually work, at least as described. LLVM Metadata is not
intended to suppress LLVM IR rules, as it can be stripped at
any time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150821 91177308-0d34-0410-b5e6-96231b3b80d8
but with a critical fix to the SelectionDAG code that optimizes copies
from strings into immediate stores: the previous code was stopping reading
string data at the first nul. Address this by adding a new argument to
llvm::getConstantStringInfo, preserving the behavior before the patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149800 91177308-0d34-0410-b5e6-96231b3b80d8
The purpose of this refactoring is to hide operand roles from the SwitchInst user (programmer). If you want to play with operands directly, you will probably need lower-level methods than the SwitchInst ones (TerminatorInst or maybe User). After this patch we can reorganize SwitchInst operands and successors as we want.
What was done:
1. Changed the semantics of the index inside the getCaseValue method:
getCaseValue(0) means "get the first case", not the condition. Use getCondition() if you want to resolve the condition. I propose not mixing SwitchInst case indexing with low-level indexing (TI successor indexing, User operand indexing), since it may be dangerous.
2. For the same reason, findCaseValue(ConstantInt*) returns the actual case number. 0 means the first case, not the default. If there is no case with the given value, ErrorIndex will be returned.
3. Added the getCaseSuccessor method. I propose avoiding TerminatorInst::getSuccessor if you want to resolve a case's successor BB. Use getCaseSuccessor instead, since the internal SwitchInst organization of operands/successors is hidden and may change at any moment.
4. Added resolveSuccessorIndex and resolveCaseIndex. The main purpose of these methods is to see how case successors are really mapped in TerminatorInst.
4.1 "resolveSuccessorIndex" was created for when you need to drop down a level from SwitchInst to TerminatorInst. It returns the TerminatorInst successor index for a given case successor.
4.2 "resolveCaseIndex" converts a low-level successor index to the case index that corresponds to the given successor.
Note: There are also related compatibility fix patches for dragonegg, klee, llvm-gcc-4.0, llvm-gcc-4.2, safecode, clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149481 91177308-0d34-0410-b5e6-96231b3b80d8
kicking in the big win of ConstantDataArray. As part of this, change
the implementation of GetConstantStringInfo in ValueTracking to work
with ConstantDataArray (and not ConstantArray) making it dramatically,
amazingly, more efficient in the process and renaming it to
getConstantStringInfo.
This keeps around a GetConstantStringInfo entrypoint that (grossly)
forwards to getConstantStringInfo and constructs the std::string
required, but existing clients should move over to
getConstantStringInfo instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149351 91177308-0d34-0410-b5e6-96231b3b80d8
Unfortunately I also had to disable constant-pool-sharing.ll; the code it tests has been
updated to use the IL logic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149148 91177308-0d34-0410-b5e6-96231b3b80d8
we're at it, allow PatternMatch's "neg" pattern to match integer
vector negations, and enhance ComputeNumSignBits to handle
shl of vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149082 91177308-0d34-0410-b5e6-96231b3b80d8
savings from a pointer argument becoming an alloca. Sometimes callees will even
compare a pointer to null and then branch to an otherwise unreachable block!
Detect these cases and compute the number of saved instructions, instead of
bailing out and reporting no savings.
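An illustrative callee (hypothetical code, not from a benchmark):

// If the caller passes the address of a local array, inlining turns P
// into an alloca; the null test is then trivially false and everything
// it guards is dead, which the cost analysis should credit as savings.
int sum(const int *P, int N) {
  if (P == 0)      // dead once P is known to be an alloca
    return -1;
  int S = 0;
  for (int i = 0; i < N; ++i)
    S += P[i];
  return S;
}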
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148941 91177308-0d34-0410-b5e6-96231b3b80d8
instead of its own hard coded thing, allowing it to handle
ConstantDataSequential and fixing some obscure bugs (e.g. it would
previously crash on a CAZ of vector type).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148788 91177308-0d34-0410-b5e6-96231b3b80d8
out into a new ConstantFoldLoadThroughGEPIndices (more useful) function
and rewrite it to be simpler, more efficient, and to handle the new
ConstantDataSequential type.
Enhance ConstantFoldLoadFromConstPtr to handle ConstantDataSequential.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148786 91177308-0d34-0410-b5e6-96231b3b80d8
can't handle. Also don't produce non-zero results for things which won't be
transformed by SROA at all just because we saw the loads/stores before we saw
the use of the address.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148536 91177308-0d34-0410-b5e6-96231b3b80d8
LSR has gradually been improved to more aggressively reuse existing code, particularly existing phi cycles. This exposed problems with the SCEVExpander's sloppy treatment of its insertion point. I applied some rigor to the insertion point problem that will hopefully avoid an endless bug cycle in this area. Changes:
- Always use properlyDominates to check safe code hoisting.
- The insertion point provided to SCEV is now considered a lower bound. This is usually a block terminator or the use itself. Under no circumstance may SCEVExpander insert below this point.
- LSR is responsible for finding a "canonical" insertion point across expansion of different expressions.
- Robust logic to determine whether IV increments are in "expanded" form and/or can be safely hoisted above some insertion point.
Fixes PR11783: SCEVExpander assert.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148535 91177308-0d34-0410-b5e6-96231b3b80d8
need to make a deep copy of each of the std::maps. Use a std::map of the
std::map instead. This improves the compile time of sqlite3 by ~2%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148003 91177308-0d34-0410-b5e6-96231b3b80d8
These heuristics are sufficient for enabling IV chains by
default. Performance analysis has been done for i386, x86_64, and
thumbv7. The optimization is rarely important, but can significantly
speed up certain cases by eliminating spill code within the
loop. Unrolled loops are prime candidates for IV chains. In many
cases, the final code could still be improved with more target
specific optimization following LSR. The goal of this feature is for
LSR to make the best choice of induction variables.
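For instance, in an unrolled loop like this sketch, the three address
computations can be chained through a single pointer that is bumped
between uses, instead of three independent base+offset expressions kept
live across the loop:

void scale(float *S, float K, int N) {
  // Unrolled by three: a prime candidate for an IV chain.
  for (int I = 0; I + 2 < N; I += 3) {
    S[I]     *= K;
    S[I + 1] *= K;
    S[I + 2] *= K;
  }
}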
Instruction selection may not completely take advantage of this
feature yet. As a result, there could be cases of slight code size
increase.
Code size can be worse on x86 because it doesn't support postincrement
addressing. In fact, when chains are formed, you may see redundant
address plus stride addition in the addressing mode. GenerateIVChains
tries to compensate for the common cases.
On ARM, code size increase can be mitigated by using postincrement
addressing, but downstream codegen currently misses some opportunities.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147826 91177308-0d34-0410-b5e6-96231b3b80d8
captured. This allows the tracker to look at the specific use, which may be
especially interesting for function calls.
Use this to fix 'nocapture' deduction in FunctionAttrs. The existing one does
not iterate until a fixpoint and does not guarantee that it produces the same
result regardless of iteration order. The new implementation builds up a graph
of how arguments are passed from function to function, and uses a bottom-up walk
on the argument-SCCs to assign nocapture. This gets us nocapture more often, and
does so rather efficiently and independent of iteration order.
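A tiny example of why order matters (illustrative):

// f's parameter escapes only if g captures its own; deciding g first
// (the bottom-up walk over argument SCCs) gets both arguments marked
// nocapture regardless of the order the functions appear in the module.
static int g(const int *Q) { return *Q; }
static int f(const int *P) { return g(P); }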
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147327 91177308-0d34-0410-b5e6-96231b3b80d8
unsigned foo(unsigned x) { return 31 - __builtin_clz(x); }
now compiles into a single "bsrl" instruction on x86.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147255 91177308-0d34-0410-b5e6-96231b3b80d8
probability wouldn't be considered "hot" in some weird loop structures
or other compounding probability patterns. This makes it much harder to
confuse, but isn't really a principled fix. I'd actually like it if we
could model a zero probability, as it would make this much easier to
reason about. Suggestions for how to do this better are welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147142 91177308-0d34-0410-b5e6-96231b3b80d8
call site of an intrinsic is also not an inline candidate. While here, make it
more obvious that this code ignores all intrinsics. Noticed by inspection!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147037 91177308-0d34-0410-b5e6-96231b3b80d8
pointer or a reference type - we actually just want the size of the
pointer then for that.
Fixes rdar://10335756
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146785 91177308-0d34-0410-b5e6-96231b3b80d8
into Analysis as a standalone function, since there's no need for
it to be in VMCore. Also, update it to use isKnownNonZero and
other goodies available in Analysis, making it more precise,
enabling more aggressive optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146610 91177308-0d34-0410-b5e6-96231b3b80d8
subdirectories to traverse into.
- Originally I wanted to avoid this and just autoscan, but this has one key
flaw in that new subdirectories cannot automatically trigger a rerun of the
llvm-build tool. This is particularly a pain when switching back and forth
between trees where one has added a subdirectory, as the dependencies will
tend to be wrong. This also eliminates a FIXME implicitly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146436 91177308-0d34-0410-b5e6-96231b3b80d8
indicates whether the intrinsic has a defined result for a first
argument equal to zero. This will eventually allow these intrinsics to
accurately model the semantics of GCC's __builtin_ctz and __builtin_clz
and the X86 instructions (prior to AVX) which implement them.
This patch merely sets the stage by extending the signature of these
intrinsics and establishing auto-upgrade logic so that the old spelling
still works both in IR and in bitcode. The upgrade logic preserves the
existing (inefficient) semantics. This patch should not change any
behavior. CodeGen isn't updated because it can use the existing
semantics regardless of the flag's value.
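A model of the two behaviors the flag distinguishes (illustrative only,
not the intrinsic's implementation):

#include <cassert>

// Flag false: ctz(0) is defined as the bit width (the old, conservative
// semantics). Flag true: the caller promises a nonzero input, matching
// __builtin_ctz and the pre-AVX x86 BSF instruction.
unsigned cttz32(unsigned X, bool IsZeroUndef) {
  if (X == 0) {
    assert(!IsZeroUndef && "result is undefined for a zero input");
    return 32;
  }
  unsigned N = 0;
  while ((X & 1u) == 0) { X >>= 1; ++N; }
  return N;
}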
Note that this will be followed by API updates to Clang and DragonEgg.
Reviewed by Nick Lewycky!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146357 91177308-0d34-0410-b5e6-96231b3b80d8
don't do this now, but add a test case to prevent this from happening in the
future.
Additional test for rdar://9892684
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145879 91177308-0d34-0410-b5e6-96231b3b80d8
-15% on ARMDisassembler.cpp (Release build). It's not that great to add another
layer of caching to the caching-heavy LVI but I don't see a better way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145770 91177308-0d34-0410-b5e6-96231b3b80d8
weak variable are compiled by different compilers, such as GCC and LLVM, while
LLVM may increase the alignment to the preferred alignment there is no reason to
think that GCC will use anything more than the ABI alignment. Since it is the
GCC version that might end up in the final program (as the linkage is weak), it
is wrong to increase the alignment of loads from the global up to the preferred
alignment as the alignment might only be the ABI alignment.
Increasing alignment up to the ABI alignment might be OK, but I'm not totally
convinced that it is. It seems better to just leave the alignment of weak
globals alone.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145413 91177308-0d34-0410-b5e6-96231b3b80d8
and positive: positive, because it could be directly computed to be positive;
negative, because the nsw flag means it is either negative or undefined (the
multiplication always overflowed).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145104 91177308-0d34-0410-b5e6-96231b3b80d8
The loop tree's inclusive block lists are painful and expensive to
update. (I have no idea why they're inclusive). The design was
supposed to handle this case but the implementation missed it and my
unit tests weren't thorough enough.
Fixes PR11335: loop unroll update.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144970 91177308-0d34-0410-b5e6-96231b3b80d8
and stores capture) to permit the caller to see each capture point and decide
whether to continue looking.
Use this inside memdep to do an analysis that basicaa won't do. This lets us
solve another devirtualization case, fixing PR8908!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144580 91177308-0d34-0410-b5e6-96231b3b80d8