Commit Graph

4272 Commits

Author SHA1 Message Date
Rafael Espindola
95d594cac3 Teach CodeGen's version of computeMaskedBits to understand the range metadata.
This is the CodeGen equivalent of r153747. I tested that there is no noticeable
performance difference with any combination of -O0/-O2/-g when compiling
gcc as a single compilation unit.
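
A hedged, standalone illustration of the idea (not the actual computeMaskedBits
code): for a load annotated with range metadata of the form [0, Hi), every bit
above the highest set bit of Hi-1 is known to be zero. The function name and
shape below are assumptions made for this example.

#include <cstdint>

// Returns a mask of bits guaranteed to be zero for any value in [0, Hi),
// assuming Hi >= 1.
uint64_t knownZeroMaskFromRange(uint64_t Hi) {
  uint64_t Max = Hi - 1;                   // largest value the load can yield
  if (Max == 0)
    return ~0ULL;                          // the value is always 0
  unsigned HighestSet = 63 - __builtin_clzll(Max);
  if (HighestSet == 63)
    return 0;                              // no high bits are known zero
  return ~((uint64_t{1} << (HighestSet + 1)) - 1);
}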

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153817 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 18:14:00 +00:00
Chandler Carruth
f5f256cffd Fix a typo reported in IRC by someone reviewing this code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153815 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 13:18:09 +00:00
Chandler Carruth
45de584b4f Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153813 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 12:48:08 +00:00
Chandler Carruth
f2286b0152 Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.
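
A very rough sketch of the walk outlined above. Every type and helper here
(simplifyWithMapping, costOf, liveSuccessors, and the skeletal IR classes) is
an illustrative placeholder, not the real InlineCost implementation.

#include <deque>
#include <map>
#include <set>
#include <vector>

struct Value { virtual ~Value() = default; };
struct Instruction : Value {};
struct BasicBlock {
  std::vector<Instruction *> Insts;        // terminator is the last entry
};

// Hypothetical helpers standing in for the callsite-specific machinery.
Value *simplifyWithMapping(Instruction *I, std::map<Value *, Value *> &Map);
int costOf(Instruction *I);
std::vector<BasicBlock *> liveSuccessors(Instruction *Term,
                                         std::map<Value *, Value *> &Map);

bool shouldInline(BasicBlock *Entry, int Threshold) {
  std::map<Value *, Value *> Simplified;    // per-callsite simplification map
  std::deque<BasicBlock *> Worklist{Entry}; // FIFO pop gives breadth-first order
  std::set<BasicBlock *> Seen{Entry};
  int Cost = 0;
  while (!Worklist.empty()) {
    BasicBlock *BB = Worklist.front();
    Worklist.pop_front();
    for (Instruction *I : BB->Insts) {
      if (Value *V = simplifyWithMapping(I, Simplified)) {
        Simplified[I] = V;                  // remember the fold; it costs nothing
        continue;
      }
      Cost += costOf(I);
      if (Cost > Threshold)
        return false;                       // stop analyzing once over budget
    }
    // Only successors not proven dead by the simplifications stay live.
    for (BasicBlock *Succ : liveSuccessors(BB->Insts.back(), Simplified))
      if (Seen.insert(Succ).second)
        Worklist.push_back(Succ);
  }
  return true;
}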

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch, it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153812 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 12:42:41 +00:00
Rafael Espindola
7c7121edb9 Add computeMaskedBitsLoad back, as it was the change to instsimplify that
caused the slowdown last time.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153747 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-30 15:52:11 +00:00
Eric Christopher
6c31ee2b10 Lowercase the tag name to match the rest of dwarf.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153691 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-29 21:35:05 +00:00
Eric Christopher
b8ca988743 Add support for objc property decls according to the page at:
http://llvm.org/docs/SourceLevelDebugging.html#objcproperty

including type and DECL. Expand the metadata needed accordingly.

rdar://11144023

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153639 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-29 08:42:56 +00:00
Rafael Espindola
8f3fabe0fe Handle intrinsics in GlobalsModRef. Fixes pr12351.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153604 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-28 21:31:24 +00:00
Chad Rosier
89e2b318e2 Revert r153521 as it's causing large regressions on the nightly testers.
Original commit message for r153521 (aka r153423):
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153587 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-28 18:42:50 +00:00
Chad Rosier
d23a64cc16 Reapply r153423; the original commit was fine. The failing test, distray, had
undefined behavior, which Rafael was kind enough to fix.

Original commit message for r153423:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153521 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-27 17:44:52 +00:00
Andrew Trick
eb6dd23c95 SCEV fix: Handle loop invariant loads.
Fixes PR11882: NULL dereference in ComputeLoadConstantCompareExitLimit.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153480 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26 22:33:59 +00:00
Chad Rosier
cde6650bd0 Revert r153423 as this is causing failures on our internal nightly testers.
Original commit message:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153452 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26 18:07:14 +00:00
Rafael Espindola
7ddcd35d6b Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153423 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26 01:44:11 +00:00
Chandler Carruth
58725a66c0 Teach instsimplify how to simplify comparisons of pointers which are
constant-offsets of a common base using the generic GEP-walking logic
I added for computing pointer differences in the same situation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153419 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-25 21:28:14 +00:00
Chandler Carruth
9d9e29b4a8 Switch the pointer-difference simplification logic to only work with
inbounds GEPs. This isn't really necessary for simplifying pointer
differences, but I'm planning to re-use the same code to simplify
pointer comparisons where it is necessary. Since real code almost
exclusively uses inbounds GEPs, it doesn't seem worth it to support the
extra complexity of turning it on and off. If anyone would like that
back, feel free to shout. Note that instcombine will still catch any of
these patterns.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153418 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-25 20:43:07 +00:00
Chandler Carruth
6231d5be41 Try to harden the recursive simplification still further. This is again
spotted by inspection, and I've crafted no test case that triggers it on
my machine, but some of the windows builders are hitting what looks like
memory corruption, so *something* is amiss here.

This patch takes a more generalized approach to eliminating
double-visits. Imagine code such as:

  %x = ...
  %y = add %x, 1
  %z = add %x, %y

You can imagine that if we simplify %x, we would add %y and %z to the
list. If the use-chain order happens to cause us to add them in reverse
order, we could pull %y off first, and simplify it, adding %z to the
list. We now have %z on the list twice, and will reference it after it
is deleted.

Currently, all my test cases happen to not trigger this, likely due to
the use-chain ordering, but there seems no guarantee that such
a situation could not occur, so we should handle it correctly.
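
Illustrative only, not the actual fix: one generic way to rule out the
double-visit is to pair the worklist with a set so a value reached through two
use chains (like %z above) is queued only once. DedupWorklist is a name made up
for this sketch.

#include <unordered_set>
#include <vector>

template <typename T> class DedupWorklist {
  std::vector<T *> List;
  std::unordered_set<T *> Pending;

public:
  void push(T *V) {
    if (Pending.insert(V).second)   // insert() reports whether V was new
      List.push_back(V);
  }
  T *pop() {
    T *V = List.back();
    List.pop_back();
    Pending.erase(V);
    return V;
  }
  bool empty() const { return List.empty(); }
};

This is similar in spirit to keeping a "pending" set alongside the vector, the
way LLVM's SetVector does.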

Again, if anyone knows how to craft a testcase that actually triggers
this, please let me know.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153397 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-24 22:34:26 +00:00
Chandler Carruth
c5b785b91c Don't add the instruction about to be RAUW'ed and erased to the
worklist. This can happen in theory when an instruction uses itself,
such as a PHI node. This was spotted by inspection, and unfortunately
I've not been able to come up with a test case that would trigger it. If
anyone has ideas, let me know...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153396 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-24 22:34:23 +00:00
Chandler Carruth
6b980541df Refactor the interface for recursively simplifying instructions to be
a tad bit simpler by handling a common case explicitly.

Also, refactor the implementation to use a worklist based walk of the
recursive users, rather than trying to use value handles to detect and
recover from RAUWs during the recursive descent. This fixes a very
subtle bug in the previous implementation where degenerate control flow
structures could cause mutually recursive instructions (PHI nodes) to
collapse in just such a way that From became equal to To after some
amount of recursion. At that point, we hit the inf-loop that the assert
at the top attempted to guard against. This problem is defined away when
not using value handles in this manner. There are lots of comments
claiming that the WeakVH will protect against just this sort of error,
but they're not accurate about the actual implementation of WeakVHs,
which do still track RAUWs.
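
A sketch only of the worklist-based shape described above; simplifyInstr,
usersOf, and replaceAndErase are hypothetical stand-ins, not real LLVM entry
points.

#include <unordered_set>
#include <vector>

struct Value;
struct Instruction;

Value *simplifyInstr(Instruction *I);                  // hypothetical
std::vector<Instruction *> usersOf(Instruction *I);    // hypothetical
void replaceAndErase(Instruction *I, Value *With);     // hypothetical

void simplifyUsersIteratively(Instruction *Root) {
  std::vector<Instruction *> Worklist;
  std::unordered_set<Instruction *> Queued;            // queue each user once
  for (Instruction *U : usersOf(Root))
    if (Queued.insert(U).second)
      Worklist.push_back(U);
  while (!Worklist.empty()) {
    Instruction *I = Worklist.back();
    Worklist.pop_back();
    if (Value *V = simplifyInstr(I)) {
      // Users of I may simplify further once I folds away; queue them first.
      for (Instruction *U : usersOf(I))
        if (Queued.insert(U).second)
          Worklist.push_back(U);
      replaceAndErase(I, V);
    }
  }
}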

I don't have any test case for the bug this fixes because it requires
running the recursive simplification on unreachable phi nodes. I've no
way to either run this or easily write an input that triggers it. It was
found when using instruction simplification inside the inliner when
running over the nightly test-suite.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153393 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-24 21:11:24 +00:00
Eric Christopher
9e7e609525 Take out the debug info probe stuff. It's making some changes to
the PassManager annoying and should be reimplemented as a decorator
on top of existing passes (as should the timing data).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153305 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-23 03:54:05 +00:00
Andrew Trick
1508e5e049 Cleanup IVUsers::addUsersIfInteresting.
Keep the public interface clean, even though LLVM proper does not
currently use it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153263 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-22 17:47:33 +00:00
Chandler Carruth
ff739c1575 Teach instsimplify to gracefully degrade in the presence of instructions
not attached to a basic block or function. There are conservatively
correct answers in these cases, and this makes the analysis more useful
in contexts where we have a partially formed bit of IR.

I don't have any way to test this directly... suggestions welcome here,
but I'm not seeing anything sadly. I only found this using a subsequent
patch to the inliner which runs instsimplify on partially inlined
instructions, and even then only on a quite large program. I never got
a reasonable testcase out of it, and anything I do get is likely to be
quite fragile due to requiring an interaction of two different passes,
and the only result being a segfault if it goes wrong.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153176 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-21 10:58:47 +00:00
Andrew Trick
a3b10b8359 LSR: teach isSimplifiedLoopNest to handle PHI IVUsers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153132 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-20 21:24:44 +00:00
Andrew Trick
f9492288bb LSR: fix IVUsers isSimplifiedLoopNest to perform a full domtree walk
instead of skipping the current loop.

My prior fix was incomplete because of an overzealous compile-time optimization.
Better fix for: <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153131 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-20 21:24:40 +00:00
Nick Lewycky
f201a06662 Factor out the multiply analysis code in ComputeMaskedBits and apply it to the
overflow checking multiply intrinsic as well.

Add a test for this, updating the test from grep to FileCheck.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153028 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-18 23:28:48 +00:00
Chandler Carruth
f91f5af802 Start removing the use of an ad-hoc 'never inline' set and instead
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.

Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.

The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.

The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern, we were just needlessly paying the uniquing cost on every
insert.
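
A rough sketch of the sort+unique pattern mentioned above, assuming inserts and
lookups happen in separate phases; the function and variable names are
illustrative, not the always-inliner's actual code.

#include <algorithm>
#include <vector>

struct Function;   // opaque for the sketch

void finalizeInlinedSet(std::vector<Function *> &Inlined) {
  // The insert phase appended freely, possibly with duplicates; pay the
  // uniquing cost once here instead of on every insert.
  std::sort(Inlined.begin(), Inlined.end());
  Inlined.erase(std::unique(Inlined.begin(), Inlined.end()), Inlined.end());
  // Later membership tests can then use std::binary_search over the vector.
}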

This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.

I believe there is no functional change here, but yell if you spot one.
None are intended.

Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152903 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-16 06:10:13 +00:00
Chandler Carruth
9b081d9691 Pull the implementation of the code metrics out of the inline cost
analysis implementation. The header was already separated. Also cleanup
all the comments in the header to follow a nice modern doxygen form.

There is still plenty of cruft here, but some of that will fall out in
subsequent refactorings and this was an easy step in the right
direction. No functionality changed here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152898 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-16 05:51:52 +00:00
Andrew Trick
75ae20366f LSR fix: Add isSimplifiedLoopNest to IVUsers analysis.
Only record IVUsers that are dominated by simplified loop
headers. Otherwise SCEVExpander will crash while looking for a
preheader.

I previously tried to work around this in LSR itself, but that was
insufficient. This way, LSR can continue to run if some uses are not
in simple loops, as long as we don't attempt to analyze those users.

Fixes <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152892 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-16 03:16:56 +00:00
Eric Christopher
75df9f23fa Do the right thing on NULL uint64 fields.
Patch by Clemens Hammacher!

Fixes PR12243

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152880 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-16 00:21:54 +00:00
Duncan Sands
f72e0ca715 Type sizes and field offsets inside structs are unsigned. This is a highly
theoretical fix since it only matters for types with >= 2^63 bits (!) and also
only matters if pointers have more than 64 bits, which is not supported anyway.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152831 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-15 20:14:42 +00:00
Chandler Carruth
377c7f049d Make the swap code here a bit more obvious about what it's doing... We're
essentially sorting the pair's arguments. I'd love to actually call sort
here, but I'm just not that crazy. ;]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152764 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-15 00:55:51 +00:00
Chandler Carruth
0e33d9fea2 Don't assume that the arguments are processed in some particular order.
This appears to not be the case with dragonegg at least in some
contexts. Hopefully will fix the bootstrap assert failure there.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152763 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-15 00:50:21 +00:00
Chandler Carruth
220d2d7b50 Remove all remnants of partial specialization in the cost computation
side of things. This is all dead code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152759 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-15 00:29:08 +00:00
Chandler Carruth
274d377ea6 Extend the inline cost calculation to account for bonuses due to
correlated pairs of pointer arguments at the callsite. This is designed
to recognize the common C++ idiom of begin/end pointer pairs when the
end pointer is a constant offset from the begin pointer. With the
C-based idiom of a pointer and size, the inline cost saw the constant
size calculation, and this provides the same level of information for
begin/end pairs.

In order to propagate this information we have to search for candidate
operations on a pair of pointer function arguments (or derived from
them) which would be simplified if the pointers had a known constant
offset. Then the callsite analysis looks for such pointer pairs in the
argument list, and applies the appropriate bonus.

This helps LLVM detect that half of bounds-checked STL algorithms
(such as hash_combine_range, and some hybrid sort implementations)
disappear when inlined with a constant size input. However, it's not
a complete fix due to the inaccuracy of our cost metric for constants in
general. I'm looking into that next.

Benchmarks showed no significant code size change, and very minor
performance changes. However, specific code such as hashing is showing
significantly cleaner inlining decisions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152752 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-14 23:19:53 +00:00
Chandler Carruth
3d1d895c86 Refactor the inline cost bonus calculation for constants to use
a worklist rather than a recursive call.

No functionality changed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152706 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-14 07:32:53 +00:00
Chris Lattner
5161de6ebb enhance jump threading to preserve TBAA information when PRE'ing loads,
fixing rdar://11039258, an issue that came up when inspecting clang's 
bootstrapped codegen.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152635 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-13 18:07:41 +00:00
Duncan Sands
bd0fe56425 Generalize the "trunc(ptrtoint(x)) - trunc(ptrtoint(y)) ->
trunc(ptrtoint(x-y))" optimization introduced by Chandler.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152626 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-13 14:07:05 +00:00
Duncan Sands
0aa85eb231 Uniformize the InstructionSimplify interface by ensuring that all routines
take a TargetLibraryInfo parameter.  Internally, rather than passing TD, TLI
and DT parameters around all over the place, introduce a struct for holding
them.
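
A hedged sketch of the "hold them in a struct" idea; the struct and field names
below are assumptions for illustration, not the exact types that landed in
InstructionSimplify.

struct TargetData;
struct TargetLibraryInfo;
struct DominatorTree;

struct SimplifyQuery {
  const TargetData *TD = nullptr;          // data layout, may be absent
  const TargetLibraryInfo *TLI = nullptr;  // library-call info, may be absent
  const DominatorTree *DT = nullptr;       // dominator tree, may be absent
};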


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152623 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-13 11:42:19 +00:00
Eli Friedman
5b8f0ddc7e Fix regression from r151466: we can't replace uses of an instruction reachable from the entry block with uses of an instruction not reachable from the entry block. PR12231.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152595 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-13 01:06:07 +00:00
Chandler Carruth
90c14fcb7e Address some review comments from Duncan. This moves the iterative
offset accumulation to use a boring APInt instead of ConstantExprs.
I didn't go all the way to an 'int64_t' because I wanted APInt to handle
any magic required to properly wrap the arithmetic when the pointer
width is <64 bits. If there is a significant penalty from using APInt
here, first off WTF, and secondly let me know and I'll do the math by
hand.
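
A sketch of the idea under discussion: accumulate the constant byte offset in
an APInt sized to the pointer width, so wrapping below 64 bits falls out of the
arithmetic itself. The loop below is illustrative, not the actual accumulation
code.

#include "llvm/ADT/APInt.h"
#include <cstdint>
#include <vector>

llvm::APInt accumulateOffsets(unsigned PointerSizeInBits,
                              const std::vector<int64_t> &ByteOffsets) {
  llvm::APInt Offset(PointerSizeInBits, 0);
  for (int64_t O : ByteOffsets)
    Offset += llvm::APInt(PointerSizeInBits, static_cast<uint64_t>(O),
                          /*isSigned=*/true);  // wraps at the pointer width
  return Offset;
}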

I've left one layer still operating w/ ConstantExpr because it makes the
interface quite a bit simpler, and that one isn't iterative so has much
lower cost.

I suppose this may speed up some strange compilation
situations, but I don't really expect much. It should have no functional
impact either way.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152590 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-13 00:06:15 +00:00
Chandler Carruth
fc72ae613a Teach instsimplify how to constant fold pointer differences.
Typically instcombine has handled this, but pointer differences show up
in several contexts where we would like to get constant folding, and
cannot afford to run instcombine. Specifically, I'm working on improving
the constant folding of arguments used in inline cost analysis with
instsimplify.

Doing this in instsimplify implies some algorithm changes. We have to
handle multiple layers of all-constant GEPs because instsimplify cannot
fold them into a single GEP the way instcombine can. Also, we're only
interested in all-constant GEPs. The result is that this doesn't really
replace the instcombine logic; it's just complementary and focused on
constant folding.

Reviewed on IRC by Benjamin Kramer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152555 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-12 11:19:31 +00:00
Stepan Dyatkovskiy
3d3abe0852 llvm::SwitchInst
Renamed the methods caseBegin, caseEnd, and caseDefault to case_begin, case_end, and case_default.
Added some notes related to case iterators.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152532 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-11 06:09:17 +00:00
Benjamin Kramer
4210b34e59 Make helper static, so it can be inlined into its sole caller.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152515 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-10 22:41:06 +00:00
Bill Wendling
798d013bcb As Duncan pointed out, pointers tend not to be in floating point format...for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152499 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-10 18:20:55 +00:00
Bill Wendling
c17731d65d Make this transformation slightly less aggressive and more correct.
The 'CmpInst::isFalseWhenEqual' function returns 'false' for predicates other than
simple equality; for instance, it also returns 'false' for <= or >=. This isn't the
correct behavior for this transformation, which is checking for strict equality
and non-equality. It was causing the gcc.c-torture/execute/frame-address.c test
to fail because it would completely (and incorrectly) optimize a whole function
into a 'ret i32 0'.
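
Illustrative only (this is not the actual CmpInst interface): the transform
should key on the strict equality and inequality predicates, rather than on
everything for which isFalseWhenEqual happens to return false.

enum Predicate { ICMP_EQ, ICMP_NE, ICMP_ULE, ICMP_SGE /* ... */ };

bool transformApplies(Predicate P) {
  return P == ICMP_EQ || P == ICMP_NE;   // strict (in)equality only
}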


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152497 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-10 17:56:03 +00:00
Chandler Carruth
84dfc32ff9 Refactor some methods to look through bitcasts and GEPs on pointers into
a common collection of methods on Value, and share their implementation.
We had two variations in two different places already, and I need the
third variation for inline cost estimation.

Reviewed by Duncan Sands on IRC, but further comments here welcome.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152490 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-10 08:39:09 +00:00
Nick Lewycky
00cbccceb3 Factor out the analysis of addition and subtraction in ComputeMaskedBits. Reuse
it to analyze extractvalue(llvm.[us](add|sub).with.overflow.*) intrinsics!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152398 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-09 09:23:50 +00:00
Chandler Carruth
e8187e0294 Undo a previous restriction on the inline cost calculation which Nick
introduced. Specifically, there are cost reductions for all
constant-operand icmp instructions against an alloca, regardless of
whether the alloca will in fact be eligible for SROA. That means we
don't want to abort the icmp reduction computation when we abort the
SROA reduction computation. That in turn frees us from the need to keep
a separate worklist and defer the ICmp calculations.

Use this new-found freedom and some judicious function boundaries to
factor the innards of computing the cost factor of any given instruction
out of the loop over the instructions and into static helper functions.
This greatly simplifies the code, and hopefully makes it more clear what
is happening here.

Reviewed by Eric Christopher. There is some concern that we'd like to
ensure this doesn't get out of hand, and I plan to benchmark the effects
of this change over the next few days along with some further fixes to
the inline cost.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152368 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-09 02:49:36 +00:00
Stepan Dyatkovskiy
c10fa6c801 Took into account Duncan's comments on r149481, dated 2nd Feb 2012:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html

Implemented CaseIterator, which solves almost all of the described issues: we no longer need to mix operand/case/successor indexing. The base iterator class is implemented as a template since it may be initialized either from "const SwitchInst*" or from "SwitchInst*".

ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and case value.

Using the iterator allows us to completely remove the resolveXXXX methods. All indexing conversions are done automatically inside the iterator's getters.

The main way to use the iterator looks like this:
SwitchInst *SI = ... // initialize it somehow

for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}

If you want to convert a case number to a TerminatorInst successor index, just use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.

There are also related changes in llvm-clients: klee and clang.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152297 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-08 07:06:20 +00:00
Chandler Carruth
6f130bf368 Rotate two of the functions used to count bonuses for the inline cost
analysis to be methods on the cost analysis's function info object
instead of the code metrics object. These really are just users of the
code metrics; they're building the information for the function's
analysis.

This is the first step of growing the amount of information we collect
about a function in order to cope with pair-wise simplifications due to
allocas.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152283 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-08 02:04:19 +00:00
Nick Lewycky
891495e676 No functionality change. Type::isSized() can be expensive, so avoid calling it
until after other inexpensive tests.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152195 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-07 02:27:53 +00:00