then it doesn't alter the instructions composing it; however, it would still
move the instructions to just before the expression root. Ensure it doesn't
move them either, so now it really does nothing if there is nothing to do. That
commit also ensured that nsw etc. flags weren't cleared if the expression was
not being changed. Tweak this a bit so that it also doesn't clear flags on the
initial part of a computation if that part didn't change but later parts did.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157518 91177308-0d34-0410-b5e6-96231b3b80d8
are passed in. However, those arguments may be in a write-protected area, as far
as the runtime library is concerned. For instance, the data could be placed into
a 'linkedit' section, which isn't writable. Emit the code from
llvm_gcda_increment_indirect_counter directly into the function instead.
Note: The code for this is ugly, and can lead to bloat. We should look into
simplifying this code instead of having all of these branches.
<rdar://problem/11181370>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157505 91177308-0d34-0410-b5e6-96231b3b80d8
with arbitrary topologies (previously it would give up when hitting a diamond
in the use graph for example). The testcase from PR12764 is now reduced from
a pile of additions to the optimal 1617*%x0+208. In doing this I changed the
previous strategy of dropping all uses for expression leaves to one of dropping
all but one use. This works out more neatly (but required a bunch of tweaks)
and is also safer: some recently fixed bugs during recursive linearization
occurred because the linearization code assumes it completely owns a node if
the node has no uses outside the expression it is linearizing. But if the node
was also in another expression that had already been linearized (and thus had
all of its uses from that expression dropped), then the conclusion that it is
completely owned by the expression currently being linearized is wrong. Keeping
one use from within each linearized expression avoids this kind of mistake.
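To illustrate the kind of diamond in the use graph that previously caused a
bail-out, here is a hypothetical example (not the PR12764 testcase):

  // 't' has two uses inside the same expression, forming a diamond in the
  // use graph; linearization can now see through this and produce 4 * x.
  int diamond(int x) {
    int t = x + x;
    return t + t;  // == 4 * x
  }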
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157467 91177308-0d34-0410-b5e6-96231b3b80d8
LowerSwitch::Clusterify: its main functionality was replaced with
CRSBuilder::optimize, so a big part of Clusterify's code was removed.
test/Transform/LowerSwitch/feature.ll - this test was refactored: grep + count
usage was replaced with FileCheck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157384 91177308-0d34-0410-b5e6-96231b3b80d8
inline threshold if the global inline threshold is lower (as for -Oz).
Reviewed by Chandler Carruth and Bill Wendling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157323 91177308-0d34-0410-b5e6-96231b3b80d8
move EmitGEPOffset from InstCombine to Transforms/Utils/Local.h
A draft of this patch was reviewed by Andrew, thanks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157261 91177308-0d34-0410-b5e6-96231b3b80d8
leader table. That's because it wasn't expecting instructions to turn up as
leaders for a value number that is not their own, but equality propagation
could create this situation. One solution is to have the leader table use a
WeakVH, but this slows down GVN by about 5%. Instead, just have equality
propagation not add instructions to the leader table, only constants and
arguments. In theory this might cause GVN to run more often (each time it
changes something it runs again), but it doesn't seem to occur enough to cause
a slowdown.
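A conceptual sketch of the fix, with GVN's machinery reduced to a tag and a
map (everything here is simplified for illustration):

  #include <map>

  enum class Kind { Instruction, Constant, Argument };
  struct Value { Kind K; };

  std::map<unsigned, Value *> LeaderTable;  // value number -> leader

  void recordEqualityLeader(unsigned VN, Value *V) {
    // Instructions may be erased later in the pass, leaving a dangling
    // leader behind; constants and arguments never go away.
    if (V->K == Kind::Constant || V->K == Kind::Argument)
      LeaderTable[VN] = V;
  }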
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157251 91177308-0d34-0410-b5e6-96231b3b80d8
so that it can be reused in MemCpyOptimizer. This analysis is needed to remove
an unnecessary memcpy when returning a struct into a local variable.
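The source pattern this enables optimizing looks roughly like the following
(an illustration; the names are made up):

  struct Big { int data[64]; };

  Big make() {  // returned via a hidden sret pointer
    Big b = {};
    b.data[0] = 42;
    return b;
  }

  void caller() {
    // Without the optimization the call writes a temporary and a memcpy
    // copies it into 'local'; with it, make() can write into 'local'
    // directly.
    Big local = make();
    (void)local;
  }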
rdar://11341081
PR12686
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156776 91177308-0d34-0410-b5e6-96231b3b80d8
add an additional parameter to InstCombiner::EmitGEPOffset() to force it to *not* emit operations with the NUW flag
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156585 91177308-0d34-0410-b5e6-96231b3b80d8
refactor code a bit to enable future changes to support run-time information
add support for computing allocation sizes at run-time if penalty > 1 (e.g., malloc(x), calloc(x, y), and VLAs)
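A sketch of the idea: when the allocation size is only known at run time, a
check can still be emitted against the size expression (illustrative only,
not the pass's actual output):

  #include <cstdlib>

  void store_checked(char *p, std::size_t alloc_size, std::size_t idx) {
    // alloc_size is the run-time size expression, e.g. 'x' for malloc(x)
    // or 'x * y' for calloc(x, y).
    if (idx >= alloc_size)
      std::abort();  // out-of-bounds access caught at run time
    p[idx] = 0;
  }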
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156515 91177308-0d34-0410-b5e6-96231b3b80d8
replace the operands of expressions that have only one use with undef and
generate a new expression for the original without using RAUW to update the
original. Thus any copies of the original expression held in a vector may end
up referring to some bogus value, and using a ValueHandle won't help, since
there is no RAUW. There is already a mechanism for getting the effect of
recursion non-recursively: adding the value to be recursed on to RedoInsts. But
it wasn't being used systematically. Have the various places where recursion
had snuck in use the RedoInsts mechanism instead. Fixes PR12169.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156379 91177308-0d34-0410-b5e6-96231b3b80d8
The primitive conservative heuristic seems to give a slight overall
improvement while not regressing stuff. Make it available to wider
testing. If you notice any speed regressions (or significant code
size regressions) let me know!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156258 91177308-0d34-0410-b5e6-96231b3b80d8
This came up when a change in block placement formed a cmov and slowed down a
hot loop by 50%:
ucomisd (%rdi), %xmm0
cmovbel %edx, %esi
cmov is a really bad choice in this context because it doesn't get branch
prediction. If we emit it as a branch, an out-of-order CPU can do a better job
(if the branch is predicted right) and avoid waiting for the slow load+compare
instruction to finish. Of course it won't help if the branch is unpredictable,
but those are really rare in practice.
This patch uses a dumb conservative heuristic: it turns all cmovs that have one
use and a direct memory operand into branches. cmovs usually save some code
size, so we disable the transform in -Os mode. In-order architectures are
unlikely to benefit as well; those are covered by the
"predictableSelectIsExpensive" flag.
It would be better to reuse branch probability info here, but BPI doesn't
support select instructions currently. It would make sense to use the same
heuristics as the if-converter pass, which performs the opposite of this
transform.
The test suite shows a small improvement here and there on corei7-level
machines, but the actual results depend a lot on the microarchitecture used. The
transformation is currently disabled by default and available by passing the
-enable-cgp-select2branch flag to the code generator.
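A rough illustration of the tradeoff in C++ (whether a cmov is actually
emitted is of course up to the compiler and target):

  double select_cmov(const double *p, double x, double a, double b) {
    // cmov form: no branch prediction, so the consumer always waits on
    // the slow load + compare.
    return (*p < x) ? a : b;
  }

  double select_branchy(const double *p, double x, double a, double b) {
    // branch form: a correctly predicted branch lets an out-of-order CPU
    // run ahead instead of waiting for the load + compare.
    if (*p < x)
      return a;
    return b;
  }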
Thanks to Chandler for the initial test case, and to Evan Cheng for providing
me with comments and test-suite numbers that were more stable than mine :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156234 91177308-0d34-0410-b5e6-96231b3b80d8
of the CodeExtractor utility. This allows speculatively computing input
and output sets to measure the likely size impact of the code
extraction.
Sadly, these sets cannot be reused -- we mutate the function prior to
forming the final sets used by the actual extraction.
The interface has been revamped slightly to make it easier to use
correctly, by making the interface const and by sinking the computation of
the number of exit blocks into the full extraction function, away
from the rest of this logic, which just computes two output parameters.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156168 91177308-0d34-0410-b5e6-96231b3b80d8
blocks, assert that this doesn't happen. We don't want to bother trying
to support this call pattern as it isn't necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156167 91177308-0d34-0410-b5e6-96231b3b80d8
and expose it as a utility class rather than as free function wrappers.
The simple free-function interface works well for the bugpoint-specific
pass's uses of code extraction, but for an upcoming patch doing more
advanced code extraction, it simply doesn't expose a rich enough
interface. I need to expose various stages of the code extraction
process and to query information to decide whether to actually
complete the extraction or give up.
Rather than build up a new predicate model and pass that into these
functions, just take the class that was actually implementing the
functions and lift it up into a proper interface that can be used to
perform code extraction. The interface is cleaned up and re-documented
to work better in a header. It is also now set up to accept the blocks to
be extracted in the constructor rather than in a method.
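A hypothetical usage sketch of the resulting interface (the exact names and
signatures here are assumptions):

  // Construct over the blocks to extract, then query before committing.
  CodeExtractor CE(BlocksToExtract, &DT);
  if (CE.isEligible()) {
    Function *Outlined = CE.extractCodeRegion();
    // ... use Outlined ...
  }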
In passing, this essentially reverts my previous commit here exposing
a block-level query for extraction eligibility. That is no longer
necessary with the richer interface, as clients can query the
extraction object for eligibility directly. This will reduce the number
of walks of the input basic block sequence by quite a bit, which is
useful if this enters the normal optimization pipeline.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156163 91177308-0d34-0410-b5e6-96231b3b80d8
minor behavior changes with this, but nothing I have seen evidence of in
the wild or expect to be meaningful. The real goal is unifying our logic
and simplifying the interfaces. A summary of the changes follows:
- Make 'callIsSmall' actually accept a callsite so it can handle
intrinsics, and simplify callers appropriately.
- Nuke a completely bogus declaration of 'callIsSmall' that was still
lurking in InlineCost.h... No idea how this got missed.
- Teach 'isInstructionFree' about the various more intelligent
'free' heuristics that got added to the inline cost analysis during
review and testing. These mostly concern int->ptr and ptr->int casts.
- Switch most of the interesting parts of the inline cost analysis that
were essentially computing 'is this instruction free?' to use the code
metrics routine instead. This way we won't keep duplicating logic.
All of this is motivated by the desire to allow other passes to compute
a roughly equivalent 'cost' metric for a particular basic block as the
inline cost analysis. Sadly, re-using the same analysis for both is
really messy because only the actual inline cost analysis is ever going
to go to the contortions required for simplification, SROA analysis,
etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156140 91177308-0d34-0410-b5e6-96231b3b80d8
extraction into a public interface. Also clean it up and apply it more
consistently such that we check for landing pads *anywhere* in the
extracted code, not just in single-block extraction.
This will be used to guide decisions in passes that are planning to
eventually perform a round of code extraction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156114 91177308-0d34-0410-b5e6-96231b3b80d8
<rdar://problem/11291436>.
This is a second attempt at a fix for this, the first was r155468. Thanks
to Chandler, Bob and others for the feedback that helped me improve this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155866 91177308-0d34-0410-b5e6-96231b3b80d8
Allow the "SplitCriticalEdge" function to split the edge to a landing pad. If
the pass is *sure* that it knows what it's doing, then it may go ahead
and specify that the landing pad can have its critical edge split. The loop
unswitch pass is one of these passes. It will split the critical edges of all
edges coming from a loop to a landing pad not within the loop. Doing so will
retain important loop analysis information, such as loop-simplify form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155817 91177308-0d34-0410-b5e6-96231b3b80d8
Target specific types should not be vectorized. As a practical matter,
these types are already register matched (at least in the x86 case),
and codegen does not always work correctly (at least in the ppc case,
and this is not worth fixing because ppc_fp128 is currently broken and
will probably go away soon).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155729 91177308-0d34-0410-b5e6-96231b3b80d8
The limit is set to an arbitrary recursion depth of 1000 to avoid stack overflow
issues. <rdar://problem/11286839>.
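The depth-cap pattern, sketched generically rather than as the actual code:

  struct Node { Node *left, *right; };

  const unsigned MaxDepth = 1000;

  unsigned count(const Node *n, unsigned depth) {
    if (!n || depth > MaxDepth)  // give up rather than overflow the stack
      return 0;
    return 1 + count(n->left, depth + 1) + count(n->right, depth + 1);
  }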
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155722 91177308-0d34-0410-b5e6-96231b3b80d8
The required checks are moved to ChainInstruction() itself and the
policy decisions are moved to IVChain::isProfitableInc().
Also cache the ExprBase in IVChain to avoid frequent recomputations.
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155676 91177308-0d34-0410-b5e6-96231b3b80d8
elements to minimize the number of multiplies required to compute the
final result. This uses a heuristic to attempt to form near-optimal
binary exponentiation-style multiply chains. While there are some cases
it misses, it seems to do at least a decent job on a very diverse range of
inputs.
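The binary exponentiation style of chain being formed, sketched for a scalar
(this shows the shape of the output, not the pass itself):

  // Computes x^n with O(log n) multiplies instead of n - 1.
  double powi(double x, unsigned n) {
    double result = 1.0;
    while (n) {
      if (n & 1)
        result *= x;  // fold in the current power when the bit is set
      x *= x;         // square: x, x^2, x^4, x^8, ...
      n >>= 1;
    }
    return result;
  }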
Initial benchmarks show no interesting regressions, and an 8%
improvement on SPASS. Let me know if any other interesting results (in
either direction) crop up!
Credit to Richard Smith for the core algorithm, and helping code the
patch itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155616 91177308-0d34-0410-b5e6-96231b3b80d8
of a precise count. Also, move RRInfo's Partial field into PtrState,
now that it won't increase the size.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155513 91177308-0d34-0410-b5e6-96231b3b80d8
These lists exclude invoke unwind edges and loop backedges which
are being ignored. This makes it easier to ignore them
consistently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155500 91177308-0d34-0410-b5e6-96231b3b80d8
Original commit message:
Defer some shl transforms to DAGCombine.
The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.
Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.
These transformations are deferred:
(X >>? C) << C --> X & (-1 << C) (When X >> C has multiple uses)
(X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2) (When C2 > C1)
(X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2) (When C1 > C2)
The corresponding exact transformations are preserved, just like
div-exact + mul:
(X >>?,exact C) << C --> X
(X >>?,exact C1) << C2 --> X << (C2-C1)
(X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)
The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.
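As a quick sanity check of the first two identities for unsigned values (an
illustration, not part of the change):

  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t x = 0xDEADBEEF;
    unsigned c = 5, c1 = 3, c2 = 7;  // c2 > c1
    assert(((x >> c) << c) == (x & (~0u << c)));
    assert(((x >> c1) << c2) == ((x << (c2 - c1)) & (~0u << c2)));
    return 0;
  }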
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155362 91177308-0d34-0410-b5e6-96231b3b80d8
While the patch was perfect and defect free, it exposed a really nasty
bug in X86 SelectionDAG that caused an llc crash when compiling lencod.
I'll put the patch back in after fixing the SelectionDAG problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155181 91177308-0d34-0410-b5e6-96231b3b80d8
The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.
Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.
These transformations are deferred:
(X >>? C) << C --> X & (-1 << C) (When X >> C has multiple uses)
(X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2) (When C2 > C1)
(X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2) (When C1 > C2)
The corresponding exact transformations are preserved, just like
div-exact + mul:
(X >>?,exact C) << C --> X
(X >>?,exact C1) << C2 --> X << (C2-C1)
(X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)
The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155136 91177308-0d34-0410-b5e6-96231b3b80d8
If the loop contains invoke instructions, whose unwind edge escapes the loop,
then don't try to unswitch the loop. Doing so may cause the unwind edge to be
split, which is not only non-trivial but also doesn't preserve loop-simplify
information.
Fixes PR12573
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154987 91177308-0d34-0410-b5e6-96231b3b80d8
This introduces a threshold of 200 IV Users, which is very
conservative but should be sufficient to avoid a serious compile-time
sink or a stack overflow. The llvm test-suite with LTO never exceeds 190
users per loop.
The bug doesn't relate to a specific type of loop. Checking in an
arbitrary giant loop as a unit test would be silly.
Fixes rdar://11262507.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154983 91177308-0d34-0410-b5e6-96231b3b80d8
also fix SimplifyLibCalls to use TLI rather than compile-time conditionals to enable optimizations on floor, ceil, round, rint, and nearbyint
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154960 91177308-0d34-0410-b5e6-96231b3b80d8
When vectorizing pointer types it is important to realize that potential
pairs cannot be connected via the address pointer argument of a load or store.
This is because even after vectorization, the address is still a scalar because
the address of the higher half of the pair is implicit from the address of the
lower half (it need not be, and should not be, explicitly computed).
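For illustration, two pairable loads and the single address that survives
pairing (conceptual only):

  double sum_adjacent(const double *p) {
    double lo = p[0];  // lower half: its address, p, is the only one needed
    double hi = p[1];  // upper half: the address p + 1 stays implicit
    return lo + hi;    // after pairing: one <2 x double> load from p
  }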
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154735 91177308-0d34-0410-b5e6-96231b3b80d8
As has been suggested by Duncan and others, Early-CSE and GVN should
do similar redundancy elimination, but Early-CSE is much less expensive.
Most of my autovectorization benchmarks show a performance regression, but
all of these are < 0.1%, and so I think that it is still worth using
the less expensive pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154673 91177308-0d34-0410-b5e6-96231b3b80d8
library return value optimization for phi uses. Even when the
phi itself is not dominated, the specific use may be dominated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154647 91177308-0d34-0410-b5e6-96231b3b80d8
obviously cannot know that this code is present, let alone used. So prevent the
internalize pass from internalizing those global values which code-gen may
insert.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154645 91177308-0d34-0410-b5e6-96231b3b80d8