Two related small changes:
Various dominance-based queries about liveness can get confused if we're talking about unreachable blocks. To avoid reasoning about such cases, just remove them before rewriting statepoints.
Remove single-entry phis (likely left behind by LCSSA) to reduce the number of live values.
Both of these are motivated by http://reviews.llvm.org/D8674 which will be submitted shortly.
Differential Revision: http://reviews.llvm.org/D8675
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234651 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds limited support for inserting explicit relocations when there's a vector of pointers live over the statepoint. This doesn't handle the case where the vector contains a mix of base and non-base pointers; that's future work.
The current implementation just scalarizes the vector over the gc.statepoint before doing the explicit rewrite. An alternate approach would be to plumb the vector all the way through the backend lowering, but doing that appears challenging. In particular, the size of the indirect spill slot is currently assumed to be sizeof(pointer) throughout the backend.
In practice, this is enough to allow running the SLP and Loop vectorizers before RewriteStatepointsForGC.
Differential Revision: http://reviews.llvm.org/D8671
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234647 91177308-0d34-0410-b5e6-96231b3b80d8
CallSite roughly behaves as a common base of CallInst and InvokeInst. Bring
the behavior closer to that model by making upcasts explicit. Downcasts
remain implicit and work as before.
Following dyn_cast as a mental model, checking whether a Value *V isa
CallSite now looks like this:
if (auto CS = CallSite(V)) // think dyn_cast
instead of:
if (CallSite CS = V)
This is an extra token but I think it is slightly clearer. Making the
ctor explicit has the advantage of not accidentally creating nullptr
CallSites, e.g. when you pass a Value * to a function taking a CallSite
argument.
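As a hedged illustration only (not taken from the patch; the helper name is made up), the new idiom compiles like this:

  #include "llvm/IR/CallSite.h"
  #include "llvm/IR/Value.h"
  using namespace llvm;

  void visitIfCallSite(Value *V) {
    if (auto CS = CallSite(V)) {   // explicit upcast, behaves like dyn_cast
      (void)CS.getCalledValue();   // use CS as before when it is non-null
    }
  }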
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234601 91177308-0d34-0410-b5e6-96231b3b80d8
The plan here is to push the API changes out from the common components
(like Constant::getGetElementPtr and IRBuilder::CreateGEP related
functions) and just update callers to either pass the type if it's
obvious, or pass null.
Do this with LoadInst as well and anything else that comes up, then
start porting specific uses to not pass null anymore - this may require
some refactoring in each case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234042 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The old requirement on GEP candidates being in bounds is unnecessary.
For off-bound GEPs, we still have
&B[i * S] = B + (i * S) * e = B + (i * e) * S, where e is the element size.
Test Plan: slsr_offbound_gep in slsr-gep.ll
Reviewers: meheff
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8809
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233949 91177308-0d34-0410-b5e6-96231b3b80d8
Require the pointee type to be passed explicitly and assert that it is
correct. For now it's possible to pass nullptr here (and I've done so in
a few places in this patch) but eventually that will be disallowed once
all clients have been updated or removed. It'll be a long road to get
all the way there... but if you have the chance to update your callers
to pass the type explicitly without depending on a pointer's element
type, that would be a good thing to do soon and a necessary thing to do
eventually.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233938 91177308-0d34-0410-b5e6-96231b3b80d8
This re-adds float2int to the tree, after fixing PR23038. It turns
out the argument to APSInt() is true-if-unsigned, rather than
true-if-signed :(. Added testcase and explanatory comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233370 91177308-0d34-0410-b5e6-96231b3b80d8
The assertion here was more expensive than it needed to be. We're only inserting allocas in the entry block, so we only need to consider ones in the entry block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233362 91177308-0d34-0410-b5e6-96231b3b80d8
All the removed assertions are either implied locally by the assert at the top of the function or properties of the verifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233358 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch enhances SLSR to handle another candidate form &B[i * S]. If
we found two candidates
S1: X = &B[i * S]
S2: Y = &B[i' * S]
and S1 dominates S2, we can replace S2 with
Y = &X[(i' - i) * S]
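As a rough C-level illustration (not taken from the patch; names are illustrative), the rewrite avoids recomputing the second index from scratch:

  // before: both addresses computed independently
  long sumBefore(const char *B, long i, long j, long S) {
    const char *X = &B[i * S];       // candidate S1
    const char *Y = &B[j * S];       // candidate S2, dominated by S1
    return *X + *Y;
  }

  // after SLSR: S2 is expressed relative to the dominating S1
  long sumAfter(const char *B, long i, long j, long S) {
    const char *X = &B[i * S];
    const char *Y = &X[(j - i) * S]; // i.e. Y = &X[(i' - i) * S]
    return *X + *Y;
  }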
Test Plan:
slsr-gep.ll
X86/no-slsr.ll: verify that we do not run SLSR on GEPs that already fit into
an addressing mode
Reviewers: eliben, atrick, meheff, hfinkel
Reviewed By: hfinkel
Subscribers: sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D7459
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233286 91177308-0d34-0410-b5e6-96231b3b80d8
Added test Float2Int/float2int-optnone.ll to verify that pass Float2Int
is not run on optnone functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233183 91177308-0d34-0410-b5e6-96231b3b80d8
The changes to InstCombine do seem a bit silly - it doesn't make
anything obviously better to have the caller access the pointer's element
type (the thing I'm trying to remove) than the GEP itself, but it's a
helpful migration step. This will allow me to more obviously lock down
GEP (& Load, etc) API usage, then fix all the code that accesses pointer
element types except the places that need to be removed (most of the
InstCombines) anyway - at which point I'll need to just remove all that
code because it won't be meaningful anymore (there will be no pointer
types, so no bitcasts to combine).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233126 91177308-0d34-0410-b5e6-96231b3b80d8
This caused PR23008, compiles failing with: "Use still stuck around after Def is
destroyed: %.sroa.speculated"
Also reverting follow-up r233064.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233105 91177308-0d34-0410-b5e6-96231b3b80d8
IRCE requires the induction variables it handles to not sign-overflow.
The current scheme of checking if sext({X,+,S}) == {sext(X),+,sext(S)}
fails when SCEV simplifies sext(X) too. After this change we //also//
check no-signed-wrap by looking at the flags set on the SCEVAddRecExpr.
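A minimal sketch of the additional check, assuming the usual SCEV APIs (illustrative, not the patch's exact code):

  #include "llvm/Analysis/ScalarEvolution.h"
  #include "llvm/Analysis/ScalarEvolutionExpressions.h"
  using namespace llvm;

  static bool hasNoSignedWrapFlag(const SCEVAddRecExpr *AR) {
    // Accept the add recurrence if SCEV already recorded <nsw> on it,
    // independent of whether the sext-based comparison succeeds.
    return AR->getNoWrapFlags(SCEV::FlagNSW) == SCEV::FlagNSW;
  }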
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233102 91177308-0d34-0410-b5e6-96231b3b80d8
IRCE should not try to eliminate range checks that check an induction
variable against a loop-varying length.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233101 91177308-0d34-0410-b5e6-96231b3b80d8
It is possible to have code that converts from integer to float, performs operations, then converts back, and the result is provably the same as if integers were used.
This can come from different sources, but the most obvious is a helper function that uses floats but whose arguments at an inlined callsite are integers.
This pass considers all integers requiring a bitwidth less than or equal to the bitwidth of the mantissa of a floating point type (23 for floats, 52 for doubles) as exactly representable in floating point.
To reduce the risk of harming efficient code, the pass only attempts to perform complete removal of inttofp/fptoint operations, not just move them around.
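A hypothetical example of the kind of code this targets (not from the patch):

  int bump(short x) {
    float f = (float)x;  // sitofp: every 16-bit value is exactly representable
    f = f + 1.0f;        // exact: the result still needs at most 17 bits
    return (int)f;       // fptosi: provably equal to (int)x + 1
  }

Here the pass can rewrite the whole sequence as integer arithmetic because no intermediate value needs more bits than the float mantissa provides.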
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233062 91177308-0d34-0410-b5e6-96231b3b80d8
Don't use `DebugLoc` accessors if we're pointing at null, which will be
a problem after a WIP patch to make the `DIDescriptor` accessors more
strict. Caught by Frontend/profile-sample-use-loc-tracking.c (in
clang).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232792 91177308-0d34-0410-b5e6-96231b3b80d8
Remove `DebugInfoVerifierLegacyPass` and the `-verify-di` pass.
Instead, call into the `DebugInfoVerifier` from inside
`VerifierLegacyPass::finalizeModule()`. This better matches the logic
in `verifyModule()` (used by the new PassManager), avoids requiring two
separate passes to verify the IR, and makes the API for "add a pass to
verify the IR" simple.
Note: the `-verify-debug-info` flag still works (for now, at least;
eventually it might make sense to just remove it).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232772 91177308-0d34-0410-b5e6-96231b3b80d8
Benign warning (clang deliberately suppresses this case) but does
regularly produce bad formatting, so it's nice to fix/reformat.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232508 91177308-0d34-0410-b5e6-96231b3b80d8
This change to IRCE gets it to recognize "half" range checks. Half
range checks are range checks that only either check if the index is
`slt` some positive integer ("length") or if the index is `sge` `0`.
The range solver does not try to be clever / aggressive about solving
half-range checks -- it transforms "I < L" to "0 <= I < L" and "0 <= I"
to "0 <= I < INT_SMAX". This is safe, but not always optimal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232444 91177308-0d34-0410-b5e6-96231b3b80d8
I'm just going to migrate these in a pretty ad-hoc & incremental way -
providing the backwards compatible API for now, then locally removing
it, fixing a few callers, adding it back in and committing those callers.
Rinse, repeat.
The assertions should ensure that if I get this wrong we'll find out
about it and not just have one giant patch to revert, recommit, revert,
recommit, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232240 91177308-0d34-0410-b5e6-96231b3b80d8
This reapplies the patch previously committed at revision 232190. This was
reverted at revision 232196 as it caused test failures in tests that did not
expect operands to be commuted. I have made the tests more resilient to
reassociation in revision 232206.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232209 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts revision 232190 due to buildbot failure reported on clang-hexagon-elf
for test arm64_vtst.c. To be investigated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232196 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds initial support for vector instructions to the reassociation
pass. It enables most parts of the pass to work with vectors but to keep the
size of the patch small, optimization of Xor trees, canonicalization of
negative constants and converting shifts to muls, etc., have been left out.
This will be handled in later patches.
The patch is based on an initial patch by Chad Rosier.
Differential Revision: http://reviews.llvm.org/D7566
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232190 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Now that the DataLayout is a mandatory part of the module, let's start
cleaning the codebase. This patch is a first attempt at doing that.
This patch is not exactly NFC as for instance some places were passing
a nullptr instead of the DataLayout, possibly just because there was a
default value on the DataLayout argument to many functions in the API.
Even though it is not purely NFC, there is no change in the
validation.
I turned as many pointers to DataLayout into references as I could; this
helped figure out all the places where a nullptr could come up.
I initially had a local version of this patch broken into over 30
independent commits, but some later commits were cleaning the API and
touching parts of the code modified in the previous commits, so it
seemed cleaner without the intermediate state.
Test Plan:
Reviewers: echristo
Subscribers: llvm-commits
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231740 91177308-0d34-0410-b5e6-96231b3b80d8
Runtime unrolling is an expensive optimization which can bring benefit
only if the loop is hot and its iteration count is large enough.
For some loops, we know they are not worth runtime unrolling.
The scalar loop left behind by vectorization is one such case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231631 91177308-0d34-0410-b5e6-96231b3b80d8
Specifically this:
* Prevents an "unused" warning in non-assert builds.
* In that error case, return without removing a child loop instead of
looping forever.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231459 91177308-0d34-0410-b5e6-96231b3b80d8
This pass interchanges loops to provide a more cache-friendly memory access.
For example, given a loop like:
  for(int i=0;i<N;i++)
    for(int j=0;j<N;j++)
      A[j][i] = A[j][i]+B[j][i];
it is interchanged to:
  for(int j=0;j<N;j++)
    for(int i=0;i<N;i++)
      A[j][i] = A[j][i]+B[j][i];
This pass is currently disabled by default.
To give a brief introduction, it consists of 3 stages:
LoopInterchangeLegality : Checks the legality of loop interchange based on the dependency matrix.
LoopInterchangeProfitability: A very basic heuristic has been added to check for profitability. This will evolve over time.
LoopInterchangeTransform : Does the actual transform.
LNT performance tests show improvement in the Polybench/linear-algebra/kernels/mvt and Polybench/linear-algebra/kernels/gemver benchmarks.
TODO:
1) Add support for reductions and lcssa phi.
2) Improve profitability model.
3) Improve the loop selection algorithm to select the best loop for interchange. Currently the innermost loop is selected for interchange.
4) Address the compile-time regression found in LLVM LNT due to this pass.
5) Fix issues in Dependency Analysis module.
A special thanks to Hal for reviewing this code.
Review: http://reviews.llvm.org/D7499
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231458 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
DataLayout keeps the string used for its creation.
As a side effect it is no longer needed in the Module.
This is "almost" NFC, the string is no longer
canonicalized, you can't rely on two "equals" DataLayout
having the same string returned by getStringRepresentation().
Get rid of DataLayoutPass: the DataLayout is in the Module
The DataLayout is "per-module", let's enforce this by not
duplicating it more than necessary.
One more step toward non-optionality of the DataLayout in the
module.
Make DataLayout Non-Optional in the Module
Module->getDataLayout() will never return nullptr anymore.
Reviewers: echristo
Subscribers: resistor, llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D7992
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231270 91177308-0d34-0410-b5e6-96231b3b80d8
RewriteStatepointsForGC pass emits an alloca for each GC pointer which will be relocated. It then inserts stores after the def and all relocations, and inserts loads before each use as well. In the end, mem2reg is used to update the IR with relocations in SSA form.
However, there is a problem with inserting stores for values defined by invoke instructions. The code didn't expect a def to be a terminator instruction, and inserting instructions after these terminators resulted in malformed IR.
This patch fixes this problem by handling invoke instructions as a special case. If the def is an invoke instruction, the store will be inserted at the beginning of the normal destination block. Since the return value of an invoke instruction does not dominate the unwind destination block, no action is needed there.
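A minimal sketch of the insertion-point choice (illustrative, not the patch's actual code):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static Instruction *storeInsertionPoint(Instruction *Def) {
    if (auto *II = dyn_cast<InvokeInst>(Def))
      // An invoke terminates its block and its result only dominates the
      // normal destination, so insert at the start of that block instead.
      return &*II->getNormalDest()->getFirstInsertionPt();
    return Def->getNextNode(); // ordinary defs: insert right after the def
  }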
Patch by: Chen Li
Differential Revision: http://reviews.llvm.org/D7923
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231183 91177308-0d34-0410-b5e6-96231b3b80d8
The assertion was just checking a class invariant that's pretty easy to
verify by inspection (no mutating operations, and the two non-copy ctors
already ensure the state is maintained) so remove the explicit copy ctor
in favor of the default, thus allowing the use of the default copy
assignment operator without hitting the C++11 deprecation here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231143 91177308-0d34-0410-b5e6-96231b3b80d8
Accidentally committed a few more of these cleanup changes than
intended. Still breaking these out & tidying them up.
This reverts commit r231135.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231136 91177308-0d34-0410-b5e6-96231b3b80d8
There doesn't seem to be any need to assert that iterator assignment is
between iterators over the same node - if you want to reuse an iterator
variable to iterate another node, that's perfectly acceptable. Just
don't mix comparisons between iterators into disjoint sequences, as
usual.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231135 91177308-0d34-0410-b5e6-96231b3b80d8
There's really no reason to have them have entries in the symbol table
anymore. Old versions of ld64 had some bugs in this area but those have
been fixed long ago.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231041 91177308-0d34-0410-b5e6-96231b3b80d8
This re-lands change r230921. r230921 was reverted because it broke a
clang test; a checkin fixing the clang test will be committed shortly.
Summary:
As far as I can tell, the real bug causing the issue was fixed in
r230533. SCEVExpander should mark an increment operation as nuw or nsw
only if it can *prove* that the operation does not overflow. There
shouldn't be any situation where we have to do something different
because of no-wrap flags generated by SCEVExpander.
Revert "IndVarSimplify: Allow LFTR to fire more often"
This reverts commit 1ade0f0faa (SVN: 222213).
Revert "IndVarSimplify: Don't let LFTR compare against a poison value"
This reverts commit c0f2b8b528 (SVN: 217102).
Reviewers: majnemer, atrick, spatel
Differential Revision: http://reviews.llvm.org/D7979
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231018 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
As far as I can tell, the real bug causing the issue was fixed in
r230533. SCEVExpander should mark an increment operation as nuw or nsw
only if it can *prove* that the operation does not overflow. There
shouldn't be any situation where we have to do something different
because of no-wrap flags generated by SCEVExpander.
Revert "IndVarSimplify: Allow LFTR to fire more often"
This reverts commit 1ade0f0faa (SVN: 222213).
Revert "IndVarSimplify: Don't let LFTR compare against a poison value"
This reverts commit c0f2b8b528 (SVN: 217102).
Reviewers: majnemer, atrick, spatel
Differential Revision: http://reviews.llvm.org/D7979
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230921 91177308-0d34-0410-b5e6-96231b3b80d8
Leaving empty blocks around just opens up a can of bugs like PR22704. Deleting
them early also slightly simplifies code.
Thanks to Sanjay for the IR test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230856 91177308-0d34-0410-b5e6-96231b3b80d8
All of the cases were just appending from random access iterators to a
vector. Using insert/append can grow the vector to the perfect size
directly and moves the growing out of the loop. No intended functionality
change.
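The pattern, roughly (illustrative types; not lifted from the patch):

  #include "llvm/ADT/SmallVector.h"
  #include <vector>

  void copyAll(const std::vector<int> &Src, llvm::SmallVector<int, 16> &Dst) {
    // before: for (int V : Src) Dst.push_back(V);  // grows repeatedly
    Dst.append(Src.begin(), Src.end());             // grows once, then copies
  }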
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230845 91177308-0d34-0410-b5e6-96231b3b80d8
It turns out the naming of inserted phis and selects is sensitive to the order in which two sets are iterated. We need to nail this down to avoid non-deterministic output and possible test failures.
The modified test is the one I first noticed something odd in. The change makes it stricter about reporting the error. With the test change, but without the code change, the test fails roughly 1 in 5. With the code change, I've run ~30 runs without error.
Long term, the right fix here is to adjust the naming scheme. I'm checking in this hack to avoid any possible non-determinism in the tests over the weekend. Just because I only noticed one case doesn't mean it's actually the only case. I hope to get to the right change Monday.
std->llvm data structure changes bugfix change #3
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230835 91177308-0d34-0410-b5e6-96231b3b80d8
Inserting into a DenseMap you're iterating over is not well defined. This is unfortunate since this is well defined on a std::map.
"cleanup per llvm code style standards" bug #2
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230827 91177308-0d34-0410-b5e6-96231b3b80d8
These tests cover the 'base object' identification and rewriting portion of RewriteStatepointsForGC. These aren't completely exhaustive, but they've proven to be reasonably effective over time at finding regressions.
In the process of porting these tests over, I found my first "cleanup per llvm code style standards" bug. We were relying on the order of iteration when testing the base pointers found for a derived pointer. When we switched from std::set to DenseSet, this stopped being a safe assumption. I'm suspecting I'm going to find more of those. In particular, I'm now really wondering about the main iteration loop for this algorithm. I need to go take a closer look at the assumptions there.
I'm not really happy with the fact these are testing what is essentially debug output (i.e. enabled via command line flags). Suggestions for how to structure this better are very welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230818 91177308-0d34-0410-b5e6-96231b3b80d8
IRCE can now split the iteration space for loops like:
for (i = n; i >= 0; i--)
a[i + k] = 42; // bounds check on access
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230618 91177308-0d34-0410-b5e6-96231b3b80d8
Use the IRBuilder helpers for gc.statepoint and gc.result, instead of
coding the construction by hand. Note that the gc.statepoint IRBuilder
handles only CallInst, not InvokeInst; retain that part of hand-coding.
Differential Revision: http://reviews.llvm.org/D7518
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230591 91177308-0d34-0410-b5e6-96231b3b80d8
This is a follow-on to r227491 which tightens the check for propagating FP
values. If a non-constant value happens to be a zero, we would hit the same
bug as before.
Bug noted and patch suggested by Eli Friedman.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230564 91177308-0d34-0410-b5e6-96231b3b80d8
This refactors the core functionality of LICM: HoistRegion, SinkRegion and
PromoteAliasSet (renamed to promoteLoopAccessesToScalars) as utility functions
in LoopUtils. This will enable other transformations to make use of them
directly.
Patch by Ashutosh Nema.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230178 91177308-0d34-0410-b5e6-96231b3b80d8
work with a non-canonical induction variable.
This is currently a non-functional change because we only ever call
computeSafeIterationSpace on a canonical induction variable; but the
generalization will be useful in a later commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230151 91177308-0d34-0410-b5e6-96231b3b80d8
calculations. Semantically non-functional change.
This gets rid of some of the SCEV -> Value -> SCEV round tripping and
the Construct(SMin|SMax)Of and MaybeSimplify helper routines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230150 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, this pass ran over every function in the Module if added to the pass order. With this change, it runs only over those with a GC attribute where the GC explicitly opts in. A GC can also choose which of entry safepoint polls, backedge safepoint polls, and call safepoints it wants. I hope to get these exposed as checks on the GCStrategy at some point, but for now, the checks are manual string comparisons.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230097 91177308-0d34-0410-b5e6-96231b3b80d8
These are internal options. I need to go through, evaluate which are worth keeping and which are not. Many of them should probably be renamed as well. Until I have time to do that, we can at least stop polluting the standard opt -help output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230088 91177308-0d34-0410-b5e6-96231b3b80d8
This should be the last cleanup on non-llvm preferred data structures. I left one use of std::set in an assertion; DenseSet didn't seem to have a tombstone for CallSite defined. That might be worth fixing, but wasn't worth it for a debug only use.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230084 91177308-0d34-0410-b5e6-96231b3b80d8
I'd done the work of extracting the typedef in a previous commit, but didn't actually change it. Hopefully this will make any subtle changes easier to isolate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230081 91177308-0d34-0410-b5e6-96231b3b80d8
Use llvm_unreachable where appropriate, use SmallVector where easy to do so, introduce typedefs for planned type migrations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230068 91177308-0d34-0410-b5e6-96231b3b80d8
The notion of a range of inserted safepoint related code is no longer really applicable. This survived over from an earlier implementation. Just saving the inserted gc.statepoint and working from that is far clearer given the current code structure. Particularly when invokable statepoints get involved.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230063 91177308-0d34-0410-b5e6-96231b3b80d8
Yet another chapter in the endless story. While this looks like we leave
the loop in a non-canonical state, this replicates the logic in
LoopSimplify, so it doesn't diverge from the canonical form in any way.
PR21968
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230058 91177308-0d34-0410-b5e6-96231b3b80d8
When doing style cleanup, I noticed a minor bug in this code. If we have a pointer that we think is unused after a statepoint and thus doesn't need relocation, we store a null pointer into the alloca we're about to promote. This helps turn a mistake in liveness analysis into an easily debuggable crash. It turned out this code had never been updated to handle invoke statepoints.
There's no test for this. Without a bug in liveness, it appears impossible to make this trigger in a way which is visible in the resulting IR. We might store the null, but when promoting the alloca, there will be no uses and thus nothing to test against. Suggestions on how to test are very welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230047 91177308-0d34-0410-b5e6-96231b3b80d8
Starting to update variable naming and types to match LLVM style. This will be an incremental process to minimize the chance of breakage as I work. Step one, rename member variables to LLVM CamelCase and use llvm's ADT. Much more to come.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230042 91177308-0d34-0410-b5e6-96231b3b80d8
Before calling Function::getGC to test for enablement, we need to make sure there's actually a GC at all via Function::hasGC. Otherwise, we'd crash on functions without a GC. Thankfully, this only mattered if you manually scheduled the pass, but still, oops. :(
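The guarded check, as a minimal sketch (the specific GC name tested is illustrative):

  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/Function.h"
  using namespace llvm;

  static bool gcEnabledForFunction(const Function &F) {
    if (!F.hasGC())
      return false;  // getGC() must not be called without a GC attribute
    return StringRef(F.getGC()) == "statepoint-example";
  }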
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230040 91177308-0d34-0410-b5e6-96231b3b80d8
When back-merging the changes in 229945 I noticed that I forgot to mark the test cases with the appropriate GC. We want the rewriting to be off by default (even when manually added to the pass order), not on by default. To keep the current tests working, mark them as using the statepoint-example GC and whitelist that GC.
Longer term, we need a better selection mechanism here for both actual usage and testing. As I migrate more tests to the in tree version of this pass, I will probably need to update the enable/disable logic as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229954 91177308-0d34-0410-b5e6-96231b3b80d8
This patch consists of a single pass whose only purpose is to visit previously inserted gc.statepoints which do not have gc.relocates inserted yet, and insert them. This can be used either immediately after IR generation to perform 'early safepoint insertion' or late in the pass order to perform 'late insertion'.
This patch is setting the stage for work to continue in tree. In particular, there are known naming and style violations in the current patch. I'll try to get those resolved over the next week or so. As I touch each area to make style changes, I need to make sure we have adequate testing in place. As part of the cleanup, I will be cleaning up a collection of test cases we have out of tree and submitting them upstream. The tests included in this change are very basic and mostly to provide examples of usage.
The pass has several main subproblems it needs to address:
- First, it has to identify any live pointers. In the current code, the use of address spaces to distinguish pointers to GC managed objects is hard coded, but this will become parametrizable in the near future. Note that the current change doesn't actually contain a useful liveness analysis. It was separated into a followup change as the code wasn't ready to be shared. Instead, the current implementation just considers any dominating def of appropriate pointer type to be live.
- Second, it has to identify base pointers for each live pointer. This is a fairly straightforward data flow algorithm.
- Third, the information in the previous steps is used to actually introduce rewrites. Rather than trying to do this by hand, we simply re-purpose the code behind Mem2Reg to do this for us.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229945 91177308-0d34-0410-b5e6-96231b3b80d8
This is a function pass that runs the analysis on demand. The analysis
can be initiated by querying the loop access info via LAA::getInfo. It
either returns the cached info or runs the analysis.
Symbolic stride information continues to reside outside of this analysis
pass. We may move it inside later but it's not a priority for me right
now. The idea is that Loop Distribution won't support run-time stride
checking at least initially.
This means that when querying the analysis, symbolic stride information
can be provided optionally. Whether stride information is used can
invalidate the cache entry and rerun the analysis. Note that if the
loop does not have any symbolic stride, the entry should be preserved
across Loop Distribution and LV.
Since currently the only user of the pass is LV, I just check that the
symbolic stride information didn't change when using a cached result.
On the LV side, LoopVectorizationLegality requests the info object
corresponding to the loop from the analysis pass. A large chunk of the
diff is due to LAI becoming a pointer from a reference.
A test will be added as part of the -analyze patch.
Also tested that with AVX, we generate identical assembly output for the
testsuite (including the external testsuite) before and after.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229893 91177308-0d34-0410-b5e6-96231b3b80d8
r229622: "[LoopAccesses] Make VectorizerParams global"
r229623: "[LoopAccesses] Stash the report from the analysis rather than emitting it"
r229624: "[LoopAccesses] Cache the result of canVectorizeMemory"
r229626: "[LoopAccesses] Create the analysis pass"
r229628: "[LoopAccesses] Change debug messages from LV to LAA"
r229630: "[LoopAccesses] Add canAnalyzeLoop"
r229631: "[LoopAccesses] Add missing const to APIs in VectorizationReport"
r229632: "[LoopAccesses] Split out LoopAccessReport from VectorizerReport"
r229633: "[LoopAccesses] Add -analyze support"
r229634: "[LoopAccesses] Change LAA:getInfo to return a constant reference"
r229638: "Analysis: fix buildbots"
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229650 91177308-0d34-0410-b5e6-96231b3b80d8
This is a function pass that runs the analysis on demand. The analysis
can be initiated by querying the loop access info via LAA::getInfo. It
either returns the cached info or runs the analysis.
Symbolic stride information continues to reside outside of this analysis
pass. We may move it inside later but it's not a priority for me right
now. The idea is that Loop Distribution won't support run-time stride
checking at least initially.
This means that when querying the analysis, symbolic stride information
can be provided optionally. Whether stride information is used can
invalidate the cache entry and rerun the analysis. Note that if the
loop does not have any symbolic stride, the entry should be preserved
across Loop Distribution and LV.
Since currently the only user of the pass is LV, I just check that the
symbolic stride information didn't change when using a cached result.
On the LV side, LoopVectorizationLegality requests the info object
corresponding to the loop from the analysis pass. A large chunk of the
diff is due to LAI becoming a pointer from a reference.
A test will be added as part of the -analyze patch.
Also tested that with AVX, we generate identical assembly output for the
testsuite (including the external testsuite) before and after.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229626 91177308-0d34-0410-b5e6-96231b3b80d8
When visiting the initial list of "root" instructions (those which must always
be alive), for those that are integer-valued (such as invokes returning an
integer), we mark their bits as (initially) all dead (we might, obviously, find
uses of those bits later, but all bits are assumed dead until proven
otherwise). Don't do so, however, if we've already seen a use of those bits by
another root instruction (such as a store).
Fixes a miscompile of the sanitizer unit tests on x86_64.
Also, add a debug line for visiting the root instructions, and remove a debug
line which tried to print instructions being removed (printing dead
instructions is dangerous, and can sometimes crash).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229618 91177308-0d34-0410-b5e6-96231b3b80d8
BDCE is a bit-tracking dead code elimination pass. It is based on ADCE (the
"aggressive DCE" pass), with the added capability to track dead bits of integer
valued instructions and remove those instructions when all of the bits are
dead.
Currently, it does not actually do this all-bits-dead removal, but rather
replaces the instruction's uses with a constant zero, and lets instcombine (and
the later run of ADCE) do the rest. Because we essentially get a run of ADCE
"for free" while tracking the dead bits, we also do what ADCE does and removes
actually-dead instructions as well (this includes instructions newly trivially
dead because all bits were dead, but not all such instructions can be removed).
The motivation for this is a case like:
int __attribute__((const)) foo(int i);
int bar(int x) {
  x |= (4 & foo(5));
  x |= (8 & foo(3));
  x |= (16 & foo(2));
  x |= (32 & foo(1));
  x |= (64 & foo(0));
  x |= (128 & foo(4));
  return x >> 4;
}
As it turns out, if you order the bit-field insertions so that all of the dead
ones come last, then instcombine will remove them. However, if you pick some
other order (such as the one above), the fact that some of the calls to foo()
are useless is not locally obvious, and we don't remove them (without this
pass).
I did a quick compile-time overhead check using sqlite from the test suite
(Release+Asserts). BDCE took ~0.4% of the compilation time (making it about
twice as expensive as ADCE).
I've not looked at why yet, but we eliminate instructions due to having
all-dead bits in:
External/SPEC/CFP2006/447.dealII/447.dealII
External/SPEC/CINT2006/400.perlbench/400.perlbench
External/SPEC/CINT2006/403.gcc/403.gcc
MultiSource/Applications/ClamAV/clamscan
MultiSource/Benchmarks/7zip/7zip-benchmark
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229462 91177308-0d34-0410-b5e6-96231b3b80d8
To be consistent with what clang-format does, don't add extra indentation
inside an anonymous namespace. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229412 91177308-0d34-0410-b5e6-96231b3b80d8
We won't find a root with index zero in any loop that we are able to reroll.
However, we may find one in a non-rerollable loop, so bail gracefully instead
of failing hard.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229406 91177308-0d34-0410-b5e6-96231b3b80d8
If a PHI has no users, don't crash; bail gracefully. This shouldn't
happen often, but we can make no guarantees that previous passes didn't leave
dead code around.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229405 91177308-0d34-0410-b5e6-96231b3b80d8
Added test CodeGen/X86/constant-hoisting-optnone.ll to verify that
pass Constant Hoisting is not run on optnone functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229258 91177308-0d34-0410-b5e6-96231b3b80d8
Canonicalize access to function attributes to use the simpler API.
getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
=> getFnAttribute(Kind)
getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
=> hasFnAttribute(Kind)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229202 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM's include tree and the use of using declarations to hide the
'legacy' namespace for the old pass manager.
This undoes the primary modules-hostile change I made to keep
out-of-tree targets building. I sent an email inquiring about whether
this would be reasonable to do at this phase and people seemed fine with
it, so making it a reality. This should allow us to start bootstrapping
with modules to a certain extent along with making it easier to mix and
match headers in general.
The updates to any code for users of LLVM are very mechanical. Switch
from including "llvm/PassManager.h" to "llvm/IR/LegacyPassManager.h".
Qualify the types which now produce compile errors with "legacy::". The
most common ones are "PassManager", "PassManagerBase", and
"FunctionPassManager".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229094 91177308-0d34-0410-b5e6-96231b3b80d8
The issues with the new unroll analyzer are more fundamental than code
cleanup, algorithm, or data structure changes. I've sent an email to the
original commit thread with details and a proposal for how to redesign
things. I'm disabling this for now so that we don't spend time
debugging issues with it in its current state.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229064 91177308-0d34-0410-b5e6-96231b3b80d8
UnrollAnalyzer.
Now they share a single worklist and have less implicit state between
them. There was no real benefit to separating these two things out.
I'm going to subsequently refactor things to share even more code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229062 91177308-0d34-0410-b5e6-96231b3b80d8
contained in it each time we try to add it to the worklist, just check
this when pulling it off the worklist. That way we do it at most once
per instruction with the cost of the worklist set we would need to pay
anyways.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229060 91177308-0d34-0410-b5e6-96231b3b80d8
vector.
In addition to dramatically reducing the work required for contrived
example loops, this also happens to correct some serious latent bugs in the
cost computation. Previously, we might add an instruction onto the
worklist once for every load which it used and was simplified. Then we
would visit it many times and accumulate "savings" each time.
I mean, fortunately this couldn't matter for things like calls with 100s
of operands, but even for binary operators this code seems like it must
be double counting the savings.
I just noticed this by inspection and due to the runtime problems it can
introduce, I don't have any test cases for cases where the cost produced
by this routine is unacceptable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229059 91177308-0d34-0410-b5e6-96231b3b80d8
In the unroll analyzer, it is checking each user to see if that user
will become dead. However, it first checked if that user was missing
from the simplified values map, and then if was also missing from the
dead instructions set. We add everything from the simplified values map
to the dead instructions set, so the first step is completely subsumed
by the second. Moreover, the first step requires *inserting* something
into the simplified value map which isn't what we want at all.
This also replaces a dyn_cast with a cast as an instruction cannot be
used by a non-instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229057 91177308-0d34-0410-b5e6-96231b3b80d8
check.
Also hoist this into the enqueue process as it is faster even than
testing the worklist set; we should just directly filter these out, much
like we filter out constants and such.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229056 91177308-0d34-0410-b5e6-96231b3b80d8
We don't just want to handle duplicate operands within an instruction,
but also duplicates across operands of different instructions. I should
have gone straight to this, but I had convinced myself that it wasn't
going to be necessary briefly. I've come to my senses after chatting
more with Nick, and am now happier here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229054 91177308-0d34-0410-b5e6-96231b3b80d8
into the worklist. This avoids allocating lots of worklist memory for
them when there are large numbers of repeated operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229052 91177308-0d34-0410-b5e6-96231b3b80d8
reasonably quickly.
I don't have a reduced test case, but for a version of FFMPEG, this
makes the loop unroller actually finish (after over 15 minutes of
running, it hadn't terminated for me; no idea if it was a true infloop
or just exponential work).
The key thing here is to check the DeadInstructions set when pulling
things off the worklist. Without this, we would re-walk the user list of
already dead instructions again and again and again. Consider phi nodes
with many, many operands and other patterns.
The other important aspect of this is that because we would keep
re-visiting instructions that were already known dead, we kept adding
their cost savings to this! This would cause our cost savings to be
*insanely* inflated from this.
While I was here, I also rotated the operand walk out of the worklist
loop to make the code easier to read. There is still work to be done to
minimize worklist traffic because we don't de-duplicate operands. This
means we may add the same instruction onto the worklist 1000s of times
if it shows up in 1000s of operands to a PHI node for example.
Still, with this patch, the ffmpeg testcase I have finishes quickly and
I can't measure the runtime impact of the unroll analysis any more. I'll
probably try to do a few more cleanups to this code, but not sure how
much cleanup I can justify right now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229038 91177308-0d34-0410-b5e6-96231b3b80d8
readable.
The biggest thing that was causing me problems is recognizing the
references vs. pointers here. I also found that for maps naming the loop
variable as KeyValue helps make it obvious why you don't actually use it
directly. Finally, using 'auto' instead of 'User *' doesn't seem like
a good tradeoff. Much like with the other cases, I like to know it's
a pointer, and 'User' is just as long and tells the reader a lot more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229033 91177308-0d34-0410-b5e6-96231b3b80d8
hard to type and read for me, and is inconsistent with the other
abbreviation in the base class "Inst". For most of these (where they are
used widely) I prefer just spelling it out as Instruction. I've changed
two of the short-lived variables to use "Inst" to match the base class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229028 91177308-0d34-0410-b5e6-96231b3b80d8
This is much more efficient. In particular, the query with the user
instruction has to insert a false for every missing instruction into the
set. This is just a cleanup along the way to fixing the underlying
algorithm problems here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228994 91177308-0d34-0410-b5e6-96231b3b80d8
When we try to estimate number of potentially removed instructions in
loop unroller, we analyze first N iterations and then scale the
computed number by TripCount/N. We should bail out early if N is 0.
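Roughly (illustrative names, not the unroller's actual code):

  static unsigned estimateRemovedInsts(unsigned RemovedInAnalyzedIters,
                                       unsigned N, unsigned TripCount) {
    if (N == 0)
      return 0; // nothing was analyzed; also avoids dividing by zero
    return RemovedInAnalyzedIters * (TripCount / N);
  }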
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228988 91177308-0d34-0410-b5e6-96231b3b80d8
We can't solve the full subgraph isomorphism problem. But we can
allow obvious cases, where for example two instructions of different
types are out of order. Due to them having different types/opcodes,
there is no ambiguity.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228931 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When trying to canonicalize negative constants out of
multiplication expressions, we need to check that the
constant is not INT_MIN which cannot be negated.
Reviewers: mcrosier
Reviewed By: mcrosier
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7286
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228872 91177308-0d34-0410-b5e6-96231b3b80d8
A DAGRootSet models an induction variable being used in a rerollable
loop. For example:
x[i*3+0] = y1
x[i*3+1] = y2
x[i*3+2] = y3
Base instruction -> i*3
         +---+----+
        /    |     \
    ST[y1]  +1     +2    <-- Roots
             |      |
          ST[y2]  ST[y3]
There may be multiple DAGRootSets, for example:
x[i*2+0] = ... (1)
x[i*2+1] = ... (1)
x[i*2+4] = ... (2)
x[i*2+5] = ... (2)
x[(i+1234)*2+5678] = ... (3)
x[(i+1234)*2+5679] = ... (3)
This concept is similar to the "Scale" member used previously, but allows
multiple independent sets of roots based off the same induction variable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228821 91177308-0d34-0410-b5e6-96231b3b80d8
I realized that my early fix for this was overly complicated. Rather than scatter checks around in a bunch of places, just exit early when we visit the poll function itself.
Thinking about it a bit, the whole inlining mechanism used with gc.safepoint_poll could probably be cleaned up a bit. Originally, poll insertion was fused with gc relocation rewriting. It might be worth going back to see if we can simplify the chain of events now that these two are separated. As one thought, maybe it makes sense to rewrite calls inside the helper function before inlining it to the many callers. This would require us to visit the poll function before any other functions, though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228634 91177308-0d34-0410-b5e6-96231b3b80d8
for any padding introduced by SROA. In particular, do not emit debug info
for an alloca that represents only the padding introduced by a previous
iteration.
Fixes PR22495.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228632 91177308-0d34-0410-b5e6-96231b3b80d8
intermediate representation. This
- increases consistency by using the same granularity everywhere
- allows for pieces < 1 byte
- DW_OP_piece didn't actually allow storing an offset.
Part of PR22495.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228631 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
It's important that our users immediately know what gc.safepoint_poll
is. Also fix the style of the declaration of CreateGCStatepoint, in
preparation for another change that will wrap it.
Reviewers: reames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7517
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228626 91177308-0d34-0410-b5e6-96231b3b80d8
This is just adding really simple tests which should have been part of the original submission. When doing so, I discovered that I'd mistakenly removed required pieces when preparing the patch for upstream submission. I fixed two such bugs in this submission.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228610 91177308-0d34-0410-b5e6-96231b3b80d8
The only difference between deleteIfDeadInstruction and
RecursivelyDeleteTriviallyDeadInstructions is that the former also
manually invalidates SCEV. That's unnecessary because SCEV automatically
gets informed when an instruction is deleted via a ValueHandle. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228508 91177308-0d34-0410-b5e6-96231b3b80d8
If complete unrolling could help us to optimize away N% of instructions, we
might want to do this even if the final size would exceed the loop-unroll
threshold. However, we don't want to unroll huge loops, and we add an
AbsoluteThreshold to avoid that - this threshold will never be crossed,
even if we expect to optimize away 99% of the instructions after unrolling.
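A sketch of the resulting decision (names are illustrative, not the pass's):

  static bool allowCompleteUnroll(unsigned UnrolledSize, unsigned PercentRemoved,
                                  unsigned Threshold, unsigned MinPercentRemoved,
                                  unsigned AbsoluteThreshold) {
    if (UnrolledSize > AbsoluteThreshold)
      return false; // hard cap: never crossed, regardless of expected savings
    if (UnrolledSize <= Threshold)
      return true;  // fits under the normal threshold anyway
    return PercentRemoved >= MinPercentRemoved; // exceed it only if worth it
  }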
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228434 91177308-0d34-0410-b5e6-96231b3b80d8
It is a variation of SimplifyBinOp, but it takes into account
FastMathFlags.
It is needed in inliner and loop-unroller to accurately predict the
transformation's outcome (previously we dropped the flags and were too
conservative in some cases).
Example:
float foo(float *a, float b) {
  float r;
  if (a[1] * b)
    r = /* a lot of expensive computations */;
  else
    r = 1;
  return r;
}
float boo(float *a) {
  return foo(a, 0.0);
}
Without this patch, we don't inline 'foo' into 'boo'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228432 91177308-0d34-0410-b5e6-96231b3b80d8
Complete loop unrolling can make some loads constant, thus enabling a
lot of other optimizations. To catch such cases, we look for loads that
might become constants and estimate number of instructions that would be
simplified or become dead after substitution.
Example:
Suppose we have:
int a[] = {0, 1, 0};
v = 0;
for (i = 0; i < 3; i++)
  v += b[i]*a[i];
If we completely unroll the loop, we would get:
v = b[0]*a[0] + b[1]*a[1] + b[2]*a[2]
Which then will be simplified to:
v = b[0]* 0 + b[1]* 1 + b[2]* 0
And finally:
v = b[1]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228265 91177308-0d34-0410-b5e6-96231b3b80d8
We were previously doing a post-order traversal and operating on the
list in reverse; however, this would occasionally cause backedges for
loops to be visited before some of the other blocks in the loop.
We now use a reverse post-order traversal, which avoids this issue.
The reverse post-order traversal is not completely ideal, so we need
to manually fixup the list to ensure that inner loop backedges are
visited before outer loop backedges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228186 91177308-0d34-0410-b5e6-96231b3b80d8
This pass is responsible for figuring out where to place call safepoints and safepoint polls. It doesn't actually make the relocations explicit; that's the job of the RewriteStatepointsForGC pass (http://reviews.llvm.org/D6975).
Note that this code is not yet finalized. It's moving in-tree for incremental development, but further cleanup is needed and will happen over the next few days. It is not yet part of the standard pass order.
Planned changes in the near future:
- I plan on restructuring the statepoint rewrite to use the functions added to the IRBuilder a while back.
- In the current pass, the function "gc.safepoint_poll" is treated specially but is not an intrinsic. I plan to make identifying the poll function a property of the GCStrategy at some point in the near future.
- As follow on patches, I will be separating a collection of test cases we have out of tree and submitting them upstream.
- It's not explicit in the code, but these two patches are introducing a new state for a statepoint which looks a lot like a patchpoint. There's now a transient form which doesn't yet have the relocations explicitly represented, but does prevent reordering of memory operations. Once this is in, I need to actually make this explicit by reserving the 'unused' argument of the statepoint as a flag, updating the docs, and making the code explicitly check for such a thing. This wasn't really planned, but once I split the two passes - which was done for other reasons - the intermediate state fell out. Just reminds us once again that we need to merge statepoints and patchpoints at some point in the not that distant future.
Future directions planned:
- Identifying more cases where a backedge safepoint isn't required to ensure timely execution of a safepoint poll.
- Tweaking the insertion process to generate easier-to-optimize IR. (For example, investigating making SplitBackedge the default.)
- Adding opt-in flags for a GCStrategy to use this pass. Once done, add this pass to the actual pass ordering.
Differential Revision: http://reviews.llvm.org/D6981
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228090 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,
LLVM unrolls
#pragma unroll
for (int i = 0; i < 3; ++i) {
  sum += foo((b + i) * s);
}
into
sum += foo(b * s);
sum += foo((b + 1) * s);
sum += foo((b + 2) * s);
However, no optimizations yet reduce the internal redundancy of the three
expressions:
b * s
(b + 1) * s
(b + 2) * s
With SLSR, LLVM can optimize these three expressions into:
t1 = b * s
t2 = t1 + s
t3 = t2 + s
This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.
Test Plan: test/StraightLineStrengthReduce/slsr.ll
Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick
Reviewed By: jholewinski, atrick
Subscribers: karthikthecool, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D7310
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228016 91177308-0d34-0410-b5e6-96231b3b80d8
getTTI method used to get an actual TTI object.
No functionality changed. This just threads the argument and ensures
code like the inliner can correctly look up the callee's TTI rather than
using a fixed one.
The next change will use this to implement per-function subtarget usage
by TTI. The changes after that should eliminate the need for FTTI as that
will have become the default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227730 91177308-0d34-0410-b5e6-96231b3b80d8
This should be sufficient to replace the initial (minor) function pass
pipeline in Clang with the new pass manager. I'll probably add an (off
by default) flag to do that just to ensure we can get extra testing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227726 91177308-0d34-0410-b5e6-96231b3b80d8
I've added RUN lines both to the basic test for EarlyCSE and the
target-specific test, as this serves as a nice test that the TTI layer
in the new pass manager is in fact working well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227725 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The CUDA driver can unroll loops when jit-compiling PTX. To ensure that a
loop marked with llvm.loop.unroll.disable is not unrolled by the CUDA
driver, we need to emit .pragma "nounroll" at the header of that loop.
This patch also extracts getting unroll metadata from loop ID metadata
into a shared helper function.
Test Plan: test/CodeGen/NVPTX/nounroll.ll
Reviewers: eliben, meheff, jholewinski
Reviewed By: jholewinski
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D7041
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227703 91177308-0d34-0410-b5e6-96231b3b80d8
aggregate or scalar, the debug info needs to refer to the absolute offset
(relative to the entire variable) instead of storing the offset inside
the smaller aggregate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227702 91177308-0d34-0410-b5e6-96231b3b80d8
type erased interface and a single analysis pass rather than an
extremely complex analysis group.
The end result is that the TTI analysis can contain a type erased
implementation that supports the polymorphic TTI interface. We can build
one from a target-specific implementation or from a dummy one in the IR.
I've also factored all of the code into "mix-in"-able base classes,
including CRTP base classes to facilitate calling back up to the most
specialized form when delegating horizontally across the surface. These
aren't as clean as I would like and I'm planning to work on cleaning
some of this up, but I wanted to start by putting it into the right form.
There are a number of reasons for this change, and this particular
design. The first and foremost reason is that an analysis group is
complete overkill, and the chaining delegation strategy was so opaque,
confusing, and high overhead that TTI was suffering greatly for it.
Several of the TTI functions had not been implemented in all the places
they needed to be, because the chaining-based delegation provided no way to
check for this. A few other functions were implemented with incorrect
delegation. The message to me while working on this was very clear -- the
delegation and analysis group structure was too confusing to be useful here.
The other reason, of course, is that this is a *much* more natural fit for
the new pass manager. This will lay the groundwork for a type-erased
per-function info object that can look up the correct subtarget and even
cache it.
Yet another benefit is that this will significantly simplify the
interaction of the pass managers and the TargetMachine. See the future
work below.
The downside of this change is that it is very, very verbose. I'm going
to work to improve that, but it is somewhat of an implementation necessity
in C++ to do type erasure. =/ I discussed this design really extensively
with Eric and Hal prior to going down this path, and afterward showed
them the result. No one was really thrilled with it, but there doesn't
seem to be a substantially better alternative. Using a base class and
virtual method dispatch would make the code much shorter, but as
discussed in the update to the programmer's manual and elsewhere,
a polymorphic interface feels like the more principled approach even if
this is perhaps the least compelling example of it. ;]
Ultimately, there is still a lot more to be done here, but this was the
huge chunk that I couldn't really split things out of because this was
the interface change to TTI. I've tried to minimize all the other parts
of this. The follow up work should include at least:
1) Improving the TargetMachine interface by having it directly return
a TTI object. Because we have a non-pass object with value semantics
and an internal type erasure mechanism, we can narrow the interface
of the TargetMachine to *just* do what we need: build and return
a TTI object that we can then insert into the pass pipeline.
2) Make the TTI object be fully specialized for a particular function.
This will include splitting off a minimal form of it which is
sufficient for the inliner and the old pass manager.
3) Add a new pass manager analysis which produces TTI objects from the
target machine for each function. This may actually be done as part
of #2 in order to use the new analysis to implement #2.
4) Work on narrowing the API between TTI and the targets so that it is
easier to understand and less verbose to type erase.
5) Work on narrowing the API between TTI and its clients so that it is
easier to understand and less verbose to forward.
6) Try to improve the CRTP-based delegation. I feel like this code is
just a bit messy and exacerbating the complexity of implementing
the TTI in each target.
Many thanks to Eric and Hal for their help here. I ended up blocked on
this somewhat more abruptly than I expected, and so I appreciate getting
it sorted out very quickly.
Differential Revision: http://reviews.llvm.org/D7293
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227669 91177308-0d34-0410-b5e6-96231b3b80d8
The validation algorithm used an incremental approach, building each
iteration's data structures temporarily, validating them, then
adding them to a global set.
This does not scale well to having multiple sets of Root nodes, as the
set of instructions used in each iteration is the union over all
the root nodes. Therefore, refactor the logic to build a single, simple
container that the later logic refers to. This simplifies the control flow,
and makes it easier to later extend the construction of that container to
handle multiple root sets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227499 91177308-0d34-0410-b5e6-96231b3b80d8
reroll() was slightly monolithic and a pain to modify. Refactor
a bunch of its state from local variables to member variables
of a helper class, and do some trivial simplification while we're
there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227439 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Igor Laevsky <igor@azulsystems.com>
"Currently SplitBlockPredecessors generates incorrect code in case if basic block we are going to split has a landingpad. Also seems like it is fairly common case among it's users to conditionally call either SplitBlockPredecessors or SplitLandingPadPredecessors. Because of this I think it is reasonable to add this condition directly into SplitBlockPredecessors."
Differential Revision: http://reviews.llvm.org/D7157
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227390 91177308-0d34-0410-b5e6-96231b3b80d8
abomination.
For starters, this API is incredibly slow. In order to look up the name
of a pass it must take a memory fence to acquire a pointer to the
managed static pass registry, and then potentially acquire locks while
it consults this registry for information about what passes exist by
that name. This stops the world for every copy of LLVM in your process,
no matter how little it cared about the result.
To make this more joyful, you'll note that we are preserving many passes
which *do not exist* any more, or are not even analyses that one might
wish to have preserved. This means we do all the work only to say
"nope" with no error to the user.
String-based APIs are a *bad idea*. String-based APIs that cannot
produce any meaningful error are an even worse idea. =/
I have a patch that simply removes this API completely, but I'm hesitant
to commit it as I don't really want to perniciously break out-of-tree
users of the old pass manager. I'd rather they just have to migrate to
the new one at some point. If others disagree and would like me to kill
it with fire, just say the word. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227294 91177308-0d34-0410-b5e6-96231b3b80d8
Splitting a loop to make range checks redundant is profitable only if
the range check "never" fails. Make this fact a part of recognizing a
range check -- a branch is a range check only if it is expected to
pass (via branch_weights metadata).
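For illustration, a predicate of roughly this shape (the helper name and
the bias threshold are made up for the sketch) reads the branch_weights and
asks whether the check is overwhelmingly expected to pass:

#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
using namespace llvm;

// Return true if BI carries !prof branch_weights and its taken edge is far
// hotter than its not-taken edge.
static bool branchExpectedToPass(const BranchInst *BI) {
  MDNode *Prof = BI->getMetadata(LLVMContext::MD_prof);
  if (!Prof || Prof->getNumOperands() != 3)
    return false;
  auto *Tag = dyn_cast<MDString>(Prof->getOperand(0));
  if (!Tag || Tag->getString() != "branch_weights")
    return false;
  auto *TW = mdconst::dyn_extract<ConstantInt>(Prof->getOperand(1));
  auto *FW = mdconst::dyn_extract<ConstantInt>(Prof->getOperand(2));
  if (!TW || !FW)
    return false;
  // "Never fails" is approximated as a large bias toward the taken edge.
  return TW->getZExtValue() > (FW->getZExtValue() + 1) * 64;
}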
Differential Revision: http://reviews.llvm.org/D7192
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227249 91177308-0d34-0410-b5e6-96231b3b80d8
LoopRotate wanted to avoid live range interference by looking at the
uses of a Value in the loop latch and seeing if any lay outside of the
loop. We would wrongly perform this operation on Constants.
This fixes PR22337.
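A minimal sketch of the guard (the surrounding context is simplified and
the names are invented): uses of a Constant say nothing about live ranges
in this loop, so they must be skipped before walking use lists.

#include "llvm/ADT/ArrayRef.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Does any use of a latch value escape the loop? Constants are skipped:
// their use lists span the whole module and are irrelevant here.
static bool hasUseOutsideLoop(ArrayRef<Value *> LatchValues, const Loop *L) {
  for (Value *V : LatchValues) {
    if (isa<Constant>(V))
      continue;
    for (User *U : V->users())
      if (auto *I = dyn_cast<Instruction>(U))
        if (!L->contains(I->getParent()))
          return true;
  }
  return false;
}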
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227171 91177308-0d34-0410-b5e6-96231b3b80d8
object that manages a single run of this pass.
This was already essentially how it worked. Within the run function, it
would point members at *stack local* allocations that were only live for
a single run. Instead, it seems much cleaner to have a utility object
whose lifetime is clearly bounded by the run of the pass over the
function and can use member variables in a more direct way.
This also makes it easy to plumb the analyses used into it from the pass
and will make it re-usable with the new pass manager.
No functionality changed here; it's just a refactoring.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227162 91177308-0d34-0410-b5e6-96231b3b80d8
when refactoring for the new pass manager without introducing too many
formatting changes into meaningful diffs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227000 91177308-0d34-0410-b5e6-96231b3b80d8
This just lifts the logic into a static helper function, sinks the
legacy pass to be a trivial wrapper of that helper function, and adds
a trivial wrapper for the new PM as well. Not much to see here.
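Roughly, the pattern is the following (names are placeholders, and the new
pass manager wrapper is sketched with current conventions rather than the
exact interface of the day):

#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Pass.h"
using namespace llvm;

// All of the real work lives in one static helper...
static bool runImpl(Function &F) {
  // ...do the transformation here...
  return false; // whether the IR changed
}

// ...the legacy pass is a trivial wrapper...
struct LowerFooLegacyPass : FunctionPass {
  static char ID;
  LowerFooLegacyPass() : FunctionPass(ID) {}
  bool runOnFunction(Function &F) override { return runImpl(F); }
};
char LowerFooLegacyPass::ID = 0;

// ...and so is the new pass manager pass.
struct LowerFooPass : PassInfoMixin<LowerFooPass> {
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
    return runImpl(F) ? PreservedAnalyses::none() : PreservedAnalyses::all();
  }
};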
I switched a test case to run in both modes, but we have to strip the
dead prototypes separately as that pass isn't in the new pass manager
(yet).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226999 91177308-0d34-0410-b5e6-96231b3b80d8
changed the IR. This is particularly easy as we can just look for the
existence of any expect intrinsic at all to know whether we've changed
the IR.
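A hedged sketch of that check (the helper name is invented): if the module
declares any llvm.expect intrinsic at all, the pass will lower something,
so reporting a change is safe.

#include "llvm/IR/Function.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// True if any llvm.expect declaration exists in the module.
static bool moduleDeclaresExpect(const Module &M) {
  for (const Function &F : M)
    if (F.getIntrinsicID() == Intrinsic::expect)
      return true;
  return false;
}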
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226998 91177308-0d34-0410-b5e6-96231b3b80d8
for small switches, and avoid using a complex loop to set up the
weights.
We know what the baseline weights will be, so we can simply fill the
vector with that value and then clobber the one slot that is likely. This
seems much more direct than the previous code, which tested at every
iteration and started off by zeroing the vector.
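Concretely, the setup amounts to something like this (the function name and
the specific weight values are illustrative only):

#include "llvm/ADT/SmallVector.h"
#include <cstdint>
using namespace llvm;

// Give every successor the baseline "unlikely" weight up front, then
// clobber the single likely slot.
static SmallVector<uint32_t, 16> makeSwitchWeights(unsigned NumSuccessors,
                                                   unsigned LikelyIndex,
                                                   uint32_t Unlikely,
                                                   uint32_t Likely) {
  SmallVector<uint32_t, 16> Weights(NumSuccessors, Unlikely);
  Weights[LikelyIndex] = Likely;
  return Weights;
}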
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226995 91177308-0d34-0410-b5e6-96231b3b80d8
no members for them to use.
Also, make them accept references as there is no possibility of a null
pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226994 91177308-0d34-0410-b5e6-96231b3b80d8
It was already in the Scalar header and referenced extensively as being
in this library, the source file was just in the utils directory for
some reason. No actual functionality changed. I only noticed because it
didn't make sense to add a pass header to the utils headers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226991 91177308-0d34-0410-b5e6-96231b3b80d8
Use the struct instead of a std::pair<Value *, Value *>. This makes a
Range an obviously immutable object, and we can now assert that a
range is well-typed (Begin->getType() == End->getType()) on its
construction.
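A sketch of the idea (the member and accessor names here are guesses, not
necessarily the in-tree ones):

#include "llvm/IR/Value.h"
#include <cassert>
using namespace llvm;

// An immutable, well-typed pair of range bounds.
class Range {
  Value *Begin;
  Value *End;

public:
  Range(Value *Begin, Value *End) : Begin(Begin), End(End) {
    assert(Begin->getType() == End->getType() && "ill-typed range!");
  }
  Value *getBegin() const { return Begin; }
  Value *getEnd() const { return End; }
};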
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226804 91177308-0d34-0410-b5e6-96231b3b80d8
There are places where the inductive range check elimination pass
depends on two llvm::Values or llvm::SCEVs being of the same
llvm::Type when they do not need to be. This patch relaxes those
restrictions (by bailing out of the optimization if the types do not
match), and adds test cases to trigger those paths.
These issues were found by bootstrapping clang with IRCE running in
the -O3 pass ordering.
Differential Revision: http://reviews.llvm.org/D7082
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226793 91177308-0d34-0410-b5e6-96231b3b80d8
This reapplies r225379.
ChangeLog:
- The assertion that this commit previously ran into about the inability
to handle indirect variables has since been removed and the backend
can handle this now.
- Testcases were upgraded to the new MDLocation format.
- Instead of keeping a DebugDeclares map, we now use
llvm::FindAllocaDbgDeclare().
Original commit message follows.
Debug info: Teach SROA how to update debug info for fragmented variables.
This allows us to generate debug info for extremely advanced code such as
typedef struct { long int a; int b;} S;
int foo(S s) {
return s.b;
}
which at -O1 on x86_64 is codegen'd into
define i32 @foo(i64 %s.coerce0, i32 %s.coerce1) #0 {
ret i32 %s.coerce1, !dbg !24
}
with this patch we emit the following debug info for this
TAG_formal_parameter [3]
AT_location( 0x00000000
0x0000000000000000 - 0x0000000000000006: rdi, piece 0x00000008, rsi, piece 0x00000004
0x0000000000000006 - 0x0000000000000008: rdi, piece 0x00000008, rax, piece 0x00000004 )
AT_name( "s" )
AT_decl_file( "/Volumes/Data/llvm/_build.ninja.release/test.c" )
Thanks to chandlerc, dblaikie, and echristo for their feedback on all
previous iterations of this patch!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226598 91177308-0d34-0410-b5e6-96231b3b80d8