I am turning off this optimization because an additional case has come up in
which a block can escape: specifically, when a block is used in a scope outside
of the scope in which it was created. This can cause a captured retainable
object pointer, whose lifetime is preserved by the objc_retainBlock call, to be
deallocated before the block is invoked.
An example of the code needed to trigger the bug is:
----
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    void (^somethingToDoLater)();

    {
        NSObject *obj = [NSObject new];
        somethingToDoLater = ^{
            [obj self]; // Crashes here
        };
    }

    NSLog(@"test.");
    somethingToDoLater();
    return 0;
}
----
In the next commit, I remove all of the dead code that results from this
change. Once the fixing commit is in, I will bring back the tests that I
deleted in this commit.
rdar://14802782.
rdar://14868830.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189869 91177308-0d34-0410-b5e6-96231b3b80d8
This patch changes the default setting of the LateVectorization flag, which controls where the loop vectorizer is run.
Perf gains:
SingleSource/Benchmarks/Shootout/matrix -37.33%
MultiSource/Benchmarks/PAQ8p/paq8p -22.83%
SingleSource/Benchmarks/Linpack/linpack-pc -16.22%
SingleSource/Benchmarks/Shootout-C++/ary3 -15.16%
MultiSource/Benchmarks/TSVC/NodeSplitting-flt/NodeSplitting-flt -10.34%
MultiSource/Benchmarks/TSVC/NodeSplitting-dbl/NodeSplitting-dbl -7.12%
Regressions:
SingleSource/Benchmarks/Misc/lowercase 15.10%
MultiSource/Benchmarks/TSVC/Equivalencing-flt/Equivalencing-flt 13.18%
SingleSource/Benchmarks/Shootout-C++/matrix 8.27%
SingleSource/Benchmarks/CoyoteBench/lpbench 7.30%
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189858 91177308-0d34-0410-b5e6-96231b3b80d8
1) If the width of a vectorization candidate list is greater than the vector register width, we break it down to fit the vector register.
2) We do not vectorize widths that are not a power of two.
The performance results show that this helps some SPEC benchmarks: mesa improved by 6.97% and ammp by 1.54%.
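To illustrate point 1), here is a minimal sketch (not the actual SLP implementation; all names are hypothetical) of splitting a candidate list into power-of-two chunks that fit the vector register:
----
#include <vector>

// VecRegBits: vector register width in bits (e.g. 128).
// ElemBits: width of one scalar element in bits.
std::vector<unsigned> chunkWidths(unsigned NumElems, unsigned VecRegBits,
                                  unsigned ElemBits) {
  unsigned MaxElems = VecRegBits / ElemBits; // widest chunk a register holds
  std::vector<unsigned> Chunks;
  while (NumElems >= 2) {
    // Largest power of two that fits both the register and the remaining
    // list; per point 2), non-power-of-two widths are not vectorized.
    unsigned W = 1;
    while (W * 2 <= MaxElems && W * 2 <= NumElems)
      W *= 2;
    Chunks.push_back(W); // vectorize this group
    NumElems -= W;
  }
  return Chunks; // a single leftover element stays scalar
}
----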
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189830 91177308-0d34-0410-b5e6-96231b3b80d8
The select condition's shadow was being ignored, resulting in false negatives.
This change ORs the sign-extended condition shadow into the result shadow.
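A minimal model of the fixed propagation (plain C++, not the actual MSan instrumentation; a set shadow bit means poisoned):
----
#include <cstdio>

// Shadow for r = c ? a : b. A poisoned condition must poison the whole
// result, so the sign-extended condition shadow is OR-ed in; dropping
// that OR is the bug this change fixes.
unsigned selectShadow(bool c, bool cShadow, unsigned aShadow,
                      unsigned bShadow) {
  unsigned opShadow = c ? aShadow : bShadow; // shadow of the chosen operand
  unsigned condExt = cShadow ? ~0u : 0u;     // sext i1 -> i32
  return opShadow | condExt;
}

int main() {
  printf("%08x\n", selectShadow(true, false, 0, 0xff)); // 00000000
  printf("%08x\n", selectShadow(true, true, 0, 0xff));  // ffffffff
}
----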
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189785 91177308-0d34-0410-b5e6-96231b3b80d8
The existing code missed some edge cases where, e.g., we are going to emit
sqrtf but only checked the availability of sqrt. This happens on odd platforms
like Windows.
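A minimal model of the fix (plain C++; names are hypothetical, not the LLVM API): check the availability of the exact variant we plan to emit:
----
#include <set>
#include <string>

// The buggy version checked "sqrt" unconditionally; on a platform
// without sqrtf that let a call to a missing symbol slip through.
bool canEmitSqrtCall(const std::set<std::string> &AvailableLibFuncs,
                     bool WantFloatVariant) {
  return AvailableLibFuncs.count(WantFloatVariant ? "sqrtf" : "sqrt") != 0;
}
----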
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189724 91177308-0d34-0410-b5e6-96231b3b80d8
PR17026. Also avoid undefined shifts and shift amounts larger than 64 bits
(those are always undef because we can't represent integer types that large).
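As a minimal illustration (a hypothetical helper, not the patched code), the guard looks like:
----
#include <cstdint>
#include <optional>

// A shift by >= the bit width is undefined, so refuse to compute it and
// let the caller fold the result to undef instead.
std::optional<uint64_t> safeShl(uint64_t V, uint64_t Amt) {
  if (Amt >= 64)
    return std::nullopt; // would be an undefined shift
  return V << Amt;
}
----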
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189672 91177308-0d34-0410-b5e6-96231b3b80d8
Revert unintentional commit (of an unreviewed change).
Original commit message:
Add getUnrollingPreferences to TTI
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189566 91177308-0d34-0410-b5e6-96231b3b80d8
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
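A minimal sketch of the hook's shape (field names and numbers are assumptions modeled on the TTI interface, not a verbatim copy):
----
// Defaults used by the generic loop unroller.
struct UnrollingPreferences {
  unsigned Threshold = 150; // max estimated size of an unrolled body
  bool Partial = false;     // allow partial unrolling
  bool Runtime = false;     // allow runtime-trip-count unrolling
};

// A target like the in-order, deeply pipelined PPC A2 can override the
// defaults to unroll more aggressively (numbers here are illustrative).
struct PPCA2TTI {
  void getUnrollingPreferences(UnrollingPreferences &UP) const {
    UP.Threshold = 400;
    UP.Partial = true;
    UP.Runtime = true;
  }
};
----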
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189565 91177308-0d34-0410-b5e6-96231b3b80d8
1. They are a kind of canonicalization.
2. The performance measurements show that it is better to keep them in.
There should be no functional change if you are not enabling the LateVectorization mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189539 91177308-0d34-0410-b5e6-96231b3b80d8
When unrolling is disabled in the pass manager, the loop vectorizer should also
not unroll loops. This will allow the -fno-unroll-loops option in Clang to
behave as expected (even for vectorizable loops). The loop vectorizer's
-force-vector-unroll option will (continue to) override the pass-manager
setting (including -force-vector-unroll=0 to force use of the internal
auto-selection logic).
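A minimal model of the resulting precedence (plain C++; names are hypothetical):
----
#include <optional>

// The vectorizer's own flag, when present, wins over the pass-manager
// setting; an explicit 0 requests the internal auto-selection logic.
unsigned chooseUnrollFactor(std::optional<unsigned> ForceVectorUnroll,
                            bool UnrollingDisabledInPM,
                            unsigned AutoSelected) {
  if (ForceVectorUnroll)
    return *ForceVectorUnroll == 0 ? AutoSelected : *ForceVectorUnroll;
  if (UnrollingDisabledInPM)
    return 1; // honor -fno-unroll-loops / -disable-loop-unrolling
  return AutoSelected;
}
----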
In order to test this, I added a flag to opt (-disable-loop-unrolling) to
force-disable unrolling through opt (the analog of -fno-unroll-loops in
Clang). Also,
this fixes a small bug in opt where the loop vectorizer was enabled only after
the pass manager populated the queue of passes (the global_alias.ll test needed
a slight update to the RUN line as a result of this fix).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189499 91177308-0d34-0410-b5e6-96231b3b80d8
This patch merges the InnerLoopVectorizer and InnerLoopUnroller code paths in LoopVectorize by adding checks for VF=1. This lets us erase the Unroller code, which was almost identical to the InnerLoopVectorizer code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189391 91177308-0d34-0410-b5e6-96231b3b80d8
The builder inserts from before the insert point,
not after, so this would insert before the last
instruction in the bundle instead of after it.
I'm not sure if this can actually be a problem
with any of the current insertions.
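For reference, IRBuilder emits new instructions before its insert point, so appending after instruction I means pointing the builder at I's successor (a sketch against the LLVM API; iterator details may differ by version):
----
#include "llvm/IR/IRBuilder.h"
#include <iterator>
using namespace llvm;

// Position B so that subsequently created instructions land *after* I,
// the last instruction in the bundle.
static void setInsertAfter(IRBuilder<> &B, Instruction *I) {
  B.SetInsertPoint(I->getParent(), std::next(I->getIterator()));
}
----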
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189285 91177308-0d34-0410-b5e6-96231b3b80d8
This patch enables unrolling of loops when vectorization is legal but not profitable.
We add a new class InnerLoopUnroller, that extends InnerLoopVectorizer and replaces some of the vector-specific logic with scalars.
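Conceptually (a simplified sketch, not the actual class), the unroller follows the vectorizer's structure but emits scalar copies:
----
#include <cstdio>

// UF copies of the loop body; VF == 1 degenerates to plain unrolling,
// which is what lets InnerLoopUnroller reuse InnerLoopVectorizer's logic.
void emitLoopBody(unsigned VF, unsigned UF) {
  for (unsigned Part = 0; Part < UF; ++Part) {
    if (VF == 1)
      printf("emit scalar copy %u of the body\n", Part);
    else
      printf("emit <%u x T> vector copy %u of the body\n", VF, Part);
  }
}
----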
This patch does not introduce any runtime regressions and improves the following workloads:
SingleSource/Benchmarks/Shootout/matrix -22.64%
SingleSource/Benchmarks/Shootout-C++/matrix -13.06%
External/SPEC/CINT2006/464_h264ref/464_h264ref -3.99%
SingleSource/Benchmarks/Adobe-C++/simple_types_constant_folding -1.95%
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189281 91177308-0d34-0410-b5e6-96231b3b80d8
The code was erroneously reading overflow area shadow from the TLS slot,
bypassing the local copy. Reading shadow directly from TLS is wrong, because
it can be overwritten by a nested vararg call, if that happens before va_start.
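A minimal model of the fix (plain C++, not the actual runtime): snapshot the TLS overflow-area shadow into a local copy at function entry and read only the copy:
----
#include <cstdint>
#include <cstring>

thread_local uint8_t VAArgOverflowShadowTLS[512]; // shared, clobberable

struct VAArgShadowSnapshot {
  uint8_t Local[512];
  VAArgShadowSnapshot() {
    // Taken at entry, before any nested vararg call can overwrite the
    // TLS slot; va_start then consults Local, never the TLS directly.
    std::memcpy(Local, VAArgOverflowShadowTLS, sizeof(Local));
  }
};
----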
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189104 91177308-0d34-0410-b5e6-96231b3b80d8
...so that it can be used for z too. Most of the code is the same.
The only real change is to use TargetTransformInfo to test when a sqrt
instruction is available.
The pass is opt-in because at the moment it only handles sqrt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189097 91177308-0d34-0410-b5e6-96231b3b80d8
The current version of StripDeadDebugInfo became stale and no longer actually
worked since it was expecting an older version of debug info.
This patch updates it to use DebugInfoFinder and the modern DebugInfo classes
as much as possible, making it more resilient to such changes. Additionally, in
the only place where that was avoided (the code where we replace the old sets
with the new), I call verify on the DIContextUnit, so that if the format
changes and my live-set changes no longer make sense, an assert will be hit. To
ensure that this occurs, I have included a test case.
The actual stripping of the dead debug info follows the same strategy as was
used before in this class: find the live set and replace the old set in the
given compile unit (which may contain dead global variables/functions) with the
new live one.
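The replace step amounts to a live-set filter; a trivial model (names are hypothetical):
----
#include <set>
#include <vector>

// Keep only the entries the finder marked live; the result replaces
// the compile unit's old (possibly dead) list.
template <typename T>
std::vector<T> filterLive(const std::vector<T> &Old, const std::set<T> &Live) {
  std::vector<T> New;
  for (const T &E : Old)
    if (Live.count(E))
      New.push_back(E);
  return New;
}
----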
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189078 91177308-0d34-0410-b5e6-96231b3b80d8
DFSan changes the ABI of each function in the module. This makes it possible
for a function with the native ABI to be called with the instrumented ABI,
or vice versa, thus possibly invoking undefined behavior. A simple way
of statically detecting instances of this problem is to prepend the prefix
"dfs$" to the name of each instrumented-ABI function.
This will not catch every such problem; in particular function pointers passed
across the instrumented-native barrier cannot be used on the other side.
These problems could potentially be caught dynamically.
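The mangling itself is trivial; a sketch mirroring the described scheme:
----
#include <string>

// Instrumented-ABI functions get the "dfs$" prefix, so a native-ABI
// caller that was not rewritten fails to link rather than silently
// invoking undefined behavior.
std::string instrumentedName(const std::string &Name) {
  return "dfs$" + Name;
}
----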
Differential Revision: http://llvm-reviews.chandlerc.com/D1373
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189052 91177308-0d34-0410-b5e6-96231b3b80d8
using GEPs. Previously, it used a number of different heuristics for
analyzing the GEPs. Several of these were conservatively correct, but
failed to fall back to SCEV even when SCEV might have given a reasonable
answer. One was simply incorrect in how it was formulated.
There was good code already to recursively evaluate the constant offsets in
GEPs, look through pointer casts, etc. In a previous commit, I gathered this
into a form that code like the SLP vectorizer can use, which allows all of
this code to become quite simple.
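The resulting check can be thought of as follows (a simplified model, not the actual code):
----
#include <cstdint>
#include <optional>

struct DecomposedPtr {
  const void *Base; // pointer after stripping casts and constant GEPs
  int64_t Offset;   // accumulated constant offset in bytes
};

// Accesses are consecutive iff the bases match and the offsets differ
// by exactly one element; mismatched bases fall back to SCEV.
std::optional<bool> isConsecutive(DecomposedPtr A, DecomposedPtr B,
                                  int64_t ElemSizeBytes) {
  if (A.Base != B.Base)
    return std::nullopt;
  return B.Offset - A.Offset == ElemSizeBytes;
}
----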
There is some performance (compile time) concern here at first glance, as
we're directly attempting to walk both pointers' constant GEP chains.
However, a couple of thoughts:
1) In the very common case where there is a dynamic pointer and a second
pointer at a constant offset (usually a stride) from it, this code will
not do any unnecessary work.
2) InstCombine and other passes work very hard to collapse constant
GEPs, so it will be rare that we iterate here for a long time.
That said, if there remain performance problems here, there are some
obvious things that can improve the situation immensely. Doing
a vectorizer-pass-wide memoizer for each individual layer of pointer
values, their base values, and the constant offset is likely to be able
to completely remove redundant work and strictly limit the scaling of
the work to scrape these GEPs. Since this optimization was not done on
the prior version (which would still benefit from it), I've not done it
here. But if folks have benchmarks that slow down, it should be
straightforward for them to add.
I've added a test case, but I'm not really confident of the amount of
testing done for different access patterns, strides, and pointer
manipulation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189007 91177308-0d34-0410-b5e6-96231b3b80d8
There are situations which can affect the correctness (or at least expectation)
of the gcov output. For instance, if a call to __gcov_flush() occurs within a
block before the execution count is registered and then the program aborts in
some way, then that block will not be marked as executed. This is not normally
what the user expects.
If we move the code that registers when a block is executed to the beginning
of the block, we can catch these types of situations.
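A minimal model of the reordering (names are hypothetical):
----
#include <cstdint>

uint64_t BlockCounter; // per-block execution count

void bodyThatMayFlushOrAbort() { /* may call __gcov_flush() or abort() */ }

void block() {
  // Register execution first, so a flush or abort inside the block
  // still counts this block as executed.
  ++BlockCounter;
  bodyThatMayFlushOrAbort();
}
----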
PR16893
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188849 91177308-0d34-0410-b5e6-96231b3b80d8
Update the iterator when the SLP vectorizer changes the instructions in the
basic block, by restarting the traversal of the basic block.
Patch by Yi Jiang!
Fixes PR16899.
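A toy illustration of the restart pattern (standing in for vectorization; not the actual code):
----
#include <iterator>
#include <vector>

// Collapse adjacent equal values. vector::erase invalidates iterators,
// so after any change we restart the scan, mirroring the fix.
void collapseAdjacent(std::vector<int> &BB) {
  bool Changed = true;
  while (Changed) {
    Changed = false;
    for (auto It = BB.begin(); It != BB.end(); ++It) {
      auto Next = std::next(It);
      if (Next != BB.end() && *Next == *It) {
        BB.erase(Next); // invalidates It and Next
        Changed = true;
        break;          // restart from the beginning
      }
    }
  }
}
----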
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188832 91177308-0d34-0410-b5e6-96231b3b80d8
This adds an llvm.copysign intrinsic; we already have LibFunc recognition for
copysign (which is turned into the FCOPYSIGN SDAG node). In order to
autovectorize calls to copysign in the loop vectorizer, we need a corresponding
intrinsic as well.
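For reference, the scalar semantics the intrinsic mirrors:
----
#include <cmath>
#include <cstdio>

int main() {
  // copysign(x, y): magnitude of x with the sign of y.
  printf("%f\n", std::copysign(3.0, -1.0)); // -3.000000
  printf("%f\n", std::copysign(-3.0, 1.0)); //  3.000000
}
----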
In addition to the expected changes to the language reference, the loop
vectorizer, BasicTTI, and the SDAG builder (the intrinsic is transformed into
an FCOPYSIGN node, just like the function call), this also adds FCOPYSIGN to a
few lists in LegalizeVector{Ops,Types} so that vector copysigns can be
expanded.
In TargetLoweringBase::initActions, I've made the default action for FCOPYSIGN
be Expand for vector types. This seems correct for all in-tree targets, and I
think is the right thing to do because, previously, there was no way to generate
vector-valued FCOPYSIGN nodes (and most targets don't specify an action for
vector-typed FCOPYSIGN).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188728 91177308-0d34-0410-b5e6-96231b3b80d8