Partial fix for PR17459: wrong code at -O3 on x86_64-linux-gnu
(affecting trunk and 3.3)
When SCEV expands a recurrence outside of a loop, it attempts to scale
by the stride of the recurrence. Chained recurrences don't work that
way. We could compute binomial coefficients, but we would have to
guarantee that the chained AddRecs are in a perfectly reduced form.
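For reference, a chained AddRec {a0,+,a1,+,...,+,ak} evaluates at
iteration n to the sum over i of a_i * C(n, i). A minimal sketch of that
evaluation (illustrative only; SCEV's real evaluateAtIteration performs
this in modular arithmetic):

#include <cstdint>
#include <vector>

// Evaluate a chained AddRec {C[0],+,C[1],+,...} at iteration N as
// sum_i C[i] * binom(N, i). Assumes the coefficients are in fully
// reduced form and ignores the wrap-around handling real SCEV needs.
uint64_t evaluateAddRecAt(const std::vector<uint64_t> &C, uint64_t N) {
  uint64_t Result = 0;
  uint64_t Binom = 1; // binom(N, 0)
  for (uint64_t I = 0; I < C.size(); ++I) {
    Result += C[I] * Binom;
    // binom(N, I+1) = binom(N, I) * (N - I) / (I + 1); exact division.
    Binom = Binom * (N - I) / (I + 1);
  }
  return Result;
}

For {0,+,1,+,1} this yields n + n(n-1)/2, the usual closed form for a
recurrence whose stride itself grows by 1 each iteration.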
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193438 91177308-0d34-0410-b5e6-96231b3b80d8
Partial fix for PR17459: wrong code at -O3 on x86_64-linux-gnu
(affecting trunk and 3.3)
ScalarEvolutionNormalization was attempting to normalize by adding and
subtracting strides. Chained recurrences don't work that way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193437 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a memory leak found by valgrind.
Calling it from the base class destructor would not destroy the BasicCallGraph
bits.
FIXME: BasicCallGraph is the only thing that inherits from CallGraph. Can
we merge the two?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193412 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM optimizers may widen accesses to packed structures that overflow the structure itself, but should be in bounds up to the alignment of the object
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193317 91177308-0d34-0410-b5e6-96231b3b80d8
Major steps include:
1) introduce a not-addr-taken bit-field in GlobalVariable;
2) the GlobalOpt pass sets "not-address-taken" if it proves a global
variable doesn't have its address taken;
3) AA uses this info for disambiguation (a sketch follows below).
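A hedged sketch of step 3 (isNotAddressTaken() is a hypothetical
accessor for the new bit; the surrounding AA plumbing is omitted):

#include "llvm/IR/GlobalVariable.h"
using namespace llvm;

// If one underlying object is a global whose address is never taken,
// no unrelated pointer can refer to its storage.
static bool provablyNoAlias(const Value *UnderlyingA,
                            const Value *UnderlyingB) {
  if (const GlobalVariable *GV = dyn_cast<GlobalVariable>(UnderlyingA))
    if (GV->isNotAddressTaken() && UnderlyingB != GV) // hypothetical bit
      return true;
  return false;
}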
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193251 91177308-0d34-0410-b5e6-96231b3b80d8
We can have a struct type with a single field where the field does not
start at offset 0. In that case, we should correctly update the offset.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193137 91177308-0d34-0410-b5e6-96231b3b80d8
The previous test wasn't actually testing this, since it was missing
the datalayout line that changes the size of the second address space.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193102 91177308-0d34-0410-b5e6-96231b3b80d8
SCEV currently fails to compute loop counts for nonunit stride
loops. This comes up frequently. It prevents loop optimization and
forces vectorization to insert extra loop checks.
For example:
void foo(int n, int *x) {
  for (int i = 0; i < n; i += 3) {
    x[i] = i;
    x[i+1] = i+1;
    x[i+2] = i+2;
  }
}
We need to properly handle the case in which limit > INT_MAX-stride. In
the above case: n > INT_MAX-3. In this case the loop counter will step
beyond the limit and overflow at the same time. However, knowing that
signed integer overflow is undefined, we can assume the loop test
behavior is arbitrary after overflow. This obeys both the C undefined
behavior rules and the stricter LLVM poison value rules.
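Concretely, the backedge-taken count we can now compute looks like this
(a sketch, not the SCEV code; it assumes a positive stride and relies on
the no-overflow reasoning above):

// Iterations executed by "for (int i = 0; i < n; i += s)" with s > 0,
// assuming signed overflow is undefined so no pre-check on n is needed.
long tripCount(long n, long s) {
  if (n <= 0)
    return 0;               // the loop body never runs
  return (n - 1) / s + 1;   // ceil(n / s) for n, s > 0
}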
I'm finally fixing this in response to Hal Finkel's persistence.
The most probable reason that we never optimized this before is that
we were being careful to handle the case where the developer expected a
side-effect free infinite loop relying on overflow:
for (int i = 0; i < n; i += s) {
  ++j;
}
return j;
If INT_MAX+1 is a multiple of s and n > INT_MAX-s, then we might
expect an infinite loop. However, there are plenty of ways to achieve
this effect without relying on undefined behavior of signed overflow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193015 91177308-0d34-0410-b5e6-96231b3b80d8
The heuristic was added to avoid spending too much compile time. A
specially crafted test case (PR17461, PR16474) with many uses of a
select or bitcast instruction can still trigger the slow case. Add a
check for that case. This only affects compile time; I don't have a
good way to test it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191896 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the very substantial, largely unmaintained legacy PGO
infrastructure.
This was essentially work toward PGO based on a design that had several
flaws, partially dating from a time when LLVM had a different
architecture, and with an effort to modernize it abandoned without being
completed. Since then, it has bitrotted for several years further. The
result is nearly unusable, and isn't helping any of the modern PGO
efforts. Instead, it is getting in the way, adding confusion about PGO
in LLVM and distracting everyone with maintenance on essentially dead
code. Removing it paves the way for modern efforts around PGO.
Among other effects, this removes the last of the runtime libraries from
LLVM. Those are being developed in the separate 'compiler-rt' project
now, with somewhat different licensing specifically more appropriate for
runtimes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191835 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the command line argument "struct-path-tbaa" since we should not
depend on a command line argument to decide which format the IR file is
using. Instead, we check the first operand of the tbaa tag node: if it
is an MDNode, we treat it as the struct-path aware TBAA format;
otherwise, we treat it as the scalar TBAA format.
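A minimal sketch of that dispatch (the in-tree check may differ in its
details):

#include "llvm/IR/Metadata.h"
using namespace llvm;

// Struct-path aware TBAA tags have another MDNode as operand 0, while
// scalar-format tags begin with an MDString naming the type.
static bool isStructPathTBAA(const MDNode *Tag) {
  return Tag->getNumOperands() > 0 && isa<MDNode>(Tag->getOperand(0));
}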
When clang starts to use the struct-path aware TBAA format
unconditionally, and we can auto-upgrade existing bc files, the support
for the scalar TBAA format can be dropped.
Existing testing cases are updated to use the struct-path aware TBAA format.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191538 91177308-0d34-0410-b5e6-96231b3b80d8
This code isn't ready to deal with allocation functions where the
return value is not the allocated pointer. The checks below will reject
posix_memalign anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191319 91177308-0d34-0410-b5e6-96231b3b80d8
This is safe per C++11 18.6.1.1p3: [operator new returns] a non-null pointer to
suitably aligned storage (3.7.4), or else throw a bad_alloc exception. This
requirement is binding on a replacement version of this function.
Brings us a tiny bit closer to eliminating more vector push_backs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191310 91177308-0d34-0410-b5e6-96231b3b80d8
Overflow doesn't affect the correctness of equalities. Computing this is cheap,
we just reuse the computation for the inbounds case and try to peel off more
non-inbounds GEPs. This pattern is unlikely to ever appear in code generated by
Clang, but SCEV occasionally produces it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191200 91177308-0d34-0410-b5e6-96231b3b80d8
If address space 0 was smaller than the address space
in a constant inttoptr/ptrtoint pair, the wrong mask size
would be used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190899 91177308-0d34-0410-b5e6-96231b3b80d8
Upcoming SLP vectorization improvements will want to be able to estimate costs
of horizontal reductions. Add infrastructure to support this.
We model reductions as a series of (shufflevector,add) tuples ultimately
followed by an extractelement. For example, for an add-reduction of <4 x float>
we could generate the following sequence:
 (v0, v1, v2, v3)
   \  \   /  /
    \  \ /  /
     +    +
 (v0+v2, v1+v3, undef, undef)
      \    /
       \  /
        +
 ((v0+v2) + (v1+v3), undef, undef, undef)
%rdx.shuf = shufflevector <4 x float> %rdx, <4 x float> undef,
                          <4 x i32> <i32 2, i32 3, i32 undef, i32 undef>
%bin.rdx = fadd <4 x float> %rdx, %rdx.shuf
%rdx.shuf7 = shufflevector <4 x float> %bin.rdx, <4 x float> undef,
                           <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
%bin.rdx8 = fadd <4 x float> %bin.rdx, %rdx.shuf7
%r = extractelement <4 x float> %bin.rdx8, i32 0
This commit adds a cost model interface "getReductionCost(Opcode, Ty, Pairwise)"
that will allow clients to ask for the cost of such a reduction (as backends
might generate more efficient code than the cost of the individual instructions
summed up). This interface is exercised by the CostModel analysis pass, which
looks for reduction patterns like the one above - starting at extractelements -
and if it sees a matching sequence will call the cost model interface.
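A hypothetical query against the new interface might look like this
(getReductionCost is the name given above; the wrapper and parameter
comments are illustrative):

#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// What does a non-pairwise fadd reduction over VecTy cost on this target?
static unsigned addReductionCost(const TargetTransformInfo &TTI,
                                 Type *VecTy) {
  return TTI.getReductionCost(Instruction::FAdd, VecTy,
                              /*Pairwise=*/false);
}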
We will also support a second form of pairwise reduction that is well supported
on common architectures (haddps, vpadd, faddp).
 (v0, v1, v2, v3)
  \  /    \  /
   +        +
 (v0+v1, v2+v3, undef, undef)
     \      /
      \    /
        +
 ((v0+v1)+(v2+v3), undef, undef, undef)
%rdx.shuf.0.0 = shufflevector <4 x float> %rdx, <4 x float> undef,
                              <4 x i32> <i32 0, i32 2, i32 undef, i32 undef>
%rdx.shuf.0.1 = shufflevector <4 x float> %rdx, <4 x float> undef,
                              <4 x i32> <i32 1, i32 3, i32 undef, i32 undef>
%bin.rdx.0 = fadd <4 x float> %rdx.shuf.0.0, %rdx.shuf.0.1
%rdx.shuf.1.0 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
                              <4 x i32> <i32 0, i32 undef, i32 undef, i32 undef>
%rdx.shuf.1.1 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
                              <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
%bin.rdx.1 = fadd <4 x float> %rdx.shuf.1.0, %rdx.shuf.1.1
%r = extractelement <4 x float> %bin.rdx.1, i32 0
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190876 91177308-0d34-0410-b5e6-96231b3b80d8
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
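A sketch of what a target might do with the hook (the field names here
are assumptions, not the actual interface):

// Assumed shape of two of the tunable knobs.
struct UnrollingPreferences {
  bool Partial; // allow partial unrolling of loops with static counts
  bool Runtime; // allow unrolling guarded by a runtime trip count
};

// What an in-order, deep-pipeline core such as the A2 might request:
// more aggressive unrolling than the generic defaults.
void getUnrollingPreferencesForA2(UnrollingPreferences &UP) {
  UP.Partial = true;
  UP.Runtime = true;
}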
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190542 91177308-0d34-0410-b5e6-96231b3b80d8
Have ThreadSanitizer use the common isTBAAVtableAccess helper instead
of having its own implementation.
The implementation of isTBAAVtableAccess is in TypeBasedAliasAnalysis.cpp
since it is related to the format of TBAA metadata.
The path for struct-path tbaa will be exercised by
test/Instrumentation/ThreadSanitizer/read_from_global.ll, vptr_read.ll, and
vptr_update.ll when struct-path tbaa is on by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190216 91177308-0d34-0410-b5e6-96231b3b80d8
Revert unintentional commit (of an unreviewed change).
Original commit message:
Add getUnrollingPreferences to TTI
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189566 91177308-0d34-0410-b5e6-96231b3b80d8
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189565 91177308-0d34-0410-b5e6-96231b3b80d8
...so that it can be used for z too. Most of the code is the same.
The only real change is to use TargetTransformInfo to test when a sqrt
instruction is available.
The pass is opt-in because at the moment it only handles sqrt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189097 91177308-0d34-0410-b5e6-96231b3b80d8
Also fix it calculating the wrong value. The struct index
is not a ConstantInt, so it was being interpreted as an array
index.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188713 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes SCEVExpander so that it does not create multiple distinct induction
variables for duplicate PHI entries. Specifically, given some code like this:
do.body6: ; preds = %do.body6, %do.body6, %if.then5
%end.0 = phi i8* [ undef, %if.then5 ], [ %incdec.ptr, %do.body6 ], [ %incdec.ptr, %do.body6 ]
...
Note that it is legal to have multiple entries for a basic block so long as the
associated value is the same. So the above input is okay, but expanding an
AddRec in this loop could produce code like this:
do.body6: ; preds = %do.body6, %do.body6, %if.then5
%indvar = phi i64 [ %indvar.next, %do.body6 ], [ %indvar.next1, %do.body6 ], [ 0, %if.then5 ]
%end.0 = phi i8* [ undef, %if.then5 ], [ %incdec.ptr, %do.body6 ], [ %incdec.ptr, %do.body6 ]
...
%indvar.next = add i64 %indvar, 1
%indvar.next1 = add i64 %indvar, 1
And this is not legal because there are two PHI entries for %do.body6 each with
a distinct value.
Unfortunately, I don't have an in-tree test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188614 91177308-0d34-0410-b5e6-96231b3b80d8
to find loops if the From and To instructions were in the same block.
Refactor the code a little now that we sometimes need to seed the
CFG-walking algorithm with more than one starting basic block.
Special thanks to Andrew Trick for catching an error in my understanding of
natural loops in code review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188236 91177308-0d34-0410-b5e6-96231b3b80d8
Inlining between functions with different values of sanitize_* attributes
leads to over- or under-sanitizing, which is always bad.
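A rough sketch of the compatibility rule (illustrative, not the actual
inliner code):

#include "llvm/IR/Function.h"
using namespace llvm;

// Treat caller and callee as inline-incompatible when any sanitize_*
// attribute differs between them.
static bool sanitizerAttrsMatch(const Function &Caller,
                                const Function &Callee) {
  const Attribute::AttrKind Kinds[] = {Attribute::SanitizeAddress,
                                       Attribute::SanitizeThread,
                                       Attribute::SanitizeMemory};
  for (Attribute::AttrKind K : Kinds)
    if (Caller.hasFnAttribute(K) != Callee.hasFnAttribute(K))
      return false;
  return true;
}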
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187967 91177308-0d34-0410-b5e6-96231b3b80d8
All libm floating-point rounding functions, except for round(), had their own
ISD nodes. Recent PowerPC cores have an instruction for round(), and so here I'm
adding ISD::FROUND so that round() can be custom lowered as well.
For the most part, this is straightforward. I've added an intrinsic
and a matching ISD node just like those for nearbyint() and friends. I've
named the SelectionDAG pattern frnd (because fround has already been
claimed by ISD::FP_ROUND).
This will be used by the PowerPC backend in a follow-up commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187926 91177308-0d34-0410-b5e6-96231b3b80d8
This fix is very lightweight. The same fix already existed for AddRec
but was missing for NAry expressions.
This is obviously an improvement, but I'm unsure how to test
compile-time problems.
Patch by Xiaoyi Guo!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187475 91177308-0d34-0410-b5e6-96231b3b80d8
Call into ComputeMaskedBits to figure out which bits are set on both add
operands and determine whether the value is power-of-two-or-zero.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187445 91177308-0d34-0410-b5e6-96231b3b80d8
Adds unit tests for it too.
Split BasicBlockUtils into an analysis half and a transforms half, and put the
analysis bits into a new Analysis/CFG.{h,cpp}. Promote isPotentiallyReachable
into llvm::isPotentiallyReachable and move it into Analysis/CFG.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187283 91177308-0d34-0410-b5e6-96231b3b80d8
Merge consecutive if-regions if they contain identical statements.
Both transformations reduce the number of branches. The transformation
is guarded by a target hook, and is currently enabled only for R600,
but the correctness has been tested on the X86 target using a variety of
CPU benchmarks.
Patch by: Mei Ye
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187278 91177308-0d34-0410-b5e6-96231b3b80d8
The great thing about the SCEVAddRec No-Wrap flag (unlike nsw/nuw) is
that it can be preserved while normalizing (reassociating and
factoring).
The bad thing is that it can't be transferred back to the IR, which is one
of the reasons I don't like the concept of SCEVExpander.
Sorry, I can't think of a direct way to test this, which is why these
were FIXMEs for so long. I just think it's a good time to finally
clean it up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186273 91177308-0d34-0410-b5e6-96231b3b80d8
Address calculation for gather/scatter in vectorized code can incur a
significant cost, making vectorization unprofitable. Add infrastructure
to model this cost.
Tests and cost model for targets will be in follow-up commits.
radar://14351991
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186187 91177308-0d34-0410-b5e6-96231b3b80d8
ScalarEvolution::getSignedRange uses ComputeNumSignBits from ValueTracking on
ashr instructions. ComputeNumSignBits can return zero, but this case was not
handled correctly by the code in getSignedRange which was calling:
APInt::getSignedMinValue(BitWidth).ashr(NS - 1)
with NS = 0, resulting in an assertion failure in APInt::ashr.
Now, we just return the conservative result (as with NS == 1).
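The guarded computation, roughly (simplified from getSignedRange, not
the exact code):

#include "llvm/ADT/APInt.h"
#include "llvm/Support/ConstantRange.h"
using namespace llvm;

// Tighten a conservative signed range using the known number of sign
// bits; NS <= 1 tells us nothing, and NS == 0 would assert in ashr.
static ConstantRange applySignBits(const ConstantRange &Conservative,
                                   unsigned NS, unsigned BitWidth) {
  if (NS <= 1)
    return Conservative;
  return Conservative.intersectWith(
      ConstantRange(APInt::getSignedMinValue(BitWidth).ashr(NS - 1),
                    APInt::getSignedMaxValue(BitWidth).ashr(NS - 1) + 1));
}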
Another bug found by llvm-stress.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185955 91177308-0d34-0410-b5e6-96231b3b80d8
(add nsw x, (and x, y)) isn't a power of two if x is zero, it's zero
(add nsw x, (xor x, y)) isn't a power of two if y has bits set that aren't set in x
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185954 91177308-0d34-0410-b5e6-96231b3b80d8
The symptom is a seg-fault, and the root cause is a SCEV that contains
a SCEVUnknown holding a null pointer to an llvm::Value.
This is how the problem takes place:
===================================
1) In the pristine input IR, there are two relevant instructions, Op1
and Op2. Op1's corresponding SCEV (denoted SCEV(Op1)) is a SCEVUnknown,
and SCEV(Op2) contains SCEV(Op1). Neither instruction is dead.
Op1 : V1 = ...
...
Op2 : V2 = ... // directly or indirectly (data-flow) depends on Op1
2) The optimizer (LSR in my case) generates an instruction holding the
equivalent value of Op1, making Op1 dead.
Op1': V1' = ...
Op1 : V1 = ... // now dead
Op2 : V2 = ... // now depends on Op1', but SCEV(Op2) still contains SCEV(Op1)
3) Op1 is deleted, and the call-back function is called to reset
SCEV(Op1) to indicate it is invalid. However, SCEV(Op2) is not
invalidated along with it.
4) A following pass gets the cached, invalid SCEV(Op2), tries to
manipulate it, and causes a segfault.
The fix:
========
It seems there is no clean yet inexpensive fix. I wrote to the dev-list
soliciting a good solution, but unfortunately got no reply. So, I
decided to fix this problem in a brute-force way:
When ScalarEvolution::getSCEV is called, check whether the cached SCEV
contains an invalid SCEVUnknown; if it does, remove the cached SCEV and
re-evaluate the SCEV from scratch.
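In shape, the fix looks something like this (the helper names are
hypothetical; only getSCEV and createSCEV correspond to real
ScalarEvolution methods):

// Brute-force invalidation check on every cache hit.
const SCEV *getSCEVWithValidation(Value *V) {
  const SCEV *S = lookupCachedSCEV(V);      // hypothetical cache lookup
  if (S && containsInvalidSCEVUnknown(S)) { // hypothetical walk over S
    eraseCachedSCEV(V);                     // hypothetical cache erase
    S = 0;
  }
  return S ? S : createSCEV(V);             // recompute from scratch
}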
I compiled a bunch of big *.c and *.cpp files and, fortunately, saw no
increase in compile time.
Misc:
=====
The reduced test-case has 2357 lines of code plus other stuff, too big to commit.
rdar://14283433
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185843 91177308-0d34-0410-b5e6-96231b3b80d8
Extend 'readonly' and 'readnone' to work on function arguments as well
as functions. Make the function attributes pass add them to known
library functions and when it can deduce them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185735 91177308-0d34-0410-b5e6-96231b3b80d8
The Builtin attribute is an attribute that can be placed on a function call site to signal that even though a function is declared as being a builtin,
rdar://problem/13727199
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185049 91177308-0d34-0410-b5e6-96231b3b80d8
This is a band-aid to fix the most severe regressions we're seeing from basing
spill decisions on block frequencies, until we have a better solution.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184835 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes rdar:14036816, PR16130.
There is an opportunity to compute precise trip counts for 'or'
expressions and multi-exit loops.
rdar:14038809: Optimize trip count computation for multi-exit loops.
To do this we need to record the fact that ExitLimit assumes NSW. When
it does not, we can safely assume that the loop trip count is the
minimum ExitLimit across all subexpressions and loop exits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183060 91177308-0d34-0410-b5e6-96231b3b80d8
Account for the cost of the scaling factor in Loop Strength Reduce when rating the
formulae. This uses a target hook.
The default implementation of the hook is: if the addressing mode is legal, the
scaling factor is free.
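Roughly, the default behaves like this (a sketch; the negative
not-legal return value is an assumption, not quoted from the patch):

// If the addressing mode (including its scale) is legal for the target,
// the scaling factor is folded into the address and costs nothing;
// otherwise report a cost so LSR penalizes the formula.
int getScalingFactorCost(const AddrMode &AM, Type *Ty) const {
  if (isLegalAddressingMode(AM, Ty))
    return 0; // free: folded into the addressing mode
  return -1;  // not legal as-is
}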
<rdar://problem/13806271>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183045 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes PR16130 - clang produces incorrect code with loop/expression at -O2.
This is a 2+ year old bug that's now holding up the release. It's a
case where we knowingly made aggressive assumptions about undefined
behavior. These assumptions are wrong when SCEV is computing a
subexpression that does not directly control the branch. With this
fix, we avoid making assumptions in those cases but still optimize the
common case. SCEV's trip count computation for exits controlled by
'or' expressions is now analogous to the trip count computation for
loops with multiple exits. I had already fixed the multiple exit case
to be conservative.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182989 91177308-0d34-0410-b5e6-96231b3b80d8
- llvm.loop.parallel metadata has been renamed to llvm.loop to be more
generic by making it the root of additional loop metadata.
- Loop::isAnnotatedParallel now looks for llvm.loop and associated
llvm.mem.parallel_loop_access
- document llvm.loop and update llvm.mem.parallel_loop_access
- add support for llvm.vectorizer.width and llvm.vectorizer.unroll
- document llvm.vectorizer.* metadata
- add utility class LoopVectorizerHints for getting/setting loop metadata
- use llvm.vectorizer.width=1 to indicate already vectorized instead of
already_vectorized
- update existing tests that used llvm.loop.parallel and
llvm.vectorizer.already_vectorized
Reviewed by: Nadav Rotem
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182802 91177308-0d34-0410-b5e6-96231b3b80d8
Other than recognizing the attribute, the patch does little else.
It changes the branch probability analyzer so that edges into
blocks postdominated by a cold function are given low weight.
Added analysis and code generation tests. Added documentation for the
new attribute.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182638 91177308-0d34-0410-b5e6-96231b3b80d8