The heuristic was added to avoid spending too much compile time. A specially
crafted test case (PR17461, PR16474) with many uses of a select or bitcast
instruction can still trigger the slow case. Add a check for that case.
This only affects compile time; I don't have a good way to test it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191896 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the old, unmaintained PGO profiling infrastructure.
This was essentially work toward PGO based on a design that had several
flaws, partially dating from a time when LLVM had a different
architecture, and with an effort to modernize it abandoned without being
completed. Since then, it has bitrotted for several years further. The
result is nearly unusable, and isn't helping any of the modern PGO
efforts. Instead, it is getting in the way, adding confusion about PGO
in LLVM and distracting everyone with maintenance on essentially dead
code. Removing it paves the way for modern efforts around PGO.
Among other effects, this removes the last of the runtime libraries from
LLVM. Those are being developed in the separate 'compiler-rt' project
now, with somewhat different licensing, specifically more appropriate for
runtimes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191835 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the command line argument "struct-path-tbaa", since we should not depend
on a command line argument to decide which format the IR file is using. Instead,
we check the first operand of the tbaa tag node: if it is an MDNode, we treat
it as the struct-path aware TBAA format; otherwise, we treat it as the scalar
TBAA format.
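A minimal sketch of that check, assuming the usual MDNode API (the helper name
is illustrative, not the exact in-tree code):

  // Struct-path aware tags have an MDNode (the base-type node) as their
  // first operand; scalar-format tags start with the type name string.
  static bool isStructPathTBAATag(const llvm::MDNode *Tag) {
    return Tag->getNumOperands() > 0 &&
           llvm::isa<llvm::MDNode>(Tag->getOperand(0));
  }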
Once clang always emits the struct-path aware TBAA format, no matter whether
struct-path-tbaa is on, and we can auto-upgrade existing bc files, the support
for the scalar TBAA format can be dropped.
Existing testing cases are updated to use the struct-path aware TBAA format.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191538 91177308-0d34-0410-b5e6-96231b3b80d8
This code isn't ready to deal with allocation functions where the return is not
the allocated pointer. The checks below will reject posix_memalign anyway.
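For context, posix_memalign is exactly such a function: its return value is an
error code, and the allocation comes back through an out-parameter.

  #include <stdlib.h>  // POSIX

  void *allocate_aligned(size_t Size) {
    void *P = nullptr;
    // Returns 0 on success (or an error code); the pointer is written to P.
    if (posix_memalign(&P, 64, Size) != 0)
      return nullptr;
    return P;
  }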
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191319 91177308-0d34-0410-b5e6-96231b3b80d8
This is safe per C++11 18.6.1.1p3: [operator new returns] a non-null pointer to
suitably aligned storage (3.7.4), or else throw a bad_alloc exception. This
requirement is binding on a replacement version of this function.
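To illustrate the guarantee: for a throwing operator new, a null check on the
result is dead code that the optimizer can now fold away.

  int *make_int() {
    int *P = new int(42);  // throws std::bad_alloc on failure
    if (P == nullptr)      // never true for the throwing form of operator new
      return nullptr;      // unreachable, so it can be optimized out
    return P;
  }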
Brings us a tiny bit closer to eliminating more vector push_backs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191310 91177308-0d34-0410-b5e6-96231b3b80d8
Overflow doesn't affect the correctness of equalities. Computing this is cheap,
we just reuse the computation for the inbounds case and try to peel off more
non-inbounds GEPs. This pattern is unlikely to ever appear in code generated by
Clang, but SCEV occasionally produces it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191200 91177308-0d34-0410-b5e6-96231b3b80d8
If address space 0 was smaller than the address space
in a constant inttoptr/ptrtoint pair, the wrong mask size
would be used.
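Roughly, the intent of the fix (a hypothetical sketch only, not the actual
patch): take the mask width from the pointer's own address space instead of
address space 0.

  // Fragment for illustration; DL is the module DataLayout, PtrTy the
  // pointer type in the inttoptr/ptrtoint pair, IntWidth the integer width.
  unsigned PtrBits = DL.getPointerSizeInBits(PtrTy->getAddressSpace());
  llvm::APInt Mask =
      llvm::APInt::getLowBitsSet(IntWidth, std::min(IntWidth, PtrBits));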
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190899 91177308-0d34-0410-b5e6-96231b3b80d8
Upcoming SLP vectorization improvements will want to be able to estimate costs
of horizontal reductions. Add infrastructure to support this.
We model reductions as a series of (shufflevector,add) tuples ultimately
followed by an extractelement. For example, for an add-reduction of <4 x float>
we could generate the following sequence:
(v0, v1, v2, v3)
  \   \  /  /
    \  \  /
      +  +
(v0+v2, v1+v3, undef, undef)
   \        /
((v0+v2) + (v1+v3), undef, undef)
%rdx.shuf = shufflevector <4 x float> %rdx, <4 x float> undef,
<4 x i32> <i32 2, i32 3, i32 undef, i32 undef>
%bin.rdx = fadd <4 x float> %rdx, %rdx.shuf
%rdx.shuf7 = shufflevector <4 x float> %bin.rdx, <4 x float> undef,
<4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
%bin.rdx8 = fadd <4 x float> %bin.rdx, %rdx.shuf7
%r = extractelement <4 x float> %bin.rdx8, i32 0
This commit adds a cost model interface "getReductionCost(Opcode, Ty, Pairwise)"
that will allow clients to ask for the cost of such a reduction (since backends
might generate more efficient code than the summed cost of the individual
instructions). This interface is exercised by the CostModel analysis pass, which
looks for reduction patterns like the one above - starting at extractelements -
and, if it sees a matching sequence, calls the cost model interface.
We will also support a second form of pairwise reduction that is well supported
on common architectures (haddps, vpadd, faddp).
(v0, v1, v2, v3)
 \   /   \   /
(v0+v1, v2+v3, undef, undef)
    \       /
((v0+v1)+(v2+v3), undef, undef, undef)
%rdx.shuf.0.0 = shufflevector <4 x float> %rdx, <4 x float> undef,
<4 x i32> <i32 0, i32 2 , i32 undef, i32 undef>
%rdx.shuf.0.1 = shufflevector <4 x float> %rdx, <4 x float> undef,
<4 x i32> <i32 1, i32 3, i32 undef, i32 undef>
%bin.rdx.0 = fadd <4 x float> %rdx.shuf.0.0, %rdx.shuf.0.1
%rdx.shuf.1.0 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
<4 x i32> <i32 0, i32 undef, i32 undef, i32 undef>
%rdx.shuf.1.1 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
<4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
%bin.rdx.1 = fadd <4 x float> %rdx.shuf.1.0, %rdx.shuf.1.1
%r = extractelement <4 x float> %bin.rdx.1, i32 0
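As a rough client-side sketch (the TTI handle, the LLVM context Ctx, and the
surrounding setup are assumed rather than part of this commit), asking for the
cost of the first, non-pairwise form might look like:

  // Cost of a <4 x float> fadd reduction, non-pairwise form (illustrative).
  llvm::Type *VecTy =
      llvm::VectorType::get(llvm::Type::getFloatTy(Ctx), 4);
  unsigned Cost = TTI->getReductionCost(llvm::Instruction::FAdd, VecTy,
                                        /*Pairwise=*/false);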
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190876 91177308-0d34-0410-b5e6-96231b3b80d8
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
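A hedged sketch of what such a target override might look like; the hook name,
getUnrollingPreferences, appears in the related commits below, while the target
class and field names here are purely illustrative.

  // Illustrative only: an in-order core with a deep pipeline asking for
  // more aggressive unrolling defaults.
  void MyTargetTTI::getUnrollingPreferences(llvm::Loop *L,
                                            UnrollingPreferences &UP) const {
    UP.Partial = true;  // allow partial unrolling for static trip counts
    UP.Runtime = true;  // allow runtime unrolling for dynamic trip counts
  }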
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190542 91177308-0d34-0410-b5e6-96231b3b80d8
Make ThreadSanitizer use isTBAAVtableAccess from TypeBasedAliasAnalysis
instead of having its own implementation.
The implementation of isTBAAVtableAccess is in TypeBasedAliasAnalysis.cpp
since it is related to the format of TBAA metadata.
The path for struct-path tbaa will be exercised by
test/Instrumentation/ThreadSanitizer/read_from_global.ll, vptr_read.ll, and
vptr_update.ll when struct-path tbaa is on by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190216 91177308-0d34-0410-b5e6-96231b3b80d8
Revert unintentional commit (of an unreviewed change).
Original commit message:
Add getUnrollingPreferences to TTI
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189566 91177308-0d34-0410-b5e6-96231b3b80d8
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189565 91177308-0d34-0410-b5e6-96231b3b80d8
...so that it can be used for z too. Most of the code is the same.
The only real change is to use TargetTransformInfo to test when a sqrt
instruction is available.
The pass is opt-in because at the moment it only handles sqrt.
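The TargetTransformInfo query mentioned above might look roughly like this (a
sketch; the hook name is assumed here rather than quoted from this message):

  // Only expand the call inline when the target has a fast sqrt
  // instruction for the call's operand type; otherwise keep the libcall.
  bool shouldExpandSqrtInline(const llvm::TargetTransformInfo *TTI,
                              llvm::CallInst *Call) {
    return TTI->haveFastSqrt(Call->getArgOperand(0)->getType());
  }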
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189097 91177308-0d34-0410-b5e6-96231b3b80d8
Also fix a case where it calculated the wrong value: the struct index
is not a ConstantInt, so it was being interpreted as an array
index.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188713 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes SCEVExpander so that it does not create multiple distinct induction
variables for duplicate PHI entries. Specifically, given some code like this:
do.body6: ; preds = %do.body6, %do.body6, %if.then5
%end.0 = phi i8* [ undef, %if.then5 ], [ %incdec.ptr, %do.body6 ], [ %incdec.ptr, %do.body6 ]
...
Note that it is legal to have multiple entries for a basic block so long as the
associated value is the same. So the above input is okay, but expanding an
AddRec in this loop could produce code like this:
do.body6: ; preds = %do.body6, %do.body6, %if.then5
%indvar = phi i64 [ %indvar.next, %do.body6 ], [ %indvar.next1, %do.body6 ], [ 0, %if.then5 ]
%end.0 = phi i8* [ undef, %if.then5 ], [ %incdec.ptr, %do.body6 ], [ %incdec.ptr, %do.body6 ]
...
%indvar.next = add i64 %indvar, 1
%indvar.next1 = add i64 %indvar, 1
And this is not legal because there are two PHI entries for %do.body6 each with
a distinct value.
Unfortunately, I don't have an in-tree test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188614 91177308-0d34-0410-b5e6-96231b3b80d8
Fix the reachability analysis so it still walks the CFG looking for loops when
the From and To instructions are in the same block.
Refactor the code a little now that we sometimes need to seed the CFG-walking
algorithm with more than one starting basic block.
Special thanks to Andrew Trick for catching an error in my understanding of
natural loops in code review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188236 91177308-0d34-0410-b5e6-96231b3b80d8
Inlining between functions with different values of sanitize_* attributes
leads to over- or under-sanitizing, which is always bad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187967 91177308-0d34-0410-b5e6-96231b3b80d8
All libm floating-point rounding functions, except for round(), had their own
ISD nodes. Recent PowerPC cores have an instruction for round(), and so here I'm
adding ISD::FROUND so that round() can be custom lowered as well.
For the most part, this is straightforward. I've added an intrinsic
and a matching ISD node just like those for nearbyint() and friends. I've named
the SelectionDAG pattern frnd (because ISD::FP_ROUND has already claimed
fround).
This will be used by the PowerPC backend in a follow-up commit.
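For context (illustrative, not part of this commit), a backend with a native
rounding instruction would typically opt in from its TargetLowering constructor:

  // Mark the new node Legal where a matching pattern (frnd) exists, or
  // Custom if the target needs to lower it by hand.
  setOperationAction(ISD::FROUND, MVT::f32, Legal);
  setOperationAction(ISD::FROUND, MVT::f64, Legal);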
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187926 91177308-0d34-0410-b5e6-96231b3b80d8
This fix is very lightweight. The same fix already existed for AddRec
but was missing for NAry expressions.
This is obviously an improvement, but I'm unsure how to test compile-time
problems.
Patch by Xiaoyi Guo!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187475 91177308-0d34-0410-b5e6-96231b3b80d8
Call into ComputeMaskedBits to figure out which bits are set on both add
operands and determine whether the value is a power-of-two-or-zero.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187445 91177308-0d34-0410-b5e6-96231b3b80d8
Adds unit tests for it too.
Split BasicBlockUtils into an analysis-half and a transforms-half, and put the
analysis bits into a new Analysis/CFG.{h,cpp}. Promote isPotentiallyReachable
into llvm::isPotentiallyReachable and move it into Analysis/CFG.
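A small usage sketch of the promoted helper (the header path comes from the
message above; the exact parameter list may differ in your tree):

  #include "llvm/Analysis/CFG.h"

  // Conservative query: could control flow ever get from From to To?
  bool mayReach(const llvm::Instruction *From, const llvm::Instruction *To) {
    return llvm::isPotentiallyReachable(From, To);
  }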
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187283 91177308-0d34-0410-b5e6-96231b3b80d8
Merge consecutive if-regions if they contain identical statements.
Both transformations reduce the number of branches. The transformation
is guarded by a target hook, and is currently enabled only for +R600,
but the correctness has been tested on X86 target using a variety of
CPU benchmarks.
Patch by: Mei Ye
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187278 91177308-0d34-0410-b5e6-96231b3b80d8