Upcoming SLP vectorization improvements will want to be able to estimate costs
of horizontal reductions. Add infrastructure to support this.
We model reductions as a series of (shufflevector, add) tuples ultimately
followed by an extractelement. For example, for an add-reduction of <4 x float>
we could generate the following sequence:
(v0, v1, v2, v3)
 \   \  /  /
  \   \ /
   +   +
(v0+v2, v1+v3, undef, undef)
   \      /
((v0+v2) + (v1+v3), undef, undef, undef)
%rdx.shuf = shufflevector <4 x float> %rdx, <4 x float> undef,
<4 x i32> <i32 2, i32 3, i32 undef, i32 undef>
%bin.rdx = fadd <4 x float> %rdx, %rdx.shuf
%rdx.shuf7 = shufflevector <4 x float> %bin.rdx, <4 x float> undef,
<4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
%bin.rdx8 = fadd <4 x float> %bin.rdx, %rdx.shuf7
%r = extractelement <4 x float> %bin.rdx8, i32 0
This commit adds a cost model interface "getReductionCost(Opcode, Ty, Pairwise)"
that allows clients to ask for the cost of such a reduction (backends may be
able to generate code that is cheaper than the sum of the individual
instructions' costs). This interface is exercised by the CostModel analysis
pass, which looks for reduction patterns like the one above - starting at the
extractelement - and, if it sees a matching sequence, calls the cost model
interface.
We will also support a second form of pairwise reduction that is well supported
on common architectures (haddps, vpadd, faddp).
(v0, v1, v2, v3)
 \  /    \  /
(v0+v1, v2+v3, undef, undef)
    \       /
((v0+v1)+(v2+v3), undef, undef, undef)
%rdx.shuf.0.0 = shufflevector <4 x float> %rdx, <4 x float> undef,
<4 x i32> <i32 0, i32 2 , i32 undef, i32 undef>
%rdx.shuf.0.1 = shufflevector <4 x float> %rdx, <4 x float> undef,
<4 x i32> <i32 1, i32 3, i32 undef, i32 undef>
%bin.rdx.0 = fadd <4 x float> %rdx.shuf.0.0, %rdx.shuf.0.1
%rdx.shuf.1.0 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
<4 x i32> <i32 0, i32 undef, i32 undef, i32 undef>
%rdx.shuf.1.1 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
<4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
%bin.rdx.1 = fadd <4 x float> %rdx.shuf.1.0, %rdx.shuf.1.1
%r = extractelement <4 x float> %bin.rdx.1, i32 0
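A client of the new interface could query both reduction forms roughly as
follows (a minimal sketch; the parameter spelling follows the description
above, and the surrounding variable names are illustrative):

  // Assuming TTI is a TargetTransformInfo reference and VecTy is the
  // <4 x float> vector type being reduced.
  unsigned SplitCost    = TTI.getReductionCost(Instruction::FAdd, VecTy,
                                               /*Pairwise=*/false);
  unsigned PairwiseCost = TTI.getReductionCost(Instruction::FAdd, VecTy,
                                               /*Pairwise=*/true);
  // A client would then use whichever form the target reports as cheaper.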
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190876 91177308-0d34-0410-b5e6-96231b3b80d8
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
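A backend override of this hook might look roughly like the following (a
hedged sketch only; the hook name matches the commit quoted below, but the
UnrollingPreferences fields shown are assumptions used for illustration):

  // Hypothetical target implementation of the new unrolling hook.
  void MyTargetTTI::getUnrollingPreferences(Loop *L,
                                            UnrollingPreferences &UP) const {
    // On an in-order core with a deep pipeline, unroll more aggressively.
    UP.Partial = UP.Runtime = true;   // field names are illustrative
    UP.Threshold = 400;               // value chosen purely as an example
  }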
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190542 91177308-0d34-0410-b5e6-96231b3b80d8
Revert unintentional commit (of an unreviewed change).
Original commit message:
Add getUnrollingPreferences to TTI
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189566 91177308-0d34-0410-b5e6-96231b3b80d8
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189565 91177308-0d34-0410-b5e6-96231b3b80d8
...so that it can be used for z too. Most of the code is the same.
The only real change is to use TargetTransformInfo to test when a sqrt
instruction is available.
The pass is opt-in because at the moment it only handles sqrt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189097 91177308-0d34-0410-b5e6-96231b3b80d8
to find loops if the From and To instructions were in the same block.
Refactor the code a little now that we sometimes need to seed the
CFG-walking algorithm with more than one starting basic block.
Special thanks to Andrew Trick for catching an error in my understanding of
natural loops in code review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188236 91177308-0d34-0410-b5e6-96231b3b80d8
Adds unit tests for it too.
Split BasicBlockUtils into an analysis-half and a transforms-half, and put the
analysis bits into a new Analysis/CFG.{h,cpp}. Promote isPotentiallyReachable
into llvm::isPotentiallyReachable and move it into Analysis/CFG.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187283 91177308-0d34-0410-b5e6-96231b3b80d8
Merge consecutive if-regions if they contain identical statements.
Both transformations reduce the number of branches. The transformation
is guarded by a target hook, and is currently enabled only for +R600,
but its correctness has been tested on the X86 target using a variety of
CPU benchmarks.
Patch by: Mei Ye
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187278 91177308-0d34-0410-b5e6-96231b3b80d8
Address calculation for gather/scatter in vectorized code can incur a
significant cost, making vectorization unprofitable. Add infrastructure to
model this cost.
Tests and cost model for targets will be in follow-up commits.
radar://14351991
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186187 91177308-0d34-0410-b5e6-96231b3b80d8
This is a generic block implementation that works on more than machine blocks.
The C++ mode addition is a bonus due to the extra space provided.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186175 91177308-0d34-0410-b5e6-96231b3b80d8
The symptom is a seg-fault, and the root cause is that a SCEV contains a
SCEVUnknown which holds a null pointer to an llvm::Value.
This is how the problem takes place:
===================================
1). In the pristine input IR, there are two relevant instructions Op1 and Op2.
Op1's corresponding SCEV (denoted as SCEV(Op1)) is a SCEVUnknown, and
SCEV(Op2) contains SCEV(Op1). Neither of these instructions is dead.
Op1 : V1 = ...
...
Op2 : V2 = ... // directly or indirectly (data-flow) depends on Op1
2) The optimizer (LSR in my case) generates an instruction holding the
equivalent value of Op1, making Op1 dead.
Op1': V1' = ...
Op1 : V1 = ...  // now dead
Op2 : V2 = ...  // now depends on Op1', but SCEV(Op2) still contains SCEV(Op1)
3) Op1 is deleted, and the call-back function is called to reset
SCEV(Op1) to indicate it is invalid. However, SCEV(Op2) is not
invalidated along with it.
4) A following pass gets the cached, invalid SCEV(Op2), tries to manipulate it,
and crashes.
The fix:
========
It seems there is no clean yet inexpensive fix. I wrote to the dev-list
soliciting a good solution, but unfortunately got no reply. So, I decided to
fix this problem in a brute-force way:
When ScalarEvolution::getSCEV is called, check if the cached SCEV
contains an invalid SCEVUnknown; if it does, remove the cached SCEV and
re-evaluate the SCEV from scratch.
I compiled a bunch of big *.c and *.cpp files and, fortunately, saw no
increase in compile time.
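Roughly, the validity check can be implemented with the existing SCEV
traversal machinery. The sketch below is only illustrative; the helper and
variable names are assumptions, not necessarily the exact code of this change:

  // Walk a SCEV and report whether it contains a SCEVUnknown whose
  // underlying llvm::Value has been deleted (i.e. is now null).
  struct FindInvalidSCEVUnknown {
    bool FoundOne;
    FindInvalidSCEVUnknown() : FoundOne(false) {}
    bool follow(const SCEV *S) {
      if (const SCEVUnknown *U = dyn_cast<SCEVUnknown>(S)) {
        if (!U->getValue())
          FoundOne = true;
        return false;       // nothing to traverse below a SCEVUnknown
      }
      return true;          // keep walking the operands
    }
    bool isDone() const { return FoundOne; }
  };

  // In ScalarEvolution::getSCEV: if the cached expression turns out to be
  // invalid, drop it from the cache and recompute it from scratch.
  FindInvalidSCEVUnknown F;
  visitAll(CachedSCEV, F);  // visitAll is from ScalarEvolutionExpressions.h
  if (F.FoundOne) {
    // erase the cache entry for the instruction and call createSCEV again
  }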
Misc:
=====
The reduced test-case has 2357 lines of code+other-stuff, too big to commit.
rdar://14283433
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185843 91177308-0d34-0410-b5e6-96231b3b80d8
This is a band-aid to fix the most severe regressions we're seeing from basing
spill decisions on block frequencies, until we have a better solution.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184835 91177308-0d34-0410-b5e6-96231b3b80d8
Account for the cost of the scaling factor in Loop Strength Reduce when rating
the formulae. This uses a target hook.
The default implementation of the hook is: if the addressing mode is legal, the
scaling factor is free.
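A default implementation consistent with that description might look like the
following sketch (the exact hook signature and return convention here are
assumptions for illustration):

  // If the target can fold the scaled addressing mode, scaling costs nothing;
  // otherwise report a non-zero cost so LSR penalizes the formula.
  int TTIImpl::getScalingFactorCost(Type *Ty, GlobalValue *BaseGV,
                                    int64_t BaseOffset, bool HasBaseReg,
                                    int64_t Scale) const {
    if (isLegalAddressingMode(Ty, BaseGV, BaseOffset, HasBaseReg, Scale))
      return 0;  // the scaling factor is free when the mode is legal
    return 1;    // illustrative penalty otherwise
  }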
<rdar://problem/13806271>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183045 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes PR16130 - clang produces incorrect code with loop/expression at -O2.
This is a 2+ year old bug that's now holding up the release. It's a
case where we knowingly made aggressive assumptions about undefined
behavior. These assumptions are wrong when SCEV is computing a
subexpression that does not directly control the branch. With this
fix, we avoid making assumptions in those cases but still optimize the
common case. SCEV's trip count computation for exits controlled by
'or' expressions is now analogous to the trip count computation for
loops with multiple exits. I had already fixed the multiple-exit case
to be conservative.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182989 91177308-0d34-0410-b5e6-96231b3b80d8
- llvm.loop.parallel metadata has been renamed to llvm.loop to be more generic,
making it the root of additional loop metadata.
- Loop::isAnnotatedParallel now looks for llvm.loop and associated
llvm.mem.parallel_loop_access
- document llvm.loop and update llvm.mem.parallel_loop_access
- add support for llvm.vectorizer.width and llvm.vectorizer.unroll
- document llvm.vectorizer.* metadata
- add utility class LoopVectorizerHints for getting/setting loop metadata
- use llvm.vectorizer.width=1 to indicate already vectorized instead of
already_vectorized
- update existing tests that used llvm.loop.parallel and
llvm.vectorizer.already_vectorized
Reviewed by: Nadav Rotem
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182802 91177308-0d34-0410-b5e6-96231b3b80d8
Other than recognizing the attribute, the patch does little else.
It changes the branch probability analyzer so that edges into
blocks postdominated by a cold function are given low weight.
Added analysis and code generation tests. Added documentation for the
new attribute.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182638 91177308-0d34-0410-b5e6-96231b3b80d8
BitVector/SmallBitVector::reference::operator bool remain implicit since
they model more exactly a bool, rather than something else that can be
boolean tested.
The most common (non-buggy) cases are where such objects are used as
return expressions in bool-returning functions or as boolean function
arguments. In those cases I've used (and added where necessary) a named
function to provide the equivalent (or sometimes negated, depending on
convenient wording) test.
One behavior change (YAMLParser) was made, though no test case is
included as I'm not sure how to reach that code path. Essentially any
comparison of llvm::yaml::document_iterators would be invalid if neither
iterator was at the end.
This helped uncover a couple of bugs in Clang - test cases provided for
those in a separate commit along with similar changes to `operator bool`
instances in Clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181868 91177308-0d34-0410-b5e6-96231b3b80d8
On certain architectures we can support efficient vectorized version of
instructions if the operand value is uniform (splat) or a constant scalar.
An example of this is a vector shift on x86.
We can efficiently support
for (i = 0; i < n; i += 4)
  w[0:3] = v[0:3] << <2, 2, 2, 2>
but not
for (i = 0; i < n; i += 4)
  w[0:3] = v[0:3] << x[0:3]
This patch adds a parameter to getArithmeticInstrCost to further qualify operand
values as uniform or uniform constant.
Targets can then choose to return a different cost for instructions with such
operand values.
A follow-up commit will test this feature on x86.
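In terms of the interface, a caller that has proved the shift amount to be a
splat of a constant could pass that information along roughly as follows (a
sketch; the operand-kind enumerator names are assumptions for illustration):

  // Cost of a vector shift whose second operand is a splat of a constant,
  // versus the fully general variable-shift case.
  unsigned CheapShift = TTI.getArithmeticInstrCost(
      Instruction::Shl, VecTy,
      TargetTransformInfo::OK_AnyValue,               // first operand
      TargetTransformInfo::OK_UniformConstantValue);  // splatted constant shift
  unsigned GeneralShift = TTI.getArithmeticInstrCost(Instruction::Shl, VecTy);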
radar://13576547
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178807 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes PR15570: SEGV: SCEV back-edge info invalid after dead code removal.
Indvars creates a SCEV expression for the loop's back edge taken
count, then determines that the comparison is always true and
removes it.
When loop-unroll asks for the expression, it contains a NULL
SCEVUnknown (as a CallbackVH).
forgetMemoizedResults should invalidate the loop back edges expression.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177986 91177308-0d34-0410-b5e6-96231b3b80d8
This pass hasn't been touched in two years and would fail with assertions
against the current debug info metadata format (the only test case for it
still uses a many-versions-old debug info metadata format).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176707 91177308-0d34-0410-b5e6-96231b3b80d8
The "invariant.load" metadata indicates the memory unit being accessed is immutable.
A load annotated with this metadata can be moved across any store.
As I am not sure whether it is legal to move such loads across a barrier/fence,
this change does not allow such a transformation.
rdar://11311484
Thanks to Arnold for the code review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176562 91177308-0d34-0410-b5e6-96231b3b80d8
Clarify that we mean the object starting at the pointer to the end of the
underlying object and not the size of the whole allocated object.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176491 91177308-0d34-0410-b5e6-96231b3b80d8
This adds minimalistic support for PHI nodes to llvm.objectsize() evaluation.
Fingers crossed that it doesn't break the clang bootstrap again..
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176408 91177308-0d34-0410-b5e6-96231b3b80d8
This is similar to getObjectSize(), but doesn't subtract the offset.
Tweak the BasicAA code accordingly (per PR14988).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176407 91177308-0d34-0410-b5e6-96231b3b80d8
Adds a function to target transform info to query for the cost of address
computation. The cost model analysis pass now also queries this interface.
The code in LoopVectorize adds the cost of address computation as part of the
memory instruction cost calculation. Only there do we know whether the
instruction will be scalarized or not.
Increase the penalty for inserting into D registers on Swift. This becomes
necessary because we now always assume that address computation has a cost,
and three is a closer match for that architecture.
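A client such as the vectorizer's memory-cost computation could fold the new
query in roughly like this (a sketch; the variable names are illustrative):

  // Add the target-reported cost of computing the address to the cost of
  // the memory operation itself (VecTy, Alignment, AddrSpace are assumed).
  unsigned Cost = TTI.getAddressComputationCost(VecTy);
  Cost += TTI.getMemoryOpCost(Instruction::Load, VecTy, Alignment, AddrSpace);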
radar://13097204
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174713 91177308-0d34-0410-b5e6-96231b3b80d8
it isn't really an AliasAnalysis concept, and ValueTracking has similar things
that it could plausibly share code with some day.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174027 91177308-0d34-0410-b5e6-96231b3b80d8
reference to a pointer, so that it can handle the case where DataLayout
is not available and behave conservatively.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174024 91177308-0d34-0410-b5e6-96231b3b80d8
generic function calls and intrinsics. This is somewhat overlapping with
an existing intrinsic cost method, but that one seems targeted at
vector intrinsics. I'll merge them or separate their names and use cases
in a separate commit.
This sinks the test of 'callIsSmall' down into TTI where targets can
control it. The whole thing feels very hack-ish to me though. I've left
a FIXME comment about the fundamental design problem this presents. It
isn't yet clear to me what the users of this function *really* care
about. I'll have to do more analysis to figure that out. Putting this
here at least provides it access to proper analysis pass tools and other
such. It also allows us to more cleanly implement the baseline cost
interfaces in TTI.
With this commit, it is now theoretically possible to simplify much of
the inline cost analysis's handling of calls by calling through to this
interface. That conversion will have to happen in subsequent commits as
it requires more extensive restructuring of the inline cost analysis.
The CodeMetrics class is now really only in the business of running over
a block of code and aggregating the metrics on that block of code, with
the actual cost evaluation done entirely in terms of TTI.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173148 91177308-0d34-0410-b5e6-96231b3b80d8
is free. The whole CodeMetrics API should probably be reworked more, but
this is enough to allow deleting the duplicate code there for computing
whether an instruction is free.
All of the passes using this have been updated to pull in TTI and hand
it to the CodeMetrics stuff. Further, a dead CodeMetrics API
(analyzeFunction) is nuked for lack of users.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173036 91177308-0d34-0410-b5e6-96231b3b80d8
depend on and use other analyses (as long as they're either immutable
passes or CGSCC passes of course -- nothing in the pass manager has been
fixed here). Leverage this to thread TargetTransformInfo down through
the inline cost analysis.
No functionality changed here, this just threads things through.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173031 91177308-0d34-0410-b5e6-96231b3b80d8
a dynamic analysis done on each call to the routine. However, now it can
use the standard pass infrastructure to reference other analyses,
instead of a silly setter method. This will become more interesting as
I teach it about more analysis passes.
This updates the two inliner passes to use the inline cost analysis.
Doing so highlights how utterly redundant these two passes are. Either
we should find a cheaper way to do always inlining, or we should merge
the two and just fiddle with the thresholds to get the desired behavior.
I'm leaning increasingly toward the latter as it would also remove the
Inliner sub-class split.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173030 91177308-0d34-0410-b5e6-96231b3b80d8
lowered cost.
Currently, this is a direct port of the logic implementing
isInstructionFree in CodeMetrics. The hope is that the interface can be
improved (e.g. supporting queries on not-yet-formed instructions) and the
implementation abstracted so that, as we gain test cases and target
knowledge, we can expose increasingly accurate heuristics to clients.
I'll start switching existing consumers over and kill off the routine in
CodeMetrics in subsequent commits.
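Consumers could then ask the question through TTI instead of CodeMetrics,
along these lines (a sketch; the cost-constant name is an assumption):

  // Treat an instruction as free if the target's cost model says so.
  if (TTI.getUserCost(&I) == TargetTransformInfo::TCC_Free) {
    // ... do not count this instruction toward the metrics ...
  }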
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172998 91177308-0d34-0410-b5e6-96231b3b80d8
Okay, here's how to reproduce the problem:
1) Build a Release (or Release+Asserts) version of clang in the normal way.
2) Using the clang & clang++ binaries from (1), build a Release (or
Release+Asserts) version of the same sources, but this time enable LTO ---
specify the `-flto' flag on the command line.
3) Run the ARC migrator tests:
$ arcmt-test --args -triple x86_64-apple-darwin10 -fsyntax-only -x objective-c++ ./src/tools/clang/test/ARCMT/cxx-rewrite.mm
You'll see that the output isn't correct (the whitespace is off).
The miscompile is in the function `RewriteBuffer::RemoveText' in the
clang/lib/Rewrite/Core/Rewriter.cpp file. When that function and RewriteRope.cpp
are compiled with LTO and the `arcmt-test' executable is regenerated, you'll see
the error. When those files are not LTO'ed, then the output of the `arcmt-test'
is fine.
It is *really* hard to get a testcase out of this. I'll file a PR with what I
have currently.
--- Reverse-merging r172363 into '.':
U include/llvm/Analysis/MemoryBuiltins.h
U lib/Analysis/MemoryBuiltins.cpp
--- Reverse-merging r171325 into '.':
U test/Transforms/InstCombine/objsize.ll
G include/llvm/Analysis/MemoryBuiltins.h
G lib/Analysis/MemoryBuiltins.cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172756 91177308-0d34-0410-b5e6-96231b3b80d8
Moving the X86CostTable to a common place, so that other back-ends
can share the code. Also simplifying it a bit and commoning up
tables with one and two types on operations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172658 91177308-0d34-0410-b5e6-96231b3b80d8
Note that this bug is only exposed because LTO fails to use TTI.
Fixes self-LTO of clang. rdar://13007381.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172462 91177308-0d34-0410-b5e6-96231b3b80d8
TargetTransformInfo rather than TargetLowering, removing one of the
primary instances of the layering violation of Transforms depending
directly on Target.
This is a really big deal because LSR used to be a "special" pass that
could only be tested fully using llc and by looking at the full output
of it. It also couldn't run with any other loop passes because it had to
be created by the backend. No longer is this true. LSR is now just
a normal pass and we should probably lift the creation of LSR out of
lib/CodeGen/Passes.cpp and into the PassManagerBuilder. =] I've not done
this, or updated all of the tests to use opt and a triple, because
I suspect someone more familiar with LSR would do a better job. This
change should be essentially without functional impact for normal
compilations, and only change behavior of targetless compilations.
The conversion required changing all of the LSR code to refer to the TTI
interfaces, which fortunately are very similar to TargetLowering's
interfaces. However, it also allowed us to *always* expect to have some
implementation around. I've pushed that simplification through the pass,
and leveraged it to simplify code somewhat. It required some test
updates for one of two things: either we used to skip some checks
altogether but now we get the default "no" answer for them, or we used
to have no information about the target and now we do have some.
I've also started the process of removing AddrMode, as the TTI interface
doesn't use it any longer. In some cases this simplifies code, and in
others it adds some complexity, but I think it's not a bad tradeoff even
there. Subsequent patches will try to clean this up even further and use
other (more appropriate) abstractions.
Yet again, almost all of the formatting changes brought to you by
clang-format. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171735 91177308-0d34-0410-b5e6-96231b3b80d8
into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long standing point
of file layout clutter in LLVM.
There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.
The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.
I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).
I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171366 91177308-0d34-0410-b5e6-96231b3b80d8
constant folding calls. Add the initial tests for this which show that
now instsimplify can simplify blindingly obvious code patterns expressed
with both intrinsics and library calls.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171194 91177308-0d34-0410-b5e6-96231b3b80d8
are nice and decomposed so that we can simplify synthesized calls as
easily as actual call instructions. The internal utility still has the
same behavior, it just now operates on a more generic interface so that
I can extend the set of call simplifications that instsimplify knows
about.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171189 91177308-0d34-0410-b5e6-96231b3b80d8