This commit adds the infrastructure for performing bottom-up SLP vectorization (and other optimizations) on parallel computations.
The infrastructure has three potential users:
1. The loop vectorizer needs to be able to vectorize AOS data structures such as (sum += A[i] + A[i+1]).
2. The BB-vectorizer needs this infrastructure for bottom-up SLP vectorization, because bottom-up vectorization is faster to compute.
3. A loop-roller needs to be able to analyze consecutive chains and roll them into a loop, in order to reduce code size. A loop roller does not need to create vector instructions, and this infrastructure separates the chain analysis from the vectorization.
This patch also includes a simple (100 LOC) bottom-up SLP vectorizer that uses the infrastructure, and can vectorize this code:
void SAXPY(int *x, int *y, int a, int i) {
  x[i]   = a * x[i]   + y[i];
  x[i+1] = a * x[i+1] + y[i+1];
  x[i+2] = a * x[i+2] + y[i+2];
  x[i+3] = a * x[i+3] + y[i+3];
}
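As a rough sketch of the effect (illustrative only, not the pass's actual
output), the four scalar statements collapse into a single 4-wide multiply,
add, and store; written here with Clang/GCC vector extensions:

typedef int v4si __attribute__((vector_size(16)));

void SAXPY_vectorized(int *x, int *y, int a, int i) {
  v4si xv, yv;
  __builtin_memcpy(&xv, &x[i], sizeof xv);  // load x[i..i+3]
  __builtin_memcpy(&yv, &y[i], sizeof yv);  // load y[i..i+3]
  v4si av = {a, a, a, a};                   // splat the scalar multiplier
  xv = av * xv + yv;                        // one vector mul + one vector add
  __builtin_memcpy(&x[i], &xv, sizeof xv);  // store x[i..i+3]
}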
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179117 91177308-0d34-0410-b5e6-96231b3b80d8
parse an identifier. Otherwise, parseExpression may parse multiple tokens,
which makes it impossible to properly compute an immediate displacement.
An example of such a case is the source operand (i.e., [Symbol + ImmDisp]) in
the example below:
__asm mov eax, [Symbol + ImmDisp]
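For context, here is a hedged, self-contained reproduction of that operand
form (the identifier, displacement, and function are made up; requires
MS-style inline asm support on x86):

int Table[8];

int load_second_entry() {
  int Result;
  __asm {
    mov eax, [Table + 4]   // identifier plus an immediate displacement
    mov Result, eax
  }
  return Result;
}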
The existing test cases exercise this patch.
rdar://13611297
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179115 91177308-0d34-0410-b5e6-96231b3b80d8
therefore not at all) of the pc or statement list. We also don't
need to emit the compilation dir, so save some space and time
and don't bother.
Fix up the testcase accordingly and verify that we don't emit
the attributes or the items that they use.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179114 91177308-0d34-0410-b5e6-96231b3b80d8
Some general cleanup, and only scan the end of a BB for branches (once we're
done with the terminators and debug values, there should not be any other
branches). These address post-commit review suggestions by Bill Schmidt.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179112 91177308-0d34-0410-b5e6-96231b3b80d8
rather than deriving the StringRef from the Start and End SMLocs.
Using the Start and End SMLocs works fine for operands such as [Symbol], but
not for operands such as [Symbol + ImmDisp]. All existing test cases that
reference a variable exercise this patch.
rdar://13602265
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179109 91177308-0d34-0410-b5e6-96231b3b80d8
This pattern occurs in SROA output due to the way vector arguments are lowered
on ARM.
The testcase from PR15525 now compiles into this, which is better than the code
we got with the old scalarrepl:
_Store:
        ldr.w   r9, [sp]
        vmov    d17, r3, r9
        vmov    d16, r1, r2
        vst1.8  {d16, d17}, [r0]
        bx      lr
Differential Revision: http://llvm-reviews.chandlerc.com/D647
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179106 91177308-0d34-0410-b5e6-96231b3b80d8
On PowerPC, non-vector loads and stores have r+i forms; however, in functions
with large stack frames these were not being used to access slots far from the
stack pointer because such slots were out of range for the signed 16-bit
immediate offset field. This increases register pressure because we need a
separate register for each offset (when the r+r form is used). By enabling
virtual base registers, we can deal with large stack frames without unduly
increasing register pressure.
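As a rough illustration (the function and sizes are made up), a frame like
the following has slots beyond the reach of the signed 16-bit displacement:

void large_frame(void) {
  // Illustrative only: far slots are outside the +/-32 KB range of the r+i
  // forms, so without virtual base registers each such access ties up an
  // extra register to hold the offset (r+r form).
  volatile char buf[70000];
  buf[0] = 1;        // near slot: directly addressable with an r+i form
  buf[69999] = 2;    // far slot: needs a base register
}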
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179105 91177308-0d34-0410-b5e6-96231b3b80d8
Some translations here are not 1:1 because there are grep|grep
chains that are non-trivial to implement in terms of FileCheck features. I
made an effort for the tests to remain as similar as possible; do let me know
if you notice anything fishy. The good news is that some buggy tests were
fixed (grep | not grep - a bug waiting to happen).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179102 91177308-0d34-0410-b5e6-96231b3b80d8
The save area is twice as big and there is no struct return slot. The
stack pointer is always 16-byte aligned (after adding the bias).
Also eliminate the stack adjustment instructions around calls when the
function has a reserved stack frame.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179083 91177308-0d34-0410-b5e6-96231b3b80d8
Some parts of PointerIntPair assumed that the IntType of the pair was implicitly
convertible to intptr_t, which is not the case for enum class values. Add a
static_cast<intptr_t> to make these conversions explicit and allow
PointerIntPair to be used with an enum class IntType. While we're here, rename
some of the argument values so we don't have variables named "Int" floating
around.
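A minimal sketch of the usage this enables (the enum and names are
illustrative, not code from this commit):

#include "llvm/ADT/PointerIntPair.h"

// An enum class used as the integer half of a PointerIntPair; storing and
// retrieving it goes through the explicit static_cast<intptr_t> added here.
enum class Color { Red = 0, Blue = 1 };

void example(int *Ptr) {
  llvm::PointerIntPair<int *, 1, Color> P(Ptr, Color::Blue);
  if (P.getInt() == Color::Blue)
    P.setInt(Color::Red);
}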
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179073 91177308-0d34-0410-b5e6-96231b3b80d8
I brazenly think this change is slightly simpler than r178793 because:
- no "state" in functor
- "OpndPtrs[i]" looks simpler than "&Opnds[OpndIndices[i]]"
While I can reproduce the problem in Valgrind, it is rather difficult to come up
with a standalone test case. The reason is that when an iterator is invalidated,
the stale invalidated elements are not yet clobbered by nonsense data, so the
optimizer can still proceed successfully.
Thanks to Benjamin for fixing this bug and generously providing the test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179062 91177308-0d34-0410-b5e6-96231b3b80d8
The costs are overfitted so that I can still use the legalization factor.
For example, when compiled with SSE2, the following kernel has about half the
throughput vectorized compared to unvectorized. Before this patch we would vectorize it.
unsigned short A[1024];
double B[1024];

void f() {
  int i;
  for (i = 0; i < 1024; ++i) {
    B[i] = (double) A[i];
  }
}
radar://13599001
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179033 91177308-0d34-0410-b5e6-96231b3b80d8
PowerPC has a conditional branch to the link register (return) instruction: BCLR.
This should be used any time we'd otherwise have a conditional branch to a
return. This adds a small pass, PPCEarlyReturn, which runs just prior to the
branch selection pass (and, importantly, after block placement) to generate
these conditional returns when possible. It will also eliminate unconditional
branches to returns (these happen rarely; most of the time these have already
been tail duplicated by the time PPCEarlyReturn is invoked). This is a nice
optimization for small functions that do not maintain a stack frame.
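An illustrative (made-up) function showing the kind of conditional return this
targets:

// The early return below would otherwise be a conditional branch to a
// separate return block; with PPCEarlyReturn it can become a conditional
// branch to the link register (BCLR), i.e. a conditional blr.
int clamp_nonnegative(int X) {
  if (X < 0)
    return 0;   // candidate for a conditional return
  return X;
}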
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179026 91177308-0d34-0410-b5e6-96231b3b80d8
I've managed to convince myself that AArch64's acquire/release
instructions are sufficient to guarantee C++11's required semantics,
even in the sequentially-consistent case.
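For illustration (not code from this commit), the sequentially-consistent case
in question looks like:

#include <atomic>

// Per the reasoning above, even these seq_cst accesses can be lowered to
// AArch64's load-acquire / store-release instructions (ldar / stlr) without
// additional barriers.
std::atomic<int> Flag{0};

void publisher() { Flag.store(1, std::memory_order_seq_cst); }
int observer() { return Flag.load(std::memory_order_seq_cst); }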
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179005 91177308-0d34-0410-b5e6-96231b3b80d8
I couldn't touch this file and not clean it up some. These reformattings
brought to you by clang-format, with some minor adjustments by me. More
spring cleaning to follow here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179004 91177308-0d34-0410-b5e6-96231b3b80d8
internal linkage and so wasn't a patent bug, it doesn't make any sense
here. We can avoid even calling operator<< by just embedding the newline
in the string literals that were already being streamed out. It also
gives the impression of some line-ending agnosticism which is not
present, and that flushing happens when it doesn't.
If we want to use std::endl, we could do that, but honestly it doesn't
seem remotely worth it. Using '\n' directly is much more clear when
working with raw_ostream.
It also happens to fix builds with old crufty GCC STL implementations
that include std::endl into the global namespace (or headers written to
be compatible with such atrocities).
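A minimal sketch of the preferred style (the function and message are
illustrative, not the code this commit touched):

#include "llvm/Support/raw_ostream.h"

// Embed the newline in the literal that is already being streamed rather
// than reaching for std::endl.
void reportCount(llvm::raw_ostream &OS, unsigned Count) {
  OS << "processed " << Count << " items\n";
}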
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179003 91177308-0d34-0410-b5e6-96231b3b80d8
First, we should not cheat: fsel-based lowering of select_cc is a
finite-math-only optimization (the ISA manual, section F.3 of v2.06, makes
this clear, as does a note in our own README).
This also adds fsel-based lowering of EQ and NE condition codes. As it turned
out, fsel generation was covered by a grand total of zero regression test
cases. I've added some test cases to cover the existing behavior (which is now
finite-math only), as well as the new EQ cases.
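For illustration (made-up source, not one of the new test cases), the kind of
select that fsel can implement:

// select_cc of this shape can be turned into fsel by testing the sign of
// (A - B), which is only sound under finite-math-only assumptions: NaNs,
// infinities, and signed zeros break the equivalence.
double select_ge(double A, double B, double X, double Y) {
  return (A >= B) ? X : Y;
}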
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179000 91177308-0d34-0410-b5e6-96231b3b80d8
The code in getTypeConversion attempts to promote the vector element type
before it tries to split or widen the vector.
After failing to find a legal vector type by promoting, it would continue using
the promoted vector element type, thereby missing legal split vector types.
For example, the type v32i32, which has a legal split of 8 x v4i32 on x86/sse2,
would be transformed to v32i256 and from there on successively split to
v16i256, v8i256, v1i256, and then finally ends up as an i64 type.
By resetting the vector element type to the original vector element type that
existed before the promotion, the code will attempt to split the vector type
into smaller vector widths of the same element type.
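As a rough illustration (using Clang/GCC vector extensions rather than the
original IR), a type like v32i32 should be legalized by splitting into legal
128-bit pieces on SSE2, not by first promoting its element type:

// Illustrative only: a 32 x i32 vector; with SSE2 this should legalize as a
// series of v4i32 operations rather than having i32 promoted to a wider
// integer type first.
typedef int v32si __attribute__((vector_size(128)));

v32si add_wide(v32si A, v32si B) {
  return A + B;
}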
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178999 91177308-0d34-0410-b5e6-96231b3b80d8