This matters, for example, in the following matrix multiply:
int **mmult(int rows, int cols, int **m1, int **m2, int **m3) {
  int i, j, k, val;
  for (i = 0; i < rows; i++) {
    for (j = 0; j < cols; j++) {
      val = 0;
      for (k = 0; k < cols; k++) {
        val += m1[i][k] * m2[k][j];
      }
      m3[i][j] = val;
    }
  }
  return m3;
}
Taken from the test-suite benchmark Shootout.
We estimate the cost of the multiply to be 2, while we actually generate
9 instructions for it, so the vectorized code ends up quite a bit slower
than the scalar version (48% slower on my machine).
Also, properly differentiate between AVX1 and AVX2. On AVX1 we still split
the vector into two 128-bit halves and handle each subvector multiply as
above with 9 instructions. Only on AVX2 does v4i64 get a cost of 9.
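For illustration (a hypothetical kernel, not from this patch), the kind of
loop this cost governs: a 64-bit integer multiply has no single AVX1
instruction, so a <4 x i64> product is split and emulated as described above.

// Hypothetical example kernel (assumed, not from the patch): when
// vectorized at <4 x i64>, the multiply below must be split into two
// 128-bit halves on AVX1, each emulated with a multi-instruction sequence.
void scale(long long *a, const long long *b, int n) {
  for (int i = 0; i < n; i++)
    a[i] = a[i] * b[i];
}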
I changed the test case in test/Transforms/LoopVectorize/X86/avx1.ll to use
an add instead of a mul, because with a mul we now no longer vectorize. I
did verify that the mul would indeed be more expensive when vectorized,
using three kernels:
  for (i ...)
    r += a[i] * 3;

  for (i ...)
    m1[i] = m1[i] * 3; // This matches the test case in avx1.ll
and a matrix multiply.
In each case the vectorized version was considerably slower.
radar://13304919
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176403 91177308-0d34-0410-b5e6-96231b3b80d8
The LoopVectorizer often runs multiple times on the same function due to inlining.
When this happens, it can vectorize the same loops multiple times, increasing
code size and adding unneeded branches.
With this patch, the vectorizer puts metadata on scalar loops during
vectorization, marking them as 'already vectorized' so that it knows to
ignore them when it sees them a second time.
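A minimal sketch of the scheme, assuming current LLVM C++ APIs; the metadata
kind string and its placement on the latch terminator are assumptions here,
not necessarily exactly what this patch does:

#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
using namespace llvm;

// Assumed metadata kind; the patch's actual string may differ.
static const char *AlreadyVectorized = "llvm.vectorizer.already_vectorized";

// After vectorizing, tag the remaining scalar loop so a later run skips it.
static void markAsVectorized(Loop *L, LLVMContext &Ctx) {
  if (BasicBlock *Latch = L->getLoopLatch())
    Latch->getTerminator()->setMetadata(AlreadyVectorized,
                                        MDNode::get(Ctx, {}));
}

// Before vectorizing, bail out if the tag is already present.
static bool isAlreadyVectorized(const Loop *L) {
  BasicBlock *Latch = L->getLoopLatch();
  return Latch && Latch->getTerminator()->getMetadata(AlreadyVectorized);
}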
PR14448.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176399 91177308-0d34-0410-b5e6-96231b3b80d8
The sys::fs::is_directory() check is unnecessary: if the filename is a
directory, the function fails anyway and returns the same error code.
Remove the check to avoid an unnecessary stat call.
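As a standalone POSIX illustration (not the LLVM code path): reading from a
directory descriptor already fails with EISDIR, so the caller sees the same
error whether or not we stat the path first.

#include <cerrno>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

int main() {
  int fd = open("/tmp", O_RDONLY); // opening a directory can succeed
  char buf[16];
  // On Linux, read(2) on a directory fails and sets errno to EISDIR.
  if (fd >= 0 && read(fd, buf, sizeof buf) < 0 && errno == EISDIR)
    std::perror("read"); // reports the same "is a directory" error
  if (fd >= 0)
    close(fd);
  return 0;
}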
Someone needs to review this on Windows and see whether the check is
necessary there or not.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176386 91177308-0d34-0410-b5e6-96231b3b80d8
This patch eliminates the need to emit a constant move instruction when this
pattern is matched:
  (select (setgt a, Constant), T, F)
The pattern above effectively turns into this:
  (conditional-move (setlt a, Constant + 1), F, T)
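A source-level analogue (hypothetical values; valid only when Constant + 1
does not overflow):

// a > 7  <=>  a >= 8  <=>  !(a < 8), so swapping the select arms lets the
// compare reuse the adjacent constant instead of materializing a new one.
int sel(int a) {
  return (a > 7) ? 10 : 20; // (select (setgt a, 7), 10, 20)
  // equivalent: (a < 8) ? 20 : 10 -- (conditional-move (setlt a, 8), 20, 10)
}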
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176384 91177308-0d34-0410-b5e6-96231b3b80d8
This reduces the time actually spent doing string-to-ID conversion and shows
a 10% improvement in compile time for a particularly bad case involving ARM
NEON intrinsics (these have many overloads).
Patch by Jean-Luc Duprat!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176365 91177308-0d34-0410-b5e6-96231b3b80d8
- ISD::SHL/SRL/SRA must have either both scalar or both vector operands,
  but TLI.getShiftAmountTy() so far only returned a scalar type. As a
  result, backend logic assuming that breaks.
- Rename the original TLI.getShiftAmountTy() to
  TLI.getScalarShiftAmountTy() and redefine TLI.getShiftAmountTy() to
  return the target-specified scalar type or the same vector type as the
  1st operand.
- Fix most TICG logic that assumed TLI.getShiftAmountTy() returns a simple
  scalar type.
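A simplified sketch of the redefined hook (written as a free function here,
not the real TargetLowering member signature):

#include "llvm/CodeGen/ValueTypes.h"
using namespace llvm;

// Vector shifts now get an amount type identical to the first operand's
// vector type; scalar shifts keep the target's preferred scalar type
// (what TLI.getScalarShiftAmountTy() returns).
static EVT getShiftAmountTy(EVT LHSTy, EVT TargetScalarShiftTy) {
  return LHSTy.isVector() ? LHSTy : TargetScalarShiftTy;
}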
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176364 91177308-0d34-0410-b5e6-96231b3b80d8
dispatch code. As far as I can tell, the Thumb2 code is behaving as expected.
I was able to compile and run the associated test case for both ARM and Thumb1.
rdar://13066352
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176363 91177308-0d34-0410-b5e6-96231b3b80d8
v2: based on Michel's patch, but now allows copying of all register sizes.
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176346 91177308-0d34-0410-b5e6-96231b3b80d8
It's much easier to specify the encoding with TableGen directly.
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176344 91177308-0d34-0410-b5e6-96231b3b80d8
This function will be used later, when the capability to search successor
blocks for delay-slot-filling instructions is added. No intended
functionality changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176325 91177308-0d34-0410-b5e6-96231b3b80d8
We avoided computing DAG height/depth during node printing because printing
shouldn't depend on having an otherwise valid DAG. But this has become far
too annoying in the common case of a valid DAG, where we want to see valid
values. If doing the computation on the fly turns out to be a problem in
practice, then I'll add a mode to the diagnostics that only forces it when
we're likely to have a valid DAG and otherwise explicitly prints INVALID
instead of bogus numbers. For now, just do it all the time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176314 91177308-0d34-0410-b5e6-96231b3b80d8
SelectionDAGIsel::LowerArguments needs a function, not a basic block, so it
makes sense to pass it the function instead of extracting a basic block from
the function and then tossing it. This is also more self-documenting
(functions have arguments, BBs don't).
In addition, added comments to a couple of Select* methods.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176305 91177308-0d34-0410-b5e6-96231b3b80d8
This was causing the folding set to fail to fold attributes, because it was
being calculated in one spot without an empty values string, but here with
one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176301 91177308-0d34-0410-b5e6-96231b3b80d8
The instcombine-recognized pattern looks like:
  a = b * c
  d = a +/- Cst
or:
  a = b * c
  d = Cst +/- a
When creating the new operands for the fadd or fsub instruction that follows
the related fmul, the first operand was created from the second original
operand (M0 was created from C1) and the second from the first (M1 from
Opnd0). The fix consists in creating the new operands from the appropriate
original operands, i.e., M0 from Opnd0 and M1 from C1.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176300 91177308-0d34-0410-b5e6-96231b3b80d8
We make the cost of calling libm functions extremely high, as emitting the
calls is expensive and causes spills (on x86), so performance suffers. We
still vectorize important calls like ceilf and friends on SSE4.1, and fabs.
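For illustration (hypothetical kernels, not from the patch):

#include <math.h>

// ceilf lowers to a single rounding instruction on SSE4.1, so this loop
// remains profitable to vectorize:
void ceil_all(float *a, int n) {
  for (int i = 0; i < n; i++)
    a[i] = ceilf(a[i]);
}

// sinf stays a libm call; with its cost now extremely high, the vectorizer
// leaves this loop scalar instead of paying a call (and spills) per lane:
void sin_all(float *a, int n) {
  for (int i = 0; i < n; i++)
    a[i] = sinf(a[i]);
}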
Differential Revision: http://llvm-reviews.chandlerc.com/D466
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176287 91177308-0d34-0410-b5e6-96231b3b80d8
The work done by the post-encoder (setting architecturally unused bits to 0
as required) can be done by the existing operand that covers the "#0.0".
This removes at least one of the discouraged PostEncoderMethod uses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176261 91177308-0d34-0410-b5e6-96231b3b80d8
If an otherwise-weak variable is actually defined in this unit, it can't be
undefined at runtime, so we can use normal global variable sequences
(ADRP/ADD) to access it.
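For illustration (a hypothetical example using the Clang/GCC attribute
syntax):

// 'counter' is weak but defined in this translation unit, so the symbol is
// guaranteed to exist at runtime; per the reasoning above, the backend can
// address it with a plain ADRP/ADD pair instead of the conservative
// sequence needed for possibly-undefined weak symbols.
__attribute__((weak)) int counter = 0;

int read_counter() { return counter; }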
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176259 91177308-0d34-0410-b5e6-96231b3b80d8
Shadow checks are disabled and memory loads always produce fully initialized
values in functions that don't have a sanitize_memory attribute. Value and
argument shadow is propagated as usual.
This change also updates blacklist behaviour to match the above.
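For illustration, a hypothetical function that ends up without the
sanitize_memory attribute (here via Clang's no_sanitize_memory attribute; a
blacklist entry now behaves the same way):

// No shadow checks are emitted in this function and its loads are treated
// as producing fully initialized values; shadow for the argument and the
// returned value still propagates through as usual.
__attribute__((no_sanitize_memory))
int sum(const int *p, int n) {
  int s = 0;
  for (int i = 0; i < n; i++)
    s += p[i];
  return s;
}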
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176247 91177308-0d34-0410-b5e6-96231b3b80d8
to create the parent path.
This can happen if the path is a relative filename and the current directory was removed.
Thanks to Daniel D. for the hint that helped fix it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176226 91177308-0d34-0410-b5e6-96231b3b80d8