one-byte NOPs. If the processor actually executes those NOPs, as it sometimes
does with aligned bundling, this can have a performance impact. In my
micro-benchmarks, run on my one machine, a 15-byte NOP followed by twelve
one-byte NOPs is about 20% slower than a 15-byte NOP followed by a 12-byte
NOP. This patch changes NOP emission to emit as many 15-byte NOPs (the
maximum) as possible, followed by at most one shorter NOP, as sketched below.
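A minimal sketch of the new strategy (illustrative, not the actual MC
emitter; emitNopOfLength is a hypothetical helper):

  #include <cstdint>
  void emitNopOfLength(uint64_t Len);  // hypothetical single-NOP emitter
  void emitNops(uint64_t Count) {
    const uint64_t MaxNopLen = 15;  // largest x86 NOP encoding
    while (Count > MaxNopLen) {     // as many 15-byte NOPs as possible
      emitNopOfLength(MaxNopLen);
      Count -= MaxNopLen;
    }
    if (Count > 0)
      emitNopOfLength(Count);       // at most one shorter trailing NOP
  }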
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176464 91177308-0d34-0410-b5e6-96231b3b80d8
GlobalValue linkage up to ExternalLinkage in the ExtractGV pass. This
prevents linkonce and linkonce_odr symbols from being DCE'd.
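A minimal sketch of the idea, assuming LLVM's GlobalValue API (this is not
the actual ExtractGV code):

  #include "llvm/IR/GlobalValue.h"
  // Promote linkonce/linkonce_odr to external so the symbol survives DCE.
  static void promoteLinkage(llvm::GlobalValue &GV) {
    if (GV.hasLinkOnceLinkage())
      GV.setLinkage(llvm::GlobalValue::ExternalLinkage);
  }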
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176459 91177308-0d34-0410-b5e6-96231b3b80d8
'R' An address that can be used in a non-macro load or store.
This patch includes a positive test case.
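A hedged illustration of the constraint in GCC-style inline asm (this is
not the test case from the patch):

  // 'R' asks for an address usable directly in a non-macro load/store.
  int load_word(int *p) {
    int v;
    asm("lw %0, %1" : "=r"(v) : "R"(*p));
    return v;
  }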
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176452 91177308-0d34-0410-b5e6-96231b3b80d8
* Only apply the divide bypass optimization when not optimizing for size.
* Fixed a bug caused by creating the constant for the 0 value with type
  Int32; the dividend's type is now used to generate the constant instead.
* For Atom x86-64, apply the divide bypass to use 16-bit divides instead of
  64-bit divides when the operand values are small enough (see the sketch
  below).
* Added lit tests for the 64-bit divide bypass.
Patch by Tyler Nowicki!
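A hedged sketch of the bypass idea (not the actual generated code): when
both operands of a 64-bit unsigned divide fit in 16 bits, the same
quotient can be computed with a much cheaper 16-bit divide.

  #include <cstdint>
  uint64_t bypassedDiv(uint64_t a, uint64_t b) {
    if (((a | b) >> 16) == 0)            // both values fit in 16 bits
      return (uint16_t)a / (uint16_t)b;  // fast path: 16-bit divide
    return a / b;                        // slow path: full 64-bit divide
  }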
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176442 91177308-0d34-0410-b5e6-96231b3b80d8
This adds minimalistic support for PHI nodes to llvm.objectsize() evaluation.
Fingers crossed that it doesn't break the clang bootstrap again...
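A minimal illustration (not from the patch) of the case this enables: the
pointer below is a PHI node over two allocations, which the
llvm.objectsize() evaluation can now look through; with mode 0 (maximum)
the result here would be 64.

  #include <stddef.h>
  static char small_buf[16], big_buf[64];
  size_t max_size(int cond) {
    char *p;
    if (cond)
      p = small_buf;  // incoming PHI value #1
    else
      p = big_buf;    // incoming PHI value #2
    return __builtin_object_size(p, 0);  // lowered via llvm.objectsize()
  }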
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176408 91177308-0d34-0410-b5e6-96231b3b80d8
This matters for example in following matrix multiply:
int **mmult(int rows, int cols, int **m1, int **m2, int **m3) {
  int i, j, k, val;
  for (i = 0; i < rows; i++) {
    for (j = 0; j < cols; j++) {
      val = 0;
      for (k = 0; k < cols; k++) {
        val += m1[i][k] * m2[k][j];
      }
      m3[i][j] = val;
    }
  }
  return m3;
}
Taken from the test-suite benchmark Shootout.
We estimate the cost of the multiply at 2, but we actually generate 9
instructions for it and end up quite a bit slower than the scalar version
(48% slower on my machine).
Also, properly differentiate between AVX1 and AVX2. On AVX1 we still split
the vector into two 128-bit halves and handle the subvector multiplies as
above with 9 instructions.
Only on AVX2 do we have a cost of 9 for v4i64.
I changed the test case in test/Transforms/LoopVectorize/X86/avx1.ll to use
an add instead of a mul because with a mul we now no longer vectorize. I did
verify that the mul would indeed be more expensive when vectorized, using
three kernels:
for (i = 0; i < n; i++)
  r += a[i] * 3;
for (i = 0; i < n; i++)
  m1[i] = m1[i] * 3; // This matches the test case in avx1.ll
and a matrix multiply.
In each case the vectorized version was considerably slower.
radar://13304919
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176403 91177308-0d34-0410-b5e6-96231b3b80d8
The LoopVectorizer often runs multiple times on the same function due to
inlining. When this happens, it often vectorizes the same loops multiple
times, increasing code size and adding unneeded branches.
With this patch, the vectorizer puts metadata on scalar loops during
vectorization, marking them as 'already vectorized' so that it knows to
ignore them when it sees them a second time.
PR14448.
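A minimal sketch of the mechanism, assuming LLVM's metadata API; the
metadata kind string below is illustrative, not necessarily the one the
patch uses.

  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Metadata.h"
  // Skip loops whose latch branch already carries the marker...
  bool isAlreadyVectorized(const llvm::Instruction *LatchBr) {
    return LatchBr->getMetadata("already_vectorized") != nullptr;
  }
  // ...and attach it to the remaining scalar loop after vectorizing.
  void markAsVectorized(llvm::Instruction *LatchBr, llvm::LLVMContext &Ctx) {
    LatchBr->setMetadata("already_vectorized", llvm::MDNode::get(Ctx, {}));
  }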
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176399 91177308-0d34-0410-b5e6-96231b3b80d8
This patch eliminates the need to emit a constant move instruction when this
pattern is matched:
(select (setgt a, Constant), T, F)
The pattern above effectively turns into this:
(conditional-move (setlt a, Constant + 1), F, T)
Since a > Constant is the negation of a < Constant + 1, the true/false
operands swap, and the new comparison can use the MIPS slti (set on less
than immediate) instruction, so the constant no longer has to be
materialized in a register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176384 91177308-0d34-0410-b5e6-96231b3b80d8
Also removed the "should produce..." comments because they do not match
the actually produced output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176381 91177308-0d34-0410-b5e6-96231b3b80d8
detail.
The way this test was written, it relied on an implementation detail
(fixups) and hence was very brittle (depending, among other things, on the
exact ordering of statistics printed by MC).
The test was rewritten to check a more observable output difference. While it
doesn't cover 100% of the things the original test covered, it's a good
practice to write regression tests this way. If we want to check that
internal details and invariants hold, such tests should be expressed as unit
tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176377 91177308-0d34-0410-b5e6-96231b3b80d8
The make (all) target takes care of creating lit configs and
auto-generating tests. The problem with the original 'lit.site.cfg' target
is that it is not recursive and does not create everything necessary for
testing clang-tools-extra.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176374 91177308-0d34-0410-b5e6-96231b3b80d8
- These tests won't crash on trunk, but it is better to add them so that
  they don't break again in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176369 91177308-0d34-0410-b5e6-96231b3b80d8
- ISD::SHL/SRL/SRA must have either both scalar or both vector operands,
  but TLI.getShiftAmountTy() so far only returned a scalar type. As a
  result, backend logic assuming that invariant breaks.
- Rename the original TLI.getShiftAmountTy() to
  TLI.getScalarShiftAmountTy() and re-define TLI.getShiftAmountTy() to
  return the target-specified scalar type or the same vector type as the
  1st operand (see the sketch below).
- Fix most TICG logic assuming TLI.getShiftAmountTy() returns a simple
  scalar type.
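A sketch of the redefined hook under the new contract (the exact signature
in TargetLowering may differ):

  llvm::EVT getShiftAmountTy(llvm::EVT LHSTy) const {
    // Vector shifts get a shift-amount type matching the first operand;
    // scalar shifts keep the target-specific scalar type.
    return LHSTy.isVector() ? LHSTy : getScalarShiftAmountTy(LHSTy);
  }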
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176364 91177308-0d34-0410-b5e6-96231b3b80d8
dispatch code. As far as I can tell the thumb2 code is behaving as expected.
I was able to compile and run the associated test case for both arm and thumb1.
rdar://13066352
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176363 91177308-0d34-0410-b5e6-96231b3b80d8
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176359 91177308-0d34-0410-b5e6-96231b3b80d8
The pattern instcombine recognizes looks like:
a = b * c
d = a +/- Cst
or
a = b * c
d = Cst +/- a
When creating the new operands for the fadd or fsub instruction that
follows the related fmul, the first operand was created from the second
original operand (M0 was created with C1) and the second from the first
(M1 with Opnd0).
The fix consists in creating the new operands from the appropriate
original operands, i.e., M0 with Opnd0 and M1 with C1.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176300 91177308-0d34-0410-b5e6-96231b3b80d8
We make the cost of calling libm functions extremely high, as emitting the
calls is expensive and causes spills (on x86), so performance suffers. We
still vectorize important calls like ceilf, fabs, and friends on SSE4.1.
Differential Revision: http://llvm-reviews.chandlerc.com/D466
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176287 91177308-0d34-0410-b5e6-96231b3b80d8
The work done by the post-encoder (setting architecturally unused bits to 0
as required) can be done by the existing operand that covers the "#0.0".
This removes at least one use of the discouraged PostEncoderMethod.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176261 91177308-0d34-0410-b5e6-96231b3b80d8
If an otherwise weak var is actually defined in this unit, it can't be
undefined at runtime so we can use normal global variable sequences (ADRP/ADD)
to access it.
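An illustrative C example of the case described (not from the patch): w is
weak but defined in this translation unit, so it can never be an undefined
symbol at runtime and ADRP/ADD can address it directly.

  __attribute__((weak)) int w = 42;
  int get_w(void) { return w; }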
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176259 91177308-0d34-0410-b5e6-96231b3b80d8
Shadow checks are disabled and memory loads always produce fully initialized
values in functions that don't have a sanitize_memory attribute. Value and
argument shadow is propagated as usual.
This change also updates blacklist behaviour to match the above.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176247 91177308-0d34-0410-b5e6-96231b3b80d8
Most of the tests that behaved differently on the llvm-arm-linux buildbot
did so because the triple wasn't set correctly to armv5, so we can revert
most of the special behaviour added previously. Some tests still need the
special treatment, though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176243 91177308-0d34-0410-b5e6-96231b3b80d8
definition DIE (TAG_variable), and put AT_MIPS_linkage_name on the
TAG_member when DarwinGDBCompat is true.
Darwin GDB needs AT_MIPS_linkage_name in both places to work.
Follow-up patch to r176143.
rdar://problem/13291234
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176220 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes an issue where trying to assemble valid ADR instructions would
cause LLVM to hit a failed assertion.
Patch by Keith Walker.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176189 91177308-0d34-0410-b5e6-96231b3b80d8
This properly asks TargetLibraryInfo if a call is available and, if it is,
translates it into the corresponding LLVM builtin. We don't vectorize
sqrt() yet because I'm not sure about the semantics for negative numbers.
The other intrinsics should be exact equivalents to the libm functions.
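A hedged illustration of the kind of loop this enables (the function is
illustrative, not from the patch): floorf is known to TargetLibraryInfo,
maps onto the llvm.floor intrinsic, and can then be vectorized.

  #include <math.h>
  void floor_all(float *a, int n) {
    for (int i = 0; i < n; ++i)
      a[i] = floorf(a[i]);
  }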
Differential Revision: http://llvm-reviews.chandlerc.com/D465
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176188 91177308-0d34-0410-b5e6-96231b3b80d8
PR15262 reported a bug where the following instruction:
  i8 getelementptr inbounds i8* bitcast ([4 x i8] addrspace(12)* @buf to i8*), i32 2
was getting folded into:
  addrspace(12)* getelementptr inbounds ([4 x i8] addrspace(12)* @buf, i32 0, i32 2)
This caused instcombine to crash because the original instruction and
the folded instruction have different types. The issue was fixed by
disallowing bitcasts between different address spaces to be folded away.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176156 91177308-0d34-0410-b5e6-96231b3b80d8