Don't promote byval pointer arguments when their size in bits is
not equal to their alloc size in bits. This can happen for x86_fp80,
where the size in bits is 80 but the alloc size in bits is 128.
Promoting these types can break passing unions of x86_fp80s and other
types.
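A minimal IR sketch of the mismatch (illustrative only, not from the
patch): the byval slot occupies 16 bytes, but an x86_fp80 load moves
only 10 of them, so padding bytes that a union sibling may rely on are
not carried along by promotion:
define x86_fp80 @f(x86_fp80* byval %x) {
  %v = load x86_fp80* %x      ; loads 80 of the 128 allocated bits
  ret x86_fp80 %v
}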
Patch by Thomas Jablin!
Reviewed By: rnk
Differential Revision: http://reviews.llvm.org/D5057
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216693 91177308-0d34-0410-b5e6-96231b3b80d8
For a detailed description of the problem see the comment in the test file.
The problematic moveBefore() calls are not required anymore because the new
scheduling algorithm ensures a correct ordering anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216656 91177308-0d34-0410-b5e6-96231b3b80d8
Several combines involving icmp (shl C2, %X) C1 can be simplified
without introducing any new instructions. Move them to InstSimplify;
while we are at it, make them more powerful.
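One fold of this shape, as a hedged sketch (the patch covers several
variants): with nuw, 'shl nuw i32 1, %x' can never be zero, so the
compare folds to a constant without creating any new instruction:
define i1 @cmp(i32 %x) {
  %shl = shl nuw i32 1, %x
  %cmp = icmp eq i32 %shl, 0
  ret i1 %cmp                 ; simplifies to ret i1 false
}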
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216642 91177308-0d34-0410-b5e6-96231b3b80d8
We try to perform this transform in InstSimplify but we aren't always
able to. Sometimes, we need to insert a bitcast if X and Y don't have
the same type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216598 91177308-0d34-0410-b5e6-96231b3b80d8
It's incorrect to perform this simplification if the types differ.
A bitcast would need to be inserted for this to work.
This fixes PR20771.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216597 91177308-0d34-0410-b5e6-96231b3b80d8
'shl nuw CI, x' produces [CI, CI << CLZ(CI)]
'shl nsw CI, x' produces [CI << CLO(CI)-1, CI] if CI is negative
'shl nsw CI, x' produces [CI, CI << CLZ(CI)-1] if CI is non-negative
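A worked instance (illustrative, not from the patch): for
'%r = shl nuw i8 3, %x', CLZ(3) in i8 is 6, so the range is
[3, 3 << 6] = [3, 192], and a compare against a constant outside that
range folds immediately:
define i1 @range(i8 %x) {
  %r = shl nuw i8 3, %x
  %c = icmp eq i8 %r, 2       ; 2 is outside [3, 192] -> false
  ret i1 %c
}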
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216570 91177308-0d34-0410-b5e6-96231b3b80d8
We supported transforming:
(gep i8* X, -(ptrtoint Y))
to:
(inttoptr (sub (ptrtoint X), (ptrtoint Y)))
However, this only fired if 'X' had type i8*. Generalize this to
support various types of different sizes. This results in much better
CodeGen, especially for pointers to packed structs.
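For reference, the previously supported i8 form spelled out as IR (a
sketch with made-up names):
define i8* @g(i8* %x, i8* %y) {
  %yi  = ptrtoint i8* %y to i64
  %neg = sub i64 0, %yi
  %p   = getelementptr i8* %x, i64 %neg
  ret i8* %p   ; becomes inttoptr (sub (ptrtoint %x), (ptrtoint %y))
}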
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216523 91177308-0d34-0410-b5e6-96231b3b80d8
Consider:
  long long *f(long long *b, long long *e) {
    return b + (e - b);
  }
We would lower this to something like:
define i64* @f(i64* %b, i64* %e) {
  %1 = ptrtoint i64* %e to i64
  %2 = ptrtoint i64* %b to i64
  %3 = sub i64 %1, %2
  %4 = ashr exact i64 %3, 3
  %5 = getelementptr inbounds i64* %b, i64 %4
  ret i64* %5
}
This should fold away to just 'e'.
N.B. This adds m_SpecificInt as a convenient way to match against a
particular 64-bit integer when using LLVM's match interface.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216439 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
There is no functionality change here except in the way we assemble and
dump musttail calls in variadic functions. There's really no need to
separate out the bits for musttail and "is forwarding varargs" on call
instructions. A musttail call by definition has to forward the ellipsis
or it would fail verification.
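For example, a variadic thunk now assembles and prints with the
forwarded ellipsis spelled out (a sketch; the names are made up):
declare i32 @target(i32, ...)
define i32 @thunk(i32 %a, ...) {
  %r = musttail call i32 (i32, ...)* @target(i32 %a, ...)
  ret i32 %r
}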
Reviewers: chandlerc, nlewycky
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D4892
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216423 91177308-0d34-0410-b5e6-96231b3b80d8
Adding, removing, or changing non-pack parameters can change the ABI
classification of pack parameters. Clang and other frontends encode the
classification in the IR of the call site, but the callee side
determines it dynamically based on the number of registers consumed so
far. Changing the prototype in a way that affects the number of
registers consumed would break such code.
Dead argument elimination performs a similar task and already has a
similar check to avoid this problem.
Patch by Thomas Jablin!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216421 91177308-0d34-0410-b5e6-96231b3b80d8
GlobalDCE deletes global variables and resets their initializers to
nullptr, leaving the underlying constants to be cleaned up later by
their users. That cleanup may never happen; fix this by forcing it
whenever it is safe to destroy the constants.
Final patch by Rafael Espindola
http://reviews.llvm.org/D4931
<rdar://problem/17523868>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216390 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds support for recognizing division by a uniform power of 2 and modifies the cost table so that such divisions are vectorized whenever possible.
This updates the cost model for the Loop and SLP vectorizers. The cost table is currently only updated for the X86 backend.
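A minimal sketch of the pattern in question (illustrative only): a
division by a uniform power-of-2 splat, which lowers to a shift and is
now costed accordingly:
define <4 x i32> @udiv_pow2(<4 x i32> %a) {
  %d = udiv <4 x i32> %a, <i32 16, i32 16, i32 16, i32 16>
  ret <4 x i32> %d
}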
Thanks to Hal, Andrea, Sanjay for the review. (http://reviews.llvm.org/D4971)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216371 91177308-0d34-0410-b5e6-96231b3b80d8
CFE, with -O3, would turn:
bool f(unsigned x) {
  bool a = x & 1;
  bool b = x & 2;
  return a | b;
}
into:
%1 = lshr i32 %x, 1
%2 = or i32 %1, %x
%3 = and i32 %2, 1
%4 = icmp ne i32 %3, 0
This sort of thing exposes a nasty pathology in GCC, ICC and LLVM.
We would rather have:
%1 = and i32 %x, 3
%2 = icmp ne i32 %1, 0
Things get a bit more interesting in the following case:
%1 = lshr i32 %x, %y
%2 = or i32 %1, %x
%3 = and i32 %2, 1
%4 = icmp ne i32 %3, 0
Replacing it with the following sequence is better:
%1 = shl nuw i32 1, %y
%2 = or i32 %1, 1
%3 = and i32 %2, %x
%4 = icmp ne i32 %3, 0
This sequence is preferable because %1 doesn't involve %x and could
potentially be hoisted out of loops if it is invariant; only perform
this transform in the non-constant case if we know we won't increase
register pressure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216343 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Fixes PR20425.
During slice building, if all of the incoming values of a PHI node are the same, replace the PHI node with the common value. This simplification makes allocas used by PHI nodes easier to promote.
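A minimal sketch of the pattern (illustrative; see phi-and-select.ll
for the real tests): every incoming value of %p is the same alloca, so
%p can be replaced by %a, which then promotes cleanly:
define i32 @f(i1 %c) {
entry:
  %a = alloca i32
  store i32 0, i32* %a
  br i1 %c, label %t, label %m
t:
  br label %m
m:
  %p = phi i32* [ %a, %entry ], [ %a, %t ]
  %v = load i32* %p
  ret i32 %v
}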
Test Plan: Added three more tests in phi-and-select.ll
Reviewers: nlewycky, eliben, meheff, chandlerc
Reviewed By: chandlerc
Subscribers: zinovy.nis, hfinkel, baldrick, llvm-commits
Differential Revision: http://reviews.llvm.org/D4659
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216299 91177308-0d34-0410-b5e6-96231b3b80d8
Consider:
%add = add nuw i32 %a, -16777216
%and = and i32 %add, 255
Regardless of whether or not we demand the sign bit of %add, we cannot
replace -16777216 (0xFF000000) with 2130706432 (0x7F000000) without also
removing 'nuw' from the instruction: both constants are zero in the
demanded low byte, but swapping them changes when the add wraps.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216273 91177308-0d34-0410-b5e6-96231b3b80d8
Consider:
%add = add nsw i32 %a, -16777216
%and = and i32 %add, 255
Regardless of whether or not we demand the sign bit of %add, we cannot
replace -16777216 with 2130706432 without also removing 'nsw' from the
instruction.
This fixes PR20377.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216261 91177308-0d34-0410-b5e6-96231b3b80d8
In unreachable blocks it's legal to have instructions like "%x = op %x".
Such instructions are not schedulable. Therefore the SLPVectorizer has to check for
unreachable blocks and ignore them.
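For instance (a made-up sketch), this module is valid even though %x
defines itself, because the block is unreachable:
define void @f() {
entry:
  ret void
dead:                         ; unreachable
  %x = add i32 %x, 1          ; legal here, but not schedulable
  br label %dead
}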
Fixes bug 20646.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216256 91177308-0d34-0410-b5e6-96231b3b80d8
Given something like X01XX + X01XX, we know that the result must look
like X1XXX: the known 1s at bit 2 always generate a carry into the
known-0 bit 3, so bit 3 of the sum is known to be 1.
Adapted from a patch by Richard Smith, test-case written by me.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216250 91177308-0d34-0410-b5e6-96231b3b80d8
In this case, we are creating an x86_fp80 slice for a union from C where
the padding bytes may contain real data. An x86_fp80 alloca is 16 bytes,
and that's just fine. We can't, however, use regular loads and stores to
access the slice, because the store size is only 10 bytes / 80 bits.
Instead, use memcpy and memset.
Fixes PR18726.
Reviewed By: chandlerc
Differential Revision: http://reviews.llvm.org/D5012
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216248 91177308-0d34-0410-b5e6-96231b3b80d8
This went somewhat unnoticed in the original implementation of
discriminators, but using DILexicalBlock to track discriminator changes
could cause instructions to end up in new, small
DW_TAG_lexical_blocks.
Instead, use DILexicalBlockFile which we already use to track file
changes without introducing new scopes, so it works well to track
discriminator changes in the same way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216239 91177308-0d34-0410-b5e6-96231b3b80d8
Currently only "add nsw" instructions are widened. This patch eliminates tons of "sext" instructions for 64-bit code (and the corresponding target code) in cases like:
int N = 100;
float **A;
void foo(int x0, int x1)
{
  float *A_cur  = &A[0][0];
  float *A_next = &A[1][0];
  for (int x = x0; x < x1; ++x)
  {
    // Currently only the [x + N] case is widened; the other 2 cases lead to sext.
    // This patch fixes that, so none of the 3 cases needs a sext.
    const float div = A_cur[x + N] + A_cur[x - N] + A_cur[x * N];
    A_next[x] = div;
  }
}
...
> clang++ test.cpp -march=core-avx2 -Ofast -fno-unroll-loops -fno-tree-vectorize -S -o -
Differential Revision: http://reviews.llvm.org/D4695
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216160 91177308-0d34-0410-b5e6-96231b3b80d8
We can prove that a 'sub' can be a 'sub nuw' if the left-hand side is
negative and the right-hand side is non-negative.
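A hedged sketch of the reasoning: if the sign bit of %a is known set
(so unsigned %a >= 2^31) and the sign bit of %b is known clear (so
unsigned %b < 2^31), then %a - %b cannot wrap:
define i32 @f(i32 %x, i32 %y) {
  %a = or i32 %x, -2147483648   ; known negative
  %b = lshr i32 %y, 1           ; known non-negative
  %s = sub i32 %a, %b           ; provably nuw
  ret i32 %s
}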
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216045 91177308-0d34-0410-b5e6-96231b3b80d8
We can prove that a 'sub' can be a 'sub nsw' under certain conditions:
- The sign bits of the operands are the same.
- Both operands have more than 1 sign bit.
The subtraction cannot result in a signed overflow in either case.
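A sketch of the second condition (illustrative only): each ashr leaves
at least 2 sign bits, so both operands lie in [-2^30, 2^30 - 1] and
their difference fits in i32:
define i32 @g(i32 %x, i32 %y) {
  %a = ashr i32 %x, 1           ; more than 1 sign bit
  %b = ashr i32 %y, 1           ; more than 1 sign bit
  %s = sub i32 %a, %b           ; provably nsw
  ret i32 %s
}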
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216037 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, the hint mechanism relied on cleanup passes to remove redundant
metadata, which still showed up when running opt at low optimization levels.
It also turned out that multiple nodes of the same type, but with different
values, could coexist, even if only temporarily, and cause confusion if the
next pass got the wrong value.
This patch makes sure that, if metadata already exists in a loop, the hint
mechanism will never append a new node, but always replace the existing one.
It also restructures the algorithm so that coping with more metadata types in
the future only requires adding a new type, not a lot of code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215994 91177308-0d34-0410-b5e6-96231b3b80d8
- add a check for volatile (probably unneeded, but I agree that we should be conservative about it).
- strengthen the condition from isUnordered() to isSimple(): I don't understand Unordered semantics well enough to be confident in the previous behaviour, and isSimple() also matches the comment better (thanks for catching the Monotonic/Unordered case, I had missed it).
- split one condition in two.
- lengthen the comment about aliasing and loads.
- add tests in GVN/atomic.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215943 91177308-0d34-0410-b5e6-96231b3b80d8
While this might seem like an obvious canonicalization, there is one subtle problem with it. The result of the original expression
is undef when x is NaN (remember, fast math flags), but the result of the select is always defined when x is NaN. This means that the
new expression is strictly more defined than the original one. One unfortunate consequence of this is that the transform is not reversible!
It's always legal to increase the defined-ness of an expression, but it's not legal to reduce it. Thus, targets that prefer the original
form of the expression cannot reverse the transform to recover it. Another way to think of it is that the transform has lost source-level
information (the fast math flags), which is undesirable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215825 91177308-0d34-0410-b5e6-96231b3b80d8
We can combine a mul with a div if one of the operands is a multiple of
the other:
%mul = mul nsw nuw %a, C1
%ret = udiv %mul, C2
=>
%ret = mul nsw %a, (C1 / C2)
This can expose further optimization opportunities if we end up
multiplying or dividing by a power of 2.
Consider this small example:
define i32 @f(i32 %a) {
  %mul = mul nuw i32 %a, 14
  %div = udiv exact i32 %mul, 7
  ret i32 %div
}
which gets CodeGen'd to:
imull $14, %edi, %eax
imulq $613566757, %rax, %rcx
shrq $32, %rcx
subl %ecx, %eax
shrl %eax
addl %ecx, %eax
shrl $2, %eax
retq
We can now transform this into:
define i32 @f(i32 %a) {
  %shl = shl nuw i32 %a, 1
  ret i32 %shl
}
which gets CodeGen'd to:
leal (%rdi,%rdi), %eax
retq
This fixes PR20681.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215815 91177308-0d34-0410-b5e6-96231b3b80d8
When a call site with noalias metadata is inlined, that metadata can be
propagated directly to the inlined instructions (only to those that might
access memory, since it is not useful on the others). Prior to inlining, the noalias
metadata could express that a call would not alias with some other memory
access, which implies that no instruction within that called function would
alias. By propagating the metadata to the inlined instructions, we preserve
that knowledge.
This should complete the enhancements requested in PR20500.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215676 91177308-0d34-0410-b5e6-96231b3b80d8
When preserving noalias function parameter attributes by adding noalias
metadata in the inliner, we should do this for general function calls (not just
memory intrinsics). The logic is very similar to what already existed (except
that we want to add this metadata even for functions taking no relevant
parameters). This metadata can be used by ModRef queries in the caller after
inlining.
This addresses the first part of PR20500. Adding noalias metadata during
inlining is still turned off by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215657 91177308-0d34-0410-b5e6-96231b3b80d8
Vector instructions are (still) not supported for either integer or floating
point. Hopefully, that work will be landed shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215647 91177308-0d34-0410-b5e6-96231b3b80d8
v2: continue iterating through the rest of the bb
use for loop
v3: initialize FlattenCFG pass in ScalarOps
add test
v4: split off initializing flattencfg to a separate patch
add comment
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215574 91177308-0d34-0410-b5e6-96231b3b80d8
attribute and function argument attribute synthesizing and propagating.
As with the other uses of this attribute, the goal remains a best-effort
(no guarantees) attempt to not optimize the function or assume things
about the function when optimizing. This is particularly useful for
compiler testing, bisecting miscompiles, triaging things, etc. I was
hitting specific issues using optnone to isolate test code from a test
driver for my fuzz testing, and this is one step of fixing that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215538 91177308-0d34-0410-b5e6-96231b3b80d8