Multiplication by an integer with a number of trailing zero bits leaves
the same number of lower bits of the result initialized to zero.
This change makes MSan take this into account in the case of multiplication by
a compile-time constant.
We don't handle the general, non-constant, case because
(a) it's not going to be cheap (computation-wise);
(b) multiplication by a partially uninitialized value in user code is
a bad idea anyway.
The constant case must be handled because it arises from LLVM's optimization of
completely valid user code, as the test case in compiler-rt demonstrates.
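A minimal sketch of the pattern in LLVM IR (the names %x and %r are
illustrative, not from the patch):
; Multiplying by 8 (three trailing zero bits) forces the low three bits
; of the result to zero, so MSan can mark them initialized in the shadow
; even if the low bits of %x are uninitialized.
%r = mul i32 %x, 8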
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211092 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
As a starting step, we only use one simple heuristic: if the sign bits
of both a and b are zero, we can prove that "add a, b" cannot overflow
unsigned arithmetic, and thus convert it to "add nuw a, b".
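A minimal sketch of the heuristic in LLVM IR (illustrative values, not
from the patch):
%a = and i32 %x, 2147483647   ; sign bit of %a is known zero
%b = lshr i32 %y, 1           ; sign bit of %b is known zero
%r = add i32 %a, %b           ; safe to rewrite as: add nuw i32 %a, %b
Since both operands are at most 2^31 - 1, their sum is below 2^32 and
cannot wrap.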
Updated all affected tests and added two new tests (@zero_sign_bit and
@zero_sign_bit2) in AddOverflow.ll
Test Plan: make check-all
Reviewers: eliben, rafael, meheff, chandlerc
Reviewed By: chandlerc
Subscribers: chandlerc, llvm-commits
Differential Revision: http://reviews.llvm.org/D4144
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211084 91177308-0d34-0410-b5e6-96231b3b80d8
r199771 accidentally broke the logic that makes sure that SROA only splits
loads on byte boundaries. If such a split happens, some bits get lost
when reassembling loads of wider types, causing data corruption.
Move the width check up to reject such splits early, avoiding the
corruption. Fixes PR19250.
Patch by: Björn Steinbrink <bsteinbr@gmail.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211082 91177308-0d34-0410-b5e6-96231b3b80d8
[This is resubmitting r210721, which was reverted due to suspected breakage
which turned out to be unrelated].
Some extra review comments were addressed. See D4090 and D4147 for more details.
The Clang change that produces this metadata was committed in r210667
Patch by Mark Heffernan.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211076 91177308-0d34-0410-b5e6-96231b3b80d8
When LowerSwitch transforms a switch instruction into a tree of ifs, it
is actually performing a binary search over the various case ranges, to
see if the current value falls into one case's range of values.
So, if we have a program with something like this:
switch (a) {
case 0:
  do0();
  break;
case 1:
  do1();
  break;
case 2:
  do2();
  break;
default:
  break;
}
the code produced is something like this:
if (a < 1) {
  if (a == 0) {
    do0();
  }
} else {
  if (a < 2) {
    if (a == 1) {
      do1();
    }
  } else {
    if (a == 2) {
      do2();
    }
  }
}
This code is inefficient because the check (a == 1) guarding do1() is
not needed: having already established that (a >= 1) on the else path,
the additional check (a < 2) already implies (a == 1), without the need
of an extra basic block spawned just to verify it.
The patch addresses this problem by keeping track of already-checked
bounds in the LowerSwitch algorithm, so that when the time comes to
produce a leaf block that checks equality with the case value / range,
the algorithm can decide whether that block is really needed given the
bounds already checked.
For example, the above with "a = 1" would work like this:
the bounds start as LB: NONE, UB: NONE.
As (a < 1) is emitted, the bounds for the else path become LB: 1, UB:
NONE. This happens because by failing the test (a < 1) we know that the
value "a" cannot be smaller than 1 if we enter the else branch.
After emitting the check (a < 2), the bounds in the if branch become
LB: 1, UB: 1, because checking that "a" is smaller than 2 makes the
upper bound 2 - 1 = 1.
When it is time to emit the leaf block for "case 1:" we notice that 1
fits exactly between LB and UB, which means that if we arrived at that
block there is no need to emit a block that checks whether (a == 1).
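With the patch, the lowered tree for the example could then omit the
redundant leaf check and look roughly like this (a sketch of the
intended shape, not the literal output of the pass):
if (a < 1) {
  if (a == 0) {
    do0();
  }
} else {
  if (a < 2) {
    do1();   /* (a == 1) is implied by 1 <= a < 2 */
  } else {
    if (a == 2) {
      do2();
    }
  }
}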
Patch by: Marcello Maggioni <hayarms@gmail.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211038 91177308-0d34-0410-b5e6-96231b3b80d8
As a follow-up to r210375 which canonicalizes addrspacecast
instructions, this patch canonicalizes addrspacecast constant
expressions.
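For example, with a hypothetical global @g, the same rule as the
instruction-level canonicalization in r210375 applies at the
constant-expression level:
addrspacecast (float addrspace(1)* @g to i64 addrspace(2)*)
becomes
addrspacecast (i64 addrspace(1)* bitcast (float addrspace(1)* @g to i64 addrspace(1)*) to i64 addrspace(2)*)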
Given that clang uses ConstantExpr::getAddrSpaceCast to emit addrspacecast
constant expressions, this patch is also a step towards having the
frontend emit canonicalized addrspacecasts.
Piggyback a minor refactor in InstCombineCasts.cpp
Updated three affected tests (addrspacecast-alias.ll,
access-non-generic.ll and constant-fold-gep.ll) and added one new test in
constant-fold-address-space-pointer.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211004 91177308-0d34-0410-b5e6-96231b3b80d8
This patch moves the GlobalMerge pass from Transforms/Scalar
to CodeGen, because GlobalMerge depends on TargetMachine.
At the same time, the macro INITIALIZE_TM_PASS is moved
to CodeGen/Passes.h. With this fix we can avoid making
libScalarOpts depend on libCodeGen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210951 91177308-0d34-0410-b5e6-96231b3b80d8
Init-order and use-after-return modes can currently be enabled
by runtime flags; use-after-scope mode is not really working at the
moment.
The only problem I see is that users won't be able to disable the extra
instrumentation for init-order and use-after-scope with a top-level Clang
flag. But this instrumentation was implicitly enabled for quite a while,
and we didn't hear from users hurt by it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210924 91177308-0d34-0410-b5e6-96231b3b80d8
This commit adds a weak variant of the cmpxchg operation, as described
in C++11. A cmpxchg instruction with this modifier is permitted to
fail to store, even if the comparison indicated it should.
As a result, cmpxchg instructions must now return a flag indicating
success in addition to the iN value they originally loaded. Thus, for
uniformity *all* cmpxchg instructions now return "{ iN, i1 }". The
i1 flag is 1 when the store succeeded.
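For example, a weak exchange under the new scheme might look like this
(a sketch with illustrative names):
%pair = cmpxchg weak i32* %ptr, i32 %expected, i32 %new seq_cst seq_cst
%loaded = extractvalue { i32, i1 } %pair, 0   ; the iN value read from memory
%success = extractvalue { i32, i1 } %pair, 1  ; 1 iff the store happened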
At the DAG level, a new ATOMIC_CMP_SWAP_WITH_SUCCESS node has been
added as the natural representation for the new cmpxchg instructions.
It is a strong cmpxchg.
By default this gets Expanded to the existing ATOMIC_CMP_SWAP during
Legalization, so existing backends should see no change in behaviour.
If they wish to deal with the enhanced node instead, they can call
setOperationAction on it. Beware: as a node with 2 results, it cannot
be selected from TableGen.
Currently, no use is made of the extra information provided in this
patch. Test updates are almost entirely adapting the input IR to the
new scheme.
Summary for out of tree users:
------------------------------
+ Legacy Bitcode files are upgraded during read.
+ Legacy assembly IR files will be invalid.
+ Front-ends must adapt to different type for "cmpxchg".
+ Backends should be unaffected by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210903 91177308-0d34-0410-b5e6-96231b3b80d8
Enable value forwarding for loads from `calloc()` without an intervening
store.
This change extends GVN to handle the following case:
%1 = tail call noalias i8* @calloc(i64 1, i64 4)
%2 = bitcast i8* %1 to i32*
; This load is trivially constant zero
%3 = load i32* %2, align 4
This is analogous to the handling for `malloc()` in the same places.
`malloc()` returns `undef`; `calloc()` returns a zero value. Note that
it is correct to return zero even for out-of-bounds GEPs, since the
result of such a GEP would be undefined.
Patch by Philip Reames!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210828 91177308-0d34-0410-b5e6-96231b3b80d8
This is a minimal change to remove the header. I will remove the occurrences
of "using std::error_code" in a follow-up patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210803 91177308-0d34-0410-b5e6-96231b3b80d8
Pass initialization requires initializing TargetMachine for backend-specific
passes. This commit creates a new macro, INITIALIZE_TM_PASS, to
simplify this kind of initialization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210641 91177308-0d34-0410-b5e6-96231b3b80d8
This commit improves the global merge pass and adds support for global symbol
merging. Global symbol merging is not enabled by default. For AArch64, we need
some more backend fixes to make it really benefit ADRP CSE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210640 91177308-0d34-0410-b5e6-96231b3b80d8
never be true in a well-defined context. The check for null pointers
has been moved into the caller logic so it does not rely on undefined behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210497 91177308-0d34-0410-b5e6-96231b3b80d8
Instructions from __nodebug__ functions don't have file:line
information even when inlined into non-nodebug functions. As a result,
intrinsics (SSE and other) from the <*intrin.h> clang headers _never_
have file:line information.
With this change, an instruction without !dbg metadata gets one from
the call instruction when inlined.
Fixes PR19001.
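A sketch of the effect (the callee name and metadata id are illustrative):
; In the caller, the call site of the __nodebug__ function has a location:
call void @helper(), !dbg !42
; After inlining, an instruction from the callee that carried no !dbg of
; its own now inherits !42 from the call instruction.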
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210459 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a crash on MMX intrinsics, as well as a corner case in the handling
of all unsigned pack intrinsics.
PR19953.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210454 91177308-0d34-0410-b5e6-96231b3b80d8
For each array index that is in the form of zext(a), convert it to sext(a)
if we can prove zext(a) <= the maximum signed value of typeof(a). The
conversion helps to split zext(x + y) into sext(x) + sext(y).
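For instance, with an illustrative i4 value a = 0b0101 (5), which satisfies
zext(a) <= 7, the max signed value of i4:
zext a to i8 = 0b00000101 = sext a to i8
so the two casts agree and the conversion is safe.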
Reviewed in http://reviews.llvm.org/D4060
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210444 91177308-0d34-0410-b5e6-96231b3b80d8
zext(a + b) != zext(a) + zext(b) even if a + b >= 0 && b >= 0.
e.g., a = i4 0b1111, b = i4 0b0001
zext (a + b) to i8 = zext 0b0000 to i8 = 0b00000000
(zext a to i8) + (zext b to i8) = 0b00001111 + 0b00000001 = 0b00010000
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210439 91177308-0d34-0410-b5e6-96231b3b80d8
The messages were:
"PR19753: Optimize comparisons with "ashr exact" of a constant."
"Added support to optimize comparisons with "lshr exact" of a constant."
They were not correctly handling signed/unsigned operation differences,
causing PR19958.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210393 91177308-0d34-0410-b5e6-96231b3b80d8
addrspacecast X addrspace(M)* to Y addrspace(N)*
-->
bitcast X addrspace(M)* to Y addrspace(M)*
addrspacecast Y addrspace(M)* to Y addrspace(N)*
Update all affected tests and add several new tests in addrspacecast.ll.
This patch is based on http://reviews.llvm.org/D2186 (authored by Matt
Arsenault) with fixes and more tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210375 91177308-0d34-0410-b5e6-96231b3b80d8
If we have common uses on separate paths in the tree, process the one with greater common depth first.
This makes sure that we do not assume we need to extract a load when it is actually going to be part of a vectorized tree.
Review: http://reviews.llvm.org/D3800
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210310 91177308-0d34-0410-b5e6-96231b3b80d8
Most of the issues involve mishandling of s/zext.
Fixes:
1. When rebuilding new indices, s/zext should be distributed to
sub-expressions. E.g., sext(a +nsw (b +nsw 5)) = sext(a) + sext(b) + 5, not
sext(a + b) + 5. This also affects the logic of recursively looking for a
constant offset: we need to include s/zext in the context of the search.
2. The function find should return the bitwidth of the constant offset instead
of always sign-extending it to i64.
3. Stop shortcutting zext'ed GEP indices. LLVM conceptually sign-extends GEP
indices to pointer size before computing the address. Therefore, gep base,
zext(a + b) != gep base, a + b
Improvements:
1. Add an optimization for splitting sext(a + b): if a + b is provably
non-negative (e.g., used as an index of an inbounds GEP) and one of a, b is
non-negative, then sext(a + b) = sext(a) + sext(b); see the worked example
after this list.
2. The function Distributable checks whether both sext and zext can be
distributed to the operands of a binary operator. This helps us split
zext(sext(a + b)) into zext(sext(a)) + zext(sext(b)) when a + b does not
overflow, either signed or unsigned.
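To make improvement 1 concrete, a worked example with illustrative i4 values,
where a + b is non-negative and b is non-negative:
a = i4 0b0011 (3), b = i4 0b0010 (2)
sext (a + b) to i8 = sext 0b0101 to i8 = 0b00000101 (5)
(sext a to i8) + (sext b to i8) = 0b00000011 + 0b00000010 = 0b00000101 (5)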
Refactoring:
Merge some common logic of handling add/sub/or in find.
Testing:
Add many tests in split-gep.ll and split-gep-and-gvn.ll to verify the changes
we made.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210291 91177308-0d34-0410-b5e6-96231b3b80d8
As discussed in cfe commit r210279, the correct little-endian
semantics for the vec_perm Altivec interfaces are implemented by
reversing the order of the input vectors and complementing the permute
control vector. This converts the desired permute from little endian
element order into the big endian element order that the underlying
PowerPC vperm instruction uses. This is represented with a
ppc_altivec_vperm intrinsic function.
The instruction combining pass contains code to convert a
ppc_altivec_vperm intrinsic into a vector shuffle operation when the
intrinsic has a permute control vector (mask) that is a constant.
However, the vector shuffle operation assumes that vector elements are
in natural order for their endianness, so for little endian code we
will get the wrong result with the existing transformation.
This patch reverses the semantic change to vec_perm that was performed
in altivec.h by once again swapping the input operands and
complementing the permute control vector, returning the element
ordering to little endian.
The correctness of this code is tested by the new perm.c test added in
a previous patch, and by other tests in the test suite that fail
without this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210282 91177308-0d34-0410-b5e6-96231b3b80d8
It includes a pass that rewrites all indirect calls to jumptable functions to pass through these tables.
This also adds backend support for generating the jump-instruction tables on ARM and X86.
Note that since the jumptable attribute creates a second function pointer for a
function, any function marked with jumptable must also be marked with unnamed_addr.
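A minimal sketch of a function carrying the attribute (illustrative, and
marked unnamed_addr per the constraint above):
define i32 @f(i32 %x) unnamed_addr jumptable {
  ret i32 %x
}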
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210280 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements two things:
1. If we know one number is positive and another is negative, we return true,
as signed addition of two numbers of opposite signs will never overflow.
2. Implemented the TODO: if one of the operands has only one non-zero bit, and
the other operand has a known-zero bit in a more significant place than it
(not including the sign bit), the ripple may go up to and fill the zero, but
won't change the sign, e.g. (x & ~4) + 1.
We make sure that we are ignoring a 0 at the MSB.
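To make the second case concrete, a worked example with an illustrative i8
value x = 0b01111011:
x & ~4       = 0b01111011   (bit 2 is known zero)
(x & ~4) + 1 = 0b01111100
The carry from the single non-zero bit of the 1 ripples through bits 0 and 1
and is absorbed at the known-zero bit 2, so it can never reach the sign bit;
the addition cannot overflow signed arithmetic.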
Patch by Suyog Sarda.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210186 91177308-0d34-0410-b5e6-96231b3b80d8