intersecting bits. This triggers all over the place, for example in lencode,
with adds of operand pairs like:
%tmp580 = mul i32 %tmp579, 2
%tmp582 = and i32 %b8, 1
and
%tmp28 = shl i32 %abs.i, 1
%sign.0 = select i1 %tmp23, i32 1, i32 0
and
%tmp344 = shl i32 %tmp343, 2
%tmp346 = and i32 %tmp96, 3
etc.
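For illustration, here is a minimal sketch (hypothetical IR, not taken from
lencode) of the pattern: the left add operand has its low bit known zero and
the right operand can only have its low bit set, so the add cannot carry and
is equivalent to an or:
define i32 @add_to_or(i32 %a, i32 %b) {
  %hi = shl i32 %a, 1     ; low bit known zero
  %lo = and i32 %b, 1     ; only the low bit may be set
  %r = add i32 %hi, %lo   ; no intersecting bits: same as or i32 %hi, %lo
  ret i32 %r
}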
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51263 91177308-0d34-0410-b5e6-96231b3b80d8
is bitcast to return a floating point value. The result of the instruction may
not be used by the program afterwards, and LLVM will happily remove all
instructions except the call. But on some platforms, a value returned as
floating point may need to be popped off the stack (as with x87). Thus, we
can't get rid of the bitcast even if there isn't a use of the value.
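As an illustrative sketch (hypothetical IR, not from the original testcase),
consider a call through a bitcast function pointer whose floating-point
result is dead:
declare i64 @f()

define void @test() {
  ; %v is unused, but on x87 the double return value occupies ST(0)
  ; and must be popped, so the call must keep its floating-point
  ; return type rather than being rewritten through the bitcast.
  %v = call double bitcast (i64 ()* @f to double ()*)( )
  ret void
}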
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51134 91177308-0d34-0410-b5e6-96231b3b80d8
also need to be checked for memory-modifying instructions before we
can sink them. This fixes the second half of PR2297.
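As a sketch of why the check is needed (hypothetical IR): the store below may
write through an alias of %p, so the load cannot be sunk past it into the
block that uses it:
define i32 @no_sink(i32* %p, i32* %q, i1 %c) {
entry:
  %v = load i32* %p
  store i32 0, i32* %q    ; may modify *%p: blocks sinking the load
  br i1 %c, label %use, label %done
use:
  ret i32 %v
done:
  ret i32 0
}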
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50860 91177308-0d34-0410-b5e6-96231b3b80d8
ComputeMaskedBits knows about cttz, ctlz, and ctpop. Teach
SelectionDAG's ComputeMaskedBits what InstCombine's version knows
about SRem. And teach them both some things about high bits
in Mul, UDiv, URem, and Sub. This allows instcombine and
dagcombine to eliminate sign-extension operations in
several new cases.
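As a hedged illustration (hypothetical IR): udiv by 256 leaves the top eight
bits of the result zero, so the sign bit is known zero and the sext below can
be simplified to a zext or folded away:
define i64 @known_high(i32 %x) {
  %t = udiv i32 %x, 256     ; high 8 bits known zero
  %e = sext i32 %t to i64   ; sign bit known zero: equivalent to zext
  ret i64 %e
}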
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50358 91177308-0d34-0410-b5e6-96231b3b80d8
getelementptr-seteq.ll into:
define i1 @test(i64 %X, %S* %P) {
%C = icmp eq i64 %X, -1 ; <i1> [#uses=1]
ret i1 %C
}
instead of:
define i1 @test(i64 %X, %S* %P) {
%A.idx.mask = and i64 %X, 4611686018427387903 ; <i64> [#uses=1]
%C = icmp eq i64 %A.idx.mask, 4611686018427387903 ; <i1> [#uses=1]
ret i1 %C
}
And fixes the second half of PR2235. This speeds up the insertion sort
case by 45%, from 1.12s to 0.77s. In practice, this will significantly
speed up for-loops structured like:
for (double *P = Base + N; P != Base; --P)
  ...
which happens frequently with C++ iterators.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50079 91177308-0d34-0410-b5e6-96231b3b80d8
in addition to integer expressions. Rewrite GetOrEnforceKnownAlignment
as a ComputeMaskedBits problem, moving all of its special alignment
knowledge to ComputeMaskedBits as low-zero-bits knowledge.
Also, teach ComputeMaskedBits a few basic things about Mul and PHI
instructions.
This improves ComputeMaskedBits-based simplifications in a few cases,
but more noticeably it significantly improves instcombine's alignment
detection for loads, stores, and memory intrinsics.
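A minimal sketch of alignment as low-zero-bits knowledge (hypothetical IR):
if %base is 8-byte aligned, its low three bits are zero; the offset below is
a multiple of 8, so the sum keeps those bits zero and the load is known to be
8-byte aligned:
define double @aligned(i8* %base, i64 %i) {
  %off = mul i64 %i, 8                     ; low 3 bits known zero
  %p = getelementptr i8* %base, i64 %off   ; low zero bits preserved
  %d = bitcast i8* %p to double*
  %v = load double* %d
  ret double %v
}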
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49492 91177308-0d34-0410-b5e6-96231b3b80d8
simplify things like (X & 4) >> 1 == 2 --> (X & 4) == 4, since it is
obvious that the shift doesn't remove any bits.
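In IR terms, a hedged before/after sketch (hypothetical values):
define i1 @before(i32 %x) {
  %t = and i32 %x, 4
  %s = lshr i32 %t, 1      ; only shifts out a known-zero bit
  %c = icmp eq i32 %s, 2
  ret i1 %c
}

define i1 @after(i32 %x) {
  %t = and i32 %x, 4
  %c = icmp eq i32 %t, 4
  ret i1 %c
}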
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48631 91177308-0d34-0410-b5e6-96231b3b80d8
the type instead of the byte size. This was causing troublesome miscompilations.
True to form, this took 2 days to find and is a one-line fix. :-P
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48354 91177308-0d34-0410-b5e6-96231b3b80d8
was incorrectly simplifying "x == (gep x, 1, i)" into false, even
though i could be negative. As it turns out, all the code to
handle this already existed; we just needed to disable the incorrect
optimization case and let the general case handle it.
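A hedged sketch of why the old fold was wrong (hypothetical IR): with %x of
type [4 x i32]*, the gep address is %x + 16 + 4*i, which equals %x when i
is -4:
define i1 @may_be_true([4 x i32]* %x, i64 %i) {
  %p = getelementptr [4 x i32]* %x, i64 1, i64 %i
  %q = bitcast [4 x i32]* %x to i32*
  ; with %i == -4 the two addresses are equal, so folding the
  ; comparison to false would be incorrect
  %c = icmp eq i32* %q, %p
  ret i1 %c
}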
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46739 91177308-0d34-0410-b5e6-96231b3b80d8