This shouldn't happen in practice because the icmp would be a constant.
Add a check so we don't miscompile code if something goes wrong.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130446 91177308-0d34-0410-b5e6-96231b3b80d8
effective in avoiding recomputation of LCSSA form; the widespread
use of instsimplify (which looks through phi nodes) means it was
not preserving LCSSA form anyway; and instcombine is no longer
scheduled in the middle of the loop passes so this doesn't matter
anymore.
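To make the instsimplify point concrete: LCSSA form funnels every value
defined in a loop and used outside it through a single-entry phi in the
exit block, and instsimplify folds exactly such phis. A minimal sketch
(hypothetical names):

exit:
  %v.lcssa = phi i32 [ %v, %loop ]  ; single-entry phi required by LCSSA
  %use = add i32 %v.lcssa, 1        ; instsimplify folds %v.lcssa to %v,
                                    ; so users reference %v directly and
                                    ; LCSSA form is broken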
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130301 91177308-0d34-0410-b5e6-96231b3b80d8
when X has multiple uses. This is useful for exposing secondary optimizations,
but the X86 backend isn't ready for this when X has a single use. For example,
this can disable load folding.
This is inching towards resolving PR6627.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130238 91177308-0d34-0410-b5e6-96231b3b80d8
canonical, and generally leads to better code. Found while looking at
an article about saturating arithmetic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129545 91177308-0d34-0410-b5e6-96231b3b80d8
Now that we have a first-class way to represent unaligned loads, the unaligned
load intrinsics are superfluous.
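For example (a sketch; names hypothetical), an SSE unaligned load that was
previously written as

  %v = call <4 x float> @llvm.x86.sse.loadu.ps(i8* %p)

can now be expressed as an ordinary load with alignment 1:

  %q = bitcast i8* %p to <4 x float>*
  %v = load <4 x float>* %q, align 1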
First part of <rdar://problem/8460511>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129401 91177308-0d34-0410-b5e6-96231b3b80d8
space info. We would crash with an assert in this case. This change checks
that the address space of the bitcasted pointer matches that of the gep's
pointer operand.
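A sketch of the shape of the problem (hypothetical names and types,
assuming a bitcast that changes the pointer's address space):

  %cast = bitcast i8 addrspace(1)* %p to i8*
  %gep  = getelementptr i8* %cast, i32 4
  ; folding the gep onto %p directly would mix addrspace(1) and
  ; addrspace(0) pointers, which is what fired the assert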
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128884 91177308-0d34-0410-b5e6-96231b3b80d8
It's possible to craft an input that hits the recursion limits in a way
that SimplifyDemandedBits doesn't simplify the icmp but ComputeMaskedBits
can infer which bits are zero.
No test case as it depends on too many other things. Fixes PR9609.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128777 91177308-0d34-0410-b5e6-96231b3b80d8
- Localize the check for whether an icmp has one use to a place where we
know we're introducing something that's likely more expensive than a sext
from i1.
- Add an assert to make sure a case that would lead to a miscompilation is
folded away earlier.
- Fix a typo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128744 91177308-0d34-0410-b5e6-96231b3b80d8
removes one use of X, which helps it pass the many hasOneUse() checks.
In my analysis this turns up very often with X = A >>exact B, which can't be
simplified unless X has one use (except by increasing the lifetime of A,
which is generally a performance loss).
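To make that parenthetical concrete (a sketch; not necessarily the exact
fold this patch targets):

  %X = lshr exact i32 %A, 3
  %Y = shl i32 %X, 3   ; folds to %A: exact means no bits were shifted out
  ; if %Y is %X's only user, %X dies with it and the fold is a pure win;
  ; if %X has other users, the fold keeps %A live alongside %X, extending
  ; %A's lifetime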
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128373 91177308-0d34-0410-b5e6-96231b3b80d8
load and store reference the same memory location, the memory location
is represented by a getelementptr with two uses (the load and the store),
and the getelementptr's base is an alloca with a single use. At this point,
the instructions from the alloca to the store can be removed.
(This pattern is generated when a bitfield is accessed.)
For example,
%u = alloca %struct.test, align 4 ; [#uses=1]
%0 = getelementptr inbounds %struct.test* %u, i32 0, i32 0 ; [#uses=2]
%1 = load i8* %0, align 4 ; [#uses=1]
%2 = and i8 %1, -16 ; [#uses=1]
%3 = or i8 %2, 5 ; [#uses=1]
store i8 %3, i8* %0, align 4
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127565 91177308-0d34-0410-b5e6-96231b3b80d8
This happens a lot in clang-compiled C++ code because clang adds overflow checks to operator new[]:
unsigned *foo(unsigned n) { return new unsigned[n]; }
We can optimize away the overflow check on 64-bit targets because (uint64_t)n*4 cannot overflow.
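On such a target the relevant IR looks roughly like this (a sketch; names
hypothetical, and the branch on the overflow bit is elided):

  %size = zext i32 %n to i64
  %res  = call { i64, i1 } @llvm.umul.with.overflow.i64(i64 %size, i64 4)
  %ov   = extractvalue { i64, i1 } %res, 1
  ; both operands fit in 32 bits, so the 64-bit product is below 2^64;
  ; %ov is known false, and the check folds away leaving a plain
  ; multiply:  %bytes = mul i64 %size, 4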
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127418 91177308-0d34-0410-b5e6-96231b3b80d8