1. Teach GetConstantInType to handle boolean constants.
2. Teach instcombine to fold (compare X, CST) when X has known 0/1 bits.
Testcase here: set.ll:test22
3. Improve the "(X >> c1) & C2 == 0" folding code to allow a noop cast
between the shift and the and (illustrated below). More aggressive bit
folding done for other reasons was turning signed shr's into unsigned
shr's, leaving the noop cast in the way.
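A rough sketch of points 2 and 3 in the old IR syntax (hypothetical
functions, not the actual set.ll:test22 testcase):

bool %known_bits_compare(int %A) {
        %B = and int %A, 1              ; every bit of %B except bit 0 is known zero
        %C = setgt int %B, 1            ; %B is 0 or 1, so this folds to false
        ret bool %C
}

bool %shr_cast_and(uint %A) {
        %B = shr uint %A, ubyte 30      ; only the low two bits of %B can be set
        %C = cast uint %B to int        ; the noop cast no longer blocks the fold
        %D = and int %C, 4              ; bit 2 of %C is known zero
        %E = seteq int %D, 0            ; folds to true
        ret bool %E
}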
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26131 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to simplify in cases where bits are not known, but are not
demanded either! (An example follows below.) This also fixes a couple of
bugs in ComputeMaskedBits that were exposed during this work.
In the future, swaths of instcombine should be removed, as this code
subsumes a bunch of ad-hockery.
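A minimal sketch of what this enables (a hypothetical function, not an
actual testcase): the low bit of %A is unknown, but the final 'and' never
demands it, so the 'or' can simply be dropped.

ubyte %not_demanded(ubyte %A) {
        %B = or ubyte %A, 1             ; only touches bit 0, which nothing demands
        %C = and ubyte %B, 254          ; simplifies to: and ubyte %A, 254
        ret ubyte %C
}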
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26122 91177308-0d34-0410-b5e6-96231b3b80d8
uses of loop values outside the loop. We need loop-closed SSA form to do
this right, or to use SSA rewriting if we really care.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26089 91177308-0d34-0410-b5e6-96231b3b80d8
1. Teach it new tricks: in particular how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine to fold (AND X, C) when all of the bits of X selected
by C are known (sketched below).
This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.
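The fold in point 3, sketched with a hypothetical function (not the
bittest.ll testcase): the known-one bits produced by the 'or' survive the
sign-extending cast, so the 'and' folds to a constant.

int %known_and(short %A) {
        %B = or short %A, 15            ; the low four bits of %B are known one
        %C = cast short %B to int       ; sext: the known bits propagate through
        %D = and int %C, 15             ; every bit selected by 15 is known, folds to 15
        ret int %D
}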
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26087 91177308-0d34-0410-b5e6-96231b3b80d8
instruction onto the worklist (in case they are now dead).
Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:
struct S {
  unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
  S();
};
S::S() : a(0), b(0), c(1), d(0), e(6) {}
to this:
void %_ZN1SC1Ev(%struct.S* %this) {
entry:
        %tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
        store ubyte 38, ubyte* %tmp.1
        ret void
}
much earlier (in gccas instead of only in gccld after DSE runs).
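The trivial local DSE handles back-to-back overwrites like this
hypothetical case, deleting the first store because the second one clobbers
it with no intervening load:

void %dead_store(ubyte* %P) {
        store ubyte 6, ubyte* %P        ; dead: overwritten below before any load
        store ubyte 38, ubyte* %P
        ret void
}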
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26050 91177308-0d34-0410-b5e6-96231b3b80d8
is just as efficient as MVIZ and is also more general.
Fix a few minor bugs introduced in recent patches
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26036 91177308-0d34-0410-b5e6-96231b3b80d8
mask. This allows the code to be simpler and more efficient.
Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive.
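MVIZ is MaskedValueIsZero, which asks whether every bit of a value under a
given mask is known zero. One fold it drives, shown with a hypothetical
function (not necessarily one of the cases generalized here): the 'and'
below only masks bits that are already known zero, so it can be removed.

ubyte %redundant_and(ubyte %A) {
        %B = shr ubyte %A, ubyte 4      ; the high four bits of %B are known zero
        %C = and ubyte %B, 15           ; clears only bits that are already zero
        ret ubyte %C                    ; so %C is just %B
}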
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26035 91177308-0d34-0410-b5e6-96231b3b80d8
'demanded bits', inspired by Nate's work in the dag combiner. This isn't
complete, but needs two unrelated instcombiner changes to continue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26033 91177308-0d34-0410-b5e6-96231b3b80d8
Turn A / (C1 << N), where C1 is "1<<C2" into A >> (N+C2) [udiv only].
Tested with: rem.ll:test5, div.ll:test10
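A sketch with C1 = 4 (so C2 = 2), using hypothetical functions rather than
the actual rem.ll/div.ll testcases: the unsigned divide by (4 << %N)
becomes a shift right by (%N + 2). This:

uint %udiv_shl(uint %A, ubyte %N) {
        %D = shl uint 4, ubyte %N       ; the divisor is (1 << 2) << %N
        %Q = div uint %A, %D            ; unsigned division in this IR
        ret uint %Q
}

becomes:

uint %udiv_shl(uint %A, ubyte %N) {
        %N2 = add ubyte %N, 2
        %Q = shr uint %A, ubyte %N2
        ret uint %Q
}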
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26003 91177308-0d34-0410-b5e6-96231b3b80d8
1. When rewriting code in outer loops, we would sometimes insert code into
inner loops even though it is invariant in that inner loop.
2. Notice that 4*(2+x) is 8+4*x and use that to simplify expressions.
This is a performance neutral change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25964 91177308-0d34-0410-b5e6-96231b3b80d8
the shifts.
This allows us to fold this (which is the 'integer add a constant' sequence
from cozmic's Scheme compiler):
int %x(uint %anf-temporary776) {
        %anf-temporary777 = shr uint %anf-temporary776, ubyte 1
        %anf-temporary800 = cast uint %anf-temporary777 to int
        %anf-temporary804 = shl int %anf-temporary800, ubyte 1
        %anf-temporary805 = add int %anf-temporary804, -2
        %anf-temporary806 = or int %anf-temporary805, 1
        ret int %anf-temporary806
}
into this:
int %x(uint %anf-temporary776) {
        %anf-temporary776 = cast uint %anf-temporary776 to int
        %anf-temporary776.mask1 = add int %anf-temporary776, -2
        %anf-temporary805 = or int %anf-temporary776.mask1, 1
        ret int %anf-temporary805
}
Note that instcombine already knew how to eliminate the AND that the two
shifts fold into. This is tested by InstCombine/shift.ll:test26.
-Chris
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25128 91177308-0d34-0410-b5e6-96231b3b80d8