Make this code more powerful by using ComputeMaskedBits instead of looking
for an AND operand. This lets us fold this:
int %test23(int %a) {
        %tmp.1 = and int %a, 1
        %tmp.2 = seteq int %tmp.1, 0
        %tmp.3 = cast bool %tmp.2 to int   ;; xor tmp1, 1
        ret int %tmp.3
}
into: xor (and a, 1), 1
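As a sanity check on the identity the fold relies on (once ComputeMaskedBits
proves that only the low bit of %tmp.1 can be set, the compare-and-cast is the
same as an xor with 1), a tiny C++ test, purely illustrative:

#include <cassert>

int main() {
  for (int a = -8; a < 8; ++a) {
    int tmp1 = a & 1;                     // only the low bit can be set
    int viaCompare = (tmp1 == 0) ? 1 : 0; // seteq, then cast bool to int
    int viaXor = tmp1 ^ 1;                // the folded form
    assert(viaCompare == viaXor);
  }
  return 0;
}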
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26396 91177308-0d34-0410-b5e6-96231b3b80d8
Fix a problem that caused SPASS to fail to build last night.
We can't trivially unswitch a loop if the exit block has phi nodes in it,
because we don't know which predecessor to use.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26320 91177308-0d34-0410-b5e6-96231b3b80d8
Update ComputeMaskedBits to match the new improved versions in instcombine.
Tested against all of multisource/benchmarks on ppc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26238 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a testcase that Nate reduced from SPASS.
Also included are a couple of minor code changes that don't affect the
generated code at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26235 91177308-0d34-0410-b5e6-96231b3b80d8
Unswitch this loop on 2 before sweating to unswitch on 1/3:
void test4(int N, int i, int C, int *P, int *Q) {
  int j;
  for (j = 0; j < N; ++j) {
    switch (C) {               // general unswitching
    default: P[i+j] = 0; break;
    case 1:  Q[i+j] = 0; break;
    case 3:  P[i+j] = Q[i+j]; break;
    case 2:  break;            // TRIVIAL UNSWITCH on C==2
    }
  }
}
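Roughly, the trivial unswitch on C == 2 tests that one case outside the loop
without duplicating the body; a hand-written sketch of the intended result
(invented function name, not compiler output), assuming the empty C == 2 copy
of the loop is then deleted as dead:

void test4_unswitched(int N, int i, int C, int *P, int *Q) {
  int j;
  if (C == 2)
    return;  // case 2 did nothing, so the whole loop is a no-op
  for (j = 0; j < N; ++j) {
    switch (C) {  // case 2 is unreachable here
    default: P[i+j] = 0; break;
    case 1:  Q[i+j] = 0; break;
    case 3:  P[i+j] = Q[i+j]; break;
    }
  }
}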
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26223 91177308-0d34-0410-b5e6-96231b3b80d8
For example, we can now trivially unswitch this:
for (j = 0; j < N; ++j) {   // trivial unswitch
  if (C)
    P[i+j] = 0;
}
turning it into the obvious code without bothering to duplicate an empty loop.
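That obvious code, sketched by hand (invented function name; the empty loop
for the false side is simply never emitted):

void trivial_unswitched(int N, int i, int C, int *P) {
  int j;
  if (C)
    for (j = 0; j < N; ++j)
      P[i+j] = 0;
}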
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26220 91177308-0d34-0410-b5e6-96231b3b80d8
1. Teach GetConstantInType to handle boolean constants.
2. Teach instcombine to fold (compare X, CST) when X has known 0/1 bits.
Testcase here: set.ll:test22
3. Improve the "(X >> c1) & C2 == 0" folding code to allow a noop cast
   between the shift and the and. More aggressive bitfolding for other reasons
   was turning signed shr's into unsigned shr's, leaving the noop cast in
   the way.
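For point 2, the fold only needs a conflict between the constant and the known
bits. A toy version of the decision (invented names, C++17; not the actual
instcombine code):

#include <cstdint>
#include <optional>

// KnownZero/KnownOne: bits of X proven to be 0/1. Decide X == CST if possible.
std::optional<bool> foldSetEQ(uint32_t KnownZero, uint32_t KnownOne,
                              uint32_t CST) {
  // CST sets a bit known zero, or clears a bit known one: never equal.
  if ((CST & KnownZero) != 0 || (~CST & KnownOne) != 0)
    return false;
  // Every bit is known and none conflicted: always equal.
  if ((KnownZero | KnownOne) == 0xFFFFFFFFu)
    return true;
  return std::nullopt;  // not enough known bits to fold
}

seteq folds to this result, and setne to its negation.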
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26131 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to simplify in cases where bits are not known, but they
are not demanded either! This also fixes a couple of bugs in
ComputeMaskedBits that were exposed during this work.
In the future, swaths of instcombine should be removed, as this code
subsumes a bunch of ad-hockery.
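A concrete instance of "not known, but not demanded": below, the final mask
only demands the low four bits, and a & 0xF0F0 is known zero in exactly those
bits, so it drops out even though its value is otherwise unknown. An invented
example, exhaustively checked over small inputs:

#include <cassert>
#include <cstdint>

uint32_t before(uint32_t a, uint32_t b) {
  return ((a & 0xF0F0u) | b) & 0x000Fu;
}

uint32_t after(uint32_t b) {  // 'a' contributed nothing to the demanded bits
  return b & 0x000Fu;
}

int main() {
  for (uint32_t a = 0; a < 0x10000u; a += 0xFFu)
    for (uint32_t b = 0; b < 0x100u; ++b)
      assert(before(a, b) == after(b));
  return 0;
}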
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26122 91177308-0d34-0410-b5e6-96231b3b80d8
uses of loop values outside the loop. We need loop-closed SSA form to do
this right, or to use SSA rewriting if we really care.
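For reference, a loop value used outside the loop looks like this (hypothetical
snippet); in loop-closed SSA form the outside use would go through a phi node
in the exit block, which is what makes it safe to update:

int last_element(const int *P, int N) {
  int last = 0;
  for (int j = 0; j < N; ++j)
    last = P[j];  // 'last' is defined inside the loop...
  return last;    // ...but used outside it
}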
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26089 91177308-0d34-0410-b5e6-96231b3b80d8
1. Teach it new tricks: in particular how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine to fold (AND X, C) when we know all of the bits of X that C selects.
This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.
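Point 1's sext rule falls out of point 2's representation: if the sign bit of
the narrow value is known, every extended bit is known too. A toy version
(invented types and names, not the real ComputeMaskedBits interface):

#include <cstdint>

struct KnownBits32 {
  uint32_t Zero;  // bit set => that bit of the value is known 0
  uint32_t One;   // bit set => that bit of the value is known 1
};

// Propagate known bits through a sign extension from 16 to 32 bits.
KnownBits32 sextKnownBits16To32(KnownBits32 Narrow) {
  const uint32_t SignBit = 1u << 15;
  const uint32_t HighHalf = 0xFFFF0000u;
  KnownBits32 Wide = Narrow;
  if (Narrow.Zero & SignBit)
    Wide.Zero |= HighHalf;  // sign known 0: high bits all 0
  else if (Narrow.One & SignBit)
    Wide.One |= HighHalf;   // sign known 1: high bits all 1
  return Wide;
}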
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26087 91177308-0d34-0410-b5e6-96231b3b80d8
When removing an instruction, put its operands onto the worklist (in case they are now dead).
Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:
struct S {
  unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
  S();
};
S::S() : a(0), b(0), c(1), d(0), e(6) {}
to this:
void %_ZN1SC1Ev(%struct.S* %this) {
entry:
        %tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
        store ubyte 38, ubyte* %tmp.1
        ret void
}
much earlier (in gccas instead of only in gccld after DSE runs).
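The "really trivial" local DSE only needs to notice, within one block, a store
immediately followed by another store to the same address. A toy model
(invented representation; the real pass inspects LLVM instructions, and equal
addresses here stand in for must-alias):

#include <string>
#include <vector>

struct MemOp {
  bool IsStore;
  std::string Addr;
  int Value;
};

// Drop a store when the very next memory operation overwrites the same
// address: the first store can never be observed.
std::vector<MemOp> trivialLocalDSE(const std::vector<MemOp> &Block) {
  std::vector<MemOp> Out;
  for (const MemOp &Op : Block) {
    if (Op.IsStore && !Out.empty() && Out.back().IsStore &&
        Out.back().Addr == Op.Addr)
      Out.pop_back();  // the previous store to this address is dead
    Out.push_back(Op);
  }
  return Out;
}

Presumably this is what lets the constructor above collapse: once instcombine
folds each bitfield read-modify-write down to a constant store, what remains
is a chain of stores to the same byte, and all but the last one die.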
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26050 91177308-0d34-0410-b5e6-96231b3b80d8