* Properly compile this:
    struct a {};
    int test() {
      struct a b[2];
      if (&b[0] != &b[1])
        abort ();
      return 0;
    }
to 'return 0', not abort().
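Since struct a has size zero here, &b[0] and &b[1] compute the same
address; the pointer inequality folds to false, the abort() becomes dead
code, and the function reduces to (a sketch, in the IR syntax of the day):

    int %test() {
        ret int 0
    }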
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19875 91177308-0d34-0410-b5e6-96231b3b80d8
The second folds operations into selects, e.g. (select C, (X+Y), (Y+Z))
-> (Y+(select C, X, Z)); a sketch follows the counts below.
This occurs a few times across spec, e.g.:

               select   add/sub
    mesa:        83        0
    povray:       5        2
    gcc:          4        2
    parser:       0       22
    perlbmk:     13       30
    twolf:        0        3
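A sketch of the fold in IR, with illustrative names:

    %lhs = add int %X, %Y
    %rhs = add int %Y, %Z
    %R = select bool %C, int %lhs, int %rhs

becomes:

    %XZ = select bool %C, int %X, int %Z
    %R = add int %Y, %XZ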
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19706 91177308-0d34-0410-b5e6-96231b3b80d8
Disable the xform for < > cases. It turns out that the following is being
miscompiled:
    bool %test(sbyte %S) {
        %T = cast sbyte %S to uint
        %V = setgt uint %T, 255
        ret bool %V
    }

(The cast from sbyte sign-extends, so for %S = -1, %T is 4294967295 and
%V must be true; the xform was incorrectly folding the comparison away.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19628 91177308-0d34-0410-b5e6-96231b3b80d8
* We can now fold cast instructions into select instructions that
have at least one constant operand.
* We now optimize expressions more aggressively based on bits that are
known to be zero. These optimizations occur a lot in code that uses
bitfields even in simple ways.
* We now turn more cast-cast sequences into AND instructions. Before we
would only do this if all types were unsigned. Now only the middle type
needs to be unsigned (guaranteeing a zero extend); see the sketch after
the test list.
* We transform sign extensions into zero extensions in several cases.
This corresponds to these test/Regression/Transforms/InstCombine testcases:
    2004-11-22-Missed-and-fold.ll
    and.ll: test28-29
    cast.ll: test21-24
    and-or-and.ll
    cast-cast-to-and.ll
    zeroext-and-reduce.ll
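As an example of the cast-cast case (a sketch; only the middle type is
unsigned):

    %tmp = cast int %X to ubyte
    %R = cast ubyte %tmp to int

becomes:

    %R = and int %X, 255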
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19220 91177308-0d34-0410-b5e6-96231b3b80d8
Sink an instruction whose only uses are in a single successor block down
into that block. This turns cases like this:
    x = a op b
    if (c) {
        use x
    }

into:

    if (c) {
        x = a op b
        use x
    }
This triggers 3965 times in spec, and is tested by
Regression/Transforms/InstCombine/sink_instruction.ll
This appears to expose a bug in the X86 backend for 177.mesa, which I'm
looking into.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@18677 91177308-0d34-0410-b5e6-96231b3b80d8
* Make sure we handle signed to unsigned conversion correctly
* Move this visitSetCondInst case to its own method.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@18312 91177308-0d34-0410-b5e6-96231b3b80d8
If this happens, detect it early instead of relying on instcombine to notice
it later. This can be a big speedup, because PHI nodes can have many
incoming values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17741 91177308-0d34-0410-b5e6-96231b3b80d8
This exposes subsequent optimization possibilities and reduces code size.
This triggers 1423 times in spec.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17740 91177308-0d34-0410-b5e6-96231b3b80d8
Fold pointer equality tests between two distinct allocas:

    %X = alloca ...
    %Y = alloca ...
    X == Y
into false. This allows us to simplify some stuff in eon (and probably
many other C++ programs) where operator= was checking for self assignment.
Folding this allows us to SROA several additional structs.
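A minimal sketch with concrete types:

    %X = alloca int
    %Y = alloca int
    %c = seteq int* %X, %Y

Two distinct allocas can never have the same address, so %c folds to the
constant false.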
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17735 91177308-0d34-0410-b5e6-96231b3b80d8
Handle (X * C1) + (X * C2) -> X * (C1 + C2) (where * can be mul or shl),
allowing us to fold:
    Y+Y+Y+Y+Y+Y+Y+Y
into:
    %tmp.8 = shl long %Y, ubyte 3             ; <long> [#uses=1]
instead of:
    %tmp.4 = shl long %Y, ubyte 2             ; <long> [#uses=1]
    %tmp.12 = shl long %Y, ubyte 2            ; <long> [#uses=1]
    %tmp.8 = add long %tmp.4, %tmp.12         ; <long> [#uses=1]
This implements add.ll:test25
Also add support for (X*C1)-(X*C2) -> X*(C1-C2), implementing sub.ll:test18
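A sketch of the sub case, with illustrative constants:

    %a = mul int %X, 9
    %b = shl int %X, ubyte 2        ; X*4
    %r = sub int %a, %b

becomes:

    %r = mul int %X, 5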
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17704 91177308-0d34-0410-b5e6-96231b3b80d8
* SubOne/AddOne functions always return ConstantInt, declare them as such
* Pull the code for handling 'setcc X, cst', where cst is at the end of
the range or the cc is LE or GE, up earlier in visitSetCondInst. This
reduces the number of iterations in some cases.
* Fold: (div X, C1) op C2 -> range check (sketched below), implementing
div.ll:test6 - test9.
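A sketch of the div range check, with illustrative constants: X/10 == 3
exactly when X is in [30, 40), so

    %t = div int %X, 10
    %r = seteq int %t, 3

becomes an unsigned range test:

    %s = add int %X, -30
    %u = cast int %s to uint
    %r = setlt uint %u, 10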
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16588 91177308-0d34-0410-b5e6-96231b3b80d8
This takes something like this:
    %A = phi int [ 3, %cond_false.0 ], [ 2, %endif.0.i ], [ 2, %endif.1.i ]
    %B = div int %A, 4
and turns it into:
    %B = phi int [ 3/4, %cond_false.0 ], [ 2/4, %endif.0.i ], [ 2/4, %endif.1.i ]
which is later simplified (in this case) into %B = 0.
This triggers thousands of times in spec, for example, 269 times in 176.gcc.
This is tested by InstCombine/add.ll:test23 and set.ll:test18.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16582 91177308-0d34-0410-b5e6-96231b3b80d8
Instcombine (setcc (truncate X), C1).
This occurs THOUSANDS of times in many benchmarks. Particularly common
are things like (seteq (cast bool X to int), int 0).
This turns it into (seteq bool %X, false), which then becomes (not %X).
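In IR terms (a sketch):

    %i = cast bool %X to int
    %r = seteq int %i, 0

becomes:

    %r = seteq bool %X, false

which then simplifies to:

    %r = xor bool %X, true          ; i.e. not %X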
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16567 91177308-0d34-0410-b5e6-96231b3b80d8
This is important for several reasons:
1. Benchmarks have lots of code that looks like this (perlbmk in particular):
    %tmp.2.i = setne int %tmp.0.i, 128            ; <bool> [#uses=1]
    %tmp.6343 = seteq int %tmp.0.i, 1             ; <bool> [#uses=1]
    %tmp.63 = and bool %tmp.2.i, %tmp.6343        ; <bool> [#uses=1]
we now fold away the setne, a clear improvement.
2. In the more important cases, such as (X >= 10) & (X < 20), we now
produce smaller code: (X-10) < 10 (see the IR sketch at the end of this
message).
3. Perhaps the nicest effect of this patch is that it really helps out the
code generators. In particular, for a 'range test' like the above,
instead of generating this on X86 (the difference on PPC is even more
pronounced):
    cmp %EAX, 50
    setge %CL
    cmp %EAX, 100
    setl %AL
    and %CL, %AL
    cmp %CL, 0
we now generate this:
    add %EAX, -50
    cmp %EAX, 50
Furthermore, this causes setcc's to be folded into branches more often.
These combinations trigger dozens of times in the spec benchmarks, particularly
in 176.gcc, 186.crafty, 253.perlbmk, 254.gap, & 099.go.
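A sketch of the IR-level fold for the range test in case 2:

    %a = setge int %X, 10
    %b = setlt int %X, 20
    %c = and bool %a, %b

becomes:

    %t = add int %X, -10
    %u = cast int %t to uint
    %c = setlt uint %u, 10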
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16559 91177308-0d34-0410-b5e6-96231b3b80d8
Implement (setcc (shl X, C1), C2) folding.
The second one occurs several dozen times in spec. The first was added
just in case. :)
These are tested by shift.ll:test2[12], and div.ll:test5
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16549 91177308-0d34-0410-b5e6-96231b3b80d8
This latent bug was exposed by recent changes, and is tested as:
llvm/test/Regression/Transforms/InstCombine/2004-09-28-BadShiftAndSetCC.llx
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16546 91177308-0d34-0410-b5e6-96231b3b80d8
Fix a miscompilation where we folded (X & 254) -> X < 1 instead of
X < 2. These were latent problems exposed by the latest patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16528 91177308-0d34-0410-b5e6-96231b3b80d8
This transformation triggers often, for example:
6x in povray, 1x in gzip, 279x in gcc, 1x in crafty, 8x in eon, 11x in
perlbmk, 362x in gap, 4x in vortex, 14x in m88ksim, 211x in 126.gcc,
1x in compress, 11x in ijpeg, and 4x in 147.vortex.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16521 91177308-0d34-0410-b5e6-96231b3b80d8
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16137 91177308-0d34-0410-b5e6-96231b3b80d8
The old code assumed that a constant on the RHS of a multiplication was
either an IntConstant or an FPConstant. It checked for an IntConstant
and then, if it did not find one, did a hard cast to an FPConstant.
That code would crash if the RHS were a ConstantExpr that was neither
an IntConstant nor an FPConstant. This version replaces the hard cast
with a dyn_cast. It behaves the same for IntConstants and FPConstants,
but does nothing, instead of crashing, for constant expressions.
The regression test for this change is 2004-07-27-ConstantExprMul.ll.
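A minimal example of the kind of IR that used to crash (a sketch; the
multiplier is a ConstantExpr, neither an integer nor an FP constant):

    %G = global int 0

    int %test(int %X) {
        %Y = mul int %X, cast (int* %G to int)
        ret int %Y
    }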
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15291 91177308-0d34-0410-b5e6-96231b3b80d8
* Test whether any bits are shifted out during the optimization. If so,
the fold is illegal, though it can be handled explicitly for
setne/seteq.
This fixes the miscompilation of 254.gap last night, which was a latent bug
exposed by other optimizer improvements.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15085 91177308-0d34-0410-b5e6-96231b3b80d8
actually care about. Someday when the cast instruction is gone, we can do
better here, but this will do for now. This implements
instcombine/cast.ll:test17/18 as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15018 91177308-0d34-0410-b5e6-96231b3b80d8
"load (cast foo)". This allows us to compile C++ code like this:
    class Bclass {
    public:
        virtual int operator()() { return 666; }
    };

    class Dclass : public Bclass {
    public:
        virtual int operator()() { return 667; }
    };

    int main(int argc, char** argv) {
        Dclass x;
        return x();
    }
Into this:
    int %main(int %argc, sbyte** %argv) {
    entry:
        call void %__main( )
        ret int 667
    }
Instead of this:
    int %main(int %argc, sbyte** %argv) {
    entry:
        %x = alloca "struct.std::bad_typeid"      ; <"struct.std::bad_typeid"*> [#uses=3]
        call void %__main( )
        %tmp.1.i.i = getelementptr "struct.std::bad_typeid"* %x, uint 0, uint 0, uint 0    ; <int (...)***> [#uses=1]
        store int (...)** getelementptr ([3 x int (...)*]* %vtable for Bclass, int 0, long 2), int (...)*** %tmp.1.i.i
        %tmp.3.i = getelementptr "struct.std::bad_typeid"* %x, int 0, uint 0, uint 0       ; <int (...)***> [#uses=1]
        store int (...)** getelementptr ([3 x int (...)*]* %vtable for Dclass, int 0, long 2), int (...)*** %tmp.3.i
        %tmp.5 = load int ("struct.std::bad_typeid"*)** cast (int (...)** getelementptr ([3 x int (...)*]* %vtable for Dclass, int 0, long 2) to int ("struct.std::bad_typeid"*)**)    ; <int ("struct.std::bad_typeid"*)*> [#uses=1]
        %tmp.6 = call int %tmp.5( "struct.std::bad_typeid"* %x )    ; <int> [#uses=1]
        ret int %tmp.6
    }
In other words, we now resolve the virtual function call.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14783 91177308-0d34-0410-b5e6-96231b3b80d8
Also, remove X % -1 = 0, because it's not true for unsigned values, and
the signed case is superseded by this new handling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14637 91177308-0d34-0410-b5e6-96231b3b80d8