is now theoretically feature-complete. It has not, however, been thoroughly
tested, and is still considered experimental.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28529 91177308-0d34-0410-b5e6-96231b3b80d8
the iterated Dominance Frontier of the loop-closure Phis. This is the
second phase of the LCSSA pass. The third phase (coming soon) will be to
update all uses of loop variables to use the loop-closure Phis instead.
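For reference, a loop-closure Phi is a single-entry phi placed in the loop's
exit block for each value live out of the loop; a minimal sketch (block and
value names hypothetical):
loop:
        %x = add int %i, 1
        br bool %cond, label %loop, label %exit
exit:
        %x.lcssa = phi int [ %x, %loop ]        ; loop-closure Phi for %x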
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28524 91177308-0d34-0410-b5e6-96231b3b80d8
makes it so that it constant folds instructions on the fly. This is good
for several reasons:
0. Many instructions are constant foldable after inlining, particularly if
inlining a call with constant arguments.
1. Without this, the inliner has to allocate memory for all of the instructions
that can be constant folded, and a subsequent pass then has to delete them.
This gets the job done without that extra work.
2. This makes the inliner *pass* a bit more aggressive: in particular, it
partially solves a phase-order issue where the inliner would inline lots
of code that folds away to nothing, but would still consider the resulting
function big because of code that is destined to disappear. Now that code
never exists.
This is the first part of a 2-step process. The second part will be smart
enough to see when this implicit constant folding propagates a constant into
a branch or switch instruction, making CFG edges dead.
This implements Transforms/Inline/inline_constprop.ll
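A sketch of the effect (function and value names hypothetical): given a
callee like
int %double(int %x) {
        %r = add int %x, %x
        ret int %r
}
inlining '%v = call int %double(int 21)' folds the cloned add to 'int 42'
on the fly, so the instruction never exists in the caller.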
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28521 91177308-0d34-0410-b5e6-96231b3b80d8
the program. This exposes more opportunities for the instcombiner, and implements
vec_shuffle.ll:test6
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28487 91177308-0d34-0410-b5e6-96231b3b80d8
When doing the initial pass of constant folding, if we get a constantexpr,
simplify the constant expression just as we would if the constant were folded
in the normal loop.
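A sketch of the kind of operand this affects (global and indices
hypothetical):
%p = getelementptr int* getelementptr ([10 x int]* %G, long 0, long 0), long 2
; the constantexpr operand is simplified immediately, as if folded in the
; normal loop, yielding:
%p = getelementptr [10 x int]* %G, long 0, long 2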
This fixes the missed-optimization regression in
Transforms/InstCombine/getelementptr.ll last night.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28224 91177308-0d34-0410-b5e6-96231b3b80d8
1. Implement InstCombine/deadcode.ll by not adding instructions in unreachable
blocks (due to constants in conditional branches/switches) to the worklist.
This causes them to be deleted before instcombine starts up, leading to
better optimization.
2. In the prepass over instructions, do trivial constprop/dce as we go. This
has the effect of improving the effectiveness of #1. In addition, it
*significantly* speeds up instcombine on test cases with large amounts of
constant folding code (for example, that produced by code specialization
or partial evaluation). In one example, it speeds up instcombine from
0.0589s to 0.0224s with a release build (a 2.6x speedup).
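A sketch of the kind of code the prepass now cleans up (instructions
hypothetical):
        %cond = seteq int 0, 1          ; trivially constant folded in the prepass
        br bool %cond, label %dead, label %live
; after folding, the branch target is known, so instructions in blocks
; reachable only via %dead are never added to the worklist and are deleted
; before instcombine proper starts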
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28215 91177308-0d34-0410-b5e6-96231b3b80d8
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
only apply when both casts really will cause code to be generated. If one or
both doesn't, then this xform doesn't remove a cast.
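A sketch of the case where the xform still applies, i.e. both casts are real
(types and names hypothetical):
%A.i = cast ubyte %A to int             ; generates a zero-extend
%B.i = cast ubyte %B to int             ; generates a zero-extend
%r = and int %A.i, %B.i
; becomes, trading two real casts for one:
%ab = and ubyte %A, %B
%r = cast ubyte %ab to int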
This fixes Transforms/InstCombine/2006-05-06-Infloop.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28141 91177308-0d34-0410-b5e6-96231b3b80d8
nondeterminism being bad) could cause some trivial missed optimizations (dead
phi nodes being left around for later passes to clean up).
With this, llvm-gcc4 now bootstraps and correctly compares. I don't know
why I never tried to do it before... :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27984 91177308-0d34-0410-b5e6-96231b3b80d8
Make the insert/extract elt -> shuffle code more aggressive.
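For example (vectors and mask hypothetical), an extractelement feeding an
insertelement:
%s = extractelement <4 x float> %A, uint 0
%v = insertelement <4 x float> %B, float %s, uint 2
can become a single shuffle; element 2 selects element 0 of %A, which is
index 4 in the concatenated operands:
%v = shufflevector <4 x float> %B, <4 x float> %A, <4 x uint> <uint 0, uint 1, uint 4, uint 3>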
This fixes CodeGen/PowerPC/vec_shuffle.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27728 91177308-0d34-0410-b5e6-96231b3b80d8
aggressive in some cases where LLVMGCC 4 is inserting casts for no reason.
This implements InstCombine/cast.ll:test27/28.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27620 91177308-0d34-0410-b5e6-96231b3b80d8
are visible to analysis as intrinsics. That is, make sure someone doesn't pass
free around by address in some struct (as happens in, say, 176.gcc).
This doesn't get rid of any indirect calls; it just ensures calls to free and
malloc are always direct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27560 91177308-0d34-0410-b5e6-96231b3b80d8
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
%tmp = cast <4 x int> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
into:
%tmp = cast <4 x uint> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27355 91177308-0d34-0410-b5e6-96231b3b80d8
%tmp = cast <4 x uint>* %testData to <4 x int>* ; <<4 x int>*> [#uses=1]
%tmp = load <4 x int>* %tmp ; <<4 x int>> [#uses=1]
to this:
%tmp = load <4 x uint>* %testData ; <<4 x uint>> [#uses=1]
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27353 91177308-0d34-0410-b5e6-96231b3b80d8
- Added more debugging info.
- Allow reuse of an IV of negative stride, e.g. a -4 stride == 2 * an IV of -2 stride.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26841 91177308-0d34-0410-b5e6-96231b3b80d8
stride. For a set of uses of an IV whose stride is a multiple of another
stride, do not insert a new IV expression. Rather, reuse the previous IV and
rewrite the uses as uses of the IV expression multiplied by the factor, e.g.:
  x = 0; ...; x += 1
  y = 0; ...; y += 4
then uses of y can be rewritten as uses of 4*x on x86.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26803 91177308-0d34-0410-b5e6-96231b3b80d8
the pointer is known to come from a global variable, an alloca, or a
malloc. This allows us to compile this:
P = malloc(28);
memset(P, 0, 28);
into explicit stores on PPC instead of a memset call.
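A sketch of the expansion (names and store widths hypothetical; the real code
picks widths based on target alignment):
%P = malloc [7 x int]                           ; 28 bytes
%P.0 = getelementptr [7 x int]* %P, int 0, int 0
store int 0, int* %P.0
%P.1 = getelementptr [7 x int]* %P, int 0, int 1
store int 0, int* %P.1
; ...and so on for the remaining five words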
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26577 91177308-0d34-0410-b5e6-96231b3b80d8
Transforms/InstCombine/vec_narrow.ll. This adds support for narrowing
extract_element(insertelement) as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26538 91177308-0d34-0410-b5e6-96231b3b80d8
Make this code more powerful by using ComputeMaskedBits instead of looking
for an AND operand. This lets us fold this:
int %test23(int %a) {
        %tmp.1 = and int %a, 1
        %tmp.2 = seteq int %tmp.1, 0
        %tmp.3 = cast bool %tmp.2 to int        ;; xor tmp1, 1
        ret int %tmp.3
}
into: xor (and a, 1), 1
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26396 91177308-0d34-0410-b5e6-96231b3b80d8
caused SPASS to fail building last night.
We can't trivially unswitch a loop if the exit block has phi nodes in it,
because we don't know which predecessor to use.
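A sketch of the blocking case (names hypothetical):
exit:           ; reached from two different blocks in the loop
        %v = phi int [ %x, %bb1 ], [ %y, %bb2 ]
Branching here directly from outside the loop would leave %v with no incoming
value for the new edge.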
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26320 91177308-0d34-0410-b5e6-96231b3b80d8
and ComputeMaskedBits to match the new improved versions in instcombine.
Tested against all of multisource/benchmarks on ppc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26238 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a testcase that Nate reduced from SPASS.
Also included are a couple minor code changes that don't affect the generated
code at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26235 91177308-0d34-0410-b5e6-96231b3b80d8
unswitch this loop on 2 before sweating to unswitch on 1/3.
void test4(int N, int i, int C, int *P, int *Q) {
  int j;
  for (j = 0; j < N; ++j) {
    switch (C) {              // general unswitching.
    default: P[i+j] = 0; break;
    case 1:  Q[i+j] = 0; break;
    case 3:  P[i+j] = Q[i+j]; break;
    case 2:  break;           // TRIVIAL UNSWITCH on C==2
    }
  }
}
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26223 91177308-0d34-0410-b5e6-96231b3b80d8
this for example:
for (j = 0; j < N; ++j) {   // trivial unswitch
  if (C)
    P[i+j] = 0;
}
turning it into the obvious code without bothering to duplicate an empty loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26220 91177308-0d34-0410-b5e6-96231b3b80d8
1. Teach GetConstantInType to handle boolean constants.
2. Teach instcombine to fold (compare X, CST) when X has known 0/1 bits.
Testcase here: set.ll:test22
3. Improve the "(X >> c1) & C2 == 0" folding code to allow a noop cast
between the shift and the and (see the sketch below). More aggressive
bitfolding for other reasons was turning signed shr's into unsigned shr's,
leaving the noop cast in the way.
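A sketch of the pattern point 3 now handles (names hypothetical); the
int-to-uint cast is a noop but used to block the fold:
%s = shr int %X, ubyte 7
%s.u = cast int %s to uint              ; noop cast left by earlier bitfolding
%m = and uint %s.u, 1
%c = seteq uint %m, 0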
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26131 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to simplify on conditions where bits are not known, so long as
they are not demanded either! This also fixes a couple of bugs in
ComputeMaskedBits that were exposed during this work.
In the future, swaths of instcombine should be removed, as this code
subsumes a bunch of ad-hockery.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26122 91177308-0d34-0410-b5e6-96231b3b80d8
uses of loop values outside the loop. We need loop-closed SSA form to do
this right, or to use SSA rewriting if we really care.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26089 91177308-0d34-0410-b5e6-96231b3b80d8
1. Teach it new tricks: in particular, how to propagate through signed shr's and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine to fold (AND X, C) when all of the bits of X covered by C
are known (see the sketch below).
This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.
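A sketch of point 3 (names hypothetical): the shift leaves the upper 30 bits
known zero, so every bit covered by the mask is known and the and folds away:
%t = shr uint %X, ubyte 30              ; only bits 0-1 can be set
%r = and uint %t, 3                     ; folds to just %t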
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26087 91177308-0d34-0410-b5e6-96231b3b80d8
instruction onto the worklist (in case they are now dead).
Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:
struct S {
  unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
  S();
};
S::S() : a(0), b(0), c(1), d(0), e(6) {}
to this:
void %_ZN1SC1Ev(%struct.S* %this) {
entry:
        %tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
        store ubyte 38, ubyte* %tmp.1
        ret void
}
much earlier (in gccas instead of only in gccld after DSE runs).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26050 91177308-0d34-0410-b5e6-96231b3b80d8
is just as efficient as MVIZ and is also more general.
Fix a few minor bugs introduced in recent patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26036 91177308-0d34-0410-b5e6-96231b3b80d8
mask. This allows the code to be simpler and more efficient.
Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26035 91177308-0d34-0410-b5e6-96231b3b80d8
'demanded bits', inspired by Nate's work in the dag combiner. This isn't
complete, but needs two unrelated instcombiner changes to continue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26033 91177308-0d34-0410-b5e6-96231b3b80d8
Turn A / (C1 << N), where C1 is "1<<C2" into A >> (N+C2) [udiv only].
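A sketch with C1 = 4, i.e. C2 = 2 (names hypothetical):
%d = shl uint 4, ubyte %N
%q = div uint %A, %d
; becomes:
%amt = add ubyte %N, 2
%q = shr uint %A, ubyte %amt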
Tested with: rem.ll:test5, div.ll:test10
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26003 91177308-0d34-0410-b5e6-96231b3b80d8
1. When rewriting code in outer loops, sometimes we would insert code into
inner loops that is invariant in that loop.
2. Notice that 4*(2+x) is 8+4*x and use that to simplify expressions.
This is a performance-neutral change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25964 91177308-0d34-0410-b5e6-96231b3b80d8
1. Do not statically construct a map when the program starts up; this
is expensive and cannot be optimized. Instead, create a list.
2. Do not insert entries for all functions in the module into a hashmap
that lives for the full life of the compiler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25512 91177308-0d34-0410-b5e6-96231b3b80d8
1. Use the varargs version of getOrInsertFunction to simplify code.
2. Remove an #include.
3. Reduce the number of #ifdef's.
4. Remove extraneous vertical whitespace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25508 91177308-0d34-0410-b5e6-96231b3b80d8
Don't do floor->floorf conversion if floorf is not available. This checks
the compiler's host, not its target, which is incorrect for cross-compilers.
Not sure that's important, as we don't build many cross-compilers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25456 91177308-0d34-0410-b5e6-96231b3b80d8
This patch is an incremental step towards supporting a flat symbol table.
It de-overloads the intrinsic functions by providing type-specific intrinsics
and arranging for automatic upgrading from the old overloaded name to
the new non-overloaded name. Specifically:
llvm.isunordered -> llvm.isunordered.f32, llvm.isunordered.f64
llvm.sqrt -> llvm.sqrt.f32, llvm.sqrt.f64
llvm.ctpop -> llvm.ctpop.i8, llvm.ctpop.i16, llvm.ctpop.i32, llvm.ctpop.i64
llvm.ctlz -> llvm.ctlz.i8, llvm.ctlz.i16, llvm.ctlz.i32, llvm.ctlz.i64
llvm.cttz -> llvm.cttz.i8, llvm.cttz.i16, llvm.cttz.i32, llvm.cttz.i64
New code should not use the overloaded intrinsic names. Warnings will be
emitted if they are used.
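For example (value names hypothetical), a call written against the old name
%c = call uint %llvm.ctpop(uint %x)
is upgraded to, and should now be written as:
%c = call uint %llvm.ctpop.i32(uint %x)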
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25366 91177308-0d34-0410-b5e6-96231b3b80d8
it doesn't contain any calls. This is a fairly common case for C++ code,
so it will probably speed up the inliner marginally in these cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25284 91177308-0d34-0410-b5e6-96231b3b80d8
function was not an alloca, we wouldn't check the entry block for any allocas,
leading to increased stack space in some cases. In practice, allocas are almost
always at the top of the block, so this was never noticed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25280 91177308-0d34-0410-b5e6-96231b3b80d8
the shifts.
This allows us to fold this (which is the 'integer add a constant' sequence
from cozmic's Scheme compiler):
int %x(uint %anf-temporary776) {
        %anf-temporary777 = shr uint %anf-temporary776, ubyte 1
        %anf-temporary800 = cast uint %anf-temporary777 to int
        %anf-temporary804 = shl int %anf-temporary800, ubyte 1
        %anf-temporary805 = add int %anf-temporary804, -2
        %anf-temporary806 = or int %anf-temporary805, 1
        ret int %anf-temporary806
}
into this:
int %x(uint %anf-temporary776) {
        %anf-temporary776 = cast uint %anf-temporary776 to int
        %anf-temporary776.mask1 = add int %anf-temporary776, -2
        %anf-temporary805 = or int %anf-temporary776.mask1, 1
        ret int %anf-temporary805
}
Note that instcombine already knew how to eliminate the AND that the two
shifts fold into. This is tested by InstCombine/shift.ll:test26.
-Chris
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25128 91177308-0d34-0410-b5e6-96231b3b80d8
a) use better local variable names (OldMT -> OldFT) where "M" is used to
mean "Function" (perhaps it was previously "Method"?)
b) print out the module identifier in a warning message so that it is
possible to track down in which module the error occurred.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24698 91177308-0d34-0410-b5e6-96231b3b80d8
186.crafty by about 16% (from 15.109s to 13.045s) on my system.
This turns allocas with unions/casts into scalars. For example crafty has
something like this:
union doub {
  unsigned short i[4];
  long long d;
};
int f(long long a) {
  return ((union doub){.d=a}).i[1];
}
Instead of generating loads and stores to an alloca, we now promote the
whole thing to a scalar long value.
This implements: Transforms/ScalarRepl/AggregatePromote.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24667 91177308-0d34-0410-b5e6-96231b3b80d8
The code is organized into 3 parts (2 passes):
1) A linked set of profiling passes, which implement an analysis group (linked, like alias analyses are). These insert profiling into the program and remember what they inserted, so that at a later time they can be queried about any instruction.
2) A pass that handles inserting the random-sampling framework. This also has options to control how random samples are chosen. Currently implemented are global counters, register-allocated global counters, and the read-cycle-counter (see? there was a reason for it).
The profiling passes are almost identical to the existing ones (block, function, and null profiling are supported right now), and they are valid passes without the sampling framework (hence the existing passes can be unified with the new ones; not done yet).
Some things are a bit ugly still, but that should be fixed up soon enough.
Other todo: make the counter values dynamically choosable instead of the magic 2^16 - 1.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24493 91177308-0d34-0410-b5e6-96231b3b80d8
has a single def. In this case, look for uses that are dominated by the def
and attempt to rewrite them to directly use the stored value.
This speeds up mem2reg on these values and reduces the number of phi nodes
inserted. This should address PR665.
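A sketch of the case this targets (names hypothetical):
entry:
        %A = alloca int
        store int %v, int* %A           ; the only store to %A
        ; ... no other stores to %A ...
        %x = load int* %A               ; dominated by the store
The load is rewritten to use %v directly, without inserting any phi nodes.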
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24411 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for specifying alignment and size of setjmp jmpbufs.
No targets currently do anything with this information, nor is it preserved
in the bytecode representation. That's coming up next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24196 91177308-0d34-0410-b5e6-96231b3b80d8
a few times in crafty:
OLD: %tmp.36 = div int %tmp.35, 8 ; <int> [#uses=1]
NEW: %tmp.36 = div uint %tmp.35, 8 ; <uint> [#uses=0]
OLD: %tmp.19 = div int %tmp.18, 8 ; <int> [#uses=1]
NEW: %tmp.19 = div uint %tmp.18, 8 ; <uint> [#uses=0]
OLD: %tmp.117 = div int %tmp.116, 8 ; <int> [#uses=1]
NEW: %tmp.117 = div uint %tmp.116, 8 ; <uint> [#uses=0]
OLD: %tmp.92 = div int %tmp.91, 8 ; <int> [#uses=1]
NEW: %tmp.92 = div uint %tmp.91, 8 ; <uint> [#uses=0]
Which all turn into shrs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24190 91177308-0d34-0410-b5e6-96231b3b80d8
8 times in vortex, allowing the srems to be turned into shrs:
OLD: %tmp.104 = rem int %tmp.5.i37, 16 ; <int> [#uses=1]
NEW: %tmp.104 = rem uint %tmp.5.i37, 16 ; <uint> [#uses=0]
OLD: %tmp.98 = rem int %tmp.5.i24, 16 ; <int> [#uses=1]
NEW: %tmp.98 = rem uint %tmp.5.i24, 16 ; <uint> [#uses=0]
OLD: %tmp.91 = rem int %tmp.5.i19, 8 ; <int> [#uses=1]
NEW: %tmp.91 = rem uint %tmp.5.i19, 8 ; <uint> [#uses=0]
OLD: %tmp.88 = rem int %tmp.5.i14, 8 ; <int> [#uses=1]
NEW: %tmp.88 = rem uint %tmp.5.i14, 8 ; <uint> [#uses=0]
OLD: %tmp.85 = rem int %tmp.5.i9, 1024 ; <int> [#uses=2]
NEW: %tmp.85 = rem uint %tmp.5.i9, 1024 ; <uint> [#uses=0]
OLD: %tmp.82 = rem int %tmp.5.i, 512 ; <int> [#uses=2]
NEW: %tmp.82 = rem uint %tmp.5.i1, 512 ; <uint> [#uses=0]
OLD: %tmp.48.i = rem int %tmp.5.i.i161, 4 ; <int> [#uses=1]
NEW: %tmp.48.i = rem uint %tmp.5.i.i161, 4 ; <uint> [#uses=0]
OLD: %tmp.20.i2 = rem int %tmp.5.i.i, 4 ; <int> [#uses=1]
NEW: %tmp.20.i2 = rem uint %tmp.5.i.i, 4 ; <uint> [#uses=0]
It also occurs 9 times in gcc, but with odd constant divisors (1009 and 61),
so the payoff isn't as great.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24189 91177308-0d34-0410-b5e6-96231b3b80d8