Implement the start of global ctor optimization. It is currently smart
enough to remove the global ctor for cases like this:
struct foo {
foo() {}
} x;
... saving a bit of startup time for the program.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23433 91177308-0d34-0410-b5e6-96231b3b80d8
not define a value that is used outside of its block. This catches many
more simplifications, e.g. 854 in 176.gcc, 137 in vpr, etc.
This implements branch-phi-thread.ll:test3
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23397 91177308-0d34-0410-b5e6-96231b3b80d8
control across branches with determined outcomes. More generality to follow.
This triggers a couple thousand times in specint.
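For illustration, a minimal C sketch of a determined-outcome branch
(hypothetical code, not from the patch):
void a(void), b(void);
void f(int x) {
  if (x > 10) a();
  if (x > 10) b();   /* on the path through a(), this test is known true,
                        so control can be threaded straight to b() */
}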
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23391 91177308-0d34-0410-b5e6-96231b3b80d8
This is useful for 178.galgel where resolution of dope vectors (by the
optimizer) causes the scales to become apparent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23328 91177308-0d34-0410-b5e6-96231b3b80d8
if () { store A -> P; } else { store B -> P; }
into a PHI node with one store, in the most trivial case. This implements
load.ll:test10.
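At the source level the effect is roughly this (a sketch, not the actual
testcase):
/* before */  if (c) *P = A; else *P = B;
/* after  */  *P = c ? A : B;   /* PHI of the two values, then one store */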
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23324 91177308-0d34-0410-b5e6-96231b3b80d8
load are exactly consecutive. This is picked up by other passes, but this
triggers thousands of times in fortran programs that use static locals
(and is thus a compile-time speedup).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23320 91177308-0d34-0410-b5e6-96231b3b80d8
code for IV uses outside of loops that are not dominated by the latch block.
We should only convert these uses to use the post-inc value if they ARE
dominated by the latch block.
Also use a new LoopInfo method to simplify some code.
This fixes Transforms/LoopStrengthReduce/2005-09-12-UsesOutOutsideOfLoop.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23318 91177308-0d34-0410-b5e6-96231b3b80d8
in building maximal expressions before simplifying them. In particular, in
cases like this:
X-(A+B+X)
the code would consider A+B+X to be a maximal expression (not understanding
that the single-use '-' would be turned into a '+' later), simplify it (a
no-op), and then simplify it again later.
Each of these simplify steps is where the cost of reassociation comes from,
so this patch should speed up the already fast pass a bit.
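A worked sketch of the case above, assuming the single-use '-' is lowered
into addition of negated operands as described:
X - (A + B + X)
=> X + (-A) + (-B) + (-X)   ;; maximal expression built before simplifying
=> (-A) + (-B)              ;; X and -X cancel in one simplification step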
Thanks to Dan for noticing this!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23214 91177308-0d34-0410-b5e6-96231b3b80d8
Do not claim to not change the CFG. We do change the CFG to split critical
edges. This isn't causing us a problem now, but could likely do so in the
future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22824 91177308-0d34-0410-b5e6-96231b3b80d8
loop, because an IV-dependent value was used outside of the loop and didn't
have immediate-folding capability
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22798 91177308-0d34-0410-b5e6-96231b3b80d8
a problem in LoopStrengthReduction, where it would split critical edges and
then confuse itself with outdated loop information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22776 91177308-0d34-0410-b5e6-96231b3b80d8
edge so that the code is not always executed for both operands. This
prevents LSR from inserting code into loops whose exit blocks contain
PHI uses of IV expressions (which are outside of loops). On gzip, for
example, we turn this ugly code:
.LBB_test_1: ; loopentry
add r27, r3, r28
lhz r27, 3(r27)
add r26, r4, r28
lhz r26, 3(r26)
add r25, r30, r28 ;; Only live if exiting the loop
add r24, r29, r28 ;; Only live if exiting the loop
cmpw cr0, r27, r26
bne .LBB_test_5 ; loopexit
into this:
.LBB_test_1: ; loopentry
or r27, r28, r28
add r28, r3, r27
lhz r28, 3(r28)
add r26, r4, r27
lhz r26, 3(r26)
cmpw cr0, r28, r26
beq .LBB_test_3 ; shortcirc_next.0
.LBB_test_2: ; loopentry.loopexit_crit_edge
add r2, r30, r27
add r8, r29, r27
b .LBB_test_9 ; loopexit
.LBB_test_3: ; shortcirc_next.0
...
blt .LBB_test_1
Next step: get the block out of the loop so that the loop is all
fall-throughs again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22766 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, just update the BB in-place. This is both faster, and it prevents
split-critical-edges from shuffling the PHI argument list unnecessarily.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22765 91177308-0d34-0410-b5e6-96231b3b80d8
into just Y. This often occurs when it separates loops that have collapsed loop
headers. This implements LoopSimplify/phi-node-simplify.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22746 91177308-0d34-0410-b5e6-96231b3b80d8
For code like this:
void foo(float *a, float *b, int n, int stride_a, int stride_b) {
int i;
for (i=0; i<n; i++)
a[i*stride_a] = b[i*stride_b];
}
we now emit:
.LBB_foo2_2: ; no_exit
lfs f0, 0(r4)
stfs f0, 0(r3)
addi r7, r7, 1
add r4, r2, r4
add r3, r6, r3
cmpw cr0, r7, r5
blt .LBB_foo2_2 ; no_exit
instead of:
.LBB_foo_2: ; no_exit
mullw r8, r2, r7 ;; multiply!
slwi r8, r8, 2
lfsx f0, r4, r8
mullw r8, r2, r6 ;; multiply!
slwi r8, r8, 2
stfsx f0, r3, r8
addi r2, r2, 1
cmpw cr0, r2, r5
blt .LBB_foo_2 ; no_exit
loops with variable strides occur pretty often. For example, in SPECFP2K
there are 317 variable strides in 177.mesa, 3 in 179.art, 14 in 188.ammp,
56 in 168.wupwise, 36 in 172.mgrid.
Now we can allow indvars to turn functions written like this:
void foo2(float *a, float *b, int n, int stride_a, int stride_b) {
int i, ai = 0, bi = 0;
for (i=0; i<n; i++)
{
a[ai] = b[bi];
ai += stride_a;
bi += stride_b;
}
}
into code like the above for better analysis. With this patch, they generate
identical code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22740 91177308-0d34-0410-b5e6-96231b3b80d8
first is a correctness fix, and the latter is an optimization. This is also
needed to support a future change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22720 91177308-0d34-0410-b5e6-96231b3b80d8
The termination condition actually wants to use the post-incremented value
of the loop, not a new indvar with an unusual base.
On PPC, for example, this allows us to compile
LoopStrengthReduce/exit_compare_live_range.ll to:
_foo:
li r2, 0
.LBB_foo_1: ; no_exit
li r5, 0
stw r5, 0(r3)
addi r2, r2, 1
cmpw cr0, r2, r4
bne .LBB_foo_1 ; no_exit
blr
instead of:
_foo:
li r2, 1 ;; IV starts at 1, not 0
.LBB_foo_1: ; no_exit
li r5, 0
stw r5, 0(r3)
addi r5, r2, 1
cmpw cr0, r2, r4
or r2, r5, r5 ;; Reg-reg copy, extra live range
bne .LBB_foo_1 ; no_exit
blr
This implements LoopStrengthReduce/exit_compare_live_range.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22699 91177308-0d34-0410-b5e6-96231b3b80d8
* Teach this code to move allocas out of the loop when tail call eliminating
a call marked 'tail'. This implements TailCallElim/move_alloca_for_tail_call.ll
* Do not perform this transformation if a call is marked 'tail' and if there
are allocas that we cannot move out of the loop in #2. Doing so would increase
the stack usage of the function. This fixes PR615 and implements
TailCallElim/dont-tce-tail-marked-call.ll.
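A minimal C sketch of the alloca situation (hypothetical function): once
the marked-tail recursion becomes a loop, buf must be hoisted to the entry
block so every iteration reuses one stack slot; the 'tail' marker is what
guarantees no callee ever touched the caller's stack, making the hoist safe:
int count(const char *s, int n) {
  char buf[8];
  buf[0] = *s;                 /* used only within this "frame" */
  if (buf[0] == 0) return n;
  return count(s + 1, n + 1);  /* tail call -> loop backedge */
}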
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22690 91177308-0d34-0410-b5e6-96231b3b80d8
BasicBlock's removePredecessor routine. This requires shuffling around
the definition and implementation of hasConstantValue from Utils.h,cpp into
Instructions.h,cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22664 91177308-0d34-0410-b5e6-96231b3b80d8
that the symbolic evaluator is not always able to use subtraction to remove
expressions. This makes the code faster, and fixes the last crash on 178.galgel.
Finally, add a statistic to see how many phi nodes are inserted.
On 178.galgel, we get the following stats:
2562 loop-reduce - Number of PHIs inserted
3927 loop-reduce - Number of GEPs strength reduced
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22662 91177308-0d34-0410-b5e6-96231b3b80d8
method.
* Fix a crash on 178.galgel, where we would insert expressions before PHI
nodes instead of into the PHI node predecessor blocks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22657 91177308-0d34-0410-b5e6-96231b3b80d8
for (i = 0; i < N; ++i)
A[i][foo()] = 0;
here we still want to strength reduce the A[i] part, even though foo() is
loop-variant.
This also simplifies some of the 'CanReduce' logic.
This implements Transforms/LoopStrengthReduce/ops_after_indvar.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22652 91177308-0d34-0410-b5e6-96231b3b80d8
1. We only analyze instructions once, guaranteed
2. AnalyzeGetElementPtrUsers has been ripped apart and replaced with
something much simpler.
The next step is to handle expressions that are not all indvar+loop-invariant
values (e.g. handling indvar+loopvariant).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22649 91177308-0d34-0410-b5e6-96231b3b80d8
Only emit one PHI node for IV uses with identical bases and strides (after
moving foldable immediates to the load/store instruction).
This implements LoopStrengthReduce/dont_insert_redundant_ops.ll, allowing
us to generate this PPC code for test1:
or r30, r3, r3
.LBB_test1_1: ; Loop
li r2, 0
stw r2, 0(r30)
stw r2, 4(r30)
bl L_pred$stub
addi r30, r30, 8
cmplwi cr0, r3, 0
bne .LBB_test1_1 ; Loop
instead of this code:
or r30, r3, r3
or r29, r3, r3
.LBB_test1_1: ; Loop
li r2, 0
stw r2, 0(r29)
stw r2, 4(r30)
bl L_pred$stub
addi r30, r30, 8 ;; Two iv's with step of 8
addi r29, r29, 8
cmplwi cr0, r3, 0
bne .LBB_test1_1 ; Loop
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22635 91177308-0d34-0410-b5e6-96231b3b80d8
unify some parallel vectors and get field names more descriptive than
"first" and "second". This isn't lisp afterall :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22633 91177308-0d34-0410-b5e6-96231b3b80d8
map from Instruction* to SCEVHandles. When we delete instructions, we have
to tell the map about it; otherwise we run into nasty cases where new
instructions are allocated at old instructions' addresses and pick up the
stale map values. Bad bad bad :(
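The hazard in miniature (a toy C illustration, not LLVM code):
#include <stdio.h>
#include <stdlib.h>
int main(void) {
  int *old = malloc(sizeof(int));
  void *key = old;               /* address used as a map key */
  free(old);                     /* ...but the map is never told */
  int *fresh = malloc(sizeof(int));
  if ((void *)fresh == key)      /* allocators frequently reuse addresses */
    printf("stale map entry would now match an unrelated object\n");
  free(fresh);
  return 0;
}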
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22632 91177308-0d34-0410-b5e6-96231b3b80d8
consideration the case where a reference in an unreachable block could
occur. This fixes Transforms/SimplifyCFG/2005-08-01-PHIUpdateFail.ll,
something I ran into while bugpoint'ing another pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22584 91177308-0d34-0410-b5e6-96231b3b80d8
SimplifyLibCalls probably has to be audited to make sure it does not make
this mistake elsewhere. Also, if this code knows that the type will be
unsigned, obviously one arm of this is dead.
Reid, can you take a look into this further?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22566 91177308-0d34-0410-b5e6-96231b3b80d8
target data to decide which loop induction variables to strength reduce
and how to do so. This work is mostly by Chris Lattner, with tweaks by
me to get it working on some of MultiSource.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22558 91177308-0d34-0410-b5e6-96231b3b80d8
Because instcombine has to scan the entire function when it starts up
to begin with, we might as well do it in DFO so we can nuke unreachable code.
This fixes: Transforms/InstCombine/2005-07-07-DeadPHILoop.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22348 91177308-0d34-0410-b5e6-96231b3b80d8
The optimization for locally used allocas was not safe for allocas that
were read before they were written. This change disables that optimization
in that case.
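A sketch of the unsafe pattern (hypothetical code): the alloca is read
before any store to it, so an optimization keyed off the store alone would
change the value observed at the load:
int f(void) {
  int x;       /* alloca with no initializing store yet */
  int y = x;   /* read-before-write: the optimization must be disabled */
  x = 1;
  return y;    /* must not become 1 */
}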
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22318 91177308-0d34-0410-b5e6-96231b3b80d8
is a mismatch in their character type pointers (i.e. fprintf() prints an
array of ubytes while fwrite() takes an array of sbytes).
We can probably do better than this (such as casting the ubyte to an
sbyte).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22310 91177308-0d34-0410-b5e6-96231b3b80d8
It is actually always true. This fixes PR586 and
Transforms/InstCombine/2005-06-16-SetCCOrSetCCMiscompile.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22236 91177308-0d34-0410-b5e6-96231b3b80d8
* Check for availability of ffsll call in configure script
* Support ffs, ffsl, and ffsll conversion to constant value if the argument
is constant.
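For reference, the values being folded (ffs returns the 1-based index of
the least significant set bit, or 0 for a zero argument):
#include <strings.h>
#include <assert.h>
int main(void) {
  assert(ffs(0) == 0);
  assert(ffs(8) == 4);    /* 8 == 0b1000: lowest set bit is bit 4, 1-based */
  assert(ffs(12) == 3);   /* 12 == 0b1100 */
  return 0;
}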
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22027 91177308-0d34-0410-b5e6-96231b3b80d8
a call. This fixes Prolangs-C++/deriv2, kimwitu++, and Misc-C++/bigfib
on X86 with -enable-x86-fastcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22023 91177308-0d34-0410-b5e6-96231b3b80d8
This makes reassociate realize that loads should be treated as unmovable, and
gives distinct ranks to distinct values defined in the same basic block, allowing
reassociate to do its thing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21783 91177308-0d34-0410-b5e6-96231b3b80d8
in. This tends to get cases like this:
X = cast ubyte to int
Y = shr int X, ...
Tested by: shift.ll:test24
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21775 91177308-0d34-0410-b5e6-96231b3b80d8
of trying to do local reassociation tweaks at each level, only process an expression
tree once (at its root). This does not improve the reassociation pass in any real way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21768 91177308-0d34-0410-b5e6-96231b3b80d8
strlen(x) != 0 -> *x != 0
strlen(x) == 0 -> *x == 0
* Change nested statistics to use style of other LLVM statistics so that
only the name of the optimization (simplify-libcalls) is used as the
statistic name, and the description indicates which specific call is
optimized. Cuts down on some redundancy and saves a few bytes of space.
* Make note of stpcpy optimization that could be done.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21766 91177308-0d34-0410-b5e6-96231b3b80d8
the result, turn signed right shifts into unsigned right shifts if possible.
This leads to later simplification and happens *often* in 176.gcc. For example,
this testcase:
struct xxx { unsigned int code : 8; };
enum codes { A, B, C, D, E, F };
int foo(struct xxx *P) {
if ((enum codes)P->code == A)
bar();
}
used to be compiled to:
int %foo(%struct.xxx* %P) {
%tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0 ; <uint*> [#uses=1]
%tmp.2 = load uint* %tmp.1 ; <uint> [#uses=1]
%tmp.3 = cast uint %tmp.2 to int ; <int> [#uses=1]
%tmp.4 = shl int %tmp.3, ubyte 24 ; <int> [#uses=1]
%tmp.5 = shr int %tmp.4, ubyte 24 ; <int> [#uses=1]
%tmp.6 = cast int %tmp.5 to sbyte ; <sbyte> [#uses=1]
%tmp.8 = seteq sbyte %tmp.6, 0 ; <bool> [#uses=1]
br bool %tmp.8, label %then, label %UnifiedReturnBlock
Now it is compiled to:
%tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0 ; <uint*> [#uses=1]
%tmp.2 = load uint* %tmp.1 ; <uint> [#uses=1]
%tmp.2 = cast uint %tmp.2 to sbyte ; <sbyte> [#uses=1]
%tmp.8 = seteq sbyte %tmp.2, 0 ; <bool> [#uses=1]
br bool %tmp.8, label %then, label %UnifiedReturnBlock
which is the difference between this:
foo:
subl $4, %esp
movl 8(%esp), %eax
movl (%eax), %eax
shll $24, %eax
sarl $24, %eax
testb %al, %al
jne .LBBfoo_2
and this:
foo:
subl $4, %esp
movl 8(%esp), %eax
movl (%eax), %eax
testb %al, %al
jne .LBBfoo_2
This occurs 3243 times total in the External tests, 215x in povray,
6x in each f2c'd program, 1451x in 176.gcc, 7x in crafty, 20x in perl,
25x in gap, 3x in m88ksim, 25x in ijpeg.
Maybe this will cause a little jump on gcc tomorrow :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21715 91177308-0d34-0410-b5e6-96231b3b80d8
This implements set.ll:test20.
This triggers 2x on povray, 9x on mesa, 11x on gcc, 2x on crafty, 1x on eon,
6x on perlbmk and 11x on m88ksim.
It allows us to compile these two functions into the same code:
struct s { unsigned int bit : 1; };
unsigned foo(struct s *p) {
if (p->bit)
return 1;
else
return 0;
}
unsigned bar(struct s *p) { return p->bit; }
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21690 91177308-0d34-0410-b5e6-96231b3b80d8
library function:
isdigit(chr) -> 0 or 1 if chr is constant
isdigit(chr) -> chr - '0' <= 9 otherwise
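Spelled out in C, the non-constant case relies on an unsigned comparison so
that characters below '0' wrap around and fail the test (a sketch of the
equivalent):
int my_isdigit(int chr) {
  return (unsigned)(chr - '0') <= 9;
}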
Although there are many calls to isdigit in llvm-test, most of them are
compiled away by macros leaving only this:
2 MultiSource/Applications/hexxagon
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21688 91177308-0d34-0410-b5e6-96231b3b80d8
actual spec (int -> uint)
* Add the ability to get/cache the strlen function prototype.
* Make sure generated values are appropriately named for debugging purposes
* Add the SPrintFOptimization for 4 cases of the sprintf optimization:
sprintf(str,cstr) -> llvm.memcpy(str,cstr) (if cstr has no %)
sprintf(str,"") -> store sbyte 0, str
sprintf(str,"%s",src) -> llvm.memcpy(str,src) (if src is constant)
sprintf(str,"%c",chr) -> store chr, str ; store sbyte 0, str+1
The sprintf optimization didn't fire as much as I had hoped:
2 MultiSource/Applications/SPASS
5 MultiSource/Benchmarks/McCat/18-imp
22 MultiSource/Benchmarks/Prolangs-C/TimberWolfMC
1 MultiSource/Benchmarks/Prolangs-C/assembler
6 MultiSource/Benchmarks/Prolangs-C/unix-smail
2 MultiSource/Benchmarks/mediabench/mpeg2/mpeg2dec
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21679 91177308-0d34-0410-b5e6-96231b3b80d8
Neither of these activated as many times as was hoped:
strchr:
9 MultiSource/Applications/siod
1 MultiSource/Applications/d
2 MultiSource/Prolangs-C/archie-client
1 External/SPEC/CINT2000/176.gcc/176.gcc
llvm.memset:
no hits
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21669 91177308-0d34-0410-b5e6-96231b3b80d8
strings passed to Statistic's constructor are not destructible. The stats
are printed during static destruction and the SimplifyLibCalls module was
getting destructed before the statistics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21661 91177308-0d34-0410-b5e6-96231b3b80d8
type be obtained from a CallInst we're optimizing.
* Make it possible for getConstantStringLength to return the ConstantArray
that it extracts in case the content is needed by an Optimization.
* Implement the strcmp optimization
* Implement the toascii optimization
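For reference, toascii is defined (POSIX) as a 7-bit mask, so any call
reduces to:
toascii(c)  /* -> (c) & 0x7f */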
This pass now fires anywhere from a few to several hundred times in the
following MultiSource tests:
Applications/Burg - 7 (strcat,strcpy)
Applications/siod - 13 (strcat,strcpy,strlen)
Applications/spiff - 120 (exit,fputs,strcat,strcpy,strlen)
Applications/treecc - 66 (exit,fputs,strcat,strcpy)
Applications/kimwitu++ - 34 (strcmp,strcpy,strlen)
Applications/SPASS - 588 (exit,fputs,strcat,strcpy,strlen)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21626 91177308-0d34-0410-b5e6-96231b3b80d8
sinh, cosh, etc.
* Make the name comparisons for the fp libcalls a little more efficient by
switching on the first character of the name before doing comparisons.
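A sketch of the dispatch idea (hypothetical code, not the pass's actual
structure):
switch (name[0]) {
case 'c': /* cos, cosh, ... */            break;
case 's': /* sin, sinh, sqrt, ... */      break;
default:  /* not an fp libcall we know */ break;
}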
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21611 91177308-0d34-0410-b5e6-96231b3b80d8
* Name the instructions by appending to name of original
* Factor common part out of a switch statement.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21597 91177308-0d34-0410-b5e6-96231b3b80d8
* Correct stale documentation in a few places
* Re-order the file to better associate things and reduce line count
* Make the pass thread safe by caching the Function* objects needed by the
optimizers in the pass object instead of globally.
* Provide the SimplifyLibCalls pass object to the optimizer classes so they
can access cached Function* objects and TargetData info
* Make sure the pass resets its cache if the Module passed to runOnModule
changes
* Rename CallOptimizer to LibCallOptimization. All the classes are named
*Optimization while the objects are *Optimizer.
* Don't cache Function* in the optimizer objects because they could be used
by multiple PassManager's running in multiple threads
* Add an optimization for strcpy which is similar to strcat
* Add a "TODO" list at the end of the file for ideas on additional libcall
optimizations that could be added (get ideas from other compilers).
Sorry for the huge diff. It's mostly reorganization of code. That won't
happen again as I believe the design and infrastructure for this pass is
now done or close to it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21589 91177308-0d34-0410-b5e6-96231b3b80d8
call to them into an 'unreachable' instruction.
This triggers a bunch of times, particularly on gcc:
gzip: 36
gcc: 601
eon: 12
bzip: 38
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21587 91177308-0d34-0410-b5e6-96231b3b80d8
* MemCpyOptimization can only be applied if the 3rd and 4th arguments are
constants, and we weren't checking for that.
* The result of llvm.memcpy (and llvm.memmove) is void*, not sbyte*, so put
in a cast.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21570 91177308-0d34-0410-b5e6-96231b3b80d8
* Have the SimplifyLibCalls pass acquire the TargetData and pass it down to
the optimization classes so they can use it to make better choices for
the signatures of functions, etc.
* Rearrange the code a little so the utility functions are closer to their
usage and keep the core of the pass near the top of the files.
* Adjust the StrLen pass to get/use the correct prototype depending on the
TargetData::getIntPtrType() result. The result of strlen is size_t which
could be either uint or ulong depending on the platform.
* Clean up some coding nits (cast vs. dyn_cast, remove redundant items from
a switch, etc.)
* Implement the MemMoveOptimization as a twin of MemCpyOptimization (they
only differ in name).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21569 91177308-0d34-0410-b5e6-96231b3b80d8
named getConstantStringLength. This is the common part of StrCpy and
StrLen optimizations and probably several others, yet to be written. It
performs all the validity checks for looking at constant arrays that are
supposed to be null-terminated strings and then computes the actual
length of the string.
* Implement the MemCpyOptimization class. This just turns memcpy of 1, 2, 4
and 8 byte data blocks that are properly aligned on those boundaries into
a load and a store. Much more could be done here but alignment
restrictions and lack of knowledge of the target instruction set prevent
us from doing significantly more. That will have to be delegated to the
code generators as they lower llvm.memcpy calls.
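In source terms the rewrite amounts to this (a sketch for the 4-byte case,
assuming the pointers are suitably aligned):
/* before */  memcpy(dst, src, 4);
/* after  */  *(unsigned int *)dst = *(unsigned int *)src;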
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21562 91177308-0d34-0410-b5e6-96231b3b80d8
* Factor out commonalities between StrLenOptimization and StrCatOptimization
* Make sure that signatures return sbyte* not void*
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21559 91177308-0d34-0410-b5e6-96231b3b80d8
* Change signatures of OptimizeCall and ValidateCalledFunction so they are
non-const, allowing the optimization object to be modified. This is in
support of caching things used across multiple calls.
* Provide two functions for constructing and caching function types
* Modify the StrCatOptimization to cache Function objects for strlen and
llvm.memcpy so it doesn't regenerate them on each call site. Make sure
these are invalidated each time we start the pass.
* Handle both a GEP Instruction and a GEP ConstantExpr
* Add additional checks to make sure we really are dealing with an array of
sbyte and that all the element initializers are ConstantInt or
ConstantExpr that reduce to ConstantInt.
* Make sure the GlobalVariable is constant!
* Don't use ConstantArray::getString as it can fail and it doesn't give us
the right thing. We must check for null bytes in the middle of the array.
* Use llvm.memcpy instead of memcpy so we can factor alignment into it.
* Don't use void* types in signatures, replace with sbyte* instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21555 91177308-0d34-0410-b5e6-96231b3b80d8
* Don't use std::string for the function names, const char* will suffice
* Allow each CallOptimizer to validate the function signature before
doing anything
* Repeatedly loop over the functions until an iteration produces
no more optimizations. This allows one optimization to insert a
call that is optimized by another optimization.
* Implement the ConstantArray portion of the StrCatOptimization
* Provide a template for the MemCpyOptimization
* Make ExitInMainOptimization split the block, not delete everything
after the return instruction.
(This covers revisions 1.3 and 1.4, as the 1.3 comments were botched)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21548 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix comments at top of file
* Change algorithm for running the call optimizations from n*n to something
closer to n.
* Use a hash_map to store and lookup the optimizations since there will
eventually (or potentially) be a large number of them. This gets lookup
based on the name of the function to O(1). Each CallOptimizer now has a
std::string member named func_name that tracks the name of the function
that it applies to. It is this string that is entered into the hash_map
for fast comparison against the function names encountered in the module.
* Cleanup some style issues pertaining to iterator invalidation
* Don't pass the Function pointer to the OptimizeCall function because if
the optimization needs it, it can get it from the CallInst passed in.
* Add the skeleton for a new CallOptimizer, StrCatOptimizer which will
eventually replace strcat's of constant strings with direct copies.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21526 91177308-0d34-0410-b5e6-96231b3b80d8
calls. The pass visits all external functions in the module and determines
if such function calls can be optimized. The optimizations are specific to
the library calls involved. This initial version only optimizes calls to
exit(3) when they occur in main(): it changes them to ret instructions.
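For example (a sketch of the rewrite):
#include <stdlib.h>
int main(void) {
  exit(3);   /* in main, this becomes: return 3; */
}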
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21522 91177308-0d34-0410-b5e6-96231b3b80d8
Completely rework the 'setcc (cast x to larger), y' code. This code has
the advantage of implementing setcc.ll:test19 (being more general than
the previous code) and being correct in all cases.
This allows us to unxfail 2004-11-27-SetCCForCastLargerAndConstant.ll,
and close PR454.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21491 91177308-0d34-0410-b5e6-96231b3b80d8
Make IPSCCP strip off dead constant exprs that are using functions, making
them appear as though their address is taken. This allows us to propagate
some more pool descriptors, lowering the overhead of pool alloc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21363 91177308-0d34-0410-b5e6-96231b3b80d8
This pass forwards branches through conditions when it can show that the
condition is either always true or always false for a predecessor. This currently
only handles the most simple cases of this, but is successful at threading
across 2489 branches and 65 switch instructions in 176.gcc, which isn't bad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21306 91177308-0d34-0410-b5e6-96231b3b80d8
using Function::arg_{iterator|begin|end}. Likewise Module::g* -> Module::global_*.
This patch is contributed by Gabor Greif, thanks!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20597 91177308-0d34-0410-b5e6-96231b3b80d8
* Loop invariant code does not dominate the loop header, but rather
the end of the loop preheader.
* The base for a reduced GEP isn't a constant unless all of its
operands (preceding the induction variable) are constant.
* Allow induction variable elimination for the simple case after all.
Also made changes recommended by Chris for properly deleting
instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20383 91177308-0d34-0410-b5e6-96231b3b80d8
This does a simple form of "jump threading", which eliminates CFG edges that
are provably dead. This triggers 90 times in the external tests, and
eliminating CFG edges is always a good thing! :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20300 91177308-0d34-0410-b5e6-96231b3b80d8
and handle incomplete control dependences correctly. This fixes:
Regression/Transforms/ADCE/dead-phi-edge.ll
-> a missed optimization
Regression/Transforms/ADCE/dead-phi-edge.ll
-> a compiler crash distilled from QT4
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20227 91177308-0d34-0410-b5e6-96231b3b80d8