only do these transformations if there are a small number of phi's.
This speeds up Ptrdist/ks from 2.35s to 2.19s on my Mac Pro.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31853 91177308-0d34-0410-b5e6-96231b3b80d8
This patch converts the old SHR instruction into two instructions,
AShr (Arithmetic) and LShr (Logical). The Shr instructions now are not
dependent on the sign of their operands.
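For illustration only, here is the sign-dependence in C++ (a sketch, not part
of the patch); with a single signless SHR, the instruction had to consult the
operand's type to choose between these two results:

#include <cstdint>
#include <cstdio>

// AShr replicates the sign bit; LShr shifts in zeros. On typical targets,
// C++ '>>' on a negative signed value behaves like AShr, and '>>' on an
// unsigned value behaves like LShr.
int main() {
  int32_t x = -8;                                // bit pattern 0xFFFFFFF8
  int32_t ashr = x >> 1;                         // -4: sign bit replicated
  uint32_t lshr = static_cast<uint32_t>(x) >> 1; // 0x7FFFFFFC: zero filled
  std::printf("ashr = %d, lshr = %u\n", ashr, lshr);
  return 0;
}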
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31542 91177308-0d34-0410-b5e6-96231b3b80d8
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fallout by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31380 91177308-0d34-0410-b5e6-96231b3b80d8
Make necessary changes to support DIV -> [SUF]Div. This changes llvm to
have three division instructions: signed, unsigned, floating point. The
bytecode and assembler are backwards compatible, however.
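To illustrate why a single signless DIV cannot serve all three (a C++ sketch,
not part of the change):

#include <cstdint>
#include <cstdio>

// One 32-bit pattern, three different quotients depending on interpretation;
// this is exactly the information a signless DIV could not carry.
int main() {
  uint32_t bits = 0xFFFFFFF8u;                   // -8 signed, 4294967288 unsigned
  int32_t sdiv = static_cast<int32_t>(bits) / 2; // -4
  uint32_t udiv = bits / 2;                      // 2147483644
  double fdiv = -8.0 / 2.0;                      // -4.0, under FP rounding rules
  std::printf("sdiv = %d, udiv = %u, fdiv = %g\n", sdiv, udiv, fdiv);
  return 0;
}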
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31195 91177308-0d34-0410-b5e6-96231b3b80d8
property is added by running through the list of uses of the value and
adding resolved properties to the property set.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31126 91177308-0d34-0410-b5e6-96231b3b80d8
1. Better document what is going on here.
2. Only hack on one branch per iteration, making the results less conservative.
3. Handle the problematic case by marking edges executable instead of by
playing with value lattice states. This is far less pessimistic, and fixes
SCCP/ipsccp-gvar.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31106 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements the first increment for the Signless Types feature.
All changes pertain to removing the ConstantSInt and ConstantUInt classes
in favor of just using ConstantInt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31063 91177308-0d34-0410-b5e6-96231b3b80d8
SimplifyDemandedBits. The idea is that some operations can be simplified if
not all of the computed elements are needed. Some targets (like x86) have a
large number of intrinsics that operate on a single element, but pass other
elts through unmodified. If those other elements are not needed, the
intrinsics can be simplified to scalar operations, and insertelement ops can
be removed.
This turns (for example):
ushort %Convert_sse(float %f) {
%tmp = insertelement <4 x float> undef, float %f, uint 0 ; <<4 x float>> [#uses=1]
%tmp10 = insertelement <4 x float> %tmp, float 0.000000e+00, uint 1 ; <<4 x float>> [#uses=1]
%tmp11 = insertelement <4 x float> %tmp10, float 0.000000e+00, uint 2 ; <<4 x float>> [#uses=1]
%tmp12 = insertelement <4 x float> %tmp11, float 0.000000e+00, uint 3 ; <<4 x float>> [#uses=1]
%tmp28 = tail call <4 x float> %llvm.x86.sse.sub.ss( <4 x float> %tmp12, <4 x float> < float 1.000000e+00, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp37 = tail call <4 x float> %llvm.x86.sse.mul.ss( <4 x float> %tmp28, <4 x float> < float 5.000000e-01, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp37, <4 x float> < float 6.553500e+04, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> zeroinitializer ) ; <<4 x float>> [#uses=1]
%tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 ) ; <int> [#uses=1]
%tmp69 = cast int %tmp to ushort ; <ushort> [#uses=1]
ret ushort %tmp69
}
into:
ushort %Convert_sse(float %f) {
entry:
%tmp28 = sub float %f, 1.000000e+00 ; <float> [#uses=1]
%tmp37 = mul float %tmp28, 5.000000e-01 ; <float> [#uses=1]
%tmp375 = insertelement <4 x float> undef, float %tmp37, uint 0 ; <<4 x float>> [#uses=1]
%tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp375, <4 x float> < float 6.553500e+04, float undef, float undef, float undef > ) ; <<4 x float>> [#uses=1]
%tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> < float 0.000000e+00, float undef, float undef, float undef > ) ; <<4 x float>> [#uses=1]
%tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 ) ; <int> [#uses=1]
%tmp69 = cast int %tmp to ushort ; <ushort> [#uses=1]
ret ushort %tmp69
}
which improves codegen from:
_Convert_sse:
movss LCPI1_0, %xmm0
movss 4(%esp), %xmm1
subss %xmm0, %xmm1
movss LCPI1_1, %xmm0
mulss %xmm0, %xmm1
movss LCPI1_2, %xmm0
minss %xmm0, %xmm1
xorps %xmm0, %xmm0
maxss %xmm0, %xmm1
cvttss2si %xmm1, %eax
andl $65535, %eax
ret
to:
_Convert_sse:
movss 4(%esp), %xmm0
subss LCPI1_0, %xmm0
mulss LCPI1_1, %xmm0
movss LCPI1_2, %xmm1
minss %xmm1, %xmm0
xorps %xmm1, %xmm1
maxss %xmm1, %xmm0
cvttss2si %xmm0, %eax
andl $65535, %eax
ret
This is just a first step; it can be extended in many ways. Testcase here:
Transforms/InstCombine/vec_demanded_elts.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30752 91177308-0d34-0410-b5e6-96231b3b80d8
Ensure that we copy KnownProperties before calling visitBasicBlock, else
we may leak properties into blocks where they don't belong.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30705 91177308-0d34-0410-b5e6-96231b3b80d8
The critical edge block dominates the dest block if the dest block dominates
the sources of all incoming edges other than the one from the critical edge.
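As a rough self-contained sketch of this check (hypothetical Block type and
dominates() stand-in, not the actual LLVM API):

#include <functional>
#include <vector>

// Minimal stand-in for a basic block with predecessor links.
struct Block { std::vector<Block *> preds; };

// The block inserted on the critical edge dominates DestBB exactly when every
// other predecessor of DestBB is itself dominated by DestBB (a back edge), so
// the first entry into DestBB on any path must use the split edge.
bool critEdgeBlockDominatesDest(
    const std::function<bool(Block *, Block *)> &dominates, // dominates(A, B)
    Block *DestBB, Block *CritEdgeBB) {
  for (Block *Pred : DestBB->preds) {
    if (Pred == CritEdgeBB)
      continue;                  // the edge that was just split
    if (!dominates(DestBB, Pred))
      return false;              // some path can bypass the new block
  }
  return true;
}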
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30696 91177308-0d34-0410-b5e6-96231b3b80d8
or when splitting loops with a common header into multiple loops. In particular
the old code would always insert the preheader before the old loop header. This
is disastrous in cases where the loop hasn't been rotated. For example, it can
produce code like:
.. outside the loop...
jmp LBB1_2 #bb13.outer
LBB1_1: #bb1
movsd 8(%esp,%esi,8), %xmm1
mulsd (%edi), %xmm1
addsd %xmm0, %xmm1
addl $24, %edi
incl %esi
jmp LBB1_3 #bb13
LBB1_2: #bb13.outer
leal (%edx,%eax,8), %edi
pxor %xmm1, %xmm1
xorl %esi, %esi
LBB1_3: #bb13
movapd %xmm1, %xmm0
cmpl $4, %esi
jl LBB1_1 #bb1
Note that the loop body is actually LBB1_1 + LBB1_3, which means that the
loop now contains an uncond branch WITHIN it to jump around the inserted
loop header (LBB1_2). Doh.
This patch changes the preheader insertion code to insert it in the right
spot, producing this code:
... outside the loop, fall into the header ...
LBB1_1: #bb13.outer
leal (%edx,%eax,8), %esi
pxor %xmm0, %xmm0
xorl %edi, %edi
jmp LBB1_3 #bb13
LBB1_2: #bb1
movsd 8(%esp,%edi,8), %xmm0
mulsd (%esi), %xmm0
addsd %xmm1, %xmm0
addl $24, %esi
incl %edi
LBB1_3: #bb13
movapd %xmm0, %xmm1
cmpl $4, %edi
jl LBB1_2 #bb1
Totally crazy, no branch in the loop! :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30587 91177308-0d34-0410-b5e6-96231b3b80d8
reachable, making it general purpose enough for use by InsertPreheaderForLoop.
Eliminate custom dominfo updating code in InsertPreheaderForLoop, using
UpdateDomInfoForRevectoredPreds instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30586 91177308-0d34-0410-b5e6-96231b3b80d8
that we can't modify the CFG any more, at least not until it's possible
to update the dominator tree (PR217).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30469 91177308-0d34-0410-b5e6-96231b3b80d8
DLL* linkages got full (I hope) code generation support in C & both x86
assembler backends.
External weak linkage added for future use; we don't provide any
code generation, etc. support for it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30374 91177308-0d34-0410-b5e6-96231b3b80d8
operations (like findProperties) should be faster, at the expense of
unionSets being slower in cases that are rare in practice.
Don't erase a dead Instruction. This fixes a memory corruption issue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30235 91177308-0d34-0410-b5e6-96231b3b80d8
Call these from your backend to enjoy setjmp/longjmp goodness; see
lib/Target/IA64/IA64ISelLowering.cpp for an example.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30095 91177308-0d34-0410-b5e6-96231b3b80d8
Reorder operations to remove duplicated work.
Fix to leave floating-point types out of the optimization.
Add tests to predsimplify.ll for SwitchInst and SelectInst handling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30055 91177308-0d34-0410-b5e6-96231b3b80d8
another Value) weren't being found by findProperties.
This fixes predsimplify.ll test6, a missed optimization opportunity.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29991 91177308-0d34-0410-b5e6-96231b3b80d8
If a branch's condition has become a ConstantBool, simplify it immediately.
Removing the edge saves work and exposes more optimization opportunities
in the pass.
Add support for SelectInst.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29970 91177308-0d34-0410-b5e6-96231b3b80d8
exit blocks. The output is dependent on addresses of basic blocks.
Add and use Loop::getUniqueExitBlocks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29966 91177308-0d34-0410-b5e6-96231b3b80d8
Not only will this take huge amounts of compile time, but the resultant loop
nests won't be useful for optimization. This reduces loopsimplify time on
Transforms/LoopSimplify/2006-08-11-LoopSimplifyLongTime.ll from ~32s to ~0.4s
with a debug build of llvm on a 2.7GHz G5.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29647 91177308-0d34-0410-b5e6-96231b3b80d8
blocks that target loop blocks.
Before, the code was run once per loop, and depended on the number of
predecessors each block in the loop had. Unfortunately, scanning preds can
be really slow when huge numbers of phis exist or when phis with huge numbers
of inputs exist.
Now, the code is run once per function and scans successors instead of preds,
which is far faster. In addition, the new code is simpler and is goto free,
woo.
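The shape of the rewrite, as a hedged C++ sketch (hypothetical Block type,
not the actual code):

#include <unordered_set>
#include <utility>
#include <vector>

// Minimal stand-in for a basic block with successor links.
struct Block { std::vector<Block *> succs; };

// One cheap forward pass over the whole function: testing successors avoids
// walking predecessor lists, which degrades badly when blocks have huge
// numbers of phis or preds.
std::vector<std::pair<Block *, Block *>>
findEdgesIntoLoops(const std::vector<Block *> &allBlocks,
                   const std::unordered_set<Block *> &loopBlocks) {
  std::vector<std::pair<Block *, Block *>> edges;
  for (Block *BB : allBlocks)
    for (Block *Succ : BB->succs)
      if (loopBlocks.count(Succ) && !loopBlocks.count(BB))
        edges.push_back({BB, Succ}); // a branch from outside into a loop block
  return edges;
}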
This change speeds up a nasty testcase Duraid provided me from taking hours to
taking ~72s with a debug build. The functionality this implements is already
tested in the testsuite as Transforms/CodeExtractor/2004-03-13-LoopExtractorCrash.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29644 91177308-0d34-0410-b5e6-96231b3b80d8
SlowOperatingInfo, Statistics). Besides providing an example of how to
use these facilities, it also serves to debug problems with runtime linking
when dlopening a loadable module. These three support facilities exercise
different combinations of Text/Weak, Weak/Text, and Text/Text linking
between the executable and the module.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29552 91177308-0d34-0410-b5e6-96231b3b80d8
1. Change the usage of LOADABLE_MODULE so that it implies all the things
necessary to make a loadable module. This reduces the user's burden to
get a loadable module correctly built.
2. Document the usage of LOADABLE_MODULE in the MakefileGuide
3. Adjust the makefile for lib/Transforms/Hello to use the new specification
for building loadable modules
4. Adjust the sample project to not attempt to build a shared library for
its little library. This was just wasteful and not instructive at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29551 91177308-0d34-0410-b5e6-96231b3b80d8
1. Update an obsolete comment.
2. Make the sorting by base an explicit (though still N^2) step, so
that the code is clearer about what it is doing.
3. Partition uses so that uses inside the loop are handled before uses
outside the loop.
Note that none of these changes currently changes the code inserted by LSR,
but they are a stepping stone to getting there.
This code is the result of some crazy pair programming with Nate. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29493 91177308-0d34-0410-b5e6-96231b3b80d8
down approach, inspired by discussions with Tanya.
This approach is significantly faster, because it does not need dominator
frontiers and it does not insert extraneous unused PHI nodes. For example, on
252.eon, in a release-asserts build, this speeds up LCSSA (which is the slowest
pass in gccas) from 9.14s to 0.74s on my G5. This code is also slightly smaller
and significantly simpler than the old code.
Amusingly, in a normal Release build (which includes the
"assert(L->isLCSSAForm());" assertion), asserting that the result of LCSSA
is in LCSSA form is actually slower than the LCSSA transformation pass
itself on 252.eon. I will see if Loop::isLCSSAForm can be sped up next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29463 91177308-0d34-0410-b5e6-96231b3b80d8
Handle this case, which doesn't require a new callgraph edge. This fixes
a crash compiling MallocBench/gs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29121 91177308-0d34-0410-b5e6-96231b3b80d8
target CG node. This allows the inliner to properly update the callgraph
when using the pruning inliner. The pruning inliner may not copy over all
call sites from a callee to a caller, so the edges corresponding to those
call sites should not be copied over either.
This fixes PR827 and Transforms/Inline/2006-07-12-InlinePruneCGUpdate.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29120 91177308-0d34-0410-b5e6-96231b3b80d8
cases. Ideally, this issue will go away in the future as LCSSA gets smarter
about which Phi nodes it inserts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29076 91177308-0d34-0410-b5e6-96231b3b80d8
will be profitable. This is mainly to remove some cases where excessive
unswitching would result in long compile times and/or huge generated code.
Once someone comes up with a better heuristic that avoids these cases, this
should be switched out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28962 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the Function pointer cast in these calls, converting it to
a cast of the argument:
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( int 10 )
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( uint %tmp51 )
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28953 91177308-0d34-0410-b5e6-96231b3b80d8
of LCSSA. This results in several times the number of unswitchings occurring
on tests such as timberwolfmc, unix-tbl, and ldecod.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28912 91177308-0d34-0410-b5e6-96231b3b80d8
"LCSSA" phi node causes indvars to break dominance properties. This fixes
the problem by making indvars avoid inserting aggressive code in this case;
longer term, indvars should instead be fixed to be more aggressive in the
face of LCSSA phi's.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28850 91177308-0d34-0410-b5e6-96231b3b80d8
LCSSA is still the slowest pass when gccas'ing 252.eon, but now it only takes
39s instead of 289s. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28776 91177308-0d34-0410-b5e6-96231b3b80d8
not handling PHI nodes correctly when determining if a value was live-out.
This patch reduces the number of detected live-out variables in the testcase
from 6565 to 485.
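The key rule behind the fix, as a hedged sketch (hypothetical types standing
in for llvm::Instruction and llvm::PHINode):

struct Block;
struct Instruction { Block *parent; };
struct PhiUse {
  Instruction *user;
  bool userIsPhi;
  Block *incomingBlock; // when the user is a phi: the pred supplying the value
};

// A value used by a phi is live out of the *incoming* predecessor block, not
// the block containing the phi; testing the phi's own block wrongly marks
// many values as live-out.
Block *effectiveUseBlock(const PhiUse &U) {
  return U.userIsPhi ? U.incomingBlock : U.user->parent;
}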
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28771 91177308-0d34-0410-b5e6-96231b3b80d8
If a single exit block has multiple predecessors within the loop, it will
appear in the exit blocks list more than once. LCSSA needs to take that into
account so that it doesn't double process that exit block.
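A minimal sketch of the needed de-duplication (hypothetical Block type):

#include <set>
#include <vector>

struct Block {};

// An exit block with N in-loop predecessors appears N times in the raw
// exit-block list (once per edge); keep only the first occurrence.
std::vector<Block *> uniqueExitBlocks(const std::vector<Block *> &exitBlocks) {
  std::set<Block *> seen;
  std::vector<Block *> unique;
  for (Block *BB : exitBlocks)
    if (seen.insert(BB).second) // true only the first time BB is seen
      unique.push_back(BB);
  return unique;
}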
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28750 91177308-0d34-0410-b5e6-96231b3b80d8
post-increment value should first be cast to the appropriate type (the type
of the common expr). Otherwise, the rewrite of a use based on (common + iv)
may end up with an incorrect type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28735 91177308-0d34-0410-b5e6-96231b3b80d8
to link in the implementation. Thanks to Anton Korobeynikov for figuring out
what was going on here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28660 91177308-0d34-0410-b5e6-96231b3b80d8
code (while cloning) it often gets the branch/switch instructions. Since it
knows that edges of the CFG are dead, it need not clone (or even look at)
the obviously dead blocks. This should speed up the inliner substantially on
code where there are lots of inlinable calls to functions with constant
arguments. On C++ code in particular, this kicks in.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28641 91177308-0d34-0410-b5e6-96231b3b80d8
reimplement getValueDominatingFunction to walk the DominanceTree rather than
just searching blindly.
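Roughly, the new lookup walks the immediate-dominator chain (a sketch with a
hypothetical Block type, not the real signature):

#include <set>

// Minimal stand-in: each block records its immediate dominator (null at root).
struct Block { Block *idom; };

// Walk upward from the querying block and return the first block holding a
// definition, instead of testing every candidate's dominance blindly.
Block *findDominatingDef(Block *From, const std::set<Block *> &defBlocks) {
  for (Block *BB = From; BB; BB = BB->idom)
    if (defBlocks.count(BB))
      return BB;    // nearest dominating definition
  return nullptr;   // nothing dominates From
}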
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28618 91177308-0d34-0410-b5e6-96231b3b80d8
is now theoretically feature-complete. It has not, however, been thoroughly
tested, and is still considered experimental.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28529 91177308-0d34-0410-b5e6-96231b3b80d8
the iterated Dominance Frontier of the loop-closure Phi's. This is the
second phase of the LCSSA pass. The third phase (coming soon) will be to
update all uses of loop variables to use the loop-closure Phi's instead.
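The underlying worklist computation, sketched with hypothetical types (the
real pass works on llvm::BasicBlocks with a computed dominance frontier):

#include <map>
#include <set>
#include <vector>

struct Block {};

// Every block in the iterated dominance frontier of the initial loop-closure
// phi blocks also needs a phi; newly added phi blocks feed back into the
// worklist until a fixed point is reached.
std::set<Block *> iteratedDominanceFrontier(
    const std::map<Block *, std::set<Block *>> &DF, // per-block dom frontier
    std::vector<Block *> worklist) {                // blocks with initial phis
  std::set<Block *> phiBlocks;
  while (!worklist.empty()) {
    Block *BB = worklist.back();
    worklist.pop_back();
    auto It = DF.find(BB);
    if (It == DF.end())
      continue;
    for (Block *F : It->second)
      if (phiBlocks.insert(F).second) // a new phi block may itself have a
        worklist.push_back(F);        // frontier needing phis: iterate
  }
  return phiBlocks;
}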
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28524 91177308-0d34-0410-b5e6-96231b3b80d8
makes it so that it constant folds instructions on the fly. This is good
for several reasons:
0. Many instructions are constant foldable after inlining, particularly if
inlining a call with constant arguments.
1. Without this, the inliner has to allocate memory for all of the instructions
that can be constant folded, then a subsequent pass has to delete them. This
gets the job done without this extra work.
2. This makes the inliner *pass* a bit more aggressive: in particular, it
partially solves a phase order issue where the inliner would inline lots
of code that folds away to nothing, but think that the resultant function
is big because of this code that will be gone. Now the code never exists.
This is the first part of a 2-step process. The second part will be smart
enough to see when this implicit constant folding propagates a constant into
a branch or switch instruction, making CFG edges dead.
This implements Transforms/Inline/inline_constprop.ll
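The fold-as-you-clone idea, as a hedged sketch (hypothetical minimal types
and a stubbed folder, not the actual inliner code):

#include <map>
#include <vector>

struct Value {};
struct Constant : Value {};
struct Instruction : Value { std::vector<Value *> operands; };

// Stand-in for the real constant folder: returns a Constant if I simplifies
// once its operands are looked up through VM, or null otherwise.
Constant *tryConstantFold(Instruction *I, std::map<Value *, Value *> &VM) {
  (void)I; (void)VM;
  return nullptr; // placeholder; the real folder does the work
}

// A folded instruction is mapped straight to its constant and never
// materialized, so no memory is allocated for it and nothing has to be
// deleted by a later pass.
void cloneBlockBody(const std::vector<Instruction *> &Body,
                    std::map<Value *, Value *> &VM,
                    std::vector<Instruction *> &Out) {
  for (Instruction *I : Body) {
    if (Constant *C = tryConstantFold(I, VM)) {
      VM[I] = C; // later uses of I are rewritten to the constant
      continue;  // nothing cloned, nothing to clean up afterwards
    }
    Instruction *Clone = new Instruction(*I); // real code also remaps operands via VM
    VM[I] = Clone;
    Out.push_back(Clone);
  }
}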
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28521 91177308-0d34-0410-b5e6-96231b3b80d8
the program. This exposes more opportunities for the instcombiner, and implements
vec_shuffle.ll:test6
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28487 91177308-0d34-0410-b5e6-96231b3b80d8
When doing the initial pass of constant folding, if we get a constantexpr,
simplify the constant expr like we would do if the constant is folded in the
normal loop.
This fixes the missed-optimization regression in
Transforms/InstCombine/getelementptr.ll last night.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28224 91177308-0d34-0410-b5e6-96231b3b80d8
1. Implement InstCombine/deadcode.ll by not adding instructions in unreachable
blocks (due to constants in conditional branches/switches) to the worklist.
This causes them to be deleted before instcombine starts up, leading to
better optimization.
2. In the prepass over instructions, do trivial constprop/dce as we go. This
has the effect of improving the effectiveness of #1. In addition, it
*significantly* speeds up instcombine on test cases with large amounts of
constant folding code (for example, that produced by code specialization
or partial evaluation). In one example, it speeds up instcombine from
0.0589s to 0.0224s with a release build (a 2.6x speedup).
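Both points together, as a hedged C++ sketch (hypothetical types, with the
folding and deadness tests reduced to flags):

#include <set>
#include <vector>

struct Instruction {
  bool hasNoUses;       // trivially dead when also side-effect free
  bool sideEffectFree;
  bool foldsToConstant; // stand-in for a successful trivial constprop
};
struct Block {
  std::vector<Instruction *> insts;
  std::vector<Block *> succs;
};

// Seed the worklist only from blocks reachable from the entry (point 1) and
// fold or drop trivial instructions during the same walk (point 2), so the
// main instcombine loop starts on a smaller input.
std::vector<Instruction *> buildInitialWorklist(Block *Entry) {
  std::vector<Instruction *> worklist;
  std::set<Block *> visited;
  std::vector<Block *> stack{Entry};
  while (!stack.empty()) {
    Block *BB = stack.back();
    stack.pop_back();
    if (!visited.insert(BB).second)
      continue;                         // already seen
    for (Instruction *I : BB->insts) {
      if (I->hasNoUses && I->sideEffectFree)
        continue;                       // trivial DCE: never enqueued
      if (I->foldsToConstant)
        continue;                       // trivial constprop: folded away
      worklist.push_back(I);
    }
    for (Block *Succ : BB->succs)
      stack.push_back(Succ);            // unreachable blocks are never visited
  }
  return worklist;
}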
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28215 91177308-0d34-0410-b5e6-96231b3b80d8