BBNumbers. Instead of using a bidirectional mapping, just use a single
DenseMap. This speeds up mem2reg on 176.gcc by 8%, from 1.3489s to
1.2485s.
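A minimal sketch of the idea (illustrative only, not the actual mem2reg code): a single forward DenseMap, filled lazily, stands in for the old two-way mapping.
#include "llvm/ADT/DenseMap.h"
#include "llvm/BasicBlock.h"
using namespace llvm;
static DenseMap<BasicBlock*, unsigned> BBNumbers;
static unsigned getBBNumber(BasicBlock *BB) {
  unsigned &Num = BBNumbers[BB];  // value-initialized to 0 on first insertion
  if (Num == 0)
    Num = BBNumbers.size();       // hand out a fresh 1-based number
  return Num;
}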
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33940 91177308-0d34-0410-b5e6-96231b3b80d8
the Transforms library. This reduces debug library size by 132 KB, debug
binary size by 376 KB, and reduces link time for llvm tools slightly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33939 91177308-0d34-0410-b5e6-96231b3b80d8
This patch replaces the SymbolTable class with ValueSymbolTable which does
not support type planes. This means that all symbol names in LLVM must now
be unique. The patch addresses the necessary changes to deal with this and
removes code no longer needed as a result. This completes the bulk of the
changes for this PR. Some cleanup patches will follow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33918 91177308-0d34-0410-b5e6-96231b3b80d8
Make the Module's dependent library list use a std::vector instead of a SetVector,
and adjust #includes in .cpp files because SetVector.h is no longer included.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33855 91177308-0d34-0410-b5e6-96231b3b80d8
updating. These were exposed by Devang's recent passmgr changes (with
non-default pass orderings) because now the inliner can be interleaved with
the LCSSA pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33760 91177308-0d34-0410-b5e6-96231b3b80d8
ConstantFoldInstOperands/ConstantFoldCall to take a pointer to an array
of operands + size, instead of an std::vector.
In some cases, switch to using a SmallVector instead of a vector.
This allows us to get rid of some special case gross code that was there
to avoid the cost of constructing a vector.
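A rough sketch of the new calling convention (header path and exact prototype are recalled from memory and may differ): operands are gathered into a stack-allocated SmallVector and handed over as pointer + count, so no std::vector is ever built.
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ConstantFolding.h"   // assumed location of the interface
using namespace llvm;
static Constant *foldIfPossible(Instruction *I) {
  SmallVector<Constant*, 8> Ops;
  for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i) {
    Constant *Op = dyn_cast<Constant>(I->getOperand(i));
    if (!Op) return 0;            // not all operands are constant; give up
    Ops.push_back(Op);
  }
  // Pointer + size instead of a std::vector.
  return ConstantFoldInstOperands(I, &Ops[0], Ops.size());
}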
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33670 91177308-0d34-0410-b5e6-96231b3b80d8
The Module::setEndianness and Module::setPointerSize methods have been
removed. Instead you can get/set the DataLayout. Adjust these accordingly.
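A hedged sketch of the replacement (old enumerator names and the layout string are from memory; the string shown just means little-endian, 32-bit pointers):
void adjustModule(Module &M) {
  // Old: M.setEndianness(Module::LittleEndian); M.setPointerSize(Module::Pointer32);
  // New: both properties are encoded in the module's data layout string.
  M.setDataLayout("e-p:32:32");
}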
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33530 91177308-0d34-0410-b5e6-96231b3b80d8
This is the final patch for this PR. It implements some minor cleanup
in the use of IntegerType, to wit:
1. Type::getIntegerTypeMask -> IntegerType::getBitMask
2. Type::Int*Ty changed to IntegerType* from Type*
3. ConstantInt::getType() returns IntegerType* now, not Type*
This also fixes PR1120.
Patch by Sheng Zhou.
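A hedged sketch of what the cleanup buys (API details recalled from memory): integer types are handled as IntegerType* directly, so integer-specific queries need no casting.
const IntegerType *Ty = Type::Int32Ty;  // Int*Ty are now IntegerType*, not Type*
uint64_t Mask = Ty->getBitMask();       // was Type::getIntegerTypeMask
unsigned Bits = Ty->getBitWidth();      // 32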
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33370 91177308-0d34-0410-b5e6-96231b3b80d8
rename Type::getIntegralTypeMask to Type::getIntegerTypeMask.
This makes naming much more consistent. For example, there are now no longer any
instances of IntegerType that are not considered isInteger! :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33225 91177308-0d34-0410-b5e6-96231b3b80d8
recommended that getBoolValue be replaced with getZExtValue and that
get(bool) be replaced by get(const Type*, uint64_t). This implements
those changes.
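A hedged sketch of the recommended replacements (Int1Ty is assumed to be the boolean type at this point):
// Was: ConstantBool::get(true)
Constant *True = ConstantInt::get(Type::Int1Ty, 1);
// Was: CB->getBoolValue()
bool IsTrue = cast<ConstantInt>(True)->getZExtValue() != 0;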
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33110 91177308-0d34-0410-b5e6-96231b3b80d8
Merge ConstantIntegral and ConstantBool into ConstantInt.
Remove ConstantIntegral and ConstantBool from LLVM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33073 91177308-0d34-0410-b5e6-96231b3b80d8
Take an incremental step towards type plane elimination. This change
separates types from values in the symbol tables by finally making use
of the TypeSymbolTable class. This yields more natural interfaces for
dealing with types and unclutters the SymbolTable class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32956 91177308-0d34-0410-b5e6-96231b3b80d8
This patch replaces the signed and unsigned integer types with signless ones:
1. [US]Byte -> Int8
2. [U]Short -> Int16
3. [U]Int -> Int32
4. [U]Long -> Int64
5. Removal of isSigned, isUnsigned, getSignedVersion, getUnsignedVersion,
and other methods related to signedness. In a few places this required
recovering the signedness information from other sources.
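A hedged before/after sketch of the renaming (old type names from the pre-signless IR):
// Old signed/unsigned pairs collapse to a single signless type each:
//   Type::SByteTy / Type::UByteTy   ->  Type::Int8Ty
//   Type::ShortTy / Type::UShortTy  ->  Type::Int16Ty
//   Type::IntTy   / Type::UIntTy    ->  Type::Int32Ty
//   Type::LongTy  / Type::ULongTy   ->  Type::Int64Ty
const Type *Ty = Type::Int32Ty;  // width only; signedness now lives in the operations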
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32785 91177308-0d34-0410-b5e6-96231b3b80d8
This patch removes the SetCC instructions and replaces them with the ICmp
and FCmp instructions. The SetCondInst instruction has been removed and
replaced with ICmpInst and FCmpInst.
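A hedged sketch of the replacement (constructor signatures recalled from memory and may differ slightly):
Value *emitSignedLessThan(Value *A, Value *B, Instruction *InsertBefore) {
  // Was: new SetCondInst(Instruction::SetLT, A, B, "cmp", InsertBefore)
  return new ICmpInst(ICmpInst::ICMP_SLT, A, B, "cmp", InsertBefore);
  // Floating-point comparisons use FCmpInst with ordered/unordered
  // predicates, e.g. FCmpInst::FCMP_OLT.
}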
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32751 91177308-0d34-0410-b5e6-96231b3b80d8
rework the hacks that had us passing OStream in. We pass in std::ostream*
instead, check for null, and then dispatch to the correct print() method.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32636 91177308-0d34-0410-b5e6-96231b3b80d8
The long-awaited CAST patch. This introduces 12 new instructions into LLVM
to replace the cast instruction. Corresponding changes throughout LLVM are
provided. This passes llvm-test, llvm/test, and SPEC CPUINT2000 with the
exception of 175.vpr which fails only on a slight floating point output
difference.
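For reference, the twelve new cast opcodes, plus a hedged sketch of creating one (factory name and the old constructor recalled from memory):
// Trunc, ZExt, SExt, FPTrunc, FPExt, UIToFP, SIToFP, FPToUI, FPToSI,
// PtrToInt, IntToPtr, BitCast -- each captures one behavior of the old
// all-purpose 'cast' instruction.
Instruction *emitSExt(Value *Val, const Type *DestTy, Instruction *IP) {
  // Was roughly: new CastInst(Val, DestTy, "cast", IP), with the behavior
  // inferred from the source and destination types.
  return CastInst::create(Instruction::SExt, Val, DestTy, "sext", IP);
}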
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31931 91177308-0d34-0410-b5e6-96231b3b80d8
only do these transformations if there are a small number of PHIs.
This speeds up Ptrdist/ks from 2.35s to 2.19s on my Mac Pro.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31853 91177308-0d34-0410-b5e6-96231b3b80d8
This patch converts the old SHR instruction into two instructions,
AShr (Arithmetic) and LShr (Logical). The Shr instructions are no longer
dependent on the sign of their operands.
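A hedged sketch (factory names recalled from memory): the behavior is now explicit in the opcode rather than inferred from operand signedness.
Value *emitAShr(Value *Op0, Value *Op1, Instruction *IP) {
  return BinaryOperator::createAShr(Op0, Op1, "ashr", IP);  // sign bit replicated
}
Value *emitLShr(Value *Op0, Value *Op1, Instruction *IP) {
  return BinaryOperator::createLShr(Op0, Op1, "lshr", IP);  // zero fill
}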
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31542 91177308-0d34-0410-b5e6-96231b3b80d8
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fallout by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31380 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements the first increment for the Signless Types feature.
All changes pertain to removing the ConstantSInt and ConstantUInt classes
in favor of just using ConstantInt.
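A hedged sketch of the change in client code (the era's signed type names are kept, since this predates the signless-types work):
// Was: ConstantSInt::get(Type::IntTy, -1) and ConstantUInt::get(Type::UIntTy, 5)
Constant *NegOne = ConstantInt::get(Type::IntTy, -1);
Constant *Five   = ConstantInt::get(Type::UIntTy, 5);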
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31063 91177308-0d34-0410-b5e6-96231b3b80d8
The critical edge block dominates the dest block if the dest block dominates
all edges other than the one incoming from the critical edge.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30696 91177308-0d34-0410-b5e6-96231b3b80d8
or when splitting loops with a common header into multiple loops. In particular
the old code would always insert the preheader before the old loop header. This
is disastrous in cases where the loop hasn't been rotated. For example, it can
produce code like:
.. outside the loop...
jmp LBB1_2 #bb13.outer
LBB1_1: #bb1
movsd 8(%esp,%esi,8), %xmm1
mulsd (%edi), %xmm1
addsd %xmm0, %xmm1
addl $24, %edi
incl %esi
jmp LBB1_3 #bb13
LBB1_2: #bb13.outer
leal (%edx,%eax,8), %edi
pxor %xmm1, %xmm1
xorl %esi, %esi
LBB1_3: #bb13
movapd %xmm1, %xmm0
cmpl $4, %esi
jl LBB1_1 #bb1
Note that the loop body is actually LBB1_1 + LBB1_3, which means that the
loop now contains an uncond branch WITHIN it to jump around the inserted
loop header (LBB1_2). Doh.
This patch changes the preheader insertion code to insert it in the right
spot, producing this code:
... outside the loop, fall into the header ...
LBB1_1: #bb13.outer
leal (%edx,%eax,8), %esi
pxor %xmm0, %xmm0
xorl %edi, %edi
jmp LBB1_3 #bb13
LBB1_2: #bb1
movsd 8(%esp,%edi,8), %xmm0
mulsd (%esi), %xmm0
addsd %xmm1, %xmm0
addl $24, %esi
incl %edi
LBB1_3: #bb13
movapd %xmm0, %xmm1
cmpl $4, %edi
jl LBB1_2 #bb1
Totally crazy, no branch in the loop! :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30587 91177308-0d34-0410-b5e6-96231b3b80d8
reachable, making it general purpose enough for use by InsertPreheaderForLoop.
Eliminate custom dominfo updating code in InsertPreheaderForLoop, using
UpdateDomInfoForRevectoredPreds instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30586 91177308-0d34-0410-b5e6-96231b3b80d8
Call these from your backend to enjoy setjmp/longjmp goodness, see
lib/Target/IA64/IA64ISelLowering.cpp for an example
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30095 91177308-0d34-0410-b5e6-96231b3b80d8
Not only will this take huge amounts of compile time, the resultant loop nests
won't be useful for optimization. This reduces loopsimplify time on
Transforms/LoopSimplify/2006-08-11-LoopSimplifyLongTime.ll from ~32s to ~0.4s
with a debug build of llvm on a 2.7GHz G5.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29647 91177308-0d34-0410-b5e6-96231b3b80d8
blocks that target loop blocks.
Before, the code was run once per loop, and depended on the number of
predecessors each block in the loop had. Unfortunately, scanning preds can
be really slow when huge numbers of phis exist or when phis with huge numbers
of inputs exist.
Now, the code is run once per function and scans successors instead of preds,
which is far faster. In addition, the new code is simpler and goto-free,
woo.
This change speeds up a nasty testcase Duraid provided me from taking hours to
taking ~72s with a debug build. The functionality this implements is already
tested in the testsuite as Transforms/CodeExtractor/2004-03-13-LoopExtractorCrash.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29644 91177308-0d34-0410-b5e6-96231b3b80d8
down approach, inspired by discussions with Tanya.
This approach is significantly faster, because it does not need dominator
frontiers and it does not insert extraneous unused PHI nodes. For example, on
252.eon, in a release-asserts build, this speeds up LCSSA (which is the slowest
pass in gccas) from 9.14s to 0.74s on my G5. This code is also slightly smaller
and significantly simpler than the old code.
Amusingly, in a normal Release build (which includes the
"assert(L->isLCSSAForm());" assertion), asserting that the result of LCSSA
is in LCSSA form is actually slower than the LCSSA transformation pass
itself on 252.eon. I will see if Loop::isLCSSAForm can be sped up next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29463 91177308-0d34-0410-b5e6-96231b3b80d8
Handle this case, which doesn't require a new callgraph edge. This fixes
a crash compiling MallocBench/gs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29121 91177308-0d34-0410-b5e6-96231b3b80d8
target CG node. This allows the inliner to properly update the callgraph
when using the pruning inliner. The pruning inliner may not copy over all
call sites from a callee to a caller, so the edges corresponding to those
call sites should not be copied over either.
This fixes PR827 and Transforms/Inline/2006-07-12-InlinePruneCGUpdate.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29120 91177308-0d34-0410-b5e6-96231b3b80d8
cases. Ideally, this issue will go away in the future as LCSSA gets smarter
about which Phi nodes it inserts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29076 91177308-0d34-0410-b5e6-96231b3b80d8
LCSSA is still the slowest pass when gccas'ing 252.eon, but now it only takes
39s instead of 289s. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28776 91177308-0d34-0410-b5e6-96231b3b80d8
not handling PHI nodes correctly when determining if a value was live-out.
This patch reduces the number of detected live-out variables in the testcase
from 6565 to 485.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28771 91177308-0d34-0410-b5e6-96231b3b80d8
If a single exit block has multiple predecessors within the loop, it will
appear in the exit blocks list more than once. LCSSA needs to take that into
account so that it doesn't double process that exit block.
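A hedged sketch of the de-duplication (illustrative names only, not the pass's actual code):
#include <set>
std::set<BasicBlock*> Processed;
for (unsigned i = 0, e = ExitBlocks.size(); i != e; ++i) {
  BasicBlock *Exit = ExitBlocks[i];
  if (!Processed.insert(Exit).second)
    continue;                    // this exit block was already handled
  processExitBlock(Exit);        // hypothetical per-exit-block work
}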
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28750 91177308-0d34-0410-b5e6-96231b3b80d8
to link in the implementation. Thanks to Anton Korobeynikov for figuring out
what was going on here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28660 91177308-0d34-0410-b5e6-96231b3b80d8
code (while cloning) it often gets the branch/switch instructions. Since it
knows that edges of the CFG are dead, it need not clone (or even look at)
the obviously dead blocks. This should speed up the inliner substantially on
code where there are lots of inlinable calls to functions with constant
arguments. On C++ code in particular, this kicks in.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28641 91177308-0d34-0410-b5e6-96231b3b80d8
reimplement getValueDominatingFunction to walk the DominanceTree rather than
just searching blindly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28618 91177308-0d34-0410-b5e6-96231b3b80d8
is now theoretically feature-complete. It has not, however, been thoroughly
tested, and is still considered experimental.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28529 91177308-0d34-0410-b5e6-96231b3b80d8
the iterated Dominance Frontier of the loop-closure Phi's. This is the
second phase of the LCSSA pass. The third phase (coming soon) will be to
update all uses of loop variables to use the loop-closure Phi's instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28524 91177308-0d34-0410-b5e6-96231b3b80d8
makes it so that it constant folds instructions on the fly. This is good
for several reasons:
0. Many instructions are constant foldable after inlining, particularly if
inlining a call with constant arguments.
1. Without this, the inliner has to allocate memory for all of the instructions
that can be constant folded, then a subsequent pass has to delete them. This
gets the job done without this extra work.
2. This makes the inliner *pass* a bit more aggressive: in particular, it
partially solves a phase-order issue where the inliner would inline lots
of code that folds away to nothing, yet still judge the resultant function
to be big because of code that was about to disappear. Now that code never exists.
This is the first part of a 2-step process. The second part will be smart
enough to see when this implicit constant folding propagates a constant into
a branch or switch instruction, making CFG edges dead.
This implements Transforms/Inline/inline_constprop.ll
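An illustrative sketch of the on-the-fly folding (helper names and the value-map type are hypothetical, not the actual cloning code): when an instruction's remapped operands are all constants, record the folded constant in the value map instead of cloning the instruction.
for (BasicBlock::const_iterator II = BB->begin(), E = BB->end(); II != E; ++II) {
  const Instruction *I = &*II;
  if (Constant *C = tryConstantFold(I, ValueMap)) {  // hypothetical helper
    ValueMap[I] = C;    // cloned users of I will see the constant instead
    continue;           // nothing to clone, nothing to delete later
  }
  Instruction *NewI = I->clone();
  ValueMap[I] = NewI;
  NewBB->getInstList().push_back(NewI);
}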
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28521 91177308-0d34-0410-b5e6-96231b3b80d8
nondeterminism being bad) could cause some trivial missed optimizations (dead
phi nodes being left around for later passes to clean up).
With this, llvm-gcc4 now bootstraps and correctly compares. I don't know
why I never tried to do it before... :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27984 91177308-0d34-0410-b5e6-96231b3b80d8
This patch is an incremental step towards supporting a flat symbol table.
It de-overloads the intrinsic functions by providing type-specific intrinsics
and arranging for automatically upgrading from the old overloaded name to
the new non-overloaded name. Specifically:
llvm.isunordered -> llvm.isunordered.f32, llvm.isunordered.f64
llvm.sqrt -> llvm.sqrt.f32, llvm.sqrt.f64
llvm.ctpop -> llvm.ctpop.i8, llvm.ctpop.i16, llvm.ctpop.i32, llvm.ctpop.i64
llvm.ctlz -> llvm.ctlz.i8, llvm.ctlz.i16, llvm.ctlz.i32, llvm.ctlz.i64
llvm.cttz -> llvm.cttz.i8, llvm.cttz.i16, llvm.cttz.i32, llvm.cttz.i64
New code should not use the overloaded intrinsic names. Warnings will be
emitted if they are used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25366 91177308-0d34-0410-b5e6-96231b3b80d8
it doesn't contain any calls. This is a fairly common case for C++ code,
so it will probably speed up the inliner marginally in these cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25284 91177308-0d34-0410-b5e6-96231b3b80d8
function was not an alloca, we wouldn't check the entry block for any allocas,
leading to increased stack space in some cases. In practice, allocas are almost
always at the top of the block, so this was never noticed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25280 91177308-0d34-0410-b5e6-96231b3b80d8
has a single def. In this case, look for uses that are dominated by the def
and attempt to rewrite them to directly use the stored value.
This speeds up mem2reg on these values and reduces the number of phi nodes
inserted. This should address PR665.
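An illustrative sketch of the single-def fast path (names are illustrative and same-block ordering checks are omitted): loads dominated by the lone store can read the stored value directly, with no PHI nodes at all.
if (StoreInst *OnlyStore = getSingleStore(AI)) {  // hypothetical helper
  Value *StoredVal = OnlyStore->getOperand(0);
  for (Value::use_iterator UI = AI->use_begin(), E = AI->use_end(); UI != E; ) {
    LoadInst *LI = dyn_cast<LoadInst>(*UI++);
    if (LI && DT.dominates(OnlyStore->getParent(), LI->getParent())) {
      LI->replaceAllUsesWith(StoredVal);  // no PHI needed for this use
      LI->eraseFromParent();
    }
  }
}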
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24411 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for specifying alignment and size of setjmp jmpbufs.
No targets currently do anything with this information, nor is it preserved
in the bytecode representation. That's coming up next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24196 91177308-0d34-0410-b5e6-96231b3b80d8
into the LLVMAnalysis library.
This allows LLVMTransforms and LLVMTransformUtils to be archives and linked
with LLVMAnalysis.a, which provides any missing definitions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24036 91177308-0d34-0410-b5e6-96231b3b80d8
SparcV9 JIT.
2. Make LLVMTransformUtils a relinked object file and always link it before
LLVMAnalysis.a. These two libraries have circular dependencies on each
other, which creates problems when building the SparcV9 JIT. This change
fixes the dependency problems on all platforms with a minimum of fuss.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24023 91177308-0d34-0410-b5e6-96231b3b80d8
pointer marking the end of the list, the zero *must* be cast to the pointer
type. An un-cast zero is a 32-bit int, and at least on x86_64, gcc will
not extend the zero to 64 bits, thus allowing the upper 32 bits to be
random junk.
The new END_WITH_NULL macro may be used to annotate such a function
so that GCC (version 4 or newer) will detect the use of an un-cast zero
at compile time.
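A minimal sketch of the pitfall and the annotation (GetList and the header location are assumptions for illustration only):
#include "llvm/Support/Compiler.h"   // assumed home of END_WITH_NULL
void GetList(const char *First, ...) END_WITH_NULL;  // hypothetical varargs API
void example() {
  GetList("a", "b", 0);                // bad: 0 is a 32-bit int on x86_64
  GetList("a", "b", (const char*)0);   // good: the sentinel is pointer-sized
}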
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23888 91177308-0d34-0410-b5e6-96231b3b80d8