Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n); however,
ScalarEvolution currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code.
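To see why the simple form is correct, recall that a chain of recurrences
{c0,+,c1,+,c2} evaluates at iteration i to c0 + c1*C(i,1) + c2*C(i,2).
The following is a small illustrative sketch (in Python, not part of LLVM)
checking that {1,+,3,+,2} at iteration i equals (i+1)^2, so its final value
after %n iterations is %n * %n:

```python
# Evaluate a chain of recurrences {c0,+,c1,+,c2,...} at iteration i,
# using the standard closed form sum_k c_k * C(i, k).
from math import comb

def chrec_at(coeffs, i):
    """Value of the chrec at loop iteration i (i = 0 on first entry)."""
    return sum(c * comb(i, k) for k, c in enumerate(coeffs))

# {1,+,3,+,2} at iteration i is 1 + 3*C(i,1) + 2*C(i,2)
#           = 1 + 3i + i*(i-1) = (i+1)^2.
for i in range(16):
    assert chrec_at([1, 3, 2], i) == (i + 1) ** 2

# The value after n iterations (i.e. at i = n-1) is therefore n*n,
# matching the simple (%n * %n) form suggested above.
n = 7
assert chrec_at([1, 3, 2], n - 1) == n * n
```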

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution
is forming this expression:

((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) + (-1 * (trunc i64 undef to i32)))

This could be folded to

(-1 * (trunc i64 undef to i32))
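The fold is valid because trunc distributes over multiplication, so
(trunc i64 (-1 * %arg5) to i32) equals (-1 * (trunc i64 %arg5 to i32)) and
the first two terms cancel. A small Python sketch (modeling i32 truncation
as reduction mod 2^32; the names are illustrative only) checks the
cancellation:

```python
# Model trunc-to-i32 as modular reduction: trunc32(-x) + trunc32(x)
# is congruent to 0 mod 2^32, so those two terms always cancel.
MASK32 = (1 << 32) - 1

def trunc32(x):
    return x & MASK32

def expr(arg5, undef):
    # ((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32)
    #  + (-1 * (trunc i64 undef to i32))), evaluated in i32 arithmetic.
    return (trunc32(-arg5) + trunc32(arg5) - trunc32(undef)) & MASK32

def folded(arg5, undef):
    # (-1 * (trunc i64 undef to i32))
    return (-trunc32(undef)) & MASK32

for arg5 in (0, 1, 12345, 2**63 - 1):
    for undef in (0, 42):  # any choice for undef works; only arg5 cancels
        assert expr(arg5, undef) == folded(arg5, undef)
```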

//===---------------------------------------------------------------------===//