llvm-6502/lib/Analysis
Owen Anderson f9a26b89f8 What the loop unroller cares about is not avoiding every loop that contains a call, but
avoiding loops that contain calls which would be better off inlined.  This mostly comes up when
an interleaved devirtualization pass has devirtualized a call that the inliner will inline on a
later pass.  Thus, rather than blocking all loops containing calls, add a metric for
"inline candidate calls" and block only loops containing those.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113535 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-09 20:32:23 +00:00
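
A rough sketch of the heuristic described in the commit above, using made-up struct and
field names and current LLVM headers (the real change extended the code metrics shared by
the inliner and the loop unroller; this is only an illustration, not that code):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/Support/Casting.h"

  // Hypothetical metrics collector: instead of counting every call (and
  // refusing to unroll any loop that contains one), separately count calls
  // that look like inlining candidates and gate unrolling on those.
  struct UnrollMetricsSketch {
    unsigned NumCalls = 0;
    unsigned NumInlineCandidateCalls = 0;

    void analyzeBasicBlock(const llvm::BasicBlock &BB) {
      for (const llvm::Instruction &I : BB)
        if (auto *CI = llvm::dyn_cast<llvm::CallInst>(&I)) {
          ++NumCalls;
          // A direct call to a function whose body is visible is likely to
          // be inlined on a later pass; unrolling would duplicate it first.
          const llvm::Function *Callee = CI->getCalledFunction();
          if (Callee && !Callee->isDeclaration())
            ++NumInlineCandidateCalls;
        }
    }
  };

  // The unroller would then reject a loop only when
  // NumInlineCandidateCalls > 0, rather than when NumCalls > 0.
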
IPA dead method. 2010-09-04 18:19:16 +00:00
AliasAnalysis.cpp Extend the getDependence query with support for PHI translation. 2010-09-09 18:37:31 +00:00
AliasAnalysisCounter.cpp
AliasAnalysisEvaluator.cpp
AliasDebugger.cpp
AliasSetTracker.cpp Don't print two "0x" prefixes. Use a raw_ostream overload instead of llvm::format. 2010-08-30 14:46:53 +00:00
Analysis.cpp
BasicAliasAnalysis.cpp Extend the getDependence query with support for PHI translation. 2010-09-09 18:37:31 +00:00
CaptureTracking.cpp
CFGPrinter.cpp zap dead code. 2010-09-04 18:12:00 +00:00
CMakeLists.txt
ConstantFolding.cpp
DbgInfoPrinter.cpp Convert DbgInfoPrinter to use errs() instead of outs(). 2010-08-20 18:03:05 +00:00
DebugInfo.cpp Let FE use derived types for DW_TAG_friend. 2010-08-23 23:16:25 +00:00
DomPrinter.cpp
InlineCost.cpp What the loop unroller cares about, rather than just not unrolling loops with calls, is 2010-09-09 20:32:23 +00:00
InstCount.cpp
InstructionSimplify.cpp
Interval.cpp
IntervalPartition.cpp
IVUsers.cpp stop forcing a noop AssemblyAnnotationWriter to silence #uses 2010-09-02 23:03:10 +00:00
LazyValueInfo.cpp Clean up some of the PassRegistry implementation, and pImpl-ize it to reduce #include clutter 2010-09-07 19:16:25 +00:00
LibCallAliasAnalysis.cpp
LibCallSemantics.cpp
Lint.cpp zap dead code. 2010-09-04 18:12:00 +00:00
LiveValues.cpp
Loads.cpp
LoopDependenceAnalysis.cpp
LoopInfo.cpp pull a simple method out of LICM into a new 2010-09-06 01:05:37 +00:00
LoopPass.cpp zap dead code. 2010-09-04 18:12:00 +00:00
Makefile
MemoryBuiltins.cpp
MemoryDependenceAnalysis.cpp cleanup some of the lifetime/invariant marker stuff, add a big fixme. 2010-09-06 03:58:04 +00:00
ModuleDebugInfoPrinter.cpp
PHITransAddr.cpp
PointerTracking.cpp
PostDominators.cpp
ProfileEstimatorPass.cpp Now that PassInfo and Pass::ID have been separated, move the rest of the passes over to the new registration API. 2010-08-23 17:52:01 +00:00
ProfileInfo.cpp
ProfileInfoLoader.cpp
ProfileInfoLoaderPass.cpp Now that PassInfo and Pass::ID have been separated, move the rest of the passes over to the new registration API. 2010-08-23 17:52:01 +00:00
ProfileVerifierPass.cpp
README.txt
RegionInfo.cpp
RegionPrinter.cpp
ScalarEvolution.cpp Reapply r112432, now that the real problem is addressed. 2010-08-31 22:53:17 +00:00
ScalarEvolutionAliasAnalysis.cpp
ScalarEvolutionExpander.cpp
ScalarEvolutionNormalization.cpp Disable the asserts that check that normalization is perfectly 2010-09-03 22:12:56 +00:00
SparsePropagation.cpp
Trace.cpp
TypeBasedAliasAnalysis.cpp
ValueTracking.cpp fix PR8063, a crash in globalopt in the malloc analysis code. 2010-09-05 17:20:46 +00:00

Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n); however,
ScalarEvolution currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code.
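
For reference, the simpler form can be checked by hand: the add recurrence starts at 1 and
steps by 3, with the step itself growing by 2 each iteration, so at iteration i its value is
1 + 3i + i(i-1) = (i+1)^2, which is %n * %n after %n - 1 iterations.  A standalone check of
that arithmetic (plain C++, not ScalarEvolution code; reading the exit value after %n - 1
iterations is an assumption about the test):

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint64_t n = 1; n < 1000; ++n) {
      // Evaluate {1,+,3,+,2}: the value starts at 1, the step starts at 3
      // and grows by 2 every iteration.
      uint64_t value = 1, step = 3;
      for (uint64_t i = 1; i < n; ++i) {
        value += step;
        step += 2;
      }
      assert(value == n * n);   // matches the claimed (%n * %n)
    }
    return 0;
  }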

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution is
forming this expression:

((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) + (-1 * (trunc i64 undef to i32)))

This could be folded to

(-1 * (trunc i64 undef to i32))
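
because the first two terms cancel: truncation to i32 commutes with negation and addition,
so (trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) wraps to 0 for any i64 value,
leaving only the last term.  A standalone check of that identity (plain C++, not SCEV code):

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint64_t samples[] = {0, 1, 0x123456789abcdef0ULL, ~uint64_t(0)};
    for (uint64_t x : samples) {
      uint32_t t    = static_cast<uint32_t>(x);      // trunc i64 %arg5 to i32
      uint32_t negt = static_cast<uint32_t>(0 - x);  // trunc i64 (-1 * %arg5) to i32
      assert(static_cast<uint32_t>(t + negt) == 0);  // the two terms cancel
    }
    return 0;
  }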

//===---------------------------------------------------------------------===//