llvm-6502/lib/Analysis
Eric Christopher acc8f2d938 Revert "Address Duncan's CR request:"
This reverts commit 20a05be15e. (svn rev 138340)

Conflicts:

	test/Transforms/InstCombine/bitcast.ll

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138366 91177308-0d34-0410-b5e6-96231b3b80d8
2011-08-23 20:11:10 +00:00
IPA Silence a bunch (but not all) "variable written but not read" warnings 2011-08-12 14:54:45 +00:00
AliasAnalysis.cpp Misc analysis passes that need to be aware of atomic load/store. 2011-08-15 20:54:19 +00:00
AliasAnalysisCounter.cpp
AliasAnalysisEvaluator.cpp
AliasDebugger.cpp
AliasSetTracker.cpp Atomic load/store support in LICM. 2011-08-15 20:52:09 +00:00
Analysis.cpp C API functions must be able to see their extern "C" definitions, or it will be impossible to call them from C. 2011-08-19 01:36:54 +00:00
BasicAliasAnalysis.cpp
BlockFrequencyInfo.cpp
BranchProbabilityInfo.cpp
CaptureTracking.cpp
CFGPrinter.cpp
CMakeLists.txt
ConstantFolding.cpp Revert "Address Duncan's CR request:" 2011-08-23 20:11:10 +00:00
DbgInfoPrinter.cpp
DebugInfo.cpp Do not use named md nodes to track variables that are completely optimized. This does not scale while doing LTO with debug info. New approach is to include list of variables in the subprogram info directly. 2011-08-19 23:28:12 +00:00
DIBuilder.cpp Do not use named md nodes to track variables that are completely optimized. This does not scale while doing LTO with debug info. New approach is to include list of variables in the subprogram info directly. 2011-08-19 23:28:12 +00:00
DominanceFrontier.cpp
DomPrinter.cpp
InlineCost.cpp
InstCount.cpp
InstructionSimplify.cpp Revert r137781; I agree with Duncan's comment that the situation in question is clearly impossible given the current structure of the code. 2011-08-17 19:31:49 +00:00
Interval.cpp
IntervalPartition.cpp
IVUsers.cpp
LazyValueInfo.cpp
LibCallAliasAnalysis.cpp
LibCallSemantics.cpp
Lint.cpp
Loads.cpp Add some comments here because the lack of a check for volatile/atomic here is a bit unusual. 2011-08-15 21:56:39 +00:00
LoopDependenceAnalysis.cpp Misc analysis passes that need to be aware of atomic load/store. 2011-08-15 20:54:19 +00:00
LoopInfo.cpp Make a bunch of symbols private. 2011-08-19 01:42:18 +00:00
LoopPass.cpp Reapplying r136844. 2011-08-10 23:22:57 +00:00
Makefile
MemDepPrinter.cpp Misc analysis passes that need to be aware of atomic load/store. 2011-08-15 20:54:19 +00:00
MemoryBuiltins.cpp
MemoryDependenceAnalysis.cpp Misc analysis passes that need to be aware of atomic load/store. 2011-08-15 20:54:19 +00:00
ModuleDebugInfoPrinter.cpp
NoAliasAnalysis.cpp
PathNumbering.cpp
PathProfileInfo.cpp
PathProfileVerifier.cpp
PHITransAddr.cpp
PostDominators.cpp
ProfileEstimatorPass.cpp
ProfileInfo.cpp
ProfileInfoLoader.cpp
ProfileInfoLoaderPass.cpp
ProfileVerifierPass.cpp
README.txt
RegionInfo.cpp
RegionPass.cpp
RegionPrinter.cpp
ScalarEvolution.cpp Allow loop unrolling to get known trip counts from ScalarEvolution. 2011-08-11 23:36:16 +00:00
ScalarEvolutionAliasAnalysis.cpp
ScalarEvolutionExpander.cpp Use the getFirstInsertionPt() method instead of getFirstNonPHI + an 'isa<>' 2011-08-16 20:45:24 +00:00
ScalarEvolutionNormalization.cpp
SparsePropagation.cpp
Trace.cpp
TypeBasedAliasAnalysis.cpp
ValueTracking.cpp

Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n); however,
ScalarEvolution currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code.
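
The closed form of an add recurrence {c0,+,c1,+,c2} at iteration i is
c0 + c1*C(i,1) + c2*C(i,2), so {1,+,3,+,2} is (i+1)^2 after i backedges;
if the backedge is taken (%n - 1) times, as the (-2 + %n) and (-1 + %n)
factors above suggest, the exit value is exactly (%n * %n).  The sketch
below is a standalone C++ check of that closed form (illustrative only,
not part of LLVM):

  // Step the recurrence {1,+,3,+,2} by hand and check that its value
  // after i backedges is (i+1)^2.
  #include <cassert>
  #include <cstdint>

  int main() {
    uint64_t r = 1;     // current value of {1,+,3,+,2}
    uint64_t step = 3;  // current value of the inner recurrence {3,+,2}
    for (uint64_t i = 0; i < 1000; ++i) {
      assert(r == (i + 1) * (i + 1));  // closed form: (i+1)^2
      r += step;                       // advance the outer recurrence
      step += 2;                       // advance the inner recurrence
    }
    return 0;
  }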

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll,
ScalarEvolution is forming this expression:

  ((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) + (-1 * (trunc i64 undef to i32)))

Because truncation commutes with the multiplication by -1, the first two
operands cancel, so this could be folded to

  (-1 * (trunc i64 undef to i32))
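
The sketch below is a standalone C++ check of that cancellation
(illustrative only, not part of LLVM): in two's-complement arithmetic,
trunc i64 (-1 * x) to i32 plus trunc i64 x to i32 always wraps to zero,
so only the undef term survives:

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint64_t vals[] = {0, 1, 0x123456789abcdef0ull,
                             0x8000000000000000ull, 0xffffffffffffffffull};
    for (uint64_t x : vals) {
      uint32_t a = static_cast<uint32_t>(0 - x);  // trunc i64 (-1 * x) to i32
      uint32_t b = static_cast<uint32_t>(x);      // trunc i64 x to i32
      assert(static_cast<uint32_t>(a + b) == 0);  // the two terms cancel
    }
    return 0;
  }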

//===---------------------------------------------------------------------===//