Mirror of https://github.com/c64scene-ar/llvm-6502.git
Commit 37aa33bc11
-stable-loops enables a new algorithm for generating the Loop forest. It differs from the original algorithm in a few respects:

- Not determined by use-list order.
- Initially guarantees RPO order of blocks and subloops.
- Linear in the number of CFG edges.
- Nonrecursive.

I didn't want to change the LoopInfo API yet, so the block lists are still inclusive. This seems strange to me, and it means that building LoopInfo is not strictly linear, but it may not be a problem in practice. At least the block lists start out in RPO order now. In the future we may add an attribute or wrapper analysis that allows other passes to assume RPO order.

The primary motivation of this work was not to optimize LoopInfo, but to allow reproducing performance issues by decomposing the compilation stages. I'm often unable to do this with the current LoopInfo, because the loop tree order determines Loop pass order. Serializing the IR tends to invert the order, which reverses the optimization order. This makes it nearly impossible to debug interdependent loop optimizations such as LSR. I also believe this will provide more stable performance results across time.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158790 91177308-0d34-0410-b5e6-96231b3b80d8
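The "nonrecursive" and "linear in the number of CFG edges" points above come from discovering each loop body with an iterative backward walk from a backedge. The sketch below illustrates only that walk; it is not LLVM's implementation. It assumes the loop header and the backedge source (latch) are already known (LoopInfo finds backedges via the dominator tree) and it omits subloop nesting.

```cpp
// Minimal sketch (not LLVM code): given a natural loop's header and the source
// of one backedge (the latch), collect the loop's blocks with an iterative,
// nonrecursive walk over predecessor edges.
#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <vector>

using Block = std::string;

// Predecessor lists for a tiny CFG:
//   entry -> header -> body -> latch -> header (backedge), latch -> exit.
std::map<Block, std::vector<Block>> Preds = {
    {"entry",  {}},
    {"header", {"entry", "latch"}},
    {"body",   {"header"}},
    {"latch",  {"body"}},
    {"exit",   {"latch"}},
};

// Collect every block that reaches the latch without passing through the
// header; together with the header these form the natural loop.
std::set<Block> collectLoopBlocks(const Block &Header, const Block &Latch) {
  std::set<Block> InLoop = {Header};   // pre-seed so the walk stops at the header
  std::vector<Block> Worklist = {Latch};
  while (!Worklist.empty()) {
    Block B = Worklist.back();
    Worklist.pop_back();
    if (!InLoop.insert(B).second)
      continue;                        // already visited: don't rescan its edges
    for (const Block &P : Preds[B])    // walk backward over CFG edges
      Worklist.push_back(P);
  }
  return InLoop;
}

int main() {
  // Prints the loop's blocks (body, header, latch in set order);
  // entry and exit are correctly excluded.
  for (const Block &B : collectLoopBlocks("header", "latch"))
    std::printf("%s\n", B.c_str());
}
```

Each block's predecessor edges are scanned at most once (the `continue` guard), so the walk is bounded by the number of CFG edges, which is where the linear bound comes from.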
Directory listing:

- IPA
- AliasAnalysis.cpp
- AliasAnalysisCounter.cpp
- AliasAnalysisEvaluator.cpp
- AliasDebugger.cpp
- AliasSetTracker.cpp
- Analysis.cpp
- BasicAliasAnalysis.cpp
- BlockFrequencyInfo.cpp
- BranchProbabilityInfo.cpp
- CaptureTracking.cpp
- CFGPrinter.cpp
- CMakeLists.txt
- CodeMetrics.cpp
- ConstantFolding.cpp
- DbgInfoPrinter.cpp
- DebugInfo.cpp
- DIBuilder.cpp
- DominanceFrontier.cpp
- DomPrinter.cpp
- InlineCost.cpp
- InstCount.cpp
- InstructionSimplify.cpp
- Interval.cpp
- IntervalPartition.cpp
- IVUsers.cpp
- LazyValueInfo.cpp
- LibCallAliasAnalysis.cpp
- LibCallSemantics.cpp
- Lint.cpp
- LLVMBuild.txt
- Loads.cpp
- LoopDependenceAnalysis.cpp
- LoopInfo.cpp
- LoopPass.cpp
- Makefile
- MemDepPrinter.cpp
- MemoryBuiltins.cpp
- MemoryDependenceAnalysis.cpp
- ModuleDebugInfoPrinter.cpp
- NoAliasAnalysis.cpp
- PathNumbering.cpp
- PathProfileInfo.cpp
- PathProfileVerifier.cpp
- PHITransAddr.cpp
- PostDominators.cpp
- ProfileEstimatorPass.cpp
- ProfileInfo.cpp
- ProfileInfoLoader.cpp
- ProfileInfoLoaderPass.cpp
- ProfileVerifierPass.cpp
- README.txt
- RegionInfo.cpp
- RegionPass.cpp
- RegionPrinter.cpp
- ScalarEvolution.cpp
- ScalarEvolutionAliasAnalysis.cpp
- ScalarEvolutionExpander.cpp
- ScalarEvolutionNormalization.cpp
- SparsePropagation.cpp
- Trace.cpp
- TypeBasedAliasAnalysis.cpp
- ValueTracking.cpp
README.txt:

Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n); ScalarEvolution, however, currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic, which is very inefficient when expanded into code.

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution is forming this expression:

  ((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) + (-1 * (trunc i64 undef to i32)))

This could be folded to

  (-1 * (trunc i64 undef to i32))

//===---------------------------------------------------------------------===//
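Regarding the first item: the simpler exit value follows from the closed form of a chain of recurrences, {A,+,B,+,C}(i) = A + B·C(i,1) + C·C(i,2). A sketch of the derivation, assuming (as the (%n * %n) claim implies) that the exit value is read after %n iterations, i.e. at i = %n - 1:

```latex
\begin{align*}
\{1,{+},3,{+},2\}(i) &= 1 + 3\binom{i}{1} + 2\binom{i}{2} \\
                     &= 1 + 3i + i(i-1) \\
                     &= (i+1)^2 .
\end{align*}
```

Substituting i = %n - 1 gives %n², i.e. (%n * %n), with no i65 arithmetic. For the second item, the fold works because truncation commutes with add and mul in modular arithmetic, so (trunc i64 (-1 * %arg5) to i32) and (trunc i64 %arg5 to i32) cancel, leaving only (-1 * (trunc i64 undef to i32)).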