Mirror of https://github.com/c64scene-ar/llvm-6502.git (synced 2024-12-14 11:32:34 +00:00)

Commit 7c7b72a066:
This patch adds limited support in ValueTracking for inferring known bits of a value from conditional expressions which must be true to reach the instruction we're trying to optimize. At this time, the feature is off by default. Once landed, I'm hoping for feedback from others on both profitability and compile time impact.

Forms of conditional value propagation have been tried in LLVM before and have failed due to compile time problems. In an attempt to sidestep that, this patch only considers conditions where the edge leaving the branch dominates the context instruction. It does not attempt full dataflow. Even with that restriction, it handles many interesting cases:

* Early exits from functions
* Early exits from loops (for context instructions in the loop and after the check)
* Conditions which control entry into loops, including multi-version loops (such as those produced during vectorization, IRCE, loop unswitch, etc.)

Possible applications include optimizing using information provided by constructs such as preconditions, assumptions, null checks, and range checks.

This patch implements two approaches to the problem that need further benchmarking. Approach 1 is to directly walk the dominator tree looking for interesting conditions. Approach 2 is to inspect other uses of the value being queried for interesting comparisons. From initial benchmarking, it appears that Approach 2 is faster than Approach 1, but this needs to be further validated.

Differential Revision: http://reviews.llvm.org/D7708

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231879 91177308-0d34-0410-b5e6-96231b3b80d8
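To illustrate the idea, here is a minimal sketch (not code from the patch; the function, block names, and constants are hypothetical): a branch whose true edge dominates the context instruction makes the branch condition usable as a fact there.

```llvm
; Sketch only: the true edge of %cmp dominates %in_range, so %x u< 16
; must hold in that block. With this feature enabled, computeKnownBits
; can infer the top 28 bits of %x are zero, so the 'and' is a no-op.
define i32 @clamped(i32 %x) {
entry:
  %cmp = icmp ult i32 %x, 16
  br i1 %cmp, label %in_range, label %early_exit

in_range:
  ; Reached only via the true edge of %cmp.
  %masked = and i32 %x, 15        ; foldable to %x given the known bits
  ret i32 %masked

early_exit:                       ; the "early exit" pattern listed above
  ret i32 0
}
```

This mirrors the early-exit case from the list above: the guard's false edge leaves the function, so every instruction past the check can assume the condition holds.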
Directory listing:

IPA/
AliasAnalysis.cpp
AliasAnalysisCounter.cpp
AliasAnalysisEvaluator.cpp
AliasDebugger.cpp
AliasSetTracker.cpp
Analysis.cpp
AssumptionCache.cpp
BasicAliasAnalysis.cpp
BlockFrequencyInfo.cpp
BlockFrequencyInfoImpl.cpp
BranchProbabilityInfo.cpp
CaptureTracking.cpp
CFG.cpp
CFGPrinter.cpp
CFLAliasAnalysis.cpp
CGSCCPassManager.cpp
CMakeLists.txt
CodeMetrics.cpp
ConstantFolding.cpp
CostModel.cpp
Delinearization.cpp
DependenceAnalysis.cpp
DominanceFrontier.cpp
DomPrinter.cpp
InstCount.cpp
InstructionSimplify.cpp
Interval.cpp
IntervalPartition.cpp
IVUsers.cpp
LazyCallGraph.cpp
LazyValueInfo.cpp
LibCallAliasAnalysis.cpp
LibCallSemantics.cpp
Lint.cpp
LLVMBuild.txt
Loads.cpp
LoopAccessAnalysis.cpp
LoopInfo.cpp
LoopPass.cpp
Makefile
MemDepPrinter.cpp
MemDerefPrinter.cpp
MemoryBuiltins.cpp
MemoryDependenceAnalysis.cpp
ModuleDebugInfoPrinter.cpp
NoAliasAnalysis.cpp
PHITransAddr.cpp
PostDominators.cpp
PtrUseVisitor.cpp
README.txt
RegionInfo.cpp
RegionPass.cpp
RegionPrinter.cpp
ScalarEvolution.cpp
ScalarEvolutionAliasAnalysis.cpp
ScalarEvolutionExpander.cpp
ScalarEvolutionNormalization.cpp
ScopedNoAliasAA.cpp
SparsePropagation.cpp
StratifiedSets.h
TargetLibraryInfo.cpp
TargetTransformInfo.cpp
Trace.cpp
TypeBasedAliasAnalysis.cpp
ValueTracking.cpp
Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n); however,
ScalarEvolution currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) *
  (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code. (A short derivation of
the (%n * %n) closed form appears at the end of this file.)

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution is
forming this expression:

  ((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) +
   (-1 * (trunc i64 undef to i32)))

This could be folded to

  (-1 * (trunc i64 undef to i32))

//===---------------------------------------------------------------------===//
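As a sanity check on the first opportunity, here is a short derivation of the (%n * %n) closed form. It is my own sketch, not from the README, and it assumes the standard chain-of-recurrences semantics, where {a,+,b,+,c}<loop> takes the value a + b·i + c·C(i,2) at iteration i, plus one assumption the README does not state: that the exit value is read at i = n - 1 (writing n for %n).

```latex
\[
r(i) = 1 + 3i + 2\binom{i}{2} = 1 + 3i + i(i-1) = (i+1)^2,
\qquad
r(n-1) = n^2 .
\]
```

The second fold is justified because truncation commutes with addition and with multiplication by -1: (trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) sums to zero, leaving only (-1 * (trunc i64 undef to i32)).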