llvm-6502/unittests/IR/PassManagerTest.cpp

//===- llvm/unittest/IR/PassManager.cpp - PassManager tests ---------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
#include "llvm/AsmParser/Parser.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Support/SourceMgr.h"
#include "gtest/gtest.h"
using namespace llvm;
namespace {
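/// \brief A trivial function analysis used to exercise the analysis manager:
/// it counts the instructions in a function and records how many times it has
/// been run.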
class TestFunctionAnalysis {
public:
struct Result {
Result(int Count) : InstructionCount(Count) {}
int InstructionCount;
};
/// \brief Returns an opaque, unique ID for this pass type.
static void *ID() { return (void *)&PassID; }
TestFunctionAnalysis(int &Runs) : Runs(Runs) {}
/// \brief Run the analysis pass over the function and return a result.
Result run(Function *F, FunctionAnalysisManager *AM) {
++Runs;
int Count = 0;
for (Function::iterator BBI = F->begin(), BBE = F->end(); BBI != BBE; ++BBI)
for (BasicBlock::iterator II = BBI->begin(), IE = BBI->end(); II != IE;
++II)
++Count;
return Result(Count);
}
private:
/// \brief Private static data to provide unique ID.
static char PassID;
int &Runs;
};
char TestFunctionAnalysis::PassID;
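/// \brief A trivial module analysis: counts the functions in the module and
/// records how many times it has been run.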
class TestModuleAnalysis {
public:
struct Result {
Result(int Count) : FunctionCount(Count) {}
int FunctionCount;
};
static void *ID() { return (void *)&PassID; }
TestModuleAnalysis(int &Runs) : Runs(Runs) {}
Result run(Module *M, ModuleAnalysisManager *AM) {
++Runs;
int Count = 0;
for (Module::iterator I = M->begin(), E = M->end(); I != E; ++I)
++Count;
return Result(Count);
}
private:
static char PassID;
int &Runs;
};
char TestModuleAnalysis::PassID;
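/// \brief A module transform pass that counts its runs and preserves no
/// analyses, so every cached analysis is invalidated after it runs.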
struct TestModulePass {
TestModulePass(int &RunCount) : RunCount(RunCount) {}
PreservedAnalyses run(Module *M) {
++RunCount;
return PreservedAnalyses::none();
}
static StringRef name() { return "TestModulePass"; }
int &RunCount;
};
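/// \brief A no-op module transform pass that preserves all analyses.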
struct TestPreservingModulePass {
PreservedAnalyses run(Module *M) { return PreservedAnalyses::all(); }
static StringRef name() { return "TestPreservingModulePass"; }
};
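/// \brief A module transform pass that forces the module analysis to run but
/// preserves only the function analysis manager proxy, so cached function
/// analysis results survive the module-level invalidation.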
struct TestMinPreservingModulePass {
PreservedAnalyses run(Module *M, ModuleAnalysisManager *AM) {
PreservedAnalyses PA;
// Force running an analysis.
(void)AM->getResult<TestModuleAnalysis>(M);
PA.preserve<FunctionAnalysisManagerModuleProxy>();
return PA;
}
static StringRef name() { return "TestMinPreservingModulePass"; }
};
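/// \brief A function transform pass that counts its runs, accumulates the
/// instruction count from the function analysis, and adds the function count
/// from any cached module analysis result. It can optionally be restricted to
/// using cached analysis results only.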
struct TestFunctionPass {
TestFunctionPass(int &RunCount, int &AnalyzedInstrCount,
int &AnalyzedFunctionCount,
bool OnlyUseCachedResults = false)
: RunCount(RunCount), AnalyzedInstrCount(AnalyzedInstrCount),
AnalyzedFunctionCount(AnalyzedFunctionCount),
OnlyUseCachedResults(OnlyUseCachedResults) {}
PreservedAnalyses run(Function *F, FunctionAnalysisManager *AM) {
++RunCount;
const ModuleAnalysisManager &MAM =
AM->getResult<ModuleAnalysisManagerFunctionProxy>(F).getManager();
if (TestModuleAnalysis::Result *TMA =
MAM.getCachedResult<TestModuleAnalysis>(F->getParent()))
AnalyzedFunctionCount += TMA->FunctionCount;
if (OnlyUseCachedResults) {
// Hack to force the use of the cached interface.
if (TestFunctionAnalysis::Result *AR =
AM->getCachedResult<TestFunctionAnalysis>(F))
AnalyzedInstrCount += AR->InstructionCount;
} else {
// Typical path just runs the analysis as needed.
TestFunctionAnalysis::Result &AR = AM->getResult<TestFunctionAnalysis>(F);
AnalyzedInstrCount += AR.InstructionCount;
}
return PreservedAnalyses::all();
}
static StringRef name() { return "TestFunctionPass"; }
int &RunCount;
int &AnalyzedInstrCount;
int &AnalyzedFunctionCount;
bool OnlyUseCachedResults;
};
// A test function pass that invalidates all function analyses for a function
// with a specific name.
struct TestInvalidationFunctionPass {
TestInvalidationFunctionPass(StringRef FunctionName) : Name(FunctionName) {}
PreservedAnalyses run(Function *F) {
return F->getName() == Name ? PreservedAnalyses::none()
: PreservedAnalyses::all();
}
static StringRef name() { return "TestInvalidationFunctionPass"; }
StringRef Name;
};
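/// \brief Parse a module from a string of IR using the global context.
/// Returns null if the string fails to parse.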
Module *parseIR(const char *IR) {
LLVMContext &C = getGlobalContext();
SMDiagnostic Err;
return parseAssemblyString(IR, Err, C).release();
}
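/// \brief Test fixture providing a module with three functions: @f, which
/// calls @g and @h, plus the empty @g and @h (five instructions in total).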
class PassManagerTest : public ::testing::Test {
protected:
std::unique_ptr<Module> M;
public:
PassManagerTest()
: M(parseIR("define void @f() {\n"
"entry:\n"
" call void @g()\n"
" call void @h()\n"
" ret void\n"
"}\n"
"define void @g() {\n"
" ret void\n"
"}\n"
"define void @h() {\n"
" ret void\n"
"}\n")) {}
};
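// Exercise the PreservedAnalyses set directly: default, none() and all()
// construction, copy and move assignment, explicit preservation, and
// intersection.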
TEST_F(PassManagerTest, BasicPreservedAnalyses) {
PreservedAnalyses PA1 = PreservedAnalyses();
EXPECT_FALSE(PA1.preserved<TestFunctionAnalysis>());
EXPECT_FALSE(PA1.preserved<TestModuleAnalysis>());
PreservedAnalyses PA2 = PreservedAnalyses::none();
EXPECT_FALSE(PA2.preserved<TestFunctionAnalysis>());
EXPECT_FALSE(PA2.preserved<TestModuleAnalysis>());
PreservedAnalyses PA3 = PreservedAnalyses::all();
EXPECT_TRUE(PA3.preserved<TestFunctionAnalysis>());
EXPECT_TRUE(PA3.preserved<TestModuleAnalysis>());
PreservedAnalyses PA4 = PA1;
EXPECT_FALSE(PA4.preserved<TestFunctionAnalysis>());
EXPECT_FALSE(PA4.preserved<TestModuleAnalysis>());
PA4 = PA3;
EXPECT_TRUE(PA4.preserved<TestFunctionAnalysis>());
EXPECT_TRUE(PA4.preserved<TestModuleAnalysis>());
PA4 = std::move(PA2);
EXPECT_FALSE(PA4.preserved<TestFunctionAnalysis>());
EXPECT_FALSE(PA4.preserved<TestModuleAnalysis>());
PA4.preserve<TestFunctionAnalysis>();
EXPECT_TRUE(PA4.preserved<TestFunctionAnalysis>());
EXPECT_FALSE(PA4.preserved<TestModuleAnalysis>());
PA1.preserve<TestModuleAnalysis>();
EXPECT_FALSE(PA1.preserved<TestFunctionAnalysis>());
EXPECT_TRUE(PA1.preserved<TestModuleAnalysis>());
PA1.preserve<TestFunctionAnalysis>();
EXPECT_TRUE(PA1.preserved<TestFunctionAnalysis>());
EXPECT_TRUE(PA1.preserved<TestModuleAnalysis>());
PA1.intersect(PA4);
EXPECT_TRUE(PA1.preserved<TestFunctionAnalysis>());
EXPECT_FALSE(PA1.preserved<TestModuleAnalysis>());
}
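// End-to-end test: build a module pass pipeline containing several function
// pass managers and verify the pass run counts and the analysis caching and
// invalidation behavior.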
TEST_F(PassManagerTest, Basic) {
FunctionAnalysisManager FAM;
int FunctionAnalysisRuns = 0;
FAM.registerPass(TestFunctionAnalysis(FunctionAnalysisRuns));
ModuleAnalysisManager MAM;
int ModuleAnalysisRuns = 0;
MAM.registerPass(TestModuleAnalysis(ModuleAnalysisRuns));
MAM.registerPass(FunctionAnalysisManagerModuleProxy(FAM));
FAM.registerPass(ModuleAnalysisManagerFunctionProxy(MAM));
ModulePassManager MPM;
// Count the runs over a Function.
int FunctionPassRunCount1 = 0;
int AnalyzedInstrCount1 = 0;
int AnalyzedFunctionCount1 = 0;
{
// Pointless scoped copy to test move assignment.
ModulePassManager NestedMPM;
FunctionPassManager FPM;
{
// Pointless scope to test move assignment.
FunctionPassManager NestedFPM;
NestedFPM.addPass(TestFunctionPass(FunctionPassRunCount1, AnalyzedInstrCount1,
AnalyzedFunctionCount1));
FPM = std::move(NestedFPM);
}
NestedMPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
MPM = std::move(NestedMPM);
}
// Count the runs over a module.
int ModulePassRunCount = 0;
MPM.addPass(TestModulePass(ModulePassRunCount));
// Count the runs over a Function in a separate manager.
int FunctionPassRunCount2 = 0;
int AnalyzedInstrCount2 = 0;
int AnalyzedFunctionCount2 = 0;
{
FunctionPassManager FPM;
FPM.addPass(TestFunctionPass(FunctionPassRunCount2, AnalyzedInstrCount2,
AnalyzedFunctionCount2));
MPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
}
// A third function pass manager, with an intervening module pass that
// preserves everything and a function pass that invalidates all analyses for
// exactly one function.
MPM.addPass(TestPreservingModulePass());
int FunctionPassRunCount3 = 0;
int AnalyzedInstrCount3 = 0;
int AnalyzedFunctionCount3 = 0;
{
FunctionPassManager FPM;
FPM.addPass(TestFunctionPass(FunctionPassRunCount3, AnalyzedInstrCount3,
AnalyzedFunctionCount3));
FPM.addPass(TestInvalidationFunctionPass("f"));
MPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
}
// A fourth function pass manager, with a minimally-preserving intervening
// module pass.
MPM.addPass(TestMinPreservingModulePass());
int FunctionPassRunCount4 = 0;
int AnalyzedInstrCount4 = 0;
int AnalyzedFunctionCount4 = 0;
{
FunctionPassManager FPM;
FPM.addPass(TestFunctionPass(FunctionPassRunCount4, AnalyzedInstrCount4,
AnalyzedFunctionCount4));
MPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
}
// A fifth function pass manager which uses only cached analysis results.
int FunctionPassRunCount5 = 0;
int AnalyzedInstrCount5 = 0;
int AnalyzedFunctionCount5 = 0;
{
FunctionPassManager FPM;
FPM.addPass(TestInvalidationFunctionPass("f"));
FPM.addPass(TestFunctionPass(FunctionPassRunCount5, AnalyzedInstrCount5,
AnalyzedFunctionCount5,
/*OnlyUseCachedResults=*/true));
MPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
}
MPM.run(M.get(), &MAM);
// Validate module pass counters.
EXPECT_EQ(1, ModulePassRunCount);
// Validate the function pass counters. The first four sets are identical; the
// fifth differs because it only consumes cached analysis results.
EXPECT_EQ(3, FunctionPassRunCount1);
EXPECT_EQ(5, AnalyzedInstrCount1);
EXPECT_EQ(0, AnalyzedFunctionCount1);
EXPECT_EQ(3, FunctionPassRunCount2);
EXPECT_EQ(5, AnalyzedInstrCount2);
EXPECT_EQ(0, AnalyzedFunctionCount2);
EXPECT_EQ(3, FunctionPassRunCount3);
EXPECT_EQ(5, AnalyzedInstrCount3);
EXPECT_EQ(0, AnalyzedFunctionCount3);
EXPECT_EQ(3, FunctionPassRunCount4);
EXPECT_EQ(5, AnalyzedInstrCount4);
EXPECT_EQ(0, AnalyzedFunctionCount4);
EXPECT_EQ(3, FunctionPassRunCount5);
EXPECT_EQ(2, AnalyzedInstrCount5); // Only 'g' and 'h' were cached.
EXPECT_EQ(0, AnalyzedFunctionCount5);
// Validate the analysis counters:
//  - first FPM runs the analysis over 3 functions, then TestModulePass
//    invalidates everything;
//  - second FPM runs over 3 functions again, and nothing invalidates them;
//  - third FPM runs over 0 functions (all cached), but invalidates @f;
//  - fourth FPM runs over 1 function (@f);
//  - fifth FPM uses only cached results, so it adds no runs.
// Total: 3 + 3 + 0 + 1 = 7.
EXPECT_EQ(7, FunctionAnalysisRuns);
EXPECT_EQ(1, ModuleAnalysisRuns);
}
} // end anonymous namespace