llvm-6502/lib/IR/PassManager.cpp

//===- PassManager.cpp - Infrastructure for managing & running IR passes -===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
#include "llvm/IR/PassManager.h"
#include "llvm/ADT/STLExtras.h"
using namespace llvm;
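
// Run each of the module passes over the module in sequence. After each pass,
// any analyses it did not preserve are invalidated in the attached analysis
// manager (if any), and the per-pass preserved sets are intersected to form
// the aggregate set returned to the caller.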
PreservedAnalyses ModulePassManager::run(Module *M) {
  PreservedAnalyses PA = PreservedAnalyses::all();
  for (unsigned Idx = 0, Size = Passes.size(); Idx != Size; ++Idx) {
    PreservedAnalyses PassPA = Passes[Idx]->run(M);
    if (AM)
      AM->invalidate(M, PassPA);
    PA.intersect(llvm_move(PassPA));
  }
  return PA;
}
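
// Invalidate any cached module analysis results which were not preserved by
// the given pass run, erasing them from the result cache.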
void ModuleAnalysisManager::invalidate(Module *M, const PreservedAnalyses &PA) {
  // FIXME: This is a total hack based on the fact that erasure doesn't
  // invalidate iteration for DenseMap.
  for (ModuleAnalysisResultMapT::iterator I = ModuleAnalysisResults.begin(),
                                          E = ModuleAnalysisResults.end();
       I != E; ++I)
    if (I->second->invalidate(M, PA))
      ModuleAnalysisResults.erase(I);
}
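
// Get the result of a module analysis, running the registered analysis pass
// and caching its result if no cached result exists yet.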
const detail::AnalysisResultConcept<Module> &
ModuleAnalysisManager::getResultImpl(void *PassID, Module *M) {
  ModuleAnalysisResultMapT::iterator RI;
  bool Inserted;
  llvm::tie(RI, Inserted) = ModuleAnalysisResults.insert(std::make_pair(
      PassID, polymorphic_ptr<detail::AnalysisResultConcept<Module> >()));
  if (Inserted) {
    // We don't have a cached result for this pass. Look up the pass and run
    // it to produce a result, which we then add to the cache.
    ModuleAnalysisPassMapT::const_iterator PI =
        ModuleAnalysisPasses.find(PassID);
    assert(PI != ModuleAnalysisPasses.end() &&
           "Analysis passes must be registered prior to being queried!");
    RI->second = PI->second->run(M);
  }
  return *RI->second;
}
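
// Directly drop the cached result of a single module analysis pass.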
void ModuleAnalysisManager::invalidateImpl(void *PassID, Module *M) {
  ModuleAnalysisResults.erase(PassID);
}
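
// Run each of the function passes over the function in sequence, invalidating
// non-preserved analyses after each pass and intersecting the preserved sets,
// mirroring ModulePassManager::run above.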
PreservedAnalyses FunctionPassManager::run(Function *F) {
  PreservedAnalyses PA = PreservedAnalyses::all();
  for (unsigned Idx = 0, Size = Passes.size(); Idx != Size; ++Idx) {
    PreservedAnalyses PassPA = Passes[Idx]->run(F);
    if (AM)
      AM->invalidate(F, PassPA);
    PA.intersect(llvm_move(PassPA));
  }
  return PA;
}
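
// Invalidate any cached analysis results for this function which were not
// preserved. Erased entries are removed both from the per-function result
// list and from the (PassID, Function) index map.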
void FunctionAnalysisManager::invalidate(Function *F,
                                         const PreservedAnalyses &PA) {
  // Clear all the invalidated results associated specifically with this
  // function.
  SmallVector<void *, 8> InvalidatedPassIDs;
  FunctionAnalysisResultListT &ResultsList = FunctionAnalysisResultLists[F];
  for (FunctionAnalysisResultListT::iterator I = ResultsList.begin(),
                                             E = ResultsList.end();
       I != E;)
    if (I->second->invalidate(F, PA)) {
      InvalidatedPassIDs.push_back(I->first);
      I = ResultsList.erase(I);
    } else {
      ++I;
    }
  while (!InvalidatedPassIDs.empty())
    FunctionAnalysisResults.erase(
        std::make_pair(InvalidatedPassIDs.pop_back_val(), F));
}
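
// Returns true if no function analysis results are currently cached.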
bool FunctionAnalysisManager::empty() const {
  assert(FunctionAnalysisResults.empty() ==
             FunctionAnalysisResultLists.empty() &&
         "The storage and index of analysis results disagree on how many there "
         "are!");
  return FunctionAnalysisResults.empty();
}
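
// Drop all cached analysis results for every function.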
void FunctionAnalysisManager::clear() {
  FunctionAnalysisResults.clear();
  FunctionAnalysisResultLists.clear();
}
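
// Get the result of a function analysis, running the registered analysis pass
// if no cached result exists yet and recording it in both the per-function
// result list and the (PassID, Function) index map.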
const detail::AnalysisResultConcept<Function> &
FunctionAnalysisManager::getResultImpl(void *PassID, Function *F) {
  FunctionAnalysisResultMapT::iterator RI;
  bool Inserted;
  llvm::tie(RI, Inserted) = FunctionAnalysisResults.insert(std::make_pair(
      std::make_pair(PassID, F), FunctionAnalysisResultListT::iterator()));
  if (Inserted) {
    // We don't have a cached result for this pass. Look up the pass and run
    // it to produce a result, which we then add to the cache.
    FunctionAnalysisPassMapT::const_iterator PI =
        FunctionAnalysisPasses.find(PassID);
    assert(PI != FunctionAnalysisPasses.end() &&
           "Analysis passes must be registered prior to being queried!");
    FunctionAnalysisResultListT &ResultList = FunctionAnalysisResultLists[F];
    ResultList.push_back(std::make_pair(PassID, PI->second->run(F)));
    RI->second = llvm::prior(ResultList.end());
  }
  return *RI->second->second;
}
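
// Directly drop the cached result of a single function analysis pass for one
// function.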
void FunctionAnalysisManager::invalidateImpl(void *PassID, Function *F) {
  FunctionAnalysisResultMapT::iterator RI =
      FunctionAnalysisResults.find(std::make_pair(PassID, F));
  if (RI == FunctionAnalysisResults.end())
    return;
  FunctionAnalysisResultLists[F].erase(RI->second);
}
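
// Identity for the proxy which exposes the function analysis manager to
// module-level passes. Running the proxy simply wraps the (still empty)
// function analysis manager in a Result object.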
char FunctionAnalysisModuleProxy::PassID;
FunctionAnalysisModuleProxy::Result
FunctionAnalysisModuleProxy::run(Module *M) {
  assert(FAM.empty() && "Function analyses ran prior to the module proxy!");
  return Result(FAM);
}
FunctionAnalysisModuleProxy::Result::~Result() {
  // Clear out the analysis manager if we're being destroyed -- it means we
  // didn't even see an invalidate call when we got invalidated.
  FAM.clear();
}
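
// Invalidation for the proxy result: if the proxy itself was not preserved,
// the cached Function objects may be stale, so the whole function analysis
// manager is cleared; otherwise invalidation is forwarded to each function's
// cached results.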
bool FunctionAnalysisModuleProxy::Result::invalidate(
    Module *M, const PreservedAnalyses &PA) {
  // If this proxy isn't marked as preserved, then it has an invalid set of
  // Function objects in the cache making it impossible to incrementally
  // preserve them. Just clear the entire manager.
  if (!PA.preserved(ID())) {
    FAM.clear();
    return false;
  }
  // The set of functions was preserved somehow, so just directly invalidate
  // any analysis results not preserved.
  for (Module::iterator I = M->begin(), E = M->end(); I != E; ++I)
    FAM.invalidate(I, PA);
  // Return false to indicate that this result is still a valid proxy.
  return false;
}