llvm-6502/include/llvm/IR/PassManager.h


//===- PassManager.h - Pass management infrastructure -----------*- C++ -*-===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
/// \file
///
/// This header defines various interfaces for pass management in LLVM. There
/// is no "pass" interface in LLVM per se. Instead, an instance of any class
/// which supports a method to 'run' it over a unit of IR can be used as
/// a pass. A pass manager is generally a tool to collect a sequence of passes
/// which run over a particular IR construct, and run each of them in sequence
/// over each such construct in the containing IR construct. As there is no
/// containing IR construct for a Module, a manager for passes over modules
/// forms the base case which runs its managed passes in sequence over the
/// single module provided.
///
/// The core IR library provides managers for running passes over
/// modules and functions.
///
/// * FunctionPassManager can run over a Module, running each pass over
///   a Function.
/// * ModulePassManager must be directly run, running each pass over the Module.
///
/// Note that the implementations of the pass managers use concept-based
/// polymorphism as outlined in the "Value Semantics and Concept-based
/// Polymorphism" talk (or its abbreviated sibling "Inheritance Is The Base
/// Class of Evil") by Sean Parent:
/// * http://github.com/sean-parent/sean-parent.github.com/wiki/Papers-and-Presentations
/// * http://www.youtube.com/watch?v=_BpMYeUFXv8
/// * http://channel9.msdn.com/Events/GoingNative/2013/Inheritance-Is-The-Base-Class-of-Evil
///
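/// Because there is no pass base class, "a pass" is simply any class with
/// a suitable \c run method. As a minimal sketch (the pass name
/// \c HypotheticalNoOpPass is purely illustrative, not a real LLVM pass):
///
/// \code
///   struct HypotheticalNoOpPass {
///     // Touch nothing and report that every analysis remains valid.
///     PreservedAnalyses run(Function &F) { return PreservedAnalyses::all(); }
///     static StringRef name() { return "HypotheticalNoOpPass"; }
///   };
/// \endcode
///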
//===----------------------------------------------------------------------===//
#ifndef LLVM_IR_PASSMANAGER_H
#define LLVM_IR_PASSMANAGER_H
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManagerInternal.h"
#include "llvm/Support/type_traits.h"
#include <list>
#include <memory>
#include <vector>
namespace llvm {
class Module;
class Function;
/// \brief An abstract set of preserved analyses following a transformation pass
/// run.
///
/// When a transformation pass is run, it can return a set of analyses whose
/// results were preserved by that transformation. The default set is "none",
/// and preserving analyses must be done explicitly.
///
/// There is also an explicit all state which can be used (for example) when
/// the IR is not mutated at all.
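///
/// For example (a sketch only; \c HypotheticalDomTreeAnalysis is a made-up
/// analysis used purely for illustration), a transform that changed
/// instructions but left the CFG alone might report:
///
/// \code
///   PreservedAnalyses PA = PreservedAnalyses::none();
///   PA.preserve<HypotheticalDomTreeAnalysis>(); // uses HypotheticalDomTreeAnalysis::ID()
///   return PA;
/// \endcode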
class PreservedAnalyses {
public:
  // We have to explicitly define all the special member functions because MSVC
  // refuses to generate them.
  PreservedAnalyses() {}
  PreservedAnalyses(const PreservedAnalyses &Arg)
      : PreservedPassIDs(Arg.PreservedPassIDs) {}
  PreservedAnalyses(PreservedAnalyses &&Arg)
      : PreservedPassIDs(std::move(Arg.PreservedPassIDs)) {}
  friend void swap(PreservedAnalyses &LHS, PreservedAnalyses &RHS) {
    using std::swap;
    swap(LHS.PreservedPassIDs, RHS.PreservedPassIDs);
  }
  PreservedAnalyses &operator=(PreservedAnalyses RHS) {
    swap(*this, RHS);
    return *this;
  }
  /// \brief Convenience factory function for the empty preserved set.
  static PreservedAnalyses none() { return PreservedAnalyses(); }

  /// \brief Construct a special preserved set that preserves all passes.
  static PreservedAnalyses all() {
    PreservedAnalyses PA;
    PA.PreservedPassIDs.insert((void *)AllPassesID);
    return PA;
  }

  /// \brief Mark a particular pass as preserved, adding it to the set.
  template <typename PassT> void preserve() { preserve(PassT::ID()); }

  /// \brief Mark an abstract PassID as preserved, adding it to the set.
  void preserve(void *PassID) {
    if (!areAllPreserved())
      PreservedPassIDs.insert(PassID);
  }

  /// \brief Intersect this set with another in place.
  ///
  /// This is a mutating operation on this preserved set, removing all
  /// preserved passes which are not also preserved in the argument.
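  ///
  /// As a sketch of how a pass manager might use this (the pass objects and
  /// the analysis manager \c AM here are hypothetical):
  ///
  /// \code
  ///   PreservedAnalyses PA = PreservedAnalyses::all();
  ///   PA.intersect(PassA.run(M, &AM)); // suppose PassA preserves {X, Y}
  ///   PA.intersect(PassB.run(M, &AM)); // suppose PassB preserves {Y, Z}
  ///   // PA now preserves only Y, the intersection across both runs.
  /// \endcode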
  void intersect(const PreservedAnalyses &Arg) {
    if (Arg.areAllPreserved())
      return;
    if (areAllPreserved()) {
      PreservedPassIDs = Arg.PreservedPassIDs;
      return;
    }
    for (void *P : PreservedPassIDs)
      if (!Arg.PreservedPassIDs.count(P))
        PreservedPassIDs.erase(P);
  }

  /// \brief Intersect this set with a temporary other set in place.
  ///
  /// This is a mutating operation on this preserved set, removing all
  /// preserved passes which are not also preserved in the argument.
  void intersect(PreservedAnalyses &&Arg) {
    if (Arg.areAllPreserved())
      return;
    if (areAllPreserved()) {
      PreservedPassIDs = std::move(Arg.PreservedPassIDs);
      return;
    }
    for (void *P : PreservedPassIDs)
      if (!Arg.PreservedPassIDs.count(P))
        PreservedPassIDs.erase(P);
  }

  /// \brief Query whether a pass is marked as preserved by this set.
  template <typename PassT> bool preserved() const {
    return preserved(PassT::ID());
  }

  /// \brief Query whether an abstract pass ID is marked as preserved by this
  /// set.
  bool preserved(void *PassID) const {
    return PreservedPassIDs.count((void *)AllPassesID) ||
           PreservedPassIDs.count(PassID);
  }

  /// \brief Test whether all passes are preserved.
  ///
  /// This is used primarily to optimize for the case of no changes, which will
  /// be common in many scenarios.
  bool areAllPreserved() const {
    return PreservedPassIDs.count((void *)AllPassesID);
  }
private:
  // Note that this must not be -1 or -2 as those are already used by the
  // SmallPtrSet.
  static const uintptr_t AllPassesID = (intptr_t)(-3);
  SmallPtrSet<void *, 2> PreservedPassIDs;
};
// We define the pass managers prior to the analysis managers that they use.
class ModuleAnalysisManager;
/// \brief Manages a sequence of passes over Modules of IR.
///
/// A module pass manager contains a sequence of module passes. It is also
/// itself a module pass. When it is run over a module of LLVM IR, it will
/// sequentially run each pass it contains over that module.
///
/// If it is run with a \c ModuleAnalysisManager argument, it will propagate
/// that analysis manager to each pass it runs, as well as calling the analysis
/// manager's invalidation routine with the PreservedAnalyses of each pass it
/// runs.
///
/// Module passes can rely on having exclusive access to the module they are
/// run over. No other threads will access that module, and they can mutate it
/// freely. However, they must not mutate other LLVM IR modules.
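///
/// A sketch of typical use (the setup of the module \c M and the pass
/// \c HypotheticalModulePass are assumed purely for illustration):
///
/// \code
///   ModuleAnalysisManager MAM;
///   ModulePassManager MPM;
///   MPM.addPass(HypotheticalModulePass());
///   PreservedAnalyses PA = MPM.run(M, &MAM);
/// \endcode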
class ModulePassManager {
public:
  // We have to explicitly define all the special member functions because MSVC
  // refuses to generate them.
  ModulePassManager() {}
  ModulePassManager(ModulePassManager &&Arg) : Passes(std::move(Arg.Passes)) {}
  ModulePassManager &operator=(ModulePassManager &&RHS) {
    Passes = std::move(RHS.Passes);
    return *this;
  }
  /// \brief Run all of the module passes in this module pass manager over
  /// a module.
  ///
  /// This method should only be called for a single module as there is the
  /// expectation that the lifetime of a pass is bounded to that of a module.
  PreservedAnalyses run(Module &M, ModuleAnalysisManager *AM = nullptr);

  template <typename ModulePassT> void addPass(ModulePassT Pass) {
    Passes.emplace_back(new ModulePassModel<ModulePassT>(std::move(Pass)));
  }

  static StringRef name() { return "ModulePassManager"; }

private:
  // Pull in the concept type and model template specialized for modules.
  typedef detail::PassConcept<Module, ModuleAnalysisManager> ModulePassConcept;
  template <typename PassT>
  struct ModulePassModel
      : detail::PassModel<Module, ModuleAnalysisManager, PassT> {
    ModulePassModel(PassT Pass)
        : detail::PassModel<Module, ModuleAnalysisManager, PassT>(
              std::move(Pass)) {}
  };

  ModulePassManager(const ModulePassManager &) LLVM_DELETED_FUNCTION;
  ModulePassManager &operator=(const ModulePassManager &) LLVM_DELETED_FUNCTION;

  std::vector<std::unique_ptr<ModulePassConcept>> Passes;
};
// We define the pass managers prior to the analysis managers that they use.
class FunctionAnalysisManager;
/// \brief Manages a sequence of passes over a Function of IR.
///
/// A function pass manager contains a sequence of function passes. It is also
/// itself a function pass. When it is run over a function of LLVM IR, it will
/// sequentially run each pass it contains over that function.
///
/// If it is run with a \c FunctionAnalysisManager argument, it will propagate
/// that analysis manager to each pass it runs, as well as calling the analysis
/// manager's invalidation routine with the PreservedAnalyses of each pass it
/// runs.
///
/// Function passes can rely on having exclusive access to the function they
/// are run over. They should not read or modify any other functions! Other
/// threads or systems may be manipulating other functions in the module, and
/// so their state should never be relied on.
/// FIXME: Make the above true for all of LLVM's actual passes, some still
/// violate this principle.
///
/// Function passes can also read the module containing the function, but they
/// should not modify that module outside of the use lists of various globals.
/// For example, a function pass is not permitted to add functions to the
/// module.
/// FIXME: Make the above true for all of LLVM's actual passes, some still
/// violate this principle.
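///
/// A sketch of typical use (the setup of the function \c F is assumed, and
/// \c HypotheticalNoOpPass is the illustrative pass from the file comment
/// above, not a real LLVM pass):
///
/// \code
///   FunctionAnalysisManager FAM;
///   FunctionPassManager FPM;
///   FPM.addPass(HypotheticalNoOpPass());
///   PreservedAnalyses PA = FPM.run(F, &FAM);
/// \endcode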
class FunctionPassManager {
public:
  // We have to explicitly define all the special member functions because MSVC
  // refuses to generate them.
  FunctionPassManager() {}
  FunctionPassManager(FunctionPassManager &&Arg)
      : Passes(std::move(Arg.Passes)) {}
  FunctionPassManager &operator=(FunctionPassManager &&RHS) {
    Passes = std::move(RHS.Passes);
    return *this;
  }

  template <typename FunctionPassT> void addPass(FunctionPassT Pass) {
    Passes.emplace_back(new FunctionPassModel<FunctionPassT>(std::move(Pass)));
  }

  PreservedAnalyses run(Function &F, FunctionAnalysisManager *AM = nullptr);

  static StringRef name() { return "FunctionPassManager"; }

private:
  // Pull in the concept type and model template specialized for functions.
typedef detail::PassConcept<Function, FunctionAnalysisManager>
FunctionPassConcept;
template <typename PassT>
struct FunctionPassModel
: detail::PassModel<Function, FunctionAnalysisManager, PassT> {
FunctionPassModel(PassT Pass)
: detail::PassModel<Function, FunctionAnalysisManager, PassT>(
std::move(Pass)) {}
};
FunctionPassManager(const FunctionPassManager &) LLVM_DELETED_FUNCTION;
FunctionPassManager &
operator=(const FunctionPassManager &) LLVM_DELETED_FUNCTION;
std::vector<std::unique_ptr<FunctionPassConcept>> Passes;
};
namespace detail {
/// \brief A CRTP base used to implement analysis managers.
///
/// This class template provides the boilerplate for an analysis manager. Any
/// analysis manager can be implemented on top of this base class; each
/// implementation is required to provide the following hooks:
///
/// - getResultImpl
/// - getCachedResultImpl
/// - invalidateImpl
///
/// The exact call pattern for these hooks can be seen in the public methods
/// below which forward to them.
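///
/// A sketch of the shape a derived manager takes (hypothetical; the concrete
/// managers declared later in this header are the real implementations):
/// \code
///   class MyAnalysisManager
///       : public detail::AnalysisManagerBase<MyAnalysisManager, Function> {
///     friend class detail::AnalysisManagerBase<MyAnalysisManager, Function>;
///     typedef detail::AnalysisManagerBase<MyAnalysisManager, Function> BaseT;
///
///     // The hooks this base class requires:
///     BaseT::ResultConceptT &getResultImpl(void *PassID, Function &F);
///     BaseT::ResultConceptT *getCachedResultImpl(void *PassID,
///                                                 Function &F) const;
///     void invalidateImpl(void *PassID, Function &F);
///     PreservedAnalyses invalidateImpl(Function &F, PreservedAnalyses PA);
///   };
/// \endcode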
template <typename DerivedT, typename IRUnitT> class AnalysisManagerBase {
DerivedT *derived_this() { return static_cast<DerivedT *>(this); }
const DerivedT *derived_this() const {
return static_cast<const DerivedT *>(this);
}
AnalysisManagerBase(const AnalysisManagerBase &) LLVM_DELETED_FUNCTION;
AnalysisManagerBase &
operator=(const AnalysisManagerBase &) LLVM_DELETED_FUNCTION;
protected:
typedef detail::AnalysisResultConcept<IRUnitT> ResultConceptT;
typedef detail::AnalysisPassConcept<IRUnitT, DerivedT> PassConceptT;
// FIXME: Provide template aliases for the models when we're using C++11 in
// a mode supporting them.
// We have to explicitly define all the special member functions because MSVC
// refuses to generate them.
AnalysisManagerBase() {}
AnalysisManagerBase(AnalysisManagerBase &&Arg)
: AnalysisPasses(std::move(Arg.AnalysisPasses)) {}
AnalysisManagerBase &operator=(AnalysisManagerBase &&RHS) {
AnalysisPasses = std::move(RHS.AnalysisPasses);
return *this;
}
public:
/// \brief Get the result of an analysis pass for a given unit of IR.
///
/// If there is not a valid cached result in the manager already, this will
/// run the analysis to produce a valid result and cache it.
template <typename PassT> typename PassT::Result &getResult(IRUnitT &IR) {
assert(AnalysisPasses.count(PassT::ID()) &&
"This analysis pass was not registered prior to being queried");
ResultConceptT &ResultConcept =
derived_this()->getResultImpl(PassT::ID(), IR);
typedef detail::AnalysisResultModel<IRUnitT, PassT, typename PassT::Result>
ResultModelT;
return static_cast<ResultModelT &>(ResultConcept).Result;
}
/// \brief Get the cached result of an analysis pass for a given unit of IR.
///
/// This method never runs the analysis.
///
/// \returns null if there is no cached result.
template <typename PassT>
typename PassT::Result *getCachedResult(IRUnitT &IR) const {
assert(AnalysisPasses.count(PassT::ID()) &&
"This analysis pass was not registered prior to being queried");
ResultConceptT *ResultConcept =
derived_this()->getCachedResultImpl(PassT::ID(), IR);
if (!ResultConcept)
return nullptr;
typedef detail::AnalysisResultModel<IRUnitT, PassT, typename PassT::Result>
ResultModelT;
return &static_cast<ResultModelT *>(ResultConcept)->Result;
}
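// Example query pattern against a concrete manager derived from this base
// (illustrative only; 'MyAnalysis' is a hypothetical analysis providing the
// expected 'ID()' static method and nested 'Result' type):
//
//   MyAnalysis::Result &R = AM.getResult<MyAnalysis>(IR);        // runs if needed
//   MyAnalysis::Result *CR = AM.getCachedResult<MyAnalysis>(IR); // null if not cached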
/// \brief Register an analysis pass with the manager.
///
/// This provides an initialized and set-up analysis pass to the analysis
/// manager. Whoever is setting up analysis passes must use this to populate
/// the manager with all of the analysis passes available.
template <typename PassT> void registerPass(PassT Pass) {
assert(!AnalysisPasses.count(PassT::ID()) &&
"Registered the same analysis pass twice!");
typedef detail::AnalysisPassModel<IRUnitT, DerivedT, PassT> PassModelT;
AnalysisPasses[PassT::ID()].reset(new PassModelT(std::move(Pass)));
}
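// Example of populating a manager during pipeline setup (illustrative only;
// 'MyAnalysis' is a hypothetical analysis type):
//
//   FunctionAnalysisManager AM;
//   AM.registerPass(MyAnalysis());
//   // Nothing has run yet; a cached-result query would still return null.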
/// \brief Invalidate a specific analysis pass for a unit of IR.
///
/// Note that the analysis result can disregard invalidation, in which case it
/// remains cached.
template <typename PassT> void invalidate(IRUnitT &IR) {
assert(AnalysisPasses.count(PassT::ID()) &&
"This analysis pass was not registered prior to being invalidated");
derived_this()->invalidateImpl(PassT::ID(), IR);
}
/// \brief Invalidate analyses cached for an IR unit.
///
/// Walk through all of the analyses pertaining to this unit of IR and
/// invalidate them unless they are preserved by the PreservedAnalyses set.
/// We accept the PreservedAnalyses set by value and update it with each
/// analysis pass which has been successfully invalidated and thus can be
/// preserved going forward. The updated set is returned.
PreservedAnalyses invalidate(IRUnitT &IR, PreservedAnalyses PA) {
return derived_this()->invalidateImpl(IR, std::move(PA));
}
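// Illustrative call pattern (not part of the interface): an enclosing pass
// manager typically feeds each pass's PreservedAnalyses through this overload
// and intersects the returned, augmented sets across the passes it runs.
//
//   PreservedAnalyses PassPA = SomePass.run(IR, &AM); // 'SomePass' is hypothetical
//   PassPA = AM.invalidate(IR, std::move(PassPA));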
protected:
/// \brief Lookup a registered analysis pass.
PassConceptT &lookupPass(void *PassID) {
typename AnalysisPassMapT::iterator PI = AnalysisPasses.find(PassID);
assert(PI != AnalysisPasses.end() &&
"Analysis passes must be registered prior to being queried!");
return *PI->second;
}
/// \brief Lookup a registered analysis pass.
const PassConceptT &lookupPass(void *PassID) const {
typename AnalysisPassMapT::const_iterator PI = AnalysisPasses.find(PassID);
assert(PI != AnalysisPasses.end() &&
"Analysis passes must be registered prior to being queried!");
return *PI->second;
}
private:
/// \brief Map type from analysis pass ID to pass concept pointer.
typedef DenseMap<void *, std::unique_ptr<PassConceptT>> AnalysisPassMapT;
/// \brief Collection of analysis passes, indexed by ID.
AnalysisPassMapT AnalysisPasses;
};
} // End namespace detail
/// \brief A module analysis pass manager with lazy running and caching of
/// results.
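///
/// Typical setup (an illustrative sketch; \c MyModuleAnalysis is a
/// hypothetical analysis with the expected \c ID() and \c Result members):
/// \code
///   ModuleAnalysisManager MAM;
///   MAM.registerPass(MyModuleAnalysis());
///   MyModuleAnalysis::Result &R = MAM.getResult<MyModuleAnalysis>(M);
/// \endcode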
class ModuleAnalysisManager
: public detail::AnalysisManagerBase<ModuleAnalysisManager, Module> {
friend class detail::AnalysisManagerBase<ModuleAnalysisManager, Module>;
typedef detail::AnalysisManagerBase<ModuleAnalysisManager, Module> BaseT;
typedef BaseT::ResultConceptT ResultConceptT;
typedef BaseT::PassConceptT PassConceptT;
public:
// We have to explicitly define all the special member functions because MSVC
// refuses to generate them.
ModuleAnalysisManager() {}
ModuleAnalysisManager(ModuleAnalysisManager &&Arg)
: BaseT(std::move(static_cast<BaseT &>(Arg))),
ModuleAnalysisResults(std::move(Arg.ModuleAnalysisResults)) {}
ModuleAnalysisManager &operator=(ModuleAnalysisManager &&RHS) {
BaseT::operator=(std::move(static_cast<BaseT &>(RHS)));
ModuleAnalysisResults = std::move(RHS.ModuleAnalysisResults);
return *this;
}
private:
ModuleAnalysisManager(const ModuleAnalysisManager &) LLVM_DELETED_FUNCTION;
ModuleAnalysisManager &
operator=(const ModuleAnalysisManager &) LLVM_DELETED_FUNCTION;
/// \brief Get a module pass result, running the pass if necessary.
ResultConceptT &getResultImpl(void *PassID, Module &M);
/// \brief Get a cached module pass result or return null.
ResultConceptT *getCachedResultImpl(void *PassID, Module &M) const;
/// \brief Invalidate a module pass result.
void invalidateImpl(void *PassID, Module &M);
/// \brief Invalidate results across a module.
PreservedAnalyses invalidateImpl(Module &M, PreservedAnalyses PA);
/// \brief Map type from module analysis pass ID to pass result concept
/// pointer.
typedef DenseMap<void *,
std::unique_ptr<detail::AnalysisResultConcept<Module>>>
ModuleAnalysisResultMapT;
/// \brief Cache of computed module analysis results for this module.
ModuleAnalysisResultMapT ModuleAnalysisResults;
};
/// \brief A function analysis manager to coordinate and cache analyses run
/// over the functions of a module.
class FunctionAnalysisManager
: public detail::AnalysisManagerBase<FunctionAnalysisManager, Function> {
friend class detail::AnalysisManagerBase<FunctionAnalysisManager, Function>;
typedef detail::AnalysisManagerBase<FunctionAnalysisManager, Function> BaseT;
typedef BaseT::ResultConceptT ResultConceptT;
typedef BaseT::PassConceptT PassConceptT;
public:
// Most public APIs are inherited from the CRTP base class.
// We have to explicitly define all the special member functions because MSVC
// refuses to generate them.
FunctionAnalysisManager() {}
FunctionAnalysisManager(FunctionAnalysisManager &&Arg)
: BaseT(std::move(static_cast<BaseT &>(Arg))),
FunctionAnalysisResults(std::move(Arg.FunctionAnalysisResults)) {}
FunctionAnalysisManager &operator=(FunctionAnalysisManager &&RHS) {
BaseT::operator=(std::move(static_cast<BaseT &>(RHS)));
FunctionAnalysisResults = std::move(RHS.FunctionAnalysisResults);
return *this;
}
/// \brief Returns true if the analysis manager has an empty results cache.
bool empty() const;
/// \brief Clear the function analysis result cache.
///
/// This routine allows cleaning up when the set of functions itself has
/// potentially changed, and thus we can't even look up a result and
/// invalidate it directly. Notably, this does *not* call the invalidate
/// functions on the cached results, as there is nothing to be done for them.
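///
/// As an illustrative sketch only (M, FAM, and removeDeadFunctions are
/// hypothetical names, not part of this interface): a module transform that
/// erases functions cannot invalidate results for functions that no longer
/// exist, so it drops the whole cache and lets later queries recompute.
/// \code
///   removeDeadFunctions(M); // hypothetical helper erasing functions from M
///   FAM.clear();            // subsequent getResult() calls re-run analyses
/// \endcode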
void clear();
private:
FunctionAnalysisManager(const FunctionAnalysisManager &)
LLVM_DELETED_FUNCTION;
FunctionAnalysisManager &
operator=(const FunctionAnalysisManager &) LLVM_DELETED_FUNCTION;
/// \brief Get a function pass result, running the pass if necessary.
ResultConceptT &getResultImpl(void *PassID, Function &F);
/// \brief Get a cached function pass result or return null.
ResultConceptT *getCachedResultImpl(void *PassID, Function &F) const;
/// \brief Invalidate a function pass result.
void invalidateImpl(void *PassID, Function &F);
/// \brief Invalidate the results for a function.
PreservedAnalyses invalidateImpl(Function &F, PreservedAnalyses PA);
/// \brief List of function analysis pass IDs and associated concept pointers.
///
/// Requires iterators to remain valid across appending new entries and
/// arbitrary erases. Each entry carries both the pass ID and the concept
/// pointer, so an entry can be mapped back to its key in the ID-to-iterator
/// map below (the other half of the bijection), and it provides the storage
/// for the actual result concept.
typedef std::list<std::pair<
void *, std::unique_ptr<detail::AnalysisResultConcept<Function>>>>
FunctionAnalysisResultListT;
/// \brief Map type from function pointer to our custom list type.
typedef DenseMap<Function *, FunctionAnalysisResultListT>
FunctionAnalysisResultListMapT;
/// \brief Map from function to a list of function analysis results.
///
/// Provides linear-time removal of all analysis results for a function, and
/// is the ultimate storage for a particular cached analysis result.
FunctionAnalysisResultListMapT FunctionAnalysisResultLists;
/// \brief Map type from a pair of analysis ID and function pointer to an
/// iterator into a particular result list.
typedef DenseMap<std::pair<void *, Function *>,
FunctionAnalysisResultListT::iterator>
FunctionAnalysisResultMapT;
/// \brief Map from an analysis ID and function to a particular cached
/// analysis result.
FunctionAnalysisResultMapT FunctionAnalysisResults;
};
/// \brief A module analysis which acts as a proxy for a function analysis
/// manager.
///
/// This primarily proxies invalidation information from the module analysis
/// manager and module pass manager to a function analysis manager. You should
/// never use a function analysis manager from within (transitively) a module
/// pass manager unless your parent module pass has received a proxy result
/// object for it.
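///
/// A usage sketch (hedged; \c MyModulePass and \c MyFunctionAnalysis are
/// hypothetical names, while the proxy and manager APIs are the ones declared
/// in this header): a module pass that wants to query function analyses first
/// requests this proxy's result and then uses the wrapped manager.
///
/// \code
///   struct MyModulePass {
///     PreservedAnalyses run(Module &M, ModuleAnalysisManager *AM) {
///       // Requesting the proxy inserts its result into the module analysis
///       // cache and hands back the function analysis manager. We assume an
///       // analysis manager was actually provided here.
///       FunctionAnalysisManager &FAM =
///           AM->getResult<FunctionAnalysisManagerModuleProxy>(M).getManager();
///       for (Module::iterator I = M.begin(), E = M.end(); I != E; ++I)
///         (void)FAM.getResult<MyFunctionAnalysis>(*I);
///       return PreservedAnalyses::all();
///     }
///     static StringRef name() { return "MyModulePass"; }
///   };
/// \endcode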
class FunctionAnalysisManagerModuleProxy {
public:
class Result;
static void *ID() { return (void *)&PassID; }
static StringRef name() { return "FunctionAnalysisManagerModuleProxy"; }
explicit FunctionAnalysisManagerModuleProxy(FunctionAnalysisManager &FAM)
: FAM(&FAM) {}
// We have to explicitly define all the special member functions because MSVC
// refuses to generate them.
FunctionAnalysisManagerModuleProxy(
const FunctionAnalysisManagerModuleProxy &Arg)
: FAM(Arg.FAM) {}
FunctionAnalysisManagerModuleProxy(FunctionAnalysisManagerModuleProxy &&Arg)
: FAM(std::move(Arg.FAM)) {}
FunctionAnalysisManagerModuleProxy &
operator=(FunctionAnalysisManagerModuleProxy RHS) {
std::swap(FAM, RHS.FAM);
return *this;
}
/// \brief Run the analysis pass and create our proxy result object.
///
/// This doesn't do any interesting work; it is primarily used to insert our
/// proxy result object into the module analysis cache so that we can proxy
/// invalidation to the function analysis manager.
///
/// In debug builds, it will also assert that the analysis manager is empty
/// as no queries should arrive at the function analysis manager prior to
/// this analysis being requested.
Result run(Module &M);
private:
static char PassID;
FunctionAnalysisManager *FAM;
};
/// \brief The result proxy object for the
/// \c FunctionAnalysisManagerModuleProxy.
///
/// See its documentation for more information.
class FunctionAnalysisManagerModuleProxy::Result {
public:
explicit Result(FunctionAnalysisManager &FAM) : FAM(&FAM) {}
// We have to explicitly define all the special member functions because MSVC
// refuses to generate them.
Result(const Result &Arg) : FAM(Arg.FAM) {}
Result(Result &&Arg) : FAM(std::move(Arg.FAM)) {}
Result &operator=(Result RHS) {
std::swap(FAM, RHS.FAM);
return *this;
}
~Result();
/// \brief Accessor for the \c FunctionAnalysisManager.
FunctionAnalysisManager &getManager() { return *FAM; }
/// \brief Handler for invalidation of the module.
///
/// If this analysis itself is preserved, then we assume that the set of \c
/// Function objects in the \c Module hasn't changed and thus we don't need
/// to invalidate *all* cached data associated with a \c Function* in the \c
/// FunctionAnalysisManager.
///
/// Regardless of whether this analysis is marked as preserved, all of the
/// analyses in the \c FunctionAnalysisManager are potentially invalidated
/// based on the set of preserved analyses.
bool invalidate(Module &M, const PreservedAnalyses &PA);
private:
FunctionAnalysisManager *FAM;
};
/// \brief A function analysis which acts as a proxy for a module analysis
/// manager.
///
/// This primarily provides an accessor to a parent module analysis manager to
/// function passes. Only the const interface of the module analysis manager is
/// provided to indicate that once inside of a function analysis pass you
/// cannot request a module analysis to actually run. Instead, the user must
/// rely on the \c getCachedResult API.
///
/// This proxy *doesn't* manage the invalidation in any way. That is handled by
/// the recursive return path of each layer of the pass manager and the
/// returned PreservedAnalysis set.
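///
/// A usage sketch (hedged; \c MyFunctionPass and \c MyModuleAnalysis are
/// hypothetical names): from inside a function pass, the proxy exposes the
/// module analysis manager so that already-computed module results can be
/// consulted via \c getCachedResult, which is assumed to return a null
/// pointer when no cached result exists.
///
/// \code
///   struct MyFunctionPass {
///     PreservedAnalyses run(Function &F, FunctionAnalysisManager *AM) {
///       const ModuleAnalysisManager &MAM =
///           AM->getResult<ModuleAnalysisManagerFunctionProxy>(F).getManager();
///       // Only cached module results may be consulted here; we cannot force
///       // a module analysis to run from this context.
///       if (auto *CachedResult =
///               MAM.getCachedResult<MyModuleAnalysis>(*F.getParent()))
///         (void)CachedResult; // Use the cached module analysis result.
///       return PreservedAnalyses::all();
///     }
///     static StringRef name() { return "MyFunctionPass"; }
///   };
/// \endcode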
class ModuleAnalysisManagerFunctionProxy {
public:
/// \brief Result proxy object for \c ModuleAnalysisManagerFunctionProxy.
class Result {
public:
explicit Result(const ModuleAnalysisManager &MAM) : MAM(&MAM) {}
// We have to explicitly define all the special member functions because
// MSVC refuses to generate them.
Result(const Result &Arg) : MAM(Arg.MAM) {}
Result(Result &&Arg) : MAM(std::move(Arg.MAM)) {}
Result &operator=(Result RHS) {
std::swap(MAM, RHS.MAM);
return *this;
}
const ModuleAnalysisManager &getManager() const { return *MAM; }
/// \brief Handle invalidation by ignoring it; this pass is immutable.
bool invalidate(Function &) { return false; }
private:
const ModuleAnalysisManager *MAM;
};
static void *ID() { return (void *)&PassID; }
static StringRef name() { return "ModuleAnalysisManagerFunctionProxy"; }
ModuleAnalysisManagerFunctionProxy(const ModuleAnalysisManager &MAM)
: MAM(&MAM) {}
// We have to explicitly define all the special member functions because MSVC
// refuses to generate them.
ModuleAnalysisManagerFunctionProxy(
const ModuleAnalysisManagerFunctionProxy &Arg)
: MAM(Arg.MAM) {}
ModuleAnalysisManagerFunctionProxy(ModuleAnalysisManagerFunctionProxy &&Arg)
: MAM(std::move(Arg.MAM)) {}
ModuleAnalysisManagerFunctionProxy &
operator=(ModuleAnalysisManagerFunctionProxy RHS) {
std::swap(MAM, RHS.MAM);
return *this;
}
/// \brief Run the analysis pass and create our proxy result object.
/// Nothing to see here; it just forwards the \c MAM reference into the
/// result.
Result run(Function &) { return Result(*MAM); }
private:
static char PassID;
const ModuleAnalysisManager *MAM;
};
/// \brief Trivial adaptor that maps from a module to its functions.
///
/// Designed to allow composition of a FunctionPass(Manager) and
/// a ModulePassManager. Note that if this pass is run with a pointer to
/// a \c ModuleAnalysisManager, it will run the
/// \c FunctionAnalysisManagerModuleProxy analysis prior to running the function
/// pass over the module, to enable a \c FunctionAnalysisManager to be used
/// within this run safely.
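///
/// A usage sketch (hedged; \c MyFunctionPass is a hypothetical pass, while
/// the pass managers and the \c createModuleToFunctionPassAdaptor helper are
/// declared in this header):
///
/// \code
///   ModulePassManager MPM;
///   // Wrap a single function pass so the module pass manager can run it.
///   MPM.addPass(createModuleToFunctionPassAdaptor(MyFunctionPass()));
///
///   // Or wrap an entire function pass pipeline.
///   FunctionPassManager FPM;
///   FPM.addPass(MyFunctionPass());
///   MPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
/// \endcode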
template <typename FunctionPassT> class ModuleToFunctionPassAdaptor {
public:
explicit ModuleToFunctionPassAdaptor(FunctionPassT Pass)
: Pass(std::move(Pass)) {}
// We have to explicitly define all the special member functions because MSVC
// refuses to generate them.
ModuleToFunctionPassAdaptor(const ModuleToFunctionPassAdaptor &Arg)
: Pass(Arg.Pass) {}
ModuleToFunctionPassAdaptor(ModuleToFunctionPassAdaptor &&Arg)
: Pass(std::move(Arg.Pass)) {}
friend void swap(ModuleToFunctionPassAdaptor &LHS,
ModuleToFunctionPassAdaptor &RHS) {
using std::swap;
swap(LHS.Pass, RHS.Pass);
}
ModuleToFunctionPassAdaptor &operator=(ModuleToFunctionPassAdaptor RHS) {
swap(*this, RHS);
return *this;
}
/// \brief Runs the function pass across every function in the module.
PreservedAnalyses run(Module &M, ModuleAnalysisManager *AM) {
FunctionAnalysisManager *FAM = nullptr;
if (AM)
// Setup the function analysis manager from its proxy.
FAM = &AM->getResult<FunctionAnalysisManagerModuleProxy>(M).getManager();
PreservedAnalyses PA = PreservedAnalyses::all();
for (Module::iterator I = M.begin(), E = M.end(); I != E; ++I) {
PreservedAnalyses PassPA = Pass.run(*I, FAM);
// We know that the function pass couldn't have invalidated any other
// function's analyses (that's the contract of a function pass), so
// directly handle the function analysis manager's invalidation here and
// update our preserved set to reflect that these have already been
// handled.
if (FAM)
PassPA = FAM->invalidate(*I, std::move(PassPA));
// Then intersect the preserved set so that invalidation of module
// analyses will eventually occur when the module pass completes.
PA.intersect(std::move(PassPA));
}
// By definition we preserve the proxy. This precludes *any* invalidation
// of function analyses by the proxy, but that's OK because we've taken
// care to invalidate analyses in the function analysis manager
// incrementally above.
PA.preserve<FunctionAnalysisManagerModuleProxy>();
return PA;
}
static StringRef name() { return "ModuleToFunctionPassAdaptor"; }
private:
FunctionPassT Pass;
};
/// \brief A function to deduce a function pass type and wrap it in the
/// templated adaptor.
template <typename FunctionPassT>
ModuleToFunctionPassAdaptor<FunctionPassT>
createModuleToFunctionPassAdaptor(FunctionPassT Pass) {
return ModuleToFunctionPassAdaptor<FunctionPassT>(std::move(Pass));
}
/// \brief A template utility pass to force an analysis result to be available.
///
/// This is a no-op pass which simply forces a specific analysis pass's result
/// to be available when it is run.
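///
/// A usage sketch (hedged; \c MyFunctionAnalysis is a hypothetical analysis):
/// placing this pass in a pipeline makes the usually lazy analysis run
/// eagerly at that point.
///
/// \code
///   FunctionPassManager FPM;
///   FPM.addPass(RequireAnalysisPass<MyFunctionAnalysis>());
///   // MyFunctionAnalysis is now computed (and cached) for each function
///   // before any later pass in FPM runs.
/// \endcode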
template <typename AnalysisT> struct RequireAnalysisPass {
/// \brief Run this pass over some unit of IR.
///
/// This pass can be run over any unit of IR and use any analysis manager
/// provided they satisfy the basic API requirements. When this pass is
/// created, these methods can be instantiated to satisfy whatever the
/// context requires.
template <typename IRUnitT, typename AnalysisManagerT>
PreservedAnalyses run(IRUnitT &Arg, AnalysisManagerT *AM) {
if (AM)
(void)AM->template getResult<AnalysisT>(Arg);
return PreservedAnalyses::all();
}
static StringRef name() { return "RequireAnalysisPass"; }
};
/// \brief A template utility pass to force an analysis result to be
/// invalidated.
///
/// This is a no-op pass which simply forces a specific analysis result to be
/// invalidated when it is run.
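///
/// A usage sketch (hedged; \c MyFunctionAnalysis and the enclosing \c FPM are
/// hypothetical): placing this pass in a pipeline discards one cached result
/// at that point so a later consumer sees a freshly computed one.
///
/// \code
///   FPM.addPass(InvalidateAnalysisPass<MyFunctionAnalysis>());
/// \endcode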
template <typename AnalysisT> struct InvalidateAnalysisPass {
/// \brief Run this pass over some unit of IR.
///
/// This pass can be run over any unit of IR and use any analysis manager
/// provided they satisfy the basic API requirements. When this pass is
/// created, these methods can be instantiated to satisfy whatever the
/// context requires.
template <typename IRUnitT, typename AnalysisManagerT>
PreservedAnalyses run(IRUnitT &Arg, AnalysisManagerT *AM) {
if (AM)
// We have to directly invalidate the analysis result as we can't
// enumerate all other analyses and use the preserved set to control it.
(void)AM->template invalidate<AnalysisT>(Arg);
return PreservedAnalyses::all();
}
static StringRef name() { return "InvalidateAnalysisPass"; }
};
/// \brief A utility pass that does nothing, but preserves no analyses.
///
/// As a consequence of not preserving any analyses, this pass will force all
/// analysis passes to be re-run to produce fresh results if any are needed.
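///
/// A usage sketch (hedged; \c MPM is a hypothetical module pass manager): the
/// pass can be added at any level, and returning \c PreservedAnalyses::none()
/// causes the enclosing managers to invalidate all cached analyses once it
/// has run.
///
/// \code
///   MPM.addPass(InvalidateAllAnalysesPass());
/// \endcode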
struct InvalidateAllAnalysesPass {
/// \brief Run this pass over some unit of IR.
template <typename T> PreservedAnalyses run(T &&Arg) {
return PreservedAnalyses::none();
}
static StringRef name() { return "InvalidateAllAnalysesPass"; }
};
}
#endif