//===- LazyCallGraph.h - Analysis of a Module's call graph ------*- C++ -*-===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
/// \file
///
/// Implements a lazy call graph analysis and related passes for the new pass
/// manager.
///
/// NB: This is *not* a traditional call graph! It is a graph which models both
/// the current calls and potential calls. As a consequence there are many
/// edges in this call graph that do not correspond to a 'call' or 'invoke'
/// instruction.
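///
/// For illustration only (a hedged sketch, not part of the original header;
/// the function names are hypothetical): here the graph records an edge from
/// \c caller to \c callee even though \c caller contains no call to it,
/// because the escaped address could later be promoted to a direct call.
/// \code
///   void callee();
///   void dispatch(void (*FP)()) { FP(); }
///   void caller() { dispatch(&callee); }
/// \endcode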
///
/// The primary use case of this graph analysis is to facilitate iterating
/// across the functions of a module in ways that ensure all callees are
/// visited prior to a caller (given any SCC constraints), or vice versa. As
/// such it is particularly well suited to organizing CGSCC optimizations such
/// as inlining, outlining, argument promotion, etc. That is its primary use
/// case and motivates the design. It may not be appropriate for other
/// purposes. The use graph of functions or some other conservative analysis of
/// call instructions may be interesting for optimizations and subsequent
/// analyses which don't work in the context of an overly specified
/// potential-call-edge graph.
///
/// To understand the specific rules and nature of this call graph analysis,
/// see the documentation of the \c LazyCallGraph below.
///
//===----------------------------------------------------------------------===//

#ifndef LLVM_ANALYSIS_LAZYCALLGRAPH_H
#define LLVM_ANALYSIS_LAZYCALLGRAPH_H

#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/PointerUnion.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/iterator.h"
#include "llvm/ADT/iterator_range.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Support/Allocator.h"
#include <iterator>

namespace llvm {
class PreservedAnalyses;
class raw_ostream;

/// \brief A lazily constructed view of the call graph of a module.
///
/// With the edges of this graph, the motivating constraint that we are
/// attempting to maintain is that function-local optimization, CGSCC-local
/// optimizations, and optimizations transforming a pair of functions connected
/// by an edge in the graph, do not invalidate a bottom-up traversal of the SCC
/// DAG. That is, no optimizations will delete, remove, or add an edge such
/// that functions already visited in a bottom-up order of the SCC DAG are no
/// longer valid to have visited, or such that functions not yet visited in
/// a bottom-up order of the SCC DAG are not required to have already been
/// visited.
///
/// Within this constraint, the desire is to minimize the merge points of the
/// SCC DAG. The greater the fanout of the SCC DAG and the fewer merge points
/// in the SCC DAG, the more independence there is in optimizing within it.
/// There is a strong desire to enable parallelization of optimizations over
/// the call graph, and both limited fanout and merge points will (artificially
/// in some cases) limit the scaling of such an effort.
///
/// To this end, the graph represents both direct and any potential resolution
/// to an indirect call edge. Another way to think about it is that it
/// represents both the direct call edges and any direct call edges that might
/// be formed through static optimizations. Specifically, it considers taking
/// the address of a function to be an edge in the call graph because this
/// might be forwarded to become a direct call by some subsequent
/// function-local optimization. The result is that the graph closely follows
/// the use-def edges for functions. Walking "up" the graph can be done by
/// looking at all of the uses of a function.
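///
/// For illustration, a hedged sketch of such an "up" walk (not code from this
/// header; \c visitCaller is a hypothetical callback): the users of a function
/// \c F in the IR identify the functions whose bodies reference \c F, i.e. its
/// (potential) callers.
/// \code
///   for (User *U : F.users())
///     if (auto *I = dyn_cast<Instruction>(U))
///       visitCaller(*I->getParent()->getParent());
/// \endcode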
///
/// The roots of the call graph are the external functions and functions
/// escaped into global variables. Those functions can be called from outside
/// of the module or via unknowable means in the IR -- we may not be able to
/// form even a potential call edge from a function body which may dynamically
/// load the function and call it.
///
/// This analysis still requires updates to remain valid after optimizations
/// which could potentially change the set of potential callees. The
/// constraints it operates under only make the traversal order remain valid.
///
/// The entire analysis must be re-computed if full interprocedural
/// optimizations run at any point. For example, globalopt completely
/// invalidates the information in this analysis.
///
/// FIXME: This class is named LazyCallGraph in a lame attempt to distinguish
/// it from the existing CallGraph. At some point, it is expected that this
/// will be the only call graph and it will be renamed accordingly.
class LazyCallGraph {
public:
  class Node;
  class SCC;
  typedef SmallVector<PointerUnion<Function *, Node *>, 4> NodeVectorT;
  typedef SmallVectorImpl<PointerUnion<Function *, Node *>> NodeVectorImplT;

  /// \brief A lazy iterator used for both the entry nodes and child nodes.
  ///
  /// When this iterator is dereferenced, if not yet available, a function
  /// will be scanned for "calls" or uses of functions and its child
  /// information will be constructed. All of these results are accumulated
  /// and cached in the graph.
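  ///
  /// A hedged usage sketch (not from the original header): given a node
  /// \c N, ranging over it uses this iterator, so the first dereference of
  /// each edge may trigger a scan of the callee while later traversals hit
  /// the cache.
  /// \code
  ///   for (LazyCallGraph::Node &Child : N)
  ///     (void)Child;
  /// \endcode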
  class iterator
      : public iterator_adaptor_base<iterator, NodeVectorImplT::iterator,
                                     std::forward_iterator_tag, Node> {
    friend class LazyCallGraph;
    friend class LazyCallGraph::Node;

    LazyCallGraph *G;
    NodeVectorImplT::iterator E;

    // Build the iterator for a specific position in a node list, skipping
    // over any edges that have been removed (and nulled out) in place.
    iterator(LazyCallGraph &G, NodeVectorImplT::iterator NI,
             NodeVectorImplT::iterator E)
        : iterator_adaptor_base(NI), G(&G), E(E) {
      while (I != E && I->isNull())
        ++I;
    }

  public:
    iterator() {}

    using iterator_adaptor_base::operator++;
    iterator &operator++() {
      do {
        ++I;
      } while (I != E && I->isNull());
      return *this;
    }

    reference operator*() const {
      // If this edge has already been resolved to a node, return it directly.
      if (I->is<Node *>())
        return *I->get<Node *>();

      // Otherwise, lazily build the node for this function, cache it in the
      // edge, and return it.
      Function *F = I->get<Function *>();
      Node &ChildN = G->get(*F);
      *I = &ChildN;
      return ChildN;
    }
  };

  /// \brief A node in the call graph.
  ///
  /// This represents a single node. Its primary roles are to cache the list
  /// of callees, de-duplicate and provide fast testing of whether a function
  /// is a callee, and facilitate iteration of child nodes in the graph.
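  ///
  /// A hedged sketch of typical use (not from the original header; assumes
  /// a graph \c G and a Function \c F, and uses the graph's \c get accessor
  /// referenced elsewhere in this file):
  /// \code
  ///   LazyCallGraph::Node &N = G.get(F);
  ///   for (LazyCallGraph::Node &Callee : N)
  ///     dbgs() << N.getFunction().getName() << " -> "
  ///            << Callee.getFunction().getName() << "\n";
  /// \endcode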
  class Node {
    friend class LazyCallGraph;
    friend class LazyCallGraph::SCC;

    LazyCallGraph *G;
    Function &F;

    // We provide for the DFS numbering and Tarjan walk lowlink numbers to be
    // stored directly within the node.
    int DFSNumber;
    int LowLink;

    mutable NodeVectorT Callees;
    DenseMap<Function *, size_t> CalleeIndexMap;

    /// \brief Basic constructor implements the scanning of F into Callees and
    /// CalleeIndexMap.
    Node(LazyCallGraph &G, Function &F);

    /// \brief Internal helper to insert a callee by function.
    void insertEdgeInternal(Function &Callee);

    /// \brief Internal helper to insert a callee by node.
    void insertEdgeInternal(Node &CalleeN);

    /// \brief Internal helper to remove a callee from this node.
    void removeEdgeInternal(Function &Callee);

  public:
    typedef LazyCallGraph::iterator iterator;

    Function &getFunction() const {
      return F;
    }

    iterator begin() const {
      return iterator(*G, Callees.begin(), Callees.end());
    }
    iterator end() const { return iterator(*G, Callees.end(), Callees.end()); }

    /// Equality is defined as address equality.
    bool operator==(const Node &N) const { return this == &N; }
    bool operator!=(const Node &N) const { return !operator==(N); }
  };

  /// \brief An SCC of the call graph.
  ///
  /// This represents a Strongly Connected Component of the call graph as
  /// a collection of call graph nodes. While the order of nodes in the SCC is
  /// stable, it is not in any particular order.
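  ///
  /// A hedged sketch (not from the original header): iterating an SCC \c C
  /// visits its member nodes as \c Node pointers.
  /// \code
  ///   for (LazyCallGraph::Node *N : C)
  ///     dbgs() << N->getFunction().getName() << "\n";
  /// \endcode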
  class SCC {
    friend class LazyCallGraph;
    friend class LazyCallGraph::Node;

    LazyCallGraph *G;
    SmallPtrSet<SCC *, 1> ParentSCCs;
    SmallVector<Node *, 1> Nodes;

    SCC(LazyCallGraph &G) : G(&G) {}

    void insert(Node &N);

    void
    internalDFS(SmallVectorImpl<std::pair<Node *, Node::iterator>> &DFSStack,
                SmallVectorImpl<Node *> &PendingSCCStack, Node *N,
                SmallVectorImpl<SCC *> &ResultSCCs);

  public:
    typedef SmallVectorImpl<Node *>::const_iterator iterator;
    typedef pointee_iterator<SmallPtrSet<SCC *, 1>::const_iterator>
        parent_iterator;

    iterator begin() const { return Nodes.begin(); }
    iterator end() const { return Nodes.end(); }

    parent_iterator parent_begin() const { return ParentSCCs.begin(); }
    parent_iterator parent_end() const { return ParentSCCs.end(); }

    iterator_range<parent_iterator> parents() const {
      return iterator_range<parent_iterator>(parent_begin(), parent_end());
    }

    /// \brief Test if this SCC is a parent of \a C.
    bool isParentOf(const SCC &C) const { return C.isChildOf(*this); }

    /// \brief Test if this SCC is an ancestor of \a C.
    bool isAncestorOf(const SCC &C) const { return C.isDescendantOf(*this); }

    /// \brief Test if this SCC is a child of \a C.
    bool isChildOf(const SCC &C) const {
      return ParentSCCs.count(const_cast<SCC *>(&C));
    }

    /// \brief Test if this SCC is a descendant of \a C.
    bool isDescendantOf(const SCC &C) const;
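
    /// A hedged sketch of the queries above (not from the original header;
    /// \c C1 and \c C2 are assumed SCC references). Parent links point from
    /// an SCC to the SCCs that call into it.
    /// \code
    ///   if (C1.isAncestorOf(C2)) {
    ///     // Functions in C1 transitively call into C2.
    ///   }
    ///   for (LazyCallGraph::SCC &ParentC : C2.parents()) {
    ///     // Each ParentC directly calls into C2.
    ///   }
    /// \endcode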

    /// \brief Short name useful for debugging or logging.
    ///
    /// We use the name of the first function in the SCC to name the SCC for
    /// the purposes of debugging and logging.
    StringRef getName() const { return (*begin())->getFunction().getName(); }

    ///@{
    /// \name Mutation API
    ///
    /// These methods provide the core API for updating the call graph in the
    /// presence of (potentially still in-flight) DFS-found SCCs.
    ///
    /// Note that these methods sometimes have complex runtimes, so be careful
    /// how you call them.

    /// \brief Insert an edge from one node in this SCC to another in this SCC.
    ///
    /// By the definition of an SCC, this does not change the nature or make-up
    /// of any SCCs.
    void insertIntraSCCEdge(Node &CallerN, Node &CalleeN);

    /// \brief Insert an edge whose tail is in this SCC and head is in some
    /// child SCC.
    ///
    /// There must be an existing path from the caller to the callee. This
    /// operation is inexpensive and does not change the set of SCCs in the
    /// graph.
    void insertOutgoingEdge(Node &CallerN, Node &CalleeN);

    /// \brief Insert an edge whose tail is in a descendant SCC and head is in
    /// this SCC.
    ///
    /// There must be an existing path from the callee to the caller in this
    /// case. NB! This has the potential to be a very expensive function. It
    /// inherently forms a cycle in the prior SCC DAG and we have to merge SCCs
    /// to resolve that cycle. But finding all of the SCCs which participate in
    /// the cycle can in the worst case require traversing every SCC in the
    /// graph. Every attempt is made to avoid that, but passes must still
    /// exercise caution calling this routine repeatedly.
    ///
    /// FIXME: We could possibly optimize this quite a bit for cases where the
    /// caller and callee are very nearby in the graph. See comments in the
    /// implementation for details, but that use case might impact users.
    SmallVector<SCC *, 1> insertIncomingEdge(Node &CallerN, Node &CalleeN);

    /// \brief Remove an edge whose source is in this SCC and target is *not*.
    ///
    /// This removes an inter-SCC edge. All inter-SCC edges originating from
    /// this SCC have been fully explored by any in-flight DFS SCC formation,
    /// so this is always safe to call once you have the source SCC.
    ///
    /// This operation does not change the set of SCCs or the members of the
    /// SCCs and so is very inexpensive. It may change the connectivity graph
    /// of the SCCs though, so be careful calling this while iterating over
    /// them.
    void removeInterSCCEdge(Node &CallerN, Node &CalleeN);

    /// \brief Remove an edge which is entirely within this SCC.
    ///
    /// Both the \a Caller and the \a Callee must be within this SCC. Removing
    /// such an edge may break cycles that form this SCC and thus this
    /// operation may change the SCC graph significantly. In particular, this
    /// operation will re-form new SCCs based on the remaining connectivity of
    /// the graph. The following invariants are guaranteed to hold after
    /// calling this method:
    ///
    /// 1) This SCC is still an SCC in the graph.
    /// 2) This SCC will be the parent of any new SCCs. Thus, this SCC is
    ///    preserved as the root of any new SCC directed graph formed.
    /// 3) No SCC other than this SCC has its member set changed (this is
    ///    inherent in the definition of removing such an edge).
    /// 4) All of the parent links of the SCC graph will be updated to reflect
    ///    the new SCC structure.
    /// 5) All SCCs formed out of this SCC, excluding this SCC, will be
    ///    returned in a vector.
    /// 6) The order of the SCCs in the vector will be a valid postorder
    ///    traversal of the new SCCs.
    ///
    /// These invariants are very important to ensure that we can build
    /// optimization pipelines on top of the CGSCC pass manager which
    /// intelligently update the SCC graph without invalidating other parts of
    /// the SCC graph.
    ///
    /// The runtime complexity of this method is, in the worst case, O(V+E)
    /// where V is the number of nodes in this SCC and E is the number of edges
    /// leaving the nodes in this SCC. Note that E includes both edges within
    /// this SCC and edges from this SCC to child SCCs. Some effort has been
    /// made to minimize the overhead of common cases such as self-edges and
    /// edge removals which result in a spanning tree with no more cycles.
    SmallVector<SCC *, 1> removeIntraSCCEdge(Node &CallerN, Node &CalleeN);
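
    /// A hedged usage sketch for the removal above (not from the original
    /// header; \c C, \c CallerN, and \c CalleeN are assumed to be in scope):
    /// \code
    ///   SmallVector<LazyCallGraph::SCC *, 1> NewSCCs =
    ///       C.removeIntraSCCEdge(CallerN, CalleeN);
    ///   // Per the invariants above, NewSCCs is in valid postorder and does
    ///   // not include C itself, which remains the root.
    ///   for (LazyCallGraph::SCC *NewC : NewSCCs)
    ///     dbgs() << NewC->getName() << "\n";
    /// \endcode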

    ///@}
  };

  /// \brief A post-order depth-first SCC iterator over the call graph.
  ///
  /// This iterator triggers the Tarjan DFS-based formation of the SCC DAG for
  /// the call graph, walking it lazily in depth-first post-order. That is, it
  /// always visits SCCs for a callee prior to visiting the SCC for a caller
  /// (when they are in different SCCs).
  class postorder_scc_iterator
      : public iterator_facade_base<postorder_scc_iterator,
                                    std::forward_iterator_tag, SCC> {
    friend class LazyCallGraph;
    friend class LazyCallGraph::Node;

    /// \brief Nonce type to select the constructor for the end iterator.
    struct IsAtEndT {};

    LazyCallGraph *G;
    SCC *C;

    // Build the begin iterator for a node.
    postorder_scc_iterator(LazyCallGraph &G) : G(&G) {
      C = G.getNextSCCInPostOrder();
    }

    // Build the end iterator for a node. This is selected purely by overload.
    postorder_scc_iterator(LazyCallGraph &G, IsAtEndT /*Nonce*/)
        : G(&G), C(nullptr) {}

  public:
    bool operator==(const postorder_scc_iterator &Arg) const {
      return G == Arg.G && C == Arg.C;
    }

    reference operator*() const { return *C; }

    using iterator_facade_base::operator++;
    postorder_scc_iterator &operator++() {
      C = G->getNextSCCInPostOrder();
      return *this;
    }
  };

  /// \brief Construct a graph for the given module.
  ///
  /// This sets up the graph and computes all of the entry points of the graph.
  /// No function definitions are scanned until their nodes in the graph are
  /// requested during traversal.
  LazyCallGraph(Module &M);

  LazyCallGraph(LazyCallGraph &&G);
  LazyCallGraph &operator=(LazyCallGraph &&RHS);

  iterator begin() {
    return iterator(*this, EntryNodes.begin(), EntryNodes.end());
  }
  iterator end() { return iterator(*this, EntryNodes.end(), EntryNodes.end()); }
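
  // A hypothetical sketch (not part of this header): build the graph for a
  // module and walk its entry nodes; function bodies are only scanned as the
  // traversal reaches them.
  //
  //   LazyCallGraph G(M);
  //   for (LazyCallGraph::Node &N : G)
  //     ... // Visit each entry node of the module.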

  postorder_scc_iterator postorder_scc_begin() {
    return postorder_scc_iterator(*this);
  }
  postorder_scc_iterator postorder_scc_end() {
    return postorder_scc_iterator(*this, postorder_scc_iterator::IsAtEndT());
  }

  iterator_range<postorder_scc_iterator> postorder_sccs() {
    return iterator_range<postorder_scc_iterator>(postorder_scc_begin(),
                                                  postorder_scc_end());
  }
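
  // A hypothetical sketch: visit every SCC callee-first. Iterating the range
  // lazily forms the SCCs via the Tarjan walk.
  //
  //   for (LazyCallGraph::SCC &C : G.postorder_sccs())
  //     ... // All callee SCCs of C have already been visited.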

  /// \brief Lookup a function in the graph which has already been scanned and
  /// added.
  Node *lookup(const Function &F) const { return NodeMap.lookup(&F); }

  /// \brief Lookup a function's SCC in the graph.
  ///
  /// \returns null if the function hasn't been assigned an SCC via the SCC
  /// iterator walk.
  SCC *lookupSCC(Node &N) const { return SCCMap.lookup(&N); }
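
  // A hypothetical sketch: both lookups return null rather than triggering a
  // scan, so check the results before use.
  //
  //   if (LazyCallGraph::Node *N = G.lookup(F))
  //     if (LazyCallGraph::SCC *C = G.lookupSCC(*N))
  //       ... // F has been scanned and assigned an SCC.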

  /// \brief Get a graph node for a given function, scanning it to populate the
  /// graph data as necessary.
  Node &get(Function &F) {
    Node *&N = NodeMap[&F];
    if (N)
      return *N;

    return insertInto(F, N);
  }

  ///@{
  /// \name Pre-SCC Mutation API
  ///
  /// These methods are only valid to call prior to forming any SCCs for this
  /// call graph. They can be used to update the core node-graph during
  /// a node-based inorder traversal that precedes any SCC-based traversal.
  ///
  /// Once you begin manipulating a call graph's SCCs, you must perform all
  /// mutation of the graph via the SCC methods.

  /// \brief Update the call graph after inserting a new edge.
  void insertEdge(Node &Caller, Function &Callee);

  /// \brief Update the call graph after inserting a new edge.
  void insertEdge(Function &Caller, Function &Callee) {
    return insertEdge(get(Caller), Callee);
  }

  /// \brief Update the call graph after deleting an edge.
  void removeEdge(Node &Caller, Function &Callee);

  /// \brief Update the call graph after deleting an edge.
  void removeEdge(Function &Caller, Function &Callee) {
    return removeEdge(get(Caller), Callee);
  }
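
  // A hypothetical sketch (only valid before any SCC formation): mirror an
  // IR-level rewrite that redirects a call from OldCallee to NewCallee.
  //
  //   G.removeEdge(Caller, OldCallee);
  //   G.insertEdge(Caller, NewCallee);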

  ///@}

private:
  /// \brief Allocator that holds all the call graph nodes.
  SpecificBumpPtrAllocator<Node> BPA;

  /// \brief Maps function->node for fast lookup.
  DenseMap<const Function *, Node *> NodeMap;

  /// \brief The entry nodes to the graph.
  ///
  /// These nodes are reachable through "external" means. Put another way, they
  /// escape at the module scope.
  NodeVectorT EntryNodes;

  /// \brief Map of the entry nodes in the graph to their indices in
  /// \c EntryNodes.
  DenseMap<Function *, size_t> EntryIndexMap;

  /// \brief Allocator that holds all the call graph SCCs.
  SpecificBumpPtrAllocator<SCC> SCCBPA;

  /// \brief Maps Node -> SCC for fast lookup.
  DenseMap<Node *, SCC *> SCCMap;

  /// \brief The leaf SCCs of the graph.
  ///
  /// These are all of the SCCs which have no children.
  SmallVector<SCC *, 4> LeafSCCs;

  /// \brief Stack of nodes in the DFS walk.
  SmallVector<std::pair<Node *, iterator>, 4> DFSStack;

  /// \brief Set of entry nodes not-yet-processed into SCCs.
  SmallVector<Function *, 4> SCCEntryNodes;

  /// \brief Stack of nodes the DFS has walked but not yet put into an SCC.
  SmallVector<Node *, 4> PendingSCCStack;

  /// \brief Counter for the next DFS number to assign.
  int NextDFSNumber;

  /// \brief Helper to insert a new function, with an already looked-up entry in
  /// the NodeMap.
  Node &insertInto(Function &F, Node *&MappedN);

  /// \brief Helper to update pointers back to the graph object during moves.
  void updateGraphPtrs();

  /// \brief Helper to form a new SCC out of the top of a DFSStack-like
  /// structure.
  SCC *formSCC(Node *RootN, SmallVectorImpl<Node *> &NodeStack);

  /// \brief Retrieve the next SCC in the post-order walk of the call graph.
  SCC *getNextSCCInPostOrder();
};

// Provide GraphTraits specializations for call graphs.
template <> struct GraphTraits<LazyCallGraph::Node *> {
  typedef LazyCallGraph::Node NodeType;
  typedef LazyCallGraph::iterator ChildIteratorType;

  static NodeType *getEntryNode(NodeType *N) { return N; }
  static ChildIteratorType child_begin(NodeType *N) { return N->begin(); }
  static ChildIteratorType child_end(NodeType *N) { return N->end(); }
};

template <> struct GraphTraits<LazyCallGraph *> {
  typedef LazyCallGraph::Node NodeType;
  typedef LazyCallGraph::iterator ChildIteratorType;

  static NodeType *getEntryNode(NodeType *N) { return N; }
  static ChildIteratorType child_begin(NodeType *N) { return N->begin(); }
  static ChildIteratorType child_end(NodeType *N) { return N->end(); }
};
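
// A hypothetical sketch: the specializations above let generic graph
// algorithms walk the node graph, e.g. a depth-first traversal from a node
// (assuming "llvm/ADT/DepthFirstIterator.h" is included):
//
//   for (LazyCallGraph::Node *N : depth_first(&RootN))
//     ... // Nodes reachable from RootN, visited depth-first.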

/// \brief An analysis pass which computes the call graph for a module.
class LazyCallGraphAnalysis {
public:
  /// \brief Inform generic clients of the result type.
  typedef LazyCallGraph Result;

  static void *ID() { return (void *)&PassID; }

  static StringRef name() { return "Lazy CallGraph Analysis"; }

  /// \brief Compute the \c LazyCallGraph for the module \c M.
  ///
  /// This just builds the set of entry points to the call graph. The rest is
  /// built lazily as it is walked.
  LazyCallGraph run(Module &M) { return LazyCallGraph(M); }

private:
  static char PassID;
};
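
// A hypothetical sketch of client code under the new pass manager (the
// analysis-manager accessor shown is an assumption, not defined here):
//
//   LazyCallGraph &G = AM->getResult<LazyCallGraphAnalysis>(M);
//   for (LazyCallGraph::SCC &C : G.postorder_sccs())
//     ... // Run CGSCC logic over each SCC, callees first.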

/// \brief A pass which prints the call graph to a \c raw_ostream.
///
/// This is primarily useful for testing the analysis.
class LazyCallGraphPrinterPass {
  raw_ostream &OS;

public:
  explicit LazyCallGraphPrinterPass(raw_ostream &OS);

  PreservedAnalyses run(Module &M, ModuleAnalysisManager *AM);

  static StringRef name() { return "LazyCallGraphPrinterPass"; }
};

}

#endif