llvm-6502/lib/Analysis/LazyCallGraph.cpp

[PM] Add a new "lazy" call graph analysis pass for the new pass manager.

The primary motivation for this pass is to separate the call graph analysis used by the new pass manager's CGSCC pass management from the existing call graph analysis pass. That analysis pass is (somewhat unfortunately) over-constrained by the existing CallGraphSCCPassManager requirements. Those requirements make it *really* hard to cleanly layer the needed functionality for the new pass manager on top of the existing analysis.

However, there are also a bunch of things that the pass manager would specifically benefit from doing differently from the existing call graph analysis, and this new implementation tries to address several of them:

- Be lazy about scanning function definitions. The existing pass eagerly scans the entire module to build the initial graph. This new pass is significantly more lazy, and I plan to push this even further to maximize locality during CGSCC walks.

- Don't use a single synthetic node to partition functions with an indirect call from functions whose address is taken. This node creates a huge choke-point which would preclude good parallelization across the fanout of the SCC graph when we got to the point of looking at such changes to LLVM.

- Use a memory dense and lightweight representation of the call graph rather than value handles and tracking call instructions. This will require explicit update calls instead of some updates working transparently, but should end up being significantly more efficient. The explicit update calls ended up being needed in many cases for the existing call graph so we don't really lose anything.

- Doesn't explicitly model SCCs and thus doesn't provide an "identity" for an SCC which is stable across updates. This is essential for the new pass manager to work correctly.

- Only form the graph necessary for traversing all of the functions in an SCC friendly order. This is a much simpler graph structure and should be more memory dense. It does limit the ways in which it is appropriate to use this analysis.

I wish I had a better name than "call graph". I've commented extensively on this aspect.

This is still very much a WIP; in fact, it is really just the initial bits. But it is about the fourth version of the initial bits that I've implemented, with each of the others running into really frustrating problems. This looks like it will actually work, and I'd like to split the actual complexity across commits for the sake of my reviewers. =] The rest of the implementation, along with lots of wiring, will follow somewhat more rapidly now that there is a good path forward.

Naturally, this doesn't impact any of the existing optimizer. This code is specific to the new pass manager.

A bunch of thanks are deserved for the various folks that have helped with the design of this, especially Nick Lewycky who actually sat with me to go through the fundamentals of the final version here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200903 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-06 04:37:03 +00:00
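
For context, here is a minimal sketch of how a module pass under the new pass manager might consume this analysis. It simply mirrors LazyCallGraphPrinterPass::run at the bottom of this file; the pass name and loop body are hypothetical and for illustration only, and it assumes the same headers and using directive as the file itself.

    // Hypothetical example pass (not part of this file): query the analysis
    // and walk the entry nodes of the graph.
    struct ExampleGraphWalkPass {
      raw_ostream &OS;
      explicit ExampleGraphWalkPass(raw_ostream &OS) : OS(OS) {}

      PreservedAnalyses run(Module *M, ModuleAnalysisManager *AM) {
        // The graph is built lazily; only what the walk touches gets scanned.
        LazyCallGraph &G = AM->getResult<LazyCallGraphAnalysis>(M);
        for (LazyCallGraph::iterator I = G.begin(), E = G.end(); I != E; ++I)
          OS << (*I)->getFunction().getName() << "\n";
        return PreservedAnalyses::all();
      }
    };
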
//===- LazyCallGraph.cpp - Analysis of a Module's call graph --------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/ADT/SCCIterator.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Support/CallSite.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/InstVisitor.h"
using namespace llvm;
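
// Walk a worklist of constants, looking through their operands transitively,
// and append every defined function discovered to Callees. CalleeSet
// de-duplicates the callees; Visited de-duplicates the constant walk itself.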
static void findCallees(
    SmallVectorImpl<Constant *> &Worklist, SmallPtrSetImpl<Constant *> &Visited,
    SmallVectorImpl<PointerUnion<Function *, LazyCallGraph::Node *> > &Callees,
    SmallPtrSetImpl<Function *> &CalleeSet) {
  while (!Worklist.empty()) {
    Constant *C = Worklist.pop_back_val();

    if (Function *F = dyn_cast<Function>(C)) {
      // Note that we consider *any* function with a definition to be a viable
      // edge. Even if the function's definition is subject to replacement by
      // some other module (say, a weak definition) there may still be
      // optimizations which essentially speculate based on the definition and
      // a way to check that the specific definition is in fact the one being
      // used. For example, this could be done by moving the weak definition to
      // a strong (internal) definition and making the weak definition be an
      // alias. Then a test of the address of the weak function against the new
      // strong definition's address would be an effective way to determine the
      // safety of optimizing a direct call edge.
      if (!F->isDeclaration() && CalleeSet.insert(F))
        Callees.push_back(F);
      continue;
    }

    for (User::value_op_iterator OI = C->value_op_begin(),
                                 OE = C->value_op_end();
         OI != OE; ++OI)
      if (Visited.insert(cast<Constant>(*OI)))
        Worklist.push_back(cast<Constant>(*OI));
  }
}

LazyCallGraph::Node::Node(LazyCallGraph &G, Function &F) : G(G), F(F) {
  SmallVector<Constant *, 16> Worklist;
  SmallPtrSet<Constant *, 16> Visited;
  // Find all the potential callees in this function. First walk the
  // instructions and add every operand which is a constant to the worklist.
  for (Function::iterator BBI = F.begin(), BBE = F.end(); BBI != BBE; ++BBI)
    for (BasicBlock::iterator II = BBI->begin(), IE = BBI->end(); II != IE;
         ++II)
      for (User::value_op_iterator OI = II->value_op_begin(),
                                   OE = II->value_op_end();
           OI != OE; ++OI)
        if (Constant *C = dyn_cast<Constant>(*OI))
          if (Visited.insert(C))
            Worklist.push_back(C);

  // We've collected all the constant (and thus potentially function or
  // function containing) operands to all of the instructions in the function.
  // Process them (recursively) collecting every function found.
  findCallees(Worklist, Visited, Callees, CalleeSet);
}

LazyCallGraph::Node::Node(LazyCallGraph &G, const Node &OtherN)
    : G(G), F(OtherN.F), CalleeSet(OtherN.CalleeSet) {
  // Loop over the other node's callees, adding the Function*s to our list
  // directly, and recursing to add the Node*s.
  Callees.reserve(OtherN.Callees.size());
  for (NodeVectorImplT::const_iterator OI = OtherN.Callees.begin(),
                                       OE = OtherN.Callees.end();
       OI != OE; ++OI)
    if (Function *Callee = OI->dyn_cast<Function *>())
      Callees.push_back(Callee);
    else
      Callees.push_back(G.copyInto(*OI->get<Node *>()));
}

#if LLVM_HAS_RVALUE_REFERENCES
LazyCallGraph::Node::Node(LazyCallGraph &G, Node &&OtherN)
    : G(G), F(OtherN.F), Callees(std::move(OtherN.Callees)),
      CalleeSet(std::move(OtherN.CalleeSet)) {
  // Loop over our Callees. They've been moved from another node, but we need
  // to move the Node*s to live under our bump ptr allocator.
  for (NodeVectorImplT::iterator CI = Callees.begin(), CE = Callees.end();
       CI != CE; ++CI)
    if (Node *ChildN = CI->dyn_cast<Node *>())
      *CI = G.moveInto(std::move(*ChildN));
}
#endif
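
// Construct the graph over a module. The entry set is seeded with every
// function that has a definition and is not local to the module, plus any
// functions reachable through the initializers of global variables.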
LazyCallGraph::LazyCallGraph(Module &M) : M(M) {
  for (Module::iterator FI = M.begin(), FE = M.end(); FI != FE; ++FI)
    if (!FI->isDeclaration() && !FI->hasLocalLinkage())
      if (EntryNodeSet.insert(&*FI))
        EntryNodes.push_back(&*FI);

  // Now add entry nodes for functions reachable via initializers to globals.
  SmallVector<Constant *, 16> Worklist;
  SmallPtrSet<Constant *, 16> Visited;
  for (Module::global_iterator GI = M.global_begin(), GE = M.global_end();
       GI != GE; ++GI)
    if (GI->hasInitializer())
      if (Visited.insert(GI->getInitializer()))
        Worklist.push_back(GI->getInitializer());

  findCallees(Worklist, Visited, EntryNodes, EntryNodeSet);
}
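
// Copy-construct a graph. Function entries are copied directly; Node entries
// are deep-copied into this graph's allocator via copyInto.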
LazyCallGraph::LazyCallGraph(const LazyCallGraph &G)
    : M(G.M), EntryNodeSet(G.EntryNodeSet) {
  EntryNodes.reserve(G.EntryNodes.size());
  for (NodeVectorImplT::const_iterator EI = G.EntryNodes.begin(),
                                       EE = G.EntryNodes.end();
       EI != EE; ++EI)
    if (Function *Callee = EI->dyn_cast<Function *>())
      EntryNodes.push_back(Callee);
    else
      EntryNodes.push_back(copyInto(*EI->get<Node *>()));
}

#if LLVM_HAS_RVALUE_REFERENCES
// FIXME: This would be crazy simpler if BumpPtrAllocator were movable without
// invalidating any of the allocated memory. We should make that be the case at
// some point and delete this.
LazyCallGraph::LazyCallGraph(LazyCallGraph &&G)
    : M(G.M), EntryNodes(std::move(G.EntryNodes)),
      EntryNodeSet(std::move(G.EntryNodeSet)) {
  // Loop over our EntryNodes. They've been moved from another graph, so we
  // need to move the Node*s to live under our bump ptr allocator. We can just
  // do this in-place.
  for (NodeVectorImplT::iterator EI = EntryNodes.begin(),
                                 EE = EntryNodes.end();
       EI != EE; ++EI)
    if (Node *EntryN = EI->dyn_cast<Node *>())
      *EI = moveInto(std::move(*EntryN));
}
#endif
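
// Allocate a Node for F from the bump allocator, record it in the caller's
// NodeMap slot (MappedN), and construct it in place, scanning F's body.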
LazyCallGraph::Node *LazyCallGraph::insertInto(Function &F, Node *&MappedN) {
  return new (MappedN = BPA.Allocate()) Node(*this, F);
}
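
// Return this graph's copy of OtherN, creating one on first request.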
LazyCallGraph::Node *LazyCallGraph::copyInto(const Node &OtherN) {
  Node *&N = NodeMap[&OtherN.F];
  if (N)
    return N;
  return new (N = BPA.Allocate()) Node(*this, OtherN);
}

#if LLVM_HAS_RVALUE_REFERENCES
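// Like copyInto, but move OtherN's state into a node owned by this graph.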
LazyCallGraph::Node *LazyCallGraph::moveInto(Node &&OtherN) {
  Node *&N = NodeMap[&OtherN.F];
  if (N)
    return N;
  return new (N = BPA.Allocate()) Node(*this, std::move(OtherN));
}
#endif
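
// Out-of-line definition of the analysis pass's static ID; its address is
// what the pass manager uses to identify the analysis.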
char LazyCallGraphAnalysis::PassID;
LazyCallGraphPrinterPass::LazyCallGraphPrinterPass(raw_ostream &OS) : OS(OS) {}
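
// Print the call edges of N, recursing depth first through callees that have
// not yet been printed.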
static void printNodes(raw_ostream &OS, LazyCallGraph::Node &N,
                       SmallPtrSetImpl<LazyCallGraph::Node *> &Printed) {
  // Recurse depth first through the nodes.
  for (LazyCallGraph::iterator I = N.begin(), E = N.end(); I != E; ++I)
    if (Printed.insert(*I))
      printNodes(OS, **I, Printed);

  OS << " Call edges in function: " << N.getFunction().getName() << "\n";
  for (LazyCallGraph::iterator I = N.begin(), E = N.end(); I != E; ++I)
    OS << " -> " << I->getFunction().getName() << "\n";

  OS << "\n";
}
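
// Run the printer: print a header for the module, then the call edges of
// every node reachable from the graph's entry set.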
PreservedAnalyses LazyCallGraphPrinterPass::run(Module *M,
                                                ModuleAnalysisManager *AM) {
  LazyCallGraph &G = AM->getResult<LazyCallGraphAnalysis>(M);

  OS << "Printing the call graph for module: " << M->getModuleIdentifier()
     << "\n\n";

  SmallPtrSet<LazyCallGraph::Node *, 16> Printed;
  for (LazyCallGraph::iterator I = G.begin(), E = G.end(); I != E; ++I)
    if (Printed.insert(*I))
      printNodes(OS, **I, Printed);

  return PreservedAnalyses::all();
}