Writing an LLVM Pass |
Written by Chris Lattner
Introduction - What is a pass? |
All LLVM passes are subclasses of the Pass class, which implement functionality by overriding virtual methods inherited from Pass. Depending on how your pass works, you may be able to inherit from FunctionPass or BasicBlockPass instead, which gives the system more information about what your pass does and how it can be combined with other passes. One of the main features of the LLVM Pass Framework is that it schedules passes to run in an efficient way based on the constraints that your pass has.
We start by showing you how to construct a pass, everything from setting up the code, to compiling, loading, and executing it. After the basics are down, more advanced features are discussed.
Quick Start - Writing hello world |
Setting up the build environment |
# Makefile for hello pass

# Path to top level of LLVM hierarchy
LEVEL = ../../..

# Name of the library to build
LIBRARYNAME = hello

# Build a dynamically loadable shared object
SHARED_LIBRARY = 1

# Include the makefile implementation stuff
include $(LEVEL)/Makefile.common
This makefile specifies that all of the .cpp files in the current directory are to be compiled and linked together into a lib/Debug/libhello.so shared object that can be dynamically loaded by the opt or analyze tools.
Now that we have the build scripts set up, we just need to write the code for the pass itself.
Basic code required |
#include "llvm/Pass.h" #include "llvm/Function.h"Which are needed because we are writing a Pass, and we are operating on Function's.
Next we have:
namespace {
... which starts out an anonymous namespace. Anonymous namespaces are to C++ what the "static" keyword is to C (at global scope). It makes the things declared inside of the anonymous namespace only visible to the current file. If you're not familiar with them, consult a decent C++ book for more information.
Next, we declare our pass itself:
struct Hello : public FunctionPass {
This declares a "Hello" class that is a subclass of FunctionPass. The different builtin pass subclasses are described in detail later, but for now, know that FunctionPass's operate a function at a time.
virtual bool runOnFunction(Function &F) {
  std::cerr << "Hello: " << F.getName() << "\n";
  return false;
}
};  // end of struct Hello

We declare a "runOnFunction" method, which overrides an abstract virtual method inherited from FunctionPass. This is where we are supposed to do our thing, so we just print out our message with the name of each function.
RegisterOpt<Hello> X("hello", "Hello World Pass");
}  // end of anonymous namespace
Lastly, we register our class Hello, giving it a command line argument "hello", and a name "Hello World Pass". There are several different ways of registering your pass, depending on what it is to be used for. For "optimizations" we use the RegisterOpt template.
As a whole, the .cpp file looks like:
#include "llvm/Pass.h" #include "llvm/Function.h" namespace { struct Hello : public FunctionPass { virtual bool runOnFunction(Function &F) { std::cerr << "Hello: " << F.getName() << "\n"; return false; } }; RegisterOpt<Hello> X("hello", "Hello World Pass"); }
Now that it's all together, compile the file with a simple "gmake" command in the local directory and you should get a new "lib/Debug/libhello.so" file. Note that everything in this file is contained in an anonymous namespace: this reflects the fact that passes are self-contained units that do not need external interfaces (although they can have them) to be useful.
Running a pass with opt or analyze |
To test it, follow the example at the end of the Getting Started Guide to compile "Hello World" to LLVM. We can now run the bytecode file (hello.bc) for the program through our transformation like this (of course, any bytecode file will work):
$ opt -load ../../../lib/Debug/libhello.so -hello < hello.bc > /dev/null
Hello: __main
Hello: puts
Hello: main
The '-load' option specifies that 'opt' should load your pass as a shared object, which makes '-hello' a valid command line argument (which is one reason you need to register your pass). Because the hello pass does not modify the program in any interesting way, we just throw away the result of opt (sending it to /dev/null).
To see what happened to the other string you registered, try running opt with the --help option:
$ opt -load ../../../lib/Debug/libhello.so --help
OVERVIEW: llvm .bc -> .bc modular optimizer

USAGE: opt [options] <input bytecode>

OPTIONS:
  Optimizations available:
...
    -funcresolve    - Resolve Functions
    -gcse           - Global Common Subexpression Elimination
    -globaldce      - Dead Global Elimination
    -hello          - Hello World Pass
    -indvars        - Cannonicalize Induction Variables
    -inline         - Function Integration/Inlining
    -instcombine    - Combine redundant instructions
...
The pass name gets added as the information string for your pass, giving some documentation to users of opt. Now that you have a working pass, you would go ahead and make it do the cool transformations you want. Once you get it all working and tested, it may become useful to find out how fast your pass is. The PassManager provides a nice command line option (--time-passes) that allows you to get information about the execution time of your pass along with the other passes you queue up. For example:
$ opt -load ../../../lib/Debug/libhello.so -hello -time-passes < hello.bc > /dev/null
Hello: __main
Hello: puts
Hello: main
===============================================================================
                     ... Pass execution timing report ...
===============================================================================
  Total Execution Time: 0.02 seconds (0.0479059 wall clock)

   ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Pass Name ---
   0.0100 (100.0%)   0.0000 (  0.0%)   0.0100 ( 50.0%)   0.0402 ( 84.0%)  Bytecode Writer
   0.0000 (  0.0%)   0.0100 (100.0%)   0.0100 ( 50.0%)   0.0031 (  6.4%)  Dominator Set Construction
   0.0000 (  0.0%)   0.0000 (  0.0%)   0.0000 (  0.0%)   0.0013 (  2.7%)  Module Verifier
   0.0000 (  0.0%)   0.0000 (  0.0%)   0.0000 (  0.0%)   0.0033 (  6.9%)  Hello World Pass
   0.0100 (100.0%)   0.0100 (100.0%)   0.0200 (100.0%)   0.0479 (100.0%)  TOTAL
As you can see, our implementation above is pretty fast :). The additional passes listed are automatically inserted by the 'opt' tool to verify that the LLVM emitted by your pass is still valid and well formed LLVM, which hasn't been broken somehow. Now that you have seen the basics of the mechanics behind passes, we can talk about some more details of how they work and how to use them.
Pass classes and requirements |
When choosing a superclass for your Pass, you should choose the most specific class possible, while still being able to meet the requirements listed. This gives the LLVM Pass Infrastructure the information necessary to optimize how passes are run, so that the resultant compiler isn't unnecessarily slow.
The ImmutablePass class |
Although this pass class is very infrequently used, it is important for providing information about the current target machine being compiled for, and other static information that can affect the various transformations.
ImmutablePass's never invalidate other transformations, are never invalidated, and are never "run".
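As a rough illustration only (the class and its field below are hypothetical, not part of LLVM), an ImmutablePass subclass is little more than a bag of static data that other passes can query:

#include "llvm/Pass.h"

namespace {
  // Hypothetical example: holds static target information for other passes to
  // read.  There is no run method to override; the pass is never "run".
  struct ExampleTargetInfo : public ImmutablePass {
    unsigned PointerSize;                   // example piece of static configuration
    ExampleTargetInfo() : PointerSize(8) {}
  };
}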
The Pass class |
To write a correct Pass subclass, derive from Pass and override the run method with the following signature:
virtual bool run(Module &M) = 0;
The run method performs the interesting work of the pass, and should return true if the module was modified by the transformation, false otherwise.
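As a minimal sketch modeled on the Hello World pass above (the class name and registration strings are made up for illustration), a whole-module pass looks like this:

#include "llvm/Pass.h"
#include <iostream>

namespace {
  struct HelloModule : public Pass {
    virtual bool run(Module &M) {
      std::cerr << "HelloModule: processing a module\n";
      return false;                        // we did not modify the module
    }
  };

  // Registered the same way as the Hello pass above.
  RegisterOpt<HelloModule> X("hellomodule", "Hello Module Pass (example)");
}

It is loaded and run with opt exactly as shown for the Hello pass.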
The FunctionPass class |
To be explicit, FunctionPass subclasses are not allowed to:

1. Modify a Function other than the one currently being processed.
2. Add or remove Functions from the current Module.
3. Add or remove global variables from the current Module.
4. Maintain state across invocations of runOnFunction (including global data).

Implementing a FunctionPass is usually straightforward (see the Hello World pass for an example). FunctionPass's may override three virtual methods to do their work. All of these methods should return true if they modified the program, or false if they didn't.
virtual bool doInitialization(Module &M);
The doInitialization method is allowed to do most of the things that FunctionPass's are not allowed to do: it can add and remove functions, get pointers to functions, etc. The doInitialization method is designed to do simple initialization that does not depend on the functions being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast).

A good example of how this method should be used is the LowerAllocations pass. This pass converts malloc and free instructions into platform-dependent malloc() and free() function calls. It uses the doInitialization method to get a reference to the malloc and free functions that it needs, adding prototypes to the module if necessary.
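A much-simplified sketch of this pattern (not the actual LowerAllocations code; the class and the choice of looking up "main" are purely illustrative) might look like:

#include "llvm/Pass.h"
#include "llvm/Module.h"
#include "llvm/Function.h"

namespace {
  struct CachesMain : public FunctionPass {
    Function *MainFunc;                 // filled in before any runOnFunction call

    virtual bool doInitialization(Module &M) {
      MainFunc = 0;
      for (Module::iterator I = M.begin(), E = M.end(); I != E; ++I)
        if (I->getName() == "main")
          MainFunc = &*I;               // remember the function for later use
      return false;                     // the Module itself was not changed
    }

    virtual bool runOnFunction(Function &F) {
      // ... per-function work that may consult MainFunc ...
      return false;
    }
  };
}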
virtual bool runOnFunction(Function &F) = 0;
The runOnFunction method must be implemented by your subclass to do the transformation or analysis work of your pass. As usual, a true value should be returned if the function is modified.
virtual bool doFinalization(Module &M);

The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnFunction for every function in the program being compiled.
The BasicBlockPass class |
BasicBlockPass's are useful for traditional local and "peephole" optimizations. They may override the same doInitialization(Module &) and doFinalization(Module &) methods that FunctionPass's have, but they also have the following virtual methods that may be implemented:
virtual bool doInitialization(Function &F);
The doInitialization method is allowed to do most of the things that BasicBlockPass's are not allowed to do, but that FunctionPass's can. The doInitialization method is designed to do simple initialization that does not depend on the BasicBlocks being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast).
virtual bool runOnBasicBlock(BasicBlock &BB) = 0;
Override this function to do the work of the BasicBlockPass. This function is not allowed to inspect or modify basic blocks other than the parameter, and is not allowed to modify the CFG. A true value must be returned if the basic block is modified.

virtual bool doFinalization(Function &F);

The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnBasicBlock for every BasicBlock in the program being compiled. This can be used to perform per-function finalization.
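Putting the three methods together, a minimal BasicBlockPass sketch (the class name is hypothetical) could look like the following; it only inspects the block it is handed and never touches the CFG:

#include "llvm/Pass.h"
#include "llvm/BasicBlock.h"
#include "llvm/Function.h"
#include <iostream>

namespace {
  struct BlockSizePrinter : public BasicBlockPass {
    virtual bool doInitialization(Function &F) {
      std::cerr << "Entering function " << F.getName() << "\n";
      return false;
    }

    virtual bool runOnBasicBlock(BasicBlock &BB) {
      unsigned NumInsts = 0;
      for (BasicBlock::iterator I = BB.begin(), E = BB.end(); I != E; ++I)
        ++NumInsts;                         // count instructions in this block only
      std::cerr << "  block with " << NumInsts << " instructions\n";
      return false;                         // nothing was modified
    }

    virtual bool doFinalization(Function &F) {
      std::cerr << "Leaving function " << F.getName() << "\n";
      return false;
    }
  };
}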
Pass registration |
Passes can be registered in several different ways; depending on the general classification of the pass, you should use the appropriate registration template. For example, optimizations are registered with the RegisterOpt template (as the Hello World pass was above), and analysis group implementations use the RegisterAnalysisGroup template described below.
Regardless of how you register your pass, you must specify at least two parameters. The first parameter is the argument used on the command line to request that the pass be added to a program (for example by opt or analyze). The second parameter is the human-readable name of the pass, which is used for the --help output of programs as well as for debug output generated by the --debug-pass option.

If your pass is constructed by its default constructor, you only ever have to pass these two arguments. If, on the other hand, you require other information (like target specific information), you must pass an additional argument: a pointer to a function used to create the pass. For an example of how this works, look at the LowerAllocations.cpp file.
If a pass is registered to be used by the analyze utility, you should implement the virtual print method:
virtual void print(std::ostream &O, const Module *M) const;
The print method must be implemented by "analyses" in order to print a human readable version of the analysis results. This is useful for debugging an analysis itself, as well as for other people to figure out how an analysis works. The analyze tool uses this method to generate its output.
The ostream parameter specifies the stream to write the results on, and the Module parameter gives a pointer to the top level module of the program that has been analyzed. Note, however, that this pointer may be null in certain circumstances (such as when calling Pass::dump() from a debugger), so it should only be used to enhance debug output; it should not be depended on.
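For illustration, here is a hedged sketch of a small analysis with a print method; the class and its NumFunctions result are hypothetical:

#include "llvm/Pass.h"
#include "llvm/Module.h"
#include <iostream>

namespace {
  struct FunctionCounter : public Pass {
    unsigned NumFunctions;                  // the "analysis result"

    virtual bool run(Module &M) {
      NumFunctions = 0;
      for (Module::iterator I = M.begin(), E = M.end(); I != E; ++I)
        ++NumFunctions;                     // count every function in the module
      return false;                         // analyses do not modify the program
    }

    // Called by the analyze tool to display the results.
    virtual void print(std::ostream &O, const Module *M) const {
      O << "Number of functions: " << NumFunctions << "\n";
      // M may be null (e.g. when called from Pass::dump() in a debugger),
      // so it is only used to enhance output, never relied upon.
    }
  };
}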
Specifying interactions between passes |
Typically, the getAnalysisUsage method is used to require that analysis results are computed before your pass is run. Running arbitrary transformation passes can invalidate the computed analysis results, which is what the invalidation set specifies. If a pass does not implement the getAnalysisUsage method, it defaults to not having any prerequisite passes, and invalidating all other passes.
virtual void getAnalysisUsage(AnalysisUsage &Info) const;
By implementing the getAnalysisUsage method, the required and invalidated sets may be specified for your transformation. The implementation should fill in the AnalysisUsage object with information about which passes are required and not invalidated. To do this, the following set methods are provided by the AnalysisUsage class:
// addRequired - Add the specified pass to the required set for your pass.
template<class PassClass>
AnalysisUsage &AnalysisUsage::addRequired();

// addPreserved - Add the specified pass to the set of analyses preserved by
// this pass.
template<class PassClass>
AnalysisUsage &AnalysisUsage::addPreserved();

// setPreservesAll - Call this if the pass does not modify its input at all.
void AnalysisUsage::setPreservesAll();

// setPreservesCFG - This method should be called by a pass if and only if it
// does not:
//
//   1. Add or remove basic blocks from the function.
//   2. Modify terminator instructions in any way.
//
// This is automatically implied for BasicBlockPass's.
//
void AnalysisUsage::setPreservesCFG();
Some examples of how to use these methods are:
// This is an example implementation from an analysis, which does not modify
// the program at all, yet has a prerequisite.
void PostDominanceFrontier::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.setPreservesAll();
  AU.addRequired<PostDominatorTree>();
}
and:
// This example modifies the program, but does not modify the CFG
void LICM::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.setPreservesCFG();
  AU.addRequired<LoopInfo>();
}
template<typename AnalysisType>
AnalysisType &getAnalysis();
This method call returns a reference to the pass desired. You may get a runtime assertion failure if you attempt to get an analysis that you did not declare as required in your getAnalysisUsage implementation. This method can be called by your run* method implementation, or by any other local method invoked by your run* method.
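As a hedged sketch (the pass itself is hypothetical; LoopInfo is the same analysis required in the LICM example above), requesting and then retrieving an analysis looks like this:

#include "llvm/Pass.h"
#include "llvm/Function.h"
#include "llvm/Analysis/LoopInfo.h"

namespace {
  struct UsesLoops : public FunctionPass {
    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
      AU.addRequired<LoopInfo>();          // declare the prerequisite...
    }

    virtual bool runOnFunction(Function &F) {
      // ...which makes this call legal; it would assert at runtime otherwise.
      LoopInfo &LI = getAnalysis<LoopInfo>();
      // ... use LI to drive the transformation on F ...
      return false;
    }
  };
}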
Implementing Analysis Groups |
In particular, some analyses are defined such that there is a single simple interface to the analysis results, but multiple ways of calculating them. Consider alias analysis, for example. The most trivial alias analysis returns "may alias" for any alias query. The most sophisticated analysis is a flow-sensitive, context-sensitive interprocedural analysis that can take a significant amount of time to execute (and obviously, there is a lot of room between these two extremes for other implementations). To cleanly support situations like this, the LLVM Pass Infrastructure supports the notion of Analysis Groups.

Analysis groups are used by client passes just like other passes are: through the AnalysisUsage::addRequired() and Pass::getAnalysis() methods. In order to resolve this requirement, the PassManager scans the available passes to see if any implementations of the analysis group are available. If none is available, the default implementation is created for the pass to use. All standard rules for interaction between passes still apply.
Although Pass Registration is optional for normal passes, all analysis group implementations must be registered, and must use the RegisterAnalysisGroup template to join the implementation pool. Also, a default implementation of the interface must be registered with RegisterAnalysisGroup.
As a concrete example of an Analysis Group in action, consider the AliasAnalysis analysis group. The default implementation of the alias analysis interface (the basicaa pass) just does a few simple checks that don't require significant analysis to compute (such as: two different globals can never alias each other, etc). Passes that use the AliasAnalysis interface (for example the gcse pass), do not care which implementation of alias analysis is actually provided, they just use the designated interface.
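A hedged sketch of such a client (the pass is hypothetical; the include path is assumed to match the AliasAnalysis header in this source tree) requires only the group interface, never a particular implementation:

#include "llvm/Pass.h"
#include "llvm/Function.h"
#include "llvm/Analysis/AliasAnalysis.h"

namespace {
  struct UsesAliasInfo : public FunctionPass {
    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
      AU.addRequired<AliasAnalysis>();     // require the interface, not basicaa or somefancyaa
    }

    virtual bool runOnFunction(Function &F) {
      // Whichever implementation the PassManager scheduled is returned here.
      AliasAnalysis &AA = getAnalysis<AliasAnalysis>();
      // ... issue alias queries through AA ...
      return false;
    }
  };
}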
From the user's perspective, commands work just like normal. Issuing the command 'opt -gcse ...' will cause the basicaa class to be instantiated and added to the pass sequence. Issuing the command 'opt -somefancyaa -gcse ...' will cause the gcse pass to use the somefancyaa alias analysis (which doesn't actually exist, it's just a hypothetical example) instead.
The AliasAnalysis analysis group itself is registered (and given a human-readable name) like this:

static RegisterAnalysisGroup<AliasAnalysis> A("Alias Analysis");
Once the analysis is registered, passes can declare that they are valid implementations of the interface by using the following code:
namespace {
  // Analysis Group implementations must be registered normally...
  RegisterOpt<FancyAA>
    B("somefancyaa", "A more complex alias analysis implementation");

  // Declare that we implement the AliasAnalysis interface
  RegisterAnalysisGroup<AliasAnalysis, FancyAA> C;
}
This just shows a class FancyAA that is registered normally, then uses the RegisterAnalysisGroup template to "join" the AliasAnalysis analysis group. Every implementation of an analysis group should join using this template. A single pass may join multiple different analysis groups with no problem.
namespace {
  // Analysis Group implementations must be registered normally...
  RegisterOpt<BasicAliasAnalysis>
    D("basicaa", "Basic Alias Analysis (default AA impl)");

  // Declare that we implement the AliasAnalysis interface
  RegisterAnalysisGroup<AliasAnalysis, BasicAliasAnalysis, true> E;
}
Here we show how the default implementation is specified (using the extra argument to the RegisterAnalysisGroup template). There must be exactly one default implementation available at all times for an Analysis Group to be used. Here we declare that the BasicAliasAnalysis pass is the default implementation for the interface.
What PassManager does |
The PassManager does two main things to try to reduce the execution time of a series of passes: it shares analysis results between passes, avoiding recomputation whenever possible, and it pipelines the execution of FunctionPass's, running all of the scheduled passes over one function before moving on to the next function (rather than running each pass over the entire program in turn).
This improves the cache behavior of the compiler, because it is only touching the LLVM program representation for a single function at a time, instead of traversing the entire program. It reduces the memory consumption of the compiler, because, for example, only one DominatorSet needs to be calculated at a time. This also makes possible some interesting enhancements in the future.
The effectiveness of the PassManager is influenced directly by how much information it has about the behaviors of the passes it is scheduling. For example, the "preserved" set is intentionally conservative in the face of an unimplemented getAnalysisUsage method. Not implementing getAnalysisUsage when it should be implemented has the effect of not allowing any analysis results to live across the execution of your pass.

The PassManager class exposes a --debug-pass command line option that is useful for debugging pass execution, seeing how things work, and diagnosing when you should be preserving more analyses than you currently are. (To get information about all of the variants of the --debug-pass option, just type 'opt --help-hidden'.)

By using the --debug-pass=Structure option, for example, we can see how our Hello World pass interacts with other passes. Let's try it out with the gcse and licm passes:
$ opt -load ../../../lib/Debug/libhello.so -gcse -licm --debug-pass=Structure < hello.bc > /dev/null
Module Pass Manager
  Function Pass Manager
    Dominator Set Construction
    Immediate Dominators Construction
    Global Common Subexpression Elimination
--  Immediate Dominators Construction
--  Global Common Subexpression Elimination
    Natural Loop Construction
    Loop Invariant Code Motion
--  Natural Loop Construction
--  Loop Invariant Code Motion
    Module Verifier
--  Dominator Set Construction
--  Module Verifier
  Bytecode Writer
--Bytecode Writer
This output shows us when passes are constructed and when the analysis results are known to be dead (prefixed with '--'). Here we see that GCSE uses dominator and immediate dominator information to do its job. The LICM pass uses natural loop information, which uses dominator sets, but not immediate dominators. Because immediate dominators are no longer useful after the GCSE pass, it is immediately destroyed. The dominator sets are then reused to compute natural loop information, which is then used by the LICM pass.
After the LICM pass, the module verifier runs (which is automatically added by the 'opt' tool), which uses the dominator set to check that the resultant LLVM code is well formed. After it finishes, the dominator set information is destroyed, after being computed once, and shared by three passes.
Let's see how this changes when we run the Hello World pass in between the two passes:
$ opt -load ../../../lib/Debug/libhello.so -gcse -hello -licm --debug-pass=Structure < hello.bc > /dev/null
Module Pass Manager
  Function Pass Manager
    Dominator Set Construction
    Immediate Dominators Construction
    Global Common Subexpression Elimination
--  Dominator Set Construction
--  Immediate Dominators Construction
--  Global Common Subexpression Elimination
    Hello World Pass
--  Hello World Pass
    Dominator Set Construction
    Natural Loop Construction
    Loop Invariant Code Motion
--  Natural Loop Construction
--  Loop Invariant Code Motion
    Module Verifier
--  Dominator Set Construction
--  Module Verifier
  Bytecode Writer
--Bytecode Writer
Hello: __main
Hello: puts
Hello: main
Here we see that the Hello World pass has killed the Dominator Set pass, even though it doesn't modify the code at all! To fix this, we need to add the following getAnalysisUsage method to our pass:
// We don't modify the program, so we preserve all analyses
virtual void getAnalysisUsage(AnalysisUsage &AU) const {
  AU.setPreservesAll();
}
Now when we run our pass, we get this output:
$ opt -load ../../../lib/Debug/libhello.so -gcse -hello -licm --debug-pass=Structure < hello.bc > /dev/null
Pass Arguments:  -gcse -hello -licm
Module Pass Manager
  Function Pass Manager
    Dominator Set Construction
    Immediate Dominators Construction
    Global Common Subexpression Elimination
--  Immediate Dominators Construction
--  Global Common Subexpression Elimination
    Hello World Pass
--  Hello World Pass
    Natural Loop Construction
    Loop Invariant Code Motion
--  Loop Invariant Code Motion
--  Natural Loop Construction
    Module Verifier
--  Dominator Set Construction
--  Module Verifier
  Bytecode Writer
--Bytecode Writer
Hello: __main
Hello: puts
Hello: main
Which shows that we don't accidentally invalidate dominator information anymore, and therefore do not have to compute it twice.
virtual void releaseMemory();
The PassManager automatically determines when to compute analysis results, and how long to keep them around for. Because the lifetime of the pass object itself is effectively the entire duration of the compilation process, we need some way to free analysis results when they are no longer useful. The releaseMemory virtual method is the way to do this.
If you are writing an analysis or any other pass that retains a significant amount of state (for use by another pass which "requires" your pass and uses the getAnalysis method), you should implement releaseMemory to, well, release the memory allocated to maintain this internal state. This method is called after the run* method for the class, before the next call of run* in your pass.
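A hedged sketch of the pattern (the analysis and its result data are hypothetical): state is built up in runOnFunction for consumers to query, and thrown away in releaseMemory when the PassManager decides the results are dead:

#include "llvm/Pass.h"
#include "llvm/Function.h"
#include <vector>

namespace {
  struct ExampleAnalysis : public FunctionPass {
    std::vector<BasicBlock*> InterestingBlocks;   // the "analysis result" consumers read

    virtual bool runOnFunction(Function &F) {
      InterestingBlocks.clear();
      for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
        InterestingBlocks.push_back(&*BB);        // ... record whatever is interesting ...
      return false;                               // analyses never modify the program
    }

    // Called by the PassManager once the results are known to be dead.
    virtual void releaseMemory() {
      InterestingBlocks.clear();
    }
  };
}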
Using GDB with dynamically loaded passes |
For sake of discussion, I'm going to assume that you are debugging a transformation invoked by opt, although nothing described here depends on that.
$ gdb opt
GNU gdb 5.0
Copyright 2000 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.6"...
(gdb)
Note that opt has a lot of debugging information in it, so it takes time to load. Be patient. Since we cannot set a breakpoint in our pass yet (the shared object isn't loaded until runtime), we must execute the process, and have it stop before it invokes our pass, but after it has loaded the shared object. The most foolproof way of doing this is to set a breakpoint in PassManager::run and then run the process with the arguments you want:
(gdb) break PassManager::run
Breakpoint 1 at 0x2413bc: file Pass.cpp, line 70.
(gdb) run test.bc -load /shared/lattner/cvs/llvm/lib/Debug/[libname].so -[passoption]
Starting program: /shared/lattner/cvs/llvm/tools/Debug/opt test.bc -load /shared/lattner/cvs/llvm/lib/Debug/[libname].so -[passoption]
Breakpoint 1, PassManager::run (this=0xffbef174, M=@0x70b298) at Pass.cpp:70
70      bool PassManager::run(Module &M) { return PM->run(M); }
(gdb)

Once opt stops in the PassManager::run method, you are free to set breakpoints in your pass so that you can trace through execution or do other standard debugging stuff.
Future extensions planned |
One planned extension is support for parallel (SMP) compilation, with the PassManager itself running passes over different parts of the program concurrently. This implementation would prevent each of the passes from having to implement multithreaded constructs, requiring only the LLVM core to have locking in a few places (for global resources). Although this is a simple extension, we simply haven't had time (or multiprocessor machines, thus a reason) to implement this. Despite that, we have kept the LLVM passes SMP ready, and you should too.

Another planned change addresses memory usage: eventually the PassManager class will accept a ModuleSource object instead of a Module itself. When complete, this will also allow for streaming of functions out of the bytecode representation, allowing us to avoid holding the entire program in memory at once if we are only dealing with FunctionPass's.
As part of a different issue, eventually the bytecode loader will be extended to allow on-demand loading of functions from the bytecode representation, in order to better support the runtime reoptimizer. The bytecode format is already capable of this, the loader just needs to be reworked a bit.
Note that it is no problem for a FunctionPass to require the results of a Pass, only the other way around.