This document is meant to highlight some of the important classes and interfaces available in the LLVM source-base. This manual is not intended to explain what LLVM is, how it works, and what LLVM code looks like. It assumes that you know the basics of LLVM and are interested in writing transformations or otherwise analyzing or manipulating the code.
This document should get you oriented so that you can find your way in the continuously growing source code that makes up the LLVM infrastructure. Note that this manual is not intended to serve as a replacement for reading the source code, so if you think there should be a method in one of these classes to do something, but it's not listed, check the source. Links to the doxygen sources are provided to make this as easy as possible.
The first section of this document describes general information that is useful to know when working in the LLVM infrastructure, and the second describes the Core LLVM classes. In the future this manual will be extended with information describing how to use extension libraries, such as dominator information, CFG traversal routines, and useful utilities like the InstVisitor template.
This section contains general information that is useful if you are working in the LLVM source-base, but that isn't specific to any particular API.
LLVM makes heavy use of the C++ Standard Template Library (STL), perhaps much more than you are used to, or have seen before. Because of this, you might want to do a little background reading in the techniques used and capabilities of the library. There are many good pages that discuss the STL, and several books on the subject that you can get, so it will not be discussed in this document.
Here are some useful links:
You are also encouraged to take a look at the LLVM Coding Standards guide which focuses on how to write maintainable code more than where to put your curly braces.
Here we highlight some LLVM APIs that are generally useful and good to know about when writing transformations.
The LLVM source-base makes extensive use of a custom form of RTTI. These templates have many similarities to the C++ dynamic_cast<> operator, but they don't have some drawbacks (primarily stemming from the fact that dynamic_cast<> only works on classes that have a v-table). Because they are used so often, you must know what they do and how they work. All of these templates are defined in the llvm/Support/Casting.h file (note that you very rarely have to include this file directly).
The isa<> operator works exactly like the Java "instanceof" operator. It returns true or false depending on whether a reference or pointer points to an instance of the specified class. This can be very useful for constraint checking of various sorts (example below).
The cast<> operator is a "checked cast" operation. It converts a pointer or reference from a base class to a derived class, causing an assertion failure if it is not really an instance of the right type. This should be used in cases where you have some information that makes you believe that something is of the right type. An example of the isa<> and cast<> templates is:
static bool isLoopInvariant(const Value *V, const Loop *L) {
  if (isa<Constant>(V) || isa<Argument>(V) || isa<GlobalValue>(V))
    return true;

  // Otherwise, it must be an instruction...
  return !L->contains(cast<Instruction>(V)->getParent());
}
Note that you should not use an isa<> test followed by a cast<>; for that use the dyn_cast<> operator.
The dyn_cast<> operator is a "checking cast" operation. It checks to see if the operand is of the specified type, and if so, returns a pointer to it (this operator does not work with references). If the operand is not of the correct type, a null pointer is returned. Thus, this works very much like the dynamic_cast<> operator in C++, and should be used in the same circumstances. Typically, the dyn_cast<> operator is used in an if statement or some other flow control statement like this:
if (AllocationInst *AI = dyn_cast<AllocationInst>(Val)) {
  // ...
}
This form of the if statement effectively combines together a call to isa<> and a call to cast<> into one statement, which is very convenient.
Note that the dyn_cast<> operator, like C++'s dynamic_cast<> or Java's instanceof operator, can be abused. In particular, you should not use big chained if/then/else blocks to check for lots of different variants of classes. If you find yourself wanting to do this, it is much cleaner and more efficient to use the InstVisitor class to dispatch over the instruction type directly.
The cast_or_null<> operator works just like the cast<> operator, except that it allows for a null pointer as an argument (which it then propagates). This can sometimes be useful, allowing you to combine several null checks into one.
The dyn_cast_or_null<> operator works just like the dyn_cast<> operator, except that it allows for a null pointer as an argument (which it then propagates). This can sometimes be useful, allowing you to combine several null checks into one.
These five templates can be used with any classes, whether they have a v-table or not. To add support for these templates, you simply need to add classof static methods to the class you are interested in casting to. Describing this in detail is currently outside the scope of this document, but there are lots of examples in the LLVM source base.
Often when working on your pass you will put a bunch of debugging printouts and other code into your pass. After you get it working, you want to remove it, but you may need it again in the future (to work out new bugs that you run across).
Naturally, because of this, you don't want to delete the debug printouts, but you don't want them to always be noisy. A standard compromise is to comment them out, allowing you to enable them if you need them in the future.
The "llvm/Support/Debug.h" file provides a macro named DEBUG() that is a much nicer solution to this problem. Basically, you can put arbitrary code into the argument of the DEBUG macro, and it is only executed if 'opt' (or any other tool) is run with the '-debug' command line argument:
DEBUG(std::cerr << "I am here!\n");
Then you can run your pass like this:
$ opt < a.bc > /dev/null -mypass
<no output>
$ opt < a.bc > /dev/null -mypass -debug
I am here!
Using the DEBUG() macro instead of a home-brewed solution allows you to not have to create "yet another" command line option for the debug output for your pass. Note that DEBUG() macros are disabled for optimized builds, so they do not cause a performance impact at all (for the same reason, they should also not contain side-effects!).
One additional nice thing about the DEBUG() macro is that you can enable or disable it directly in gdb. Just use "set DebugFlag=0" or "set DebugFlag=1" from within gdb if the program is running. If the program hasn't been started yet, you can always just run it with -debug.
Sometimes you may find yourself in a situation where enabling -debug just turns on too much information (such as when working on the code generator). If you want to enable debug information with more fine-grained control, you define the DEBUG_TYPE macro and use the -debug-only option as follows:
DEBUG(std::cerr << "No debug type\n");
#undef  DEBUG_TYPE
#define DEBUG_TYPE "foo"
DEBUG(std::cerr << "'foo' debug type\n");
#undef  DEBUG_TYPE
#define DEBUG_TYPE "bar"
DEBUG(std::cerr << "'bar' debug type\n");
#undef  DEBUG_TYPE
#define DEBUG_TYPE ""
DEBUG(std::cerr << "No debug type (2)\n");
Then you can run your pass like this:
$ opt < a.bc > /dev/null -mypass
<no output>
$ opt < a.bc > /dev/null -mypass -debug
No debug type
'foo' debug type
'bar' debug type
No debug type (2)
$ opt < a.bc > /dev/null -mypass -debug-only=foo
'foo' debug type
$ opt < a.bc > /dev/null -mypass -debug-only=bar
'bar' debug type
Of course, in practice, you should only set DEBUG_TYPE at the top of a file, to specify the debug type for the entire module (if you do this before you #include "llvm/Support/Debug.h", you don't have to insert the ugly #undef's). Also, you should use names more meaningful than "foo" and "bar", because there is no system in place to ensure that names do not conflict. If two different modules use the same string, they will all be turned on when the name is specified. This allows, for example, all debug information for instruction scheduling to be enabled with -debug-only=InstrSched, even if the source lives in multiple files.
The "llvm/ADT/Statistic.h" file provides a template named Statistic that is used as a unified way to keep track of what the LLVM compiler is doing and how effective various optimizations are. It is useful to see what optimizations are contributing to making a particular program run faster.
Often you may run your pass on some big program, and you're interested to see how many times it makes a certain transformation. Although you can do this with hand inspection, or some ad-hoc method, this is a real pain and not very useful for big programs. Using the Statistic template makes it very easy to keep track of this information, and the calculated information is presented in a uniform manner with the rest of the passes being executed.
There are many examples of Statistic uses, but the basics of using it are as follows:
Define your statistic like this:
static Statistic<> NumXForms("mypassname", "The # of times I did stuff");
The Statistic template can emulate just about any data-type, but if you do not specify a template argument, it defaults to acting like an unsigned int counter (this is usually what you want).
Whenever you make a transformation, bump the counter:
++NumXForms; // I did stuff!
That's all you have to do. To get 'opt' to print out the statistics gathered, use the '-stats' option:
$ opt -stats -mypassname < program.bc > /dev/null
... statistics output ...
When running gccas on a C file from the SPEC benchmark suite, it gives a report that looks like this:
   7646 bytecodewriter - Number of normal instructions
    725 bytecodewriter - Number of oversized instructions
 129996 bytecodewriter - Number of bytecode bytes written
   2817 raise          - Number of insts DCEd or constprop'd
   3213 raise          - Number of cast-of-self removed
   5046 raise          - Number of expression trees converted
     75 raise          - Number of other getelementptr's formed
    138 raise          - Number of load/store peepholes
     42 deadtypeelim   - Number of unused typenames removed from symtab
    392 funcresolve    - Number of varargs functions resolved
     27 globaldce      - Number of global variables removed
      2 adce           - Number of basic blocks removed
    134 cee            - Number of branches revectored
     49 cee            - Number of setcc instruction eliminated
    532 gcse           - Number of loads removed
   2919 gcse           - Number of instructions removed
     86 indvars        - Number of canonical indvars added
     87 indvars        - Number of aux indvars removed
     25 instcombine    - Number of dead inst eliminate
    434 instcombine    - Number of insts combined
    248 licm           - Number of load insts hoisted
   1298 licm           - Number of insts hoisted to a loop pre-header
      3 licm           - Number of insts hoisted to multiple loop preds (bad, no loop pre-header)
     75 mem2reg        - Number of alloca's promoted
   1444 cfgsimplify    - Number of blocks simplified
Obviously, with so many optimizations, having a unified framework for this stuff is very nice. Making your pass fit well into the framework makes it more maintainable and useful.
Several of the important data structures in LLVM are graphs: for example CFGs made out of LLVM BasicBlocks, CFGs made out of LLVM MachineBasicBlocks, and Instruction Selection DAGs. In many cases, while debugging various parts of the compiler, it is nice to instantly visualize these graphs.
LLVM provides several callbacks that are available in a debug build to do exactly that. If you call the Function::viewCFG() method, for example, the current LLVM tool will pop up a window containing the CFG for the function where each basic block is a node in the graph, and each node contains the instructions in the block. Similarly, there also exists Function::viewCFGOnly() (does not include the instructions), the MachineFunction::viewCFG() and MachineFunction::viewCFGOnly(), and the SelectionDAG::viewGraph() methods. Within GDB, for example, you can usually use something like call DAG.viewGraph() to pop up a window. Alternatively, you can sprinkle calls to these functions in your code in places you want to debug.
Getting this to work requires a small amount of configuration. On Unix systems with X11, install the graphviz toolkit, and make sure 'dot' and 'gv' are in your path. If you are running on Mac OS X, download and install the Mac OS X Graphviz program, and add /Applications/Graphviz.app/Contents/MacOS/ (or wherever you installed it) to your path. Once your system and path are set up, rerun the LLVM configure script and rebuild LLVM to enable this functionality.
SelectionDAG has been extended to make it easier to locate interesting nodes in large complex graphs. From gdb, if you call DAG.setGraphColor(node, "color"), then the next call to DAG.viewGraph() will highlight the node in the specified color (choices of colors can be found at Colors). More complex node attributes can be provided with call DAG.setGraphAttrs(node, "attributes") (choices can be found at Graph Attributes). If you want to restart and clear all the current graph attributes, you can call DAG.clearGraphAttrs().
This section describes how to perform some very simple transformations of LLVM code. This is meant to give examples of common idioms used, showing the practical side of LLVM transformations.
Because this is a "how-to" section, you should also read about the main classes that you will be working with. The Core LLVM Class Hierarchy Reference contains details and descriptions of the main classes that you should know about.
The LLVM compiler infrastructure has many different data structures that may be traversed. Following the example of the C++ standard template library, the techniques used to traverse these various data structures are all basically the same. For an enumerable sequence of values, the XXXbegin() function (or method) returns an iterator to the start of the sequence, the XXXend() function returns an iterator pointing to one past the last valid element of the sequence, and there is some XXXiterator data type that is common between the two operations.
Because the pattern for iteration is common across many different aspects of the program representation, the standard template library algorithms may be used on them, and it is easier to remember how to iterate. First we show a few common examples of the data structures that need to be traversed. Other data structures are traversed in very similar ways.
It's quite common to have a Function instance that you'd like to transform in some way; in particular, you'd like to manipulate its BasicBlocks. To facilitate this, you'll need to iterate over all of the BasicBlocks that constitute the Function. The following is an example that prints the name of a BasicBlock and the number of Instructions it contains:
// func is a pointer to a Function instance
for (Function::iterator i = func->begin(), e = func->end(); i != e; ++i)
  // Print out the name of the basic block if it has one, and then the
  // number of instructions that it contains
  std::cerr << "Basic block (name=" << i->getName() << ") has "
            << i->size() << " instructions.\n";
Note that i can be used as if it were a pointer for the purposes of invoking member functions of the BasicBlock class. This is because the indirection operator is overloaded for the iterator classes. In the above code, the expression i->size() is exactly equivalent to (*i).size(), just as you'd expect.
Just like when dealing with BasicBlocks in Functions, it's easy to iterate over the individual instructions that make up BasicBlocks. Here's a code snippet that prints out each instruction in a BasicBlock:
// blk is a pointer to a BasicBlock instance
for (BasicBlock::iterator i = blk->begin(), e = blk->end(); i != e; ++i)
  // The next statement works since operator<<(ostream&,...)
  // is overloaded for Instruction&
  std::cerr << *i << "\n";
However, this isn't really the best way to print out the contents of a BasicBlock! Since the ostream operators are overloaded for virtually anything you'll care about, you could have just invoked the print routine on the basic block itself: std::cerr << *blk << "\n";.
If you're finding that you commonly iterate over a Function's BasicBlocks and then that BasicBlock's Instructions, InstIterator should be used instead. You'll need to include llvm/Support/InstIterator.h, and then instantiate InstIterators explicitly in your code. Here's a small example that shows how to dump all instructions in a function to the standard error stream:
#include "llvm/Support/InstIterator.h"

// F is a ptr to a Function instance
for (inst_iterator i = inst_begin(F), e = inst_end(F); i != e; ++i)
  std::cerr << *i << "\n";
Easy, isn't it? You can also use InstIterators to fill a worklist with its initial contents. For example, if you wanted to initialize a worklist to contain all instructions in a Function F, all you would need to do is something like:
std::set<Instruction*> worklist;
worklist.insert(inst_begin(F), inst_end(F));
The STL set worklist would now contain all instructions in the Function pointed to by F.
Sometimes, it'll be useful to grab a reference (or pointer) to a class instance when all you've got at hand is an iterator. Well, extracting a reference or a pointer from an iterator is very straight-forward. Assuming that i is a BasicBlock::iterator and j is a BasicBlock::const_iterator:
Instruction& inst = *i;        // Grab reference to instruction
Instruction* pinst = &*i;      // Grab pointer to instruction
const Instruction& cinst = *j; // Grab const reference through a const_iterator
However, the iterators you'll be working with in the LLVM framework are special: they will automatically convert to a ptr-to-instance type whenever they need to. Instead of dereferencing the iterator and then taking the address of the result, you can simply assign the iterator to the proper pointer type and you get the dereference and address-of operation as a result of the assignment (behind the scenes, this is a result of overloading casting mechanisms). Thus the last line of the last example,
Instruction* pinst = &*i;
is semantically equivalent to
Instruction* pinst = i;
It's also possible to turn a class pointer into the corresponding iterator, and this is a constant time operation (very efficient). The following code snippet illustrates use of the conversion constructors provided by LLVM iterators. By using these, you can explicitly grab the iterator of something without actually obtaining it via iteration over some structure:
void printNextInstruction(Instruction* inst) {
  BasicBlock::iterator it(inst);
  ++it; // After this line, it refers to the instruction after *inst
  if (it != inst->getParent()->end())
    std::cerr << *it << "\n";
}
Say that you're writing a FunctionPass and would like to count all the locations in the entire module (that is, across every Function) where a certain function (i.e., some Function*) is already in scope. As you'll learn later, you may want to use an InstVisitor to accomplish this in a much more straight-forward manner, but this example will allow us to explore how you'd do it if you didn't have InstVisitor around. In pseudocode, this is what we want to do:
initialize callCounter to zero
for each Function f in the Module
  for each BasicBlock b in f
    for each Instruction i in b
      if (i is a CallInst and calls the given function)
        increment callCounter
And the actual code is (remember, because we're writing a FunctionPass, our FunctionPass-derived class simply has to override the runOnFunction method):
Function* targetFunc = ...;

class OurFunctionPass : public FunctionPass {
public:
  OurFunctionPass(): callCounter(0) { }

  virtual bool runOnFunction(Function& F) {
    for (Function::iterator b = F.begin(), be = F.end(); b != be; ++b) {
      for (BasicBlock::iterator i = b->begin(), ie = b->end(); i != ie; ++i) {
        if (CallInst* callInst = dyn_cast<CallInst>(&*i)) {
          // We know we've encountered a call instruction, so we
          // need to determine if it's a call to the
          // function pointed to by targetFunc or not.
          if (callInst->getCalledFunction() == targetFunc)
            ++callCounter;
        }
      }
    }
    return false; // We did not modify the function
  }

private:
  unsigned callCounter;
};
You may have noticed that the previous example was a bit oversimplified in that it did not deal with call sites generated by 'invoke' instructions. In this, and in other situations, you may find that you want to treat CallInsts and InvokeInsts the same way, even though their most-specific common base class is Instruction, which includes lots of less closely-related things. For these cases, LLVM provides a handy wrapper class called CallSite. It is essentially a wrapper around an Instruction pointer, with some methods that provide functionality common to CallInsts and InvokeInsts.
This class has "value semantics": it should be passed by value, not by reference, and it should not be dynamically allocated or deallocated using operator new or operator delete. It is efficiently copyable, assignable and constructible, with costs equivalent to those of a bare pointer. If you look at its definition, it has only a single pointer member.
Frequently, we might have an instance of the Value Class and we want to determine which Users use the Value. The list of all Users of a particular Value is called a def-use chain. For example, let's say we have a Function* named F that points to a particular function foo. Finding all of the instructions that use foo is as simple as iterating over the def-use chain of F:
Function* F = ...;

for (Value::use_iterator i = F->use_begin(), e = F->use_end(); i != e; ++i)
  if (Instruction *Inst = dyn_cast<Instruction>(*i)) {
    std::cerr << "F is used in instruction:\n";
    std::cerr << *Inst << "\n";
  }
Alternately, it's common to have an instance of the User Class and need to know what Values are used by it. The list of all Values used by a User is known as a use-def chain. Instances of class Instruction are common Users, so we might want to iterate over all of the values that a particular instruction uses (that is, the operands of the particular Instruction):
Instruction* pi = ...;

for (User::op_iterator i = pi->op_begin(), e = pi->op_end(); i != e; ++i) {
  Value* v = *i;
  // ...
}
There are some primitive transformation operations present in the LLVM infrastructure that are worth knowing about. When performing transformations, it's fairly common to manipulate the contents of basic blocks. This section describes some of the common methods for doing so and gives example code.
Instantiating Instructions
Creation of Instructions is straight-forward: simply call the constructor for the kind of instruction to instantiate and provide the necessary parameters. For example, an AllocaInst only requires a (const-ptr-to) Type. Thus:
AllocaInst* ai = new AllocaInst(Type::IntTy);
will create an AllocaInst instance that represents the allocation of one integer in the current stack frame, at runtime. Each Instruction subclass is likely to have varying default parameters which change the semantics of the instruction, so refer to the doxygen documentation for the subclass of Instruction that you're interested in instantiating.
Naming values
It is very useful to name the values of instructions when you're able to, as this facilitates the debugging of your transformations. If you end up looking at generated LLVM machine code, you definitely want to have logical names associated with the results of instructions! By supplying a value for the Name (default) parameter of the Instruction constructor, you associate a logical name with the result of the instruction's execution at runtime. For example, say that I'm writing a transformation that dynamically allocates space for an integer on the stack, and that integer is going to be used as some kind of index by some other code. To accomplish this, I place an AllocaInst at the first point in the first BasicBlock of some Function, and I'm intending to use it within the same Function. I might do:
AllocaInst* pa = new AllocaInst(Type::IntTy, 0, "indexLoc");
where indexLoc is now the logical name of the instruction's execution value, which is a pointer to an integer on the runtime stack.
Inserting instructions
There are essentially two ways to insert an Instruction into an existing sequence of instructions that form a BasicBlock:
Given a BasicBlock* pb, an Instruction* pi within that BasicBlock, and a newly-created instruction we wish to insert before *pi, we do the following:
BasicBlock *pb = ...;
Instruction *pi = ...;
Instruction *newInst = new Instruction(...);

pb->getInstList().insert(pi, newInst); // Inserts newInst before pi in pb
Appending to the end of a BasicBlock is so common that the Instruction class and Instruction-derived classes provide constructors which take a pointer to a BasicBlock to be appended to. For example code that looked like:
BasicBlock *pb = ...;
Instruction *newInst = new Instruction(...);

pb->getInstList().push_back(newInst); // Appends newInst to pb
becomes:
BasicBlock *pb = ...;
Instruction *newInst = new Instruction(..., pb);
which is much cleaner, especially if you are creating long instruction streams.
Instruction instances that are already in BasicBlocks are implicitly associated with an existing instruction list: the instruction list of the enclosing basic block. Thus, we could have accomplished the same thing as the above code without being given a BasicBlock by doing:
Instruction *pi = ...;
Instruction *newInst = new Instruction(...);

pi->getParent()->getInstList().insert(pi, newInst);
In fact, this sequence of steps occurs so frequently that the Instruction class and Instruction-derived classes provide constructors which take (as a default parameter) a pointer to an Instruction which the newly-created Instruction should precede. That is, Instruction constructors are capable of inserting the newly-created instance into the BasicBlock of a provided instruction, immediately before that instruction. Using an Instruction constructor with an insertBefore (default) parameter, the above code becomes:
Instruction* pi = ...;
Instruction* newInst = new Instruction(..., pi);
which is much cleaner, especially if you're creating a lot of instructions and adding them to BasicBlocks.
Deleting an instruction from an existing sequence of instructions that form a BasicBlock is very straight-forward. First, you must have a pointer to the instruction that you wish to delete. Second, you need to obtain the pointer to that instruction's basic block. You use the pointer to the basic block to get its list of instructions and then use the erase function to remove your instruction. For example:
Instruction *I = ...;
BasicBlock *BB = I->getParent();

BB->getInstList().erase(I);
Replacing individual instructions
Including "llvm/Transforms/Utils/BasicBlockUtils.h" permits use of two very useful replace functions: ReplaceInstWithValue and ReplaceInstWithInst.
This function replaces all uses (within a basic block) of a given instruction with a value, and then removes the original instruction. The following example illustrates the replacement of the result of a particular AllocaInst that allocates memory for a single integer with a null pointer to an integer.
AllocaInst* instToReplace = ...;
BasicBlock::iterator ii(instToReplace);

ReplaceInstWithValue(instToReplace->getParent()->getInstList(), ii,
                     Constant::getNullValue(PointerType::get(Type::IntTy)));
This function replaces a particular instruction with another instruction. The following example illustrates the replacement of one AllocaInst with another.
AllocaInst* instToReplace = ...;
BasicBlock::iterator ii(instToReplace);

ReplaceInstWithInst(instToReplace->getParent()->getInstList(), ii,
                    new AllocaInst(Type::IntTy, 0, "ptrToReplacedInt"));
Replacing multiple uses of Users and Values
You can use Value::replaceAllUsesWith and User::replaceUsesOfWith to change more than one use at a time. See the doxygen documentation for the Value Class and User Class, respectively, for more information.
This section describes some of the advanced or obscure APIs that most clients do not need to be aware of. These APIs tend to manage the inner workings of the LLVM system, and only need to be accessed in unusual circumstances.
The LLVM type system has a very simple goal: allow clients to compare types for structural equality with a simple pointer comparison (aka a shallow compare). This goal makes clients much simpler and faster, and is used throughout the LLVM system.
Unfortunately, achieving this goal is not a simple matter. In particular, recursive types and late resolution of opaque types make the situation very difficult to handle. Fortunately, for the most part, our implementation allows most clients to remain completely unaware of the nasty internal details. The primary case where clients are exposed to the inner workings of this system is when building a recursive type. In addition to this case, the LLVM bytecode reader, assembly parser, and linker also have to be aware of the inner workings of this system.
For our purposes below, we need three concepts. First, an "Opaque Type" is exactly as defined in the language reference. Second, an "Abstract Type" is any type which includes an opaque type as part of its type graph (for example "{ opaque, int }"). Third, a concrete type is a type that is not an abstract type (e.g. "{ int, float }").
Because the most common question is "how do I build a recursive type with LLVM?", we answer it now and explain it as we go. Here we include enough code to cause the following to be emitted to an output .ll file:
%mylist = type { %mylist*, int }
To build this, use the following LLVM APIs:
// Create the initial outer struct
PATypeHolder StructTy = OpaqueType::get();
std::vector<const Type*> Elts;
Elts.push_back(PointerType::get(StructTy));
Elts.push_back(Type::IntTy);
StructType *NewSTy = StructType::get(Elts);

// At this point, NewSTy = "{ opaque*, int }". Tell VMCore that
// the struct and the opaque type are actually the same.
cast<OpaqueType>(StructTy.get())->refineAbstractTypeTo(NewSTy);

// NewSTy is potentially invalidated, but StructTy (a PATypeHolder) is
// kept up-to-date
NewSTy = cast<StructType>(StructTy.get());

// Add a name for the type to the module symbol table (optional)
MyModule->addTypeName("mylist", NewSTy);
This code shows the basic approach used to build recursive types: build a non-recursive type using 'opaque', then use type unification to close the cycle. The type unification step is performed by the refineAbstractTypeTo method, which is described next. After that, we describe the PATypeHolder class.
The refineAbstractTypeTo method starts the type unification process. While this method is actually a member of the DerivedType class, it is most often used on OpaqueType instances. Type unification is actually a recursive process. After unification, types can become structurally isomorphic to existing types, and all duplicates are deleted (to preserve pointer equality).
In the example above, the OpaqueType object is definitely deleted. Additionally, if there is an "{ \2*, int}" type already created in the system, the pointer and struct type created are also deleted. Obviously whenever a type is deleted, any "Type*" pointers in the program are invalidated. As such, it is safest to avoid having any "Type*" pointers to abstract types live across a call to refineAbstractTypeTo (note that non-abstract types can never move or be deleted). To deal with this, the PATypeHolder class is used to maintain a stable reference to a possibly refined type, and the AbstractTypeUser class is used to update more complex datastructures.
PATypeHolder is a form of a "smart pointer" for Type objects. When VMCore happily goes about nuking types that become isomorphic to existing types, it automatically updates all PATypeHolder objects to point to the new type. In the example above, this allows the code to maintain a pointer to the resultant resolved recursive type, even though the Type*'s are potentially invalidated.
PATypeHolder is an extremely light-weight object that uses a lazy union-find implementation to update pointers. For example the pointer from a Value to its Type is maintained by PATypeHolder objects.
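To make the lazy union-find idea concrete, here is a small self-contained sketch. This is not LLVM source and all names (TypeForwardTable, makeSlot, refineTo, resolve) are hypothetical; it only illustrates the forwarding-with-path-compression scheme that a PATypeHolder-style smart pointer can use to chase a stale pointer to its final resolved representative:

```cpp
#include <vector>

// Toy union-find with path compression. Forward[i] == i means slot i is a
// live ("resolved") type; otherwise slot i was merged into slot Forward[i].
struct TypeForwardTable {
  std::vector<unsigned> Forward;

  // Create a new slot that initially represents itself.
  unsigned makeSlot() {
    Forward.push_back(Forward.size());
    return Forward.size() - 1;
  }

  // Merge slot From into slot To (e.g. an opaque type resolved to a struct).
  void refineTo(unsigned From, unsigned To) { Forward[From] = To; }

  // What a holder-style smart pointer does on dereference: chase the
  // forwarding chain, compressing the path so later lookups are cheap.
  unsigned resolve(unsigned Slot) {
    if (Forward[Slot] == Slot)
      return Slot;
    Forward[Slot] = resolve(Forward[Slot]); // path compression
    return Forward[Slot];
  }
};
```

After two chained refinements, resolving the oldest slot both returns the final representative and rewrites the intermediate links to point at it directly.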
Some data structures need to perform more complex updates when types get resolved. The SymbolTable class, for example, needs to move and potentially merge type planes in its representation when a pointer changes.
To support this, a class can derive from the AbstractTypeUser class. This class allows it to get callbacks when certain types are resolved. To register to get callbacks for a particular type, the DerivedType::{add/remove}AbstractTypeUser methods can be called on a type. Note that these methods only work for abstract types. Concrete types (those that do not include an opaque object somewhere) can never be refined.
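As a hedged sketch (not LLVM source, and not tied to any particular release), a container keyed by possibly-abstract types might register for refinement callbacks like this. The refineAbstractType signature follows this manual's description of the callback; depending on the LLVM version, AbstractTypeUser may declare additional pure virtual methods that a real subclass must also implement:

```cpp
#include <set>

// Hypothetical data structure that keeps a set of Type*s valid across
// type resolution by listening for AbstractTypeUser callbacks.
struct MyTypeSet : public AbstractTypeUser {
  std::set<const Type*> Types;

  void insert(const Type *Ty) {
    Types.insert(Ty);
    if (Ty->isAbstract())                      // concrete types never move
      cast<DerivedType>(Ty)->addAbstractTypeUser(this);
  }

  // Called by VMCore when OldTy is resolved to NewTy: re-key our entry and
  // move the registration if the result is still abstract.
  virtual void refineAbstractType(const DerivedType *OldTy,
                                  const Type *NewTy) {
    Types.erase(OldTy);
    Types.insert(NewTy);
    OldTy->removeAbstractTypeUser(this);
    if (NewTy->isAbstract())
      cast<DerivedType>(NewTy)->addAbstractTypeUser(this);
  }
};
```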
This class provides a symbol table that the Function and Module classes use for naming definitions. The symbol table can provide a name for any Value or Type. SymbolTable is an abstract data type. It hides the data it contains and provides access to it through a controlled interface.
Note that the symbol table class should not be directly accessed by most clients. It should only be used when iteration over the symbol table names themselves is required, which is very special purpose. Note that not all LLVM Values have names, and those without names (i.e. they have an empty name) do not exist in the symbol table.
To use the SymbolTable well, you need to understand the structure of the information it holds. The class contains two std::map objects. The first, pmap, is a map of Type* to maps of name (std::string) to Value*. The second, tmap, is a map of names to Type*. Thus, Values are stored in two-dimensions and accessed by Type and name. Types, however, are stored in a single dimension and accessed only by name.
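The two-level layout described above can be illustrated with a small self-contained sketch. This is not LLVM code; the void* typedefs are hypothetical stand-ins for Type* and Value*, and it only shows the shape of a lookup: first find the plane for the type, then the name within that plane:

```cpp
#include <map>
#include <string>

// Stand-ins for the SymbolTable's internal maps (illustrative only).
typedef std::map<std::string, void*> ValuePlane;    // name -> Value*
typedef std::map<const void*, ValuePlane> PlaneMap; // Type* -> plane (pmap)
typedef std::map<std::string, const void*> TypeMap; // name -> Type*  (tmap)

// Look up a value by (type, name), the two-dimensional access the
// SymbolTable provides for Values.
inline void *lookupValue(const PlaneMap &PM, const void *Ty,
                         const std::string &Name) {
  PlaneMap::const_iterator PI = PM.find(Ty);
  if (PI == PM.end())
    return 0;                                 // no plane for this type
  ValuePlane::const_iterator VI = PI->second.find(Name);
  return VI == PI->second.end() ? 0 : VI->second;
}
```

Types, by contrast, would live in the single-dimensional TypeMap and be found by name alone.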
The interface of this class provides three basic types of operations:
The following functions provide three types of iterators; for each kind you can obtain the beginning or end of the sequence, in both const and non-const forms. It is important to keep track of the different kinds of iterators. There are three idioms worth pointing out:
Planes of name/Value maps (iterator PI):

for (SymbolTable::plane_const_iterator PI = ST.plane_begin(),
     PE = ST.plane_end(); PI != PE; ++PI) {
  PI->first   // This is the Type* of the plane
  PI->second  // This is the SymbolTable::ValueMap of name/Value pairs
}

All name/Type pairs (iterator TI):

for (SymbolTable::type_const_iterator TI = ST.type_begin(),
     TE = ST.type_end(); TI != TE; ++TI) {
  TI->first   // This is the name of the type
  TI->second  // This is the Type* value associated with the name
}

name/Value pairs in a plane (iterator VI):

for (SymbolTable::value_const_iterator VI = ST.value_begin(SomeType),
     VE = ST.value_end(SomeType); VI != VE; ++VI) {
  VI->first   // This is the name of the Value
  VI->second  // This is the Value* value associated with the name
}
Using the recommended iterator names and idioms will help you avoid making mistakes. Of particular note, make sure that whenever you use value_begin(SomeType) that you always compare the resulting iterator with value_end(SomeType) not value_end(SomeOtherType) or else you will loop infinitely.
The Core LLVM classes are the primary means of representing the program being inspected or transformed. The core LLVM classes are defined in header files in the include/llvm/ directory, and implemented in the lib/VMCore directory.
#include "llvm/Value.h"
doxygen info: Value Class
The Value class is the most important class in the LLVM source base. It represents a typed value that may be used (among other things) as an operand to an instruction. There are many different types of Values, such as Constants and Arguments. Even Instructions and Functions are Values.
A particular Value may be used many times in the LLVM representation for a program. For example, an incoming argument to a function (represented with an instance of the Argument class) is "used" by every instruction in the function that references the argument. To keep track of this relationship, the Value class keeps a list of all of the Users that are using it (the User class is a base class for all nodes in the LLVM graph that can refer to Values). This use list is how LLVM represents def-use information in the program, and is accessible through the use_* methods, shown below.
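As a sketch of walking the use list through the use_* methods described above (written against the legacy API this manual documents; the exact iterator typedefs may vary between releases):

```cpp
// Count how many of F's users are instructions, by traversing the
// def-use chain that the Value class maintains.
unsigned countInstructionUsers(Function *F) {
  unsigned N = 0;
  for (Value::use_iterator I = F->use_begin(), E = F->use_end(); I != E; ++I)
    if (isa<Instruction>(*I))   // *I is a User that refers to F
      ++N;
  return N;
}
```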
Because LLVM is a typed representation, every LLVM Value is typed, and this Type is available through the getType() method. In addition, all LLVM values can be named. The "name" of the Value is a symbolic string printed in the LLVM code:
%foo = add int 1, 2
The name of this instruction is "foo". NOTE that the name of any value may be missing (an empty string), so names should ONLY be used for debugging (making the source code easier to read, debugging printouts), they should not be used to keep track of values or map between them. For this purpose, use a std::map of pointers to the Value itself instead.
One important aspect of LLVM is that there is no distinction between an SSA variable and the operation that produces it. Because of this, any reference to the value produced by an instruction (or the value available as an incoming argument, for example) is represented as a direct pointer to the instance of the class that represents this value. Although this may take some getting used to, it simplifies the representation and makes it easier to manipulate.
These methods are the interface to access the def-use information in LLVM. As with all other iterators in LLVM, the naming conventions follow the conventions defined by the STL.
This method returns the Type of the Value.
This family of methods is used to access and assign a name to a Value; be aware of the precaution above.
This method traverses the use list of a Value changing all Users of the current value to refer to "V" instead. For example, if you detect that an instruction always produces a constant value (for example through constant folding), you can replace all uses of the instruction with the constant like this:
Inst->replaceAllUsesWith(ConstVal);
#include "llvm/User.h"
doxygen info: User Class
Superclass: Value
The User class is the common base class of all LLVM nodes that may refer to Values. It exposes a list of "Operands" that are all of the Values that the User is referring to. The User class itself is a subclass of Value.
The operands of a User point directly to the LLVM Value that it refers to. Because LLVM uses Static Single Assignment (SSA) form, there can only be one definition referred to, allowing this direct connection. This connection provides the use-def information in LLVM.
The User class exposes the operand list in two ways: through an index access interface and through an iterator based interface.
These two methods expose the operands of the User in a convenient form for direct access.
Together, these methods make up the iterator based interface to the operands of a User.
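The two interfaces can be sketched side by side (legacy API, not guaranteed to match any particular release; visitOperands is a hypothetical helper):

```cpp
// Two equivalent ways to visit the operands of a User.
void visitOperands(User *U) {
  // 1. Index-based access via getNumOperands()/getOperand():
  for (unsigned i = 0, e = U->getNumOperands(); i != e; ++i) {
    Value *Op = U->getOperand(i);
    // ... inspect Op ...
  }

  // 2. Iterator-based access via op_begin()/op_end():
  for (User::op_iterator OI = U->op_begin(), OE = U->op_end(); OI != OE; ++OI) {
    Value *Op = *OI;
    // ... inspect Op ...
  }
}
```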
#include "llvm/Instruction.h"
doxygen info: Instruction Class
Superclasses: User, Value
The Instruction class is the common base class for all LLVM instructions. It provides only a few methods, but is a very commonly used class. The primary data tracked by the Instruction class itself is the opcode (instruction type) and the parent BasicBlock the Instruction is embedded into. To represent a specific type of instruction, one of many subclasses of Instruction are used.
Because the Instruction class subclasses the User class, its operands can be accessed in the same way as for other Users (with the getOperand()/getNumOperands() and op_begin()/op_end() methods).
An important file for the Instruction class is the llvm/Instruction.def file. This file contains some meta-data about the various different types of instructions in LLVM. It describes the enum values that are used as opcodes (for example Instruction::Add and Instruction::SetLE), as well as the concrete sub-classes of Instruction that implement the instruction (for example BinaryOperator and SetCondInst). Unfortunately, the use of macros in this file confuses doxygen, so these enum values don't show up correctly in the doxygen output.
Returns the BasicBlock that this Instruction is embedded into.
Returns true if the instruction writes to memory, i.e. it is a call, free, invoke, or store.
Returns the opcode for the Instruction.
Returns another instance of the specified instruction, identical in all ways to the original except that the instruction has no parent (i.e. it is not embedded into a BasicBlock) and it has no name.
#include "llvm/BasicBlock.h"
doxygen info: BasicBlock Class
Superclass: Value
This class represents a single entry multiple exit section of the code, commonly known as a basic block by the compiler community. The BasicBlock class maintains a list of Instructions, which form the body of the block. Matching the language definition, the last element of this list of instructions is always a terminator instruction (a subclass of the TerminatorInst class).
In addition to tracking the list of instructions that make up the block, the BasicBlock class also keeps track of the Function that it is embedded into.
Note that BasicBlocks themselves are Values, because they are referenced by instructions like branches and can go in the switch tables. BasicBlocks have type label.
The BasicBlock constructor is used to create new basic blocks for insertion into a function. The constructor optionally takes a name for the new block, and a Function to insert it into. If the Parent parameter is specified, the new BasicBlock is automatically inserted at the end of the specified Function; if not specified, the BasicBlock must be manually inserted into the Function.
These methods and typedefs are forwarding functions that have the same semantics as the standard library methods of the same names. These methods expose the underlying instruction list of a basic block in a way that is easy to manipulate. To get the full complement of container operations (including operations to update the list), you must use the getInstList() method.
This method is used to get access to the underlying container that actually holds the Instructions. This method must be used when there isn't a forwarding function in the BasicBlock class for the operation that you would like to perform. Because there are no forwarding functions for "updating" operations, you need to use this if you want to update the contents of a BasicBlock.
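The split between forwarding methods and getInstList() can be sketched like this (legacy API; appendAndCount is a hypothetical helper, and inserting after an existing terminator would of course be invalid in real code):

```cpp
// Read access through the forwarding interface, update access through
// the underlying container.
void appendAndCount(BasicBlock *BB, Instruction *NewInst) {
  // Updating operations must go through the underlying instruction list:
  BB->getInstList().push_back(NewInst);

  // Read-only traversal can use the forwarded begin()/end():
  for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I)
    /* *I is an Instruction, in program order */;
}
```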
Returns a pointer to the Function the block is embedded into, or a null pointer if it is homeless.
Returns a pointer to the terminator instruction that appears at the end of the BasicBlock. If there is no terminator instruction, or if the last instruction in the block is not a terminator, then a null pointer is returned.
#include "llvm/GlobalValue.h"
doxygen info: GlobalValue Class
Superclasses: Constant, User, Value
Global values (GlobalVariables or Functions) are the only LLVM values that are visible in the bodies of all Functions. Because they are visible at global scope, they are also subject to linking with other globals defined in different translation units. To control the linking process, GlobalValues know their linkage rules. Specifically, GlobalValues know whether they have internal or external linkage, as defined by the LinkageTypes enumeration.
If a GlobalValue has internal linkage (equivalent to being static in C), it is not visible to code outside the current translation unit, and does not participate in linking. If it has external linkage, it is visible to external code, and does participate in linking. In addition to linkage information, GlobalValues keep track of which Module they are currently part of.
Because GlobalValues are memory objects, they are always referred to by their address. As such, the Type of a global is always a pointer to its contents. It is important to remember this when using the GetElementPtrInst instruction because this pointer must be dereferenced first. For example, if you have a GlobalVariable (a subclass of GlobalValue) that is an array of 24 ints, type [24 x int], then the GlobalVariable is a pointer to that array. Although the address of the first element of this array and the value of the GlobalVariable are the same, they have different types: the GlobalVariable's value has type [24 x int]*, while the address of the first element has type int*. Because of this, accessing a global value requires you to dereference the pointer with GetElementPtrInst first; then its elements can be accessed. This is explained in the LLVM Language Reference Manual.
#include "llvm/Function.h"
doxygen info: Function Class
Superclasses: GlobalValue, Constant, User, Value
The Function class represents a single procedure in LLVM. It is actually one of the more complex classes in the LLVM hierarchy because it must keep track of a large amount of data. The Function class keeps track of a list of BasicBlocks, a list of formal Arguments, and a SymbolTable.
The list of BasicBlocks is the most commonly used part of Function objects. The list imposes an implicit ordering of the blocks in the function, which indicates how the code will be laid out by the backend. Additionally, the first BasicBlock is the implicit entry node for the Function. It is not legal in LLVM to explicitly branch to this initial block. There are no implicit exit nodes, and in fact there may be multiple exit nodes from a single Function. If the BasicBlock list is empty, this indicates that the Function is actually a function declaration: the actual body of the function hasn't been linked in yet.
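Both facts above, the layout-order block list and the empty-list-means-declaration convention, can be sketched as follows (legacy API; the helper names are hypothetical):

```cpp
// An empty block list means F is only a declaration.
bool isDeclaration(Function *F) { return F->begin() == F->end(); }

// Walk every BasicBlock of a Function in layout order; the first
// block visited is the implicit entry node.
unsigned countBlocks(Function *F) {
  unsigned N = 0;
  for (Function::iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
    ++N;
  return N;
}
```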
In addition to a list of BasicBlocks, the Function class also keeps track of the list of formal Arguments that the function receives. This container manages the lifetime of the Argument nodes, just like the BasicBlock list does for the BasicBlocks.
The SymbolTable is a very rarely used LLVM feature that is only used when you have to look up a value by name. Aside from that, the SymbolTable is used internally to make sure that there are not conflicts between the names of Instructions, BasicBlocks, or Arguments in the function body.
Note that Function is a GlobalValue and therefore also a Constant. The value of the function is its address (after linking) which is guaranteed to be constant.
Constructor used when you need to create new Functions to add to the program. The constructor must specify the type of the function to create and what type of linkage the function should have. The FunctionType argument specifies the formal arguments and return value for the function. The same FunctionType value can be used to create multiple functions. The Parent argument specifies the Module in which the function is defined. If this argument is provided, the function will automatically be inserted into that module's list of functions.
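For example, creating a declaration for a hypothetical function "int foo()" might look like this (legacy API; the exact constructor signature may differ between releases, and "foo" and MyModule are assumed names):

```cpp
// Build the type "int ()" and create an external function of that type,
// automatically inserted into MyModule.
std::vector<const Type*> NoArgs;
FunctionType *FT = FunctionType::get(Type::IntTy, NoArgs, /*isVarArg=*/false);
Function *F = new Function(FT, GlobalValue::ExternalLinkage, "foo", MyModule);
```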
Return whether or not the Function has a body defined. If the function is "external", it does not have a body, and thus must be resolved by linking with a function defined in a different translation unit.
These are forwarding methods that make it easy to access the contents of a Function object's BasicBlock list.
Returns the list of BasicBlocks. This is necessary to use when you need to update the list or perform a complex action that doesn't have a forwarding method.
These are forwarding methods that make it easy to access the contents of a Function object's Argument list.
Returns the list of Arguments. This is necessary to use when you need to update the list or perform a complex action that doesn't have a forwarding method.
Returns the entry BasicBlock for the function. Because the entry block for the function is always the first block, this returns the first block of the Function.
This traverses the Type of the Function and returns the return type of the function, or the FunctionType of the actual function.
Return a pointer to the SymbolTable for this Function.
#include "llvm/GlobalVariable.h"
doxygen info: GlobalVariable Class
Superclasses: GlobalValue, Constant, User, Value
Global variables are represented with the (surprise, surprise) GlobalVariable class. Like functions, GlobalVariables are also subclasses of GlobalValue, and as such are always referenced by their address (global values must live in memory, so their "name" refers to their constant address). See GlobalValue for more on this. Global variables may have an initial value (which must be a Constant), and if they have an initializer, they may be marked as "constant" themselves (indicating that their contents never change at runtime).
Create a new global variable of the specified type. If isConstant is true then the global variable will be marked as unchanging for the program. The Linkage parameter specifies the type of linkage (internal, external, weak, linkonce, appending) for the variable. If the linkage is InternalLinkage, WeakLinkage, or LinkOnceLinkage, then the resultant global variable will have internal linkage. AppendingLinkage concatenates together all instances (in different translation units) of the variable into a single variable but is only applicable to arrays. See the LLVM Language Reference for further details on linkage types. Optionally an initializer, a name, and the module to put the variable into may be specified for the global variable as well.
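As a sketch, creating a hypothetical internal global "int 0" might look like this (legacy API; the parameter order follows this section's description and may differ between releases, and "mygvar" and MyModule are assumed names):

```cpp
// A non-constant, internal-linkage int global, zero-initialized and
// inserted into MyModule.
GlobalVariable *GV =
    new GlobalVariable(Type::IntTy, /*isConstant=*/false,
                       GlobalValue::InternalLinkage,
                       Constant::getNullValue(Type::IntTy), // initializer
                       "mygvar", MyModule);
```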
Returns true if this is a global variable that is known not to be modified at runtime.
Returns true if this GlobalVariable has an initializer.
Returns the initial value for a GlobalVariable. It is not legal to call this method if there is no initializer.
#include "llvm/Module.h"
doxygen info: Module Class
The Module class represents the top level structure present in LLVM programs. An LLVM module is effectively either a translation unit of the original program or a combination of several translation units merged by the linker. The Module class keeps track of a list of Functions, a list of GlobalVariables, and a SymbolTable. Additionally, it contains a few helpful member functions that try to make common operations easy.
Constructing a Module is easy. You can optionally provide a name for it (probably based on the name of the translation unit).
These are forwarding methods that make it easy to access the contents of a Module object's Function list.
Returns the list of Functions. This is necessary to use when you need to update the list or perform a complex action that doesn't have a forwarding method.
These are forwarding methods that make it easy to access the contents of a Module object's GlobalVariable list.
Returns the list of GlobalVariables. This is necessary to use when you need to update the list or perform a complex action that doesn't have a forwarding method.
Return a reference to the SymbolTable for this Module.
Look up the specified function in the Module SymbolTable. If it does not exist, return null.
Look up the specified function in the Module SymbolTable. If it does not exist, add an external declaration for the function and return it.
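For example, obtaining a handle on the C library's puts, whether or not the module already declares it, might look like this (legacy API sketch; the type spellings follow the old LLVM type system and MyModule is an assumed name):

```cpp
// Build the type "int (sbyte*)" and get-or-create a declaration for it.
std::vector<const Type*> Args(1, PointerType::get(Type::SByteTy));
FunctionType *PutsTy = FunctionType::get(Type::IntTy, Args, false);
Function *Puts = MyModule->getOrInsertFunction("puts", PutsTy);
```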
If there is at least one entry in the SymbolTable for the specified Type, return it. Otherwise return the empty string.
Insert an entry in the SymbolTable mapping Name to Ty. If there is already an entry for this name, true is returned and the SymbolTable is not modified.
Constant represents a base class for different types of constants. It is subclassed by ConstantBool, ConstantInt, ConstantArray, etc., for representing the various types of Constants.
Type, as noted earlier, is also a subclass of the Value class. Any primitive type (like int, short, etc.) in LLVM is an instance of the Type class. All other types are instances of subclasses of Type, such as FunctionType, ArrayType, etc. DerivedType is the interface for all such derived types, including FunctionType, ArrayType, PointerType, and StructType. Types can have names. They can be recursive (StructType). There exists exactly one instance of any type structure at a time. This allows using pointer equality of Type*s for comparing types.
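The uniqueness property can be sketched like this (legacy API; not tied to a specific release): because structurally identical types are the same object, plain pointer comparison is a valid type-equality test.

```cpp
// Two requests for the same type structure return the same object.
const Type *A1 = ArrayType::get(Type::IntTy, 4);
const Type *A2 = ArrayType::get(Type::IntTy, 4);
assert(A1 == A2 && "identical structure => identical pointer");
assert(A1 != ArrayType::get(Type::IntTy, 8) && "different sizes differ");
```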
This subclass of Value defines the interface for incoming formal arguments to a function. A Function maintains a list of its formal arguments. An argument has a pointer to the parent Function.