diff --git a/docs/tutorial/OCamlLangImpl3.html b/docs/tutorial/OCamlLangImpl3.html new file mode 100644 index 00000000000..0edc726480c --- /dev/null +++ b/docs/tutorial/OCamlLangImpl3.html @@ -0,0 +1,1090 @@ + + + + + Kaleidoscope: Implementing code generation to LLVM IR + + + + + + + + +
Kaleidoscope: Code generation to LLVM IR
+ + + +
+

+ Written by Chris Lattner + and Erick Tryzelaar +

+
+ + +
Chapter 3 Introduction
+ + +
+ +

Welcome to Chapter 3 of the "Implementing a language +with LLVM" tutorial. This chapter shows you how to transform the Abstract Syntax Tree, built in Chapter 2, into +LLVM IR. This will teach you a little bit about how LLVM does things, as well +as demonstrate how easy it is to use. It's much more work to build a lexer and +parser than it is to generate LLVM IR code. :) +

+ +

Please note: the code in this chapter and later requires LLVM 2.3 or +LLVM SVN to work. LLVM 2.2 and before will not work with it.

+ +
+ + +
Code Generation Setup
+ + +
+ +

+In order to generate LLVM IR, we want some simple setup to get started. First +we define virtual code generation (codegen) methods in each AST class:

+ +
+
+let rec codegen_expr = function
+  | Ast.Number n -> ...
+  | Ast.Variable name -> ...
+
+
+ +

The Codegen.codegen_expr function says to emit IR for that AST node +along with all the things it depends on, and they all return an LLVM Value +object. "Value" is the class used to represent a "Static Single +Assignment (SSA) register" or "SSA value" in LLVM. The most distinct aspect +of SSA values is that their value is computed as the related instruction +executes, and it does not get a new value until (and if) the instruction +re-executes. In other words, there is no way to "change" an SSA value. For +more information, please read up on Static Single +Assignment - the concepts are really quite natural once you grok them.

+ +

The +second thing we want is an "Error" exception like we used for the parser, which +will be used to report errors found during code generation (for example, use of +an undeclared parameter):

+ +
+
+exception Error of string
+
+let the_module = create_module "my cool jit"
+let builder = builder ()
+let named_values:(string, llvalue) Hashtbl.t = Hashtbl.create 10
+
+
+ +

The static variables will be used during code generation. +Codegen.the_module is the LLVM construct that contains all of the +functions and global variables in a chunk of code. In many ways, it is the +top-level structure that the LLVM IR uses to contain code.

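+ +
For example, at any point during code generation you can print the textual LLVM IR for everything that has been emitted into the module so far; the driver in toy.ml does exactly this when you quit. A one-line sketch:

+ +
+
+(* Dump the whole module's IR (to stderr) for inspection. *)
+dump_module the_module
+
+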
+ +

The Codegen.builder object is a helper object that makes it easy to +generate LLVM instructions. Instances of the LLVMBuilder +class keep track of the current place to insert instructions and has methods to +create new instructions.

+ +

The Codegen.named_values map keeps track of which values are defined +in the current scope and what their LLVM representation is. (In other words, it +is a symbol table for the code). In this form of Kaleidoscope, the only things +that can be referenced are function parameters. As such, function parameters +will be in this map when generating code for their function body.

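+ +
Here is a small sketch of how this table is used later in the chapter (arg_x is a hypothetical llvalue standing in for a function argument named "x"; the real code doing this appears in codegen_proto and codegen_expr below):

+ +
+
+(* codegen_proto registers each argument (arg_x is hypothetical here)... *)
+Hashtbl.add named_values "x" arg_x;
+(* ...and codegen_expr looks it up when it sees Ast.Variable "x": *)
+let x_val = Hashtbl.find named_values "x" in
+...
+
+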
+ +

+With these basics in place, we can start talking about how to generate code for +each expression. Note that this assumes that the Codegen.builder has +been set up to generate code into something. For now, we'll assume +that this has already been done, and we'll just use it to emit code.

+ +
+ + +
Expression Code Generation
+ + +
+ +

Generating LLVM code for expression nodes is very straightforward: less +than 30 lines of commented code for all four of our expression nodes. First +we'll do numeric literals:

+ +
+
+  | Ast.Number n -> const_float double_type n
+
+
+ +

In the LLVM IR, numeric constants are represented with the +ConstantFP class, which holds the numeric value in an APFloat +internally (APFloat has the capability of holding floating point +constants of Arbitrary Precision). This code basically just +creates and returns a ConstantFP. Note that in the LLVM IR, +constants are all uniqued together and shared. For this reason, the API +uses "the foo::get(..)" idiom instead of "new foo(..)" or "foo::create(..)".

+ +
+
+  | Ast.Variable name ->
+      (try Hashtbl.find named_values name with
+        | Not_found -> raise (Error "unknown variable name"))
+
+
+ +

References to variables are also quite simple using LLVM. In the simple +version of Kaleidoscope, we assume that the variable has already been emitted +somewhere and its value is available. In practice, the only values that can be +in the Codegen.named_values map are function arguments. This code +simply checks to see that the specified name is in the map (if not, an unknown +variable is being referenced) and returns the value for it. In future chapters, +we'll add support for loop induction variables +in the symbol table, and for local +variables.

+ +
+
+  | Ast.Binary (op, lhs, rhs) ->
+      let lhs_val = codegen_expr lhs in
+      let rhs_val = codegen_expr rhs in
+      begin
+        match op with
+        | '+' -> build_add lhs_val rhs_val "addtmp" builder
+        | '-' -> build_sub lhs_val rhs_val "subtmp" builder
+        | '*' -> build_mul lhs_val rhs_val "multmp" builder
+        | '<' ->
+            (* Convert bool 0/1 to double 0.0 or 1.0 *)
+            let i = build_fcmp Fcmp.Ult lhs_val rhs_val "cmptmp" builder in
+            build_uitofp i double_type "booltmp" builder
+        | _ -> raise (Error "invalid binary operator")
+      end
+
+
+ +

Binary operators start to get more interesting. The basic idea here is that +we recursively emit code for the left-hand side of the expression, then the +right-hand side, then we compute the result of the binary expression. In this +code, we do a simple switch on the opcode to create the right LLVM instruction. +

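+ +
To make the recursion concrete, here is the AST value (using the Ast variants from Chapter 2) that the parser builds for an input like "x + y*2"; codegen_expr emits the multiply while recursing into the right-hand side, and only then builds the add:

+ +
+
+(* "x + y*2" (with '*' binding tighter than '+') parses to: *)
+Ast.Binary ('+',
+            Ast.Variable "x",
+            Ast.Binary ('*', Ast.Variable "y", Ast.Number 2.0))
+
+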
+ +

In the example above, the LLVM builder class is starting to show its value. +LLVMBuilder knows where to insert the newly created instruction; all you have to +do is specify what instruction to create (e.g. with Llvm.build_add), +which operands to use (lhs and rhs here) and optionally +provide a name for the generated instruction.

+ +

One nice thing about LLVM is that the name is just a hint. For instance, if +the code above emits multiple "addtmp" variables, LLVM will automatically +provide each one with an increasing, unique numeric suffix. Local value names +for instructions are purely optional, but they make it much easier to read the +IR dumps.

+ +

LLVM instructions are constrained by +strict rules: for example, the left and right operands of +an add instruction must have the same +type, and the result type of the add must match the operand types. Because +all values in Kaleidoscope are doubles, this makes for very simple code for add, +sub and mul.

+ +

On the other hand, LLVM specifies that the fcmp instruction always returns an 'i1' value +(a one bit integer). The problem with this is that Kaleidoscope wants the value to be a 0.0 or 1.0 value. In order to get these semantics, we combine the fcmp instruction with +a uitofp instruction. This instruction +converts its input integer into a floating point value by treating the input +as an unsigned value. In contrast, if we used the sitofp instruction, the Kaleidoscope '<' +operator would return 0.0 and -1.0, depending on the input value.

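+ +
As an illustration of how little work another comparison takes, a hypothetical '>' operator (not part of the tutorial language) could be handled in the match above in exactly the same way; it would also need a precedence entry in Parser.binop_precedence (e.g. 10, like '<'):

+ +
+
+        | '>' ->
+            (* Convert the i1 result to 0.0 or 1.0, just like '<'. *)
+            let i = build_fcmp Fcmp.Ugt lhs_val rhs_val "cmptmp" builder in
+            build_uitofp i double_type "booltmp" builder
+
+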
+ +
+
+  | Ast.Call (callee, args) ->
+      (* Look up the name in the module table. *)
+      let callee =
+        match lookup_function callee the_module with
+        | Some callee -> callee
+        | None -> raise (Error "unknown function referenced")
+      in
+      let params = params callee in
+
+      (* If argument mismatch error. *)
+      if Array.length params == Array.length args then () else
+        raise (Error "incorrect # arguments passed");
+      let args = Array.map codegen_expr args in
+      build_call callee args "calltmp" builder
+
+
+ +

Code generation for function calls is quite straightforward with LLVM. The +code above initially does a function name lookup in the LLVM Module's symbol +table. Recall that the LLVM Module is the container that holds all of the +functions we are JIT'ing. By giving each function the same name as what the +user specifies, we can use the LLVM symbol table to resolve function names for +us.

+ +

Once we have the function to call, we recursively codegen each argument that +is to be passed in, and create an LLVM call +instruction. Note that LLVM uses the native C calling conventions by +default, allowing these calls to also call into standard library functions like +"sin" and "cos", with no additional effort.

+ +

This wraps up our handling of the four basic expressions that we have so far +in Kaleidoscope. Feel free to go in and add some more. For example, by +browsing the LLVM language reference you'll find +several other interesting instructions that are really easy to plug into our +basic framework.

+ +
+ + +
Function Code Generation
+ + +
+ +

Code generation for prototypes and functions must handle a number of +details, which make their code less beautiful than expression code +generation, but allow us to illustrate some important points. First, let's +talk about code generation for prototypes: they are used both for function +bodies and external function declarations. The code starts with:

+ +
+
+let codegen_proto = function
+  | Ast.Prototype (name, args) ->
+      (* Make the function type: double(double,double) etc. *)
+      let doubles = Array.make (Array.length args) double_type in
+      let ft = function_type double_type doubles in
+      let f =
+        match lookup_function name the_module with
+
+
+ +

This code packs a lot of power into a few lines. Note first that this +function returns a "Function*" instead of a "Value*" (although at the moment +they both are modeled by llvalue in OCaml). Because a "prototype" +really talks about the external interface for a function (not the value computed +by an expression), it makes sense for it to return the LLVM Function it +corresponds to when codegen'd.

+ +

The call to Llvm.function_type creates the Llvm.lltype +that should be used for a given Prototype. Since all function arguments in +Kaleidoscope are of type double, the first line creates an array of "N" LLVM +double types. It then uses the Llvm.function_type function to create a +function type that takes "N" doubles as arguments, returns one double as a +result, and that is not vararg (that uses the function +Llvm.var_arg_function_type). Note that Types in LLVM are uniqued just +like Constants are, so you don't "new" a type, you "get" it.

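+ +
For instance, a two-argument prototype like "foo(a b)" goes through these calls; this is just a sketch of what the code above computes:

+ +
+
+let doubles = Array.make 2 double_type in     (* [| double; double |] *)
+let ft = function_type double_type doubles in (* the type double (double, double) *)
+...
+
+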
+ +

The final line above checks if the function has already been defined in +Codegen.the_module. If not, we will create it.

+ +
+
+        | None -> declare_function name ft the_module
+
+
+ +

This indicates the type and name to use, as well as which module to insert +into. By default we assume a function has +Llvm.Linkage.ExternalLinkage. "external +linkage" means that the function may be defined outside the current module +and/or that it is callable by functions outside the module. The "name" +passed in is the name the user specified: this name is registered in +Codegen.the_module's symbol table, which is used by the function call +code above.

+ +

In Kaleidoscope, I choose to allow redefinitions of functions in two cases: +first, we want to allow 'extern'ing a function more than once, as long as the +prototypes for the externs match (since all arguments have the same type, we +just have to check that the number of arguments match). Second, we want to +allow 'extern'ing a function and then defining a body for it. This is useful +when defining mutually recursive functions.

+ +
+
+        (* If 'f' conflicted, there was already something named 'name'. If it
+         * has a body, don't allow redefinition or reextern. *)
+        | Some f ->
+            (* If 'f' already has a body, reject this. *)
+            if Array.length (basic_blocks f) == 0 then () else
+              raise (Error "redefinition of function");
+
+            (* If 'f' took a different number of arguments, reject. *)
+            if Array.length (params f) == Array.length args then () else
+              raise (Error "redefinition of function with different # args");
+            f
+      in
+
+
+ +

In order to verify the logic above, we first check to see if the pre-existing +function is "empty". In this case, empty means that it has no basic blocks in +it, which means it has no body. If it has no body, it is a forward +declaration and the redefinition is allowed. If it does have a body, we don't +allow it to be redefined or re-extern'd, so the code rejects that case. If the previous reference to a function +was an 'extern', we simply verify that the number of arguments for that +definition and this one match up. If not, we emit an error.

+ +
+
+      (* Set names for all arguments. *)
+      Array.iteri (fun i a ->
+        let n = args.(i) in
+        set_value_name n a;
+        Hashtbl.add named_values n a;
+      ) (params f);
+      f
+
+
+ +

The last bit of code for prototypes loops over all of the arguments in the +function, setting the name of the LLVM Argument objects to match, and registering +the arguments in the Codegen.named_values map for future use by the +Ast.Variable variant. Once this is set up, it returns the Function +object to the caller. Note that we don't check for conflicting +argument names here (e.g. "extern foo(a b a)"). Doing so would be very +straightforward with the mechanics we have already used above.

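+ +
A sketch of what such a check could look like, placed just before the Array.iteri loop above (this is only an illustration, not part of the tutorial code):

+ +
+
+(* Reject prototypes like "extern foo(a b a)". *)
+let seen = Hashtbl.create 8 in
+Array.iter (fun n ->
+  if Hashtbl.mem seen n then
+    raise (Error ("duplicate argument name: " ^ n));
+  Hashtbl.add seen n ()
+) args;
+
+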
+ +
+
+let codegen_func = function
+  | Ast.Function (proto, body) ->
+      Hashtbl.clear named_values;
+      let the_function = codegen_proto proto in
+
+
+ +

Code generation for function definitions starts out simply enough: we just +codegen the prototype (Proto) and verify that it is ok. We then clear out the +Codegen.named_values map to make sure that there isn't anything in it +from the last function we compiled. Code generation of the prototype ensures +that there is an LLVM Function object that is ready to go for us.

+ +
+
+      (* Create a new basic block to start insertion into. *)
+      let bb = append_block "entry" the_function in
+      position_at_end bb builder;
+
+      try
+        let ret_val = codegen_expr body in
+
+
+ +

Now we get to the point where the Codegen.builder is set up. The +first line creates a new +basic block (named +"entry"), which is inserted into the_function. The second line then +tells the builder that new instructions should be inserted into the end of the +new basic block. Basic blocks in LLVM are an important part of functions that +define the Control Flow Graph. +Since we don't have any control flow, our functions will only contain one +block at this point. We'll fix this in Chapter +5 :).

+ +
+
+        let ret_val = codegen_expr body in
+
+        (* Finish off the function. *)
+        let _ = build_ret ret_val builder in
+
+        (* Validate the generated code, checking for consistency. *)
+        Llvm_analysis.assert_valid_function the_function;
+
+        the_function
+
+
+ +

Once the insertion point is set up, we call the Codegen.codegen_expr +function for the root expression of the function. If no error happens, this emits +code to compute the expression into the entry block and returns the value that +was computed. Assuming no error, we then create an LLVM ret instruction, which completes the function. +Once the function is built, we call +Llvm_analysis.assert_valid_function, which is provided by LLVM. This +function does a variety of consistency checks on the generated code, to +determine if our compiler is doing everything right. Using this is important: +it can catch a lot of bugs. Once the function is finished and validated, we +return it.

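+ +
If you would rather report the problem than abort, here is a sketch of an alternative, assuming your copy of the bindings also exposes Llvm_analysis.verify_function (which returns a bool instead of asserting):

+ +
+
+if not (Llvm_analysis.verify_function the_function) then begin
+  dump_value the_function;   (* show the offending IR *)
+  raise (Error "invalid function generated")
+end;
+
+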
+ +
+
+      with e ->
+        delete_function the_function;
+        raise e
+
+
+ +

The only piece left here is handling of the error case. For simplicity, we +handle this by merely deleting the function we produced with the +Llvm.delete_function method. This allows the user to redefine a +function that they incorrectly typed in before: if we didn't delete it, it +would live in the symbol table, with a body, preventing future redefinition.

+ +

This code does have a bug, though. Since the Codegen.codegen_proto +can return a previously defined forward declaration, our code can actually delete +a forward declaration. There are a number of ways to fix this bug, see what you +can come up with! Here is a testcase:

+ +
+
+extern foo(a b);     # ok, defines foo.
+def foo(a b) c;      # error, 'c' is invalid.
+def bar() foo(1, 2); # error, unknown function "foo"
+
+
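+ +
One possible fix, sketched here purely as an illustration (the name created_here is hypothetical and not part of the tutorial code), is to check up front whether a declaration already existed, and only delete the function on error if this invocation actually created it:

+ +
+
+(* In codegen_func, before calling codegen_proto: *)
+let Ast.Prototype (name, _) = proto in
+let created_here = lookup_function name the_module = None in
+let the_function = codegen_proto proto in
+...
+(* ...and in the error handler, spare pre-existing declarations: *)
+with e ->
+  if created_here then delete_function the_function;
+  raise e
+
+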
+ +
+ + +
Driver Changes and +Closing Thoughts
+ + +
+ +

+For now, code generation to LLVM doesn't really get us much, except that we can +look at the pretty IR calls. The sample code inserts calls to Codegen into the +"Toplevel.main_loop", and then dumps out the LLVM IR. This gives a +nice way to look at the LLVM IR for simple functions. For example: +

+ +
+
+ready> 4+5;
+Read top-level expression:
+define double @""() {
+entry:
+        %addtmp = add double 4.000000e+00, 5.000000e+00
+        ret double %addtmp
+}
+
+
+ +

Note how the parser turns the top-level expression into anonymous functions +for us. This will be handy when we add JIT +support in the next chapter. Also note that the code is very literally +transcribed, no optimizations are being performed. We will +add optimizations explicitly +in the next chapter.

+ +
+
+ready> def foo(a b) a*a + 2*a*b + b*b;
+Read function definition:
+define double @foo(double %a, double %b) {
+entry:
+        %multmp = mul double %a, %a
+        %multmp1 = mul double 2.000000e+00, %a
+        %multmp2 = mul double %multmp1, %b
+        %addtmp = add double %multmp, %multmp2
+        %multmp3 = mul double %b, %b
+        %addtmp4 = add double %addtmp, %multmp3
+        ret double %addtmp4
+}
+
+
+ +

This shows some simple arithmetic. Notice the striking similarity to the +LLVM builder calls that we use to create the instructions.

+ +
+
+ready> def bar(a) foo(a, 4.0) + bar(31337);
+Read function definition:
+define double @bar(double %a) {
+entry:
+        %calltmp = call double @foo( double %a, double 4.000000e+00 )
+        %calltmp1 = call double @bar( double 3.133700e+04 )
+        %addtmp = add double %calltmp, %calltmp1
+        ret double %addtmp
+}
+
+
+ +

This shows some function calls. Note that this function will take a long +time to execute if you call it. In the future we'll add conditional control +flow to actually make recursion useful :).

+ +
+
+ready> extern cos(x);
+Read extern:
+declare double @cos(double)
+
+ready> cos(1.234);
+Read top-level expression:
+define double @""() {
+entry:
+        %calltmp = call double @cos( double 1.234000e+00 )
+        ret double %calltmp
+}
+
+
+ +

This shows an extern for the libm "cos" function, and a call to it.

+ + +
+
+ready> ^D
+; ModuleID = 'my cool jit'
+
+define double @""() {
+entry:
+        %addtmp = add double 4.000000e+00, 5.000000e+00
+        ret double %addtmp
+}
+
+define double @foo(double %a, double %b) {
+entry:
+        %multmp = mul double %a, %a
+        %multmp1 = mul double 2.000000e+00, %a
+        %multmp2 = mul double %multmp1, %b
+        %addtmp = add double %multmp, %multmp2
+        %multmp3 = mul double %b, %b
+        %addtmp4 = add double %addtmp, %multmp3
+        ret double %addtmp4
+}
+
+define double @bar(double %a) {
+entry:
+        %calltmp = call double @foo( double %a, double 4.000000e+00 )
+        %calltmp1 = call double @bar( double 3.133700e+04 )
+        %addtmp = add double %calltmp, %calltmp1
+        ret double %addtmp
+}
+
+declare double @cos(double)
+
+define double @""() {
+entry:
+        %calltmp = call double @cos( double 1.234000e+00 )
+        ret double %calltmp
+}
+
+
+ +

When you quit the current demo, it dumps out the IR for the entire module +generated. Here you can see the big picture with all the functions referencing +each other.

+ +

This wraps up the third chapter of the Kaleidoscope tutorial. Up next, we'll +describe how to add JIT codegen and optimizer +support to this so we can actually start running code!

+ +
+ + + +
Full Code Listing
+ + +
+ +

+Here is the complete code listing for our running example, enhanced with the +LLVM code generator. Because this uses the LLVM libraries, we need to link +them in. Here we let ocamlbuild handle that, driven by the _tags and +myocamlbuild.ml files shown below; to build and run:

+ +
+
+# Compile
+ocamlbuild toy.byte
+# Run
+./toy.byte
+
+
+ +

Here is the code:

+ +
+
_tags:
+
+
+<{lexer,parser}.ml>: use_camlp4, pp(camlp4of)
+<*.{byte,native}>: g++, use_llvm, use_llvm_analysis
+
+
+ +
myocamlbuild.ml:
+
+
+open Ocamlbuild_plugin;;
+
+ocaml_lib ~extern:true "llvm";;
+ocaml_lib ~extern:true "llvm_analysis";;
+
+flag ["link"; "ocaml"; "g++"] (S[A"-cc"; A"g++"]);;
+
+
+ +
token.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Lexer Tokens
+ *===----------------------------------------------------------------------===*)
+
+(* The lexer returns these 'Kwd' if it is an unknown character, otherwise one of
+ * these others for known things. *)
+type token =
+  (* commands *)
+  | Def | Extern
+
+  (* primary *)
+  | Ident of string | Number of float
+
+  (* unknown *)
+  | Kwd of char
+
+
+ +
lexer.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Lexer
+ *===----------------------------------------------------------------------===*)
+
+let rec lex = parser
+  (* Skip any whitespace. *)
+  | [< ' (' ' | '\n' | '\r' | '\t'); stream >] -> lex stream
+
+  (* identifier: [a-zA-Z][a-zA-Z0-9] *)
+  | [< ' ('A' .. 'Z' | 'a' .. 'z' as c); stream >] ->
+      let buffer = Buffer.create 1 in
+      Buffer.add_char buffer c;
+      lex_ident buffer stream
+
+  (* number: [0-9.]+ *)
+  | [< ' ('0' .. '9' as c); stream >] ->
+      let buffer = Buffer.create 1 in
+      Buffer.add_char buffer c;
+      lex_number buffer stream
+
+  (* Comment until end of line. *)
+  | [< ' ('#'); stream >] ->
+      lex_comment stream
+
+  (* Otherwise, just return the character as its ascii value. *)
+  | [< 'c; stream >] ->
+      [< 'Token.Kwd c; lex stream >]
+
+  (* end of stream. *)
+  | [< >] -> [< >]
+
+and lex_number buffer = parser
+  | [< ' ('0' .. '9' | '.' as c); stream >] ->
+      Buffer.add_char buffer c;
+      lex_number buffer stream
+  | [< stream=lex >] ->
+      [< 'Token.Number (float_of_string (Buffer.contents buffer)); stream >]
+
+and lex_ident buffer = parser
+  | [< ' ('A' .. 'Z' | 'a' .. 'z' | '0' .. '9' as c); stream >] ->
+      Buffer.add_char buffer c;
+      lex_ident buffer stream
+  | [< stream=lex >] ->
+      match Buffer.contents buffer with
+      | "def" -> [< 'Token.Def; stream >]
+      | "extern" -> [< 'Token.Extern; stream >]
+      | id -> [< 'Token.Ident id; stream >]
+
+and lex_comment = parser
+  | [< ' ('\n'); stream=lex >] -> stream
+  | [< 'c; e=lex_comment >] -> e
+  | [< >] -> [< >]
+
+
+ +
ast.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Abstract Syntax Tree (aka Parse Tree)
+ *===----------------------------------------------------------------------===*)
+
+(* expr - Base type for all expression nodes. *)
+type expr =
+  (* variant for numeric literals like "1.0". *)
+  | Number of float
+
+  (* variant for referencing a variable, like "a". *)
+  | Variable of string
+
+  (* variant for a binary operator. *)
+  | Binary of char * expr * expr
+
+  (* variant for function calls. *)
+  | Call of string * expr array
+
+(* proto - This type represents the "prototype" for a function, which captures
+ * its name, and its argument names (thus implicitly the number of arguments the
+ * function takes). *)
+type proto = Prototype of string * string array
+
+(* func - This type represents a function definition itself. *)
+type func = Function of proto * expr
+
+
+ +
parser.ml:
+
+
+(*===---------------------------------------------------------------------===
+ * Parser
+ *===---------------------------------------------------------------------===*)
+
+(* binop_precedence - This holds the precedence for each binary operator that is
+ * defined *)
+let binop_precedence:(char, int) Hashtbl.t = Hashtbl.create 10
+
+(* precedence - Get the precedence of the pending binary operator token. *)
+let precedence c = try Hashtbl.find binop_precedence c with Not_found -> -1
+
+(* primary
+ *   ::= identifier
+ *   ::= numberexpr
+ *   ::= parenexpr *)
+let rec parse_primary = parser
+  (* numberexpr ::= number *)
+  | [< 'Token.Number n >] -> Ast.Number n
+
+  (* parenexpr ::= '(' expression ')' *)
+  | [< 'Token.Kwd '('; e=parse_expr; 'Token.Kwd ')' ?? "expected ')'" >] -> e
+
+  (* identifierexpr
+   *   ::= identifier
+   *   ::= identifier '(' argumentexpr ')' *)
+  | [< 'Token.Ident id; stream >] ->
+      let rec parse_args accumulator = parser
+        | [< e=parse_expr; stream >] ->
+            begin parser
+              | [< 'Token.Kwd ','; e=parse_args (e :: accumulator) >] -> e
+              | [< >] -> e :: accumulator
+            end stream
+        | [< >] -> accumulator
+      in
+      let rec parse_ident id = parser
+        (* Call. *)
+        | [< 'Token.Kwd '(';
+             args=parse_args [];
+             'Token.Kwd ')' ?? "expected ')'">] ->
+            Ast.Call (id, Array.of_list (List.rev args))
+
+        (* Simple variable ref. *)
+        | [< >] -> Ast.Variable id
+      in
+      parse_ident id stream
+
+  | [< >] -> raise (Stream.Error "unknown token when expecting an expression.")
+
+(* binoprhs
+ *   ::= ('+' primary)* *)
+and parse_bin_rhs expr_prec lhs stream =
+  match Stream.peek stream with
+  (* If this is a binop, find its precedence. *)
+  | Some (Token.Kwd c) when Hashtbl.mem binop_precedence c ->
+      let token_prec = precedence c in
+
+      (* If this is a binop that binds at least as tightly as the current binop,
+       * consume it, otherwise we are done. *)
+      if token_prec < expr_prec then lhs else begin
+        (* Eat the binop. *)
+        Stream.junk stream;
+
+        (* Parse the primary expression after the binary operator. *)
+        let rhs = parse_primary stream in
+
+        (* Okay, we know this is a binop. *)
+        let rhs =
+          match Stream.peek stream with
+          | Some (Token.Kwd c2) ->
+              (* If BinOp binds less tightly with rhs than the operator after
+               * rhs, let the pending operator take rhs as its lhs. *)
+              let next_prec = precedence c2 in
+              if token_prec < next_prec
+              then parse_bin_rhs (token_prec + 1) rhs stream
+              else rhs
+          | _ -> rhs
+        in
+
+        (* Merge lhs/rhs. *)
+        let lhs = Ast.Binary (c, lhs, rhs) in
+        parse_bin_rhs expr_prec lhs stream
+      end
+  | _ -> lhs
+
+(* expression
+ *   ::= primary binoprhs *)
+and parse_expr = parser
+  | [< lhs=parse_primary; stream >] -> parse_bin_rhs 0 lhs stream
+
+(* prototype
+ *   ::= id '(' id* ')' *)
+let parse_prototype =
+  let rec parse_args accumulator = parser
+    | [< 'Token.Ident id; e=parse_args (id::accumulator) >] -> e
+    | [< >] -> accumulator
+  in
+
+  parser
+  | [< 'Token.Ident id;
+       'Token.Kwd '(' ?? "expected '(' in prototype";
+       args=parse_args [];
+       'Token.Kwd ')' ?? "expected ')' in prototype" >] ->
+      (* success. *)
+      Ast.Prototype (id, Array.of_list (List.rev args))
+
+  | [< >] ->
+      raise (Stream.Error "expected function name in prototype")
+
+(* definition ::= 'def' prototype expression *)
+let parse_definition = parser
+  | [< 'Token.Def; p=parse_prototype; e=parse_expr >] ->
+      Ast.Function (p, e)
+
+(* toplevelexpr ::= expression *)
+let parse_toplevel = parser
+  | [< e=parse_expr >] ->
+      (* Make an anonymous proto. *)
+      Ast.Function (Ast.Prototype ("", [||]), e)
+
+(*  external ::= 'extern' prototype *)
+let parse_extern = parser
+  | [< 'Token.Extern; e=parse_prototype >] -> e
+
+
+ +
codegen.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Code Generation
+ *===----------------------------------------------------------------------===*)
+
+open Llvm
+
+exception Error of string
+
+let the_module = create_module "my cool jit"
+let builder = builder ()
+let named_values:(string, llvalue) Hashtbl.t = Hashtbl.create 10
+
+let rec codegen_expr = function
+  | Ast.Number n -> const_float double_type n
+  | Ast.Variable name ->
+      (try Hashtbl.find named_values name with
+        | Not_found -> raise (Error "unknown variable name"))
+  | Ast.Binary (op, lhs, rhs) ->
+      let lhs_val = codegen_expr lhs in
+      let rhs_val = codegen_expr rhs in
+      begin
+        match op with
+        | '+' -> build_add lhs_val rhs_val "addtmp" builder
+        | '-' -> build_sub lhs_val rhs_val "subtmp" builder
+        | '*' -> build_mul lhs_val rhs_val "multmp" builder
+        | '<' ->
+            (* Convert bool 0/1 to double 0.0 or 1.0 *)
+            let i = build_fcmp Fcmp.Ult lhs_val rhs_val "cmptmp" builder in
+            build_uitofp i double_type "booltmp" builder
+        | _ -> raise (Error "invalid binary operator")
+      end
+  | Ast.Call (callee, args) ->
+      (* Look up the name in the module table. *)
+      let callee =
+        match lookup_function callee the_module with
+        | Some callee -> callee
+        | None -> raise (Error "unknown function referenced")
+      in
+      let params = params callee in
+
+      (* If argument mismatch error. *)
+      if Array.length params == Array.length args then () else
+        raise (Error "incorrect # arguments passed");
+      let args = Array.map codegen_expr args in
+      build_call callee args "calltmp" builder
+
+let codegen_proto = function
+  | Ast.Prototype (name, args) ->
+      (* Make the function type: double(double,double) etc. *)
+      let doubles = Array.make (Array.length args) double_type in
+      let ft = function_type double_type doubles in
+      let f =
+        match lookup_function name the_module with
+        | None -> declare_function name ft the_module
+
+        (* If 'f' conflicted, there was already something named 'name'. If it
+         * has a body, don't allow redefinition or reextern. *)
+        | Some f ->
+            (* If 'f' already has a body, reject this. *)
+            if block_begin f <> At_end f then
+              raise (Error "redefinition of function");
+
+            (* If 'f' took a different number of arguments, reject. *)
+            if element_type (type_of f) <> ft then
+              raise (Error "redefinition of function with different # args");
+            f
+      in
+
+      (* Set names for all arguments. *)
+      Array.iteri (fun i a ->
+        let n = args.(i) in
+        set_value_name n a;
+        Hashtbl.add named_values n a;
+      ) (params f);
+      f
+
+let codegen_func = function
+  | Ast.Function (proto, body) ->
+      Hashtbl.clear named_values;
+      let the_function = codegen_proto proto in
+
+      (* Create a new basic block to start insertion into. *)
+      let bb = append_block "entry" the_function in
+      position_at_end bb builder;
+
+      try
+        let ret_val = codegen_expr body in
+
+        (* Finish off the function. *)
+        let _ = build_ret ret_val builder in
+
+        (* Validate the generated code, checking for consistency. *)
+        Llvm_analysis.assert_valid_function the_function;
+
+        the_function
+      with e ->
+        delete_function the_function;
+        raise e
+
+
+ +
toplevel.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Top-Level parsing and JIT Driver
+ *===----------------------------------------------------------------------===*)
+
+open Llvm
+
+(* top ::= definition | external | expression | ';' *)
+let rec main_loop stream =
+  match Stream.peek stream with
+  | None -> ()
+
+  (* ignore top-level semicolons. *)
+  | Some (Token.Kwd ';') ->
+      Stream.junk stream;
+      main_loop stream
+
+  | Some token ->
+      begin
+        try match token with
+        | Token.Def ->
+            let e = Parser.parse_definition stream in
+            print_endline "parsed a function definition.";
+            dump_value (Codegen.codegen_func e);
+        | Token.Extern ->
+            let e = Parser.parse_extern stream in
+            print_endline "parsed an extern.";
+            dump_value (Codegen.codegen_proto e);
+        | _ ->
+            (* Evaluate a top-level expression into an anonymous function. *)
+            let e = Parser.parse_toplevel stream in
+            print_endline "parsed a top-level expr";
+            dump_value (Codegen.codegen_func e);
+        with Stream.Error s | Codegen.Error s ->
+          (* Skip token for error recovery. *)
+          Stream.junk stream;
+          print_endline s;
+      end;
+      print_string "ready> "; flush stdout;
+      main_loop stream
+
+
+ +
toy.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Main driver code.
+ *===----------------------------------------------------------------------===*)
+
+open Llvm
+
+let main () =
+  (* Install standard binary operators.
+   * 1 is the lowest precedence. *)
+  Hashtbl.add Parser.binop_precedence '<' 10;
+  Hashtbl.add Parser.binop_precedence '+' 20;
+  Hashtbl.add Parser.binop_precedence '-' 20;
+  Hashtbl.add Parser.binop_precedence '*' 40;    (* highest. *)
+
+  (* Prime the first token. *)
+  print_string "ready> "; flush stdout;
+  let stream = Lexer.lex (Stream.of_channel stdin) in
+
+  (* Run the main "interpreter loop" now. *)
+  Toplevel.main_loop stream;
+
+  (* Print out all the generated code. *)
+  dump_module Codegen.the_module
+;;
+
+main ()
+
+
+
+ +Next: Adding JIT and Optimizer Support +
+ + +
+
+ Valid CSS! + Valid HTML 4.01! + + Chris Lattner
+ Erick Tryzelaar
+ The LLVM Compiler Infrastructure
+ Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $ +
+ + diff --git a/docs/tutorial/OCamlLangImpl4.html b/docs/tutorial/OCamlLangImpl4.html new file mode 100644 index 00000000000..fc1caeb1f20 --- /dev/null +++ b/docs/tutorial/OCamlLangImpl4.html @@ -0,0 +1,1025 @@ + + + + + Kaleidoscope: Adding JIT and Optimizer Support + + + + + + + + +
Kaleidoscope: Adding JIT and Optimizer Support
+ + + +
+

+ Written by Chris Lattner + and Erick Tryzelaar +

+
+ + +
Chapter 4 Introduction
+ + +
+ +

Welcome to Chapter 4 of the "Implementing a language +with LLVM" tutorial. Chapters 1-3 described the implementation of a simple +language and added support for generating LLVM IR. This chapter describes +two new techniques: adding optimizer support to your language, and adding JIT +compiler support. These additions will demonstrate how to get nice, efficient code +for the Kaleidoscope language.

+ +
+ + +
Trivial Constant +Folding
+ + +
+ +

Note: the OCaml bindings already use LLVMFoldingBuilder.

+ +

+Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately, +it does not produce wonderful code. For example, when compiling simple code, +we don't get obvious optimizations:

+ +
+
+ready> def test(x) 1+2+x;
+Read function definition:
+define double @test(double %x) {
+entry:
+        %addtmp = add double 1.000000e+00, 2.000000e+00
+        %addtmp1 = add double %addtmp, %x
+        ret double %addtmp1
+}
+
+
+ +

This code is a very, very literal transcription of the AST built by parsing +the input. As such, this transcription lacks optimizations like constant folding +(we'd like to get "add x, 3.0" in the example above) as well as other +more important optimizations. Constant folding, in particular, is a very common +and very important optimization: so much so that many language implementors +implement constant folding support in their AST representation.

+ +

With LLVM, you don't need this support in the AST. Since all calls to build +LLVM IR go through the LLVM builder, it would be nice if the builder itself +checked to see if there was a constant folding opportunity when you call it. +If so, it could just do the constant fold and return the constant instead of +creating an instruction. This is exactly what the LLVMFoldingBuilder +class does. + +

In C++ this means simply switching from LLVMBuilder to +LLVMFoldingBuilder; the OCaml bindings already use the folding builder, so without +changing any code at all we get all of our instructions implicitly constant folded. +For example, the input above now compiles to:

+ +
+
+ready> def test(x) 1+2+x;
+Read function definition:
+define double @test(double %x) {
+entry:
+        %addtmp = add double 3.000000e+00, %x
+        ret double %addtmp
+}
+
+
+ +

Well, that was easy :). In practice, we recommend always using +LLVMFoldingBuilder when generating code like this. It has no +"syntactic overhead" for its use (you don't have to uglify your compiler with +constant checks everywhere) and it can dramatically reduce the amount of +LLVM IR that is generated in some cases (particularly for languages with a macro +preprocessor or that use a lot of constants).

+ +

On the other hand, the LLVMFoldingBuilder is limited by the fact +that it does all of its analysis inline with the code as it is built. If you +take a slightly more complex example:

+ +
+
+ready> def test(x) (1+2+x)*(x+(1+2));
+ready> Read function definition:
+define double @test(double %x) {
+entry:
+        %addtmp = add double 3.000000e+00, %x
+        %addtmp1 = add double %x, 3.000000e+00
+        %multmp = mul double %addtmp, %addtmp1
+        ret double %multmp
+}
+
+
+ +

In this case, the LHS and RHS of the multiplication are the same value. We'd +really like to see this generate "tmp = x+3; result = tmp*tmp;" instead +of computing "x+3" twice.

+ +

Unfortunately, no amount of local analysis will be able to detect and correct +this. This requires two transformations: reassociation of expressions (to +make the adds lexically identical) and Common Subexpression Elimination (CSE) +to delete the redundant add instruction. Fortunately, LLVM provides a broad +range of optimizations that you can use, in the form of "passes".

+ +
+ + +
LLVM Optimization + Passes
+ + +
+ +

LLVM provides many optimization passes, which do many different sorts of +things and have different tradeoffs. Unlike other systems, LLVM doesn't hold +to the mistaken notion that one set of optimizations is right for all languages +and for all situations. LLVM allows a compiler implementor to make complete +decisions about what optimizations to use, in which order, and in what +situation.

+ +

As a concrete example, LLVM supports both "whole module" passes, which look +across as large a body of code as they can (often a whole file, but if run +at link time, this can be a substantial portion of the whole program). It also +supports and includes "per-function" passes which just operate on a single +function at a time, without looking at other functions. For more information +on passes and how they are run, see the How +to Write a Pass document and the List of LLVM +Passes.

+ +

For Kaleidoscope, we are currently generating functions on the fly, one at +a time, as the user types them in. We aren't shooting for the ultimate +optimization experience in this setting, but we also want to catch the easy and +quick stuff where possible. As such, we will choose to run a few per-function +optimizations as the user types the function in. If we wanted to make a "static +Kaleidoscope compiler", we would use exactly the code we have now, except that +we would defer running the optimizer until the entire file has been parsed.

+ +

In order to get per-function optimizations going, we need to set up a +Llvm.PassManager to hold and +organize the LLVM optimizations that we want to run. Once we have that, we can +add a set of optimizations to run. The code looks like this:

+ +
+
+  (* Create the JIT. *)
+  let the_module_provider = ModuleProvider.create Codegen.the_module in
+  let the_execution_engine = ExecutionEngine.create the_module_provider in
+  let the_fpm = PassManager.create_function the_module_provider in
+
+  (* Set up the optimizer pipeline.  Start with registering info about how the
+   * target lays out data structures. *)
+  TargetData.add (ExecutionEngine.target_data the_execution_engine) the_fpm;
+
+  (* Do simple "peephole" optimizations and bit-twiddling optzn. *)
+  add_instruction_combining the_fpm;
+
+  (* reassociate expressions. *)
+  add_reassociation the_fpm;
+
+  (* Eliminate Common SubExpressions. *)
+  add_gvn the_fpm;
+
+  (* Simplify the control flow graph (deleting unreachable blocks, etc). *)
+  add_cfg_simplification the_fpm;
+
+  (* Run the main "interpreter loop" now. *)
+  Toplevel.main_loop the_fpm the_execution_engine stream;
+
+
+ +

This code defines two values, an Llvm.llmoduleprovider and a +Llvm.PassManager.t. The former is basically a wrapper around our +Llvm.llmodule that the Llvm.PassManager.t requires. It +provides certain flexibility that we're not going to take advantage of here, +so I won't dive into any details about it.

+ +

The meat of the matter here is the definition of "the_fpm". It +requires the_module (through the +the_module_provider) to construct itself. Once it is set up, we use a +series of "add" calls to add a bunch of LLVM passes. The first call is +basically boilerplate: it adds a pass so that later optimizations know how the +data structures in the program are laid out. The +"the_execution_engine" variable is related to the JIT, which we will +get to in the next section.

+ +

In this case, we choose to add 4 optimization passes. The passes we chose +here are a pretty standard set of "cleanup" optimizations that are useful for +a wide variety of code. I won't delve into what they do but, believe me, +they are a good starting place :).

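+ +
As an aside, these registration functions come from a couple of different binding modules; the sketch below shows the opens that code like this assumes (the exact module layout may differ slightly between LLVM releases, so treat it as an assumption):

+ +
+
+open Llvm
+open Llvm_executionengine   (* ExecutionEngine, GenericValue *)
+open Llvm_target            (* TargetData *)
+open Llvm_scalar_opts       (* add_instruction_combining, add_reassociation, ... *)
+
+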
+ +

Once the Llvm.PassManager.t is set up, we need to make use of it. +We do this by running it after our newly created function is constructed (in +Codegen.codegen_func), but before it is returned to the client:

+ +
+
+let codegen_func the_fpm = function
+      ...
+      try
+        let ret_val = codegen_expr body in
+
+        (* Finish off the function. *)
+        let _ = build_ret ret_val builder in
+
+        (* Validate the generated code, checking for consistency. *)
+        Llvm_analysis.assert_valid_function the_function;
+
+        (* Optimize the function. *)
+        let _ = PassManager.run_function the_function the_fpm in
+
+        the_function
+
+
+ +

As you can see, this is pretty straightforward. the_fpm +optimizes and updates the LLVM function in place, improving (hopefully) its +body. With this in place, we can try our test above again:

+ +
+
+ready> def test(x) (1+2+x)*(x+(1+2));
+ready> Read function definition:
+define double @test(double %x) {
+entry:
+        %addtmp = add double %x, 3.000000e+00
+        %multmp = mul double %addtmp, %addtmp
+        ret double %multmp
+}
+
+
+ +

As expected, we now get our nicely optimized code, saving a floating point +add instruction from every execution of this function.

+ +

LLVM provides a wide variety of optimizations that can be used in certain +circumstances. Some documentation about the various +passes is available, but it isn't very complete. Another good source of +ideas can come from looking at the passes that llvm-gcc or +llvm-ld run to get started. The "opt" tool allows you to +experiment with passes from the command line, so you can see if they do +anything.

+ +

Now that we have reasonable code coming out of our front-end, lets talk about +executing it!

+ +
+ + +
Adding a JIT Compiler
+ + +
+ +

Code that is available in LLVM IR can have a wide variety of tools +applied to it. For example, you can run optimizations on it (as we did above), +you can dump it out in textual or binary forms, you can compile the code to an +assembly file (.s) for some target, or you can JIT compile it. The nice thing +about the LLVM IR representation is that it is the "common currency" between +many different parts of the compiler. +

+ +

In this section, we'll add JIT compiler support to our interpreter. The +basic idea that we want for Kaleidoscope is to have the user enter function +bodies as they do now, but immediately evaluate the top-level expressions they +type in. For example, if they type in "1 + 2;", we should evaluate and print +out 3. If they define a function, they should be able to call it from the +command line.

+ +

In order to do this, we first declare and initialize the JIT. This is done +by adding a global variable and a call in main:

+ +
+
+...
+let main () =
+  ...
+
+  (* Create the JIT. *)
+  let the_module_provider = ModuleProvider.create Codegen.the_module in
+  let the_execution_engine = ExecutionEngine.create the_module_provider in
+  ...
+
+
+ +

This creates an abstract "Execution Engine" which can be either a JIT +compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler +for you if one is available for your platform, otherwise it will fall back to +the interpreter.

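+ +
As an aside, if you want to force interpretation instead of letting LLVM pick, the execution engine bindings expose an explicit constructor; a minimal sketch, assuming ExecutionEngine.create_interpreter is available in your release of the bindings:

+ +
+
+(* Hypothetical alternative (not used by the tutorial): always interpret. *)
+let the_execution_engine =
+  ExecutionEngine.create_interpreter the_module_provider in
+...
+
+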
+ +

Once the Llvm_executionengine.ExecutionEngine.t is created, the JIT +is ready to be used. There are a variety of APIs that are useful, but the +simplest one is the "Llvm_executionengine.ExecutionEngine.run_function" +function. This function JIT compiles the specified LLVM function, runs it with +the arguments you pass, and returns the computed result as a generic value. In our case, this means that we +can change the code that parses a top-level expression to look like this:

+ +
+
+            (* Evaluate a top-level expression into an anonymous function. *)
+            let e = Parser.parse_toplevel stream in
+            print_endline "parsed a top-level expr";
+            let the_function = Codegen.codegen_func the_fpm e in
+            dump_value the_function;
+
+            (* JIT the function, returning a function pointer. *)
+            let result = ExecutionEngine.run_function the_function [||]
+              the_execution_engine in
+
+            print_string "Evaluated to ";
+            print_float (GenericValue.as_float double_type result);
+            print_newline ();
+
+
+ +

Recall that we compile top-level expressions into a self-contained LLVM +function that takes no arguments and returns the computed double. Because the +LLVM JIT compiler matches the native platform ABI, the compiled code could even be +called directly through a function pointer of that type (here, run_function makes the call for us). +This means that there is no difference between JIT compiled code and native machine +code that is statically linked into your application.

+ +

With just these two changes, let's see how Kaleidoscope works now!

+ +
+
+ready> 4+5;
+define double @""() {
+entry:
+        ret double 9.000000e+00
+}
+
+Evaluated to 9.000000
+
+
+ +

Well this looks like it is basically working. The dump of the function +shows the "no argument function that always returns double" that we synthesize +for each top level expression that is typed in. This demonstrates very basic +functionality, but can we do more?

+ +
+
+ready> def testfunc(x y) x + y*2; 
+Read function definition:
+define double @testfunc(double %x, double %y) {
+entry:
+        %multmp = mul double %y, 2.000000e+00
+        %addtmp = add double %multmp, %x
+        ret double %addtmp
+}
+
+ready> testfunc(4, 10);
+define double @""() {
+entry:
+        %calltmp = call double @testfunc( double 4.000000e+00, double 1.000000e+01 )
+        ret double %calltmp
+}
+
+Evaluated to 24.000000
+
+
+ +

This illustrates that we can now call user code, but there is something a bit +subtle going on here. Note that we only invoke the JIT on the anonymous +functions that call testfunc, but we never invoked it on testfunc +itself.

+ +

What actually happened here is that the anonymous function was JIT'd when +requested. When the Kaleidoscope app calls through the function pointer that is +returned, the anonymous function starts executing. It ends up making the call +to the "testfunc" function, and ends up in a stub that invokes the JIT, lazily, +on testfunc. Once the JIT finishes lazily compiling testfunc, +it returns and the code re-executes the call.

+ +

In summary, the JIT will lazily JIT code, on the fly, as it is needed. The +JIT provides a number of other more advanced interfaces for things like freeing +allocated machine code, rejit'ing functions to update them, etc. However, even +with this simple code, we get some surprisingly powerful capabilities - check +this out (I removed the dump of the anonymous functions, you should get the idea +by now :) :

+ +
+
+ready> extern sin(x);
+Read extern:
+declare double @sin(double)
+
+ready> extern cos(x);
+Read extern:
+declare double @cos(double)
+
+ready> sin(1.0);
+Evaluated to 0.841471
+
+ready> def foo(x) sin(x)*sin(x) + cos(x)*cos(x);
+Read function definition:
+define double @foo(double %x) {
+entry:
+        %calltmp = call double @sin( double %x )
+        %multmp = mul double %calltmp, %calltmp
+        %calltmp2 = call double @cos( double %x )
+        %multmp4 = mul double %calltmp2, %calltmp2
+        %addtmp = add double %multmp, %multmp4
+        ret double %addtmp
+}
+
+ready> foo(4.0);
+Evaluated to 1.000000
+
+
+ +

Whoa, how does the JIT know about sin and cos? The answer is surprisingly +simple: in this example, the JIT started execution of a function and got to a +function call. It realized that the function was not yet JIT compiled and +invoked the standard set of routines to resolve the function. In this case, +there is no body defined for the function, so the JIT ended up calling +"dlsym("sin")" on the Kaleidoscope process itself. Since +"sin" is defined within the JIT's address space, it simply patches up +calls in the module to call the libm version of sin directly.

+ +

The LLVM JIT provides a number of interfaces (look in the +llvm_executionengine.mli file) for controlling how unknown functions +get resolved. It allows you to establish explicit mappings between IR objects +and addresses (useful for LLVM global variables that you want to map to static +tables, for example), allows you to dynamically decide on the fly based on the +function name, and even allows you to have the JIT abort itself if any lazy +compilation is attempted.

+ +

One interesting application of this is that we can now extend the language +by writing arbitrary C code to implement operations. For example, if we add: +

+ +
+
+/* putchard - putchar that takes a double and returns 0. */
+extern "C"
+double putchard(double X) {
+  putchar((char)X);
+  return 0;
+}
+
+
+ +

Now we can produce simple output to the console by using things like: +"extern putchard(x); putchard(120);", which prints a lowercase 'x' on +the console (120 is the ASCII code for 'x'). Similar code could be used to +implement file I/O, console input, and many other capabilities in +Kaleidoscope.

+ +

This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At +this point, we can compile a non-Turing-complete programming language, optimize +and JIT compile it in a user-driven way. Next up we'll look into extending the language with control flow +constructs, tackling some interesting LLVM IR issues along the way.

+ +
+ + +
Full Code Listing
+ + +
+ +

+Here is the complete code listing for our running example, enhanced with the +LLVM JIT and optimizer. To build this example, use ocamlbuild toy.byte just as in +the previous chapter, with the build files below:

+ +
+
_tags:
+
+
+<{lexer,parser}.ml>: use_camlp4, pp(camlp4of)
+<*.{byte,native}>: g++, use_llvm, use_llvm_analysis
+<*.{byte,native}>: use_llvm_executionengine, use_llvm_target
+<*.{byte,native}>: use_llvm_scalar_opts, use_bindings
+
+
+ +
myocamlbuild.ml:
+
+
+open Ocamlbuild_plugin;;
+
+ocaml_lib ~extern:true "llvm";;
+ocaml_lib ~extern:true "llvm_analysis";;
+ocaml_lib ~extern:true "llvm_executionengine";;
+ocaml_lib ~extern:true "llvm_target";;
+ocaml_lib ~extern:true "llvm_scalar_opts";;
+
+flag ["link"; "ocaml"; "g++"] (S[A"-cc"; A"g++"]);;
+dep ["link"; "ocaml"; "use_bindings"] ["bindings.o"];;
+
+
+ +
token.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Lexer Tokens
+ *===----------------------------------------------------------------------===*)
+
+(* The lexer returns these 'Kwd' if it is an unknown character, otherwise one of
+ * these others for known things. *)
+type token =
+  (* commands *)
+  | Def | Extern
+
+  (* primary *)
+  | Ident of string | Number of float
+
+  (* unknown *)
+  | Kwd of char
+
+
+ +
lexer.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Lexer
+ *===----------------------------------------------------------------------===*)
+
+let rec lex = parser
+  (* Skip any whitespace. *)
+  | [< ' (' ' | '\n' | '\r' | '\t'); stream >] -> lex stream
+
+  (* identifier: [a-zA-Z][a-zA-Z0-9] *)
+  | [< ' ('A' .. 'Z' | 'a' .. 'z' as c); stream >] ->
+      let buffer = Buffer.create 1 in
+      Buffer.add_char buffer c;
+      lex_ident buffer stream
+
+  (* number: [0-9.]+ *)
+  | [< ' ('0' .. '9' as c); stream >] ->
+      let buffer = Buffer.create 1 in
+      Buffer.add_char buffer c;
+      lex_number buffer stream
+
+  (* Comment until end of line. *)
+  | [< ' ('#'); stream >] ->
+      lex_comment stream
+
+  (* Otherwise, just return the character as its ascii value. *)
+  | [< 'c; stream >] ->
+      [< 'Token.Kwd c; lex stream >]
+
+  (* end of stream. *)
+  | [< >] -> [< >]
+
+and lex_number buffer = parser
+  | [< ' ('0' .. '9' | '.' as c); stream >] ->
+      Buffer.add_char buffer c;
+      lex_number buffer stream
+  | [< stream=lex >] ->
+      [< 'Token.Number (float_of_string (Buffer.contents buffer)); stream >]
+
+and lex_ident buffer = parser
+  | [< ' ('A' .. 'Z' | 'a' .. 'z' | '0' .. '9' as c); stream >] ->
+      Buffer.add_char buffer c;
+      lex_ident buffer stream
+  | [< stream=lex >] ->
+      match Buffer.contents buffer with
+      | "def" -> [< 'Token.Def; stream >]
+      | "extern" -> [< 'Token.Extern; stream >]
+      | id -> [< 'Token.Ident id; stream >]
+
+and lex_comment = parser
+  | [< ' ('\n'); stream=lex >] -> stream
+  | [< 'c; e=lex_comment >] -> e
+  | [< >] -> [< >]
+
+
+ +
ast.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Abstract Syntax Tree (aka Parse Tree)
+ *===----------------------------------------------------------------------===*)
+
+(* expr - Base type for all expression nodes. *)
+type expr =
+  (* variant for numeric literals like "1.0". *)
+  | Number of float
+
+  (* variant for referencing a variable, like "a". *)
+  | Variable of string
+
+  (* variant for a binary operator. *)
+  | Binary of char * expr * expr
+
+  (* variant for function calls. *)
+  | Call of string * expr array
+
+(* proto - This type represents the "prototype" for a function, which captures
+ * its name, and its argument names (thus implicitly the number of arguments the
+ * function takes). *)
+type proto = Prototype of string * string array
+
+(* func - This type represents a function definition itself. *)
+type func = Function of proto * expr
+
+
+ +
parser.ml:
+
+
+(*===---------------------------------------------------------------------===
+ * Parser
+ *===---------------------------------------------------------------------===*)
+
+(* binop_precedence - This holds the precedence for each binary operator that is
+ * defined *)
+let binop_precedence:(char, int) Hashtbl.t = Hashtbl.create 10
+
+(* precedence - Get the precedence of the pending binary operator token. *)
+let precedence c = try Hashtbl.find binop_precedence c with Not_found -> -1
+
+(* primary
+ *   ::= identifier
+ *   ::= numberexpr
+ *   ::= parenexpr *)
+let rec parse_primary = parser
+  (* numberexpr ::= number *)
+  | [< 'Token.Number n >] -> Ast.Number n
+
+  (* parenexpr ::= '(' expression ')' *)
+  | [< 'Token.Kwd '('; e=parse_expr; 'Token.Kwd ')' ?? "expected ')'" >] -> e
+
+  (* identifierexpr
+   *   ::= identifier
+   *   ::= identifier '(' argumentexpr ')' *)
+  | [< 'Token.Ident id; stream >] ->
+      let rec parse_args accumulator = parser
+        | [< e=parse_expr; stream >] ->
+            begin parser
+              | [< 'Token.Kwd ','; e=parse_args (e :: accumulator) >] -> e
+              | [< >] -> e :: accumulator
+            end stream
+        | [< >] -> accumulator
+      in
+      let rec parse_ident id = parser
+        (* Call. *)
+        | [< 'Token.Kwd '(';
+             args=parse_args [];
+             'Token.Kwd ')' ?? "expected ')'">] ->
+            Ast.Call (id, Array.of_list (List.rev args))
+
+        (* Simple variable ref. *)
+        | [< >] -> Ast.Variable id
+      in
+      parse_ident id stream
+
+  | [< >] -> raise (Stream.Error "unknown token when expecting an expression.")
+
+(* binoprhs
+ *   ::= ('+' primary)* *)
+and parse_bin_rhs expr_prec lhs stream =
+  match Stream.peek stream with
+  (* If this is a binop, find its precedence. *)
+  | Some (Token.Kwd c) when Hashtbl.mem binop_precedence c ->
+      let token_prec = precedence c in
+
+      (* If this is a binop that binds at least as tightly as the current binop,
+       * consume it, otherwise we are done. *)
+      if token_prec < expr_prec then lhs else begin
+        (* Eat the binop. *)
+        Stream.junk stream;
+
+        (* Parse the primary expression after the binary operator. *)
+        let rhs = parse_primary stream in
+
+        (* Okay, we know this is a binop. *)
+        let rhs =
+          match Stream.peek stream with
+          | Some (Token.Kwd c2) ->
+              (* If BinOp binds less tightly with rhs than the operator after
+               * rhs, let the pending operator take rhs as its lhs. *)
+              let next_prec = precedence c2 in
+              if token_prec < next_prec
+              then parse_bin_rhs (token_prec + 1) rhs stream
+              else rhs
+          | _ -> rhs
+        in
+
+        (* Merge lhs/rhs. *)
+        let lhs = Ast.Binary (c, lhs, rhs) in
+        parse_bin_rhs expr_prec lhs stream
+      end
+  | _ -> lhs
+
+(* expression
+ *   ::= primary binoprhs *)
+and parse_expr = parser
+  | [< lhs=parse_primary; stream >] -> parse_bin_rhs 0 lhs stream
+
+(* prototype
+ *   ::= id '(' id* ')' *)
+let parse_prototype =
+  let rec parse_args accumulator = parser
+    | [< 'Token.Ident id; e=parse_args (id::accumulator) >] -> e
+    | [< >] -> accumulator
+  in
+
+  parser
+  | [< 'Token.Ident id;
+       'Token.Kwd '(' ?? "expected '(' in prototype";
+       args=parse_args [];
+       'Token.Kwd ')' ?? "expected ')' in prototype" >] ->
+      (* success. *)
+      Ast.Prototype (id, Array.of_list (List.rev args))
+
+  | [< >] ->
+      raise (Stream.Error "expected function name in prototype")
+
+(* definition ::= 'def' prototype expression *)
+let parse_definition = parser
+  | [< 'Token.Def; p=parse_prototype; e=parse_expr >] ->
+      Ast.Function (p, e)
+
+(* toplevelexpr ::= expression *)
+let parse_toplevel = parser
+  | [< e=parse_expr >] ->
+      (* Make an anonymous proto. *)
+      Ast.Function (Ast.Prototype ("", [||]), e)
+
+(*  external ::= 'extern' prototype *)
+let parse_extern = parser
+  | [< 'Token.Extern; e=parse_prototype >] -> e
+
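+A quick way to see the precedence climbing at work (again, a sketch that is
+not part of the listing) is to parse a string directly and inspect the
+resulting AST; with the precedences that toy.ml installs, "a+b*c" should group
+as a+(b*c):
+
+let () =
+  Hashtbl.add Parser.binop_precedence '+' 20;
+  Hashtbl.add Parser.binop_precedence '*' 40;
+  let tokens = Lexer.lex (Stream.of_string "a+b*c") in
+  match Parser.parse_toplevel tokens with
+  | Ast.Function (_, Ast.Binary ('+', Ast.Variable "a",
+                                 Ast.Binary ('*', _, _))) ->
+      print_endline "a+b*c grouped as a+(b*c)"
+  | _ -> print_endline "unexpected parse"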
+
+codegen.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Code Generation
+ *===----------------------------------------------------------------------===*)
+
+open Llvm
+
+exception Error of string
+
+let the_module = create_module "my cool jit"
+let builder = builder ()
+let named_values:(string, llvalue) Hashtbl.t = Hashtbl.create 10
+
+let rec codegen_expr = function
+  | Ast.Number n -> const_float double_type n
+  | Ast.Variable name ->
+      (try Hashtbl.find named_values name with
+        | Not_found -> raise (Error "unknown variable name"))
+  | Ast.Binary (op, lhs, rhs) ->
+      let lhs_val = codegen_expr lhs in
+      let rhs_val = codegen_expr rhs in
+      begin
+        match op with
+        | '+' -> build_add lhs_val rhs_val "addtmp" builder
+        | '-' -> build_sub lhs_val rhs_val "subtmp" builder
+        | '*' -> build_mul lhs_val rhs_val "multmp" builder
+        | '<' ->
+            (* Convert bool 0/1 to double 0.0 or 1.0 *)
+            let i = build_fcmp Fcmp.Ult lhs_val rhs_val "cmptmp" builder in
+            build_uitofp i double_type "booltmp" builder
+        | _ -> raise (Error "invalid binary operator")
+      end
+  | Ast.Call (callee, args) ->
+      (* Look up the name in the module table. *)
+      let callee =
+        match lookup_function callee the_module with
+        | Some callee -> callee
+        | None -> raise (Error "unknown function referenced")
+      in
+      let params = params callee in
+
+      (* Reject the call if the wrong number of arguments was passed. *)
+      if Array.length params <> Array.length args then
+        raise (Error "incorrect # arguments passed");
+      let args = Array.map codegen_expr args in
+      build_call callee args "calltmp" builder
+
+let codegen_proto = function
+  | Ast.Prototype (name, args) ->
+      (* Make the function type: double(double,double) etc. *)
+      let doubles = Array.make (Array.length args) double_type in
+      let ft = function_type double_type doubles in
+      let f =
+        match lookup_function name the_module with
+        | None -> declare_function name ft the_module
+
+        (* If 'f' conflicted, there was already something named 'name'. If it
+         * has a body, don't allow redefinition or reextern. *)
+        | Some f ->
+            (* If 'f' already has a body, reject this. *)
+            if block_begin f <> At_end f then
+              raise (Error "redefinition of function");
+
+            (* If 'f' took a different number of arguments, reject. *)
+            if element_type (type_of f) <> ft then
+              raise (Error "redefinition of function with different # args");
+            f
+      in
+
+      (* Set names for all arguments. *)
+      Array.iteri (fun i a ->
+        let n = args.(i) in
+        set_value_name n a;
+        Hashtbl.add named_values n a;
+      ) (params f);
+      f
+
+let codegen_func the_fpm = function
+  | Ast.Function (proto, body) ->
+      Hashtbl.clear named_values;
+      let the_function = codegen_proto proto in
+
+      (* Create a new basic block to start insertion into. *)
+      let bb = append_block "entry" the_function in
+      position_at_end bb builder;
+
+      try
+        let ret_val = codegen_expr body in
+
+        (* Finish off the function. *)
+        let _ = build_ret ret_val builder in
+
+        (* Validate the generated code, checking for consistency. *)
+        Llvm_analysis.assert_valid_function the_function;
+
+        (* Optimize the function. *)
+        let _ = PassManager.run_function the_function the_fpm in
+
+        the_function
+      with e ->
+        delete_function the_function;
+        raise e
+
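+As a rough sketch of how these pieces fit together (not part of the listing,
+and using a hypothetical function name "example"), the snippet below emits IR
+for the constant expression 4+5 by hand, mirroring what codegen_func does for
+an anonymous top-level expression:
+
+let () =
+  let body = Ast.Binary ('+', Ast.Number 4.0, Ast.Number 5.0) in
+  let f = Codegen.codegen_proto (Ast.Prototype ("example", [||])) in
+  let bb = Llvm.append_block "entry" f in
+  Llvm.position_at_end bb Codegen.builder;
+  ignore (Llvm.build_ret (Codegen.codegen_expr body) Codegen.builder);
+  Llvm.dump_value f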
+
+toplevel.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Top-Level parsing and JIT Driver
+ *===----------------------------------------------------------------------===*)
+
+open Llvm
+open Llvm_executionengine
+
+(* top ::= definition | external | expression | ';' *)
+let rec main_loop the_fpm the_execution_engine stream =
+  match Stream.peek stream with
+  | None -> ()
+
+  (* ignore top-level semicolons. *)
+  | Some (Token.Kwd ';') ->
+      Stream.junk stream;
+      main_loop the_fpm the_execution_engine stream
+
+  | Some token ->
+      begin
+        try match token with
+        | Token.Def ->
+            let e = Parser.parse_definition stream in
+            print_endline "parsed a function definition.";
+            dump_value (Codegen.codegen_func the_fpm e);
+        | Token.Extern ->
+            let e = Parser.parse_extern stream in
+            print_endline "parsed an extern.";
+            dump_value (Codegen.codegen_proto e);
+        | _ ->
+            (* Evaluate a top-level expression into an anonymous function. *)
+            let e = Parser.parse_toplevel stream in
+            print_endline "parsed a top-level expr";
+            let the_function = Codegen.codegen_func the_fpm e in
+            dump_value the_function;
+
+            (* JIT the function, returning a function pointer. *)
+            let result = ExecutionEngine.run_function the_function [||]
+              the_execution_engine in
+
+            print_string "Evaluated to ";
+            print_float (GenericValue.as_float double_type result);
+            print_newline ();
+        with Stream.Error s | Codegen.Error s ->
+          (* Skip token for error recovery. *)
+          Stream.junk stream;
+          print_endline s;
+      end;
+      print_string "ready> "; flush stdout;
+      main_loop the_fpm the_execution_engine stream
+
+
+toy.ml:
+
+
+(*===----------------------------------------------------------------------===
+ * Main driver code.
+ *===----------------------------------------------------------------------===*)
+
+open Llvm
+open Llvm_executionengine
+open Llvm_target
+open Llvm_scalar_opts
+
+let main () =
+  (* Install standard binary operators.
+   * 1 is the lowest precedence. *)
+  Hashtbl.add Parser.binop_precedence '<' 10;
+  Hashtbl.add Parser.binop_precedence '+' 20;
+  Hashtbl.add Parser.binop_precedence '-' 20;
+  Hashtbl.add Parser.binop_precedence '*' 40;    (* highest. *)
+
+  (* Prime the first token. *)
+  print_string "ready> "; flush stdout;
+  let stream = Lexer.lex (Stream.of_channel stdin) in
+
+  (* Create the JIT. *)
+  let the_module_provider = ModuleProvider.create Codegen.the_module in
+  let the_execution_engine = ExecutionEngine.create the_module_provider in
+  let the_fpm = PassManager.create_function the_module_provider in
+
+  (* Set up the optimizer pipeline.  Start with registering info about how the
+   * target lays out data structures. *)
+  TargetData.add (ExecutionEngine.target_data the_execution_engine) the_fpm;
+
+  (* Do simple "peephole" and bit-twiddling optimizations. *)
+  add_instruction_combining the_fpm;
+
+  (* reassociate expressions. *)
+  add_reassociation the_fpm;
+
+  (* Eliminate Common SubExpressions. *)
+  add_gvn the_fpm;
+
+  (* Simplify the control flow graph (deleting unreachable blocks, etc). *)
+  add_cfg_simplification the_fpm;
+
+  (* Run the main "interpreter loop" now. *)
+  Toplevel.main_loop the_fpm the_execution_engine stream;
+
+  (* Print out all the generated code. *)
+  dump_module Codegen.the_module
+;;
+
+main ()
+
+
+bindings.c:
+
+
+#include <stdio.h>
+
+/* putchard - putchar that takes a double and returns 0. */
+extern double putchard(double X) {
+  putchar((char)X);
+  return 0;
+}
+
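+Because putchard uses the C calling convention and takes and returns a double,
+the JIT can resolve calls to it once bindings.c is linked into the toy
+executable: for example, entering "extern putchard(x); putchard(65);" at the
+prompt should print 'A' (ASCII 65) to the console.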
+
+
+Next: Extending the language: control flow
+
+
+ Chris Lattner
+ Erick Tryzelaar
+ The LLVM Compiler Infrastructure
+ Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $