From bdbcb8a260ecf517e6ea4796c1ffe1842d30d4d3 Mon Sep 17 00:00:00 2001
From: Reid Spencer
Date: Sat, 5 Jun 2004 14:39:24 +0000
Subject: [PATCH] Fix a few typos, spellos, grammaros.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14043 91177308-0d34-0410-b5e6-96231b3b80d8
---
 docs/CodeGenerator.html | 32 +++++++++++++++++++---------------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/docs/CodeGenerator.html b/docs/CodeGenerator.html
index 383a85e5c6f..dfdc0528455 100644
--- a/docs/CodeGenerator.html
+++ b/docs/CodeGenerator.html
@@ -68,12 +68,12 @@
 The LLVM target-independent code generator consists of five main components:

  1. Abstract target description interfaces which
-capture improtant properties about various aspects of the machine independently
+capture important properties about various aspects of the machine, independently
 of how they will be used. These interfaces are defined in
 include/llvm/Target/.
  2. Classes used to represent the machine code being
-generator for a target. These classes are intended to be abstract enough to
+generated for a target. These classes are intended to be abstract enough to
 represent the machine code for any target machine. These classes are defined
 in include/llvm/CodeGen/.
  3.
@@ -99,8 +99,8 @@
 Depending on which part of the code generator you are interested in working on,
 different pieces of this will be useful to you. In any case, you should be
 familiar with the target description and machine code representation classes.
 If you want to add
-a backend for a new target, you will need implement the
-targe description classes for your new target and understand the implement the
+a backend for a new target, you will need to implement the
+target description classes for your new target and understand the
 LLVM code representation. If you are interested in implementing a new code
 generation algorithm, it should only depend on the target-description and
 machine code representation
@@ -133,7 +133,7 @@
 implements these two interfaces, and does its own thing. Another example of a
 code generator like this is a (purely hypothetical) backend that converts LLVM
 to the GCC RTL form and uses GCC to emit machine code for a target.

-The other implication of this design is that it is possible to design and
+This design also implies that it is possible to design and
 implement radically different code generators in the LLVM system that do not
 make use of any of the built-in components. Doing so is not recommended at
 all, but could be required for radically different targets that do not fit
 into the
@@ -164,9 +164,9 @@
 quality code generation for standard register-based microprocessors. Code
 generation in this model is divided into the following stages:

-Instruction Selection - Determining a efficient implementation of the
+Instruction Selection - Determining an efficient implementation of the
 input LLVM code in the target instruction set. This stage produces the initial
-code for the program in the target instruction set the makes use of virtual
+code for the program in the target instruction set, then makes use of virtual
 registers in SSA form and physical registers that represent any required
 register assignments due to target constraints or calling conventions.
@@ -191,7 +191,7 @@
 elimination and stack packing.
 "final" machine code can go here, such as spill code scheduling and peephole
 optimizations.
-Code Emission - The final stage actually outputs the machine code for
+Code Emission - The final stage actually outputs the code for
 the current function, either in the target assembler format or in machine
 code.
@@ -200,11 +200,13 @@
 code.

 The code generator is based on the assumption that the instruction selector
 will use an optimal pattern matching selector to create high-quality sequences of
-native code. Alternative code generator designs based on pattern expansion and
-aggressive iterative peephole optimization are much slower. This design is
-designed to permit efficient compilation (important for JIT environments) and
-aggressive optimization (used when generate code offline) by allowing components
-of varying levels of sophisication to be used for any step of compilation.
+native instructions. Alternative code generator designs based on pattern
+expansion and
+aggressive iterative peephole optimization are much slower. This design
+permits efficient compilation (important for JIT environments) and
+aggressive optimization (used when generating code offline) by allowing
+components of varying levels of sophistication to be used for any step of
+compilation.

 In addition to these stages, target implementations can insert arbitrary
@@ -253,7 +255,7 @@
 as inputs or other algorithm-specific data structures).

 All of the target description classes (except the TargetData class) are
 designed to be subclassed by the concrete target implementation, and have
 virtual methods implemented. To
-get to these implementations, TargetMachine class provides accessors that
 should be implemented by the target.

      @@ -269,7 +271,7 @@ should be implemented by the target.

 The TargetMachine class provides virtual methods that are used to access the
 target-specific implementations of the various target description classes
 (with the getInstrInfo, getRegisterInfo,
-getFrameInfo, ... methods). This class is designed to be subclassed by
+getFrameInfo, ... methods). This class is designed to be specialized by
 a concrete target implementation (e.g., X86TargetMachine) which implements
 the various virtual methods. The only required target description class is
 the TargetData class, but if the