From 91e10c42ea395627b7b0e28720a801bfffd87733 Mon Sep 17 00:00:00 2001
From: Bill Wendling

The LLVM target-independent code generator is a framework that provides a
suite of reusable components for translating the LLVM internal representation
to the machine code for a specified target -- either in assembly form
(suitable for a static compiler) or in binary machine code format (usable for
a JIT compiler).

The code generator is based on the assumption that the instruction selector
will use an optimal pattern matching selector to create high-quality sequences
of native instructions. Alternative code generator designs based on pattern
expansion and aggressive iterative peephole optimization are much slower. This
design permits efficient compilation (important for JIT environments) and
aggressive optimization (used when generating code offline) by allowing
components of varying levels of sophistication to be used for any step of
compilation.
In addition to these stages, target implementations can insert arbitrary
target-specific passes into the flow. For example, the X86 target uses a
special pass to handle the 80x87 floating point stack architecture. Other
targets with unusual requirements can be supported with custom passes as
needed.
As LLVM continues to be developed and refined, we plan to move more and more
of the target description to the .td form. Doing so gives us a number
of advantages. The most important is that it makes it easier to port LLVM
because it reduces the amount of C++ code that has to be written, and the
surface area of the code generator that needs to be understood before someone
can get something working. Second, it makes it easier to change things. In
particular, if tables and other things are all emitted by tblgen, we
only need a change in one place (tblgen) to update all of the targets
to a new interface.

The LLVM target description classes (located in the
include/llvm/Target directory) provide an abstract description of the
target machine independent of any particular client. These classes are
designed to capture the abstract properties of the target (such as the
instructions and registers it has), and do not incorporate any particular
pieces of code generation algorithms.
The TargetLowering class is used by SelectionDAG based instruction
selectors primarily to describe how LLVM code should be lowered to
SelectionDAG operations. Among other things, this class indicates:
Registers in the code generator are represented by unsigned integers.
Physical registers (those that actually exist in the target description) are
unique small numbers, and virtual registers are generally large. Note that
register #0 is reserved as a flag value.
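As a hedged illustration of this numbering convention, the physical/virtual
split can be sketched as below. The boundary constant and helper names are
hypothetical, not LLVM's actual headers:

```cpp
#include <cassert>

// Simplified sketch of the register-number convention described above (not
// the real LLVM API): register #0 is a reserved flag value, small numbers
// name physical registers from the target description, and large numbers
// name virtual registers.
namespace sketch {

// Hypothetical bound; a real target derives its register count from the
// .td register file description.
const unsigned FirstVirtualRegister = 1024;

inline bool isFlagValue(unsigned Reg) { return Reg == 0; }

inline bool isPhysicalRegister(unsigned Reg) {
  return Reg > 0 && Reg < FirstVirtualRegister;
}

inline bool isVirtualRegister(unsigned Reg) {
  return Reg >= FirstVirtualRegister;
}

} // namespace sketch
```

A classifier like this is all that is needed because the split is purely a
numeric range check; no per-register table lookup is required.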
Each register in the processor description has an associated
TargetRegisterDesc entry, which provides a textual name for the
register (used for assembly output and debugging dumps) and a set of aliases
(used to indicate whether one register overlaps with another).
In addition to the per-register description, the MRegisterInfo class
exposes a set of processor specific register classes (instances of the
TargetRegisterClass class).

The TargetInstrInfo class is used to describe the machine
instructions supported by the target. It is essentially an array of
TargetInstrDescriptor objects, each of which describes one
instruction the target supports. Descriptors define things like the mnemonic
for the opcode, the number of operands, the list of implicit register uses and
defs, whether the instruction has certain target-independent properties
(accesses memory, is commutable, etc), and hold any target-specific flags.

The TargetFrameInfo class is used to provide information about the
stack frame layout of the target. It holds the direction of stack growth, the
known stack alignment on entry to each function, and the offset to the local
area. The offset to the local area is the offset from the stack pointer on
function entry to the first location where function data (local variables,
spill locations) can be stored.
The TargetSubtarget class is used to provide information about the
specific chip set being targeted. A sub-target informs code generation of
which instructions are supported, instruction latencies and instruction
execution itinerary; i.e., which processing units are used, in what order, and
for how long.

The TargetJITInfo class exposes an abstract interface used by the
Just-In-Time code generator to perform target-specific activities, such as
emitting stubs. If a TargetMachine supports JIT code generation, it
should provide one of these objects through the getJITInfo method.
At the high-level, LLVM code is translated to a machine specific
representation formed out of MachineFunction,
MachineBasicBlock, and MachineInstr instances (defined in
include/llvm/CodeGen). This representation is completely target
agnostic, representing instructions in their most abstract form: an opcode and
a series of operands. This representation is designed to support both an SSA
representation for machine code, as well as a register allocated, non-SSA
form.
The opcode number is a simple unsigned integer that only has meaning to a
specific backend. All of the instructions for a target should be defined in
the *InstrInfo.td file for the target. The opcode enum values are
auto-generated from this description. The MachineInstr class does not
have any information about how to interpret the instruction (i.e., what the
semantics of the instruction are); for that you must refer to the
TargetInstrInfo class.
The operands of a machine instruction can be of several different types:
a register reference, a constant integer, a basic block reference, etc. In
addition, a machine operand should be marked as a def or a use of the value
(though only registers are allowed to be defs).
By convention, the LLVM code generator orders instruction operands so that
all register definitions come before the register uses. Keeping the
destination (definition) operands at the beginning of the operand list has
several advantages. In particular, the debugging printer will print the
instruction like this:

%r3 = add %i1, %i2
Also, if the first operand is a def, it is easier to create instructions
whose only def is the first operand.
Machine instructions are created by using the BuildMI functions,
located in the include/llvm/CodeGen/MachineInstrBuilder.h file. The
BuildMI functions make it easy to build arbitrary machine
instructions. Usage of the BuildMI functions looks like this:

// Create a 'DestReg = mov 42' (rendered in X86 assembly as 'mov DestReg, 42')
// instruction. The '1' specifies how many operands will be added.
MachineInstr *MI = BuildMI(X86::MOV32ri, 1, DestReg).addImm(42);

// Create the same instr, but insert it at the end of a basic block.
MachineBasicBlock &MBB = ...
BuildMI(MBB, X86::MOV32ri, 1, DestReg).addImm(42);

// Create the same instr, but insert it before a specified iterator point.
MachineBasicBlock::iterator MBBI = ...
BuildMI(MBB, MBBI, X86::MOV32ri, 1, DestReg).addImm(42);

// Create a 'cmp Reg, 0' instruction, no destination reg.
MI = BuildMI(X86::CMP32ri, 2).addReg(Reg).addImm(0);

// Create an 'sahf' instruction which takes no operands and stores nothing.
MI = BuildMI(X86::SAHF, 0);

// Create a self looping branch instruction.
BuildMI(MBB, X86::JNE, 1).addMBB(&MBB);
The key thing to remember with the BuildMI functions is that you
have to specify the number of operands that the machine instruction will take.
This allows for efficient memory allocation. You also need to specify whether
operands default to be uses of values, not definitions. If you need to add a
definition operand (other than the optional destination register), you must
explicitly mark it as such:

MI.addReg(Reg, MachineOperand::Def);
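To make the up-front operand count concrete, here is a hedged, self-contained
sketch of the fluent-builder idea: the count given at construction lets the
instruction reserve its operand storage once. All names here are hypothetical
stand-ins, not LLVM's real classes:

```cpp
#include <cassert>
#include <vector>

// Hedged sketch of a BuildMI-style fluent builder (illustrative only).
namespace sketch {

struct Operand {
  enum Kind { Reg, Imm } K;
  long long Value;  // register number or immediate
  bool IsDef;       // operands default to uses; defs are marked explicitly
};

struct MachineInstrSketch {
  unsigned Opcode = 0;
  std::vector<Operand> Operands;
};

class Builder {
  MachineInstrSketch MI;

public:
  Builder(unsigned Opcode, unsigned NumOperands) {
    MI.Opcode = Opcode;
    MI.Operands.reserve(NumOperands);  // one allocation up front
  }
  Builder &addReg(unsigned R, bool IsDef = false) {
    MI.Operands.push_back({Operand::Reg, static_cast<long long>(R), IsDef});
    return *this;
  }
  Builder &addImm(long long V) {
    MI.Operands.push_back({Operand::Imm, V, false});
    return *this;
  }
  MachineInstrSketch take() { return MI; }
};

} // namespace sketch
```

A chained call such as `Builder(Op, 2).addReg(1024, true).addImm(42).take()`
then mirrors the `BuildMI(...).addImm(...)` style shown above.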
For example, consider this simple LLVM code:
int %test(int %X, int %Y) {
  %Z = div int %X, %Y
  ret int %Z
}
The X86 instruction selector produces this machine code for the div
and ret (use "llc X.bc -march=x86 -print-machineinstrs" to
get this):
;; Start of div
%EAX = mov %reg1024           ;; Copy X (in reg1024) into EAX
%reg1027 = sar %reg1024, 31
%EDX = mov %reg1027           ;; Sign extend X into EDX
idiv %reg1025                 ;; Divide by Y (in reg1025)
%reg1026 = mov %EAX           ;; Read the result (Z) out of EAX

;; Start of ret
%EAX = mov %reg1026           ;; 32-bit return value goes in EAX
ret
By the end of code generation, the register allocator has coalesced the
registers and deleted the resultant identity moves, producing the following
code:
;; X is in EAX, Y is in ECX
mov %EAX, %EDX
sar %EDX, 31
idiv %ECX
ret
This approach is extremely general (if it can handle the X86 architecture,
it can handle anything!) and allows all of the target specific knowledge about
the instruction stream to be isolated in the instruction selector. Note that
physical registers should have a short lifetime for good code generation, and
all physical registers are assumed dead on entry to and exit from basic blocks
(before register allocation). Thus, if you need a value to be live across
basic block boundaries, it must live in a virtual register.
MachineInstr's are initially selected in SSA-form, and are
maintained in SSA-form until register allocation happens. For the most part,
this is trivially simple since LLVM is already in SSA form; LLVM PHI nodes
become machine code PHI nodes, and virtual registers are only allowed to have
a single definition.

After register allocation, machine code is no longer in SSA-form because
there are no virtual registers left in the code.
The MachineBasicBlock class contains a list of machine instructions
(MachineInstr instances). It roughly corresponds to the LLVM code
input to the instruction selector, but there can be a one-to-many mapping
(i.e. one LLVM basic block can map to multiple machine basic blocks). The
MachineBasicBlock class has a "getBasicBlock" method, which
returns the LLVM basic block that it comes from.

The MachineFunction class contains a list of machine basic blocks
(MachineBasicBlock instances). It corresponds one-to-one with the
LLVM function input to the instruction selector. In addition to a list of
basic blocks, the MachineFunction contains a
MachineConstantPool, a MachineFrameInfo, a
MachineFunctionInfo, an SSARegMap, and a set of live in and
live out registers for the function. See
include/llvm/CodeGen/MachineFunction.h for more information.

Portions of the DAG instruction selector are generated from the target
description (*.td) files. Our goal is for the entire instruction
selector to be generated from these .td files.
The SelectionDAG provides an abstraction for code representation in a way
that is amenable to instruction selection using automatic techniques
(e.g. dynamic-programming based optimal pattern matching selectors). It is
also well-suited to other phases of code generation; in particular,
instruction scheduling (SelectionDAG's are very close to scheduling DAGs
post-selection). Additionally, the SelectionDAG provides a host representation
where a large variety of very-low-level (but target-independent) optimizations
may be performed; ones which require extensive information about the
instructions efficiently supported by the target.

The SelectionDAG is a Directed-Acyclic-Graph whose nodes are instances of
the SDNode class. The primary payload of the SDNode is its
operation code (Opcode) that indicates what operation the node performs and
the operands to the operation. For example, a combined div/rem operation will
define both the dividend and the remainder. Many other situations require
multiple values as well. Each node also has some number of operands, which are
edges to the node defining the used value. Because nodes may define multiple
values, edges are represented by instances of the SDOperand class,
which is a <SDNode, unsigned> pair, indicating the node and result
value being used, respectively. Each value produced by an SDNode has
an associated MVT::ValueType indicating what type the value is.
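The `<SDNode, unsigned>` edge representation described above can be sketched
in a few lines. This is an illustrative model, not LLVM's real class layout:

```cpp
#include <cassert>
#include <vector>

// Minimal sketch of SelectionDAG edges: a value is identified by the node
// that defines it plus the index of the result being used.
namespace sketch {

struct SDNode;  // forward declaration

struct SDOperand {
  SDNode *Node;    // the node defining the value
  unsigned ResNo;  // which of the node's results is used
};

struct SDNode {
  unsigned Opcode;                  // operation this node performs
  std::vector<SDOperand> Operands;  // edges to the values this node uses
  unsigned NumValues;               // e.g. a div/rem node produces two values
};

// Two operands refer to the same defining node (possibly different results).
inline bool sameNode(const SDOperand &A, const SDOperand &B) {
  return A.Node == B.Node;
}

} // namespace sketch
```

With this shape, a div/rem node with `NumValues == 2` is referenced by two
distinct `SDOperand`s (`ResNo` 0 and 1) pointing at the same `SDNode`.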
SelectionDAGs contain two different kinds of values: those that represent
data flow and those that represent control flow dependencies. Data values are
simple edges with an integer or floating point value type. Control edges are
represented as "chain" edges which are of type MVT::Other. These
edges provide an ordering between nodes that have side effects (such as loads,
stores, calls, returns, etc). All nodes that have side effects should take a
token chain as input and produce a new one as output. By convention, token
chain inputs are always operand #0, and chain results are always the last
value produced by an operation.
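The chain-threading convention above can be sketched as follows; the types
here are hypothetical stand-ins (a null chain stands in for the entry token),
not LLVM's real node classes:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hedged sketch of token chains: each side-effecting node takes the chain
// as its ordering predecessor and produces a new chain for its successor.
namespace sketch {

struct ChainedNode {
  std::string Op;              // e.g. "load", "store", "ret"
  const ChainedNode *InChain;  // operand #0: the previous chain value
};

// Thread a sequence of side-effecting operations onto one token chain,
// returning the final node (the "Root" of the chain).
inline const ChainedNode *threadChain(std::vector<ChainedNode> &Nodes) {
  const ChainedNode *Chain = nullptr;  // stands in for the entry token
  for (ChainedNode &N : Nodes) {
    N.InChain = Chain;
    Chain = &N;
  }
  return Chain;
}

} // namespace sketch
```

Walking `InChain` links from the returned root back to null recovers the
side-effect order, which is exactly what the chain edges exist to preserve.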
A SelectionDAG has designated "Entry" and "Root" nodes. The Entry node is
always a marker node with an Opcode of ISD::EntryToken. The Root node
is the final side-effecting node in the token chain. For example, in a single
basic block function it would be the return node.

One important concept for SelectionDAGs is the notion of a "legal" vs.
"illegal" DAG. A legal DAG for a target is one that only uses supported
operations and supported types. On a 32-bit PowerPC, for example, a DAG with a
value of type i1, i8, i16, or i64 would be illegal, as would a DAG that uses a
SREM or UREM operation. The legalize phase is responsible for turning an
illegal DAG into a legal DAG.
SelectionDAG-based instruction selection consists of the following steps:
The initial SelectionDAG is naively peephole expanded from the LLVM input
by the SelectionDAGLowering class in the
lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp file. The intent of this
pass is to expose as many low-level, target-specific details to the
SelectionDAG as possible. This pass is mostly hard-coded (e.g. an LLVM
add turns into an SDNode add while a getelementptr is
expanded into the obvious arithmetic). This pass requires target-specific
hooks to lower calls, returns, varargs, etc. For these features, the
TargetLowering interface is used.
A target implementation tells the legalizer which types are supported (and
which register class to use for them) by calling the
addRegisterClass method in its TargetLowering constructor.
+ addRegisterClass method in its TargetLowering constructor.Eliminate operations that are not supported by the target.
Targets often have weird constraints, such as not supporting every
operation on every supported datatype (e.g. X86 does not support byte
conditional moves and PowerPC does not support sign-extending loads from a
16-bit memory location). Legalize takes care of this by open-coding another
sequence of operations to emulate the operation ("expansion"), by promoting
one type to a larger type that supports the operation ("promotion"), or by
using a target-specific hook to implement the legalization ("custom").

A target implementation tells the legalizer which operations are not
supported (and which of the above three actions to take) by calling the
setOperationAction method in its TargetLowering constructor.
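A hedged sketch of the action-table idea behind this interface follows. The
enum values and the map-based lookup are illustrative; the real
TargetLowering class stores these flags in packed arrays:

```cpp
#include <cassert>
#include <map>
#include <utility>

// Illustrative legalizer action table: (operation, type) -> action.
namespace sketch {

enum ValueType { i1, i8, i16, i32, i64 };
enum Operation { SREM, UREM, SEXTLOAD, SELECT };
enum LegalizeAction { Legal, Promote, Expand, Custom };

class TargetLoweringSketch {
  std::map<std::pair<Operation, ValueType>, LegalizeAction> Actions;

public:
  // Analogous in spirit to TargetLowering::setOperationAction.
  void setOperationAction(Operation Op, ValueType VT, LegalizeAction A) {
    Actions[{Op, VT}] = A;
  }

  // Unlisted (Op, VT) pairs default to Legal in this sketch.
  LegalizeAction getOperationAction(Operation Op, ValueType VT) const {
    auto It = Actions.find({Op, VT});
    return It == Actions.end() ? Legal : It->second;
  }
};

} // namespace sketch
```

A 32-bit PowerPC-like target, for example, would register SREM and UREM on
i32 as Expand, and the legalizer would consult this table for every node.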
Prior to the existence of the Legalize pass, we required that every target
selector supported and handled every operator and type even if they were not
natively supported. The introduction of the Legalize phase allows all of the
canonicalization patterns to be shared across targets, and makes it very easy
to optimize the canonicalized code because it is still in the form of a DAG.
The SelectionDAG optimization phase is run twice for code generation: once
immediately after the DAG is built and once after legalization. The first run
of the pass allows the initial code to be cleaned up (e.g. performing
optimizations that depend on knowing that the operators have restricted type
inputs). The second run of the pass cleans up the messy code generated by the
Legalize pass, which allows Legalize to be very simple (it can focus on making
code legal instead of focusing on generating good and legal code).

One important class of optimizations performed is optimizing inserted sign
and zero extension instructions. We currently use ad-hoc techniques, but could
move to more rigorous techniques in the future. Here are some good papers on
the subject:
"Widening integer arithmetic"
Kevin Redwine and Norman Ramsey
International Conference on Compiler Construction (CC) 2004
The Select phase is the bulk of the target-specific code for instruction
selection. This phase takes a legal SelectionDAG as input, pattern matches the
instructions supported by the target to this DAG, and produces a new DAG of
target code. For example, consider the following LLVM fragment:

%t1 = add float %W, %X
%t2 = mul float %t1, %Y
%t3 = add float %t2, %Z
This LLVM code corresponds to a SelectionDAG that looks basically like
this:
(fadd:f32 (fmul:f32 (fadd:f32 W, X), Y), Z)
If a target supports floating point multiply-and-add (FMA) operations, one
of the adds can be merged with the multiply. On the PowerPC, for example, the
output of the instruction selector might look like this DAG:

(FMADDS (FADDS W, X), Y, Z)
The FMADDS instruction is a ternary instruction that multiplies its
first two operands and adds the third (as single-precision floating-point
numbers). The FADDS instruction is a simple binary single-precision
add instruction. To perform this pattern match, the PowerPC backend includes
the following instruction definitions:

def FMADDS : AForm_1<59, 29,
                     (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRC, F4RC:$FRB),
                     "fmadds $FRT, $FRA, $FRC, $FRB",
                     [(set F4RC:$FRT, (fadd (fmul F4RC:$FRA, F4RC:$FRC),
                                            F4RC:$FRB))]>;
def FADDS : AForm_2<59, 21,
                    (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRB),
                    "fadds $FRT, $FRA, $FRB",
                    [(set F4RC:$FRT, (fadd F4RC:$FRA, F4RC:$FRB))]>;
The pattern portion of each instruction definition (the final
set list) indicates the pattern used to match the instruction. The
DAG operators (like fmul/fadd) are defined in the
lib/Target/TargetSelectionDAG.td file. "F4RC" is the register class
of the input and result values.
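The essence of the generated matcher, stripped to its core, is a structural
test on the expression tree. The following is a hedged, self-contained
sketch of the fadd(fmul(a, b), c) -> FMADDS rewrite; node kinds and the
rewrite function are simplified stand-ins for the TableGen-generated code:

```cpp
#include <cassert>

// Illustrative tree pattern match for the FMA merge described above.
namespace sketch {

enum Kind { Leaf, FAdd, FMul, FMadds };

struct Node {
  Kind K;
  const Node *L = nullptr;
  const Node *R = nullptr;
  const Node *Acc = nullptr;  // third operand, used only by FMadds
};

// If N is fadd(fmul(a, b), c), produce FMADDS a, b, c; otherwise keep N.
inline Node selectNode(const Node &N) {
  if (N.K == FAdd && N.L && N.L->K == FMul)
    return Node{FMadds, N.L->L, N.L->R, N.R};
  return N;
}

} // namespace sketch
```

The real selector matches many patterns simultaneously and handles types and
register classes, but each individual pattern reduces to a check like this.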
The TableGen DAG instruction selector generator reads the instruction
patterns in the .td file and automatically builds parts of the pattern
matching code for your target. It has the following strengths:

// Arbitrary immediate support. Implement in terms of LIS/ORI.
def : Pat<(i32 imm:$imm),
          (ORI (LIS (HI16 imm:$imm)), (LO16 imm:$imm))>;
While it has many strengths, the system currently has some limitations,
primarily because it is a work in progress and is not yet finished:
Note that this phase is logically separate from the instruction selection
phase, but is tied to it closely in the code because it operates on
SelectionDAGs.
To Be Written
To Be Written
For the JIT or .o file writer
To Be Written
The X86 code generator lives in the lib/Target/X86 directory. This
code generator currently targets a generic P6-like processor. As such, it
produces a few P6-and-above instructions (like conditional moves), but it does
not make use of newer features like MMX or SSE. In the future, the X86 backend
will have sub-target support added for specific processor and operating system
implementations.
The following are the known target triples that are supported by the X86
backend. This is not an exhaustive list, and it would be useful to add those
that people test.
Base + [1,2,4,8] * IndexReg + Disp32
In order to represent this, LLVM tracks no less than 4 operands for each
memory operand of this form. This means that the "load" form of 'mov'
has the following MachineOperands in this order:

Index:      0        |  1        2       3          4
Meaning:    DestReg, |  BaseReg, Scale,  IndexReg,  Displacement
OperandTy:  VirtReg, |  VirtReg, UnsImm, VirtReg,   SignExtImm
Stores, and all other instructions, treat the four memory operands in the
same way and in the same order.
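As a hedged sketch of what these four operands encode, the effective-address
computation can be written out directly. Register values are passed as plain
integers here; in the real backend the operands name registers, not values:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative Base + [1,2,4,8] * IndexReg + Disp32 computation for the
// x86 memory operand form described above.
namespace sketch {

inline uint32_t effectiveAddress(uint32_t Base, uint32_t Scale,
                                 uint32_t Index, int32_t Disp) {
  // The x86 encoding restricts Scale to 1, 2, 4, or 8.
  return Base + Scale * Index + static_cast<uint32_t>(Disp);
}

} // namespace sketch
```

The four MachineOperands after the destination map one-to-one onto the four
parameters of this function: base, scale, index, and displacement.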
An instruction name consists of the base name, a default operand size, and a
character per operand with an optional special size. For example: