actually *removes* one of the operands, instead of just assigning both operands
the same register. This makes reasoning about instructions unnecessarily complex,
because you need to know if you are before or after register allocation to match
up operand #'s with the target description file.
Changing this also gets rid of a bunch of hacky code in various places.
This patch also includes changes to fold loads into cmp/test instructions in
the X86 backend, along with a significant simplification to the X86 spill
folding code.
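To make the cmp/test folding concrete, here is an invented illustration (the
function, registers, and x86-64 AT&T-syntax assembly below are hypothetical,
not output quoted from this patch):

  // For a function like this, the selector previously had to materialize the
  // loaded value in a register and then compare it:
  //     movl  (%rdi), %eax
  //     cmpl  $0, %eax        ; or equivalently: testl %eax, %eax
  // With the folding change, the load can instead become a memory operand of
  // the compare itself:
  //     cmpl  $0, (%rdi)
  bool isZero(const int *p) { return *p == 0; }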
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30108 91177308-0d34-0410-b5e6-96231b3b80d8
This patch:
1. Splits TargetMachine into TargetMachine (generic targets, can be implemented
any way, like the CBE) and LLVMTargetMachine (subclass of TM that is used by
things using libcodegen and other support); a rough sketch follows the list.
2. Instead of having each target fully populate the passmgr for file or JIT
output, move all this to common code, and give targets hooks they can
implement.
3. Commonalize the target population stuff between file emission and JIT
emission.
4. All (native code) codegen stuff now happens in a FunctionPassManager, which
paves the way for "fast -O0" stuff in the CFE later, and LLC could now
lazily stream .bc files from disk to use less memory.
5. There are now many fewer #includes and the targets don't depend on the
scalar xforms or libanalysis anymore (but codegen does).
6. Changing common code generator pass ordering stuff no longer requires
touching all targets.
7. The JIT now has the option of "-fast" codegen or normal optimized codegen,
which is now orthogonal to the fact that JIT'ing is being done.
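A rough sketch of the TargetMachine / LLVMTargetMachine split and the hooks
described in points 1 and 2, assuming invented hook names and simplified
signatures (this is only the shape of the interface, not the real LLVM
declarations):

  #include <iosfwd>

  class FunctionPassManager;   // stand-in forward declaration for the sketch

  // Generic target interface: makes no assumption about how output is
  // produced, so a target like the CBE can implement it directly.
  class TargetMachine {
  public:
    virtual ~TargetMachine() {}
    virtual bool addPassesToEmitFile(FunctionPassManager &PM,
                                     std::ostream &Out) = 0;
  };

  // Targets built on libcodegen derive from this subclass.  The common code
  // owns the pass ordering for both file emission and the JIT and calls back
  // into the target through a few hooks, so reordering shared codegen passes
  // no longer requires touching every target.
  class LLVMTargetMachine : public TargetMachine {
  public:
    bool addPassesToEmitFile(FunctionPassManager &PM, std::ostream &Out) {
      // ...common lowering and optimization passes are added here...
      addInstSelector(PM);            // hook: the target's instruction selector
      // ...register allocation, prolog/epilog insertion, etc...
      addAssemblyEmitter(PM, Out);    // hook: the target's asm printer
      return false;
    }

  protected:
    virtual void addInstSelector(FunctionPassManager &PM) = 0;
    virtual void addAssemblyEmitter(FunctionPassManager &PM,
                                    std::ostream &Out) = 0;
  };

Points 4 and 7 then fall out of this shape: the shared driver populates a
FunctionPassManager, and choosing -fast versus optimized codegen is independent
of whether the output goes to a file or the JIT.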
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30081 91177308-0d34-0410-b5e6-96231b3b80d8
Suppose the TokenFactor can reach the Op:
      [Load chain]
          ^
          |
        [Load]
        ^    ^
        |    |
       /      \-
      /         |
     /        [Op]
    /          ^  ^
   |           .. |
   |          /   |
[TokenFactor]     |
     ^            |
     |            |
      \          /
       \        /
        [Store]
If we move the Load below the TokenFactor, we would create a cycle in
the DAG.
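A toy sketch of the kind of reachability test this legality check needs; the
Node structure, names, and traversal below are invented for illustration and
are not the actual isel code:

  #include <set>
  #include <vector>

  struct Node {
    std::vector<Node*> Operands;   // edges point from a user down to its operands
  };

  // True if Dest can be reached from Start by walking operand edges.
  static bool reaches(Node *Start, Node *Dest, std::set<Node*> &Visited) {
    if (Start == Dest) return true;
    if (!Visited.insert(Start).second) return false;   // already explored
    for (Node *Op : Start->Operands)
      if (reaches(Op, Dest, Visited))
        return true;
    return false;
  }

  // The rule above, restated: refuse to fold the Load into the Op when the
  // TokenFactor can reach the Op, since the fold would move the Load below
  // the TokenFactor and close the cycle drawn in the diagram.
  static bool safeToFoldLoad(Node *TokenFactor, Node *Op) {
    std::set<Node*> Visited;
    return !reaches(TokenFactor, Op, Visited);
  }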
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30040 91177308-0d34-0410-b5e6-96231b3b80d8
in the start of an array and a count of operands where applicable. In many
cases, the number of operands is known, so this static array can be allocated
on the stack, avoiding the heap. In many other cases, a SmallVector can be
used, which has the same benefit in the common cases.
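A minimal sketch of the pointer-plus-count calling style, with an invented
makeNode standing in for getNode (a SmallVector fills the same role when the
operand count is only usually small):

  #include <cstddef>
  #include <vector>

  struct Operand { int Id; };   // stand-in for an SDOperand-like handle

  // Pointer-plus-count interface: the callee never requires a heap container.
  Operand makeNode(unsigned Opcode, const Operand *Ops, std::size_t NumOps) {
    return Operand{int(Opcode + NumOps)};   // dummy body for the sketch
  }

  Operand newStyle(Operand LHS, Operand RHS) {
    // The operand count is known at the call site, so a stack array suffices.
    Operand Ops[] = {LHS, RHS};
    return makeNode(/*Opcode=*/1, Ops, 2);
  }

  Operand oldStyle(Operand LHS, Operand RHS) {
    // The vector-taking overloads forced a heap allocation on every call.
    std::vector<Operand> Ops;
    Ops.push_back(LHS);
    Ops.push_back(RHS);
    return makeNode(/*Opcode=*/1, Ops.data(), Ops.size());
  }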
I updated a lot of code calling getNode that takes a vector, but ran out of
time. The rest of the code should be updated, and these methods should be
removed.
We should also do the same thing to eliminate the methods that take a
vector of MVT::ValueTypes.
It would be extra nice to convert the dagiselemitter to avoid creating vectors
for operands when calling getTargetNode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29566 91177308-0d34-0410-b5e6-96231b3b80d8
selection is done. That's rather expensive, especially in situations where it
isn't really needed.
Move back to searching the predecessors, but make use of topological order
to trim the search space.
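A small sketch of how a topological index can trim such a predecessor search;
the node layout and field names are illustrative, not the actual SelectionDAG
structures:

  #include <set>
  #include <vector>

  struct Node {
    unsigned TopoIndex;            // position in a topological order:
    std::vector<Node*> Operands;   // every operand has a smaller index than its user
  };

  // Is From a (transitive) operand of To?  Since predecessors always carry
  // smaller topological indices, any node whose index is already below
  // From's can be skipped along with everything above it.
  static bool isPredecessorOf(Node *From, Node *To, std::set<Node*> &Visited) {
    if (From == To) return true;
    if (To->TopoIndex < From->TopoIndex) return false;   // prune this subtree
    if (!Visited.insert(To).second) return false;        // already explored
    for (Node *Op : To->Operands)
      if (isPredecessorOf(From, Op, Visited))
        return true;
    return false;
  }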
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29559 91177308-0d34-0410-b5e6-96231b3b80d8
Looks like the libstdc++ implementation does not scale very well. Switch back
to using directly managed arrays.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29469 91177308-0d34-0410-b5e6-96231b3b80d8
The CFE refers to all single-register constraints (like "A") by their 16-bit
name, even though the 8- or 32-bit version of the register may be needed.
The X86 backend should realize what is going on and redecode the name back
to its proper form.
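An invented illustration of the situation using GNU inline asm (x86 only; this
is a hypothetical example, not code from this commit): the constraint names
the register family without encoding a width, so the backend has to pick the
register form that matches the operand's type.

  #include <cstdio>

  int main() {
    unsigned int value = 41;
    // The "a" constraint nominates the A register; because `value` is 32 bits
    // wide, the backend must resolve it to EAX rather than AX or AL.
    asm("addl $1, %0" : "+a"(value));
    std::printf("%u\n", value);
    return 0;
  }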
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29420 91177308-0d34-0410-b5e6-96231b3b80d8