the point where it is at the 95% feature complete mark, it just
needs result updating to be done (then testing, optimization,
etc.).
More specifically, this adds support for chain and flag handling
on the result nodes, support for sdnodexforms, support for variadic
nodes, memrefs, pinned physreg inputs, and probably lots of other
stuff.
In the old DAGISelEmitter, this deletes the dead code related to
OperatorMap, cleans up a variety of dead stuff handling "implicit
remapping" from things like globaladdr -> targetglobaladdr (which
is no longer used because globaladdr always needs to be legalized),
and some minor formatting fixes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96716 91177308-0d34-0410-b5e6-96231b3b80d8
which is not always true if the mask contains undefs. Modified it to return
the first non-undef value.
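As an illustrative sketch (not the actual LLVM code; the mask layout here, a vector of ints with -1 marking an undef lane, is an assumption), the fixed lookup amounts to:
#include <vector>

// Return the first mask element that is not undef (-1), or -1 if every
// lane is undef. The old code assumed element 0 was always defined.
static int getFirstDefinedMaskElt(const std::vector<int> &Mask) {
  for (int Elt : Mask)
    if (Elt >= 0)
      return Elt;   // first non-undef lane
  return -1;        // all lanes are undef
}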
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96621 91177308-0d34-0410-b5e6-96231b3b80d8
Moderate the weight given to very small intervals.
The spill weight given to new intervals created when spilling was not
normalized in the same way as the original spill weights calculated by
CalcSpillWeights. That meant that restored registers would tend to hang around
because they had a much higher spill weight than unspilled registers.
This improves the runtime of a few tests by up to 10%, and there are no
significant regressions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96613 91177308-0d34-0410-b5e6-96231b3b80d8
I'd like to eventually rip it out, but for now producing the
same selections as the old matcher is more important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96458 91177308-0d34-0410-b5e6-96231b3b80d8
CheckComplexPattern function. Though it is logically const,
I don't have the fortitude to clean up all the targets now,
and it not being const doesn't block anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96426 91177308-0d34-0410-b5e6-96231b3b80d8
use and only call IsProfitableToFold/IsLegalToFold on the load
being folded, like the old DAGISelEmitter does. This
substantially simplifies the code and improves opportunities for
sharing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96368 91177308-0d34-0410-b5e6-96231b3b80d8
IsLegalToFold and IsProfitableToFold. The generic version of the latter simply checks whether the folding candidate has a single use.
This allows the target isel routines more flexibility in deciding whether folding makes sense. The specific case we are interested in is folding constant pool loads with multiple uses.
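As a sketch of what the generic single-use check amounts to (the helper name and signature below are assumed for illustration, not copied from the patch):
#include "llvm/CodeGen/SelectionDAGNodes.h"

// Generic profitability test: folding a candidate with more than one use
// would duplicate the computation, so only single-use candidates are
// folded by default. Targets can override this, e.g. to permit folding a
// constant pool load that has several uses.
static bool isProfitableToFoldSketch(llvm::SDValue N) {
  return N.hasOneUse();
}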
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96255 91177308-0d34-0410-b5e6-96231b3b80d8
produce a table-based matcher instead of gobs of C++ code.
Though it's not done yet, the shrinkage seems promising:
the table for the X86 ISel is 75K and still has a lot of
optimization to come (compared to the ~1.5M of .o generated
the old way, much of which will go away).
The code is currently disabled by default (the #if 0 in
DAGISelEmitter.cpp). When enabled it generates a dead
SelectCode2 function in the DAGISel Header which will
eventually replace SelectCode.
There is still a lot of stuff left to do, which is
documented with a trail of FIXMEs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96215 91177308-0d34-0410-b5e6-96231b3b80d8
created. This ensures it's updated at all times. It means targets which perform
dynamic stack alignment would know whether it is required and whether the frame
pointer register cannot be made available for register allocation.
This is a fix for rdar://7625239. Sorry, I can't create a reasonably sized test
case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96069 91177308-0d34-0410-b5e6-96231b3b80d8
reduce down to a single value. InstCombine already does this transformation
but DAG legalization may introduce new opportunities. This has turned out to
be important for ARM where 64-bit values are split up during type legalization:
InstCombine is not able to remove the PHI cycles on the 64-bit values but
the separate 32-bit values can be optimized. I measured the compile time
impact of this (running llc on 176.gcc) and it was not significant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95951 91177308-0d34-0410-b5e6-96231b3b80d8
The major win of this is that the code is simpler and they
print on the same line as the instruction again:
movl %eax, 96(%esp) ## 4-byte Spill
movl 96(%esp), %eax ## 4-byte Reload
cmpl 92(%esp), %eax ## 4-byte Folded Reload
jl LBB7_86
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95738 91177308-0d34-0410-b5e6-96231b3b80d8
into TargetOpcodes.h. #include the new TargetOpcodes.h
into MachineInstr. Add new inline accessors (like isPHI())
to MachineInstr, and start using them throughout the
codebase.
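For illustration, the kind of cleanup this enables looks roughly like the following (the helper functions are hypothetical):
#include "llvm/CodeGen/MachineInstr.h"  // now pulls in TargetOpcodes.h

// Before: spell out the opcode comparison at every call site.
static bool isPHIOldStyle(const llvm::MachineInstr *MI) {
  return MI->getOpcode() == llvm::TargetOpcode::PHI;
}

// After: use the new inline accessor.
static bool isPHINewStyle(const llvm::MachineInstr *MI) {
  return MI->isPHI();
}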
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95687 91177308-0d34-0410-b5e6-96231b3b80d8
are from debug info. Add an iterator to MachineRegisterInfo
to skip Debug operands when walking the use list. No
functional change yet.
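A sketch of the intended usage, assuming the new iterator is exposed as use_nodbg_begin()/use_nodbg_end() (the names here are assumptions, not taken from the patch):
#include "llvm/CodeGen/MachineRegisterInfo.h"

// Count uses of Reg coming from real instructions, skipping debug-info
// operands (e.g. DBG_VALUE) via the debug-skipping use iterator.
static unsigned countNonDebugUses(llvm::MachineRegisterInfo &MRI,
                                  unsigned Reg) {
  unsigned N = 0;
  for (llvm::MachineRegisterInfo::use_nodbg_iterator
         I = MRI.use_nodbg_begin(Reg), E = MRI.use_nodbg_end(); I != E; ++I)
    ++N;
  return N;
}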
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95473 91177308-0d34-0410-b5e6-96231b3b80d8
Instruction selection for X86 now can choose an instruction
sequence that will fit any address of any symbol, no matter
the pointer width. X86-64 uses a mov+call-via-reg sequence
for this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95323 91177308-0d34-0410-b5e6-96231b3b80d8
MCContext instead of having AsmPrinter do it. This allows other
types of MCStreamers to be passed in.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95155 91177308-0d34-0410-b5e6-96231b3b80d8
"visit*" method is called, take the newly created nodes, walk them in a DFS
fashion, and if they don't have an ordering set, then give them one.
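Roughly, the walk looks like this standalone sketch (the node type and its fields are hypothetical stand-ins for SDNode):
#include <vector>

// Hypothetical node with an "ordering" slot and operand links.
struct Node {
  unsigned Order = 0;                // 0 means "no ordering assigned yet"
  std::vector<Node *> Operands;
};

// Depth-first walk from a freshly created node: any reachable node that
// has no ordering yet inherits the ordering of the visit* call that
// created it.
static void assignOrdering(Node *N, unsigned Order) {
  if (!N || N->Order != 0)
    return;                          // already ordered (or no node): stop
  N->Order = Order;
  for (Node *Op : N->Operands)
    assignOrdering(Op, Order);
}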
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94757 91177308-0d34-0410-b5e6-96231b3b80d8
This allows code gen and the exception table writer to cooperate to make sure
landing pads are associated with the correct invoke locations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94726 91177308-0d34-0410-b5e6-96231b3b80d8
runOnMachineFunction, and switch PPC to use EmitFunctionBody.
The two ppc asmprinters now don't have to define
runOnMachineFunction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94722 91177308-0d34-0410-b5e6-96231b3b80d8
Move the X86 implementation of function body emission up to
AsmPrinter::EmitFunctionBody, which works by calling the virtual
EmitInstruction method.
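Conceptually the shared driver is just a double loop over the function; this standalone sketch (types and names simplified, not the real AsmPrinter classes) shows the virtual-hook layering:
#include <vector>

struct MachineInstrSketch { /* opaque here */ };
struct MachineBasicBlockSketch { std::vector<MachineInstrSketch> Instrs; };
struct MachineFunctionSketch { std::vector<MachineBasicBlockSketch> Blocks; };

class AsmPrinterSketch {
public:
  virtual ~AsmPrinterSketch() {}
  // Shared, target-independent driver: walk every block and instruction.
  void EmitFunctionBody(const MachineFunctionSketch &MF) {
    for (const MachineBasicBlockSketch &BB : MF.Blocks)
      for (const MachineInstrSketch &MI : BB.Instrs)
        EmitInstruction(MI);          // target-specific virtual hook
  }
protected:
  virtual void EmitInstruction(const MachineInstrSketch &MI) = 0;
};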
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94716 91177308-0d34-0410-b5e6-96231b3b80d8
which allows targets to override function entry label emission.
Use it to convert linux/ppc to use EmitFunctionHeader().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94667 91177308-0d34-0410-b5e6-96231b3b80d8
which is more convenient, and change getPICJumpTableRelocBaseExpr
to take a MachineFunction to match.
Next, move the X86 code that creates a PICBase symbol to
X86TargetLowering::getPICBaseSymbol from
X86MCInstLower::GetPICBaseSymbol, which was an asmprinter specific
library. This eliminates a 'gross hack', and allows us to
implement X86ISelLowering::getPICJumpTableRelocBaseExpr which now
calls it.
This in turn allows us to eliminate the
X86AsmPrinter::printPICJumpTableSetLabel method, which was the
only overload of printPICJumpTableSetLabel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94526 91177308-0d34-0410-b5e6-96231b3b80d8
make it private and non-virtual. It handles the non-pic
case too, so just use it, simplifying EmitJumpTableInfo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94517 91177308-0d34-0410-b5e6-96231b3b80d8
MachineFunctionAnalysis dole them out, instead of having
AsmPrinter do both. Have the AsmPrinter::SetupMachineFunction
method set the 'AsmPrinter::MF' variable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94509 91177308-0d34-0410-b5e6-96231b3b80d8
1. MachineJumpTableInfo is now created lazily for a function the first time
it actually makes a jump table instead of for every function.
2. The encoding of jump table entries is now described by the
MachineJumpTableInfo::JTEntryKind enum. This enum is determined by the
TLI::getJumpTableEncoding() hook, instead of by lots of code scattered
throughout the compiler that "knows" that jump table entries are always
32-bits in pic mode (for example).
3. The size and alignment of jump table entries are now calculated based on
their kind, instead of at machinefunction creation time.
Future work includes using the EntryKind in more places in the compiler,
eliminating other logic that "knows" the layout of jump tables in various
situations.
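As a sketch of the shape of the new interface (the enumerator names and the size logic below are illustrative placeholders, not necessarily what is in the tree):
// The jump table info records *which kind* of entry a table uses; the
// target lowering hook (TLI::getJumpTableEncoding) picks the kind once,
// instead of "32 bits in PIC mode" logic scattered around the compiler.
class MachineJumpTableInfoSketch {
public:
  enum JTEntryKind {            // placeholder enumerators
    EK_BlockAddress,            // plain pointer-sized block address
    EK_LabelDifference32        // 32-bit label difference, e.g. for PIC
  };
  explicit MachineJumpTableInfoSketch(JTEntryKind K) : Kind(K) {}
  // Entry size (and alignment) now derive from the kind rather than being
  // fixed at MachineFunction creation time.
  unsigned getEntrySize() const {
    return Kind == EK_BlockAddress ? sizeof(void *) : 4;
  }
private:
  JTEntryKind Kind;
};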
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94470 91177308-0d34-0410-b5e6-96231b3b80d8
the '-pre-RA-sched' flag. It actually makes more sense to do it this way. Also,
keep track of the SDNode ordering by default. Eventually, we would like to make
this ordering a way to break a "tie" in the scheduler. However, doing that now
breaks the "CodeGen/X86/abi-isel.ll" test for 32-bit Linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94308 91177308-0d34-0410-b5e6-96231b3b80d8
to MCExpr then emit them through MCStreamer with EmitValue. I think all
global variable initializers are now going through MCStreamer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94293 91177308-0d34-0410-b5e6-96231b3b80d8
of int initializers), change some methods to be static functions, and
use raw_ostream::write_hex instead of a SmallString dance with
APValue::toStringUnsigned(S, 16).
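For reference, the streaming form is a one-liner (a minimal usage example, not the patched code):
#include "llvm/Support/raw_ostream.h"

int main() {
  unsigned long long Value = 0xDEADBEEFULL;
  // Write the value in hex straight into the stream, with no temporary
  // SmallString/toString round trip.
  llvm::outs().write_hex(Value);
  llvm::outs() << "\n";
  return 0;
}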
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93991 91177308-0d34-0410-b5e6-96231b3b80d8
function can support dynamic stack realignment. That's a much easier question
to answer at the instruction selection stage than whether the function actually
will have dynamic alignment prologue. This allows the removal of the
stack alignment heuristic pass, and improves code quality for cases where
the heuristic would result in dynamic alignment code being generated when
it was not strictly necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93885 91177308-0d34-0410-b5e6-96231b3b80d8
doing global variable classification anymore) and hookized, sink almost
all of the targets' global variable emission code into AsmPrinter and out
of each target.
Some notes:
1. PIC16 does completely custom and crazy stuff, so it is not changed.
2. XCore has some custom handling for extra directives. I'll look at it next.
3. This switches linux/ppc to use .globl instead of .global. If .globl is
actually wrong, let me know and I'll fix it.
4. This makes linux/ppc get a lot of random cases right which were obviously
wrong before; it is probably now a bit healthier.
5. Blackfin will probably start getting .comm and other things that it didn't
before. If this is undesirable, it should explicitly opt out of these
things by clearing the relevant fields of MCAsmInfo.
This leads to a nice diffstat:
14 files changed, 127 insertions(+), 830 deletions(-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93858 91177308-0d34-0410-b5e6-96231b3b80d8
This makes similar code dead in all the other targets; I'll clean it up
in a bit.
This also moves handling of lcomm up before acquisition of a section,
since lcomm never needs a section.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93851 91177308-0d34-0410-b5e6-96231b3b80d8
as it emits code. Switch .globl directives to use OutStreamer instead of
doing it textually (in x86)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93700 91177308-0d34-0410-b5e6-96231b3b80d8
and add an explicit ForcePrivate argument.
Switch FunctionEHFrameInfo to be MCSymbol based instead of string based.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93646 91177308-0d34-0410-b5e6-96231b3b80d8
replace it. Upgrade Alpha, Blackfin, and part of CellSPU to not
use mangler anymore. CellSPU needs more invasive surgery.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93589 91177308-0d34-0410-b5e6-96231b3b80d8
print/dumpWithDepth allows one to dump a DAG up to N levels deep.
dump/printWithFullDepth prints the whole DAG, subject to a depth limit
of 100 in the default case (to prevent infinite recursion).
Have CannotYetSelect do a dumpWithFullDepth so it is clearer exactly
what the non-matching DAG looks like.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93538 91177308-0d34-0410-b5e6-96231b3b80d8
Remove most of the old Mach-O Writer support; it has been replaced by MCMachOStreamer.
Further refactoring to completely remove MachOWriter and drive the object file
writer with the AsmPrinter MCInst/MCSection logic is forthcoming.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93527 91177308-0d34-0410-b5e6-96231b3b80d8
For now, this pass is fairly conservative. It only performs the replacement when both the pre- and post-extension values are used in the block. It will miss cases where the post-extension values are live, but not used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93278 91177308-0d34-0410-b5e6-96231b3b80d8
(OP (trunc x), (trunc y)) -> (trunc (OP x, y))
Unfortunately this simple change causes dag combine to loop infinitely. The problem is that the shrink-demanded-ops optimization tends to canonicalize expressions in the opposite manner. That is badness. This patch disables those optimizations in dag combine; instead, the transformation is done as a late pass in sdisel.
This also exposes some deficiencies in dag combine and x86 setcc / brcond lowering. Teach them to look past ISD::TRUNCATE in various places.
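In source terms the reassociation is the familiar fact that truncation distributes over such ops; a hypothetical C++ example (the combine itself runs on the DAG, not on source):
#include <cstdint>

// (add (trunc x), (trunc y)) == (trunc (add x, y)) in modular arithmetic,
// so doing the add in the wide type first lets the truncates fold away.
uint32_t sum_lo(uint64_t x, uint64_t y) {
  uint32_t a = static_cast<uint32_t>(x);   // (trunc x)
  uint32_t b = static_cast<uint32_t>(y);   // (trunc y)
  return a + b;                            // selectable as (trunc (add x, y))
}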
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92849 91177308-0d34-0410-b5e6-96231b3b80d8
An instruction like this:
%reg1097:1<def> = VMOVSR %R3<kill>, 14, %reg0
must be replaced with this when substituting physical registers:
%S0<def> = VMOVSR %R3<kill>, 14, %reg0, %D0<imp-def>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92812 91177308-0d34-0410-b5e6-96231b3b80d8
clear what information these functions are actually using.
This is also a micro-optimization, as passing a SDNode * around is
simpler than passing a { SDNode *, int } by value or reference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92564 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes an in-place update bug where code inserted at the end of basic blocks may not be covered by existing intervals which were live across the entire block. It is also consistent with the way ranges are specified for live intervals.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91859 91177308-0d34-0410-b5e6-96231b3b80d8
- Move DisableScheduling flag into TargetOption.h
- Move SDNodeOrdering into its own header file. Give it a minimal interface that
doesn't conflate construction with storage.
- Move assigning the ordering into the SelectionDAGBuilder.
This isn't used yet, so there should be no functional changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91727 91177308-0d34-0410-b5e6-96231b3b80d8
LegalizeDAG.cpp. Unlike the code it replaces, which simply decrements the simple
type by one, getHalfSizedIntegerVT() searches for the smallest simple integer
type that is at least half the size of the type it is called on. This approach
has the advantage that it will continue working if a new value type (such as
i24) is added to MVT.
Also, in preparation for new value types, remove the assertions that
non-power-of-2 8-bit-multiple types are Extended when legalizing extload and
truncstore operations.
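A standalone sketch of the search strategy, using plain bit widths in place of the real MVT machinery (the names and the width table are illustrative):
#include <vector>

// Find the smallest "simple" integer width that is at least half of Bits.
// Unlike "decrement the enum value by one", this keeps working if a new
// type such as i24 is ever added to the table.
static unsigned getHalfSizedWidth(unsigned Bits,
                                  const std::vector<unsigned> &SimpleWidths) {
  // SimpleWidths is assumed sorted ascending, e.g. {1, 8, 16, 32, 64}.
  for (unsigned W : SimpleWidths)
    if (W * 2 >= Bits)
      return W;
  return 0;  // no suitable type
}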
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91614 91177308-0d34-0410-b5e6-96231b3b80d8
remove start/finishGVStub and the BufferState helper class from the
MachineCodeEmitter interface. It has the side-effect of not setting the
indirect global writable and then executable on ARM, but that shouldn't be
necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91464 91177308-0d34-0410-b5e6-96231b3b80d8
isPodLike type trait. This is a generally useful type trait for
more than just DenseMap, and we really care about whether something
acts like a pod, not whether it really is a pod.
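A sketch of what such a trait looks like; the default shown and the example specialization are illustrative, not the exact definition that landed:
// The trait asks "can this type be memcpy'd and left undestructed?",
// which is what containers such as DenseMap actually care about, rather
// than strict C++ POD-ness.
template <typename T>
struct isPodLikeSketch {
  static const bool value = false;   // conservative placeholder default
};

// A type that is safe to treat as a pod opts in explicitly:
struct FileOffset { unsigned FileID; unsigned Offset; };
template <> struct isPodLikeSketch<FileOffset> {
  static const bool value = true;
};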
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91421 91177308-0d34-0410-b5e6-96231b3b80d8
stuff isn't used just yet.
We want to model the GCC `-fno-schedule-insns' and `-fno-schedule-insns2'
flags. The hypothesis is that the people who use these flags know what they are
doing, and have hand-optimized the C code to reduce latencies and other
conflicts.
The idea behind our scheme to turn off scheduling is to create a map "on the
side" during DAG generation. It will order the nodes by how they appeared in the
code. This map is then used during scheduling to get the ordering.
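A minimal sketch of such a side map (container choice and names are assumptions, not the patch's):
#include <unordered_map>

// Record, off to the side, the order in which DAG nodes were created so
// the scheduler can later replay the original source order.
template <typename NodeT>
class NodeOrderingSketch {
  std::unordered_map<const NodeT *, unsigned> Order;
  unsigned NextIdx = 0;

public:
  void noteCreated(const NodeT *N) {
    // Only the first creation point counts; later calls are no-ops.
    if (Order.insert({N, NextIdx}).second)
      ++NextIdx;
  }
  unsigned getOrder(const NodeT *N) const {
    auto It = Order.find(N);
    return It == Order.end() ? ~0u : It->second;
  }
};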
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91392 91177308-0d34-0410-b5e6-96231b3b80d8
- Loosen the restrictions when checking whether it branches to a landing pad.
- Make the loop more efficient by checking the '.insert' return value.
- Do cheaper checks first.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91101 91177308-0d34-0410-b5e6-96231b3b80d8
more than one successor. Normally, these extra successors are dead. However,
some of them may branch to exception handling landing pads. If we remove those
successors, then the landing pads could go away if all predecessors to them are
removed. Before, it was checking if the direct successor was the landing
pad. But it could be the result of jumping through multiple basic blocks to get
to it. If we were to only check for the existence of an EH_LABEL in the basic
block and not remove successors when one is present, then that could prevent
actually dead basic blocks from being removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91092 91177308-0d34-0410-b5e6-96231b3b80d8