- getToken is modeled after StringRef::split but it can split on multiple
separator chars and skips leading separators.
- SplitString is a StringRef::split variant for more than 2 elements with the
same behaviour as getToken.
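A rough usage sketch of the two helpers, assuming the declarations in llvm/ADT/StringExtras.h (the delimiter string passed below is only an example; the header's defaults cover the usual whitespace characters):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/ADT/StringExtras.h"
  #include "llvm/ADT/StringRef.h"
  using namespace llvm;

  void tokenizeExample(StringRef Line) {
    // getToken skips any leading delimiter characters, then returns the first
    // token and the remaining tail, so repeated calls walk the whole string.
    std::pair<StringRef, StringRef> Tok = getToken(Line, " \t");
    StringRef First = Tok.first;   // empty only if Line held nothing but delimiters
    StringRef Rest  = Tok.second;  // everything after the first token

    // SplitString does the loop for you, appending every token to the vector.
    SmallVector<StringRef, 8> Parts;
    SplitString(Rest, Parts, " \t");
    (void)First;
  }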
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93161 91177308-0d34-0410-b5e6-96231b3b80d8
R11, and then asserting that the target was in R9. Since R9 isn't reserved for
the target anymore, and is used as an argument, this patch changes the
assertion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93065 91177308-0d34-0410-b5e6-96231b3b80d8
(OP (trunc x), (trunc y)) -> (trunc (OP x, y))
Unfortunately, this simple change causes dag combine to loop infinitely. The problem is that the shrink-demanded-ops optimizations tend to canonicalize expressions in the opposite direction. That is badness. This patch disables those optimizations in dag combine and instead performs them as a late pass in sdisel.
This also exposes some deficiencies in dag combine and x86 setcc / brcond lowering. Teach them to look past ISD::TRUNCATE in various places.
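A minimal sketch of the shape of the transform, written against the present-day SelectionDAG API purely for illustration (names, overloads, and the legality checks of the real combine differ):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  static SDValue combineTruncatedOperands(SDNode *N, SelectionDAG &DAG) {
    SDValue LHS = N->getOperand(0), RHS = N->getOperand(1);
    // (OP (trunc x), (trunc y)) -> (trunc (OP x, y)) when both truncates come
    // from the same wider type, so OP can be performed once in that type.
    if (LHS.getOpcode() == ISD::TRUNCATE && RHS.getOpcode() == ISD::TRUNCATE &&
        LHS.getOperand(0).getValueType() == RHS.getOperand(0).getValueType()) {
      EVT WideVT = LHS.getOperand(0).getValueType();
      SDValue Wide = DAG.getNode(N->getOpcode(), SDLoc(N), WideVT,
                                 LHS.getOperand(0), RHS.getOperand(0));
      return DAG.getNode(ISD::TRUNCATE, SDLoc(N), N->getValueType(0), Wide);
    }
    return SDValue();
  }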
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92849 91177308-0d34-0410-b5e6-96231b3b80d8
(or (x << c) | (y >> (64 - c))) ==> (shld64 x, y, c)
The isel patterns may not catch all the cases if general dag combine has reduced width of source operands.
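For reference, the source-level idiom that should end up as a single double-shift is the classic funnel pattern; a hypothetical standalone function, not code from the patch:

  #include <cstdint>

  // (x << c) | (y >> (64 - c)) shifts x left while pulling the top bits of y
  // in from the right; x86-64 can do this in one shld instruction.
  uint64_t shiftLeftDouble(uint64_t x, uint64_t y, unsigned c) {
    // Assumes 0 < c < 64; c == 0 would make the right shift by 64 undefined.
    return (x << c) | (y >> (64 - c));
  }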
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92513 91177308-0d34-0410-b5e6-96231b3b80d8
return partial registers. This affected the back-end lowering code some.
Also patch up some places I missed before in the "get" functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91880 91177308-0d34-0410-b5e6-96231b3b80d8
incrementing the simple value type of the 16-bit type, which would give the
wrong type if an intermediate MVT (such as i24) were introduced.
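A hedged illustration of the hazard using the current MVT/EVT helpers (only a sketch; the patched code and its surrounding context are not shown in this message):

  #include "llvm/CodeGen/ValueTypes.h"
  using namespace llvm;

  EVT getTwiceAsWideIntegerType(LLVMContext &Ctx, EVT VT) {
    // Fragile: assumes the enum entry after i16 is i32, which stops being true
    // the moment an intermediate type such as i24 is added to the enumeration.
    //   MVT Next((MVT::SimpleValueType)(VT.getSimpleVT().SimpleTy + 1));
    // Robust: ask for the type by its bit width instead.
    return EVT::getIntegerVT(Ctx, 2 * VT.getScalarSizeInBits());
  }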
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91602 91177308-0d34-0410-b5e6-96231b3b80d8
divide/remainder, since these operations can trap, by unrolling them and adding undefs
for the resulting vector.
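A hedged sketch of one way to express the scalarization, using SelectionDAG::UnrollVectorOp; this is only an approximation of the approach described above, and the undef padding of the widened result is not shown:

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Hypothetical helper: a widened vector divide would gain undef lanes, and
  // an undef divisor lane may be zero, which can trap at run time, so perform
  // the operation lane by lane on the original elements instead.
  static SDValue scalarizeTrappingVectorOp(SDValue Op, SelectionDAG &DAG) {
    switch (Op.getOpcode()) {
    case ISD::SDIV: case ISD::UDIV:
    case ISD::SREM: case ISD::UREM:
      return DAG.UnrollVectorOp(Op.getNode());
    default:
      return SDValue(); // not a trapping operation; leave it alone
    }
  }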
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90108 91177308-0d34-0410-b5e6-96231b3b80d8
Note that "hasDotLocAndDotFile"-style debug info was already broken;
people wanting this functionality should implement it in the
AsmPrinter/DwarfWriter code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89711 91177308-0d34-0410-b5e6-96231b3b80d8
The large code model is documented at
http://www.x86-64.org/documentation/abi.pdf and says that calls should
assume their target doesn't live within the 32-bit pc-relative offset
that fits in the call instruction.
To do this, we turn off the global-address->target-global-address
conversion in X86TargetLowering::LowerCall(). The first attempt at
this broke the lazy JIT because it can separate the movabs(imm->reg)
from the actual call instruction. The lazy JIT receives the address of
the movabs as a relocation and needs to record the return address from
the call; and then when that call happens, it needs to patch the
movabs with the newly-compiled target. We could thread the call
instruction into the relocation and record the movabs<->call mapping
explicitly, but that seems to require at least as much new
complication in the code generator as this change.
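A hedged sketch of the LowerCall change described above, paraphrased from this description rather than copied from the patch (the real code has additional conditions; Callee and dl are the locals LowerCall already has in scope):

  // Inside X86TargetLowering::LowerCall (sketch): only fold a GlobalAddress
  // into the call when the code model guarantees the callee fits in a 32-bit
  // pc-relative displacement; under the large code model, leave the address
  // generic so it is materialized with movabs and called through a register.
  if (GlobalAddressSDNode *G = dyn_cast<GlobalAddressSDNode>(Callee))
    if (getTargetMachine().getCodeModel() != CodeModel::Large)
      Callee = DAG.getTargetGlobalAddress(G->getGlobal(), dl,
                                          getPointerTy(DAG.getDataLayout()));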
To fix this, we make lazy functions _always_ go through a call
stub. You'd think we'd only have to force lazy calls through a stub on
difficult platforms, but that turns out to break indirect calls
through a function pointer. The right fix for that is to distinguish
between calls and address-of operations on uncompiled functions, but
that's complex enough to leave for someone else to do.
Another attempt at this defined a new CALL64i pseudo-instruction,
which expanded to a 2-instruction sequence in the assembly output and
was special-cased in the X86CodeEmitter's emitInstruction()
function. That broke indirect calls in the same way as above.
This patch also removes a hack forcing Darwin to the small code model.
Without far-call-stubs, the small code model requires things of the
JITMemoryManager that the DefaultJITMemoryManager can't provide.
Thanks to echristo for lots of testing!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88984 91177308-0d34-0410-b5e6-96231b3b80d8
slots. The AsmPrinter will use this information to determine whether to
print a spill/reload comment.
Remove default argument values. It's too easy to pass a wrong argument
value when multiple arguments have default values. Make everything
explicit to trap bugs early.
Update all targets to adhere to the new interfaces.
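A small illustration of the hazard with a hypothetical signature (not one of the interfaces touched by this patch):

  // With several defaulted parameters of compatible types, a caller can
  // silently bind a value to the wrong parameter:
  void storeRegToSlot(unsigned Reg, int FrameIndex,
                      bool isKill = false, bool isUndef = false);

  // The author meant isUndef = true, but the literal lands on isKill:
  //   storeRegToSlot(Reg, FI, true);
  // With no defaults, every argument must be spelled out and the slip is
  // visible at the call site:
  //   storeRegToSlot(Reg, FI, /*isKill=*/false, /*isUndef=*/true);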
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87022 91177308-0d34-0410-b5e6-96231b3b80d8
1. rename the movhp patfrag to movlhps, since that's what it actually matches
2. eliminate the bogus movhps load and store patterns; they were incorrect. The load transforms are already handled (correctly) by shufps/unpack.
3. revert a recent test change to its correct form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86415 91177308-0d34-0410-b5e6-96231b3b80d8
encounters an OEQ or UNE comparison, and update its callers to check
for this return status and recover. This fixes a problem resulting from
the LowerOperation hooks being called from LegalizeVectorOps, because
LegalizeVectorOps only lowers vectors, so OEQ and UNE comparisons may
still be at large. This fixes PR5092.
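A sketch of the caller-side recovery this describes (shape only; the actual call sites and checks are in the patch, not reproduced here):

  // If the target's custom lowering declines (returns a null SDValue) because
  // the comparison is OEQ or UNE, keep the original node so the normal
  // legalization path can expand it later instead of asserting.
  SDValue Lowered = TLI.LowerOperation(SDValue(Node, 0), DAG);
  if (!Lowered.getNode())
    Lowered = SDValue(Node, 0); // recover: fall back to the unlowered node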
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84640 91177308-0d34-0410-b5e6-96231b3b80d8
stack slots and giving them different PseudoSourceValues did not fix the
problem of post-alloc scheduling miscompiling llvm itself.
- Apply Dan's conservative workaround by assuming any non-fixed stack slot can
alias other memory locations. This means a load from spill slot #1 cannot
move above a store of spill slot #2.
- Enable post-alloc scheduling for x86 at optimization level Default and above.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84424 91177308-0d34-0410-b5e6-96231b3b80d8
it to hold the address of an sret return value, for x86-64 ABI purposes.
Also, fix the test that was originally intended to test this to actually
test it, using FileCheck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83853 91177308-0d34-0410-b5e6-96231b3b80d8
- Allocate MachineMemOperands and MachineMemOperand lists in MachineFunctions.
This eliminates MachineInstr's std::list member and allows the data to be
created by isel and live for the remainder of codegen, avoiding a lot of
copying and unnecessary translation. This also shrinks MemSDNode.
- Delete MemOperandSDNode. Introduce MachineSDNode which has dedicated
fields for MachineMemOperands.
- Change MemSDNode to have a MachineMemOperand member instead of its own
fields with the same information. This introduces some redundancy, but
it's more consistent with what MachineInstr will eventually want.
- Ignore alignment when searching for redundant loads for CSE, but remember
the greatest alignment.
Target-specific code which previously used MemOperandSDNodes with generic
SDNodes now use MemIntrinsicSDNodes, with opcodes in a designated range
so that the SelectionDAG framework knows that MachineMemOperand information
is available.
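A hedged sketch of the allocation pattern this enables, written with the present-day MachineFunction API (the spelling of the arguments has changed since this commit; MF and FrameIndex are assumed to be in scope):

  // The MachineMemOperand lives in the MachineFunction's allocator, so the
  // object created at isel time can stay attached to MachineInstrs for the
  // rest of codegen without copying.
  MachineMemOperand *MMO = MF.getMachineMemOperand(
      MachinePointerInfo::getFixedStack(MF, FrameIndex),
      MachineMemOperand::MOLoad, /*Size=*/4, Align(4));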
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82794 91177308-0d34-0410-b5e6-96231b3b80d8
And fix a bug with the behavior of min/max instructions formed from
fcmp uge comparisons.
Also, use FiniteOnlyFPMath() for this code instead of UnsafeFPMath,
as it is more specific.
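A minimal example, hypothetical and not from the patch, of why "uge" needs the finite-only check: an unordered comparison is true when an operand is NaN, so the select keeps the NaN, and replacing the pattern with a single min/max instruction is only safe when NaNs are known not to occur.

  // !(x < y) is the unordered-or-greater-or-equal (uge) test: it is true
  // whenever x is NaN, so this "max" returns the NaN rather than y.
  double selectMaxUGE(double x, double y) {
    return !(x < y) ? x : y;
  }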
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82466 91177308-0d34-0410-b5e6-96231b3b80d8