on x86, to avoid explicit test instructions. A few existing tests changed
due to arbitrary register allocation differences.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82263 91177308-0d34-0410-b5e6-96231b3b80d8
the Intel instruction tables.
The patterns will stay blank because ADD reg, reg
is faster, but having the encoding available is
useful for the disassembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81994 91177308-0d34-0410-b5e6-96231b3b80d8
Intel tables, where the source operand is
specified by the R/M field and the destination
operand by the Reg field.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81914 91177308-0d34-0410-b5e6-96231b3b80d8
to the Intel register table.
Added 16- and 64-bit MOVs to and from the segment
registers to the Intel instruction tables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81895 91177308-0d34-0410-b5e6-96231b3b80d8
disabling the use of 16-bit operations on x86. This doesn't yet work for
inline asms with 16-bit constraints, vectors with 16-bit elements,
trampoline code, and perhaps other obscurities, but it's enough to try
some experiments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80930 91177308-0d34-0410-b5e6-96231b3b80d8
instruction tables to support segmented addressing (and other objects
of obscure type).
Modified the X86 assembly printers to handle these new operand types.
Added JMP and CALL instructions that use segmented addresses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80857 91177308-0d34-0410-b5e6-96231b3b80d8
leads to partial-register definitions. To help avoid redundant
zero-extensions, also teach the h-register matching patterns that
use movzbl to match anyext as well as zext.
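For illustration only (this snippet is not part of the change), the kind of
source pattern the h-register rules cover is a bits 8-15 extract, e.g. in C++:

  // Extract bits 8-15 of a 32-bit value. On x86 this can be selected through
  // an h-register (roughly "movzbl %ah, %eax"); whether the 8-bit value is
  // then zero-extended (zext) or extended with don't-care bits (anyext), the
  // same zero-extending move works, avoiding a redundant extension.
  unsigned high_byte(unsigned x) { return (x >> 8) & 0xff; }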
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80099 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of awkwardly encoding calling-convention information with ISD::CALL,
ISD::FORMAL_ARGUMENTS, ISD::RET, and ISD::ARG_FLAGS nodes, TargetLowering
provides three virtual functions for targets to override:
LowerFormalArguments, LowerCall, and LowerReturn, which replace the custom
lowering done on the special nodes. They provide the same information, but
in a more immediately usable format.
This also reworks much of the target-independent tail call logic. The
decision of whether or not to perform a tail call is now cleanly split
between target-independent portions and the target-dependent portion
in IsEligibleForTailCallOptimization.
This also synchronizes all in-tree targets, to help enable future
refactoring and feature work.
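As a rough, hypothetical sketch of the shape of this interface (simplified
placeholder types, not the actual LLVM signatures), each target now overrides
per-convention hooks instead of custom-lowering the special nodes:

  // Sketch only; the real hooks operate on SelectionDAG types (argument flag
  // lists, a SelectionDAG builder, calling-convention IDs, and so on).
  struct LoweringState { /* chain, argument values, flags, calling conv, ... */ };

  class SomeTargetLowering /* stand-in for a target's TargetLowering subclass */ {
  public:
    virtual ~SomeTargetLowering() = default;
    // Lower the incoming formal arguments of a function.
    virtual void LowerFormalArguments(LoweringState &State) = 0;
    // Lower an outgoing call, including tail calls once the target-independent
    // code and IsEligibleForTailCallOptimization have agreed it is safe.
    virtual void LowerCall(LoweringState &State) = 0;
    // Lower a return from the current function.
    virtual void LowerReturn(LoweringState &State) = 0;
  };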
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78142 91177308-0d34-0410-b5e6-96231b3b80d8
When the return value is not used (i.e. we only care about the value in memory), x86 does not have to use an instruction such as xadd, which also produces the old value, to implement these. Instead, it can use plain add, sub, inc, and dec instructions with the "lock" prefix.
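For example (illustrative C++ using the GCC/Clang __sync builtins; not part of
the patch), only the variant whose result is used needs the old value back:

  // Result unused: a plain lock'd RMW such as "lock addl $1, (%rdi)" or
  // "lock incl (%rdi)" is sufficient.
  void bump(int *p) { __sync_fetch_and_add(p, 1); }

  // Result used: x86 has to produce the old value, e.g. via "lock xaddl".
  int bump_and_read(int *p) { return __sync_fetch_and_add(p, 1); }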
This is currently implemented using a bit of an instruction selection trick. The first problem is that the target-independent pattern produces one output plus a chain, and we want to map it onto one that just outputs a chain. The current trick is to select it into a merge_values with the first definition being an implicit_def. The proper solution is to add new ISD opcodes for the no-output variant; the DAG combiner can then transform the node before it gets to target node selection.
Problem #2 is that we are adding a whole bunch of x86 atomic instructions when in fact these instructions are identical to the non-lock versions. We need a way to add target-specific information to target nodes and have this information carried over to machine instructions. The asm printer (or JIT) can then use this information to add the "lock" prefix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77582 91177308-0d34-0410-b5e6-96231b3b80d8
of lea. It is better for code size (and presumably efficiency) to use:
movl $foo, %eax
rather than:
leal foo, %eax
Both give a nice zero-extending "move immediate" instruction; the former is just
smaller. Note that global addresses should be handled differently by the x86
backend, but I chose to follow the style already in place and add more FIXMEs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@75403 91177308-0d34-0410-b5e6-96231b3b80d8
implementation primarily differs from the former in that the asmprinter
doesn't make a zillion decisions about whether or not something will be
RIP relative or not. Instead, those decisions are made by isel lowering
and propagated through to the asm printer. To achieve this, we:
1. Represent RIP relative addresses by setting the base of the X86 addr
mode to X86::RIP.
2. When ISel Lowering decides that it is safe to use RIP, it lowers to
X86ISD::WrapperRIP. When it is unsafe to use RIP, it lowers to
X86ISD::Wrapper as before.
3. This removes isRIPRel from X86ISelAddressMode, representing it with
a basereg of RIP instead.
4. The addressing mode matching logic in isel is greatly simplified.
5. The asmprinter is greatly simplified, notably the "NotRIPRel" predicate
passed through various printoperand routines is gone now.
6. The various symbol printing routines in asmprinter now no longer infer
when to emit (%rip), they just print the symbol.
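As a purely illustrative example (not from the patch), the wrapper choice
controls whether a global ends up being addressed PC-relative or absolutely:

  // Illustration in C++; the expected x86-64 code is shown in comments.
  // With X86ISD::WrapperRIP the load is RIP-relative:
  //   movq  global_var(%rip), %rax
  // With plain X86ISD::Wrapper it references an absolute address:
  //   movq  global_var, %rax
  extern long global_var;
  long read_global() { return global_var; }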
I think this is a big improvement over the previous situation. It does have
two small caveats though: 1. I implemented a horrible "no-rip" modifier for
the inline asm "P" constraint modifier. This is a short term hack, there is
a much better, but more involved, solution. 2. I had to xfail an
-aggressive-remat testcase because it isn't handling the use of RIP in the
constant-pool reading instruction. This specific test is easy to fix without
-aggressive-remat, which I intend to do next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74372 91177308-0d34-0410-b5e6-96231b3b80d8
a global that gets printed with the :mem modifier. All operands to lea's
should be handled with the lea32mem operand kind, and this allows the TLS stuff
to do this. There are several better ways to do this, but I went for the minimal
change since I can't really test this (beyond make check).
This also makes the use of EBX explicit in the operand list in the 32-bit form,
instead of implicit in the instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73834 91177308-0d34-0410-b5e6-96231b3b80d8
LEA64_32r, eliminating a bunch of modifier logic stuff on addr modes.
Implement support for printing mbb labels as operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73817 91177308-0d34-0410-b5e6-96231b3b80d8
that push immediate operands of 1, 2, and 4 bytes (extended to the native
register size in each case). The assembly mnemonics are "pushl" and "pushq."
One such instruction appears at the beginning of the "start" function, so this
is essential for accurate disassembly when unwinding.
Patch by Sean Callanan!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73407 91177308-0d34-0410-b5e6-96231b3b80d8
relocation model on x86-64. Higher level logic should override
the relocation model to PIC on x86_64-apple-darwin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72746 91177308-0d34-0410-b5e6-96231b3b80d8