No test case, unfortunately, as I couldn't find a target that fit all
the conditions needed to hit this code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163075 91177308-0d34-0410-b5e6-96231b3b80d8
NEON domain conversion was too heavy-handed with its widened
registers, which could have stripped existing instructions of their
dependencies, leaving them vulnerable to scheduling errors.
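For illustration, a minimal sketch of the idea (the pass context, the use of
MachineInstrBuilder, and the OrigDefReg/OrigSrcReg names are assumptions, not
the code in the tree): when a narrow D-register move is rewritten as a wider
Q-register instruction, the original D registers can be kept as implicit
operands so the old dependency is not stripped:

    // Hypothetical sketch: MI has already been rewritten to operate on the
    // wide Q register; re-attach the original D registers as implicit
    // operands so later passes still see the dependency on them.
    #include "llvm/CodeGen/MachineInstrBuilder.h"
    using namespace llvm;

    static void keepNarrowDeps(MachineFunction &MF, MachineInstr &MI,
                               unsigned OrigDefReg, unsigned OrigSrcReg) {
      MachineInstrBuilder MIB(MF, &MI);
      MIB.addReg(OrigDefReg, RegState::ImplicitDefine);
      MIB.addReg(OrigSrcReg, RegState::Implicit);
    }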
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163070 91177308-0d34-0410-b5e6-96231b3b80d8
output chain is correctly set up.
As an example, if the original load must happen before later stores, we need
to make sure the constructed VZEXT_LOAD is constrained to be before the stores.
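As a rough sketch of the chain wiring (illustrative only: 'OrigLoad', 'VT',
'DL' and friends are placeholders, and the SelectionDAG calls reflect my
reading of the API), the new node carries its own chain result, and the
original load's chain uses are redirected to it so stores that had to follow
the load still do:

    // Sketch: give the folded load a chain result, then splice that chain in
    // for the original load's chain.
    SDValue Ops[] = { OrigLoad->getChain(), OrigLoad->getBasePtr() };
    SDVTList Tys = DAG.getVTList(VT, MVT::Other);
    SDValue VZLoad = DAG.getMemIntrinsicNode(X86ISD::VZEXT_LOAD, DL, Tys, Ops,
                                             OrigLoad->getMemoryVT(),
                                             OrigLoad->getMemOperand());
    // Anything ordered after the original load (e.g. later stores) is now
    // ordered after the new node instead.
    DAG.ReplaceAllUsesOfValueWith(SDValue(OrigLoad, 1), VZLoad.getValue(1));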
rdar://11457792
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163036 91177308-0d34-0410-b5e6-96231b3b80d8
Manage tied operands entirely internally to MachineInstr. This makes it
possible to change the representation of tied operands, as I will do
shortly.
The constraint that tied uses and defs must be in the same order was too
restrictive.
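As a rough illustration of the interface (a sketch; DefIdx/UseIdx are
placeholders), clients tie a use to its def by operand index and query the
pairing through MachineInstr instead of assuming any particular order:

    // Sketch: tie operand UseIdx to operand DefIdx, then recover the pairing
    // later without relying on how tied operands are represented.
    MI.tieOperands(DefIdx, UseIdx);
    if (MI.getOperand(UseIdx).isTied()) {
      unsigned PairedDef = MI.findTiedOperandIdx(UseIdx);
      assert(PairedDef == DefIdx && "tied pairing should round-trip");
      (void)PairedDef;
    }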
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163021 91177308-0d34-0410-b5e6-96231b3b80d8
- In addition to the undefined case, if V2 is a zero vector, skip the 2nd PSHUFB
and POR as well, since PSHUFB will zero elements with negative indices.
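A small standalone illustration of the PSHUFB property being relied on here
(plain SSSE3 intrinsics, not the lowering code itself): any control byte with
its high bit set produces a zero element, so when V2 is all zeros the second
PSHUFB and the POR contribute nothing:

    // Standalone demo of the PSHUFB zeroing rule (SSSE3).
    #include <tmmintrin.h>
    #include <stdio.h>

    int main() {
      __m128i src  = _mm_set1_epi8(0x7f);
      // Bytes 0..7 select from src; 0x80 (high bit set) forces a zero byte.
      __m128i mask = _mm_setr_epi8(0, 1, 2, 3, 4, 5, 6, 7,
                                   (char)0x80, (char)0x80, (char)0x80, (char)0x80,
                                   (char)0x80, (char)0x80, (char)0x80, (char)0x80);
      unsigned char out[16];
      _mm_storeu_si128((__m128i *)out, _mm_shuffle_epi8(src, mask));
      for (int i = 0; i < 16; ++i)
        printf("%02x ", out[i]);   // 7f for the low 8 bytes, 00 for the rest
      printf("\n");
      return 0;
    }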
Patch by Sriram Murali <sriram.murali@intel.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163018 91177308-0d34-0410-b5e6-96231b3b80d8
on the size of the extraction and its position in the 64-bit word.
This patch adds support for the dext transformations with mips64 direct
object output.
DINS
The field is entirely contained in the right-most word of the doubleword
0 <= msb < 32, 0 <= lsb < 32, 0 <= pos < 32, 1 <= size <= 32

DINSM
The field straddles the words of the doubleword
32 <= msb < 64, 0 <= lsb < 32, 0 <= pos < 32, 2 <= size <= 64

DINSU
The field is entirely contained in the left-most word of the doubleword
32 <= msb < 64, 32 <= lsb < 64, 32 <= pos < 64, 1 <= size <= 32
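A small sketch of the selection rule the table implies (a hypothetical helper,
not the code in the MIPS backend), using msb = pos + size - 1 for an insert of
'size' bits at bit position 'pos':

    // Hypothetical helper mirroring the table above: choose the DINS form
    // from the insert position and field size.
    enum DinsVariant { DINS, DINSM, DINSU };

    static DinsVariant chooseDins(unsigned pos, unsigned size) {
      unsigned msb = pos + size - 1;
      if (pos >= 32)
        return DINSU;   // field entirely in the left-most (upper) word
      if (msb >= 32)
        return DINSM;   // field straddles the two words
      return DINS;      // field entirely in the right-most (lower) word
    }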
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163010 91177308-0d34-0410-b5e6-96231b3b80d8
I was too optimistic, inline asm can have tied operands that don't
follow the def order.
Fixes PR13742.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162998 91177308-0d34-0410-b5e6-96231b3b80d8
- Overloading operator<< for raw_ostream and pointers is dangerous: it alters
the behavior of code that includes the header.
- Remove unused ID.
- Use LLVM's byte swapping helpers instead of a hand-coded one (see the sketch
below).
- Make ReadProfilingData work directly on a pointer.
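For reference, a minimal sketch of decoding a little-endian 32-bit value with
LLVM's support helpers rather than a hand-rolled byte swap (the readLE32
wrapper and its caller are hypothetical; only the llvm/Support/Endian.h helper
is real):

    // Sketch: read one little-endian uint32_t from the profile buffer using
    // LLVM's endian helpers instead of shifting bytes by hand.
    #include "llvm/Support/Endian.h"
    #include <cstdint>

    static uint32_t readLE32(const unsigned char *Buf) {
      return llvm::support::endian::read32le(Buf);
    }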
No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162992 91177308-0d34-0410-b5e6-96231b3b80d8
Thumb2 instructions are mostly constrained to rGPR, not tGPR, which is
for Thumb1.
rdar://problem/12203728
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162968 91177308-0d34-0410-b5e6-96231b3b80d8
The assembly string for the VMOVPQIto64rr instruction incorrectly lacked the 'v'
prefix, resulting in mis-assembly of the vanilla movd instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162963 91177308-0d34-0410-b5e6-96231b3b80d8
because it does not support CMOV of vectors. To implement this efficiently, we broadcast the condition bit and use a NAND-OR sequence
to select between the two operands. This is the same sequence we use for targets that don't have vector BLENDs (like SSE2).
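To make the sequence concrete, a standalone SSE2 illustration (intrinsics, not
the generated code): broadcast the condition into an all-ones/all-zeros mask,
then combine AND, ANDN, and OR to pick each lane:

    // Standalone SSE2 demo of the AND/ANDN/OR select described above.
    #include <emmintrin.h>

    // Broadcast a scalar condition into a full-width lane mask (0 or ~0).
    static __m128i broadcast_cond(int cond) {
      return _mm_set1_epi32(cond ? -1 : 0);
    }

    // Per-lane select: where mask lanes are all-ones take a, otherwise take b.
    static __m128i select_lanes(__m128i mask, __m128i a, __m128i b) {
      return _mm_or_si128(_mm_and_si128(mask, a),
                          _mm_andnot_si128(mask, b));
    }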
rdar://12201387
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162926 91177308-0d34-0410-b5e6-96231b3b80d8
AsmMatcherEmitter. This function maps inline assembly operands to MCInst
operands.
For example, '__asm mov j, eax' is represented by the following MCInst:
<MCInst 1460 <MCOperand Reg:0> <MCOperand Imm:1> <MCOperand Reg:0>
<MCOperand Expr:(j)> <MCOperand Reg:0> <MCOperand Reg:43>>
The first 5 MCInst operands are a result of j matching as a memory operand
consisting of a BaseReg (Reg:0), a MemScale (Imm:1), a MemIndexReg (Reg:0),
an Expr (Expr:(j)), and a MemSegReg (Reg:0). The 6th MCInst operand represents
the eax register (Reg:43).
This translation is necessary to determine the Input and Output Exprs. If a
single asm operand maps to multiple MCInst operands, the index of the first
MCInst operand is returned. Ideally, it would return the operand we really
care about (i.e., the Expr:(j) in this case), but I haven't found an easy way
of doing this yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162920 91177308-0d34-0410-b5e6-96231b3b80d8
- Add 'UseSSEx' to prevent SSE legacy insns from being selected when AVX is
enabled.
Due to the penalty of inter-mixing SSE and AVX instructions, we need to
prevent SSE legacy insns from being generated except when explicitly
requested through some intrinsics. For patterns supported by both SSE and
AVX, we have so far forced the AVX insns to be tried first, relying on
AddedComplexity or position in the .td file. That is error-prone and
accidentally introduces bugs.
'UseSSEx' is disabled when AVX is turned on. For SSE insns inherited by
AVX, we need this predicate to force either VEX encoding or SSE legacy
encoding only.
For insns not inherited by AVX, we still use the previous predicates,
i.e. 'HasSSEx'. So far, these insns fall into the following
categories:
* SSE insns with MMX operands
* SSE insns with GPR/MEM operands only (xFENCE, PREFETCH, CLFLUSH,
CRC, etc.)
* SSE4A insns.
* MMX insns.
* x87 insns added by SSE.
2 test cases are modified:
- test/CodeGen/X86/fast-isel-x86-64.ll
AVX code generation differs from the SSE one. 'vcvtsi2sdq' cannot be
selected by fast-isel due to its complicated pattern, and fast-isel falls
back to materializing it from the constant pool.
- test/CodeGen/X86/widen_load-1.ll
AVX code generation differs from the SSE one after fixing SSE/AVX
inter-mixing. Exec-domain fixing prefers 'vmovapd' over 'vmovaps'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162919 91177308-0d34-0410-b5e6-96231b3b80d8