because it does not support CMOV of vectors. To implement this efficiently, we broadcast the condition bit and use an AND/ANDN/OR sequence
to select between the two operands. This is the same sequence we use for targets that don't have vector BLENDs (like SSE2).
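For reference, a minimal sketch of that bitwise-select idiom using SSE2
intrinsics (illustrative only; the backend emits the equivalent machine
instructions directly):

  #include <emmintrin.h>

  // Bitwise select: (mask & a) | (~mask & b), no vector blend required.
  static __m128i select128(__m128i mask, __m128i a, __m128i b) {
    return _mm_or_si128(_mm_and_si128(mask, a),
                        _mm_andnot_si128(mask, b));  // andnot computes ~mask & b
  }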
rdar://12201387
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162926 91177308-0d34-0410-b5e6-96231b3b80d8
AsmMatcherEmitter. This function maps inline assembly operands to MCInst
operands.
For example, '__asm mov j, eax' is represented by the following MCInst:
<MCInst 1460 <MCOperand Reg:0> <MCOperand Imm:1> <MCOperand Reg:0>
<MCOperand Expr:(j)> <MCOperand Reg:0> <MCOperand Reg:43>>
The first 5 MCInst operands are a result of j matching as a memory operand
consisting of a BaseReg (Reg:0), MemScale (Imm:1), MemIndexReg (Reg:0),
Expr (Expr:(j)), and a MemSegReg (Reg:0). The 6th MCInst operand represents
the eax register (Reg:43).
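The five memory-operand components can be pictured as follows (a purely
illustrative C++ layout; the names are not LLVM's actual field names, and
'Disp' stands in for the MCExpr pointer):

  #include <cstdint>

  struct X86MemOperandShape {
    unsigned    BaseReg;   // Reg:0    -- no base register
    int64_t     Scale;     // Imm:1    -- scale factor
    unsigned    IndexReg;  // Reg:0    -- no index register
    const void *Disp;      // Expr:(j) -- symbolic displacement
    unsigned    SegReg;    // Reg:0    -- default segment
  };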
This translation is necessary to determine the Input and Output Exprs. If a
single asm operand maps to multiple MCInst operands, the index of the first
MCInst operand is returned. Ideally, it would return the operand we really
care about (i.e., the Expr:(j) in this case), but I haven't found an easy way
of doing this yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162920 91177308-0d34-0410-b5e6-96231b3b80d8
- Add 'UseSSEx' to force SSE legacy insn not being selected when AVX is
enabled.
Due to the penalty of inter-mixing SSE and AVX instructions, we need to
prevent SSE legacy insns from being generated except when explicitly
specified through some intrinsics. For patterns supported by both
SSE and AVX, we have so far forced the AVX insn to be tried first, relying on
AddedComplexity or position in the td file. That approach is error-prone and
accidentally introduces bugs.
'UseSSEx' is disabled when AVX is turned on. For SSE insns inherited
by AVX, we need this predicate to force VEX encoding or SSE legacy
encoding only.
For insns not inherited by AVX, we still use the previous predicates,
i.e. 'HasSSEx'. So far, these insns fall into the following
categories:
* SSE insns with MMX operands
* SSE insns with GPR/MEM operands only (xFENCE, PREFETCH, CLFLUSH,
CRC, etc.)
* SSE4A insns.
* MMX insns.
* x87 insns added by SSE.
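To make the intent of the two predicate families concrete, here is a
stand-alone C++ sketch; the real predicates are TableGen Predicate<>
definitions in X86InstrInfo.td, and the struct below is a toy stand-in
for the subtarget:

  struct SubtargetFlags { bool HasSSE2; bool HasAVX; };  // toy stand-in

  // 'UseSSE2': select the SSE legacy encoding only when AVX is off.
  static bool UseSSE2(const SubtargetFlags &ST) {
    return ST.HasSSE2 && !ST.HasAVX;
  }

  // 'HasSSE2': insns not inherited by AVX stay valid whether or not AVX is on.
  static bool HasSSE2(const SubtargetFlags &ST) {
    return ST.HasSSE2;
  }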
2 test cases are modified:
- test/CodeGen/X86/fast-isel-x86-64.ll
AVX code generation differs from SSE. 'vcvtsi2sdq' cannot be
selected by fast-isel due to its complicated pattern, so fast-isel
falls back to materializing it from the constant pool.
- test/CodeGen/X86/widen_load-1.ll
AVX code generation differs from SSE after fixing SSE/AVX
inter-mixing. Execution-domain fixing prefers 'vmovapd' over
'vmovaps'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162919 91177308-0d34-0410-b5e6-96231b3b80d8
[Tobias von Koch] What's happening here is that the CR6SET/CR6UNSET is breaking the chain of register copies glued to the function call (BL_SVR4 node). The scheduler then moves other instructions in between those and the function call, which isn't good!
Right. That's the case where there is no chain of register copies before the call, so InFlag == 0... Attached is a new revision of the patch which should fix this for good.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162916 91177308-0d34-0410-b5e6-96231b3b80d8
The old PHI updating code in loop-rotate was replaced with SSAUpdater a while
ago; it has no problems with complex PHIs. What had to be fixed was detecting
whether a loop was already rotated and updating dominators when multiple exits
were present.
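For context, rotation turns a top-tested loop into a guarded, bottom-tested
one, so detection amounts to recognizing the second shape. A schematic
sketch, with cond() and body() as placeholders:

  extern bool cond();
  extern void body();

  void notRotated() {            // header tests the condition each iteration
    while (cond())
      body();
  }

  void rotated() {               // guard + bottom-tested latch
    if (cond()) {
      do
        body();
      while (cond());
    }
  }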
This change increases overall code size a bit, mostly due to additional loop
unrolling opportunities. Passes test-suite and selfhost with -verify-dom-info.
Fixes PR7447.
Thanks to Andy for the input on the domtree updating code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162912 91177308-0d34-0410-b5e6-96231b3b80d8
When a MachineInstr is constructed, its implicit operands are added
first, then the explicit operands are inserted before the implicits.
MCInstrDesc has operand flags like early clobber and operand ties that
apply to the explicit operands.
Don't look at those flags when the implicit operands are first added in
the explicit operands' positions.
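A toy model of the hazard (names illustrative; the real logic lives in
MachineInstr::addOperand):

  #include <vector>

  struct ToyDesc {
    unsigned NumExplicit;        // MCInstrDesc flags cover explicit ops only
    std::vector<int> TiedTo;     // one entry per explicit op; -1 = untied
    bool isTiedAt(unsigned OpNo) const {
      // Implicit operands added first temporarily sit at the explicit
      // positions, so gate every flag lookup on the descriptor range.
      return OpNo < NumExplicit && TiedTo[OpNo] != -1;
    }
  };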
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162910 91177308-0d34-0410-b5e6-96231b3b80d8
code and allow better code reuse. Make the code conform more closely
to LLVM code style.
No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162895 91177308-0d34-0410-b5e6-96231b3b80d8
Changes the hash result for strings containing characters
with values >= 128, such as UTF-8 strings (not plain ASCII).
Changed mostly so that we match other implementations.
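One common way such a difference arises is char signedness, sketched below.
This mechanism is an assumption for illustration, not necessarily the exact
change here:

  #include <cstdint>
  #include <string>

  uint32_t hashBytes(const std::string &S) {
    uint32_t H = 5381;  // djb2-style seed, for illustration only
    for (char C : S)    // bytes >= 128 would sign-extend through plain 'char';
      H = H * 33 + static_cast<unsigned char>(C);  // cast keeps them in [0,255]
    return H;
  }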
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162882 91177308-0d34-0410-b5e6-96231b3b80d8
- The root cause is that target constant materialization in X86 fast-isel
creates PC-relative addressing, which may overflow the 32-bit range under
non-Small code models if the .rodata section is allocated too far away from
the code segment. This happens in MCJIT, which so far uses the Large code
model.
- Follow the same logic and fix fast-isel under non-Small code models by
bailing out, so that SelectionDAG handles the materialization instead.
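A minimal sketch of the bail-out, assuming the fix is a simple code-model
check in the fast-isel materialization path (enum and return convention are
illustrative):

  enum class CodeModel { Small, Kernel, Medium, Large };

  // Returns 0 ("can't handle") so the caller falls back to SelectionDAG.
  unsigned materializeFromConstantPool(CodeModel CM) {
    // A RIP-relative constant-pool load only has a 32-bit displacement,
    // which is guaranteed to be in range only under the Small code model.
    if (CM != CodeModel::Small)
      return 0;
    // ... emit the PC-relative load and return the result register ...
    return 1;  // placeholder result
  }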
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162881 91177308-0d34-0410-b5e6-96231b3b80d8
When there are multiple tied use-def pairs on an inline asm instruction,
the tied uses must appear in the same order as the defs.
It is possible to write an LLVM IR inline asm instruction that breaks
this constraint, but there is no reason for a front end to emit the
operands out of order.
The GNU inline asm syntax specifies tied operands with a single read/write
constraint, "+r", so out-of-order operands are not possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162878 91177308-0d34-0410-b5e6-96231b3b80d8
Tombstones and full hash collisions are rare, so mark the "empty"
and "no collision" paths as likely. The bug in simplifycfg
that prevented the hints from being picked up during selfhost
was fixed recently :)
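A sketch of the hinted shape, assuming a simple open-addressing probe; the
macro names and table layout are illustrative, and LLVM wraps
__builtin_expect similarly:

  #define LIKELY(x)   __builtin_expect(!!(x), 1)
  #define UNLIKELY(x) __builtin_expect(!!(x), 0)

  enum SlotState { Empty, Tombstone, Occupied };
  struct Slot { SlotState State; unsigned Hash; };

  int find(const Slot *Slots, unsigned N, unsigned Hash) {
    for (unsigned I = Hash % N, Step = 0; Step < N; I = (I + 1) % N, ++Step) {
      if (LIKELY(Slots[I].State == Empty))
        return -1;                    // common: definite miss
      if (UNLIKELY(Slots[I].State == Tombstone))
        continue;                     // rare: deleted slot, keep probing
      if (LIKELY(Slots[I].Hash != Hash))
        continue;                     // common: no full-hash collision
      return (int)I;                  // hashes match; caller compares keys
    }
    return -1;
  }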
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162874 91177308-0d34-0410-b5e6-96231b3b80d8
For normal instructions, isTied() is set automatically by addOperand(),
based on MCInstrDesc, but inline asm has tied operands outside the
descriptor.
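A condensed sketch of the two paths, as a fragment that assumes MI and MCID
are in scope (real MachineInstr/MCInstrDesc APIs, but not the verbatim code):

  // Normal instruction path: the tie is read from the descriptor.
  int DefIdx = MCID.getOperandConstraint(UseIdx, MCOI::TIED_TO);
  if (DefIdx != -1)
    MI->tieOperands(DefIdx, UseIdx);
  // Inline asm path: there is no descriptor entry, so the builder calls
  // MI->tieOperands(DefIdx, UseIdx) directly for each tied pair.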
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162869 91177308-0d34-0410-b5e6-96231b3b80d8
Ordered memory operations are more constrained than volatile loads and
stores because they must be ordered with respect to all other memory
operations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162861 91177308-0d34-0410-b5e6-96231b3b80d8
This means the same as LoadInst/StoreInst::isUnordered(), and implies
!isVolatile().
Atomic loads and stores are also ordered, and this is the right method
to check if it is safe to reorder memory operations. Ordered atomics
can't be reordered wrt normal loads and stores, which is a stronger
constraint than volatile.
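A sketch of the predicate, mirroring the meaning of LoadInst::isUnordered();
the ordering enum is condensed to the values that matter here:

  // Only the relative order of the values below matters.
  enum AtomicOrdering { NotAtomic, Unordered, Monotonic };

  bool isUnordered(AtomicOrdering Ord, bool IsVolatile) {
    // At most 'unordered' atomic ordering, and not volatile.
    return Ord <= Unordered && !IsVolatile;  // hence implies !isVolatile()
  }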
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162859 91177308-0d34-0410-b5e6-96231b3b80d8
It is technically allowed to move a normal load across a volatile load,
but probably not a good idea.
It is not allowed to move a load across an atomic load with
Ordering > Monotonic, and we model those with MOVolatile as well.
I recently removed the mayStore flag from atomic load instructions, so
they don't need a pseudo-opcode. This patch makes up for the difference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162857 91177308-0d34-0410-b5e6-96231b3b80d8
This lets the user run the program from a different directory and still have the
.gcda files show up in the correct place.
<rdar://problem/12179524>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162855 91177308-0d34-0410-b5e6-96231b3b80d8
We need to reserve space for the mandatory traceback fields,
though leaving them as zero is appropriate for now.
Although the ABI calls for these fields to be filled in fully, no
compiler on Linux currently does this, and GDB does not read these
fields. GDB uses the first word of zeroes during exception handling to
find the end of the function and the size field, allowing it to compute
the beginning of the function. DWARF information is used for everything
else. We need the extra 8 bytes of pad so the size field is found in
the right place.
As a comparison, GCC fills in a few of the fields -- language, number
of saved registers -- but ignores the rest. IBM's proprietary OSes do
make use of the full traceback table facility.
Patch by Bill Schmidt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162854 91177308-0d34-0410-b5e6-96231b3b80d8
The operands on an INLINEASM machine instruction are divided into groups
headed by immediate flag operands. Verify this structure.
Extract verifyTiedOperands(), and only call it for non-inlineasm
instructions.
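A condensed sketch of the group walk, as a fragment using the InlineAsm flag
helpers (simplified; the real instruction also carries trailing implicit
operands and clobbers):

  // Skip the asm string and extra-info operands, then walk group by group.
  for (unsigned I = InlineAsm::MIOp_FirstOperand, E = MI->getNumOperands();
       I < E; ) {
    const MachineOperand &FlagMO = MI->getOperand(I);
    // Each group is headed by an immediate flag encoding kind and count.
    assert(FlagMO.isImm() && "expected a flag operand");
    unsigned NumOps = InlineAsm::getNumOperandRegisters(FlagMO.getImm());
    // ... verify the NumOps operands belonging to this group ...
    I += NumOps + 1;
  }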
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162849 91177308-0d34-0410-b5e6-96231b3b80d8
This disables malloc-specific optimization when -fno-builtin (or -ffreestanding)
is specified. This has been a problem for a long time but became more severe
with the recent memory builtin improvements.
Since the memory builtin functions are used everywhere, this required passing
TLI in many places. This means that functions that now have an optional TLI
argument, like RecursivelyDeleteTriviallyDeadInstructions, won't remove dead
mallocs anymore if the TLI argument is missing. I've updated most passes to do
the right thing.
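A toy model of the resulting contract (names illustrative; the real helpers
live in MemoryBuiltins): without a TLI, builtin recognition conservatively
fails and a dead malloc survives.

  #include <string>

  struct TargetLibraryInfoToy {
    bool Has(const std::string &F) const { return F == "malloc"; }
  };

  // Recognize a call as malloc only when TLI is present and says so;
  // a null TLI (e.g. under -fno-builtin) disables the optimization.
  bool isMallocCall(const std::string &Callee, const TargetLibraryInfoToy *TLI) {
    return TLI && TLI->Has(Callee);
  }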
Fixes PR13694 and probably others.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162841 91177308-0d34-0410-b5e6-96231b3b80d8