Commit Graph

11381 Commits

Author SHA1 Message Date
Craig Topper
951d088ae7 [X86] Convert all the i8imm operands used by SSE and AVX instructions to u8imm.
This makes the assembler check their size and removes a hack from the disassembler to avoid sign extending the immediate.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226645 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 08:15:54 +00:00
Craig Topper
f81b1f346a [x86] Add assembly parser bounds checking to the immediate value for cmpss/cmpsd/cmpps/cmppd.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226642 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 06:07:53 +00:00
Craig Topper
d624796bc6 [x86] Add some mayLoad/hasSideEffects flags. Remove one that was already covered by a pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226562 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 12:15:30 +00:00
Simon Pilgrim
2a2f94c5f2 [X86][AVX] Missing AVX1 memory folding float instructions
Now that we can create much more exhaustive X86 memory folding tests, this patch adds the missing AVX1/F16C floating point instruction stack foldings we can easily test for, including the scalar intrinsics (add, div, max, min, mul, sub), float/int to double conversions, half precision conversions, rounding, dot product and bit test. The patch also adds a couple of obviously missing SSE instructions (more to follow once we have full SSE testing).

Now that scalar folding is working, it broke a very old test (2006-10-07-ScalarSSEMiscompile.ll) - this test appears to make no sense, as it's trying to ensure that a scalar subtraction isn't folded because it 'would zero the top elts of the loaded vector' - the test just appears to be wrong to me.

Differential Revision: http://reviews.llvm.org/D7055



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226513 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 22:40:45 +00:00
Rafael Espindola
a23cc6a1ea Add r224985 back with fixes.
The fixes are to note that AArch64 has additional restrictions on when local
relocations can be used. In particular, ld64 requires that relocations to
cstring/cfstrings use linker visible symbols.

Original message:

In an assembly expression like

bar:
  .long L0 + 1

the intended semantics is that bar will contain a pointer one byte past L0.

In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell that a relocation must point to the
end of a string, since that would look just like the start of the next.

The solution used in ELF is to use relocations with symbols if there is a
non-zero addend.

In MachO before this patch we would just keep all symbols in some sections.

This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.

This patch implements the non-zero addend logic for MachO too.
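
As a sketch of the ambiguity described above (hypothetical labels, assuming a
content-merged Mach-O cstring section):

  .section __TEXT,__cstring,cstring_literals
  L0:
    .asciz "ab"          # bytes: 'a', 'b', 0
  L1:
    .asciz "cd"
  # An address recorded only as "section offset 3" (one past the NUL of L0)
  # is indistinguishable from the start of L1, so after strings are merged
  # by content the linker cannot know which string the fixup belongs to.
  # Keeping the symbol plus a non-zero addend (L0 + 3) preserves the intent.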

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226503 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 21:11:14 +00:00
Craig Topper
6f91bd79f3 [x86] Change AVX512 intrinsics to take an 8-bit immediate for the comparison kind instead of a 32-bit immediate. This better aligns with the emitted instruction. It also matches the SSE and AVX1 equivalents. Also add auto upgrade support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226430 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 06:07:27 +00:00
David Blaikie
341a7e245e std::unique_ptrify the MCStreamer argument to createAsmPrinter
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226414 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 20:29:04 +00:00
Saleem Abdulrasool
d4bdb4e0b2 X86: fix comment typo in AsmParser
Fix a typo.  NFC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226313 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 20:16:06 +00:00
Adam Nemet
ad2ac976af [AVX512] Add intrinsics for masked aligned FP loads and stores
Similar to the unaligned cases.

Test was generated with update_llc_test_checks.py.

Part of <rdar://problem/17688758>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226296 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 18:50:09 +00:00
Andrea Di Biagio
ac7b9c828f [X86][DAG] Disable target specific combine on INSERTPS dag nodes at -O0.
This patch disables target specific combine on X86ISD::INSERTPS dag nodes
if optlevel is CodeGenOpt::None.

The backend currently implements a target specific combine rule that converts
a vector load used by an INSERTPS dag node into a scalar load plus a
scalar_to_vector. This allows ISel to select a single INSERTPSrm instead of
two instructions (i.e. a vector load plus INSERTPSrr).

However, the existing target combine rule on INSERTPS nodes only works under
the assumption that ISel will always be able to match an INSERTPSrm. This is
not true in general at -O0, since the backend only allows folding a load into
the memory operand of an instruction if the optimization level is not
CodeGenOpt::None.

In the example below:

//
__m128 test(__m128 a, __m128 *b) {
  __m128 c = _mm_insert_ps(a, *b, 1 << 6);
  return c;
}
//

Before this patch, at -O0, the backend would have canonicalized the load to 'b'
into a scalar load plus scalar_to_vector. Later on, ISel would have selected an
INSERTPSrr leaving the insertps mask in an inconsistent state:

  movss 4(%rdi), %xmm1
  insertps  $64, %xmm1, %xmm0 # xmm0 = xmm1[1],xmm0[1,2,3].

With this patch, the backend avoids folding the vector load into the operand of
the INSERTPS. The new codegen at -O0 is:

  movaps (%rdi), %xmm1
  insertps  $64, %xmm1, %xmm0 # xmm0 = xmm1[1],xmm0[1,2,3].


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226277 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 14:55:26 +00:00
Joerg Sonnenberger
638077aa41 Support @PLT loads on 32bit x86.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226182 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 17:59:02 +00:00
Craig Topper
06185e7f6b Hide some redundant AVX512 instructions from the asm parser, but force them to show up in the disassembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226155 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 09:37:15 +00:00
Rafael Espindola
8327f0bca1 Revert "Add r224985 back with two fixes."
This reverts commit r225644 while I debug a regression.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226022 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 19:07:23 +00:00
David Majnemer
3eafccc036 Use the operand vector instead so inline assembly can be validated too
The buildbots got upset after r225941, this should hopefully fix things.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225954 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 06:14:36 +00:00
Saleem Abdulrasool
e1f65e239a X86: only access operands if they are present
If there is no associated immediate (MS style inline asm), do not try to access
the operand; instead, assume that it is valid.  This should fix the buildbots after SVN
r225941.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225950 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 05:37:10 +00:00
JF Bastien
7f0cbb5703 Revert "Insert random noops to increase security against ROP attacks (llvm)"
This reverts commit:
http://reviews.llvm.org/D3392

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225948 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 05:24:33 +00:00
Saleem Abdulrasool
1679d0d3c2 X86: validate 'int' instruction
The int instruction takes as an operand an 8-bit immediate value.  Validate that
the input is valid rather than silently truncating the value.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225941 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 05:10:21 +00:00
JF Bastien
21befa7761 Insert random noops to increase security against ROP attacks (llvm)
A pass that adds random noops to X86 binaries to introduce diversity with the goal of increasing security against most return-oriented programming attacks.

Command line options:
  -noop-insertion // Enable noop insertion.
  -noop-insertion-percentage=X // X% of assembly instructions will have a noop prepended (default: 50%, requires -noop-insertion)
  -max-noops-per-instruction=X // Randomly generate X noops per instruction, i.e. roll the dice X times with the probability set above (default: 1). This doesn't guarantee X noop instructions.

In addition, the following 'quick switch' in clang enables basic diversity using default settings (currently: noop insertion and schedule randomization; it is intended to be extended in the future).
  -fdiversify

This is the llvm part of the patch.
clang part: D3393

http://reviews.llvm.org/D3392
Patch by Stephen Crane (@rinon)
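
A usage sketch with the flags listed above (the input file name is hypothetical):

  llc -noop-insertion -noop-insertion-percentage=25 -max-noops-per-instruction=2 foo.ll -o foo.s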

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225908 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 01:07:26 +00:00
Adam Nemet
293f71ddd2 [AVX512] Unpack support in new shuffle lowering
This now handles both 32 and 64-bit element sizes.

In this version, the tests are in vector-shuffle-512-v8.ll, canonicalized by
Chandler's update_llc_test_checks.py.

Part of <rdar://problem/17688758>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225838 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 22:20:18 +00:00
Adam Nemet
cebcef6329 [AVX512] Add pretty-printing of shuffle mask for unpacks
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225837 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 22:20:14 +00:00
Reid Kleckner
d8f69a7201 Rename llvm.recoverframeallocation to llvm.framerecover
This name is less descriptive, but it sort of puts things in the
'llvm.frame...' namespace, relating it to frameallocate and
frameaddress. It also avoids using "allocate" and "allocation" together.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225752 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 01:51:34 +00:00
Reid Kleckner
221a7075cf Add the llvm.frameallocate and llvm.recoverframeallocation intrinsics
These intrinsics allow multiple functions to share a single stack
allocation from one function's call frame. The function with the
allocation may only perform one allocation, and it must be in the entry
block.

Functions accessing the allocation call llvm.recoverframeallocation with
the function whose frame they are accessing and a frame pointer from an
active call frame of that function.

These intrinsics are very difficult to inline correctly, so the
intention is that they be introduced rarely, or at least very late
during EH preparation.
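
A minimal IR sketch of the intended pairing (the function bodies are
hypothetical; intrinsic signatures as described above):

define void @parent() {
entry:
  ; the single shared allocation must be made in the entry block
  %buf = call i8* @llvm.frameallocate(i32 8)
  ret void
}

define void @helper(i8* %parent_fp) {
  ; recover the allocation given @parent and one of its live frame pointers
  %buf = call i8* @llvm.recoverframeallocation(i8* bitcast (void ()* @parent to i8*), i8* %parent_fp)
  ret void
}

declare i8* @llvm.frameallocate(i32)
declare i8* @llvm.recoverframeallocation(i8*, i8*)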

Reviewers: echristo, andrew.w.kaylor

Differential Revision: http://reviews.llvm.org/D6493

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225746 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 00:48:10 +00:00
Simon Pilgrim
c815f15537 [X86][SSE] Minor regression fix for r225551
r225551's vector byte shuffle optimization caused an assertion failure, as fully zeroable vectors can be produced under certain circumstances. This fix drops the assert and returns a zero vector where the assert would have failed.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225718 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-12 22:38:08 +00:00
Ahmed Bougacha
cd5bbd8bad [X86] Also create+widen FMIN/FMAX nodes for v2f32.
This happens in the HINT benchmark, where the SLP-vectorizer created
v2f32 fcmp/select code.  The "correct" solution would have been to
teach the vectorizer cost model that v2f32 isn't legal (because really,
it isn't), but if we can vectorize we might as well do so.

We legalize these v2f32 FMIN/FMAX nodes by widening to v4f32 later on.
v3f32 were already widened to v4f32 by the generic unroll-and-build-vector
legalization.
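
A sketch of the SLP-style input now handled (hypothetical function; under the
usual no-NaNs-style conditions this becomes an FMIN node that is widened to
v4f32):

define <2 x float> @min2f(<2 x float> %a, <2 x float> %b) {
  %cmp = fcmp olt <2 x float> %a, %b
  %min = select <2 x i1> %cmp, <2 x float> %a, <2 x float> %b
  ret <2 x float> %min
}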

rdar://15763436
Differential Revision: http://reviews.llvm.org/D6557


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225691 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-12 20:31:30 +00:00
Rafael Espindola
5512415ade Add r224985 back with two fixes.
One is that AArch64 has additional restrictions on when local relocations can
be used. We have to take those into consideration when deciding whether to put
an L symbol in the symbol table.

The other is that ld64 requires the relocations to cstring to use linker
visible symbols on AArch64.

Thanks to Michael Zolotukhin for testing this!

Remove doesSectionRequireSymbols.

In an assembly expression like

bar:
.long L0 + 1

the intended semantics is that bar will contain a pointer one byte past L0.

In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell that a relocation must point to the
end of a string, since that would look just like the start of the next.

The solution used in ELF is to use relocations with symbols if there is a
non-zero addend.

In MachO before this patch we would just keep all symbols in some sections.

This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.

This patch implements the non-zero addend logic for MachO too.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225644 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-12 18:13:07 +00:00
Simon Pilgrim
dc5a2dabfe [X86][SSE] Minor fix to VPBLENDW AVX2 commutation.
D6015 / rL221313 enabled commutation for SSE immediate blend instructions, but due to a typo the AVX2 VPBLENDW ymm instructions weren't flagged as commutative along with the others in the tables, even though they were still being commuted in code and tested for.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225612 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-11 22:08:01 +00:00
David Majnemer
85a0cb9bf2 Revert most of r225597
We can't rely on a DataLayout enlightened constant folder.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225599 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-11 07:29:51 +00:00
David Majnemer
d2f4460ee7 X86: Properly decode shuffle masks when the constant pool type is weird
It's possible for the constant pool entry for the shuffle mask to come
from a completely different operation.  This occurs when Constants have
the same bit pattern but have different types.

Make DecodePSHUFBMask tolerant of types which, after a bitcast, are
appropriately sized vector types.

This fixes PR22188.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225597 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-11 05:08:57 +00:00
Saleem Abdulrasool
776673ea09 X86: teach X86TargetLowering about L,M,O constraints
Teach the ISelLowering for X86 about the L,M,O target specific constraints.
Although, for the moment, clang performs constraint validation and prevents
passing along inline asm which may have immediate constant constraints violated,
the backend should be able to cope with the invalid inline asm a bit better.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225596 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-11 04:39:24 +00:00
Simon Pilgrim
47abf0e3da [X86][SSE] Improved (v)insertps shuffle matching
In the current code we only attempt to match against insertps if we have exactly one element from the second input vector, irrespective of how much of the shuffle result is zeroable.

This patch checks to see if there is a single non-zeroable element from either input that requires insertion. It also supports matching of cases where only one of the inputs need to be referenced.

We also split insertps shuffle matching off into a new lowerVectorShuffleAsInsertPS function.

Differential Revision: http://reviews.llvm.org/D6879



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225589 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-10 19:45:33 +00:00
Simon Pilgrim
34630b6ea9 [X86][SSE] Avoid vector byte shuffles with zero by using pshufb to create zeros
pshufb can shuffle in zero bytes as well as bytes from a source vector - we can use this to avoid having to shuffle 2 vectors and ORing the result when the used inputs from a vector are all zeroable.
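
A sketch of the zeroing behaviour relied on here (hypothetical function): a
pshufb control byte with its high bit set writes zero into that lane, so one
shuffle can both move bytes and materialize zeros:

define <16 x i8> @interleave_with_zeros(<16 x i8> %v) {
  ; control byte -128 (0x80) zeroes a lane; other bytes pick a source byte
  %r = call <16 x i8> @llvm.x86.ssse3.pshuf.b.128(<16 x i8> %v,
         <16 x i8> <i8 0, i8 -128, i8 1, i8 -128, i8 2, i8 -128, i8 3, i8 -128,
                    i8 4, i8 -128, i8 5, i8 -128, i8 6, i8 -128, i8 7, i8 -128>)
  ret <16 x i8> %r
}

declare <16 x i8> @llvm.x86.ssse3.pshuf.b.128(<16 x i8>, <16 x i8>)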

Differential Revision: http://reviews.llvm.org/D6878



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225551 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-09 22:03:19 +00:00
Lang Hames
4c553e0367 Recommit r224935 with a fix for the ObjC++/AArch64 bug that that revision
introduced.

A test case for the bug was already committed in r225385.

Patch by Rafael Espindola.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225534 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-09 18:55:42 +00:00
Chandler Carruth
72dd2f22b5 [x86] Add a flag to control the vector shuffle legality predicates that
complements the new vector shuffle lowering code path. This flag,
naturally, is *off* because we've not tested or evaluated the results of
this at all. However, the flag will make it much easier to evaluate
whether we can be this aggressive and whether there are missing vector
shuffle lowering optimizations.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225491 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-09 01:24:36 +00:00
Ahmed Bougacha
3df0ee1141 [X86] Reflow comment. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225455 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-08 17:49:48 +00:00
Michael Kuperstein
0858c28ca8 [X86] Don't try to generate direct calls to TLS globals
The call lowering assumes that if the callee is a global, we want to emit a direct call.
This is correct for regular globals, but not for TLS ones.
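
A contrived IR sketch of the problematic shape (names are hypothetical): the
callee's address lives in thread-local storage, so it must be computed via the
TLS access sequence rather than emitted as a direct call relocation:

@tls_target = thread_local global i8 0

define void @f() {
  call void bitcast (i8* @tls_target to void ()*)()
  ret void
}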

Differential Revision: http://reviews.llvm.org/D6862

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225438 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-08 11:50:58 +00:00
Craig Topper
367b67df3e [X86] Don't print 'dword ptr' or 'qword ptr' on the operand to some of the LEA variants in Intel syntax. The memory operand is inherently unsized.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225432 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-08 07:41:30 +00:00
Ahmed Bougacha
7fac1d945f [SelectionDAG] Allow targets to specify legality of extloads' result
type (in addition to the memory type).

The *LoadExt* legalization handling used to only have one type, the
memory type.  This forced users to assume that as long as the extload
for the memory type was declared legal, and the result type was legal,
the whole extload was legal.

However, this isn't always the case.  For instance, on X86, with AVX,
this is legal:
    v4i32 load, zext from v4i8
but this isn't:
    v4i64 load, zext from v4i8
Whereas v4i64 is (arguably) legal, even without AVX2.

Note that the same thing was done a while ago for truncstores (r46140),
but I assume no one needed it yet for extloads, so here we go.

Calls to getLoadExtAction were changed to add the value type, found
manually in the surrounding code.

Calls to setLoadExtAction were mechanically changed, by wrapping the
call in a loop, to match previous behavior.  The loop iterates over
the MVT subrange corresponding to the memory type (FP vectors, etc...).
I also pulled neighboring setTruncStoreActions into some of the loops;
those shouldn't make a difference, as the additional types are illegal.
(e.g., i128->i1 truncstores on PPC.)

No functional change intended.

Differential Revision: http://reviews.llvm.org/D6532


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225421 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-08 00:51:32 +00:00
Matthias Braun
e081880cc1 X86: VZeroUpperInserter: shortcut should not trigger if we have any function live-ins.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225419 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-08 00:33:48 +00:00
Ahmed Bougacha
8065738154 [CodeGen] Use MVT iterator_ranges in legality loops. NFC intended.
A few loops do trickier things than just iterating on an MVT subset,
so I'll leave them be for now.
Follow-up of r225387.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225392 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-07 21:27:10 +00:00
Ahmed Bougacha
1d04b11927 [X86] Fix 512->256 typo in comments. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225367 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-07 19:38:50 +00:00
David Majnemer
60812c05e7 X86: Allow the stack probe size to be configurable per function
LLVM emits stack probes on Windows targets to ensure that the stack is
correctly accessed.  However, the amount of stack allocated before
emitting such a probe is hardcoded to 4096.

It is desirable to have this be configurable so that a function might
opt-out of stack probes.  Our level of granularity is at the function
level instead of, say, the module level to permit proper generation of
code after LTO.

Patch by Andrew H!

N.B.  The inliner needs to be updated to properly consider what happens
after inlining a function with a specific stack-probe-size into another
function with a different stack-probe-size.
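
A sketch of the per-function opt-in as a string attribute (the function is
hypothetical; the attribute name follows this patch's description):

define void @big_frame() "stack-probe-size"="8192" {
  ; a stack probe is now emitted only once more than 8192 bytes are allocated
  ret void
}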

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225360 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-07 18:14:07 +00:00
Ahmed Bougacha
d412c608fc [X86] Teach FCOPYSIGN lowering to recognize constant magnitudes.
For code like:
    float foo(float x) { return copysign(1.0, x); }
We used to generate:
    andps  <-0.000000e+00,0,0,0>, %xmm0
    movss  <1.000000e+00>, %xmm1
    andps  <nan>, %xmm1
    orps   %xmm0, %xmm1
Basically doing an abs(1.0f) in the two middle instructions.

We now generate:
    andps  <-0.000000e+00,0,0,0>, %xmm0
    orps   <1.000000e+00,0,0,0>, %xmm0

Builds on cleanups r223415, r223542.
rdar://19049548
Differential Revision: http://reviews.llvm.org/D6555


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225357 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-07 17:33:03 +00:00
Craig Topper
76bdb26595 [X86] Merge a switch statement inside a default case of another switch statement on the same variable. There was no additional code in the default so this should be no functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225345 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-07 08:10:38 +00:00
Craig Topper
ebddeaee40 [X86] Don't mark the shift by 1 instructions as isConvertibleToThreeAddress. There is no handling for them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225344 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-07 08:10:36 +00:00
Craig Topper
decea4d6a6 [X86] Remove some unused TYPE enums from the disassembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225343 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-07 07:47:52 +00:00
Lang Hames
84acf09f32 Revert r224935 "Refactor duplicated code. No intended functionality change."
This is affecting the behavior of some ObjC++ / AArch64 test cases on Darwin.
Reverting to get the bots green while I track down the source of the changed
behavior.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225311 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-06 23:04:36 +00:00
Craig Topper
f6145affbf [X86] Add OpSize32 to XBEGIN_4. Add XBEGIN_2 with OpSize16.
Requires new AsmParserOperand types that detect 16-bit and 32/64-bit mode so that we choose the right instruction based on default sizing without predicates. This is necessary since predicates mess up the disassembler table building.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225256 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-06 08:59:30 +00:00
Craig Topper
88dc6cd5c9 [X86] Make isel select the 2-byte register form of INC/DEC even in non-64-bit mode. Convert to the 1-byte form in non-64-bit mode as part of MCInst lowering.
Overall this seems simpler. It reduces duplication of patterns between both modes and it simplifies the memory folding/unfolding tables as they don't need to create fake instructions just to keep track of 64-bitness.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225252 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-06 07:35:50 +00:00
David Majnemer
b3065539bd X86: Don't make illegal GOTTPOFF relocations
"ELF Handling for Thread-Local Storage" specifies that R_X86_64_GOTTPOFF
relocation target a movq or addq instruction.

Prohibit the truncation of such loads to movl or addl.
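
A sketch of the constraint (hypothetical TLS symbol x):

  movq  x@GOTTPOFF(%rip), %rax   # valid: the relocation targets a movq
  movl  x@GOTTPOFF(%rip), %eax   # invalid: must not be created by narrowing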

This fixes PR22083.

Differential Revision: http://reviews.llvm.org/D6839

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225250 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-06 07:12:52 +00:00
Craig Topper
e16c86e524 [X86] Remove 16-bit and 32-bit offset jump instructions from the AsmParser. We always select the 8-bit size and let the assembler backend relax to the larger size.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225243 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-06 04:23:57 +00:00
Craig Topper
1299ae2403 [X86] Make isel select the shorter form of jump instructions instead of the long form.
The assembler backend will relax to the long form if necessary. This removes a swap from long form to short form in the MCInstLowering code. Selecting the long form used to be required by the old JIT.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225242 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-06 04:23:53 +00:00
Lang Hames
bce877c84c Revert r225048: It broke ObjC on AArch64.
I've filed http://llvm.org/PR22100 to track this issue.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225228 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-06 00:54:32 +00:00
Brad Smith
0d3cd751c4 Remove X86 .quad workaround for buggy GNU assembler on OpenBSD / Bitrig.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225227 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-06 00:53:52 +00:00
Simon Pilgrim
a5b2142af1 [X86][SSE] lowerVectorShuffleAsByteShift tidyup
Removed the local isSequential predicate and used the standard helper isSequentialOrUndefInRange instead.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225216 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 22:08:48 +00:00
Simon Pilgrim
dd0552884b [X86][SSE] Fixed description for isSequentialOrUndefInRange. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225202 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 21:09:48 +00:00
Craig Topper
9bf73516cb Replace several 'assert(false' with 'llvm_unreachable' or fold a condition into the assert.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225160 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 10:15:49 +00:00
Craig Topper
a5ba79effa [X86] Remove the predicates from the register forms of the 2-byte inc and dec instructions. Remove the 32-bit mode only versions that existed for the disassembler. Move the patterns out of the instructions so they can still be qualified with predicates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225157 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 08:19:12 +00:00
Craig Topper
34f96b9cc3 [X86] Simplify code a little by just summing flags instead of conditionally incrementing. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225156 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 08:19:10 +00:00
Craig Topper
a481b50d75 [X86] Remove unnecessary redeclaration of a variable with the same assignment as the beginning of the function. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225155 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 08:19:07 +00:00
Craig Topper
d571fdb20b [X86] Remove a strange fixme referring to a hack that doesn't seem to exist since the code is in a comment. Can't figure out what the body of the 'if' was supposed to be anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225154 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 08:19:05 +00:00
Craig Topper
1d8d3b62ad [x86] Reduce text duplication for similar operand class declarations in tablegen instruction info. No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225153 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 08:19:03 +00:00
Craig Topper
68f83ee530 [X86] Fix the immediate size to match the address size in the operand types for the move to/from absolute memory instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225152 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 08:18:59 +00:00
Craig Topper
00d70e98f0 Minor cleanup to all the switches after MatchInstructionImpl in all the AsmParsers.
Make sure they all have llvm_unreachable on the default path out of the switch. Remove unnecessary "default: break". Remove a 'return' after unreachable. Fix some indentation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225114 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-03 08:16:34 +00:00
Craig Topper
01c99892ca [X86] Disassembler support for move to/from %rax with a 32-bit memory offset when REX.W and AdSize prefixes are both present.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225099 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-03 00:00:20 +00:00
Craig Topper
e3c96460b2 [X86] Use 32-bit sign extended immediate for 64-bit LOCK_ArithBinOp with sign extended immediate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225098 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-03 00:00:14 +00:00
Andrea Di Biagio
7fee03e4d5 Improved comments. No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225080 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-02 10:47:46 +00:00
Craig Topper
8ae0fd2bb3 [X86] Bring some better consistency to the naming of the move to/from %al/ax/eax/rax with memory offset.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225078 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-02 07:36:23 +00:00
Craig Topper
71fc42dbf6 [X86] Make the instructions that use AdSize16/32/64 co-exist together without using mode predicates.
This is necessary to allow the disassembler to be able to handle AdSize32 instructions in 64-bit mode when address size prefix is used.

Eventually we should probably also support 'addr32' and 'addr16' in the assembler to override the address size on some of these instructions. But for now we'll just use special operand types that will look up the current mode size to select the right instruction.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225075 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-02 07:02:25 +00:00
Rafael Espindola
8093abb745 Add r224985 back with a fix.
The issue was that AArch64 has additional restrictions on when local
relocations can be used. We have to take those into consideration when
deciding whether to put an L symbol in the symbol table.

Original message:

Remove doesSectionRequireSymbols.

In an assembly expression like

bar:
.long L0 + 1

the intended semantics is that bar will contain a pointer one byte past L0.

In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell that a relocation must point to the
end of a string, since that would look just like the start of the next.

The solution used in ELF is to use relocations with symbols if there is a
non-zero addend.

In MachO before this patch we would just keep all symbols in some sections.

This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.

This patch implements the non-zero addend logic for MachO too.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225048 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-31 17:19:34 +00:00
Rafael Espindola
937e781f49 Revert "Remove doesSectionRequireSymbols."
This reverts commit r224985.

I am investigating why it made an Apple bot unhappy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225044 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-31 16:06:48 +00:00
Craig Topper
51f423ff30 [X86] Fix disassembly of absolute moves to work correctly in 16 and 32-bit modes with all 4 combinations of OpSize and AdSize prefixes being present or not.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225036 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-31 07:07:31 +00:00
Craig Topper
e8ffd99e4e [x86] Simplify detection of jcxz/jecxz/jrcxz in disassembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225035 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-31 07:07:11 +00:00
Peter Collingbourne
d8ae3e1fee x86_64: Fix calls to __morestack under the large code model.
Under the large code model, we cannot assume that __morestack lives within
2^31 bytes of the call site, so we cannot use pc-relative addressing. We
cannot perform the call via a temporary register, as the rax register may
be used to store the static chain, and all other suitable registers may be
either callee-save or used for parameter passing. We cannot use the stack
at this point either because __morestack manipulates the stack directly.

To avoid these issues, perform an indirect call via a read-only memory
location containing the address.

This solution is not perfect, as it assumes that the .rodata section
is laid out within 2^31 bytes of each function body, but this seems to
be sufficient for JIT.
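
A sketch of the emitted pattern (hypothetical local label):

  .section .rodata
  .Lmorestack_addr:
    .quad __morestack

  .text
  # in the prologue, instead of a pc-relative "callq __morestack":
  callq *.Lmorestack_addr(%rip)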

Differential Revision: http://reviews.llvm.org/D6787

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225003 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-30 20:05:19 +00:00
Rafael Espindola
65300b95e6 Remove doesSectionRequireSymbols.
In an assembly expression like

bar:
.long L0 + 1

the intended semantics is that bar will contain a pointer one byte past L0.

In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell that a relocation must point to the
end of a string, since that would look just like the start of the next.

The solution used in ELF is to use relocations with symbols if there is a
non-zero addend.

In MachO before this patch we would just keep all symbols in some sections.

This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.

This patch implements the non-zero addend logic for MachO too.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224985 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-30 13:13:27 +00:00
Craig Topper
d52bd88fad [X86] Fix some cases where some 8-bit instructions were marked as being convertible to three address instructions, but aren't really.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224940 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-29 16:25:26 +00:00
Craig Topper
67044e9a6a [X86] Add the 0x82 instructions to the disassembler. They are identical in functionality to the 0x80 opcode instructions, but are not valid in 64-bit mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224939 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-29 16:25:23 +00:00
Craig Topper
702d11e595 [x86] Refactor some tablegen instruction info classes slightly to prepare for another change. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224938 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-29 16:25:22 +00:00
Craig Topper
b96ee8810f [x86] Remove unused classes from tablegen instruction info.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224937 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-29 16:25:19 +00:00
Rafael Espindola
a21d820952 Add segmented stack support for DragonFlyBSD.
Patch by Michael Neumann.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224936 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-29 15:47:28 +00:00
Rafael Espindola
2a1c1c9dea Refactor duplicated code.
No intended functionality change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224935 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-29 15:18:31 +00:00
Keno Fischer
41bda9f201 [X86][ISel] Fix a regression I introduced in r224884
In the else case, ResultReg was not checked for validity.
To my surprise, this case was not hit in any of the
existing test cases. This includes a new test case
that tests this path.

Also drop the `target triple` declaration from the
original test as suggested by H.J. Lu, because
apparently with it the test won't be run on Linux

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224901 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-28 15:20:57 +00:00
Michael Kuperstein
bfa4a373f4 [X86] Add missing memory variants to AVX false dependency breaking
Adds missing memory instruction variants to AVX false dependency breaking handling. (SSE was handled in r224246)

Differential Revision: http://reviews.llvm.org/D6780

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224900 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-28 13:15:05 +00:00
Andrea Di Biagio
70a7cda495 [CodeGenPrepare] Teach when it is profitable to speculate calls to @llvm.cttz/ctlz.
If the control flow is modelling an if-statement where the only instruction in
the 'then' basic block (excluding the terminator) is a call to cttz/ctlz,
CodeGenPrepare can try to speculate the cttz/ctlz call and simplify the control
flow graph.

Example:
\code
entry:
  %cmp = icmp eq i64 %val, 0
  br i1 %cmp, label %end.bb, label %then.bb

then.bb:
  %c = tail call i64 @llvm.cttz.i64(i64 %val, i1 true)
  br label %end.bb

end.bb:
  %cond = phi i64 [ %c, %then.bb ], [ 64, %entry]
\endcode

In this example, basic block %then.bb is taken if value %val is not zero.
Also, the phi node in %end.bb would propagate the size in bits of %val (64)
only if %val is equal to zero.

With this patch, CodeGenPrepare will try to hoist the call to cttz from %then.bb
into basic block %entry only if cttz is cheap to speculate for the target.
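
A sketch of the speculated form (reusing the example above; the exact output
shape is illustrative):

entry:
  %c = tail call i64 @llvm.cttz.i64(i64 %val, i1 true)
  %cmp = icmp eq i64 %val, 0
  %cond = select i1 %cmp, i64 64, i64 %c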

Added two new hooks in TargetLowering.h to let targets customize the behavior
(i.e. decide whether it is cheap or not to speculate calls to cttz/ctlz). The
two new methods are 'isCheapToSpeculateCtlz' and 'isCheapToSpeculateCttz'.
By default, both methods return 'false'.
On X86, method 'isCheapToSpeculateCtlz' returns true only if the target has
LZCNT. Method 'isCheapToSpeculateCttz' only returns true if the target has BMI.

Differential Revision: http://reviews.llvm.org/D6728


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224899 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-28 11:07:35 +00:00
Craig Topper
04c853b269 [x86] Prevent instruction selection of AVX512 cmp.ps/pd/ss/sd intrinsics with illegal immediates. Correctly this time. I did the wrong patterns the first time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224891 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-27 20:08:45 +00:00
Aaron Ballman
22376afd76 Fixing another -Wunused-variable warning, this time in release builds without asserts. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224889 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-27 19:17:53 +00:00
Aaron Ballman
88e25192c2 Removing a variable that is set but never used, to silence a -Wunused-but-set-variable warning; NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224888 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-27 19:01:19 +00:00
Craig Topper
d840bf4ba9 [x86] Prevent instruction selection of AVX512 cmp.ps/pd/ss/sd intrinsics with illegal immediates. Forgot to do this when I did SSE/SSE2/AVX/AVX2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224887 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-27 18:51:06 +00:00
Craig Topper
3e9bf4c0d0 [x86] Assert on invalid immediates in the instruction printer for cmp.ps/pd/ss/sd instead of truncating the immediate. The assembly parser and instruction selection shouldn't generate invalid immediates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224886 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-27 18:11:00 +00:00
Craig Topper
6ba84e58da [x86] Prevent llvm.x86.cmp.ps/pd/ss/sd from being selected with bad immediates. The frontend now checks this when the builtin is used. This will allow the instruction printer to not have to deal with invalid immediates on these instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224885 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-27 18:10:56 +00:00
Keno Fischer
cc80af1b4f [FastIsel][X86] Fix invalid register replacement for bool args
Summary:
Consider the following IR:

  %3 = load i8* undef
  %4 = trunc i8 %3 to i1
  %5 = call %jl_value_t.0* @foo(..., i1 %4, ...)
  ret %jl_value_t.0* %5

Bools (that are the result of direct truncs) are lowered as whatever
the argument to the trunc was, plus an "and 1", causing the part of the
MBB responsible for this argument to look something like this:

  %vreg8<def,tied1> = AND8ri %vreg7<kill,tied0>, 1, %EFLAGS<imp-def>; GR8:%vreg8,%vreg7

Later, when the load is lowered, it will insert

  %vreg15<def> = MOV8rm %vreg14, 1, %noreg, 0, %noreg; mem:LD1[undef] GR8:%vreg15 GR64:%vreg14

but remember to (at the end of isel) replace vreg7 by vreg15. Now for
the bug. In fast isel lowering, we mistakenly mark vreg8 as the result
of the load instead of the trunc. This adds a fixup to have
vreg8 replaced by whatever the result of the load is as well, so
we end up with

  %vreg15<def,tied1> = AND8ri %vreg15<kill,tied0>, 1, %EFLAGS<imp-def>; GR8:%vreg15

which is an SSA violation and causes problems later down the road.

This fixes PR21557.

Test Plan: Test test case from PR21557 is added to the test suite.

Reviewers: ributzka

Reviewed By: ributzka

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D6245

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224884 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-27 13:10:15 +00:00
Craig Topper
a996db696b [X86] Add the debug registers DR8-DR15 so we can assemble and disassemble references to them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224862 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-26 18:20:05 +00:00
Craig Topper
6eb3e3ce10 [X86] Don't fail disassembly if REX.R/REX.B is used on an MMX register. Similar fix to not fail to disassembler CR9-CR15 references.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224861 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-26 18:19:44 +00:00
Craig Topper
654a66dbd3 Teach the disassembler to handle illegal immediates on (v)cmpps/pd/ss/sd instructions. Instead of rejecting, we'll just generate the _alt forms that don't try to alter the mnemonic. While I'm here, merge some common code in the instruction printers for the condition code replacement and fix the mask on SSE to be 3 bits instead of 4.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224846 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-26 06:36:28 +00:00
Craig Topper
50d894e4d3 Use MCPhysReg for table of register encodings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224845 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-26 06:36:23 +00:00
Elena Demikhovsky
b31322328a Masked Load/Store - Changed the order of parameters in intrinsics.
No functional changes.
The documentation is coming.
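
For reference, a sketch of the resulting operand order on the generic masked
intrinsics of this era (declarations only; assuming this is the ordering the
commit settles on):

declare <8 x float> @llvm.masked.load.v8f32(<8 x float>* %ptr, i32 %align,
                                            <8 x i1> %mask, <8 x float> %passthru)
declare void @llvm.masked.store.v8f32(<8 x float> %val, <8 x float>* %ptr,
                                      i32 %align, <8 x i1> %mask)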



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224829 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-25 07:49:20 +00:00
Craig Topper
3bc4397f1f [X86] Remove the single AdSize indicator and replace it with separate AdSize16/32/64 flags.
This removes a hardcoded list of instructions in the CodeEmitter. Eventually I intend to remove the predicates on the affected instructions since in any given mode two of them are valid if we supported addr32/addr16 prefixes in the assembler.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224809 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-24 06:05:22 +00:00
Elena Demikhovsky
1a637e9fc0 AVX-512: Added FMA instructions, intrinsics and tests for KNL and SKX targets
by Asaf Badouh

http://reviews.llvm.org/D6456



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224764 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-23 10:30:39 +00:00
Elena Demikhovsky
6709428067 AVX-512: BLENDM - fixed encoding of the broadcast version
Added more intrinsics and encoding tests.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224760 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-23 09:36:28 +00:00
Jim Grosbach
860122b3b7 X86: Don't over-align combined loads.
When combining consecutive loads+inserts into a single vector load,
we should keep the alignment of the base load. Doing otherwise can, and does,
lead to using overly aligned instructions. In the included test case, for
example, using a 32-byte vmovaps on a 16-byte aligned value. Oops.

rdar://19190968

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224746 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-23 00:35:23 +00:00
Reid Kleckner
34b7fde802 Make musttail more robust for vector types on x86
Previously I tried to plug musttail into the existing vararg lowering
code. That turned out to be a mistake, because non-vararg calls use
significantly different register lowering, even on x86. For example, AVX
vectors are usually passed in registers to normal functions and memory
to vararg functions.  Now musttail uses a completely separate lowering.

Hopefully this can be used as the basis for non-x86 perfect forwarding.
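
A sketch of the forwarding shape this enables (hypothetical thunk; 3.6-era
typed-pointer call syntax):

define void @thunk(i8* %this, ...) {
  musttail call void (i8*, ...)* @impl(i8* %this, ...)
  ret void
}

declare void @impl(i8*, ...)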

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D6156

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224745 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-22 23:58:37 +00:00
Bruno Cardoso Lopes
ba059464c3 [x86] Add vector @llvm.ctpop intrinsic custom lowering
Currently, when ctpop is supported for scalar types, the expansion of
@llvm.ctpop.vXiY uses vector element extractions, insertions and individual
calls to @llvm.ctpop.iY. When not, expansion with bit-math operations is used
for the scalar calls.

Local haswell measurements show that we can improve vector @llvm.ctpop.vXiY
expansion in some cases by using a vector parallel bit-twiddling approach,
based on:

v = v - ((v >> 1) & 0x55555555);
v = (v & 0x33333333) + ((v >> 2) & 0x33333333);
v = (v + (v >> 4)) & 0x0F0F0F0F;
v = v + (v >> 8);
v = v + (v >> 16);
v = v & 0x0000003F;
(from http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel)

When scalar ctpop isn't supported, the approach above performs better for
v2i64, v4i32, v4i64 and v8i32 (see numbers below). And even when scalar ctpop
is supported, this approach performs ~2x better for v8i32.

Here, x86_64 implies -march=corei7-avx without ctpop and x86_64h includes ctpop
support with -march=core-avx2.

== [x86_64h - new]
v8i32: 0.661685
v4i32: 0.514678
v4i64: 0.652009
v2i64: 0.324289
== [x86_64h - old]
v8i32: 1.29578
v4i32: 0.528807
v4i64: 0.65981
v2i64: 0.330707

== [x86_64 - new]
v8i32: 1.003
v4i32: 0.656273
v4i64: 1.11711
v2i64: 0.754064
== [x86_64 - old]
v8i32: 2.34886
v4i32: 1.72053
v4i64: 1.41086
v2i64: 1.0244

More work for other vector types will come next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224725 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-22 19:45:43 +00:00
Elena Demikhovsky
c1aa521fb4 AVX-512: Added all forms of BLENDM instructions,
intrinsics, encoding tests for AVX-512F and skx instructions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224707 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-22 13:52:48 +00:00
Craig Topper
b10afb51d6 [X86] Add hasSideEffects = 0 to CALLpcrel16. This matches what is inferred from patterns for the 32-bit version.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224692 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-21 20:05:06 +00:00
Craig Topper
b8f8f2dbed [X86] Swap operand order in Intel syntax on a bunch of aliases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224687 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-20 23:05:59 +00:00
Craig Topper
a9bae8c3da [X86] Swap operand order of imul aliases in Intel syntax. Also disable printing of the alias instead of the real instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224686 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-20 23:05:57 +00:00
Craig Topper
0c8f0f0403 [X86] Remove '*' from asm strings in far call/jump aliases for Intel syntax.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224685 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-20 23:05:55 +00:00
Craig Topper
58331b67cb [X86] Don't swap the order of segment and offset in immediate form of far call/jump in Intel syntax.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224684 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-20 23:05:52 +00:00
Craig Topper
ae39073d99 [X86] Immediate forms of far call/jump are not valid in x86-64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224678 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-20 07:43:27 +00:00
Elena Demikhovsky
573b762b68 Masked load and store codegen - fixed 128-bit vectors
The codegen failed for 128-bit types on AVX2.
I added patterns in the td files and added tests.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224647 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 23:27:57 +00:00
Reid Kleckner
0f85d54670 Add the ExceptionHandling::MSVC enumeration
It is intended to be used for a family of personality functions that
have similar IR preparation requirements. Typically when interoperating
with MSVC personality functions, bits of functionality need to be
outlined from the main function into helper functions. There is also
usually more than one landing pad per invoke, which does not match the
LLVM IR landingpad representation.

None of this is implemented yet. This change just adds a new enum that
is active for *-windows-msvc and delegates to the EH removal preparation
pass.  No functionality change for other targets.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224625 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 22:19:48 +00:00
Sanjay Patel
9ccbf1a260 Model sqrtss as a binary operation with one source operand tied to the destination (PR14221)
This is a continuation of r167064 ( http://llvm.org/viewvc/llvm-project?view=revision&revision=167064 ).
That patch started to fix PR14221 ( http://llvm.org/bugs/show_bug.cgi?id=14221 ), but it was not completed. 

Differential Revision: http://reviews.llvm.org/D6330



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224624 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 22:16:28 +00:00
Robert Khasanov
d25d7bb372 [AVX512] Enable FP arithmetic lowering for AVX512VL subsets.
Added RegOp2MemOpTable4 to transform the 4th operand from register to memory in merge-masked versions of instructions.
Added lowering tests.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224516 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-18 12:28:22 +00:00
Craig Topper
539998ec8d [X86] Use correct opsize on indirect call and jump aliases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224497 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-18 05:02:12 +00:00
Craig Topper
4ae31b7e7d [X86] Don't use PS prefix on LDMXCSR/STMXCSR.
As near as I can tell, prefixes are ignored on these instructions, except for a comment in the Intel docs about 0xf3. The binutils disassembler seems to ignore prefixes on these instructions. Our disassembler still doesn't distinguish PS and "no prefix" well enough for this to make a functional change, but it helps with experiments I'm doing on a potential new disassembler table builder.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224496 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-18 05:02:10 +00:00
Craig Topper
c1dd90797f [X86] Remove unnecessary 'In64BitMode' predicate for instructions that already indicate use of REX.W.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224495 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-18 05:02:08 +00:00
Michael Kuperstein
fd350586f5 [DAGCombine] Slightly improve lowering of BUILD_VECTOR into a shuffle.
This handles the case of a BUILD_VECTOR being constructed out of elements extracted from a vector twice the size of the result vector. Previously this was always scalarized. Now, we try to construct a shuffle node that feeds on extract_subvectors.

This fixes PR15872 and provides a partial fix for PR21711.
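
A sketch of the input shape now recognized (hypothetical function): a
<4 x i32> built from elements of a <8 x i32> can become a shuffle fed by
extract_subvectors instead of four scalar extract/insert pairs:

define <4 x i32> @even_lanes(<8 x i32> %v) {
  %e0 = extractelement <8 x i32> %v, i32 0
  %e1 = extractelement <8 x i32> %v, i32 2
  %e2 = extractelement <8 x i32> %v, i32 4
  %e3 = extractelement <8 x i32> %v, i32 6
  %b0 = insertelement <4 x i32> undef, i32 %e0, i32 0
  %b1 = insertelement <4 x i32> %b0, i32 %e1, i32 1
  %b2 = insertelement <4 x i32> %b1, i32 %e2, i32 2
  %b3 = insertelement <4 x i32> %b2, i32 %e3, i32 3
  ret <4 x i32> %b3
}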

Differential Revision: http://reviews.llvm.org/D6678

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224429 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 12:32:17 +00:00
Quentin Colombet
1e2604dccc [CodeGenPrepare] Reapply r224351 with a fix for the assertion failure:
The type promotion helper does not support vector types, so it does not kick
in in such cases.

Original commit message:
[CodeGenPrepare] Move sign/zero extensions near loads using type promotion.

This patch extends the optimization in CodeGenPrepare that moves a sign/zero
extension near a load when the target can combine them. The optimization may
promote any operations between the extension and the load to make that possible.

Although this optimization may be beneficial for all targets, in particular
AArch64, this is enabled for X86 only as I have not benchmarked it for other
targets yet.


** Context **

Most targets feature extended loads, i.e., loads that perform a zero or sign
extension for free. In that context it is interesting to expose such pattern in
CodeGenPrepare so that the instruction selection pass can form such loads.
Sometimes, this pattern is blocked because of instructions between the load and
the extension. When those instructions are promotable to the extended type, we
can expose this pattern.


** Motivating Example **

Let us consider an example:
define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
  %ld = load i8* %addr1
  %zextld = zext i8 %ld to i32
  %ld2 = load i32* %addr2
  %add = add nsw i32 %ld2, %zextld
  %sextadd = sext i32 %add to i64
  %zexta = zext i8 %a to i32
  %addza = add nsw i32 %zexta, %zextld
  %sextaddza = sext i32 %addza to i64
  %addb = add nsw i32 %b, %zextld
  %sextaddb = sext i32 %addb to i64
  call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
  ret void
}

As it is, this IR generates the following assembly on x86_64:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movl  (%rsi), %esi     # plain load
  addl  %eax, %esi       # 32-bit add
  movslq  %esi, %rdi     # sign extend the result of add
  movzbl  %dl, %edx      # zero extend the first argument
  addl  %eax, %edx       # 32-bit add
  movslq  %edx, %rsi     # sign extend the result of add
  addl  %eax, %ecx       # 32-bit add
  movslq  %ecx, %rdx     # sign extend the result of add
[...]
The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.

Now, by promoting the additions to form more extended loads we would generate:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movslq  (%rsi), %rdi   # sign-extended load
  addq  %rax, %rdi       # 64-bit add
  movzbl  %dl, %esi      # zero extend the first argument
  addq  %rax, %rsi       # 64-bit add
  movslq  %ecx, %rdx     # sign extend the second argument
  addq  %rax, %rdx       # 64-bit add
[...]
The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.

This kind of sequence happens a lot in code using 32-bit indexes on 64-bit
architectures.

Note: The throughput numbers are similar on Sandy Bridge and Haswell.


** Proposed Solution **

To avoid the penalty of all these sign/zero extensions, we merge them in the
loads at the beginning of the chain of computation by promoting all the chain of
computation on the extended type. The promotion is done if and only if we do not
introduce new extensions, i.e., if we do not degrade the code quality.
To achieve this, we extend the existing “move ext to load” optimization with the
promotion mechanism introduced to match larger patterns for addressing mode
(r200947).
The idea of this extension is to perform the following transformation:
ext(promotableInst1(...(promotableInstN(load))))
=>
promotedInst1(...(promotedInstN(ext(load))))

The promotion mechanism in that optimization is enabled by a new TargetLowering
switch, which is off by default. In other words, by default, the optimization
performs the “move ext to load” optimization as it was before this patch.


** Performance **

Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
Tested Optimization Levels: O3/Os
Tests: llvm-testsuite + externals.
Results:
- No regressions besides noise.
- Improvements:
CINT2006/473.astar:  ~2%
Benchmarks/PAQ8p: ~2%
Misc/perlin: ~3%

The results are consistent for both O3 and Os.

<rdar://problem/18310086>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224402 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 01:36:17 +00:00
Reid Kleckner
0c7f4e46b6 Revert "[CodeGenPrepare] Move sign/zero extensions near loads using type promotion."
This reverts commit r224351. It causes assertion failures when building
ICU.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224397 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 00:29:23 +00:00
Simon Pilgrim
2385e98928 [X86][SSE] Vector double -> float conversion memory folding (cvtpd2ps)
Added a missing memory folding relationship for the (V)CVTPD2PS instruction - we can safely fold these for stack reloads.

Differential Revision: http://reviews.llvm.org/D6663



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224383 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 22:30:10 +00:00
JF Bastien
50b8835451 x86-32: PUSHF/POPF use/def EFLAGS
Summary: As a side-quest for D6629 jvoung pointed out that I should use -verify-machineinstrs and this found a bug in x86-32's handling of EFLAGS for PUSHF/POPF. This patch fixes the use/def, and adds -verify-machineinstrs to all x86 tests which contain 'EFLAGS'. One exception: this patch leaves inline-asm-fpstack.ll as-is because it fails -verify-machineinstrs in a way unrelated to EFLAGS. This patch also modifies cmpxchg-clobber-flags.ll along the lines of what D6629 already does by also testing i386.

Test Plan: ninja check

Reviewers: t.p.northover, jvoung

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D6687

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224359 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 20:15:45 +00:00
Quentin Colombet
93b6e016b1 [CodeGenPrepare] Move sign/zero extensions near loads using type promotion.
This patch extends the optimization in CodeGenPrepare that moves a sign/zero
extension near a load when the target can combine them. The optimization may
promote any operations between the extension and the load to make that possible.

Although this optimization may be beneficial for all targets, in particular
AArch64, this is enabled for X86 only as I have not benchmarked it for other
targets yet.


** Context **

Most targets feature extended loads, i.e., loads that perform a zero or sign
extension for free. In that context it is interesting to expose such pattern in
CodeGenPrepare so that the instruction selection pass can form such loads.
Sometimes, this pattern is blocked because of instructions between the load and
the extension. When those instructions are promotable to the extended type, we
can expose this pattern.


** Motivating Example **

Let us consider an example:
define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
  %ld = load i8* %addr1
  %zextld = zext i8 %ld to i32
  %ld2 = load i32* %addr2
  %add = add nsw i32 %ld2, %zextld
  %sextadd = sext i32 %add to i64
  %zexta = zext i8 %a to i32
  %addza = add nsw i32 %zexta, %zextld
  %sextaddza = sext i32 %addza to i64
  %addb = add nsw i32 %b, %zextld
  %sextaddb = sext i32 %addb to i64
  call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
  ret void
}

As it is, this IR generates the following assembly on x86_64:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movl  (%rsi), %esi     # plain load
  addl  %eax, %esi       # 32-bit add
  movslq  %esi, %rdi     # sign extend the result of add
  movzbl  %dl, %edx      # zero extend the first argument
  addl  %eax, %edx       # 32-bit add
  movslq  %edx, %rsi     # sign extend the result of add
  addl  %eax, %ecx       # 32-bit add
  movslq  %ecx, %rdx     # sign extend the result of add
[...]
The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.

Now, by promoting the additions to form more extended loads we would generate:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movslq  (%rsi), %rdi   # sign-extended load
  addq  %rax, %rdi       # 64-bit add
  movzbl  %dl, %esi      # zero extend the first argument
  addq  %rax, %rsi       # 64-bit add
  movslq  %ecx, %rdx     # sign extend the second argument
  addq  %rax, %rdx       # 64-bit add
[...]
The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.

This kind of sequence happens a lot in code using 32-bit indexes on 64-bit
architectures.

Note: The throughput numbers are similar on Sandy Bridge and Haswell.


** Proposed Solution **

To avoid the penalty of all these sign/zero extensions, we merge them in the
loads at the beginning of the chain of computation by promoting all the chain of
computation on the extended type. The promotion is done if and only if we do not
introduce new extensions, i.e., if we do not degrade the code quality.
To achieve this, we extend the existing “move ext to load” optimization with the
promotion mechanism introduced to match larger patterns for addressing mode
(r200947).
The idea of this extension is to perform the following transformation:
ext(promotableInst1(...(promotableInstN(load))))
=>
promotedInst1(...(promotedInstN(ext(load))))
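
As a C++ analogy of this rewrite (a sketch under the assumption that the
32-bit add does not overflow; function names are illustrative): the nsw flag
is what lets the extension hoist through the add.

  #include <cstdint>

  int64_t before(const int8_t *p, int32_t x) {
    int32_t v = *p + x;              // promotable add sits between the load and the sext
    return (int64_t)v;               // ext(promotableInst(load))
  }

  int64_t after(const int8_t *p, int32_t x) {
    return (int64_t)*p + (int64_t)x; // promotedInst(ext(load)): extended load, 64-bit add
  }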

The promotion mechanism in that optimization is enabled by a new TargetLowering
switch, which is off by default. In other words, by default, the optimization
performs the “move ext to load” optimization as it was before this patch.


** Performance **

Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
Tested Optimization Levels: O3/Os
Tests: llvm-testsuite + externals.
Results:
- No regression beside noise.
- Improvements:
CINT2006/473.astar:  ~2%
Benchmarks/PAQ8p: ~2%
Misc/perlin: ~3%

The results are consistent for both O3 and Os.

<rdar://problem/18310086>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224351 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 19:09:03 +00:00
Robert Khasanov
41100285aa [AVX512] Enable integer arithmetic lowering for AVX512BW/VL subsets.
Added lowering tests.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224349 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 18:24:07 +00:00
Sanjay Patel
8fe9488a40 combine consecutive subvector 16-byte loads into one 32-byte load
This is a fix for PR21709 ( http://llvm.org/bugs/show_bug.cgi?id=21709 ).
When we have 2 consecutive 16-byte loads that are merged into one 32-byte vector,
we can use a single 32-byte load instead. 
But we don't do this for SandyBridge / IvyBridge because they have slower 32-byte memops.
We also don't bother using 32-byte *integer* loads on a machine that only has AVX1 (btver2)
because those operands would have to be split in half anyway since there is no support for
32-byte integer math ops.
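
In intrinsic form (a sketch with standard AVX intrinsics; function names are
illustrative), the combine replaces the glued pair with a single load:

  #include <immintrin.h>

  __m256 before(const float *p) {            // two 16-byte loads, then glue
    __m128 lo = _mm_loadu_ps(p);
    __m128 hi = _mm_loadu_ps(p + 4);
    return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);
  }

  __m256 after(const float *p) {             // one 32-byte load
    return _mm256_loadu_ps(p);
  }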

Differential Revision: http://reviews.llvm.org/D6492



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224344 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 16:30:01 +00:00
Robert Khasanov
43906dedbf [AVX512] Add a comment for avx512_broadcast_pat multiclass
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224341 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 16:12:11 +00:00
Elena Demikhovsky
4519623e9f X86: Added FeatureVectorUAMem for all AVX architectures.
According to AVX specification:

"Most arithmetic and data processing instructions encoded using the VEX prefix and
performing memory accesses have more flexible memory alignment requirements
than instructions that are encoded without the VEX prefix. Specifically,
With the exception of explicitly aligned 16 or 32 byte SIMD load/store instructions,
most VEX-encoded, arithmetic and data processing instructions operate in
a flexible environment regarding memory address alignment, i.e. VEX-encoded
instruction with 32-byte or 16-byte load semantics will support unaligned load
operation by default. Memory arguments for most instructions with VEX prefix
operate normally without causing #GP(0) on any byte-granularity alignment
(unlike Legacy SSE instructions)."

The same for AVX-512.

This change does not affect anything right now, because only the "memop pattern fragment"
depends on FeatureVectorUAMem and it is not used in AVX patterns.
All AVX patterns are based on the "unaligned load" anyway.
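
In intrinsic terms (a sketch using the standard <immintrin.h> AVX intrinsics;
the wrapper names are illustrative), only the explicitly aligned form keeps
the alignment fault:

  #include <immintrin.h>

  __m256 load_flexible(const float *p) {
    return _mm256_loadu_ps(p);   // vmovups: any alignment, no #GP(0)
  }

  __m256 load_strict(const float *p) {
    return _mm256_load_ps(p);    // vmovaps: faults if p is not 32-byte aligned
  }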



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224330 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 09:10:08 +00:00
JF Bastien
13c782674a x86: Emit LOCK prefix after DATA16
Summary: x86 allows either ordering for the LOCK and DATA16 prefixes, but using GCC+GAS leads to different code generation than using LLVM. This change matches the order that GAS emits the x86 prefixes when a semicolon isn't used in inline assembly (see tc-i386.c comment before define LOCK_PREFIX), and helps simplify tooling that operates on the instruction's byte sequence (such as NaCl's validator). This change shouldn't have any performance impact.
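
For illustration, the two prefix orders for `lock xchgw %ax, (%rdx)` written
out as raw bytes (a sketch; the byte values are my own working-out, not taken
from the patch):

  // DATA16 = 0x66, LOCK = 0xF0; both sequences decode identically.
  unsigned char gas_order[] = {0x66, 0xF0, 0x87, 0x02}; // DATA16 then LOCK (order matched here)
  unsigned char old_order[] = {0xF0, 0x66, 0x87, 0x02}; // LOCK then DATA16 (previous output)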

Test Plan: ninja check

Reviewers: craig.topper, jvoung

Subscribers: jfb, llvm-commits

Differential Revision: http://reviews.llvm.org/D6630

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224283 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 22:34:58 +00:00
Ahmed Bougacha
77effd8d7e [X86] Also pretty-print shuffle mask for INSERTPS rm variants.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224260 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 19:17:54 +00:00
Michael Kuperstein
299e0d4c24 [X86] Break false dependencies before partial register updates when the source operand is in memory
Adds the various "rm" instruction variants into the list of instructions that have a partial register update. Also adds all variants of SQRTSD that were missing in the original list.

Differential Revision: http://reviews.llvm.org/D6620

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224246 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 13:18:21 +00:00
Elena Demikhovsky
3f2027522c AVX-512: Added EXPAND instructions and intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224241 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 10:03:52 +00:00
Elena Demikhovsky
1c3a1516f8 Loop Vectorizer minor changes in the code -
some comments, function names, indentation.

Reviewed here: http://reviews.llvm.org/D6527


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224218 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-14 09:43:50 +00:00
Robert Khasanov
5dc8ac87f1 [AVX512] Enabling bit logic lowering
Added lowering tests.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224132 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 17:02:18 +00:00
Robert Khasanov
a4f5a5525d [AVX512] Enabling MIN/MAX lowering.
Added lowering tests.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224127 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 15:10:43 +00:00
Robert Khasanov
b59ec5ad50 [AVX512] Minor fix in lowering pattern for broadcast instructions.
No functional change.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224122 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 14:21:30 +00:00
Sanjay Patel
6f44989d39 remove function names from comments; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224080 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 23:38:43 +00:00
Sanjay Patel
033d8ea7a9 return without temporary; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224076 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 23:30:36 +00:00
Matthias Braun
8ac056b9dd Enable MachineVerifier in debug mode for X86, ARM, AArch64, Mips.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224075 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 23:18:03 +00:00
Ahmed Bougacha
11fcb48306 [X86] Add a temporary testcase for PR21876/r223996.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224074 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 23:07:52 +00:00
Matthias Braun
5b17297b3d [CodeGen] Add print and verify pass after each MachineFunctionPass by default
Previously print+verify passes were added in a very unsystematic way, which is
annoying when debugging as you miss intermediate steps and allows bugs to stay
unnoticed when no verification is performed.

To make this change practical I added the possibility to explicitly disable
verification. I used this option in all places where no verification was
performed previously (because a lot of places actually don't pass the
MachineVerifier).
In the long term these problems should be fixed properly and verification
enabled after each pass. I'll enable some more verification in subsequent
commits.

This is the 2nd attempt at this after realizing that PassManager::add() may
actually delete the pass.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224059 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 21:26:47 +00:00
Rafael Espindola
428923cfe2 This reverts commit r224043 and r224042.
check-llvm was failing.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224045 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 20:03:57 +00:00
Matthias Braun
e9256e340b Enable machineverifier in debug mode for X86, ARM, AArch64, Mips
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224043 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 19:42:09 +00:00
Matthias Braun
71f56c4aac [CodeGen] Add print and verify pass after each MachineFunctionPass by default
Previously print+verify passes were added in a very unsystematic way, which is
annoying when debugging as you miss intermediate steps and allows bugs to stay
unnoticed when no verification is performed.

To make this change practical I added the possibility to explicitly disable
verification. I used this option in all places where no verification was
performed previously (because a lot of places actually don't pass the
MachineVerifier).
In the long term these problems should be fixed properly and verification
enabled after each pass. I'll enable some more verification in subsequent
commits.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224042 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 19:42:05 +00:00
Cameron McInally
14273ae2e4 [AVX512] Add support for 512b variable bit shift intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224028 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 17:13:05 +00:00
Elena Demikhovsky
11fb1d0eb5 AVX-512: Added all forms of COMPRESS instruction
+ intrinsics + tests


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224019 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 15:02:24 +00:00
Michael Kuperstein
6ad268ca66 [X86] When converting movs to pushes, don't assume MOVmi operand is an actual immediate
This should fix PR21878.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224010 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 11:26:16 +00:00
Elena Demikhovsky
bcb1a626b6 AVX-512: Fixed a bug in lowering setcc for MVT::i1 type
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224008 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 10:21:12 +00:00
Ahmed Bougacha
0aac0703f8 [X86] Add back AVX2 VR256 PMOVX patterns.
We can't reach those from zext, but other parts of the backend (the shuffle
lowering) generate 256-bit VZEXT nodes.

Fixes PR21876.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223996 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 04:32:17 +00:00
Sanjay Patel
3cd5b83bb8 Match new shuffle codegen for MOVHPD patterns
Add patterns to match SSE (shufpd) and AVX (vpermilpd) shuffle codegen
when storing the high element of a v2f64. The existing patterns were
only checking for an unpckh type of shuffle. 

http://llvm.org/bugs/show_bug.cgi?id=21791

Differential Revision: http://reviews.llvm.org/D6586



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223929 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-10 16:58:54 +00:00
Michael Kuperstein
89db49fb9b [X86] Make a code path in EltsFromConsecutiveLoads work only on vectors it expects
EltsFromConsecutiveLoads was apparently only ever called for 128-bit vectors, and assumed this implicitly. r223518 started calling it for AVX-sized vectors, causing the code path that had this assumption to crash.
This adds a check to make this path fire only for 128-bit vectors.

Differential Revision: http://reviews.llvm.org/D6579

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223922 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-10 08:46:12 +00:00
Robert Khasanov
648f7c7eb1 [AVX512] Added lowering for VBROADCASTSS/SD instructions.
Lowering patterns were written through the avx512_broadcast_pat multiclass, as the pattern generates VBROADCAST and COPY_TO_REGCLASS nodes.
Added lowering tests.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223804 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 18:45:30 +00:00
Robert Khasanov
c50f9f15f5 [AVX512] Added VPBROADCAST{BWDQ} (Load with Broadcast Integer Data from General Purpose Register) encodings for AVX512-BW/VL subsets
Added encoding tests.
        


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223787 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 16:38:41 +00:00
Chandler Carruth
de0cdb0890 [x86] Fix the test to actually test things for the CPU names, add the
missing barcelona CPU which that test uncovered, and remove the 32-bit
x86 CPUs which I really wasn't prepared to audit and test thoroughly.

If anyone wants to clean up the 32-bit only x86 CPUs, go for it.

Also, if anyone else wants to try to de-duplicate the AMD CPUs, that'd
be cool, but from the looks of it wouldn't save as much as it did for
the Intel CPUs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223774 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 14:25:55 +00:00
Aaron Ballman
93fac96d4b Removing an unused variable to silence a -Wunused-but-set-variable warning. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223773 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 13:20:11 +00:00
Chandler Carruth
8729843d31 [x86] Bring some sanity to the x86 CPU processor definitions.
Notably, this adds simple micro-architecture names for the Intel CPU
variants, and defines the old 'core'-based names as aliases. GCC has
started to simplify their documented interface to use these names as
well, so it seems like we can start to converge on a consistent pattern.

I'd appreciate Intel double checking the entries that aren't yet
documented widely, especially Atom (Bonnell and Silvermont), Knights
Landing, and Skylake. But this change shouldn't break any existing
users.

Also, ran clang-format to re-format this code and it actually worked
(modulo a tiny bug) so hopefully we can start to stop thinking about
formatting this stuff.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223769 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 10:58:36 +00:00
Elena Demikhovsky
ff3745b4ff AVX-512: Added some comments to ERI scalar intrinsics.
No functional change.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223761 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 07:06:32 +00:00
Michael Kuperstein
77c1b73211 [X86] Convert esp-relative movs of function arguments into pushes, step 1
This handles the simplest case for mov -> push conversion:
1. x86-32 calling convention, everything is passed through the stack.
2. There is no reserved call frame.
3. Only registers or immediates are pushed, no attempt to combine a mem-reg-mem sequence into a single PUSHmm.

Differential Revision: http://reviews.llvm.org/D6503

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223757 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 06:10:44 +00:00
Bruno Cardoso Lopes
43edafcc07 [CompactUnwind] Fix register encoding logic
Fix a compact unwind encoding logic bug which would try to encode
more callee-saved registers than it should, leading to an early bail-out
in the encoding logic and unnecessary fallback to DWARF frame mode.

Also remove no-compact-unwind.ll which was testing the wrong thing
based on this bug and move it to valid 'compact unwind' tests. Added
other few more tests too.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223676 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-08 18:18:32 +00:00
Andrea Di Biagio
eafdf26d89 [X86] Improved tablegen patterns for matching TZCNT/LZCNT.
Teach ISel how to match a TZCNT/LZCNT from a conditional move if the
condition code is X86_COND_NE.
Existing tablegen patterns only allowed matching TZCNT/LZCNT from an
X86cond with condition code equal to X86_COND_E. To avoid introducing
extra rules, I added an 'ImmLeaf' definition that checks if the
condition code is COND_E or COND_NE.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223668 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-08 17:47:18 +00:00
Andrea Di Biagio
ae16ff1c42 [X86] Improved lowering of packed v8i16 vector shifts by non-constant count.
Before this patch, the backend sub-optimally expanded the non-constant shift
count of a v8i16 shift into a sequence of two 'movd' plus 'movzwl'.

With this patch the backend checks if the target features sse4.1. If so, then
it lets the shuffle legalizer deal with the expansion of the shift amount.

Example:
;;
define <8 x i16> @test(<8 x i16> %A, <8 x i16> %B) {
  %shamt = shufflevector <8 x i16> %B, <8 x i16> undef, <8 x i32> zeroinitializer
  %shl = shl <8 x i16> %A, %shamt
  ret <8 x i16> %shl
}
;;

Before (with -mattr=+avx):
  vmovd  %xmm1, %eax
  movzwl  %ax, %eax
  vmovd  %eax, %xmm1
  vpsllw  %xmm1, %xmm0, %xmm0
  retq

Now:
  vpxor  %xmm2, %xmm2, %xmm2
  vpblendw  $1, %xmm1, %xmm2, %xmm1
  vpsllw  %xmm1, %xmm0, %xmm0
  retq


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223660 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-08 14:36:51 +00:00
Elena Demikhovsky
c4fbd3fd62 X86 intrinsics moved form X86ISelLowering.cpp to X86IntrinsicsInfo.h
X86ISelLowering.cpp has a long switch for intrinsics. I moved a part of
this long switch to the new intrinsics table in X86IntrinsicsInfo.h.
No functional changes, just code and compile time optimization.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223641 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-08 09:03:08 +00:00
Ahmed Bougacha
3b9ac8c7c3 [X86] Refactor PMOV[SZ]Xrm to add missing AVX2 patterns.
Most patterns will go away once the extload legalization changes land.

Differential Revision: http://reviews.llvm.org/D6125


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223567 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-06 01:31:07 +00:00
Ahmed Bougacha
f5e810be25 [X86] Cleanup FCOPYSIGN lowering. NFC intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223542 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-05 23:11:36 +00:00
Sanjay Patel
ab4ad4f98e Optimize merging of scalar loads for 32-byte vectors [X86, AVX]
Fix the poor codegen seen in PR21710 ( http://llvm.org/bugs/show_bug.cgi?id=21710 ).
Before we crack 32-byte build vectors into smaller chunks (and then subsequently
glue them back together), we should look for the easy case where we can just load
all elements in a single op.

An example of the codegen change is:

From:

vmovss  16(%rdi), %xmm1
vmovups (%rdi), %xmm0
vinsertps       $16, 20(%rdi), %xmm1, %xmm1
vinsertps       $32, 24(%rdi), %xmm1, %xmm1
vinsertps       $48, 28(%rdi), %xmm1, %xmm1
vinsertf128     $1, %xmm1, %ymm0, %ymm0
retq

To:

vmovups (%rdi), %ymm0
retq

Differential Revision: http://reviews.llvm.org/D6536



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223518 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-05 21:28:14 +00:00
Jan Wen Voung
a44126f432 Use 32-bit ebp for NaCl64 in a limited case: llvm.frameaddress.
Summary:
Follow up to [x32] "Use ebp/esp as frame and stack pointer":
http://reviews.llvm.org/D4617

In that earlier patch, NaCl64 was made to always use rbp.
That's needed for most cases because rbp should hold a full
64-bit address within the NaCl sandbox so that load/stores
off of rbp don't require sandbox adjustment (zeroing the top
32-bits, then filling those by adding r15).

However, llvm.frameaddress returns a pointer and pointers
are 32-bit for NaCl64. In this case, use ebp instead, which
will make the register copy type check. A similar mechanism
may be needed for llvm.eh.return, but is not added in this change.

Test Plan: test/CodeGen/X86/frameaddr.ll

Reviewers: dschuff, nadav

Subscribers: jfb, llvm-commits

Differential Revision: http://reviews.llvm.org/D6514

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223510 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-05 20:55:53 +00:00
Andrea Di Biagio
6a9a49d7ab [X86] Improved lowering of packed vector shifts to vpsllq/vpsrlq.
SSE2/AVX non-constant packed shift instructions only use the lower 64-bit of
the shift count. 

This patch teaches function 'getTargetVShiftNode' how to deal with shifts
where the shift count node is of type MVT::i64.

Before this patch, function 'getTargetVShiftNode' only knew how to deal with
shift count nodes of type MVT::i32. This forced the backend to wrongly
truncate the shift count to MVT::i32, and then zero-extend it back to MVT::i64.
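
A hedged intrinsic-level view of the same fact (standard SSE2 intrinsics; the
wrapper name is illustrative):

  #include <immintrin.h>

  // PSLLD reads its count from an XMM register but uses only its low 64 bits,
  // so an i64 count can be moved in directly, with no truncate/zero-extend round trip.
  __m128i shl_by(__m128i v, long long count) {
    return _mm_sll_epi32(v, _mm_cvtsi64_si128(count));
  }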


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223505 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-05 20:02:22 +00:00
Andrea Di Biagio
54529ed1c4 [X86] Avoid introducing extra shuffles when lowering packed vector shifts.
When lowering a vector shift node, the backend checks if the shift count is a
shuffle with a splat mask. If so, then it introduces an extra dag node to
extract the splat value from the shuffle. The splat value is then used
to generate a shift count of a target specific shift.

However, if we know that the shift count is a splat shuffle, we can use the
splat index 'I' to extract the I-th element from the first shuffle operand.
The advantage is that the splat shuffle may become dead since we no longer
use it.

Example:

;;
define <4 x i32> @example(<4 x i32> %a, <4 x i32> %b) {
  %c = shufflevector <4 x i32> %b, <4 x i32> undef, <4 x i32> zeroinitializer
  %shl = shl <4 x i32> %a, %c
  ret <4 x i32> %shl
}
;;

Before this patch, llc generated the following code (-mattr=+avx):
  vpshufd $0, %xmm1, %xmm1   # xmm1 = xmm1[0,0,0,0]
  vpxor  %xmm2, %xmm2, %xmm2
  vpblendw $3, %xmm1, %xmm2, %xmm1 # xmm1 = xmm1[0,1],xmm2[2,3,4,5,6,7]
  vpslld %xmm1, %xmm0, %xmm0
  retq

With this patch, the redundant splat operation is removed from the code.
  vpxor  %xmm2, %xmm2, %xmm2
  vpblendw $3, %xmm1, %xmm2, %xmm1 # xmm1 = xmm1[0,1],xmm2[2,3,4,5,6,7]
  vpslld %xmm1, %xmm0, %xmm0
  retq


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223461 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-05 12:13:30 +00:00
Eric Christopher
52978c2adf Rename the x86 isTargetMacho to isTargetMachO for uniformity.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223421 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-05 00:22:38 +00:00
Eric Christopher
62b1007007 Both of these subtargets have functions that check whether or
not the target is mach-o. Use them.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223420 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-05 00:22:35 +00:00
Ahmed Bougacha
3d5af84aa6 [X86] Delete dead code in fcopysign lowering. NFC.
r32900 introduced custom lowering for fcopysign, with two checks to
change the magnitude value's type if it's larger/smaller than the sign
value's type.  r32932 replaced that code for the smaller case.
r43205 did the same for the larger case, but left the old code, now dead.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223415 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 23:52:15 +00:00
Bruno Cardoso Lopes
9eb2a386c7 [x86] Fix isOffsetSuitableForCodeModel kernel code model offset
Offset == 0 is a valid offset for kernel code model according to the
x86_64 System V ABI. Found by inspection, no testcase.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223383 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 20:36:06 +00:00
Michael Kuperstein
5e343e6fd0 [X86] Improve a dag-combine that handles a vector extract -> zext sequence.
The current DAG combine turns a sequence of extracts from <4 x i32> followed by zexts into a store followed by scalar loads.
According to measurements by Martin Krastev (see PR 21269) for x86-64, a sequence of an extract, movs and shifts gives better performance. However, for 32-bit x86, the previous sequence still seems better.

Differential Revision: http://reviews.llvm.org/D6501

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223360 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 13:49:51 +00:00
Andrea Di Biagio
e6cb70164e [X86] Simplify code. NFC.
Replaced some logic that checked if a build_vector node is doing a splat of a
non-undef value with a call to method BuildVectorSDNode::getSplatValue().
No functional change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223354 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 11:21:44 +00:00
Elena Demikhovsky
73ae1df82c Masked Load / Store Intrinsics - the CodeGen part.
I'm recommiting the codegen part of the patch.
The vectorizer part will be send to review again.

Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
<16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)

Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
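
As a hedged illustration of what such a masked access looks like at the
instruction level on AVX2 (a standard intrinsic; the wrapper name is
illustrative):

  #include <immintrin.h>

  // Masked 32-bit loads: lanes whose mask element has its top bit clear are
  // not read from memory and come back as zero.
  __m256i load_if(const int *p, __m256i mask) {
    return _mm256_maskload_epi32(p, mask);
  }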

http://reviews.llvm.org/D6191



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223348 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 09:40:44 +00:00
Michael Liao
d3c452a506 [X86] Clean up whitespace as well as minor coding style
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223339 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 05:20:33 +00:00
Michael Liao
fd0832ea89 [X86] Restore X86 base pointer after call to llvm.eh.sjlj.setjmp
Commit on 

- This patch fixes the bug described in
  http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-May/062343.html

The fix allocates an extra slot just below the GPRs and stores the base pointer
there. This is done only for functions containing llvm.eh.sjlj.setjmp that also
need a base pointer. Because code containing llvm.eh.sjlj.setjmp saves all of
the callee-save GPRs in the prologue, the offset to the extra slot can be
computed before prologue generation runs.

Impact at run-time on affected functions is:

  - One extra store in the prologue; the store saves the base pointer.
  - One extra load after a llvm.eh.sjlj.setjmp; the load restores the base pointer.

Because the extra slot is just above a gap between frame-pointer-relative and
base-pointer-relative chunks of memory, there is no impact on other offset
calculations other than ensuring there is room for the extra slot.
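
A hedged C++ sketch of the kind of function affected (the helper and names are
illustrative; __builtin_setjmp/__builtin_longjmp are the Clang/GCC builtins
that lower to llvm.eh.sjlj.setjmp/longjmp):

  void consume(void *);

  int needs_base_pointer(int n) {
    __attribute__((aligned(64))) char big[64];  // forces stack realignment
    char *dyn = (char *)__builtin_alloca(n);    // dynamic alloca: base pointer needed
    void *jb[5];                                // five-word buffer for the builtin
    consume(big);
    consume(dyn);
    if (__builtin_setjmp(jb) == 0)              // the base pointer must be restored
      __builtin_longjmp(jb, 1);                 // when control comes back here
    return n;
  }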

http://reviews.llvm.org/D6388

Patch by Arch Robison <arch.robison@intel.com>



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223329 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 00:56:38 +00:00
Matt Arsenault
459e595697 Allow target to specify prefix for labels
Use the MCAsmInfo instead of the DataLayout, and allow
specifying a custom prefix for labels specifically. HSAIL
requires that labels begin with @, but global symbols with &.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223323 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 00:06:57 +00:00
Sanjay Patel
7e4c9bda0a fix typos, grammar, formatting; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223276 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 22:28:05 +00:00
Ahmed Bougacha
ad41590c48 [X86][MC] Intel syntax: accept implicit memory operand sizes larger than 80.
The X86AsmParser intel handling was refactored in r216481, making it
try each different memory operand size to see which one matches.
Operand sizes larger than 80 ("[xyz]mmword ptr") were forgotten, which
led to an "invalid operand" error for code such as:
  movdqa [rax], xmm0


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223187 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 02:03:26 +00:00
Simon Pilgrim
ec49b722fd [X86][SSE] Keep 4i32 vector insertions in integer domain on SSE4.1 targets
4i32 shuffles for single insertions into zero vectors lowers to X86vzmovl which was using (v)blendps - causing domain switch stalls. This patch fixes this by using (v)pblendw instead.

The updated tests on test/CodeGen/X86/sse41.ll still contain a domain stall due to the use of insertps - I'm looking at fixing this in a future patch.

Differential Revision: http://reviews.llvm.org/D6458



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223165 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 22:31:23 +00:00
Philip Reames
712af374c1 Remove unnecessary code introduced with 223101.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223132 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 18:06:10 +00:00
Sanjay Patel
0a24620459 fix typo in comment
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223127 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 17:25:27 +00:00
Nick Lewycky
1bd6c6210f Fix variable used only in assertion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223101 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 01:09:56 +00:00
Philip Reames
0dfac4002b Try to fix a bot failure due to a variable used only in an assert.
Specifically, bot lld-x86_64-darwin13.  Resulting from change 223085.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223092 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-01 23:27:45 +00:00
Philip Reames
78cc6fcb01 [Statepoints 2/4] Statepoint infrastructure for garbage collection: MI & x86-64 Backend
This is the second patch in a small series.  This patch contains the MachineInstruction and x86-64 backend pieces required to lower Statepoints.  It does not include the code to actually generate the STATEPOINT machine instruction and as a result, the entire patch is currently dead code.  I will be submitting the SelectionDAG parts within the next 24-48 hours.  Since those pieces are by far the most complicated, I wanted to minimize the size of that patch.  That patch will include the tests which exercise the functionality in this patch.  The entire series can be seen as one combined whole in http://reviews.llvm.org/D5683.

The STATEPOINT pseudo node is generated after all gc values are explicitly spilled to stack slots.  The purpose of this node is to wrap an actual call instruction while recording the spill locations of the meta arguments used for garbage collection and other purposes.  The STATEPOINT is modeled as modifying all of those locations to prevent backend optimizations from forwarding the value from before the STATEPOINT to after the STATEPOINT.  (Doing so would break relocation semantics for collectors which wish to relocate roots.)

The implementation of STATEPOINT is closely modeled on PATCHPOINT.  Eventually, much of the code in this patch will be removed.  The long term plan is to merge the functionality provided by statepoints and patchpoints.  Merging their implementations in the backend is likely to be a good starting point.

Reviewed by: atrick, ributzka



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223085 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-01 22:52:56 +00:00
Duncan P. N. Exon Smith
54786a0936 Revert "Masked Vector Load and Store Intrinsics."
This reverts commit r222632 (and follow-up r222636), which caused a host
of LNT failures on an internal bot.  I'll respond to the commit on the
list with a reproduction of one of the failures.

Conflicts:
	lib/Target/X86/X86TargetTransformInfo.cpp

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222936 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-28 21:29:14 +00:00
Sanjay Patel
c5992119fc Enable FeatureFastUAMem for btver2
Allow unaligned 16-byte memop codegen for btver2. No functional changes for any other subtargets.

Replace the existing supposed small memcpy test with an actual test of a small memcpy. 
The previous test wasn't using FileCheck either.

This patch should allow us to close PR21541 ( http://llvm.org/bugs/show_bug.cgi?id=21541 ).

Differential Revision: http://reviews.llvm.org/D6360



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222925 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-28 18:40:18 +00:00
Elena Demikhovsky
10c8f38047 AVX-512: Scalar ERI intrinsics
including SAE mode and memory operand.
Added AVX512_maskable_scalar template, that should cover all scalar instructions in the future.

The main difference between AVX512_maskable_scalar<> and AVX512_maskable<> is using X86select instead of vselect.
I need it because I can't create a vselect node with an MVT::i1 mask for a scalar instruction.

http://reviews.llvm.org/D6378



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222820 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-26 10:46:49 +00:00
Craig Topper
c0dae440e6 Replace neverHasSideEffects=1 with hasSideEffects=0 in all .td files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222801 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-26 00:46:26 +00:00
Simon Pilgrim
7f6cee9626 [X86][SSE] Improvements to byte shift shuffle matching
Since (v)pslldq / (v)psrldq shuffles resolve to a single input argument, it is useful to match them much earlier than we currently do - this prevents more complicated shuffles (notably insertion into a zero vector) from matching first.

Differential Revision: http://reviews.llvm.org/D6409



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222796 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-25 22:34:59 +00:00
Cameron McInally
9f4bb0420d [AVX512] Add 512b integer shift by variable intrinsics and patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222786 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-25 20:41:51 +00:00
Craig Topper
690b96281f Remove space before tab in all AVX512 mnemonic strings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222778 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-25 20:11:23 +00:00
Andrea Di Biagio
a1e1f01699 [X86] Improved target specific combine on VSELECT dag nodes.
This patch teaches function 'transformVSELECTtoBlendVECTOR_SHUFFLE' how to
convert VSELECT dag nodes to shuffles on targets that do not have SSE4.1.
On pre-SSE4.1 targets, we can still perform blend operations using movss/movsd.

Also, removed a target specific combine that performed a premature lowering of
VSELECT nodes to target specific MOVSS/MOVSD nodes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222647 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-24 12:23:15 +00:00
Michael Kuperstein
d539147834 [X86] Fixes bug in build_vector v4x32 lowering
r222375 made some improvements to build_vector lowering of v4x32 and v4xf32 into an insertps, but it missed a case where:

1. A single extracted element is used twice.
2. The lower of the two non-zero indexes should be preserved, and the higher should be used for the dest mask.

This caused a crash, since the source value for the insertps ends up uninitialized.

Differential Revision: http://reviews.llvm.org/D6377

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222635 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-23 13:09:06 +00:00
Craig Topper
71777d18ad Add missing override keywords.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222634 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-23 09:40:13 +00:00
Elena Demikhovsky
ae1ae2c3a1 Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
<16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)

Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.

http://reviews.llvm.org/D6191



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222632 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-23 08:07:43 +00:00
Simon Pilgrim
53a43d38df Tidied up target triple OS detection. NFC
Use Triple::isOS*() helper functions where possible.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222622 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-22 19:12:10 +00:00
Chandler Carruth
e915b4b7c8 [x86] Teach the vector shuffle yet another step of canonicalization.
No functionality changed yet, but this will prevent subsequent patches
from having to handle permutations of various interleaved shuffle
patterns.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222614 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-22 09:18:53 +00:00
Sanjay Patel
28660d4b2f Add a feature flag for slow 32-byte unaligned memory accesses [x86].
This patch adds a feature flag to avoid unaligned 32-byte load/store AVX codegen
for Sandy Bridge and Ivy Bridge. There is no functionality change intended for 
those chips. Previously, the absence of AVX2 was being used as a proxy to detect
this feature. But that hindered codegen for AVX-enabled AMD chips such as btver2
that do not have the 32-byte unaligned access slowdown.

Performance measurements are included in PR21541 ( http://llvm.org/bugs/show_bug.cgi?id=21541 ).

Differential Revision: http://reviews.llvm.org/D6355



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222544 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-21 17:40:04 +00:00
Chandler Carruth
46c5a97adc [x86] Restructure the checking patterns for v16 and v32 avx2 vector
shuffle lowering to allow much better blend matching.

Specifically, with the new structure the code seems clearer to me and we
correctly can hit the cases where merging two 128-bit lanes is a clear
win and can be shuffled cheaply afterward.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222539 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-21 14:53:03 +00:00
Chandler Carruth
0889d65fd5 [x86] Make the previous logic significantly less conservative and get
a bunch more improvements.

Non-lane-crossing is fine, the key is that lane merging only makes sense
for single-input shuffles. Not sure why I got so turned around here. The
code all works, I was just using the wrong model for it.

This only updates v4 and v8 lowering. The v16 and v32 lowering requires
restructuring the entire check sequence.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222537 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-21 14:33:24 +00:00
Chandler Carruth
bd357588a1 [x86] Teach the x86 vector shuffle lowering to detect mergable 128-bit
lanes.

By special casing these we can often either reduce the total number of
shuffles significantly or reduce the number of (high latency on Haswell)
AVX2 shuffles that potentially cross 128-bit lanes. Even when these
don't actually cross lanes, they have much higher latency to support
that. Doing two of them and a blend is worse than doing a single insert
across the 128-bit lanes to blend and then doing a single interleaved
shuffle.

While this seems like a narrow case, it kept cropping up on me and the
difference is *huge* as you can see in many of the test cases. I first
hit this trying to perfectly fix the interleaving shuffle patterns used
by Halide for AVX2.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222533 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-21 13:56:05 +00:00
Alexey Volkov
d0d0424368 [X86] For Silvermont CPU use 16-bit division instead of 64-bit for small positive numbers
Differential Revision: http://reviews.llvm.org/D5938
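
A hedged scalar sketch of the idea (not the backend code; the runtime check
guards a narrow divide):

  #include <cstdint>

  uint64_t fast_div(uint64_t a, uint64_t b) {
    if (((a | b) >> 16) == 0)              // both operands fit in 16 bits
      return (uint16_t)a / (uint16_t)b;    // cheap 16-bit hardware divide
    return a / b;                          // full-width divide otherwise
  }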



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222521 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-21 11:19:34 +00:00
Craig Topper
e0ed7df6b0 Remove a bunch of unnecessary typecasts to 'const TargetRegisterClass *'
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222509 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-21 05:58:21 +00:00
Quentin Colombet
c91f34ae54 [X86] Do not custom lower UINT_TO_FP when the target type does not
match the custom lowering.

<rdar://problem/19026326>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222489 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-21 00:47:19 +00:00
Reid Kleckner
9c390888f7 Fix more instances of -Wsentinel on Windows with s/NULL/nullptr/
Follow up to r221940, where I must not have caught em all. NFC

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222481 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-20 23:51:47 +00:00
Reid Kleckner
d12434058d Add out of line virtual destructors to all LLVMTargetMachine subclasses
These recently all grew a unique_ptr<TargetLoweringObjectFile> member in
r221878.  When anyone calls a virtual method of a class, clang-cl
requires all virtual methods to be semantically valid. This includes the
implicit virtual destructor, which triggers instantiation of the
unique_ptr destructor, which fails because the type being deleted is
incomplete.

This is just part of the ongoing saga of PR20337, which is affecting
Blink as well. Because the MSVC ABI doesn't have key functions, we end
up referencing the vtable and implicit destructor on any virtual call
through a class. We don't actually end up emitting the dtor, so it'd be
good if we could avoid this unneeded type completion work.
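
A minimal C++ sketch of the pattern (class names are illustrative):

  #include <memory>

  class TargetLoweringObjectFile;  // forward declaration only here

  class MyTargetMachine {
    std::unique_ptr<TargetLoweringObjectFile> TLOF;
  public:
    MyTargetMachine();
    ~MyTargetMachine();  // declared here, defined out of line in the .cpp
                         // where the type is complete; an implicit inline
                         // dtor would instantiate the unique_ptr deleter
                         // against the incomplete type and fail
  };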

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222480 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-20 23:37:18 +00:00
Saleem Abdulrasool
e6c1fc9a44 X86: use the correct alloca symbol for Windows Itanium
Windows itanium targets the MSVCRT, and the stack probe symbol is provided by
MSVCRT.  This corrects the emission of stack probes on i686-windows-itanium.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222439 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-20 18:01:26 +00:00
Craig Topper
136d5aeba4 Fix a typo in a comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222412 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-20 05:22:37 +00:00
Andrea Di Biagio
53daaff125 [X86] Improved lowering of v4x32 build_vector dag nodes.
This patch improves the lowering of v4f32 and v4i32 build_vector dag nodes
that are known to have at least two non-zero elements.

With this patch, a build_vector that performs a blend with zero is 
converted into a shuffle. This is done to let the shuffle legalizer expand
the dag node in a optimal way. For example, if we know that a build_vector
performs a blend with zero, we can try to lower it as a movq/blend instead of
always selecting an insertps.

This patch also improves the logic that lowers a build_vector into a insertps
with zero masking. See for example the extra test cases added to test sse41.ll.

Differential Revision: http://reviews.llvm.org/D6311


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222375 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-19 19:34:29 +00:00
Simon Pilgrim
a6943fff90 [X86][SSE] pslldq/psrldq byte shifts/rotation for SSE2
This patch builds on http://reviews.llvm.org/D5598 to perform byte rotation shuffles (lowerVectorShuffleAsByteRotate) on pre-SSSE3 (palignr) targets - pre-SSSE3 is only enabled on i8 and i16 vector targets where it is a more definite performance gain.

I've also added a separate byte shift shuffle (lowerVectorShuffleAsByteShift) that makes use of the ability of the SLLDQ/SRLDQ instructions to implicitly shift in zero bytes to avoid the need to create a zero register if we had used palignr.

Differential Revision: http://reviews.llvm.org/D5699



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222340 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-19 10:06:49 +00:00
David Blaikie
5401ba7099 Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool>
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.

This led to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222334 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-19 07:49:26 +00:00
Simon Pilgrim
e6d1a2625f [X86][AVX] 256-bit vector stack unaligned load/stores identification
Under many circumstances the stack is not 32-byte aligned, resulting in the use of the vmovups/vmovupd/vmovdqu instructions when inserting ymm reloads/spills.

This minor patch adds these instructions to the isFrameLoadOpcode/isFrameStoreOpcode helpers so that they can be correctly identified and not be treated as folded reloads/spills.

This has also been noticed by http://llvm.org/bugs/show_bug.cgi?id=18846 where it was causing redundant spills - I've added a reduced test case at test/CodeGen/X86/pr18846.ll

Differential Revision: http://reviews.llvm.org/D6252



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222281 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-18 23:38:19 +00:00
Reid Kleckner
8083adcaca Revert "ADT: correctly report isMSVCEnvironment for windows itanium"
This reverts commit r222180.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222188 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-17 22:55:59 +00:00
Saleem Abdulrasool
2bd09db07f ADT: correctly report isMSVCEnvironment for windows itanium
The itanium environment on Windows uses MSVC and is a MSVC environment.  Report
this correctly.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222180 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-17 22:13:26 +00:00
Alexey Volkov
19e8fe05dc [X86] Use ADD/SUB instead of INC/DEC for Haswell and Broadwell CPUs
Differential Revision: http://reviews.llvm.org/D5934



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222141 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-17 16:17:51 +00:00
Craig Topper
a51382a63a [x86] Remove two redundant isel patterns. Their equivalent already exists in the instruction pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222094 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-16 09:24:16 +00:00
Simon Pilgrim
01e39346f3 [X86][SSE] Improve legal SHUFP and PSHUFD shuffle matching
Updated X86TargetLowering::isShuffleMaskLegal to match SHUFP masks with commuted inputs and PSHUFD masks that reference the second input.

As part of this I've refactored isPSHUFDMask to work in a more general manner and allow it to match against either the first or second input vector.

Differential Revision: http://reviews.llvm.org/D6287



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222087 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-15 21:13:05 +00:00
Reid Kleckner
0737b4ee14 Rename EH related stuff to be more precise
Summary:
The current "WinEH" exception handling type is more about Itanium-style
LSDA tables layered on top of the Windows native unwind info format
instead of .eh_frame tables or EHABI unwind info. Use the name
"ItaniumWinEH" to better reflect the hybrid nature of the design.

Also rename isExceptionHandlingDWARF to usesItaniumLSDAForExceptions,
since the LSDA is part of the Itanium C++ ABI document, and not the
DWARF standard.

Reviewers: echristo

Subscribers: llvm-commits, compnerd

Differential Revision: http://reviews.llvm.org/D6279

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222062 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-14 23:31:07 +00:00
Cameron McInally
b3625eb445 [AVX512] Add 512b masked integer shift by immediate patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222002 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-14 15:43:00 +00:00
Tim Northover
4a7bbf4c29 X86: use getConstant rather than getTargetConstant behind BUILD_VECTOR.
getTargetConstant should only be used when you can guarantee the instruction
selected will be able to cope with the raw value. BUILD_VECTOR is rather too
generic for this so we should use getConstant instead. In that case, an
instruction can still consume the constant, but if it doesn't it'll be
materialised through its own round of ISel.

Should fix PR21352.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221961 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-14 01:30:14 +00:00
Aditya Nandakumar
365df40768 We can get the TLOF from the TargetMachine, so the constructor no longer requires a TargetLoweringObjectFile to be passed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221926 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-13 21:29:21 +00:00
Elena Demikhovsky
18e1185ddf AVX-512: SINT_TO_FP cost model and some bugfixes
Checked some corner cases, for example translation
of <8 x i1> to <8 x double>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221883 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-13 11:46:16 +00:00
Aditya Nandakumar
847729d19a This patch changes the ownership of TLOF from TargetLoweringBase to TargetMachine so that different subtargets could share the TLOF effectively
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221878 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-13 09:26:31 +00:00
Chandler Carruth
4ea3097d08 [x86] Teach the vector shuffle lowering to make a more nuanced decision
between splitting a vector into 128-bit lanes and recombining them vs.
decomposing things into single-input shuffles and a final blend.

This handles a large number of cases in AVX1 where the cross-lane
shuffles would be much more expensive to represent even though we end up
with a fast blend at the root. Instead, we can do a better job of
shuffling in a single lane and then inserting it into the other lanes.

This fixes the remaining bits of Halide's regression captured in PR21281
for AVX1. However, the bug persists in AVX2 because I've made this
change reasonably conservative. The cases where it makes sense in AVX2
to split into 128-bit lanes are much more rare because we can often do
full permutations across all elements of the 256-bit vector. However,
the particular test case in PR21281 is an example of one of the rare
cases where it is *always* better to work in a single 128-bit lane. I'm
going to try to teach the logic to detect and form the good code even in
AVX2 next, but it will need to use a separate heuristic.

Finally, there is one pesky regression here where we previously would
craftily use vpermilps in AVX1 to shuffle both high and low halves at
the same time. We no longer pull that off, and not for any really good
reason. Ultimately, I think this is just another missing nuance to the
selection heuristic that I'll try to add in afterward, but this change
already seems strictly worth doing considering the magnitude of the
improvements in common matrix math shuffle patterns.

As always, please let me know if this causes a surprising regression for
you.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221861 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-13 04:06:10 +00:00
Chandler Carruth
927a5f45e0 [x86] Don't form overly fragmented blends when splitting and
re-combining shuffles because nothing was available in the wider vector
type.

The key observation (which I've put in the comments for future
maintainers) is that at this point, no further combining is really
possible. And so even though these shuffles trivially could be combined,
we need to actually do that as we produce them when producing them this
late in the lowering.

This fixes another (huge) part of the Halide vector shuffle regressions.
As it happens, this was already well covered by the tests, but I hadn't
noticed how bad some of these got. The specific patterns that turn
directly into unpckl/h patterns were occurring *many* times in common
vector processing code.

There are still more problems here sadly, but trying to incrementally
tease them apart and it looks like this is the core of the problem in
the splitting logic.

There is some chance of regression here; you can see it in the test
changes. Specifically, where we stop forming pshufb in some cases, it is
possible that pshufb was in fact faster. Intel "says" that pshufb is
slower than the instruction sequences replacing it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221852 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-13 02:42:08 +00:00
Sanjay Patel
dab91bcc3a Expose the number of Newton-Raphson iterations applied to the hardware's reciprocal estimate as a parameter (x86).
This is a follow-on to r221706 and r221731 and discussed in more detail in PR21385.
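
One refinement step, sketched with standard SSE intrinsics (a sketch; the
wrapper name is illustrative):

  #include <immintrin.h>

  // Newton-Raphson for 1/d: x1 = x0 * (2 - d*x0); each step roughly doubles
  // the number of correct bits in the ~12-bit rcpps estimate.
  __m128 recip_est(__m128 d) {
    __m128 x0 = _mm_rcp_ps(d);
    return _mm_mul_ps(x0, _mm_sub_ps(_mm_set1_ps(2.0f), _mm_mul_ps(d, x0)));
  }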

This patch also loosens the testcase checking for btver2. We know that the "1.0" will be loaded, but
we can't tell exactly when, so replace the CHECK-NEXT specifiers with plain CHECKs. The CHECK-NEXT
sequence relied on a quirk of post-RA-scheduling that may change independently of anything in these tests.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221819 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-12 21:39:01 +00:00
Cameron McInally
be30336912 [AVX512] Add integer shift by immediate intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221811 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-12 19:58:54 +00:00
Chandler Carruth
556578ec0c [x86] Start improving the matching of unpck instructions based on test
cases from Halide folks. This initial step was extracted from
a prototype change by Clay Wood to try to address regressions found
with Halide and the new vector shuffle lowering.
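
For context, a minimal sketch of the kind of pattern being matched (my
example, not from the patch):

    #include <immintrin.h>

    // unpcklps interleaves the low halves of its inputs: <a0, b0, a1, b1>,
    // exactly the interleaving shuffle Halide pipelines emit constantly.
    static __m128 interleave_lo(__m128 a, __m128 b) {
      return _mm_unpacklo_ps(a, b);
    }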

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221779 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-12 10:05:18 +00:00
Elena Demikhovsky
5f9c438577 AVX-512: Intrinsics for ERI
Three instructions: vrcp28, vrsqrt28, vexp2; vector forms only.
The intrinsics include an SAE (Suppress All Exceptions) parameter.

http://reviews.llvm.org/D6214



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221774 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-12 07:31:03 +00:00
Rafael Espindola
6a222ec893 Pass an ArrayRef to MCDisassembler::getInstruction.
With this patch MCDisassembler::getInstruction takes an ArrayRef<uint8_t>
instead of a MemoryObject.

Even on X86 there is a maximum size an instruction can have. Given
that, it seems way simpler and more efficient to just pass an ArrayRef
to the disassembler instead of a MemoryObject and have it do a virtual
call every time it wants some extra bytes.
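
A simplified sketch of the resulting interface (stream parameters omitted;
illustrative, not the exact LLVM declaration):

    #include <cstdint>
    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/MC/MCInst.h"

    struct DisassemblerSketch {
      enum DecodeStatus { Fail, SoftFail, Success };
      // The caller hands the instruction bytes over up front, instead of
      // the disassembler making a virtual MemoryObject call per access.
      virtual DecodeStatus getInstruction(llvm::MCInst &Inst, uint64_t &Size,
                                          llvm::ArrayRef<uint8_t> Bytes,
                                          uint64_t Address) const = 0;
      virtual ~DisassemblerSketch() = default;
    };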

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221751 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-12 02:04:27 +00:00
Sanjay Patel
4e9c3e9cee Initialize new subtarget feature variable for generating reciprocal estimate instructions.
This was missed in r221706.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221731 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 23:13:15 +00:00
Tom Roeder
63dea2c952 Add Forward Control-Flow Integrity.
This commit adds a new pass that can inject checks before indirect calls to
make sure that these calls target known locations. It supports three types of
checks and, at compile time, it can take the name of a custom function to call
when an indirect call check fails. The default failure function ignores the
error and continues.
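
A minimal sketch of the injected check, written in C++ for illustration
(the pass works at the IR level, and all names here are hypothetical):

    #include <cstdint>

    extern "C" void cfi_failure();  // custom or default failure handler
    extern const std::uintptr_t JumpTableStart, JumpTableEnd;

    using IndirectFn = void (*)();

    static void checked_indirect_call(IndirectFn target) {
      std::uintptr_t t = reinterpret_cast<std::uintptr_t>(target);
      // Fail unless the target lies in the known jump-instruction table.
      if (t < JumpTableStart || t >= JumpTableEnd)
        cfi_failure();  // default handler ignores the error and continues
      target();
    }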

This pass incidentally moves the function JumpInstrTables::transformType from
private to public and makes it static (with a new argument that specifies the
table type to use); this is so that the CFI code can transform function types
at call sites to determine which jump-instruction table to use for the check at
that site.

Also, this removes support for jumptables in ARM, pending further performance
analysis and discussion.

Review: http://reviews.llvm.org/D4167



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221708 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 21:08:02 +00:00
Sanjay Patel
e7c966f067 Use rcpss/rcpps (X86) to speed up reciprocal calcs (PR21385).
This is a first step for generating SSE rcp instructions for reciprocal
calcs when fast-math allows it. This is very similar to the rsqrt optimization
enabled in D5658 ( http://reviews.llvm.org/rL220570 ).

For now, be conservative and only enable this for AMD btver2 where performance
improves significantly both in terms of latency and throughput.

We may never enable this codegen for Intel Core* chips because the divider circuits
are just too fast. On SandyBridge, divss can be as fast as 10 cycles versus the 21
cycle critical path for the rcp + mul + sub + mul + add estimate.
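
Sketched with SSE intrinsics for illustration (the backend emits this
sequence during lowering, not via intrinsics):

    #include <immintrin.h>

    // rcp + mul + sub + mul + add: one Newton-Raphson step refining the
    // ~12-bit rcpss estimate, e' = e + e * (1 - a*e).
    static float fast_recip(float a) {
      __m128 x = _mm_set_ss(a);
      __m128 e = _mm_rcp_ss(x);                    // rcp: estimate of 1/a
      __m128 t = _mm_mul_ss(x, e);                 // mul: a*e
      __m128 s = _mm_sub_ss(_mm_set_ss(1.0f), t);  // sub: 1 - a*e
      e = _mm_add_ss(e, _mm_mul_ss(e, s));         // mul + add
      return _mm_cvtss_f32(e);
    }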

Follow-on patches may allow configuration of the number of Newton-Raphson refinement
steps, add AVX512 support, and enable the optimization for more chips.

More background here: http://llvm.org/bugs/show_bug.cgi?id=21385

Differential Revision: http://reviews.llvm.org/D6175



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221706 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 20:51:00 +00:00
Rafael Espindola
71c70733b7 Use an 8-bit immediate when possible.
This fixes PR21529.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221700 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 19:46:36 +00:00
Dario Domizioli
949d328bee [X86][ELF] Fix PR20243 - leaf frame pointer bug with TLS access
The ISel lowering for global TLS access in PIC mode was creating a pseudo 
instruction that is later expanded to a call, but the code was not 
setting the hasCalls flag in the MachineFrameInfo alongside the adjustsStack 
flag. This caused some functions to be mistakenly recognized as leaf functions,
and this in turn affected the decision to eliminate the frame pointer.

With the fix, hasCalls is properly set and the leaf frame pointer is correctly
preserved.




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221695 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 18:44:49 +00:00
Andrea Di Biagio
d6548ad013 [X86] Add missing check for 'isINSERTPSMask' in method 'isShuffleMaskLegal'.
This helps the DAGCombiner to identify more opportunities to fold shuffles.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221684 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 11:20:31 +00:00
Craig Topper
20d2a260c9 Use uint64_t as the type for the X86 TSFlag format enum. Allows removal of the VEXShift hack that was used to access the higher bits of TSFlags.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221673 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 07:32:32 +00:00
Michael Kuperstein
f2fe3b72a9 [X86] Fix pattern match for 32-to-64-bit zext in the presence of AssertSext
This fixes an issue with matching trunc -> assertsext -> zext on x86-64, which would not zero the high 32-bits. See PR20494 for details.
Recommitting - This time, with a hopefully working test.

Differential Revision: http://reviews.llvm.org/D6128

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221672 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 07:07:40 +00:00
Rafael Espindola
9272305648 MCAsmParserExtension has a copy of the MCAsmParser. Use it.
Base classes were storing a second copy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221667 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 05:18:41 +00:00
Quentin Colombet
8201185d61 [X86] Custom lower UINT_TO_FP from v4i32 to v4f32, and from v8i32 to v8f32 if
AVX2 is available.
According to IACA, the new lowering has a throughput of 8 cycles instead of 13
with the previous one.

Although this lowering kicks in for some SPEC benchmarks, the performance
improvement was within the noise.

Correctness testing has been done for the whole range of uint32_t with the
following program:
    uint4 v = (uint4) {0,1,2,3};
    uint32_t i;
    
    //Check correctness over entire range for uint4 -> float4 conversion
    for( i = 0; i < 1U << (32-2); i++ )
    {
        float4 t = test(v);
        float4 c = correct(v);
        
        if( 0xf != _mm_movemask_ps( t == c ))
        {
            printf( "Error @ %vx: %vf vs. %vf\n", v, c, t);
            return -1;
        }
        
        v += 4;
    }
Where "correct" is the old lowering and "test" the new one.

The patch adds a test case for the two custom lowerings.
It also modifies the vector cost model, which is why cast.ll and uitofp.ll are
modified.
2009-02-26-MachineLICMBug.ll is also modified because we now hoist 7
instructions instead of 4 (3 more constant loads).

<rdar://problem/18153096>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221657 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-11 02:23:47 +00:00
Michael Kuperstein
dee48e7ad4 Reverting r221626 due to a too-strict test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221629 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-10 21:07:41 +00:00
Michael Kuperstein
1a66dc7468 [X86] Fix pattern match for 32-to-64-bit zext in the presence of AssertSext
This fixes an issue with matching trunc -> assertsext -> zext on x86-64, which would not zero the high 32-bits.
See PR20494 for details.

Differential Revision: http://reviews.llvm.org/D6128

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221626 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-10 20:40:21 +00:00
Rafael Espindola
d342c4c748 Misc style fixes. NFC.
This fixes a few cases of:

* Wrong variable name style.
* Lines longer than 80 columns.
* Repeated names in comments.
* clang-format of the above.

This makes the next patch a lot easier to read.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221615 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-10 18:11:10 +00:00
Simon Pilgrim
de3d50643c [X86][SSE] Vector integer/float conversion memory folding (cvttps2dq / cvttpd2dq)
Fixed an issue where the (v)cvttps2dq and (v)cvttpd2dq instructions were incorrectly put in the two-source-operand folding tables instead of the one-source-operand tables, and added the missing SSE/AVX versions.

Also added missing (v)cvtps2dq and (v)cvtpd2dq instructions to the folding tables.

Differential Revision: http://reviews.llvm.org/D6001



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221489 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-06 22:15:41 +00:00
Ahmed Bougacha
112aabeeeb [X86] Add VFMADDSUB cases for the 213->231 custom inserter.
Also add tests for vfmadd/vfmsub.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221488 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-06 22:04:15 +00:00
Ahmed Bougacha
f44d4cd925 [X86] Add missing FMA3 VFMADDSUB in the emitter.
Also reuse the fma4 intrinsic test to cover fma3 instructions too.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221487 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-06 21:58:11 +00:00
Andrea Di Biagio
f0f66a254d [X86] When commuting SSE immediate blend, make sure that the new blend mask is a valid imm8.
Example:
define <4 x i32> @test(<4 x i32> %a, <4 x i32> %b) {
  %shuffle = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 4, i32 5, i32 6, i32 3>
  ret <4 x i32> %shuffle
}

Before this patch, llc (-mattr=+sse4.1) produced the following assembly instruction:
  pblendw $4294967103, %xmm1, %xmm0

After the patch:
  pblendw $63, %xmm1, %xmm0
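
The gist of the fix, as a sketch with a hypothetical helper (not the
actual code):

    // For pblendw, ~0xC0 computed in 32 bits is 0xFFFFFF3F (the bad
    // 4294967103 above); masking to the imm8 width yields the valid 63.
    static unsigned commuteBlendMask(unsigned Mask, unsigned NumBits) {
      return ~Mask & ((1u << NumBits) - 1);
    }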


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221455 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-06 14:36:45 +00:00
David Majnemer
a67e1693fc X86, MC: Tidy up some whitespace in GetRelocType
No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221443 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-06 08:10:37 +00:00
Quentin Colombet
d465eced34 [X86] Lower VSELECT into SHRUNKBLEND when we shrink the bits used into the
condition to match a blend.
This prevents optimizations that work on VSELECT from performing invalid
transformations. Indeed, the optimized condition does not match the vector
boolean content that is expected and bad things may happen.

This patch yields the exact same code on the whole test-suite + specs (-O3 and
-O3 -march=core-avx2), it improves one test case (vector-blend.ll) and fixes a
bug reduced in vselect-avx.ll.

<rdar://problem/18819506>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221429 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-06 02:25:03 +00:00
Simon Pilgrim
3f1d66fe93 [X86][SSE] Vector integer to float conversion memory folding
Added missing memory folding for the (V)CVTDQ2PS instructions - we can safely fold these (but not the (V)CVTDQ2PD versions which have a register/memory size discrepancy in the source operand). I've added a test case demonstrating that stack folding now works.

Differential Revision: http://reviews.llvm.org/D5981



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221407 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-05 22:28:25 +00:00
Derek Schuff
07ffcb1007 [x86 fast-isel] Materialize allocas with the correct-sized lea for ILP32
Summary:
X86FastISel::fastMaterializeAlloca was incorrectly conditioning its
opcode selection on subtarget bitness rather than pointer size.

Differential Revision: http://reviews.llvm.org/D6136

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221386 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-05 19:27:21 +00:00
Andrea Di Biagio
042bee88f3 [X86] Teach method 'isVectorClearMaskLegal' how to check for legal blend masks.
This patch improves the folding of vector AND nodes into blend operations for
targets that feature SSE4.1. A vector AND node where one of the operands is
a constant build_vector with elements that are either zero or all-ones can be
converted into a blend.

This allows for example to simplify the following code:

define <4 x i32> @test(<4 x i32> %A, <4 x i32> %B) {
  %1 = and <4 x i32> %A, <i32 0, i32 0, i32 0, i32 -1>
  %2 = and <4 x i32> %B, <i32 -1, i32 -1, i32 -1, i32 0>
  %3 = or <4 x i32> %1, %2
  ret <4 x i32> %3
}

Before this patch llc (-mcpu=corei7) generated:
        andps  LCPI1_0(%rip), %xmm0, %xmm0
        andps  LCPI1_1(%rip), %xmm1, %xmm1
        orps   %xmm1, %xmm0, %xmm0
        retq

With this patch we generate a single 'vpblendw'.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221343 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-05 13:04:14 +00:00
Simon Pilgrim
4ad0654bb4 [X86][SSE] Enable commutation for SSE immediate blend instructions
Patch to allow (v)blendps, (v)blendpd, (v)pblendw and vpblendd instructions to be commuted - swaps the src registers and inverts the blend mask.

This is primarily to improve memory folding (see new tests), but it also improves the quality of shuffles (see modified tests).

Differential Revision: http://reviews.llvm.org/D6015



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221313 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-04 23:25:08 +00:00
Andrea Di Biagio
c270e8abbe [X86] Add 'FeatureSlowSHLD' to cpu 'bdver3'. Also explicit set FeatureAVX and FeatureSSE4A for all the bdver* cpus.
This patch adds 'FeatureSlowSHLD' to 'bdver3'.
According to the official AMD optimization guide for amdfam15: "Using
alternative code in place of SHLD achieves lower overall latency and
requires fewer execution resources. The 32-bit and 64-bit forms of
ADD, ADC, SHR, and LEA (except 16-bit form) are DirectPath
instructions, while SHLD is a VectorPath instruction."

This patch also explicitly sets feature AVX and SSE4A for all the bdver*
cpus. This part of the patch is a non-functional change and it is mainly
done for clarity reasons (both XOP and FMA4 already imply AVX and SSE4A).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221296 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-04 21:18:09 +00:00
Ahmed Bougacha
38728dc043 [X86] Add debug print name for X86ISD::[US]MUL8. NFC-ish.
The opcodes were added in r220516, but I forgot to add the print names.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221185 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-03 21:25:18 +00:00
Ahmed Bougacha
40453da779 [X86] 8bit divrem: Improve codegen for AH register extraction.
For 8-bit divrems where the remainder is used, we used to generate:
    divb  %sil
    shrw  $8, %ax
    movzbl  %al, %eax

That was to avoid an H-reg access, which is problematic mainly because
it isn't possible in REX-prefixed instructions.

This patch optimizes that to:
    divb  %sil
    movzbl  %ah, %eax

To do that, we explicitly extend AH, and extract the L-subreg in the
resulting register.  The extension is done using the NOREX variants of
MOVZX.  To support signed operations, MOVSX_NOREX is also added.
Further, this introduces a new SDNode type, [us]divrem_ext_hreg, which is
then lowered to a sequence containing a single zext (rather than 2).
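
For reference, the source-level operation being lowered (illustrative):

    // divb computes both results at once: quotient in AL, remainder in AH.
    // The patch extracts AH with one movzbl instead of shifting AX by 8.
    static unsigned char urem8(unsigned char a, unsigned char b) {
      return a % b;
    }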

Differential Revision: http://reviews.llvm.org/D6064


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221176 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-03 20:26:35 +00:00
Rafael Espindola
5793838fc8 Remove redundant calls to isMaterializable.
This removes calls to isMaterializable in the following cases:

* It was redundant with a call to isDeclaration now that isDeclaration returns
  the correct answer for materializable functions.
* It was followed by a call to Materialize. Just call Materialize and check EC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221050 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-01 16:46:18 +00:00
Adrian Prantl
bdec4aee7b Revert "Temporarily revert r220777 to sort out build bot breakage."
This reverts commit r221028. Later commits depend on this and
reverting just this one causes even more bots to fail.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221041 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-01 03:19:45 +00:00
NAKAMURA Takumi
5ea912ac8d Revert r220779, "[AVX512] Removed special case for cmp instructions in getVectorMaskingNode. Now cmp intrinsics lower as other intrinsics through VSELECT, and then VSELECT transforms to AND in PerformSELECTCombine."
Since r221028 (reverting r220777), this caused failures.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221040 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-01 01:36:14 +00:00
Adrian Prantl
a4e1564971 Temporarily revert r220777 to sort out build bot breakage.
"[x86] Simplify vector selection if condition value type matches vselect value type and true value is all ones or false value is all zeros."

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221028 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-01 00:26:59 +00:00
Reid Kleckner
e1a4787d5d Work around bugs in MSVC "14" CTP 3's conversion logic
It appears to ignore, or to find ambiguous, MachineInstrBuilder's conversion
operators that allow conversion to MachineInstr* and
MachineBasicBlock::bundle_iterator.

As a workaround, add an explicit way to get the MachineInstr.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221017 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-31 23:19:46 +00:00
Robert Khasanov
7d18d46ef2 [AVX512] Added VBROADCAST{SS/SD} encoding for VL subset.
Refactored through AVX512_maskable
        


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220908 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-30 14:21:47 +00:00
Robert Khasanov
63c2f3292e [AVX512] Implemented AVX512VL FP binary packed instructions (VADDP*, VSUBP*, VMULP*, VDIVP*, VMAXP*, VMINP*)
Refactored through AVX512_maskable
Added encoding tests for them.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220858 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-29 15:43:02 +00:00
Robert Khasanov
e1610162fb [AVX512] Fix VSQRT packed instructions internal names.
No functional change


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220808 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-28 18:22:41 +00:00
Robert Khasanov
9371efbcdb [AVX512] Extended avx512_sqrt_packed (sqrt instructions) to VL subset.
Refactored through AVX512_maskable



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220806 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-28 18:15:20 +00:00
Robert Khasanov
59cb03d329 [AVX-512] Expanded rsqrt/rcp instructions to VL subset.
Refactored multiclass through AVX512_maskable



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220783 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-28 16:37:13 +00:00
Robert Khasanov
d4345dd85f [AVX512] Removed special case for cmp instructions in getVectorMaskingNode. Now cmp intrinsics lower as other intrinsics through VSELECT, and then VSELECT transforms to AND in PerformSELECTCombine.
No functional change.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220779 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-28 16:17:14 +00:00
Robert Khasanov
edf556ec1f [x86] Simplify vector selection if condition value type matches vselect value type and true value is all ones or false value is all zeros.
This transformation worked only if the selector was produced by a SETCC; however, a SETCC is needed only if we consider swapping the operands, so I replaced the SETCC check for this case.
Added tests for vselect of <X x i1> values.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220777 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-28 15:59:40 +00:00
Robert Khasanov
4a52493457 [AVX512] Bring back vector-shuffle lowering support through broadcasts
After commit r219046, 512-bit broadcast lowering became non-optimal. Most of the broadcasting and embedded-broadcasting tests were changed and no longer produce efficient code.

The example below is from that commit's changes (it's the first test from test/CodeGen/X86/avx512-vbroadcast.ll):

 define   <16 x i32> @_inreg16xi32(i32 %a) {
 ; CHECK-LABEL: _inreg16xi32:
 ; CHECK:       ## BB#0:
-; CHECK-NEXT:    vpbroadcastd %edi, %zmm0
+; CHECK-NEXT:    vmovd %edi, %xmm0
+; CHECK-NEXT:    vpbroadcastd %xmm0, %ymm0
+; CHECK-NEXT:    vinserti64x4 $1, %ymm0, %zmm0, %zmm0
 ; CHECK-NEXT:    retq
 %b = insertelement <16 x i32> undef, i32 %a, i32 0
 %c = shufflevector <16 x i32> %b, <16 x i32> undef, <16 x i32> zeroinitializer
 ret <16 x i32> %c
}

Here, a 256-bit broadcast was generated instead of a 512-bit one.

In this patch
1) I added vector-shuffle lowering through broadcasts
2) Removed asserts and branches like the following, because they are incorrect:
-  assert(Subtarget->hasDQI() && "We can only lower v8i64 with AVX-512-DQI");
3) Fixed lowering tests


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220774 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-28 12:28:51 +00:00
Reid Kleckner
d5de327da0 X86: Implement the vectorcall calling convention
This is a Microsoft calling convention that supports both x86 and x86_64
subtargets. It passes vector and floating point arguments in XMM0-XMM5,
and passes them indirectly once those registers are consumed.

Homogeneous vector aggregates of up to four elements can be passed in
sequential vector registers, but this part is not implemented in LLVM
and will be handled in Clang.

On 32-bit x86, it is similar to fastcall in that it uses ecx:edx as
integer register parameters and is callee cleanup. On x86_64, it
delegates to the normal win64 calling convention.
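
For illustration, how the convention is requested from C++ (MSVC/clang-cl
syntax; the example is mine):

    #include <immintrin.h>

    // With __vectorcall, the three vector arguments arrive in XMM0-XMM2
    // rather than in memory; integer arguments still use ecx:edx on x86.
    __m128 __vectorcall madd(__m128 a, __m128 b, __m128 c);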

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D5943

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220745 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-28 01:29:26 +00:00
Adam Nemet
6bc8d95153 [AVX512] Add vpermil variable version
This is implemented via a multiclass that derives from the vperm imm
multiclass.

Fixes <rdar://problem/18426089>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220737 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-27 23:08:40 +00:00
Adam Nemet
5c76721372 [AVX512] Clean up avx512_perm_imm to use X86VectorVTInfo
No functionality change.  No change in X86.td.expanded except that we only set
the CD8 attributes for the memory variants.  (This shouldn't be used unless we
have a memory operand.)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220736 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-27 23:08:37 +00:00
Adam Nemet
7ba4de2ccc [AVX512] Derive vpermil* from avx512_perm_imm
This used to derive from avx512_pshuf_imm which is confusing.

NFC.  Compared X86.td.expanded.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220735 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-27 23:08:34 +00:00
Adam Nemet
d0ee9ada16 [AVX512] Fix copy-and-paste bugs in vpermil
1) i512mem -> f512mem (this is the packed FP input being permuted)
2) element size is 64 bits in EVEX_CD8 for PD.

(A good illustration why X86VectorVTInfo is useful)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220734 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-27 23:08:31 +00:00
Pete Cooper
68aeef61f4 Fix a stackmap bug introduced in r220710.
For a call not to return into the stackmap shadow, the shadow must end with the call.

To do this, we must insert any required nops *before* the call, and not after it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220728 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-27 22:38:45 +00:00
Pete Cooper
7476f9c513 Stackmap shadows should consider call returns a branch target.
To avoid emitting too many nops, a stackmap shadow can count subsequently emitted instructions toward the shadow, but these must not include branch targets.

A return from a call should count as a branch target as patching over the instructions after the call would lead to incorrect behaviour for threads currently making that call, when they return.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220710 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-27 19:40:35 +00:00
Yuri Gorshenin
75bb472c06 [asan-asm-instrumentation] Added comment describing how asm instrumentation works.
Summary: [asan-asm-instrumentation] Added comment describing how asm instrumentation works.

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5970

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220670 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-27 08:38:54 +00:00
Elena Demikhovsky
9e19cf1ffd AVX-512: Fixed encoding of VPBROADCASTM and added SKX forms of this instruction
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220638 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-26 09:52:24 +00:00
Simon Pilgrim
c31aaa5a3f [X86][SSE] Vector integer/float conversion memory folding
Tidied up some entries in the folding tables so that they are under the correct comment section (they were categorised as AVX2 instructions when they're AVX1).

Minor patch agreed with qcolombet.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220613 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-25 08:11:20 +00:00
Kevin Enderby
44ccedc273 Fix a Mach-O assembler segfault for a subtraction expression with an undefined symbol.
In a Mach-O object file a relocatable expression of the form
SymbolA - SymbolB + constant is allowed when both symbols are
defined in a section.  But when either symbol is undefined it
is an error.

The code was crashing when it had an undefined symbol in this case.
It should instead have printed an error message using the location information
in the relocation entry.

rdar://18678402


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220599 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-24 22:39:40 +00:00
Simon Pilgrim
44efa200e2 [X86][SSE] Bitcast assertion in XFormVExtractWithShuffleIntoLoad
Minor patch to fix an issue in XFormVExtractWithShuffleIntoLoad where a load is unary shuffled, then bitcast (to a type with the same number of elements) before extracting an element.

An undef was created for the second shuffle operand using the original (post-bitcasted) vector type instead of the pre-bitcasted type like the rest of the shuffle node - this was then causing an assertion on the different types later on inside SelectionDAG::getVectorShuffle.

Differential Revision: http://reviews.llvm.org/D5917



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220592 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-24 21:04:41 +00:00
Sanjay Patel
51fa1bcef3 Allow AVX vrsqrtps generation.
This is a follow-on to r220570 that allows a 256-bit (v8f32)
version of vrsqrtps to be generated.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220579 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-24 17:59:18 +00:00
Sanjay Patel
a46f06efe2 Use rsqrt (X86) to speed up reciprocal square root calcs
This is a first step for generating SSE rsqrt instructions for
reciprocal square root calcs when fast-math is allowed.

For now, be conservative and only enable this for AMD btver2
where performance improves significantly - for example, 29%
on llvm/projects/test-suite/SingleSource/Benchmarks/BenchmarkGame/n-body.c
(if we convert the data type to single-precision float).

This patch adds a two-constant version of the Newton-Raphson refinement
algorithm to DAGCombiner that can be selected by any target via a
parameter returned by getRsqrtEstimate().

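Sketched with intrinsics for illustration (the combiner builds the
equivalent DAG nodes; the example is mine):

    #include <immintrin.h>

    // One refinement step for 1/sqrt(a): e' = e * (1.5 + (-0.5)*a*e*e).
    // The 1.5 and -0.5 are the "two constants" of this refinement form.
    static float fast_rsqrt(float a) {
      __m128 x = _mm_set_ss(a);
      __m128 e = _mm_rsqrt_ss(x);                    // ~12-bit estimate
      __m128 aee = _mm_mul_ss(_mm_mul_ss(x, e), e);  // a*e*e
      __m128 f = _mm_add_ss(_mm_set_ss(1.5f),
                            _mm_mul_ss(_mm_set_ss(-0.5f), aee));
      return _mm_cvtss_f32(_mm_mul_ss(e, f));
    }
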
See PR20900 for more details:
http://llvm.org/bugs/show_bug.cgi?id=20900

Differential Revision: http://reviews.llvm.org/D5658



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220570 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-24 17:02:16 +00:00
Adam Nemet
7f7bf0da6a [AVX512] FMA support for the 231 variants
This is asm/diasm-only support, similar to AVX.

For ISeling the register variant, they are no different from 213 other than
whether the multiplication or the addition operand is destructed.

For ISeling the memory variant, i.e. to fold a load, they are no different
than the 132 variant.  The addition operand (op3) in both cases can come from
memory.  Again the only difference is which operand is destructed.

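For reference, the semantics of the three forms (operand 1 doubles as the
destination; the digits name which operands feed the multiply and the add):

    // fmadd132: dst = dst  * src3 + src2
    // fmadd213: dst = src2 * dst  + src3
    // fmadd231: dst = src2 * src3 + dst   (the accumulating form added here)
    static float fmadd231(float dst, float src2, float src3) {
      return src2 * src3 + dst;
    }
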
There could be a post-RA pass that would convert a 213 or 132 into a 231.

Part of <rdar://problem/17082571>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220540 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-24 00:03:00 +00:00
Adam Nemet
97a3cd4383 [AVX512] Introduce fma3p_forms from AVX
This multiclass generates the different forms: 213, 231, 132 in AVX.

132 in AVX512 is a separate class, but I am planning to use this same
multiclass to generate 231, relying on the nice null_frag trick from AVX
to disable the codegen pattern for 231.

No functionality change, no change in X86.td.expanded except for the different
instruction definition names.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220539 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-24 00:02:55 +00:00
Ahmed Bougacha
4d962b05ca [X86] Improve mul w/ overflow codegen, to MUL8+SETO.
Currently, @llvm.smul.with.overflow.i8 expands to 9 instructions, where
3 are really needed.

This adds X86ISD::UMUL8/SMUL8 SD nodes, and custom lowers them to
MUL8/IMUL8 + SETO.

i8 is a special case because there are no two/three-operand variants of
(I)MUL8, so the first operand and return value need to go in AL/AX.

Also, we can't write patterns for these instructions: TableGen refuses
patterns where output operands don't match SDNode results; here, that means
instructions where the output operand is an implicitly defined register.

A related special case (and FIXME) exists for MUL8 (X86InstrArith.td):

  // FIXME: Used for 8-bit mul, ignore result upper 8 bits.
  // This probably ought to be moved to a def : Pat<> if the
  // syntax can be accepted.
  [(set AL, (mul AL, GR8:$src)), (implicit EFLAGS)]

Ideally, these go away with UMUL8, but we still need to improve TableGen
support of implicit operands in patterns.

Before this change:
  movsbl  %sil, %eax
  movsbl  %dil, %ecx
  imull   %eax, %ecx
  movb    %cl, %al
  sarb    $7, %al
  movzbl  %al, %eax
  movzbl  %ch, %esi
  cmpl    %eax, %esi
  setne   %al

After:
  movb    %dil, %al
  imulb   %sil
  seto    %al

Also, remove a made-redundant testcase for PR19858, and enable more FastISel
ALU-overflow tests for SelectionDAG too.

Differential Revision: http://reviews.llvm.org/D5809


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220516 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-23 21:55:31 +00:00
Matt Arsenault
015776f38c Add minnum / maxnum codegen
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220342 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 23:01:01 +00:00
NAKAMURA Takumi
c65b578e25 X86AsmInstrumentation.cpp: Dissolve initializer-ranged-for. MSC17 disliked it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220301 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 16:22:52 +00:00
Yuri Gorshenin
7ffc5bb51a [asan-asm-instrumentation] Fixed memory accesses with rbp as a base or an index register.
Summary: Fixed memory accesses with rbp as a base or an index register.

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5819

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220283 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 10:22:27 +00:00
Rafael Espindola
45968c54e9 Fix a bit of confusion about .set and produce more readable assembly.
Every target we support has support for assembly that looks like

a = b - c
.long a

What is special about MachO is that the above combination suppresses the
production of a relocation.

With this change we avoid producing the intermediary labels when they don't
add any value.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220256 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 01:17:30 +00:00
Quentin Colombet
a37862e2de [X86] Fix a bug in the lowering of the mask of VSELECT.
X86 code to lower VSELECT messed a bit with the bits set in the mask of VSELECT
when it knows it can be lowered into BLEND. Indeed, only the high bits need to be
set for those, and it optimizes the mask accordingly.
However, when the mask is a compile-time constant, the lowering will be handled
by the generic optimizer, and those modifications will cause it to generate bad
code.

This patch fixes that by preventing the optimization if the VSELECT will be
handled by the generic optimizer.

<rdar://problem/18675020>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220242 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-20 23:13:30 +00:00
Simon Pilgrim
0d1978b813 [X86] Memory folding for commutative instructions (updated)
This patch improves support for commutative instructions in the x86 memory folding implementation by attempting to fold a commuted version of the instruction if the original folding fails - if that folding fails as well the instruction is 're-commuted' back to its original order before returning.

Updated version of r219584 (reverted in r219595) - the commutation attempt now explicitly ensures that neither of the commuted source operands are tied to the destination operand / register, which was the source of all the regressions that occurred with the original patch attempt.

Added additional regression test case provided by Joerg Sonnenberger.

Differential Revision: http://reviews.llvm.org/D5818



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220239 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-20 22:14:22 +00:00
Andrea Di Biagio
5512b50db5 [X86] Fix missed selection of non-temporal store of zero vector.
When the input to a store instruction was a zero vector, the backend
always selected a normal vector store regardless of the non-temporal
hint. This is fixed by this patch.

This fixes PR19370.
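
A minimal illustration of the hint in question, in intrinsics form (my
example):

    #include <immintrin.h>

    // The non-temporal hint should select movntps here even though the
    // value stored is a zero vector (dst must be 16-byte aligned).
    static void store_zero_nt(float *dst) {
      _mm_stream_ps(dst, _mm_setzero_ps());
    }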


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220054 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 17:27:06 +00:00
Adam Nemet
fb9d61a8d6 [AVX512] Add DQ subvector inserts
In AVX512f we support 64x2 and 32x8 inserts via matching them to 32x4 and 64x4
respectively.  These are matched by "Alt" Pat<>'s (Alt stands for alternative
VTs).

Since DQ has native support for these instructions, I peeled off the non-"Alt"
part of the baseclass into vinsert_for_size_no_alt. The DQ instructions are
derived from this multiclass.  The "Alt" Pat<>'s are disabled with DQ.

Fixes <rdar://problem/18426089>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219874 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-15 23:42:17 +00:00
Adam Nemet
ccebe7258e [AVX512] Two new attributes in X86VectorVTInfo for subvector insert
The new attributes are NumElts and the CD8TupleForm.  This prepares the code
to enable x8 and x2 inserts.

NFC, no change in X86.td.expanded except for the new attributes.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219871 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-15 23:42:09 +00:00
Adam Nemet
80b9e006aa [AVX512] Rename arg from Opcode32/64 to Opcode128/256 in vinsert_for_size
It's the W bit that selects between the 32- and 64-bit element type, not the
opcode.  The opcode selects the width of the insert (128 or 256).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219870 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-15 23:42:04 +00:00
Rafael Espindola
90ce9f70e2 Simplify handling of --noexecstack by using getNonexecutableStackSection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219799 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-15 16:12:52 +00:00
Rafael Espindola
b510f8d08c Move getNonexecutableStackSection up to the base ELF class.
The .note.GNU-stack section is not SystemZ/X86 specific.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219796 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-15 15:44:16 +00:00
Simon Pilgrim
84a3feea38 [X86][SSE] pslldq/psrldq shuffle mask decodes
Patch to provide shuffle decodes and asm comments for the pslldq/psrldq SSE2/AVX2 byte shift instructions.

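What the new decodes annotate, in intrinsics form (my example):

    #include <immintrin.h>

    // pslldq $4: shift the whole register left by four bytes, pulling in
    // zeros; as a shuffle decode: <zero,zero,zero,zero,byte0..byte11>.
    static __m128i shift_bytes_left_4(__m128i v) {
      return _mm_slli_si128(v, 4);
    }
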
Differential Revision: http://reviews.llvm.org/D5598


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219738 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-14 22:31:34 +00:00
Hans Wennborg
76806748d4 [x86 asm] allow fwait alias in both AT&T and Intel modes (PR21208)
Differential Revision: http://reviews.llvm.org/D5741

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219725 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-14 21:41:17 +00:00
Robert Khasanov
ad5d223cb5 [AVX512] Extended avx512_binop_rm to DQ/VL subsets.
Added encoding tests.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219686 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-14 15:13:56 +00:00