Commit Graph

1211 Commits

Author SHA1 Message Date
Igor Breger
368de4c9d6 AVX: Fix ISA disabling in the AVX512VL case; some instructions should be disabled only if AVX512BW is present.
Tests added.

Differential Revision: http://reviews.llvm.org/D11122

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242270 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 07:08:10 +00:00
Simon Pilgrim
9c64d9cc04 [X86][SSE] (V)PMINSB is commutable.
(V)PMINSB is no different to the other (V)PMIN/(V)PMAX B/D/W instructions - it is fully commutable.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241994 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-12 16:44:11 +00:00
Simon Pilgrim
3ecdd44e5d [X86][SSE4A] Shuffle lowering using SSE4A EXTRQ/INSERTQ instructions
This patch adds support for v8i16 and v16i8 shuffle lowering using the immediate versions of the SSE4A EXTRQ and INSERTQ instructions. Although rather limited (they can only act on the lower 64-bits of the source vectors, leave the upper 64-bits of the result vector undefined and don't have VEX encoded variants), the instructions are still useful for the zero extension of any lane (EXTRQ) or inserting a lane into another vector (INSERTQ). Testing demonstrated that it wasn't typically worth it to use these instructions for v2i64 or v4i32 vector shuffles although they are capable of it.

As well as adding specific pattern matching for the shuffles, the patch uses EXTRQ for zero extension cases where SSE41 isn't available and it's more efficient than the SSE2 'unpack' default approach. It also adds shuffle decode support for the EXTRQ / INSERTQ cases when the instructions are handling full byte-sized extractions / insertions.

From this foundation, future patches will be able to make use of the instructions for situations that use their ability to extract/insert at the bit level.
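
For illustration only (not part of the patch), the effect of the immediate forms can be sketched with the SSE4A intrinsics from <ammintrin.h>; the function names below are made up, and this assumes the _mm_extracti_si64(x, len, idx) / _mm_inserti_si64(x, y, len, idx) forms exposed by Clang/GCC (compile with -msse4a), with len/idx given in bits:

  #include <ammintrin.h>

  // EXTRQ: zero-extend byte 2 of the low 64 bits of v into the low bits of the result.
  __m128i extract_byte2(__m128i v) {
    return _mm_extracti_si64(v, /*len=*/8, /*idx=*/16);
  }

  // INSERTQ: insert the low byte of src at byte 3 of dst's low 64 bits.
  __m128i insert_byte3(__m128i dst, __m128i src) {
    return _mm_inserti_si64(dst, src, /*len=*/8, /*idx=*/24);
  }

In both cases the upper 64 bits of the result are undefined, as noted above.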

Differential Revision: http://reviews.llvm.org/D10146

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241508 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-06 20:46:41 +00:00
Simon Pilgrim
ecb00f403c [X86][SSE] Use the general SMAX/SMIN/UMAX/UMIN opcodes and remove the X86 implementation
With the completion of D9746 there is now a common implementation of integer signed/unsigned min/max nodes, removing the need for the equivalent X86 specific implementations.

This patch removes the old X86ISD nodes, legalizes the relevant SSE2/SSE41/AVX2/AVX512 instructions for the ISD versions and converts the small amount of existing X86 code.
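
As a rough sketch (not from the patch) of what now flows through the generic path: a select-of-compare loop like the hypothetical one below is vectorized into ISD::SMIN nodes, which the legalized SSE4.1/AVX2 instructions such as PMINSD can then match directly.

  #include <cstdint>

  // Hypothetical example: element-wise signed min, the kind of code the
  // vectorizer turns into ISD::SMIN nodes.
  void smin_v(int32_t *out, const int32_t *a, const int32_t *b, int n) {
    for (int i = 0; i < n; ++i)
      out[i] = a[i] < b[i] ? a[i] : b[i];
  }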

Differential Revision: http://reviews.llvm.org/D10947

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241506 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-06 20:30:47 +00:00
Ahmed Bougacha
0810814bcc [X86] Don't generate vbroadcasti128 for v4i64 splats from memory.
We used to erroneously match:
    (v4i64 shuffle (v2i64 load), <0,0,0,0>)

Whereas vbroadcasti128 is more like:
    (v4i64 shuffle (v2i64 load), <0,1,0,1>)

This problem doesn't exist for vbroadcastf128, which kept matching
the intrinsic after r231182.  We should perhaps re-introduce the
intrinsic here as well, but that's a separate issue still being
discussed.

While there, add some proper vbroadcastf128 tests.  We don't currently
match those, like for loading vbroadcastsd/ss on AVX (the reg-reg
broadcasts were added in AVX2).

Fixes PR23886.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240488 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-24 00:07:16 +00:00
Simon Pilgrim
e2d3e4467e [X86][SSE] Vectorize v2i32 to v2f64 conversions
This patch enables support for the conversion of v2i32 to v2f64 to use the CVTDQ2PD xmm instruction and stay on the SSE unit instead of scalarizing, sign extending to i64 and using CVTSI2SDQ scalar conversions.
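
As an illustration (not from the patch), the SSE2 intrinsic below performs exactly this conversion - the two low 32-bit integers become two doubles with a single CVTDQ2PD instead of two scalar CVTSI2SD conversions:

  #include <emmintrin.h>

  // Convert the two low 32-bit integers of v to a pair of doubles (cvtdq2pd).
  __m128d two_ints_to_doubles(__m128i v) {
    return _mm_cvtepi32_pd(v);
  }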

Differential Revision: http://reviews.llvm.org/D10433

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239855 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-16 21:40:28 +00:00
Simon Pilgrim
dd5cde6e60 [X86] Removed (unused) FSRL x86 operation
This patch removes the old X86ISD::FSRL op, which allowed float vectors to use the byte right shift operations (causing a domain switch...).

Since the refactoring of the shuffle lowering code this no longer has any use.

Differential Revision: http://reviews.llvm.org/D10169

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238906 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 08:32:36 +00:00
Chandler Carruth
828f5b807c [x86] Implement a faster vector population count based on the PSHUFB
in-register LUT technique.

Summary:
A description of this technique can be found here:
http://wm.ite.pl/articles/sse-popcount.html

The core of the idea is to use an in-register lookup table and the
PSHUFB instruction to compute the population count for the low and high
nibbles of each byte, and then to use horizontal sums to aggregate these
into vector population counts with wider element types.

On x86 there is an instruction that will directly compute the horizontal
sum for the low 8 and high 8 bytes, giving vNi64 popcount very easily.
Various tricks are used to get vNi32 and vNi16 from the vNi8 that the
LUT computes.
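
A minimal SSSE3 sketch of the technique (illustrative only, not the backend code; compile with -mssse3) - per-byte counts from two PSHUFB nibble lookups, then PSADBW against zero to sum the bytes into each 64-bit half:

  #include <tmmintrin.h>

  __m128i popcnt_v2i64(__m128i v) {
    const __m128i lut      = _mm_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3,
                                           1, 2, 2, 3, 2, 3, 3, 4);
    const __m128i low_mask = _mm_set1_epi8(0x0f);
    __m128i lo  = _mm_and_si128(v, low_mask);                    // low nibble of each byte
    __m128i hi  = _mm_and_si128(_mm_srli_epi16(v, 4), low_mask); // high nibble of each byte
    __m128i cnt = _mm_add_epi8(_mm_shuffle_epi8(lut, lo),        // pshufb LUT lookups
                               _mm_shuffle_epi8(lut, hi));
    return _mm_sad_epu8(cnt, _mm_setzero_si128());               // psadbw: byte sums per 64-bit half
  }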

The base implementation of this, and most of the work, was done by Bruno
in a follow-up to D6531. See Bruno's detailed post there for lots of
timing information about these changes.

I have extended Bruno's patch in the following ways:

0) I committed the new tests with baseline sequences so this shows
   a diff, and regenerated the tests using the update scripts.

1) Bruno had noticed and mentioned in IRC a redundant mask that
   I removed.

2) I introduced a particular optimization for the i32 vector cases where
   we use PSHL + PSADBW to compute the low i32 popcounts, and PSHUFD
   + PSADBW to compute doubled high i32 popcounts. This takes advantage
   of the fact that to line up the high i32 popcounts we have to shift
   them anyways, and we can shift them by one fewer bit to effectively
   divide the count by two. While the PSHUFD based horizontal add is no
   faster, it doesn't require registers or load traffic the way a mask
   would, and provides more ILP as it happens on different ports with
   high throughput.

3) I did some code cleanups throughout to simplify the implementation
   logic.

4) I refactored it to continue to use the parallel bitmath lowering when
   SSSE3 is not available to preserve the performance of that version on
   SSE2 targets where it is still much better than scalarizing as we'll
   still do a bitmath implementation of popcount even in scalar code
   there.

With #1 and #2 above, I analyzed the result in IACA for sandybridge,
ivybridge, and haswell. In every case I measured, the throughput is the
same or better using the LUT lowering, even v2i64 and v4i64, and even
compared with using the native popcnt instruction! The latency of the
LUT lowering is often higher than the latency of the scalarized popcnt
instruction sequence, but I think those latency measurements are deeply
misleading. Keeping the operation fully in the vector unit and having
many chances for increased throughput seems much more likely to win.

With this, we can lower every integer vector popcount implementation
using the LUT strategy if we have SSSE3 or better (and thus have
PSHUFB). I've updated the operation lowering to reflect this. This also
fixes an issue where we were horribly scalarizing some AVX lowerings.

Finally, there are some remaining cleanups. There is duplication between
the two techniques in how they perform the horizontal sum once the byte
population count is computed. I'm going to factor and merge those two in
a separate follow-up commit.

Differential Revision: http://reviews.llvm.org/D10084

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238636 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 03:20:59 +00:00
Elena Demikhovsky
078088b790 AVX-512: Implemented all forms of sign-extend and zero-extend instructions for KNL and SKX
Implemented DAG lowering for all these forms.
Added tests for DAG lowering and encoding.

By Igor Breger (igor.breger@intel.com)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238301 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-27 08:15:19 +00:00
Elena Demikhovsky
86425451e5 AVX-512: Enabled SSE intrinsics on AVX-512.
The UseAVX predicate disables pattern selection on AVX-512;
this is necessary so that DAG selection picks the EVEX forms.
But mapping SSE intrinsics to AVX-512 instructions is not ready yet,
so I replaced UseAVX with HasAVX for the intrinsic patterns.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237903 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-21 14:01:32 +00:00
Sanjay Patel
f575e9f902 Use intrinsic pattern to make a simpler match
This is a follow-on to r236740 where I took Andrea's advice
in D9504 to remove a redundant pattern...except that I removed
the wrong pattern!

AFAICT, there is no change in the final code produced because 
subsequent passes would clean up the extra instructions created
by the more complicated pattern.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@236743 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-07 16:51:12 +00:00
Sanjay Patel
39cf555429 [x86] eliminate unnecessary shuffling/moves with unary scalar math ops (PR21507)
Finish the job that was abandoned in D6958 following the refactoring in
http://reviews.llvm.org/rL230221:

1. Uncomment the intrinsic def for the AVX r_Int instruction.
2. Add missing r_Int entries to the load folding tables; there are already
   tests that check these in "test/CodeGen/X86/fold-load-unops.ll", so I
   haven't added any more in this patch.
3. Add patterns to solve PR21507 ( https://llvm.org/bugs/show_bug.cgi?id=21507 ).

So instead of this:

  movaps	%xmm0, %xmm1
  rcpss	%xmm1, %xmm1
  movss	%xmm1, %xmm0

We should now get:

  rcpss	%xmm0, %xmm0

And instead of this:

  vsqrtss	%xmm0, %xmm0, %xmm1
  vblendps	$1, %xmm1, %xmm0, %xmm0 ## xmm0 = xmm1[0],xmm0[1,2,3]

We should now get:

  vsqrtss	%xmm0, %xmm0, %xmm0
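
A hypothetical C++ reproducer (names and intrinsic choice are mine, not from the patch) of the shape of code behind PR21507 - a scalar estimate op on lane 0 with the other lanes passed through, where the trailing move is redundant because rcpss already preserves the upper lanes of its source:

  #include <xmmintrin.h>

  __m128 rcp_lane0(__m128 x) {
    __m128 r = _mm_rcp_ss(x);   // rcpss: lane 0 = ~1/x[0], lanes 1-3 = x[1..3]
    return _mm_move_ss(x, r);   // previously cost an extra movaps + movss
  }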


Differential Revision: http://reviews.llvm.org/D9504



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@236740 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-07 15:48:53 +00:00
Sanjay Patel
f060668270 [x86] remove RCPPS and RSQRTPS intrinsic instruction definitions
We don't need codegen-only intrinsic instructions for the vector forms of these instructions.

This makes the reciprocal estimate instruction lowering identical to how we handle normal
square roots: (V)SQRTPS / (V)SQRTPD.

No existing regression tests fail with this patch.

Differential Revision: http://reviews.llvm.org/D9301



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@236013 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-28 18:48:45 +00:00
Sanjay Patel
fd55d49f65 remove obsolete pattern matches for scalar SSE ops
The blendi pattern should always replace the insertps pattern after:
http://reviews.llvm.org/rL232850
http://reviews.llvm.org/rL235124



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235930 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-27 22:23:17 +00:00
Sanjay Patel
3f1f6571cc [x86] Add store-folded memop patterns for vcvtps2ph
Differential Revision: http://reviews.llvm.org/D7296



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235517 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-22 16:11:19 +00:00
Andrea Di Biagio
6c347524e2 [X86][AVX] Fix failure due to a missing ISel pattern to select VBROADCAST nodes (PR23259).
This fixes a regression introduced at revision 218263.

On AVX, if we optimize for size, a splat build_vector of a load
is lowered into a VBROADCAST node. This is done even if the value type of the
splat build_vector node is v2i64.

Since AVX doesn't support v2f64/v2i64 broadcasts, revision 218263 added two
extra tablegen patterns to allow selecting a VMOVDDUPrm from an X86VBroadcast
where the scalar element comes from a loadi64/loadf64.

However, revision 218263 forgot to add an extra fallback pattern for the case
where we have a X86VBroadcast of a loadi64 with multiple uses.

This patch adds the missing tablegen pattern in X86InstrSSE.td.
This patch also adds an extra test to 'splat-for-size.ll' to verify that ISel
doesn't crash with a 'fatal error in the backend' due to a missing AVX pattern
to select v2i64 X86ISD::BROADCAST nodes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235509 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-22 14:53:39 +00:00
Simon Pilgrim
01eaaa72bf [X86][SSE] Provide execution domains for scalar floating point operations
This is an updated version of Chandler's patch D7402 that got accepted but never committed, and has bit-rotted a bit since.

I've updated the execution domain declarations to match the approach of the packed templates and also added some extra scalar unary tests.

Differential Revision: http://reviews.llvm.org/D9095

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235372 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-21 08:40:22 +00:00
Sanjay Patel
8765e82c83 [X86, AVX] adjust tablegen patterns to generate better code for scalar insertion into zero vector (PR23073)
For code like this:

define <8 x i32> @load_v8i32() {
  ret <8 x i32> <i32 7, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0>
}

We produce this AVX code:

_load_v8i32:                            ## @load_v8i32
  movl	$7, %eax
  vmovd	%eax, %xmm0
  vxorps	%ymm1, %ymm1, %ymm1
  vblendps	$1, %ymm0, %ymm1, %ymm0 ## ymm0 = ymm0[0],ymm1[1,2,3,4,5,6,7]
  retq

There are at least 2 bugs in play here:

    1. We're generating a blend when a scalar move does the same job using 2 fewer instruction bytes (see FIXMEs).
    2. We're not matching an existing pattern that would eliminate the xor and blend entirely. The zero bytes are free with vmovd.

The 2nd fix involves an adjustment of "AddedComplexity" [1] and mostly masks the 1st problem.

[1] AddedComplexity has close to no documentation in the source. 
The best we have is this comment: "roughly corresponds to the number of nodes that are covered". 
It appears that x86 has bastardized this definition by inflating its values for some other
undocumented reason. For example, we have a pattern with "AddedComplexity = 400" (!). 

I searched my way to this page:
https://groups.google.com/forum/#!topic/llvm-dev/5UX-Og9M0xQ

Differential Revision: http://reviews.llvm.org/D8794



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233931 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-02 17:56:17 +00:00
Sanjay Patel
ace5d1d606 typo; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233761 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 21:24:47 +00:00
Sanjay Patel
1b10376319 [X86, AVX] fix zero-extending integer operand load patterns to use integer instructions
This is a follow-on to r233704 and another partial fix for PR22685:
https://llvm.org/bugs/show_bug.cgi?id=22685



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233724 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 18:43:43 +00:00
Ahmed Bougacha
18128bd376 [X86] Generate MOVNT for all vector types.
We used to miss non-Q YMM integer vectors and non-Q/D XMM integer
vectors.
While there, change the v4i32 patterns to prefer MOVNTDQ.
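
As a hypothetical illustration (not from the patch), a non-temporal store of a v8i32 value - one of the previously missed non-Q YMM integer types - written with Clang's __builtin_nontemporal_store and a GCC/Clang vector extension type:

  typedef int v8si __attribute__((vector_size(32)));

  void stream_store_v8i32(v8si *dst, v8si v) {
    __builtin_nontemporal_store(v, dst);   // with AVX this should now select vmovntdq
  }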


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233668 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 03:16:51 +00:00
Sanjay Patel
cae9695fbb [X86, AVX2] Replace inserti128 and extracti128 intrinsics with generic shuffles
This should complete the job started in r231794 and continued in r232045:
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.

AVX2 introduced proper integer variants of the hacked integer insert/extract
C intrinsics that were created for this same functionality with AVX1.

This should complete the removal of insert/extract128 intrinsics.

The Clang precursor patch for this change was checked in at r232109.
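
For reference (illustrative, not from the patch), these are the AVX2 integer 128-bit lane intrinsics involved; their LLVM-level counterparts are now expressed as generic vector shuffles rather than target-specific intrinsics:

  #include <immintrin.h>

  __m128i get_high_lane(__m256i v) {
    return _mm256_extracti128_si256(v, 1);      // vextracti128 $1
  }

  __m256i set_high_lane(__m256i v, __m128i hi) {
    return _mm256_inserti128_si256(v, hi, 1);   // vinserti128 $1
  }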



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232120 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 23:16:18 +00:00
Juergen Ributzka
9814f7b92c Add the "vbroadcasti128" instruction back.
This is a follow-up to r231182. This adds the "vbroadcasti128" instruction
back, but without the intrinsic mapping. Also add a test to check the
instruction encoding.

This is related to rdar://problem/18742778.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231945 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-11 17:29:03 +00:00
Elena Demikhovsky
13cc6f2b6e AVX-512: Added SKX forms of shift instructions.
Added rotation instructions, encoding only.
Added encoding tests for all these forms.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231916 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-11 10:25:42 +00:00
Ahmed Bougacha
30fdbe5948 [X86] Remove stale comment. NFC.
It turns out 256-bit V[SZ]EXT nodes are still
generated by the new shuffle lowering, so this
is here to stay!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231422 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-05 23:18:41 +00:00
Juergen Ributzka
e49da9aff1 Remove 'llvm.x86.avx2.vbroadcasti128' intrinsic.
The intrinsic is no longer generated by the front-end. Remove the intrinsic and
auto-upgrade it to a vector shuffle.

Reviewed by Nadav

This is related to rdar://problem/18742778.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231182 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-04 00:13:25 +00:00
Elena Demikhovsky
975e9b99aa AVX-512: Added mask and rounding mode for scalar arithmetics
Added more tests for scalar instructions to distinguish between AVX and AVX-512 forms.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230891 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-01 07:44:04 +00:00
Craig Topper
8df1c6ef09 [X86] Remove the blendpd/blendps/pblendw/pblendd intrinsics. They can be represented by shuffle_vector instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230860 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-28 19:33:17 +00:00
Elena Demikhovsky
d8e5adcd92 restructured X86 scalar unary operation templates
I made the templates general; there is no need to define a pattern separately for each instruction/intrinsic.
Now we only need to add the r_Int pattern for AVX.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230221 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 14:14:02 +00:00
Craig Topper
f9c1605d56 [X86] Add some missing redundant MMX and SSE encodings for disassembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230165 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-22 07:50:41 +00:00
Sanjay Patel
74e8bf678a canonicalize a v2f64 blendi of 2 registers
This canonicalization step saves us 3 pattern matching possibilities * 4 math ops
for scalar FP math that uses xmm regs. The backend can re-commute the operands
post-instruction-selection if that makes register allocation better.

The tests in llvm/test/CodeGen/X86/sse-scalar-fp-arith.ll cover this scenario already,
so there are no new tests with this patch.

Differential Revision: http://reviews.llvm.org/D7777


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230024 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 16:55:27 +00:00
Craig Topper
ed42dcef75 [X86] Remove AVX2 and SSE2 pslldq and psrldq intrinsics. We can represent them in IR with vector shuffles now. All their uses have been removed from clang in favor of shuffles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229640 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 06:24:44 +00:00
Sanjay Patel
544843cee1 prevent folding a scalar FP load into a packed logical FP instruction (PR22371)
Change the memory operands in sse12_fp_packed_scalar_logical_alias from scalars to vectors. 
That's what the hardware packed logical FP instructions define: 128-bit memory operands.
There are no scalar versions of these instructions...because this is x86.

Generating the wrong code (folding a scalar load into a 128-bit load) is still possible
using the peephole optimization pass and the load folding tables. We won't completely
solve this bug until we either fix the lowering in fabs/fneg/fcopysign and any other
places where scalar FP logic is created or fix the load folding in foldMemoryOperandImpl()
to make sure it isn't changing the size of the load.

Differential Revision: http://reviews.llvm.org/D7474


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229531 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 20:08:21 +00:00
Simon Pilgrim
0638f4e115 [X86][SSE] Add SSE MOVQ instructions to SSEPackedInt domain
Patch to explicitly add the SSE MOVQ (rr,mr,rm) instructions to the SSEPackedInt domain - this prevents a number of costly domain switches.

Differential Revision: http://reviews.llvm.org/D7600

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229439 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 21:50:56 +00:00
Craig Topper
4031c08c87 [X86] Remove the multiply by 8 that goes into the shift constant for X86ISD::VSHLDQ and X86ISD::VSRLDQ. This simplifies the pattern matching in isel and allows these nodes to become the patterns embedded in the instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229431 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 20:52:07 +00:00
Craig Topper
e124dc723b [X86] Remove x86.avx2.psll.dq.bs and x86.avx2.psrl.dq.bs intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229430 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 20:51:59 +00:00
Simon Pilgrim
28f299b62d [X86][AVX2] vpslldq/vpsrldq byte shifts for AVX2
This patch refactors the existing lowerVectorShuffleAsByteShift function to add support for 256-bit vectors on AVX2 targets.

It also fixes a tablegen issue that prevented the lowering of vpslldq/vpsrldq vec256 instructions.

Differential Revision: http://reviews.llvm.org/D7596

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229311 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 13:19:52 +00:00
Sanjay Patel
fa1b3ba1f0 [SSE/AVX] Use multiclasses to reduce the mass of scalar math patterns; NFCI
This takes the preposterous number of patterns in this section
that were last added to in r219033 down to just plain obnoxious.

With a little more work, we might get this down to just comical.

I've added more test cases to the existing file that checks these
patterns, but it seems that some of these patterns simply don't
exist with today's shuffle lowering.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229158 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-13 21:52:42 +00:00
Craig Topper
db9343fb40 [X86] Remove int_x86_sse2_psll_dq_bs and int_x86_sse2_psrl_dq_bs intrinsics. The builtins aren't used by clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229069 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-13 06:07:24 +00:00
Craig Topper
cc5e6d56fc [X86] Remove 256-bit and 512-bit memop pattern fragments. They are no longer used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228563 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-09 04:04:53 +00:00
Craig Topper
3824fd3a25 [X86] Remove the remaining uses of memop from AVX and AVX2 instruction patterns. AVX and AVX2 can handle unaligned loads being folded so we can just use 'load'
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228551 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 22:38:25 +00:00
Craig Topper
20d15157e4 [X86] Add xrstors/xsavec/xsaves/clflushopt/clwb/pcommit instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228283 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 08:51:06 +00:00
Chandler Carruth
b0589710cc [x86] Give movss and movsd execution domains in the x86 backend.
This associates movss and movsd with the packed single and packed double
execution domains (resp.). While this is largely cosmetic, as we now
don't have weird ping-ponging between single and double precision, it
is also useful because it keeps the domain fixing algorithm from seeing
domain breaks that don't actually exist. It will also be much more
important if we have an execution domain default other than packed
single, as that would cause us to mix movss and movsd with integer
vector code on a regular basis, a very bad mixture.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228135 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 10:58:53 +00:00
Chandler Carruth
5ad147196d [x86] Add missing patterns for andps, orps, xorps, and andnps.
Specifically, the existing patterns were scalar-only. These cover the
packed vector bitwise operations when specifically requested with pseudo
instructions. This is particularly important in SSE1 where we can't
actually emit a logical operation on a v2i64 as that isn't a legal type.

This will be tested in subsequent patches which form the floating point
and patterns in more places.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228123 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 09:06:01 +00:00
Sanjay Patel
9b4cc76745 Merge consecutive 16-byte loads into one 32-byte load (PR22329)
This patch detects consecutive vector loads using the existing 
EltsFromConsecutiveLoads() logic. This fixes:
http://llvm.org/bugs/show_bug.cgi?id=22329

This patch effectively reverts the tablegen additions of D6492 / 
http://reviews.llvm.org/rL224344 ...which in hindsight were a horrible hack.

The test cases that were added with that patch are simply modified to load
from varying offsets of a base pointer. These loads did not match the existing
tablegen patterns.

A happy side effect of doing this optimization earlier is that we can now fold
the load into a math op where possible; this is shown in some of the updated
checks in the test file.
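
A hypothetical example (function name is mine) of the kind of pattern this targets: two consecutive 16-byte loads that together form one 32-byte vector, which can now be matched and emitted as a single 32-byte load:

  #include <immintrin.h>

  __m256 load_as_one(const float *p) {
    __m128 lo = _mm_loadu_ps(p);        // bytes [0, 16)
    __m128 hi = _mm_loadu_ps(p + 4);    // bytes [16, 32)
    return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);
  }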

Differential Revision: http://reviews.llvm.org/D7303



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228006 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 18:54:00 +00:00
Simon Pilgrim
44513da617 [X86][SSE] Float comparisons can sometimes be safely commuted
For ordered, unordered, equal and not-equal tests, packed float and double comparison instructions can be safely commuted without affecting the results. This patch checks the comparison mode of the (v)cmpps + (v)cmppd instructions and commutes the operands if it can.

Differential Revision: http://reviews.llvm.org/D7178

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227145 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 22:29:24 +00:00
Simon Pilgrim
3ba85ab23a [X86][PCLMUL] Enable commutation for PCLMUL instructions
Patch to allow (v)pclmulqdq to be commuted - swaps the src registers and inverts the immediate (low/high) src mask.

Differential Revision: http://reviews.llvm.org/D7180

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227141 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 22:00:18 +00:00
Sanjay Patel
956d6f0cf5 Model sqrtsd as a binary operation with one source operand tied to the destination (PR14221)
This patch fixes the following miscompile:

define void @sqrtsd(<2 x double> %a) nounwind uwtable ssp {
  %0 = tail call <2 x double> @llvm.x86.sse2.sqrt.sd(<2 x double> %a) nounwind 
  %a0 = extractelement <2 x double> %0, i32 0
  %conv = fptrunc double %a0 to float
  %a1 = extractelement <2 x double> %0, i32 1
  %conv3 = fptrunc double %a1 to float
  tail call void @callee2(float %conv, float %conv3) nounwind
  ret void
}

Current codegen:

sqrtsd	%xmm0, %xmm1        ## high element of %xmm1 is undef here
xorps	%xmm0, %xmm0
cvtsd2ss	%xmm1, %xmm0
shufpd	$1, %xmm1, %xmm1
cvtsd2ss	%xmm1, %xmm1 ## operating on undef value
jmp	_callee

This is a continuation of http://llvm.org/viewvc/llvm-project?view=revision&revision=224624 ( http://reviews.llvm.org/D6330 ) 
which was itself a continuation of r167064 ( http://llvm.org/viewvc/llvm-project?view=revision&revision=167064 ).

All of these patches are partial fixes for PR14221 ( http://llvm.org/bugs/show_bug.cgi?id=14221 ); 
this should be the final patch needed to resolve that bug.

Differential Revision: http://reviews.llvm.org/D6885



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227111 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 18:42:16 +00:00
Craig Topper
ff763041d2 [X86] Give scalar VRNDSCALE instructions priority in AVX512 mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227039 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-25 08:49:22 +00:00
Craig Topper
e237954ed8 Remove tab characters. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227036 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-25 08:45:32 +00:00