Commit Graph

3258 Commits

Elena Demikhovsky
0880fe5997 AVX-512: I brought back the vector-shuffle-512-v8.ll test.
I re-generated it after all the AVX-512 shuffle optimizations.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239026 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 07:49:56 +00:00
Elena Demikhovsky
693d40eec0 Removed {}, NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239014 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 07:01:29 +00:00
Sanjay Patel
e4e5cf5a66 make reciprocal estimate code generation more flexible by adding command-line options (3rd try)
The first try (r238051) to land this was reverted due to ExecutionEngine build failure;
that was hopefully addressed by r238788.

The second try (r238842) to land this was reverted due to BUILD_SHARED_LIBS failure;
that was hopefully addressed by r238953.

This patch adds a TargetRecip class for processing many recip codegen possibilities.
The class is intended to handle both command-line options to llc as well
as options passed in from a front-end such as clang with the -mrecip option.

The x86 backend is updated to use the new functionality.
Only -mcpu=btver2 with -ffast-math should see a functional change from this patch.
All other x86 CPUs continue to *not* use reciprocal estimates by default with -ffast-math.

Differential Revision: http://reviews.llvm.org/D8982



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239001 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 01:32:35 +00:00
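
The -mrecip option format that TargetRecip processes is a comma-separated list: each entry names an estimate kind (e.g. divf, vec-sqrtd), may be prefixed with '!' to disable it, and may append ':N' refinement steps. A minimal standalone parsing sketch under that assumed grammar (not the actual TargetRecip class):

    #include <cstdio>
    #include <sstream>
    #include <string>
    #include <vector>

    // Toy parser for a spec such as "vec-divf,!sqrtd,divf:2".
    struct RecipEntry {
      std::string Kind;     // e.g. "divf", "vec-sqrtd"
      bool Enabled = true;  // a '!' prefix disables the estimate
      int RefinementSteps = 1;
    };

    std::vector<RecipEntry> parseRecip(const std::string &Spec) {
      std::vector<RecipEntry> Out;
      std::stringstream SS(Spec);
      std::string Tok;
      while (std::getline(SS, Tok, ',')) {
        RecipEntry E;
        if (!Tok.empty() && Tok[0] == '!') { E.Enabled = false; Tok.erase(0, 1); }
        std::size_t Colon = Tok.find(':');
        if (Colon != std::string::npos) {
          E.RefinementSteps = std::stoi(Tok.substr(Colon + 1));
          Tok.erase(Colon);
        }
        E.Kind = Tok;
        Out.push_back(E);
      }
      return Out;
    }

    int main() {
      for (const RecipEntry &E : parseRecip("vec-divf,!sqrtd,divf:2"))
        std::printf("%s enabled=%d steps=%d\n", E.Kind.c_str(), E.Enabled,
                    E.RefinementSteps);
    }
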
Asaf Badouh
ce375dc63a re-apply 238809
AVX-512: Implemented GETEXP instruction for KNL and SKX
Added rounding mode modifier for SQRTPS/PD
Added tests for encoding and intrinsics.
CR:
http://reviews.llvm.org/D9991


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238923 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 13:41:48 +00:00
Elena Demikhovsky
fc28da72f0 AVX-512: More code improvements in shuffles, NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238919 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 12:05:03 +00:00
Elena Demikhovsky
10eb2dd9df AVX-512: VSHUFPD instruction selection - code improvements
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238918 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 11:21:01 +00:00
Elena Demikhovsky
23dc4bbf1d AVX-512: Implemented SHUFF32x4/SHUFF64x2/SHUFI32x4/SHUFI64x2 instructions for SKX and KNL.
Added tests for encoding.

By Igor Breger (igor.breger@intel.com)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238917 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 10:56:40 +00:00
Simon Pilgrim
dd5cde6e60 [X86] Removed (unused) FSRL x86 operation
This patch removes the old X86ISD::FSRL op, which allowed float vectors to use the byte right-shift operations (causing a domain switch).

Since the refactoring of the shuffle lowering code this no longer has any use.

Differential Revision: http://reviews.llvm.org/D10169

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238906 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 08:32:36 +00:00
Rafael Espindola
a0bcb4184b Revert "make reciprocal estimate code generation more flexible by adding command-line options (2nd try)"
This reverts commit r238842.

It broke -DBUILD_SHARED_LIBS=ON build.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238900 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 05:32:44 +00:00
Sanjay Patel
871beb8dd7 make reciprocal estimate code generation more flexible by adding command-line options (2nd try)
The first try (r238051) to land this was reverted due to bot failures
that were hopefully addressed by r238788.

This patch adds a TargetRecip class for processing many recip codegen possibilities.
The class is intended to handle both command-line options to llc as well
as options passed in from a front-end such as clang with the -mrecip option.

The x86 backend is updated to use the new functionality.
Only -mcpu=btver2 with -ffast-math should see a functional change from this patch.
All other x86 CPUs continue to *not* use reciprocal estimates by default with -ffast-math.

Differential Revision: http://reviews.llvm.org/D8982



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238842 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 15:28:15 +00:00
Elena Demikhovsky
ccbc17f896 AVX-512: Shorten implementation of lowerV16X32VectorShuffle()
by using lowerVectorShuffleWithSHUFPS() and other shuffle-helper routines.
Added matching of the VALIGN instruction.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238830 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 13:43:18 +00:00
Asaf Badouh
aa9e1c528b revert 238809
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238810 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 07:45:19 +00:00
Asaf Badouh
82fa06895e AVX-512: Implemented GETEXP instruction for KNL and SKX
Added rounding mode modifier for SQRTPS/PD
Added tests for encoding and intrinsics.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238809 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 07:18:14 +00:00
Elena Demikhovsky
bbd7cab2b9 AVX-512: Optimized vector shuffle for v16f32 and v16i32 types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238743 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 13:26:18 +00:00
Elena Demikhovsky
af0e519127 AVX-512: Implemented VRANGEPD and VRANGEPS instructions for SKX.
Implemented DAG lowering for all these forms.
Added tests for encoding.

By Igor Breger (igor.breger@intel.com)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238738 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 11:05:34 +00:00
Elena Demikhovsky
aa62d8a6b2 AVX-512: Implemented vector shuffle lowering for v8i64 and v8f64 types.
I removed vector-shuffle-512-v8.ll; it is an auto-generated test and is no longer valid.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238735 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 09:49:53 +00:00
Elena Demikhovsky
9029910d9f AVX-512: Implemented VFIXUPIMMPD and VFIXUPIMMPS instructions for KNL and SKX
Implemented DAG lowering for all these forms.
Added tests for encoding.

by Igor Breger (igor.breger@intel.com)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238728 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 06:50:49 +00:00
Elena Demikhovsky
9f63519857 AVX-512: Fixed a bug in compress and expand intrinsics.
By Igor Breger (igor.breger@intel.com)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238724 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 06:30:13 +00:00
Matt Arsenault
5f3a6430d6 Add address space argument to isLegalAddressingMode
This is important because GPU targets can have different addressing
modes depending on the address space.

This only adds the argument, and does not update
any of the uses to provide the correct address space.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238723 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 05:31:59 +00:00
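
A hedged sketch of the interface shape, with toy stand-ins for the LLVM types: the hook now also receives the address space, so a GPU target can accept different (BaseReg + Scale*IndexReg + Offset) forms per space. The address-space numbering below is hypothetical.

    #include <cstdint>

    // Toy stand-in for TargetLowering::AddrMode.
    struct AddrMode {
      int64_t BaseOffs = 0;  // constant offset
      bool HasBaseReg = false;
      int64_t Scale = 0;     // Scale * IndexReg
    };

    struct ToyGPULowering {
      // New AddrSpace parameter; per this commit, callers are not yet
      // updated to pass a meaningful value.
      bool isLegalAddressingMode(const AddrMode &AM, unsigned AddrSpace) const {
        if (AddrSpace == 3)                    // hypothetical "local" space:
          return !AM.BaseOffs && !AM.Scale;    // register-only addressing
        return AM.Scale == 0 || AM.Scale == 1; // generic space: simple modes
      }
    };
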
Simon Pilgrim
08786dc314 Stripped trailing whitespace. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238654 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 13:01:42 +00:00
Chandler Carruth
fa68750e54 [x86] Unify the horizontal adding used for popcount lowering, taking the
best approach from each.

For vNi16, we use the SHL + ADD + SRL pattern, which seems easily the best.

For vNi32, we use the PUNPCK + PSADBW + PACKUSWB pattern. In some cases
there is a huge improvement with this in IACA's estimated throughput --
over 2x higher throughput!!!! -- but the measurements are too good to be
true. In one narrow case, the SHL + ADD + SHL + ADD + SRL pattern looks
slightly faster, but I'm not sure I believe any of the measurements at
this point. Both are the exact same uops though. Hard to be confident of
anything past that.

If anyone wants to collect very detailed (Agner-level) timings with the
result of this patch, or with the i32 case replaced with SHL + ADD + SHL
+ ADD + SRL, I'd be very interested. Note that you'll need to test it on
both Ivybridge and Haswell, with each of SSE3, SSSE3, and AVX selected, as
I saw unique behavior in each of these buckets with IACA all of which
should be checked against measured performance.

But this patch is still a useful improvement by dropping duplicate work
and getting the much nicer PSADBW lowering for v2i64.

I'd still like to rephrase this in terms of a generic horizontal sum. It's
a bit lame to have a special case of that just for popcount.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238652 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 10:35:03 +00:00
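
The vNi16 pattern named above, sketched as SSE2 intrinsics under the assumption that per-byte popcounts are already in cnt:

    #include <emmintrin.h> // SSE2

    // SHL + ADD + SRL: fold each pair of byte counts into an i16 count.
    static inline __m128i byte_counts_to_i16_counts(__m128i cnt) {
      __m128i up  = _mm_slli_epi16(cnt, 8); // move each low byte up a slot
      __m128i sum = _mm_add_epi8(up, cnt);  // high byte now holds lo + hi
      return _mm_srli_epi16(sum, 8);        // shift the pair sum back down
    }
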
Chandler Carruth
da8bb20158 [x86] Split out the horizontal byte sum lowering component of the LUT
lowering into a helper function.

NFC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238650 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 09:46:16 +00:00
Chandler Carruth
3279f2381b [x86] Replace the long spelling of getting a bitcast with the *much*
shorter one. NFC.

In addition to being much shorter to type and requiring fewer arguments,
this change saves over 30 lines from this one file, all wasted on total
boilerplate...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238640 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 04:23:13 +00:00
Chandler Carruth
b26a073acb [x86] Replace the long spelling of getting a bitcast with the new short
spelling. NFC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238639 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 04:19:57 +00:00
Chandler Carruth
89a133960b [sdag] Add the helper I most want to the DAG -- building a bitcast
around a value using its existing SDLoc.

Start using this in just one function to save omg lines of code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238638 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 04:14:10 +00:00
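
A plausible shape for the helper (a sketch; the authoritative definition is in SelectionDAG): build the ISD::BITCAST reusing the operand's own SDLoc, so call sites pass neither a location nor a redundant type check.

    // Sketch only; LLVM's SDValue/EVT/SelectionDAG types assumed in scope.
    SDValue SelectionDAG::getBitcast(EVT VT, SDValue V) {
      if (V.getValueType() == VT)
        return V;                                  // no-op bitcast elided
      return getNode(ISD::BITCAST, SDLoc(V), VT, V);
    }
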
Chandler Carruth
d8018eeac9 [x86] Restore the bitcasts I removed when refactoring this to avoid
shifting vectors of bytes as x86 doesn't have direct support for that.

This removes a bunch of redundant masking in the generated code for SSE2
and SSE3.

In order to avoid the really significant code size growth this would
have triggered, I also factored the completely repetitive logic for
shifting and masking into two lambdas, which in turn makes all of this
much easier to read IMO.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238637 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 04:05:11 +00:00
Chandler Carruth
828f5b807c [x86] Implement a faster vector population count based on the PSHUFB
in-register LUT technique.

Summary:
A description of this technique can be found here:
http://wm.ite.pl/articles/sse-popcount.html

The core of the idea is to use an in-register lookup table and the
PSHUFB instruction to compute the population count for the low and high
nibbles of each byte, and then to use horizontal sums to aggregate these
into vector population counts with wider element types.

On x86 there is an instruction that will directly compute the horizontal
sum for the low 8 and high 8 bytes, giving vNi64 popcount very easily.
Various tricks are used to get vNi32 and vNi16 from the vNi8 that the
LUT computes.

The base implementation of this, and most of the work, was done by Bruno
in a follow up to D6531. See Bruno's detailed post there for lots of
timing information about these changes.

I have extended Bruno's patch in the following ways:

0) I committed the new tests with baseline sequences so this shows
   a diff, and regenerated the tests using the update scripts.

1) Bruno had noticed and mentioned in IRC a redundant mask that
   I removed.

2) I introduced a particular optimization for the i32 vector cases where
   we use PSHL + PSADBW to compute the low i32 popcounts, and PSHUFD
   + PSADBW to compute doubled high i32 popcounts. This takes advantage
   of the fact that to line up the high i32 popcounts we have to shift
   them anyways, and we can shift them by one fewer bit to effectively
   divide the count by two. While the PSHUFD based horizontal add is no
   faster, it doesn't require registers or load traffic the way a mask
   would, and provides more ILP as it happens on different ports with
   high throughput.

3) I did some code cleanups throughout to simplify the implementation
   logic.

4) I refactored it to continue to use the parallel bitmath lowering when
   SSSE3 is not available to preserve the performance of that version on
   SSE2 targets where it is still much better than scalarizing as we'll
   still do a bitmath implementation of popcount even in scalar code
   there.

With #1 and #2 above, I analyzed the result in IACA for sandybridge,
ivybridge, and haswell. In every case I measured, the throughput is the
same or better using the LUT lowering, even v2i64 and v4i64, and even
compared with using the native popcnt instruction! The latency of the
LUT lowering is often higher than the latency of the scalarized popcnt
instruction sequence, but I think those latency measurements are deeply
misleading. Keeping the operation fully in the vector unit and having
many chances for increased throughput seems much more likely to win.

With this, we can lower every integer vector popcount implementation
using the LUT strategy if we have SSSE3 or better (and thus have
PSHUFB). I've updated the operation lowering to reflect this. This also
fixes an issue where we were horribly scalarizing some AVX lowerings.

Finally, there are some remaining cleanups. There is duplication between
the two techniques in how they perform the horizontal sum once the byte
population count is computed. I'm going to factor and merge those two in
a separate follow-up commit.

Differential Revision: http://reviews.llvm.org/D10084

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238636 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 03:20:59 +00:00
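
The core of the technique as a freestanding SSSE3 sketch (a scalar-callable shape, not the DAG lowering itself): a 16-entry nibble table is looked up with PSHUFB for the low and high nibbles of each byte, the two counts are byte-added, and PSADBW horizontally sums the bytes into per-i64 counts.

    #include <cstdio>
    #include <emmintrin.h>
    #include <tmmintrin.h> // SSSE3: _mm_shuffle_epi8 (PSHUFB)

    // Nibble-LUT popcount of two i64 lanes; compile with -mssse3.
    static inline __m128i popcnt_epi64_lut(__m128i v) {
      const __m128i lut  = _mm_setr_epi8(0,1,1,2, 1,2,2,3, 1,2,2,3, 2,3,3,4);
      const __m128i mask = _mm_set1_epi8(0x0f);
      __m128i lo  = _mm_and_si128(v, mask);                    // low nibbles
      __m128i hi  = _mm_and_si128(_mm_srli_epi16(v, 4), mask); // high nibbles
      __m128i cnt = _mm_add_epi8(_mm_shuffle_epi8(lut, lo),    // LUT lookups
                                 _mm_shuffle_epi8(lut, hi));
      return _mm_sad_epu8(cnt, _mm_setzero_si128()); // PSADBW byte sums
    }

    int main() {
      __m128i v = _mm_set_epi64x(-1LL, 0x0123456789abcdefLL);
      __m128i c = popcnt_epi64_lut(v);
      std::printf("%lld %lld\n",
                  (long long)_mm_cvtsi128_si64(c),
                  (long long)_mm_cvtsi128_si64(_mm_unpackhi_epi64(c, c)));
      // expect: 32 64
    }
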
Chandler Carruth
43d1e87d73 [x86] Restructure the parallel bitmath lowering of popcount into
a separate routine, generalize it to work for all the integer vector
sizes, and do general code cleanups.

This dramatically improves lowerings of byte and short element vector
popcount, but more importantly it will make the introduction of the
LUT-approach much cleaner.

The biggest cleanup I've done is to just force the legalizer to do the
bitcasting we need. We run these iteratively now and it makes the code
much simpler IMO. Other changes were minor, and mostly naming and
splitting things up in a way that makes it more clear what is going on.

The other significant change is to use a different final horizontal sum
approach. This is the same number of instructions as the old method, but
shifts left instead of right so that we can clear everything but the
final sum with a single shift right. This seems likely better than
a mask, which would usually have to be read from memory. It is
certainly fewer uops. Also, this will be temporary. This and the LUT
approach share the need of horizontal adds to finish the computation,
and we have more clever approaches than this one that I'll switch over
to.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238635 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 03:20:55 +00:00
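
For reference, the parallel bitmath approach in its scalar form (a sketch; the vector lowering performs the same arithmetic per lane, finishing with the horizontal-sum step discussed above):

    #include <cstdint>

    // Classic SWAR popcount: 2-bit, then 4-bit, then 8-bit partial sums
    // computed in parallel, finished with a horizontal add of the bytes.
    uint64_t popcount_bitmath(uint64_t x) {
      x = x - ((x >> 1) & 0x5555555555555555ULL);
      x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
      x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;  // per-byte sums
      return (x * 0x0101010101010101ULL) >> 56;    // horizontal add
    }
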
Jim Grosbach
586c0042da MC: Clean up MCExpr naming. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238634 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 01:25:56 +00:00
Reid Kleckner
cb95e9ef45 Remove debug prints from r238487
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238501 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-28 21:23:53 +00:00
Reid Kleckner
7738ecd62b Disable x86 tail call optimizations that jump through GOT
For x86 targets, do not do sibling call optimization when materializing
the callee's address would require a GOT relocation. We can still do
tail calls to internal functions, hidden functions, and protected
functions, because they do not require this kind of relocation. It is
still possible to get GOT relocations when the user explicitly asks for
it with musttail or -tailcallopt, both of which are supposed to
guarantee TCO.

Based on a patch by Chih-hung Hsieh.

Reviewers: srhines, timmurray, danalbert, enh, void, nadav, rnk

Subscribers: joerg, davidxl, llvm-commits

Differential Revision: http://reviews.llvm.org/D9799

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238487 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-28 20:44:28 +00:00
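
An illustrative pair of call sites (names hypothetical; exact codegen depends on target and relocation model): with -fPIC, the preemptible default-visibility callee may need its address materialized via the GOT, so it is no longer sibcall-optimized, while the hidden callee still can be.

    extern void ext_default(int); // preemptible: may require a GOT relocation
    extern __attribute__((visibility("hidden"))) void ext_hidden(int);

    void caller_default(int x) { ext_default(x); } // kept as call + ret
    void caller_hidden(int x)  { ext_hidden(x);  } // may become a plain jmp
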
Reid Kleckner
9417bdcc55 [WinEH] Remove debugging dump() call
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238472 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-28 20:02:05 +00:00
Elena Demikhovsky
078088b790 AVX-512: Implemented all forms of sign-extend and zero-extend instructions for KNL and SKX
Implemented DAG lowering for all these forms.
Added tests for DAG lowering and encoding.

By Igor Breger (igor.breger@intel.com)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238301 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-27 08:15:19 +00:00
Elena Demikhovsky
001a2ba63f AVX-512: fixed a bug in arithmetic operations lowering for i1 type
https://llvm.org/bugs/show_bug.cgi?id=23630



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238198 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-26 12:37:17 +00:00
Elena Demikhovsky
55fd78065f AVX-512: fixed a bug in lowering VSELECT for 512-bit vector
https://llvm.org/bugs/show_bug.cgi?id=23634



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238195 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-26 11:32:39 +00:00
Simon Pilgrim
4da23583b6 [X86][AVX2] Vectorized i16 shift operators
Part of D9474, this patch extends AVX2 v16i16 types to 2 x v8i32 vectors and uses i32 variable shifts before packing back to i16.

Adds AVX2 tests for v8i16 and v16i16 

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238149 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-25 17:49:13 +00:00
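
A sketch of the idea for one v8i16 half using AVX2 intrinsics (AVX2 has VPSLLVD for per-lane i32 shifts but nothing per-lane for i16): widen to i32, shift, and pack back.

    #include <immintrin.h> // AVX2; compile with -mavx2

    // Per-lane v8i16 shift-left via i32 variable shifts.
    static inline __m128i shl_v8i16_var(__m128i v, __m128i amt) {
      __m256i v32 = _mm256_cvtepu16_epi32(v);    // zero-extend to 8 x i32
      __m256i a32 = _mm256_cvtepu16_epi32(amt);
      __m256i r32 = _mm256_sllv_epi32(v32, a32); // VPSLLVD variable shift
      r32 = _mm256_and_si256(r32, _mm256_set1_epi32(0xFFFF)); // low 16 bits
      __m256i p = _mm256_packus_epi32(r32, r32); // pack within each lane
      return _mm_unpacklo_epi64(_mm256_castsi256_si128(p),
                                _mm256_extracti128_si256(p, 1));
    }
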
Elena Demikhovsky
17b7d6bf25 Added promotion of the EXTRACT_SUBVECTOR operand.
I encountered this case in one of the KNL tests for i1 vectors:
v16i1 = EXTRACT_SUBVECTOR v32i1, x



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238130 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-25 11:33:13 +00:00
Rafael Espindola
58bf2827d3 Revert "make reciprocal estimate code generation more flexible by adding command-line options"
This reverts commit r238051.

It broke some bots:

http://lab.llvm.org:8011/builders/llvm-ppc64-linux1/builds/18190

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238075 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-23 00:22:44 +00:00
Sanjay Patel
7e80a67d35 make reciprocal estimate code generation more flexible by adding command-line options
This patch adds a class for processing many recip codegen possibilities.
The TargetRecip class is intended to handle both command-line options to llc as well
as options passed in from a front-end such as clang with the -mrecip option.

The x86 backend is updated to use the new functionality.
Only -mcpu=btver2 with -ffast-math should see a functional change from this patch.
All other CPUs continue to *not* use reciprocal estimates by default with -ffast-math.

Differential Revision: http://reviews.llvm.org/D8982



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238051 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-22 21:10:06 +00:00
Simon Pilgrim
41c749a31f Fixed unused variable warning in non-assert builds from rL237885
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237889 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-21 10:22:10 +00:00
Simon Pilgrim
87d1836793 [X86][SSE] Improve support for 128-bit vector sign extension
This patch improves support for sign extension of the lower lanes of vectors of integers by making use of the SSE41 pmovsx* sign extension instructions where possible, and optimizing the sign extension by shifts on pre-SSE41 targets (avoiding the use of i64 arithmetic shifts which require scalarization).

It converts SIGN_EXTEND nodes to SIGN_EXTEND_VECTOR_INREG where necessary, which more closely matches the pmovsx* instructions than the default approach of using SIGN_EXTEND_INREG, which splits the operation (into an ANY_EXTEND lowered to a shuffle followed by shifts), making instruction matching difficult during lowering. The necessary support for SIGN_EXTEND_VECTOR_INREG has been added to the DAGCombiner.

Differential Revision: http://reviews.llvm.org/D9848

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237885 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-21 10:05:03 +00:00
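
The two paths the message contrasts, sketched for one case (sign-extending the low four i16 lanes to i32):

    #include <emmintrin.h>
    #include <smmintrin.h> // SSE4.1

    // SSE4.1: PMOVSXWD does the in-register sign extension directly.
    static inline __m128i sext_lo4_i16_sse41(__m128i v) {
      return _mm_cvtepi16_epi32(v);
    }

    // Pre-SSE4.1: duplicate each i16 into an i32 slot, then arithmetic
    // shift right by 16 so the sign bit fills the upper half. No i64
    // arithmetic shifts are needed.
    static inline __m128i sext_lo4_i16_sse2(__m128i v) {
      return _mm_srai_epi32(_mm_unpacklo_epi16(v, v), 16);
    }
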
Reid Kleckner
7681f6a1b0 [WinEH] Store pointers to the LSDA in the exception registration object
We aren't emitting the LSDA yet, so this will still fail to
assemble.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237852 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-20 23:08:04 +00:00
Hans Wennborg
fa13c712af Revert r237828 "[X86] Remove unused node after morphing it from shr to and."
This caused assertions during DAG combine: PR23601.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237843 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-20 22:31:55 +00:00
Benjamin Kramer
644f1ff184 [X86] Remove unused node after morphing it from shr to and.
In some cases it won't get cleaned up properly leading to crashes
downstream. PR23353.

Based on a patch by Davide Italiano.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237828 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-20 20:10:26 +00:00
Elena Demikhovsky
b65b24c0df AVX-512: fixed the algorithm for building vectors of i1 elements;
fixed extract/insert of i1 elements.
load i1 and zextload i1 must be done with "and $1, %reg" to prevent loading garbage.
Added a bunch of new tests.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237793 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-20 14:32:03 +00:00
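
The scalar analogue of the zextload fix, as a sketch: only bit 0 of a stored i1 is defined, so a zero-extending load must mask with 1 rather than trust the remaining bits.

    #include <cstdint>

    // "and $1, %reg" after the load, in C++ form: never trust bits 7..1.
    static inline uint32_t zextload_i1(const uint8_t *p) {
      return *p & 1u;
    }
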
David Majnemer
349f0b12a4 [X86] Implement the local-exec TLS model for Windows targets
We know that _tls_index is zero for local-exec TLS variables because
they are always defined in the executable.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237772 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-20 04:45:26 +00:00
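
Illustrative only: a TLS variable defined in the executable itself qualifies for the local-exec model, which is what lets the _tls_index load be folded to a known zero on Windows.

    // Defined in the .exe (not a DLL), so local-exec applies and
    // _tls_index is known to be zero.
    thread_local int Counter;

    int bump() { return ++Counter; }
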
Reid Kleckner
37f1bba13a Re-land r237175: [X86] Always return the sret parameter in eax/rax ...
This reverts commit r237210.

Also fix X86/complex-fca.ll to match the code that we used to generate
on win32 and now generate everywhere to conform to SysV.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237639 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-18 23:35:09 +00:00
David Blaikie
042dd34f9c Simplify IRBuilder::CreateCall* by using ArrayRef+initializer_list/braced init only
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237624 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-18 22:13:54 +00:00
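
A representative call site after the cleanup (names illustrative): arguments flow straight into the ArrayRef parameter as a braced init list.

    #include "llvm/IR/IRBuilder.h"

    // Usage sketch, not a complete program.
    llvm::Value *emitCall2(llvm::IRBuilder<> &B, llvm::Function *Callee,
                           llvm::Value *A0, llvm::Value *A1) {
      return B.CreateCall(Callee, {A0, A1});
    }
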
Elena Demikhovsky
1c21f2ef8c AVX-512: Added intrinsics for ADDSS/D, MULSS/D, SUBSS/D, DIVSS/D
instructions. These intrinsics come with a rounding mode.
Added intrinsics for MAXSS/D, MINSS/D - with and without SAE.

By Asaf Badouh (asaf.badouh@intel.com)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237560 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-18 07:24:19 +00:00
Elena Demikhovsky
324d41ce49 fixed compilation warning/error
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237559 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-18 07:10:25 +00:00