Commit Graph

6178 Commits

Author SHA1 Message Date
Simon Pilgrim
41cda40157 Reapplied D7816 & rL230177 & rL230278 - with an additional fix to ensure that the smallest build vector input scalar type is always used. Additional (crash) test cases already committed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230388 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 22:08:56 +00:00
Andrew Kaylor
8f475e9d77 Fixing eol-style
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230378 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 20:49:35 +00:00
Eric Christopher
7c611d59cc Revert:
Author: Simon Pilgrim <llvm-dev@redking.me.uk>
Date:   Mon Feb 23 23:04:28 2015 +0000

    Fix based on post-commit comment on D7816 & rL230177 - BUILD_VECTOR operand truncation was using the BV's output scalar type instead of the input type.

and

Author: Simon Pilgrim <llvm-dev@redking.me.uk>
Date:   Sun Feb 22 18:17:28 2015 +0000

    [DagCombiner] Generalized BuildVector Vector Concatenation

    The CONCAT_VECTORS combiner pass can transform the concat of two BUILD_VECTOR nodes into a single BUILD_VECTOR node.

    This patch generalises this to support any number of BUILD_VECTOR nodes, and also permits UNDEF nodes to be included as well.

    This was noticed as AVX vec128 -> vec256 canonicalization sometimes creates a CONCAT_VECTOR with a real vec128 lower and a vec128 UNDEF upper.

    Differential Revision: http://reviews.llvm.org/D7816

as the root cause of PR22678 which is causing an assertion inside the DAG combiner.

I'll follow up to the main thread as well.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230358 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 19:11:00 +00:00
Hans Wennborg
b499b73e30 Revert r230280: "Bugfix: SCEVExpander incorrectly marks increment operations as no-wrap"
This caused PR22674, failing this assert:

Instructions.h:2281: llvm::Value* llvm::PHINode::getOperand(unsigned int) const: Assertion `i_nocapture < OperandTraits<PHINode>::operands(this) && "getOperand() out of range!"' failed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230341 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 16:19:29 +00:00
Michael Kuperstein
2379e8a2ee [x32] x32 should use ebx as the base pointer.
This fixes the original issue in PR22655, but not the secondary one.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230334 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 15:27:13 +00:00
David Majnemer
fbdee9f0c0 X86: Only use 'lea' in Win64 epilogues if a frame pointer exists
We can only use 'add' in epilogues; 'lea' is not permitted unless we've
established a frame pointer in the prologue.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230286 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 00:11:32 +00:00
Sanjoy Das
8d16a81c33 Bugfix: SCEVExpander incorrectly marks increment operations as no-wrap
When emitting the increment operation, SCEVExpander marks the
operation as nuw or nsw based on the flags on the preincrement SCEV.
This is incorrect because, for instance, it is possible that {-6,+,1}
is <nuw> while {-6,+,1}+1 = {-5,+,1} is not.

This change teaches SCEV to mark the increment as nuw/nsw only if it
can explicitly prove that the increment operation won't overflow.

Apart from the attached test case, another (more realistic) manifestation
of the bug can be seen in Transforms/IndVarSimplify/pr20680.ll.
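
As a minimal sketch of the problem (hypothetical i8 IR, not the attached test case):
the induction variable below is {-6,+,1}, which takes the unsigned values 250..255 and
never wraps, while its increment {-5,+,1} reaches 255+1 = 0 on the final iteration, so
copying nuw onto the 'add' would be wrong.

  define void @wrap_example() {
  entry:
    br label %loop
  loop:
    %iv = phi i8 [ -6, %entry ], [ %iv.next, %loop ]  ; {-6,+,1}: 250..255, never wraps
    %iv.next = add i8 %iv, 1                          ; {-5,+,1}: wraps to 0 when %iv == 255
    %done = icmp eq i8 %iv.next, 0
    br i1 %done, label %exit, label %loop
  exit:
    ret void
  }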

NOTE: this change was landed with an incorrect commit message in
rL230275 and was reverted for that reason in rL230279.  This commit
message is the correct one.

Differential Revision: http://reviews.llvm.org/D7778



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230280 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 23:22:58 +00:00
Sanjoy Das
69048edf8a Revert 230275.
230275 got committed with an incorrect commit message due to a mixup
on my side.  Will re-land in a few moments with the correct commit
message.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230279 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 23:13:22 +00:00
Andrea Di Biagio
770e106ed6 [X86] Teach how to custom lower double-to-half conversions under fast-math.
This patch teaches the backend how to expand a double-half conversion into
a double-float conversion immediately followed by a float-half conversion.
We do this only under fast-math, and if float-half conversions are legal
for the target.
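
A rough IR-level sketch of the expansion (hypothetical; the actual change operates on
the FP_ROUND SelectionDAG node):

  define half @double_to_half(double %d) {
    %f = fptrunc double %d to float   ; double -> float
    %h = fptrunc float %f to half     ; float -> half, legal e.g. with F16C
    ret half %h
  }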

Added test CodeGen/X86/fastmath-float-half-conversion.ll

Differential Revision: http://reviews.llvm.org/D7832


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230276 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 22:59:02 +00:00
Sanjoy Das
7ebbc8de2f Fix bug 22641
The bug was a result of getPreStartForExtend interpreting nsw/nuw
flags on an add recurrence more strongly than is legal.  {S,+,X}<nsw>
implies S+X is nsw only if the backedge of the loop is taken at least
once.
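
A minimal illustration of why the backedge condition matters (hypothetical i8
arithmetic, not from the patch):

  S = 127, X = 1 (i8)
  backedge taken 0 times  =>  the recurrence only ever produces 127, so {127,+,1}
                              is vacuously <nsw>
  S + X = 128             =>  signed overflow in i8

Only a taken backedge guarantees that S+X was actually computed without wrapping.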

Differential Revision: http://reviews.llvm.org/D7808



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230275 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 22:55:13 +00:00
David Majnemer
ad6622575c X86: Use a smaller 'mov' instruction for stack probe calls
Prologue emission, in some cases, requires calls to a stack probe helper
function.  The amount of stack to probe is passed as a register
argument in the Win64 ABI but the instruction sequence used is
pessimistic: it assumes that the number of bytes to probe is greater
than 4 GB.

Instead, select a more appropriate opcode depending on the number of
bytes we are going to probe.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230270 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 21:50:30 +00:00
David Majnemer
d71e4c6218 X86: Use 'mov' instead of 'lea' in Win64 SEH prologues when possible
'mov' and 'lea' are equivalent when the displacement applied with 'lea'
is zero.  However, 'mov' should have a smaller encoding.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230269 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 21:50:27 +00:00
Bruno Cardoso Lopes
a7db376a63 [X86][MMX] Fix test to reflect current codegen
This test failed on several buildbots; it is a bit unclear how that happened,
since this was the previous behavior before r230248.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230258 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 20:57:46 +00:00
Andrew Kaylor
595050a793 Adding test for Windows EH frame variable remapping.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230250 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 20:04:51 +00:00
Andrew Kaylor
1d10231766 Remap frame variables for native Windows exception handling.
Differential Revision: http://reviews.llvm.org/D7770



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230249 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 20:01:56 +00:00
Bruno Cardoso Lopes
ee7b509aa3 Revert "[X86][MMX] Add MMX instructions to foldable tables"
This reverts commit r230226 since it breaks win buildbots.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230248 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 19:53:37 +00:00
Bruno Cardoso Lopes
01312dd0b4 [X86] Add specific mtriple in order to appease buildbots
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230229 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 15:33:40 +00:00
Bruno Cardoso Lopes
77d2363908 [X86][MMX] Add MMX instructions to foldable tables
Teach the peephole optimizer to work with MMX instructions by adding
entries into the foldable tables. This covers folding opportunities not
handled during isel.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230226 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 15:23:22 +00:00
Bruno Cardoso Lopes
c606f3a3cb [X86][MMX] Support folding loads in psll, psrl and psra intrinsics
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230225 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 15:23:14 +00:00
Bruno Cardoso Lopes
6916c75fba [X86][MMX] Add tests for pslli, psrli and psrai intrinsics
Add tests to cover the RR form of the pslli, psrli and psrai intrinsics.
In the next commit, the loads are going to be folded and the
instructions will use the RM form.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230224 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 15:23:06 +00:00
Elena Demikhovsky
fdafc8fd5e AVX-512: recommitted 229837 + bugfix + test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230223 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 15:12:31 +00:00
Simon Pilgrim
66c960350c [DagCombiner] Generalized BuildVector Vector Concatenation
The CONCAT_VECTORS combiner pass can transform the concat of two BUILD_VECTOR nodes into a single BUILD_VECTOR node.

This patch generalises this to support any number of BUILD_VECTOR nodes, and also permits UNDEF nodes to be included as well.

This was noticed as AVX vec128 -> vec256 canonicalization sometimes creates a CONCAT_VECTOR with a real vec128 lower and a vec128 UNDEF upper.
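
A hypothetical IR sketch of a pattern whose legalization produces such a node (a
vec128 value widened to vec256 with an undef upper half):

  define <8 x i32> @widen(<4 x i32> %a) {
    %w = shufflevector <4 x i32> %a, <4 x i32> undef,
                       <8 x i32> <i32 0, i32 1, i32 2, i32 3,
                                  i32 undef, i32 undef, i32 undef, i32 undef>
    ret <8 x i32> %w
  }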

Differential Revision: http://reviews.llvm.org/D7816

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230177 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-22 18:17:28 +00:00
Simon Pilgrim
b430a06e94 [X86][SSE] Added shuffle based integer zero extension tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230145 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-21 21:25:16 +00:00
David Majnemer
e95985d3a0 Win64: Stack alignment constraints aren't applied during SET_FPREG
Stack realignment occurs after the prolog, not during, for Win64.
Because of this, don't factor in the maximum stack alignment when
establishing a frame pointer.

This fixes PR22572.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230113 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-21 01:04:47 +00:00
Rafael Espindola
c093973970 Use short names for jumptable sections.
Also refactor code to remove some duplication.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230087 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 23:28:28 +00:00
Andrea Di Biagio
3583d23018 [X86][FastIsel] Teach how to select float-half conversion intrinsics.
This patch teaches X86FastISel how to select intrinsic 'convert_from_fp16' and
intrinsic 'convert_to_fp16'.
If the target has F16C, we can select VCVTPS2PHrr for a float-half conversion,
and VCVTPH2PSrr for a half-float conversion.
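
A sketch of the kind of call fast-isel can now select directly (hypothetical test
input; the overloaded intrinsic mangling shown here is an assumption for this era):

  declare float @llvm.convert.from.fp16.f32(i16)   ; intrinsic name/mangling assumed

  define float @half_to_float(i16 %h) {
    %f = call float @llvm.convert.from.fp16.f32(i16 %h)  ; -> vcvtph2ps with F16C
    ret float %f
  }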

Differential Revision: http://reviews.llvm.org/D7673


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230043 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 19:37:14 +00:00
Chandler Carruth
efbbaefea5 [x86] Remove the old vector shuffle lowering code and its flag.
The new shuffle lowering has been the default for some time. I've
enabled the new legality testing by default with no really blocking
regressions. I've fuzz tested this very heavily (many millions of fuzz
test cases have passed at this point). And this cleans up a ton of code.
=]

Thanks again to the many folks that helped with this transition. There
was a lot of work by others that went into the new shuffle lowering to
make it really excellent.

In case you aren't using a diff algorithm that can handle this:
  X86ISelLowering.cpp: 22 insertions(+), 2940 deletions(-)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229964 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 04:25:04 +00:00
Chandler Carruth
07ef8904ad [x86] Now that the new vector shuffle legality is enabled and everything
is going well, remove the flag and the code for the old legality tests.

This is the first step toward removing the entire old vector shuffle
lowering. *Much* more code to delete coming up next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229963 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 03:59:35 +00:00
Chandler Carruth
38749b8e07 [x86] Make the new vector shuffle legality test on by default, which
reflects the fact that the x86 backend can in fact lower any shuffle you
want it to with reasonably high code quality.

My recent work on the new vector shuffle has made this regress *very*
little. The diff in the test cases makes me very, very happy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229958 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 03:05:47 +00:00
Chandler Carruth
4fd100f9e9 [x86] Clean up a couple of test cases with the new update script. Split
one test case that is only partially tested in 32-bits into two test
cases so that the script doesn't generate massive spews of tests for the
cases we don't care about.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229955 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 02:44:13 +00:00
Chandler Carruth
90b8e791ac Revert r229944: EH: Prune unreachable resume instructions during Dwarf EH preparation
This doesn't pass 'ninja check-llvm' for me. Lots of tests, including
the ones updated, fail with crashes and other explosions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229952 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 02:15:36 +00:00
Reid Kleckner
49ab3a626a EH: Prune unreachable resume instructions during Dwarf EH preparation
Today a simple function that only catches exceptions and doesn't run
destructor cleanups ends up containing a dead call to _Unwind_Resume
(PR20300). We can't remove these dead resume instructions during normal
optimization because inlining might introduce additional landingpads
that do have cleanups to run. Instead we can do this during EH
preparation, which is guaranteed to run after inlining.

Fixes PR20300.

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D7744

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229944 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 01:00:19 +00:00
Eric Christopher
8c4bb575e1 Revert "AVX-512: Full implementation for VRNDSCALESS/SD instructions and intrinsics."
The instructions were being generated on architectures that don't support avx512.

This reverts commit r229837.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229942 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 00:45:28 +00:00
Sanjay Patel
4dad5f2731 add X86 load folding tests for unary math ops
X86 load folding is fragile; eg, the tests here
don't work without AVX even though they should. This
is because we have a mix of tablegen patterns that have
been added over time, and we have a load folding table
used by the peephole optimizer that has to be kept in 
sync with the ever-changing ISA and tablegen defs.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229870 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 16:59:11 +00:00
Chandler Carruth
b7012af85f [x86] Delete still more piles of complex code now that we have a good
systematic lowering of v8i16.

This required a slight strategy shift to prefer unpack lowerings in more
places. While this isn't a cut-and-dry win in every case, it is in the
overwhelming majority. There are only a few places where the old
lowering would probably be a touch faster, and then only by a small
margin.

In some cases, this is yet another significant improvement.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229859 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 15:21:57 +00:00
Chandler Carruth
c57e90422f [x86] Teach the unpack lowering how to lower with an initial unpack in
addition to lowering to trees rooted in an unpack.

This saves shuffles and or registers in many various ways, lets us
handle another class of v4i32 shuffles pre SSE4.1 without domain
crosses, etc.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229856 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 15:06:13 +00:00
Chandler Carruth
7f583a4201 [x86] Dramatically improve v8i16 shuffle lowering by not using its
terribly complex partial blend logic.

This code path was one of the more complex and bug prone when it first
went in and it hasn't fared much better. Ultimately, with the simpler
basis for unpack lowering and support bit-math blending, this is
completely obsolete. In the worst case without this we generate
different but equivalent instructions. However, in many cases we
generate much better code. This is especially true when blends or pshufb
is available.

This does expose one (minor) weakness of the unpack lowering that I'll
try to address.

In case you were wondering, this is actually a big part of what I've
been trying to pull off in the recent string of commits.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229853 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 14:08:24 +00:00
Chandler Carruth
943b2ca2de [x86] Remove the final fallback in the v8i16 lowering that isn't really
needed, and significantly improve the SSSE3 path.

This makes the new strategy much more clear. If we can blend, we just go
with that. If we can't blend, we try to permute into an unpack so
that we handle cases where the unpack doing the blend also simplifies
the shuffle. If that fails and we've got SSSE3, we now call into
factored-out pshufb lowering code so that we leverage the fact that
pshufb can set up a blend for us while shuffling. This generates great
code, especially because we *know* we don't have a fast blend at this
point. Finally, we fall back on decomposing into permutes and blends
because we do at least have a bit-math-based blend if we need to use
that.

This pretty significantly improves some of the v8i16 code paths. We
never need to form pshufb for the single-input shuffles because we have
effective target-specific combines to form it there, but we were missing
its effectiveness in the blends.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229851 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 13:56:49 +00:00
Chandler Carruth
c3d7858505 [x86] Simplify the pre-SSSE3 v16i8 lowering significantly by decomposing
them into permutes and a blend with the generic decomposition logic.

This works really well in almost every case and lets the code only
manage the expansion of a single input into two v8i16 vectors to perform
the actual shuffle. The blend-based merging is often much nicer than the
pack based merging that this replaces. The only place where it isn't is when
we end up blending between two packs when we could do a single pack. To
handle that case, just teach the v2i64 lowering to handle these blends
by digging out the operands.

With this we're down to only really random permutations that cause an
explosion of instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229849 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 13:15:12 +00:00
Chandler Carruth
3d4542ce3d [x86] Remove the insanely over-aggressive unpack lowering strategy for
v16i8 shuffles, and replace it with new facilities.

This uses precise patterns to match exact unpacks, and the new
generalized unpack lowering only when we detect a case where we will
have to shuffle both inputs anyways and they terminate in exactly
a blend.

This fixes all of the blend horrors that I uncovered by always lowering
blends through the vector shuffle lowering. It also removes *sooooo*
much of the crazy instruction sequences required for v16i8 lowering
previously. Much cleaner now.

The only "meh" aspect is that we sometimes use pshufb+pshufb+unpck when
it would be marginally nicer to use pshufb+pshufb+por. However, the
difference there is *tiny*. In many cases it's a win because we re-use
the pshufb mask. In others, we get to avoid the pshufb entirely. I've
left a FIXME, but I'm dubious we can really do better than this. I'm
actually pretty happy with this lowering now.

For SSE2 this exposes some horrors that were really already there. Those
will have to be fixed by changing a different path through the v16i8
lowering.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229846 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 12:10:37 +00:00
Elena Demikhovsky
675d06d1d0 AVX-512: Full implementation for VRNDSCALESS/SD instructions and intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229837 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 10:48:04 +00:00
Chandler Carruth
ac2b1a1bb3 [x86] Add support for bit-wise blending and use it in the v8 and v16
lowering paths. I'm going to be leveraging this to simplify a lot of the
overly complex lowering of v8 and v16 shuffles in pre-SSSE3 modes.

Sadly, this isn't profitable on v4i32 and v2i64. There, the float and
double blending instructions for pre-SSE4.1 are actually pretty good,
and we can't beat them with bit math. And once SSE4.1 comes around we
have direct blending support and this ceases to be relevant.

Also, some of the test cases look odd because the domain fixer
canonicalizes these to floating point domain. That's OK, it'll use the
integer domain when it matters and some day I may be able to update
enough of LLVM to canonicalize the other way.

This restores almost all of the regressions from teaching x86's vselect
lowering to always use vector shuffle lowering for blends. The remaining
problems are because the v16 lowering path is still doing crazy things.
I'll be re-arranging that strategy in more detail in subsequent commits
to finish recovering the performance here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229836 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 10:46:52 +00:00
Chandler Carruth
a8fb39af83 [x86,sdag] Two interrelated changes to the x86 and sdag code.
First, don't combine bit masking into vector shuffles (even ones the
target can handle) once operation legalization has taken place. Custom
legalization of vector shuffles may exist for these patterns (making the
predicate return true) but that custom legalization may in some cases
produce the exact bit math this matches. We only really want to handle
this prior to operation legalization.

However, the x86 backend, in a fit of awesome, relied on this. What it
would do is mark VSELECTs as expand, which would turn them into
arithmetic, which this would then match back into vector shuffles, which
we would then lower properly. Amazing.

Instead, the second change is to teach the x86 backend to directly form
vector shuffles from VSELECT nodes with constant conditions, and to mark
all of the vector types we support lowering blends as shuffles as custom
VSELECT lowering. We still mark the forms which actually support
variable blends as *legal* so that the custom lowering is bypassed, and
the legal lowering can even be used by the vector shuffle legalization
(yes, i know, this is confusing. but that's how the patterns are
written).

This makes the VSELECT lowering much more sensible, and in fact should
fix a bunch of bugs with it. However, as you'll see in the test cases,
right now what it does is point out the *hilarious* deficiency of the
new vector shuffle lowering when it comes to blends. Fortunately, my
very next patch fixes that. I can't submit it yet, because that patch,
somewhat obviously, forms the exact and/or pattern that the DAG combine
is matching here! Without this patch, teaching the vector shuffle
lowering to produce the right code infloops in the DAG combiner. With
this patch alone, we produce terrible code but at least lower through
the right paths. With both patches, all the regressions here should be
fixed, and a bunch of the improvements (like using 2 shufps with no
memory loads instead of 2 andps with memory loads and an orps) will
stay. Win!

There is one other change worth noting here. We had hilariously wrong
vectorization cost estimates for vselect because we fell through to the
code path that assumed all "expand" vector operations are scalarized.
However, the "expand" lowering of VSELECT is vector bit math, most
definitely not scalarized. So now we go back to the correct if horribly
naive cost of "1" for "not scalarized". If anyone wants to add actual
modeling of shuffle costs, that would be cool, but this seems an
improvement on its own. Note the removal of 16 and 32 "costs" for doing
a blend. Even in SSE2 we can blend in fewer than 16 instructions. ;] Of
course, we don't right now because of OMG bad code, but I'm going to fix
that. Next patch. I promise.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229835 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 10:36:19 +00:00
Chandler Carruth
00c954ffc4 [x86] Merge checks for a recently added test case that is the same on
all SSE variants and AVX variants.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229770 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 23:20:49 +00:00
Reid Kleckner
f89d9b1c75 Add an IR-to-IR test for dwarf EH preparation using opt
This tests the simple resume instruction elimination logic that we have
before making some changes to it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229768 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 23:17:41 +00:00
Reid Kleckner
ae09ebc540 dos2unix the WinEH file and tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229735 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 19:52:46 +00:00
Andrew Kaylor
a4976167c4 Adding implementation to outline C++ catch handlers for native Windows 64 exception handling.
Differential Revision: http://reviews.llvm.org/D7363



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229715 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 18:31:51 +00:00
Michael Kuperstein
9571ea6620 Fixes two issue in SimplifyDemandedBits of sext_in_reg:
1) We should not try to simplify if the sext has multiple uses
2) There is no need to simplify if the source value is already sign-extended.

Patch by Gil Rapaport <gil.rapaport@intel.com>

Differential Revision: http://reviews.llvm.org/D6949

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229659 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 09:43:40 +00:00
Chandler Carruth
4e8a4638e9 [x86] Refactor the bit shift code the same as I just did the byte shift
code.

While this didn't have the miscompile (it used MatchLeft consistently)
it missed some cases where it could use right shifts. I've added a test
case Craig Topper came up with to exercise the right shift matching.

This code is really identical between the two. I'm going to merge them
next so that we don't keep two copies of all of this logic.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229655 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 09:19:58 +00:00
Daniel Jasper
66bd6852bb Remove experimental options to control machine block placement.
This reverts r226034. Benchmarking with those flags has not revealed
anything interesting.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229648 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 08:18:07 +00:00
Elena Demikhovsky
87483ed180 AVX-512: Added support for FP instructions with embedded rounding mode.
By Asaf Badouh <asaf.badouh@intel.com>



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229645 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 07:59:20 +00:00
Craig Topper
052d754ccb [X86] Add another test case for the bug fixed in r229642. With the bug a vpsrldq was emitted instead of pslldq.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229643 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 07:45:43 +00:00
Chandler Carruth
c9520b48ae [x86] Rewrite the byte shift detection to not use boolean variables to
track state.

I didn't like this in the code review because the pattern tends to be
error prone, but I didn't see a clear way to rewrite it. Turns out that
there were bugs here, I found them when fuzz testing our shuffle
lowering for correctness on x86.

The core of the problem is that we need to consistently test all our
preconditions for the same directionality of shift and the same input
vector. Instead, formulate this as two predicates (one doesn't depend on
the input in any way), pass things like the directionality and input
vector as inputs, and loop over the alternatives.

This fixes a pattern of very rare miscompiles coming out of this code.
Turned up roughly 4 out of every 1 million v8 shuffles in my fuzz
testing. The new code is over half a million test runs with no failures
yet. I've also fuzzed every other function in the lowering code with
over 3.5 million test cases and not discovered any other miscompiles.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229642 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 07:13:48 +00:00
Craig Topper
ed42dcef75 [X86] Remove AVX2 and SSE2 pslldq and psrldq intrinsics. We can represent them in IR with vector shuffles now. All their uses have been removed from clang in favor of shuffles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229640 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-18 06:24:44 +00:00
Andrea Di Biagio
b3ff6a88b6 [X86][FastIsel] Teach how to select scalar integer to float/double conversions.
This patch teaches fast-isel how to select a (V)CVTSI2SSrr for an integer to 
float conversion, and how to select a (V)CVTSI2SDrr for an integer to double
conversion.
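
A hypothetical sketch of the IR patterns involved (not the committed test):

  define float @int_to_float(i32 %a) {
    %c = sitofp i32 %a to float    ; selected as (v)cvtsi2ss
    ret float %c
  }

  define double @int_to_double(i32 %a) {
    %c = sitofp i32 %a to double   ; selected as (v)cvtsi2sd
    ret double %c
  }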

Added test 'fast-isel-int-float-conversion.ll'.

Differential Revision: http://reviews.llvm.org/D7698


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229589 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 23:40:58 +00:00
Rafael Espindola
3b75cfe179 Add r228939 back with a fix.
The problem in the original patch was not switching back to .text after printing
an eh table.

Original message:

On ELF, put PIC jump tables in a non executable section.

Fixes PR22558.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229586 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 23:34:51 +00:00
Rafael Espindola
54b2025420 Add a test showing the problem in r228939.
If an EH table is printed in between the function and the jump table we would
fail to switch back to the text section to print the jump table.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229580 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 23:21:46 +00:00
Simon Pilgrim
cbc2ca5ec9 [X86][SSE] Generalised unpckl/unpckh shuffle matching
Added commuted unpckl/unpckh shuffle matching patterns as many cases containing undefined lanes fail to commute by themselves.

Differential Revision: http://reviews.llvm.org/D7564

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229571 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 22:24:32 +00:00
Sanjay Patel
ef23f042ce use a triple instead of a cpu; less buildbot sadness
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229563 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 21:59:54 +00:00
Rafael Espindola
51beb495fc Add testcases I missed in r229541.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229542 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 20:50:39 +00:00
Sanjay Patel
166769cfb4 make basic block label matching more flexible for less sad buildbots
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229535 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 20:29:31 +00:00
Sanjay Patel
544843cee1 prevent folding a scalar FP load into a packed logical FP instruction (PR22371)
Change the memory operands in sse12_fp_packed_scalar_logical_alias from scalars to vectors. 
That's what the hardware packed logical FP instructions define: 128-bit memory operands.
There are no scalar versions of these instructions...because this is x86.

Generating the wrong code (folding a scalar load into a 128-bit load) is still possible
using the peephole optimization pass and the load folding tables. We won't completely
solve this bug until we either fix the lowering in fabs/fneg/fcopysign and any other
places where scalar FP logic is created or fix the load folding in foldMemoryOperandImpl()
to make sure it isn't changing the size of the load.
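
As a hypothetical sketch of where the scalar FP logic comes from (era-appropriate IR,
not the committed test): fabs on a scalar is lowered to a 128-bit bitwise AND, so
folding the 4-byte scalar load into the 16-byte memory form of 'andps' would read
past the scalar.

  declare float @llvm.fabs.f32(float)

  define float @fabs_load(float* %p) {
    %x = load float* %p                       ; 4-byte scalar load
    %r = call float @llvm.fabs.f32(float %x)  ; lowered via 128-bit FP logic
    ret float %r
  }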

Differential Revision: http://reviews.llvm.org/D7474


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229531 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 20:08:21 +00:00
Sanjay Patel
1115b2c27e Canonicalize splats as build_vectors (PR22283)
This is a follow-on patch to:
http://reviews.llvm.org/D7093

That patch canonicalized constant splats as build_vectors, 
and this patch removes the constant check so we can canonicalize
all splats as build_vectors.
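
For reference, a non-constant splat in IR looks like this (hypothetical example);
after this patch it is canonicalized to a BUILD_VECTOR as well:

  define <4 x i32> @splat(i32 %x) {
    %ins = insertelement <4 x i32> undef, i32 %x, i32 0
    %splat = shufflevector <4 x i32> %ins, <4 x i32> undef, <4 x i32> zeroinitializer
    ret <4 x i32> %splat
  }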

This fixes the 2nd test case in PR22283:
http://llvm.org/bugs/show_bug.cgi?id=22283

The unfortunate code duplication between SelectionDAG and DAGCombiner
is discussed in the earlier patch review. At least this patch is just
removing code...

This improves an existing x86 AVX test and changes codegen in an ARM test.

Differential Revision: http://reviews.llvm.org/D7389


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229511 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 16:54:32 +00:00
Andrea Di Biagio
16389eee3f [X86][FastISel] Add missing flag -fast-isel-abort to run lines in test fast-isel-fptrunc-fpext.ll.
Flag -fast-isel-abort is required in order to verify that X86FastISel
never fails to select FPExt (float-to-double) and FPTrunc (double-to-float).
No Functional change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229489 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 12:25:49 +00:00
Elena Demikhovsky
199f58a198 AVX-512: changes in intel_ocl_bi calling conventions
- added mask types v8i1 and v16i1 to possible function parameters
- enabled passing 512-bit vectors in standard CC
- added a test for KNL intel_ocl_bi conventions


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229482 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 09:20:12 +00:00
Michael Kuperstein
e275542046 [X86] Combine vector anyext + and into a vector zext
Vector zext tends to get legalized into a vector anyext (represented as a vector shuffle with an undef vector plus a bitcast) that then gets ANDed with a mask that zeroes the undef elements.
Combine this into an explicit shuffle with a zero vector instead. This allows shuffle lowering to match it as a zext, instead of matching it as an anyext and emitting an explicit AND.
This combine only covers a subset of the cases, but it's a start.
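
A hypothetical sketch of the IR that gives rise to this pattern during legalization:

  define <4 x i32> @zext_v4i16(<4 x i16> %a) {
    ; legalization turns this into a shuffle-with-undef (anyext) plus an AND mask;
    ; the combine rewrites it as a shuffle with a zero vector instead.
    %z = zext <4 x i16> %a to <4 x i32>
    ret <4 x i32> %z
  }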

Differential Revision: http://reviews.llvm.org/D7666

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229480 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 08:22:51 +00:00
Chandler Carruth
1e357351be [x86] Teach the unpack lowering to try wider element unpacks.
This allows it to match still more places where previously we would have
to fall back on floating point shuffles or other more complex lowering
strategies.

I'm hoping to replace some of the hand-rolled unpack matching with this
routine as it gets more and more clever.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229463 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 02:12:24 +00:00
Hal Finkel
4ba3a67430 Specify arch in test/CodeGen/X86/float-conv-elim.ll
This test was failing on non-x86 hosts because it specified a cpu of x86_64,
but not an architecture. x86_64 is obviously not a valid cpu on all
architectures.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229460 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-17 00:11:19 +00:00
Cameron McInally
cdddfe0cb3 [AVX512] Make 512b vector floating point rounds legal on AVX512.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229445 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 22:15:42 +00:00
Simon Pilgrim
0638f4e115 [X86][SSE] Add SSE MOVQ instructions to SSEPackedInt domain
Patch to explicitly add the SSE MOVQ (rr,mr,rm) instructions to SSEPackedInt domain - prevents a number of costly domain switches.

Differential Revision: http://reviews.llvm.org/D7600

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229439 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 21:50:56 +00:00
Mehdi Amini
2deb1d0b54 SelectionDAG: fold (fp_to_u/sint (s/uint_to_fp)) here too
Update SPARC tests to match.

From: Fiona Glaser <fglaser@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229438 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 21:47:58 +00:00
Craig Topper
4031c08c87 [X86] Remove the multiply by 8 that goes into the shift constant for X86ISD::VSHLDQ and X86ISD::VSRLDQ. This simplifies the pattern matching in isel and allows these nodes to become the patterns embedded in the instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229431 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 20:52:07 +00:00
Chandler Carruth
cbe6ecfc81 [x86] Add a generic unpack-targeted lowering technique. This can be used
to generically lower blends and is particularly nice because it is
available from SSE2 onward. This removes a lot of the remaining domain
crossing blends in SSE2 code.

I'm hoping to replace some of the "interleaved" lowering hacks with
something closer to this which should be more principled. First, this
needs to learn how to detect and use other interleavings besides that of
the natural type provided. That will be a follow-up patch though.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229378 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 12:28:18 +00:00
Chandler Carruth
e62fbca6b7 [x86] Switch this test to use checks generated by my update script. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229377 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 12:23:22 +00:00
Chandler Carruth
29679ccc12 [x86] Add initial basic support for forming blends of v16i8 vectors.
This blend instruction is ... really lame. The register usage is insane.
As a consequence this is probably only *barely* better than 2 pshufbs
followed by a por, and that mostly because it only has to read from
a single memory location.

However, this doesn't fix as much as I kind of expected, so more to go.
Pretty sure that the ordering and delegation of v16i8 is just really,
really bad.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229373 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 10:58:23 +00:00
Chandler Carruth
ab497238cb [x86] Add some more test cases for i8 vector blends.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229372 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 10:51:49 +00:00
Craig Topper
74b9ad3485 [X86] Add support for lowering shuffles to 256-bit PALIGNR instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229359 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 06:29:06 +00:00
Craig Topper
abdf58f7f9 [X86] Remove some hard tab characters from tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229358 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 06:29:02 +00:00
Chandler Carruth
454c3997b4 [x86] Teach the 128-bit vector shuffle lowering routines to take
advantage of the existence of a reasonable blend instruction.

The 256-bit vector shuffle lowering has leveraged the general technique
of decomposed shuffles and blends for quite some time, but this never
made it back into the 128-bit code, and there are a large number of
patterns where this is substantially better. For example, this removes
almost all domain crossing in vector shuffles that involve some blend
and some permutation with SSE4.1 and later. See the massive reduction
in 'shufps' for integer test cases in this commit.

This isn't perfect yet for a few reasons:

1) The v8i16 shuffle lowering continues to plague me. We don't always
   form an unpack-based blend when that would be better. But the wins
   pretty drastically outstrip the losses here.
2) The v16i8 shuffle lowering is just a disaster here. I never went and
   implemented blend support here for some terrible reason. I'll do
   that next probably. I've not updated it for now.

More variations on this technique are coming as well -- we don't
shuffle-into-unpack or shuffle-into-palignr, both of which would also be
profitable.

Note that some test cases grow significantly in the number of
instructions, but I expect them to actually be faster. We use
pshufd+pshufd+blendw instead of a single shufps, but the pshufd's are
very likely to pipeline well (two ports on most modern intel chips) and
the blend is a *very* fast instruction. The domain switch penalty will
essentially always be more than a blend instruction, which is the only
increase in tree height.
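
A hypothetical example of the kind of shuffle this affects -- part permutation, part
blend -- which can now be decomposed into a permute of each input followed by a blend:

  define <4 x i32> @permute_then_blend(<4 x i32> %a, <4 x i32> %b) {
    %s = shufflevector <4 x i32> %a, <4 x i32> %b,
                       <4 x i32> <i32 1, i32 4, i32 3, i32 6>
    ret <4 x i32> %s
  }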

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229350 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 01:52:02 +00:00
Chandler Carruth
aa35e012a3 [x86] Clean up a few test cases with the update script. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229349 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 01:39:50 +00:00
Simon Pilgrim
f4e056ac2a Added (still inefficient) shuffle test case for PR21138
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229321 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 18:21:39 +00:00
Simon Pilgrim
afcb895fe1 Added some test cases of missed opportunities to use unpckl/unpckh shuffles
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229313 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 15:07:45 +00:00
Simon Pilgrim
28f299b62d [X86][AVX2] vpslldq/vpsrldq byte shifts for AVX2
This patch refactors the existing lowerVectorShuffleAsByteShift function to add support for 256-bit vectors on AVX2 targets.

It also fixes a tablegen issue that prevented the lowering of vpslldq/vpsrldq vec256 instructions.

Differential Revision: http://reviews.llvm.org/D7596

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229311 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 13:19:52 +00:00
Chandler Carruth
3e93916175 [x86] Add the test case from PR22412, we now get this right even with
the new vector shuffle legality.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229310 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 12:45:05 +00:00
Chandler Carruth
fbde8bffba [x86] Teach the decomposed shuffle/blend lowering to use an early blend
when that will allow it to lower with a single permute instead of
multiple permutes.

It tries to detect when it will only have to do a single permute in
either case to maximize folding of loads and such.

This cuts a *lot* of the avx2 shuffle permute counts in half. =]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229309 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 12:42:15 +00:00
Chandler Carruth
72753f87f2 [SDAG] Teach the SelectionDAG to canonicalize vector shuffles of splats
directly into blends of the splats.

These patterns show up even very late in the vector shuffle lowering
where we don't have any chance for DAG combining to kick in, and
blending is a tremendously simpler operation to model. By coercing the
shuffle into a blend we can much more easily match and lower shuffles of
splats.

Immediately with this change there are significantly more blends being
matched in the x86 vector shuffle lowering.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229308 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 12:18:12 +00:00
Chandler Carruth
52f1b6dbed [x86] Stop shuffling zero vectors. =]
I was somewhat surprised this pattern really came up, but it does. It
seems better to just directly handle it than try to special case every
place where we end up forming a shuffle that devolves to a shuffle of
a zero vector.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229301 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 10:34:52 +00:00
Chandler Carruth
46d3e580ed [x86] When splitting 256-bit vectors into 128-bit vectors, don't extract
subvectors from buildvectors. That doesn't really make any sense and it
breaks all of the down-stream matching of buildvectors to cleverly lower
shuffles.

With this, we now get the shift-based lowering of 256-bit vector
shuffles with AVX1 when we split them into 128-bit vectors. We also do
much better on the zero-extension patterns, although there remains quite
a bit of room for improvement here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229299 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 10:12:02 +00:00
Chandler Carruth
4d2dfa703e [x86] Update some tests with the latest version of my script and llc.
This mostly adds some shuffle decode comments and cleans up indentation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229296 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 09:26:15 +00:00
Chandler Carruth
62ba2b29d8 [x86] Add a slight variation on some of the other generic shuffle
lowerings -- one which decomposes into an initial blend followed by
a permute.

Particularly on newer chips, blends are handled independently of
shuffles and so this is much less bottlenecked on the single port that
floating point shuffles are executed with on Intel.

I'll be adding this lowering to a bunch of other code paths in
subsequent commits to handle still more places where we can effectively
leverage blends when they're available in the ISA.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229292 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 08:26:30 +00:00
Chandler Carruth
8e8b9bd42f [x86] Add a test case for PR22390 which was a dup of PR22377 and fixed
by r229285. This is a nice different test case though, so I'd like to
have the extra testing of these kinds of patterns.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229286 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 07:05:50 +00:00
Chandler Carruth
5ee516549f [x86] Fix PR22377, a regression with the new vector shuffle legality
test.

This was just a matter of the DAG combine for vector shuffles being too
aggressive. This is a bit of a grey area, but I think generally if we
can re-use intermediate shuffles, we should. Certainly, given the test
cases I have available, this seems like the right call.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229285 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 07:01:10 +00:00
Chandler Carruth
9bb943b185 [x86] Switch a collection of tests explicitly to the new vector shuffle
legality test (essentially, everything is legal).

I'm planning to make this the default shortly, but I'd like to fix
a collection of the bugs it exposes first, and this will let me easily
test them. It also showcases both the improvements and a few of the
regressions triggered by the change. The biggest improvements by far are
the significantly reduced shuffling and domain crossing in the combining
test case. The biggest regressions are missing some clever blending
patterns.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229284 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 06:37:21 +00:00
Chandler Carruth
0294b517ac [x86] Remove the now-default-on flag for the new vector shuffle lowering
strategy from a bunch of tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229283 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 06:20:51 +00:00
Chandler Carruth
da0198de41 [x86] Teach my test updating script about another quirk of the printed
asm and port the mmx vector shuffle test to it.

Not thrilled with how it handles the stack manipulation logic, but I'm
much less bothered by that than I am by updating the test manually. =]
If anyone wants to teach the test checks management script about stack
adjustment patterns, that'd be cool too.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229268 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 00:08:01 +00:00
Simon Pilgrim
6d5ee8a8b5 [X86][XOP] Enable commutation for XOP instructions
Patch to allow XOP instructions (integer comparison and integer multiply-add) to be commuted. The comparison instructions sometimes require the compare mode to be flipped but the remaining instructions can use default commutation modes.

This patch also sets the SSE domains of all the XOP instructions.

Differential Revision: http://reviews.llvm.org/D7646

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229267 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-14 22:40:46 +00:00
Andrea Di Biagio
47cd120a18 [optnone] Skip pass Constant Hoisting on optnone functions.
Added test CodeGen/X86/constant-hoisting-optnone.ll to verify that
pass Constant Hoisting is not run on optnone functions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229258 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-14 15:11:48 +00:00
Simon Pilgrim
fba633f4b6 [X86] Ensure integer domain on scalar load/store stack folding tests. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229257 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-14 14:10:44 +00:00
Matthias Braun
821ec14add Revert "On ELF, put PIC jump tables in a non executable section."
This reverts commit r228939.

The commit broke something in the output of exception handling tables on
darwin x86-64.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229203 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-14 01:16:54 +00:00
Sanjay Patel
fa1b3ba1f0 [SSE/AVX] Use multiclasses to reduce the mass of scalar math patterns; NFCI
This takes the preposterous number of patterns in this section
that were last added to in r219033 down to just plain obnoxious.

With a little more work, we might get this down to just comical.

I've added more test cases to the existing file that checks these
patterns, but it seems that some of these patterns simply don't
exist with today's shuffle lowering.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229158 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-13 21:52:42 +00:00
Andrea Di Biagio
59d115311a [CodeGenPrepare] Removed duplicate logic. SimplifyCFG already knows how to speculate calls to cttz/ctlz.
SimplifyCFG now knows how to speculate calls to intrinsic cttz/ctlz that are
'cheap' for the target. Therefore, some of the logic in CodeGenPrepare
that was originally added at revision 224899 can now be removed.

This patch is basically a non-functional change. It removes the duplicated
logic in CodeGenPrepare and converts all the existing target specific tests
for cttz/ctlz into SimplifyCFG tests.
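
A hypothetical example of the kind of pattern SimplifyCFG now speculates when cttz is
cheap for the target (turning the branch into a select):

  declare i32 @llvm.cttz.i32(i32, i1)

  define i32 @count_trailing(i32 %x) {
  entry:
    %is.zero = icmp eq i32 %x, 0
    br i1 %is.zero, label %exit, label %do.count
  do.count:
    %c = call i32 @llvm.cttz.i32(i32 %x, i1 true)  ; %x != 0 here, so 'true' is safe
    br label %exit
  exit:
    %res = phi i32 [ 32, %entry ], [ %c, %do.count ]
    ret i32 %res
  }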

Differential Revision: http://reviews.llvm.org/D7608


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229105 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-13 14:15:48 +00:00
Chandler Carruth
00ae03a747 Revert a series of commits starting at r228886 which is triggering some
regressions for LLDB on Linux. Rafael indicated on lldb-dev that we
should just go ahead and revert these but that he wasn't at a computer.
The patches backed out are as follows:

r228980: Add support for having multiple sections with the name and ...
r228889: Invert the section relocation map.
r228888: Use the existing SymbolTableIndex intsead of doing a lookup.
r228886: Create the Section -> Rel Section map when it is first needed.

These patches look pretty nice to me, so hoping it's not too hard to get
them reinstated. =D

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229080 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-13 07:52:39 +00:00
Craig Topper
f3455f13a2 [X86] Add support for parsing and printing the mnemonic aliases for the XOP VPCOM instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229078 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-13 07:42:25 +00:00
Craig Topper
d4f1c60bf5 Fix probable typo in test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229070 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-13 06:07:27 +00:00
Craig Topper
db9343fb40 [X86] Remove int_x86_sse2_psll_dq_bs and int_x86_sse2_psrl_dq_bs intrinsics. The builtins aren't used by clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229069 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-13 06:07:24 +00:00
Rafael Espindola
2fa06b171b Add support for having multiple sections with the same name and comdat.
Using this in combination with -ffunction-sections allows LLVM to output a .o
file with multiple sections named .text. This saves space by avoiding long
unique names of the form .text.<C++ mangled name>.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228980 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 23:29:51 +00:00
David Majnemer
73a92d5136 X86: Don't crash if we can't decode the pshufb mask
Constant pool entries are uniqued by their contents regardless of their
type.  This means that a pshufb can have a shuffle mask which isn't a
simple array of bytes.

The code path which attempts to decode the mask didn't check for
failure, causing PR22559.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228979 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 23:26:26 +00:00
Simon Pilgrim
6911b3bc37 Ensure integer domain on general shuffle stack folding tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228972 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 22:47:45 +00:00
David Blaikie
c3dc5bac73 Remove typedef of a pointer type used in a gep to simplify migration of geps to a typeless-pointer future.
I'd modify my migration tool to account for this, but this is the only
instance of a typedef'd pointer type to a gep I found in the whole test
suite, so it didn't seem worthwhile.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228970 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 22:45:25 +00:00
Rafael Espindola
c3c5d7c2d6 On ELF, put PIC jump tables in a non executable section.
Fixes PR22558.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228939 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 17:46:49 +00:00
Rafael Espindola
8eeedf74d3 Put each jump table in an independent section if the function is too.
This allows the linker to GC both, fixing pr22557.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228937 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 17:16:46 +00:00
Michael Kuperstein
fb107d8bf0 [X86] Call frame optimization - allow stack-relative movs to be folded into a push
Since we track esp precisely, there's no reason not to allow this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228924 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 14:17:35 +00:00
Elena Demikhovsky
f41b8e3e49 AVX-512: Fixed the "test" operation for i1 type
Using KORTESTW for comparison i1 value with zero was wrong since the instruction tests 16 bits.
KORTESTW may be used with KSHIFTL+KSHIFTR that clear the 15 upper bits.
I removed the (X86cmp i1, 0) pattern; we now zero-extend i1 to i8 and then use TESTB.

There are some cases where i1 is in the mask register and the upper bits are already zeroed.
Then KORTESTW is the better solution, but that is a subject for later optimization.
Meanwhile, I'm fixing the correctness issue.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228916 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 08:40:34 +00:00
Michael Kuperstein
fd98d3be55 [X86] A heuristic to estimate the size impact for converting stack-relative parameter movs to pushes
This gives a rough estimate of whether using pushes instead of movs is profitable, in terms of size.
We go over all calls in the MachineFunction and compute:
a) For each callsite that can not use pushes, the penalty of not having a reserved call frame.
b) For each callsite that can use pushes, the gain of actually replacing the movs with pushes (and the potential penalty of having to readjust the stack).

Differential Revision: http://reviews.llvm.org/D7561

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228915 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 08:36:35 +00:00
Ahmed Bougacha
9e9bde9b54 [CodeGen] Don't blindly combine (fp_round (fp_round x)) to (fp_round x).
We used to do this DAG combine, but it's not always correct:
If the first fp_round isn't a value preserving truncation, it might
introduce a tie in the second fp_round, that wouldn't occur in the
single-step fp_round we want to fold to.
In other words, double rounding isn't the same as rounding.
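
A hypothetical IR illustration of the distinction:

  define half @round_twice(double %x) {
    ; not equivalent, in general, to a single 'fptrunc double %x to half':
    ; the intermediate float rounding can create a tie that the direct
    ; double-to-half rounding would not see.
    %f = fptrunc double %x to float
    %h = fptrunc float %f to half
    ret half %h
  }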

Differential Revision: http://reviews.llvm.org/D7571


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228911 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 06:15:29 +00:00
Simon Pilgrim
d606d6bfe1 [X86][SSE] Added dual vector truncation tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228857 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-11 18:14:35 +00:00
Sanjay Patel
cb2ff33a8a fixed to test features, not CPUs
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228836 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-11 15:00:41 +00:00
Sanjay Patel
00fb386b23 fixed to test features, not CPUs
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228835 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-11 15:00:19 +00:00
Sanjay Patel
57fb1850e2 fixed to test features, not CPUs
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228834 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-11 14:58:25 +00:00
David Majnemer
f2138c2df8 X86: @llvm.frameaddress should defer to SelectionDAG for Win CFI
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228754 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 22:00:34 +00:00
David Majnemer
420f72a301 X86: Make @llvm.frameaddress work correctly with Windows unwind codes
Simply loading or storing the frame pointer is not sufficient for
Windows targets.  Instead, create a synthetic frame object that we will
lower later.  References to this synthetic object will be replaced with
the correct reference to the frame address.
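As an editorial aside (not from the commit), the standard Clang/GCC builtin below is what lowers to @llvm.frameaddress and is therefore affected by this change when targeting Win64:

  #include <stdio.h>

  /* Clang lowers this builtin to the @llvm.frameaddress intrinsic. */
  void *current_frame(void) {
    return __builtin_frame_address(0);
  }

  int main(void) {
    printf("frame address: %p\n", current_frame());
    return 0;
  }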

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228748 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 21:22:05 +00:00
David Majnemer
3163865f01 X86: Emit Win64 SaveXMM opcodes at the right offset in the right order
Walk the instructions marked FrameSetup and consider any stores of XMM
registers to the stack as needing a SaveXMM opcode.

This fixes PR22521.

Differential Revision: http://reviews.llvm.org/D7527

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228724 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 19:01:47 +00:00
Paul Robinson
a932cb6d09 Explicitly initialize a flag in a default constructor.
Works around a Visual C++ issue.

Patch by Douglas Yung!



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228699 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 15:30:02 +00:00
Simon Pilgrim
c99d58d6c1 [X86][AVX2] Missing AVX2 memory folding instructions
Added most of the missing vector folding patterns for AVX2 (as well as fixing the vpermpd and vpermq patterns)

Differential Revision: http://reviews.llvm.org/D7492

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228688 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 13:22:57 +00:00
Simon Pilgrim
8bcc093da5 [X86][XOP] Added XOP memory folding patterns + tests
This patch adds the complete AMD Bulldozer XOP instruction set to the memory folding pattern tables for stack folding, etc.

Note: Many of the XOP instructions have multiple table entries as they can fold loads from different sources.

Differential Revision: http://reviews.llvm.org/D7484

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228685 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 12:57:17 +00:00
Andrea Di Biagio
bd1729e5d4 [X86][FastIsel] Avoid introducing legacy SSE instructions if the target has AVX.
This patch teaches X86FastISel how to select AVX instructions for scalar
float/double convert operations.

Before this patch, X86FastISel always selected legacy SSE instructions
for FPExt (from float to double) and FPTrunc (from double to float).

For example:
\code
  define double @foo(float %f) {
    %conv = fpext float %f to double
    ret double %conv
  }
\end code

Before (with -mattr=+avx -fast-isel) X86FastIsel selected a CVTSS2SDrr which is
legacy SSE:
  cvtss2sd %xmm0, %xmm0

With this patch, X86FastIsel selects a VCVTSS2SDrr instead:
  vcvtss2sd %xmm0, %xmm0, %xmm0

Added test fast-isel-fptrunc-fpext.ll to check both the register-register and
the register-memory float/double conversion variants.

Differential Revision: http://reviews.llvm.org/D7438


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228682 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 12:04:41 +00:00
Nick Lewycky
3c5236ae68 Remove non-test files that appear to have been accidentally committed in r228641.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228657 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 02:39:17 +00:00
Chandler Carruth
1c7c2e8650 [x86] Fix PR22524: the DAG combiner was incorrectly handling illegal
nodes when folding bitcasts of constants.

We can't fold things and then check after-the-fact whether it was legal.
Once we have formed the DAG node, arbitrary other nodes may have been
collapsed to it. There is no easy way to go back. Instead, we need to
test for the specific folding cases we're interested in and ensure those
are legal first.

This could in theory make this less powerful for bitcasting from an
integer to some vector type, but AFAICT, that can't actually happen in
the SDAG so it's fine. Now, we *only* whitelist specific int->fp and
fp->int bitcasts for post-legalization folding. I've added the test case
from the PR.

(Also as a note, this does not appear to be in 3.6, no backport needed)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228656 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 02:25:56 +00:00
David Majnemer
69114ee016 X86: Emit an ABI compliant prologue and epilogue for Win64
Win64 has specific constraints on what valid prologues and epilogues look
like.  This constraint is born from the flexibility and descriptiveness
of Win64's unwind opcodes.

Prologues previously emitted by LLVM could not be represented by the
unwind opcodes, preventing operations powered by stack unwinding from
working successfully.

Differential Revision: http://reviews.llvm.org/D7520

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228641 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-10 00:57:42 +00:00
Sanjay Patel
93411cf4f8 fixed to test features, not CPUs
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228581 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-09 17:17:09 +00:00
Sanjay Patel
4616d7dd2f fix test attributes; this is an SSE2 test, not a Nehalem test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228546 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 21:14:27 +00:00
Sanjay Patel
751e3f1f80 fix test attributes; this is an x86-64 test, not a Nehalem test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228545 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 21:10:40 +00:00
Sanjay Patel
714f3d3a0f fix test attributes; these are SSE2 tests, not Nehalem tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228544 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 21:05:03 +00:00
Sanjay Patel
e755d452e0 fix test attributes; these are SSE2 tests, not Nehalem tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228541 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 20:50:58 +00:00
Sanjay Patel
78547012ac fix test attributes; these are x86-64 tests, not Nehalem tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228536 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 20:05:53 +00:00
Sanjay Patel
8d32999929 fix test attributes; these are MMX tests, not Nehalem tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228535 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 20:01:12 +00:00
Sanjay Patel
7596cf2b66 fix test attributes; these are SSE2 tests, not Nehalem tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228534 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 19:50:55 +00:00
Sanjay Patel
c3803c8bc2 generalize test; nothing Nehalem-specific here
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228532 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 19:38:25 +00:00
Simon Pilgrim
c92ffedc5c [X86][AVX2] AVX2 broadcast + permute memory folding tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228528 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 18:33:13 +00:00
Simon Pilgrim
437265ee96 [X86][AVX2] AVX2 integer stack folding tests.
This adds tests for the remaining AVX2 instructions that currently support memory folding.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228513 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 23:28:16 +00:00
Simon Pilgrim
2134ae7f38 [X86][AVX] Added missing stack folding support + test for vptest ymm instruction
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228509 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 21:44:06 +00:00
Simon Pilgrim
710e70bb70 [X86][SSE] Added missing stack folding tests for (v)mpsadbw instruction
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228506 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 21:20:11 +00:00
Simon Pilgrim
3281412d2a [X86] Force fp stack folding tests to keep to specific domain.
General boolean instructions (AND, ANDN, OR, XOR) need to use a specific domain instruction (and not just the default).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228495 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 16:14:55 +00:00
Simon Pilgrim
bf4a435d0a [X86][AVX2] More AVX2 integer stack folding tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228494 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 16:07:27 +00:00
David Majnemer
fdac306a12 MC: Emit COFF section flags in the "proper" order
COFF section flags are not idempotent:
  'rd' will make a read-write section because 'd' implies write
  'dr' will make a read-only section because 'r' disables write

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228490 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 08:26:40 +00:00
Simon Pilgrim
148482dd6b [X86][AVX2] Begun adding AVX2 integer stack folding tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228462 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 23:12:15 +00:00
Reid Kleckner
6dc42dd2da Don't dllexport declarations
Fixes PR22488
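Roughly, the change means a dllexport-annotated declaration with no definition in the module should no longer produce an export entry. An illustrative Windows-targeted C snippet (mine, not from the commit):

  /* Declaration only: nothing is defined here, so there is nothing to
   * export from this object file. */
  __declspec(dllexport) void defined_elsewhere(void);

  /* Definition: this one is exported as usual. */
  __declspec(dllexport) int answer(void) {
    return 42;
  }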

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228411 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 17:59:49 +00:00
Matthias Braun
b8b2dff046 X86: Test cleanup
Use FileCheck, make it more consistent and do not rely on unoptimized
or(cmp,cmp) getting combined for max to be matched.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228361 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 23:52:12 +00:00
Ahmed Bougacha
ec35069525 [CodeGen] Add hook/combine to form vector extloads, enabled on X86.
The combine that forms extloads used to be disabled on vector types,
because "None of the supported targets knows how to perform load and
sign extend on vectors in one instruction."

That's not entirely true, since at least SSE4.1 X86 knows how to do
those sextloads/zextloads (with PMOVS/ZX).
But there are several aspects to getting this right.
First, vector extloads are controlled by a profitability callback.
For instance, on ARM, several instructions have folded extload forms,
so it's not always beneficial to create an extload node (and trying to
match extloads is a whole 'nother can of worms).

The interesting optimization enables folding of s/zextloads to illegal
(splittable) vector types, expanding them into smaller legal extloads.

It's not ideal (it introduces some legalization-like behavior in the
combine) but it's better than the obvious alternative: form illegal
extloads, and later try to split them up.  If you do that, you might
generate extloads that can't be split up, but have a valid ext+load
expansion.  At vector-op legalization time, it's too late to generate
this kind of code, so you end up forced to scalarize. It's better to
just avoid creating egregiously illegal nodes.

This optimization is enabled unconditionally on X86.

Note that the splitting combine is happy with "custom" extloads. As
is, this bypasses the actual custom lowering, and just unrolls the
extload. But from what I've seen, this is still much better than the
current custom lowering, which does some kind of unrolling at the end
anyway (see for instance load_sext_4i8_to_4i64 on SSE2, and the added
FIXME).

Also note that the existing combine that forms extloads is now also
enabled on legal vectors.  This doesn't have a big effect on X86
(because sext+load is usually combined to sext_inreg+aextload).
On ARM it fires on some rare occasions; that's for a separate commit.

Differential Revision: http://reviews.llvm.org/D6904
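As an illustration (mine, not from the patch), a C loop whose vectorized form contains exactly this kind of vector zero-extending load, e.g. a <4 x i8> -> <4 x i32> zextload that SSE4.1 can perform with pmovzxbd:

  /* Each u8 element is zero-extended to u32; when the loop vectorizer
   * runs (e.g. clang -O2 with SSE4.1 enabled), the scalar loads become
   * vector zero-extending loads. */
  void widen_u8_to_u32(const unsigned char *in, unsigned int *out, int n) {
    for (int i = 0; i < n; ++i)
      out[i] = in[i];
  }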


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228325 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:31:02 +00:00
Andrew Trick
c4ae8cbc5d X86 ABI fix for return values > 24 bytes.
The return value's address must be returned in %rax.
i.e. the callee needs to copy the sret argument (%rdi)
into the return value (%rax).

This probably won't manifest as a bug when the caller is LLVM-compiled
code. But it is an ABI guarantee and tools expect it.
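For illustration only (not from the commit): a C function returning an aggregate too large for registers, where the caller passes a hidden sret pointer in %rdi and, per this fix, the callee must also return that pointer in %rax:

  /* 32-byte aggregate: returned through a hidden sret pointer. */
  struct big { long a, b, c, d; };

  struct big make_big(long x) {
    struct big r = { x, x + 1, x + 2, x + 3 };
    return r;  /* the callee copies the incoming sret pointer into %rax */
  }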

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228321 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:09:05 +00:00
Bruno Cardoso Lopes
04715c9915 [X86][MMX] Handle i32->mmx conversion using movd
Implement a BITCAST dag combine to transform i32->mmx conversion patterns
into a X86 specific node (MMX_MOVW2D) and guarantee that moves between
i32 and x86mmx are better handled, i.e., don't use store-load to do the
conversion.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228293 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 13:23:07 +00:00
Bruno Cardoso Lopes
d4299719af [X86][MMX] Add several bitcast tests
Avoid regression in previously supported MMX code by adding different
combinations of tests which exercise MMX bitcasts. Small improvements
to these patterns should come next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228292 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 13:22:57 +00:00
Rafael Espindola
e247dd2839 Don't try to make sections in comdats SHF_MERGE.
Parts of llvm were not expecting it and we wouldn't print
the entity size of the section.

Given what comdats are used for, having SHF_MERGE sections would be
just a small improvement, so just disable it for now.

Fixes pr22463.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228196 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 21:27:24 +00:00
Michael Kuperstein
8f260e3084 Fixes a bug in vector load legalization that confused bits and bytes.
Differential Revision: http://reviews.llvm.org/D7400

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228168 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 18:54:01 +00:00
Chandler Carruth
b0589710cc [x86] Give movss and movsd execution domains in the x86 backend.
This associates movss and movsd with the packed single and packed double
execution domains (resp.). While this is largely cosmetic, as we now
don't have weird ping-pong-ing between single and double precision, it
is also useful because it avoids the domain fixing algorithm from seeing
domain breaks that don't actually exist. It will also be much more
important if we have an execution domain default other than packed
single, as that would cause us to mix movss and movsd with integer
vector code on a regular basis, a very bad mixture.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228135 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 10:58:53 +00:00
Chandler Carruth
886bbe2d76 [x86] Remove a low-value test that was just checking how we cleared
a register. We have lots of tests covering this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228133 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 10:47:34 +00:00
Chandler Carruth
424a198c30 [x86] Mechanically update a bunch of tests' check lines using the latest
version of the script.

Changes include:
- Using the VEX prefix
- Skipping more detail when we have useful shuffle comments to match
- Matching more shuffle comments that have been added to the printer
  (yay!)
- Matching the destination registers of some AVX instructions
- Stripping trailing whitespace that crept in
- Fixing indentation issues

Nothing interesting going on here. I'm just trying really hard to ensure
these changes don't show up in the diffs with actual changes to the
backend.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228132 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 10:46:53 +00:00
Chandler Carruth
d16b9cd3d4 [x86] Include the destination register in the check-lines for AVX
instructions.

No actual change here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228127 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 09:18:27 +00:00
Chandler Carruth
82b686e611 [x86] Add some tests I missed in the prior commit to cover blends with
zero for v8i16 as well.

These exhibit the same domain badness, but also exhibit other weaknesses
in our blend lowering. More fixes to come.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228126 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 09:15:46 +00:00
Chandler Carruth
da681cc578 [x86] Start to introduce bit-masking based blend lowering.
This is the simplest form of bit-math based blending which only fires
when we are blending with zero and is relatively profitable. I've only
enabled this path on very specific lowering strategies. I'm planning to
widen its applicability in subsequent patches, but so far you'll notice
that even though we get fewer shufps instructions, we *still* do the bit
math in the FP execution port. I'm looking into why this is still
happening.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228124 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 09:06:05 +00:00
Chandler Carruth
6b1eacb0b5 [x86] Add tests for blends-with-zero on 4-element vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228122 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 09:05:58 +00:00
Chandler Carruth
786f55c1fb [x86] Refresh the checks of a number of tests using
update_llc_test_checks.py.

The exact format of the checks has changed over time. This includes
different indenting rules, new shuffle comments that have been added,
and more operand hiding behind regular expressions.

No functional change to the tests are expected here, but this will make
subsequent patches have a clean diff as they change shuffle lowering.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228097 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 00:58:42 +00:00
Chandler Carruth
18ee73e456 [x86] Switch to using the long '--check-prefix' form which the
update_llc_test_checks.py script uses, and refresh the checks in this
test.

No functionality changed here, just bringing this test up to work with
automated updates using the python script.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228096 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 00:58:40 +00:00
Chandler Carruth
877ac0a034 [x86] Port this test to use utils/update_llc_test_checks.py.
This will make it easy to update as I change some parts of the X86
backend, makes it more clear what instruction differences are
introduced, and I find it makes it a bit easier to read as well.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228095 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 00:58:37 +00:00
Sanjay Patel
f1ac92a3b9 improved CHECK
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228086 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 00:24:06 +00:00
Simon Pilgrim
3d04e48cb6 [X86][SSE] psrl(w/d/q) and psll(w/d/q) bit shifts for SSE2
Patch to match cases where shuffle masks can be reduced to bit shifts. Similar to byte shift shuffle matching from D5699.

Differential Revision: http://reviews.llvm.org/D6649

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228047 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:58:29 +00:00
Chandler Carruth
2e3524ec17 [x86] Add two truly horrific test cases for the new vector shuffle
lowering. I'm prepping patches to improve these, and this will let the
delta of those patches show the improvement. =]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228044 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:56:28 +00:00
Chandler Carruth
d5a61c2958 [x86] Update the indent and layout of some tests in this file. NFC
This is just to remove noise from using the update_llc_test_checks
script.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228043 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:56:24 +00:00
Chandler Carruth
dc5e49a1c4 [x86] Tweak my update script to use test case function names starting
with 'stress' to indicate that the specific output isn't interesting and
relax them to only check the last instruction (a ret).

I've updated the one test case that really uses this to name the one
'stress_test' which was actually producing output we can directly check.
With this, the script doesn't introduce noise when run over the v16 test
file.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228033 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:26:45 +00:00
Simon Pilgrim
646722d55f [X86][SSE] Added general integer shuffle matching for MOVQ instruction
This patch adds general shuffle pattern matching for the MOVQ zero-extend instruction (copy lower 64bits, zero upper) for all 128-bit integer vectors; it is added as a fallback test in lowerVectorShuffleAsZeroOrAnyExtend.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228022 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 20:09:18 +00:00
Simon Pilgrim
71a4e9522e [X86][AVX2] Enabled shuffle matching for the AVX2 zero extension (128bit -> 256bit) vpmovzx* instructions.
Differential Revision: http://reviews.llvm.org/D7251

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228014 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 19:34:09 +00:00
Rafael Espindola
f4e2998eda Fix typo in test/CodeGen/X86/sibcall.ll (pr22331).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228011 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 19:20:26 +00:00
Sanjay Patel
9b4cc76745 Merge consecutive 16-byte loads into one 32-byte load (PR22329)
This patch detects consecutive vector loads using the existing 
EltsFromConsecutiveLoads() logic. This fixes:
http://llvm.org/bugs/show_bug.cgi?id=22329

This patch effectively reverts the tablegen additions of D6492 / 
http://reviews.llvm.org/rL224344 ...which in hindsight were a horrible hack.

The test cases that were added with that patch are simply modified to load
from varying offsets of a base pointer. These loads did not match the existing
tablegen patterns.

A happy side effect of doing this optimization earlier is that we can now fold
the load into a math op where possible; this is shown in some of the updated
checks in the test file.

Differential Revision: http://reviews.llvm.org/D7303



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228006 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 18:54:00 +00:00
Sanjay Patel
3cf9267d4e Fix program crashes due to alignment exceptions generated for SSE memop instructions (PR22371).
r224330 introduced a bug by misinterpreting the "FeatureVectorUAMem" bit.
The commit log says that change did not affect anything, but that's not correct.
That change allowed SSE instructions to have unaligned mem operands folded into
math ops, and that's not allowed in the default specification for any SSE variant. 

The bug is exposed when compiling for an AVX-capable CPU that had this feature
flag but without enabling AVX codegen. Another mistake in r224330 was not adding
the feature flag to all AVX CPUs; the AMD chips were excluded.

This is part of the fix for PR22371 ( http://llvm.org/bugs/show_bug.cgi?id=22371 ).

This feature bit is SSE-specific, so I've renamed it to "FeatureSSEUnalignedMem".
Changed the existing test case for the feature bit to reflect the new name and
renamed the test file itself to better reflect the feature.
Added runs to fold-vex.ll to check for the failing codegen.

Note that the feature bit is not set by default on any CPU because it may require a
configuration register setting to enable the enhanced unaligned behavior.
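An illustrative C snippet (mine, not the commit's): with plain SSE the unaligned load below must stay a separate movups, because folding it into the memory operand of addps would require 16-byte alignment and can fault on unaligned data:

  #include <xmmintrin.h>

  /* p may be unaligned: keep the load as movups and do not fold it into
   * addps unless the target guarantees unaligned memory operands. */
  __m128 add_from_mem(__m128 a, const float *p) {
    __m128 b = _mm_loadu_ps(p);
    return _mm_add_ps(a, b);
  }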



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227983 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 17:13:04 +00:00
Sanjay Patel
ec60318bf5 Improve test to actually check for a folded load.
This test was checking for lack of a "movaps" (an aligned load)
rather than a "movups" (an unaligned load). It also included
a store which complicated the checking.

Add specific CPU runs to prevent subtarget feature flag overrides
from inhibiting this optimization.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227972 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 15:37:18 +00:00
Bruno Cardoso Lopes
7df357f552 [X86][MMX] Improve transfer from mmx to i32
Improve EXTRACT_VECTOR_ELT DAG combine to catch conversion patterns
between x86mmx and i32 with more layers of indirection.

Before:
  movq2dq %mm0, %xmm0
  movd %xmm0, %eax
After:
  movd %mm0, %eax

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227969 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 14:46:49 +00:00
Alex Rosenberg
cba5c599e8 Revert part of r227437 as it was unnecessary. Thanks to echristo for
pointing this out.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227897 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 23:58:54 +00:00
Bruno Cardoso Lopes
d821e0a5cc [X86][MMX] Add tests for MMX extract element
LLVM ToT produces poor MMX code compared to 3.5. However, part of the previous
functionality can be achieved by using -x86-experimental-vector-widening-legalization.
Add tests to be sure we don't regress again.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227869 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 22:00:48 +00:00
Bruno Cardoso Lopes
12c944ba10 [X86][MMX] Cleanup shuffle, bitcast and insert element tests
- Merge MMX arg passing test files
- Merge MMX bitcast, insert elt and shuffle tests

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227867 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 21:56:11 +00:00
Sanjay Patel
f766946abd fix typo
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227815 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 17:47:30 +00:00
Michael Kuperstein
acd5f13c88 [X86] Convert esp-relative movs of function arguments to pushes, step 2
This moves the transformation introduced in r223757 into a separate MI pass.
This allows it to cover many more cases (not only cases where there must be a 
reserved call frame), and perform rudimentary call folding. It still doesn't 
have a heuristic, so it is enabled only for optsize/minsize, with stack 
alignment <= 8, where it ought to be a fairly clear win.

(Re-commit of r227728)

Differential Revision: http://reviews.llvm.org/D6789


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227752 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 16:56:04 +00:00
Michael Kuperstein
5b61b8f53c Revert r227728 due to bad line endings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227746 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 16:15:07 +00:00
Michael Kuperstein
59d9986259 [X86] Convert esp-relative movs of function arguments to pushes, step 2
This moves the transformation introduced in r223757 into a separate MI pass.
This allows it to cover many more cases (not only cases where there must be a 
reserved call frame), and perform rudimentary call folding. It still doesn't 
have a heuristic, so it is enabled only for optsize/minsize, with stack 
alignment <= 8, where it ought to be a fairly clear win.

Differential Revision: http://reviews.llvm.org/D6789

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227728 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 11:44:44 +00:00
Elena Demikhovsky
516052acd3 AVX2: Added 2 more tests for gather intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227718 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 08:52:15 +00:00
Simon Pilgrim
982005c23e [X86][SSE] Shuffle mask decode support for zero extend, scalar float/double moves and integer load instructions
This patch adds shuffle mask decodes for integer zero extends (pmovzx** and movq xmm,xmm) and scalar float/double loads/moves (movss/movsd).

Also adds shuffle mask decodes for integer loads (movd/movq).

Differential Revision: http://reviews.llvm.org/D7228

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227688 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-31 14:09:36 +00:00
Ahmed Bougacha
ee93f014cc [X86] Cleanup tabs in test vector-zext.ll. NFC.
Some tests have tabs, some don't.
In vector-[sz]ext.ll, space wins (well duh!).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227615 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-30 21:41:28 +00:00
Reid Kleckner
e359929517 Win64: Put a REX_W prefix on all TAILJMP* instructions
MSDN's x64 software conventions page says that this is one of the fixed
list of legal epilogues:
https://msdn.microsoft.com/en-us/library/tawsa7cb.aspx

Presumably this is how the unwinder distinguishes epilogue jumps from
in-function control flow.

Also normalize the way we place "## TAILCALL" comments on such jumps.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227611 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-30 21:03:31 +00:00
Reid Kleckner
c9fbc97e95 x86: Fix large model calls to __chkstk for dynamic allocas
In the large code model, we now put __chkstk in %r11 before calling it.

Refactor the code so that we only do this once. Simplify things by using
__chkstk_ms instead of __chkstk on cygming. We already use that symbol
in the prolog emission, and it simplifies our logic.

Second half of PR18582.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227519 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-29 23:58:04 +00:00
Reid Kleckner
cb867e4ac4 Update comments to use unreachable instead of llvm.trap, as implemented now
win64: Call __chkstk through a register with the large code model

Fixes half of PR18582. True dynamic allocas will still have a
CALL64pcrel32 which will fail.

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D7267

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227503 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-29 22:33:00 +00:00
Robert Lougher
1031549bec [X86] Use single add/sub for large stack offsets
For large stack offsets the compiler generates multiple immediate mode
sub/add instructions in the prologue/epilogue.  This patch makes the
compiler place the final amount to be added/subtracted into a register,
which is then added/subtracted with a single operation.

Differential Revision: http://reviews.llvm.org/D7226


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227458 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-29 16:18:29 +00:00
Alex Rosenberg
1c31f0df49 Make the test actually test what it's supposed to test. Add a test for the from memory variant of vcvtph2ps for 256-bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227446 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-29 15:19:54 +00:00
Alex Rosenberg
27f7a0622c Cleanup a few tests on sse4a machines and FileCheckize along the way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227437 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-29 13:31:32 +00:00
Rafael Espindola
6cc1119408 Don't create multiple mergeable sections with -fdata-sections.
ELF has support for sections that can be split into fixed-size or
null-terminated entities.

Since these sections can be split by the linker, it is not necessary
to split them in codegen.

This reduces the combined .o size in a llvm+clang build from
202,394,570 to 173,819,098 bytes.

The time for linking clang with gold (on a VM, on a laptop) goes
from 2.250089985 to 1.383001792 seconds.

The flip side is the size of rodata in clang goes from 10,926,785
to 10,929,345 bytes.

The increase seems to be because of http://sourceware.org/bugzilla/show_bug.cgi?id=17902.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227431 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-29 12:43:28 +00:00
Reid Kleckner
f77571aeac Add a Windows EH preparation pass that zaps resumes
If the personality is not a recognized MSVC personality function, this
pass delegates to the dwarf EH preparation pass. This chaining supports
people on *-windows-itanium or *-windows-gnu targets.

Currently this recognizes some personalities used by MSVC and turns
resume instructions into traps to avoid link errors.  Even if cleanups
are not used in the source program, LLVM requires the frontend to emit a
code path that resumes unwinding after an exception.  Clang does this,
and we get unreachable resume instructions. PR20300 covers cleaning up
these unreachable calls to resume.

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D7216

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227405 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-29 00:41:44 +00:00
Michael Kuperstein
0906c8fc1c [X86] Reduce some 32-bit imuls into lea + shl
Reduce integer multiplication by a constant of the form k*2^c, where k is in {3,5,9} into a lea + shl. Previously it was only done for imulq on 64-bit platforms, but it makes sense for imull and 32-bit as well.

Differential Revision: http://reviews.llvm.org/D7196
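Illustration (not from the patch): a multiplication by k*2^c with k in {3,5,9}, which can be lowered as one lea plus one shl instead of an imul:

  /* x * 24 == (x * 3) << 3; on x86-64 this may lower to something like
   *   leal (%rdi,%rdi,2), %eax
   *   shll $3, %eax
   * instead of an imull. */
  unsigned mul24(unsigned x) {
    return x * 24u;
  }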

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227308 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-28 14:08:22 +00:00
Michael Kuperstein
e5b95695ea [x32] Enable sibcall optimization on x32.
This includes two things:
1) Fix TCRETURNdi and TCRETURN64di patterns to check the right thing (LP64 as opposed to target bitness).
2) Allow LEA64_32 in MatchingStackOffset.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227307 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-28 13:38:48 +00:00
Elena Demikhovsky
b9d3801cd2 AVX-512: Added FMA intrinsics with rounding mode
By Asaf Badouh and Elena Demikhovsky

Added special nodes for rounding: FMADD_RND, FMSUB_RND..
It will prevent merge between nodes with rounding and other standard nodes.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227303 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-28 10:21:27 +00:00
Quentin Colombet
24508d33fb Revert r227242 - Merge vector stores into wider vector stores (PR21711).
This commit creates an infinite loop in DAG combine in the LLVM test-suite
for aarch64 with mcpu=cyclone (just having neon may be enough to expose this).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227272 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-27 23:58:01 +00:00
Alexey Samsonov
00b7a940e7 Revert "[x86] Combine x86mmx/i64 to v2i64 conversion to use scalar_to_vector"
This reverts commits r226953 and r226974.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227248 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-27 21:34:11 +00:00
Sanjay Patel
c94f9d3d2f Merge vector stores into wider vector stores (PR21711)
This patch resolves part of PR21711 ( http://llvm.org/bugs/show_bug.cgi?id=21711 ).

The 'f3' test case in that report presents a situation where we have two 128-bit
stores extracted from a 256-bit source vector. 

Instead of producing this:

vmovaps %xmm0, (%rdi)
vextractf128    $1, %ymm0, 16(%rdi)

This patch merges the 128-bit stores into a single 256-bit store:

vmovups %ymm0, (%rdi)

Differential Revision: http://reviews.llvm.org/D7208



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227242 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-27 20:50:27 +00:00
Simon Pilgrim
44513da617 [X86][SSE] Float comparisons can sometimes be safely commuted
For ordered, unordered, equal and not-equal tests, packed float and double comparison instructions can be safely commuted without affecting the results. This patch checks the comparison mode of the (v)cmpps + (v)cmppd instructions and commutes the result if it can.

Differential Revision: http://reviews.llvm.org/D7178
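For illustration (not from the patch): EQ, NE, ORD and UNORD packed comparisons are symmetric in their operands, so the two functions below compute the same mask and the backend may swap the sources, e.g. to enable load folding; ordered less-than/greater-than predicates are not safe to swap this way:

  #include <xmmintrin.h>

  __m128 eq_mask(__m128 a, __m128 b) {
    return _mm_cmpeq_ps(a, b);   /* cmpeqps a, b */
  }

  __m128 eq_mask_swapped(__m128 a, __m128 b) {
    return _mm_cmpeq_ps(b, a);   /* same result: the EQ predicate is symmetric */
  }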

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227145 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 22:29:24 +00:00
Simon Pilgrim
3ba85ab23a [X86][PCLMUL] Enable commutation for PCLMUL instructions
Patch to allow (v)pclmulqdq to be commuted - swaps the src registers and inverts the immediate (low/high) src mask.

Differential Revision: http://reviews.llvm.org/D7180

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227141 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 22:00:18 +00:00
Simon Pilgrim
38c35f3e2c Line endings fix. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227138 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 21:28:32 +00:00
Simon Pilgrim
1d34ec14e9 Line endings fix. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227136 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 21:15:42 +00:00
Bruno Cardoso Lopes
0f3d4650f7 [x86][MMX] Rename and cleanup tests: arith, intrinsics and shuffle
- Rename mmx-builtins to mmx-intrinsics to match other intrinsic test naming.
- Remove tests that duplicate functionality from mmx-intrinsics.ll.
- Move arith related tests to mmx-arith.ll.
- MMX related shuffle goes to vector-shuffle-mmx.ll.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227130 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 20:06:51 +00:00
Alex Rosenberg
8da9a6686a Use a different encoding for debugtrap on PS4.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227116 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 19:09:27 +00:00
Sanjay Patel
956d6f0cf5 Model sqrtsd as a binary operation with one source operand tied to the destination (PR14221)
This patch fixes the following miscompile:

define void @sqrtsd(<2 x double> %a) nounwind uwtable ssp {
  %0 = tail call <2 x double> @llvm.x86.sse2.sqrt.sd(<2 x double> %a) nounwind 
  %a0 = extractelement <2 x double> %0, i32 0
  %conv = fptrunc double %a0 to float
  %a1 = extractelement <2 x double> %0, i32 1
  %conv3 = fptrunc double %a1 to float
  tail call void @callee2(float %conv, float %conv3) nounwind
  ret void
}

Current codegen:

sqrtsd	%xmm0, %xmm1        ## high element of %xmm1 is undef here
xorps	%xmm0, %xmm0
cvtsd2ss	%xmm1, %xmm0
shufpd	$1, %xmm1, %xmm1
cvtsd2ss	%xmm1, %xmm1 ## operating on undef value
jmp	_callee

This is a continuation of http://llvm.org/viewvc/llvm-project?view=revision&revision=224624 ( http://reviews.llvm.org/D6330 ) 
which was itself a continuation of r167064 ( http://llvm.org/viewvc/llvm-project?view=revision&revision=167064 ).

All of these patches are partial fixes for PR14221 ( http://llvm.org/bugs/show_bug.cgi?id=14221 ); 
this should be the final patch needed to resolve that bug.

Differential Revision: http://reviews.llvm.org/D6885



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227111 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 18:42:16 +00:00
Sanjay Patel
dbc6dda771 fix line-endings; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227095 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 17:21:36 +00:00
Craig Topper
1656cc8ddb [X86] Change comparision immediate type to i8 in test cases for AVX512 floating point comparisons. The type was already changed in the definitions and was being auto upgraded to the new type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227064 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-25 23:26:12 +00:00
Craig Topper
fd176682b9 [X86] Use i8 immediate for comparison type on AVX512 packed integer instructions. This matches floating point equivalents. Includes autoupgrade support to convert old code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227063 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-25 23:26:02 +00:00
Elena Demikhovsky
717d41d8c3 AVX-512: Changes in operations on masks registers for KNL and SKX
- Added KSHIFTB/D/Q for skx
- Added KORTESTB/D/Q for skx
- Fixed store operation for v8i1 type for KNL
- Store size of v8i1, v4i1 and v2i1 is changed to 8 bits



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227043 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-25 12:47:15 +00:00
Andrea Di Biagio
b981dc4a21 [DAG] Fix wrong canonicalization performed on shuffle nodes.
This fixes a regression introduced by r226816.
When replacing a splat shuffle node with a constant build_vector,
make sure that the new build_vector has a valid number of elements.

Thanks to Patrik Hagglund for reporting this problem and providing a
small reproducer.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227002 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-24 11:54:29 +00:00
Reid Kleckner
339591e0a9 Fix assertion when C++ EH filters are present in functions using SEH
Should fix PR22305.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226969 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-23 23:51:25 +00:00
Bruno Cardoso Lopes
807360ab08 [x86] Combine x86mmx/i64 to v2i64 conversion to use scalar_to_vector
Handle the poor codegen for i64/x86mmx->v2i64 (%mm -> %xmm) moves. Instead of
using stack store/load pair to do the job, use scalar_to_vector directly, which
in the MMX case can use movq2dq. This was the current behavior prior to
improvements for vector legalization of extloads in r213897.

This commit fixes the regression and as a side-effect also remove some
unnecessary shuffles.

In the new attached testcase, we go from:

pshufw  $-18, (%rdi), %mm0
movq    %mm0, -8(%rsp)
movq    -8(%rsp), %xmm0
pshufd  $-44, %xmm0, %xmm0
movd    %xmm0, %eax
...

To:

pshufw  $-18, (%rdi), %mm0
movq2dq %mm0, %xmm0
movd    %xmm0, %eax
...

Differential Revision: http://reviews.llvm.org/D7126
rdar://problem/19413324

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226953 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-23 22:44:16 +00:00
Reid Kleckner
26ba4c13a7 Classify functions by EH personality type rather than using the triple
This mostly reverts commit r222062 and replaces it with a new enum. At
some point this enum will grow at least for other MSVC EH personalities.

Also beefs up the way we were sniffing the personality function.
Previously we would emit the Itanium LSDA despite using
__C_specific_handler.

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D6987

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226920 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-23 18:49:01 +00:00
Craig Topper
d05a6aa4e6 [x86] Change u8imm operands to always print as unsigned. This makes shuffle masks and the like make way more sense.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226902 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-23 08:00:59 +00:00
Simon Pilgrim
316b43f7df [X86][AVX] Added (V)MOVDDUP / (V)MOVSLDUP / (V)MOVSHDUP memory folding + tests.
Minor tweak now that D7042 is complete, we can enable stack folding for (V)MOVDDUP and do proper testing.

Added missing AVX ymm folding patterns and fixed alignment for AVX VMOVSLDUP / VMOVSHDUP.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226873 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 22:39:59 +00:00
Simon Pilgrim
6377361399 Line endings fixes. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226872 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 22:27:37 +00:00
Simon Pilgrim
c7d6e9b0f9 [X86][SSE] Simplified PSUBUS tests
Removed loops from PSUBUS tests - ensures folding is tested. Also renamed SSE2 tests to SSSE3 to match the cpu.

This is a follow up commit agreed in http://reviews.llvm.org/D7094



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226871 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 22:19:58 +00:00
Ramkumar Ramachandra
230796b278 Intrinsics: introduce llvm_any_ty aka ValueType Any
Specifically, gc.result benefits from this greatly. Instead of:

gc.result.int.*
gc.result.float.*
gc.result.ptr.*
...

We now have a gc.result.* that can specialize to literally any type.

Differential Revision: http://reviews.llvm.org/D7020

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226857 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 20:14:38 +00:00
Sanjay Patel
05d5e213c4 merge consecutive stores of extracted vector elements (PR21711)
This is a 2nd try at the same optimization as http://reviews.llvm.org/D6698. 
That patch was checked in at r224611, but reverted at r225031 because it
caused a failure outside of the regression tests.

The cause of the crash was not recognizing consecutive stores that have mixed
source values (loads and vector element extracts), so this patch adds a check
to bail out if any store value is not coming from a vector element extract.

This patch also refactors the shared logic of the constant source and vector
extracted elements source cases into a helper function.

Differential Revision: http://reviews.llvm.org/D6850
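As an editorial illustration (not from the patch), C using the GNU/Clang vector extension to show the pattern: four scalar stores of extracted vector elements that this combine can merge into one 16-byte vector store:

  typedef float v4f __attribute__((vector_size(16)));

  /* Four stores of extracted elements; with the combine they can be
   * merged into a single vector store (e.g. one movups). */
  void store_elements(v4f v, float *p) {
    p[0] = v[0];
    p[1] = v[1];
    p[2] = v[2];
    p[3] = v[3];
  }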
 


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226845 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 18:21:26 +00:00
Michael Kuperstein
a52ddfa930 [DAGCombine] Produce better code for constant splats
This solves PR22276.
Splats of constants would sometimes produce redundant shuffles, sometimes ridiculously so (see the PR for details). Fold these shuffles into BUILD_VECTORs early on instead.

Differential Revision: http://reviews.llvm.org/D7093

Fixed recommit of r226811.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226816 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 13:07:28 +00:00
Michael Kuperstein
8fc1c3a619 Revert r226811, MSVC accepts code sane compilers don't.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226814 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 12:48:07 +00:00
Michael Kuperstein
0a979a09ae [DAGCombine] Produce better code for constant splats
This solves PR22276.
Splats of constants would sometimes produce redundant shuffles, sometimes ridiculously so (see the PR for details). Fold these shuffles into BUILD_VECTORs early on instead.

Differential Revision: http://reviews.llvm.org/D7093

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226811 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 12:37:23 +00:00
Elena Demikhovsky
2785766bc8 Fixed a bug in type legalizer for masked load/store intrinsics.
The problem occurs when after vectorization we have type
<2 x i32>. This type is promoted to <2 x i64> and then requires
additional efforts for expanding loads and truncating stores.
I added EXPAND / TRUNCATE attributes to the masked load/store
SDNodes. The code now contains additional shuffles.
I've prepared changes in the cost estimation for masked memory
operations, it will be submitted separately.
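For context, an illustration of mine (not from the commit): masked load/store intrinsics typically come out of the loop vectorizer for conditional memory accesses like the loop below; when the chosen vector type is something like <2 x i32>, the legalizer changes described here apply:

  /* When vectorized on a target with masked memory operations, the load
   * of in[i] and the store to out[i] become masked load/store intrinsics
   * guarded by the trigger[i] > 0 mask. */
  void conditional_add(int *out, const int *in, const int *trigger, int n) {
    for (int i = 0; i < n; ++i)
      if (trigger[i] > 0)
        out[i] = in[i] + 1;
  }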



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226808 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 12:07:59 +00:00
Elena Demikhovsky
cdce03426d Fixed a bug in narrowing store operation.
Type MVT::i1 became legal in KNL, but the store operation can't be narrowed to this type,
since the size of VT (1 bit) is not equal to its actual store size (8 bits).

Added a test provided by David (dag@cray.com)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226805 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 09:39:08 +00:00
Reid Kleckner
73f671a60e SEH: Finish writing the catch-all test case
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226768 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 02:31:09 +00:00
Reid Kleckner
0d056fd4c3 Win64 SEH: Emit the constant 1 for catch-all into xdata
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226767 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-22 02:27:44 +00:00
Simon Pilgrim
3f6acdd265 [X86][SSE] Missing SSE/AVX1 memory folding integer instructions
Added most of the missing integer vector folding patterns for SSE (to SSE42) and AVX1.

The most useful of these are probably the i32/i64 extraction, i8/i16/i32/i64 insertions, zero/sign extension, unsigned saturation subtractions, i64 subtractions and the variable mask blends (pblendvb) - others include CLMUL, SSE42 string comparisons and bit tests.

Differential Revision: http://reviews.llvm.org/D7094



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226745 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 23:43:30 +00:00
Tim Northover
f5f8a3e6a6 DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N))
It can help with argument juggling on some targets, and is generally a good
idea.
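A small C illustration (not part of the commit) of the algebraic identity behind the fold, (x & m) | (x & n) == x & (m | n):

  #include <assert.h>
  #include <stdint.h>

  static uint32_t before(uint32_t x, uint32_t m, uint32_t n) {
    return (x & m) | (x & n);   /* two ands + one or */
  }

  static uint32_t after(uint32_t x, uint32_t m, uint32_t n) {
    return x & (m | n);         /* one or + one and */
  }

  int main(void) {
    assert(before(0xDEADBEEFu, 0x00FF00FFu, 0x0F0F0F0Fu) ==
           after(0xDEADBEEFu, 0x00FF00FFu, 0x0F0F0F0Fu));
    return 0;
  }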

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226740 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 23:17:19 +00:00
Simon Pilgrim
4269590166 [X86][SSE] Added support for SSE3 lane duplication shuffle instructions
This patch adds shuffle matching for the SSE3 MOVDDUP, MOVSLDUP and MOVSHDUP instructions. The big use of these being that they avoid many single source shuffles from needing to use (pre-AVX) dual source instructions such as SHUFPD/SHUFPS: causing extra moves and preventing load folds.

Adding these instructions uncovered an issue in XFormVExtractWithShuffleIntoLoad which crashed on single operand shuffle instructions (now fixed). It also involved fixing getTargetShuffleMask to correctly identify these instructions as unary shuffles.

Also adds a missing tablegen pattern for MOVDDUP.

Differential Revision: http://reviews.llvm.org/D7042



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226716 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 22:44:35 +00:00
Ahmed Bougacha
34288d885e [X86] Declare SSE4.1/AVX2 vector extloads covered by PMOV[SZ]X legal.
Now that we can fully specify extload legality, we can declare them
legal for the PMOVSX/PMOVZX instructions.  This for instance enables
a DAGCombine to fire on code such as
  (and (<zextload-equivalent> ...), <redundant mask>)
to turn it into:
  (zextload ...)
as seen in the testcase changes.

There is one regression, in widen_load-2.ll: we're no longer able
to do store-to-load forwarding with illegal extload memory types.
This will be addressed separately.

Differential Revision: http://reviews.llvm.org/D6533


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226676 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 17:07:06 +00:00
Tim Northover
c49e57ade1 Revert "DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N))"
It hadn't gone through review yet, but was still on my local copy.

This reverts commit r226663

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226665 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 15:48:52 +00:00
Tim Northover
47f47f5d2a DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N))
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226663 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 15:43:28 +00:00
Michael Kuperstein
0b4244ade1 [x32] Fast ISel should use LEA64_32r instead of LEA32r to adjust addresses in x32 mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226661 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 14:44:05 +00:00
Simon Pilgrim
650d4f00ae [X86][AVX] Simplified diff between AVX1 and SSE42 fp stack folding tests. NFC.
Changed the AVX1 tests' register spill tail call to return an xmm like the SSE42 version - makes doing diffs between them a lot easier without affecting the spills themselves.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226623 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 00:02:13 +00:00
Simon Pilgrim
8608e5bbc7 [X86][SSE] Added SSE/AVX1 integer stack folding tests.
Some folding patterns + tests are missing (marked as TODO) - these will be added in a future patch for review.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226622 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 23:54:17 +00:00
Simon Pilgrim
32f0438ada [X86][SSE] Added SSE fp stack folding tests.
Some folding patterns + tests are missing (marked as TODO) - these will be added in a future patch for review.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226621 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 23:50:18 +00:00
Simon Pilgrim
bddfe2660e [X86][AVX] Renamed AVX1 fp stack folding tests. NFC.
The SSE42 version of the AVX1 float stack folding tests will be added shortly; this renames the AVX1 file so that the files will be near each other in a directory listing to help ensure they are kept in sync.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226620 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 23:45:50 +00:00
Daniel Jasper
529ff2f257 Prevent binary-tree deterioration in sparse switch statements.
This addresses part of llvm.org/PR22262. Specifically, it prevents
considering the densities of sub-ranges that have fewer than
TLI.getMinimumJumpTableEntries() elements. Those densities won't help
jump tables.

This is not a complete solution but works around the most pressing
issue.

Review: http://reviews.llvm.org/D7070
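An illustrative C switch (mine, not from the review): the dense 0-4 cluster is a jump-table candidate, while the far-away outliers form sub-ranges with fewer than the minimum jump table entries and, with this change, no longer skew the density calculation:

  int classify(int v) {
    switch (v) {
    /* dense cluster */
    case 0:  return 10;
    case 1:  return 11;
    case 2:  return 12;
    case 3:  return 13;
    case 4:  return 14;
    /* sparse outliers */
    case 1000:   return 20;
    case 50000:  return 21;
    case 900000: return 22;
    default:     return -1;
    }
  }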

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226600 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 19:43:33 +00:00
Ramkumar Ramachandra
b8ae0acfaf [GC] Verify-pass void vararg functions in gc.statepoint
With the appropriate Verifier changes, extracting the result out of a
statepoint wrapping a vararg function crashes. However, a void vararg
function works fine: commit this first step.

Differential Revision: http://reviews.llvm.org/D7071

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226599 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 19:42:46 +00:00
Simon Pilgrim
2a2f94c5f2 [X86][AVX] Missing AVX1 memory folding float instructions
Now that we can create much more exhaustive X86 memory folding tests, this patch adds the missing AVX1/F16C floating point instruction stack foldings we can easily test for, including the scalar intrinsics (add, div, max, min, mul, sub), conversions float/int to double, half precision conversions, rounding, dot product and bit test. The patch also adds a couple of obviously missing SSE instructions (more to follow once we have full SSE testing).

Now that scalar folding is working it broke a very old test (2006-10-07-ScalarSSEMiscompile.ll) - this test appears to make no sense as it's trying to ensure that a scalar subtraction isn't folded as it 'would zero the top elts of the loaded vector' - this test just appears to be wrong to me.

Differential Revision: http://reviews.llvm.org/D7055



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226513 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 22:40:45 +00:00
Rafael Espindola
4b678bff4e Bring r226038 back.
No change in this commit, but clang was changed to also produce trivial comdats when
needed.

Original message:

Don't create new comdats in CodeGen.

This patch stops the implicit creation of comdats during codegen.

Clang now sets the comdat explicitly when it is required. With this patch clang and gcc
now produce the same result in pr19848.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226467 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 15:16:06 +00:00
Michael Kuperstein
5a0c8601d3 [MIScheduler] Slightly better handling of constrainLocalCopy when both source and dest are local
This fixes PR21792.

Differential Revision: http://reviews.llvm.org/D6823

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226433 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 07:30:47 +00:00
Simon Pilgrim
a707cf4652 [X86][SSE] Added scalar min/max folding tests. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226406 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 18:06:23 +00:00
Simon Pilgrim
7c62013afe [X86][SSE] Added float extract and xmm extract/insert stack folding tests. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226405 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 17:04:32 +00:00
Simon Pilgrim
2793dd7a29 [X86][SSE] Added scalar conversion stack folding tests. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226404 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 16:22:15 +00:00
Simon Pilgrim
65a3a9bea3 AVX1 stack folding tests. NFC.
Begun adding more exhaustive tests - all floating point instructions should now be either tested or have placeholders. We do seem to have a number of missing instructions, I will add a patch for review once the remaining working instructions are added.

I'll then move on to SSE tests and then the integer instructions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226400 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 12:56:39 +00:00
Mehdi Amini
5eed637b34 Improve DAG combine pass on certain IR vector patterns
Loading 2 2x32-bit float vectors into the bottom half of a 256-bit vector
produced suboptimal code in AVX2 mode with certain IR combinations.

In particular, the IR optimizer folded 2f32 + 2f32 -> 4f32, 4f32 + 4f32
(undef) -> 8f32 into a 2f32 + 2f32 -> 8f32, which seems more canonical,
but then mysteriously generated rather bad code; the movq/movhpd combination
didn't match.

The problem lay in the BUILD_VECTOR optimization path. The 2f32 inputs
would get promoted to 4f32 by the type legalizer, eventually resulting
in a BUILD_VECTOR on two 4f32 into an 8f32. The BUILD_VECTOR then, recognizing
these were both half the output size, concatted them and then produced
a shuffle. However, the resulting concat + shuffle was more complex than
it should be; in the case where the upper half of the output is undef, we
probably want to generate shuffle + concat instead.

This enhancement causes the vector_shuffle combine step to recognize this
suboptimal pattern and correct it. I included it there instead of in BUILD_VECTOR
in case the same suboptimal pattern occurs for other reasons.

This results in the optimizer correctly producing the optimal movq + movhpd
sequence for all three variations on this IR, even with AVX2.

I've included a test case.

Radar link: rdar://problem/19287012
Fix for PR 21943.

From: Fiona Glaser <fglaser@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226360 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-17 01:35:56 +00:00
Adam Nemet
ad2ac976af [AVX512] Add intrinsics for masked aligned FP loads and stores
Similar to the unaligned cases.

Test was generated with update_llc_test_checks.py.

Part of <rdar://problem/17688758>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226296 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 18:50:09 +00:00