Commit Graph

14 Commits

Author SHA1 Message Date
Simon Pilgrim
38c35f3e2c Line endings fix. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227138 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-26 21:28:32 +00:00
Simon Pilgrim
4269590166 [X86][SSE] Added support for SSE3 lane duplication shuffle instructions
This patch adds shuffle matching for the SSE3 MOVDDUP, MOVSLDUP and MOVSHDUP instructions. The main benefit is that they let many single-source shuffles avoid (pre-AVX) dual-source instructions such as SHUFPD/SHUFPS, which cause extra moves and prevent load folds.

Adding these instructions uncovered an issue in XFormVExtractWithShuffleIntoLoad, which crashed on single-operand shuffle instructions (now fixed). It also involved fixing getTargetShuffleMask to correctly identify these instructions as unary shuffles.

Also adds a missing tablegen pattern for MOVDDUP.
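
For reference, a minimal sketch of the unary shuffle masks these instructions implement (illustrative IR, not taken from the patch's tests; function names are made up):

define <4 x float> @sldup(<4 x float> %a) nounwind {
  ; duplicate the even (low) lane of each pair -> movsldup with SSE3
  %r = shufflevector <4 x float> %a, <4 x float> undef, <4 x i32> <i32 0, i32 0, i32 2, i32 2>
  ret <4 x float> %r
}

define <4 x float> @shdup(<4 x float> %a) nounwind {
  ; duplicate the odd (high) lane of each pair -> movshdup with SSE3
  %r = shufflevector <4 x float> %a, <4 x float> undef, <4 x i32> <i32 1, i32 1, i32 3, i32 3>
  ret <4 x float> %r
}

define <2 x double> @ddup(<2 x double> %a) nounwind {
  ; broadcast the low double -> movddup with SSE3
  %r = shufflevector <2 x double> %a, <2 x double> undef, <2 x i32> <i32 0, i32 0>
  ret <2 x double> %r
}

Each of these is a single-source shuffle, so no SHUFPS/SHUFPD-style dual-source instruction (and no extra movaps) is needed.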

Differential Revision: http://reviews.llvm.org/D7042



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226716 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 22:44:35 +00:00
Chandler Carruth
1d02acb7a0 [x86] Remove the 2-addr-to-3-addr "optimization" from shufps to pshufd.
This trades a (register-renamer-friendly) movaps for a floating point
/ integer domain cross. That is a very bad trade, even on architectures
where domain crossing is relatively fast. On any chip where there is
even a cycle stall, this is a Very Bad Idea. It doesn't even seem likely
to cause a spill to be introduced because the reason for the copy is to
destructively shuffle in place.
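
As a rough IR-level illustration (a sketch, not from the commit; the comments describe the tradeoff, not guaranteed output), the removed transform used to fire on in-place single-register shuffles such as:

define <4 x float> @splat0(<4 x float> %a) nounwind {
  ; With the 2-addr shufps form the allocator may need a movaps copy when
  ; the result must live in a different register; rewriting to the 3-addr
  ; pshufd form avoids the copy but moves the value through the integer
  ; domain, which is the bad trade described above.
  %r = shufflevector <4 x float> %a, <4 x float> undef, <4 x i32> zeroinitializer
  ret <4 x float> %r
}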

Thanks to Ben Kramer for fixing a bug in this code that my new shuffle
lowering exposed and highlighting that perhaps it should just go away.
=]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219090 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-05 22:57:31 +00:00
Chandler Carruth
03a77831cc [x86] Enable the new vector shuffle lowering by default.
Update the entire regression test suite for the new shuffles. Remove
most of the old testing which was devoted to the old shuffle lowering
path and is no longer really relevant. Also remove a few other random
tests that really only exercised shuffles, and did so only incidentally
or without any interesting aspects to them.

Benchmarking that I have done shows a few small regressions with this on
LNT, zero measurable regressions on real, large applications, and for
several benchmarks where the loop vectorizer fires in the hot path it
shows 5% to 40% improvements for SSE2 and SSE3 code running on Sandy
Bridge machines. Running on AMD machines shows even more dramatic
improvements.

When using newer ISA vector extensions the gains are much more modest,
but the code is still better on the whole. There are a few regressions
being tracked (PR21137, PR21138, PR21139) but by and large this is
expected to be a win for x86 generated code performance.

It is also more correct than the code it replaces. I have fuzz tested
this extensively with ISA extensions up through AVX2 and found no
crashes or miscompiles (yet...). The old lowering had a few miscompiles
and crashers after a somewhat smaller amount of fuzz testing.

There is one significant area where the new code path lags behind and
that is in AVX-512 support. However, there was *extremely little*
support for that already and so this isn't a significant step backwards
and the new framework will probably make it easier to implement lowering
that uses the full power of AVX-512's table-based shuffle+blend (IMO).

Many thanks to Quentin, Andrea, Robert, and others for benchmarking
assistance. Thanks to Adam and others for help with AVX-512. Thanks to
Hal, Eric, and *many* others for answering my incessant questions about
how the backend actually works. =]

I will leave the old code path in the tree until the 3 PRs above are at
least resolved to folks' satisfaction. Then I will rip it (and 1000s of
lines of code) out. =] I don't expect this flag to stay around for very
long. It may not survive next week.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219046 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-04 03:52:55 +00:00
Chandler Carruth
4d6f14c9e3 [x86] Regenerate and clean up more tests in preparation for the vector
shuffle switch.

I nuked a win64 config from one test as it doesn't really make sense to
cover that ABI specially for generic v2f32 tests...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218948 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 01:44:04 +00:00
Nico Rieck
3c1dc3cac8 Fix non-deterministic SDNodeOrder-dependent codegen
Reset SelectionDAGBuilder's SDNodeOrder to ensure deterministic code
generation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199050 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-12 14:09:17 +00:00
Stephen Lin
b4dc0233c9 Convert CodeGen/*/*.ll tests to use the new CHECK-LABEL for easier debugging. No functionality change and all tests pass after conversion.
This was done with the following sed invocation to catch label lines demarcating function boundaries:
    sed -i '' "s/^;\( *\)\([A-Z0-9_]*\):\( *\)test\([A-Za-z0-9_-]*\):\( *\)$/;\1\2-LABEL:\3test\4:\5/g" test/CodeGen/*/*.ll
which was written conservatively to avoid false positives rather than false negatives. I scanned through all the changes and everything looks correct.
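
For illustration, a minimal sketch of a converted test (not one of the actual files; the RUN line and function body are made up): a label line that previously read "; CHECK: test1:" becomes a CHECK-LABEL, so FileCheck splits the output into per-function blocks and reports mismatches against the right function:

; RUN: llc < %s -mtriple=x86_64-unknown-unknown | FileCheck %s

; CHECK-LABEL: test1:
; CHECK: addps
define <4 x float> @test1(<4 x float> %a) nounwind {
  %r = fadd <4 x float> %a, %a
  ret <4 x float> %r
}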


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186258 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-13 20:38:47 +00:00
Bruno Cardoso Lopes
278cbfb3f5 Attempt to fix -mtriple=i686-{cygwin|mingw|win32} regressions. Nakamura,
if this doesn't work, please provide more details.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140107 91177308-0d34-0410-b5e6-96231b3b80d8
2011-09-20 00:08:12 +00:00
NAKAMURA Takumi
37947c6bad test/CodeGen/X86: Add a pattern for Win64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127733 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-16 13:52:51 +00:00
Chris Lattner
e6f7c267df Change handling of illegal vector types to widen when possible instead of
expanding: e.g. <2 x float> -> <4 x float> instead of -> 2 floats.  This
affects two places in the code: handling cross-block values and handling
function returns and arguments. Since vectors are already widened by
LegalizeTypes, this gives us much better code and unblocks x86-64 ABI
and SPU ABI work.

For example, this (which is a silly example of a cross-block value):
define <4 x float> @test2(<4 x float> %A) nounwind {
 %B = shufflevector <4 x float> %A, <4 x float> undef, <2 x i32> <i32 0, i32 1>
 %C = fadd <2 x float> %B, %B
  br label %BB
BB:
 %D = fadd <2 x float> %C, %C
 %E = shufflevector <2 x float> %D, <2 x float> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
 ret <4 x float> %E
}

Now compiles into:

_test2:                                 ## @test2
## BB#0:
 addps %xmm0, %xmm0
 addps %xmm0, %xmm0
 ret

previously it compiled into:

_test2:                                 ## @test2
## BB#0:
 addps %xmm0, %xmm0
 pshufd $1, %xmm0, %xmm1
                                        ## kill: XMM0<def> XMM0<kill> XMM0<def>
 insertps $0, %xmm0, %xmm0
 insertps $16, %xmm1, %xmm0
 addps %xmm0, %xmm0
 ret

This implements rdar://8230384



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112101 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-25 22:49:25 +00:00
Chris Lattner
11b3d1621d another v2f32 case, this one showing poor codegen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107614 91177308-0d34-0410-b5e6-96231b3b80d8
2010-07-05 05:52:56 +00:00
Chris Lattner
4fd1ab3bec fix test on non-x86 hosts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107608 91177308-0d34-0410-b5e6-96231b3b80d8
2010-07-05 03:56:55 +00:00
Chris Lattner
f172ecd964 Just rip v2f32 support completely out of the X86 backend. In
the example in the testcase, we now generate:

_test1:                                 ## @test1
	movss	4(%esp), %xmm0
	addss	8(%esp), %xmm0
	movl	12(%esp), %eax
	movss	%xmm0, (%eax)
	ret

instead of:

_test1:                                                     ## @test1
	subl	$20, %esp
	movl	24(%esp), %eax
	movq	%mm0, (%esp)
	movq	%mm0, 8(%esp)
	movss	(%esp), %xmm0
	addss	12(%esp), %xmm0
	movss	%xmm0, (%eax)
	addl	$20, %esp
	ret

v2f32 support did not work reliably because most of the X86
backend didn't know it was legal.  It was apparently only added
to support returning source-level v2f32 values in MMX registers
in x86-32 mode.  If ABI compatibility is important on this
GCC-extended-vector type for some reason, then the frontend
should generate IR that returns v2i32 instead of v2f32.  However,
we generally don't try very hard to be ABI-compatible on GCC
extended vectors.
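
A minimal sketch of that suggested frontend-side approach (illustrative only, not code from the tree): carry the two floats as v2i32 at the return boundary and bitcast around it:

define <2 x i32> @ret_v2f32_bits(<2 x float> %x) nounwind {
  ; return the same 64 bits as v2i32 so the ABI decision (e.g. an MMX
  ; register return in x86-32) is made on an integer vector type
  %bits = bitcast <2 x float> %x to <2 x i32>
  ret <2 x i32> %bits
}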



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107601 91177308-0d34-0410-b5e6-96231b3b80d8
2010-07-04 23:07:25 +00:00
Chris Lattner
e35d9842f7 fix PR7518 - terrible codegen of <2 x float>, by only marking
v2f32 as legal in 32-bit mode.  It is just as terrible there,
but I just care about x86-64 and no one claims it is valuable
in 64-bit mode.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107600 91177308-0d34-0410-b5e6-96231b3b80d8
2010-07-04 22:57:10 +00:00