Commit Graph

4963 Commits

Author SHA1 Message Date
Filipe Cabecinhas
ca162faee2 Added more insertps optimizations
Summary:
When inserting an element that's coming from a vector load or a broadcast
of a vector (or scalar) load, combine the load into the insertps
instruction.
Added PerformINSERTPSCombine for the case where we need to fix the load
(load of a vector + insertps with a non-zero CountS).
Added patterns for the broadcasts.

Also added tests for SSE4.1, AVX, and AVX2.
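
As a rough illustration, IR along these lines should now fold the scalar
load into the insertps itself (function and value names are hypothetical):

    define <4 x float> @insert_from_load(<4 x float> %a, float* %p) {
      ; the loaded scalar feeds lane 3 of the result; instead of a
      ; separate movss + insertps, the load can become the insertps
      ; memory operand
      %ld = load float* %p
      %v = insertelement <4 x float> undef, float %ld, i32 0
      %ins = shufflevector <4 x float> %a, <4 x float> %v,
                           <4 x i32> <i32 0, i32 1, i32 2, i32 4>
      ret <4 x float> %ins
    }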

Reviewers: delena, nadav, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3581

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209156 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-19 19:45:57 +00:00
Benjamin Kramer
bb81d9d5fa SDAG: Legalize vector BSWAP into a shuffle if the shuffle is legal but the bswap not.
- On ARM/ARM64 we get a vrev because the shuffle matching code is really smart. We still unroll anything that's not v4i32 though.
- On X86 we get a pshufb with SSSE3. Required more cleverness in isShuffleMaskLegal.
- On PPC we get a vperm for v8i16 and v4i32. v2i64 is unrolled.
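
A minimal sketch of the v4i32 case this legalization now handles (on x86
with SSSE3 this should become one pshufb whose mask reverses the bytes
within each 32-bit lane):

    declare <4 x i32> @llvm.bswap.v4i32(<4 x i32>)

    define <4 x i32> @bswap_v4i32(<4 x i32> %x) {
      ; previously unrolled into scalar bswaps; now legalized into a
      ; byte shuffle whenever that shuffle is legal on the target
      %r = call <4 x i32> @llvm.bswap.v4i32(<4 x i32> %x)
      ret <4 x i32> %r
    }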

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209123 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-19 13:12:38 +00:00
Filipe Cabecinhas
cb596baadd Change the blend tests to AVX, not AVX2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209107 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-19 04:47:12 +00:00
Chandler Carruth
2ff4a49344 [x86] Fix a bad predicate I spotted by inspection -- pshufhw and pshuflw
were added in SSE2, not SSSE3. Found this while auditing all uses of
SSSE3 in the X86 target. I don't actually expect this to make
a significant difference on anything and I don't have any detailed test
cases but I updated the existing test cases that already covered some of
this code path.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209056 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-17 03:29:20 +00:00
Filipe Cabecinhas
d77c1c4465 Implemented special cases for PerformVSELECTCombine.
vselects with constant masks, after legalization, will get turned into
specialized shuffle_vectors so they can be matched to blend+imm
instructions.
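
For example (hypothetical test), a constant-condition select like this
should now match a blend-with-immediate after legalization:

    define <4 x float> @const_mask_blend(<4 x float> %a, <4 x float> %b) {
      ; the constant mask turns into a specialized shuffle_vector,
      ; which in turn matches something like blendps $10, %xmm1, %xmm0
      %r = select <4 x i1> <i1 true, i1 false, i1 true, i1 false>,
                  <4 x float> %a, <4 x float> %b
      ret <4 x float> %r
    }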

Fixed some tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209044 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 22:47:54 +00:00
Filipe Cabecinhas
5ea7215050 Lower vselects into X86ISD::BLENDI when appropriate.
LowerVSELECT will, if possible, generate a X86ISD::BLENDI DAG node if the
condition is constant and we can emit that instruction, given the
subtarget.

This is not enough for all cases. An additional SELECTCombine optimization
will be committed.

Fixed tests that were expecting variable blends but where a blend+imm can
be generated.
Added test where we can't emit blend+immediate.
Added avx2 blend+imm tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209043 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 22:47:49 +00:00
Rafael Espindola
27c076ae40 Fix most of PR10367.
This patch changes the design of GlobalAlias so that it doesn't take a
ConstantExpr anymore. It now points directly to a GlobalObject, but its type is
independent of the aliasee type.

To avoid changing all alias related tests in this patches, I kept the common
syntax

@foo = alias i32* @bar

to mean the same as now. The cases that used to use cast now use the more
general syntax

@foo = alias i16, i32* @bar.

Note that GlobalAlias now behaves a bit more like GlobalVariable. We
know that its type is always a pointer, so we omit the '*'.

For the bitcode, a nice surprise is that we were writing both identical types
already, so the format change is minimal. Auto upgrade is handled by looking
through the casts and no new fields are needed for now. New bitcode will
simply have different types for Alias and Aliasee.

One last interesting point in the patch is that replaceAllUsesWith becomes
smart enough to avoid putting a ConstantExpr in the aliasee. This seems better
than checking and updating every caller.

A followup patch will delete getAliasedGlobal now that it is redundant. Another
patch will add support for an explicit offset.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209007 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 19:35:39 +00:00
David Blaikie
33b37c7b12 DebugInfo: Assume the CU's Subprogram list only contains definitions.
DIBuilder maintains this invariant and the current DwarfDebug code could
end up doing weird things if it contained declarations (such as putting
the definition DIE inside a CU that contained the declaration - this
doesn't seem like a good idea, so rather than adding logic to handle
this case we'll just ban it for now & cross that bridge if we come to
it later).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209004 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 18:26:53 +00:00
NAKAMURA Takumi
70e3aba855 llvm/test/CodeGen/X86/combine-sse41-intrinsics.ll: Add explicit triple.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208897 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-15 15:45:31 +00:00
Andrea Di Biagio
9836c47ea6 [X86] Teach the backend how to fold SSE4.1/AVX/AVX2 blend intrinsics.
Added target specific combine rules to fold blend intrinsics according
to the following rules:
 1) fold(blend A, A, Mask) -> A;
 2) fold(blend A, B, <allZeros>) -> A;
 3) fold(blend A, B, <allOnes>) -> B.

Added two new tests to verify that the new folding rules work for all
the optimized blend intrinsics.
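
A minimal IR example of rule 1 (hypothetical test; the other rules fold
analogously based on the constant mask):

    declare <4 x float> @llvm.x86.sse41.blendvps(<4 x float>, <4 x float>, <4 x float>)

    define <4 x float> @fold_blend_same_operand(<4 x float> %a, <4 x float> %m) {
      ; both inputs are %a, so the blend is a no-op and folds to %a
      ; regardless of the mask
      %r = call <4 x float> @llvm.x86.sse41.blendvps(<4 x float> %a,
                  <4 x float> %a, <4 x float> %m)
      ret <4 x float> %r
    }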



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208895 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-15 15:18:15 +00:00
Jay Foad
6b543713a2 Rename ComputeMaskedBits to computeKnownBits. "Masked" has been
inappropriate since it lost its Mask parameter in r154011.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208811 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-14 21:14:37 +00:00
Benjamin Kramer
202be06318 X86: If we have an instruction that sets a flag and a zero test on the input of that instruction, try to eliminate the test.
For example
	tzcntl	%edi, %ebx
	testl %edi, %edi
	je	.label

can be rewritten into
	tzcntl	%edi, %ebx
	jb 	.label

A minor complication is that tzcnt sets CF instead of ZF when the input
is zero, so we have to rewrite users of the flags from ZF to CF. Currently
we recognize patterns using lzcnt, tzcnt and popcnt.

Differential Revision: http://reviews.llvm.org/D3454

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208788 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-14 16:14:45 +00:00
Joey Gouly
fdae312e65 [CGP] r205941 changed the logic, so that a cast happens *before* 'Result' is
compared to 'AddrMode.BaseReg'. In the case that 'AddrMode.BaseReg' is
nullptr, 'Result' will also be nullptr, so the cast causes an assertion. We
should use dyn_cast_or_null here to check 'Result' is not null and it is an
instruction.

Bug found by Mats Petersson, and I reduced his IR to get a test case.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208705 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-13 15:42:45 +00:00
Reid Kleckner
17335ce80f Try to fix an SDAG dependence issue with sret
r208453 added support for having sret on the second parameter.  In that
change, the code for copying sret into a virtual register was hoisted
into the loop that lowers formal parameters.  This caused a "Wrong
topological sorting" assertion failure during scheduling when a
parameter is passed in memory.  This change undoes that by creating a
second loop that deals with sret.

I'm worried that this fix is incomplete.  I don't fully understand the
dependence issues.  However, with this change we produce the same DAGs
we used to produce, so if they are broken, they are just as broken as
they have always been.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208637 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 22:01:27 +00:00
Adam Nemet
45fc47013f [Test] Trim unnecessary .c and .cpp from config.suffix in lit.local.cfg
Tested by comparing make check VERBOSE=1 before and after to make sure
no tests are missed.  (VERBOSE=1 prints the list of tests.)

Only one test :( remains where .cpp is required:

tools/llvm-cov/range_based_for.cpp:// RUN: llvm-cov range_based_for.cpp | FileCheck %s --check-prefix=STDOUT

The topic was discussed in this thread:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140428/214905.html

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208621 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 19:57:31 +00:00
Tim Northover
d6cd0381f6 TableGen: use PrintMethods to print more aliases
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208607 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 18:04:06 +00:00
Benjamin Kramer
b31a977c9c X86: Make sure that we have SSE4.1 before we generate insertps nodes.
PR19721.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208552 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 13:12:08 +00:00
Elena Demikhovsky
1cec507d6d AVX-512: changes in intrinsics
1) Changed the gather and scatter intrinsics. They are now aligned with the GCC built-ins: there is no longer a non-masked form, and the masked intrinsic receives a mask of -1 when all lanes are executed.
2) Changed the function that handles intrinsics inside X86ISelLowering.cpp. I put all intrinsics in one table. I did it for INTRINSICS_W_CHAIN and plan to put all intrinsics from the WO_CHAIN set into the same table in order to avoid the very long "switch". (I wanted to use the static map initialization allowed by C++11, but I wasn't able to compile it on VS2012.)
3) I added gather/scatter prefetch intrinsics.
4) I fixed MRMm encoding for masked instructions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208522 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 07:18:51 +00:00
Filipe Cabecinhas
4ccf0ebb19 Fixed a bug when lowering build_vector (PR19694)
When lowering build_vector to an insertps, we would still lower it, even
if the source vectors weren't v4x32. This would break on avx if the source
was a v8x32. We now check the type of the source vectors.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208487 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-11 08:12:56 +00:00
Reid Kleckner
d30c11edde Revert "[ms-cxxabi] Add a new calling convention that swaps 'this' and 'sret'"
This reverts commit r200561.

This calling convention was an attempt to match the MSVC C++ ABI for
methods that return structures by value.  This solution didn't scale,
because it would have required splitting every CC available on Windows
into two: one for methods and one for free functions.

Now that we can put sret on the second arg (r208453), and Clang does
that (r208458), revert this hack.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208459 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 22:56:42 +00:00
Reid Kleckner
805a83c041 Allow sret on the second parameter as well as the first
MSVC always places the implicit sret parameter after the implicit this
parameter of instance methods.  We used to handle this for
x86_thiscallcc by allocating the sret parameter on the stack and leaving
the this pointer in ecx, but that doesn't handle alternative calling
conventions like cdecl, stdcall, fastcall, or the win64 convention.

Instead, change the verifier to allow sret on the second parameter.

This also requires changing the Mips and X86 backends to return the
argument with the sret parameter, instead of assuming that the sret
parameter comes first.

The Sparc backend also returns sret parameters in a register, but I
wasn't able to update it to handle secondary sret parameters.  It
currently calls report_fatal_error if you feed it an sret in the second
parameter.
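
A rough sketch of IR that the verifier now accepts (2014-era syntax,
names hypothetical):

    %class.C = type opaque
    %struct.S = type { i32, i32, i32 }

    ; the first parameter is 'this' and the hidden return slot comes
    ; second, matching MSVC's parameter order for instance methods
    declare x86_thiscallcc void @method(%class.C*, %struct.S* sret)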

Reviewers: rafael.espindola, majnemer

Differential Revision: http://reviews.llvm.org/D3617

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208453 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 22:32:13 +00:00
Filipe Cabecinhas
e4a3254c02 Optimize shufflevector that copies an i64/f64 and zeros the rest.
Summary:
Also ran clang-format on the function. The code added is the last else
if block.
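
A minimal sketch of the pattern (v2i64 case; on x86 this can now be
selected as a single movq, which implicitly zeroes the high lane):

    define <2 x i64> @copy_lane0_zero_rest(<2 x i64> %a) {
      ; keep element 0 of %a, take the other element from the zero vector
      %r = shufflevector <2 x i64> %a, <2 x i64> zeroinitializer,
                         <2 x i32> <i32 0, i32 2>
      ret <2 x i64> %r
    }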

Reviewers: nadav, craig.topper, delena

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3518

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208372 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 23:16:08 +00:00
Andrea Di Biagio
2360e51fd0 [X86] Add target specific combine rules to fold SSE2/AVX2 packed arithmetic shift intrinsics.
This patch teaches the backend how to combine packed SSE2/AVX2 arithmetic shift
intrinsics.

The rules are:
 - Always fold a packed arithmetic shift by zero to its first operand;
 - Convert a packed arithmetic shift intrinsic dag node into a ISD::SRA only if
   the shift count is known to be smaller than the vector element size.

This patch also teaches the function 'getTargetVShiftByConstNode' how to fold
target specific vector shifts by zero.
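
A minimal example of the shift-by-zero rule (hypothetical test):

    declare <4 x i32> @llvm.x86.sse2.psrai.d(<4 x i32>, i32)

    define <4 x i32> @fold_ashr_zero(<4 x i32> %a) {
      ; a shift count of 0 folds directly to %a; a count known to be
      ; smaller than the element size (e.g. 3) would instead be
      ; converted into a plain ISD::SRA node
      %r = call <4 x i32> @llvm.x86.sse2.psrai.d(<4 x i32> %a, i32 0)
      ret <4 x i32> %r
    }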

Added two new tests to verify that the DAGCombiner is able to fold
sequences of SSE2/AVX2 packed arithmetic shift calls.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208342 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 17:44:04 +00:00
Filipe Cabecinhas
b19c087aa7 Lower certain build_vectors to insertps instructions
Summary:
Vectors built with zeros and elements in the same order as another
(source) vector are optimized to be built using a single insertps
instruction.
Also optimize when we move one element in a vector to a different place
in that vector while zeroing out some of the other elements.

Further optimizations are possible, described in TODO comments.
I will be implementing at least some of them in the near future.

Added some tests for different cases where this optimization triggers.
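
A minimal sketch of the second case (hypothetical test): element 1 of the
source ends up in lane 0 with every other lane zeroed, which now lowers
to a single insertps with a zero mask:

    define <4 x float> @move_elt_zero_rest(<4 x float> %a) {
      %e = extractelement <4 x float> %a, i32 1
      %v = insertelement <4 x float> zeroinitializer, float %e, i32 0
      ret <4 x float> %v
    }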

Reviewers: nadav, delena, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3521

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208271 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 00:25:16 +00:00
Quentin Colombet
57b4c5d473 [X86] Add a test case for r208252.
Prior to r208252, the FMA 231 family was marked as isCommutable. However the
memory variants of this family are not commutable. Therefore, we did not
implement findCommutedOpIndices for those variants and missed that
the default implementation (more or less: commute indices 1 and 2) was
firing behind our back.
As a result, as demonstrated in the test case before the fix, we were
transforming a = b * c + a into a = a * c + b.

I.e., before r208252 we were generating for this test case:
vmovaps %xmm0, %xmm1
vmovss (%rsi), %xmm0
vfmadd231ss (%rdi), %xmm1, %xmm0

Instead of:
vmovss (%rsi), %xmm1
vfmadd231ss (%rdi), %xmm1, %xmm0

<rdar://problem/16800495> 


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208260 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 22:52:58 +00:00
Andrea Di Biagio
8a712ba229 [X86] Improve the lowering of BITCAST dag nodes from type f64 to type v2i32 (and vice versa).
Before this patch, the backend always emitted a store+load sequence to
bitconvert from f64 to i64 the input operand of a ISD::BITCAST dag node that
performed a bitconvert from type MVT::f64 to type MVT::v2i32. The resulting
i64 node was then used to build a v2i32 vector.

With this patch, the backend now produces a cheaper SCALAR_TO_VECTOR from
MVT::f64 to MVT::v2f64. That SCALAR_TO_VECTOR is then followed by a "free"
bitcast to type MVT::v4i32. The elements of the resulting
v4i32 are then extracted to build a v2i32 vector (which is illegal and
therefore promoted to MVT::v2i64).

This is in general cheaper than emitting a stack store+load sequence
to bitconvert the operand from type f64 to type i64.
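
A minimal IR example of the affected bitconvert (hypothetical test):

    define <2 x i32> @f64_to_v2i32(double %d) {
      ; previously lowered via a stack store+load; now becomes a
      ; SCALAR_TO_VECTOR, a free bitcast to v4i32, and two extracts
      %v = bitcast double %d to <2 x i32>
      ret <2 x i32> %v
    }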



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208107 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 17:09:03 +00:00
Renato Golin
22f779d1fd Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
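
A rough sketch of the intrinsic as introduced (2014-era metadata syntax;
"sp" names the stack pointer, e.g. on AArch64):

    declare i64 @llvm.read_register.i64(metadata)

    define i64 @get_stack_pointer() {
      ; reads the named register; only non-allocatable registers
      ; are expected to work reliably
      %sp = call i64 @llvm.read_register.i64(metadata !0)
      ret i64 %sp
    }

    !0 = metadata !{metadata !"sp"}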

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208104 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 16:51:25 +00:00
Reid Kleckner
9ad48c11b1 Fix i128 div/mod on mingw64
The Win64 docs are very clear that anything larger than 8 bytes is
passed by reference, and GCC MinGW64 honors that for __modti3 and
friends.
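
A minimal IR example of an affected operation (hypothetical test):

    define i128 @mod128(i128 %a, i128 %b) {
      ; lowers to a libcall to __modti3; on Win64 the i128 arguments
      ; must be passed by reference, as they are larger than 8 bytes
      %r = srem i128 %a, %b
      ret i128 %r
    }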

Patch by Jameson Nash!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208029 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 01:20:42 +00:00
Filipe Cabecinhas
75ea413a1b Revert "Optimize shufflevector that copies an i64/f64 and zeros the rest."
This reverts commit 207992. I misread the phab number on the LGTM.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207993 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-05 19:40:36 +00:00
Filipe Cabecinhas
a0fa9eb606 Optimize shufflevector that copies an i64/f64 and zeros the rest.
Summary:
Also ran clang-format on the function. The code added is the last else
if block.

Reviewers: nadav, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3518

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207992 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-05 19:36:28 +00:00
Michael Zolotukhin
830195e5fb Move test from r207969 to another folder and rename it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207984 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-05 18:10:15 +00:00
Rafael Espindola
3b4c8c2b2a Modify test to not use -disable-cfi.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207974 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-05 16:47:07 +00:00
Elena Demikhovsky
8a3751f813 AVX-512: minor change in rndscale intrinsic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207937 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-04 13:35:37 +00:00
Saleem Abdulrasool
f3b2ed7498 X86: repair export compatibility with MinGW/cygwin
Both MinGW and cygwin (i686) construct export directives without the global
leader prefix.  This is mostly due to the fact that they use GNU ld which does
not correctly handle the export directive.  This apparently has been broken
for a while.  However, this was recently reported as being broken by
mingwandroid and diorcety of the msys2 project.

Remove the global leader prefix if targeting MinGW or cygwin, otherwise, retain
the global leader prefix.  Add an explicit test for cygwin's behaviour of export
directives.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207926 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-04 00:03:48 +00:00
Benjamin Kramer
bcf1501839 Allow SelectionDAG::FoldConstantArithmetic to work when it's called with a vector VT but scalar values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207835 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-02 12:35:22 +00:00
Michael J. Spencer
d4b4f2d340 [IR] Make {extract,insert}element accept an index of any integer type.
Given the following C code, LLVM currently generates suboptimal code for
x86-64:

__m128 bss4( const __m128 *ptr, size_t i, size_t j )
{
    float f = ptr[i][j];
    return (__m128) { f, f, f, f };
}

=================================================

define <4 x float> @_Z4bss4PKDv4_fmm(<4 x float>* nocapture readonly %ptr, i64 %i, i64 %j) #0 {
  %a1 = getelementptr inbounds <4 x float>* %ptr, i64 %i
  %a2 = load <4 x float>* %a1, align 16, !tbaa !1
  %a3 = trunc i64 %j to i32
  %a4 = extractelement <4 x float> %a2, i32 %a3
  %a5 = insertelement <4 x float> undef, float %a4, i32 0
  %a6 = insertelement <4 x float> %a5, float %a4, i32 1
  %a7 = insertelement <4 x float> %a6, float %a4, i32 2
  %a8 = insertelement <4 x float> %a7, float %a4, i32 3
  ret <4 x float> %a8
}

=================================================

        shlq    $4, %rsi
        addq    %rdi, %rsi
        movslq  %edx, %rax
        vbroadcastss    (%rsi,%rax,4), %xmm0
        retq

=================================================

The movslq is unneeded, but is present because of the trunc to i32 and then
sext back to i64 that the backend adds for vbroadcastss.

We can't remove it because it changes the meaning. The IR that clang
generates is already suboptimal. What clang really should emit is:

  %a4 = extractelement <4 x float> %a2, i64 %j

This patch makes that legal. A separate patch will teach clang to do it.

Differential Revision: http://reviews.llvm.org/D3519

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207801 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-01 22:12:39 +00:00
Michael Zolotukhin
c80b103a2b [X86] Never hoist the shift value of a shift instruction.
There is no need to check if we want to hoist the immediate value of a
shift instruction. Simply return TCC_Free right away.

This change is like r206101, but for X86.

rdar://problem/16190769

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207692 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 19:17:32 +00:00
Reid Kleckner
9902128e2a Implement X86 code generation for musttail
Currently, musttail codegen relies on sibcall optimization and reports
a fatal error if it fails.  Sibcall optimization fails when stack
arguments need to be modified, which is insufficient for musttail.

The logic for moving arguments in memory safely is already implemented
for GuaranteedTailCallOpt.  This change merely arranges for musttail
calls to use it.

No functional change for GuaranteedTailCallOpt.
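
A minimal example of a call that needs this (hypothetical test):

    declare i32 @callee(i32, i32)

    define i32 @caller(i32 %x, i32 %y) {
      ; musttail demands a guaranteed tail call even when stack
      ; arguments have to be moved, which sibcall optimization
      ; alone could not promise
      %r = musttail call i32 @callee(i32 %x, i32 %y)
      ret i32 %r
    }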

Reviewers: espindola

Differential Revision: http://reviews.llvm.org/D3493

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207598 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-29 23:55:41 +00:00
Tim Northover
d5d3e188f0 X86: emit hidden stubs into a proper non_lazy_symbol_pointer section.
rdar://problem/16660411

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207518 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-29 10:06:10 +00:00
Elena Demikhovsky
e3e08acd09 AVX-512: optimized a shuffle pattern to VINSERTI64x4.
Added intrinsics for VPERMT2PS/PD/D/Q instructions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207513 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-29 09:09:15 +00:00
Benjamin Kramer
ad1f916eaf X86: If SSE4.1 is missing lower SMUL_LOHI of v4i32 to pmuludq and fix up the high parts.
This is more expensive than pmuldq but still cheaper than scalarizing the whole thing.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207370 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-27 18:47:41 +00:00
Benjamin Kramer
55e03c1992 SelectionDAG: Aggressively fold shuffles of constant splats.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207352 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-27 11:41:06 +00:00
Benjamin Kramer
eb3430cfbd DAGCombiner: Simplify code a bit, make more transforms work with vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207338 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 23:09:49 +00:00
Benjamin Kramer
9f2c21871c X86: Lower SMUL_LOHI of v4i32 to pmuldq when SSE4.1 is available.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207318 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 14:12:19 +00:00
Benjamin Kramer
fb625eadf9 X86: Add patterns for MULHU/MULHS of v8i16 and v16i16.
This gets us pretty code for divs of i16 vectors. Turn the existing
intrinsics into the corresponding nodes.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207317 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 13:01:03 +00:00
Benjamin Kramer
aab6231cd9 DAGCombiner: Turn divs of vector splats into vectorized multiplications.
Otherwise the legalizer would just scalarize everything. Support for
mulhi in the targets isn't that great yet so on most targets we get
exactly the same scalarized output. Add a test for x86 vector udiv.

I had to disable the mulhi nodes on ARM because there aren't any patterns
for it. As far as I know ARM has instructions for getting the high part of
a multiply so this should be fixed.
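
A minimal sketch of the x86 case (hypothetical test): a udiv by a splat
constant is now expanded with a multiply-high ("magic number") sequence
instead of being scalarized into four divides:

    define <4 x i32> @div_by_splat7(<4 x i32> %x) {
      %r = udiv <4 x i32> %x, <i32 7, i32 7, i32 7, i32 7>
      ret <4 x i32> %r
    }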

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207315 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 12:06:28 +00:00
Michael Zolotukhin
abd7ca0706 Revert r206749 till a final decision about the intrinsics is made.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207313 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 09:56:41 +00:00
Juergen Ributzka
d10a9fb25a [DAG] During DAG legalization keep opaque constants even after expanding.
The included test case would return incorrect results, because the expansion
of a shift with a constant shift amount of 0 would generate undefined behavior.

This is because ExpandShiftByConstant assumes that all shifts by constants with
a value of 0 have already been optimized away. This doesn't happen for opaque
constants and usually this isn't a problem, because opaque constants won't take
this code path - they are not supposed to. In the case that the opaque constant
has to be expanded by the legalizer, the legalizer would drop the opaque flag.
In this case we hit the limitations of ExpandShiftByConstant and create incorrect
code.

This commit fixes the legalizer by not dropping the opaque flag when expanding
opaque constants and adding an assertion to ExpandShiftByConstant to catch this
not supported case in the future.

This fixes <rdar://problem/16718472>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207304 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 02:58:04 +00:00
Quentin Colombet
9e93e47b7f [X86] Implement TargetLowering::getScalingFactorCost hook.
Scaling factors are not free on X86 because every "complex" addressing mode
breaks the related instruction into 2 allocations instead of 1.

<rdar://problem/16730541>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207301 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 01:11:26 +00:00
Filipe Cabecinhas
31e75e673c Appease the almighty buildbots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207295 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 00:02:37 +00:00