Commit Graph

4983 Commits

Adam Nemet
7cd893a985 [X86] Remove AVX1 vbroadcast intrinsics
The corresponding CFE patch replaces these intrinsics with vector initializers
in avxintrin.h.  This patch removes the LLVM intrinsics from the backend.

We now stop lowering at the X86ISD::VBROADCAST custom node rather than
lowering that further to the intrinsics.

The patch only changes VBROADCASTS* and leaves VBROADCAST[FI]128 to continue
to use intrinsics.  As explained in the CFE patch, the reason is that we
currently don't generate as good code for them without the intrinsics.

CodeGen/X86/avx-vbroadcast.ll already provides coverage for this change.  It
checks that for a series of insertelements we generate the appropriate
vbroadcast instruction.

Also verified that there was no assembly change in the test-suite before and
after this patch.
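For illustration, the kind of pattern that test exercises (a sketch, not the
exact test input): a chain of insertelements splatting a loaded scalar, which
selects to a vbroadcast under AVX.

define <4 x float> @splat_load(float* %p) {
  %f = load float* %p
  %v0 = insertelement <4 x float> undef, float %f, i32 0
  %v1 = insertelement <4 x float> %v0, float %f, i32 1
  %v2 = insertelement <4 x float> %v1, float %f, i32 2
  %v3 = insertelement <4 x float> %v2, float %f, i32 3
  ret <4 x float> %v3                ; AVX: vbroadcastss (%rdi), %xmm0
}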

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209864 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-29 23:35:36 +00:00
Filipe Cabecinhas
ade072c1a9 Added tests for shufflevector lowering to blend instrs.
These tests ensure that a change I will propose in clang works as
expected.

Summary:
Added tests for the generation of blend+immediate instructions from a
shufflevector.
These tests were proposed along with a patch that was dropped. I'm
committing the tests anyway to protect against possible regressions in
codegen.
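A representative pattern (a sketch of what the tests cover, not copied from
them): a two-source shuffle whose mask takes each lane from a fixed source,
which can lower to a blend with an immediate.

define <4 x float> @blend(<4 x float> %a, <4 x float> %b) {
  %s = shufflevector <4 x float> %a, <4 x float> %b,
                     <4 x i32> <i32 0, i32 5, i32 2, i32 7>
  ret <4 x float> %s                 ; SSE4.1: blendps $10, %xmm1, %xmm0
}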

Reviewers: nadav, bkramer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3600

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209853 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-29 22:04:42 +00:00
Michael J. Spencer
11ef9456a8 [x86] Fold extract_vector_elt of a load into the Load's address computation.
An address-only use of an extract element of a load can be simplified to a
load. Without this, the result of the extract element is spilled to the
stack so that an address is available.
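A sketch of the situation (hypothetical IR, not taken from the commit): the
extracted element is only used as an address, so the vector load plus extract
can be replaced by a scalar load folded into the address computation.

define i32 @load_through_lane(<4 x i32*>* %p) {
  %v = load <4 x i32*>* %p
  %a = extractelement <4 x i32*> %v, i32 2
  %r = load i32* %a                  ; the only use of %a is as an address
  ret i32 %r
}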

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209788 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-29 01:42:45 +00:00
Rafael Espindola
665d42accf [pr19844] Add thread local mode to aliases.
This matches gcc's behavior. It also seems natural given that aliases
carry other properties that govern how they are accessed (linkage,
visibility, dll storage).

Clang still has to be updated to expose this feature to C.
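A sketch of what this allows in the IR (hypothetical example, assuming the
alias takes the same thread_local spelling as global variables):

@a = internal thread_local global i32 42
@b = thread_local alias i32* @a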

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209759 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-28 18:15:43 +00:00
Filipe Cabecinhas
c5f611404c Convert some X86 blendv* intrinsics into IR.
Summary:
Implemented an InstCombine transformation that takes a blendv* intrinsic
call and translates it into an IR select, if the mask is constant.

This will eventually get lowered into blends with immediates if possible,
or pblendvb (with an option to further optimize if we can transform the
pblendvb into a blend+immediate instruction, depending on the selector).
It will also enable optimizations by the IR passes, which currently give
up at the sight of the intrinsic.

Both the transformation and the lowering of its result to asm got shiny
new tests.

The transformation is a bit convoluted because of blendvp[sd]'s
definition:

Its mask is a floating point value! This forces us to convert it and get
the highest bit. I suppose this happened because the mask has type
__m128 in Intel's intrinsic and v4sf (for blendps) in gcc's builtin.
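A sketch of the transformation (illustrative constants, not from the new
tests): with a constant mask, only the sign bit of each element matters, so
the call becomes a select.

%r = call <4 x float> @llvm.x86.sse41.blendvps(<4 x float> %a,
         <4 x float> %b,
         <4 x float> <float -1.0, float 0.0, float -1.0, float 0.0>)
; the sign bit is set in lanes 0 and 2, so this becomes:
%r = select <4 x i1> <i1 true, i1 false, i1 true, i1 false>,
            <4 x float> %b, <4 x float> %a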

I will send an email to llvm-dev to discuss if we want to change this or
not.

Reviewers: grosbach, delena, nadav

Differential Revision: http://reviews.llvm.org/D3859

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209643 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-27 03:42:20 +00:00
Rafael Espindola
524b8836c2 Just check the entire string.
Thanks to David Blaikie for the suggestion.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209610 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-26 04:08:51 +00:00
Rafael Espindola
c385367909 Emit data or code export directives based on the type.
Currently we look at the Aliasee to decide what type of export
directive to use. It seems better to use the type of the alias
directly. This is similar to how we handle an alias that has the
same address as its aliasee but different attributes (linkage,
visibility).

With this patch it is now possible to do things like

target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-pc-windows-msvc"
@foo = global [6 x i8] c"\B8*\00\00\00\C3", section ".text", align 16
@f = dllexport alias i32 (), [6 x i8]* @foo
!llvm.module.flags = !{!0}
!0 = metadata !{i32 6, metadata !"Linker Options", metadata !1}
!1 = metadata !{metadata !2, metadata !3}
!2 = metadata !{metadata !"/DEFAULTLIB:libcmt.lib"}
!3 = metadata !{metadata !"/DEFAULTLIB:oldnames.lib"}

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209600 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-25 12:49:07 +00:00
Rafael Espindola
fba226fc52 Make these CHECKs a bit more strict.
The " at the end of the line makes sure we matched the entire directive.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209599 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-25 12:43:13 +00:00
Rafael Espindola
ef72e73da9 Use alias linkage and visibility to decide tls access mode.
This matches both what we do for the non-thread case and what gcc does.

With this patch clang would match gcc's behaviour in

static __thread int a = 42;
extern __thread int b __attribute__((alias("a")));
int *f(void) { return &a; }
int *g(void) { return &b; }

if not for pr19843. Manually writing the IL does produce the same access modes.

It is also a step in the direction of fixing pr19844.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209543 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-23 19:16:56 +00:00
Nico Rieck
b16514017b Remove unused CHECK lines
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209539 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-23 19:06:44 +00:00
Rafael Espindola
c3be377e36 Convert test to use FileCheck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209528 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-23 16:51:13 +00:00
Andrea Di Biagio
3957d4245f [X86] Improve the lowering of BITCAST from MVT::f64 to MVT::v4i16/MVT::v8i8.
This patch teaches the x86 backend how to efficiently lower ISD::BITCAST dag
nodes from MVT::f64 to MVT::v4i16 (and vice versa), and from MVT::f64 to
MVT::v8i8 (and vice versa).

This patch extends the logic from revision 208107 to also handle MVT::v4i16
and MVT::v8i8. Also, this patch correctly propagates Undef values when
performing the widening of a vector (example: when widening from v2i32 to
v4i32, the upper 64bits of the resulting vector are 'undef').
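A sketch of a dag that now lowers more cheaply (hypothetical function):

define <4 x i16> @cast(double %d) {
  %r = bitcast double %d to <4 x i16>
  ret <4 x i16> %r
}

This used to go through a stack store+load; it now becomes a SCALAR_TO_VECTOR
followed by a free bitcast, with the widened upper lanes left undef.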



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209451 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-22 16:21:39 +00:00
Tim Northover
de70176f5f Segmented stacks: omit __morestack call when there's no frame.
Patch by Florian Zeitz

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209436 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-22 13:03:43 +00:00
Quentin Colombet
fd0096a42c [X86] Fix a bug in the lowering of BLENDI introduced in r209043.
ISD::VSELECT mask uses 1 to identify the first argument and 0 to identify the
second argument.
On the other hand, BLENDI uses 0 to identify the first argument and 1 to
identify the second argument.
Fix the generation of the blend mask to account for this difference.
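Concretely (an illustrative mask, not from the tests):

; vselect <1,0,1,0>, %a, %b takes %a in lanes 0 and 2 and %b in lanes 1 and 3;
; the equivalent BLENDI of (%a, %b) must set immediate bits 1 and 3 instead,
; i.e. the immediate is the bitwise complement: $0xA rather than $0x5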

The bug did not show up with r209043, because we were not checking for the
actual arguments of the blend instruction!
This commit also fixes the test cases.

Note: The same mask works for the BLENDr variant because the arguments are
swapped during instruction selection (see the BLENDXXrr patterns).

<rdar://problem/16975435>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209324 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-21 22:00:39 +00:00
Eric Christopher
6a9366c0c6 Move the function and data section flags into the options struct and
make the functions to set them non-static.
Move and rename the llvm specific backend options to avoid conflicting
with the clang option.

Paired with a backend commit to update.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209238 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-20 21:25:34 +00:00
Quentin Colombet
50d4008b47 [LSR] Canonicalize reg1 + ... + regN into reg1 + ... + 1*regN.
This commit introduces a canonical representation for the formulae.
Basically, as soon as a formula has more than one base register, the scaled
register field is used for one of them. The register put into the scaled
register is preferably a loop variant.
The commit refactors how the formulae are built in order to produce such
representation.
This yields a more accurate, but still perfectible, cost model.

<rdar://problem/16731508>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209230 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-20 19:25:04 +00:00
Renato Golin
5d7afdb2ec Avoids DCE on write_register
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209222 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-20 17:40:03 +00:00
Benjamin Kramer
51af588366 Legalizer: Make bswap promotion safe for vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209202 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-20 09:42:31 +00:00
David Blaikie
c108a06c86 DebugInfo: Assume all subprogram DIEs have been created before any abstract subprograms are constructed.
Since we visit the whole list of subprograms for each CU at module
start, this is clearly true - don't test for the case, just assert it.

A few old test cases seemed to have incomplete subprogram lists, but any
attempt to reproduce them shows full subprogram lists that even include
entities that have been completely inlined and the out of line
definition removed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209178 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-19 23:16:19 +00:00
Andrea Di Biagio
8e4a223f7b [X86] Add ISel patterns to improve the selection of TZCNT and LZCNT.
The instructions TZCNT (requires BMI1) and LZCNT (requires LZCNT) always
produce the operand size as output if the input operand is zero.

We can take advantage of this knowledge during the instruction selection
stage in order to simplify a few corner cases.
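For example (a sketch): a cttz that must be defined at zero no longer needs a
guard, because tzcnt already returns the bit width there.

%c = call i32 @llvm.cttz.i32(i32 %x, i1 false)
; the 'i1 false' means the result is defined (as 32) for %x == 0, which is
; exactly tzcnt's behavior, so this can select to a lone tzcntl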



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209159 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-19 20:38:59 +00:00
Filipe Cabecinhas
ca162faee2 Added more insertps optimizations
Summary:
When inserting an element that's coming from a vector load or a broadcast
of a vector (or scalar) load, combine the load into the insertps
instruction.
Added PerformINSERTPSCombine for the case where we need to fix the load
(load of a vector + insertps with a non-zero CountS).
Added patterns for the broadcasts.

Also added tests for SSE4.1, AVX, and AVX2.
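A sketch of the load-folding case (illustrative function, not one of the new
tests):

define <4 x float> @insert_from_mem(<4 x float> %a, float* %p) {
  %f = load float* %p
  %v = insertelement <4 x float> %a, float %f, i32 1
  ret <4 x float> %v                 ; insertps $16, (%rdi), %xmm0
}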

Reviewers: delena, nadav, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3581

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209156 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-19 19:45:57 +00:00
Benjamin Kramer
bb81d9d5fa SDAG: Legalize vector BSWAP into a shuffle if the shuffle is legal but the bswap is not.
- On ARM/ARM64 we get a vrev because the shuffle matching code is really smart. We still unroll anything that's not v4i32 though.
- On X86 we get a pshufb with SSSE3. Required more cleverness in isShuffleMaskLegal.
- On PPC we get a vperm for v8i16 and v4i32. v2i64 is unrolled.
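For instance (a sketch): bswap on v4i32 is a fixed byte permutation, so with
SSSE3 it can become a single pshufb.

define <4 x i32> @bswap_v4i32(<4 x i32> %v) {
  %r = call <4 x i32> @llvm.bswap.v4i32(<4 x i32> %v)
  ret <4 x i32> %r
  ; SSSE3: pshufb with mask <3,2,1,0, 7,6,5,4, 11,10,9,8, 15,14,13,12>
}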

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209123 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-19 13:12:38 +00:00
Filipe Cabecinhas
cb596baadd Change the blend tests to AVX, not AVX2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209107 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-19 04:47:12 +00:00
Chandler Carruth
2ff4a49344 [x86] Fix a bad predicate I spotted by inspection -- pshufhw and pshuflw
were added in SSE2, not SSSE3. Found this while auditing all uses of
SSSE3 in the X86 target. I don't actually expect this to make
a significant difference on anything and I don't have any detailed test
cases but I updated the existing test cases that already covered some of
this code path.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209056 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-17 03:29:20 +00:00
Filipe Cabecinhas
d77c1c4465 Implemented special cases for PerformVSELECTCombine.
vselects with constant masks, after legalization, will get turned into
specialized shuffle_vectors so they can be matched to blend+imm
instructions.

Fixed some tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209044 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 22:47:54 +00:00
Filipe Cabecinhas
5ea7215050 Lower vselects into X86ISD::BLENDI when appropriate.
LowerVSELECT will, if possible, generate an X86ISD::BLENDI DAG node if the
condition is constant and we can emit that instruction, given the
subtarget.

This is not enough for all cases. An additional SELECTCombine optimization
will be committed.

Fixed tests that expected variable blends where a blend+imm can now be
generated.
Added a test where we can't emit blend+immediate.
Added avx2 blend+imm tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209043 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 22:47:49 +00:00
Rafael Espindola
27c076ae40 Fix most of PR10367.
This patch changes the design of GlobalAlias so that it doesn't take a
ConstantExpr anymore. It now points directly to a GlobalObject, but its type is
independent of the aliasee type.

To avoid changing all alias-related tests in this patch, I kept the common
syntax

@foo = alias i32* @bar

to mean the same as now. The cases that used to use cast now use the more
general syntax

@foo = alias i16, i32* @bar.

Note that GlobalAlias now behaves a bit more like GlobalVariable. We
know that its type is always a pointer, so we omit the '*'.

For the bitcode, a nice surprise is that we were writing both identical types
already, so the format change is minimal. Auto upgrade is handled by looking
through the casts and no new fields are needed for now. New bitcode will
simply have different types for Alias and Aliasee.

One last interesting point in the patch is that replaceAllUsesWith becomes
smart enough to avoid putting a ConstantExpr in the aliasee. This seems better
than checking and updating every caller.

A followup patch will delete getAliasedGlobal now that it is redundant. Another
patch will add support for an explicit offset.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209007 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 19:35:39 +00:00
David Blaikie
33b37c7b12 DebugInfo: Assume the CU's Subprogram list only contains definitions.
DIBuilder maintains this invariant and the current DwarfDebug code could
end up doing weird things if it contained declarations (such as putting
the definition DIE inside a CU that contained the declaration - this
doesn't seem like a good idea, so rather than adding logic to handle
this case we'll just ban it for now & cross that bridge if we come to
it later).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209004 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 18:26:53 +00:00
NAKAMURA Takumi
70e3aba855 llvm/test/CodeGen/X86/combine-sse41-intrinsics.ll: Add explicit triple.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208897 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-15 15:45:31 +00:00
Andrea Di Biagio
9836c47ea6 [X86] Teach the backend how to fold SSE4.1/AVX/AVX2 blend intrinsics.
Added target specific combine rules to fold blend intrinsics according
to the following rules:
 1) fold(blend A, A, Mask) -> A;
 2) fold(blend A, B, <allZeros>) -> A;
 3) fold(blend A, B, <allOnes>) -> B.

Added two new tests to verify that the new folding rules work for all
the optimized blend intrinsics.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208895 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-15 15:18:15 +00:00
Jay Foad
6b543713a2 Rename ComputeMaskedBits to computeKnownBits. "Masked" has been
inappropriate since it lost its Mask parameter in r154011.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208811 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-14 21:14:37 +00:00
Benjamin Kramer
202be06318 X86: If we have an instruction that sets a flag and a zero test on the input of that instruction try to eliminate the test.
For example
	tzcntl	%edi, %ebx
	testl %edi, %edi
	je	.label

can be rewritten into
	tzcntl	%edi, %ebx
	jb 	.label

A minor complication is that tzcnt sets CF instead of ZF when the input
is zero, so we have to rewrite users of the flags from ZF to CF. Currently
we recognize patterns using lzcnt, tzcnt and popcnt.

Differential Revision: http://reviews.llvm.org/D3454

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208788 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-14 16:14:45 +00:00
Joey Gouly
fdae312e65 [CGP] r205941 changed the logic so that a cast happens *before* 'Result' is
compared to 'AddrMode.BaseReg'. In the case that 'AddrMode.BaseReg' is
nullptr, 'Result' will also be nullptr, so the cast causes an assertion. We
should use dyn_cast_or_null here to check that 'Result' is not null and that
it is an instruction.

Bug found by Mats Petersson, and I reduced his IR to get a test case.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208705 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-13 15:42:45 +00:00
Reid Kleckner
17335ce80f Try to fix an SDAG dependence issue with sret
r208453 added support for having sret on the second parameter.  In that
change, the code for copying sret into a virtual register was hoisted
into the loop that lowers formal parameters.  This caused a "Wrong
topological sorting" assertion failure during scheduling when a
parameter is passed in memory.  This change undoes that by creating a
second loop that deals with sret.

I'm worried that this fix is incomplete.  I don't fully understand the
dependence issues.  However, with this change we produce the same DAGs
we used to produce, so if they are broken, they are just as broken as
they have always been.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208637 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 22:01:27 +00:00
Adam Nemet
45fc47013f [Test] Trim unnecessary .c and .cpp from config.suffix in lit.local.cfg
Tested by comparing make check VERBOSE=1 before and after to make sure
no tests are missed.  (VERBOSE=1 prints the list of tests.)

Only one test :( remains where .cpp is required:

tools/llvm-cov/range_based_for.cpp:// RUN: llvm-cov range_based_for.cpp | FileCheck %s --check-prefix=STDOUT

The topic was discussed in this thread:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140428/214905.html

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208621 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 19:57:31 +00:00
Tim Northover
d6cd0381f6 TableGen: use PrintMethods to print more aliases
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208607 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 18:04:06 +00:00
Benjamin Kramer
b31a977c9c X86: Make sure that we have SSE4.1 before we generate insertps nodes.
PR19721.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208552 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 13:12:08 +00:00
Elena Demikhovsky
1cec507d6d AVX-512: changes in intrinsics
1) Changed the gather and scatter intrinsics. They are now aligned with the GCC built-ins; there is no longer a non-masked form, and the masked intrinsic receives an all-ones mask (-1) when all lanes are executed.
2) Changed the function that handles intrinsics inside X86ISelLowering.cpp: I put all intrinsics into one table. I did this for the INTRINSICS_W_CHAIN set and plan to move all intrinsics from the WO_CHAIN set into the same table in order to avoid the very long "switch". (I wanted to use the static map initialization allowed by C++11, but I wasn't able to compile it on VS2012.)
3) I added gather/scatter prefetch intrinsics.
4) I fixed MRMm encoding for masked instructions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208522 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 07:18:51 +00:00
Filipe Cabecinhas
4ccf0ebb19 Fixed a bug when lowering build_vector (PR19694)
When lowering build_vector to an insertps, we would still lower it, even
if the source vectors weren't v4x32. This would break on AVX if the source
was a v8x32. We now check the type of the source vectors.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208487 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-11 08:12:56 +00:00
Reid Kleckner
d30c11edde Revert "[ms-cxxabi] Add a new calling convention that swaps 'this' and 'sret'"
This reverts commit r200561.

This calling convention was an attempt to match the MSVC C++ ABI for
methods that return structures by value.  This solution didn't scale,
because it would have required splitting every CC available on Windows
into two: one for methods and one for free functions.

Now that we can put sret on the second arg (r208453), and Clang does
that (r208458), revert this hack.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208459 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 22:56:42 +00:00
Reid Kleckner
805a83c041 Allow sret on the second parameter as well as the first
MSVC always places the implicit sret parameter after the implicit this
parameter of instance methods.  We used to handle this for
x86_thiscallcc by allocating the sret parameter on the stack and leaving
the this pointer in ecx, but that doesn't handle alternative calling
conventions like cdecl, stdcall, fastcall, or the win64 convention.

Instead, change the verifier to allow sret on the second parameter.
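A sketch of a declaration the verifier now accepts (hypothetical method):

%struct.S = type { i32, i32, i32 }
declare x86_thiscallcc void @method(i8*, %struct.S* sret)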

This also requires changing the Mips and X86 backends to return the
argument with the sret parameter, instead of assuming that the sret
parameter comes first.

The Sparc backend also returns sret parameters in a register, but I
wasn't able to update it to handle secondary sret parameters.  It
currently calls report_fatal_error if you feed it an sret in the second
parameter.

Reviewers: rafael.espindola, majnemer

Differential Revision: http://reviews.llvm.org/D3617

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208453 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 22:32:13 +00:00
Filipe Cabecinhas
e4a3254c02 Optimize shufflevector that copies an i64/f64 and zeros the rest.
Summary:
Also ran clang-format on the function. The code added is the last else
if block.
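A sketch of the pattern (illustrative function): copy the low element, zero
the rest.

define <2 x double> @copy_and_zero(<2 x double> %a) {
  %r = shufflevector <2 x double> %a, <2 x double> zeroinitializer,
                     <2 x i32> <i32 0, i32 2>
  ret <2 x double> %r                ; keeps %a's low f64, zeros the upper half
}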

Reviewers: nadav, craig.topper, delena

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3518

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208372 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 23:16:08 +00:00
Andrea Di Biagio
2360e51fd0 [X86] Add target specific combine rules to fold SSE2/AVX2 packed arithmetic shift intrinsics.
This patch teaches the backend how to combine packed SSE2/AVX2 arithmetic shift
intrinsics.

The rules are:
 - Always fold a packed arithmetic shift by zero to its first operand;
 - Convert a packed arithmetic shift intrinsic dag node into a ISD::SRA only if
   the shift count is known to be smaller than the vector element size.
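
A sketch of the first rule (illustrative, using an SSE2 arithmetic-shift
intrinsic):

%r = call <4 x i32> @llvm.x86.sse2.psrai.d(<4 x i32> %v, i32 0)
; the shift count is zero, so this folds to plain %v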

This patch also teaches the function 'getTargetVShiftByConstNode' how to
fold target specific vector shifts by zero.

Added two new tests to verify that the DAGCombiner is able to fold
sequences of SSE2/AVX2 packed arithmetic shift calls.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208342 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 17:44:04 +00:00
Filipe Cabecinhas
b19c087aa7 Lower certain build_vectors to insertps instructions
Summary:
Vectors built with zeros and elements in the same order as another
(source) vector are optimized to be built using a single insertps
instruction.
Also optimize when we move one element in a vector to a different place
in that vector while zeroing out some of the other elements.

Further optimizations are possible, described in TODO comments.
I will be implementing at least some of them in the near future.

Added some tests for different cases where this optimization triggers.

Reviewers: nadav, delena, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3521

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208271 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 00:25:16 +00:00
Quentin Colombet
57b4c5d473 [X86] Add a test case for r208252.
Prior to r208252, the FMA 231 family was marked as isCommutable. However, the
memory variants of this family are not commutable. Therefore, we did not
implement findCommutedOpIndices for those variants and missed that
the default implementation (more or less: commute indices 1 and 2) was
firing behind our back.
As a result, as demonstrated in the test case before the fix, we were
transforming a = b * c + a into a = a * c + b.

I.e., before r208252 we were generating for this test case:
vmovaps %xmm0, %xmm1
vmovss (%rsi), %xmm0
vfmadd231ss (%rdi), %xmm1, %xmm0

Instead of:
vmovss (%rsi), %xmm1
vfmadd231ss (%rdi), %xmm1, %xmm0

<rdar://problem/16800495> 


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208260 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 22:52:58 +00:00
Andrea Di Biagio
8a712ba229 [X86] Improve the lowering of BITCAST dag nodes from type f64 to type v2i32 (and vice versa).
Before this patch, the backend always emitted a store+load sequence to
bitconvert from f64 to i64 the input operand of an ISD::BITCAST dag node that
performed a bitconvert from type MVT::f64 to type MVT::v2i32. The resulting
i64 node was then used to build a v2i32 vector.

With this patch, the backend now produces a cheaper SCALAR_TO_VECTOR from
MVT::f64 to MVT::v2f64. That SCALAR_TO_VECTOR is then followed by a "free"
bitcast to type MVT::v4i32. The elements of the resulting
v4i32 are then extracted to build a v2i32 vector (which is illegal and
therefore promoted to MVT::v2i64).

This is in general cheaper than emitting a stack store+load sequence
to bitconvert the operand from type f64 to type i64.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208107 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 17:09:03 +00:00
Renato Golin
22f779d1fd Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
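
A sketch of the intrinsic in use (hypothetical function; the register name
and the era metadata syntax are assumptions):

declare i64 @llvm.read_register.i64(metadata)

define i64 @get_sp() {
  %sp = call i64 @llvm.read_register.i64(metadata !0)
  ret i64 %sp
}

!0 = metadata !{metadata !"sp"}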

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208104 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 16:51:25 +00:00
Reid Kleckner
9ad48c11b1 Fix i128 div/mod on mingw64
The Win64 docs are very clear that anything larger than 8 bytes is
passed by reference, and GCC MinGW64 honors that for __modti3 and
friends.

Patch by Jameson Nash!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208029 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 01:20:42 +00:00
Filipe Cabecinhas
75ea413a1b Revert "Optimize shufflevector that copies an i64/f64 and zeros the rest."
This reverts commit 207992. I misread the phab number on the LGTM.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207993 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-05 19:40:36 +00:00
Filipe Cabecinhas
a0fa9eb606 Optimize shufflevector that copies an i64/f64 and zeros the rest.
Summary:
Also ran clang-format on the function. The code added is the last else
if block.

Reviewers: nadav, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3518

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207992 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-05 19:36:28 +00:00