Commit Graph

11476 Commits

Eric Christopher
6f125f52d3 Cache the Function dependent subtarget on the MachineFunction.
As preparation for removing the getSubtargetImpl() call from
TargetMachine, go ahead and flip the switch on caching the
function-dependent subtarget and remove the bare getSubtargetImpl call
from the X86 port. As part of this, add a few tests that show we
can generate code and assemble on X86 based on features/cpu on
the Function.
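
The new tests rely on IR of roughly this shape, where the CPU and feature set are attached to the function itself rather than to the TargetMachine (a minimal sketch; the function name and attribute values are illustrative):

;;
define void @uses_avx_features() #0 {
entry:
  ; codegen for this function consults the cached, function-dependent subtarget
  ret void
}

attributes #0 = { "target-cpu"="corei7-avx" "target-features"="+avx" }
;;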

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232879 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-21 03:13:10 +00:00
Sanjay Patel
39110ecd35 [X86] Prefer blendps over insertps codegen for one special case
With this patch, for this one exact case, we'll generate:

  blendps %xmm0, %xmm1, $1

instead of:

  insertps %xmm0, %xmm1, $0

If there's a memory operand available for load folding and we're
optimizing for size, we'll still generate the insertps.

The detailed performance data motivation for this may be found in D7866; 
in summary, blendps has 2-3x throughput vs. insertps on widely used chips.
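
As a hedged illustration, an IR reproducer of the affected shape (the function name is made up, and the patch covers only this narrow element-0-into-lane-0 case):

;;
define <4 x float> @insert_lane0(<4 x float> %a, <4 x float> %b) {
entry:
  ; element 0 of %b goes into lane 0 of %a; this can be either insertps $0
  ; or blendps $1, and the blend is now preferred unless optimizing for size
  ; with a foldable load
  %r = shufflevector <4 x float> %a, <4 x float> %b, <4 x i32> <i32 4, i32 1, i32 2, i32 3>
  ret <4 x float> %r
}
;;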

Differential Revision: http://reviews.llvm.org/D8332



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232850 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 21:19:52 +00:00
Benjamin Kramer
5155a78d18 X86: Make helper functions static. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232848 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 21:07:30 +00:00
Rafael Espindola
82759c6cac Reorganize the x86 ELF relocation selection logic.
The main differences are:

* Split in 32 and 64 bit functions.
* First switch on the Modifier so that we have only one non-fully-covered
  switch.
* Map the fixup kind first to an x86_64 (or i386) specific enum, to make
  it easy to handle cases like X86::reloc_riprel_4byte_movq_load.
* Switch on IsPCRel last, which reduces code duplication.

Fixes pr22308.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232837 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 19:48:54 +00:00
Simon Pilgrim
45f61bfec3 Stripped trailing whitespace. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232822 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 16:08:17 +00:00
Rafael Espindola
c2c5c09f1c Reduce indentation after return. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232814 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 14:33:25 +00:00
Rafael Espindola
6ee46c4389 Use early returns. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232813 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 14:23:46 +00:00
Rafael Espindola
c09121b93b Fold a llvm_unreachable into an assert. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232811 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 13:50:15 +00:00
Rafael Espindola
ddcc06d824 clang-format a function. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232810 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 13:47:40 +00:00
Sanjay Patel
2326d50776 move insert, extract, concat helper functions closer to related helper functions; NFCI
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232781 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 23:04:25 +00:00
Sanjay Patel
11d77223a5 [X86, AVX] use blends instead of insert128 with index 0
Another case of x86-specific shuffle strength reduction:
avoid generating insert*128 instructions with index 0 because
they are slower than their non-lane-changing blend equivalents.

Shuffle lowering already catches most of these cases, but
the zero vector case and some other paths such as in the
modified test in vector-shuffle-256-v32.ll were getting
through.
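
A sketch of the kind of shuffle involved (the mask and names here are illustrative; the concrete cases fixed are the zero-vector path and the tests in vector-shuffle-256-v32.ll):

;;
define <8 x float> @low_lane_from_b(<8 x float> %a, <8 x float> %b) {
entry:
  ; low 128-bit lane from %b, high lane from %a: a non-lane-changing blend
  ; (vblendps) is preferred over a vinsertf128 with index 0
  %r = shufflevector <8 x float> %a, <8 x float> %b, <8 x i32> <i32 8, i32 9, i32 10, i32 11, i32 4, i32 5, i32 6, i32 7>
  ret <8 x float> %r
}
;;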

Differential Revision: http://reviews.llvm.org/D8366


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232773 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 22:29:40 +00:00
Rafael Espindola
c7c4c36694 Split the object streamer callback in one per file format.
There are two main advantages to doing this:

* Targets that only need to handle one of the formats specially don't have
  to worry about the others. For example, x86 now only registers a
  constructor for the COFF streamer.

* Changes to the arguments passed to one format constructor will not impact
  the other formats.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232699 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 01:50:16 +00:00
Rafael Espindola
64d662ba93 two or more, use a for.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232688 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 23:15:49 +00:00
Simon Pilgrim
ab18d0e7cb [X86][SSE] Avoid scalarization of v2i64 vector shifts (REAPPLIED)
Fixed broken tests.

Differential Revision: http://reviews.llvm.org/D8416

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232682 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 22:18:51 +00:00
Eric Christopher
3932b367d7 Revert "[X86][SSE] Avoid scalarization of v2i64 vector shifts" as it
appears to have broken tests/bots.

This reverts commit r232660.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232670 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 21:01:00 +00:00
Simon Pilgrim
0ee70a1554 [X86][SSE] Avoid scalarization of v2i64 vector shifts
Currently v2i64 vector shifts (non-equal shift amounts) are scalarized, costing 4 x extract, 2 x x86-shifts and 2 x insert instructions - and it gets even more awkward on 32-bit targets.

This patch separately shifts the vector by both shift amounts and then shuffles the partial results back together, costing 2 x shuffles and 2 x sse-shift instructions (+ 2 movs on pre-AVX hardware).

Note - this patch only improves the SHL / LSHR logical shifts as only these are supported in SSE hardware.
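
For illustration, a v2i64 shift with non-equal amounts of the kind this patch improves (function name hypothetical):

;;
define <2 x i64> @shl_nonuniform(<2 x i64> %x) {
entry:
  ; two different shift counts, so no single PSLLQ count applies; the vector
  ; is now shifted by each amount and the partial results shuffled together
  %r = shl <2 x i64> %x, <i64 2, i64 7>
  ret <2 x i64> %r
}
;;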

Differential Revision: http://reviews.llvm.org/D8416

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232660 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 19:35:31 +00:00
Rafael Espindola
df600f8049 Handle X86::reloc_riprel_4byte in 32-bit mode.
We can get there with .code64.

Fixes pr22349.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232651 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 17:33:40 +00:00
Rafael Espindola
b80d90e9d0 Make EmitFunctionHeader a private helper.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232481 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-17 14:38:30 +00:00
Rafael Espindola
7b8bb89ecd Pass in a "const Triple &T" instead of a raw StringRef.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232429 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 22:29:29 +00:00
Rafael Espindola
63641de2e6 Remove unused argument. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232428 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 22:06:15 +00:00
David Blaikie
7610ba7d24 Fix uses of reserved identifiers starting with an underscore followed by an uppercase letter
This covers essentially all of llvm's headers and libs. There were one or two
weird cases I wasn't sure were worth or appropriate to fix.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232394 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 18:06:57 +00:00
Sanjay Patel
b8434f1cf5 fix comments to match code; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232385 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 15:38:48 +00:00
Rafael Espindola
8d8c155a61 Use the i8 immediate cmp instructions when possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232378 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 14:25:08 +00:00
Rafael Espindola
6fcd5b90ab Don't repeat names in comments and clang-format this function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232375 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 14:05:49 +00:00
Daniel Sanders
6d9e62f432 Make each target map all inline assembly memory constraints to InlineAsm::Constraint_m. NFC.
Summary:
This is done in each target instead of in target-independent code and is the
last non-functional change before targets begin to distinguish between
different memory constraints when selecting code for the ISD::INLINEASM
node.

Next, each target will individually move away from the idea that all
memory constraints behave like 'm'.
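
For illustration, an IR-level inline asm with a memory-constrained operand of the sort that reaches the ISD::INLINEASM node (a sketch; the asm string and names are arbitrary):

;;
define void @set_flag(i32* %p) {
entry:
  ; the indirect "=*m" operand becomes a Kind_Mem operand on the ISD::INLINEASM
  ; node; its constraint code is what the target hook now maps to an
  ; InlineAsm::Constraint_* ID (for now, always Constraint_m)
  call void asm sideeffect "movl $$1, $0", "=*m"(i32* %p)
  ret void
}
;;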

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D8173


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232373 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 13:13:41 +00:00
Gabor Horvath
1fc0a8da34 [llvm] Replacing asserts with static_asserts where appropriate
Summary:
This patch applies the suggestions of the clang-tidy misc-static-assert check.


Reviewers: alexfh

Reviewed By: alexfh

Subscribers: xazax.hun, llvm-commits

Differential Revision: http://reviews.llvm.org/D8343

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232366 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 09:53:42 +00:00
Simon Pilgrim
d6c5465667 Use SDValue bool check to tidyup some possible combines. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232331 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-15 19:47:42 +00:00
Simon Pilgrim
ec009464f2 Use SDValue bool check to tidyup some possible combines. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232325 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-15 17:21:35 +00:00
Rafael Espindola
89c84b0c83 Use add32ri8 and friends on fast isel.
This fixes pr22854.

The core issue on the bug is that there are multiple instructions that
print the same in assembly. In fact, there doesn't seem to be any
syntax for specifying that a constant that fits in 8 bits should use a 32-bit
immediate.

The attached patch changes fast isel to consider i16immSExt8,
i32immSExt8, and i64immSExt8. They were disabled because fastisel didn’t know
to call the predicate back in the day.
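
A trivial example of the kind of code this affects at -O0, where fast isel runs (function name made up):

;;
define i32 @add_small_imm(i32 %x) {
entry:
  ; 5 fits in a sign-extended 8-bit immediate, so fast isel can now select the
  ; add32ri8 form rather than the 4-byte-immediate form
  %r = add i32 %x, 5
  ret i32 %r
}
;;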

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232223 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-13 22:18:18 +00:00
Andrea Di Biagio
d288259ccd [X86][AVX] Fix wrong lowering of v4x64 shuffles into concat_vector plus extract_subvector nodes.
This patch fixes a bug in the shuffle lowering logic implemented by function
'lowerV2X128VectorShuffle'.

There are a few cases where function 'lowerV2X128VectorShuffle' wrongly expands a
shuffle of two v4X64 vectors into a CONCAT_VECTORS of two EXTRACT_SUBVECTOR
nodes. The problematic expansion only occurs when the shuffle mask M has an
'undef' element at position 2, and M is equivalent to mask <0,1,4,5>.
In that case, the algorithm propagates the wrong vector to one of the two
new EXTRACT_SUBVECTOR nodes.

Example:
;;
define <4 x double> @test(<4 x double> %A, <4 x double> %B) {
entry:
  %0 = shufflevector <4 x double> %A, <4 x double> %B, <4 x i32><i32 undef, i32 1, i32 undef, i32 5>
  ret <4 x double> %0
}
;;

Before this patch, llc (-mattr=+avx) generated:
  vinsertf128 $1, %xmm0, %ymm0, %ymm0

With this patch, llc correctly generates:
  vinsertf128 $1, %xmm1, %ymm0, %ymm0

Added test lower-vec-shuffle-bug.ll

Differential Revision: http://reviews.llvm.org/D8259


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232179 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-13 17:29:49 +00:00
Daniel Sanders
547ba56bd0 Recommit r232027 with PR22883 fixed: Add infrastructure for support of multiple memory constraints.
The operand flag word for ISD::INLINEASM nodes now contains a 15-bit
memory constraint ID when the operand kind is Kind_Mem. This constraint
ID is a numeric equivalent to the constraint code string and is converted
with a target specific hook in TargetLowering.

This patch maps all memory constraints to InlineAsm::Constraint_m so there
is no functional change at this point. It just proves that using these
previously unused bits in the encoding of the flag word doesn't break
anything.

The next patch will make each target preserve the current mapping of
everything to Constraint_m for itself while changing the target independent
implementation of the hook to return Constraint_Unknown appropriately. Each
target will then be adapted in separate patches to use appropriate
Constraint_* values.

PR22883 was caused by the matching operands copying the whole of the operand flags
for the matched operand. This included the constraint id which needed to be
replaced with the operand number. This has been fixed with a conversion
function. Following on from this, matching operands also used the operand
number as the constraint id. This has been fixed by looking up the matched
operand and taking it from there. 



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232165 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-13 12:45:09 +00:00
Sanjay Patel
cae9695fbb [X86, AVX2] Replace inserti128 and extracti128 intrinsics with generic shuffles
This should complete the job started in r231794 and continued in r232045:
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.

AVX2 introduced proper integer variants of the hacked integer insert/extract
C intrinsics that were created for this same functionality with AVX1.

This should complete the removal of insert/extract128 intrinsics.

The Clang precursor patch for this change was checked in at r232109.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232120 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 23:16:18 +00:00
Hal Finkel
8faeecead0 Revert "r232027 - Add infrastructure for support of multiple memory constraints"
This (r232027) has caused PR22883; so it seems those bits might be used by
something else after all. Reverting until we can figure out what else to do.

Original commit message:

The operand flag word for ISD::INLINEASM nodes now contains a 15-bit
memory constraint ID when the operand kind is Kind_Mem. This constraint
ID is a numeric equivalent to the constraint code string and is converted
with a target specific hook in TargetLowering.

This patch maps all memory constraints to InlineAsm::Constraint_m so there
is no functional change at this point. It just proves that using these
previously unused bits in the encoding of the flag word doesn't break anything.

The next patch will make each target preserve the current mapping of
everything to Constraint_m for itself while changing the target independent
implementation of the hook to return Constraint_Unknown appropriately. Each
target will then be adapted in separate patches to use appropriate Constraint_*
values.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232093 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 20:09:39 +00:00
Quentin Colombet
be45e0e669 [X86] Fix a regression introduced by r223641.
The permps and permd instructions have their operands swapped compared to the
intrinsic definition. Therefore, they do not fall into the INTR_TYPE_2OP
category.

I did not create a new category for those two, as they are the only ones AFAICT
in that case.

<rdar://problem/20108262>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232085 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 19:34:12 +00:00
Eric Christopher
f516a66bdd Remove the need to cache the subtarget in the X86 TargetRegisterInfo
classes. Use a Triple instead and simplify a lot of the querying
logic to use lookups on the Triple.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232071 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 17:54:19 +00:00
Andrea Di Biagio
be9322ae7c [X86] Fix wrong target specific combine on SETCC nodes.
Part of the folding logic implemented by function 'PerformISDSETCCCombine'
only worked under the assumption that the input condition code could only
be either SETNE or SETEQ.
Unfortunately that assumption was incorrect, and in some cases the algorithm
ended up incorrectly folding SETCC nodes.

The incorrect folding only affected SETCC dag nodes where:
 - one of the operands was a build_vector of all zeroes;
 - the other operand was a SIGN_EXTEND from a vector of MVT:i1 elements;
 - the condition code was neither SETNE nor SETEQ.

Example:
  (setcc (v4i32 (sign_extend v4i1:%A)), (v4i32 VectorOfAllZeroes), setge)

Before this patch, the entire dag node sequence from the example was
incorrectly folded to node %A.

With this patch, the dag node sequence is folded to a
  (xor %A, (v4i1 VectorOfAllOnes)).
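
In IR terms, a reproducer of roughly this shape exercises the affected path (a sketch in the spirit of the new test; names are illustrative):

;;
define <4 x i32> @setge_of_sext(<4 x i32> %a, <4 x i32> %b) {
entry:
  %cmp = icmp slt <4 x i32> %a, %b
  %sext = sext <4 x i1> %cmp to <4 x i32>
  ; a comparison other than EQ/NE of the sign-extended mask against zero
  %cmp2 = icmp sge <4 x i32> %sext, zeroinitializer
  %r = sext <4 x i1> %cmp2 to <4 x i32>
  ret <4 x i32> %r
}
;;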

Added test setcc-combine.ll.

Thanks to Greg Bedwell for spotting this issue.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232046 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 15:16:58 +00:00
Daniel Sanders
67f6425792 Add infrastructure for support of multiple memory constraints.
Summary:
The operand flag word for ISD::INLINEASM nodes now contains a 15-bit
memory constraint ID when the operand kind is Kind_Mem. This constraint
ID is a numeric equivalent to the constraint code string and is converted
with a target specific hook in TargetLowering.

This patch maps all memory constraints to InlineAsm::Constraint_m so there
is no functional change at this point. It just proves that using these
previously unused bits in the encoding of the flag word doesn't break anything.

The next patch will make each target preserve the current mapping of
everything to Constraint_m for itself while changing the target independent
implementation of the hook to return Constraint_Unknown appropriately. Each
target will then be adapted in separate patches to use appropriate Constraint_*
values.

Reviewers: hfinkel

Reviewed By: hfinkel

Subscribers: hfinkel, jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D8171


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232027 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 11:00:48 +00:00
Elena Demikhovsky
3209a40889 AVX-512: Added encoding tests for VPROR, VPROL instructions,
fixed opcode.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232018 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 07:28:41 +00:00
Eric Christopher
257ea92cdf Remove some unnecessary forward declarations and put a couple more
where they're supposed to reside.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232014 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 06:07:16 +00:00
Mehdi Amini
ceb9150268 Move the DataLayout to the generic TargetMachine, making it mandatory.
Summary:
I don't know why every single backend had to redeclare its own DataLayout.
There was a virtual getDataLayout() on the common base TargetMachine whose
default implementation returned nullptr. It was not clear from this that
we could assume at the call site that a DataLayout would be available for
each Target.

Now getDataLayout() is no longer virtual and returns a pointer to the
DataLayout member of the common base TargetMachine. I plan to turn it into
a reference in a future patch.

The only backend that didn't have a DataLayout previously was the CPPBackend.
It now initializes the default DataLayout. This commit is NFC for all the
other backends.

Test Plan: clang+llvm ninja check-all

Reviewers: echristo

Subscribers: jfb, jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D8243

From: Mehdi Amini <mehdi.amini@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231987 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 00:07:24 +00:00
Eric Christopher
85aa6fd741 Have getCallPreservedMask and getThisCallPreservedMask take a
MachineFunction argument so that we can grab subtarget specific
features off of it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231979 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-11 22:42:13 +00:00
Juergen Ributzka
9814f7b92c Add the "vbroadcasti128" instruction back.
This is a follow-up to r231182. This adds the "vbroadcasti128" instruction
back, but without the intrinsic mapping. Also add a test to check the
instruction encoding.

This is related to rdar://problem/18742778.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231945 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-11 17:29:03 +00:00
Derek Schuff
87e6561f34 Make NaCl's use of .init_array for static constructors match Linux
Summary:
The generic ELF TargetObjectFile defaults to .ctors, but Linux's
defaults to .init_array by calling InitializeELF with the value of
UseInitArray from TargetMachine. Make NaCl's behavior match.
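
For reference, a static constructor at the IR level looks roughly like the sketch below; with this change its registration is emitted into .init_array rather than .ctors on NaCl, matching Linux (the llvm.global_ctors element type shown is the three-field form of that era):

;;
@llvm.global_ctors = appending global [1 x { i32, void ()*, i8* }] [{ i32, void ()*, i8* } { i32 65535, void ()* @init, i8* null }]

define internal void @init() {
entry:
  ret void
}
;;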

Reviewers: jvoung
Differential Revision: http://reviews.llvm.org/D8240

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231934 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-11 16:16:09 +00:00
Elena Demikhovsky
13cc6f2b6e AVX-512: Added SKX forms of shift instructions.
Added rotation instructions, encoding only.
Added encoding tests for all these forms.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231916 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-11 10:25:42 +00:00
Eric Christopher
4ec858ec4b Have TargetRegisterInfo::getLargestLegalSuperClass take a
MachineFunction argument so that it can look up the subtarget
rather than using a cached one in some Targets.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231888 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-10 23:46:01 +00:00
Eric Christopher
57849e3bb4 Remove the use of the subtarget in MCCodeEmitter creation and
update all ports accordingly. Required a couple of small rewrites
in handling subtarget features during creation in PPC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231861 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-10 22:03:14 +00:00
Andrea Di Biagio
692f7382b5 [X86][AVX] Fix wrong lowering of VPERM2X128 nodes
There were cases where the backend computed a wrong permute mask for a VPERM2X128 node.

Example:
\code
define <8 x float> @foo(<8 x float> %a, <8 x float> %b) {
  %shuffle = shufflevector <8 x float> %a, <8 x float> %b, <8 x i32> <i32 undef, i32 undef, i32 6, i32 7, i32 undef, i32 undef, i32 6, i32 7>
  ret <8 x float> %shuffle
}
\endcode

Before this patch, llc (with -mattr=+avx) emitted the following vperm2f128:
  vperm2f128 $0, %ymm0, %ymm0, %ymm0  # ymm0 = ymm0[0,1,0,1]

With this patch, llc emits a vperm2f128 with a correct permute mask:
  vperm2f128 $17, %ymm0, %ymm0, %ymm0  # ymm0 = ymm0[2,3,2,3]

Differential Revision: http://reviews.llvm.org/D8119


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231601 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-08 16:28:47 +00:00
Simon Pilgrim
b8056be62c [DAGCombiner] Add a shuffle mask commutation helper function. NFCI.
We have an increasing number of cases where we are creating commuted shuffle masks - all implementing nearly the same code.

This patch adds a static helper function - ShuffleVectorSDNode::commuteMask() and replaces a number of cases to use it.

Differential Revision: http://reviews.llvm.org/D8139

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231581 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-07 22:33:11 +00:00
Benjamin Kramer
ed0266d8ee Make constant arrays that are passed to functions const.
In theory this allows the compiler to skip materializing the array on
the stack. In practice clang often fails to do that, but that's a
different story. NFC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231571 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-07 17:41:00 +00:00
Benjamin Kramer
75664a8213 X86: Roll repetitive code into a loop. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231565 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-07 15:06:16 +00:00