Commit Graph

3205 Commits

Author SHA1 Message Date
Sanjay Patel
e53dbeb2ad [X86, AVX] improve insertion into zero element of 256-bit vector
This patch allows AVX blend instructions to handle insertion into the low
element of a 256-bit vector for the appropriate data types.

For f32, instead of:

   vblendps	$1, %xmm1, %xmm0, %xmm1 ## xmm1 = xmm1[0],xmm0[1,2,3]
   vblendps	$15, %ymm1, %ymm0, %ymm0 ## ymm0 = ymm1[0,1,2,3],ymm0[4,5,6,7]

we get:

   vblendps	$1, %ymm1, %ymm0, %ymm0 ## ymm0 = ymm1[0],ymm0[1,2,3,4,5,6,7]

For f64, instead of:

   vmovsd	%xmm1, %xmm0, %xmm1     ## xmm1 = xmm1[0],xmm0[1]
   vblendpd	$3, %ymm1, %ymm0, %ymm0 ## ymm0 = ymm1[0,1],ymm0[2,3]

we get:

   vblendpd	$1, %ymm1, %ymm0, %ymm0 ## ymm0 = ymm1[0],ymm0[1,2,3]

For the hardware-neglected integer data types, I left a TODO comment in the
code and added regression tests for a follow-on patch.
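
For reference, IR along these lines would exercise the f32 path shown above (the function name and exact shuffle mask are illustrative, not taken from the patch's tests):

  define <8 x float> @insert_low_f32(<8 x float> %a, <8 x float> %b) {
    %v = shufflevector <8 x float> %a, <8 x float> %b, <8 x i32> <i32 8, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
    ret <8 x float> %v
  }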

Differential Revision: http://reviews.llvm.org/D8609



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233199 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-25 17:36:01 +00:00
Sanjay Patel
fe76881930 [X86, AVX] recognize shufflevector with zero input as a vperm2 (PR22984)
vperm2x128 instructions have the special ability (aka free hardware capability)
to shuffle zero values into a vector.

This patch recognizes that type of shuffle and generates the appropriate
control byte.
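
A shuffle of this general shape - one input replaced by zeros, selecting whole 128-bit halves - is the kind that can map onto the zeroing form (illustrative sketch, not necessarily the PR's exact test):

  define <4 x double> @shuffle_v4f64_with_zero(<4 x double> %a) {
    %v = shufflevector <4 x double> %a, <4 x double> zeroinitializer, <4 x i32> <i32 2, i32 3, i32 4, i32 5>
    ret <4 x double> %v
  }

Here the low half of the result is the high lane of %a and the high half is all zeroes, which the vperm2f128 control byte can encode directly.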

https://llvm.org/bugs/show_bug.cgi?id=22984

Differential Revision: http://reviews.llvm.org/D8563



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233100 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-24 19:19:07 +00:00
Sanjay Patel
39110ecd35 [X86] Prefer blendps over insertps codegen for one special case
With this patch, for this one exact case, we'll generate:

  blendps %xmm0, %xmm1, $1

instead of:

  insertps %xmm0, %xmm1, $0

If there's a memory operand available for load folding and we're
optimizing for size, we'll still generate the insertps.

The detailed performance data motivation for this may be found in D7866; 
in summary, blendps has 2-3x throughput vs. insertps on widely used chips.
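
The "one exact case" is a shuffle that takes element 0 from one v4f32 source and the rest from the other; as an illustrative IR sketch (names and mask are assumptions, not the patch's test):

  define <4 x float> @insert_low_element(<4 x float> %a, <4 x float> %b) {
    %v = shufflevector <4 x float> %a, <4 x float> %b, <4 x i32> <i32 4, i32 1, i32 2, i32 3>
    ret <4 x float> %v
  }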

Differential Revision: http://reviews.llvm.org/D8332



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232850 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 21:19:52 +00:00
Simon Pilgrim
45f61bfec3 Stripped trailing whitespace. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232822 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 16:08:17 +00:00
Sanjay Patel
2326d50776 move insert, extract, concat helper functions closer to related helper functions; NFCI
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232781 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 23:04:25 +00:00
Sanjay Patel
11d77223a5 [X86, AVX] use blends instead of insert128 with index 0
Another case of x86-specific shuffle strength reduction:
avoid generating insert*128 instructions with index 0 because
they are slower than their non-lane-changing blend equivalents.

Shuffle lowering already catches most of these cases, but
the zero vector case and some other paths such as in the
modified test in vector-shuffle-256-v32.ll were getting
through.
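
One shape of shuffle that an index-0 insert*128 would previously cover is replacing the low 128-bit lane of one 256-bit vector with the low lane of another; a hedged IR sketch (illustrative name and mask):

  define <8 x float> @replace_low_lane(<8 x float> %a, <8 x float> %b) {
    %v = shufflevector <8 x float> %a, <8 x float> %b, <8 x i32> <i32 8, i32 9, i32 10, i32 11, i32 4, i32 5, i32 6, i32 7>
    ret <8 x float> %v
  }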

Differential Revision: http://reviews.llvm.org/D8366


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232773 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 22:29:40 +00:00
Simon Pilgrim
ab18d0e7cb [X86][SSE] Avoid scalarization of v2i64 vector shifts (REAPPLIED)
Fixed broken tests.

Differential Revision: http://reviews.llvm.org/D8416

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232682 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 22:18:51 +00:00
Eric Christopher
3932b367d7 Revert "[X86][SSE] Avoid scalarization of v2i64 vector shifts" as it
appears to have broken tests/bots.

This reverts commit r232660.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232670 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 21:01:00 +00:00
Simon Pilgrim
0ee70a1554 [X86][SSE] Avoid scalarization of v2i64 vector shifts
Currently v2i64 vector shifts (non-equal shift amounts) are scalarized, costing 4 x extract, 2 x x86-shifts and 2 x insert instructions - and it gets even more awkward on 32-bit targets.

This patch separately shifts the vector by both shift amounts and then shuffles the partial results back together, costing 2 x shuffles and 2 x sse-shifts instructions (+ 2 movs on pre-AVX hardware).

Note - this patch only improves the SHL / LSHR logical shifts as only these are supported in SSE hardware.
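
A minimal IR sketch of the affected pattern - a v2i64 logical shift by a non-constant (and potentially non-uniform) amount (function name is illustrative):

  define <2 x i64> @shl_v2i64(<2 x i64> %a, <2 x i64> %amt) {
    %r = shl <2 x i64> %a, %amt
    ret <2 x i64> %r
  }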

Differential Revision: http://reviews.llvm.org/D8416

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232660 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 19:35:31 +00:00
Sanjay Patel
b8434f1cf5 fix comments to match code; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232385 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 15:38:48 +00:00
Gabor Horvath
1fc0a8da34 [llvm] Replacing asserts with static_asserts where appropriate
Summary:
This patch consists of the suggestions of clang-tidy/misc-static-assert check.


Reviewers: alexfh

Reviewed By: alexfh

Subscribers: xazax.hun, llvm-commits

Differential Revision: http://reviews.llvm.org/D8343

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232366 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-16 09:53:42 +00:00
Simon Pilgrim
d6c5465667 Use SDValue bool check to tidyup some possible combines. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232331 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-15 19:47:42 +00:00
Simon Pilgrim
ec009464f2 Use SDValue bool check to tidyup some possible combines. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232325 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-15 17:21:35 +00:00
Andrea Di Biagio
d288259ccd [X86][AVX] Fix wrong lowering of v4x64 shuffles into concat_vector plus extract_subvector nodes.
This patch fixes a bug in the shuffle lowering logic implemented by function
'lowerV2X128VectorShuffle'.

There are a few cases where function 'lowerV2X128VectorShuffle' wrongly expands a
shuffle of two v4X64 vectors into a CONCAT_VECTORS of two EXTRACT_SUBVECTOR
nodes. The problematic expansion only occurs when the shuffle mask M has an
'undef' element at position 2, and M is equivalent to mask <0,1,4,5>.
In that case, the algorithm propagates the wrong vector to one of the two
new EXTRACT_SUBVECTOR nodes.

Example:
;;
define <4 x double> @test(<4 x double> %A, <4 x double> %B) {
entry:
  %0 = shufflevector <4 x double> %A, <4 x double> %B, <4 x i32><i32 undef, i32 1, i32 undef, i32 5>
  ret <4 x double> %0
}
;;

Before this patch, llc (-mattr=+avx) generated:
  vinsertf128 $1, %xmm0, %ymm0, %ymm0

With this patch, llc correctly generates:
  vinsertf128 $1, %xmm1, %ymm0, %ymm0

Added test lower-vec-shuffle-bug.ll

Differential Revision: http://reviews.llvm.org/D8259


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232179 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-13 17:29:49 +00:00
Quentin Colombet
be45e0e669 [X86] Fix a regression introduced by r223641.
The permps and permd instructions have their operands swapped compared to the
intrinsic definition. Therefore, they do not fall into the INTR_TYPE_2OP
category.

I did not create a new category for those two, as they are the only ones AFAICT
in that case.

<rdar://problem/20108262>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232085 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 19:34:12 +00:00
Andrea Di Biagio
be9322ae7c [X86] Fix wrong target specific combine on SETCC nodes.
Part of the folding logic implemented by function 'PerformISDSETCCCombine'
only worked under the assumption that the condition code in input could have
been either SETNE or SETEQ.
Unfortunately that assumption was incorrect, and in some cases the algorithm
ended up incorrectly folding SETCC nodes.

The incorrect folding only affected SETCC dag nodes where:
 - one of the operands was a build_vector of all zeroes;
 - the other operand was a SIGN_EXTEND from a vector of MVT:i1 elements;
 - the condition code was neither SETNE nor SETEQ.

Example:
  (setcc (v4i32 (sign_extend v4i1:%A)), (v4i32 VectorOfAllZeroes), setge)

Before this patch, the entire dag node sequence from the example was
incorrectly folded to node %A.

With this patch, the dag node sequence is folded to a
  (xor %A, (v4i1 VectorOfAllOnes)).
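
An IR-level reproduction of the same shape (illustrative only, not the added test):

  define <4 x i32> @setge_of_sext(<4 x i1> %a) {
    %s = sext <4 x i1> %a to <4 x i32>
    %c = icmp sge <4 x i32> %s, zeroinitializer
    %r = sext <4 x i1> %c to <4 x i32>
    ret <4 x i32> %r
  }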

Added test setcc-combine.ll.

Thanks to Greg Bedwell for spotting this issue.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232046 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-12 15:16:58 +00:00
Eric Christopher
85aa6fd741 Have getCallPreservedMask and getThisCallPreservedMask take a
MachineFunction argument so that we can grab subtarget specific
features off of it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231979 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-11 22:42:13 +00:00
Andrea Di Biagio
692f7382b5 [X86][AVX] Fix wrong lowering of VPERM2X128 nodes
There were cases where the backend computed a wrong permute mask for a VPERM2X128 node.

Example:
\code
define <8 x float> @foo(<8 x float> %a, <8 x float> %b) {
  %shuffle = shufflevector <8 x float> %a, <8 x float> %b, <8 x i32> <i32 undef, i32 undef, i32 6, i32 7, i32 undef, i32 undef, i32 6, i32 7>
  ret <8 x float> %shuffle
}
\endcode

Before this patch, llc (with -mattr=+avx) emitted the following vperm2f128:
  vperm2f128 $0, %ymm0, %ymm0, %ymm0  # ymm0 = ymm0[0,1,0,1]

With this patch, llc emits a vperm2f128 with a correct permute mask:
  vperm2f128 $17, %ymm0, %ymm0, %ymm0  # ymm0 = ymm0[2,3,2,3]

Differential Revision: http://reviews.llvm.org/D8119


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231601 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-08 16:28:47 +00:00
Simon Pilgrim
b8056be62c [DAGCombiner] Add a shuffle mask commutation helper function. NFCI.
We have an increasing number of cases where we are creating commuted shuffle masks - all implementing nearly the same code.

This patch adds a static helper function - ShuffleVectorSDNode::commuteMask() and replaces a number of cases to use it.

Differential Revision: http://reviews.llvm.org/D8139

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231581 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-07 22:33:11 +00:00
Benjamin Kramer
75664a8213 X86: Roll repetitive code into a loop. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231565 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-07 15:06:16 +00:00
Eric Christopher
b0b21de627 Typo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231547 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-07 01:39:09 +00:00
Sanjay Patel
5f79fd2f02 [AVX] Lower / fast-isel scalar FP selects into VBLENDV instructions (PR22483)
This patch reduces code size for all AVX targets and increases speed for some chips.

SSE 4.1 introduced the useless (see code comments) 2-register form of BLENDV and
only in the packed float/double flavors.

AVX subsequently made the instruction useful by adding a 4-register operand form.

So we just need to paper over the lack of scalar forms of this instruction, complicate
the code to choose float or double forms, and use blendv on scalars since all FP is in
xmm registers anyway.
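
A scalar select of the sort this affects might look like the following IR sketch (illustrative names, not one of the tests listed below):

  define float @select_f32(float %a, float %b, float %c, float %d) {
    %cmp = fcmp olt float %a, %b
    %sel = select i1 %cmp, float %c, float %d
    ret float %sel
  }

With AVX this can now go through a compare plus vblendvps rather than the and/andn/or logic sequence.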

This gives us an approximately 50% speed up for a blendv microbenchmark sequence
on SandyBridge and Haswell:
blendv : 29.73 cycles/iter
logic : 43.15 cycles/iter

No new test cases with this patch because:

1. fast-isel-select-sse.ll tests the positive side for regular X86 lowering and fast-isel
2. sse-minmax.ll and fp-select-cmp-and.ll confirm that we're not firing for scalar selects without AVX
3. fp-select-cmp-and.ll and logical-load-fold.ll confirm that we're not firing for scalar selects with constants.

http://llvm.org/bugs/show_bug.cgi?id=22483

Differential Revision: http://reviews.llvm.org/D8063



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231408 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-05 21:46:54 +00:00
Elena Demikhovsky
e670dc7848 AVX-512, SKX: Enabled masked_load/store operations for this target.
Added lowering for ISD::CONCAT_VECTORS and ISD::INSERT_SUBVECTOR for i1 vectors;
this is needed to pass all masked_memop.ll tests for SKX.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231371 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-05 15:11:35 +00:00
Craig Topper
62eaac6087 [X86] Use vmovss to handle inserting an element into index 0 of a v8f32 vector of zeros.
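
A minimal IR sketch of the described case (function name is illustrative):

  define <8 x float> @insert_into_zero_vector(float %s) {
    %v = insertelement <8 x float> zeroinitializer, float %s, i32 0
    ret <8 x float> %v
  }
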
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231354 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-05 06:38:42 +00:00
JF Bastien
81338a4890 Mutate TargetLowering::shouldExpandAtomicRMWInIR to specifically dictate how AtomicRMWInsts are expanded.
Summary:
In PNaCl, most atomic instructions have their own @llvm.nacl.atomic.* function; each one, with a few exceptions, represents a consistent behaviour across all NaCl-supported targets. Unfortunately, the atomic RMW operations nand, [u]min, and [u]max aren't directly represented by any such @llvm.nacl.atomic.* function. This patch refines shouldExpandAtomicRMWInIR in TargetLowering so that a future `Le32TargetLowering` class can selectively inform the caller how the target desires the atomic RMW instruction to be expanded (i.e. via load-linked/store-conditional for ARM/AArch64, via cmpxchg for X86/others?, or not at all for Mips), if at all.

This does not represent a behavioural change and as such no tests were added.
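
For context, one of the RMW operations in question, written as a minimal IR sketch (not a test from this patch):

  define i32 @rmw_nand(i32* %p, i32 %v) {
    %old = atomicrmw nand i32* %p, i32 %v seq_cst
    ret i32 %old
  }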

Patch by: Richard Diamond.

Reviewers: jfb

Reviewed By: jfb

Subscribers: jfb, aemerson, t.p.northover, llvm-commits

Differential Revision: http://reviews.llvm.org/D7713

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231250 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-04 15:47:57 +00:00
Sanjay Patel
d885b861e6 use bool operator shortcut; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231123 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-03 20:41:27 +00:00
Ahmed Bougacha
14593eb417 [X86] Special-case 2x CMOV when custom-inserting.
This lets us avoid a few copies that are otherwise hard to get rid of.
The way this is done is that the custom inserter looks at the following
instruction for another CMOV and replaces both at the same time.
A previous version used a new CMOV2 opcode, but the custom inserter
is expected to be able to return a different basic block anyway, which
means it's OK - though far from ideal - to alter that block's contents.
Explicitly document that, in case it ever makes a difference.
Alternatives welcome!
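
A hypothetical source pattern that can end up as back-to-back CMOV pseudos (whether they are actually adjacent depends on scheduling; names are illustrative):

  define i32 @two_selects(i32 %a, i32 %b, i32 %x, i32 %y) {
    %c = icmp slt i32 %a, %b
    %s1 = select i1 %c, i32 %x, i32 %y
    %s2 = select i1 %c, i32 %y, i32 %x
    %r = add i32 %s1, %s2
    ret i32 %r
  }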

Follow-up to r231045.

rdar://19767934
Closes http://reviews.llvm.org/D8019


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231046 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-03 01:21:16 +00:00
Ahmed Bougacha
8b5527deef [X86] Combine (cmov (and/or (setcc) (setcc))) into (cmov (cmov)).
Fold and/or of setcc's to double CMOV:

(CMOV F, T, ((cc1 | cc2) != 0)) -> (CMOV (CMOV F, T, cc1), T, cc2)
(CMOV F, T, ((cc1 & cc2) != 0)) -> (CMOV (CMOV T, F, !cc1), F, !cc2)

When we can't use the CMOV instruction, it might increase branch
mispredicts.  When we can, or when there is no mispredict, this
improves throughput and reduces register pressure.

These can't be caught by generic combines, because the pattern can
appear when legalizing some instructions (such as fcmp une).
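
IR along these lines would feed this combine (an illustrative sketch, not the patch's test):

  define i32 @select_of_or_setcc(i32 %a, i32 %b, i32 %x, i32 %y) {
    %c1 = icmp eq i32 %a, 0
    %c2 = icmp slt i32 %b, 0
    %cc = or i1 %c1, %c2
    %r = select i1 %cc, i32 %x, i32 %y
    ret i32 %r
  }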

rdar://19767934
http://reviews.llvm.org/D7634


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231045 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-03 01:09:14 +00:00
Benjamin Kramer
adad988089 X86: Replace variadic function with init list. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230911 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-01 21:47:40 +00:00
Benjamin Kramer
c255d35a10 ArrayRef: Remove the equals helper with many arguments.
With initializer lists there is a really neat idiomatic way to write
this, 'ArrayRef.equals({1, 2, 3, 4, 5})'. Remove the equal method which
always had a hard limit on the number of arguments. I considered
rewriting it with variadic templates but that's not really a good fit
for a function with homogeneous arguments.

'ArrayRef == {1, 2, 3, 4, 5}' would've been even more awesome, but C++11
doesn't allow init lists with binary operators.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230907 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-01 21:05:05 +00:00
Craig Topper
8df1c6ef09 [X86] Remove the blendpd/blendps/pblendw/pblendd intrinsics. They can be represented by shuffle_vector instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230860 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-28 19:33:17 +00:00
Benjamin Kramer
bac8d0ec70 Convert push_back loops into append calls.
No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230849 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-28 13:20:15 +00:00
Chandler Carruth
c4179ffed3 [x86] Run most of the rest of the shuffle combining over non-128-bit
vectors. This lets us fix the rest of the v16 lowering problems when
pshufb is clearly better.

We might still be able to improve some of the lowerings by enabling the
other combine-based rewriting to fire for non-128-bit vectors, but this
at least should remove any regressions from using the fancy v16i16
lowering strategy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230753 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-27 12:13:14 +00:00
Chandler Carruth
2d58cc5f1b [x86] Teach a bunch of the x86-specific shuffle combining to work with
256-bit vectors as well as 128-bit vectors. Fixes some of the redundant
shuffles for v16i16.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230752 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-27 11:45:13 +00:00
Chandler Carruth
8c71e440a2 [x86] Make the v8i16 clever single-input shuffle lowering usable for
repeated 128-bit lane shuffles of wider vector types and use it to lower
256-bit v16i16 vector shuffles where applicable.

This should let us perfectly lower the pattern of pshuflw and pshufhw
even for AVX2 256-bit patterns.
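
As an illustrative example, a v16i16 shuffle that repeats the same pshuflw pattern in both 128-bit lanes (mask and name are assumptions, not taken from the test suite):

  define <16 x i16> @repeated_pshuflw(<16 x i16> %a) {
    %v = shufflevector <16 x i16> %a, <16 x i16> undef, <16 x i32> <i32 3, i32 2, i32 1, i32 0, i32 4, i32 5, i32 6, i32 7, i32 11, i32 10, i32 9, i32 8, i32 12, i32 13, i32 14, i32 15>
    ret <16 x i16> %v
  }

Each lane reverses its low four elements and keeps the high four, i.e. the classic pshuflw shape repeated per lane.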

I've not added AVX-512 support, but it should be trivial for someone
working on that to wire up.

Note that currently this generates bad, long shuffle chains because we
don't combine 256-bit target shuffles. The subsequent patches will fix
that.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230751 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-27 11:33:46 +00:00
Chandler Carruth
b1961a3896 [x86] Make the single-input v8i16 lowering directly recurse rather than
going back through the entire vector shuffle lowering.

This is an important step to being able to re-use this logic.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230743 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-27 09:11:38 +00:00
Eric Christopher
acdd4442cb getRegForInlineAsmConstraint wants to use TargetRegisterInfo for
a lookup; pass that in rather than use a naked call to getSubtargetImpl.
This involved passing down and around either a TargetMachine or
TargetRegisterInfo. Update all callers/definitions around the targets
and SelectionDAG.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230699 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 22:38:43 +00:00
Chandler Carruth
b54c36fb4d [x86] Fix PR22706 where we would incorrectly try lower a v32i8 dynamic
blend as legal.

We made the same mistake in two different places. Whenever we are custom
lowering a v32i8 blend we need to check whether we are custom lowering
it only for constant conditions that can be shuffled, or whether we
actually have AVX2 and full dynamic blending support on bytes. Both are
fixed, with comments added to make it clear what is going on and a new
test case.
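
One way a truly dynamic v32i8 blend arises (illustrative sketch; the condition is only known at run time, so lowering it as a single byte-wise blend needs AVX2's vpblendvb):

  define <32 x i8> @dynamic_blend_v32i8(<32 x i8> %x, <32 x i8> %y, <32 x i8> %a, <32 x i8> %b) {
    %cond = icmp sgt <32 x i8> %x, %y
    %sel = select <32 x i1> %cond, <32 x i8> %a, <32 x i8> %b
    ret <32 x i8> %sel
  }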

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230695 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 22:15:34 +00:00
Chandler Carruth
205a9a3aec [x86] Restructure the comments and the conditions for handling
dynamic blends.

This makes it much more clear what is going on. The case we're handling
is that of dynamic conditions, and we're bailing when the nature of the
vector types and subtarget preclude lowering the dynamic condition
vselect as an actual blend.

No functionality changed here, but this will make a subsequent bug-fix
to this code much more clear.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230690 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 21:29:06 +00:00
Chandler Carruth
9b9d0fcfe9 [x86] Re-order the combines of select in the X86 backend. This doesn't
change functionality, but makes it more clear that the dynamic case and
the shuffle case don't overlap in any interesting way.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230689 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 21:21:36 +00:00
Chandler Carruth
34f88924e1 [x86] Add an assert to catch if we ever try to blend a v32i8 without
AVX2.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230688 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 21:18:20 +00:00
Reid Kleckner
783f7f989e Don't sibcall between SysV and Win64 convention functions
The shadow stack space expectations won't match.

Fixes PR22709.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230667 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 19:43:20 +00:00
Chandler Carruth
eabb1227f6 [x86] Sink the single-input v8i16 lowering code that is actually
formulaic into the top v8i16 lowering routine.

This makes the generalized lowering a completely general and single path
lowering which will allow generalizing it in turn for multiple 128-bit
lanes.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230623 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 11:00:40 +00:00
Chandler Carruth
177498a4e0 [x86] Remove a SimpleTy usage. No need for it here, we already have the
MVT.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230622 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 10:37:01 +00:00
Chandler Carruth
19c267aed1 [x86] Make the vector shuffle helpers order the SDLoc and MVT arguments.
This ordering matches that of DAG.getNode.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230617 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 08:19:24 +00:00
Eric Christopher
341f17d0f0 Remove a FIXME.
Explanation: This function is in TargetLowering because it uses
RegClassForVT which would need to be moved to TargetRegisterInfo
and would necessitate moving isTypeLegal over as well - a massive
change that would just require TargetLowering having a TargetRegisterInfo
class member that it would use.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230585 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 00:00:35 +00:00
Eric Christopher
a01bc6a59f Remove an argument-less call to getSubtargetImpl from TargetLoweringBase.
This required plumbing a TargetRegisterInfo through computeRegisterProperties
and into findRepresentativeClass which uses it for register class
iteration. This required passing a subtarget into a few target specific
initializations of TargetLowering.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230583 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 00:00:24 +00:00
Sanjay Patel
a90eb87f7e simplify control flow; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230342 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 16:26:02 +00:00
Sanjay Patel
269510242b fix typo in comment; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230338 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 16:11:05 +00:00
Elena Demikhovsky
fdafc8fd5e AVX-512: recommitted 229837 + bugfix + test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230223 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 15:12:31 +00:00