Commit Graph

12712 Commits

Tim Northover
848d812931 ARM & AArch64: teach LowerVSETCC that output type size may differ from input.
While various DAG combines try to guarantee that a vector SETCC
operation will have the same output size as input, there's nothing
intrinsic to either creation or LegalizeTypes that actually guarantees
it, so the function needs to be ready to handle a mismatch.

Fortunately this is easy enough, just extend or truncate the naturally
compared result.
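
A minimal sketch of that shape in SelectionDAG terms (a fragment with
hypothetical variable names, not the committed code):

  // Compare in the type the target naturally produces, then sign-extend
  // or truncate if the requested result type differs. SETCC results are
  // all-ones/all-zero masks, so sign extension preserves them.
  SDValue Cmp = DAG.getSetCC(DL, NaturalResultVT, LHS, RHS, CC);
  if (NaturalResultVT != VT)
    Cmp = DAG.getSExtOrTrunc(Cmp, DL, VT);
  return Cmp;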

I couldn't reproduce the failure in other backends that I know have
SIMD, so it's probably only an issue for these two due to shared
heritage.

Should fix PR21645.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228518 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 00:50:47 +00:00
Simon Pilgrim
437265ee96 [X86][AVX2] AVX2 integer stack folding tests.
This adds tests for the remaining AVX2 instructions that currently support memory folding.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228513 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 23:28:16 +00:00
Simon Pilgrim
2134ae7f38 [X86][AVX] Added missing stack folding support + test for vptest ymm instruction
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228509 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 21:44:06 +00:00
Simon Pilgrim
710e70bb70 [X86][SSE] Added missing stack folding tests for (v)mpsadbw instruction
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228506 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 21:20:11 +00:00
Simon Pilgrim
3281412d2a [X86] Force fp stack folding tests to keep to specific domain.
General boolean instructions (AND, ANDN, OR, XOR) need to use a specific domain instruction (and not just the default).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228495 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 16:14:55 +00:00
Simon Pilgrim
bf4a435d0a [X86][AVX2] More AVX2 integer stack folding tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228494 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 16:07:27 +00:00
David Majnemer
fdac306a12 MC: Emit COFF section flags in the "proper" order
COFF section flag strings are order-dependent:
  'rd' will make a read-write section because 'd' implies write
  'dr' will make a read-only section because 'r' disables write
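
A toy illustration of that order dependence (hypothetical flag names,
not the real MC parser): each character edits the running flag set, so
later characters win.

  enum : unsigned { SecRead = 1, SecWrite = 2, SecData = 4 };

  // "rd" yields read-write because 'd' re-enables write;
  // "dr" yields read-only because 'r' clears write afterwards.
  unsigned parseFlags(const char *Spec) {
    unsigned Flags = 0;
    for (; *Spec; ++Spec) {
      if (*Spec == 'd') Flags |= SecData | SecWrite;   // 'd' implies write
      if (*Spec == 'r') { Flags |= SecRead; Flags &= ~SecWrite; }
    }
    return Flags;
  }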

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228490 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 08:26:40 +00:00
Hal Finkel
05bd43dc6e [PowerPC] Handle loop predecessor invokes
If a loop predecessor has an invoke as its terminator, and the return value
from that invoke is used to determine the loop iteration space, then we can't
insert a computation based on that value in the loop predecessor prior to the
terminator (oops). If there's such an invoke, or just no predecessor for that
matter, insert a new loop preheader.
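
An illustrative C++ shape of the problem (hypothetical function names):
when the caller has cleanups to run, the trip-count call lowers to an
invoke, and an invoke terminates the loop-predecessor block, leaving no
room to insert the derived computation after it.

  extern int computeTripCount(); // may throw: can lower to an invoke
  extern void body(int);

  void run() {
    int n = computeTripCount(); // result defines the iteration space
    for (int i = 0; i < n; ++i)
      body(i);
  }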

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228488 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 07:32:58 +00:00
Hal Finkel
9ce4011708 [PowerPC] Fixup incomplete revert of test/CodeGen/PowerPC/tls-pic.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228467 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 23:30:06 +00:00
Simon Pilgrim
148482dd6b [X86][AVX2] Begun adding AVX2 integer stack folding tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228462 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 23:12:15 +00:00
Hal Finkel
9168f717c9 Revert "r227976 - [PowerPC] Yet another approach to __tls_get_addr" and related fixups
Unfortunately, even with the workaround of disabling the linker TLS
optimizations in Clang restored (which has already been done), this still
breaks self-hosting on my P7 machine (-O3 -DNDEBUG -mcpu=native).

Bill is currently working on an alternate implementation to address the TLS
issue in a way that also fully elides the linker bug (which, unfortunately,
this approach did not fully), so I'm reverting this now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228460 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 23:07:40 +00:00
Reid Kleckner
6dc42dd2da Don't dllexport declarations
Fixes PR22488

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228411 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 17:59:49 +00:00
Michel Danzer
7097d17da0 R600/SI: Amend a test to ensure WQM is enabled for LDS in pixel shaders
Reviewed-by: Tom Stellard <tom@stellard.net>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228374 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 02:51:29 +00:00
Michel Danzer
971f0f0071 R600/SI: Don't enable WQM for V_INTERP_* instructions v2
Doesn't seem necessary anymore. I think this was mostly compensating for
not enabling WQM for texture sampling instructions.

v2: Add test coverage
Reviewed-by: Tom Stellard <tom@stellard.net>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228373 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 02:51:25 +00:00
Michel Danzer
a7879dcf33 R600/SI: Also enable WQM for image opcodes which calculate LOD v3
If whole quad mode isn't enabled for these, the level of detail is
calculated incorrectly for pixels along diagonal triangle edges, causing
artifacts.

v2: Use a TSFlag instead of lots of switch cases
v3: Add test coverage

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88642
Reviewed-by: Tom Stellard <tom@stellard.net>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228372 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 02:51:20 +00:00
Matthias Braun
3fd0775f06 AArch64: Make test more robust.
Avoid the creation of select instructions which can result in different
scheduling of the selects.

I also added a bunch of additional volatile stores. Those avoid a
CodeGen problem (bug?) where normalizing and denormalizing the control
flow moves all shift instructions into the first block, where ISel can't
match them together with the cmps.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228362 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 23:52:14 +00:00
Matthias Braun
b8b2dff046 X86: Test cleanup
Use FileCheck, make it more consistent and do not rely on unoptimized
or(cmp,cmp) getting combined for max to be matched.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228361 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 23:52:12 +00:00
Colin LeMahieu
71166427a3 [Hexagon] Simplifying and formatting several patterns. Changing a multiply pattern to be expanded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228347 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 21:13:25 +00:00
Hal Finkel
b8a6712c27 [PowerPC] Prepare loops for pre-increment loads/stores
PowerPC supports pre-increment load/store instructions (except for Altivec/VSX
vector load/stores). Using these on embedded cores can be very important, but
most loops are not naturally set up to use them. We can often change that,
however, by placing loops into a non-canonical form. Generically, this means
transforming loops like this:

  for (int i = 0; i < n; ++i)
    array[i] = c;

to look like this:

  T *p = &array[-1];
  for (int i = 0; i < n; ++i)
    *++p = c;

The key point is that addresses accessed are pulled into dedicated PHIs and
"pre-decremented" in the loop preheader. This allows the use of pre-increment
load/store instructions without loop peeling.

A target-specific late IR-level pass (running post-LSR), PPCLoopPreIncPrep, is
introduced to perform this transformation. I've used this code out-of-tree for
generating code for the PPC A2 for over a year. Somewhat to my surprise,
running the test suite + externals on a P7 with this transformation enabled
showed no performance regressions, and one speedup:

External/SPEC/CINT2006/483.xalancbmk/483.xalancbmk
	-2.32514% +/- 1.03736%

So I'm going to enable it on everything for now. I was surprised by this
because, on the POWER cores, these pre-increment load/store instructions are
cracked (and, thus, harder to schedule effectively). But seeing no regressions,
and feeling that it is generally easier to split instructions apart late than
it is to combine them late, this might be the better approach regardless.

In the future, we might want to integrate this functionality into LSR (but
currently LSR does not create new PHI nodes, so (for that and other reasons)
significant work would need to be done).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228328 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:43:00 +00:00
Hal Finkel
885b67a5c3 [PowerPC] Generate pre-increment floating-point ld/st instructions
PowerPC supports pre-increment floating-point load/store instructions, both r+r
and r+i, and we had patterns for them, but they were not marked as legal. Mark
them as legal (and add a test case).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228327 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:42:53 +00:00
Ahmed Bougacha
ec35069525 [CodeGen] Add hook/combine to form vector extloads, enabled on X86.
The combine that forms extloads used to be disabled on vector types,
because "None of the supported targets knows how to perform load and
sign extend on vectors in one instruction."

That's not entirely true, since at least SSE4.1 X86 knows how to do
those sextloads/zextloads (with PMOVS/ZX).
But there are several aspects to getting this right.
First, vector extloads are controlled by a profitability callback.
For instance, on ARM, several instructions have folded extload forms,
so it's not always beneficial to create an extload node (and trying to
match extloads is a whole 'nother can of worms).

The interesting optimization enables folding of s/zextloads to illegal
(splittable) vector types, expanding them into smaller legal extloads.

It's not ideal (it introduces some legalization-like behavior in the
combine) but it's better than the obvious alternative: form illegal
extloads, and later try to split them up.  If you do that, you might
generate extloads that can't be split up, but have a valid ext+load
expansion.  At vector-op legalization time, it's too late to generate
this kind of code, so you end up forced to scalarize. It's better to
just avoid creating egregiously illegal nodes.

This optimization is enabled unconditionally on X86.

Note that the splitting combine is happy with "custom" extloads. As
is, this bypasses the actual custom lowering, and just unrolls the
extload. But from what I've seen, this is still much better than the
current custom lowering, which does some kind of unrolling at the end
anyway (see for instance load_sext_4i8_to_4i64 on SSE2, and the added
FIXME).
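
For reference, the kind of source pattern behind load_sext_4i8_to_4i64
(an illustrative C++ analogue, not the actual test):

  #include <cstdint>

  // Four i8 values loaded and sign-extended to i64: as a vector, a
  // sextload from v4i8 to the illegal v4i64, which the splitting combine
  // breaks into smaller legal extloads instead of scalarizing.
  void loadSext4i8To4i64(const int8_t *Src, int64_t *Dst) {
    for (int I = 0; I < 4; ++I)
      Dst[I] = Src[I];
  }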

Also note that the existing combine that forms extloads is now also
enabled on legal vectors.  This doesn't have a big effect on X86
(because sext+load is usually combined to sext_inreg+aextload).
On ARM it fires on some rare occasions; that's for a separate commit.

Differential Revision: http://reviews.llvm.org/D6904


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228325 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:31:02 +00:00
Andrew Trick
c4ae8cbc5d X86 ABI fix for return values > 24 bytes.
The return value's address must be returned in %rax, i.e., the callee
needs to copy the sret argument (%rdi) into the return value (%rax).

This probably won't manifest as a bug when the caller is LLVM-compiled
code. But it is an ABI guarantee and tools expect it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228321 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:09:05 +00:00
Tom Stellard
c7198528eb R600/SI: Fix bug in TTI loop unrolling preferences
We should be setting UnrollingPreferences::MaxCount to MAX_UINT instead
of UnrollingPreferences::Count.

Count is a 'forced unrolling factor', while MaxCount sets an upper
limit to the unrolling factor.

Setting Count to MAX_UINT was causing the loop in the testcase to be
unrolled 15 times, when it only had a maximum of 4 iterations.
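
A hedged sketch of the distinction (a fragment assuming the TTI hook
shape of the time):

  void getUnrollingPreferences(Loop *L, TTI::UnrollingPreferences &UP) {
    // Count forces a specific unroll factor; MaxCount only caps it,
    // letting the unroller stay bounded by the actual trip count.
    UP.MaxCount = UINT_MAX; // was (incorrectly): UP.Count = UINT_MAX;
  }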

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228303 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 15:32:18 +00:00
Tom Stellard
041211cd79 R600/SI: Fix bug from insertion of llvm.SI.end.cf into loop headers
The llvm.SI.end.cf intrinsic is used to mark the end of if-then blocks,
if-then-else blocks, and loops.  It is responsible for updating the
exec mask to re-enable threads that had been masked during the preceding
control flow block.  For example:

s_mov_b64 exec, 0x3                 ; Initial exec mask
s_mov_b64 s[0:1], exec              ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0   ; llvm.SI.if
do_stuff()
s_or_b64 exec, exec, s[0:1]         ; llvm.SI.end.cf

The bug fixed by this patch was one where the llvm.SI.end.cf intrinsic
was being inserted into the header of loops.  This would happen when
an if block terminated in a loop header and we would end up with
code like this:

s_mov_b64 exec, 0x3                 ; Initial exec mask
s_mov_b64 s[0:1], exec              ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0   ; llvm.SI.if
do_stuff()

LOOP:                       ; Start of loop header
s_or_b64 exec, exec, s[0:1] ; llvm.SI.end.cf <- BUG: The exec mask has the
                            ; same value at the beginning of each loop
                            ; iteration.
do_stuff();
s_cbranch_execnz LOOP

The fix is to create a new basic block before the loop and insert the
llvm.SI.end.cf there.  This way the exec mask is restored before the
start of the loop instead of at the beginning of each iteration.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228302 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 15:32:15 +00:00
Bill Schmidt
202b6045bf [PowerPC] Implement the vclz instructions for PWR8
Patch by Kit Barton.

Add the vector count leading zeros instruction for byte, halfword,
word, and doubleword sizes.  This is a fairly straightforward addition
after the changes made for vpopcnt:

 1. Add the correct definitions for the various instructions in
    PPCInstrAltivec.td
 2. Make the CTLZ operation legal on vector types when using P8Altivec
    in PPCISelLowering.cpp (see the sketch below)
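
Step 2 might look roughly like this (a sketch; the exact guard and
placement in PPCISelLowering.cpp are assumptions):

  if (Subtarget.hasP8Altivec()) {
    // vclzb/vclzh/vclzw/vclzd cover these types directly.
    setOperationAction(ISD::CTLZ, MVT::v16i8, Legal);
    setOperationAction(ISD::CTLZ, MVT::v8i16, Legal);
    setOperationAction(ISD::CTLZ, MVT::v4i32, Legal);
    setOperationAction(ISD::CTLZ, MVT::v2i64, Legal);
  }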

Test Plan

Created new test case in test/CodeGen/PowerPC/vec_clz.ll to check the
instructions are being generated when the CTLZ operation is used in
LLVM.

Check the encoding and decoding in test/MC/PowerPC/ppc_encoding_vmx.s
and test/Disassembler/PowerPC/ppc_encoding_vmx.txt respectively.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228301 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 15:24:47 +00:00
Bruno Cardoso Lopes
04715c9915 [X86][MMX] Handle i32->mmx conversion using movd
Implement a BITCAST dag combine to transform i32->mmx conversion patterns
into an X86-specific node (MMX_MOVW2D) and guarantee that moves between
i32 and x86mmx are better handled, i.e., don't use store-load to do the
conversion.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228293 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 13:23:07 +00:00
Bruno Cardoso Lopes
d4299719af [X86][MMX] Add several bitcast tests
Avoid regression in previously supported MMX code by adding different
combinations of tests which exercise MMX bitcasts. Small improvements
to these patterns should come next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228292 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 13:22:57 +00:00
Matt Arsenault
81eb6ca158 R600/SI: Fix i64 truncate to i1
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228273 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 06:05:13 +00:00
Ahmed Bougacha
a7f2cf45f3 [ARM] Use patterns instead of hardcoded regs in test. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228259 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 01:52:19 +00:00
Ahmed Bougacha
42ec3433ef [ARM] Make testcase more explicit. NFC.
The q8/d16 thing is silly; I'd be happy to hear about a better
way to write those tests where simple substitution isn't enough.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228258 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 01:45:28 +00:00
Tom Stellard
26bfda9dd3 R600/SI: Enable subreg liveness by default
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228228 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 23:14:18 +00:00
Rafael Espindola
e247dd2839 Don't try to make sections in comdats SHF_MERGE.
Parts of llvm were not expecting it and we wouldn't print
the entity size of the section.

Given what comdats are used for, having SHF_MERGE sections would be
just a small improvement, so just disable it for now.

Fixes pr22463.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228196 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 21:27:24 +00:00
Tom Stellard
89c96b1cd0 R600/SI: Expand misaligned 16-bit memory accesses
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228190 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 20:49:52 +00:00
Tom Stellard
fd4c349de2 R600/SI: Make more store operations legal
v2i32, i32, trunc i32 to i16, and trunc i32 to i8 stores are legal for
all address spaces.  We had marked them as custom in order to lower
them for the private address space, but this is no longer necessary.

This enables lowering of misaligned stores of these types in the
DAGLegalizer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228189 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 20:49:51 +00:00
Tom Stellard
056a34916a R600: Don't promote i64 stores to v2i32 during DAG legalization
We take care of this during instruction selection now.  This
fixes a potential infinite loop when lowering misaligned stores.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228188 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 20:49:49 +00:00
Bill Schmidt
b9fc61d031 Add missing test case from r228046
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228182 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 20:00:04 +00:00
Matthias Braun
a602c10686 MachineCSE: Clear dead-def flag on CSE.
In case CSE reuses a previously unused register the dead-def flag has to
be cleared on the def operand, as exposed by the arm64-cse.ll test.
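
The fix amounts to something like this at the point where CSE decides to
reuse an earlier def (a fragment; CSMI and DefIdx are hypothetical names
for the reused instruction and its def operand index):

  // Reusing a previously dead def gives the value a user again, so the
  // dead flag on that def operand must be cleared.
  MachineOperand &DefMO = CSMI->getOperand(DefIdx);
  if (DefMO.isDead())
    DefMO.setIsDead(false);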

This fixes PR22439 and the corresponding rdar://19694987

Differential Revision: http://reviews.llvm.org/D7395

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228178 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 19:35:16 +00:00
Michael Kuperstein
8f260e3084 Fixes a bug in vector load legalization that confused bits and bytes.
Differential Revision: http://reviews.llvm.org/D7400

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228168 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 18:54:01 +00:00
Colin LeMahieu
47d6e4d009 [Hexagon] Adding encoding information for absolute-reg mode stores. Xfailing a test until constant extenders are correctly put in the same packet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228158 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 17:52:06 +00:00
Zoran Jovanovic
8dc0ae6606 [mips][microMIPS] Implement CodeGen support for SW16 and LW16 instructions
Differential Revision: http://reviews.llvm.org/D6581


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228149 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 15:43:17 +00:00
Daniel Sanders
712010f655 [mips] Remove unused check prefix from tests. NFC.
Reviewers: vmedic

Reviewed By: vmedic

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7376

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228145 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 14:48:39 +00:00
Renato Golin
0966a4e370 Adding support to LLVM for targeting Cortex-A72
Currently, Cortex-A72 is modelled as a Cortex-A57 except the fp
load balancing pass isn't enabled for Cortex-A72 as it's not
profitable to have it enabled for this core.

Patch by Ranjeet Singh.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228140 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 13:31:29 +00:00
Chandler Carruth
b0589710cc [x86] Give movss and movsd execution domains in the x86 backend.
This associates movss and movsd with the packed single and packed double
execution domains (resp.). While this is largely cosmetic, as we now
don't have weird ping-pong-ing between single and double precision, it
is also useful because it keeps the domain fixing algorithm from seeing
domain breaks that don't actually exist. It will also be much more
important if we have an execution domain default other than packed
single, as that would cause us to mix movss and movsd with integer
vector code on a regular basis, a very bad mixture.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228135 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 10:58:53 +00:00
Chandler Carruth
886bbe2d76 [x86] Remove a low-value test that was just checking how we cleared
a register. We have lots of tests covering this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228133 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 10:47:34 +00:00
Chandler Carruth
424a198c30 [x86] Mechanically update a bunch of tests' check lines using the latest
version of the script.

Changes include:
- Using the VEX prefix
- Skipping more detail when we have useful shuffle comments to match
- Matching more shuffle comments that have been added to the printer
  (yay!)
- Matching the destination registers of some AVX instructions
- Stripping trailing whitespace that crept in
- Fixing indentation issues

Nothing interesting going on here. I'm just trying really hard to ensure
these changes don't show up in the diffs with actual changes to the
backend.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228132 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 10:46:53 +00:00
Renato Golin
ff01f89466 Reverting VLD1/VST1 base-updating/post-incrementing combining
This reverts patches 223862, 224198, 224203, and 224754, which were all
related to the vector load/store combining and were reverted/reapplied
a few times due to the same alignment problems we're seeing now.

Further tests, mainly self-hosting Clang, will be needed to reapply this
patch in the future.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228129 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 10:11:59 +00:00
Chandler Carruth
d16b9cd3d4 [x86] Include the destination register in the check-lines for AVX
instructions.

No actual change here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228127 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 09:18:27 +00:00
Chandler Carruth
82b686e611 [x86] Add some tests I missed in the prior commit to cover blends with
zero for v8i16 as well.

These exhibit the same domain badness, but also exhibit other weaknesses
in our blend lowering. More fixes to come.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228126 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 09:15:46 +00:00
Chandler Carruth
da681cc578 [x86] Start to introduce bit-masking based blend lowering.
This is the simplest form of bit-math based blending which only fires
when we are blending with zero and is relatively profitable. I've only
enabled this path on very specific lowering strategies. I'm planning to
widen its applicability in subsequent patches, but so far you'll notice
that even though we get fewer shufps instructions, we *still* do the bit
math in the FP execution port. I'm looking into why this is still
happening.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228124 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 09:06:05 +00:00
Chandler Carruth
6b1eacb0b5 [x86] Add tests for blends-with-zero on 4-element vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228122 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 09:05:58 +00:00
Bill Schmidt
89e8a17b4d [PowerPC] Handle 32-bit targets properly in PPCTLSDynamicCall.cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228116 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 05:51:56 +00:00
Chandler Carruth
786f55c1fb [x86] Refresh the checks of a number of tests using
update_llc_test_checks.py.

The exact format of the checks has changed over time. This includes
different indenting rules, new shuffle comments that have been added,
and more operand hiding behind regular expressions.

No functional change to the tests are expected here, but this will make
subsequent patches have a clean diff as they change shuffle lowering.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228097 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 00:58:42 +00:00
Chandler Carruth
18ee73e456 [x86] Switch to using the long '--check-prefix' form which the
update_llc_test_checks.py script uses, and refresh the checks in this
test.

No functionality changed here, just bringing this test up to work with
automated updates using the python script.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228096 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 00:58:40 +00:00
Chandler Carruth
877ac0a034 [x86] Port this test to use utils/update_llc_test_checks.py.
This will make it easy to update as I change some parts of the X86
backend, makes it more clear what instruction differences are
introduced, and I find it makes it a bit easier to read as well.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228095 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 00:58:37 +00:00
Sanjay Patel
f1ac92a3b9 improved CHECK
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228086 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 00:24:06 +00:00
Simon Pilgrim
3d04e48cb6 [X86][SSE] psrl(w/d/q) and psll(w/d/q) bit shifts for SSE2
Patch to match cases where shuffle masks can be reduced to bit shifts. Similar to byte shift shuffle matching from D5699.
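
An illustration of one such equivalence (illustrative C++ over one
64-bit chunk of 16-bit lanes, not part of the patch): shuffling every
element up one lane and filling with zero is exactly a psllq-style shift.

  #include <cstdint>

  // Element order {e0,e1,e2,e3} becomes {0,e0,e1,e2}: the per-chunk
  // shuffle mask <zero,0,1,2> equals shifting the chunk left 16 bits,
  // which is what psllq $16 does to each 64-bit lane of an XMM register.
  uint64_t shuffleAsShift(uint64_t Chunk) {
    return Chunk << 16;
  }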

Differential Revision: http://reviews.llvm.org/D6649

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228047 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:58:29 +00:00
Chandler Carruth
2e3524ec17 [x86] Add two truly horrific test cases for the new vector shuffle
lowering. I'm prepping patches to improve these, and this will let the
delta of those patches show the improvement. =]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228044 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:56:28 +00:00
Chandler Carruth
d5a61c2958 [x86] Update the indent and layout of some tests in this file. NFC
This is just to remove noise when using the update_llc_test_checks
script.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228043 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:56:24 +00:00
Marek Olsak
90eef42c8e R600/SI: Remove the -CHECK suffix from all FileCheck prefixes in LIT tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228040 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:53:27 +00:00
Marek Olsak
e1a8ca95be R600/SI: Fix B64 VALU shifts on VI
SI only has standard versions. VI only has REV versions.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228037 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:53:01 +00:00
Chandler Carruth
dc5e49a1c4 [x86] Tweak my update script to use test case function names starting
with 'stress' to indicate that the specific output isn't interesting and
relax them to only check the last instruction (a ret).

I've updated the one test case that really uses this to name the one
'stress_test' which was actually producing output we can directly check.
With this, the script doesn't introduce noise when run over the v16 test
file.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228033 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 21:26:45 +00:00
Colin LeMahieu
3c159ed1a0 [Hexagon] Converting XTYPE/SHIFT intrinsics. Cleaning out old intrinsic patterns and updating tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228026 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 20:40:52 +00:00
Simon Pilgrim
646722d55f [X86][SSE] Added general integer shuffle matching for MOVQ instruction
This patch adds general shuffle pattern matching for the MOVQ zero-extend instruction (copy lower 64 bits, zero upper) for all 128-bit integer vectors; it is added as a fallback test in lowerVectorShuffleAsZeroOrAnyExtend.
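
In scalar terms the matched pattern is just a zero extension of the low
half (illustrative C++, not part of the patch):

  #include <cstdint>

  struct V128 { uint64_t Lo, Hi; };

  // As a v2i64 shuffle this is mask <0, zero>: element 0 is copied,
  // element 1 is forced to zero -- exactly what movq xmm,xmm produces.
  V128 movqPattern(V128 V) {
    return {V.Lo, 0};
  }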

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228022 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 20:09:18 +00:00
Colin LeMahieu
861e105e61 [Hexagon] Updating XTYPE/PRED intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228019 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 19:43:59 +00:00
Colin LeMahieu
30f48c7dc4 [Hexagon] Updating XTYPE/PERM intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228015 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 19:36:59 +00:00
Simon Pilgrim
71a4e9522e [X86][AVX2] Enabled shuffle matching for the AVX2 zero extension (128bit -> 256bit) vpmovzx* instructions.
Differential Revision: http://reviews.llvm.org/D7251

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228014 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 19:34:09 +00:00
Rafael Espindola
f4e2998eda Fix typo in test/CodeGen/X86/sibcall.ll (pr22331).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228011 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 19:20:26 +00:00
Colin LeMahieu
6217146dce [Hexagon] Adding missing vector multiply instruction encodings. Converting multiply intrinsics and updating tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228010 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 19:15:11 +00:00
Sanjay Patel
9b4cc76745 Merge consecutive 16-byte loads into one 32-byte load (PR22329)
This patch detects consecutive vector loads using the existing 
EltsFromConsecutiveLoads() logic. This fixes:
http://llvm.org/bugs/show_bug.cgi?id=22329

This patch effectively reverts the tablegen additions of D6492 / 
http://reviews.llvm.org/rL224344 ...which in hindsight were a horrible hack.

The test cases that were added with that patch are simply modified to load
from varying offsets of a base pointer. These loads did not match the existing
tablegen patterns.

A happy side effect of doing this optimization earlier is that we can now fold
the load into a math op where possible; this is shown in some of the updated
checks in the test file.

Differential Revision: http://reviews.llvm.org/D7303



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228006 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 18:54:00 +00:00
Colin LeMahieu
936986d12d [Hexagon] Converting complex number intrinsics and adding tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227995 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 18:16:28 +00:00
Colin LeMahieu
a3a588d983 [Hexagon] Adding vector intrinsics for alu32/alu and xtype/alu.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227993 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 18:01:45 +00:00
Marek Olsak
a95296a86e R600/SI: Don't generate non-existent LSHL, LSHR, ASHR B32 variants on VI
This can happen when a REV instruction is commuted.

The trick is not to define the _vi versions of instructions, which has these
consequences:
- code generation will always fail if a pseudo cannot be lowered
  (very useful to catch bugs where an unsupported instruction somehow makes
   it to the printer)
- ability to query if a pseudo can be lowered, which is done in commuteOpcode
  to prevent REV from commuting to non-REV on VI

Tested-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227990 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 17:38:12 +00:00
Marek Olsak
b19dbd9eb3 R600/SI: Fix dependency between instruction writing M0 and S_SENDMSG on VI (v2)
This fixes a hang when using an empty geometry shader.

v2: - don't add s_nop when followed by s_waitcnt
    - cosmetic changes

Tested-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227986 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 17:37:52 +00:00
Sanjay Patel
3cf9267d4e Fix program crashes due to alignment exceptions generated for SSE memop instructions (PR22371).
r224330 introduced a bug by misinterpreting the "FeatureVectorUAMem" bit.
The commit log says that change did not affect anything, but that's not correct.
That change allowed SSE instructions to have unaligned mem operands folded into
math ops, and that's not allowed in the default specification for any SSE variant. 

The bug is exposed when compiling for an AVX-capable CPU that had this feature
flag but without enabling AVX codegen. Another mistake in r224330 was not adding
the feature flag to all AVX CPUs; the AMD chips were excluded.

This is part of the fix for PR22371 ( http://llvm.org/bugs/show_bug.cgi?id=22371 ).

This feature bit is SSE-specific, so I've renamed it to "FeatureSSEUnalignedMem".
Changed the existing test case for the feature bit to reflect the new name and
renamed the test file itself to better reflect the feature.
Added runs to fold-vex.ll to check for the failing codegen.

Note that the feature bit is not set by default on any CPU because it may require a
configuration register setting to enable the enhanced unaligned behavior.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227983 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 17:13:04 +00:00
Bill Schmidt
aeba87d6a6 Disable 32-bit tests in tls-pic.ll until they can be repaired
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227981 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 16:57:38 +00:00
Bill Schmidt
b32d6f455f Further revise too-restrictive test CodeGen/PowerPC/tls-pic.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227980 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 16:33:55 +00:00
Bill Schmidt
f336df5f3f Further revise too-restrictive test CodeGen/PowerPC/tls-pic.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227978 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 16:29:52 +00:00
Bill Schmidt
5114e12df0 Revise too-restrictive test CodeGen/PowerPC/tls-pic.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227977 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 16:24:05 +00:00
Bill Schmidt
1123a81009 [PowerPC] Yet another approach to __tls_get_addr
This patch is a third attempt to properly handle the local-dynamic and
global-dynamic TLS models.

In my original implementation, calls to __tls_get_addr were hidden
from view until the asm-printer phase, at which point the underlying
branch-and-link instruction was created with proper relocations.  This
mostly worked well, but I used some repellent techniques to ensure
that the TLS_GET_ADDR nodes at the SD and MI levels correctly received
input from GPR3 and produced output into GPR3.  This proved to work
badly in the presence of multiple TLS variable accesses, with the
copies to and from GPR3 being scheduled incorrectly and generally
creating havoc.

In r221703, I addressed that problem by representing the calls to
__tls_get_addr as true calls during instruction lowering.  This had
the advantage of removing all of the bad hacks and relying on the
existing call machinery to properly glue the copies in place. It
looked like this was going to be the right way to go.

However, as a side effect of the recent discovery of problems with
linker optimizations for TLS, we discovered cases of suboptimal code
generation with this strategy.  The problem comes when tls_get_addr is
called for the same address, and there is a resulting CSE
opportunity.  It turns out that in such cases MachineCSE will common
the addis/addi instructions that set up the input value to
tls_get_addr, but will not common the calls themselves.  MachineCSE
does not have any machinery to common idempotent calls.  This is
perfectly sensible, since presumably this would be done at the IR
level, and introducing calls in the back end isn't commonplace.  In
any case, we end up with two calls to __tls_get_addr when one would
suffice, and that isn't good.

I presumed that the original design would have allowed commoning of
the machine-specific nodes that hid the __tls_get_addr calls, so as
suggested by Ulrich Weigand, I went back to that design and cleaned it
up so that the copies were properly held together by glue
nodes.  However, it turned out that this didn't work either...the
presence of copies to physical registers kept the machine-specific
nodes from being commoned also.

All of which leads to the design presented here.  This is a return to
the original design, except that no attempt is made to introduce
copies to and from GPR3 during instruction lowering.  Virtual registers
are used until prior to register allocation.  At that point, a special
pass is run that identifies the machine-specific nodes that hide the
tls_get_addr calls and introduces the copies to and from GPR3 around
them.  The register allocator then coalesces these copies away.  With
this design, MachineCSE succeeds in commoning tls_get_addr calls where
possible, and we get nice optimal code generation (better than GCC at
the moment, which does not common these calls).

One additional problem must be dealt with:  After introducing the
mentions of the physical register GPR3, the aggressive anti-dependence
breaker sees opportunities to improve scheduling by selecting a
different register instead.  Flags must be used on the instruction
descriptions to tell the anti-dependence breaker to keep its hands in
its pockets.

One thing missing from the original design was recording a definition
of the link register on the GET_TLS_ADDR nodes.  Doing this was found
to be insufficient to force a stack frame to be created, which led to
looping behavior because two different LR values were stored at the
same address.  This appears to have been an oversight in
PPCFrameLowering::determineFrameLayout(), which is repaired here.

Because MustSaveLR() returns true for calls to __builtin_return_address,
this changed the expected behavior of
test/CodeGen/PowerPC/retaddr2.ll, which now stacks a frame but
formerly did not.  I've fixed the test case to reflect this.

There are existing TLS tests to catch regressions; the checks in
test/CodeGen/PowerPC/tls-store2.ll proved to be too restrictive in the
face of instruction scheduling with these changes, so I fixed that
up.

I've added a new test case based on the PrettyStackTrace module that
demonstrated the original problem. This checks that we get correct
code generation and that CSE of the calls to __tls_get_addr has taken
place.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227976 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 16:16:01 +00:00
Sanjay Patel
ec60318bf5 Improve test to actually check for a folded load.
This test was checking for lack of a "movaps" (an aligned load)
rather than a "movups" (an unaligned load). It also included
a store which complicated the checking.

Add specific CPU runs to prevent subtarget feature flag overrides
from inhibiting this optimization.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227972 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 15:37:18 +00:00
Bruno Cardoso Lopes
7df357f552 [X86][MMX] Improve transfer from mmx to i32
Improve EXTRACT_VECTOR_ELT DAG combine to catch conversion patterns
between x86mmx and i32 with more layers of indirection.

Before:
  movq2dq %mm0, %xmm0
  movd %xmm0, %eax
After:
  movd %mm0, %eax

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227969 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-03 14:46:49 +00:00
Alex Rosenberg
cba5c599e8 Revert part of r227437 as it was unnecessary. Thanks to echristo for
pointing this out.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227897 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 23:58:54 +00:00
Bruno Cardoso Lopes
d821e0a5cc [X86][MMX] Add tests for MMX extract element
LLVM ToT produces poor MMX code compared to 3.5. However, part of the previous
functionality can be achieved by using -x86-experimental-vector-widening-legalization.
Add tests to be sure we don't regress again.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227869 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 22:00:48 +00:00
Bruno Cardoso Lopes
12c944ba10 [X86][MMX] Cleanup shuffle, bitcast and insert element tests
- Merge MMX arg passing test files
- Merge MMX bitcast, insert elt and shuffle tests

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227867 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 21:56:11 +00:00
Tom Stellard
d73d1062fe R600/SI: 64-bit and larger memory access must be at least 4-byte aligned
This is true for SI only. CI+ supports unaligned memory accesses,
but this requires driver support, so for now we disallow unaligned
accesses for all GCN targets.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227822 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 18:02:28 +00:00
Tom Stellard
80e70ee18e R600/SI: Merge two test files
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227821 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 18:02:23 +00:00
Ahmed Bougacha
0f1a21bcb8 [AArch64] Prefer DUP/MOV ("CPY") to INS for vector_extract.
This avoids a partial false dependency on the previous content of
the upper lanes of the destination vector register.

Differential Revision: http://reviews.llvm.org/D7307


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227820 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 17:55:57 +00:00
Sanjay Patel
f766946abd fix typo
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227815 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 17:47:30 +00:00
Jan Wen Voung
1a63641597 Fix ARM peephole optimizeCompare to avoid optimizing unsigned cmp to 0.
Summary:
Previously it only avoided optimizing signed comparisons to 0.
Sometimes the DAGCombiner will optimize the unsigned comparisons
to 0 before it gets to the peephole pass, but sometimes it doesn't.
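
An illustrative C++ case of why the fold is wrong for unsigned
predicates (not taken from the test):

  #include <cstdio>

  // (a - b) > 0u must mean "a != b", since an unsigned value is positive
  // unless it is zero. Reusing the subtraction's flags with an unsigned
  // greater-than condition would test a >u b instead: for a=1, b=3 the
  // difference wraps to a huge nonzero value, so the branch must still
  // be taken even though a < b.
  void demo(unsigned a, unsigned b) {
    unsigned diff = a - b;
    if (diff > 0u)
      std::printf("a != b\n");
  }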

Fix for PR22373.

Test Plan: test/CodeGen/ARM/sub-cmp-peephole.ll

Reviewers: jfb, manmanren

Subscribers: aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D7274

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227809 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 16:56:50 +00:00
Hal Finkel
3bafb64914 [PowerPC] VSX stores don't also read
The VSX store instructions were also picking up an implicit "may read" from the
default pattern, which was an intrinsic (and we don't currently have a way of
specifying write-only intrinsics).

This was causing MI verification to fail for VSX spill restores.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227759 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 19:07:41 +00:00
Hal Finkel
ec716cecda [PowerPC] Better scheduling for isel on P7/P8
isel is actually a cracked instruction on the P7/P8, and must start a dispatch
group. The scheduling model should reflect this so that, where possible,
we don't bunch too many of them together.

Thanks to Bill Schmidt and Pat Haugen for helping to sort this out.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227758 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 17:52:16 +00:00
Michael Kuperstein
acd5f13c88 [X86] Convert esp-relative movs of function arguments to pushes, step 2
This moves the transformation introduced in r223757 into a separate MI pass.
This allows it to cover many more cases (not only cases where there must be a 
reserved call frame), and perform rudimentary call folding. It still doesn't 
have a heuristic, so it is enabled only for optsize/minsize, with stack 
alignment <= 8, where it ought to be a fairly clear win.

(Re-commit of r227728)

Differential Revision: http://reviews.llvm.org/D6789


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227752 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 16:56:04 +00:00
Michael Kuperstein
5b61b8f53c Revert r227728 due to bad line endings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227746 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 16:15:07 +00:00
Hal Finkel
8f5c829c1e [PowerPC] Make r2 allocatable on PPC64/ELF for some leaf functions
The TOC base pointer is passed in r2, and we normally reserve this register so
that we can depend on it being there. However, for leaf functions, and
specifically those leaf functions that don't do any TOC access of their own
(which is generally due to accessing the constant pool, using TLS, etc.),
we can treat r2 as an ordinary callee-saved register (it must be callee-saved
because, for local direct calls, the linker will not insert any save/restore
code).

The allocation order has been changed slightly for PPC64/ELF systems to put r2
at the end of the list (while leaving it near the beginning for Darwin systems
to prevent unnecessary output changes). While r2 is allocatable, using it still
requires spill/restore traffic, and thus comes at the end of the list.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227745 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 15:03:28 +00:00
Michael Kuperstein
59d9986259 [X86] Convert esp-relative movs of function arguments to pushes, step 2
This moves the transformation introduced in r223757 into a separate MI pass.
This allows it to cover many more cases (not only cases where there must be a 
reserved call frame), and perform rudimentary call folding. It still doesn't 
have a heuristic, so it is enabled only for optsize/minsize, with stack 
alignment <= 8, where it ought to be a fairly clear win.

Differential Revision: http://reviews.llvm.org/D6789

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227728 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 11:44:44 +00:00
Elena Demikhovsky
516052acd3 AVX2: Added 2 more tests for gather intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227718 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 08:52:15 +00:00
Jingyue Wu
f15b696b79 [NVPTX] Emit .pragma "nounroll" for loops marked with nounroll
Summary:
CUDA driver can unroll loops when jit-compiling PTX. To ensure that a
loop marked with llvm.loop.unroll.disable is not unrolled by the CUDA
driver, we need to emit .pragma "nounroll" at the header of that loop.

This patch also extracts getting unroll metadata from loop ID metadata
into a shared helper function.
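
A sketch of the check at emission time (the helper name reflects this
patch's shared helper, but its exact signature and the emission point
are assumptions):

  // If the loop's ID metadata carries llvm.loop.unroll.disable, ask the
  // CUDA driver's JIT not to unroll the loop either.
  if (MDNode *LoopID = L->getLoopID())
    if (GetUnrollMetadata(LoopID, "llvm.loop.unroll.disable"))
      O << ".pragma \"nounroll\";\n";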

Test Plan: test/CodeGen/NVPTX/nounroll.ll

Reviewers: eliben, meheff, jholewinski

Reviewed By: jholewinski

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D7041

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227703 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-01 02:27:45 +00:00
Matt Arsenault
9061eb6d2e R600/SI: Only select cvt_flr/cvt_rpi with no NaNs.
These have different behavior from cvt_i32_f32 on NaN.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227693 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-31 21:28:13 +00:00
Simon Pilgrim
982005c23e [X86][SSE] Shuffle mask decode support for zero extend, scalar float/double moves and integer load instructions
This patch adds shuffle mask decodes for integer zero extends (pmovzx** and movq xmm,xmm) and scalar float/double loads/moves (movss/movsd).

Also adds shuffle mask decodes for integer loads (movd/movq).

Differential Revision: http://reviews.llvm.org/D7228

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227688 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-31 14:09:36 +00:00
Saleem Abdulrasool
81674807cc ARM: support stack probe size on Windows on ARM
Now that -mstack-probe-size is piped through to the backend via the function
attribute as on Windows x86, honour the value to permit handling of non-default
values for stack probes.  This is needed for /Gs with the clang-cl driver or
-mstack-probe-size with the clang driver when targeting Windows on ARM.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227667 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-31 02:26:37 +00:00