Commit Graph

10359 Commits

Tim Northover
812a174ba4 ARM64: add extra scalar neg pattern & tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205208 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 15:46:42 +00:00
Tim Northover
b35ffdff16 ARM64: add patterns for scalar sqdmlal & sqdmlsl.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205207 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 15:46:38 +00:00
Tim Northover
9542846832 ARM64: add more patterns for commuted fmsub operations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205206 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 15:46:34 +00:00
Tim Northover
4417e07e39 ARM64: shuffle patterns around for fmin/fmax & add tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205205 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 15:46:30 +00:00
Tim Northover
576c3f709f ARM64: add more scalar patterns for usqadd & suqadd.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205204 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 15:46:26 +00:00
Tim Northover
13277a78bc ARM64: add more scalar patterns for reciprocal ops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205203 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 15:46:22 +00:00
Tim Northover
8f93e159ed ARM64: add i64 scalar pattern for @llvm.arm64.abs
This will be used by the Clang front-end code for vabsd_s64.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205202 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 15:46:17 +00:00
Tom Stellard
1f143aa3e9 R600/SI: Lower i64 SELECT by bitcasting to a vector type
This allows us to replace ISD::EXTRACT_ELEMENT, which is lowered
using shifts, with ISD::EXTRACT_VECTOR_ELT, which is a no-op.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205187 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 14:01:55 +00:00
Zoran Jovanovic
077aa54e4e Fixed issue with microMIPS JAL instruction.
Differential Revision: http://llvm-reviews.chandlerc.com/D3200


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205185 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 14:00:10 +00:00
Hal Finkel
65fafbb109 Look at shuffles of build_vectors in DAGCombiner::visitEXTRACT_VECTOR_ELT
When the loop vectorizer vectorizes code that uses the loop induction variable,
we often end up with IR like this:

  %b1 = insertelement <2 x i32> undef, i32 %v, i32 0
  %b2 = shufflevector <2 x i32> %b1, <2 x i32> undef, <2 x i32> zeroinitializer
  %i = add <2 x i32> %b2, <i32 2, i32 3>

If the add in this example is not legal (as is the case on PPC with VSX), it
will be scalarized, and we'll end up with a number of extract_vector_elt nodes
with the vector shuffle as the input operand, and that vector shuffle is fed by
one or more build_vector nodes. By the time that vector operations are
expanded, visitEXTRACT_VECTOR_ELT will not create new extract_vector_elt by
looking through the vector shuffle (to make sure that no illegal operations are
created), and so the extract_vector_elt -> vector shuffle -> build_vector is
never simplified to an operand of the build vector.

By looking at build_vectors through a shuffle we fix this particular situation,
preventing a vector from being built, only to be deconstructed again (for the
scalarized add) -- an expensive proposition when this all needs to be done via
the stack. We probably want a more comprehensive fix here where we look back
recursively through any shuffles to any build_vectors or scalar_to_vectors,
etc. but that can come later.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205179 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 11:43:19 +00:00
Daniel Sanders
dd67fd636c [mips] Check emitted code for llvm.bswap.i32 on MIPS16/MIPS64 and llvm.bswap.i64 on MIPS16.
While reviewing r204163, I noticed that the MIPS16 test only checked for a .ent
directive and didn't actually check the code emitted. Fixed this and added a
check for llvm.bswap.i32 on MIPS64 at the same time.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205177 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 11:00:04 +00:00
Chandler Carruth
2530cd31f4 [ARM64] Fix materialization of an fp128 zero immediate. There currently
is not a pattern to lower this with clever instructions that zero the
register, so restrict the zero immediate legality special case to f64
and f32 (the only two sizes which fmov seems to directly support). Fixes
backend errors when building code such as libxml.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205161 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 00:02:10 +00:00
Hal Finkel
111bcf9b59 Make use of previously generated stores in SelectionDAGLegalize::ExpandExtractFromVectorThroughStack
When expanding EXTRACT_VECTOR_ELT and EXTRACT_SUBVECTOR using
SelectionDAGLegalize::ExpandExtractFromVectorThroughStack, we store the entire
vector and then load the piece we want. This is fine in isolation, but
generating a new store (and corresponding stack slot) for each extraction ends
up producing code of poor quality. When we scalarize a vector operation (using
SelectionDAG::UnrollVectorOp for example) we generate one EXTRACT_VECTOR_ELT
for each element in the vector. This used to generate one stored copy of the
vector for each element in the vector. Now we search the uses of the vector for
a suitable store before generating a new one, which results in much more
efficient scalarization code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205153 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-30 15:10:18 +00:00
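
An illustration of the change above (hypothetical IR, not taken from the
commit; the function and value names are made up): when a target expands these
extracts through the stack, IR like this produces one EXTRACT_VECTOR_ELT per
element, and the second extract can now reuse the store emitted for the first
instead of spilling the vector again. Whether the through-stack path is
actually taken depends on the target and the types involved.

  define i32 @sum_two_elements(<4 x i32> %v, i32 %i, i32 %j) {
    ; Variable indices are a typical reason extracts go through memory.
    %e0 = extractelement <4 x i32> %v, i32 %i
    %e1 = extractelement <4 x i32> %v, i32 %j
    %s = add i32 %e0, %e1
    ret i32 %s
  }
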
Hal Finkel
ee8e48d4c9 [PowerPC] Handle VSX v2i64 SIGN_EXTEND_INREG
sitofp from v2i32 to v2f64 ends up generating a SIGN_EXTEND_INREG v2i64 node
(and similarly for v2i16 and v2i8). Even though there are no sign-extension (or
algebraic shifts) for v2i64 types, we can handle v2i32 sign extensions by
converting to and from v2i64. The small trick necessary here is to shift the
i32 elements into the right lanes before the i32 -> f64 step. Because of the
big-endian nature of the system, we need the i32 portion in the high word of
the i64 elements.

For v2i16 and v2i8 we can do the same, but we first use the default Altivec
shift-based expansion from v2i16 or v2i8 to v2i32 (by casting to v4i32) and
then apply the above procedure.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205146 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-30 13:22:59 +00:00
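
For reference, a minimal IR input (an editorial sketch, not from the commit)
that produces the SIGN_EXTEND_INREG v2i64 node described above when compiled
for a VSX-enabled PowerPC target:

  define <2 x double> @v2i32_to_v2f64(<2 x i32> %a) {
    %r = sitofp <2 x i32> %a to <2 x double>
    ret <2 x double> %r
  }
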
NAKAMURA Takumi
a60d50cb9f Suppress llvm/test/CodeGen/ARM64 when targeting pecoff; ARM64 is unaware of it.
FIXME: Could we support them?

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205126 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-30 05:01:17 +00:00
Hal Finkel
7563821402 [PowerPC] Handle v2i64 comparisons
v2i64 is a legal type under VSX, however we don't have native vector
comparisons. We can handle eq/ne by casting it to an Altivec type, but
everything else must be expanded.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205106 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-29 16:04:40 +00:00
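
A hypothetical v2i64 comparison of the kind affected by the change above (the
sext is only there to give the i1 result a legal vector type; names are made
up):

  define <2 x i64> @cmp_eq_v2i64(<2 x i64> %a, <2 x i64> %b) {
    ; eq/ne can be handled via an Altivec cast; other predicates are expanded.
    %c = icmp eq <2 x i64> %a, %b
    %r = sext <2 x i1> %c to <2 x i64>
    ret <2 x i64> %r
  }
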
Tim Northover
7b837d8c75 ARM64: initial backend import
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.

Everything will be easier with the target in-tree though, hence this
commit.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205090 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-29 10:18:08 +00:00
Hal Finkel
44b2b9dc1a [PowerPC] Add subregister classes for f64 VSX values
We had stored both f64 values and v2f64, etc. values in the VSX registers. This
worked, but was suboptimal because we would always spill 16-byte values even
though we almost always had scalar 8-byte values. This resulted in an
increase in stack-size use, extra memory bandwidth, etc. To fix this, I've
added 64-bit subregisters of the Altivec registers, and combined those with the
existing scalar floating-point registers to form a class of VSX scalar
floating-point registers. The ABI code has also been enhanced to use this
register class and some other necessary improvements have been made.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205075 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-29 05:29:01 +00:00
Akira Hatanaka
b3cb36026a [x86] Fix printing of register operands with q modifier.
Emit 32-bit register names instead of 64-bit register names if the target does
not have 64-bit general purpose registers.

<rdar://problem/14653996>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205067 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-28 23:28:07 +00:00
David Majnemer
f3e8b0575d X86: Disable IsLegalToCallImmediateAddr for Win32
WinCOFF cannot form PC relative relocations to support absolute
MCValues.  We should reenable this once WinCOFF supports emission of
IMAGE_REL_I386_REL32 relocations.

This fixes PR19272.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205058 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-28 21:40:47 +00:00
Hal Finkel
0e11c017a9 [PowerPC] Fix VSX permutation isel
Not only did I invert the indices when I wrote the code, but I also did the
same thing when I wrote the regression test. Oops.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205046 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-28 20:24:55 +00:00
Hal Finkel
c9de9e60b9 [PowerPC] v2[fi]64 need to be explicitly passed in VSX registers
v2[fi]64 values need to be explicitly passed in VSX registers. This is because
the code in TRI that finds the minimal register class given a register and a
value type will assert if given an Altivec register and a non-Altivec type.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205041 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-28 19:58:11 +00:00
Hal Finkel
e2ee98ab16 [PowerPC] Use a small cleanup pass to remove VSX self copies
As explained in r204976, because of how the allocation of VSX registers
interacts with the call-lowering code, we sometimes end up generating self VSX
copies. Specifically, things like this:
  %VSL2<def> = COPY %F2, %VSL2<imp-use,kill>
(where %F2 is really a sub-register of %VSL2, and so this copy is a nop)

This adds a small cleanup pass to remove these prior to post-RA scheduling.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204980 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 23:12:31 +00:00
Hal Finkel
6bdc4ebedd [PowerPC] Fix v2f64 vector extract and related patterns
First, v2f64 vector extract had not been declared legal (and so the existing
patterns were not being used). Second, the patterns for that, and for
scalar_to_vector, should really be a regclass copy, not a subregister
operation, because the VSX registers directly hold both the vector and scalar data.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204971 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 22:22:48 +00:00
Hal Finkel
276d854549 [PowerPC] Expand v2i64 shifts
These operations need to be expanded during legalization so that isel does not
crash. In theory, we might be able to custom lower some of these. That,
however, would need to be follow-up work.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204963 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 21:26:33 +00:00
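
An editorial example (not from the commit) of a v2i64 shift that previously
crashed isel and is now expanded during legalization:

  define <2 x i64> @shl_v2i64(<2 x i64> %a, <2 x i64> %b) {
    %r = shl <2 x i64> %a, %b
    ret <2 x i64> %r
  }
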
Matt Arsenault
0c6d96cf16 R600: Implement isZExtFree.
This allows 64-bit operations that are truncated to be reduced
to 32-bit ones.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204946 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 17:23:31 +00:00
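
A hypothetical example of the pattern the commit above describes: a 64-bit
add whose result is only used truncated to 32 bits can now be reduced to a
32-bit add.

  define i32 @narrow_add(i64 %a, i64 %b) {
    %wide = add i64 %a, %b
    %narrow = trunc i64 %wide to i32
    ret i32 %narrow
  }
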
Matt Arsenault
94687c0f43 R600/SI: Fix unreachable with a sext_in_reg to an illegal type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204945 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 17:23:24 +00:00
Logan Chien
3b0efb2f89 [AArch64] Lower SHL_PARTS, SRA_PARTS and SRL_PARTS
Lower SHL_PARTS, SRA_PARTS and SRL_PARTS to perform 128-bit integer shifts.

Patch by GuanHong Liu.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204940 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 16:28:09 +00:00
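
As a sketch (not part of the patch), a variable i128 shift is the sort of
input that gets legalized through the SHL_PARTS/SRA_PARTS/SRL_PARTS nodes on
a 64-bit target:

  define i128 @shl_i128(i128 %a, i128 %amt) {
    %r = shl i128 %a, %amt
    ret i128 %r
  }
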
Rafael Espindola
f165cf7ce8 Prevent alias from pointing to weak aliases.
This adds back r204781.

Original message:

Aliases are just another name for a position in a file. As such, the
regular symbol resolutions are not applied. For example, given

define void @my_func() {
  ret void
}
@my_alias = alias weak void ()* @my_func
@my_alias2 = alias void ()* @my_alias

We produce without this patch:

        .weak   my_alias
my_alias = my_func
        .globl  my_alias2
my_alias2 = my_alias

That is, in the resulting ELF file my_alias, my_func and my_alias are
just 3 names pointing to offset 0 of .text. That is *not* the
semantics of IR linking. For example, linking in a

@my_alias = alias void ()* @other_func

would require the strong my_alias to override the weak one and
my_alias2 would end up pointing to other_func.

There is no way to represent that with aliases being just another
name, so the best solution seems to be to just disallow it, converting
a miscompile into an error.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204934 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 15:26:56 +00:00
Elena Demikhovsky
61785a0c3d AVX-512: Implemented masking for integer arithmetic & logic instructions.
By Robert Khasanov rob.khasanov@gmail.com


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204906 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 09:45:08 +00:00
Hal Finkel
ee5f4bb6b3 [PowerPC] Generate VSX permutations for v2[fi]64 vectors
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204873 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 22:58:37 +00:00
Ekaterina Romanova
899a605fc1 This is a fix for PR19051. I noticed codegen differences due to code motion when running tests with and without debug info at -O2. The problem is in branch folding: a loop was supposed to skip the debug info, but it did not actually do so.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204865 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 22:15:28 +00:00
Hal Finkel
6da0178737 [PowerPC] VSX loads and stores support unaligned access
I've not yet updated PPCTTI because I'm not sure what the actual relative cost
is compared to the aligned uses.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204848 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 19:39:09 +00:00
Hal Finkel
b397453155 [PowerPC] Use v2f64 <-> v2i64 VSX conversion instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204843 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 19:13:54 +00:00
Matt Arsenault
e0e503801f R600: Add a testcase for sext_in_reg I missed.
This sext_inreg i32 in i64 case was already handled, but not enabled.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204840 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 18:31:06 +00:00
Hal Finkel
c6940d4cb7 [PowerPC] Use VSX vector load/stores for v2[fi]64
These instructions have access to the complete VSX register file. In addition,
they "swap" the order of the elements so that element 0 (the scalar part) comes
first in memory and element 1 follows at a higher address.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204838 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 18:26:30 +00:00
Jim Grosbach
489752ddb4 Fix for incorrect address sinking in the presence of potential overflows.
In some cases it is possible for CGP to attempt to reuse a base address from
another basic block. In those cases we have to be sure that all the address
math was either done at the same bit width, or that none of it overflowed
before it was extended.

Patch by Louis Gerbarg <lgg@apple.com>

rdar://16307442

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204833 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 17:27:01 +00:00
Hans Wennborg
b82f8a28e8 Revert "X86 memcpy lowering: use "rep movs" even when esi is used as base pointer" (r204174)
>  For functions where esi is used as base pointer, we would previously fall back
>  from lowering memcpy with "rep movs" because that clobbers esi.
>
>  With this patch, we just store esi in another physical register, and restore
>  it afterwards. This adds a little bit of register pressure, but the more
>  efficient memcpy should be worth it.
>
>  Differential Revision: http://llvm-reviews.chandlerc.com/D2968

This didn't work. I was ending up with code like this:

  lea     edi,[esi+38h]
  mov     ecx,0Fh
  mov     edx,esi
  mov     esi,ebx
  rep movs dword ptr es:[edi],dword ptr [esi]
  lea     ecx,[esi+74h] <-- Ooops, we're now using esi before restoring it from edx.
  add     ebx,3Ch
  mov     esi,edx

I guess if we want to do this we need stronger glue or something, or to do the
expansion much later.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204829 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 16:30:54 +00:00
Hal Finkel
7363d2223e [PowerPC] Add v2i64 as a legal VSX type
v2i64 needs to be a legal VSX type because it is the SetCC result type from
v2f64 comparisons. We need to expand all non-arithmetic v2i64 operations.

This fixes the lowering for v2f64 VSELECT.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204828 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 16:12:58 +00:00
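
A hypothetical v2f64 vector select of the kind whose lowering this fixes; the
fcmp produces the v2i64 SetCC result mentioned above:

  define <2 x double> @vsel_v2f64(<2 x double> %a, <2 x double> %b) {
    %cmp = fcmp olt <2 x double> %a, %b
    %sel = select <2 x i1> %cmp, <2 x double> %a, <2 x double> %b
    ret <2 x double> %sel
  }
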
Christian Pirker
a634d0a570 AArch64_BE Elf support for MC-JIT runtime dynamic linker
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204816 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 14:57:32 +00:00
Christian Pirker
94708f1784 AArch64_BE function argument passing for ARM ABI
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204814 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 14:51:22 +00:00
Tim Northover
fc4fa22846 ARM: add intrinsics for the v8 ldaex/stlex
We've already got versions without the barriers, so this just adds IR-level
support for generating the new v8 ones.

rdar://problem/16227836

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204813 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 14:39:31 +00:00
Cameron McInally
4de1039403 Fix AVX512 Gather and Scatter execution domains.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204804 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 13:50:50 +00:00
Renato Golin
c4b058f9e7 Add @llvm.clear_cache builtin
Implementing the LLVM part of the call to __builtin___clear_cache
which translates into an intrinsic @llvm.clear_cache and is lowered
by each target, either to a call to __clear_cache or nothing at all
in case the caches are unified.

Updating LangRef and adding some tests for the implemented architectures.
Other archs will have to implement the method in case this builtin
has to be compiled for them, since the default behaviour is to bail out as
unimplemented.

A Clang patch is required for the builtin to be lowered into the
llvm intrinsic. This will be done next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204802 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 12:52:28 +00:00
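
A small IR sketch of calling the new intrinsic directly (editorial example;
the two i8* arguments are the start and end of the range to flush, mirroring
__builtin___clear_cache):

  declare void @llvm.clear_cache(i8*, i8*)

  define void @flush_range(i8* %begin, i8* %end) {
    call void @llvm.clear_cache(i8* %begin, i8* %end)
    ret void
  }
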
Hal Finkel
159e7f4095 [PowerPC] Lower VSELECT using xxsel when VSX is available
With VSX there is a real vector select instruction, and so we should use it.
Note that VSELECT will still scalarize for v2f64 because the corresponding
SetCC result type (v2i64) is not currently a legal type.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204801 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 12:49:28 +00:00
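
For illustration (not from the commit), a v4i32 vector select of the sort
that can now be lowered to xxsel when VSX is available:

  define <4 x i32> @vsel_v4i32(<4 x i32> %c, <4 x i32> %a, <4 x i32> %b) {
    %cond = icmp ne <4 x i32> %c, zeroinitializer
    %sel = select <4 x i1> %cond, <4 x i32> %a, <4 x i32> %b
    ret <4 x i32> %sel
  }
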
Rafael Espindola
72db10a995 Revert "Prevent alias from pointing to weak aliases."
This reverts commit r204781.

I will follow up with the msan folks to see what they were trying to do
with aliases to weak aliases.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204784 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 06:14:40 +00:00
Hal Finkel
360ee97179 [PowerPC] Generate logical vector VSX instructions
These instructions are essentially the same as their Altivec counterparts, but
have access to the larger VSX register file.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204782 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 04:55:40 +00:00
Rafael Espindola
33845aa8c4 Prevent alias from pointing to weak aliases.
Aliases are just another name for a position in a file. As such, the
regular symbol resolutions are not applied. For example, given

define void @my_func() {
  ret void
}
@my_alias = alias weak void ()* @my_func
@my_alias2 = alias void ()* @my_alias

We produce without this patch:

        .weak   my_alias
my_alias = my_func
        .globl  my_alias2
my_alias2 = my_alias

That is, in the resulting ELF file my_alias, my_func and my_alias are
just 3 names pointing to offset 0 of .text. That is *not* the
semantics of IR linking. For example, linking in a

@my_alias = alias void ()* @other_func

would require the strong my_alias to override the weak one and
my_alias2 would end up pointing to other_func.

There is no way to represent that with aliases being just another
name, so the best solution seems to be to just disallow it, converting
a miscompile into an error.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204781 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 04:48:47 +00:00
Quentin Colombet
596516bef8 [X86] Add broadcast instructions to the table used by ExeDepsFix pass.
Adds the different broadcast instructions to the ReplaceableInstrsAVX2 table.
That way the ExeDepsFix pass can make better decisions when AVX2 broadcasts
cross domains (int <-> float).

In particular, prior to this patch we were generating:
  vpbroadcastd  LCPI1_0(%rip), %ymm2
  vpand %ymm2, %ymm0, %ymm0
  vmaxps  %ymm1, %ymm0, %ymm0 ## <- domain change penalty

Now, we generate the following nice sequence where everything is in the float
domain:
  vbroadcastss  LCPI1_0(%rip), %ymm2
  vandps  %ymm2, %ymm0, %ymm0
  vmaxps  %ymm1, %ymm0, %ymm0

<rdar://problem/16354675>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204770 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 00:10:22 +00:00
Hal Finkel
6a0f060f64 [PowerPC] Select between VSX A-type and M-type FMA instructions just before RA
The VSX instruction set has two types of FMA instructions: A-type (where the
addend is taken from the output register) and M-type (where one of the product
operands is taken from the output register). This adds a small pass that runs
just after MI scheduling (and, thus, just before register allocation) that
mutates A-type instructions (that are created during isel) into M-type
instructions when:

 1. This will eliminate an otherwise-necessary copy of the addend

 2. One of the product operands is killed by the instruction

The "right" moment to make this decision is in between scheduling and register
allocation, because only there do we know whether or not one of the product
operands is killed by any particular instruction. Unfortunately, this also
makes the implementation somewhat complicated, because the MIs are not in SSA
form and we need to preserve the LiveIntervals analysis.

As a simple example, if we have:

%vreg5<def> = COPY %vreg9; VSLRC:%vreg5,%vreg9
%vreg5<def,tied1> = XSMADDADP %vreg5<tied0>, %vreg17, %vreg16,
                        %RM<imp-use>; VSLRC:%vreg5,%vreg17,%vreg16
  ...
  %vreg9<def,tied1> = XSMADDADP %vreg9<tied0>, %vreg17, %vreg19,
                        %RM<imp-use>; VSLRC:%vreg9,%vreg17,%vreg19
  ...

We can eliminate the copy by changing from the A-type to the
M-type instruction. This means:

  %vreg5<def,tied1> = XSMADDADP %vreg5<tied0>, %vreg17, %vreg16,
                        %RM<imp-use>; VSLRC:%vreg5,%vreg17,%vreg16

is replaced by:

  %vreg16<def,tied1> = XSMADDMDP %vreg16<tied0>, %vreg18, %vreg9,
                        %RM<imp-use>; VSLRC:%vreg16,%vreg18,%vreg9

and we remove: %vreg5<def> = COPY %vreg9; VSLRC:%vreg5,%vreg9

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204768 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-25 23:29:21 +00:00
Adam Nemet
6f4f46cf11 [X86] Generate VPSHUFB for in-place v16i16 shuffles
This used to resort to splitting the 256-bit operation into two 128-bit
shuffles and then recombining the results.

Fixes <rdar://problem/16167303>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204735 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-25 17:47:06 +00:00
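
A hypothetical single-source v16i16 shuffle that stays within each 128-bit
lane, the shape that previously went through a split-and-recombine; whether a
particular mask takes the new VPSHUFB path depends on the lowering:

  define <16 x i16> @swap_within_halves(<16 x i16> %a) {
    ; Reverse the elements within each 128-bit half of the YMM register.
    %r = shufflevector <16 x i16> %a, <16 x i16> undef,
         <16 x i32> <i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0,
                     i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8>
    ret <16 x i16> %r
  }
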
Matt Arsenault
0227461758 R600: Add failing testcase for <3 x i32> stores.
This is supposed to have the same store size and alignment as <4 x i32>,
but currently is split into a 64-bit and 32-bit store.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204729 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-25 16:50:55 +00:00
Cameron McInally
3ec862b7ae Fix AVX2 Gather execution domains.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204713 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-25 12:36:38 +00:00
David Majnemer
4fd5cd06c8 WinCOFF: Add support for -fdata-sections
This is a pretty straightforward translation for COFF; we just need to
stick the data in a COMDAT section marked as
IMAGE_COMDAT_SELECT_NODUPLICATES.

N.B. We must be careful to avoid sticking entities with private linkage
in COMDAT groups.  COFF is pretty hostile to the renaming of entities so
we must be careful to disallow GlobalVariables with unstable names.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204703 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-25 06:14:26 +00:00
Saleem Abdulrasool
f50e709043 test: fix CHECK lines
Thanks to gix for pointing out that the CHECK-LABEL lines were incorrect!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204700 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-25 03:39:39 +00:00
Manman Ren
692d1830f3 Register Allocator: check other options before using a CSR for the first time.
When the register allocator's stage is RS_Spill, we choose spilling over using
the CSR for the first time, if the spill cost is lower than CSRCost.
When the register allocator's stage is < RS_Split, we choose pre-splitting over
using the CSR for the first time, if the cost of splitting is lower than
CSRCost.

CSRCost is set with the command-line option "regalloc-csr-first-time-cost". The
default value is 0 to generate the same code as before this commit.

With a value of 15 (1 << 14 is the entry frequency), I measured a performance
gain of 3% on 253.perlbmk and 1.7% on 197.parser, with instrumented PGO,
on an arm device.

rdar://16162005


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204690 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-25 00:16:25 +00:00
Matt Arsenault
add2e2ec8f R600/SI: Fix extra mov from legalizing 64-bit SALU ops.
Check the register class of each operand individually
to avoid an extra copy to a vgpr.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204662 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-24 20:08:13 +00:00
Matt Arsenault
3a96e61469 R600/SI: Sub-optimal fix for 64-bit immediates with SALU ops.
No longer asserts, but now you get moves loading legal immediates
into the split 32-bit operations.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204661 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-24 20:08:09 +00:00
Matt Arsenault
db1807144a R600/SI: Fix 64-bit bit ops that require the VALU.
Try to match the scalar and first, like the other instructions.
Expand 64-bit ands to a pair of 32-bit ands since a 64-bit and is not
available on the VALU.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204660 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-24 20:08:05 +00:00
Matt Arsenault
6c199d8212 R600: Implement isNarrowingProfitable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204658 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-24 19:43:31 +00:00
Quentin Colombet
4768df00c4 [X86][ISelDAG] Add missing fallback patterns for avx2 broadcast instructions.
Those patterns are used when the load cannot be folded into the related broadcast
during the select phase.
This happens when the load gets additional uses that were not anticipated during
the previous lowering phases (constant vector to constant load, then constant
load reused) or when selection DAG is not able to prove that folding the load
will not create a cycle in the DAG.

<rdar://problem/16074331>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204631 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-24 17:54:19 +00:00
Matt Arsenault
875870fdb4 R600/SI: Fix 64-bit private loads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204630 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-24 17:50:46 +00:00
Eli Bendersky
b6f02c13da Add test to test/CodeGen/NVPTX for "alloca buffer" arguments.
Make sure such IR gets properly lowered to PTX.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204624 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-24 16:52:30 +00:00
Justin Holewinski
b8cb709858 [NVPTX] Add isel patterns for addrspacecast
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204600 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-24 11:17:53 +00:00
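
An editorial example of the addrspacecast IR these patterns select; treating
address space 1 as the PTX global space is the usual NVPTX convention and an
assumption here:

  define i32 addrspace(1)* @generic_to_global(i32* %p) {
    %g = addrspacecast i32* %p to i32 addrspace(1)*
    ret i32 addrspace(1)* %g
  }
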
David Majnemer
77df5169a8 WinCOFF: Add support for -ffunction-sections
This is a pretty straightforward translation for COFF; we just need to
stick the function in a COMDAT section marked as
IMAGE_COMDAT_SELECT_NODUPLICATES.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204565 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-23 17:47:39 +00:00
Hal Finkel
b6cbecd272 [PowerPC] Make use of VSX f64 <-> i64 conversion instructions
When VSX is available, these instructions should be used in preference to the
older variants that only have access to the scalar floating-point registers.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204559 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-23 05:35:00 +00:00
Hal Finkel
0d277ab1ba [PowerPC] Fix the VSX v2f64 return register
v2f64 values, like other 128-bit values, are returned under VSX in register
vs34 (Altivec register v2).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204543 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-22 18:24:43 +00:00
Andrea Di Biagio
d47cb57ab8 [DAG] Fix an assertion failure caused by an invalid cast in method 'BuildVectorSDNode::isConstantSplat'
This patch renames method 'isConstantSplat' as 'getConstantSplatValue'
(mainly for consistency reasons), and rewrites its logic to ensure
that we always perform a legal 'cast<ConstantSDNode>'.

Added test shift-combine-crash.ll to verify that DAGCombiner no longer crashes with an assertion failure in the attempt to simplify a vector shift by a vector of all undef counts.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204536 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-22 01:47:22 +00:00
Manman Ren
8dbd561a88 Register allocator: add condition to hoist a spill to outer loop.
We make sure a spill is not hoisted to a hotter outer loop by adding
a condition. Hoist a spill to an outer loop if there are multiple dependents
(it can be beneficial if more than one dependent is hoisted) or
if DepSV (the hoisting source) is hotter than SV (the hoisting destination).

rdar://16268194


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204522 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 21:46:24 +00:00
Chad Rosier
1eb67a4f84 [AArch64] Add SchedRW lists to NEON instructions.
Previously, only regular AArch64 instructions were annotated with SchedRW lists.
This patch does the same for NEON, enabling these instructions to be scheduled by
the MIScheduler. Additionally, store operations are now modeled and a few
SchedRW lists were updated for bug fixes (e.g. multiple def operands).

Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204505 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 19:34:41 +00:00
Matt Arsenault
55d17f4842 R600/SI: Move instruction patterns to scalar versions.
Some of them also had the pattern on both the scalar and vector versions, so
this removes the duplication.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204492 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 18:01:18 +00:00
Rafael Espindola
d38fea31a5 Remove redundant test.
This is tested from MC already.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204491 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 18:00:51 +00:00
Rafael Espindola
3f687d350c Move codegen test over to MC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204490 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 17:55:34 +00:00
Rafael Espindola
469198f995 Convert test to using cfi.
An unnamed global in llvm still produces a regular symbol.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204488 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 17:38:01 +00:00
Rafael Espindola
0b50368e68 Remove redundant test.
The production of the .eh symbols is done from MC now and we already have tests
for it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204483 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 17:26:35 +00:00
Rafael Espindola
6c22b041da Split out the MC part of this test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204481 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 17:16:11 +00:00
Daniel Sanders
e85dd7c26d [mips] Correct lowering of VECTOR_SHUFFLE to VSHF.
Summary:
VECTOR_SHUFFLE concatenates the vectors in a vectorwise fashion:
  <0b00, 0b01> + <0b10, 0b11> -> <0b00, 0b01, 0b10, 0b11>
VSHF concatenates the vectors in a bitwise fashion:
  <0b00, 0b01> + <0b10, 0b11> ->
  0b0100       + 0b1110       -> 0b01001110
                                 <0b10, 0b11, 0b00, 0b01>
We must therefore swap the operands to get the correct result.

The test case that discovered the issue was MultiSource/Benchmarks/nbench.

Reviewers: matheusalmeida

Reviewed By: matheusalmeida

Differential Revision: http://llvm-reviews.chandlerc.com/D3142

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204480 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 16:56:51 +00:00
Tom Stellard
a1d28f6dd7 R600/SI: Handle MUBUF instructions in SIInstrInfo::moveToVALU()
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204476 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 15:51:57 +00:00
Tom Stellard
1f1c0495d0 R600/SI: Handle S_MOV_B64 in SIInstrInfo::moveToVALU()
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204475 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 15:51:54 +00:00
Richard Sandiford
6b6889d87b [SystemZ] Add support for z196 float<->unsigned conversions
These complement the older float<->signed instructions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204451 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 10:56:30 +00:00
Kevin Qin
fc029f2983 Fix test command line to avoid generating output file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204437 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 07:20:29 +00:00
Juergen Ributzka
d3cf783ed1 [Constant Hoisting] Make the constant materialization cost operand dependent
Extend the target hook to also take the operand index into account when
calculating the cost of the constant materialization.

Related to <rdar://problem/16381500>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204435 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 06:04:45 +00:00
Kevin Qin
c53b3dbc20 Fix an assertion caused by using inline asm with indirect register inputs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204425 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 02:14:50 +00:00
Kevin Qin
287cc35cd7 [AArch64] Remove .data_region directive from AArch64.
.data_region is only used on Darwin, so it shouldn't be generated
for other OSes. Currently AArch64 doesn't support Darwin yet, so
I removed it from AArch64. When Darwin is supported someday, we can
add it back and associate it with Darwin.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204424 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 02:12:48 +00:00
Rafael Espindola
5b460ed4cd Convert a CodeGen test into a MC test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204421 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 00:55:42 +00:00
Rafael Espindola
fab1a40a7b Port test to cfi.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204416 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-21 00:30:24 +00:00
Rafael Espindola
aeb12e91d1 Convert another CodeGen test into a MC test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204412 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-20 23:35:00 +00:00
Weiming Zhao
4eb2d228e9 Fix PR19136: [ARM] Fix Folding SP Update into vpush/vpop
Since MBB->computeRegisterLiveness() returns Dead for sub regs like s0,
d0 is used in vpop instead of updating sp, which causes s0 to be dead before
its use.

This patch checks the liveness of each subreg to make sure the reg is
actually dead.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204411 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-20 23:28:16 +00:00
Rafael Espindola
e316e00f16 Remove unused options from test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204401 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-20 21:38:04 +00:00
Juergen Ributzka
ee3242ed0b Revert "[Constant Hoisting] Extend coverage of the constant hoisting pass."
I will break this up into smaller pieces for review and recommit.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204393 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-20 20:17:13 +00:00
Juergen Ributzka
228c72a841 [Constant Hoisting] Extend coverage of the constant hoisting pass.
This commit extends the coverage of the constant hoisting pass, adds additional
debug output and updates the function names according to the style guide.

Related to <rdar://problem/16381500>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204389 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-20 19:55:52 +00:00
Kai Nacke
ebf9f0c6cb [MIPS] Add cpu octeon and some instructions
The Octeon cpu from Cavium Networks is mips64r2 based and has an extended
instruction set. In order to utilize this with LLVM, a new cpu feature "octeon"
and a subtarget feature "cnmips" is added. A small set of new instructions
(baddu, dmul, pop, dpop, seq, sne) is also added. LLVM generates dmul, pop and
dpop instructions with option -mcpu=octeon or -mattr=+cnmips.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204337 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-20 11:51:58 +00:00
Hao Liu
19a3e9aabe [ARM]Fix an assertion failure in A15SDOptimizer about DPair reg class by treating DPair as QPR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204304 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-20 05:36:59 +00:00
Matt Arsenault
e3620da269 R600/SI: Add support for 64-bit LDS writes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204274 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-19 22:19:54 +00:00
Matt Arsenault
62b3e22092 R600/SI: Add support for 64-bit LDS loads.
v2:
  -Use correct opcode for DS_READ_64

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204273 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-19 22:19:52 +00:00
Matt Arsenault
6eaa49233f R600/SI: Match i16 immediate offset of LDS instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204272 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-19 22:19:49 +00:00
Matt Arsenault
adf5141ecd R600/SI: Fix test checking wrong instruction operand.
The source and destination happen to be the same register.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204271 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-19 22:19:45 +00:00
Matt Arsenault
9c0b2d08d3 R600/SI: Don't display the GDS bit.
It isn't actually used now, and probably never will be, plus it makes
tests less annoying. I also think SC prints GDS instructions as a
separate instruction name.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204270 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-19 22:19:43 +00:00
Eli Bendersky
21354ec60d Expose "noduplicate" attribute as a property for intrinsics.
The "noduplicate" function attribute exists to prevent certain optimizations
from duplicating calls to the function. This is important on platforms where
certain function call duplications are unsafe (for example execution barriers
for CUDA and OpenCL).

This patch makes it possible to specify intrinsics as "noduplicate" and
translates that to the appropriate function attribute.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204200 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-18 23:51:07 +00:00
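
In IR terms, "noduplicate" is an ordinary function attribute; this change
lets an intrinsic definition carry it. A hedged sketch with a made-up
@barrier function standing in for such an intrinsic:

  declare void @barrier() noduplicate

  define void @kernel(i1 %cond) {
  entry:
    br i1 %cond, label %then, label %done
  then:
    ; Optimizations such as jump threading may not duplicate this call.
    call void @barrier()
    br label %done
  done:
    ret void
  }
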
Hans Wennborg
523f800e90 X86 memcpy lowering: use "rep movs" even when esi is used as base pointer
For functions where esi is used as base pointer, we would previously fall back
from lowering memcpy with "rep movs" because that clobbers esi.

With this patch, we just store esi in another physical register, and restore
it afterwards. This adds a little bit of register pressure, but the more
efficient memcpy should be worth it.

Differential Revision: http://llvm-reviews.chandlerc.com/D2968

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204174 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-18 20:04:34 +00:00
Michael Zolotukhin
50e4d56b9f Fix test lsr-normalization.ll broken in r204161.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204166 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-18 18:17:59 +00:00
Raul E. Silvera
370981ad17 Add support for scalarizing/splitting vector bswap.
Summary:
  SLP Vectorization of intrinsics (r203707) has exposed cases where the
  expansion of vector bswap is failing (PR19151).

Reviewers: hfinkel

CC: chandlerc

Differential Revision: http://llvm-reviews.chandlerc.com/D3104

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204163 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-18 17:49:12 +00:00
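
An editorial example of the vector bswap call that SLP vectorization can
produce; with this change it can be scalarized or split when the target has
no vector byte-swap:

  declare <4 x i32> @llvm.bswap.v4i32(<4 x i32>)

  define <4 x i32> @bswap_v4i32(<4 x i32> %a) {
    %r = call <4 x i32> @llvm.bswap.v4i32(<4 x i32> %a)
    ret <4 x i32> %r
  }
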
Michael Zolotukhin
13ca05e2b8 Add stride normalization to SCEV Normalize/Denormalize transformation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204161 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-18 17:34:03 +00:00
Andrea Di Biagio
6077ca9abb [DAGCombiner] teach how to simplify xor/and/or nodes according to the following rules:
1)  (AND (shuf (A, C, Mask), shuf (B, C, Mask))) -> shuf (AND (A, B), C, Mask)
2)  (OR  (shuf (A, C, Mask), shuf (B, C, Mask))) -> shuf (OR  (A, B), C, Mask)
3)  (XOR (shuf (A, C, Mask), shuf (B, C, Mask))) -> shuf (XOR (A, B), V_0, Mask)

4)  (AND (shuf (C, A, Mask), shuf (C, B, Mask))) -> shuf (C, AND (A, B), Mask)
5)  (OR  (shuf (C, A, Mask), shuf (C, B, Mask))) -> shuf (C, OR  (A, B), Mask)
6)  (XOR (shuf (C, A, Mask), shuf (C, B, Mask))) -> shuf (V_0, XOR (A, B), Mask)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204160 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-18 17:12:59 +00:00
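
A hypothetical IR shape matching rule 1) above: both shuffles take the same
second source C with the same mask, so the AND can be performed before the
shuffle:

  define <4 x i32> @and_of_shuffles(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c) {
    %s1 = shufflevector <4 x i32> %a, <4 x i32> %c,
          <4 x i32> <i32 0, i32 5, i32 2, i32 7>
    %s2 = shufflevector <4 x i32> %b, <4 x i32> %c,
          <4 x i32> <i32 0, i32 5, i32 2, i32 7>
    %r = and <4 x i32> %s1, %s2
    ret <4 x i32> %r
  }
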
Bill Schmidt
d4585b941a Fix PR19144: Incorrect offset generated for int-to-fp conversion at -O0.
When converting a signed 32-bit integer to double-precision floating point on
hardware without a lfiwax instruction, we have to instead use a lfd followed
by fcfid.  We were erroneously offsetting the address by 4 bytes in
preparation for either a lfiwax or lfiwzx when generating the lfd.  This fixes
that silly error.

This was not caught in the test suite since the conversion tests were run with
-mcpu=pwr7, which implies availability of lfiwax.  I've added another test
case for older hardware that checks the code we expect in the absence of
lfiwax and other flavors of fcfid.  There are fewer tests in this test case
because we punt to DAG selection in more cases on older hardware.  (We must
generate complex fiddly sequences in those cases, and there is marginal
benefit in duplicating that logic in fast-isel.)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204155 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-18 14:32:50 +00:00
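
For reference, the conversion in question is the plain signed int-to-double
case (editorial example); compiled at -O0 through fast-isel for a target
without lfiwax it hit the bad offset:

  define double @int_to_fp(i32 %a) {
    %conv = sitofp i32 %a to double
    ret double %conv
  }
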
NAKAMURA Takumi
3bdef4b6dc CodeGen/R600/v_cndmask.ll: Relax an expression to unbreak msvcrt.
V_CNDMASK_B32_e64 v0, v0, -1.#QNAN0e+00, s[2:3], 0, 0, 0, 0

FIXME: We really need to implement our formatter...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204118 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-18 06:17:22 +00:00
Kevin Enderby
c153e49aa9 Making a guess to fix the test case with r204056 to get the build bot working.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204073 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-17 19:00:03 +00:00
Matt Arsenault
2683baa8ac R600: Match sign_extend_inreg to BFE instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204072 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-17 18:58:11 +00:00
Matt Arsenault
94bdb453a4 Make DAGCombiner work on vector bitshifts with constant splat vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204071 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-17 18:58:01 +00:00
Adam Nemet
8c8fe42a0d [VectorLegalizer/X86] Don't unvectorize fp_to_uint for v8f32->v8i16
Rather than LegalizeAction::Expand, this needs LegalizeAction::Promote to get
promoted to fp_to_sint v8f32->v8i32.  This is a legal operation on AVX.

For that to work properly, we also need to teach the legalizer about the
specific promotion required here.  The default vector promotion uses
bitcasting to a vector type of the same total size.  We want to promote the
vector element type, effectively widening the operation and then truncating
the result.  This is analogous to the current logic of how int_to_fp is
promoted.

The change also factors out some code from the int_to_fp promotion code to
ValueType::widenIntegerVectorElementType.  This is now shared between
int_to_fp and fp_to_int.

There is no longer need for the custom lowering of fp_to_sint f32->v8i16 in
X86.  It can now go through the new target-independent fp_to_*int promotion
logic.

I also checked that no other target uses Promote for these ops yet, so there
shouldn't be any unexpected change in behavior.

Fixes <rdar://problem/16202247>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204058 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-17 17:06:14 +00:00
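
An editorial example of the operation being fixed; on AVX it is now promoted
to a v8f32 -> v8i32 conversion and then truncated, instead of being
unvectorized:

  define <8 x i16> @fptoui_v8f32(<8 x float> %a) {
    %r = fptoui <8 x float> %a to <8 x i16>
    ret <8 x i16> %r
  }
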
Tom Stellard
ad52f4f70c R600/SI: Fix implementation of isInlineConstant() used by the verifier
The type of the immediates should not matter as long as the encoding is
equivalent to the encoding of one of the legal inline constants.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204056 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-17 17:03:52 +00:00
Tom Stellard
eb7876083d R600/SI: Use correct dest register class for V_READFIRSTLANE_B32
This instruction writes to a 32-bit SGPR.  This change required adding
the 32-bit VCC_LO and VCC_HI registers, because the full VCC register
is 64 bits.

This fixes verifier errors on several of the indirect addressing piglit
tests.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204055 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-17 17:03:51 +00:00
Lang Hames
3dd951e842 [X86] New and improved VZeroUpperInserter optimization.
- Adds support for inserting vzerouppers before tail-calls.
  This is enabled implicitly by having MachineInstr::copyImplicitOps preserve
  regmask operands, which allows VZeroUpperInserter to see where tail-calls use
  vector registers.

- Fixes a bug that caused the previous version of this optimization to miss some
  vzeroupper insertion points in loops. (Loops-with-vector-code that followed
  loops-without-vector-code were mistakenly overlooked by the previous version).

- New algorithm never revisits instructions.

Fixes <rdar://problem/16228798>



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204021 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-17 01:22:54 +00:00
Adrian Prantl
2110a0d07b Re-add checks that were in this testcase before it was converted to dwarfdump.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203981 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-14 23:08:21 +00:00
Ulrich Weigand
0951eecae4 [ppc64] Avoid copy relocs in named rodata sections
Commit r181723 introduced code to avoid placing initialized variables
needing relocations into the .rodata section, which avoids copy relocs
that do not work as expected on ppc64 function references.

The same treatment is also needed for *named* .rodata.XXX sections.
This patch changes PPC64LinuxTargetObjectFile::SelectSectionForGlobal
to modify "Kind" *before* calling the default SelectSectionForGlobal
routine, instead of first calling the default routine and then just
checking for the (main) .rodata section afterwards.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203921 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-14 12:45:22 +00:00
Rafael Espindola
1f21e0dd0d Remove the linker_private and linker_private_weak linkages.
These linkages were introduced some time ago, but it was never very
clear what exactly their semantics were or what they should be used
for. Some investigation found these uses:

* utf-16 strings in clang.
* non-unnamed_addr strings produced by the sanitizers.

It turns out they were just working around a more fundamental problem.
For some sections a MachO linker needs a symbol in order to split the
section into atoms, and llvm had no idea that was the case. I fixed
that in r201700 and it is now safe to use the private linkage. When
the object ends up in a section that requires symbols, llvm will use a
'l' prefix instead of a 'L' prefix and things just work.

With that, these linkages were already dead, but there was a potential
future user in the objc metadata information. I am still looking at
CGObjcMac.cpp, but at this point I am convinced that linker_private
and linker_private_weak are not what they need.

The objc uses are currently split in

* Regular symbols (no '\01' prefix). LLVM already directly provides
whatever semantics they need.
* Uses of a private name (start with "\01L" or "\01l") and private
linkage. We can drop the "\01L" and "\01l" prefixes as soon as llvm
agrees with clang on L being ok or not for a given section. I have two
patches in code review for this.
* Uses of private name and weak linkage.

The last case is the one that one could think would fit one of these
linkages. That is not the case. The semantics are

* the linker will merge these symbols by *name*.
* the linker will hide them in the final DSO.

Given that the merging is done by name, any of the private (or
internal) linkages would be a bad match. They allow llvm to rename the
symbols, and that is really not what we want. From the llvm point of
view, these objects should really be (linkonce|weak)(_odr)?.

For now, just keeping the "\01l" prefix is probably the best for these
symbols. If we one day want to have a more direct support in llvm,
IMHO what we should add is not a linkage, it is just a hidden_symbol
attribute. It would be applicable to multiple linkages. For example,
on weak it would produce the current behavior we have for objc
metadata. On internal, it would be equivalent to private (and we
should then remove private).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203866 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 23:18:37 +00:00
Kevin Enderby
c5888b8d1b Add -mtriple=x86_64-linux to this test case to fix the build bots.
The original commit was r203829.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203844 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 20:31:19 +00:00
Ekaterina Romanova
ed2ca70ccf Fix for http://llvm.org/bugs/show_bug.cgi?id=18590
This patch fixes the bug in peephole optimization that folds a load which
defines one vreg into the one and only use of that vreg. With debug info, a
DBG_VALUE that referenced the vreg was considered to be a use, preventing the
optimization. The fix is to ignore DBG_VALUEs during the optimization, and to
undef any DBG_VALUE that references a vreg that gets removed.
Patch by Trevor Smigiel!



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203829 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 18:47:12 +00:00
Tom Stellard
47feea0802 R600: LDS instructions shouldn't implicitly define OQAP
LDS instructions are pseudo instructions which model
the OQAP defs and uses within a single instruction.

This fixes a hang in the opencv MedianFilter tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203818 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 17:13:04 +00:00
Mark Seaborn
d2a816fe10 Cleanup: Remove use of old "-enable-correct-eh-support" option from a test
This option enables LowerInvoke's obsolete SJLJ EH support, but the
target used in this test (ARM Darwin) no longer uses the LowerInvoke
pass, so the option has no effect here.  This target currently uses
the newer SjLjEHPrepare pass instead.

This cleanup will help with removing "-enable-correct-eh-support".

Differential Revision: http://llvm-reviews.chandlerc.com/D3064

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203810 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 16:23:00 +00:00
Hans Wennborg
c8ed0db5aa [ARM] Use symbolic register names in .cfi directives only with IAS (PR19110)
This is a follow-up to r203635. Saleem pointed out that since symbolic register
names are much easier to read, it would be good if we could turn them off only
when we really need to because we're using an external assembler.

Differential Revision: http://llvm-reviews.chandlerc.com/D3056

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203806 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 15:56:41 +00:00
Manuel Jacob
f8909fa140 CodeGenPrep: sink extends of illegal types into use block.
Summary:
This helps the instruction selector to lower an i64 * i64 -> i128
multiplication into a single instruction on targets which support it.

This is an update of D2973 which was reverted because of a bug reported
as PR19084.

Reviewers: t.p.northover, chapuni

Reviewed By: t.p.northover

CC: llvm-commits, alex, chapuni

Differential Revision: http://llvm-reviews.chandlerc.com/D3021

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203797 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 13:36:25 +00:00
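
The motivating pattern, as an editorial sketch (not taken from the patch):
once the sexts are sunk into the block of the multiply, the selector can see
the i64 * i64 -> i128 widening multiply as a unit:

  define i128 @mul_wide(i64 %a, i64 %b) {
    %ea = sext i64 %a to i128
    %eb = sext i64 %b to i128
    %m = mul i128 %ea, %eb
    ret i128 %m
  }
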
Elena Demikhovsky
3d1ae71813 AVX-512: masked load/store + intrinsics for them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203790 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 12:05:52 +00:00
Hal Finkel
ab849adec4 [PowerPC] Initial support for the VSX instruction set
VSX is an ISA extension supported on the POWER7 and later cores that enhances
floating-point vector and scalar capabilities. Among other things, this adds
<2 x double> support and generally helps to reduce register pressure.

The interesting part of this ISA feature is the register configuration: there
are 64 new 128-bit vector registers, the first 32 of which are super-registers
of the existing 32 scalar floating-point registers, and the second 32 of which overlap
with the 32 Altivec vector registers. This makes things like vector insertion
and extraction tricky: this can be free but only if we force a restriction to
the right register subclass when needed. A new "minipass" PPCVSXCopy takes care
of this (although it could do a more-optimal job of it; see the comment about
unnecessary copies below).

Please note that, currently, VSX is not enabled by default when targeting
anything because it is not yet ready for that.  The assembler and disassembler
are fully implemented and tested. However:

 - CodeGen support causes miscompiles; test-suite runtime failures:
      MultiSource/Benchmarks/FreeBench/distray/distray
      MultiSource/Benchmarks/McCat/08-main/main
      MultiSource/Benchmarks/Olden/voronoi/voronoi
      MultiSource/Benchmarks/mafft/pairlocalalign
      MultiSource/Benchmarks/tramp3d-v4/tramp3d-v4
      SingleSource/Benchmarks/CoyoteBench/almabench
      SingleSource/Benchmarks/Misc/matmul_f64_4x4

 - The lowering currently falls back to using Altivec instructions far more
   than it should. Worse, there are some things that are scalarized through the
   stack that shouldn't be.

 - A lot of unnecessary copies make it past the optimizers, and this needs to
   be fixed.

 - Many more regression tests are needed.

Normally, I'd fix these things prior to committing, but there are some
students and other contributors who would like to work on this, and so it makes
sense to move this development process upstream where it can be subject to the
regular code-review procedures.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203768 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 07:58:58 +00:00
Adam Nemet
a65ca9dcf0 [X86] Add peephole for masked rotate amount
Extend what's currently done for shift because the HW performs this masking
implicitly:

   (rotl:i32 x, (and y, 31)) -> (rotl:i32 x, y)

I use the newly factored out multiclass that was only supporting shifts so
far.

For testing I extended my testcase for the new rotation idiom.

<rdar://problem/15295856>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203718 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-12 21:20:55 +00:00
Rafael Espindola
38048cdb1c Reject alias to undefined symbols in the verifier.
On ELF and COFF an alias is just another name for a position in the file.
There is no way to refer to a position in another file, so an alias to
undefined is meaningless.

MachO currently doesn't support aliases. The spec has an N_INDR, which when
implemented will have a different set of restrictions. Adding support for
it shouldn't be harder than any other IR extension.

For now, having the IR represent what is actually possible with current
tools makes it easier to fix the design of GlobalAlias.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203705 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-12 20:15:49 +00:00
Matt Arsenault
054f4eccd2 R600: Fix trunc store from i64 to i1
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203695 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-12 18:45:52 +00:00
Daniel Sanders
fe6bd52bf2 [mips] BSEL's and BINS[RL] operands are reversed compared to the vselect node used in the pattern.
Summary:
Correct the match patterns and the lowerings that made the CodeGen tests pass despite the mistakes.

The original testcase that discovered the problem was SingleSource/UnitTests/SignlessType/factor.c in test-suite.
During review, we also found that some of the existing CodeGen tests were incorrect and fixed them:
* bitwise.ll: In bsel_v16i8 the IfSet/IfClear were reversed because bsel and bmnz have different operand orders and the test didn't correctly account for this. bmnz goes 'IfClear, IfSet, CondMask', while bsel goes 'CondMask, IfClear, IfSet'.
* vec.ll: In the cases where a bsel is emitted as a bmnz (they are the same operation with a different input tied to the result) the operands were in the wrong order.
* compare.ll and compare_float.ll: The bsel operand order was correct for a greater-than comparison, but a greater-than comparison instruction doesn't exist. Lowering this operation inverts the condition so the IfSet/IfClear need to be swapped to match.

The differences between BSEL, BMNZ, and BMZ and how they map to/from vselect are rather confusing. I've therefore added a note to MSA.txt to explain this in a single place in addition to the comments that explain each case.

Reviewers: matheusalmeida, jacksprat

Reviewed By: matheusalmeida

Differential Revision: http://llvm-reviews.chandlerc.com/D3028

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203657 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-12 11:54:00 +00:00
Tim Northover
d4517fa24d ARM: correct Dwarf output for non-contiguous VFP saves.
When the list of VFP registers to be saved was non-contiguous (so multiple
vpush/vpop instructions were needed) these were being ordered oddly, as in:
    vpush {d8, d9}
    vpush {d11}

This led to the layout in memory being [d11, d8, d9] which is ugly and doesn't
match the CFI_INSTRUCTIONs we're generating either (so Dwarf info would be
broken).

This switches the order of vpush/vpop (in both prologue and epilogue,
obviously) so that the Dwarf locations are correct again.

rdar://problem/16264856

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203655 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-12 11:29:23 +00:00
Hans Wennborg
e03daa01f6 [ARM] Use DWARF register numbers for CFI directives in ELF assembly
It seems gas can't handle CFI directives with VFP register names ("d12", etc.).
This broke us trying to build Chromium for Android after r201423.

A gas bug has been filed: https://sourceware.org/bugzilla/show_bug.cgi?id=16694

compnerd suggested making this conditional on whether we're using the integrated
assembler or not. I'll look into that in a follow-up patch.

Differential Revision: http://llvm-reviews.chandlerc.com/D3049

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203635 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-12 03:52:34 +00:00
Hans Wennborg
1332459dbb X86: Don't generate 64-bit movd after cmpneqsd in 32-bit mode (PR19059)
This fixes the bug where we would bitcast the 64-bit floating point result
of cmpneqsd to a 64-bit integer even on 32-bit targets.

Differential Revision: http://llvm-reviews.chandlerc.com/D3009

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203581 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-11 15:49:24 +00:00
Saleem Abdulrasool
90d0ed297f ARM: honour -f{no-,}optimize-sibling-calls
Use the options in the ARMISelLowering to control whether tail calls are
optimised or not.  Previously, this option was entirely ignored on the ARM
target and only honoured on x86.

This option is mostly useful in profiling scenarios.  The default remains that
tail call optimisations will be applied.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203577 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-11 15:09:54 +00:00
Saleem Abdulrasool
2b42ff6fdb ARM: remove ancient -arm-tail-calls option
This option is from 2010, designed to work around a linker issue on Darwin for
ARM.  According to grosbach this is no longer an issue and this option can
safely be removed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203576 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-11 15:09:49 +00:00
Saleem Abdulrasool
cde1f2eae2 ARM: enable tail call optimisation on Thumb 2
Tail call optimisation was previously disabled on all targets other than
iOS5.0+.  This enables the tail call optimisation on all Thumb 2 capable
platforms.

The test adjustments remove the IR hint "tail" from function invocations.
The tests were designed assuming that tail call optimisations would not kick in,
which no longer holds true.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203575 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-11 15:09:44 +00:00
Tim Northover
ca396e391e IR: add a second ordering operand to cmpxchg for failure
The syntax for "cmpxchg" should now look something like:

	cmpxchg i32* %addr, i32 42, i32 3 acquire monotonic

where the second ordering argument gives the required semantics in the case
that no exchange takes place. It should be no stronger than the first ordering
constraint and cannot be either "release" or "acq_rel" (since no store will
have taken place).

rdar://problem/15996804

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203559 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-11 10:48:52 +00:00
Jim Grosbach
7a37166a7a X86: Enable ISel of 16-bit MOVBE instructions.
When the MOVBE instructions are available, use them for 16-bit endian
swapping as well as for 32 and 64 bit.

The patterns were already present on the instructions, but weren't being
matched because the operation was unconditionally marked to 'Expand.'
Change that to be conditional on whether the MOVBE instructions are
available. Use 'rolw' to implement the in-register version (32 and 64
bit have the dedicated 'bswap' instruction for that).
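
For illustration, a small hypothetical IR sketch (not from the patch) of the
16-bit load case:

  declare i16 @llvm.bswap.i16(i16)

  define i16 @swap16(i16* %p) {
    ; With MOVBE available this load+bswap can now be selected as a single
    ; 16-bit movbe; the pure in-register form uses 'rolw' instead.
    %v = load i16* %p
    %s = call i16 @llvm.bswap.i16(i16 %v)
    ret i16 %s
  }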

Patch by Louis Gerbarg <lgg@apple.com>.

rdar://15479984

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203524 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-11 00:44:14 +00:00
Matt Arsenault
53131629dc Fix undefined behavior in vector shift tests.
These were all shifting the same amount as the bitwidth.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203519 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-11 00:01:41 +00:00
Eli Bendersky
428b609de3 Followup to r203483 - add test.
[forgot to 'svn add' before committing r203483]


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203485 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-10 20:36:04 +00:00
Sasa Stankovic
754aaee387 [mips] Implement NaCl sandboxing of loads, stores and SP changes:
* Add masking instructions before loads and stores (in MC layer).
* Add masking instructions after SP changes (in MC layer).
* Forbid loads, stores and SP changes in delay slots (in MI layer).

Differential Revision: http://llvm-reviews.chandlerc.com/D2904


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203484 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-10 20:34:23 +00:00
Reed Kotler
017bc0fca6 Fix regression with -O0 for mips.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203469 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-10 16:31:25 +00:00
Tim Northover
8ca089df49 AArch64: fix LowerCONCAT_VECTORS for new CodeGen.
The function was making too many assumptions about its input:

1. The NEON_VDUP optimisation was far too aggressive, assuming (I
think) that the input would always be BUILD_VECTOR.

2. We were treating most unknown concats as legal (by returning Op
rather than SDValue()). I think only concats of pairs of vectors are
actually legal.

http://llvm.org/PR19094

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203450 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-10 09:34:07 +00:00
NAKAMURA Takumi
e086782817 Revert r203230, "CodeGenPrep: sink extends of illegal types into use block."
It choked i686 stage2.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203386 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-09 11:01:07 +00:00
David Majnemer
39a09d2b7c IR: Change inalloca's grammar a bit
The grammar for LLVM IR is not well specified in any document but seems
to obey the following rules:

 - Attributes which have parenthesized arguments are never preceded by
   commas.  This form of attribute is the only one which ever has
   optional arguments.  However, not all of these attributes support
   optional arguments: 'thread_local' supports an optional argument but
   'addrspace' does not.  Interestingly, 'addrspace' is documented as
   being a "qualifier".  What constitutes a qualifier?  I cannot find a
   definition.

 - Some attributes use a space between the keyword and the value.
   Examples of this form are 'align' and 'section'.  These are always
   preceded by a comma.

 - Otherwise, the attribute has no argument.  These attributes do not
   have a preceding comma.

Sometimes an attribute goes before the instruction, between the
instruction and its type, or after its type.  'atomicrmw' has
'volatile' between the instruction and the type while 'call' has 'tail'
preceding the instruction.

With all this in mind, it seems most consistent for 'inalloca' on an
'alloca' instruction to occur between the instruction and the
type.  Unlike the current formulation, there would be no preceding
comma.  The combination 'alloca inalloca' doesn't look particularly
appetizing; perhaps a better spelling of 'inalloca' is down the road.
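
A minimal sketch of the proposed spelling (function names are made up, and this
only illustrates the surface syntax):

  define void @caller() {
    %args = alloca inalloca i32    ; no comma; 'inalloca' sits between the instruction and its type
    call void @takes_inalloca(i32* inalloca %args)
    ret void
  }

  declare void @takes_inalloca(i32* inalloca)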


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203376 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-09 06:41:58 +00:00
Adam Nemet
b033b03c23 Update comment from r203315 based on review
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203361 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-08 21:51:55 +00:00
David Blaikie
50b59c77e0 DebugInfo: further improvements to test following up on r203329
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203337 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-08 02:45:53 +00:00
David Blaikie
5c31033dda DebugInfo: Fix test fallout from r203323
Will fix this harder in a moment.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203329 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-08 01:32:51 +00:00
Adam Nemet
316d3e3085 [DAGCombiner] Recognize another rotation idiom
This is the new idiom:

  x<<(y&31) | x>>((0-y)&31)

which is recognized as:

  x ROTL (y&31)

The change refines matchRotateSub.  In
Neg & (OpSize - 1) == (OpSize - Pos) & (OpSize - 1), if Pos is
Pos' & (OpSize - 1) we can just use Pos' instead of Pos.
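
For reference, a hypothetical IR sketch (not from the patch) of the new idiom
that now folds to a single rotate:

  define i32 @rotl32(i32 %x, i32 %y) {
    %amt = and i32 %y, 31
    %hi = shl i32 %x, %amt
    %negy = sub i32 0, %y
    %amt2 = and i32 %negy, 31
    %lo = lshr i32 %x, %amt2
    ; The whole expression is now recognized as x ROTL (y & 31).
    %rot = or i32 %hi, %lo
    ret i32 %rot
  }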

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203315 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-07 23:56:28 +00:00
Arnold Schwaighofer
aa5b17b359 ISel: Make VSELECT selection terminate in cases where the condition type has to
be split and the result type widened.

When the condition of a vselect has to be split, it makes no sense to widen the
vselect and thereby widen the condition. We would end up in an endless loop of
widening (the vselect result type) and splitting (the condition mask type).
Instead, split both the condition and the vselect, and widen the result.

I ran this over the test suite with i686 and mattr=+sse and saw no regressions.

Fixes PR18036.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203311 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-07 23:25:55 +00:00
Sasa Stankovic
fa14948a11 Moved test file from test/MC/Mips to test/CodeGen/Mips.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203298 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-07 22:08:46 +00:00
Tom Stellard
6cadd406cc R600/SI: Using SGPRs is illegal for instructions that read carry-out from VCC
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203281 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-07 20:12:39 +00:00
Tom Stellard
7e06370873 R600/SI: Custom lower i1 stores
These are sometimes created by the shrink to boolean optimization in the
globalopt pass.

Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203280 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-07 20:12:33 +00:00
Tim Northover
fa9e4b52f4 CodeGenPrep: sink extends of illegal types into use block.
This helps the instruction selector to lower an i64 * i64 -> i128
multiplication into a single instruction on targets which support it.

Patch by Manuel Jacob.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203230 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-07 11:04:30 +00:00
Rafael Espindola
7d7d99622f Replace PROLOG_LABEL with a new CFI_INSTRUCTION.
The old system was fairly convoluted:
* A temporary label was created.
* A single PROLOG_LABEL was created with it.
* A few MCCFIInstructions were created with the same label.

The semantics were that the cfi instructions were mapped to the PROLOG_LABEL
via the temporary label. The output position was that of the PROLOG_LABEL.
The temporary label itself was used only for doing the mapping.

The new CFI_INSTRUCTION has a 1:1 mapping to MCCFIInstructions and points to
one by holding an index into the CFI instructions of this function.

I did consider removing MMI.getFrameInstructions completely and having
CFI_INSTRUCTION own a MCCFIInstruction, but MCCFIInstructions have non-trivial
constructors and destructors and are somewhat big, so this setup
is probably better.

The net result is that we don't create temporary labels that are never used.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203204 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-07 06:08:31 +00:00
Rafael Espindola
b52d0c0d74 Remove shouldEmitUsedDirectiveFor.
Clang now uses llvm.compiler.used for these cases.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203174 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 22:47:08 +00:00
Rafael Espindola
e7147c1b57 Convert test to FileCheck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203173 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 22:21:43 +00:00
Andrea Di Biagio
e54158504f [X86] Teach the DAGCombiner how to fold a OR of two shufflevector nodes.
This patch teaches the DAGCombiner how to fold a binary OR between two
shufflevector into a single shuffle vector when possible.

The rules are:
  1. fold (or (shuf A, V_0, MA), (shuf B, V_0, MB)) -> (shuf A, B, Mask1)
  2. fold (or (shuf A, V_0, MA), (shuf B, V_0, MB)) -> (shuf B, A, Mask2)

The DAGCombiner can take advantage of the fact that OR is commutative and
compute two possible shuffle masks (Mask1 and Mask2) for the resulting
shuffle node.

Before folding a dag according to either rule 1 or 2, DAGCombiner verifies
that the resulting shuffle mask is legal for the target.
DAGCombiner first tries to fold according to rule 1; if that is not possible,
it then tries to fold according to rule 2.
If both Mask1 and Mask2 are illegal then we conservatively don't fold
the OR instruction.
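
As an illustration (a made-up example, not from the patch), on a target where
the combined mask is legal:

  define <4 x i32> @or_of_shuffles(<4 x i32> %a, <4 x i32> %b) {
    ; %s1 = <a0, a1, 0, 0> and %s2 = <0, 0, b2, b3>, so the OR is equivalent to
    ; shufflevector %a, %b, <0, 1, 6, 7> and no 'or' is needed at all.
    %s1 = shufflevector <4 x i32> %a, <4 x i32> zeroinitializer, <4 x i32> <i32 0, i32 1, i32 4, i32 4>
    %s2 = shufflevector <4 x i32> %b, <4 x i32> zeroinitializer, <4 x i32> <i32 4, i32 4, i32 2, i32 3>
    %r = or <4 x i32> %s1, %s2
    ret <4 x i32> %r
  }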



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203156 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 20:19:52 +00:00
Matt Arsenault
161e3a80b2 R600: Fix extloads from i8 / i16 to i64.
This appears to only be working for global loads. Private
and local break for other reasons.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203135 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 17:34:12 +00:00
Matt Arsenault
b4cd160bb9 R600/SI: Expand selects on vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203134 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 17:34:03 +00:00
Richard Osborne
d530a96701 [XCore] Add support for the "m" inline asm constraint.
Summary:
This provides support for CP and DP relative global accesses in inline
asm.

Reviewers: robertlytton

Reviewed By: robertlytton

Differential Revision: http://llvm-reviews.chandlerc.com/D2943

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203129 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 16:37:48 +00:00
Chad Rosier
514d703ff6 [AArch64] This is a work in progress to provide a machine description
for the Cortex-A53 subtarget in the AArch64 backend.

This patch lays the groundwork to annotate each AArch64 instruction
(no NEON yet) with a list of SchedReadWrite types. The patch also
provides the Cortex-A53 processor resources, maps those to the default
SchedReadWrites, and provides basic latency. NEON support will be added
in a subsequent patch with proper forwarding logic.

Verification was done by setting the pre-RA scheduler to linearize to
better gauge the effect of the MIScheduler. Even without modeling the
forwarding logic, the results show a modest improvement for Cortex-A53.

Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203125 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 16:04:00 +00:00
Hal Finkel
341ea7ddf6 Fixup PPC Darwin i1 argument handling
Like on other targets, we need to zero_extend/truncate i1 args before copying
them to GPRs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203045 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 00:45:19 +00:00
Hal Finkel
025c1cefca When using CR bit registers on PPC32, handle the i1 vaarg case
When copying an i1 value into a GPR for a vaarg call, we need to explicitly
zero-extend the i1 value (otherwise an invalid CRBIT -> GPR copy will be
generated).
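
A hypothetical sketch of the affected shape (declaration names are made up):

  declare void @variadic(i32, ...)

  define void @pass_flag(i32 %a, i32 %b) {
    ; %flag lives in a CR bit; copying it into a GPR for the vararg call now goes
    ; through an explicit zero-extension rather than an invalid CRBIT -> GPR copy.
    %flag = icmp eq i32 %a, %b
    call void (i32, ...)* @variadic(i32 1, i1 %flag)
    ret void
  }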

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203041 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 00:23:33 +00:00
Jack Carter
c4392f9fe9 [Mips] Testcase typo fix. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203020 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-05 22:54:56 +00:00
Hal Finkel
f698d7775a With PPC CR bit registers, handle int_to_fp on older cores
On cores without fpcvt support, we cannot promote int_to_fp i1 operations,
because there is nothing to promote them to. The most straightforward
implementation of this uses a select to choose between the two possible
resulting floating-point values (and that's what is done here).
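
For illustration, a hypothetical sketch of the case in question:

  define double @bool_to_double(i1 %b) {
    ; Without fpcvt there is no wider integer type to promote the i1 to, so this
    ; is lowered as a select between 1.0 and 0.0, materializing both constants.
    %r = uitofp i1 %b to double
    ret double %r
  }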

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203015 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-05 22:14:00 +00:00
Rafael Espindola
fc5436c951 Always print the implicit .text at the start of an asm file.
Previously llvm-mc would print it, but llc assumed that it would produce
another section-changing directive before one was needed. That assumption is
false with inline asm.

Fixes PR19049.

Another option would be to always create the section, but in the asm printer
avoid printing sections changes during initialization. That would work, but
* We do use the fact that llvm-mc prints it in testing. The tests can be changed
  if needed.
* A quick poll on IRC suggest that most developers prefer the implicit .text to
  be printed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203001 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-05 20:09:15 +00:00
Cameron McInally
f3ff7c32f7 Lower AVX v4i64->v4i32 truncate to one shuffle.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202996 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-05 19:41:16 +00:00
Oliver Stannard
0d31d1e612 ARM: Correctly align arguments after a byval struct is passed on the stack
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202985 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-05 15:25:27 +00:00
Andrew Trick
a6ace00520 Make stackmap machineinstrs clobber the scratch regs too.
Patchpoints already did this. Doing it for stackmaps is a convenience
for the runtime in the event that it needs a scratch register to
patch or to perform a runtime call thunk.

Unlike patchpoints, we just assume the AnyRegCC calling
convention. This is the only language- and target-independent calling
convention specific to stackmaps, so it makes sense. The calling
convention is not, however, currently used to select the scratch registers.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202943 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-05 07:08:16 +00:00
Hans Wennborg
2f471c83a0 Check for dynamic allocas and inline asm that clobbers sp before building
selection dag (PR19012)

In X86SelectionDagInfo::EmitTargetCodeForMemcpy we check with MachineFrameInfo
to make sure that ESI isn't used as a base pointer register before we choose to
emit rep movs (which clobbers esi).

The problem is that MachineFrameInfo wouldn't know about dynamic allocas or
inline asm that clobbers the stack pointer until SelectionDAGBuilder has
encountered them.

This patch fixes the problem by checking for such things when building the
FunctionLoweringInfo.
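
A hypothetical sketch of the kind of function that trips this (names made up):

  declare void @llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

  define void @f(i32 %n, i8* %src) {
    ; The variable-sized alloca may force a base pointer (ESI on x86-32), in which
    ; case lowering the memcpy to 'rep movs' would clobber it. FunctionLoweringInfo
    ; now records this before the DAG for the memcpy is built.
    %buf = alloca i8, i32 %n, align 16
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %buf, i8* %src, i32 128, i32 4, i1 false)
    ret void
  }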

Differential Revision: http://llvm-reviews.chandlerc.com/D2954

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202930 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-05 02:43:26 +00:00
Richard Osborne
f41c05c7ca [XCore] Fix call of absolute address.
Previously for:

tail call void inttoptr (i64 65536 to void ()*)() nounwind

We would emit:

bl 65536

The immediate operand of the bl instruction is a relative offset so it is
wrong to use the absolute address here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202860 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-04 16:50:30 +00:00
Daniel Sanders
e06bec47d6 [mips][msa] Correct the behaviour of the COPY_FW pseudo on lanes 2 and 3.
Summary:
Previously, attempting to extract lanes 2 and 3 would actually extract lane 1.
The MSA CodeGen tests only covered lanes 0 and 1.

Differential Revision: http://llvm-reviews.chandlerc.com/D2935

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202848 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-04 13:54:30 +00:00
Chad Rosier
168a1af83c Revert "[AArch64] This is a work in progress to provide a machine description"
This reverts commit ff717c8fc786a0cfa1602982b91895fa09e514fc.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202773 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-04 00:32:07 +00:00
Chad Rosier
824dfb1c56 [AArch64] This is a work in progress to provide a machine description
for the Cortex-A53 subtarget in the AArch64 backend.

This patch lays the groundwork to annotate each AArch64 instruction
(no NEON yet) with a list of SchedReadWrite types. The patch also
provides the Cortex-A53 processor resources, maps those to the default
SchedReadWrites, and provides basic latency. NEON support will be added
in a subsequent patch with proper forwarding logic.

Verification was done by setting the pre-RA scheduler to linearize to
better gauge the effect of the MIScheduler. Even without modeling the
forwarding logic, the results show a modest improvement for Cortex-A53.

Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202767 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-03 23:32:47 +00:00
Daniel Sanders
fc210ac1ef [mips] Prevent %lo relocation being used on MSA loads and stores.
Summary:
Parts of the compiler still believed MSA load/stores have a 16-bit offset when
it is actually 10-bit. Corrected this, and fixed a closely related issue this
uncovered where load/stores with 10-bit and 12-bit offsets (MSA and microMIPS
respectively) could not load/store using offsets from the stack/frame pointer.
They accepted frameindex+offset, but not frameindex by itself.

Reviewers: jacksprat, matheusalmeida

Reviewed By: jacksprat

Differential Revision: http://llvm-reviews.chandlerc.com/D2888

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202717 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-03 14:31:21 +00:00
Hal Finkel
5a49125fec Add a PPC inline asm constraint type for single CR bits
Now that the PowerPC backend can track individual CR bits as first-class
registers, we should also have a way of allocating them for inline asm
statements. Because these registers are only one bit, if an output variable is
implicitly cast to a larger integer size, we'll get an any_extend to that
larger type (this is part of the existing target-independent logic). As a
result, regardless of the size of the output type, only the first bit is
meaningful.

The constraint identifier "wc" has been chosen for this purpose. Although gcc
does not currently support allocating individual CR bits, this identifier
choice has been coordinated with the gcc PowerPC team, and will be marked as
reserved for this purpose in the gcc constraints.md file.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202657 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-02 18:23:39 +00:00
Elena Demikhovsky
a9fe27ffb3 AVX-512: Fixed extract_vector_elt for v8i1 vector
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202624 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-02 09:19:44 +00:00
Matt Arsenault
c59a9f09fb R600: Add failing control flow tests.
Simple cases hit a variety of problems at -O0.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202601 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-01 21:45:41 +00:00
Hal Finkel
92b38a9d1c Remove extra truncs/exts around i32 bit operations on PPC64
This generalizes the code to eliminate extra truncs/exts around i1 bit
operations to also do the same on PPC64 for i32 bit operations. This eliminates
a fairly prevalent code wart:

int foo(int a) {
  return a == 5 ? 7 : 8;
}

On PPC64, because of the extension implied by the ABI, this would generate:

	cmplwi 0, 3, 5
	li 12, 8
	li 4, 7
	isel 3, 4, 12, 2
	rldicl 3, 3, 0, 32
	blr

where the 'rldicl 3, 3, 0, 32', the extension, is completely unnecessary. At
least for the single-BB case (which is all that the DAG combine mechanism can
handle), this unnecessary extension is no longer generated.
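
For reference, a sketch of the corresponding IR (the signext attributes are what
carry the ABI extension requirement):

  define signext i32 @foo(i32 signext %a) {
    %cmp = icmp eq i32 %a, 5
    ; Both select results already have their high bits in the right state, so the
    ; combine can drop the extra extension (the rldicl) around the i32 operation.
    %cond = select i1 %cmp, i32 7, i32 8
    ret i32 %cond
  }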

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202600 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-01 21:36:57 +00:00
Venkatraman Govindaraju
17e9537004 [Sparc] Add support for parsing directives in SparcAsmParser.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202564 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-01 02:18:04 +00:00
Venkatraman Govindaraju
c9bf74fdc5 [Sparc] Emit 'restore' instead of 'restore %g0, %g0, %g0'. This improves the readability of the generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202563 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-01 01:04:26 +00:00
Manman Ren
5de5680689 SpillPlacement: fix a bug in iterate.
Inside iterate, we scan backwards then scan forwards in a loop. When iteration
is not zero, the last node was just updated so we can skip it. But when
iteration is zero, we can't skip the last node.

For the testing case, fixing this will save a spill and move register copies
from hot path to cold path.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202557 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-28 23:05:31 +00:00
Tom Stellard
9f0d68f522 R600/SI: Expand all v16[if]32 operations
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202543 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-28 21:36:37 +00:00
Justin Bogner
4ef6a7bf69 CommandLine: Exit successfully for -version and -help
Tools that use the CommandLine library currently exit with an error
when invoked with -version or -help. This is unusual and non-standard,
so we'll fix them to exit successfully instead.

I don't expect that anyone relies on the current behaviour, so this
should be a fairly safe change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202530 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-28 19:08:01 +00:00
Adam Nemet
1b5d421e4d Test commit
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202528 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-28 18:44:39 +00:00
Zoran Jovanovic
2a80d7db79 Fixed operand of SC microMIPS instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202526 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-28 18:22:56 +00:00
Hal Finkel
3d2ce7a5a7 Swap PPC isel operands to allow for 0-folding
The PPC isel instruction can fold 0 into the first operand (thus eliminating
the need to materialize a zero-containing register when the 'true' result of
the isel is 0). When the isel is fed by a bit register operation that we can
invert, do so as part of the bit-register-operation peephole routine.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202469 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-28 06:11:16 +00:00
Hal Finkel
36e1825e68 Add CR-bit tracking to the PowerPC backend for i1 values
This change enables tracking i1 values in the PowerPC backend using the
condition register bits. These bits can be treated on PowerPC as separate
registers; individual bit operations (and, or, xor, etc.) are supported.
Tracking booleans in CR bits has several advantages:

 - Reduction in register pressure (because we no longer need GPRs to store
   boolean values).

 - Logical operations on booleans can be handled more efficiently; we used to
   have to move all results from comparisons into GPRs, perform promoted
   logical operations in GPRs, and then move the result back into condition
   register bits to be used by conditional branches. This can be very
   inefficient, because these CR <-> GPR moves have high
   latency and low throughput (especially when other associated instructions
   are accounted for).

 - On the POWER7 and similar cores, we can increase total throughput by using
   the CR bits. CR bit operations have a dedicated functional unit.

Most of this is more-or-less mechanical: Adjustments were needed in the
calling-convention code, support was added for spilling/restoring individual
condition-register bits, and conditional branch instruction definitions taking
specific CR bits were added (plus patterns and code for generating bit-level
operations).

This is enabled by default when running at -O2 and higher. For -O0 and -O1,
where the ability to debug is more important, this feature is disabled by
default. Individual CR bits do not have assigned DWARF register numbers,
and storing values in CR bits makes them invisible to the debugger.

It is critical, however, that we don't move i1 values that have been promoted
to larger values (such as those passed as function arguments) into bit
registers only to quickly turn around and move the values back into GPRs (such
as happens when values are returned by functions). A pair of target-specific
DAG combines are added to remove the trunc/extends in:
  trunc(binary-ops(binary-ops(zext(x), zext(y)), ...)
and:
  zext(binary-ops(binary-ops(trunc(x), trunc(y)), ...)
In short, we only want to use CR bits where some of the i1 values come from
comparisons or are used by conditional branches or selects. To put it another
way, if we can do the entire i1 computation in GPRs, then we probably should
(on the POWER7, the GPR-operation throughput is higher, and for all cores, the
CR <-> GPR moves are expensive).
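
As a made-up illustration of the profitable case (i1 values produced by
comparisons and consumed by a branch):

  define void @branch_on_bits(i32 %a, i32 %b, i32 %c) {
  entry:
    ; Both i1 values are comparison results feeding a branch, so the whole
    ; computation can stay in CR bits (e.g. crand) with no CR <-> GPR moves.
    %lt1 = icmp slt i32 %a, %b
    %lt2 = icmp slt i32 %a, %c
    %both = and i1 %lt1, %lt2
    br i1 %both, label %then, label %exit

  then:
    tail call void @callee()
    br label %exit

  exit:
    ret void
  }

  declare void @callee()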

POWER7 test-suite performance results (from 10 runs in each configuration):

SingleSource/Benchmarks/Misc/mandel-2: 35% speedup
MultiSource/Benchmarks/Prolangs-C++/city/city: 21% speedup
MultiSource/Benchmarks/MiBench/automotive-susan: 23% speedup
SingleSource/Benchmarks/CoyoteBench/huffbench: 13% speedup
SingleSource/Benchmarks/Misc-C++/Large/sphereflake: 13% speedup
SingleSource/Benchmarks/Misc-C++/mandel-text: 10% speedup

SingleSource/Benchmarks/Misc-C++-EH/spirit: 10% slowdown
MultiSource/Applications/lemon/lemon: 8% slowdown

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202451 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-28 00:27:01 +00:00
Roman Divacky
14551f041b Lower FNEG just like FABS to fneg[ds] and fmov[ds], thus avoiding an
expensive libcall. Also, Qp_neg is not implemented on at least
FreeBSD. This is also what gcc does.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202422 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 19:26:29 +00:00
Adrian Prantl
4bd26ae070 Debug info: Remove ARMAsmPrinter::EmitDwarfRegOp(). AsmPrinter can now
scan the register file for sub- and super-registers.
No functionality change intended.

(Tests are updated because the comments in the assembler output are
different.)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202416 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 17:56:08 +00:00
Richard Osborne
cc331c8f40 [XCore] Support functions returning more than 4 words.
If a function returns a large struct by value return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language which supports
functions returning an arbitrary number of return values. This is
r202397 reapplied with a fix to avoid an uninitialized read of a member.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202414 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 17:47:54 +00:00
Richard Osborne
e4db795a4c Revert r202396, r202397.
These are causing test failures, revert for now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202398 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 14:24:13 +00:00
Richard Osborne
ad4ffce35f [XCore] Support functions returning more than 4 words.
Summary:
If a function returns a large struct by value return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language which supports
functions returning an arbitrary number of return values.

Reviewers: robertlytton

Reviewed By: robertlytton

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2889

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202397 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 14:00:40 +00:00
Richard Osborne
c26292d4dc [XCore] Target optimized library function __memcpy_4()
Summary:
If the src, dst and size of a memcpy are known to be 4 byte aligned we
can call __memcpy_4() instead of memcpy().
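
A hypothetical sketch of a memcpy that qualifies (4-byte-aligned operands and a
length that is a multiple of 4):

  declare void @llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

  define void @copy(i8* %dst, i8* %src) {
    ; dst, src and the 64-byte length are all known 4-byte aligned, so this can
    ; be lowered to a call to __memcpy_4() instead of memcpy().
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 64, i32 4, i1 false)
    ret void
  }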

Reviewers: robertlytton

Reviewed By: robertlytton

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2871

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202395 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 13:39:07 +00:00
Richard Osborne
83eab939a4 [XCore] Add dag combines for instructions that ignore some input bits.
These instructions ignore the high bits of one of their input operands -
try and use this to simplify the code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202394 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 13:20:11 +00:00
Richard Osborne
ef174f733a [XCore] Provide information about known zero bits of resource instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202393 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 13:20:06 +00:00
Daniel Sanders
fb1e26d9a2 Stop test/CodeGen/X86/v4i32load-crash.ll targeting non-X86-64 targets.
Summary:
Fixes an issue where a test attempts to use -mcpu=x86-64 on non-X86-64 targets.
This triggers an assertion in the MIPS backend since it doesn't know what ABI to
use by default for unrecognized processors.

CC: llvm-commits, rafael

Differential Revision: http://llvm-reviews.chandlerc.com/D2877

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202369 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 09:24:31 +00:00
Michel Danzer
644aecfc97 R600/SI: Optimize SI_KILL for constant operands
If the SI_KILL operand is constant, we can either clear the exec mask if
the operand is negative, or do nothing otherwise.

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202337 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 01:47:09 +00:00
Michel Danzer
a5fbf24716 R600/SI: Allow SI_KILL for geometry shaders
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202336 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 01:47:02 +00:00
Andrew Trick
9ad6bda08e Use regnum regex in an XCore test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202315 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 23:22:49 +00:00
Andrew Trick
aa115a7fbf Very temporarily XFAILing a test. Will be fixed shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202310 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 22:39:59 +00:00