Commit Graph

29702 Commits

Author SHA1 Message Date
David Majnemer
9478f0a6ff Mips: Silence a -Wcovered-switch-default
Remove a default label which covered no enumerators and replace it with an
llvm_unreachable.

No functionality changed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212729 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 16:04:04 +00:00
Zoran Jovanovic
0ef47114d3 [mips] Added FPXX modeless calling convention.
Differential Revision: http://reviews.llvm.org/D4293


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212726 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 15:36:12 +00:00
Arnaud A. de Grandmaison
a9af0558b2 [AArch64] Add logical alias instructions to MC AsmParser
This patch teaches the AsmParser to accept some logical+immediate
instructions and convert them as shown:

  bic  Rd, Rn, #imm  ->  and Rd, Rn, #~imm
  bics Rd, Rn, #imm  ->  ands Rd, Rn, #~imm
  orn  Rd, Rn, #imm  ->  orr Rd, Rn, #~imm
  eon  Rd, Rn, #imm  ->  eor Rd, Rn, #~imm

These instructions are an alternate syntax available to assembly coders
and are needed to support code that already compiles with other assemblers.
For example, the bic construct is used by the Linux kernel.
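
To make the rewrite concrete, here is a minimal C++ sketch of the immediate
inversion these aliases imply (illustrative only, not the actual AsmParser
code; the helper name is made up):

  #include <cstdint>

  // Hypothetical helper: "bic Rd, Rn, #imm" becomes "and Rd, Rn, #~imm";
  // the same complement applies to bics->ands, orn->orr and eon->eor.
  uint64_t invertLogicalImmediate(uint64_t Imm, unsigned RegWidthInBits) {
    uint64_t Mask = (RegWidthInBits == 32) ? 0xffffffffULL : ~0ULL;
    return ~Imm & Mask;  // complement, truncated to the register width
  }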

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212722 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 15:12:26 +00:00
Tim Northover
6b0ac2aa02 AArch64: correctly fast-isel i8 & i16 multiplies
We were asking for a register for type i8 or i16 which caused an assert.
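
A hedged sketch of the kind of guard such a fix usually takes in fast-isel
(hypothetical code, not the actual change): widen the illegal i8/i16 type to
i32 before requesting a register, instead of asserting.

  // Hypothetical: operate on a legal type for small multiplies, then let the
  // caller truncate the result back to i8/i16 as needed.
  unsigned widenMulType(unsigned BitWidth) {
    return (BitWidth == 8 || BitWidth == 16) ? 32 : BitWidth;
  }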

rdar://problem/17620015

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212718 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 14:18:46 +00:00
Daniel Sanders
24a071b8c5 [mips] Add support for -modd-spreg/-mno-odd-spreg
Summary:
When -mno-odd-spreg is in effect, 32-bit floating point values are not
permitted in odd FPU registers. The option also prohibits 32-bit and 64-bit
floating point comparison results from being written to odd registers.

This option has three purposes:
* It allows support for certain MIPS implementations such as loongson-3a that
  do not allow the use of odd registers for single precision arithmetic.
* When using -mfpxx, -mno-odd-spreg is the default and this allows us to
  statically check that code is compliant with the O32 FPXX ABI since mtc1/mfc1
  instructions to/from odd registers are guaranteed not to appear for any
  reason. Once this has been established, the user can then re-enable
  -modd-spreg to regain the use of all 32 single-precision registers.
* When using -mfp64 and -mno-odd-spreg together, an O32 extension named
  O32 FP64A is used as the ABI. This is intended to provide almost all
  functionality of an FR=1 processor but can also be executed on an FR=0 core
  with the assistance of a hardware compatibility mode which emulates FR=0
  behaviour on an FR=1 processor.

* Added '.module oddspreg' and '.module nooddspreg', each of which updates
  the .MIPS.abiflags section appropriately.
* Moved setFpABI() call inside emitDirectiveModuleFP() so that the caller
  doesn't have to remember to do it.
* MipsABIFlags now calculates the flags1 and flags2 member on demand rather
  than trying to maintain them in the same format they will be emitted in.

There is one portion of the -mfp64 and -mno-odd-spreg combination that is not
implemented yet. Moves to/from odd-numbered double-precision registers must not
use mtc1. I will fix this in a follow-up.
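
As a minimal illustration (an assumption about how the constraint can be
phrased, not the backend code itself), the core -mno-odd-spreg rule for
32-bit values reduces to an even-register check:

  // With -mno-odd-spreg, 32-bit FP values (and FP comparison results) must
  // live in even-numbered single-precision registers.
  bool isAllowedSingleFPReg(unsigned FPRegNo, bool OddSPRegEnabled) {
    return OddSPRegEnabled || (FPRegNo % 2 == 0);
  }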

Differential Revision: http://reviews.llvm.org/D4383


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212717 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 13:38:23 +00:00
Zinovy Nis
a2bc403710 [x32] Add AsmBackend for X32 which uses ELF32 with x86_64 (the author is Pavel Chupin).
This is the minimal backend change required to have "hello world" compile and work on the x32 target (x86_64-linux-gnux32). More patches for x32 will follow.
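
A hedged sketch of the central idea (hypothetical helper, not the actual
AsmBackend code): x32 keeps the x86_64 instruction set but uses 32-bit ELF
objects, so the container is chosen from the triple's environment.

  #include <string>

  // Assumption for illustration: x32 is identified by the gnux32 environment
  // in the triple (x86_64-linux-gnux32), which selects ELF32 instead of ELF64.
  bool useELF32Container(const std::string &TripleEnvironment) {
    return TripleEnvironment == "gnux32";
  }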

Differential Revision: http://reviews.llvm.org/D4181



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212716 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 13:03:26 +00:00
Richard Sandiford
35dda8a53c [SystemZ] Use SystemZCallingConv.td to define callee-saved registers
Just a clean-up.  No behavioral change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212711 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 11:44:37 +00:00
Richard Sandiford
446067b143 [SystemZ] Tweak instruction format classifications
There's no real need to have Shift as a separate format type from Binary.
The comments for other format types were too specific and in some cases
no longer accurate.

Just a clean-up, no behavioral change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212707 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 11:29:23 +00:00
Chandler Carruth
a4e7f05a28 [x86] Add another combine that is particularly useful for the new vector
shuffle lowering: match shuffle patterns equivalent to an unpcklwd or
unpckhwd instruction.

This allows us to use generic lowering code for v8i16 shuffles and match
the unpack pattern late.
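
For reference, a small sketch of the v8i16 shuffle masks that are equivalent
to the unpack instructions named above (indices 0-7 select from the first
input, 8-15 from the second):

  // Shuffle masks matched as unpcklwd / unpckhwd on v8i16 inputs A and B.
  const int UnpckLoWdMask[8] = {0, 8, 1, 9, 2, 10, 3, 11};   // interleave low words
  const int UnpckHiWdMask[8] = {4, 12, 5, 13, 6, 14, 7, 15}; // interleave high words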

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212705 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 11:09:29 +00:00
Richard Sandiford
9aa8beb9d6 [SystemZ] Add MC support for LEDBRA, LEXBRA and LDXBRA
These instructions aren't used for codegen since the original L*DB instructions
are suitable for fround.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212703 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 11:00:55 +00:00
Richard Sandiford
beefa3ada0 [SystemZ] Avoid using i8 constants for immediate fields
Immediate fields that have no natural MVT type tended to use i8 if the
field was small enough.  This was a bit confusing since i8 isn't a legal
type for the target.  Fields for short immediates in a 32-bit or 64-bit
operation use i32 or i64 instead, so it would be better to do the same
for all fields.

No behavioral change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212702 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 10:52:51 +00:00
Richard Sandiford
b350ec7ec6 [SystemZ] Fix FPR dwarf numbering
The dwarf FPR numbers are supposed to have the order F0, F2, F4, F6,
F1, F3, F5, F7, F8, etc., which matches the pairing of registers for
long doubles.  E.g. a long double stored in F0 is paired with F2.
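
Spelled out as data (only the portion stated above; the registers past F7
continue the same paired scheme):

  // DWARF ordering of the first eight SystemZ FPRs, per the pairing rule above.
  const char *DwarfFPROrder[] = {"F0", "F2", "F4", "F6", "F1", "F3", "F5", "F7"};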


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212701 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 10:45:11 +00:00
Daniel Sanders
b0b3161567 Make it possible for ints/floats to return different values from getBooleanContents()
Summary:
On MIPS32r6/MIPS64r6, floating point comparisons return 0 or -1 but integer
comparisons return 0 or 1.

Updated the various uses of getBooleanContents. Two simplifications had to be
disabled when float and int boolean contents differ:
- ScalarizeVecRes_VSELECT except when the kind of boolean contents is trivially
  discoverable (i.e. when the condition of the VSELECT is a SETCC node).
- visitVSELECT (select C, 0, 1) -> (xor C, 1).
  Come to think of it, this one could test for the common case of 'C'
  being a SETCC too.

Preserved existing behaviour for all other targets and updated the affected
MIPS32r6/MIPS64r6 tests. This also fixes the pi benchmark where the 'low'
variable was counting in the wrong direction because it thought it could simply
add the result of the comparison.
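
A hedged sketch of what a target-side answer looks like under this change
(hypothetical enum and hook, modelled only on the description above):

  // MIPS32r6/MIPS64r6: FP comparisons produce 0 or -1, integer comparisons 0 or 1.
  enum BooleanContent { ZeroOrOne, ZeroOrNegativeOne };

  BooleanContent getBooleanContentsFor(bool IsFloatingPoint) {
    return IsFloatingPoint ? ZeroOrNegativeOne : ZeroOrOne;
  }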

Reviewers: hfinkel

Reviewed By: hfinkel

Subscribers: hfinkel, jholewinski, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D4389


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212697 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 10:18:12 +00:00
Chandler Carruth
977aab501d [x86] Expand the target DAG combining for PSHUFD nodes to be able to
combine into half-shuffles through unpack instructions that expand the
half to a whole vector without messing with the dword lanes.

This fixes some redundant instructions in splat-like lowerings for
v16i8, which are now getting to be *really* nice.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212695 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 09:57:36 +00:00
Chandler Carruth
dc90a3ab8f [x86] Tweak the v16i8 single input special case lowering for shuffles
that splat i8s into i16s.

Previously, we would try much too hard to arrange a sequence of i8s in
one half of the input such that we could unpack them into i16s and
shuffle those into place. This isn't always going to be a cheaper i8
shuffle than our other strategies. The case where it is always going to
be cheaper is when we can arrange all the necessary inputs into one half
using just i16 shuffles. It happens that viewing the problem this way
also makes it much easier to produce an efficient set of shuffles to
move the inputs into one half and then unpack them.

With this, our splat code gets one step closer to being not terrible
with the new experimental lowering strategy. It also exposes two
missing combines which I will add next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212692 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 09:16:40 +00:00
Chandler Carruth
95b14b00da [x86] Initial improvements to the new shuffle lowering for v16i8
shuffles specifically for cases where a small subset of the elements in
the input vector are actually used.

This is specifically targeted at improving the shuffles generated for
trunc operations, but also helps out splat-like operations.

There is still some really low-hanging fruit here that I want to address
but this is a huge step in the right direction.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212680 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 04:34:06 +00:00
Matt Arsenault
425ef825a6 R600/SI: Add support for llvm.convert.{to|from}.fp16
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212676 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 03:22:20 +00:00
Chandler Carruth
c4788790f6 [x86] Refactor some of the new code for lowering v16i8 shuffles to
remove duplication and make it easier to select different strategies.

No functionality changed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212674 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 02:24:26 +00:00
Chandler Carruth
515f228ec3 [SDAG] Make the new zext-vector-inreg node default to expand so targets
don't need to set it manually.

This is based on feedback from Tom who pointed out that if every target
needs to handle this we need to reach out to those maintainers. In fact,
it doesn't make sense to duplicate everything when anything other than
expand seems unlikely at this stage.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212661 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 22:53:04 +00:00
Jim Grosbach
a3edd6a038 AArch64: Better codegen for storing to __fp16.
Storing will generally be immediately preceded by rounding from an f32
or f64, so make sure to match those patterns to convert directly into the
FPR16 register class rather than going through the integer GPRs.

This also eliminates an extra step in the convert-from-f64 path
which was first converting to f32 and then to f16 from there.
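
A tiny example of the source pattern in question (assuming Clang's __fp16
storage type on AArch64; my example, not a test from the patch):

  // Each store to __fp16 is preceded by rounding from f32 or f64; the new
  // patterns keep the value in FP registers instead of bouncing through GPRs.
  void store_half(__fp16 *dst, float f, double d) {
    dst[0] = f;  // round f32 -> f16, then store
    dst[1] = d;  // f64 path no longer detours through f32
  }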

rdar://17594379

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212638 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 18:55:52 +00:00
Benjamin Kramer
08b75dd061 TargetRegisterInfo: Remove function that fell out of use years ago.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212636 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 18:53:57 +00:00
Adam Nemet
074b752cc9 [X86] AVX512: Enable it in the Loop Vectorizer
This lets us experiment with 512-bit vectorization without passing
force-vector-width manually.

The code generated for a simple integer memset loop is properly vectorized.
Disassembly is still broken for it though :(.
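
A representative "simple integer memset loop" of the kind mentioned above
(my own example, not one from the test suite):

  // With AVX512 enabled in the loop vectorizer, this can now be vectorized
  // with 512-bit vectors without passing -force-vector-width.
  void int_memset(int *p, int value, int n) {
    for (int i = 0; i < n; ++i)
      p[i] = value;
  }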

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212634 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 18:22:33 +00:00
Louis Gerbarg
479af7ce0c Make AArch64FastISel::EmitIntExt explicitly check its source and destination types
This is a follow up to r212492. There should be no functional difference, but
this patch makes it clear that SrcVT must be an i1/i8/i16/i32 and DestVT must be
an i8/i16/i32/i64.
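
A hedged sketch of the added check, using a stand-in type enum rather than the
real MVT class (hypothetical code that mirrors the description):

  enum SimpleVT { i1, i8, i16, i32, i64 };

  // EmitIntExt only handles these source/destination combinations; anything
  // else should be rejected rather than hitting an assertion later.
  bool isValidIntExt(SimpleVT SrcVT, SimpleVT DestVT) {
    bool SrcOK  = SrcVT == i1 || SrcVT == i8 || SrcVT == i16 || SrcVT == i32;
    bool DestOK = DestVT == i8 || DestVT == i16 || DestVT == i32 || DestVT == i64;
    return SrcOK && DestOK;
  }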

rdar://17516686

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212633 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 17:54:32 +00:00
Benjamin Kramer
4afbd3e941 X86: When lowering v8i32 himuls use the correct shuffle masks for AVX2.
Turns out my trick of using the same masks for SSE4.1 and AVX2 didn't work out
as we have to blend two vectors. While there, remove unnecessary cross-lane moves
from the shuffles so the backend can lower it to palignr instead of vperm.

Fixes PR20118, a miscompilation of vector sdiv by constant on AVX2.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212611 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 11:12:39 +00:00
Chandler Carruth
ce184e95f9 [x86] Add a ZERO_EXTEND_VECTOR_INREG DAG node and use it when widening
vector types to be legal and a ZERO_EXTEND node is encountered.

When we use widening to legalize vector types, extend nodes are a real
challenge. Either the input or output is likely to be legal, but in many
cases not both. As a consequence, we don't really have any way to
represent this situation and the prior code in the widening legalization
framework would just scalarize the extend operation completely.

This patch introduces a new DAG node to represent doing a zero extend of
a vector "in register". The core of the idea is to allow legal but
different vector types in the input and output. The output vector must
have fewer lanes but wider elements. The operation is defined to zero
extend the low elements of the input to the size of the output elements,
and drop all of the high elements which don't have a corresponding lane
in the output vector.

It also includes generic expansion of this node in terms of blending
a zero vector into the high elements of the vector and bitcasting
across. This in turn yields extremely nice code for x86 SSE2 when we use
the new widening legalization logic in conjunction with the new shuffle
lowering logic.

There is still more to do here. We need to support sign extension, any
extension, and potentially int-to-float conversions. My current plan is
to continue using similar synthetic nodes to model each of these
transitions with generic lowering code for each one.

However, with this patch LLVM already reaches performance parity with
GCC for the core C loops of the x264 code (assuming you disable the
hand-written assembly versions) when compiling for SSE2 and SSE3
architectures and enabling the new widening and lowering logic for
vectors.
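
A scalar reference model of the new node's semantics (a sketch of the
definition above, not the DAG implementation): low input lanes are
zero-extended to the wider output element type and the remaining high lanes
are dropped.

  #include <cstdint>
  #include <vector>

  // ZERO_EXTEND_VECTOR_INREG from v8i16 to v4i32: output lane i is the
  // zero extension of input lane i; input lanes 4..7 have no output lane.
  std::vector<uint32_t> zextVectorInReg(const std::vector<uint16_t> &In) {
    std::vector<uint32_t> Out(In.size() / 2);
    for (size_t i = 0; i < Out.size(); ++i)
      Out[i] = static_cast<uint32_t>(In[i]);  // zero extend the low elements
    return Out;
  }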

Differential Revision: http://reviews.llvm.org/D4405

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212610 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 10:58:18 +00:00
Daniel Sanders
1285b3130b [mips][mips64r6] Correct select patterns that have the condition or true/false values backwards
Summary: This bug caused SingleSource/Regression/C/uint64_to_float and SingleSource/UnitTests/2002-05-02-CastTest3 to fail (among others).

Differential Revision: http://reviews.llvm.org/D4388


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212608 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 10:47:26 +00:00
Daniel Sanders
388704618e [mips][mips64r6] Correct cond names in the cmp.cond.[ds] instructions
Summary:
It seems we accidentally read the wrong column of the table in the MIPS64r6 spec
and used the names for c.cond.fmt instead of cmp.cond.fmt.

Differential Revision: http://reviews.llvm.org/D4387


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212607 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 10:40:20 +00:00
Chandler Carruth
a2a8a72f6f [x86] Initialize a pointer to null to fix a bug in r212602.
This should restore GCC hosts (which happen to put the bad stuff into
the pointer) and MSan, etc.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212606 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 10:36:42 +00:00
Daniel Sanders
f08bcb9b97 [mips][mips64r6] Use JALR for indirect branches instead of JR (which is not available on MIPS32r6/MIPS64r6)
Summary:
This completes the change to use JALR instead of JR on MIPS32r6/MIPS64r6.

Reviewers: jkolek, vmedic, zoran.jovanovic, dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D4269


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212605 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 10:21:59 +00:00
Daniel Sanders
7c2ef822f7 [mips][mips64r6] Use JALR for returns instead of JR (which is not available on MIPS32r6/MIPS64r6)
Summary:
RET and RET_MM have been replaced by a pseudo named PseudoReturn.
In addition, a version with a 64-bit GPR named PseudoReturn64 has been
added.

Instruction selection for a return matches RetRA, which is expanded post
register allocation to PseudoReturn/PseudoReturn64. During MipsAsmPrinter,
PseudoReturn/PseudoReturn64 is emitted as:
- (JALR64 $zero, $rs) on MIPS64r6
- (JALR $zero, $rs) on MIPS32r6
- (JR_MM $rs) on microMIPS
- (JR $rs) otherwise

On MIPS32r6/MIPS64r6, 'jr $rs' is an alias for 'jalr $zero, $rs'. To aid
development and review (specifically, to ensure all cases of jr are
updated), these aliases are temporarily named 'r6.jr' instead of 'jr'.
A follow up patch will change them back to the correct mnemonic.

Added (JALR $zero, $rs) to MipsNaClELFStreamer's definition of an indirect
jump, and removed it from its definition of a call.
Note: I haven't accounted for MIPS64 in MipsNaClELFStreamer since it
doesn't appear to account for any MIPS64 specifics.

The return instruction created as part of eh_return expansion is now expanded
using expandRetRA() so we use the right return instruction on MIPS32r6/MIPS64r6
('jalr $zero, $rs').

Also, fixed a misuse of isABI_N64() to detect 64-bit wide registers in
expandEhReturn().

Reviewers: jkolek, vmedic, mseaborn, zoran.jovanovic, dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D4268


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212604 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 10:16:07 +00:00
Chandler Carruth
98eac0a244 [x86] Re-apply a variant of the x86 side of r212324 now that the rest
has settled without incident, removing the x86-specific and overly
strict 'isVectorSplat' routine in favor of generic and more powerful
splat detection.

The primary motivation and result of this is that the x86 backend can
now see through splats which contain undef elements. This is essential
if we are using a widening form of legalization and I've updated a test
case to also run in that mode as before this change the generated code
for the test case was completely scalarized.

This version of the patch much more carefully handles the undef lanes.
- We aren't overly conservative about them in the shift lowering
  (where we will never use the splat itself).
- One place where the splat would have been re-used by the existing code
  now explicitly constructs a new constant splat that will be safe.
- The broadcast lowering is much more reasonable with undefs by doing
  a correct check of whether the splat is the only user of a loaded
  value, checking that the splat actually crosses multiple lanes before
  using a broadcast, and handling broadcasts of non-constant splats.

As a consequence of the last bullet, the weird usage of vpshufd instead
of vbroadcast is gone, and we actually can lower an AVX splat with
vbroadcastss where before we emitted a really strange pattern of
a vector load and a manual splat across the vector.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212602 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-09 10:06:58 +00:00
NAKAMURA Takumi
717c9da9d2 MipsTargetStreamer.h: Avoid "using" to appease msc17.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212577 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 23:48:22 +00:00
Jim Grosbach
05bb7c5045 AArch64: Better codegen for loading from __fp16.
Loading will generally extend to an f32 or an f64, so make sure
to match those patterns to load directly into the FPR16 register
class rather than going through the integer GPRs.

This also eliminates an extra step in the convert-to-f64 path
which was first converting to f32 and then to f64 from there.

rdar://17594379

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212573 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 23:28:48 +00:00
Ulrich Weigand
b7fdc7ff16 [PowerPC] Implement atomic NAND operations as actual NAND
This changes the implementation of atomic NAND operations
from "a & ~b" (compatible with GCC < 4.4) to actual "~(a & b)"
(compatible with GCC >= 4.4).

This is in line with the common-code and ARM back-end change
implemented in r212433.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212547 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 16:16:02 +00:00
Daniel Sanders
7a16d24f8d [mips] Fixed struct/class mismatch introduced in r212522.
Clang emits a warning about this.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212528 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 13:13:42 +00:00
Daniel Sanders
3c6b29cbde Fix r212522 - [mips] Improve encapsulation of the .MIPS.abiflags implementation and limit scope of related enums
Added two lines that should have been in r212522.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212523 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 10:35:52 +00:00
Daniel Sanders
fbdb8e1eac [mips] Improve encapsulation of the .MIPS.abiflags implementation and limit scope of related enums
Summary:
Follow on to r212519 to improve the encapsulation and limit the scope of the enums.

Also merged two very similar parser functions, fixed a bug where ASEs
were not being reported, and marked CPR1s as being 128-bit when MSA is
enabled.

Differential Revision: http://reviews.llvm.org/D4384


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212522 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 10:11:38 +00:00
Renato Golin
aebcee661e Revert "Refactor ARM subarchitecture parsing"
This reverts commit 7b4a688246.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212521 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 10:06:16 +00:00
Arnaud A. de Grandmaison
60d8767211 Truncate the immediate in logical operations to the register width
And continue to produce an error if the 32 most significant bits are not all ones or zeros.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212520 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 09:53:04 +00:00
Vladimir Medic
ffbc2a1325 Mips.abiflags is a new implicitly generated section that will be present in all new modules. The section contains a versioned data structure which essentially represents information allowing a program loader to determine the requirements of the application. This patch implements the mips.abiflags section and provides test cases for it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212519 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 08:59:22 +00:00
Chandler Carruth
25b7d54e7f [x86,SDAG] Sink the logic for folding shuffles of splats more
aggressively from the x86 shuffle lowering to the generic SDAG vector
shuffle formation code.

This code already tried to fold away shuffles of splats! It just had
lots of bugs and couldn't handle the case my new x86 shuffle lowering
needed.

First, it failed to correctly compute whether N2 was undef because it
pre-computed this, then did transformations which could *make* N2 undef,
then failed to ever re-consider the precomputed state.

Second, it didn't look through bitcasts at all, even in the safe cases
where they are just element-type bitcasts with no change to the number
of elements.

Third, it didn't handle all-zero bit casts nicely the way my code in the
x86 side of things did, which is essential to getting good zext-shuffle
lowerings.

But all of these are generic. I just ported the code down to this layer
and fixed the surrounding bugs. Tests exercising this in the x86 backend
still pass and some silly code in widen_cast-6.ll gets better. I updated
that test to be a bit more precise but it's still pretty unclear what
the value of the test is in this day and age.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212517 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 08:45:38 +00:00
Adam Nemet
f189d9cdf7 [X86] AVX512: Only allow k1-k7 as predicates to vpcmp*
As a destination k0 is allowed, but not as a predicate/writemask.

I also modified the test to allow checking of error messages by the assembler.
I applied a similar approach to the test ret.s in the same directory.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212504 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-08 00:22:32 +00:00
Andrea Di Biagio
cfb83b7bac [x86] Fix assertion failure caused by a wrong combine of PSHUFD nodes with different types.
When combining a sequence of two PSHUFD dag nodes into a single PSHUFD,
make sure that we assign the correct type to the resulting PSHUFD.
X86ISD::PSHUFD dag nodes can be either MVT::v4i32 or MVT::v4f32.

Before this change, an assertion failure was triggered in method
'DAGCombinerInfo::CombineTo' when trying to combine the shuffles from the test
below into a single PSHUFD.

define <4 x float> @test1(<4 x float> %V) {
  %1 = shufflevector <4 x float> %V, <4 x float> undef, <4 x i32> <i32 3, i32 0, i32 2, i32 1>
  %2 = shufflevector <4 x float> %1, <4 x float> undef, <4 x i32> <i32 3, i32 0, i32 2, i32 1>
  ret <4 x float> %2
}



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212498 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-07 23:25:23 +00:00
Juergen Ributzka
1154be8198 [FastISel][X86] Fix smul.with.overflow.i8 lowering.
Add custom lowering code for signed multiply instruction selection, because the
default FastISel instruction selection for ISD::MUL will use unsigned multiply
for the i8 type and signed multiply for all other types. This would set the
incorrect flags for the overflow check.

This fixes <rdar://problem/17549300>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212493 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-07 21:52:21 +00:00
Louis Gerbarg
e7f8191b18 Allow AArch64FastISel to degrade gracefully in the presence of an MVT::i128
Currently AArch64FastISel crashes if it tries to extend an integer into an
MVT::i128. This can happen by creating 128-bit integers like so:

  typedef unsigned int uint128_t __attribute__((mode(TI)));
  typedef int sint128_t __attribute__((mode(TI)));

This patch makes EmitIntExt check for their presence and then falls back to
SelectionDAG.

Tests included.
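
A hedged sketch of the guard (hypothetical; in FastISel, returning false for
the unsupported width is what falls back to SelectionDAG):

  // Anything wider than 64 bits -- e.g. the i128 produced by
  // __attribute__((mode(TI))) -- is left to SelectionDAG.
  bool fastIselCanExtendTo(unsigned DestSizeInBits) {
    return DestSizeInBits <= 64;
  }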

rdar://17516686

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212492 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-07 21:37:51 +00:00
Renato Golin
7b4a688246 Refactor ARM subarchitecture parsing
According to a FIXME in ARMMCTargetDesc.cpp the ARM version parsing should be
in the Triple helper class.

Patch by: Gabor Ballabas

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212479 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-07 20:01:11 +00:00
Ulrich Weigand
b053ddc909 [PowerPC] Fix no-assert build
r212476 caused a compile failure (unused variable) in a non-assertion
build ...



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212477 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-07 19:39:44 +00:00
Ulrich Weigand
bf7bfe3549 [PowerPC] Fix "byval align" arguments
Arguments passed as "byval align" should get the specified alignment
in the parameter save area.  There was some code in PPCISelLowering.cpp
that attempted to implement this, but this didn't work correctly:
while code did update the ArgOffset value, it neglected to update
the PtrOff value (which was already computed from the old ArgOffset),
and it also neglected to update GPR_idx -- fields skipped due to
alignment in the save area must likewise be skipped in GPRs.

This patch fixes and simplifies this logic by:
- handling argument offset alignment right at the beginning
  of argument processing, using a new helper routine
  CalculateStackSlotAlignment (this avoids having to update
  PtrOff and other derived values later on)
- not tracking GPR_idx separately, but always computing the
  correct GPR_idx for each argument *from* its ArgOffset
- removing some redundant computation in LowerFormalArguments:
  MinReservedArea must equal ArgOffset after argument processing,
  so there's no use in computing it twice.

[This doesn't change the behavior of the current clang front-end,
since that never creates "byval align" arguments at the moment.
This will change with a follow-on patch, however.]
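
A hedged model of the two rules described above (hypothetical helpers, not the
PPCISelLowering.cpp code; the LinkageSize-based GPR mapping is an assumption):
handle alignment once, up front, and always derive the GPR index from the
aligned offset instead of tracking it separately.

  // Align the argument offset before laying out the field.
  unsigned alignArgOffset(unsigned ArgOffset, unsigned Alignment) {
    return (ArgOffset + Alignment - 1) & ~(Alignment - 1);
  }

  // Recompute the GPR index from the offset, so fields skipped for alignment
  // in the save area are automatically skipped in GPRs as well.
  unsigned gprIndexForOffset(unsigned ArgOffset, unsigned LinkageSize,
                             unsigned PtrByteSize) {
    return (ArgOffset - LinkageSize) / PtrByteSize;
  }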


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212476 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-07 19:26:41 +00:00
Chandler Carruth
7fcb422bb2 [x86] Revert r212324 which was too aggressive w.r.t. allowing undef
lanes in vector splats.

The core problem here is that undef lanes can't *unilaterally* be
considered to contribute to splats. Their handling needs to be more
cautious. There is also a reported failure of the nightly testers
(thanks Tobias!) that may well stem from the same core issue. I'm going
to fix this theoretical issue, factor the APIs a bit better, and then
verify that I don't see anything bad with Tobias's reduction from the
test suite before recommitting.

Original commit message for r212324:
  [x86] Generalize BuildVectorSDNode::getConstantSplatValue to work for
  any constant, constant FP, or undef splat and to tolerate any undef
  lanes in a splat, then replace all uses of isSplatVector in X86's
  lowering with it.

  This fixes issues where undef lanes in an otherwise splat vector would
  prevent the splat logic from firing. It is a touch more awkward to use
  this interface, but it is much more accurate. Suggestions for better
  interface structuring welcome.

  With this fix, the code generated with the widening legalization
  strategy for widen_cast-4.ll is *dramatically* improved as the special
  lowering strategies for a v16i8 SRA kick in even though the high lanes
  are undef.

  We also get a slightly different choice for broadcasting an aligned
  memory location, and use vpshufd instead of vbroadcastss. This looks
  like a minor win for pipelining and domain crossing, but a minor loss
  for the number of micro-ops. I suspect it's a wash, but folks can
  easily tweak the lowering if they want.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212475 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-07 19:03:32 +00:00
Matt Arsenault
0e1619e77c R600: Fix mishandling of load / store chains.
Fixes various bugs with reordering loads and stores.
Scalarized vector loads weren't collecting the chains
at all.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212473 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-07 18:34:45 +00:00