Commit Graph

118 Commits

Author SHA1 Message Date
Tim Northover
e87cadc49a ARM64: fix SELECT_CC lowering in absence of NaNs.
We were swapping the true & false results while testing for FMAX/FMIN,
but not putting them back to the original state if the later checks
failed.

Should fix PR19700.
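
As a rough illustration of the pattern involved (function and value names here
are only illustrative), the problem shows up on IR of this shape:

  define double @select_max(double %a, double %b) nounwind {
    ; With no-NaNs FP math this compare-plus-select can become an fmax, but if
    ; the later checks reject that lowering, the swapped operands must be put
    ; back or the resulting select picks the wrong value.
    %cmp = fcmp ogt double %a, %b
    %max = select i1 %cmp, double %a, double %b
    ret double %max
  }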

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208469 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-10 07:37:50 +00:00
James Molloy
00c4dbd10e [ARM64-BE] Teach fast-isel about how to set up sub-word stack arguments for big endian calls.
SelectionDAG already knows about this, but fast-isel was ignorant.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208307 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 12:53:50 +00:00
Tim Northover
291cd09645 ARM64: make sure FastISel emits SSA MachineInstrs
We need to use a temporary register for a 2-step operation like REM.
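
A minimal sketch of such a 2-step operation (names are illustrative):

  define i32 @remainder(i32 %a, i32 %b) nounwind {
    ; srem expands to a divide followed by a multiply-subtract, roughly
    ;   sdiv w8, w0, w1
    ;   msub w0, w8, w1, w0
    ; so the intermediate quotient must live in its own virtual register to
    ; keep the emitted MachineInstrs in SSA form.
    %rem = srem i32 %a, %b
    ret i32 %rem
  }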

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208297 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 10:30:56 +00:00
Hao Liu
1c2f863df9 AArch64/ARM64: Port NEON post-increment load/store with 2/3/4 vectors to ARM64 backend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208284 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 07:38:13 +00:00
Chad Rosier
8f0f458824 [ARM64][fast-isel] Disable target specific optimizations at -O0. Functionally,
this patch disables the dead register elimination pass and the load/store pair
optimization pass at -O0.  The ILP optimizations don't require the optimization
level to be checked because the call to addILPOpts is predicated with the
necessary check.  The AdvSIMDScalar pass is disabled by default at all
optimization levels.  This patch leaves that pass disabled by default.

Also, move command-line options into ARM64TargetMachine.cpp and add a few
additional flags to aid in debugging.  This fixes an issue with the
-debug-pass=Structure flag where passes were printed, but not actually run
(i.e., AdvSIMDScalar pass).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208223 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 16:41:55 +00:00
Tim Northover
04a359f768 AArch64/ARM64: optimise vector selects & enable test
When performing a scalar comparison that feeds into a vector select,
it's actually better to do the comparison on the vector side: the
scalar route would be "CMP -> CSEL -> DUP", while the vector route is
"CM -> DUP", since the vector comparisons are all mask-based.
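
A small example of the shape in question (names are illustrative): a scalar
compare whose single bit chooses between two whole vectors.

  define <4 x i32> @choose(float %a, float %b, <4 x i32> %x, <4 x i32> %y) nounwind {
    ; Doing the compare on the vector side yields the lane mask the vector
    ; select needs, following the "CM -> DUP" route above rather than
    ; "CMP -> CSEL -> DUP".
    %cmp = fcmp oeq float %a, %b
    %sel = select i1 %cmp, <4 x i32> %x, <4 x i32> %y
    ret <4 x i32> %sel
  }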

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208210 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 14:10:27 +00:00
James Molloy
2712c87cfe [ARM64-BE] Fix fast-isel, and add appropriate RUN lines to appropriate tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208200 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:33:55 +00:00
James Molloy
d93d214a67 [ARM64-BE] Fix variable-argument saving.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208199 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:33:48 +00:00
James Molloy
fca7f5c585 [ARM64-BE] Implement the lane-twiddling logic at AAPCS boundaries for big endian.
The AAPCS states that values passed in registers must have a value as though
they had been loaded with "LDR". LDR is equivalent to "LD1.64 vX.1D" - that is,
loading a scalar to a vector register and loading a 1-element vector are
equivalent.

The logic implemented here is to ensure that at all call boundaries and during
formal argument lowering all vectors are treated as their bitwidth-based floating
point scalar counterpart, which is always one of f64 or f128 (v2i32 -> f64,
v4i32 -> f128 etc). A BITCAST is inserted so that the appropriate REV will be
generated during code generation.
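
For example (a sketch; the function is illustrative), a <2 x i32> parameter is
treated as an f64 at the boundary, so a bitcast is inserted and the REV falls
out of the usual big-endian lowering:

  define i32 @first_lane(<2 x i32> %v) nounwind {
    ; %v arrives as if loaded with "ldr d0", i.e. as an f64-sized value; the
    ; inserted bitcast lets the backend emit the REV needed to recover the
    ; lane order on big-endian targets.
    %lane = extractelement <2 x i32> %v, i32 0
    ret i32 %lane
  }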

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208198 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:33:41 +00:00
James Molloy
737c2ac4fc [ARM64-BE] Implement the crazy bitcast handling for big endian vectors.
Because we've canonicalised on using LD1/ST1, every time we do a bitcast
between vector types we must do an equivalent lane reversal.

Consider a simple memory load followed by a bitconvert then a store.
  v0 = load v2i32
  v1 = BITCAST v2i32 v0 to v4i16
       store v4i16 v1

In big endian mode every memory access has an implicit byte swap. LDR and
STR do a 64-bit byte swap, whereas LD1/ST1 do a byte swap per lane - that
is, they treat the vector as a sequence of elements to be byte-swapped.
The two pairs of instructions are fundamentally incompatible. We've decided
to use LD1/ST1 only to simplify compiler implementation.

LD1/ST1 perform the equivalent of a sequence of LDR/STR + REV. This makes
the original code sequence:

  v0 = load v2i32
  v1 = REV v2i32 v0               (implicit)
  v2 = BITCAST v2i32 v1 to v4i16
  v3 = REV v4i16 v2               (implicit)
       store v4i16 v3

But this is now broken - the value stored is different to the value loaded
due to lane reordering. To fix this, on every BITCAST we must perform two
other REVs:

  v0 = load v2i32
  v1 = REV v2i32 v0               (implicit)
  v2 = REV v2i32 v1
  v3 = BITCAST v2i32 v2 to v4i16
  v4 = REV v4i16 v3
  v5 = REV v4i16 v4               (implicit)
       store v4i16 v5

This means an extra two instructions, but actually in most cases the two REV
instructions can be combined into one. For example:
  (REV64_2s (REV64_4h X)) === (REV32_4h X)

There is also no 128-bit REV instruction. This must be synthesized with an
EXT instruction.

Most bitconverts require some sort of conversion. The only exceptions are:
  a) Identity conversions -  vNfX <-> vNiX
  b) Single-lane-to-scalar - v1fX <-> fX or v1iX <-> iX
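
In IR terms, the motivating load/bitcast/store sequence above corresponds
roughly to (pointer and value names are illustrative):

  define void @copy_as_v4i16(<2 x i32>* %src, <4 x i16>* %dst) nounwind {
    %v0 = load <2 x i32>* %src            ; big endian: LD1, implicit per-lane swap
    %v1 = bitcast <2 x i32> %v0 to <4 x i16>
    store <4 x i16> %v1, <4 x i16>* %dst  ; big endian: ST1, implicit per-lane swap
    ret void
  }

The two explicit REVs inserted around the bitcast are what keep the stored
bytes identical to the loaded bytes here.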

Even though there are hundreds of changed lines, I have a fairly high confidence
that they are somewhat correct. The changes to add two REV instructions per
bitcast were pretty mechanical, and once I'd done that I threw the resulting
.td at a script I wrote which combined the two REVs together (and added
an EXT instruction, for f128) based on an instruction description I gave it.

This was much less prone to error than doing it all manually, plus my brain
would not just have melted but would have vapourised.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208194 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 11:28:53 +00:00
James Molloy
104629cc7c [ARM64-BE] Make big endian (scalar) argument passing work correctly.
This completes the port of r204814 (cpirker "AArch64_BE function argument
passing for ARM ABI") from AArch64 to ARM64, and fixes a bunch of issues
found during later development along the way. The biggest of these was
that the alignment fixup logic wasn't replicated into all the places it
should have been.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208192 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 11:28:36 +00:00
Renato Golin
22f779d1fd Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
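
A minimal sketch of reading the stack pointer through the new intrinsic (the
surrounding function is illustrative; only sp is supported for now, as noted
above):

  declare i64 @llvm.read_register.i64(metadata)

  define i64 @current_sp() nounwind {
    %sp = call i64 @llvm.read_register.i64(metadata !0)
    ret i64 %sp
  }

  !0 = metadata !{metadata !"sp"}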

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208104 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 16:51:25 +00:00
Kevin Qin
03145ebd88 [ARM64] Enable alignment control option in front-end for ARM64.
This is the LLVM part of the change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208074 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 09:48:52 +00:00
Rafael Espindola
260b6b05b9 Convert a CodeGen test into a MC test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207971 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-05 15:34:13 +00:00
Joey Gouly
72e96a51bf [ARM64] Correctly select ANDWri in FastISel.
http://reviews.llvm.org/D3598


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207917 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-03 17:27:06 +00:00
Tim Northover
b20252764d DAGCombine: prevent formation of illegal ConstantFP nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207850 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-02 17:25:02 +00:00
Tim Northover
ecc1896600 AArch64/ARM64: add patterns for post-indexed ST1 ops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207840 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-02 14:54:27 +00:00
Tim Northover
6f86e23c1a AArch64/ARM64: support indexed loads/stores on vector types.
While post-indexed LD1/ST1 instructions do exist for vector loads,
this patch makes use of the more flexible addressing-modes in LDR/STR
instructions.
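
A sketch of the kind of code that benefits (names are illustrative): the
pointer bump below can now fold into a post-indexed ldr rather than needing a
separate add.

  define <4 x i32> @load_and_advance(<4 x i32>** %pp) nounwind {
    %p = load <4 x i32>** %pp
    %v = load <4 x i32>* %p                   ; e.g. ldr q0, [x8], #16
    %p.next = getelementptr <4 x i32>* %p, i64 1
    store <4 x i32>* %p.next, <4 x i32>** %pp
    ret <4 x i32> %v
  }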

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207838 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-02 14:54:15 +00:00
Bradley Smith
b378cacf1d [ARM64] Prefer generation of bzero on Darwin only
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207760 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-01 13:11:59 +00:00
Tim Northover
f2f35a9ca3 AArch64/ARM64: print BFM instructions as BFI or BFXIL
The canonical form of the BFM instruction is always one of the more explicit
extract or insert operations, which makes reading output much easier.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207752 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-01 12:29:38 +00:00
Weiming Zhao
fa1cf8cd68 [ARM64] Prevent bit extraction to be adjusted by following shift
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it
into (x >> (C1-C2)) & (Mask << C2), which makes the ubfx pattern harder to
match.
For example:
Given
  %shr = lshr i64 %x, 4
  %and = and i64 %shr, 15
  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
  %0 = load i64* %arrayidx
With current shift folding, it takes 3 instrs to compute base address:
  lsr x8, x0, #1
  and x8, x8, #0x78
  add x8, x9, x8

If using ubfx, it only needs 2 instrs:
  ubfx  x8, x0, #4, #4
  add x8, x9, x8, lsl #3

This fixes bug 19589


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207702 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 21:07:24 +00:00
Tim Northover
b1c1b8a78d ARM64: print fp immediates without using scientific notation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207669 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 16:13:34 +00:00
Chad Rosier
fa2e88da1c [ARM64][fast-isel] Fast-isel doesn't know how to handle f128.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207659 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 15:29:57 +00:00
Tim Northover
44a2f5610d ARM64: print lsr instead of lsrv for variable shifts (etc)
The canonical syntax for shifts by a variable amount does not end with 'v', but
that syntax should be supported as an alias (presumably for legacy reasons).
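
For example (illustrative function), a variable shift:

  define i64 @shift_right(i64 %x, i64 %amt) nounwind {
    ; Now printed with the canonical mnemonic, e.g. "lsr x0, x0, x1";
    ; the "lsrv" spelling remains accepted as an alias.
    %r = lshr i64 %x, %amt
    ret i64 %r
  }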

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207649 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 13:37:07 +00:00
Tim Northover
d805bf8d61 AArch64/ARM64: use HS instead of CS & LO instead of CC.
On instructions using the NZCV register, a couple of conditions have dual
representations: HS/CS and LO/CC (meaning unsigned-higher-or-same/carry-set and
unsigned-lower/carry-clear). The first of these is more descriptive in most
circumstances, so we should print it.
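
For example (illustrative function), an unsigned >= comparison now prints with
the hs condition rather than cs:

  define i32 @is_uge(i64 %a, i64 %b) nounwind {
    ; Roughly:  cmp x0, x1
    ;           cset w0, hs
    %cmp = icmp uge i64 %a, %b
    %res = zext i1 %cmp to i32
    ret i32 %res
  }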

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207644 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 13:14:03 +00:00
Tim Northover
ebde5a5e49 ARM64: use hex immediates for movz/movk instructions
Since these are mostly used in "lsl #16", "lsl #32", "lsl #48" combinations to
piece together an immediate in 16-bit chunks, hex is probably the most
appropriate format.
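
For example (illustrative function), materialising 0x0123456789abcdef:

  define i64 @big_constant() nounwind {
    ; Built up in 16-bit chunks, now printed along the lines of
    ;   movz x0, #0xcdef
    ;   movk x0, #0x89ab, lsl #16
    ;   movk x0, #0x4567, lsl #32
    ;   movk x0, #0x123, lsl #48
    ret i64 81985529216486895    ; 0x0123456789abcdef
  }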

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207635 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 11:19:40 +00:00
Tim Northover
87476b607c ARM64: hexify printing various immediate operands
This is mostly aimed at the NEON logical operations and MOVI/MVNI (since they
accept weird shifts which are more naturally understandable in hex notation).

Also changes BRK/HINT etc, which is probably a neutral change, but easier than
the alternative.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207634 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 11:19:28 +00:00
Tim Northover
2a2cce79be ARM64: print canonical syntax for add/sub (imm) instructions.
Since these instructions only accept a 12-bit immediate, possibly shifted left
by 12, the canonical syntax used by the architecture reference manual is "#N {,
lsl #12 }". We should accept an immediate that has already been shifted.

Also, print a comment giving the full addend since it can be helpful.
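
For example (illustrative function), adding 4096 needs the shifted form:

  define i64 @add_page(i64 %x) nounwind {
    ; Printed canonically as "add x0, x0, #1, lsl #12", with a comment
    ; giving the full addend (=4096) as described above.
    %r = add i64 %x, 4096
    ret i64 %r
  }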

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207633 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 11:19:15 +00:00
Tim Northover
5b188b1cb8 ARM64: make sure FastISel uses a GPR64 source in 64-bit extensions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207620 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 09:32:01 +00:00
Hao Liu
5bbe6121c3 [ARM64] Fix an incorrect operand order in an EXT instruction, introduced by r207485.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207500 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-29 07:51:19 +00:00
Hao Liu
270f09d712 [ARM64] Fix a bug when lowering a shuffle vector to an EXT instruction.
E.g. a mask like <-1, -1, 1, ...> would generate an incorrect EXT index.
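
A sketch of a shuffle of that shape (the exact mask is illustrative), where
the leading undef (-1) lanes must not throw off the computed EXT index:

  define <8 x i8> @rotate_with_undefs(<8 x i8> %a, <8 x i8> %b) nounwind {
    %s = shufflevector <8 x i8> %a, <8 x i8> %b,
         <8 x i32> <i32 undef, i32 undef, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8>
    ret <8 x i8> %s
  }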


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207485 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-29 01:50:36 +00:00
Chad Rosier
2f3691eb61 [ARM64] Fix an issue where we were always assuming a copy was coming from a D subregister.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207423 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-28 16:21:50 +00:00
Hao Liu
0ddc7447d9 [ARM64] Fix a bug where we could not select UQSHL/SQSHL with a constant i64 shift amount.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207399 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-28 07:34:27 +00:00
Benjamin Kramer
3fd5902758 Update test not to check for a shuffle of an all-zero vector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207354 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-27 11:54:45 +00:00
Michael Zolotukhin
abd7ca0706 Revert r206749 till a final decision about the intrinsics is made.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207313 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 09:56:41 +00:00
Tilmann Scheller
e1cd93134f [ARM64] When compiling for ELF in PIC mode, local symbols shouldn't go through the GOT
There's no need for local symbols to go through the GOT; in fact, GNU ld does
not even seem to emit GOT entries for local symbols and will error out when
trying to resolve a GOT relocation against a local symbol.

This bug triggers when bootstrapping clang on AArch64 Linux with -fPIC and the ARM64 backend. The AArch64 backend is not affected.

With this commit it's now possible to bootstrap clang on AArch64 Linux with the ARM64 backend (-fPIC, -O3).
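
A sketch of the situation (symbol and function names are illustrative): with
-relocation-model=pic, an access to an internal global should be materialised
with adrp plus a :lo12: offset directly, not through a :got: entry, since the
linker emits no GOT slot for the local symbol.

  @counter = internal global i32 0

  define i32 @bump() nounwind {
    %old = load i32* @counter
    %new = add i32 %old, 1
    store i32 %new, i32* @counter
    ret i32 %old
  }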

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207226 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-25 13:43:18 +00:00
Jiangning Liu
0c4797c31a [ARM64] Handle fp128 for parameter passing on stack
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207222 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-25 12:07:03 +00:00
Tim Northover
5c4d1570ca ARM64: fix assertion in ISelDAGToDAG
Also an unused variable, so double bonus!

This should deal with PR19548.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207221 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-25 10:48:47 +00:00
Bradley Smith
8aa927abb5 [ARM64] Print preferred aliases for SBFM/UBFM in InstPrinter
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207219 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-25 10:25:29 +00:00
Kevin Qin
78eedb15c9 [ARM64] Support crc predicate on ARM64.
According to the specification, CRC is an optional extension of the
architecture.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207214 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-25 09:25:42 +00:00
Tim Northover
b62ba5eca0 AArch64/ARM64: implement BFI optimisation
ARM64 was not producing pure BFI instructions for bitfield insertion
operations, unlike AArch64. The approach had to be a little different (in
ISelDAGToDAG rather than ISelLowering), and the outcomes aren't identical but
hopefully this gives it similar power.

This should address PR19424.
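
A sketch of a bitfield-insertion pattern of the kind involved (constants and
names are illustrative):

  define i32 @insert_byte(i32 %dst, i32 %src) nounwind {
    ; Insert the low 8 bits of %src into bits 8-15 of %dst; ideally a single
    ;   bfi w0, w1, #8, #8
    ; instead of separate and/shift/or instructions.
    %keep = and i32 %dst, -65281          ; 0xffff00ff
    %bits = and i32 %src, 255
    %shifted = shl i32 %bits, 8
    %ins = or i32 %keep, %shifted
    ret i32 %ins
  }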

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207102 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-24 12:11:53 +00:00
Tim Northover
fe6f4e4d31 AArch64/ARM64: port more tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207101 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-24 12:11:46 +00:00
Quentin Colombet
28a24ca471 [ARM64] Fix the information we give to the peephole optimizer for comparisons.
ANDS does not use the same encoding scheme as other xxxS instructions (e.g.,
ADDS). Take that into account to avoid incorrect peephole optimizations.

<rdar://problem/16693089>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207020 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-23 20:43:38 +00:00
Kevin Qin
81ea345894 [ARM64] Enable feature predicates for NEON / FP / CRYPTO.
AArch64 has feature predicates for NEON, FP and CRYPTO instructions.
This allows the compiler to generate code without using FP, NEON
or CRYPTO instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206949 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-23 06:22:48 +00:00
Tim Northover
2872e118b3 AArch64/ARM64: more testing from AArch64 to ARM64
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206889 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-22 12:45:47 +00:00
Tim Northover
8b36f98fd5 AArch64/ARM64: make use of ANDS and BICS instructions for comparisons.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206888 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-22 12:45:42 +00:00
Tim Northover
c499ecd1d1 AArch64/ARM64: add extra testing from AArch64 to ARM64
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206887 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-22 12:45:32 +00:00
Tim Northover
85974bc77e AArch64/ARM64: mark fmul intrinsic as commutative.
This allows the DAG patterns for indexed multiplies to match when either
operand is the indexed vector.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206875 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-22 10:10:14 +00:00
Hao Liu
07dcdc7c90 Fix an infinite loop bug in DAGCombine caused by repeatedly converting between ANY_EXTEND and SIGN_EXTEND.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206873 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-22 09:57:06 +00:00
Quentin Colombet
8959c39450 [CodeGenPrepare] Use APInt to check the value of the immediate of an 'and'
while checking a candidate for a bit-field extract.
Otherwise the value may not fit in uint64_t, which triggers an assertion.

This fixes PR19503.
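
A sketch of the kind of pattern that hit the assertion (types and constants
are illustrative): the 'and' mask is wider than 64 bits, so it cannot be read
back through a uint64_t and has to be inspected as an APInt.

  define i64 @extract_wide(i128 %x) nounwind {
    %shr = lshr i128 %x, 16
    %and = and i128 %shr, 1208925819614629174706175   ; (1 << 80) - 1
    %low = trunc i128 %and to i64
    ret i64 %low
  }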


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206834 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-22 01:20:34 +00:00