Commit Graph

10566 Commits

Christian Pirker
34b9ca5e14 ARMEB: Fix byte order of EH frame unwinding instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208689 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-13 11:41:49 +00:00
Weiming Zhao
0449d522a6 Folding into CSEL when there is ZEXT between SETCC and ADD
Normally, patterns like (add x, (setcc cc ...)) will be folded into
(csel x, x+1, not cc). However, if there is a ZEXT after SETCC, they
won't be folded. This patch recognizes the ZEXT and allows the
generation of CSINC.

This patch fixes bug 19680.
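
A minimal C++ sketch of the pattern (function and variable names are
illustrative):

  // x + (a < b) becomes (add x, (zext (setcc lt a b))), which this
  // patch now folds to a single conditional increment (CSINC).
  int inc_if_less(int x, int a, int b) {
    return x + (a < b);
  }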

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208660 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-13 00:40:58 +00:00
Adam Nemet
73282018a1 [DAGCombiner] Split up an indexed load if only the base pointer value is live
Right now the load may not get DCE'd because of the side-effect of updating
the base pointer.

This can happen if we lower a read-modify-write of an illegal larger type
(e.g. i48) such that the modification only affects one of the subparts (the
lower i32 part but not the higher i16 part).  See the testcase.

In order to spot the dead load we need to revisit it when SimplifyDemandedBits
decided that the value of the load is masked off.  This is the
CommitTargetLoweringOpt piece.

I checked compile time with ARM64 by sending SPEC bitcode files through llc.
No measurable change.

Fixes <rdar://problem/16031651>
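
A hypothetical C++ reduction of the read-modify-write described above; the
packed 48-bit bitfield is one way an illegal i48 access can arise (names are
illustrative):

  struct S {
    unsigned long long f : 48;   // 6-byte storage unit, may be loaded as i48
  } __attribute__((packed));

  void set_low(S *s) {
    // Only the low 32 bits are modified; after SimplifyDemandedBits the
    // wide load is live only through its base-pointer update, which is
    // the case the indexed-load split now handles.
    s->f = (s->f & ~0xffffffffULL) | 0x12345678ULL;
  }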

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208640 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 23:00:03 +00:00
Louis Gerbarg
1633f752f8 Fix ARM bswap16.ll test on Windows
Windows on ARM only supports thumb mode execution, so we have to
explicitly pick some non-Windows OS to test ARM mode codegen.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208638 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 22:13:07 +00:00
Reid Kleckner
17335ce80f Try to fix an SDAG dependence issue with sret
r208453 added support for having sret on the second parameter.  In that
change, the code for copying sret into a virtual register was hoisted
into the loop that lowers formal parameters.  This caused a "Wrong
topological sorting" assertion failure during scheduling when a
parameter is passed in memory.  This change undoes that by creating a
second loop that deals with sret.

I'm worried that this fix is incomplete.  I don't fully understand the
dependence issues.  However, with this change we produce the same DAGs
we used to produce, so if they are broken, they are just as broken as
they have always been.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208637 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 22:01:27 +00:00
Adam Nemet
45fc47013f [Test] Trim unnecessary .c and .cpp from config.suffix in lit.local.cfg
Tested by comparing make check VERBOSE=1 before and after to make sure
no tests are missed.  (VERBOSE=1 prints the list of tests.)

Only one test :( remains where .cpp is required:

tools/llvm-cov/range_based_for.cpp:// RUN: llvm-cov range_based_for.cpp | FileCheck %s --check-prefix=STDOUT

The topic was discussed in this thread:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140428/214905.html

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208621 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 19:57:31 +00:00
Louis Gerbarg
9cec62a27f Add support bswap16 to/from memory compiling to rev16 on ARM/Thumb
The current patterns for REV16 miss most __builtin_bswap16() uses due to
legalization promoting the operands from loads/stores to i32 and then
truncating/extending them. This patch adds new patterns that catch the
resultant DAGs and codegen them to rev16 instructions. Tests included.

rdar://15353652
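
The source-level shapes this targets look roughly like the following (a
sketch):

  #include <stdint.h>

  // The i16 load is promoted to i32 and the swap truncated back; the
  // new patterns match the whole DAG as a single rev16.
  uint16_t load_swapped(const uint16_t *p) {
    return __builtin_bswap16(*p);
  }

  void store_swapped(uint16_t *p, uint16_t v) {
    *p = __builtin_bswap16(v);
  }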

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208620 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 19:53:52 +00:00
Tim Northover
d6cd0381f6 TableGen: use PrintMethods to print more aliases
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208607 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 18:04:06 +00:00
Matt Arsenault
5049ca67c2 R600: Add mul24 intrinsics
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208604 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 17:49:57 +00:00
Matt Arsenault
621299806c Make SimplifyDemandedBits understand BUILD_PAIR
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208598 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 17:14:48 +00:00
Benjamin Kramer
b31a977c9c X86: Make sure that we have SSE4.1 before we generate insertps nodes.
PR19721.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208552 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 13:12:08 +00:00
Christian Pirker
5c39a97a60 ARM: Implement big endian bit-conversion for NEON type
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208538 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 11:19:20 +00:00
Elena Demikhovsky
1cec507d6d AVX-512: changes in intrinsics
1) Changed the gather and scatter intrinsics so that they are aligned with the
GCC built-ins. There is no longer a non-masked form; the masked intrinsic
receives a mask of -1 when all lanes are executed.
2) Changed the function that handles intrinsics inside X86ISelLowering.cpp.
I put all intrinsics in one table. I did this for INTRINSICS_W_CHAIN and plan
to move the WO_CHAIN set into the same table to avoid the very long "switch".
(I wanted to use the static map initialization allowed by C++11, but I wasn't
able to compile it on VS2012.)
3) Added gather/scatter prefetch intrinsics.
4) Fixed the MRMm encoding for masked instructions.
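
A sketch of the one-table shape described in point 2 (the names and fields are
illustrative, not the actual X86ISelLowering.cpp declarations):

  // One sorted table instead of a long switch; looked up by intrinsic ID.
  struct IntrinsicData {
    unsigned Id;      // e.g. an AVX-512 gather/scatter/prefetch intrinsic
    unsigned Kind;    // GATHER, SCATTER, PREFETCH, ...
    unsigned Opcode;  // target node or machine opcode to emit
  };

  static const IntrinsicData IntrinsicsWithChain[] = {
    // entries kept sorted by Id so lookup can be a binary search, e.g.:
    { /*Id=*/1, /*Kind=*/0, /*Opcode=*/0 },  // placeholder entry
  };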

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208522 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 07:18:51 +00:00
Hal Finkel
70a83b490e [PowerPC] Add global named register support
Support for the intrinsics that read from and write to global named registers
is added for r1, r2 and r13 (depending on the subtarget).
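
At the source level these intrinsics are typically reached through GCC-style
global register variables, which clang lowers to llvm.read_register and
llvm.write_register (a sketch; clang only accepts a limited set of reserved
registers this way):

  // Read the PowerPC stack pointer (r1) via a named global register.
  register unsigned long stack_ptr __asm__("r1");

  unsigned long current_sp() {
    return stack_ptr;
  }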

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208509 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-11 19:29:11 +00:00
Hal Finkel
87f4cf7028 [PowerPC] On PPC32, 128-bit shifts might be runtime calls
The counter-loops formation pass needs to know what operations might be
function calls (because they can't appear in counter-based loops). On PPC32,
128-bit shifts might be runtime calls (even though you can't use __int128 on
PPC32, it seems that SROA might form them).

Fixes PR19709.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208501 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-11 16:23:29 +00:00
Filipe Cabecinhas
4ccf0ebb19 Fixed a bug when lowering build_vector (PR19694)
When lowering build_vector to an insertps, we would still lower it even
if the source vectors weren't v4x32. This would break on AVX if the source
was a v8x32. We now check the type of the source vectors.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208487 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-11 08:12:56 +00:00
Vincent Lejeune
e283f74133 R600/SI: Fold fabs/fneg into src input modifier
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208480 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-10 19:18:39 +00:00
Vincent Lejeune
3378ca7d5c R600/SI: Prettier display of input modifiers
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208479 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-10 19:18:33 +00:00
Tim Northover
e87cadc49a ARM64: fix SELECT_CC lowering in absence of NaNs.
We were swapping the true & false results while testing for FMAX/FMIN,
but not putting them back to the original state if the later checks
failed.

Should fix PR19700.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208469 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-10 07:37:50 +00:00
Reid Kleckner
d30c11edde Revert "[ms-cxxabi] Add a new calling convention that swaps 'this' and 'sret'"
This reverts commit r200561.

This calling convention was an attempt to match the MSVC C++ ABI for
methods that return structures by value.  This solution didn't scale,
because it would have required splitting every CC available on Windows
into two: one for methods and one for free functions.

Now that we can put sret on the second arg (r208453), and Clang does
that (r208458), revert this hack.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208459 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 22:56:42 +00:00
Reid Kleckner
805a83c041 Allow sret on the second parameter as well as the first
MSVC always places the implicit sret parameter after the implicit this
parameter of instance methods.  We used to handle this for
x86_thiscallcc by allocating the sret parameter on the stack and leaving
the this pointer in ecx, but that doesn't handle alternative calling
conventions like cdecl, stdcall, fastcall, or the win64 convention.

Instead, change the verifier to allow sret on the second parameter.

This also requires changing the Mips and X86 backends to return the
argument with the sret parameter, instead of assuming that the sret
parameter comes first.

The Sparc backend also returns sret parameters in a register, but I
wasn't able to update it to handle secondary sret parameters.  It
currently calls report_fatal_error if you feed it an sret in the second
parameter.

Reviewers: rafael.espindola, majnemer

Differential Revision: http://reviews.llvm.org/D3617
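
A C++ sketch of the ABI situation described above (types are illustrative):
under the MSVC ABI the hidden return-slot pointer for an instance method is
passed after the hidden 'this'.

  struct Big { int a[8]; };      // too large to return in registers

  struct Widget {
    Big get() const;             // callee sees: this, then sret pointer
  };

  Big take(const Widget &w) {
    return w.get();              // caller passes &result as argument 2
  }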

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208453 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 22:32:13 +00:00
Reid Kleckner
b3275b9fca Fix ARM intrinsics-overflow.ll test on Windows
Windows on ARM only supports thumb mode execution, so we have to
explicitly pick some non-Windows OS to test ARM mode codegen.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208448 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 21:52:48 +00:00
Louis Gerbarg
7a9fbab182 Add custom lowering for add/sub with overflow intrinsics to ARM
This patch adds support to ARM for custom lowering of the
llvm.{u|s}add.with.overflow.i32 intrinsics for i32/i64. This is particularly useful
for handling idiomatic saturating math functions as generated by
InstCombineCompare.

Test cases included.

rdar://14853450
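
The saturating idiom in question can be written directly against the overflow
builtins (a sketch):

  #include <limits.h>

  int sat_add(int a, int b) {
    int r;
    // __builtin_sadd_overflow lowers to llvm.sadd.with.overflow.i32,
    // which the custom lowering now maps to ARM's flag-setting add.
    if (__builtin_sadd_overflow(a, b, &r))
      return a < 0 ? INT_MIN : INT_MAX;
    return r;
  }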

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208435 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 17:02:49 +00:00
Tom Stellard
3f26d366a4 R600/SI: Teach SIInstrInfo::moveToVALU() how to move S_LOAD_*_IMM instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208432 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 16:42:22 +00:00
Tom Stellard
300094fd84 R600/SI: Fix SMRD pattern for offsets > 32 bits
We were dropping the high bits of 64-bit immediate offsets.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208431 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 16:42:21 +00:00
Tom Stellard
561bb44525 R600: Expand i64 SELECT_CC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208430 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 16:42:19 +00:00
Tom Stellard
87b983680c R600: Move MIN/MAX matching from LowerOperation() to PerformDAGCombine()
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208429 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 16:42:16 +00:00
James Molloy
bfaccd494f Attempt to pacify the bots - this commit requires asserts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208424 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 16:20:53 +00:00
Oliver Stannard
e2948385b9 ARM: HFAs must be passed in consecutive registers
When using the ARM AAPCS, HFAs (Homogeneous Floating-point Aggregates) must
be passed in a block of consecutive floating-point registers, or on the stack.
This means that unused floating-point registers cannot be back-filled with
part of an HFA; however, this can currently happen. This patch, along with the
corresponding clang patch (http://reviews.llvm.org/D3083), prevents this.
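
For reference, an HFA is simply an aggregate whose members all share one
floating-point base type, e.g.:

  struct HFA4 { float a, b, c, d; };   // homogeneous: 4 x float

  // Under the AAPCS 'h' must occupy four consecutive S registers (or go
  // entirely on the stack); it may not back-fill registers skipped earlier.
  float sum(float x, HFA4 h) {
    return x + h.a + h.b + h.c + h.d;
  }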

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208413 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 14:01:47 +00:00
Daniel Sanders
32650944eb [mips][mips64r6] Add experimental support for MIPS32r6 and MIPS64r6
Summary:
Adds MIPS32r6/MIPS64r6 and checks the compatibility requirements for these
processors.

I've also included comments to describe removed and re-encoded instructions,
along with placeholder def's for the new instructions but there are no
functional changes to codegen at this point.

Reviewers: jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3622

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208399 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 09:46:21 +00:00
Saleem Abdulrasool
74d614a6fc ARM: support PIC on Windows on ARM
Handle lowering of global addresses for PIC mode compilation on Windows.  Always
use the movw/movt load to load the address as Windows on ARM requires ARMv7+ and
is a pure Thumb environment.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208385 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 00:58:32 +00:00
Filipe Cabecinhas
e4a3254c02 Optimize shufflevector that copies an i64/f64 and zeros the rest.
Summary:
Also ran clang-format on the function. The code added is the last else
if block.

Reviewers: nadav, craig.topper, delena

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3518

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208372 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 23:16:08 +00:00
Justin Bogner
73773ce844 test/CodeGen: Check that the correct register is used in a store
This tightens up r208351 to ensure that a store is fed with the
correct value.

Thanks to Quentin Colombet for spotting this!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208368 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 22:45:07 +00:00
Justin Bogner
8115f93cdb Make a CodeGen test more robust against vector register selection
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208351 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 18:53:56 +00:00
Andrea Di Biagio
2360e51fd0 [X86] Add target specific combine rules to fold SSE2/AVX2 packed arithmetic shift intrinsics.
This patch teaches the backend how to combine packed SSE2/AVX2 arithmetic shift
intrinsics.

The rules are:
 - Always fold a packed arithmetic shift by zero to its first operand;
 - Convert a packed arithmetic shift intrinsic dag node into a ISD::SRA only if
   the shift count is known to be smaller than the vector element size.

This patch also teaches the function 'getTargetVShiftByConstNode' how to fold
target specific vector shifts by zero.

Added two new tests to verify that the DAGCombiner is able to fold
sequences of SSE2/AVX2 packed arithmetic shift calls.
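
In intrinsic terms the two rules mean, for example (a sketch):

  #include <emmintrin.h>

  __m128i f(__m128i x) {
    __m128i a = _mm_srai_epi32(x, 0);   // rule 1: folds to x
    __m128i b = _mm_srai_epi32(x, 3);   // rule 2: becomes ISD::SRA x, 3
    return _mm_add_epi32(a, b);
  }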

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208342 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 17:44:04 +00:00
Saleem Abdulrasool
f37151a2fd test: fix test on Windows
When building on Windows, the default target is Windows.  Windows on ARM does
not support ARM mode compilation, resulting in test failures.  Simply specify a
triple to ensure that we are testing the correct behaviour.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208340 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 17:11:29 +00:00
Christian Pirker
c60a59cad3 ARM big endian function argument passing
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208316 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 14:06:24 +00:00
James Molloy
00c4dbd10e [ARM64-BE] Teach fast-isel about how to set up sub-word stack arguments for big endian calls.
SelectionDAG already knows about this, but fast-isel was ignorant.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208307 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 12:53:50 +00:00
Tim Northover
291cd09645 ARM64: make sure FastISel emits SSA MachineInstrs
We need to use a temporary register for a 2-step operation like REM.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208297 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 10:30:56 +00:00
Hao Liu
1c2f863df9 AArch64/ARM64: Port NEON post-increment load/store with 2/3/4 vectors to ARM64 backend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208284 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 07:38:13 +00:00
Filipe Cabecinhas
b19c087aa7 Lower certain build_vectors to insertps instructions
Summary:
Vectors built with zeros and elements in the same order as another
(source) vector are optimized to be built using a single insertps
instruction.
Also optimize when we move one element in a vector to a different place
in that vector while zeroing out some of the other elements.

Further optimizations are possible, described in TODO comments.
I will be implementing at least some of them in the near future.

Added some tests for different cases where this optimization triggers.
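
A sketch of the second case using the GCC/clang vector extension (names are
illustrative):

  typedef float v4f32 __attribute__((vector_size(16)));

  // One element of 'v' is moved to a new lane and the rest are zeroed;
  // this can now be a single insertps with a zero mask.
  v4f32 move_and_zero(v4f32 v) {
    v4f32 r = { 0.0f, v[2], 0.0f, 0.0f };
    return r;
  }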

Reviewers: nadav, delena, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3521

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208271 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 00:25:16 +00:00
Quentin Colombet
57b4c5d473 [X86] Add a test case for r208252.
Prior to r208252, the FMA 231 family was marked as isCommutable. However the
memory variants of this family are not commutable. Therefore, we had not
implemented findCommutedOpIndices for those variants and missed that
the default implementation (more or less: commute indices 1 and 2) was
firing behind our back.
As a result, as demonstrated in the test case before the fix, we were
transforming a = b * c + a into a = a * c + b.

I.e., before r208252 we were generating for this test case:
  vmovaps %xmm0, %xmm1
  vmovss (%rsi), %xmm0
  vfmadd231ss (%rdi), %xmm1, %xmm0

Instead of:
  vmovss (%rsi), %xmm1
  vfmadd231ss (%rdi), %xmm1, %xmm0

<rdar://problem/16800495> 
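
A C++ sketch of the expression involved (FMA contraction must be enabled for
the vfmadd231ss form to be selected):

  // Computes b * c + a; wrongly commuting the multiplicands of the
  // memory-operand form turned this into a * c + b before the fix.
  float fma231(float a, float b, float c) {
    return b * c + a;
  }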

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208260 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 22:52:58 +00:00
Chad Rosier
8f0f458824 [ARM64][fast-isel] Disable target specific optimizations at -O0. Functionally,
this patch disables the dead register elimination pass and the load/store pair
optimization pass at -O0.  The ILP optimizations don't require the optimization
level to be checked because the call to addILPOpts is predicated with the
necessary check.  The AdvSIMDScalar pass is disabled by default at all
optimization levels.  This patch leaves that pass disabled by default.

Also, move command-line options into ARM64TargetMachine.cpp and add a few
additional flags to aid in debugging.  This fixes an issue with the
-debug-pass=Structure flag where passes were printed, but not actually run
(i.e., AdvSIMDScalar pass).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208223 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 16:41:55 +00:00
Tim Northover
04a359f768 AArch64/ARM64: optimise vector selects & enable test
When performing a scalar comparison that feeds into a vector select,
it's actually better to do the comparison on the vector side: the
scalar route would be "CMP -> CSEL -> DUP", the vector is "CM -> DUP"
since the vector comparisons are all mask based.
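
A sketch of the scalar-compare-into-vector-select shape (GCC/clang vector
extension):

  typedef int v4i32 __attribute__((vector_size(16)));

  // The scalar compare feeds a vector select; doing the compare on the
  // vector side gives "CM -> DUP" instead of "CMP -> CSEL -> DUP".
  v4i32 pick(int a, int b, v4i32 x, v4i32 y) {
    return a < b ? x : y;
  }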

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208210 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 14:10:27 +00:00
James Molloy
2712c87cfe [ARM64-BE] Fix fast-isel, and add appropriate RUN lines to appropriate tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208200 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:33:55 +00:00
James Molloy
d93d214a67 [ARM64-BE] Fix variable-argument saving.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208199 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:33:48 +00:00
James Molloy
fca7f5c585 [ARM64-BE] Implement the lane-twiddling logic at AAPCS boundaries for big endian.
The AAPCS states that values passed in registers must have a value as though
they had been loaded with "LDR". LDR is equivalent to "LD1.64 vX.1D" - that
is, loading scalars to vector registers and loading 1-element vectors are
equivalent.

The logic implemented here is to ensure that at all call boundaries and during
formal argument lowering all vectors are treated as their bitwidth-based floating
point scalar counterpart, which is always one of f64 or f128 (v2i32 -> f64,
v4i32 -> f128 etc). A BITCAST is inserted so that the appropriate REV will be
generated during code generation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208198 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:33:41 +00:00
James Molloy
737c2ac4fc [ARM64-BE] Implement the crazy bitcast handling for big endian vectors.
Because we've canonicalised on using LD1/ST1, every time we do a bitcast
between vector types we must do an equivalent lane reversal.

Consider a simple memory load followed by a bitconvert then a store:
  v0 = load v2i32
  v1 = BITCAST v2i32 v0 to v4i16
       store v4i16 v1

In big endian mode every memory access has an implicit byte swap. LDR and
STR do a 64-bit byte swap, whereas LD1/ST1 do a byte swap per lane - that
is, they treat the vector as a sequence of elements to be byte-swapped.
The two pairs of instructions are fundamentally incompatible. We've decided
to use LD1/ST1 only to simplify compiler implementation.

LD1/ST1 perform the equivalent of a sequence of LDR/STR + REV. This makes
the original code sequence:

  v0 = load v2i32
  v1 = REV v2i32                  (implicit)
  v2 = BITCAST v2i32 v1 to v4i16
  v3 = REV v4i16 v2               (implicit)
       store v4i16 v3

But this is now broken - the value stored is different to the value loaded
due to lane reordering. To fix this, on every BITCAST we must perform two
other REVs:

  v0 = load v2i32
  v1 = REV v2i32                  (implicit)
  v2 = REV v2i32
  v3 = BITCAST v2i32 v2 to v4i16
  v4 = REV v4i16
  v5 = REV v4i16 v4               (implicit)
       store v4i16 v5

This means an extra two instructions, but actually in most cases the two REV
instructions can be combined into one. For example:
  (REV64_2s (REV64_4h X)) === (REV32_4h X)

There is also no 128-bit REV instruction. This must be synthesized with an
EXT instruction.

Most bitconverts require some sort of conversion. The only exceptions are:
  a) Identity conversions -  vNfX <-> vNiX
  b) Single-lane-to-scalar - v1fX <-> fX or v1iX <-> iX

Even though there are hundreds of changed lines, I have a fairly high confidence
that they are somewhat correct. The changes to add two REV instructions per
bitcast were pretty mechanical, and once I'd done that I threw the resulting
.td at a script I wrote which combined the two REVs together (and added
an EXT instruction, for f128) based on an instruction description I gave it.

This was much less prone to error than doing it all manually, plus my brain
would not just have melted but would have vapourised.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208194 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 11:28:53 +00:00
James Molloy
104629cc7c [ARM64-BE] Make big endian (scalar) argument passing work correctly.
This completes the port of r204814 (cpirker "AArch64_BE function argument
passing for ARM ABI") from AArch64 to ARM64, and fixes a bunch of issues
found during later development along the way. The biggest of these was
that the alignment fixup logic wasn't replicated into all the places it
should have been.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208192 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 11:28:36 +00:00
Tim Northover
0d427515b3 AArch64/ARM64: run test on ARM64 too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208188 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 10:47:04 +00:00