Commit Graph

28860 Commits

Bradley Smith
492ea4a813 [ARM64] Check for proper immediate in shift/extend operands
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208317 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 14:11:16 +00:00
Christian Pirker
c60a59cad3 ARM big endian function argument passing
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208316 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 14:06:24 +00:00
Daniel Sanders
7bd400ebfd [mips] Implement l[wd]c3 and s[wd]c3.
Summary:
These instructions were added in MIPS-I and MIPS-II but were removed in
MIPS-III. Interestingly, GAS continues to accept them when assembling for
MIPS-III.

For the moment, we will follow GAS and accept these instructions for
MIPS-III and newer, but this will be tightened up when the invalid-*.s
tests are added.

Depends on D3647

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3648

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208311 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 13:02:11 +00:00
James Molloy
00c4dbd10e [ARM64-BE] Teach fast-isel about how to set up sub-word stack arguments for big endian calls.
SelectionDAG already knows about this, but fast-isel was ignorant.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208307 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 12:53:50 +00:00
Daniel Sanders
13517260e3 [mips] Marked up instructions added in MIPS-II and tested that IAS for -mcpu=mips1 does not accept them
Summary:
A small number of instructions are rejected with the wrong error message.
These have been placed in a separate test for now. There seems to be some
parsing quirk that triggers when these instructions are disabled.

Depends on D3571

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3647

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208305 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 12:40:48 +00:00
Daniel Sanders
dd9080a246 [mips] Implement tlbp, tlbr, tlbwi, and tlbwr
Reviewers: vmedic, dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3571

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208301 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 11:51:18 +00:00
Tim Northover
291cd09645 ARM64: make sure FastISel emits SSA MachineInstrs
We need to use a temporary register for a 2-step operation like REM.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208297 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 10:30:56 +00:00
Evgeniy Stepanov
89329e902c [asan] Preserve flags in asm instrumentation.
Patch by Yuri Gorshenin.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208296 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 09:55:24 +00:00
Hal Finkel
f35ce2376c Move late partial-unrolling thresholds into the processor definitions
The old method used by X86TTI to determine partial-unrolling thresholds was
messy (because it worked by testing target features), and also would not
correctly identify the target CPU if certain target features were disabled.
After some discussions on IRC with Chandler et al., it was decided that the
processor scheduling models were the right containers for this information
(because it is often tied to special uop dispatch-buffer sizes).

This does represent a small functionality change:
 - For generic x86-64 (which uses the SB model and, thus, will get some
   unrolling).
 - For AMD cores (because they still currently use the SB scheduling model).
 - For Haswell (based on benchmarking by Louis Gerbarg, it was decided to bump
   the default threshold to 50; we're working on a test case for this).
Otherwise, nothing has changed for any other targets. The logic, however, has
been moved into BasicTTI, so other targets may now also opt-in to this
functionality simply by setting LoopMicroOpBufferSize in their processor
model definitions.
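
As a rough illustration, opting in amounts to giving the processor's scheduling
model a LoopMicroOpBufferSize (a minimal sketch; the model name and numbers
below are invented, not taken from any in-tree target):

  def SketchCPUModel : SchedMachineModel {
    let IssueWidth            = 4;
    let MicroOpBufferSize     = 128;  // out-of-order window
    let LoopMicroOpBufferSize = 32;   // consulted for partial-unrolling thresholds
    let LoadLatency           = 4;
    let MispredictPenalty     = 16;
  }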

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208289 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 09:14:44 +00:00
Hao Liu
1c2f863df9 AArch64/ARM64: Port NEON post-increment load/store with 2/3/4 vectors to ARM64 backend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208284 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 07:38:13 +00:00
Saleem Abdulrasool
dade1d5db5 ARM: support FK_SecRel_2 relocations on WoA
This adds FK_SecRel_2 relocation support to ARM.  This enables the building of
object files for armv7-windows-msvc which enables CodeView line tables for
debugging as opposed to armv7-windows-itanium which currently uses DWARF.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208273 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 01:35:57 +00:00
Filipe Cabecinhas
b19c087aa7 Lower certain build_vectors to insertps instructions
Summary:
Vectors built with zeros and elements in the same order as another
(source) vector are optimized to be built using a single insertps
instruction.
Also optimize when we move one element in a vector to a different place
in that vector while zeroing out some of the other elements.

Further optimizations are possible, described in TODO comments.
I will be implementing at least some of them in the near future.

Added some tests for different cases where this optimization triggers.

Reviewers: nadav, delena, craig.topper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3521

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208271 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 00:25:16 +00:00
Hal Finkel
df60e43e05 [X86TTI] Remove the unrolling branch limits
The loop stream detector (LSD) on modern Intel cores, which optimizes the
execution of small loops, has limits on the number of taken branches in
addition to uop-count limits (modern AMD cores have similar limits).
Unfortunately, at the IR level, estimating the number of branches that will be
taken is difficult. For one thing, it strongly depends on later passes (block
placement, etc.). The original implementation took a conservative approach and
limited the maximal BB DFS depth of the loop.  However, fairly extensive
benchmarking by several of us has revealed that this is the wrong approach. In
fact, there are zero known cases where the branch limit prevents a detrimental
unrolling (but plenty of cases where it does prevent beneficial unrolling).

While we could improve the current branch counting logic by incorporating
branch probabilities, this further complication seems unjustified without a
motivating regression. Instead, unless and until a regression appears, the
branch counting will be removed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208255 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 22:25:18 +00:00
Quentin Colombet
97e15a8309 [X86] Selectively mark the FMA variants inside a family as isCommutable.
Given an FMA family (e.g., 213, 231), not all the variants (i.e., register or
memory) are commutable.
E.g., for the 213 family (with the syntax src1, src2, src3):
fmaXXX213 A, B, reg3/mem3 == fmaXXX213 B, A, reg3/mem3

Now consider the 231 family:
fmaXXX231 A, B, reg3 == fmaXXX231 A, reg3, B
But
fmaXXX231 A, B, mem3 != fmaXXX231 A, mem3, B
Indeed, mem3 cannot be the second argument of the memory variant of fmaXXX231.

Working on a reduced test case!

<rdar://problem/16800495>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208252 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 21:43:35 +00:00
Eric Christopher
db411a94d2 Reformat a couple of functions for clarity.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208248 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 21:05:47 +00:00
Jyotsna Verma
8b915bad69 [Hexagon] Add New TSFlags to be used in the upcoming patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208239 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 19:07:34 +00:00
Chandler Carruth
905e33545c [x86] Make the 'x86-64' cpu, what I see as (and many use as) the generic
default architecture for reasonably modern x86 processors, actually be
modern. This processor model should essentially be "tuned" for modern
x86 chips as much as possible without undue penalties on any specific
architecture. Previously we weren't even using the nice scheduling
models. There are a few other tweaks needed here, but at least I have
benchmarked this change across a decent swath of chips (Intel's
Clovertown, Westmere, and Sandy Bridge; AMD's Istanbul) and seen no
significant regressions.

If anyone has suggestions for ways to test this, just let me know. Somewhat
alarmingly, no existing tests failed.
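
For context, retargeting the generic CPU is roughly a one-line .td change of
this shape (a sketch only; the feature list here is illustrative, not the one
actually used in the tree):

  // Bind the generic "x86-64" CPU name to a modern scheduling model.
  def : ProcessorModel<"x86-64", SandyBridgeModel,
                       [Feature64Bit, FeatureSSE2, FeatureCMPXCHG16B]>;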

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208230 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 17:37:03 +00:00
Chad Rosier
8f0f458824 [ARM64][fast-isel] Disable target-specific optimizations at -O0. Functionally,
this patch disables the dead register elimination pass and the load/store pair
optimization pass at -O0.  The ILP optimizations don't require the optimization
level to be checked because the call to addILPOpts is predicated with the
necessary check.  The AdvSIMDScalar pass is disabled by default at all
optimization levels.  This patch leaves that pass disabled by default.

Also, move command-line options into ARM64TargetMachine.cpp and add a few
additional flags to aid in debugging.  This fixes an issue with the
-debug-pass=Structure flag where passes were printed, but not actually run
(i.e., AdvSIMDScalar pass).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208223 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 16:41:55 +00:00
Daniel Sanders
7858e495e9 [mips] Add highly experimental support for MIPS-I, MIPS-II, MIPS-III, and MIPS-V
Summary:
These processors will only be available for the integrated assembler at
first (CodeGen will emit a fatal error saying they are not implemented).

The intention is to work through the existing instructions and correctly
annotate the ISA they were added in so that we have a sufficiently good
base to start MIPS64r6 development. MIPS64r6 removes/re-encodes certain
instructions, and I believe it is best to define ISAs using set-unions
as far as possible rather than using set-subtraction.
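
A hedged sketch of the set-union idea, patterned on the usual SubtargetFeature
idiom (the feature and field names here are illustrative rather than copied
from the tree): each ISA level implies its predecessor, so later ISAs are
defined by adding instructions rather than subtracting them.

  def FeatureMips1 : SubtargetFeature<"mips1", "MipsArchVersion", "Mips1",
                                      "MIPS I ISA Support">;
  def FeatureMips2 : SubtargetFeature<"mips2", "MipsArchVersion", "Mips2",
                                      "MIPS II ISA Support", [FeatureMips1]>;
  def FeatureMips3 : SubtargetFeature<"mips3", "MipsArchVersion", "Mips3",
                                      "MIPS III ISA Support", [FeatureMips2]>;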

Reviewers: vmedic

Subscribers: emaste, llvm-commits

Differential Revision: http://reviews.llvm.org/D3569

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208221 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 16:25:22 +00:00
Rafael Espindola
6cf16a40d3 Use range loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208218 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 14:53:32 +00:00
Daniel Sanders
0c78010b88 [mips] Add FGR_32/FGR_64/GPR_64 adjectives and use them instead of FGRPredicates/GPRPredicates
Summary:
No functional change (confirmed by diffing tablegen-erated files).

Depends on D3642

Reviewers: vmedic, dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3645

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208213 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 14:25:43 +00:00
Daniel Sanders
b49c582218 [mips] Add INSN_<name> adverbs and start using them instead of AdditionalPredicates overrides
Summary:
No functional change

Depends on D3641

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3642

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208212 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 14:11:46 +00:00
Tim Northover
04a359f768 AArch64/ARM64: optimise vector selects & enable test
When performing a scalar comparison that feeds into a vector select,
it's actually better to do the comparison on the vector side: the
scalar route would be "CMP -> CSEL -> DUP", while the vector route is "CM -> DUP",
since the vector comparisons are all mask-based.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208210 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 14:10:27 +00:00
Daniel Sanders
b1c5f88237 [mips] Add ISA_<name> adverbs and start using them instead of AdditionalPredicates overrides
Summary:
One small functional change: the recently added PAUSE instruction now has
the HasStdEnc predicate, which had been accidentally removed by a Requires<>.
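
The adverb itself is just a small mixin class; a minimal sketch (class,
predicate, and list names are illustrative, not the in-tree definitions):

  class ISA_MIPS2 {
    list<Predicate> InsnPredicates = [HasMips2];
  }
  // An instruction then picks up the ISA constraint by mixing the class in,
  // alongside its format and encoding classes, instead of overriding the
  // whole AdditionalPredicates list.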

Depends on D3640

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3641

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208209 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 13:57:22 +00:00
Rafael Espindola
2842c051b3 Remove the UseCFI option from createAsmStreamer.
We were already always passing true; this just removes the option.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208205 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 13:00:43 +00:00
Daniel Sanders
b2d170d61b [mips] Continue splitting Instruction.Predicates into smaller lists and re-join them with !listconcat
Summary:
Move IsGP64bit into GPRPredicates, and IsFP64bit/NotFP64bit into FGRPredicates

No functional change (confirmed by diffing tablegen-erated files).

Depends on D3639

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3640

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208201 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:48:37 +00:00
James Molloy
2712c87cfe [ARM64-BE] Fix fast-isel, and add appropriate RUN lines to appropriate tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208200 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:33:55 +00:00
James Molloy
d93d214a67 [ARM64-BE] Fix variable-argument saving.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208199 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:33:48 +00:00
James Molloy
fca7f5c585 [ARM64-BE] Implement the lane-twiddling logic at AAPCS boundaries for big endian.
The AAPCS states that values passed in registers must have a value as though
they had been loaded with "LDR". LDR is equivalent to "LD1.64 vX.1D" - that is,
loading scalars to vector registers and loading 1-element vectors are equivalent.

The logic implemented here is to ensure that at all call boundaries and during
formal argument lowering all vectors are treated as their bitwidth-based floating
point scalar counterpart, which is always one of f64 or f128 (v2i32 -> f64,
v4i32 -> f128 etc). A BITCAST is inserted so that the appropriate REV will be
generated during code generation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208198 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:33:41 +00:00
Daniel Sanders
1caec99d5d [mips] Move IsFP64bit/NotFP64bit to the front of the AdditionalPredicates list
Summary:
This makes it easier to prove that a more complicated change in the next commit
is non-functional.

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3639

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208197 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 12:27:46 +00:00
James Molloy
737c2ac4fc [ARM64-BE] Implement the crazy bitcast handling for big endian vectors.
Because we've canonicalised on using LD1/ST1, every time we do a bitcast
between vector types we must do an equivalent lane reversal.

Consider a simple memory load followed by a bitconvert, then a store:
  v0 = load v2i32
  v1 = BITCAST v2i32 v0 to v4i16
       store v4i16 v1

In big endian mode every memory access has an implicit byte swap. LDR and
STR do a 64-bit byte swap, whereas LD1/ST1 do a byte swap per lane - that
is, they treat the vector as a sequence of elements to be byte-swapped.
The two pairs of instructions are fundamentally incompatible. We've decided
to use LD1/ST1 only to simplify compiler implementation.

LD1/ST1 perform the equivalent of a sequence of LDR/STR + REV. This makes
the original code sequence:

  v0 = load v2i32
  v1 = REV v2i32 v0               (implicit)
  v2 = BITCAST v2i32 v1 to v4i16
  v3 = REV v4i16 v2               (implicit)
       store v4i16 v3

But this is now broken - the value stored is different to the value loaded
due to lane reordering. To fix this, on every BITCAST we must perform two
other REVs:

  v0 = load v2i32
  v1 = REV v2i32 v0               (implicit)
  v2 = REV v2i32 v1
  v3 = BITCAST v2i32 v2 to v4i16
  v4 = REV v4i16 v3
  v5 = REV v4i16 v4               (implicit)
       store v4i16 v5

This means an extra two instructions, but actually in most cases the two REV
instructions can be combined into one. For example:
  (REV64_2s (REV64_4h X)) === (REV32_4h X)

There is also no 128-bit REV instruction. This must be synthesized with an
EXT instruction.

Most bitconverts require some sort of conversion. The only exceptions are:
  a) Identity conversions -  vNfX <-> vNiX
  b) Single-lane-to-scalar - v1fX <-> fX or v1iX <-> iX

Even though there are hundreds of changed lines, I have a fairly high confidence
that they are somewhat correct. The changes to add two REV instructions per
bitcast were pretty mechanical, and once I'd done that I threw the resulting
.td at a script I wrote which combined the two REVs together (and added
an EXT instruction, for f128) based on an instruction description I gave it.

This was much less prone to error than doing it all manually, plus my brain
would not just have melted but would have vapourised.
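
At the .td level the result is a family of big-endian-only bitconvert
patterns. One illustrative entry (simplified; the instruction, predicate and
register-class names follow the backend's usual spelling, but treat the exact
form as approximate rather than the committed one):

  let Predicates = [IsBE] in
  // v2i32 -> v4i16: the inserted REVs plus the ones implied by LD1/ST1
  // collapse into a single lane-swapping REV32.
  def : Pat<(v4i16 (bitconvert (v2i32 V64:$Vn))), (REV32v4i16 V64:$Vn)>;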

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208194 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 11:28:53 +00:00
James Molloy
1f890ce2dc [ARM64-BE] Predicate VLDR/VSTR for vectors as little-endian only. We must use LD1/ST1 on big-endian.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208193 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 11:28:45 +00:00
James Molloy
104629cc7c [ARM64-BE] Make big endian (scalar) argument passing work correctly.
This completes the port of r204814 (cpirker "AArch64_BE function argument
passing for ARM ABI") from AArch64 to ARM64, and fixes a bunch of issues
found during later development along the way. The biggest of these was
that the alignment fixup logic wasn't replicated into all the places it
should have been.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208192 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 11:28:36 +00:00
Daniel Sanders
a3953a30b6 [mips] Split Instruction.Predicates into smaller lists and re-join them with !listconcat
Summary:
The overall idea is to chop the Predicates list into subsets that are
usually overridden independently. This allows subclasses to partially
override the predicates of their superclasses without having to re-add all
the existing predicates.

This patch starts the process by moving HasStdEnc into a new
EncodingPredicates list and almost everything else into
AdditionalPredicates.

It has revealed a couple of likely bugs where 'let Predicates' has removed
the HasStdEnc predicate.

No functional change (confirmed by diffing tablegen-erated files).
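
Concretely, the split has roughly this shape (a sketch reusing the list names
from the description above; the real class carries further sublists):

  class PredicateControlSketch {
    list<Predicate> EncodingPredicates   = [];  // e.g. HasStdEnc
    list<Predicate> AdditionalPredicates = [];  // overridable per instruction
    list<Predicate> Predicates =
        !listconcat(EncodingPredicates, AdditionalPredicates);
  }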

Depends on D3549, D3506

Reviewers: vmedic

Differential Revision: http://reviews.llvm.org/D3550

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208184 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 10:27:09 +00:00
Daniel Sanders
0c9ea21554 [mips] Move HasStdEnc to the front of the predicates lists.
Summary:
This will make it easier to prove that a more complicated change in the
following commit is non-functional.

No functional change.

Depends on D3506

Reviewers: vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3549

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208179 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 09:58:05 +00:00
Evgeniy Stepanov
227c4c6185 [asan] Add a flag to control asm instrumentation.
With this change, asm instrumentation is disabled by default.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208167 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 07:54:11 +00:00
Joerg Sonnenberger
2ecdcdc026 Allow using normal .eh_frame-based unwinding on ARM. Use the same
encodings as x86. Use this exception model for NetBSD.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208166 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 07:49:34 +00:00
Saleem Abdulrasool
3fe09b705c ARM: mark additional instructions as MachineFrameSetup
Mark up additional instructions which are part of the function prologue as
MachineFrameSetup.  These instructions are emitted by the PEI pass to set up
the stack for use in the activating frame.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208153 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 03:03:31 +00:00
Saleem Abdulrasool
0029e2d665 ARM: fix WoA PEI instruction selection
The ARM::BLX instruction is an ARM mode instruction.  The Windows on ARM target
is limited to Thumb instructions.  Correctly use the thumb mode tBLXr
instruction.  This would manifest as an errant write into the object file as the
instruction is 4 bytes in length rather than 2.  The result would be a corrupted
object file that would eventually result in an executable that would crash at
runtime.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208152 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-07 03:03:27 +00:00
Andrew Trick
8abb75bc61 Update an embarrassing out-of-date comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208137 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 22:18:43 +00:00
Joerg Sonnenberger
b84f890bc3 If a function needs a frame pointer, but r11 (aka fp) has not been used,
remove it from the list of unspilled registers. Otherwise the following
attempt to keep the stack aligned by picking an extra GPR register to
spill will not work as it picks up r11.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208129 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 20:43:01 +00:00
Andrea Di Biagio
8a712ba229 [X86] Improve the lowering of BITCAST dag nodes from type f64 to type v2i32 (and vice versa).
Before this patch, the backend always emitted a store+load sequence to
bitconvert the input operand from f64 to i64 for an ISD::BITCAST dag node that
performed a bitconvert from type MVT::f64 to type MVT::v2i32. The resulting
i64 node was then used to build a v2i32 vector.

With this patch, the backend now produces a cheaper SCALAR_TO_VECTOR from
MVT::f64 to MVT::v2f64. That SCALAR_TO_VECTOR is then followed by a "free"
bitcast to type MVT::v4i32. The elements of the resulting
v4i32 are then extracted to build a v2i32 vector (which is illegal and
therefore promoted to MVT::v2i64).

This is in general cheaper than emitting a stack store+load sequence
to bitconvert the operand from type f64 to type i64.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208107 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 17:09:03 +00:00
Renato Golin
22f779d1fd Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
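
The interface is a pair of intrinsics that take the register name as metadata.
Their Intrinsics.td declarations look roughly like this (property lists are
omitted, so treat it as an approximation of the committed form):

  def int_read_register  : Intrinsic<[llvm_anyint_ty], [llvm_metadata_ty]>;
  def int_write_register : Intrinsic<[], [llvm_metadata_ty, llvm_anyint_ty]>;

In IR, reading the stack pointer is then a call to the overloaded
@llvm.read_register intrinsic whose metadata argument names the register
(e.g. "sp").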

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208104 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 16:51:25 +00:00
Tim Northover
3524723195 AArch64/ARM64: implement diagnosis of unpredictable loads & stores
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208091 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 14:15:14 +00:00
Tim Northover
09b3bd8ca3 AArch64/ARM64: make NEON vector list parsing a bit more robust
It doesn't change the results, but it seems silly not to diagnose obvious
problems early on.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208083 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 12:50:51 +00:00
Tim Northover
d58350d789 AArch64/ARM64: add more specific diagnostic for floating imm 0.0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208082 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 12:50:47 +00:00
Tim Northover
c5f9aff43e AArch64/ARM64: add more specific diagnostic for invalid vector lanes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208081 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 12:50:44 +00:00
Tim Northover
6e64f90dc5 AArch64/ARM64: produce a more informative diagnostic when assembling some immediates
No tests here; they'll be added when the entire neon-diagnostics.s test from
AArch64 is enabled.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208079 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 11:18:53 +00:00
Christian Pirker
80fd09110d ARM: For Thumb fixups, store halfwords high first and low second
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208076 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 10:05:11 +00:00
Kevin Qin
03145ebd88 [ARM64] Enable alignment control option in front-end for ARM64.
This is the LLVM part of the change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208074 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 09:48:52 +00:00