The distinction is mostly useful in the front-end. By the time we get here,
there are very few situations where we actually want different behaviour for
Darwin and IOS (in fact Darwin mostly just exists in a few tests). So this
should reduce any surprising weirdness for anyone using it.
No functional change on anything anyone actually cares about.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224035 91177308-0d34-0410-b5e6-96231b3b80d8
Quite a major error here: the expansions for the Pseudos with and without
folded load were mixed up. Fortunately it only affects ARM-mode, when not using
movw/movt, on Darwin. I'm guessing no-one actually uses that combination.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223986 91177308-0d34-0410-b5e6-96231b3b80d8
Removing an unused function which was causing one of the build bots to fail.
The function was introduced in r223113. A proper cleanup of the so_imm
tblgen definition (made redundant by the mod_imm definition) needs to happen
soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223115 91177308-0d34-0410-b5e6-96231b3b80d8
Certain ARM instructions accept 32-bit immediate operands encoded as an 8-bit
integer value (0-255) and a 4-bit rotation (0-30, even). Current ARM assembly
syntax support in LLVM allows the decoded (32-bit) immediate to be specified
as a single immediate operand for such instructions:
mov r0, #4278190080
The ARM ARM defines an extended assembly syntax allowing the encoding to be
made explicit, as in:
mov r0, #255, #8 ; (same 32-bit value as above)
The behaviour of the two forms can differ w.r.t. the flags, as documented under
"Modified immediate constants" in the ARM ARM. This patch enables
support for this extended syntax at the MC layer.
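For reference, a small C sketch of the decoding rule (illustrative only, not
code from this patch): the 32-bit immediate is the 8-bit value rotated right
by the even rotation amount.

  #include <stdint.h>

  /* Decode an ARM modified immediate: rotate the 8-bit value right by
     'rot' bits (rot is even, 0-30) within a 32-bit word. */
  static uint32_t arm_mod_imm(uint32_t imm8, unsigned rot) {
    return rot ? ((imm8 >> rot) | (imm8 << (32 - rot))) : imm8;
  }

  /* arm_mod_imm(255, 8) == 0xFF000000 == 4278190080, the value used above. */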
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223113 91177308-0d34-0410-b5e6-96231b3b80d8
Creating tests for the ConstantIslands pass is very difficult, since it depends
on precise layout details. Having the ability to precisely inject a number of
bytes into the stream helps greatly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221903 91177308-0d34-0410-b5e6-96231b3b80d8
The current instruction selection patterns for SMULW[BT] and SMLAW[BT]
are incorrect. These instructions multiply a 32-bit and a 16-bit value
(both signed) and return the top 32 bits of the 48-bit result. This
preserves the 16 bits of overflow, whereas the patterns they currently
match truncate the result to 16 bits then sign extend.
To select these instructions, we would need to match an ISD::SMUL_LOHI,
a sign extend, two shifts and an or. There is no way to match SMUL_LOHI
in an instruction pattern as it defines multiple values, so this would
have to be done in C++. I have raised
http://llvm.org/bugs/show_bug.cgi?id=21297 to track adding correct
selection of these instructions.
This fixes http://llvm.org/bugs/show_bug.cgi?id=19396
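For reference, the intended SMULWB semantics expressed as a C sketch
(illustrative only; SMULWT uses the top halfword of the second operand
instead):

  #include <stdint.h>

  /* Multiply a signed 32-bit value by the signed bottom halfword of the
     second operand and return the top 32 bits of the 48-bit product. */
  static int32_t smulwb_ref(int32_t a, int32_t b) {
    int64_t product = (int64_t)a * (int16_t)b;  /* at most 48 significant bits */
    return (int32_t)(product >> 16);            /* bits [47:16], overflow preserved */
  }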
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220196 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Make use of isAtLeastRelease/Acquire in the ARM/AArch64 backends
These helper functions are introduced in D4844.
Depends D4844
Test Plan: make check-all passes
Reviewers: jfb
Subscribers: aemerson, llvm-commits, mcrosier, reames
Differential Revision: http://reviews.llvm.org/D4937
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215902 91177308-0d34-0410-b5e6-96231b3b80d8
These are system-only instructions for CPUs with virtualization
extensions, allowing a hypervisor easy access to all of the various
different AArch32 registers.
rdar://problem/17861345
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@215700 91177308-0d34-0410-b5e6-96231b3b80d8
Although the final shifter operand is a rotate, this actually only matters for
the half-word extends when the amount == 24. Otherwise folding a shift in is
just as good.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213753 91177308-0d34-0410-b5e6-96231b3b80d8
The post-indexed instructions were missing the constraint, causing unpredictable STRH instructions to be emitted.
The earlyclobber constraint on the pre-indexed STR instructions is not strictly necessary: instruction selection for pre-indexed STR instructions goes through an additional layer of pseudo instructions which have the constraint defined. However, it doesn't hurt to specify the constraint directly on the pre-indexed instructions as well, since at some point someone might create instances of them programmatically, and then the constraint is definitely needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213729 91177308-0d34-0410-b5e6-96231b3b80d8
The post-indexed instructions were missing the constraint, causing unpredictable STR instructions to be emitted.
The earlyclobber constraint on the pre-indexed STR instructions is not strictly necessary: instruction selection for pre-indexed STR instructions goes through an additional layer of pseudo instructions which have the constraint defined. However, it doesn't hurt to specify the constraint directly on the pre-indexed instructions as well, since at some point someone might create instances of them programmatically, and then the constraint is definitely needed.
This fixes PR20323.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213369 91177308-0d34-0410-b5e6-96231b3b80d8
subtarget. This involved having the movt predicate take the current
function: since instruction selection cares about code size when deciding
whether or not to use movw/movt, the predicate needs the function so it
can check its attributes. This required adding the current MachineFunction
to FastISel and propagating it through.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212309 91177308-0d34-0410-b5e6-96231b3b80d8
Adds support for __builtin_arm_isb. Also corrects the modelling of the DSB and
ISB instructions by adding the has-side-effects property.
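A minimal usage sketch, assuming the clang-side builtin takes the standard
barrier option encoding (15 = full system, SY):

  /* Force completion of context-altering operations before any
     subsequent instructions are fetched. */
  void pipeline_sync(void) {
    __builtin_arm_isb(15);  /* ISB SY */
  }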
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212276 91177308-0d34-0410-b5e6-96231b3b80d8
Strictly, it's unpredictable. But we don't quite model that yet and an error is
better than ignoring the issue. This one somehow got left out before though.
rdar://problem/15997748
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211490 91177308-0d34-0410-b5e6-96231b3b80d8
The armv7-windows-itanium environment is nearly identical to the MSVC ABI. It
has a few divergences, mostly revolving around the use of the Itanium ABI for
C++. VLA support is one such extension.
This adds support for proper VLA emission for this environment. This is
somewhat similar to the handling for __chkstk emission on X86 and the large
stack frame emission for ARM. The invocation style for chkstk is still
controlled via the -mcmodel flag to clang.
Make an explicit note that this is an extension.
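As a hypothetical illustration of the code this affects, any runtime-sized
VLA turns into a dynamic stack allocation that has to go through the
chkstk-style probing described above:

  #include <stddef.h>

  /* Assumes n > 0. */
  int first_copy(const int *src, size_t n) {
    int buf[n];               /* VLA: stack size only known at runtime */
    for (size_t i = 0; i < n; ++i)
      buf[i] = src[i];
    return buf[0];
  }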
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210489 91177308-0d34-0410-b5e6-96231b3b80d8
This intrinsic permits the emission of platform specific undefined sequences.
ARM has reserved the 0xde opcode which takes a single integer parameter (ignored
by the CPU). This permits the operating system to implement custom behaviour on
this trap. The llvm.arm.undefined intrinsic is meant to provide a means for
generating the target specific behaviour from the frontend. This is
particularly useful for Windows on ARM which has made use of a series of these
special opcodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209390 91177308-0d34-0410-b5e6-96231b3b80d8
To get at least one use of the change (and some actual tests) in with its
commit, I've enabled the AArch64 & ARM64 NEON mov aliases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208867 91177308-0d34-0410-b5e6-96231b3b80d8
The UDF instruction occupies a reserved undefined instruction space. The assembler
mnemonic was introduced with ARM ARM rev C.a. The instruction is not predicated
and the immediate constant is ignored by the CPU. Add support for the three
encodings of this instruction.
The changes to the invalid instruction test are due to the fact that the invalid
instructions actually overlap with the undefined instruction. Introduction of
the new instruction results in a partial decode as an undefined sequence. Drop
the tests as they are invalid instruction patterns anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208751 91177308-0d34-0410-b5e6-96231b3b80d8
The current patterns for REV16 miss most __builtin_bswap16() uses because
legalization promotes the operands from load/stores to i32s and then
truncates/extends them. This patch adds new patterns that catch the resultant
DAGs and codegen them to rev16 instructions. Tests included.
rdar://15353652
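A hypothetical example of the source pattern this is aimed at (the real
coverage is in the added tests):

  #include <stdint.h>

  uint16_t load_swapped(const uint16_t *p) {
    /* The i16 load is promoted to i32 during legalization; the new
       patterns recognise the resulting trunc/extend DAG as a rev16. */
    return __builtin_bswap16(*p);
  }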
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208620 91177308-0d34-0410-b5e6-96231b3b80d8
This intrinsic is no longer needed with the new @llvm.arm.hint(i32) intrinsic
which provides a generic, extensible manner for adding hint instructions. This
functionality can now be represented as @llvm.arm.hint(i32 5).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207246 91177308-0d34-0410-b5e6-96231b3b80d8
Introduce the llvm.arm.hint(i32) intrinsic, which can be used to inject hints into
the instruction stream. This is particularly useful for generating IR from a
compiler where the user may inject an intrinsic (e.g. __yield). These are then
pattern-matched to the corresponding instructions, which already existed.
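For example, ACLE's __yield() is the kind of frontend intrinsic this is aimed
at; assuming <arm_acle.h> is available, it can be funnelled through the new
intrinsic (yield corresponds to hint #1). A hypothetical sketch:

  #include <arm_acle.h>

  void spin_until_set(volatile int *flag) {
    while (!*flag)
      __yield();  /* emitted as @llvm.arm.hint(i32 1), selected to yield */
  }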
alignments on vld/vst instructions, and report errors for
alignments that are not supported.
While this is a large diff with a big test case, the changes
are very straightforward. Pretty much every vld/vst instruction
had to be touched, changing its addrmode to one of the newly
added ones, which do the proper checking for the specific
instruction.
FYI, re-committing this with a tweak so MemoryOp's default
constructor is trivial and will work with MSVC 2012. Thanks
to Reid Kleckner and Jim Grosbach for help with the tweak.
rdar://11312406
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205986 91177308-0d34-0410-b5e6-96231b3b80d8
It doesn't build with MSVC 2012, because MSVC doesn't allow union
members that have non-trivial default constructors. This change added
'SMLoc AlignmentLoc' to MemoryOp, which made MemoryOp's default ctor
non-trivial.
This reverts commit r205930.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205944 91177308-0d34-0410-b5e6-96231b3b80d8
alignments on vld/vst instructions, and report errors for
alignments that are not supported.
While this is a large diff with a big test case, the changes
are very straightforward. Pretty much every vld/vst instruction
had to be touched, changing its addrmode to one of the newly
added ones, which do the proper checking for the specific
instruction.
rdar://11312406
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205930 91177308-0d34-0410-b5e6-96231b3b80d8
Removed "GNU Assembler extension (compatibility)" definitions from ARMInstrInfo.td
Fixed ARMAsmParser::ParseInstruction GNU compatability branch, so it also works for thumb mode from now.
Added new tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205622 91177308-0d34-0410-b5e6-96231b3b80d8
Implementing this via ComputeMaskedBits has two advantages:
+ It actually works. DAGISel didn't deal with the chains properly
in the previous pattern-based solution, so those patterns never triggered.
+ The information can be used in other DAG combines, as well as for the
trivial "get rid of truncs", for example when the trunc is in a
different basic block.
rdar://problem/16227836
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205540 91177308-0d34-0410-b5e6-96231b3b80d8
The previous situation where ATOMIC_LOAD_WHATEVER nodes were expanded
at MachineInstr emission time had grown to be extremely large and
involved, to account for the subtly different code needed for the
various flavours (8/16/32/64 bit, cmpxchg/add/minmax).
Moving this transformation into the IR clears up the code
substantially, and makes future optimisations much easier:
1. an atomicrmw followed by using the *new* value can be more
efficient. As an IR pass, simple CSE could handle this
efficiently.
2. Making use of cmpxchg success/failure orderings only has to be done
in one (simpler) place.
3. The common "cmpxchg; did we store?" idiom can be exposed to
optimisation.
I intend to gradually improve this situation within the ARM backend
and make sure there are no hidden issues before moving the code out
into CodeGen to be shared with (at least ARM64/AArch64, though I think
PPC & Mips could benefit too).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205525 91177308-0d34-0410-b5e6-96231b3b80d8
The Cyclone CPU is similar to Swift for most LLVM purposes, but does have two
preferred instructions for zeroing a VFP register. This teaches LLVM about
them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205309 91177308-0d34-0410-b5e6-96231b3b80d8
We've already got versions without the barriers, so this just adds IR-level
support for generating the new v8 ones.
rdar://problem/16227836
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204813 91177308-0d34-0410-b5e6-96231b3b80d8
ATOMIC_STORE operations always get here as a lowered ATOMIC_SWAP, so there's no
need for any code to handle them specially.
There should be no functionality change so no tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203567 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This commit gives an address mode to the PLD instruction. We
were getting an assertion failure in the frame lowering code
because we had code that was doing a pld of a stack allocated
address. The frame lowering was checking the address mode and
then asserting because pld had none defined.
This commit fixes pld for arm mode. There was a previous fix for
thumb mode in a separate commit. The commit for thumb mode
added a test in a separate file because it would otherwise fail
for arm. This commit moves the thumb test back into the prefetch.ll
file and adds the corresponding arm test.
Differential Revision: http://llvm-reviews.chandlerc.com/D2622
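A reduced, hypothetical C version of the failing case (the real regression
test lives in prefetch.ll): prefetching a stack-allocated address, which
reaches frame lowering as a pld of a stack slot.

  void prefetch_stack(void) {
    char buf[64];
    __builtin_prefetch(buf);  /* pld of a stack-allocated address */
    /* ... buf is then used ... */
  }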
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200248 91177308-0d34-0410-b5e6-96231b3b80d8
With constant-sharing, litpool loads consume 4 + N*2 bytes of code, but
movw/movt pairs consume 8*N. This means litpools are better than movw/movt even
with just one use (6 bytes versus 8). Other materialisation strategies can still
be better though, so the logic is a little odd.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199891 91177308-0d34-0410-b5e6-96231b3b80d8
Fix MLA defs to use register class GPRnopc.
Add encoding tests for multiply instructions.
(Alias for MUL/SMLAL/UMLAL added by r199026.)
Patch by Zhaoshi.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199491 91177308-0d34-0410-b5e6-96231b3b80d8
The implicit immediate 0 forms are assembly aliases, not distinct instruction
encodings. Convert the initial implementation introduced in r198914 into an alias
to avoid two separate instruction definitions for the same encoding.
An InstAlias is insufficient in this case due to the need to add an additional
operand for the implicit zero. By using an AsmPseudoInst instead, we fall back to
the C++ code to transform the instruction to the equivalent _POST_IMM form,
inserting the additional implicit immediate 0.
The disassembler would no longer be able to disambiguate between the two
variants (explicit immediate #0 vs implicit, omitted #0) of the ldrt, strt,
ldrbt, strbt mnemonics, as both versions pointed to the same disassembler routine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198944 91177308-0d34-0410-b5e6-96231b3b80d8
The GNU assembler has an extension that allows the paired register (dt2) to be
elided for the LDRD and STRD mnemonics. Add support for this in the assembly
parser, canonicalising the elided form to the fully specified version during
instruction parsing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198915 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM ARM indicates the mnemonics as follows:
ldrbt{<c>}{<q>} <Rt>, [<Rn>] {, #+/-<imm>}
ldrt{<c>}{<q>} <Rt>, [<Rn>] {, #+/-<imm>}
strbt{<c>}{<q>} <Rt>, [<Rn>] {, #<imm>}
strt{<c>}{<q>} <Rt>, [<Rn>] {, #+/-<imm>}
This improves the parser to deal with the implicit immediate 0 for the mnemonics
as per the specification.
Thanks to Joerg Sonnenberger for the tests!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198914 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM backend has been using most of the MachO related subtarget
checks almost interchangeably, and since the only target it's had to
run on has been IOS (which is all three of MachO, Darwin and IOS) it's
worked out OK so far.
But we'd like to support embedded targets under the "*-*-none-macho"
triple, which means everything starts falling apart and inconsistent
behaviours emerge.
This patch should pick a reasonably sensible set of behaviours for the
new triple (and any others that come along, with luck). Some choices
were debatable (notably FP == r7 or r11), but we can revisit those
later when deficiencies become apparent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198617 91177308-0d34-0410-b5e6-96231b3b80d8