Flesh out the options supported for the instruction. Shuffle tests a bit and
add entries for the rest of the options. Add an alias to handle the default
operand of "sy".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135109 91177308-0d34-0410-b5e6-96231b3b80d8
Keeping the instructions in alphabetical order, just like in the ARM ARM.
Adding FIXMEs for skipped instructions when adding tests out of order.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135060 91177308-0d34-0410-b5e6-96231b3b80d8
Catch potential cascading errors on a malformed so_reg operand and bail after
the first error.
Add some tests for the diagnostics we do want.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135055 91177308-0d34-0410-b5e6-96231b3b80d8
Now works for parsing register-shifted-register and register-shifted-immediate
arithmetic instructions, including the 'rrx' (rotate right with extend) form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135049 91177308-0d34-0410-b5e6-96231b3b80d8
if (x != 0) x = 1;
if (x == 1) x = 1;
Previous codegen looks like this:
mov r1, r0
cmp r1, #1
mov r0, #0
moveq r0, #1
The naive lowering selects between two different values. It should recognize that
the test is an equality test, so this is really a conditional move rather than a select:
cmp r0, #1
movne r0, #0
rdar://9758317
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135017 91177308-0d34-0410-b5e6-96231b3b80d8
Use memory barriers to force if-conversion off for these tests instead of
the internal llc command line option ifcvt-limit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134986 91177308-0d34-0410-b5e6-96231b3b80d8
Print shifted immediate values directly rather than as a payload+shifter
value pair. This makes for more readable output assembly code, simplifies
the instruction printer, and is consistent with how Thumb immediates are
displayed.
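As an illustrative sketch (the constants and exact old syntax here are my own,
not taken from the commit), a rotated immediate that used to print as a
payload/rotation pair along the lines of:
mov r0, #255, #8        @ 0xff rotated right by 8
would now print as the resolved value:
mov r0, #0xff000000     @ or its decimal equivalent, depending on the printer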
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134902 91177308-0d34-0410-b5e6-96231b3b80d8
and MCSubtargetInfo.
- Added methods to update subtarget features (used when targets automatically
detect subtarget features or switch modes).
- Teach X86Subtarget to update MCSubtargetInfo feature bits since the
MCSubtargetInfo layer can be shared with other modules.
- This fixes .code 16 / .code 32 support, since the mode switch is updated in
MCSubtargetInfo so the MC code emitter can do the right thing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134884 91177308-0d34-0410-b5e6-96231b3b80d8
This patch brings numerous advantages to LLVM. One way to look at it
is through diffstat:
109 files changed, 3005 insertions(+), 5906 deletions(-)
Removing almost 3K lines of code is a good thing. Other advantages
include:
1. Value::getType() is a simple load that can be CSE'd, not a mutating
union-find operation.
2. Types are uniqued and never move once created, defining away PATypeHolder.
3. Structs can be "named" now, and their name is part of the identity that
uniques them. This means that the compiler doesn't merge them structurally,
which makes the IR much less confusing.
4. Now that there is no way to get a cycle in a type graph without a named
struct type, "upreferences" go away.
5. Type refinement is completely gone, which should make LTO much MUCH faster
in some common cases with C++ code.
6. Types are now generally immutable, so we can use "Type *" instead of
"const Type *" everywhere.
One downside of this patch is that it removes some functions from the C API,
so people using those will have to upgrade to the (not yet added) new API.
"LLVM 3.0" is the right time to do this.
There are still some cleanups pending after this, but this patch is large enough
as-is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134829 91177308-0d34-0410-b5e6-96231b3b80d8
When Lit (rather than bash) runs a test, multiple '>%t' redirects may each
open %t in write mode, clobbering earlier output. This can be avoided if the
later redirect is '>>%t'.
It might work even if ">/dev/null" were used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134814 91177308-0d34-0410-b5e6-96231b3b80d8
Try to move spills as early as possible in their basic block. This can
help eliminate interferences by shortening the live range being
spilled.
This fixes PR10221.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134776 91177308-0d34-0410-b5e6-96231b3b80d8
The normal tBX instruction is predicable, so there's no reason the
pseudos for using it as a return shouldn't be. Gives us some nice code-gen
improvements as can be seen in the test changes. In particular, several
tests now have to disable if-conversion because it works too well and defeats
the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134746 91177308-0d34-0410-b5e6-96231b3b80d8
RAGreedy::tryAssign will now evict interference from the preferred
register even when another register is free.
To support this, add the EvictionCost struct that counts how many hints
are broken by an eviction. We don't want to break one hint just to
satisfy another.
Rename canEvict to shouldEvict, and add the first bit of eviction policy
that doesn't depend on spill weights: Always make room in the preferred
register as long as the evictees can be split and aren't already
assigned to their preferred register.
Also make the CSR avoidance more accurate. When looking for a cheaper
register it is OK to use a new volatile register. Only CSR aliases that
have never been used before should be avoided.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134735 91177308-0d34-0410-b5e6-96231b3b80d8
We have to do this in DAGBuilder instead of DAGCombiner, because the exact bit is lost after building.
struct foo { char x[24]; };
long bar(struct foo *a, struct foo *b) { return a-b; }
is now compiled into
movl 4(%esp), %eax
subl 8(%esp), %eax
sarl $3, %eax
imull $-1431655765, %eax, %eax
instead of
movl 4(%esp), %eax
subl 8(%esp), %eax
movl $715827883, %ecx
imull %ecx
movl %edx, %eax
shrl $31, %eax
sarl $2, %edx
addl %eax, %edx
movl %edx, %eax
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134695 91177308-0d34-0410-b5e6-96231b3b80d8
It was testing a linear scan feature:
Test if linearscan is unfavoring registers for allocation to allow
more reuse of reloads from stack slots.
The greedy register allocator doesn't access any stack slots in this
function, so the linear scan feature was not being tested.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134666 91177308-0d34-0410-b5e6-96231b3b80d8
The promotion code lost any alignment information when hoisting loads and
stores out of the loop. This led to incorrectly aligned memory accesses. We now
use the largest alignment we can prove to be correct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134520 91177308-0d34-0410-b5e6-96231b3b80d8
Remat during spilling triggers dead code elimination. If a phi-def
becomes unused, that may also cause live ranges to split into separate
connected components.
This type of splitting is different from normal live range splitting. In
particular, there may not be a common original interval.
When the split range is its own original, make sure that the new
siblings are also their own originals. The range being split cannot be
used as an original since it doesn't cover the new siblings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134413 91177308-0d34-0410-b5e6-96231b3b80d8
makes one of the tests actually mean something (as the string 'add' will
always appear in the output of this file).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134358 91177308-0d34-0410-b5e6-96231b3b80d8
outside the loop and reducible.
This more completely hides them from LSR, which isn't usually able to
do anything meaningful with non-affine expressions anyway, and this
consequently hides them from SCEVExpander, which is acutely unprepared
for non-affine expressions.
Replace test/CodeGen/X86/lsr-nonaffine.ll with a new test that tests
the new behavior.
This works around the bug in PR10117 / rdar://problem/9633149, and is
generally an improvement besides.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134268 91177308-0d34-0410-b5e6-96231b3b80d8
The DSP instructions in the Thumb2 instruction set are an optional extension
in the Cortex-M* architecture. When present, the implementation is considered
an "ARMv7E-M implementation," and when not, an "ARMv7-M implementation."
Add a subtarget feature hook for the v7e-m instructions and hook it up. The
cortex-m3 cpu is an example of a v7m implementation, while the cortex-m4 is
a v7e-m implementation.
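For example (my own illustrative picks, not from the commit), DSP instructions
such as these should be accepted for cortex-m4 but rejected for cortex-m3:
qadd  r0, r1, r2        @ saturating add, DSP extension
smlad r0, r1, r2, r3    @ dual 16-bit multiply-accumulate, DSP extension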
rdar://9572992
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134261 91177308-0d34-0410-b5e6-96231b3b80d8
We would put the return value from long double functions in the wrong
register.
This fixes gcc.c-torture/execute/conversion.c
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134205 91177308-0d34-0410-b5e6-96231b3b80d8
Fix a FIXME and allow predication (in Thumb2) for the T1 register to
register MOV instructions. This allows some better codegen with
if-conversion (as seen in the test updates), plus it lays the groundwork
for pseudo-izing the tMOVCC instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134197 91177308-0d34-0410-b5e6-96231b3b80d8
It's just a t2LDMIA_UPD instruction with extra codegen properties, so it
doesn't need the encoding information. As a side benefit, it is now correctly
recognized for instruction printing as a 'pop' instruction.
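For instance (a sketch, not a case taken from the commit), a load-multiple
with writeback from SP such as:
ldmia.w sp!, {r4, r5, r7, pc}
should now be printed in its canonical 'pop' form:
pop.w {r4, r5, r7, pc}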
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134173 91177308-0d34-0410-b5e6-96231b3b80d8
already makes the assumption, which is correct on ARM, that a type's alignment is
less than its alloc size. This improves codegen with Clang (which inserts a lot of
extraneous alignment specifiers) and fixes <rdar://problem/9695089>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134106 91177308-0d34-0410-b5e6-96231b3b80d8
For example, ".byte 256" would previously assert() when emitting an object
file. Now it generates a diagnostic that the literal value is out of range.
rdar://9686950
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134069 91177308-0d34-0410-b5e6-96231b3b80d8
Drop the FpMov instructions, use plain COPY instead.
Drop the FpSET/GET instruction for accessing fixed stack positions.
Instead use normal COPY to/from ST registers around inline assembly, and
provide a single new FpPOP_RETVAL instruction that can access the return
value(s) from a call. This is still necessary since you cannot tell from
the CALL instruction alone if it returns anything on the FP stack. Teach
fast isel to use this.
This provides a much more robust way of handling fixed stack registers -
we can tolerate arbitrary FP stack instructions inserted around calls
and inline assembly. Live range splitting could sometimes break x87 code
by inserting spill code in unfortunate places.
As a bonus we handle floating point inline assembly correctly now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134018 91177308-0d34-0410-b5e6-96231b3b80d8
opening single quote with no closing single quote, and with {} quotes
"inside" of it. This broke some of our tools that scrape test cases.
Also, while here, make the test actually assert what the comment says it
asserts. This was essentially authored by Nick Lewycky, and merely typed
in by myself. Let me know if this is still missing the mark, but the
previous test only succeeded due to the improper quoting preventing
*anything* from matching the grep -- it had a '4(%...)' sequence in the
output!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133980 91177308-0d34-0410-b5e6-96231b3b80d8
When the destination operand is the same as the first source register
operand for arithmetic instructions, the destination operand may be omitted.
For example, the following two instructions are equivalent:
and r1, #0xff
and r1, r1, #0xff
rdar://9672867
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133973 91177308-0d34-0410-b5e6-96231b3b80d8
Correctly parse the forms of the Thumb mov-immediate instruction:
1. 8-bit immediate 0-255.
2. 12-bit shifted-immediate.
The 16-bit immediate "movw" form is also legal with just a "mov" mnemonic,
but is not yet supported. More parser logic is necessary there due to fixups.
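A sketch of the two forms now handled (constants chosen for illustration):
mov r0, #200            @ form 1: plain 8-bit immediate, 0-255
mov r0, #0xff00         @ form 2: 12-bit shifted (modified) immediate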
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133966 91177308-0d34-0410-b5e6-96231b3b80d8
Add aliases for the vpush/vpop mnemonics to the VFP load/store multiple
writeback instructions with SP as the base pointer.
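My reading of the aliases (the register list is just an example):
vpush {d8-d15}          @ alias for: vstmdb sp!, {d8-d15}
vpop  {d8-d15}          @ alias for: vldmia sp!, {d8-d15}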
rdar://9683231
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133932 91177308-0d34-0410-b5e6-96231b3b80d8
When the destination operand is the same as the first source register
operand for arithmetic instructions, the destination operand may be omitted.
For example, the following two instructions are equivalent:
sub r2, r2, #6
sub r2, #6
rdar://9682597
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133925 91177308-0d34-0410-b5e6-96231b3b80d8
Also fix some of the tests that were actually testing wrong behavior:
an input operand in {st} is only popped by the inline asm when {st} is
also in the clobber list.
The original bug reports all had ~{st} clobbers as they should.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133916 91177308-0d34-0410-b5e6-96231b3b80d8
alloca that only holds a copy of a global and we're going to replace the users
of the alloca with that global, just nuke the lifetime intrinsics. Part of
PR10121.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133905 91177308-0d34-0410-b5e6-96231b3b80d8
The .b8 operations in PTX are far more limited than I first thought. The mov
operation isn't even supported, so there's no way of converting a .pred value
into a .b8 without going via .b16, which is not sensible. An improved
implementation needs to use the fact that loads and stores automatically extend
and truncate to implement support for EXTLOAD and TRUNCSTORE, in order to
correctly support boolean values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133873 91177308-0d34-0410-b5e6-96231b3b80d8