The "half vectors" are now widened to full size by the legalizer.
The only exception is in parameter passing, where half vectors are
expanded. This causes changes to some dejagnu tests.
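As a rough sketch of what this means (hypothetical code using GCC/Clang vector
extensions, not taken from the tests): a 64-bit "half vector" is now computed
on as a full 128-bit register inside the function body, and only as a
parameter is it still broken up into scalar elements.

/* Hypothetical 64-bit "half vector" (half of a 128-bit SPU register). */
typedef short v4i16 __attribute__((vector_size(8)));

v4i16 add_half(v4i16 a, v4i16 b) {  /* parameters: expanded into scalars */
  return a + b;                     /* body: widened to a full vector */
}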
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111360 91177308-0d34-0410-b5e6-96231b3b80d8
"SPU Application Binary Interface Specification, v1.9" by
IBM.
Specifically: use r3-r74 to pass parameters and the return value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111358 91177308-0d34-0410-b5e6-96231b3b80d8
where the step value is an induction variable from an outer loop, to
avoid trouble trying to re-expand such expressions. This effectively
hides such expressions from indvars and lsr, which prevents them
from getting into trouble.
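A minimal sketch of the kind of loop nest this is about (illustrative only,
not an actual test): the inner loop's step is the outer loop's induction
variable, and the resulting recurrences are what now get hidden from indvars
and lsr.

/* The inner loop's step is 'i', the induction variable of the outer loop. */
void scale(float *a, int n) {
  for (int i = 1; i < n; ++i)       /* outer induction variable */
    for (int j = 0; j < n; j += i)  /* step comes from the outer loop */
      a[j] *= 2.0f;
}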
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111317 91177308-0d34-0410-b5e6-96231b3b80d8
printing "lsl #0". This fixes the remaining parts of pr7792. Make
corresponding changes for encoding/decoding these instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111251 91177308-0d34-0410-b5e6-96231b3b80d8
- Make foldMemoryOperandImpl aware of folding 256-bit zero vectors, and support the AVX forms of the 128-bit counterparts too.
- Make sure MOV[AU]PS instructions are only selected when SSE1 is enabled, and duplicate the patterns to match AVX.
- Add a testcase for a simple 128-bit zero vector creation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110946 91177308-0d34-0410-b5e6-96231b3b80d8
term goal here is to be able to match enough of vector_shuffle and build_vector
so that all AVX intrinsics which aren't mapped to their own built-ins, but to
shufflevector calls, can be codegen'd. This is the first (baby) step: support
building zeroed vectors.
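For example (a hedged illustration using intrinsics, not the actual testcase),
these are the zeroed 128-bit and 256-bit vectors this first step lets us
select:

#include <immintrin.h>

/* Zero vectors built from an all-zeros build_vector; the 256-bit case is
   the new AVX-sized one. */
__m128 zero128(void) { return _mm_setzero_ps(); }
__m256 zero256(void) { return _mm256_setzero_ps(); }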
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110897 91177308-0d34-0410-b5e6-96231b3b80d8
platform. It's apparently "bl __muldf3" on linux, for example. Since that's
not what we're checking here, it's more robust to just force a triple. We
just want to check that the inline FP instructions are only generated
on cpus that have them."
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110830 91177308-0d34-0410-b5e6-96231b3b80d8
float t1(int argc) {
  return (argc == 1123) ? 1.234f : 2.38213f;
}
We would generate truly awful code on ARM (those with a weak stomach should look
away):
_t1:
    movw r1, #1123
    movs r2, #1
    movs r3, #0
    cmp r0, r1
    mov.w r0, #0
    it eq
    moveq r0, r2
    movs r1, #4
    cmp r0, #0
    it ne
    movne r3, r1
    adr r0, #LCPI1_0
    ldr r0, [r0, r3]
    bx lr
The problem was that legalization was creating a cascade of SELECT_CC nodes for
the comparison of "argc == 1123", which was fed into a SELECT node for the ?:
operator, which was itself converted to a SELECT_CC node. This is because the
ARM back-end doesn't have custom lowering for SELECT nodes, so it used the
default "Expand".
I added a fairly simple "LowerSELECT" to the ARM back-end. It takes care of this
testcase, but can obviously be expanded to include more cases.
Now we generate this, which looks optimal to me:
_t1:
    movw r1, #1123
    movs r2, #0
    cmp r0, r1
    adr r0, #LCPI0_0
    it eq
    moveq r2, #4
    ldr r0, [r0, r2]
    bx lr
    .align 2
LCPI0_0:
    .long 1075344593 @ float 2.382130e+00
    .long 1067316150 @ float 1.234000e+00
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110799 91177308-0d34-0410-b5e6-96231b3b80d8
memory and synchronization barrier dmb and dsb instructions.
- Change instruction names to something more sensible (matching the names of
  the actual instructions).
- Added tests for memory barrier codegen; a small usage sketch follows below.
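As a rough usage sketch (assuming the generic GCC/Clang builtin; not one of
the added tests), a full barrier like the one below is what can now be emitted
as a single dmb on cores that have the instruction.

/* Publish data, then set a flag; the barrier between the two stores can be
   lowered to a dmb on ARM cores with the barrier instructions. */
void publish(int *data, volatile int *ready) {
  *data = 42;
  __sync_synchronize();  /* full memory barrier */
  *ready = 1;
}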
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110785 91177308-0d34-0410-b5e6-96231b3b80d8
Also added a test case to check the added benefit of this patch: it optimizes away the unnecessary restore of sp from fp in some non-leaf functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110707 91177308-0d34-0410-b5e6-96231b3b80d8
reserved, not available for general allocation. This eliminates all the
extra checks for Darwin.
This change also fixes the use of FP to access frame indices in leaf
functions and cleans up some confusing code in epilogue emission.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110655 91177308-0d34-0410-b5e6-96231b3b80d8
form of CMPSD (etc.). Matching a 128-bit memory
operand is wrong; the instruction uses only 64 bits
(same as ADDSD etc.). 8193553.
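A hedged illustration with intrinsics (not the actual test): the memory form
of CMPSD reads only the low 64 bits of its source, so only a single double
needs to be dereferenceable, just as with ADDSD; the compiler may fold the
load below into the instruction's memory operand.

#include <emmintrin.h>

/* Scalar double-precision compare against a value loaded from memory;
   cmpsd's memory form reads just 8 bytes at 'p'. */
__m128d cmp_lt_mem(__m128d a, const double *p) {
  return _mm_cmplt_sd(a, _mm_load_sd(p));
}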
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110491 91177308-0d34-0410-b5e6-96231b3b80d8
Without this, what was happening (for a variadic function like the sketch below) was:
* R3 is not marked as "used"
* ARM backend thinks it has to save it to the stack because of vaarg
* Offset computation correctly ignores it
* Offsets are wrong
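The sketch referred to above (hypothetical, not the actual test): a variadic
function with a single fixed argument, so on ARM the remaining argument
registers, including R3, are spilled to the stack for va_arg to walk.

#include <stdarg.h>

/* One fixed argument: r1-r3 carry the first variadic values on ARM, so the
   backend saves them to the stack next to the caller's stack arguments. */
int sum(int count, ...) {
  va_list ap;
  int total = 0;
  va_start(ap, count);
  for (int i = 0; i < count; ++i)
    total += va_arg(ap, int);
  va_end(ap);
  return total;
}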
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110446 91177308-0d34-0410-b5e6-96231b3b80d8