The long-term goal here is to be able to match enough of vector_shuffle
and build_vector so that all AVX intrinsics which aren't mapped to their
own built-ins but to shufflevector calls can be codegen'd. This is the
first (baby) step: support building zeroed vectors.
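As a hedged illustration (helper names assumed; this is not the commit's
exact code), the kind of check involved during X86 lowering looks like:

  // Recognize an all-zeros BUILD_VECTOR and emit a zeroing idiom (e.g. a
  // vxorps-style all-zeros register) instead of scalarizing it.
  if (ISD::isBuildVectorAllZeros(Op.getNode()))
    return getZeroVector(Op.getValueType(), DAG, dl);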
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110897 91177308-0d34-0410-b5e6-96231b3b80d8
The entry for ARM STRBT is actually a super-instruction for A8.6.199
STRBT A1 & A2. Recover by looking for the ARM:USAT encoding pattern
before delegating to the auto-generated decoder.
Added a "usat" test case to arm-tests.txt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110894 91177308-0d34-0410-b5e6-96231b3b80d8
When a register is defined by a partial load:
%reg1234:sub_32 = MOV32mr <fi#-1>; GR64:%reg1234
That load cannot be folded into an instruction using the full 64-bit
register, because the folded load would become a 64-bit load.
This is related to the recent change to have isLoadFromStackSlot return false on
a sub-register load.
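A minimal sketch of the guard this implies (operand position and variable
names assumed, not the exact patch):

  // The load defines only a sub-register; folding it into a user of the
  // full 64-bit register would silently widen the 32-bit memory access.
  if (LoadMI->getOperand(0).getSubReg())
    return 0; // refuse to fold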
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110874 91177308-0d34-0410-b5e6-96231b3b80d8
- remove ashr, which never worked.
- fix lshr and shl, and add tests.
- remove the dead function "intersect1Wrapped".
- add a new sub method to subtract ranges, with a test (see the usage
  sketch below).
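A hedged usage sketch of the new sub method (the header path and exact
result shown are assumptions):

  #include "llvm/ADT/APInt.h"
  #include "llvm/Support/ConstantRange.h"
  using namespace llvm;

  int main() {
    ConstantRange A(APInt(8, 4), APInt(8, 10)); // the half-open set [4, 10)
    ConstantRange B(APInt(8, 1), APInt(8, 3));  // [1, 3)
    // A.sub(B) covers every a - b: here {4..9} - {1, 2} = {2..8} = [2, 9).
    ConstantRange D = A.sub(B);
    return D.contains(APInt(8, 5)) ? 0 : 1;     // 5 is in range, returns 0
  }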
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110861 91177308-0d34-0410-b5e6-96231b3b80d8
There aren't that many of these things, so the memory savings isn't
significant, and there are now situations where there can be alignments
greater than 128.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110836 91177308-0d34-0410-b5e6-96231b3b80d8
This avoids trouble if the return type of TD->getPointerSize() is
changed to something which doesn't promote to a signed type, and it is
simpler anyway.
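A hypothetical illustration of the promotion hazard (values invented): a
return type that promotes to a signed int negates correctly, while one
that doesn't (e.g. plain unsigned) wraps first.

  unsigned PtrSize = 8;                  // pretend getPointerSize() returned unsigned
  long long Bad  = -PtrSize;             // wraps to 4294967288 before widening
  long long Good = -(long long)PtrSize;  // cast first: -8, as intended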
Also, use getCopyFromReg instead of getRegister to read a
physical register's value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110835 91177308-0d34-0410-b5e6-96231b3b80d8
The exact code generated here depends on the target platform; it's
apparently "bl __muldf3" on linux, for example. Since that's not what
we're checking here, it's more robust to just force a triple. We just
want to check that the inline FP instructions are only generated on CPUs
that have them.
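For instance, the test could pin the target with a RUN line along these
lines (the exact triple and flags are just an example):

  ; RUN: llc < %s -mtriple=armv7-apple-darwin | FileCheck %s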
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110830 91177308-0d34-0410-b5e6-96231b3b80d8
float t1(int argc) {
return (argc == 1123) ? 1.234f : 2.38213f;
}
We would generate truly awful code on ARM (those with a weak stomach should look
away):
_t1:
movw r1, #1123
movs r2, #1
movs r3, #0
cmp r0, r1
mov.w r0, #0
it eq
moveq r0, r2
movs r1, #4
cmp r0, #0
it ne
movne r3, r1
adr r0, #LCPI1_0
ldr r0, [r0, r3]
bx lr
The problem was that legalization was creating a cascade of SELECT_CC
nodes for the comparison of "argc == 1123", which was fed into a SELECT
node for the ?: expression, which was itself converted to a SELECT_CC
node. This is because the ARM back-end doesn't have custom lowering for
SELECT nodes, so it used the default "Expand".
I added a fairly simple "LowerSELECT" to the ARM back-end. It takes care of this
testcase, but can obviously be expanded to include more cases.
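A hedged sketch of one way such a hook can work (this is not the commit's
actual code): compare the condition against zero once and emit a single
SELECT_CC, rather than letting the default "Expand" build a cascade.

  SDValue LowerSELECTSketch(SDValue Op, SelectionDAG &DAG) {
    DebugLoc dl = Op.getDebugLoc();
    SDValue Cond = Op.getOperand(0);
    SDValue TVal = Op.getOperand(1);
    SDValue FVal = Op.getOperand(2);
    // select c, t, f  ==>  select_cc (c != 0), t, f
    return DAG.getNode(ISD::SELECT_CC, dl, Op.getValueType(),
                       Cond, DAG.getConstant(0, Cond.getValueType()),
                       TVal, FVal, DAG.getCondCode(ISD::SETNE));
  }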
Now we generate this, which looks optimal to me:
_t1:
movw r1, #1123
movs r2, #0
cmp r0, r1
adr r0, #LCPI0_0
it eq
moveq r2, #4
ldr r0, [r0, r2]
bx lr
.align 2
LCPI0_0:
.long 1075344593 @ float 2.382130e+00
.long 1067316150 @ float 1.234000e+00
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110799 91177308-0d34-0410-b5e6-96231b3b80d8
- Add support for the memory and synchronization barrier dmb and dsb
  instructions.
- Change instruction names to something more sensible (matching the names
  of the actual instructions).
- Add tests for memory barrier codegen (see the example below).
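For illustration, the barriers now print under their architectural
mnemonics, e.g. (the option choices here are only examples):

  dmb ish @ data memory barrier, inner shareable domain
  dsb sy  @ data synchronization barrier, full system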
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110785 91177308-0d34-0410-b5e6-96231b3b80d8
For some reason they have a very odd MCInst form where the operands
overlap, but I haven't dug in to find out why yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110781 91177308-0d34-0410-b5e6-96231b3b80d8