This updates the check for a double-precision floating-point zero constant to
allow use of an instruction with an immediate value rather than a temporary
register. Currently "a == 0.0", where "a" is of type "double", generates:
vmov.i32 d16, #0x0
vcmpe.f64 d0, d16
With this change it becomes:
vcmpe.f64 d0, #0
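For reference, a source-level comparison of this shape is what produces the
sequence above (a minimal sketch; the function name is illustrative):

  bool is_zero(double a) {
    return a == 0.0;  // compare double-precision value against constant zero
  }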
Patch by Sergey Dmitrouk.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220486 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, the ARM disassembler will disassemble the Thumb2 memory hint
instructions (PLD, PLDW and PLI), even for targets which do not have
these instructions. This patch adds the required checks to the
disassembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220472 91177308-0d34-0410-b5e6-96231b3b80d8
A previous patch enabled SELECT_VSRC and SELECT_CC_VSRC for VSX to
handle <2 x double> cases. This patch adds SELECT_VSFRC and
SELECT_CC_VSFRC to allow use of all 64 vector-scalar registers for the
f64 type when VSX is enabled. The changes are analogous to those in
the previous patch. I've added a new variant to vsx.ll to test the
code generation.
(I also cleaned up a little formatting in PPCInstrVSX.td from the
previous patch.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220395 91177308-0d34-0410-b5e6-96231b3b80d8
The tests test/CodeGen/Generic/select-cc.ll and
test/CodeGen/PowerPC/select-cc.ll both fail with VSX enabled. The
problem is that the lowering logic for the SELECT and SELECT_CC
operations doesn't currently support the VSX registers. This patch
fixes that.
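A minimal example of the kind of select these tests exercise (illustrative
only, using the Clang/GCC vector extension; not the exact test case):

  typedef double v2f64 __attribute__((vector_size(16)));

  v2f64 sel(bool c, v2f64 a, v2f64 b) {
    return c ? a : b;  // becomes a SELECT/SELECT_CC on <2 x double>
  }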
In lib/Target/PowerPC/PPCInstrInfo.td, we have pseudos to handle this
for other register classes. Similar pseudos are added in
PPCInstrVSX.td (they must be there, because the "vsrc" register class
definition appears there) for the VSRC register class. The
SELECT_VSRC pseudo is then used in pattern matching for SELECT_CC.
The rest of the patch just adds logic for SELECT_VSRC wherever similar
logic appears for SELECT_VRRC.
There are no new test cases because the existing tests above test
this, along with a variant in test/CodeGen/PowerPC/vsx.ll.
After discussion with Hal, a future patch will add similar _VSFRC
variants to override f64 type handling (currently using F8RC).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220385 91177308-0d34-0410-b5e6-96231b3b80d8
This enables targets to adapt their pass pipeline to the register
allocator in use. For example, in the AArch64 backend, when PBQP is used for
the cortex-a57, the FPLoadBalancing pass is no longer necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220321 91177308-0d34-0410-b5e6-96231b3b80d8
With VSX enabled, test/CodeGen/PowerPC/recipest.ll exposes a bug in
the FMA mutation pass. If we have a situation where a killed product
register is the same register as the FMA target, such as:
%vreg5<def,tied1> = XSNMSUBADP %vreg5<tied0>, %vreg11, %vreg5,
%RM<imp-use>; VSFRC:%vreg5 F8RC:%vreg11
then the substitution makes no sense. We end up getting a crash when
we try to extend the interval associated with the killed product
register, as there is already a live range for %vreg5 there. This
patch just disables the mutation under those circumstances.
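The guard amounts to a check of this shape (a sketch with assumed variable
names, not the exact code):

  // If the killed product register is also the register the FMA defines,
  // substituting it would clash with the existing live range; skip it.
  if (KilledProdReg == FMADefReg)
    continue;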
Since recipest.ll generates different code with VSX enabled, I've
modified that test to use -mattr=-vsx. I've borrowed the code from
that test that exposed the bug and placed it in fma-mutate.ll, where
it tests several mutation opportunities including the "bad" one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220290 91177308-0d34-0410-b5e6-96231b3b80d8
The 32-bit variants of the NEON scalar<->GPR move instructions are
also available in VFPv2. The 8- and 16-bit variants do require NEON.
Note that the checks in the test file are all -DAG because they are
checking a mixture of stdout and stderr, and the ordering is not
guaranteed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220288 91177308-0d34-0410-b5e6-96231b3b80d8
The Thumb2 LDRS?[BH] instructions are not valid when the destination
register is the PC (these encodings are used for preload hints).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220278 91177308-0d34-0410-b5e6-96231b3b80d8
Every target we support accepts assembly that looks like
a = b - c
.long a
What is special about MachO is that the above combination suppresses the
production of a relocation.
With this change we avoid producing the intermediary labels when they don't
add any value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220256 91177308-0d34-0410-b5e6-96231b3b80d8
The X86 code that lowers VSELECT adjusts the bits set in the VSELECT mask when
it knows the node can be lowered into BLEND: only the high bit of each mask
element matters for BLEND, so the mask is optimized accordingly.
However, when the mask is a compile-time constant, the lowering is instead
handled by the generic optimizer, and those modifications cause the generic
optimizer to generate bad code.
This patch fixes that by preventing the optimization if the VSELECT will be
handled by the generic optimizer.
<rdar://problem/18675020>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220242 91177308-0d34-0410-b5e6-96231b3b80d8
This patch improves support for commutative instructions in the x86 memory folding implementation. If the original folding fails, it attempts to fold a commuted version of the instruction; if that folding fails as well, the instruction is 're-commuted' back to its original operand order before returning.
Updated version of r219584 (reverted in r219595) - the commutation attempt now explicitly ensures that neither of the commuted source operands is tied to the destination operand/register, which was the source of all the regressions that occurred with the original patch attempt.
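A rough sketch of the folding flow with the commutation retry (the function
and helper names here are hypothetical, not the actual LLVM API):

  MachineInstr *tryFoldMemoryOperand(MachineInstr &MI, unsigned OpNum) {
    if (MachineInstr *NewMI = tryFold(MI, OpNum))
      return NewMI;                      // original operand order folded
    // Never commute when a source operand is tied to the destination;
    // that was the source of the regressions in the original patch.
    if (hasTiedSourceOperand(MI) || !commuteOperands(MI))
      return nullptr;
    if (MachineInstr *NewMI = tryFold(MI, OpNum))
      return NewMI;                      // commuted operand order folded
    commuteOperands(MI);                 // re-commute back to original order
    return nullptr;
  }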
Added additional regression test case provided by Joerg Sonnenberger.
Differential Revision: http://reviews.llvm.org/D5818
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220239 91177308-0d34-0410-b5e6-96231b3b80d8
The previous code had a few problems, motivating the choices here.
1. It could create instructions clobbering CPSR, but the incoming MachineInstr
didn't reflect this. A potential source of corruption. This is why the patch
has a new PseudoInst for before lowering.
2. Similarly, there was some code to handle the incoming instruction not being
ARMCC::AL, but it would have caused massive problems had it actually been
invoked when a complex offset needing more than one instruction was requested.
3. It wasn't designed to handle unaligned pointers (or offsets). These should
probably be minimised anyway, but the code needs to deal with them properly
regardless.
4. It had some rather dubious ad-hoc code to avoid calling
emitThumbRegPlusImmediate, a function which should be designed to do precisely
this job.
We seem to cover the common cases correctly now, and hopefully can enhance
emitThumbRegPlusImmediate to handle any extra optimisations we need to add in
future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220236 91177308-0d34-0410-b5e6-96231b3b80d8
The current instruction selection patterns for SMULW[BT] and SMLAW[BT]
are incorrect. These instructions multiply a 32-bit and a 16-bit value
(both signed) and return the top 32 bits of the 48-bit result. This
preserves the 16 bits of overflow, whereas the patterns they currently
match truncate the result to 16 bits then sign extend.
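In C terms, the difference between what the instructions compute and what the
old patterns matched looks like this (a sketch for illustration):

  #include <stdint.h>

  // What SMULWB computes: signed 32 x 16 multiply, keeping the top 32 bits
  // of the 48-bit product, so the 16 overflow bits are preserved.
  int32_t smulwb(int32_t a, int16_t b) {
    return (int32_t)(((int64_t)a * b) >> 16);
  }

  // What the old pattern effectively matched: the same product truncated to
  // 16 bits and then sign extended, losing the overflow bits.
  int32_t old_pattern(int32_t a, int16_t b) {
    return (int32_t)(int16_t)(((int64_t)a * b) >> 16);
  }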
To select these instructions, we would need to match an ISD::SMUL_LOHI,
a sign extend, two shifts and an or. There is no way to match SMUL_LOHI
in an instruction pattern as it defines multiple values, so this would
have to be done in C++. I have raised
http://llvm.org/bugs/show_bug.cgi?id=21297 to cover adding correct
selection of these instructions.
This fixes http://llvm.org/bugs/show_bug.cgi?id=19396
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220196 91177308-0d34-0410-b5e6-96231b3b80d8
This function can, for some offsets from the SP, split one instruction
into two. Since it re-uses the original instruction as the first
instruction of the result, we need to ensure its result register is not
marked as dead before we use it in the second instruction.
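In MachineOperand terms, the fix amounts to something like this (a simplified
sketch of the idea, not the exact code):

  // MI is reused as the first of the two resulting instructions; its def
  // must stay live so the second instruction can legally read it.
  MI.getOperand(0).setIsDead(false);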
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220194 91177308-0d34-0410-b5e6-96231b3b80d8