Add support for using the FPSCR in conjunction with the vcvtr instruction, for controlling FP-to-integer rounding.
Add support for the FLT_ROUNDS_ node now that the FPSCR is exposed.
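As a rough source-level sketch (names and the exact lowering are illustrative, not taken from this patch): FLT_ROUNDS from <float.h> reports the current rounding direction, which on ARM lives in the FPSCR and is what the FLT_ROUNDS_ node models, and lrint converts using that dynamic mode, the kind of conversion vcvtr can perform.
#include <float.h>   /* FLT_ROUNDS */
#include <math.h>    /* lrint */
long convert(double x) {
  if (FLT_ROUNDS == 1)     /* 1 == round to nearest */
    return lrint(x);       /* honors the current (FPSCR) rounding mode */
  return (long)x;          /* a plain cast always truncates toward zero */
}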
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110152 91177308-0d34-0410-b5e6-96231b3b80d8
reference registers past the end of the NEON register file, and report them
as invalid instead of asserting when trying to print them. PR7746.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109933 91177308-0d34-0410-b5e6-96231b3b80d8
have 4 bits per register in the operand encoding), but have undefined
behavior when the operand value is 13 or 15 (SP and PC, respectively).
The trivial coalescer in linear scan will sometimes merge a copy from
SP into a subsequent instruction which uses the copy, and if that
instruction cannot legally reference SP, we get bad code such as:
mls r0, r9, r0, sp
instead of:
mov r2, sp
mls r0, r9, r0, r2
This patch adds a new register class for use by Thumb2 that excludes
the problematic registers (SP and PC) and is used instead of GPR
for those operands which cannot legally reference PC or SP. The
trivial coalescer explicitly requires that the register class
of the destination for the COPY instruction contain the source
register for the COPY to be considered for coalescing. This prevents
errant instructions like the one above.
PR7499
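For reference, the multiply-and-subtract above typically comes from source like this hypothetical function (mls computes Rd = Ra - Rn * Rm):
int mls_like(int a, int b, int acc) {
  return acc - a * b;
}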
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109842 91177308-0d34-0410-b5e6-96231b3b80d8
integers with mov + vdup. 8003375. This is
currently disabled by default because LICM will
not hoist a VDUP, so it pessimizes the code if
the construct occurs inside a loop (8248029).
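A hedged illustration of why the LICM interaction matters (intrinsics used only for clarity; names are made up): the splat of the loop-invariant constant below is the mov + vdup in question, and if the VDUP is not hoisted it is re-executed on every iteration.
#include <arm_neon.h>
void add_bias(int32_t *p, int n) {
  const int32_t bias = 0x12345;            /* not representable as a vmov immediate */
  for (int i = 0; i + 4 <= n; i += 4) {
    int32x4_t v = vld1q_s32(p + i);
    v = vaddq_s32(v, vdupq_n_s32(bias));   /* mov + vdup, invariant across the loop */
    vst1q_s32(p + i, v);
  }
}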
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109799 91177308-0d34-0410-b5e6-96231b3b80d8
This assumption is not satisfied due to global merging.
Work around the issue by temporarily disabling merging of const globals.
Also, ignore LLVM "special" globals. This fixes PR7716
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109423 91177308-0d34-0410-b5e6-96231b3b80d8
function live-in set. This will give us tGPR for Thumb1 and GPR otherwise,
so the copy will be spillable. rdar://8224931
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109293 91177308-0d34-0410-b5e6-96231b3b80d8
comments explaining why it was wrong. 8225024.
Fix the real problem in 8213383: the code that splits very large
blocks, when no other place to put constants can be found, was not
considering the case where the block contained a Thumb tablejump.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109282 91177308-0d34-0410-b5e6-96231b3b80d8
it's too late to start backing off aggressive latency scheduling when most
of the registers are in use, so the threshold should be a bit tighter.
- Correctly handle live-outs, extract_subreg, etc.
- Enable register pressure aware scheduling by default for hybrid scheduler.
For ARM, this is almost always a win on # of instructions. It's runtime
neutral for most of the tests. But for some kernels with high register
pressure it can be a huge win, e.g. 464.h264ref reduced the number of spills by
54 and sped up by 20%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109279 91177308-0d34-0410-b5e6-96231b3b80d8
ARM/PPC/MSP430-specific code (the only targets that
implement the hook) can directly reference their target-specific
InstrInfo classes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109171 91177308-0d34-0410-b5e6-96231b3b80d8
This is probably not the best way to implement "Force LR to
be spilled if the Thumb function size is > 2048."; it
should use the branch shortening infrastructure, but I'm
just preserving functionality here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109165 91177308-0d34-0410-b5e6-96231b3b80d8
mov pc, r1
.align 2
LJTI0_0_0:
.long LBB0_14
This fixes rdar://8213383. No test case since it's not possible to come up with a suitably small one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109076 91177308-0d34-0410-b5e6-96231b3b80d8
out of the AsmPrinter directory into libarm. Now the
ARM InstPrinters depend just on the MC stuff, not on VMCore
or CodeGen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108783 91177308-0d34-0410-b5e6-96231b3b80d8
it should set the jump table encoding to EK_Inline. This prevents
a second, unused, copy of the table from being emitted after the function
body. PR6581.
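A hypothetical example of the kind of code that produces such a jump table, a dense switch; with EK_Inline the table is emitted once, inline with the function body:
int dispatch(int op, int x) {
  switch (op) {
  case 0: return x + 1;
  case 1: return x - 1;
  case 2: return x * 3;
  case 3: return x / 2;
  case 4: return -x;
  case 5: return x ^ 7;
  default: return 0;
  }
}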
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108730 91177308-0d34-0410-b5e6-96231b3b80d8
it should set the jump table encoding to EK_Inline. This prevents
a second, unused, copy of the table from being emitted after the function
body. PR7499.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108722 91177308-0d34-0410-b5e6-96231b3b80d8
- Unfortunate, but necessary for now to handle subtarget instruction matching. Eventually we should factor out the lower-level target machine information so we don't need to do this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108664 91177308-0d34-0410-b5e6-96231b3b80d8
instruction for non-constant operands. This includes the case referenced
in the README.txt regarding a bitfield copy.
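A hedged sketch of the bitfield-copy case (field and function names are made up): copying one bit-field into another no longer needs explicit shift/mask/or code.
struct flags { unsigned mode : 3, level : 5, rest : 24; };
void copy_level(struct flags *dst, const struct flags *src) {
  dst->level = src->level;   /* non-constant value stored into a bit-field */
}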
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108608 91177308-0d34-0410-b5e6-96231b3b80d8
stack realignment on ARM.
Also check for function attributes as we do on X86, and
make it explicit that we're checking "can" as well as "needs" in this function.
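A minimal sketch of the "needs" side (names are illustrative): an over-aligned local forces the frame to be realigned at run time, while the "can" side asks whether realignment is even possible for the function.
void consume(const char *);
void needs_realign(void) {
  char buf[64] __attribute__((aligned(32)));  /* exceeds the default stack alignment */
  consume(buf);
}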
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108582 91177308-0d34-0410-b5e6-96231b3b80d8
and a combine pattern to use it for setting a bit-field to a constant
value. More to come for non-constant stores.
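For illustration (hypothetical types and names), the constant case handled here is a store like this, which can now become a single bit-field insert/clear on the container word:
struct ctrl { unsigned enable : 1, speed : 3, reserved : 28; };
void set_speed(struct ctrl *c) {
  c->speed = 5;   /* bit-field set to a constant value */
}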
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108570 91177308-0d34-0410-b5e6-96231b3b80d8
-enable-no-nans-fp-math and -enable-no-infs-fp-math. All of the current codegen FP math optimizations only care whether the FP arithmetic's arguments and results can never be NaN.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108465 91177308-0d34-0410-b5e6-96231b3b80d8
instructions use different values (e.g., 2-byte or 4-byte alignment).
Also fix ARMInstPrinter to print these alignments as bits instead of bytes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108386 91177308-0d34-0410-b5e6-96231b3b80d8
in the literal field of an instruction. E.g.,
long long foo(long long a) {
return a - 734439407618LL;
}
rdar://7038284
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108339 91177308-0d34-0410-b5e6-96231b3b80d8
instructions already have implicit defs of LR. The comment suggests that
this is intended to fix something like PR6111, but it doesn't really do
that either.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108186 91177308-0d34-0410-b5e6-96231b3b80d8
The only folding these load/store architectures can do is converting a COPY into a
load or store, and the target independent part of foldMemoryOperand already
knows how to do that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108099 91177308-0d34-0410-b5e6-96231b3b80d8
correct alignment information, which simplifies ExpandRes_VAARG a bit.
The patch introduces new alignment information to TargetLoweringInfo. This is
needed since the two natural candidates cannot be used:
* The 's' in target data: if this were set to the minimal alignment of any
argument, getCallFrameTypeAlignment would return 4 for doubles on ARM, for
example.
* The getTransientStackAlignment method: it is possible for an architecture to
have arguments less aligned than the alignment we maintain for the stack pointer.
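A minimal sketch of the construct this affects, assuming the usual varargs idiom (names are made up):
#include <stdarg.h>
double sum_doubles(int count, ...) {
  va_list ap;
  double total = 0.0;
  va_start(ap, count);
  for (int i = 0; i < count; ++i)
    total += va_arg(ap, double);   /* the alignment of this f64 vararg slot is what matters */
  va_end(ap);
  return total;
}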
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108072 91177308-0d34-0410-b5e6-96231b3b80d8
EXTRACT_SUBREG no longer appears as a machine instruction. Use COPY instead.
Add isCopy() checks in many places using isMoveInstr() and isExtractSubreg().
The isMoveInstr hook will be removed later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107879 91177308-0d34-0410-b5e6-96231b3b80d8
1. The arguments are f32.
2. The arguments are loads and they have no uses other than the comparison.
3. The comparison code is EQ or NE.
e.g.
vldr.32 s0, [r1]
vldr.32 s1, [r0]
vcmpe.f32 s1, s0
vmrs apsr_nzcv, fpscr
beq LBB0_2
=>
ldr r1, [r1]
ldr r0, [r0]
cmp r0, r1
beq LBB0_2
More complicated cases will be implemented in subsequent patches.
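For reference, a source pattern satisfying the three conditions above (hypothetical function) looks like:
int equal(const float *a, const float *b) {
  return *a == *b;   /* two f32 loads, used only by an == comparison */
}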
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107852 91177308-0d34-0410-b5e6-96231b3b80d8
Add explicit testcases for tail calls within the same module.
Duplicate some code to humor those who think .w doesn't apply on ARM.
Leave this disabled on Thumb1, and add some comments explaining why it's hard
and won't gain much.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107851 91177308-0d34-0410-b5e6-96231b3b80d8
address calculation instructions leading up to a jump table when we're trying
to convert them into a TB[H] instruction in Thumb2. This realistically
shouldn't happen much, if at all, for well-formed inputs, but it's more correct
to handle it. rdar://7387682
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107830 91177308-0d34-0410-b5e6-96231b3b80d8
than assuming a target will custom lower them. Targets which do so should
explicitly mark them as having custom lowerings. PR7454.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107734 91177308-0d34-0410-b5e6-96231b3b80d8