reference registers past the end of the NEON register file, and report them
as invalid instead of asserting when trying to print them. PR7746.
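(Illustrative only: a minimal C++ sketch of the kind of bounds guard this
describes; the 32-register bound and the printing are assumptions, not the
actual disassembler code.)

  #include <cstdio>
  // Reject encodings that name a register past the end of the NEON file
  // and let the caller report "invalid" instead of asserting. The bound
  // of 32 D-registers is an assumption for this sketch.
  bool printNEONDReg(unsigned RegNo) {
    if (RegNo >= 32)
      return false;               // out of range: report as invalid
    std::printf("d%u", RegNo);
    return true;
  }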
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109933 91177308-0d34-0410-b5e6-96231b3b80d8
formerly rejected by the FE, so it asserted in the BE; now the FE only
warns, so we treat it as a legitimate fatal error in the PPC BE.
This means the test for the feature won't pass, so it's xfail'd.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109892 91177308-0d34-0410-b5e6-96231b3b80d8
declared during the addition of the assembler support, the additional
changes are:
- Add missing intrinsics
- Move all SSE conversion instructions in X86InstInfo64.td to the SSE.td file.
- Duplicate some patterns to AVX mode.
- Step into PCMPEST/PCMPIST custom inserter and add AVX versions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109878 91177308-0d34-0410-b5e6-96231b3b80d8
have 4 bits per register in the operand encoding), but have undefined
behavior when the operand value is 13 or 15 (SP and PC, respectively).
The trivial coalescer in linear scan sometimes will merge a copy from
SP into a subsequent instruction which uses the copy, and if that
instruction cannot legally reference SP, we get bad code such as:
  mls r0, r9, r0, sp
instead of:
  mov r2, sp
  mls r0, r9, r0, r2
This patch adds a new register class for use by Thumb2 that excludes
the problematic registers (SP and PC) and is used instead of GPR
for those operands which cannot legally reference PC or SP. The
trivial coalescer explicitly requires that the register class
of the destination for the COPY instruction contain the source
register for the COPY to be considered for coalescing. This prevents
errant instructions like the one above.
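(A toy C++ model of that containment rule; the class contents and names
are illustrative, not the LLVM coalescer code.)

  #include <cassert>
  #include <set>
  // Toy model: a COPY is only coalesced away if the destination's
  // register class contains the source register, so a copy from SP into
  // an rGPR-constrained use is left alone.
  struct RegClass {
    std::set<unsigned> Regs;
    bool contains(unsigned R) const { return Regs.count(R) != 0; }
  };
  enum { R0 = 0, R12 = 12, SP = 13, LR = 14, PC = 15 };
  bool canCoalesceCopy(const RegClass &DstRC, unsigned SrcReg) {
    return DstRC.contains(SrcReg);
  }
  int main() {
    RegClass rGPR;                        // GPR minus SP and PC
    for (unsigned R = R0; R <= R12; ++R)
      rGPR.Regs.insert(R);
    rGPR.Regs.insert(LR);
    assert(!canCoalesceCopy(rGPR, SP));   // keeps the mov r2, sp copy
    assert(canCoalesceCopy(rGPR, R0));    // ordinary copies still fold
    return 0;
  }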
PR7499
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109842 91177308-0d34-0410-b5e6-96231b3b80d8
integers with mov + vdup. 8003375. This is
currently disabled by default because LICM will
not hoist a VDUP, so it pessimizes the code if
the construct occurs inside a loop (8248029).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109799 91177308-0d34-0410-b5e6-96231b3b80d8
We do sometimes load from a too-small stack slot when dealing with x86 arguments
(varargs and smaller-than-32-bit args). It looks like we know what we are doing
in those cases, so I am going to remove the assert instead of artificially
enlarging stack slot sizes.
The assert in storeRegToStackSlot stays in. We don't want to write beyond the
bounds of a stack slot.
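(A toy sketch of that asymmetry; the function shapes are simplified
stand-ins, not the X86 target code.)

  #include <cassert>
  // Stores must never write past the slot, so the assert stays; loads
  // may deliberately read a wider register from a smaller slot (varargs
  // and sub-32-bit args), so there is no size check on that path.
  void storeRegToStackSlot(unsigned SlotSize, unsigned RegSize) {
    assert(RegSize <= SlotSize && "writing beyond the stack slot");
    // ... emit the store ...
  }
  void loadRegFromStackSlot(unsigned SlotSize, unsigned RegSize) {
    (void)SlotSize;
    (void)RegSize;  // deliberately no size assertion here
    // ... emit the load ...
  }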
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109764 91177308-0d34-0410-b5e6-96231b3b80d8
The size of this object isn't used for anything; technically it is of
variable size.
This avoids a false positive from the assert in
X86InstrInfo::loadRegFromStackSlot, and fixes PR7735.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109652 91177308-0d34-0410-b5e6-96231b3b80d8
subregister operands like this:
%reg1040:sub_32bit<def> = MOV32rm <fi#-2>, 1, %reg0, 0, %reg0, %reg1040<imp-def>; mem:LD4[FixedStack-2](align=8)
Make them return false when subreg operands are present. VirtRegRewriter is
making bad assumptions otherwise.
This fixes PR7713.
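(A minimal sketch of the guard; the operand structure is a simplified
stand-in for the machine-operand API, not the actual X86InstrInfo code.)

  // SubReg != 0 means the instruction defines only part of the register.
  struct MachineOperandModel {
    unsigned Reg;
    unsigned SubReg;
  };
  // A load that defines only a subregister does not reload the whole
  // virtual register, so answer "not a stack-slot load" (return 0) and
  // let VirtRegRewriter take the conservative path.
  unsigned isLoadFromStackSlotModel(const MachineOperandModel &Def,
                                    int FrameIdx, int &FrameIndexOut) {
    if (Def.SubReg != 0)
      return 0;
    FrameIndexOut = FrameIdx;
    return Def.Reg;
  }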
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109489 91177308-0d34-0410-b5e6-96231b3b80d8
we are using AVX and no AVX version of the desired instruction is present,
this is better for incremental dev (without fallbacks it's easier to spot
what's missing). Not sure this is the best hack though (we can also disable
all HasSSE* predicates by dynamically marking them 'false' if AVX is present).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109434 91177308-0d34-0410-b5e6-96231b3b80d8
This assumption is not satisfied due to global merging.
Work around the issue by temporarily disabling merging of const globals.
Also, ignore LLVM "special" globals. This fixes PR7716.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109423 91177308-0d34-0410-b5e6-96231b3b80d8
appropriate for targets without detailed instruction itineraries.
The scheduler schedules for increased instruction level parallelism in
low register pressure situations; it schedules to reduce register pressure
when the register pressure becomes high.
On x86_64, this is a win for all tests in CFP2000. It also sped up 256.bzip2
by 16%.
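(A toy model of that two-regime policy; the threshold and scoring are
invented for illustration.)

  #include <algorithm>
  // Below a pressure threshold, pick the candidate exposing the most ILP
  // (here, the greatest critical-path height); above it, pick the one
  // that frees the most registers.
  struct Cand { int Height; int RegsFreed; };
  const Cand *pickNext(const Cand *Begin, const Cand *End,
                       unsigned Pressure, unsigned Threshold) {
    if (Pressure < Threshold)     // low pressure: schedule for ILP
      return std::max_element(Begin, End,
          [](const Cand &A, const Cand &B) { return A.Height < B.Height; });
    return std::max_element(Begin, End,   // high pressure: shrink pressure
        [](const Cand &A, const Cand &B) { return A.RegsFreed < B.RegsFreed; });
  }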
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109300 91177308-0d34-0410-b5e6-96231b3b80d8
function live-in set. This will give us tGPR for Thumb1 and GPR otherwise,
so the copy will be spillable. rdar://8224931
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109293 91177308-0d34-0410-b5e6-96231b3b80d8
comments explaining why it was wrong. 8225024.
Fix the real problem in 8213383: the code that splits very large
blocks when no other place to put constants can be found was not
considering the case where the block contained a Thumb tablejump.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109282 91177308-0d34-0410-b5e6-96231b3b80d8
it's too late to start backing off aggressive latency scheduling when most
of the registers are in use, so the threshold should be a bit tighter.
- Correctly handle live-outs and extract_subreg, etc.
- Enable register pressure aware scheduling by default for hybrid scheduler.
For ARM, this is almost always a win on # of instructions. It's runtime
neutral for most of the tests. But for some kernels with high register
pressure it can be a huge win, e.g. 464.h264ref reduced the number of spills
by 54 and sped up by 20%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109279 91177308-0d34-0410-b5e6-96231b3b80d8