Tim Northover
004d725549 AArch64: add backend option to reserve x18 (platform register)
AAPCS64 says that it's up to the platform to specify whether x18 is
reserved, and a first step in that direction is to add a flag controlling
it.

From: Andrew Turner <andrew@fubar.geek.nz>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226664 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 15:43:31 +00:00
Tim Northover
47f47f5d2a DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N))
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226663 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 15:43:28 +00:00
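For reference, the fold above relies on the basic Boolean identity shown in this minimal C sketch (illustrative only, not taken from the commit or its tests):

//
#include <assert.h>

// (x & m) | (x & n) always equals x & (m | n), so the combine replaces
// three bitwise ops that reference X twice with two ops that reference
// it once.
int main(void) {
  unsigned x = 0xCAFEBABEu, m = 0x00FF00FFu, n = 0x0F0F0F0Fu;
  assert(((x & m) | (x & n)) == (x & (m | n)));
  return 0;
}
//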
Michael Kuperstein
0b4244ade1 [x32] Fast ISel should use LEA64_32r instead of LEA32r to adjust addresses in x32 mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226661 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 14:44:05 +00:00
Simon Pilgrim
650d4f00ae [X86][AVX] Simplified diff between AVX1 and SSE42 fp stack folding tests. NFC.
Changed the AVX1 tests' register-spill tail call to return an xmm register like the SSE42 version - this makes diffing them a lot easier without affecting the spills themselves.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226623 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 00:02:13 +00:00
Simon Pilgrim
8608e5bbc7 [X86][SSE] Added SSE/AVX1 integer stack folding tests.
Some folding patterns + tests are missing (marked as TODO) - these will be added in a future patch for review.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226622 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 23:54:17 +00:00
Simon Pilgrim
32f0438ada [X86][SSE] Added SSE fp stack folding tests.
Some folding patterns + tests are missing (marked as TODO) - these will be added in a future patch for review.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226621 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 23:50:18 +00:00
Simon Pilgrim
bddfe2660e [X86][AVX] Renamed AVX1 fp stack folding tests. NFC.
The SSE42 version of the AVX1 float stack folding tests will be added shortly; this renames the AVX1 file so that the files will be near each other in a directory listing, to help ensure they are kept in sync.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226620 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 23:45:50 +00:00
Colin LeMahieu
0d9733d596 [Hexagon] Adding intrinsics for doubleword ALU operations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226606 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 20:45:05 +00:00
Colin LeMahieu
50e4abc0ee [Hexagon] Removing unnecessary clutter in intrinsic tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226602 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 19:46:07 +00:00
Daniel Jasper
529ff2f257 Prevent binary-tree deterioration in sparse switch statements.
This addresses part of llvm.org/PR22262. Specifically, it prevents
considering the densities of sub-ranges that have fewer than
TLI.getMinimumJumpTableEntries() elements. Those densities won't help
jump tables.

This is not a complete solution but works around the most pressing
issue.

Review: http://reviews.llvm.org/D7070

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226600 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 19:43:33 +00:00
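To make the density argument above concrete, here is a hedged C sketch of the kind of sparse switch involved; the case values are invented for illustration, and the default of 4 for TLI.getMinimumJumpTableEntries() is an assumption:

//
// The small dense run {0,1,2} has fewer than the assumed minimum of 4
// jump-table entries, so its high density should no longer be counted
// when partitioning the (mostly sparse) range below -- avoiding a badly
// unbalanced binary search tree.
int classify(int v) {
  switch (v) {
  case 0:    return 10;
  case 1:    return 11;
  case 2:    return 12;
  case 1000: return 20;
  case 5000: return 21;
  case 9000: return 22;
  default:   return -1;
  }
}
//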
Ramkumar Ramachandra
b8ae0acfaf [GC] Verify-pass void vararg functions in gc.statepoint
With the appropriate Verifier changes, extracting the result out of a
statepoint wrapping a vararg function crashes. However, a void vararg
function works fine: commit this first step.

Differential Revision: http://reviews.llvm.org/D7071

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226599 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 19:42:46 +00:00
Tom Stellard
5d96beaab5 R600/SI: Fix simple-loop.ll test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226596 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 19:33:02 +00:00
Tom Stellard
ad7a884efe R600/SI: Add kill flag when copying scratch offset to a register
This allows us to re-use the same register for the scratch offset
when accessing large private arrays.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226585 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 17:49:45 +00:00
Tom Stellard
a978a481bb R600/SI: Don't store scratch buffer frame index in MUBUF offset field
We don't have a good way of legalizing this if the frame index offset
is more than 12 bits, which is the size of MUBUF's offset field, so
now we store the frame index in the vaddr field.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226584 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 17:49:43 +00:00
Kai Nacke
fa4d8baf54 [mips] Add registers and ALL check prefix to octeon test case.
No functional change.

Reviewed by D. Sanders


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226574 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 16:14:02 +00:00
Kai Nacke
57e80129b9 [mips] Add octeon branch instructions bbit0/bbit032/bbit1/bbit132
This commit adds the octeon branch instructions bbit0/bbit032/bbit1/bbit132.
It also includes patterns for instruction selection and test cases.

Reviewed by D. Sanders


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226573 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-20 16:10:51 +00:00
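A hedged C sketch of the source-level patterns these branch instructions target (function names are invented; the bit positions are chosen to hit both the low-word and high-word forms):

//
// Branching on a single bit of a 64-bit value: bits 0-31 are candidates
// for bbit0/bbit1 (branch if the bit is clear/set), bits 32-63 for the
// bbit032/bbit132 forms, which encode the bit index minus 32.
void dispatch(unsigned long long flags, void (*low)(void), void (*high)(void)) {
  if (flags & (1ULL << 5))      // candidate for bbit1
    low();
  if (!(flags & (1ULL << 40)))  // candidate for bbit032
    high();
}
//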
Simon Pilgrim
2a2f94c5f2 [X86][AVX] Missing AVX1 memory folding float instructions
Now that we can create much more exhaustive X86 memory folding tests, this patch adds the missing AVX1/F16C floating point instruction stack foldings we can easily test for, including the scalar intrinsics (add, div, max, min, mul, sub), conversions float/int to double, half precision conversions, rounding, dot product and bit test. The patch also adds a couple of obviously missing SSE instructions (more to follow once we have full SSE testing).

Now that scalar folding is working, it broke a very old test (2006-10-07-ScalarSSEMiscompile.ll) - this test appears to make no sense, as it's trying to ensure that a scalar subtraction isn't folded because it 'would zero the top elts of the loaded vector' - the test just appears to be wrong to me.

Differential Revision: http://reviews.llvm.org/D7055



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226513 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 22:40:45 +00:00
Colin LeMahieu
dc8beeba1b [Hexagon] Updating muxir/ri/ii intrinsics. Setting predicate registers as compatible with i32 rather than doing custom type conversion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226500 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 20:31:18 +00:00
Colin LeMahieu
3bea6a4959 [Hexagon] Converting intrinsics combine imm/imm, simple shifts and extends.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226483 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 18:56:19 +00:00
Colin LeMahieu
596cfabbc4 [Hexagon] Converting remaining ALU32/ALU intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226480 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 18:33:58 +00:00
Colin LeMahieu
254d992ab8 [Hexagon] Converting ALU32/ALU intrinsics to new patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226478 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 18:22:19 +00:00
Greg Fitzgerald
4b96323e81 [AArch64] Implement GHC calling convention
Original patch by Luke Iannini.  Minor improvements and test added by
Erik de Castro Lopo.

Differential Revision: http://reviews.llvm.org/D6877

From: Erik de Castro Lopo <erikd@mega-nerd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226473 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 17:40:05 +00:00
Colin LeMahieu
438f1e4979 [Hexagon] Converting halfword to double accumulating multiply intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226472 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 17:36:32 +00:00
Rafael Espindola
4b678bff4e Bring r226038 back.
No change in this commit, but clang was changed to also produce trivial comdats when
needed.

Original message:

Don't create new comdats in CodeGen.

This patch stops the implicit creation of comdats during codegen.

Clang now sets the comdat explicitly when it is required. With this patch clang and gcc
now produce the same result in pr19848.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226467 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 15:16:06 +00:00
Michael Kuperstein
5a0c8601d3 [MIScheduler] Slightly better handling of constrainLocalCopy when both source and dest are local
This fixes PR21792.

Differential Revision: http://reviews.llvm.org/D6823

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226433 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 07:30:47 +00:00
Hal Finkel
d1f1656447 [PowerPC] Add r2 as an operand for all calls under both PPC64 ELF V1 and V2
Our PPC64 ELF V2 call lowering logic added r2 as an operand to all direct call
instructions in order to represent the dependency on the TOC base pointer
value. Restricting this to ELF V2, however, does not seem to make sense: calls
under ELF V1 have the same dependence, and indirect calls have an r2 dependence
just as direct ones do. Make sure the dependence is noted for all calls under both
ELF V1 and ELF V2.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226432 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 07:20:27 +00:00
Matt Arsenault
676db0a373 R600: Remove redundant test
This is already covered in ftrunc.ll

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226412 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 19:30:32 +00:00
Simon Pilgrim
a707cf4652 [X86][SSE] Added scalar min/max folding tests. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226406 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 18:06:23 +00:00
Simon Pilgrim
7c62013afe [X86][SSE] Added float extract and xmm extract/insert stack folding tests. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226405 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 17:04:32 +00:00
Simon Pilgrim
2793dd7a29 [X86][SSE] Added scalar conversion stack folding tests. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226404 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 16:22:15 +00:00
Simon Pilgrim
65a3a9bea3 AVX1 stack folding tests. NFC.
Began adding more exhaustive tests - all floating point instructions should now be either tested or have placeholders. We do seem to have a number of missing instructions; I will add a patch for review once the remaining working instructions are added.

I'll then move on to SSE tests and then the integer instructions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226400 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 12:56:39 +00:00
Hal Finkel
a01b583dbc [PowerPC] Initial PPC64 calling-convention changes for fastcc
The default calling convention specified by the PPC64 ELF (V1 and V2) ABI is
designed to work with both prototyped and non-prototyped/varargs functions. As
a result, GPRs and stack space are allocated for every argument, even those
that are passed in floating-point or vector registers.

GlobalOpt::OptimizeFunctions will transform local non-varargs functions (that
do not have their address taken) to use the 'fast' calling convention.

When functions are using the 'fast' calling convention, don't allocate GPRs for
arguments passed in other types of registers, and don't allocate stack space for
arguments passed in registers. Other changes for the fast calling convention
may be added in the future.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226399 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-18 12:08:47 +00:00
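A hedged C sketch of the situation described above (names invented): a local function whose address is never taken can be given the 'fast' calling convention by GlobalOpt, and its floating-point argument then no longer also reserves a GPR and a stack slot:

//
// 'scale' is static, non-varargs, and its address is never taken, so
// GlobalOpt::OptimizeFunctions may switch it (and its call sites) to the
// 'fast' calling convention. Under fastcc the double arguments are passed
// only in FPRs; no shadow GPRs or stack slots are allocated for them.
static double scale(double x, double y) { return x * y; }

double caller(double a) { return scale(a, 2.0); }
//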
Hal Finkel
962cff0c08 [PowerPC] Don't list R11 as a patchpoint scratch register
R11's status is the same under both the PPC64 ELF V1 and V2 ABIs: it is
reserved for use as an "environment pointer" for compilation models that
require such a thing. We don't, and we also don't need a second scratch register;
because we support only "local" patchpoint call targets, we might as well
let R11 be used for anyregcc patchpoints.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226369 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-17 03:57:34 +00:00
Mehdi Amini
5eed637b34 Improve DAG combine pass on certain IR vector patterns
Loading two 2x32-bit float vectors into the bottom half of a 256-bit vector
produced suboptimal code in AVX2 mode with certain IR combinations.

In particular, the IR optimizer folded 2f32 + 2f32 -> 4f32, 4f32 + 4f32
(undef) -> 8f32 into a 2f32 + 2f32 -> 8f32, which seems more canonical,
but then mysteriously generated rather bad code; the movq/movhpd combination
didn't match.

The problem lay in the BUILD_VECTOR optimization path. The 2f32 inputs
would get promoted to 4f32 by the type legalizer, eventually resulting
in a BUILD_VECTOR on two 4f32 into an 8f32. The BUILD_VECTOR then, recognizing
these were both half the output size, concatted them and then produced
a shuffle. However, the resulting concat + shuffle was more complex than
it should be; in the case where the upper half of the output is undef, we
probably want to generate shuffle + concat instead.

This enhancement causes the vector_shuffle combine step to recognize this
suboptimal pattern and correct it. I included it there instead of in BUILD_VECTOR
in case the same suboptimal pattern occurs for other reasons.

This results in the optimizer correctly producing the optimal movq + movhpd
sequence for all three variations on this IR, even with AVX2.

I've included a test case.

Radar link: rdar://problem/19287012
Fix for PR 21943.

From: Fiona Glaser <fglaser@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226360 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-17 01:35:56 +00:00
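A hedged sketch of the kind of source that yields the IR shape described above (intrinsic choice and names are illustrative, not taken from the original report): two 64-bit float pairs combined into the low half of a 256-bit vector, with the upper half left undefined:

//
#include <immintrin.h>

// The two 2 x float loads become a 4 x float BUILD_VECTOR that is widened
// to 8 x float with an undefined upper half -- the case where shuffle +
// concat beats concat + shuffle, and where the optimal lowering is a movq
// followed by a movhpd.
__m256 load_two_pairs(const float *a, const float *b) {
  __m128 lo = _mm_loadl_pi(_mm_setzero_ps(), (const __m64 *)a); // 2 floats
  __m128 both = _mm_loadh_pi(lo, (const __m64 *)b);             // 2 more
  return _mm256_castps128_ps256(both);  // upper 128 bits undefined
}
//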
Matt Arsenault
b2bb846f17 R600: Clean up floor tests
These were using different naming schemes,
not using multiple check prefixes and not using
-LABEL.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226333 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 22:11:00 +00:00
Colin LeMahieu
d08a6cd083 [Hexagon] Converting halfword to doubleword multiply intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226326 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 21:41:57 +00:00
Colin LeMahieu
9a56f06825 [Hexagon] Converting accumulating halfword multiply intrinsics to patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226324 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 21:36:34 +00:00
Colin LeMahieu
67451fe320 [Hexagon] Beginning converting intrinsics to patterns instead of duplicated definitions. Converting halfword multiply intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226318 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 20:38:54 +00:00
Adam Nemet
ad2ac976af [AVX512] Add intrinsics for masked aligned FP loads and stores
Similar to the unaligned cases.

Test was generated with update_llc_test_checks.py.

Part of <rdar://problem/17688758>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226296 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 18:50:09 +00:00
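For context, a hedged C-level illustration of the masked aligned load these intrinsics back (the store side is analogous; the function name is invented):

//
#include <immintrin.h>

// Lanes whose mask bit is clear keep the corresponding value from 'src';
// the pointer must be 64-byte aligned, which is what distinguishes this
// from the already-supported unaligned _mm512_mask_loadu_ps form.
__m512 masked_aligned_load(__m512 src, __mmask16 k, const float *p) {
  return _mm512_mask_load_ps(src, k, p);
}
//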
Adam Nemet
1310e4f3c7 [AVX512] Remove trailing whitespaces in this test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226295 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 18:50:07 +00:00
Andrea Di Biagio
ac7b9c828f [X86][DAG] Disable target specific combine on INSERTPS dag nodes at -O0.
This patch disables target specific combine on X86ISD::INSERTPS dag nodes
if optlevel is CodeGenOpt::None.

The backend currently implements a target specific combine rule that converts
a vector load used by an INSERTPS dag node into a scalar load plus a
scalar_to_vector. This allows ISel to select a single INSERTPSrm instead of
two instructions (i.e. a vector load plus INSERTPSrr).

However, the existing target combine rule on INSERTPS nodes only works under
the assumption that ISel will always be able to match an INSERTPSrm. This is
not true in general at -O0, since the backend only allows folding a load into
the memory operand of an instruction if the optimization level is not
CodeGenOpt::None.

In the example below:

//
__m128 test(__m128 a, __m128 *b) {
  __m128 c = _mm_insert_ps(a, *b, 1 << 6);
  return c;
}
//

Before this patch, at -O0, the backend would have canonicalized the load to 'b'
into a scalar load plus scalar_to_vector. Later on, ISel would have selected an
INSERTPSrr leaving the insertps mask in an inconsistent state:

  movss 4(%rdi), %xmm1
  insertps  $64, %xmm1, %xmm0 # xmm0 = xmm1[1],xmm0[1,2,3].

With this patch, the backend avoids folding the vector load into the operand of
the INSERTPS. The new codegen at -O0 is:

  movaps (%rdi), %xmm1
  insertps  $64, %xmm1, %xmm0 # %xmm1[1],xmm0[1,2,3].


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226277 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 14:55:26 +00:00
Simon Pilgrim
3717f7c80c [X86] Refactored stack memory folding tests to explicitly force register spilling
The current 'big vectors' stack folded reload testing pattern is very bulky and makes it difficult to test all instructions, as big vectors tend to use only the ymm instruction implementations.

This patch changes the tests to use a nop call that lists explicit xmm registers as sideeffects, with this we can force a partial register spill of the relevant registers and then check that the reload is correctly folded. The asm generated only adds the forced spill, a nop instruction and a couple of extra labels (a fraction of the current approach).

More exhaustive tests will follow shortly, I've added some extra tests (the xmm versions of some of the existing folding tests) as a starting point.

Differential Revision: http://reviews.llvm.org/D6932



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226264 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 09:32:54 +00:00
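A hedged sketch (in C with inline asm, rather than the .ll form the tests actually use) of the forced-spill idea described above; the register list is abbreviated and the function name is invented:

//
#include <xmmintrin.h>

// The nop claims to clobber most xmm registers, so any xmm values live
// across it must be spilled to the stack and reloaded. A FileCheck test
// can then verify that the reload is folded into the instruction under
// test (here, an addps from a spill slot).
__m128 stack_fold_addps(__m128 a, __m128 b) {
  __asm__ volatile("nop"
                   : /* no outputs */
                   : /* no inputs */
                   : "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7",
                     "xmm8", "xmm9", "xmm10", "xmm11", "xmm12", "xmm13",
                     "xmm14", "xmm15");
  return _mm_add_ps(a, b);
}
//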
Timur Iskhodzhanov
6a7c74de33 Revert r226242 - Revert Revert Don't create new comdats in CodeGen
This breaks AddressSanitizer (ninja check-asan) on Windows

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226251 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 08:38:45 +00:00
Hal Finkel
92cd0ca3b2 [PowerPC] Adjust PatchPoints for ppc64le
Bill Schmidt pointed out that some adjustments would be needed to properly
support powerpc64le (using the ELF V2 ABI). For one thing, R11 is not available
as a scratch register, so we need to use R12. R12 is also available under ELF
V1, so to maintain consistency, I flipped the order to make R12 the first
scratch register in the array under both ABIs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226247 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 04:40:58 +00:00
Rafael Espindola
dfe88a08c7 Revert "Revert Don't create new comdats in CodeGen"
This reverts commit r226173, adding r226038 back.

No change in this commit, but clang was changed to also produce trivial comdats for
constructors, destructors and vtables when needed.

Original message:

Don't create new comdats in CodeGen.

This patch stops the implicit creation of comdats during codegen.

Clang now sets the comdat explicitly when it is required. With this patch clang and gcc
now produce the same result in pr19848.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226242 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 02:22:55 +00:00
Matt Arsenault
ab2315014e R600/SI: Add patterns for v_cvt_{flr|rpi}_i32_f32
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226230 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 23:58:35 +00:00
Matt Arsenault
c204f47feb R600/SI: Fix trailing comma with modifiers
Instructions with 1 operand can still use source modifiers,
so make sure we don't print an extra comma afterwards.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226226 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 23:17:03 +00:00
Hal Finkel
94dc061e85 [PowerPC] Loosen ELFv1 PPC64 func descriptor loads for indirect calls
Function pointers under PPC64 ELFv1 (which is used on PPC64/Linux on the
POWER7, A2 and earlier cores) are really pointers to a function descriptor, a
structure with three pointers: the actual pointer to the code to which to jump,
the pointer to the TOC needed by the callee, and an environment pointer. We
used to chain these loads, and make them opaque to the rest of the optimizer,
so that they'd always occur directly before the call. This is not necessary,
and in fact, highly suboptimal on embedded cores. Once the function pointer is
known, the loads can be performed ahead of time; in fact, they can be hoisted
out of loops.

Now these function descriptors are almost always generated by the linker, and
thus the contents of the descriptors are invariant. As a result, by default,
we'll mark the associated loads as invariant (allowing them to be hoisted out
of loops). I've added a target feature to turn this off, however, just in case
someone needs that option (constructing an on-stack descriptor, casting it to a
function pointer, and then calling it cannot be well-defined C/C++ code, but I
can imagine some JIT-compilation system doing so).

Consider this simple test:
  $ cat call.c

  typedef void (*fp)();
  void bar(fp x) {
    for (int i = 0; i < 1600000000; ++i)
      x();
  }

  $ cat main.c

  typedef void (*fp)();
  void bar(fp x);
  void foo() {}
  int main() {
    bar(foo);
  }

On the PPC A2 (the BG/Q supercomputer), marking the function-descriptor loads
as invariant brings the execution time down to ~8 seconds from ~32 seconds with
the loads in the loop.

The difference on the POWER7 is smaller. Compiling with:

  gcc -std=c99 -O3 -mcpu=native call.c main.c : ~6 seconds [this is 4.8.2]

  clang -O3 -mcpu=native call.c main.c : ~5.3 seconds

  clang -O3 -mcpu=native call.c main.c -mno-invariant-function-descriptors : ~4 seconds
  (looks like we'd benefit from additional loop unrolling here, as a first
   guess, because this is faster with the extra loads)

The -mno-invariant-function-descriptors will be added to Clang shortly.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226207 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 21:17:34 +00:00
Colin LeMahieu
02b677594c [Hexagon] Updating indexed load-extend patterns and changing test to new expected output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226206 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 21:07:52 +00:00
Hal Finkel
39b09ae788 Revert "r226086 - Revert "r226071 - [RegisterCoalescer] Remove copies to reserved registers""
Reapply r226071 with fixes. Two fixes:

 1. We need to manually remove the old and create the new 'dead defs'
    associated with physical register definitions when we move the definition of
    the physical register from the copy point to the point of the original vreg def.

    This problem was picked up by the machine instruction verifier, and could trigger a
    verification failure on test/CodeGen/X86/2009-02-12-DebugInfoVLA.ll, so I've
    turned on the verifier in the tests.

 2. When moving the def point of the phys reg up, we need to make sure that it
    is neither defined nor read in between the two instructions. We don't, however,
    extend the live ranges of phys reg defs to cover uses, so just checking for
    live-range overlap between the pair interval and the phys reg aliases won't
    pick up reads. As a result, we manually iterate over the range and check for
    reads.

    A test soon to be committed to the PowerPC backend will test this change.

Original commit message:

[RegisterCoalescer] Remove copies to reserved registers

This allows the RegisterCoalescer to join "non-flipped" range pairs with a
physical destination register -- which allows the RegisterCoalescer to remove
copies like this:

<vreg> = something (maybe a load, for example)
... (things that don't use PHYSREG)
PHYSREG = COPY <vreg>

(with all of the restrictions normally applied by the RegisterCoalescer: having
compatible register classes, etc.)

Previously, the RegisterCoalescer handled only the opposite case (copying
*from* a physical register). I don't handle the problem fully here, but try to
get the common case where there is only one use of <vreg> (the COPY).

An upcoming commit to the PowerPC backend will make this pattern much more
common on PPC64/ELF systems.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226200 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 20:32:09 +00:00