an RFP register class.
Teach ScheduleDAG how to handle CopyToReg with different src/dst
reg classes.
This allows us to compile trivial inline asms that expect values
on the top of the x87 FP stack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48107 91177308-0d34-0410-b5e6-96231b3b80d8
in different register classes, e.g. a copy of ST(0) to RFP*. This gets
some really trivial inline asm working that plops things on the top of
the stack (PR879).
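As a concrete illustration (a hedged sketch; the function name and the choice
of fsqrt are hypothetical, not the actual PR879 reproducer), GCC-style inline
asm with the "=t" constraint produces its result in ST(0), so codegen then has
to copy the value off the top of the x87 stack into an RFP register:

  // Hypothetical example: "=t" puts the output in ST(0), and the "0"
  // constraint ties the input to the same slot, so the compiler must copy
  // ST(0) into an RFP virtual register once the asm block finishes.
  double sqrt_via_x87(double x) {
    double result;
    asm("fsqrt" : "=t"(result) : "0"(x));
    return result;
  }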
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48105 91177308-0d34-0410-b5e6-96231b3b80d8
of BUILD_VECTORS that only have two unique elements:
1. The previous code was nondeterministic, because it walked a map in
SDOperand order, which isn't deterministic.
2. The previous code didn't handle the case when one element was undef
very well. Now we ensure that the generated shuffle mask has the
undef vector on the RHS (instead of potentially being on the LHS)
and that any elements that refer to it are themselves undef. This
allows us to compile CodeGen/X86/vec_set-9.ll into:
_test3:
movd %rdi, %xmm0
punpcklqdq %xmm0, %xmm0
ret
instead of:
_test3:
movd %rdi, %xmm1
#IMPLICIT_DEF %xmm0
punpcklqdq %xmm1, %xmm0
ret
... saving a register.
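The canonicalization in point 2 can be sketched in isolation (an illustrative
stand-alone function, not the actual SelectionDAG code): mask elements in
[0, N) select from the LHS vector, elements in [N, 2N) select from the RHS
vector, and -1 marks an undef lane.

  #include <vector>

  // Sketch only: assumes exactly one of the two shuffle operands is
  // entirely undef.  Move that operand to the RHS and mark every lane
  // that selects from it as undef.
  void putUndefOperandOnRHS(bool LHSIsUndef, bool &OperandsSwapped,
                            std::vector<int> &Mask) {
    const int N = static_cast<int>(Mask.size());
    OperandsSwapped = false;
    if (LHSIsUndef) {
      // Swap operands so the undef vector ends up on the RHS, remapping
      // the mask so lanes from the old LHS now name the new RHS and
      // vice versa.
      OperandsSwapped = true;
      for (int &Elt : Mask)
        if (Elt >= 0)
          Elt = Elt < N ? Elt + N : Elt - N;
    }
    // Every lane that still selects from the (undef) RHS is itself undef.
    for (int &Elt : Mask)
      if (Elt >= N)
        Elt = -1;
  }

With the undef operand pinned to the RHS and those lanes marked undef, the
two-element case above can use the single-register punpcklqdq shown in the
first listing.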
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48060 91177308-0d34-0410-b5e6-96231b3b80d8
_test3:
movd %rdi, %xmm1
#IMPLICIT_DEF %xmm0
punpcklqdq %xmm1, %xmm0
ret
instead of:
_test3:
#IMPLICIT_DEF %rax
movd %rax, %xmm0
movd %rdi, %xmm1
punpcklqdq %xmm1, %xmm0
ret
This is still not ideal. There is no reason to use two xmm regs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48058 91177308-0d34-0410-b5e6-96231b3b80d8
%r3<def> = OR %x3<kill>, %x3
We don't want to mark %r3 as unused even though it's a sub-register of %x3.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48003 91177308-0d34-0410-b5e6-96231b3b80d8
except ppc long double. This allows us to shrink constant pool
entries for x86 long double constants, which in turn allows us to
use flds/fldl instead of fldt.
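The shrinking is only legal when the value is exactly representable at the
narrower width. A minimal sketch of that check (illustrative only, using host
floating-point types rather than APFloat; NaNs and other edge cases need
extra care and are ignored here):

  // A long double constant-pool entry can be stored as a double (loaded
  // with fldl) or a float (flds) only if converting down and back up
  // reproduces exactly the same value.
  static bool fitsInDouble(long double V) {
    return static_cast<long double>(static_cast<double>(V)) == V;
  }
  static bool fitsInFloat(long double V) {
    return static_cast<long double>(static_cast<float>(V)) == V;
  }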
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47938 91177308-0d34-0410-b5e6-96231b3b80d8
bug in r47928 (Int64Ty is the correct type for the constant
pool entry here) and removes the asserts, now that the code
is capable of handling i128.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47932 91177308-0d34-0410-b5e6-96231b3b80d8
For x86, if SSE2 is available, shrinking the constant is not a good idea since cvtss2sd is slower than a movsd load and it prevents load folding. On x87, it's important to shrink fp constants since fldt is very expensive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47931 91177308-0d34-0410-b5e6-96231b3b80d8
The basic idea is that all these algorithms compute the longest paths from the root node or to the exit node. Therefore the existing implementation, which used an iterative and
potentially exponential algorithm, was changed to a well-known graph algorithm based on dynamic programming. It has a linear run-time.
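A minimal version of that dynamic-programming formulation (a generic sketch,
not the scheduler's actual data structures): visit the nodes in topological
order and relax each edge exactly once, which gives the longest path from the
root in time linear in the number of nodes and edges.

  #include <algorithm>
  #include <vector>

  // Succ[u] lists the successors of node u; TopoOrder is any topological
  // ordering of the DAG.  The longest path *to* the exit node is computed
  // the same way on the reversed graph.
  std::vector<int> longestPathFromRoot(
      const std::vector<std::vector<int>> &Succ,
      const std::vector<int> &TopoOrder, int Root) {
    std::vector<int> Dist(Succ.size(), -1);   // -1 == unreachable from Root
    Dist[Root] = 0;
    for (int U : TopoOrder) {
      if (Dist[U] < 0)
        continue;                             // skip nodes the root can't reach
      for (int V : Succ[U])                   // relax every edge exactly once
        Dist[V] = std::max(Dist[V], Dist[U] + 1);
    }
    return Dist;
  }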
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47884 91177308-0d34-0410-b5e6-96231b3b80d8
- Cleaned up how the prologue-epilogue inserter loops over the instructions.
- Instead of restarting the processing of an instruction if we remove an
implicit kill, just update the end iterator and make sure that the iterator
isn't incremented (see the sketch below).
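The iterator discipline in the second point is the usual erase-during-iteration
pattern; a generic sketch (plain STL here, not the actual MachineBasicBlock
iterators) looks like:

  #include <list>

  void processAll(std::list<int> &Insts) {
    for (auto I = Insts.begin(), E = Insts.end(); I != E; /* no ++ here */) {
      bool RemovedKill = (*I % 2 == 0);  // stand-in for "removed an implicit kill"
      if (RemovedKill)
        I = Insts.erase(I);              // erase() already returns the next element
      else
        ++I;                             // only advance when nothing was removed
    }
  }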
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47870 91177308-0d34-0410-b5e6-96231b3b80d8
marking both a super- and sub-register as "killed". This removes implicit uses
that are marked as "killed".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47862 91177308-0d34-0410-b5e6-96231b3b80d8
the register scavenger to process all of those new instructions instead of just
the last one inserted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47860 91177308-0d34-0410-b5e6-96231b3b80d8
generic & x86 versions; change generic to follow x86
and improve comments. Add PPC version (not right
for non-Darwin).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47734 91177308-0d34-0410-b5e6-96231b3b80d8