The only folding these load/store architectures can do is converting COPY into a
load or store, and the target-independent part of foldMemoryOperand already
knows how to do that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108099 91177308-0d34-0410-b5e6-96231b3b80d8
We are generating movaps for all XMM register copies, including scalar
floating point values. This is known to be at least as good as movss and movsd
for all known architectures up to and including Nehalem because it avoids a
partial register stall.
The SSEDomainFix pass will switch movaps to movdqa when appropriate (i.e., when
operands come from the integer unit). We don't know that switching movaps to
movapd has any benefit.
The same applies to andps -> pand.
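As a rough sketch of the copy side of this (not the actual backend code;
X86::MOVAPSrr, X86::MOVDQArr and X86::VR128RegClass are the X86 backend's
generated enum names as I recall them), the selection boils down to:
  // Pick the opcode for an XMM-to-XMM register copy.  Always use the
  // full-width MOVAPSrr here; SSEDomainFix may later rewrite it to
  // MOVDQArr when the values live in the integer domain.
  static unsigned chooseXMMCopyOpcode(unsigned DestReg, unsigned SrcReg) {
    if (X86::VR128RegClass.contains(DestReg) &&
        X86::VR128RegClass.contains(SrcReg))
      return X86::MOVAPSrr; // full-register copy, no partial register stall
    return 0;               // not an XMM copy; handled elsewhere
  }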
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108096 91177308-0d34-0410-b5e6-96231b3b80d8
Targets must now implement TargetInstrInfo::copyPhysReg instead. There is no
longer a default implementation forwarding to copyRegToReg.
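For a target doing the migration, a minimal copyPhysReg override looks roughly
like this (sketch only; the exact signature is from memory, and
MyTargetInstrInfo / MYTGT::MOVrr are hypothetical names):
  // Emit one full-register move, honoring the kill flag on the source.
  void MyTargetInstrInfo::copyPhysReg(MachineBasicBlock &MBB,
                                      MachineBasicBlock::iterator MI,
                                      DebugLoc DL, unsigned DestReg,
                                      unsigned SrcReg, bool KillSrc) const {
    BuildMI(MBB, MI, DL, get(MYTGT::MOVrr), DestReg)
        .addReg(SrcReg, getKillRegState(KillSrc));
  }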
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108095 91177308-0d34-0410-b5e6-96231b3b80d8
The first one was used just to call isSafeToMoveRegClassDefs. In
general, using a more specific register class is better, but in practice only
x86 implements that method and the results are always the same.
The second one is in FindFreeRegister and is used to check whether a register
is in a register class. A direct call to contains is better, since it
covers more cases and is faster.
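For reference, the direct check is simply (sketch; RC and Reg are whatever the
surrounding code already has in hand):
  // Membership test: is the physical register Reg a member of register
  // class RC?  No register class comparison needed.
  if (RC->contains(Reg)) {
    // ... Reg is usable here ...
  }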
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108093 91177308-0d34-0410-b5e6-96231b3b80d8
assert()s, switching to void-casts. Removed an unneeded Compiler.h include as
a result. There are two other uses in LLVM, but they're not due to assert()s,
so I've left them alone.
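The void-cast pattern in question is the usual way to keep -Asserts (NDEBUG)
builds warning-free; a generic sketch, with compute() standing in for the real
work:
  #include <cassert>
  static int compute(int X) { return X > 0 ? 0 : -1; } // stand-in helper
  void example(int X) {
    int Result = compute(X);
    assert(Result == 0 && "compute failed");
    (void)Result; // the assert compiles away under NDEBUG, so without the
                  // void-cast Result would trigger an unused-variable warning
  }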
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108088 91177308-0d34-0410-b5e6-96231b3b80d8
Use a COPY instruction for register copies instead, or TII::copyPhysReg() after
COPY instructions are lowered.
Targets should implement copyPhysReg instead of copyRegToReg.
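Replacing a copyRegToReg call is then typically a one-liner built on the
target-independent COPY opcode (sketch; MBB, Pos, DL, and TII are assumed to
come from the surrounding pass):
  // Emit a target-independent COPY; it is turned into a real target move
  // (or coalesced away) later.
  static void emitCopy(MachineBasicBlock &MBB, MachineBasicBlock::iterator Pos,
                       DebugLoc DL, unsigned DestReg, unsigned SrcReg,
                       bool KillSrc, const TargetInstrInfo *TII) {
    BuildMI(MBB, Pos, DL, TII->get(TargetOpcode::COPY), DestReg)
        .addReg(SrcReg, getKillRegState(KillSrc));
  }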
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108075 91177308-0d34-0410-b5e6-96231b3b80d8
Don't try a cross-class copy. That is very unlikely anyway since return value
registers are usually register-class friendly (%EAX, %XMM0, etc.).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108074 91177308-0d34-0410-b5e6-96231b3b80d8
correct alignment information, which simplifies ExpandRes_VAARG a bit.
The patch introduces new alignment information in TargetLoweringInfo. This is
needed since the two natural candidates cannot be used:
* The 's' in target data: If this is set to the minimal alignment of any
argument, getCallFrameTypeAlignment would return 4 for doubles on ARM for
example.
* The getTransientStackAlignment method. It is possible for an architecture to
have arguments that are less aligned than the alignment we maintain for the
stack pointer.
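For context, the case this serves is a plain va_arg of a double, where the
vararg slot must honor the type's ABI alignment (8 bytes for double under
AAPCS); a small reproducer:
  #include <cstdarg>
  // Reads one double from a variadic argument list; on ARM the slot must be
  // read with 8-byte alignment, which is what the new alignment information
  // lets the VAARG expansion get right.
  double first_vararg_double(int Count, ...) {
    va_list Args;
    va_start(Args, Count);
    double D = va_arg(Args, double);
    va_end(Args);
    return D;
  }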
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108072 91177308-0d34-0410-b5e6-96231b3b80d8
The remaining copyRegToReg calls actually check the return value (shock!), so we
cannot trivially replace them with COPY instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108069 91177308-0d34-0410-b5e6-96231b3b80d8
if a block is split (by a custom inserter), the insert point may be in a
different block than it was originally. This fixes 32-bit llvm-gcc
bootstrap builds; I haven't been able to reproduce the failure any other way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108060 91177308-0d34-0410-b5e6-96231b3b80d8
operation, but the way it's implemented requires the operation to also be
commutative. So add a check for commutativity (and tweak the corresponding
comments). This makes no difference in practice since every associative
LLVM instruction is also commutative! Here's an example to show the need
for commutativity: the accum_recursion.ll testcase calculates the factorial
function. Before the transformation the result of a call is
((((1*1)*2)*3)...)*x
while afterwards it is
(((1*x)*(x-1))...*2)*1
which is only equal to the original because * is both associative and
commutative.
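To make the reassociation concrete, here is a hand-written sketch of the
before/after shapes in plain C++ (not the actual testcase):
  // Before: plain recursion; the product associates bottom-up as
  //   ((((1*1)*2)*3)*...)*x
  unsigned fact(unsigned X) {
    if (X == 0)
      return 1;
    return fact(X - 1) * X;
  }
  // After accumulator-recursion elimination (shown as the equivalent loop):
  // the running product picks up the factors top-down instead, i.e.
  //   (((1*X)*(X-1))*...*2)*1
  // The two are only guaranteed equal because * is associative and commutative.
  unsigned fact_accum(unsigned X) {
    unsigned Acc = 1;
    for (unsigned I = X; I != 0; --I)
      Acc *= I;
    return Acc;
  }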
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108056 91177308-0d34-0410-b5e6-96231b3b80d8
ScheduleDAGEmit, TwoAddressLowering, and PHIElimination.
This switches the bulk of register copies to using COPY, but many less-used
copyRegToReg calls remain.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108050 91177308-0d34-0410-b5e6-96231b3b80d8
Based on a patch by Rafael Espíndola.
Attempt to make the FpSET_ST1 hack more robust, but we are still relying on
FpSET_ST0 preceding it. This is only for supporting really weird x87 inline
asm (an example of such asm is sketched after the patterns below).
We support:
  FpSET_ST0
  INLINEASM
and
  FpSET_ST0
  FpSET_ST1
  INLINEASM
with and without kills on the arguments. We don't support:
  FpSET_ST1
  FpSET_ST0
  INLINEASM
nor
  FpSET_ST1
  INLINEASM
Just Don't Do It!
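For reference, the kind of inline asm that needs both FpSET_ST0 and FpSET_ST1
is one with operands pinned to the top two x87 stack slots via the GCC/Clang
't' and 'u' constraints; a sketch along the lines of the example in the GCC
manual (x86 only; fpatan is just a convenient two-input x87 instruction):
  // "0" ties X to the "=t" output, i.e. st(0); "u" puts Y in st(1).
  // Materializing those two inputs is exactly the FpSET_ST0 / FpSET_ST1
  // pair that must precede the INLINEASM.  The st(1) clobber tells the
  // compiler that fpatan pops both inputs.
  static double arctan_ratio(double Y, double X) {
    double Result;
    __asm__("fpatan" : "=t"(Result) : "0"(X), "u"(Y) : "st(1)");
    return Result; // arctan(st(1)/st(0)) == arctan(Y/X)
  }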
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108047 91177308-0d34-0410-b5e6-96231b3b80d8