For the example in the testcase, we now generate:
_test1: ## @test1
        movss   4(%esp), %xmm0
        addss   8(%esp), %xmm0
        movl    12(%esp), %eax
        movss   %xmm0, (%eax)
        ret
instead of:
_test1: ## @test1
        subl    $20, %esp
        movl    24(%esp), %eax
        movq    %mm0, (%esp)
        movq    %mm0, 8(%esp)
        movss   (%esp), %xmm0
        addss   12(%esp), %xmm0
        movss   %xmm0, (%eax)
        addl    $20, %esp
        ret
v2f32 support did not work reliably because most of the X86
backend didn't know it was legal. It was apparently only added
to support returning source-level v2f32 values in MMX registers
in x86-32 mode. If ABI compatibility for this GCC extended
vector type is important for some reason, then the frontend
should generate IR that returns v2i32 instead of v2f32. However,
we generally don't try very hard to be ABI compatible on GCC
extended vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107601 91177308-0d34-0410-b5e6-96231b3b80d8
v2f32 as legal in 32-bit mode. It is just as terrible there,
but I only care about x86-64, and no one claims it is valuable
in 64-bit mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107600 91177308-0d34-0410-b5e6-96231b3b80d8
second round of low-level interface squeeze-out:
making all of CallInst's low-level operand accessors
private
If you get compile errors, I strongly urge you to update your
code. I tried to write the necessary clues into the header that
the compiler may point you to, but no guarantees. It works with
my GCC.
You have several options to update your code (a short sketch of
the first two follows the list):
- you can use the v2.8 ArgOperand accessors
- you can go via a temporary CallSite
- you can upcast to, say, User and call its
low-level accessors if your code is definitely
operand-order agnostic.
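For illustration only (not part of this commit), here is a minimal
sketch of the first two options using 2010-era include paths; the
helper getFirstCallArg and the assumption that you already hold a
CallInst *CI are made up:

    #include "llvm/Instructions.h"
    #include "llvm/Support/CallSite.h"
    #include <cassert>
    using namespace llvm;

    // Illustrative only: fetch the first call argument without touching
    // the low-level operand list directly.
    static Value *getFirstCallArg(CallInst *CI) {
      // Option 1: the v2.8 ArgOperand accessors on CallInst itself.
      Value *ViaArgOperand = CI->getArgOperand(0);

      // Option 2: go through a temporary CallSite, which hides the
      // operand order entirely.
      CallSite CS(CI);
      Value *ViaCallSite = CS.getArgument(0);

      assert(ViaArgOperand == ViaCallSite && "both paths see the same argument");
      return ViaArgOperand;
    }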
If you run into serious problems, please
comment in below thread (and back out this
revision only if absolutely necessary):
<http://groups.google.com/group/llvm-dev/browse_thread/thread/64650cf343b28271>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107580 91177308-0d34-0410-b5e6-96231b3b80d8
This code is transitional; it will soon be possible to eliminate
isExtractSubreg, isInsertSubreg, and isMoveInstr in most places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107547 91177308-0d34-0410-b5e6-96231b3b80d8
The COPY instruction is intended to replace the target-specific copy
instructions for virtual registers, as well as the EXTRACT_SUBREG and
INSERT_SUBREG instructions in MachineFunctions. It won't be used in a
selection DAG.
COPY is lowered to native register copies by LowerSubregs.
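For illustration (not part of this commit), a minimal sketch of how a
pass could emit the target-independent opcode, assuming the usual
BuildMI/TargetOpcode APIs and 2010-era include paths; every name here
(emitCopy, MBB, InsertPos, DL, TII, DstReg, SrcReg) is a placeholder:

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstrBuilder.h"
    #include "llvm/Target/TargetInstrInfo.h"
    #include "llvm/Target/TargetOpcodes.h"
    using namespace llvm;

    // Illustrative sketch: insert a target-independent COPY of SrcReg into
    // DstReg. LowerSubregs later rewrites it as a native copy for the target.
    static void emitCopy(MachineBasicBlock &MBB,
                         MachineBasicBlock::iterator InsertPos, DebugLoc DL,
                         const TargetInstrInfo *TII,
                         unsigned DstReg, unsigned SrcReg) {
      BuildMI(MBB, InsertPos, DL, TII->get(TargetOpcode::COPY), DstReg)
          .addReg(SrcReg);
    }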
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107529 91177308-0d34-0410-b5e6-96231b3b80d8
- Fix the VEX prefix to be emitted with 3 bytes whenever VEX_5M
represents a REX-equivalent two-byte leading opcode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107523 91177308-0d34-0410-b5e6-96231b3b80d8
list of predefined instructions appears. Add some consistency checks.
Ideally, TargetOpcodes.h should be produced by TableGen from Target.td,
but it is hardly worth the effort.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107520 91177308-0d34-0410-b5e6-96231b3b80d8
new basic blocks, and if used as a function argument, that can cause call
frame setup/destroy pairs to be split across a basic block boundary. That
prevents us from doing a simple assertion to check that the pairs match
and allocate/deallocate the same amount of space. Modify the assertion to
only check the amount allocated when there are matching pairs in the same
basic block.
rdar://8022442
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107517 91177308-0d34-0410-b5e6-96231b3b80d8
- X86 unfolding should check whether the instruction being unfolded has
memoperands. If there are no memoperands, it must assume conservative
alignment. If that would introduce an expensive unaligned SSE load/store,
then unfoldMemoryOperand etc. should not unfold the instruction.
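For illustration only, a hedged sketch of the kind of alignment check
described above; the helper name and the RequiredAlign parameter are made
up, and this is not the actual unfoldMemoryOperand logic:

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/MachineMemOperand.h"
    using namespace llvm;

    // Illustrative helper: is the memory access known to be at least
    // RequiredAlign bytes aligned? With no memoperands we must assume the
    // conservative answer, so a caller would refuse to unfold if that would
    // create an expensive unaligned SSE load/store.
    static bool hasKnownAlignmentAtLeast(const MachineInstr &MI,
                                         unsigned RequiredAlign) {
      if (MI.memoperands_empty())
        return false; // no memoperands: alignment unknown, be conservative
      return (*MI.memoperands_begin())->getAlignment() >= RequiredAlign;
    }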
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107509 91177308-0d34-0410-b5e6-96231b3b80d8
PrologEpilog code, and use it to determine whether
the asm forces stack alignment or not. GCC consistently
does not do this for GCC-style asms; Apple GCC does it
inconsistently for asm blocks. There is no convenient
place to put a bit in either the SDNode or the
MachineInstr form, so I've added an extra operand to
each; unlovely, but it does allow for expansion for more
bits, should we need it. PR 5125. Some existing
testcases are affected.
The operand lists of the SDNode and MachineInstr forms
are indexed with awesome mnemonics, like "2"; I may fix
this someday, but not now. I'm not making it any worse.
If anyone is inspired, I think you can find all the
right places from this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107506 91177308-0d34-0410-b5e6-96231b3b80d8
SlotIndexes::insertMachineInstrInMaps would crash when trying to insert an
instruction immediately after an unmapped debug value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107504 91177308-0d34-0410-b5e6-96231b3b80d8