The x86_mmx type is used for MMX intrinsics, and for parameters and
return values where these use MMX registers; it is also supported in
load, store, and bitcast.
Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics.
MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces. Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
pre-existing x86_mmx operation.
The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
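As a hedged illustration (a minimal IR sketch, not part of this change; the
function is hypothetical, but llvm.x86.mmx.padd.d is one of the MMX intrinsics
taking x86_mmx operands): the add below happens on the <2 x i32> form, where
optimizations are free to operate, and the value is only bitcast to x86_mmx
because it feeds an existing MMX operation.

  define x86_mmx @demo_mmx(<2 x i32> %a, <2 x i32> %b) {
    ; vector form: ordinary optimizations may run here
    %sum = add <2 x i32> %a, %b
    ; cast back to x86_mmx only because the value feeds an MMX intrinsic
    %m = bitcast <2 x i32> %sum to x86_mmx
    %r = call x86_mmx @llvm.x86.mmx.padd.d(x86_mmx %m, x86_mmx %m)
    ret x86_mmx %r
  }

  declare x86_mmx @llvm.x86.mmx.padd.d(x86_mmx, x86_mmx)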
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115243 91177308-0d34-0410-b5e6-96231b3b80d8
notes:
- The instructions are added with dummy placeholder patterns using some 256-bit
specifiers. This is not meant to work yet, but since some of the multiclasses are
generic enough to accept them, the definitions will already be in place when we
get to codegen.
- Add VEX encoding bits to support YMM
- Add MOVUPS and MOVAPS in the first round
- Use "Y" as suffix for those Instructions: MOVUPSYrr, ...
- All AVX instructions in X86InstrSSE.td will move soon to a new X86InstrAVX
file.
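As a hedged sketch of the kind of code that will eventually exercise these
instructions once codegen support exists (the function is hypothetical, written
in current IR syntax, and the rm/mr opcode names are extrapolated from the
MOVUPSYrr convention above): with AVX enabled, the unaligned 256-bit load and
store below would select the Y-suffixed unaligned moves, i.e. vmovups on YMM
registers.

  define void @copy256(ptr %dst, ptr %src) {
    %v = load <8 x float>, ptr %src, align 1    ; unaligned 256-bit load  (MOVUPSYrm)
    store <8 x float> %v, ptr %dst, align 1     ; unaligned 256-bit store (MOVUPSYmr)
    ret void
  }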
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107996 91177308-0d34-0410-b5e6-96231b3b80d8
A Register with subregisters must also provide SubRegIndices for addressing the
subregisters. TableGen automatically inherits indices for sub-subregisters to
minimize typing.
CompositeIndices may be specified for the weirder cases such as the XMM sub_sd
index that returns the same register, and ARM NEON Q registers where both D
subregs have ssub_0 and ssub_1 sub-subregs.
It is now required that all subregisters are named by an index, and a future
patch will also require inherited subregisters to be named. This is necessary to
allow composite subregister indices to be reduced to a single index.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104704 91177308-0d34-0410-b5e6-96231b3b80d8
A Register with subregisters must also provide SubRegIndices for addressing the
subregisters. TableGen automatically inherits indices for sub-subregisters to
minimize typing.
CompositeIndices may be specified for the weirder cases such as the XMM sub_sd
index that returns the same register, and ARM NEON Q registers where both D
subregs have ssub_0 and ssub_1 sub-subregs.
It is now required that all subregisters are named by an index, and a future
patch will also require inherited subregisters to be named. This is necessary to
allow composite subregister indices to be reduced to a single index.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104654 91177308-0d34-0410-b5e6-96231b3b80d8
SubRegIndex instances are now numbered uniquely, the same way Register
instances are: in lexicographical order by name.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104627 91177308-0d34-0410-b5e6-96231b3b80d8
structure that represents a mapping without any dependencies on SubRegIndex
numbering.
This brings us closer to being able to remove the explicit SubRegIndex
numbering, and it is now possible to specify any mapping without inventing
*_INVALID register classes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104563 91177308-0d34-0410-b5e6-96231b3b80d8
This is the beginning of purely symbolic subregister indices, but we need a bit
of jiggling before the explicit numeric indices can be completely removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104492 91177308-0d34-0410-b5e6-96231b3b80d8
and %rcr_, leaving just %cr_, which is what people expect.
Updated the disassembler to support this unified register set.
Added a testcase to verify that the registers continue to be
decoded correctly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103196 91177308-0d34-0410-b5e6-96231b3b80d8
When a frame pointer is not otherwise required, and dynamic stack alignment
is necessary solely due to the spilling of a register with larger alignment
requirements than the default stack alignment, the frame pointer can end up
being used both as a general-purpose register and as the frame pointer. That
goes poorly, for
obvious reasons. This patch brings back a bit of old logic for identifying
the use of such registers and conservatively reserves the frame pointer
during register allocation in such cases.
For now, implement this for X86 only, since it's 32-bit Linux that is hitting
this and we want a targeted fix for 2.7. As a follow-on, this will be expanded
to handle other targets, as theoretically the problem could arise elsewhere
as well.
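As a hedged illustration (a minimal IR sketch; the callee is hypothetical, and
an i686-style target with a default stack alignment below 16 bytes is assumed):
%v is live across the call, so it must be spilled to a 16-byte-aligned slot,
and that spill alone is what forces dynamic stack realignment here.

  declare void @clobber()   ; hypothetical external call; all XMM registers are caller-saved

  define <4 x float> @needs_aligned_spill(<4 x float> %v) {
    call void @clobber()    ; %v is spilled around the call into a 16-byte-aligned stack slot
    ret <4 x float> %v
  }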
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100559 91177308-0d34-0410-b5e6-96231b3b80d8
Extracting the low element of a vector is now done with EXTRACT_SUBREG,
and the zero-extension performed by load movss is now modeled with
SUBREG_TO_REG, and so on.
Register-to-register movss and movsd are no longer considered copies;
they are two-address instructions which insert a scalar into a vector.
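As a hedged illustration (a minimal IR sketch, not from this patch): the
low-element extract below is now selected via EXTRACT_SUBREG of the XMM
register, and the low-lane insert selects a register-to-register movss that
merges a scalar into the destination rather than acting as a plain copy.

  define float @low_elt(<4 x float> %v) {
    %lo = extractelement <4 x float> %v, i32 0   ; EXTRACT_SUBREG of the XMM register
    ret float %lo
  }

  define <4 x float> @insert_low(<4 x float> %v, float %s) {
    %ins = insertelement <4 x float> %v, float %s, i32 0   ; register-to-register movss
    ret <4 x float> %ins
  }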
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97354 91177308-0d34-0410-b5e6-96231b3b80d8
to the Intel register table.
Added 16- and 64-bit MOVs to and from the segment
registers to the Intel instruction tables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81895 91177308-0d34-0410-b5e6-96231b3b80d8
due to x86 encoding restrictions. This is currently off by default
because it may cause code quality regressions. This is for PR4572.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77565 91177308-0d34-0410-b5e6-96231b3b80d8
to precisely describe the h-register subreg register classes.
Thanks to Jakob Stoklund Olesen for spotting this and for the
initial patch!
Also, make getStoreRegOpcode and getLoadRegOpcode aware of the
needs of h registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70211 91177308-0d34-0410-b5e6-96231b3b80d8
- Add patterns for h-register extract, which avoids a shift and mask,
and in some cases a temporary register.
- Add address-mode matching for turning (X>>(8-n))&(255<<n), where
n is a valid address-mode scale value, into an h-register extract
and a scaled-offset address.
- Replace X86's MOV32to32_ and related instructions with the new
target-independent COPY_TO_SUBREG instruction.
On x86-64 there are complicated constraints on h registers, and
CodeGen doesn't currently provide a high-level way to express all of them,
so they are handled with a bunch of special code. This code currently only
supports extracts where the result is used by a zero-extend or a store,
though these are fairly common.
These transformations are not always beneficial; since there are only
4 h registers, they sometimes require extra move instructions, and
this sometimes increases register pressure because it can force out
values that would otherwise be in one of those registers. However,
this appears to be relatively uncommon.
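As a hedged illustration of the address-mode case (a minimal IR sketch in
current syntax; the function is hypothetical): the index below is
(x >> 8) & 255, and scaling it by the 4-byte element size gives exactly the
(X>>(8-n))&(255<<n) form with n = 2, so it can be selected as an h-register
extract folded into a scaled-offset address.

  define i32 @lookup(i32 %x, ptr %table) {
    %shr = lshr i32 %x, 8
    %idx = and i32 %shr, 255                                  ; bits 8-15 of %x: the h-register byte
    %gep = getelementptr inbounds i32, ptr %table, i32 %idx   ; base + idx*4, i.e. (X>>6)&(255<<2)
    %val = load i32, ptr %gep, align 4
    ret i32 %val
  }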
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68962 91177308-0d34-0410-b5e6-96231b3b80d8
builds.
--- Reverse-merging (from foreign repository) r68552 into '.':
U test/CodeGen/X86/tls8.ll
U test/CodeGen/X86/tls10.ll
U test/CodeGen/X86/tls2.ll
U test/CodeGen/X86/tls6.ll
U lib/Target/X86/X86Instr64bit.td
U lib/Target/X86/X86InstrSSE.td
U lib/Target/X86/X86InstrInfo.td
U lib/Target/X86/X86RegisterInfo.cpp
U lib/Target/X86/X86ISelLowering.cpp
U lib/Target/X86/X86CodeEmitter.cpp
U lib/Target/X86/X86FastISel.cpp
U lib/Target/X86/X86InstrInfo.h
U lib/Target/X86/X86ISelDAGToDAG.cpp
U lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U lib/Target/X86/X86ISelLowering.h
U lib/Target/X86/X86InstrInfo.cpp
U lib/Target/X86/X86InstrBuilder.h
U lib/Target/X86/X86RegisterInfo.td
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68560 91177308-0d34-0410-b5e6-96231b3b80d8
This introduces a small regression in generated code quality in the case
where we are just computing addresses, not loading values.
Will work on it and on X86-64 support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68552 91177308-0d34-0410-b5e6-96231b3b80d8