LEA64_32r, eliminating a bunch of the modifier logic on addressing modes.
Implement support for printing MBB labels as operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73817 91177308-0d34-0410-b5e6-96231b3b80d8
step is to make tblgen generate something more appropriate for MCInst,
and generate calls to operand translation routines where needed.
This includes a bunch of #if 0 code which will slowly be refactored into
something sensible.
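As a rough sketch of the direction (the routine and class names below, such as
printOperand, are illustrative and not the interface generated at this revision),
the tblgen output would become a switch over MCInst opcodes that calls small
per-operand helpers, with hand-written operand translation routines for the
hard cases:

  // Hypothetical shape of tblgen-generated MCInst printing; illustrative only.
  void printInstruction(const MCInst *MI, raw_ostream &O) {
    switch (MI->getOpcode()) {
    case X86::MOV32ri:
      O << "\tmovl\t";
      printOperand(MI, 1, O);   // immediate source
      O << ", ";
      printOperand(MI, 0, O);   // register destination
      break;
    // ... one case per instruction, generated from the .td files ...
    }
  }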
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73810 91177308-0d34-0410-b5e6-96231b3b80d8
implementation. The idea is that we want asmprinting to
work by converting MachineInstrs into a new MCInst class,
then the per-instruction asmprinter works on MCInst. MCInst
and the new asmprinters will not depend on most of the
llvm code generators. This allows building disassemblers
that don't link in the whole llvm code generator. This is
step #1 of many.
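A minimal sketch of that conversion, assuming the eventual MCInst/MCOperand
interface (not all of which exists at this revision): copy the opcode and each
MachineOperand across, leaving the awkward operand kinds to target-specific
translation routines.

  // Hedged sketch only: lower a MachineInstr into an MCInst so the
  // per-instruction asmprinter never has to touch the MachineInstr.
  static void lowerToMCInst(const MachineInstr &MI, MCInst &Out) {
    Out.setOpcode(MI.getOpcode());
    for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
      const MachineOperand &MO = MI.getOperand(i);
      if (MO.isReg())
        Out.addOperand(MCOperand::CreateReg(MO.getReg()));
      else if (MO.isImm())
        Out.addOperand(MCOperand::CreateImm(MO.getImm()));
      // MBB labels, globals, external symbols, etc. need their own
      // translation routines.
    }
  }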
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73743 91177308-0d34-0410-b5e6-96231b3b80d8
into DarwinTargetAsmInfo.cpp. The remaining differences should
be evaluated. It seems strange that x86/arm has .zerofill but ppc
doesn't, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73742 91177308-0d34-0410-b5e6-96231b3b80d8
initialization of all targets (InitializeAllTargets.h) or assembler
printers (InitializeAllAsmPrinters.h). This is a step toward the
elimination of relinked object files, so that we can build normal
archives.
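For example, a tool that wants every configured backend and asmprinter
available (and already links the corresponding target libraries) would include
the two headers and call the functions once at startup; the include paths below
are an assumption, since only the header names appear here.

  // Include paths are assumed; the headers are named
  // InitializeAllTargets.h and InitializeAllAsmPrinters.h.
  #include "llvm/InitializeAllTargets.h"
  #include "llvm/InitializeAllAsmPrinters.h"

  int main() {
    llvm::InitializeAllTargets();       // register every configured target
    llvm::InitializeAllAsmPrinters();   // register every configured asmprinter
    // ... the rest of the tool ...
    return 0;
  }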
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73543 91177308-0d34-0410-b5e6-96231b3b80d8
comes after the DW_CFA_def_cfa_register, because the CFA is really ESP from the
start of the function and only gets an offset when the "subl $xxx,%esp"
instruction happens, not the other way around.
And reapply r72898:
The DWARF unwind info was incorrect. While compiling with
`-fomit-frame-pointer', we would lack the DW_CFA_advance_loc information for a
lot of functions, and then they would be `0'. The linker (at least on Darwin)
needs to encode the stack size. In some cases, the stack size is too large to
directly encode. So the linker checks to see if there is a "subl $xxx,%esp"
instruction at the point where the `DW_CFA_def_cfa_offset' says the pc was. If
so, the compact encoding records the offset in the function to where the stack
size is embedded. But because the `DW_CFA_advance_loc' instructions are missing,
it looks before the function and dies.
So, instead of emitting the EH debug label before the stack adjustment
operations, emit it afterwards, right before the frame moves.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73465 91177308-0d34-0410-b5e6-96231b3b80d8
that push immediate operands of 1, 2, and 4 bytes (extended to the native
register size in each case). The assembly mnemonics are "pushl" and "pushq."
One such instruction appears at the beginning of the "start" function, so this
is essential for accurate disassembly when unwinding.
Patch by Sean Callanan!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73407 91177308-0d34-0410-b5e6-96231b3b80d8
out of sync with regular cc.
The only difference between the tail call cc and the normal
cc was that one parameter register - R9 - was reserved for
calling functions through a function pointer. Over time the
tail call cc has gotten out of sync with the regular cc.
We can use R11, which is also caller-saved but not used as a
parameter register, for potential function pointers and
remove the special tail call cc on x86-64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73233 91177308-0d34-0410-b5e6-96231b3b80d8
Emission for globals, using the correct data sections
Function alignment can be computed for each target using TargetELFWriterInfo
Some small fixes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73201 91177308-0d34-0410-b5e6-96231b3b80d8
ABI. The missing piece is support for putting "homogeneous aggregates"
into registers.
Patch by Sandeep Patel!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73095 91177308-0d34-0410-b5e6-96231b3b80d8
on x86 to handle more cases. Fix a bug in said code that would cause it
to read past the end of an object. Rewrite the code in
SelectionDAGLegalize::ExpandBUILD_VECTOR to be a bit more general.
Remove PerformBuildVectorCombine, which is no longer necessary with
these changes. In addition to simplifying the code, with this change,
we can now catch a few more cases of consecutive loads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73012 91177308-0d34-0410-b5e6-96231b3b80d8
nodes for vectors with an i16 element type. Add an optimization for
building a vector which is all zeros/undef except for the bottom
element, where the bottom element is an i8 or i16.
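As a hedged, source-level illustration of the pattern (not the test case from
this commit), an SSE2 intrinsic that builds exactly such a vector: zeros
everywhere except a small integer in the bottom element.

  #include <emmintrin.h>

  // Illustrative only: a vector that is all zeros except for a 16-bit value
  // in the bottom element.  The optimization described above aims to lower
  // patterns like this without a pile of shuffle/insert nodes.
  __m128i bottom_element_only(unsigned short x) {
    return _mm_cvtsi32_si128((int)x);   // x in the low bits, everything else zero
  }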
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72988 91177308-0d34-0410-b5e6-96231b3b80d8
Update the code generator to use this attribute and remove the NoImplicitFloat target option.
Update llc to set this attribute when the -no-implicit-float command line option is used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72959 91177308-0d34-0410-b5e6-96231b3b80d8
build vectors with i64 elements will only appear on 32b x86 before legalize.
Since vector widening occurs during legalize, and produces i64 build_vector
elements, the dag combiner is never run on these before legalize splits them
into 32b elements.
Teach the build_vector dag combine in x86 back end to recognize consecutive
loads producing the low part of the vector.
Convert the two uses of TLI's consecutive load recognizer to pass LoadSDNodes
since that was required implicitly.
Add a testcase for the transform.
Old:
subl $28, %esp
movl 32(%esp), %eax
movl 4(%eax), %ecx
movl %ecx, 4(%esp)
movl (%eax), %eax
movl %eax, (%esp)
movaps (%esp), %xmm0
pmovzxwd %xmm0, %xmm0
movl 36(%esp), %eax
movaps %xmm0, (%eax)
addl $28, %esp
ret
New:
movl 4(%esp), %eax
pmovzxwd (%eax), %xmm0
movl 8(%esp), %eax
movaps %xmm0, (%eax)
ret
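A hedged, source-level analogue of the kind of code affected (not the actual
testcase): zero-extend four 16-bit lanes loaded from memory into 32-bit lanes.
Before the combine the element loads were staged through the stack, as in the
"Old" sequence; with it the vector is loaded and extended directly, as in the
"New" sequence.

  #include <smmintrin.h>   // SSE4.1, for pmovzxwd

  // Illustrative only; the function name and signature are made up.
  void zext_4xu16_to_4xu32(const void *in, __m128i *out) {
    __m128i v = _mm_loadl_epi64((const __m128i *)in);  // 8 consecutive bytes: 4 x u16
    *out = _mm_cvtepu16_epi32(v);                       // pmovzxwd
  }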
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72957 91177308-0d34-0410-b5e6-96231b3b80d8