- The COFF backend doesn't support MinGW/Cygwin at the moment; it'll report an
error, but that's still much better than random assertions from the MachO backend.
- We want to make ELF the default eventually; it's what the majority of targets use.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110197 91177308-0d34-0410-b5e6-96231b3b80d8
pass that inserted it.
It is no longer necessary to limit the live ranges of FP registers to a single
basic block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108536 91177308-0d34-0410-b5e6-96231b3b80d8
FP_REG_KILL instructions are still inserted, but can be disabled by passing
-live-x87 to llc. The X87FPRegKillInserterPass is going to be removed shortly.
CFG edges are partitioned into bundles where the x87 stack must be allocated
identically. Code is inserted at the end of each basic block that shuffles the
live FP registers to match the outgoing bundle's expectations.
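As a rough illustration of the bundle idea (the data structure below is a
from-scratch sketch, not the actual LLVM code), CFG edges can be grouped with a
small union-find so that the outgoing side of a block and the incoming side of
each successor always land in the same bundle:

  #include <cstdio>
  #include <numeric>
  #include <vector>

  // Sketch: each basic block gets two bundle slots, one for its incoming edges
  // (2*BB) and one for its outgoing edges (2*BB+1).  Every CFG edge joins the
  // source's outgoing slot with the destination's incoming slot, so all edges
  // that must see an identical x87 stack layout end up in one set.
  struct EdgeBundles {
    std::vector<unsigned> Parent;

    explicit EdgeBundles(unsigned NumBlocks) : Parent(2 * NumBlocks) {
      std::iota(Parent.begin(), Parent.end(), 0u);
    }

    unsigned find(unsigned X) {
      while (Parent[X] != X)
        X = Parent[X] = Parent[Parent[X]]; // path halving
      return X;
    }

    void joinEdge(unsigned Src, unsigned Dst) {
      Parent[find(2 * Src + 1)] = find(2 * Dst); // outgoing(Src) ~ incoming(Dst)
    }

    unsigned outgoingBundle(unsigned BB) { return find(2 * BB + 1); }
    unsigned incomingBundle(unsigned BB) { return find(2 * BB); }
  };

  int main() {
    // Diamond CFG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
    EdgeBundles EB(4);
    EB.joinEdge(0, 1);
    EB.joinEdge(0, 2);
    EB.joinEdge(1, 3);
    EB.joinEdge(2, 3);
    // Block 0's outgoing bundle is shared with the incoming side of 1 and 2,
    // so all three sides must agree on one x87 stack layout.
    std::printf("out(0)=%u in(1)=%u in(2)=%u\n", EB.outgoingBundle(0),
                EB.incomingBundle(1), EB.incomingBundle(2));
    return 0;
  }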
This fix is in preparation for some upcoming register allocator improvements
that may extend the live range of registers beyond a basic block, similar to
LICM. It also provides a nice runtime speedup if you are building with
-mfpmath=387.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108529 91177308-0d34-0410-b5e6-96231b3b80d8
- Check getBytesToPopOnReturn().
- Eschew ST0 and ST1 for return values.
- Fix the PIC base register initialization so that it doesn't ever
fail to end up at the top of the entry block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108039 91177308-0d34-0410-b5e6-96231b3b80d8
isn't ideal if we want to be able to use another object file format.
Add a createObjectStreamer() factory method so that the correct object
file streamer can be instantiated for a given target triple.
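A schematic sketch of the factory idea (the streamer types and the triple
matching below are simplified stand-ins, not the real TargetRegistry
interface): the caller hands over a triple and gets back whichever object-file
streamer fits it, instead of hard-coding one format.

  #include <memory>
  #include <string>

  // Hypothetical minimal interface standing in for the MC object streamers.
  struct ObjectStreamer { virtual ~ObjectStreamer() = default; };
  struct MachOStreamer : ObjectStreamer {};
  struct ELFStreamer : ObjectStreamer {};
  struct COFFStreamer : ObjectStreamer {};

  // Pick the object-file streamer from the OS part of the triple.
  std::unique_ptr<ObjectStreamer> createObjectStreamer(const std::string &Triple) {
    if (Triple.find("darwin") != std::string::npos ||
        Triple.find("macosx") != std::string::npos)
      return std::make_unique<MachOStreamer>();
    if (Triple.find("mingw") != std::string::npos ||
        Triple.find("cygwin") != std::string::npos ||
        Triple.find("win32") != std::string::npos)
      return std::make_unique<COFFStreamer>();
    // Default to ELF for everything else (Linux, the BSDs, ...).
    return std::make_unique<ELFStreamer>();
  }

  int main() {
    auto S = createObjectStreamer("x86_64-unknown-linux-gnu");
    return S ? 0 : 1;
  }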
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104318 91177308-0d34-0410-b5e6-96231b3b80d8
Move EmitTargetCodeForMemcpy, EmitTargetCodeForMemset, and
EmitTargetCodeForMemmove out of TargetLowering and into
SelectionDAGInfo to exercise this.
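A deliberately simplified sketch of the shape of this design (the types and
signature below are stand-ins, not the real SelectionDAG API): the target hook
lives on a small per-target DAG-info object rather than on TargetLowering, and
returning an empty value means "use the generic expansion".

  #include <cstdio>

  // Stand-in for an SDValue; only the shape of the interface matters here.
  struct Value { long Bytes = 0; };

  struct SelectionDAGInfoSketch {
    virtual ~SelectionDAGInfoSketch() = default;
    // Default: no target-specific expansion, fall back to the generic
    // libcall/load-store lowering.
    virtual Value emitTargetCodeForMemcpy(Value /*Dst*/, Value /*Src*/,
                                          long /*Size*/) const {
      return Value{};
    }
  };

  // An X86-flavoured override: pretend small copies get inlined.
  struct X86SelectionDAGInfoSketch : SelectionDAGInfoSketch {
    Value emitTargetCodeForMemcpy(Value, Value, long Size) const override {
      if (Size <= 128)
        return Value{Size}; // custom inline expansion
      return Value{};       // too big, let the generic code handle it
    }
  };

  int main() {
    X86SelectionDAGInfoSketch Info;
    Value V = Info.emitTargetCodeForMemcpy(Value{}, Value{}, 64);
    std::printf("custom expansion used: %s\n", V.Bytes ? "yes" : "no");
    return 0;
  }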
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103481 91177308-0d34-0410-b5e6-96231b3b80d8
When a frame pointer is not otherwise required, and dynamic stack alignment
is necessary solely due to the spilling of a register with larger alignment
requirements than the default stack alignment, the frame pointer can end up being
used both as a general-purpose register and as the frame pointer. That goes poorly, for
obvious reasons. This patch brings back a bit of old logic for identifying
the use of such registers and conservatively reserves the frame pointer
during register allocation in such cases.
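In rough sketch form (the helper names and the per-function summary below are
hypothetical), the conservative rule amounts to: if a spill slot alone could
force dynamic realignment, reserve the frame pointer up front so the allocator
can never hand it out as a scratch register.

  #include <algorithm>
  #include <vector>

  // Hypothetical summary of the per-function facts the target already knows.
  struct FrameFacts {
    unsigned StackAlignment;               // default ABI stack alignment
    std::vector<unsigned> SpillSlotAligns; // alignment of each spill slot
    bool OtherwiseNeedsFramePointer;       // varargs, alloca, debugging, ...
  };

  // Conservatively decide whether the frame pointer register must be reserved.
  bool mustReserveFramePointer(const FrameFacts &F) {
    if (F.OtherwiseNeedsFramePointer)
      return true; // it is already a frame pointer, never a scratch register
    unsigned MaxSpillAlign = 0;
    for (unsigned A : F.SpillSlotAligns)
      MaxSpillAlign = std::max(MaxSpillAlign, A);
    // A spill slot more aligned than the incoming stack may force dynamic
    // realignment later, and realignment needs the frame pointer intact.
    return MaxSpillAlign > F.StackAlignment;
  }

  int main() {
    // 32-bit Linux flavour: 4-byte stack alignment, one 16-byte XMM spill.
    FrameFacts F{4, {16}, false};
    return mustReserveFramePointer(F) ? 0 : 1;
  }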
For now, implement this for X86 only, since it's 32-bit Linux that is hitting
this and we want a targeted fix for 2.7. As a follow-on, this will be expanded
to handle other targets, as theoretically the problem could arise elsewhere
as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100559 91177308-0d34-0410-b5e6-96231b3b80d8
On Nehalem and newer CPUs there is a 2-cycle latency penalty for using a register
in a different domain than the one it was defined in. Some instructions have
equivalents in different domains, like por/orps/orpd.
The SSEDomainFix pass tries to minimize the number of domain crossings by
switching between equivalent opcodes where possible.
This is a work in progress; in particular, the pass doesn't do anything yet. SSE
instructions are tagged with their execution domain in TableGen using the last
two bits of TSFlags. Note that not all instructions are tagged correctly. Life
just isn't that simple.
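As a toy illustration (the opcode names, bit position, and table below are
made up for the example; the real tables live in X86InstrInfo), the pass needs
exactly two ingredients: the execution domain read out of two TSFlags bits,
and a table of equivalent opcodes across domains.

  #include <cstdint>
  #include <cstdio>

  // Toy execution domains encoded in two bits of a TSFlags-like field.
  enum Domain : unsigned { NotTracked = 0, PackedInt = 1, PackedSingle = 2, PackedDouble = 3 };
  constexpr unsigned DomainShift = 22; // made-up bit position
  constexpr std::uint64_t DomainMask = 3ull << DomainShift;

  inline Domain getExecutionDomain(std::uint64_t TSFlags) {
    return static_cast<Domain>((TSFlags & DomainMask) >> DomainShift);
  }

  // One row per logical operation: the equivalent opcode in each domain
  // (0 = no equivalent).  Only a bitwise-OR row is shown.
  enum Opcode : unsigned { PORrr = 1, ORPSrr = 2, ORPDrr = 3 };
  struct DomainRow { unsigned Equiv[4]; }; // indexed by Domain
  static const DomainRow ReplaceableInstrs[] = {
      {{0, PORrr, ORPSrr, ORPDrr}},
  };

  // If Opc has an equivalent in the wanted domain, return it; else keep Opc.
  unsigned switchDomain(unsigned Opc, Domain Want) {
    for (const DomainRow &Row : ReplaceableInstrs)
      for (unsigned D = 1; D != 4; ++D)
        if (Row.Equiv[D] == Opc && Row.Equiv[Want])
          return Row.Equiv[Want];
    return Opc;
  }

  int main() {
    std::uint64_t FakeTSFlags = static_cast<std::uint64_t>(PackedInt) << DomainShift;
    std::printf("domain bits: %u\n", unsigned(getExecutionDomain(FakeTSFlags)));
    // A float computation feeding a por would cross domains; prefer orps.
    std::printf("por -> %u (orps is %u)\n",
                switchDomain(PORrr, PackedSingle), unsigned(ORPSrr));
    return 0;
  }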
The SSE execution domain issue is very similar to the ARM NEON/VFP pipeline
issue handled by NEONMoveFixPass. This pass may become target independent to
handle both.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99524 91177308-0d34-0410-b5e6-96231b3b80d8
This is work in progress. So far, SSE execution domain tables are added to
X86InstrInfo, and a skeleton pass is enabled with -sse-domain-fix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99345 91177308-0d34-0410-b5e6-96231b3b80d8
Otherwise the AT&T asm printer is used with an incompatible MCAsmInfo and
there is no way to override this behaviour.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96165 91177308-0d34-0410-b5e6-96231b3b80d8
We still have the templated X86 JIT emitter, *and* the
almost-copy in X86InstrInfo for getting instruction sizes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96059 91177308-0d34-0410-b5e6-96231b3b80d8
only run for x86 with fastisel. I've found it to be very effective in
eliminating some obvious dead code resulting from formal parameter lowering,
especially when tail call optimization eliminated the need for some of the loads
from fixed frame objects. It also shrinks a number of the tests. A couple of
tests no longer make sense and are now eliminated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95493 91177308-0d34-0410-b5e6-96231b3b80d8
eliminate random "code emitter" stuff in Alpha, except for
the JIT path. Next up, remove the template cruft.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95131 91177308-0d34-0410-b5e6-96231b3b80d8
function can support dynamic stack realignment. That's a much easier question
to answer at the instruction selection stage than whether the function actually
will have a dynamic alignment prologue. This allows the removal of the
stack alignment heuristic pass, and improves code quality for cases where
the heuristic would result in dynamic alignment code being generated when
it was not strictly necessary.
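The distinction can be sketched like this (the predicate names and the
function summary are illustrative, not the actual API): selection-time code
only needs to know whether realignment is possible, while the prologue
decision also needs the final frame contents.

  // Hypothetical function-level summary; names are illustrative only.
  struct FunctionInfo {
    bool HasVarSizedObjects;  // rules out freely adjusting the stack pointer
    bool RealignmentDisabled; // e.g. disabled by a flag or ABI constraint
    unsigned MaxObjectAlign;  // largest alignment of any stack object so far
    unsigned StackAlign;      // incoming stack alignment from the ABI
  };

  // Answerable during instruction selection: does anything forbid realigning?
  bool canRealignStack(const FunctionInfo &FI) {
    return !FI.HasVarSizedObjects && !FI.RealignmentDisabled;
  }

  // Only answerable once the frame is final: will the realignment prologue
  // actually be emitted?
  bool willRealignStack(const FunctionInfo &FI) {
    return canRealignStack(FI) && FI.MaxObjectAlign > FI.StackAlign;
  }

  int main() {
    FunctionInfo FI{false, false, 16, 4};
    return (canRealignStack(FI) && willRealignStack(FI)) ? 0 : 1;
  }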
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93885 91177308-0d34-0410-b5e6-96231b3b80d8
idea, but unfortunately necessary.
- Default to using 4 bytes for the LSDA pointer encoding to agree with the
encoding specified in the CIE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93753 91177308-0d34-0410-b5e6-96231b3b80d8
The CIE says that the LSDA pointer in the FDE section is an "sdata4". That's fine,
but we need it to actually be 4 bytes in the FDE for some platforms. Allow
individual platforms to decide for themselves.
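A small sketch of the constraint (it uses the standard DWARF EH
pointer-encoding constants, but the helper itself is illustrative): the
encoding byte advertised in the CIE dictates how many bytes the LSDA field in
each FDE must occupy, so the two have to be chosen together.

  #include <cstdint>
  #include <cstdio>

  // Standard DWARF EH pointer-encoding values (subset).
  enum : std::uint8_t {
    DW_EH_PE_absptr = 0x00, // pointer-sized
    DW_EH_PE_sdata4 = 0x0B, // signed 4-byte
    DW_EH_PE_sdata8 = 0x0C, // signed 8-byte
  };

  // Width in bytes of an LSDA field emitted with the given encoding, assuming
  // an 8-byte pointer for absptr.  The FDE must use exactly this many bytes,
  // because the CIE's encoding byte is what the unwinder will believe.
  unsigned lsdaFieldSize(std::uint8_t Encoding) {
    switch (Encoding & 0x0F) {
    case DW_EH_PE_sdata4: return 4;
    case DW_EH_PE_sdata8: return 8;
    default:              return 8; // absptr on a 64-bit target
    }
  }

  int main() {
    std::printf("sdata4 LSDA field: %u bytes\n", lsdaFieldSize(DW_EH_PE_sdata4));
    return 0;
  }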
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93616 91177308-0d34-0410-b5e6-96231b3b80d8
by allowing backends to override routines that will default the JIT and static
code generation to an appropriate code model
for the architecture.
Should fix PR 5773.
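A schematic sketch of what such an override hook could look like (all names
here are hypothetical, not the actual TargetMachine interface):

  // Hypothetical hook: each backend picks sensible defaults for the JIT and
  // for static compilation instead of the generic driver guessing.
  enum class CodeModel { Small, Kernel, Medium, Large };

  struct TargetDefaultsSketch {
    virtual ~TargetDefaultsSketch() = default;
    virtual CodeModel defaultCodeModelForJIT() const { return CodeModel::Small; }
    virtual CodeModel defaultCodeModelForStatic() const { return CodeModel::Small; }
  };

  // An x86-64 flavoured override: JITed code and its callees may end up
  // anywhere in the 64-bit address space, so default the JIT to the large
  // code model.
  struct X8664DefaultsSketch : TargetDefaultsSketch {
    CodeModel defaultCodeModelForJIT() const override { return CodeModel::Large; }
  };

  int main() {
    X8664DefaultsSketch T;
    return T.defaultCodeModelForJIT() == CodeModel::Large ? 0 : 1;
  }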
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91824 91177308-0d34-0410-b5e6-96231b3b80d8
incarnations), integrated into the MC framework.
The disassembler is table-driven, using a custom TableGen backend to
generate hierarchical tables optimized for fast decode. The disassembler
consumes MemoryObjects and produces arrays of MCInsts, adhering to the
abstract base class MCDisassembler (llvm/MC/MCDisassembler.h).
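As a toy illustration of the hierarchical-table idea (this is just the shape
of the lookup, not the generated X86 table format): each table maps one field
of the instruction to either another table or a final opcode.

  #include <cstddef>
  #include <cstdint>
  #include <cstdio>
  #include <vector>

  struct DecodeNode {
    bool IsLeaf = false;
    unsigned Opcode = 0;   // valid when IsLeaf is true
    std::vector<int> Next; // child table indices, -1 = invalid encoding
  };

  // Walk the tables, consuming one byte per level, until a leaf is reached.
  // Returns the decoded opcode, or 0 for an invalid or truncated encoding.
  unsigned decode(const std::vector<DecodeNode> &Tables,
                  const std::uint8_t *Bytes, std::size_t Size) {
    int Node = 0;
    for (std::size_t I = 0; I < Size; ++I) {
      const DecodeNode &N = Tables[Node];
      if (N.IsLeaf)
        return N.Opcode;
      if (static_cast<std::size_t>(Bytes[I]) >= N.Next.size() ||
          N.Next[Bytes[I]] < 0)
        return 0;
      Node = N.Next[Bytes[I]];
    }
    return Tables[Node].IsLeaf ? Tables[Node].Opcode : 0;
  }

  int main() {
    // The root table (index 0) sends byte 0x90 to a leaf holding opcode 42.
    std::vector<DecodeNode> Tables(2);
    Tables[0].Next.assign(256, -1);
    Tables[0].Next[0x90] = 1;
    Tables[1].IsLeaf = true;
    Tables[1].Opcode = 42;
    const std::uint8_t Insn[] = {0x90};
    std::printf("decoded opcode %u\n", decode(Tables, Insn, sizeof(Insn)));
    return 0;
  }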
The disassembler is documented in detail in
- lib/Target/X86/Disassembler/X86Disassembler.cpp (disassembler runtime)
- utils/TableGen/DisassemblerEmitter.cpp (table emitter)
You can test the disassembler by running llvm-mc -disassemble for i386
or x86_64 targets. Please let me know if you encounter any problems
with it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91749 91177308-0d34-0410-b5e6-96231b3b80d8
The large code model is documented at
http://www.x86-64.org/documentation/abi.pdf and says that calls should
assume their target doesn't live within the 32-bit pc-relative offset
that fits in the call instruction.
To do this, we turn off the global-address->target-global-address
conversion in X86TargetLowering::LowerCall(). The first attempt at
this broke the lazy JIT because it can separate the movabs(imm->reg)
from the actual call instruction. The lazy JIT receives the address of
the movabs as a relocation and needs to record the return address from
the call; and then when that call happens, it needs to patch the
movabs with the newly-compiled target. We could thread the call
instruction into the relocation and record the movabs<->call mapping
explicitly, but that seems to require at least as much new
complication in the code generator as this change.
To fix this, we make lazy functions _always_ go through a call
stub. You'd think we'd only have to force lazy calls through a stub on
difficult platforms, but that turns out to break indirect calls
through a function pointer. The right fix for that is to distinguish
between calls and address-of operations on uncompiled functions, but
that's complex enough to leave for someone else to do.
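For reference, such a far-call stub is just a two-instruction sequence. The
sketch below (a made-up emitter helper, but real x86-64 encodings, assuming a
little-endian host) writes "movabs $target, %rax; call *%rax" into a buffer so
the JIT can later patch the 64-bit immediate with the compiled address:

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  // Emit a 12-byte far-call stub into Buf:
  //   48 B8 <imm64>   movabs $Target, %rax
  //   FF D0           call   *%rax
  // Returns the offset of the 8-byte immediate so it can be patched once the
  // real function has been compiled.
  std::size_t emitFarCallStub(std::uint8_t *Buf, std::uint64_t Target) {
    Buf[0] = 0x48; // REX.W prefix
    Buf[1] = 0xB8; // MOV RAX, imm64
    std::memcpy(Buf + 2, &Target, sizeof(Target)); // little-endian imm64
    Buf[10] = 0xFF; // CALL r/m64 ...
    Buf[11] = 0xD0; // ... with r/m = RAX
    return 2;       // immediate lives at Buf + 2
  }

  int main() {
    std::uint8_t Stub[12];
    std::size_t ImmOff = emitFarCallStub(Stub, 0x1122334455667788ull);
    // The lazy JIT would overwrite the 8 bytes at Stub + ImmOff once the
    // target function exists.
    return ImmOff == 2 ? 0 : 1;
  }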
Another attempt at this defined a new CALL64i pseudo-instruction,
which expanded to a 2-instruction sequence in the assembly output and
was special-cased in the X86CodeEmitter's emitInstruction()
function. That broke indirect calls in the same way as above.
This patch also removes a hack forcing Darwin to the small code model.
Without far-call-stubs, the small code model requires things of the
JITMemoryManager that the DefaultJITMemoryManager can't provide.
Thanks to echristo for lots of testing!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88984 91177308-0d34-0410-b5e6-96231b3b80d8