mirror of https://github.com/c64scene-ar/llvm-6502.git
synced 2024-12-30 02:32:08 +00:00
48575f6ea7
Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons:

1. Even though a single vmla has latency that is one cycle shorter than a
   pair of vmul + vadd, a RAW hazard during the first few cycles (4? on
   Cortex-A8) can cause an additional pipeline stall, so it's frequently
   better to simply codegen vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp
   instruction to stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing a
   back-to-back RAW vmla + vmla is very bad, but this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel has simply avoided codegen'ing fp vmla / vmls. This works
well enough, but it isn't the optimal solution. This patch attempts to make
it possible to use vmla / vmls in cases where it is profitable:

A. Add the missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute both an fmul and an fmla.
C. Add additional isel checks for vmla, avoiding cases where vmla is feeding
   into fp instructions (except for the #3 exceptional case).
D. Add an ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc pass to expand vmla / vmls when it's likely
   the vmla / vmls will trigger one of the special hazards.

Work in progress; only A+B are enabled.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120960 91177308-0d34-0410-b5e6-96231b3b80d8
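To make rule B concrete, here is a minimal sketch of a single-use check over
a (fadd (fmul)) node. This is not the code from the patch; it only assumes
the generic SelectionDAG APIs (SDNode, SDValue, ISD opcodes), and the helper
name shouldFormMLA is made up for illustration:

    #include "llvm/CodeGen/SelectionDAGNodes.h"
    using namespace llvm;

    // Sketch: fold (fadd (fmul a, b), c) into a vmla only when the fmul has
    // exactly one use. If the multiply result is needed elsewhere, folding
    // would make us compute both a vmul and a vmla for the same multiply.
    static bool shouldFormMLA(SDNode *FAdd) {
      SDValue Mul = FAdd->getOperand(0);
      if (Mul.getOpcode() != ISD::FMUL)
        Mul = FAdd->getOperand(1);       // the fmul may be either operand
      if (Mul.getOpcode() != ISD::FMUL)
        return false;                    // no fmul to fold at all
      return Mul.hasOneUse();            // rule B: single-use fmul only
    }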
64 lines
2.0 KiB
C++
//===-- ARM.h - Top-level interface for ARM representation---- --*- C++ -*-===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file contains the entry points for global functions defined in the LLVM
// ARM back-end.
//
//===----------------------------------------------------------------------===//

#ifndef TARGET_ARM_H
#define TARGET_ARM_H

#include "ARMBaseInfo.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Target/TargetMachine.h"
#include <cassert>

namespace llvm {

class ARMBaseTargetMachine;
class FunctionPass;
class JITCodeEmitter;
class formatted_raw_ostream;
class MCCodeEmitter;
class TargetAsmBackend;
class MachineInstr;
class ARMAsmPrinter;
class MCInst;

MCCodeEmitter *createARMMCCodeEmitter(const Target &,
                                      TargetMachine &TM,
                                      MCContext &Ctx);

TargetAsmBackend *createARMAsmBackend(const Target &, const std::string &);

FunctionPass *createARMISelDag(ARMBaseTargetMachine &TM,
                               CodeGenOpt::Level OptLevel);

FunctionPass *createARMJITCodeEmitterPass(ARMBaseTargetMachine &TM,
                                          JITCodeEmitter &JCE);

FunctionPass *createARMLoadStoreOptimizationPass(bool PreAlloc = false);
FunctionPass *createARMExpandPseudoPass();
FunctionPass *createARMGlobalMergePass(const TargetLowering* tli);
FunctionPass *createARMConstantIslandPass();
FunctionPass *createNEONMoveFixPass();
FunctionPass *createMLxExpansionPass();
FunctionPass *createThumb2ITBlockPass();
FunctionPass *createThumb2SizeReductionPass();

extern Target TheARMTarget, TheThumbTarget;

void LowerARMMachineInstrToMCInst(const MachineInstr *MI, MCInst &OutMI,
                                  ARMAsmPrinter &AP);

} // end namespace llvm;

#endif
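For context, entry points like createMLxExpansionPass are meant to be wired
into the target's pass pipeline. A hedged sketch of that wiring, assuming the
addPreRegAlloc hook of this era (the real code lives in ARMTargetMachine.cpp,
and the exact guard conditions there may differ):

    // Sketch only: schedule the MLx expansion pass before register
    // allocation when optimizing, so vmla / vmls likely to hit the hazards
    // described in the commit message get expanded into vmul + vadd.
    bool ARMBaseTargetMachine::addPreRegAlloc(PassManagerBase &PM,
                                              CodeGenOpt::Level OptLevel) {
      if (OptLevel != CodeGenOpt::None)
        PM.add(createMLxExpansionPass());
      return true;
    }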