//===- llvm/Analysis/TargetTransformInfo.h ----------------------*- C++ -*-===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This pass exposes codegen information to IR-level passes. Every
// transformation that uses codegen information is broken into three parts:
// 1. The IR-level analysis pass.
// 2. The IR-level transformation interface which provides the needed
//    information.
// 3. Codegen-level implementation which uses target-specific hooks.
//
// This file defines #2, which is the interface that IR-level transformations
// use for querying the codegen.
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_ANALYSIS_TARGETTRANSFORMINFO_H
#define LLVM_ANALYSIS_TARGETTRANSFORMINFO_H

#include "llvm/IR/Intrinsics.h"
#include "llvm/Pass.h"
#include "llvm/Support/DataTypes.h"

namespace llvm {

class APInt;
class GlobalValue;
class Loop;
class Type;
class User;
class Value;

/// TargetTransformInfo - This pass provides access to the codegen
/// interfaces that are needed for IR-level transformations.
class TargetTransformInfo {
protected:
  /// \brief The TTI instance one level down the stack.
  ///
  /// This is used to implement the default behavior of all the methods,
  /// which is to delegate up through the stack of TTIs until one can answer
  /// the query.
  TargetTransformInfo *PrevTTI;

  /// \brief The top of the stack of TTI analyses available.
  ///
  /// This is a convenience pointer maintained as TTI analyses become
  /// available that complements the PrevTTI delegation chain. When one part
  /// of an analysis pass wants to query another part of the analysis pass it
  /// can use this to start back at the top of the stack.
  TargetTransformInfo *TopTTI;

  /// All pass subclasses must, in their initializePass routine, call
  /// pushTTIStack with themselves to update the pointers tracking the
  /// previous TTI instance in the analysis group's stack, and the top of the
  /// analysis group's stack.
  void pushTTIStack(Pass *P);

  /// All pass subclasses must, in their finalizePass routine, call
  /// popTTIStack to update the pointers tracking the previous TTI instance
  /// in the analysis group's stack, and the top of the analysis group's
  /// stack.
  void popTTIStack();

  /// All pass subclasses must call TargetTransformInfo::getAnalysisUsage.
  virtual void getAnalysisUsage(AnalysisUsage &AU) const;

public:
  /// This class is intended to be subclassed by real implementations.
  virtual ~TargetTransformInfo() = 0;

  /// \name Generic Target Information
  /// @{

  /// \brief Underlying constants for 'cost' values in this interface.
  ///
  /// Many APIs in this interface return a cost. This enum defines the
  /// fundamental values that should be used to interpret (and produce) those
  /// costs. The costs are returned as an unsigned rather than a member of
  /// this enumeration because it is expected that the cost of one IR
  /// instruction may have a multiplicative factor to it or otherwise won't
  /// fit directly into the enum. Moreover, it is common to sum or average
  /// costs, which works better with simple integral values. Thus this enum
  /// only provides constants.
  ///
  /// Note that these costs should usually reflect the intersection of
  /// code-size cost and execution cost. A free instruction is typically one
  /// that folds into another instruction. For example, reg-to-reg moves can
  /// often be skipped by renaming the registers in the CPU, but they still
  /// are encoded and thus wouldn't be considered 'free' here.
  enum TargetCostConstants {
    TCC_Free = 0,     ///< Expected to fold away in lowering.
    TCC_Basic = 1,    ///< The cost of a typical 'add' instruction.
    TCC_Expensive = 4 ///< The cost of a 'div' instruction on x86.
  };
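
  // Illustrative sketch (not part of the interface): because costs are plain
  // integers, a hypothetical client holding a `TTI` pointer can scale and
  // sum them freely. `IntTy` is an assumed integer type, and the threshold
  // below is a made-up example value.
  //
  //   unsigned C = TTI->getOperationCost(Instruction::Add, IntTy);
  //   unsigned Total = 2 * C + TargetTransformInfo::TCC_Basic;
  //   bool Cheap = Total <= 4 * TargetTransformInfo::TCC_Basic;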

  /// \brief Estimate the cost of a specific operation when lowered.
  ///
  /// Note that this is designed to work on an arbitrary synthetic opcode,
  /// and thus work for hypothetical queries before an instruction has even
  /// been formed. However, this does *not* work for GEPs, and must not be
  /// called for a GEP instruction. Instead, use the dedicated getGEPCost
  /// interface, as analyzing a GEP's cost requires more information.
  ///
  /// Typically only the result type is required, and the operand type can be
  /// omitted. However, if the opcode is one of the cast instructions, the
  /// operand type is required.
  ///
  /// The returned cost is defined in terms of \c TargetCostConstants, see
  /// its comments for a detailed explanation of the cost values.
  virtual unsigned getOperationCost(unsigned Opcode, Type *Ty,
                                    Type *OpTy = 0) const;
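
  // For example (a sketch; `TTI` and `Ctx` are assumed to be in scope), the
  // cost of a not-yet-formed trunc from i64 to i32 can be queried as:
  //
  //   Type *I32 = Type::getInt32Ty(Ctx);
  //   Type *I64 = Type::getInt64Ty(Ctx);
  //   unsigned Cost = TTI->getOperationCost(Instruction::Trunc, I32, I64);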

  /// \brief Estimate the cost of a GEP operation when lowered.
  ///
  /// The contract for this function is the same as \c getOperationCost
  /// except that it supports an interface that provides extra information
  /// specific to the GEP operation.
  virtual unsigned getGEPCost(const Value *Ptr,
                              ArrayRef<const Value *> Operands) const;

  /// \brief Estimate the cost of a function call when lowered.
  ///
  /// The contract for this is the same as \c getOperationCost except that it
  /// supports an interface that provides extra information specific to call
  /// instructions.
  ///
  /// This is the most basic query for estimating call cost: it only knows
  /// the function type and (potentially) the number of arguments at the call
  /// site. The latter is only interesting for varargs function types.
  virtual unsigned getCallCost(FunctionType *FTy, int NumArgs = -1) const;

  /// \brief Estimate the cost of calling a specific function when lowered.
  ///
  /// This overload adds the ability to reason about the particular function
  /// being called in the event it is a library call with special lowering.
  virtual unsigned getCallCost(const Function *F, int NumArgs = -1) const;

  /// \brief Estimate the cost of calling a specific function when lowered.
  ///
  /// This overload allows specifying a set of candidate argument values.
  virtual unsigned getCallCost(const Function *F,
                               ArrayRef<const Value *> Arguments) const;

  /// \brief Estimate the cost of an intrinsic when lowered.
  ///
  /// Mirrors the \c getCallCost method but uses an intrinsic identifier.
  virtual unsigned getIntrinsicCost(Intrinsic::ID IID, Type *RetTy,
                                    ArrayRef<Type *> ParamTys) const;

  /// \brief Estimate the cost of an intrinsic when lowered.
  ///
  /// Mirrors the \c getCallCost method but uses an intrinsic identifier and
  /// a set of candidate argument values.
  virtual unsigned getIntrinsicCost(Intrinsic::ID IID, Type *RetTy,
                                    ArrayRef<const Value *> Arguments) const;

  /// \brief Estimate the cost of a given IR user when lowered.
  ///
  /// This can estimate the cost of either a ConstantExpr or Instruction when
  /// lowered. It has two primary advantages over the \c getOperationCost and
  /// \c getGEPCost APIs above, and one significant disadvantage: it can only
  /// be used when the IR construct has already been formed.
  ///
  /// The advantages are that it can inspect the SSA use graph to reason more
  /// accurately about the cost. For example, all-constant GEPs can often be
  /// folded into a load or other instruction, but if they are used in some
  /// other context they may not be folded. This routine can distinguish such
  /// cases.
  ///
  /// The returned cost is defined in terms of \c TargetCostConstants, see
  /// its comments for a detailed explanation of the cost values.
  virtual unsigned getUserCost(const User *U) const;
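
  // A typical client loop, sketched (hypothetical; `TTI` and `BB` are
  // assumptions): sum the user costs over a basic block to size a region.
  //
  //   unsigned Size = 0;
  //   for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I)
  //     Size += TTI->getUserCost(&*I);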

  /// \brief Return true if branch divergence exists.
  ///
  /// Branch divergence has a significantly negative impact on GPU
  /// performance when threads in the same wavefront take different paths due
  /// to conditional branches.
  virtual bool hasBranchDivergence() const;

  /// \brief Test whether calls to a function lower to actual program
  /// function calls.
  ///
  /// The idea is to test whether the program is likely to require a 'call'
  /// instruction or equivalent in order to call the given function.
  ///
  /// FIXME: It's not clear that this is a good or useful query API. Clients
  /// should probably move to simpler cost metrics using the above.
  /// Alternatively, we could split the cost interface into distinct
  /// code-size and execution-speed costs. This would allow modelling the
  /// core of this query more accurately, as a call is a single small
  /// instruction but incurs significant execution cost.
  virtual bool isLoweredToCall(const Function *F) const;
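
  // Sketch of hypothetical client code (`TTI`, `F`, `LoopSize`, and
  // `CallPenalty` are all assumptions): charge extra when a callee really
  // lowers to a call.
  //
  //   if (TTI->isLoweredToCall(F))
  //     LoopSize += CallPenalty;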

  /// Parameters that control the generic loop unrolling transformation.
  struct UnrollingPreferences {
    /// The cost threshold for the unrolled loop, compared to
    /// CodeMetrics.NumInsts aggregated over all basic blocks in the loop
    /// body. The unrolling factor is set such that the unrolled loop body
    /// does not exceed this cost. Set this to UINT_MAX to disable the loop
    /// body cost restriction.
    unsigned Threshold;

    /// The cost threshold for the unrolled loop when optimizing for size
    /// (set to UINT_MAX to disable).
    unsigned OptSizeThreshold;

    /// A forced unrolling factor (the number of concatenated bodies of the
    /// original loop in the unrolled loop body). When set to 0, the
    /// unrolling transformation will select an unrolling factor based on the
    /// current cost threshold and other factors.
    unsigned Count;

    /// Allow partial unrolling (unrolling of loops to expand the size of the
    /// loop body, not only to eliminate small constant-trip-count loops).
    bool Partial;

    /// Allow runtime unrolling (unrolling of loops to expand the size of the
    /// loop body even when the number of loop iterations is not known at
    /// compile time).
    bool Runtime;
  };

  /// \brief Get target-customized preferences for the generic loop unrolling
  /// transformation. The caller will initialize \p UP with the current
  /// target-independent defaults.
  virtual void getUnrollingPreferences(Loop *L,
                                       UnrollingPreferences &UP) const;
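
  // A target TTI pass might override this roughly as follows (a sketch; the
  // values are invented for illustration, not any target's real tuning):
  //
  //   virtual void getUnrollingPreferences(Loop *L,
  //                                        UnrollingPreferences &UP) const {
  //     UP.Partial = true;   // this target tolerates partial unrolling
  //     UP.Runtime = true;   // and runtime trip-count unrolling
  //     UP.Threshold = 150;  // invented cost budget
  //   }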

  /// @}

  /// \name Scalar Target Information
  /// @{

  /// \brief Flags indicating the kind of support for population count.
  ///
  /// Compared to the SW implementation, HW support is supposed to
  /// significantly boost the performance when the population is dense, and
  /// it may or may not degrade performance if the population is sparse. HW
  /// support is considered "Fast" if it can outperform, or is on a par with,
  /// the SW implementation when the population is sparse; otherwise, it is
  /// considered "Slow".
  enum PopcntSupportKind {
    PSK_Software,
    PSK_SlowHardware,
    PSK_FastHardware
  };

  /// \brief Return true if the specified immediate is a legal add immediate,
  /// that is, the target has add instructions which can add a register with
  /// the immediate without having to materialize the immediate into a
  /// register.
  virtual bool isLegalAddImmediate(int64_t Imm) const;

  /// \brief Return true if the specified immediate is a legal icmp
  /// immediate, that is, the target has icmp instructions which can compare
  /// a register against the immediate without having to materialize the
  /// immediate into a register.
  virtual bool isLegalICmpImmediate(int64_t Imm) const;

  /// \brief Return true if the addressing mode represented by \p BaseGV,
  /// \p BaseOffset, \p HasBaseReg, and \p Scale is legal for this target,
  /// for a load/store of the specified type.
  /// The type may be VoidTy, in which case only return true if the
  /// addressing mode is legal for a load/store of any legal type.
  /// TODO: Handle pre/postinc as well.
  virtual bool isLegalAddressingMode(Type *Ty, GlobalValue *BaseGV,
                                     int64_t BaseOffset, bool HasBaseReg,
                                     int64_t Scale) const;
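
  // Sketch of an assumed client (e.g. an LSR-style pass) testing whether
  // "BaseReg + 4*Index" is legal for an i32 access (`TTI` and `Int32Ty`
  // assumed):
  //
  //   bool OK = TTI->isLegalAddressingMode(Int32Ty, /*BaseGV=*/0,
  //                                        /*BaseOffset=*/0,
  //                                        /*HasBaseReg=*/true,
  //                                        /*Scale=*/4);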

  /// \brief Return the cost of the scaling factor used in the addressing
  /// mode represented by \p BaseGV, \p BaseOffset, \p HasBaseReg, and
  /// \p Scale for this target, for a load/store of the specified type.
  /// If the addressing mode is supported, the return value must be >= 0.
  /// If the addressing mode is not supported, a negative value is returned.
  /// TODO: Handle pre/postinc as well.
  virtual int getScalingFactorCost(Type *Ty, GlobalValue *BaseGV,
                                   int64_t BaseOffset, bool HasBaseReg,
                                   int64_t Scale) const;

  /// \brief Return true if it's free to truncate a value of type Ty1 to type
  /// Ty2. e.g. On x86 it's free to truncate an i32 value in register EAX to
  /// i16 by referencing its sub-register AX.
  virtual bool isTruncateFree(Type *Ty1, Type *Ty2) const;

  /// \brief Return true if this type is legal.
  virtual bool isTypeLegal(Type *Ty) const;

  /// \brief Returns the target's jmp_buf alignment in bytes.
  virtual unsigned getJumpBufAlignment() const;

  /// \brief Returns the target's jmp_buf size in bytes.
  virtual unsigned getJumpBufSize() const;

  /// \brief Return true if switches should be turned into lookup tables for
  /// the target.
  virtual bool shouldBuildLookupTables() const;

  /// \brief Return hardware support for population count.
  virtual PopcntSupportKind getPopcntSupport(unsigned IntTyWidthInBit) const;
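
  // Sketch of the intended use (hypothetical client code): only form a
  // ctpop intrinsic when the hardware version is a clear win.
  //
  //   if (TTI->getPopcntSupport(32) ==
  //       TargetTransformInfo::PSK_FastHardware)
  //     ; // e.g. rewrite a bit-counting loop into @llvm.ctpop.i32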

  /// \brief Return true if the hardware has a fast square-root instruction.
  virtual bool haveFastSqrt(Type *Ty) const;

  /// \brief Return the expected cost of materializing the given integer
  /// immediate of the specified type.
  virtual unsigned getIntImmCost(const APInt &Imm, Type *Ty) const;

  /// @}

  /// \name Vector Target Information
  /// @{

  /// \brief The various kinds of shuffle patterns for vector queries.
  enum ShuffleKind {
    SK_Broadcast,       ///< Broadcast element 0 to all other elements.
    SK_Reverse,         ///< Reverse the order of the vector.
    SK_InsertSubvector, ///< InsertSubvector. Index indicates start offset.
    SK_ExtractSubvector ///< ExtractSubvector. Index indicates start offset.
  };

  /// \brief Additional information about an operand's possible values.
  enum OperandValueKind {
    OK_AnyValue,            // Operand can have any value.
    OK_UniformValue,        // Operand is uniform (splat of a value).
    OK_UniformConstantValue // Operand is a uniform constant.
  };

  /// \return The number of scalar or vector registers that the target has.
  /// If \p Vector is true, it returns the number of vector registers. If it
  /// is set to false, it returns the number of scalar registers.
  virtual unsigned getNumberOfRegisters(bool Vector) const;

  /// \return The width of the largest scalar or vector register type.
  virtual unsigned getRegisterBitWidth(bool Vector) const;

  /// \return The maximum unroll factor that the vectorizer should try to
  /// perform for this target. This number depends on the level of
  /// parallelism and the number of execution units in the CPU.
  virtual unsigned getMaximumUnrollFactor() const;

  /// \return The expected cost of arithmetic ops, such as mul, xor, fsub,
  /// etc.
  virtual unsigned
  getArithmeticInstrCost(unsigned Opcode, Type *Ty,
                         OperandValueKind Opd1Info = OK_AnyValue,
                         OperandValueKind Opd2Info = OK_AnyValue) const;
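
  // Sketch (hypothetical vectorizer query; `TTI` and `Ctx` assumed): cost of
  // a <4 x i32> multiply whose second operand is a uniform constant splat.
  //
  //   Type *V4I32 = VectorType::get(Type::getInt32Ty(Ctx), 4);
  //   unsigned Cost = TTI->getArithmeticInstrCost(
  //       Instruction::Mul, V4I32, TargetTransformInfo::OK_AnyValue,
  //       TargetTransformInfo::OK_UniformConstantValue);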

  /// \return The cost of a shuffle instruction of kind Kind and of type Tp.
  /// The index and subtype parameters are used by the subvector insertion
  /// and extraction shuffle kinds.
  virtual unsigned getShuffleCost(ShuffleKind Kind, Type *Tp, int Index = 0,
                                  Type *SubTp = 0) const;
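
  // For example (a sketch; `TTI` and `V4F32`, an assumed <4 x float> type,
  // are not part of this interface), the cost of splatting lane 0 across a
  // vector:
  //
  //   unsigned SplatCost =
  //       TTI->getShuffleCost(TargetTransformInfo::SK_Broadcast, V4F32);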

  /// \return The expected cost of cast instructions, such as bitcast, trunc,
  /// zext, etc.
  virtual unsigned getCastInstrCost(unsigned Opcode, Type *Dst,
                                    Type *Src) const;

  /// \return The expected cost of control-flow related instructions, such as
  /// Phi, Ret, Br.
  virtual unsigned getCFInstrCost(unsigned Opcode) const;

  /// \returns The expected cost of compare and select instructions.
  virtual unsigned getCmpSelInstrCost(unsigned Opcode, Type *ValTy,
                                      Type *CondTy = 0) const;

  /// \return The expected cost of vector Insert and Extract.
  /// Use -1 to indicate that there is no information on the index value.
  virtual unsigned getVectorInstrCost(unsigned Opcode, Type *Val,
                                      unsigned Index = -1) const;

  /// \return The cost of Load and Store instructions.
  virtual unsigned getMemoryOpCost(unsigned Opcode, Type *Src,
                                   unsigned Alignment,
                                   unsigned AddressSpace) const;
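
  // Sketch (`TTI` and `V4F32` assumed as above): cost of a 16-byte-aligned
  // <4 x float> load from address space 0.
  //
  //   unsigned LoadCost = TTI->getMemoryOpCost(Instruction::Load, V4F32,
  //                                            /*Alignment=*/16,
  //                                            /*AddressSpace=*/0);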

  /// \brief Calculate the cost of performing a vector reduction.
  ///
  /// This is the cost of reducing the vector value of type \p Ty to a scalar
  /// value using the operation denoted by \p Opcode. The form of the
  /// reduction can either be a pairwise reduction or a reduction that splits
  /// the vector at every reduction level.
  ///
  /// Pairwise:
  ///  (v0, v1, v2, v3)
  ///  ((v0+v1), (v2+v3), undef, undef)
  /// Split:
  ///  (v0, v1, v2, v3)
  ///  ((v0+v2), (v1+v3), undef, undef)
  virtual unsigned getReductionCost(unsigned Opcode, Type *Ty,
                                    bool IsPairwiseForm) const;
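
  // Sketch (`TTI` and `V4F32` assumed): compare the two reduction forms for
  // an fadd over <4 x float> and keep the cheaper one.
  //
  //   unsigned Pairwise =
  //       TTI->getReductionCost(Instruction::FAdd, V4F32, true);
  //   unsigned Split =
  //       TTI->getReductionCost(Instruction::FAdd, V4F32, false);
  //   bool UsePairwise = Pairwise <= Split;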

  /// \returns The cost of Intrinsic instructions.
  virtual unsigned getIntrinsicInstrCost(Intrinsic::ID ID, Type *RetTy,
                                         ArrayRef<Type *> Tys) const;

  /// \returns The number of pieces into which the provided type must be
  /// split during legalization. Zero is returned when the answer is unknown.
  virtual unsigned getNumberOfParts(Type *Tp) const;

  /// \returns The cost of the address computation. For most targets this can
  /// be merged into the instruction indexing mode. Some targets might want
  /// to distinguish between address computation for memory operations on
  /// vector types and scalar types. Such targets should override this
  /// function.
  /// The 'IsComplex' parameter is a hint that the address computation is
  /// likely to involve multiple instructions and as such unlikely to be
  /// merged into the address indexing mode.
  virtual unsigned getAddressComputationCost(Type *Ty,
                                             bool IsComplex = false) const;

  /// @}

  /// Analysis group identification.
  static char ID;
};

/// \brief Create the base case instance of a pass in the TTI analysis group.
///
/// This pass provides the base case for the stack of TTI analyses. It
/// doesn't delegate to anything and provides conservative answers to all
/// queries.
ImmutablePass *createNoTargetTransformInfoPass();
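
// Sketch of hypothetical driver code: a tool without a TargetMachine can add
// this pass so a conservative TTI implementation is always available (the
// surrounding setup is assumed):
//
//   PassManager PM;
//   PM.add(createNoTargetTransformInfoPass());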

} // End llvm namespace

#endif