Switch TargetTransformInfo from an immutable analysis pass that requires
a TargetMachine to construct (and thus isn't always available), to an
analysis group that supports layered implementations much like
AliasAnalysis does. This is a pretty massive change, with a few parts
that I was unable to easily separate (sorry), so I'll walk through it.
The first step of this conversion was to make TargetTransformInfo an
analysis group, and to sink the nonce implementations in
ScalarTargetTransformInfo and VectorTargetTransformInfo into
a NoTargetTransformInfo pass. This allows other passes to add a hard
requirement on TTI, and assume they will always get at least one
implementation.
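To illustrate, a client pass can now just declare the requirement and use
whatever implementation happens to be live. A minimal sketch (the client
pass itself is hypothetical):

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/Pass.h"
  using namespace llvm;

  namespace {
  // Hypothetical client pass, shown only to illustrate the hard requirement.
  struct TTIClient : FunctionPass {
    static char ID;
    TTIClient() : FunctionPass(ID) {}

    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
      AU.addRequired<TargetTransformInfo>(); // satisfied by NoTTI at worst
      AU.setPreservesAll();
    }

    virtual bool runOnFunction(Function &F) {
      // Never null: some member of the analysis group is always registered.
      const TargetTransformInfo &TTI = getAnalysis<TargetTransformInfo>();
      (void)TTI.getNumberOfRegisters(/*Vector=*/false);
      return false;
    }
  };
  }
  char TTIClient::ID = 0;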
The TargetTransformInfo analysis group leverages the delegation chaining
trick that AliasAnalysis uses, where the base class for the analysis
group delegates to the previous analysis *pass*, allowing all but the
NoFoo analysis passes to only implement the parts of the interfaces they
support. It also introduces a new trick where each pass in the group
retains a pointer to the top-most pass that has been initialized. This
allows passes to implement one API in terms of another API and benefit
when some other pass above them in the stack has more precise results
for the second API.
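In miniature, and simplified from the actual base class, the chaining
machinery looks roughly like this:

  // Called bottom-up from each group member's initializePass(): remember
  // the previously initialized member and advertise this pass as the new
  // top of the stack to every member below it.
  void TargetTransformInfo::pushTTIStack(Pass *P) {
    TopTTI = this;
    PrevTTI = &P->getAnalysis<TargetTransformInfo>();

    // Walk the chain and update each member's notion of the top pass.
    for (TargetTransformInfo *PTTI = PrevTTI; PTTI; PTTI = PTTI->PrevTTI)
      PTTI->TopTTI = this;
  }

  // One API can then be implemented in terms of another by dispatching
  // through TopTTI rather than this, so a more precise override anywhere
  // in the stack is picked up. (getCompositeCost is hypothetical.)
  unsigned TargetTransformInfo::getCompositeCost(Type *Ty) const {
    return TopTTI->getArithmeticInstrCost(Instruction::Add, Ty);
  }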
The second step of this conversion was to create a pass that implements
the TargetTransformInfo analysis using the target-independent
abstractions in the code generator. This replaces the
ScalarTargetTransformImpl and VectorTargetTransformImpl classes in
lib/Target with a single pass in lib/CodeGen called
BasicTargetTransformInfo. This class actually provides most of the TTI
functionality, basing it upon the TargetLowering abstraction and other
information in the target independent code generator.
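Roughly, the shape of that pass is (a sketch, not the verbatim code):

  namespace {
  // BasicTTI answers TTI queries from target-independent codegen knowledge,
  // deferring whatever it can to the TargetLowering hooks.
  class BasicTTI : public ImmutablePass, public TargetTransformInfo {
    const TargetLowering *TLI;

  public:
    explicit BasicTTI(const TargetLowering *TLI)
        : ImmutablePass(ID), TLI(TLI) {}

    // Example scalar query: answered directly by the lowering information.
    virtual bool isLegalICmpImmediate(int64_t Imm) const {
      return TLI->isLegalICmpImmediate(Imm);
    }

    static char ID;
  };
  }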
The third step of the conversion adds support to all TargetMachines to
register custom analysis passes. This allows building those passes with
access to TargetLowering or other target-specific classes, and it also
allows each target to customize the set of analysis passes desired in
the pass manager. The baseline LLVMTargetMachine implements this
interface to add the BasicTTI pass to the pass manager, and all of the
tools that want to support target-aware TTI passes call this routine on
whatever target machine they end up with to add the appropriate passes.
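From a tool's side this reduces to a single call; sketched below with setup
and error handling elided (GetTargetMachine is a stand-in for however the
tool selects its target):

  PassManager PM;
  TargetMachine *TM = GetTargetMachine();

  // Let the target register BasicTTI plus any target-specific TTI pass.
  // Even without a target machine, the group's NoTTI default satisfies
  // passes that addRequired<TargetTransformInfo>().
  if (TM)
    TM->addAnalysisPasses(PM);

  PM.run(*M);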
The fourth step of the conversion created target-specific TTI analysis
passes for the X86 and ARM backends. These passes contain the custom
logic that was previously in their extensions of the
ScalarTargetTransformInfo and VectorTargetTransformInfo interfaces.
I separated them into their own files, as now all of the interface bits
are private and they just expose a function to create the pass itself.
Then I extended these target machines to set up a custom set of analysis
passes, first adding BasicTTI as a fallback, and then adding their
customized TTI implementations.
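The registration order is what creates the layering: the pass added last
becomes the head of the analysis group and delegates to those added before
it. The ARM hook looks roughly like this (the exact factory signatures are
assumed):

  void ARMBaseTargetMachine::addAnalysisPasses(PassManagerBase &PM) {
    // Add first the target-independent BasicTTI pass, then our ARM pass;
    // the ARM pass delegates anything it does not override.
    PM.add(createBasicTargetTransformInfoPass(getTargetLowering()));
    PM.add(createARMTargetTransformInfoPass(this));
  }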
This fourth step also required logic that was shared between the
target-independent layer and the specific targets to move to a different
interface, as they no longer derive from each other. As a consequence,
helper functions were added to TargetLowering representing the common
logic needed both in the target implementation and the codegen
implementation of the TTI pass. While technically this is the only
change that could have been committed separately, it would have been
a nightmare to extract.
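The helper leaned on most heavily by the cost code below is
getTypeLegalizationCost, which pairs the number of legalization steps a type
needs with its legalized machine type, letting every TTI implementation
scale per-operation costs the same way (exampleCost is a hypothetical
wrapper around the pattern used throughout this file):

  unsigned exampleCost(const TargetLowering *TLI, Type *Ty,
                       unsigned CostPerLegalOp) {
    // LT.first: number of legal-typed operations the type splits into;
    // LT.second: the legalized machine type.
    std::pair<unsigned, MVT> LT = TLI->getTypeLegalizationCost(Ty);
    return LT.first * CostPerLegalOp;
  }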
The final step of the conversion was just to delete all the old
boilerplate. This got rid of the ScalarTargetTransformInfo and
VectorTargetTransformInfo classes, all of the support in all of the
targets for producing instances of them, and all of the support in the
tools for manually constructing a pass based around them.
Now that TTI is a relatively normal analysis group, two things become
straightforward. First, we can sink it into lib/Analysis, which is a more
natural layer for it to live in. Second, clients of this interface can
depend on it *always* being available, which will simplify their code and
behavior. These (and other) simplifications will follow in subsequent
commits; this one is clearly big enough.
Finally, I'm very aware that many of the comments and much of the
documentation need to be updated. As soon as I had this working, and
plausibly well commented, I wanted to get it committed and in front of the
build bots.
I'll be doing a few passes over documentation later if it sticks.
Commits to update DragonEgg and Clang will be made presently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171681 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-07 01:37:14 +00:00

//===-- ARMTargetTransformInfo.cpp - ARM specific TTI pass ----------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
/// \file
/// This file implements a TargetTransformInfo analysis pass specific to the
/// ARM target machine. It uses the target's detailed information to provide
/// more precise answers to certain TTI queries, while letting the target
/// independent and default TTI implementations handle the rest.
///
//===----------------------------------------------------------------------===//

#define DEBUG_TYPE "armtti"
#include "ARM.h"
#include "ARMTargetMachine.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/Support/Debug.h"
#include "llvm/Target/TargetLowering.h"
#include "llvm/Target/CostTable.h"

using namespace llvm;

// Declare the pass initialization routine locally as target-specific passes
// don't have a target-wide initialization entry point, and so we rely on the
// pass constructor initialization.
namespace llvm {
void initializeARMTTIPass(PassRegistry &);
}

namespace {

class ARMTTI : public ImmutablePass, public TargetTransformInfo {
  const ARMBaseTargetMachine *TM;
  const ARMSubtarget *ST;
  const ARMTargetLowering *TLI;

  /// Estimate the overhead of scalarizing an instruction. Insert and Extract
  /// are set if the result needs to be inserted and/or extracted from vectors.
  unsigned getScalarizationOverhead(Type *Ty, bool Insert, bool Extract) const;

public:
  ARMTTI() : ImmutablePass(ID), TM(0), ST(0), TLI(0) {
    llvm_unreachable("This pass cannot be directly constructed");
  }

  ARMTTI(const ARMBaseTargetMachine *TM)
      : ImmutablePass(ID), TM(TM), ST(TM->getSubtargetImpl()),
        TLI(TM->getTargetLowering()) {
    initializeARMTTIPass(*PassRegistry::getPassRegistry());
  }

  virtual void initializePass() {
    pushTTIStack(this);
  }

  virtual void finalizePass() {
    popTTIStack();
  }

  virtual void getAnalysisUsage(AnalysisUsage &AU) const {
    TargetTransformInfo::getAnalysisUsage(AU);
  }

  /// Pass identification.
  static char ID;

  /// Provide necessary pointer adjustments for the two base classes.
  virtual void *getAdjustedAnalysisPointer(const void *ID) {
    if (ID == &TargetTransformInfo::ID)
      return (TargetTransformInfo*)this;
    return this;
  }

  /// \name Scalar TTI Implementations
  /// @{

  virtual unsigned getIntImmCost(const APInt &Imm, Type *Ty) const;

  /// @}

  /// \name Vector TTI Implementations
  /// @{

  unsigned getNumberOfRegisters(bool Vector) const {
    if (Vector) {
      if (ST->hasNEON())
        return 16;
      return 0;
    }

    if (ST->isThumb1Only())
      return 8;
    return 16;
  }

  unsigned getRegisterBitWidth(bool Vector) const {
    if (Vector) {
      if (ST->hasNEON())
        return 128;
      return 0;
    }

    return 32;
  }

  unsigned getMaximumUnrollFactor() const {
    // These are out-of-order CPUs:
    if (ST->isCortexA15() || ST->isSwift())
      return 2;
    return 1;
  }

  unsigned getShuffleCost(ShuffleKind Kind, Type *Tp,
                          int Index, Type *SubTp) const;

  unsigned getCastInstrCost(unsigned Opcode, Type *Dst,
                            Type *Src) const;

  unsigned getCmpSelInstrCost(unsigned Opcode, Type *ValTy, Type *CondTy) const;

  unsigned getVectorInstrCost(unsigned Opcode, Type *Val, unsigned Index) const;

  unsigned getAddressComputationCost(Type *Val, bool IsComplex) const;

  unsigned getArithmeticInstrCost(unsigned Opcode, Type *Ty,
                                  OperandValueKind Op1Info = OK_AnyValue,
                                  OperandValueKind Op2Info = OK_AnyValue) const;

  unsigned getMemoryOpCost(unsigned Opcode, Type *Src, unsigned Alignment,
                           unsigned AddressSpace) const;

  /// @}
};

} // end anonymous namespace

INITIALIZE_AG_PASS(ARMTTI, TargetTransformInfo, "armtti",
                   "ARM Target Transform Info", true, true, false)
char ARMTTI::ID = 0;

ImmutablePass *
llvm::createARMTargetTransformInfoPass(const ARMBaseTargetMachine *TM) {
  return new ARMTTI(TM);
}

unsigned ARMTTI::getIntImmCost(const APInt &Imm, Type *Ty) const {
  assert(Ty->isIntegerTy());

  unsigned Bits = Ty->getPrimitiveSizeInBits();
  if (Bits == 0 || Bits > 32)
    return 4;

  int32_t SImmVal = Imm.getSExtValue();
  uint32_t ZImmVal = Imm.getZExtValue();
  if (!ST->isThumb()) {
    if ((SImmVal >= 0 && SImmVal < 65536) ||
        (ARM_AM::getSOImmVal(ZImmVal) != -1) ||
        (ARM_AM::getSOImmVal(~ZImmVal) != -1))
      return 1;
    return ST->hasV6T2Ops() ? 2 : 3;
  } else if (ST->isThumb2()) {
    if ((SImmVal >= 0 && SImmVal < 65536) ||
        (ARM_AM::getT2SOImmVal(ZImmVal) != -1) ||
        (ARM_AM::getT2SOImmVal(~ZImmVal) != -1))
      return 1;
    return ST->hasV6T2Ops() ? 2 : 3;
  } else /*Thumb1*/ {
    if (SImmVal >= 0 && SImmVal < 256)
      return 1;
    if ((~ZImmVal < 256) || ARM_AM::isThumbImmShiftedVal(ZImmVal))
      return 2;
    // Load from constantpool.
    return 3;
  }
  return 2;
}

unsigned ARMTTI::getCastInstrCost(unsigned Opcode, Type *Dst,
                                  Type *Src) const {
  int ISD = TLI->InstructionOpcodeToISD(Opcode);
  assert(ISD && "Invalid opcode");

  // Single to/from double precision conversions.
  static const CostTblEntry<MVT::SimpleValueType> NEONFltDblTbl[] = {
    // Vector fptrunc/fpext conversions.
    { ISD::FP_ROUND,  MVT::v2f64, 2 },
    { ISD::FP_EXTEND, MVT::v2f32, 2 },
    { ISD::FP_EXTEND, MVT::v4f32, 4 }
  };

  if (Src->isVectorTy() && ST->hasNEON() && (ISD == ISD::FP_ROUND ||
                                             ISD == ISD::FP_EXTEND)) {
    std::pair<unsigned, MVT> LT = TLI->getTypeLegalizationCost(Src);
    int Idx = CostTableLookup(NEONFltDblTbl, ISD, LT.second);
    if (Idx != -1)
      return LT.first * NEONFltDblTbl[Idx].Cost;
  }

  EVT SrcTy = TLI->getValueType(Src);
  EVT DstTy = TLI->getValueType(Dst);

  if (!SrcTy.isSimple() || !DstTy.isSimple())
    return TargetTransformInfo::getCastInstrCost(Opcode, Dst, Src);

  // Some arithmetic, load and store operations have specific instructions
  // to cast up/down their types automatically at no extra cost.
  // TODO: Get these tables to know at least what the related operations are.
  static const TypeConversionCostTblEntry<MVT::SimpleValueType>
  NEONVectorConversionTbl[] = {
    { ISD::SIGN_EXTEND, MVT::v4i32, MVT::v4i16, 0 },
    { ISD::ZERO_EXTEND, MVT::v4i32, MVT::v4i16, 0 },
    { ISD::SIGN_EXTEND, MVT::v2i64, MVT::v2i32, 1 },
    { ISD::ZERO_EXTEND, MVT::v2i64, MVT::v2i32, 1 },
    { ISD::TRUNCATE,    MVT::v4i32, MVT::v4i64, 0 },
    { ISD::TRUNCATE,    MVT::v4i16, MVT::v4i32, 1 },

    // The number of vmovl instructions for the extension.
    { ISD::SIGN_EXTEND, MVT::v4i64, MVT::v4i16, 3 },
    { ISD::ZERO_EXTEND, MVT::v4i64, MVT::v4i16, 3 },
    { ISD::SIGN_EXTEND, MVT::v8i32, MVT::v8i8, 3 },
    { ISD::ZERO_EXTEND, MVT::v8i32, MVT::v8i8, 3 },
    { ISD::SIGN_EXTEND, MVT::v8i64, MVT::v8i8, 7 },
    { ISD::ZERO_EXTEND, MVT::v8i64, MVT::v8i8, 7 },
    { ISD::SIGN_EXTEND, MVT::v8i64, MVT::v8i16, 6 },
    { ISD::ZERO_EXTEND, MVT::v8i64, MVT::v8i16, 6 },
    { ISD::SIGN_EXTEND, MVT::v16i32, MVT::v16i8, 6 },
    { ISD::ZERO_EXTEND, MVT::v16i32, MVT::v16i8, 6 },

    // Operations that we legalize using splitting. Vector truncates are
    // legalized by parts (split the input, narrow each half with vmovn,
    // and concatenate), so a v8i32 -> v8i8 truncate is three instructions.
    { ISD::TRUNCATE,    MVT::v16i8, MVT::v16i32, 6 },
    { ISD::TRUNCATE,    MVT::v8i8, MVT::v8i32, 3 },

    // Vector float <-> i32 conversions.
    { ISD::SINT_TO_FP,  MVT::v4f32, MVT::v4i32, 1 },
    { ISD::UINT_TO_FP,  MVT::v4f32, MVT::v4i32, 1 },

    { ISD::SINT_TO_FP,  MVT::v2f32, MVT::v2i8, 3 },
    { ISD::UINT_TO_FP,  MVT::v2f32, MVT::v2i8, 3 },
    { ISD::SINT_TO_FP,  MVT::v2f32, MVT::v2i16, 2 },
    { ISD::UINT_TO_FP,  MVT::v2f32, MVT::v2i16, 2 },
    { ISD::SINT_TO_FP,  MVT::v2f32, MVT::v2i32, 1 },
    { ISD::UINT_TO_FP,  MVT::v2f32, MVT::v2i32, 1 },
    { ISD::SINT_TO_FP,  MVT::v4f32, MVT::v4i1, 3 },
    { ISD::UINT_TO_FP,  MVT::v4f32, MVT::v4i1, 3 },
    { ISD::SINT_TO_FP,  MVT::v4f32, MVT::v4i8, 3 },
    { ISD::UINT_TO_FP,  MVT::v4f32, MVT::v4i8, 3 },
    { ISD::SINT_TO_FP,  MVT::v4f32, MVT::v4i16, 2 },
    { ISD::UINT_TO_FP,  MVT::v4f32, MVT::v4i16, 2 },
    { ISD::SINT_TO_FP,  MVT::v8f32, MVT::v8i16, 4 },
    { ISD::UINT_TO_FP,  MVT::v8f32, MVT::v8i16, 4 },
    { ISD::SINT_TO_FP,  MVT::v8f32, MVT::v8i32, 2 },
    { ISD::UINT_TO_FP,  MVT::v8f32, MVT::v8i32, 2 },
    { ISD::SINT_TO_FP,  MVT::v16f32, MVT::v16i16, 8 },
    { ISD::UINT_TO_FP,  MVT::v16f32, MVT::v16i16, 8 },
    { ISD::SINT_TO_FP,  MVT::v16f32, MVT::v16i32, 4 },
    { ISD::UINT_TO_FP,  MVT::v16f32, MVT::v16i32, 4 },

    { ISD::FP_TO_SINT,  MVT::v4i32, MVT::v4f32, 1 },
    { ISD::FP_TO_UINT,  MVT::v4i32, MVT::v4f32, 1 },
    { ISD::FP_TO_SINT,  MVT::v4i8, MVT::v4f32, 3 },
    { ISD::FP_TO_UINT,  MVT::v4i8, MVT::v4f32, 3 },
    { ISD::FP_TO_SINT,  MVT::v4i16, MVT::v4f32, 2 },
    { ISD::FP_TO_UINT,  MVT::v4i16, MVT::v4f32, 2 },

    // Vector double <-> i32 conversions.
    { ISD::SINT_TO_FP,  MVT::v2f64, MVT::v2i32, 2 },
    { ISD::UINT_TO_FP,  MVT::v2f64, MVT::v2i32, 2 },

    { ISD::SINT_TO_FP,  MVT::v2f64, MVT::v2i8, 4 },
    { ISD::UINT_TO_FP,  MVT::v2f64, MVT::v2i8, 4 },
    { ISD::SINT_TO_FP,  MVT::v2f64, MVT::v2i16, 3 },
    { ISD::UINT_TO_FP,  MVT::v2f64, MVT::v2i16, 3 },
    { ISD::SINT_TO_FP,  MVT::v2f64, MVT::v2i32, 2 },
    { ISD::UINT_TO_FP,  MVT::v2f64, MVT::v2i32, 2 },

    { ISD::FP_TO_SINT,  MVT::v2i32, MVT::v2f64, 2 },
    { ISD::FP_TO_UINT,  MVT::v2i32, MVT::v2f64, 2 },
    { ISD::FP_TO_SINT,  MVT::v8i16, MVT::v8f32, 4 },
    { ISD::FP_TO_UINT,  MVT::v8i16, MVT::v8f32, 4 },
    { ISD::FP_TO_SINT,  MVT::v16i16, MVT::v16f32, 8 },
    { ISD::FP_TO_UINT,  MVT::v16i16, MVT::v16f32, 8 }
  };

  if (SrcTy.isVector() && ST->hasNEON()) {
    int Idx = ConvertCostTableLookup(NEONVectorConversionTbl, ISD,
                                     DstTy.getSimpleVT(), SrcTy.getSimpleVT());
    if (Idx != -1)
      return NEONVectorConversionTbl[Idx].Cost;
  }

  // Scalar float to integer conversions.
  static const TypeConversionCostTblEntry<MVT::SimpleValueType>
  NEONFloatConversionTbl[] = {
    { ISD::FP_TO_SINT, MVT::i1, MVT::f32, 2 },
    { ISD::FP_TO_UINT, MVT::i1, MVT::f32, 2 },
    { ISD::FP_TO_SINT, MVT::i1, MVT::f64, 2 },
    { ISD::FP_TO_UINT, MVT::i1, MVT::f64, 2 },
    { ISD::FP_TO_SINT, MVT::i8, MVT::f32, 2 },
    { ISD::FP_TO_UINT, MVT::i8, MVT::f32, 2 },
    { ISD::FP_TO_SINT, MVT::i8, MVT::f64, 2 },
    { ISD::FP_TO_UINT, MVT::i8, MVT::f64, 2 },
    { ISD::FP_TO_SINT, MVT::i16, MVT::f32, 2 },
    { ISD::FP_TO_UINT, MVT::i16, MVT::f32, 2 },
    { ISD::FP_TO_SINT, MVT::i16, MVT::f64, 2 },
    { ISD::FP_TO_UINT, MVT::i16, MVT::f64, 2 },
    { ISD::FP_TO_SINT, MVT::i32, MVT::f32, 2 },
    { ISD::FP_TO_UINT, MVT::i32, MVT::f32, 2 },
    { ISD::FP_TO_SINT, MVT::i32, MVT::f64, 2 },
    { ISD::FP_TO_UINT, MVT::i32, MVT::f64, 2 },
    { ISD::FP_TO_SINT, MVT::i64, MVT::f32, 10 },
    { ISD::FP_TO_UINT, MVT::i64, MVT::f32, 10 },
    { ISD::FP_TO_SINT, MVT::i64, MVT::f64, 10 },
    { ISD::FP_TO_UINT, MVT::i64, MVT::f64, 10 }
  };
  if (SrcTy.isFloatingPoint() && ST->hasNEON()) {
    int Idx = ConvertCostTableLookup(NEONFloatConversionTbl, ISD,
                                     DstTy.getSimpleVT(), SrcTy.getSimpleVT());
    if (Idx != -1)
      return NEONFloatConversionTbl[Idx].Cost;
  }

  // Scalar integer to float conversions.
  static const TypeConversionCostTblEntry<MVT::SimpleValueType>
  NEONIntegerConversionTbl[] = {
    { ISD::SINT_TO_FP, MVT::f32, MVT::i1, 2 },
    { ISD::UINT_TO_FP, MVT::f32, MVT::i1, 2 },
    { ISD::SINT_TO_FP, MVT::f64, MVT::i1, 2 },
    { ISD::UINT_TO_FP, MVT::f64, MVT::i1, 2 },
    { ISD::SINT_TO_FP, MVT::f32, MVT::i8, 2 },
    { ISD::UINT_TO_FP, MVT::f32, MVT::i8, 2 },
    { ISD::SINT_TO_FP, MVT::f64, MVT::i8, 2 },
    { ISD::UINT_TO_FP, MVT::f64, MVT::i8, 2 },
    { ISD::SINT_TO_FP, MVT::f32, MVT::i16, 2 },
    { ISD::UINT_TO_FP, MVT::f32, MVT::i16, 2 },
    { ISD::SINT_TO_FP, MVT::f64, MVT::i16, 2 },
    { ISD::UINT_TO_FP, MVT::f64, MVT::i16, 2 },
    { ISD::SINT_TO_FP, MVT::f32, MVT::i32, 2 },
    { ISD::UINT_TO_FP, MVT::f32, MVT::i32, 2 },
    { ISD::SINT_TO_FP, MVT::f64, MVT::i32, 2 },
    { ISD::UINT_TO_FP, MVT::f64, MVT::i32, 2 },
    { ISD::SINT_TO_FP, MVT::f32, MVT::i64, 10 },
    { ISD::UINT_TO_FP, MVT::f32, MVT::i64, 10 },
    { ISD::SINT_TO_FP, MVT::f64, MVT::i64, 10 },
    { ISD::UINT_TO_FP, MVT::f64, MVT::i64, 10 }
  };

  if (SrcTy.isInteger() && ST->hasNEON()) {
    int Idx = ConvertCostTableLookup(NEONIntegerConversionTbl, ISD,
                                     DstTy.getSimpleVT(), SrcTy.getSimpleVT());
    if (Idx != -1)
      return NEONIntegerConversionTbl[Idx].Cost;
  }

  // Scalar integer conversion costs.
  static const TypeConversionCostTblEntry<MVT::SimpleValueType>
  ARMIntegerConversionTbl[] = {
    // i16 -> i64 requires two dependent operations.
    { ISD::SIGN_EXTEND, MVT::i64, MVT::i16, 2 },

    // Truncates on i64 are assumed to be free.
    { ISD::TRUNCATE, MVT::i32, MVT::i64, 0 },
    { ISD::TRUNCATE, MVT::i16, MVT::i64, 0 },
    { ISD::TRUNCATE, MVT::i8,  MVT::i64, 0 },
    { ISD::TRUNCATE, MVT::i1,  MVT::i64, 0 }
  };

  if (SrcTy.isInteger()) {
    int Idx = ConvertCostTableLookup(ARMIntegerConversionTbl, ISD,
                                     DstTy.getSimpleVT(), SrcTy.getSimpleVT());
    if (Idx != -1)
      return ARMIntegerConversionTbl[Idx].Cost;
  }

  return TargetTransformInfo::getCastInstrCost(Opcode, Dst, Src);
}

unsigned ARMTTI::getVectorInstrCost(unsigned Opcode, Type *ValTy,
                                    unsigned Index) const {
  // Penalize inserting into a D-subregister. We end up with a three times
  // lower estimated throughput on Swift.
  if (ST->isSwift() &&
      Opcode == Instruction::InsertElement &&
      ValTy->isVectorTy() &&
      ValTy->getScalarSizeInBits() <= 32)
    return 3;

  return TargetTransformInfo::getVectorInstrCost(Opcode, ValTy, Index);
}

unsigned ARMTTI::getCmpSelInstrCost(unsigned Opcode, Type *ValTy,
                                    Type *CondTy) const {

  int ISD = TLI->InstructionOpcodeToISD(Opcode);
  // On NEON a vector select gets lowered to vbsl.
  if (ST->hasNEON() && ValTy->isVectorTy() && ISD == ISD::SELECT) {
    // Lowering of some vector selects is currently far from perfect.
    static const TypeConversionCostTblEntry<MVT::SimpleValueType>
    NEONVectorSelectTbl[] = {
      { ISD::SELECT, MVT::v16i1, MVT::v16i16, 2*16 + 1 + 3*1 + 4*1 },
      { ISD::SELECT, MVT::v8i1, MVT::v8i32, 4*8 + 1*3 + 1*4 + 1*2 },
      { ISD::SELECT, MVT::v16i1, MVT::v16i32, 4*16 + 1*6 + 1*8 + 1*4 },
      { ISD::SELECT, MVT::v4i1, MVT::v4i64, 4*4 + 1*2 + 1 },
      { ISD::SELECT, MVT::v8i1, MVT::v8i64, 50 },
      { ISD::SELECT, MVT::v16i1, MVT::v16i64, 100 }
    };

    EVT SelCondTy = TLI->getValueType(CondTy);
    EVT SelValTy = TLI->getValueType(ValTy);
    if (SelCondTy.isSimple() && SelValTy.isSimple()) {
      int Idx = ConvertCostTableLookup(NEONVectorSelectTbl, ISD,
                                       SelCondTy.getSimpleVT(),
                                       SelValTy.getSimpleVT());
      if (Idx != -1)
        return NEONVectorSelectTbl[Idx].Cost;
    }

    std::pair<unsigned, MVT> LT = TLI->getTypeLegalizationCost(ValTy);
    return LT.first;
  }

  return TargetTransformInfo::getCmpSelInstrCost(Opcode, ValTy, CondTy);
}

unsigned ARMTTI::getAddressComputationCost(Type *Ty, bool IsComplex) const {
  // Address computations in vectorized code with non-consecutive addresses
  // will likely result in more instructions compared to scalar code where
  // the computation can more often be merged into the index mode. The
  // resulting extra micro-ops can significantly decrease throughput.
  unsigned NumVectorInstToHideOverhead = 10;

  if (Ty->isVectorTy() && IsComplex)
    return NumVectorInstToHideOverhead;

  // In many cases the address computation is not merged into the instruction
  // addressing mode.
  return 1;
}

unsigned ARMTTI::getShuffleCost(ShuffleKind Kind, Type *Tp, int Index,
                                Type *SubTp) const {
  // We only handle costs of reverse shuffles for now.
  if (Kind != SK_Reverse)
    return TargetTransformInfo::getShuffleCost(Kind, Tp, Index, SubTp);

  static const CostTblEntry<MVT::SimpleValueType> NEONShuffleTbl[] = {
    // A reverse shuffle costs one instruction if we are shuffling within a
    // double word (vrev) or two if we shuffle a quad word (vrev, vext).
    { ISD::VECTOR_SHUFFLE, MVT::v2i32, 1 },
    { ISD::VECTOR_SHUFFLE, MVT::v2f32, 1 },
    { ISD::VECTOR_SHUFFLE, MVT::v2i64, 1 },
    { ISD::VECTOR_SHUFFLE, MVT::v2f64, 1 },

    { ISD::VECTOR_SHUFFLE, MVT::v4i32, 2 },
    { ISD::VECTOR_SHUFFLE, MVT::v4f32, 2 },
    { ISD::VECTOR_SHUFFLE, MVT::v8i16, 2 },
    { ISD::VECTOR_SHUFFLE, MVT::v16i8, 2 }
  };

  std::pair<unsigned, MVT> LT = TLI->getTypeLegalizationCost(Tp);

  int Idx = CostTableLookup(NEONShuffleTbl, ISD::VECTOR_SHUFFLE, LT.second);
  if (Idx == -1)
    return TargetTransformInfo::getShuffleCost(Kind, Tp, Index, SubTp);

  return LT.first * NEONShuffleTbl[Idx].Cost;
}

unsigned ARMTTI::getArithmeticInstrCost(unsigned Opcode, Type *Ty,
                                        OperandValueKind Op1Info,
                                        OperandValueKind Op2Info) const {

  int ISDOpcode = TLI->InstructionOpcodeToISD(Opcode);
  std::pair<unsigned, MVT> LT = TLI->getTypeLegalizationCost(Ty);

  const unsigned FunctionCallDivCost = 20;
  const unsigned ReciprocalDivCost = 10;
  static const CostTblEntry<MVT::SimpleValueType> CostTbl[] = {
    // Division.
    // These costs are somewhat random. Choose a cost of 20 to indicate that
    // vectorizing division (added function call) is going to be very expensive.

    // Double register types.
    { ISD::SDIV, MVT::v1i64, 1 * FunctionCallDivCost},
    { ISD::UDIV, MVT::v1i64, 1 * FunctionCallDivCost},
    { ISD::SREM, MVT::v1i64, 1 * FunctionCallDivCost},
    { ISD::UREM, MVT::v1i64, 1 * FunctionCallDivCost},
    { ISD::SDIV, MVT::v2i32, 2 * FunctionCallDivCost},
    { ISD::UDIV, MVT::v2i32, 2 * FunctionCallDivCost},
    { ISD::SREM, MVT::v2i32, 2 * FunctionCallDivCost},
    { ISD::UREM, MVT::v2i32, 2 * FunctionCallDivCost},
    { ISD::SDIV, MVT::v4i16,     ReciprocalDivCost},
    { ISD::UDIV, MVT::v4i16,     ReciprocalDivCost},
    { ISD::SREM, MVT::v4i16, 4 * FunctionCallDivCost},
    { ISD::UREM, MVT::v4i16, 4 * FunctionCallDivCost},
    { ISD::SDIV, MVT::v8i8,      ReciprocalDivCost},
    { ISD::UDIV, MVT::v8i8,      ReciprocalDivCost},
    { ISD::SREM, MVT::v8i8,  8 * FunctionCallDivCost},
    { ISD::UREM, MVT::v8i8,  8 * FunctionCallDivCost},
    // Quad register types.
    { ISD::SDIV, MVT::v2i64, 2 * FunctionCallDivCost},
    { ISD::UDIV, MVT::v2i64, 2 * FunctionCallDivCost},
    { ISD::SREM, MVT::v2i64, 2 * FunctionCallDivCost},
    { ISD::UREM, MVT::v2i64, 2 * FunctionCallDivCost},
    { ISD::SDIV, MVT::v4i32, 4 * FunctionCallDivCost},
    { ISD::UDIV, MVT::v4i32, 4 * FunctionCallDivCost},
    { ISD::SREM, MVT::v4i32, 4 * FunctionCallDivCost},
    { ISD::UREM, MVT::v4i32, 4 * FunctionCallDivCost},
    { ISD::SDIV, MVT::v8i16, 8 * FunctionCallDivCost},
    { ISD::UDIV, MVT::v8i16, 8 * FunctionCallDivCost},
    { ISD::SREM, MVT::v8i16, 8 * FunctionCallDivCost},
    { ISD::UREM, MVT::v8i16, 8 * FunctionCallDivCost},
    { ISD::SDIV, MVT::v16i8, 16 * FunctionCallDivCost},
    { ISD::UDIV, MVT::v16i8, 16 * FunctionCallDivCost},
    { ISD::SREM, MVT::v16i8, 16 * FunctionCallDivCost},
    { ISD::UREM, MVT::v16i8, 16 * FunctionCallDivCost},
    // Multiplication.
  };

  int Idx = -1;

  if (ST->hasNEON())
    Idx = CostTableLookup(CostTbl, ISDOpcode, LT.second);

  if (Idx != -1)
    return LT.first * CostTbl[Idx].Cost;

  unsigned Cost =
      TargetTransformInfo::getArithmeticInstrCost(Opcode, Ty, Op1Info, Op2Info);

  // This is somewhat of a hack. The problem that we are facing is that SROA
  // creates a sequence of shift, and, or instructions to construct values.
  // These sequences are recognized by the ISel and have zero-cost. Not so for
  // the vectorized code. Because we have support for v2i64 but not i64 those
  // sequences look particularly beneficial to vectorize.
  // To work around this we increase the cost of v2i64 operations to make them
  // seem less beneficial.
  if (LT.second == MVT::v2i64 &&
      Op2Info == TargetTransformInfo::OK_UniformConstantValue)
    Cost += 4;

  return Cost;
}

unsigned ARMTTI::getMemoryOpCost(unsigned Opcode, Type *Src, unsigned Alignment,
                                 unsigned AddressSpace) const {
  std::pair<unsigned, MVT> LT = TLI->getTypeLegalizationCost(Src);

  if (Src->isVectorTy() && Alignment != 16 &&
      Src->getVectorElementType()->isDoubleTy()) {
    // Unaligned loads/stores are extremely inefficient.
    // We need 4 uops for vst.1/vld.1 vs 1 uop for vldr/vstr.
    return LT.first * 4;
  }
  return LT.first;
}