mirror of
https://github.com/c64scene-ar/llvm-6502.git
synced 2025-11-01 00:17:01 +00:00
Implement support for modeling implicit zero-extension on x86-64
with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG instructions), and teach the DAGCombiner to take advantage of this on targets which support it. This eliminates many redundant zero-extension operations on x86-64.

This adds a new TargetLowering hook, isZExtFree. It's similar to isTruncateFree, except it only applies to actual definitions, and not no-op truncates which may not zero the high bits.

Also, this adds a new optimization to SimplifyDemandedBits: transform operations like x+y into (zext (add (trunc x), (trunc y))) on targets where all the casts are no-ops. In contexts where the high part of the add is explicitly masked off, this allows the mask operation to be eliminated. Fix the DAGCombiner to avoid undoing these transformations to eliminate casts on targets where the casts are no-ops.

Also, this adds a new two-address lowering heuristic. Since two-address lowering runs before coalescing, it helps to be able to look through copies when deciding whether commuting and/or three-address conversion are profitable.

Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle the case that a clobber range extended both before and beyond an existing live range. In that case, multiple live ranges need to be added. This was exposed by the new subreg coalescing code.

Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the spiller behavior it was looking for no longer occurs with the new instruction selection.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68576 91177308-0d34-0410-b5e6-96231b3b80d8
@@ -747,6 +747,13 @@ public:
  /// there are any bits set in the constant that are not demanded. If so,
  /// shrink the constant and return true.
  bool ShrinkDemandedConstant(SDValue Op, const APInt &Demanded);

  /// ShrinkDemandedOp - Convert x+y to (VT)((SmallVT)x+(SmallVT)y) if the
  /// casts are free. This uses isZExtFree and ZERO_EXTEND for the widening
  /// cast, but it could be generalized for targets with other types of
  /// implicit widening casts.
  bool ShrinkDemandedOp(SDValue Op, unsigned BitWidth, const APInt &Demanded,
                        DebugLoc dl);
};

/// SimplifyDemandedBits - Look at Op. At this point, we know that only the
@@ -1386,7 +1393,23 @@ public:
  virtual bool isTruncateFree(MVT VT1, MVT VT2) const {
    return false;
  }

  /// isZExtFree - Return true if any actual instruction that defines a
  /// value of type Ty1 implicit zero-extends the value to Ty2 in the result
  /// register. This does not necessarily include registers defined in
  /// unknown ways, such as incoming arguments, or copies from unknown
  /// virtual registers. Also, if isTruncateFree(Ty2, Ty1) is true, this
  /// does not necessarily apply to truncate instructions. e.g. on x86-64,
  /// all instructions that define 32-bit values implicit zero-extend the
  /// result out to 64 bits.
  virtual bool isZExtFree(const Type *Ty1, const Type *Ty2) const {
    return false;
  }

  virtual bool isZExtFree(MVT VT1, MVT VT2) const {
    return false;
  }

  //===--------------------------------------------------------------------===//
  // Div utility functions
  //