[LAA] Try to prove non-wrapping of pointers if SCEV cannot

Summary:
Scalar evolution does not propagate the non-wrapping flags to values
that are derived from a non-wrapping induction variable because
the non-wrapping property can be flow-sensitive: the nsw/nuw flags on an
instruction only hold at that particular point in the program, while SCEV
expressions are shared across contexts.

This change is a first attempt to establish the non-wrapping property in
some simple cases.  The main idea is to look through the operations
defining the pointer.  As long as we arrive at a non-wrapping AddRec via
a short chain of non-wrapping instructions, the pointer should not wrap
either.
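Concretely, the chain being looked through is the one exercised by the
new test added in this change (excerpted below): the AddRec for the
pointer itself carries no no-wrap flags, but the inbounds GEP, the nsw
mul with a constant operand, and the NSW AddRec for %ind together show
that %arrayidxA cannot wrap:

  %ind = phi i64 [ 0, %entry ], [ %inc, %for.body ]
  %mul = mul nuw nsw i64 %ind, 2
  %arrayidxA = getelementptr inbounds i16, i16* %a, i64 %mul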

I believe that this essentially is what Andy described in
http://article.gmane.org/gmane.comp.compilers.llvm.cvs/220731 as the way
forward.

Reviewers: aschwaighofer, nadav, sanjoy, atrick

Reviewed By: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10472

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240798 91177308-0d34-0410-b5e6-96231b3b80d8
Adam Nemet 2015-06-26 17:25:43 +00:00
parent cb2dc5bc1c
commit e11d1d2c31
2 changed files with 90 additions and 1 deletions


@@ -504,6 +504,54 @@ static bool isInBoundsGep(Value *Ptr) {
  return false;
}

/// \brief Return true if an AddRec pointer \p Ptr is unsigned non-wrapping,
/// i.e. monotonically increasing/decreasing.
static bool isNoWrapAddRec(Value *Ptr, const SCEVAddRecExpr *AR,
                           ScalarEvolution *SE, const Loop *L) {
  // FIXME: This should probably only return true for NUW.
  if (AR->getNoWrapFlags(SCEV::NoWrapMask))
    return true;

  // Scalar evolution does not propagate the non-wrapping flags to values that
  // are derived from a non-wrapping induction variable because non-wrapping
  // could be flow-sensitive.
  //
  // Look through the potentially overflowing instruction to try to prove
  // non-wrapping for the *specific* value of Ptr.

  // The arithmetic implied by an inbounds GEP can't overflow.
  auto *GEP = dyn_cast<GetElementPtrInst>(Ptr);
  if (!GEP || !GEP->isInBounds())
    return false;

  // Make sure there is only one non-const index and analyze that.
  Value *NonConstIndex = nullptr;
  for (auto Index = GEP->idx_begin(); Index != GEP->idx_end(); ++Index)
    if (!isa<ConstantInt>(*Index)) {
      if (NonConstIndex)
        return false;
      NonConstIndex = *Index;
    }

  if (!NonConstIndex)
    // The recurrence is on the pointer, ignore for now.
    return false;

  // The index in GEP is signed.  It is non-wrapping if it's derived from a NSW
  // AddRec using a NSW operation.
  if (auto *OBO = dyn_cast<OverflowingBinaryOperator>(NonConstIndex))
    if (OBO->hasNoSignedWrap() &&
        // Assume constant for the other operand so that the AddRec can be
        // easily found.
        isa<ConstantInt>(OBO->getOperand(1))) {
      auto *OpScev = SE->getSCEV(OBO->getOperand(0));

      if (auto *OpAR = dyn_cast<SCEVAddRecExpr>(OpScev))
        return OpAR->getLoop() == L && OpAR->getNoWrapFlags(SCEV::FlagNSW);
    }

  return false;
}

/// \brief Check whether the access through \p Ptr has a constant stride.
int llvm::isStridedPtr(ScalarEvolution *SE, Value *Ptr, const Loop *Lp,
                       const ValueToValueMap &StridesMap) {
@@ -541,7 +589,7 @@ int llvm::isStridedPtr(ScalarEvolution *SE, Value *Ptr, const Loop *Lp,
  // to access the pointer value "0" which is undefined behavior in address
  // space 0, therefore we can also vectorize this case.
  bool IsInBoundsGEP = isInBoundsGep(Ptr);
-  bool IsNoWrapAddRec = AR->getNoWrapFlags(SCEV::NoWrapMask);
+  bool IsNoWrapAddRec = isNoWrapAddRec(Ptr, AR, SE, Lp);
  bool IsInAddressSpaceZero = PtrTy->getAddressSpace() == 0;
  if (!IsNoWrapAddRec && !IsInBoundsGEP && !IsInAddressSpaceZero) {
    DEBUG(dbgs() << "LAA: Bad stride - Pointer may wrap in the address space "


@@ -0,0 +1,41 @@
; RUN: opt -basicaa -loop-accesses -analyze < %s | FileCheck %s

; For this loop:
;   for (int i = 0; i < n; i++)
;     A[2 * i] = A[2 * i] + B[i];
;
; SCEV is unable to prove that A[2 * i] does not overflow.  However,
; analyzing the IR lets us conclude it and in turn allows dependence
; analysis.

target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128"

; CHECK: Memory dependences are safe{{$}}

define void @f(i16* noalias %a,
               i16* noalias %b, i64 %N) {
entry:
  br label %for.body

for.body:                                         ; preds = %for.body, %entry
  %ind = phi i64 [ 0, %entry ], [ %inc, %for.body ]

  %mul = mul nuw nsw i64 %ind, 2

  %arrayidxA = getelementptr inbounds i16, i16* %a, i64 %mul
  %loadA = load i16, i16* %arrayidxA, align 2

  %arrayidxB = getelementptr inbounds i16, i16* %b, i64 %ind
  %loadB = load i16, i16* %arrayidxB, align 2

  %add = mul i16 %loadA, %loadB

  store i16 %add, i16* %arrayidxA, align 2

  %inc = add nuw nsw i64 %ind, 1
  %exitcond = icmp eq i64 %inc, %N
  br i1 %exitcond, label %for.end, label %for.body

for.end:                                          ; preds = %for.body
  ret void
}
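For contrast, the case that isNoWrapAddRec explicitly leaves for later
("The recurrence is on the pointer, ignore for now") is a pointer
induction variable: the accessed pointer is an inbounds GEP whose only
index is constant, so no non-constant index is found and the function
conservatively returns false.  A hypothetical sketch of that shape (not
part of this commit; names are illustrative):

; The recurrence is carried by the pointer PHI and the GEP index is a
; constant, so the new look-through does not apply; whether the stride is
; still accepted then depends on the remaining inbounds-GEP and
; address-space-0 checks in isStridedPtr.
define void @g(i16* noalias %a, i64 %N) {
entry:
  br label %for.body

for.body:
  %p = phi i16* [ %a, %entry ], [ %p.next, %for.body ]
  %i = phi i64 [ 0, %entry ], [ %i.next, %for.body ]
  %p.next = getelementptr inbounds i16, i16* %p, i64 2
  store i16 0, i16* %p.next, align 2
  %i.next = add nuw nsw i64 %i, 1
  %exitcond = icmp eq i64 %i.next, %N
  br i1 %exitcond, label %for.end, label %for.body

for.end:
  ret void
}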