payloads. APFloat's internal folding routines always make QNaNs now,
instead of sometimes making QNaNs and sometimes SNaNs depending on the
type.
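For context (background detail, not part of the change): on common IEEE-754 targets the only encoding difference between the two kinds of NaN is the top significand bit, the "quiet" bit. A minimal C sketch, assuming the usual x86/ARM convention:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
  /* IEEE-754 double: 1 sign bit, 11 exponent bits (all ones for a NaN),
     52 significand bits; the top significand bit marks a quiet NaN. */
  uint64_t qnan_bits = 0x7FF8000000000001ULL;  /* quiet bit set, payload 1   */
  uint64_t snan_bits = 0x7FF0000000000001ULL;  /* quiet bit clear, payload 1 */
  double qnan, snan;
  memcpy(&qnan, &qnan_bits, sizeof qnan);
  memcpy(&snan, &snan_bits, sizeof snan);
  printf("%d %d\n", qnan != qnan, snan != snan);  /* both are NaNs: prints 1 1 */
  return 0;
}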
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97364 91177308-0d34-0410-b5e6-96231b3b80d8
getelementptr. Despite only doing so in the case where x is a known
array object and c can be converted to an index within range, this
could still be invalid if c is actually the address of an object
allocated outside of LLVM. Also, SCEVExpander, the original motivation
for this code, has since been improved to avoid inttoptr+ptrtoint in
more cases.
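A hypothetical source-level sketch of the hazard (names are illustrative, not from the commit): the integer added to ptrtoint(x) can itself encode the address of a different object, so rewriting the add as a getelementptr on x would wrongly assert that the result is derived from x.

#include <stdint.h>

/* 'off' is computed so that known + off lands inside a different object.
   Turning the integer add into a GEP on 'known' would claim the result
   still points into 'known', which is false here. */
char *into_foreign_object(char *known, char *foreign) {
  uintptr_t off = (uintptr_t)foreign - (uintptr_t)known;
  return (char *)((uintptr_t)known + off);  /* actually points at 'foreign' */
}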
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96950 91177308-0d34-0410-b5e6-96231b3b80d8
odd offsets since the bitcasted pointer size and the offset pointer
size are going to be different types for the GEP vs base object.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96134 91177308-0d34-0410-b5e6-96231b3b80d8
what it does. Enhance it to return false for optimizing vector
sign extensions from vector comparisons, which is the idiom used
to get a splatted vector for a vector comparison.
Doing this breaks vector-casts.ll, so add some compensating
transformations to handle the important cases it covers without
depending on this canonicalization.
This fixes rdar://7434900, a serious pessimization of vector compares.
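As a source-level sketch of the idiom (GCC/Clang vector extensions; not part of the original message), a vector comparison yields a lane mask of all-ones/all-zeros, which typically shows up in IR as an icmp whose i1 lanes are sign-extended back to the element width:

typedef int v4si __attribute__((vector_size(16)));

/* Each result lane is -1 where a[i] < b[i] and 0 otherwise, i.e. a splatted
   mask; in IR this is normally an icmp followed by a vector sign extension. */
v4si less_than_mask(v4si a, v4si b) {
  return a < b;
}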
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95855 91177308-0d34-0410-b5e6-96231b3b80d8
Initial skeleton and SCEVUnknown lowering implemented;
the rest should come relatively quickly. Move testcase
to new directory.
Move pass to right before SimplifyLibCalls - which is
moved down a bit so we can take advantage of a few opts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95628 91177308-0d34-0410-b5e6-96231b3b80d8
xform it is checking to actually pass. There is no need to match
m_SelectCst<0, -1> since instcombine canonicalizes that into not(sext).
Add matches for sext(not(x)) in addition to not(sext(x)).
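The two orderings are equivalent when x is an i1; a small C sketch of the source-level analogue (names are illustrative only):

#include <stdbool.h>
#include <stdio.h>

int main(void) {
  bool x = true;
  int not_sext = ~(-(int)x);  /* not(sext(x)): -1 becomes 0 */
  int sext_not = -(int)(!x);  /* sext(not(x)): also 0       */
  printf("%d %d\n", not_sext, sext_not);  /* prints 0 0 */
  return 0;
}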
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95420 91177308-0d34-0410-b5e6-96231b3b80d8
Fix bugs where we would compute an out-of-bounds offset as in bounds, and
where we failed to account for the linker being able to override the size
of an array. Add a few new testcases, and change the existing testcase to
use a private global array instead of an extern one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95283 91177308-0d34-0410-b5e6-96231b3b80d8
cases, and implement target-independent folding rules for alignof and
offsetof. Also, reassociate reassociative operators when it leads to
more folding.
Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.
Make the target-dependent folder promote GEP array indices to
pointer-sized integers, to make implicit casting explicit and exposed
to subsequent folding.
And add a bunch of testcases for this new functionality, and for a bunch
of related existing functionality.
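For context (a sketch, not part of the change itself): the ConstantExpr forms being folded are what the classic null-pointer offsetof trick lowers to, including the array case mentioned above:

#include <stddef.h>
#include <stdio.h>

struct S { char c; double d; int a[4]; };

/* Hand-rolled offsetof: a member's address relative to a null base.  Front
   ends typically lower this to a getelementptr-on-null ConstantExpr, the form
   the folding code recognizes.  Real code should use the offsetof macro. */
#define MY_OFFSETOF(T, member) ((size_t)&(((T *)0)->member))

int main(void) {
  printf("%zu %zu\n", MY_OFFSETOF(struct S, a[2]), offsetof(struct S, a[2]));
  return 0;
}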
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94987 91177308-0d34-0410-b5e6-96231b3b80d8
of objc message send was getting marked arm_apcscc, but the prototype
wasn't. This is fine at runtime because objc_msgSend is implemented in
assembly. Only turn a mismatched caller and callee into 'unreachable'
if the callee is a definition.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94986 91177308-0d34-0410-b5e6-96231b3b80d8
case, instcombine can't zap the invoke for fear of changing the CFG.
However, we have to do something to prevent the next iteration of
instcombine from inserting another store -> undef before the invoke,
thereby getting into an infinite iteration between dead store elimination and
store insertion.
Just zap the callee to null, which will prevent the next iteration
from doing anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94985 91177308-0d34-0410-b5e6-96231b3b80d8
needed for this test, but otherwise, there's nothing ARM-specific about
it and no need to specify the calling convention.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94862 91177308-0d34-0410-b5e6-96231b3b80d8
when it should have been and'd with LowBits. Fix that, and while there,
beef up the logic in the case of a negative LHS.
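A hypothetical spot check (values chosen for illustration) of the arithmetic behind this kind of LowBits reasoning: for srem by a power of two the result's low bits agree with the LHS's low bits, and a negative LHS yields a non-positive remainder:

#include <stdio.h>

int main(void) {
  int x = -5;
  int r = x % 8;                     /* C's % truncates toward zero, like srem */
  printf("%d %d\n", r & 7, x & 7);   /* low bits of result match the LHS: 3 3  */
  printf("%d\n", r <= 0 && r > -8);  /* negative LHS: remainder in (-8, 0]: 1  */
  return 0;
}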
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94745 91177308-0d34-0410-b5e6-96231b3b80d8
"sext cond" instead of a select. This simplifies some instcombine
code, matches the policy for zext (cond ? 1 : 0 -> zext), and allows
us to generate better code for a testcase on ppc.
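At the source level the two forms agree for a boolean condition; a small illustrative sketch (not from the original commit):

#include <stdbool.h>
#include <stdio.h>

int main(void) {
  bool cond = true;
  int via_select = cond ? -1 : 0;  /* the select form */
  int via_sext   = -(int)cond;     /* the sext form   */
  printf("%d %d\n", via_select, via_sext);  /* prints -1 -1 */
  return 0;
}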
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94339 91177308-0d34-0410-b5e6-96231b3b80d8
aggressive changed the canonical form from sext(trunc(x)) to ashr(lshr(x)),
make sure to transform a couple more things into that canonical form,
and catch a case where we missed turning zext/shl/ashr into a single sext.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93787 91177308-0d34-0410-b5e6-96231b3b80d8
added to the FSub version. However, the original version of this xform guarded
against doing this for floating point (!Op0->getType()->isFPOrFPVector()).
This is causing LLVM to perform incorrect xforms for code like:
void func(double *rhi, double *rlo, double xh, double xl, double yh, double yl) {
  double mh, ml;
  double c = 134217729.0;
  double up, u1, u2, vp, v1, v2;
  up = xh*c;
  u1 = (xh - up) + up;
  u2 = xh - u1;
  vp = yh*c;
  v1 = (yh - vp) + vp;
  v2 = yh - v1;
  mh = xh*yh;
  ml = (((u1*v1 - mh) + (u1*v2)) + (u2*v1)) + (u2*v2);
  ml += xh*yl + xl*yh;
  *rhi = mh + ml;
  *rlo = (mh - (*rhi)) + ml;
}
The last line was optimized away, but rlo is intended to be the difference
between the infinitely precise result of mh + ml and that result after it
has been rounded to double precision.
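A concrete illustration (values chosen here, not from the original report) of why mh - (mh + ml) must not be folded to -ml in floating point:

#include <stdio.h>

int main(void) {
  double mh = 1.0, ml = 1e-20;   /* ml is far below one ulp of mh      */
  double rhi = mh + ml;          /* rounds back to exactly 1.0         */
  double rlo = (mh - rhi) + ml;  /* recovers the rounding error: 1e-20 */
  /* Folding (mh - (mh + ml)) to -ml would make rlo exactly 0. */
  printf("%g %g\n", rhi, rlo);   /* prints 1 1e-20 */
  return 0;
}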
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93369 91177308-0d34-0410-b5e6-96231b3b80d8