than a wider one, before trying to compare their contents, which will crash
if their sizes are different.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74792 91177308-0d34-0410-b5e6-96231b3b80d8
blocks, and also exit blocks with multiple conditions (combined
with bitwise ands and ors). It's often infeasible to compute an
exact trip count in such cases, but a useful upper bound can often
be found.
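For example (an illustrative exit with made-up names, not a testcase from
the tree):
  %c1 = icmp eq i32 %i, %n
  %c2 = icmp eq i32 %x, 0
  %c = or i1 %c1, %c2
  br i1 %c, label %exit, label %loop
The loop leaves as soon as either condition fires, so even if the count
governed by %c2 is unknowable, the loop cannot take the backedge more times
than the %c1 condition alone would allow, which yields an upper bound.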
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73866 91177308-0d34-0410-b5e6-96231b3b80d8
If C is a single bit and the and gets analyzed as a truncate and
zero-extend, the xor can be represented as an add.
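For example (purely illustrative, with made-up names): suppose the and is
modeled as (zext i8 %w to i32), where %w stands for (trunc i32 %x to i8),
and C is 128, the sign bit of the narrow i8 value. The xor just flips that
bit, and flipping the top bit of an i8 is the same as adding 128 in i8
arithmetic, so the whole value can be modeled as
  (zext i8 (128 + %w) to i32)
with the xor turned into an add inside the narrow type.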
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73664 91177308-0d34-0410-b5e6-96231b3b80d8
that gets recognized as a SCEVZeroExtendExpr must be an And
with a low-bits mask. With r73540, this is no longer the case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73594 91177308-0d34-0410-b5e6-96231b3b80d8
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.
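For example, floating-point addition that used to be written
  add double %x, %y
is now written
  fadd double %x, %y
while integer addition keeps the add opcode.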
For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatability, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.
This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72897 91177308-0d34-0410-b5e6-96231b3b80d8
add-recurrence to be exposed. Add a new SCEV folding rule to
help simplify expressions in the presence of these extra truncs.
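One fold of this flavor (shown only as an illustration, not necessarily the
exact rule added here) pushes a truncate through an add-recurrence so that
later folds still see the recurrence; with a loop-invariant step %s,
  (trunc i64 {0,+,%s} to i32)  -->  {0,+,(trunc i64 %s to i32)}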
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71264 91177308-0d34-0410-b5e6-96231b3b80d8
artificial "ptrtoint", as it tends to clutter up complicated
expressions. The cast operators now print both source and
destination types, which is usually sufficient.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70554 91177308-0d34-0410-b5e6-96231b3b80d8
compute an upper-bound value for the trip count, in addition to
the actual trip count. Use this to allow getZeroExtendExpr and
getSignExtendExpr to fold casts in more cases.
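For example (an illustrative case): if the maximum backedge-taken count of a
loop is known to be at most 100, an i8 counter {0,+,1} can never wrap, so
getZeroExtendExpr can fold
  (zext i8 {0,+,1} to i32)
into the same {0,+,1} recurrence carried out in i32.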
This may eventually morph into a more general value-range
analysis capability; there are certainly plenty of places where
more complete value-range information would allow more folding.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70509 91177308-0d34-0410-b5e6-96231b3b80d8
(sext i8 {-128,+,1} to i64) to i64 {-128,+,1}, where the iteration
crosses from negative to positive, but is still safe if the trip
count is within range.
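(Concretely: if the backedge is taken at most 255 times, the i8 value ranges
from -128 up to -128 + 255 = 127, so it never wraps past the signed maximum
and the wider i64 recurrence produces the same values.)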
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70421 91177308-0d34-0410-b5e6-96231b3b80d8
print sext, zext, and trunc, instead of signextend, zeroextend,
and truncate, respectively, for consistency with the main IR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70405 91177308-0d34-0410-b5e6-96231b3b80d8
type to truncate to should be the number of bits of the value that are
preserved, not the number that are clobbered with sign-extension.
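For example (assuming the usual shl/ashr sign-extension idiom, with
illustrative names):
  %t = shl i32 %x, 24
  %s = ashr i32 %t, 24
is (sext i8 (trunc i32 %x to i8) to i32); the truncation type is i8, the
32 - 24 = 8 preserved low bits, not i24, the 24 bits clobbered by the
sign-extension.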
This fixes regressions in ldecod.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@69704 91177308-0d34-0410-b5e6-96231b3b80d8
to more accurately describe what it does. Expand its doxygen comment
to describe what the backedge-taken count is and how it differs
from the actual iteration count of the loop. Adjust names and
comments in associated code accordingly.
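For example, a loop whose counter takes the values 0, 1, 2 and then exits
executes its body three times (an iteration count of 3), but its backedge is
only taken twice, so the backedge-taken count is 2, one less than the trip
count whenever the loop body runs at all.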
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65382 91177308-0d34-0410-b5e6-96231b3b80d8
Use it to safely handle less-than-or-equal-to exit conditions in loops. These
also occur when the loop exit branch exits on true, because SCEV inverts the
icmp predicate.
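For example (illustrative IR), with an exit branch such as
  %cmp = icmp sgt i32 %i, %n
  br i1 %cmp, label %exit, label %loop
the loop keeps running while %i <= %n, so once the predicate is inverted the
analysis is left with an sle condition even though the IR only contains an
sgt.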
Use it again to handle non-zero strides, but only with an unsigned comparison
in the exit condition.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59528 91177308-0d34-0410-b5e6-96231b3b80d8
If this patch causes a performance regression for anyone, please let me know,
and it can be fixed in a different way with much more effort.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59384 91177308-0d34-0410-b5e6-96231b3b80d8
its callers to emit a space character before calling it when a
space is needed.
This fixes several spurious whitespace issues in
ScalarEvolution's debug dumps. See the test changes for
examples.
This also fixes odd space-after-tab indentation in the output
for switch statements, and changes calls from being printed like
this:
call void @foo( i32 %x )
to this:
call void @foo(i32 %x)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@56196 91177308-0d34-0410-b5e6-96231b3b80d8
continue past the first conditional branch when looking for a
relevant test. This helps it avoid using MAX expressions in
loop trip counts in more cases.
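For instance (a hand-written sketch with made-up names, not a testcase from
the tree):
  define void @f(i32 %n, i1 %p) {
  entry:
    %guard = icmp sgt i32 %n, 0          ; the relevant pre-test
    br i1 %guard, label %a, label %exit
  a:
    br i1 %p, label %ph, label %exit     ; unrelated branch in between
  ph:
    br label %loop
  loop:
    %i = phi i32 [ 0, %ph ], [ %i.next, %loop ]
    %i.next = add i32 %i, 1
    %cmp = icmp slt i32 %i.next, %n
    br i1 %cmp, label %loop, label %exit
  exit:
    ret void
  }
The nearest conditional branch dominating the loop tests the unrelated %p;
continuing past it finds the %n > 0 guard, which lets the trip count drop the
max expression that would otherwise account for a non-positive %n.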
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54697 91177308-0d34-0410-b5e6-96231b3b80d8
version uses a new algorithm for evaluating the binomial coefficients
which is significantly more efficient for AddRecs of more than 2 terms
(see the comments in the code for details on how the algorithm works).
It also fixes some bugs: it removes the arbitrary length restriction for
AddRecs, it fixes the silent generation of incorrect code for AddRecs
which require a wide calculation width, and it fixes an issue where we
were incorrectly truncating the iteration count too far when evaluating
an AddRec expression narrower than the induction variable.
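For reference, an add recurrence {A0,+,A1,+,A2} evaluated at iteration i is
A0 + A1*BC(i,1) + A2*BC(i,2), where BC(i,k) is the binomial coefficient
i-choose-k; for example, {5,+,3,+,2} at iteration 4 is 5 + 3*4 + 2*6 = 29,
matching the directly iterated sequence 5, 8, 13, 20, 29.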
There are still a few related issues I know of: I think there's
still an issue with the SCEVExpander expansion of AddRec in terms of
the width of the induction variable used. The hack to avoid generating
too-wide integers shouldn't be necessary; instead, the callers should be
considering the cost of the expansion before expanding it (in addition
to not expanding too-wide integers, we might not want to expand
expressions that are really expensive, especially when optimizing for
size; calculating a length-17 32-bit AddRec currently generates about 250
instructions of straight-line code on X86). Also, for long 32-bit
AddRecs on X86, CodeGen really sucks at scheduling the code. I'm planning on
filing follow-up PRs for these issues.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54332 91177308-0d34-0410-b5e6-96231b3b80d8
time applying to the implicit comparison in smin expressions. The
correct way to transform an inequality into the opposite
inequality, either signed or unsigned, is with a not expression.
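Concretely, bitwise not maps the minimum value to the maximum and reverses the
order of every pair of values, in both the signed and unsigned senses, so
x < y holds exactly when ~y < ~x; plain negation does not have this property
at the minimum signed value.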
I looked through the SCEV code, and I don't think there are any more
occurrences of this issue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54194 91177308-0d34-0410-b5e6-96231b3b80d8
SGT exit condition. Essentially, the correct way to flip an inequality
in 2's complement is the not operator, not the negation operator.
That said, the difference only affects cases involving INT_MIN.
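(Worked i8 case: -(-128) wraps back to -128, so negation fails to flip an
inequality involving the minimum value, while ~(-128) = 127 and ~127 = -128,
which reverse the order correctly.)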
Also, enhance the pre-test search logic to be a bit smarter about
inequalities flipped with a not operator, so it can eliminate the smax
from the iteration count for simple loops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54184 91177308-0d34-0410-b5e6-96231b3b80d8
force evaluation (ComputeIterationCountExhaustively) should be turned off.
It doesn't apply to trip-count2.ll because this file tests the brute force
evaluation.
The test for PR2364 (2008-05-25-NegativeStepToZero.ll) currently fails,
showing that the patch for this bug doesn't work. I'll fix it in a few hours
with a patch for PR2088.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53792 91177308-0d34-0410-b5e6-96231b3b80d8