rewrite the comparison if there is any implicit extension or truncation
on the induction variable. I'm planning for IVUsers to eventually take
over some of the work of this code, and for it to be generalized.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72496 91177308-0d34-0410-b5e6-96231b3b80d8
in the case where a loop exit value cannot be computed, instead of only in
some cases while using SCEVCouldNotCompute in others. This simplifies
getSCEVAtScope's callers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72375 91177308-0d34-0410-b5e6-96231b3b80d8
one of the RecursivelyDeleteTriviallyDeadInstructions.
Add a comment explaining why the cache needs to be cleared.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72372 91177308-0d34-0410-b5e6-96231b3b80d8
leave the original comparison in place if it has other uses, since the
other uses won't be dominated by the new comparison instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72369 91177308-0d34-0410-b5e6-96231b3b80d8
Fix by clearing the rewriter cache before deleting the trivially dead
instructions.
Also make InsertedExpressions use an AssertingVH to catch these
bugs more easily.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72364 91177308-0d34-0410-b5e6-96231b3b80d8
assuming that the use of the value is in a block dominated by the
"normal" destination. LangRef.html and other documentation sources
don't explicitly guarantee this, but it seems to be assumed in
other places in LLVM at least.
This fixes an assertion failure on the included testcase, which
is derived from the Ada testsuite.
FixUsesBeforeDefs is a temporary measure which I'm looking to
replace with a more capable solution.
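As a hand-written sketch of the pattern being assumed (names invented for illustration; not taken from the patch):

declare i32 @callee()

define i32 @f() {
entry:
  %v = invoke i32 @callee() to label %ok unwind label %lpad
ok:                                ; the "normal" destination
  %u = add i32 %v, 1               ; use of %v, assumed dominated by %ok
  ret i32 %u
lpad:
  ret i32 0                        ; %v is assumed never to be used here
}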
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72266 91177308-0d34-0410-b5e6-96231b3b80d8
Instcombine to be more aggressive about using SimplifyDemandedBits
on shift nodes. This allows a shift to be simplified to zero in the
included test case.
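A hand-written illustration (not the actual included test case) of the kind of shift this can now fold:

define i32 @demo(i32 %x) {
  %a = and i32 %x, 7          ; only bits 0-2 of %x survive
  %s = lshr i32 %a, 3         ; the shift discards all remaining bits, so %s folds to 0
  ret i32 %s
}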
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72204 91177308-0d34-0410-b5e6-96231b3b80d8
of the comparison is defined inside the loop. This fixes a
use-before-def problem, because the transformation puts a use
of the RHS outside the loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72149 91177308-0d34-0410-b5e6-96231b3b80d8
instructions. It attempts to create high-level multi-operand GEPs,
though in cases where this isn't possible it falls back to casting
the pointer to i8* and emitting a GEP with that. Using GEP instructions
instead of ptrtoint+arithmetic+inttoptr helps pointer analyses that
don't use ScalarEvolution, such as BasicAliasAnalysis.
Also, make the AddrModeMatcher more aggressive in handling GEPs.
Previously it assumed that operand 0 of a GEP would require a register
in almost all cases. It now does extra checking and can do more
matching if operand 0 of the GEP is foldable. This fixes a problem
that was exposed by SCEVExpander using GEPs.
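A hand-written sketch of the difference, assuming a 64-bit pointer width (names invented; not taken from the patch):

; rather than emitting
;   %p.int = ptrtoint double* %p to i64
;   %q.int = add i64 %p.int, 16
;   %q     = inttoptr i64 %q.int to double*
; the expander now prefers a GEP, which non-SCEV pointer analyses can see through:
define double* @advance(double* %p) {
  %q = getelementptr double* %p, i64 2    ; 2 elements of double = 16 bytes
  ret double* %q
}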
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72093 91177308-0d34-0410-b5e6-96231b3b80d8
is not known to be nothrow. This allows readnone/readonly functions
to be deleted even if we don't know whether the callee can throw.
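A hand-written sketch of the sort of call site this affects (names invented for illustration):

declare i32 @pure_fn(i32) readnone        ; note: not marked nounwind

define void @caller(i32 %x) {
  %dead = call i32 @pure_fn(i32 %x)       ; result unused; now eligible for deletion
  ret void
}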
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71676 91177308-0d34-0410-b5e6-96231b3b80d8
without one. Use it where we were using abs on
int64_t objects.
(I strongly suspect the casts to unsigned in the
fragments in LoopStrengthReduce are not doing whatever
the original intent was, but the obvious change to
uint64_t doesn't work. Maybe later.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71612 91177308-0d34-0410-b5e6-96231b3b80d8
and generalize it so that it can be used by IndVarSimplify. Implement the
base IndVarSimplify transformation code using IVUsers. This removes
TestOrigIVForWrap and associated code, as ScalarEvolution now has enough
builtin overflow detection and folding logic to handle all the same cases,
and more. Run "opt -iv-users -analyze -disable-output" on your favorite
loop for an example of what IVUsers does.
This lets IndVarSimplify eliminate IV casts and compute trip counts in
more cases. Also, this happens to finally fix the remaining testcases
in PR1301.
Now that IndVarSimplify is being more aggressive, it occasionally runs
into the problem where ScalarEvolutionExpander's code for avoiding
duplicate expansions makes it difficult to ensure that all expanded
instructions dominate all the instructions that will use them. As a
temporary measure, IndVarSimplify now uses a FixUsesBeforeDefs function
to fix up instructions inserted by SCEVExpander. Fortunately, this code
is contained, and can be easily removed once a more comprehensive
solution is available.
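For example, a minimal hand-written loop to feed to that command (names invented; not part of the patch):

define void @fill(i32* %a, i32 %n) {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %p = getelementptr i32* %a, i32 %i      ; IVUsers records this use of %i
  store i32 0, i32* %p
  %i.next = add i32 %i, 1
  %cond = icmp slt i32 %i.next, %n
  br i1 %cond, label %loop, label %exit
exit:
  ret void
}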
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71535 91177308-0d34-0410-b5e6-96231b3b80d8
Also, if the compare is the only use, LSR would place the iv increment instruction before the compare instead of in the latch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71485 91177308-0d34-0410-b5e6-96231b3b80d8
count down to 0 instead, under very restricted
circumstances. Adjust 4 testcases in which this
optimization fires.
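A hand-written sketch of a loop already in the counted-down form this produces (it assumes the trip count %n is known to be positive; names invented):

define void @countdown(i32 %n) {
entry:
  br label %loop
loop:
  %i = phi i32 [ %n, %entry ], [ %i.next, %loop ]
  %i.next = sub i32 %i, 1
  %exit = icmp eq i32 %i.next, 0      ; compare against 0 instead of against %n
  br i1 %exit, label %done, label %loop
done:
  ret void
}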
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71439 91177308-0d34-0410-b5e6-96231b3b80d8
method, fixing a crash on PR4146. While the store will
ultimately overwrite the "padded size" number of bits in memory,
the stored value may be a subset of this size. This function
only wants to handle the case where all bits are stored.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71224 91177308-0d34-0410-b5e6-96231b3b80d8
the optimizers about this. For example, a readonly
function with no uses cannot be removed unless it is
also marked nounwind.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71071 91177308-0d34-0410-b5e6-96231b3b80d8
CallbackVH, with fixes. allUsesReplacedWith needs to
walk the def-use chains and invalidate all users of a
value that is replaced. SCEVs of users need to be
recalculated even if the new value is equivalent. Also,
make forgetLoopPHIs walk def-use chains, since any
SCEV that depends on a PHI should be recalculated when
more information about that PHI becomes available.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70927 91177308-0d34-0410-b5e6-96231b3b80d8
makes ScalarEvolution::deleteValueFromRecords, code that
subtly needed to be called before ReplaceAllUsesWith, unnecessary.
It also makes ValueDeletionListener unnecessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70645 91177308-0d34-0410-b5e6-96231b3b80d8
of returning a list of pointers to Values that are deleted. This was
unsafe, because the pointers in the list are, by nature of what
RecursivelyDeleteDeadInstructions does, always dangling. Replace this
with a simple callback mechanism. This may eventually be removed if
all clients can reasonably be expected to use CallbackVH.
Use this to factor the dead-phi-cycle-elimination code out of LSR
into a utility function, and generalize it to use the
RecursivelyDeleteTriviallyDeadInstructions utility function.
This makes LSR more aggressive about eliminating dead PHI cycles;
adjust tests to either be less trivial or to simply expect fewer
instructions.
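For reference, a hand-written example of a dead PHI cycle of the kind this now removes (names invented):

define void @f(i32 %n) {
entry:
  br label %loop
loop:
  %dead = phi i32 [ 0, %entry ], [ %dead.next, %loop ]   ; used only by %dead.next
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %dead.next = add i32 %dead, 1                          ; used only by %dead: a dead cycle
  %i.next = add i32 %i, 1
  %c = icmp slt i32 %i.next, %n
  br i1 %c, label %loop, label %exit
exit:
  ret void
}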
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70636 91177308-0d34-0410-b5e6-96231b3b80d8
of LSR. This makes the AddUsersIfInteresting phase of LSR a pure
analysis instead of a phase that potentially does CFG modifications.
The conditions where this code would actually perform a split are
rare, and in the cases where it actually would do a split the split
is usually undone by CodeGenPrepare, and in cases where splits
actually survive into codegen, they appear to hurt more often than
they help.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70625 91177308-0d34-0410-b5e6-96231b3b80d8
target hooks canLosslesslyBitCastTo and isTruncateFree. This allows
targets to avoid worrying about handling all combinations of integer
and pointer types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70555 91177308-0d34-0410-b5e6-96231b3b80d8
with the persistent insertion point, and change IndVars to make
use of it. This fixes a bug where IndVars was holding on to a
stale insertion point and forcing the SCEVExpander to continue to
use it.
This fixes PR4038.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@69892 91177308-0d34-0410-b5e6-96231b3b80d8
have pointer types, though in contrast to C pointer arithmetic, SCEV
addition is never implicitly scaled. This not only eliminates the
need for special code like IndVars' EliminatePointerRecurrence
and LSR's own GEP expansion code, it also does a better job because
it lets the normal optimizations handle pointer expressions just
like integer expressions.
Also, since LLVM IR GEPs can't directly index into multi-dimensional
VLAs, moving the GEP analysis out of client code and into the SCEV
framework makes it easier for clients to handle multi-dimensional
VLAs the same way as other arrays.
Some existing regression tests show improved optimization.
test/CodeGen/ARM/2007-03-13-InstrSched.ll in particular improved to
the point where if-conversion started kicking in; I turned it off
for this test to preserve the intent of the test.
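As a hand-written illustration of what this enables (the SCEV rendering below is approximate; names invented):

define double* @addr(double* %p, i64 %i) {
  ; with pointer-typed SCEVs this is analyzed directly as roughly (%p + 8 * %i);
  ; the multiplication by the element size is explicit, never implicit as in C
  %q = getelementptr double* %p, i64 %i
  ret double* %q
}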
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@69258 91177308-0d34-0410-b5e6-96231b3b80d8
and sext over (iv | const), if a longer iv is
available. Allow expressions to have more than
one zext/sext parent. All from OpenSSL.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@69241 91177308-0d34-0410-b5e6-96231b3b80d8
sext around sext(shorter IV + constant), using a
longer IV instead, when it can figure out the
add can't overflow. This comes up a lot in
subscripting; mainly affects 64 bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@69123 91177308-0d34-0410-b5e6-96231b3b80d8
strncat :(
strncat(foo, "bar", 99)
would be optimized to
memcpy(foo+strlen(foo), "bar", 100, 1)
instead of
memcpy(foo+strlen(foo), "bar", 4, 1)"
Patch by Benjamin Kramer!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68905 91177308-0d34-0410-b5e6-96231b3b80d8
integer types, unless they are already strange. This prevents it from
turning the code produced by SROA into crazy libcalls and stuff that
the code generator can't handle. In the attached example, the result
was an i96 multiply that caused the x86 backend to assert.
Note that if TargetData had an idea of what the legal types are for
a target, this could be used to stop instcombine from introducing
i64 muls, as Scott wanted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68598 91177308-0d34-0410-b5e6-96231b3b80d8
instead of the place where it started to perform the string copy.
- PR3661
- Patch by Benjamin Kramer!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68443 91177308-0d34-0410-b5e6-96231b3b80d8
to/from integer types that are not intptr_t to convert to intptr_t
first and then do an integer conversion to the dest type. This exposes the
cast to the optimizer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67638 91177308-0d34-0410-b5e6-96231b3b80d8
1. Make instcombine always canonicalize trunc x to i1 into an icmp(x&1). This
exposes the AND to other instcombine xforms and is more of what the code
generator expects.
2. Rewrite the remaining trunc pattern match to use 'match', which
simplifies it a lot.
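A hand-written example of the canonical form produced by (1):

; before:  %b = trunc i32 %x to i1
; after: the low bit is tested explicitly, exposing the AND to other transforms
define i1 @lowbit(i32 %x) {
  %a = and i32 %x, 1
  %b = icmp ne i32 %a, 0
  ret i1 %b
}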
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67635 91177308-0d34-0410-b5e6-96231b3b80d8
linkage: the value may be replaced with something
different at link time. (Frontends that want to
allow values to be loaded out of weak constants can
give their constants weak_odr linkage).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67407 91177308-0d34-0410-b5e6-96231b3b80d8
and was deleting Instructions without clearing the
corresponding map entry. This led to nondeterministic
behavior if the same address got allocated to another
Instruction within a short time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67306 91177308-0d34-0410-b5e6-96231b3b80d8
it is not APInt clean, but even when it is it needs to be evaluated carefully
to determine whether it is actually profitable.
This fixes a crash on PR3806
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67134 91177308-0d34-0410-b5e6-96231b3b80d8
changes.
For InvokeInst now all arguments begin at op_begin().
The Callee, Cont and Fail are now faster to get, being
accessed relative to op_end().
This patch introduces some temporary ugliness in CallSite.
Next I'll bring CallInst up to a similar scheme and then
the ugliness will magically vanish.
This patch also exposes all the reliance of the libraries
on InvokeInst's operand ordering. I am thinking of taking
care of that too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66920 91177308-0d34-0410-b5e6-96231b3b80d8
in the Ada testcase. Reverting this only covers up
the real problem, which is a nasty conceptual difficulty
in the phi elimination pass: when eliminating phi nodes
in landing pads, the register copies need to come before
the invoke, not at the end of the basic block, which is
too late... See PR3784.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66826 91177308-0d34-0410-b5e6-96231b3b80d8
allocations. Apparently the assumption is that there is an
instruction (terminator?) following the allocation, so I
am making the same assumption.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66716 91177308-0d34-0410-b5e6-96231b3b80d8
use, check also for the case where it has two uses,
the other being a llvm.dbg.declare. This is needed so
debug info doesn't affect codegen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65970 91177308-0d34-0410-b5e6-96231b3b80d8
info with it.
Don't count debug info insns against the scan maximum
in FindAvailableLoadedValue (lest they affect codegen).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65910 91177308-0d34-0410-b5e6-96231b3b80d8
to more accurately describe what it does. Expand its doxygen comment
to describe what the backedge-taken count is and how it differs
from the actual iteration count of the loop. Adjust names and
comments in associated code accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65382 91177308-0d34-0410-b5e6-96231b3b80d8
ashr instcombine to help expose this code. And apply the fix to
SelectionDAG's copy of this code too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65364 91177308-0d34-0410-b5e6-96231b3b80d8
trip counts that use signed comparisons. It's not obviously the best
approach for preserving trip count information, and at any rate there
isn't anything in the tree right now that makes use of that, so for
now always using zero-extensions is preferable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65347 91177308-0d34-0410-b5e6-96231b3b80d8
so that ScalarEvolution doesn't hang onto a dangling Loop*, which
could be a problem if another Loop happens to get allocated at the
same address.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65323 91177308-0d34-0410-b5e6-96231b3b80d8
-std-compile-opts sequence, this avoids the need for ScalarEvolution to
be rerun before LoopDeletion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65318 91177308-0d34-0410-b5e6-96231b3b80d8
memcpy to match the alignment of the destination. It isn't necessary
for making loads and stores handled like the SSE loadu/storeu
intrinsics, and it was causing a performance regression in
MultiSource/Applications/JM/lencod.
The problem appears to have been a memcpy that copies from some
highly aligned array into an alloca; the alloca was then being
assigned a large alignment, which required codegen to perform
dynamic stack-pointer re-alignment, which forced the enclosing
function to have a frame pointer, which led to increased spilling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65289 91177308-0d34-0410-b5e6-96231b3b80d8
as legality. Make load sinking and gep sinking more careful: we only
do it when it won't pessimize loads from the stack. This has the added
benefit of not producing code that is unanalyzable to SROA.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65209 91177308-0d34-0410-b5e6-96231b3b80d8
addresses, part 1. This fixes an obvious logic bug. Previously if the only
in-loop use was a PHI, it would return AllUsesAreAddresses as true.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65178 91177308-0d34-0410-b5e6-96231b3b80d8
reduction of address calculations down to basic pointer arithmetic.
This is currently off by default, as it needs a few other features
before it becomes generally useful. And even when enabled, full
strength reduction is only performed when it doesn't increase
register pressure, and when several other conditions are true.
This also factors a bunch of existing LSR code out of
StrengthReduceStridedIVUsers into separate functions, and tidies
up IV insertion. This actually decreases register pressure even
in non-superhero mode. The change in iv-users-in-other-loops.ll
is an example of this; there are two more adds because there are
two fewer leas, and there is less spilling.
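A hand-written sketch of the idea (names invented; not from the patch): rather than recomputing each address from the induction variable, e.g. %p = getelementptr double* %base, i64 %i, the fully strength-reduced form steps a pointer induction variable directly:

define void @zero(double* %base, i64 %n) {
entry:
  br label %loop
loop:
  %p = phi double* [ %base, %entry ], [ %p.next, %loop ]
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  store double 0.0, double* %p
  %p.next = getelementptr double* %p, i64 1   ; step the pointer by one element
  %i.next = add i64 %i, 1                     ; %i remains only for the exit test
  %c = icmp slt i64 %i.next, %n
  br i1 %c, label %loop, label %exit
exit:
  ret void
}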
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65108 91177308-0d34-0410-b5e6-96231b3b80d8
trip count value when the original loop iteration condition is
signed and the canonical induction variable won't undergo signed
overflow. This isn't required for correctness; it just preserves
more information about original loop iteration values.
Add a getTruncateOrSignExtend method to ScalarEvolution,
following getTruncateOrZeroExtend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64918 91177308-0d34-0410-b5e6-96231b3b80d8
are multiple IVs in a loop, some of them may undergo signed
or unsigned wrapping even if the IV that's used in the loop
exit condition doesn't. Restrict sign-extension-elimination
and zero-extension-elimination to only those that operate on
the original loop-controlling IV.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64866 91177308-0d34-0410-b5e6-96231b3b80d8
modified in a way that may affect the trip count calculation. Change
IndVars to use this method when it rewrites pointer or floating-point
induction variables instead of using a doInitialization method to
sneak these changes in before ScalarEvolution has a chance to see
the loop. This eliminates the need for LoopPass to depend on
ScalarEvolution.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64810 91177308-0d34-0410-b5e6-96231b3b80d8
eliminate all the extensions and all but the one required truncate
from the testcase, but the or/and/shift stuff still isn't zapped.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64809 91177308-0d34-0410-b5e6-96231b3b80d8
Enhance instcombine to use the preferred field of
GetOrEnforceKnownAlignment in more cases, so that regular IR operations are
optimized in the same way that the intrinsics currently are.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64623 91177308-0d34-0410-b5e6-96231b3b80d8
when I was looking at functions used by python.
Highlights include better largefile support (64-bit file sizes on 32-bit
systems), fputs string is nocapture, popen/pclose added (popen being noalias
return), modf and frexp and friends. Also added some missing 'break' statements
and combined identical sections.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64615 91177308-0d34-0410-b5e6-96231b3b80d8
- Test for signed and unsigned wrapping conditions, instead of just
testing for non-negative induction ranges.
- Handle loops with GT comparisons, in addition to LT comparisons.
- Support more cases of induction variables that don't start at 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64532 91177308-0d34-0410-b5e6-96231b3b80d8
addrec in a different loop to check the value being added to
the accumulated Start value, not the Start value before it has
the new value added to it. This prevents LSR from going crazy
on the included testcase. Dale, please review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64440 91177308-0d34-0410-b5e6-96231b3b80d8
after sorting by stride value. This prevents it from missing
IV reuse opportunities in a host-sensitive manner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64415 91177308-0d34-0410-b5e6-96231b3b80d8
loop induction on LP64 targets. When the induction variable is
used in addressing, IndVars is now usually able to insert a
64-bit induction variable and eliminate the sign-extending cast.
This is also useful for code using C "short" types for
induction variables on targets with 32-bit addressing.
Inserting a wider induction variable is easy; the tricky part is
determining when trunc(sext(i)) expressions are no-ops. This
requires range analysis of the loop trip count. A common case is
when the original loop iteration starts at 0 and exits when the
induction variable is signed-less-than a fixed value; this case
is now handled.
This replaces IndVarSimplify's OptimizeCanonicalIVType. It was
doing the same optimization, but it was limited to loops with
constant trip counts, because it was running after the loop
rewrite, and the information about the original induction
variable is lost by that point.
Rename ScalarEvolution's executesAtLeastOnce to
isLoopGuardedByCond, generalize it to be able to test for
ICMP_NE conditions, and move it to be a public function so that
IndVars can use it.
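A hand-written sketch of the "before" shape (names invented); after widening, %i becomes an i64 induction variable and the sext disappears, because trunc(sext(i)) is known to be a no-op for the trip counts involved:

define void @store_all(double* %a, i32 %n) {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %i.ext = sext i32 %i to i64                 ; on LP64, addressing needs a 64-bit index
  %p = getelementptr double* %a, i64 %i.ext
  store double 0.0, double* %p
  %i.next = add i32 %i, 1
  %c = icmp slt i32 %i.next, %n
  br i1 %c, label %loop, label %exit
exit:
  ret void
}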
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64407 91177308-0d34-0410-b5e6-96231b3b80d8
accessed at least once as a vector. This prevents it from
compiling the example in not-a-vector into:
define double @test(double %A, double %B) {
%tmp4 = insertelement <7 x double> undef, double %A, i32 0
%tmp = insertelement <7 x double> %tmp4, double %B, i32 4
%tmp2 = extractelement <7 x double> %tmp, i32 4
ret double %tmp2
}
instead, producing the integer code. Producing vectors when they
aren't otherwise in the program is dangerous because a lot of other
code treats them carefully and doesn't want to break them down.
OTOH, many things want to break down tasty i448's.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63638 91177308-0d34-0410-b5e6-96231b3b80d8