fairly systematic way in instcombine. Some of these cases were already dealt
with, in which case I removed the existing code. The case of Add has a bunch of
funky logic which covers some of this plus a few variants (it considers shifts to
be a form of multiplication), which I didn't touch. The simplification performed is:
A*B+A*C -> A*(B+C). The improvement is to do this in cases that were not already
handled [such as A*B-A*C -> A*(B-C), which was reported on the mailing list], and
also to do it more often by not checking for "only one use" if "B+C" simplifies.
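For illustration only (hypothetical function names, not the original testcase), the
effect at the C level is roughly:
/* Hedged sketch of the factorization. */
int distribute(int a, int b, int c) {
  return a * b + a * c;   /* becomes a * (b + c) */
}
int distribute_sub(int a, int b, int c) {
  return a * b - a * c;   /* the newly handled case: becomes a * (b - c) */
}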
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120024 91177308-0d34-0410-b5e6-96231b3b80d8
folding improvements: if P points to a type of size zero, turn "gep P, N" into "P".
More generally, if a gep index type has size zero, instcombine could replace the
index with zero, but that is not done here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119942 91177308-0d34-0410-b5e6-96231b3b80d8
void a(int x) { if (((1<<x)&8)==0) b(); }
into "x != 3", which occurs over 100 times in 403.gcc but in no
other program in llvm-test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119922 91177308-0d34-0410-b5e6-96231b3b80d8
allowing the memcpy to be eliminated.
Unfortunately, the requirements on byval's without explicit
alignment are really weak and impossible to predict in the
mid-level optimizer, so this doesn't kick in much with current
frontends. The fix is to change clang to set alignment on all
byval arguments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119916 91177308-0d34-0410-b5e6-96231b3b80d8
preserves LCSSA form out of ScalarEvolution and into the LoopInfo
class. Use it to check that SimplifyInstruction simplifications
are not breaking LCSSA form. Fixes PR8622.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119727 91177308-0d34-0410-b5e6-96231b3b80d8
this was a tree of hashtables, and a query recursed into the table for the immediate dominator ad infinitum
if the initial lookup failed. This led to really bad performance on tall, narrow CFGs.
We can instead replace it with what is conceptually a multimap of value numbers to leaders (actually
represented by a hashtable with a list of Value*'s as the value type), and then
determine which leader from that set to use very cheaply thanks to the DFS numberings maintained by
DominatorTree. Because there are typically few duplicates of a given value, this scan tends to be
quite fast. Additionally, we use a custom linked list and BumpPtr allocation to avoid any unnecessary
allocation in representing the value-side of the multimap.
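Conceptually (the names below are made up for illustration; this is not the actual
GVN code), the leader table looks something like:
/* Hedged sketch of the multimap, assuming made-up names. */
struct LeaderEntry {
  void *value;              /* the leader; a Value* in the real code */
  unsigned dfs_in, dfs_out; /* DFS numbers of the leader's defining block */
  struct LeaderEntry *next; /* bump-allocated singly linked list */
};
/* Hash table: value number -> list of LeaderEntry. To pick a leader that
   dominates the query block, scan the (typically short) list for an entry
   whose [dfs_in, dfs_out] interval contains the query block's DFS number. */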
This change brings with it a 15% (!) improvement in the total running time of GVN on 403.gcc, which I
think is pretty good considering that includes all the "real work" being done by MemDep as well.
The one downside to this approach is that we can no longer use GVN to perform simple conditional propagation,
but that seems like an acceptable loss since we now have LVI and CorrelatedValuePropagation to pick up
the slack. If you see conditional propagation that's not happening, please file bugs against LVI or CVP.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119714 91177308-0d34-0410-b5e6-96231b3b80d8
refusing to optimize two memcpy's like this:
copy A <- B
copy C <- A
if it couldn't prove that noalias(B,C). We can eliminate
the copy by producing a memmove instead of memcpy.
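At the C level the situation is roughly (hypothetical example, not the testcase):
/* Hedged illustration. */
#include <string.h>
void copy_twice(char *a, const char *b, char *c, size_t n) {
  memcpy(a, b, n);   /* copy A <- B */
  memcpy(c, a, n);   /* copy C <- A; may now become memmove(c, b, n) */
}
/* memmove is safe even if B and C overlap, and reading straight from B lets
   the A <- B copy (and possibly A itself) be removed when A has no other uses. */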
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119694 91177308-0d34-0410-b5e6-96231b3b80d8
if it is passed as a byval argument. The byval argument will just be a
read, so it is safe to read from the original global instead. This allows
us to promote away the %agg.tmp alloca in PR8582
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119686 91177308-0d34-0410-b5e6-96231b3b80d8
over a phi node by applying it to each operand may be wrong if the
operation and the phi node are mutually interdependent (the testcase
has a simple example of this). So only do this transform if it would
be correct to perform the operation in each predecessor of the block
containing the phi, i.e. if the other operands all dominate the phi.
This should fix the FFMPEG snow.c regression reported by İsmail Dönmez.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119347 91177308-0d34-0410-b5e6-96231b3b80d8
offload the work to hasConstantValue rather than do something more
complicated (such as handling mutually recursive phis) because (1) it is
not clear it is worth it; and (2) if it is worth it, maybe such logic
would be better placed in hasConstantValue. Adjust some GVN tests
which are now cleaned up much further (eg: all phi nodes are removed).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119043 91177308-0d34-0410-b5e6-96231b3b80d8
SimplifyAssociativeOrCommutative) "(A op C1) op C2" -> "A op (C1 op C2)",
which previously was only done if C1 and C2 were constants, to occur whenever
"C1 op C2" simplifies (a la InstructionSimplify). Since the simplifying operand
combination can no longer be assumed to be the right-hand terms, consider all of
the possible permutations. When compiling "gcc as one big file", transform 2
(i.e. using right-hand operands) fires about 4000 times but it has to be said
that most of the time the simplifying operands are both constants. Transforms
3, 4 and 5 each fired once. Transform 6, which is an existing transform that
I didn't change, never fired. With this change, the testcase is now optimized
perfectly with one run of instcombine (previously it required instcombine +
reassociate + instcombine, and it may just have been luck that this worked).
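A made-up example of the non-constant case (not the testcase mentioned above):
/* Hedged illustration: b & ~b simplifies to 0 even though neither operand
   is a constant, so (a & b) & ~b reassociates to a & (b & ~b) = 0. */
int f(int a, int b) {
  return (a & b) & ~b;
}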
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119002 91177308-0d34-0410-b5e6-96231b3b80d8
testing for dereferenceable pointers into a helper function,
isDereferenceablePointer. Teach it how to reason about GEPs
with simple non-zero indices.
Also eliminate ArgumentPromotion's IsAlwaysValidPointer,
which didn't check for weak externals or out-of-range gep
indices.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118840 91177308-0d34-0410-b5e6-96231b3b80d8
references. For example, this allows gvn to eliminate the load in
this example:
void foo(int n, int* p, int *q) {
p[0] = 0;
p[1] = 1;
if (n) {
*q = p[0];
}
}
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118714 91177308-0d34-0410-b5e6-96231b3b80d8
nodes can be used in loops, this could result in infinite looping
if there is no recursion limit, so add such a limit. It is also
used for the SelectInst case because in theory there could be an
infinite loop there too if the basic block is unreachable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118694 91177308-0d34-0410-b5e6-96231b3b80d8
to optionally look for constant or local (alloca) memory.
Teach BasicAliasAnalysis::pointsToConstantMemory to look through Select
and Phi nodes, and to support looking for local memory.
Remove FunctionAttrs' PointsToLocalOrConstantMemory function, now that
AliasAnalysis knows all the tricks that it knew.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118412 91177308-0d34-0410-b5e6-96231b3b80d8
of a select instruction, see if doing the compare with the
true and false values of the select gives the same result.
If so, that can be used as the value of the comparison.
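For example (hypothetical, not the committed test):
/* Hedged illustration. */
int g(int c) {
  int v = c ? 4 : 6;
  return v == 8;   /* both 4 == 8 and 6 == 8 are false, so this folds to 0 */
}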
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118378 91177308-0d34-0410-b5e6-96231b3b80d8
consider it to be readonly. In fact, don't even consider it to be
readonly if it does a volatile load from an AllocaInst either (it
is debatable as to whether readonly would be correct or not in this
case; play safe for the moment). This fixes PR8279.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117783 91177308-0d34-0410-b5e6-96231b3b80d8
This code had previously used 2*N, where N is the mask length, to represent
undef. That is not safe because the shufflevector operands may have more
than N elements -- they don't have to match the result type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117721 91177308-0d34-0410-b5e6-96231b3b80d8
Allow splats even if they don't match either of the original shuffles,
possibly due to undef entries in the shuffle masks. Radar 8597790.
Also fix some 80-column violations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117719 91177308-0d34-0410-b5e6-96231b3b80d8
it isn't unreachable and should not be zapped. The check for the entry block
was missing in one case: a block containing an unwind instruction. While there,
do some small cleanups: "M" is not a great name for a Function* (it would be
more appropriate for a Module*), change it to "Fn"; use Fn in more places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117224 91177308-0d34-0410-b5e6-96231b3b80d8
does normal initialization and normal chaining. Change the default
AliasAnalysis implementation to NoAlias.
Update StandardCompileOpts.h and friends to explicitly request
BasicAliasAnalysis.
Update tests to explicitly request -basicaa.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116720 91177308-0d34-0410-b5e6-96231b3b80d8
logic to use the new APInt methods. Among other things this
implements rdar://8501501 - llvm.smul.with.overflow.i32 should constant fold
which comes from "clang -ftrapv", originally brought to my attention by PR8221.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116457 91177308-0d34-0410-b5e6-96231b3b80d8
Anyone interested in more general PRE would be better served by implementing it separately, to get real
anticipation calculation, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115337 91177308-0d34-0410-b5e6-96231b3b80d8
code size (making this transform code size neutral), and it allows us to hoist values out of loops, which is always
a good thing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115205 91177308-0d34-0410-b5e6-96231b3b80d8
Because of this, we cannot use the Simplify* APIs, as they can assert-fail on unreachable code. Since it's not easy to determine
if a given threading will cause a block to become unreachable, simply defer simplification to later InstCombine and/or
DCE passes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115082 91177308-0d34-0410-b5e6-96231b3b80d8
Usually we wouldn't do this anyway because llvm_fenv_testexcept would report an
exception, but we have seen some cases where neither errno nor fenv detects an
exception on arm-linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114893 91177308-0d34-0410-b5e6-96231b3b80d8
Splitting critical edges at the merge point only addressed part of the issue; it is also possible for non-post-domination
to occur when the path from the load to the merge has branches in it. Unfortunately, full anticipation analysis is
time-consuming, so for now approximate it. This is strictly more conservative than real anticipation, so we will miss
some cases that real PRE would allow, but we also no longer insert loads into paths where they didn't exist before. :-)
This is a very slight net positive on SPEC for me (0.5% on average). Most of the benchmarks are largely unaffected, but
when it pays off it pays off decently: 181.mcf improves by 4.5% on my machine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114785 91177308-0d34-0410-b5e6-96231b3b80d8
so that it detects errors on platforms where libm doesn't set errno.
It's still subject to host libm details though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114148 91177308-0d34-0410-b5e6-96231b3b80d8
a Constant into a ConstantRange. Handle this conservatively for now, rather than asserting. The testcase is
more complex than I would like, but the manifestation of the problem is sensitive to iteration orders and the state of the
LVI cache, and I have not been able to reproduce it with manually constructed or simplified cases.
Fixes PR8162.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114103 91177308-0d34-0410-b5e6-96231b3b80d8
deleted. Fix this by doing the copyValue's before we delete stuff!
The testcase only repros the problem on my system with valgrind.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113820 91177308-0d34-0410-b5e6-96231b3b80d8
to expose greater opportunities for store narrowing in codegen. This patch fixes a potential
infinite loop in instcombine caused by one of the introduced transforms being overly aggressive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113763 91177308-0d34-0410-b5e6-96231b3b80d8
This can result in increased opportunities for store narrowing in code generation. Update a number of
tests for this change. This fixes <rdar://problem/8285027>.
Additionally, because this inverts the order of ors and ands, some patterns for optimizing or-of-and-of-or
no longer fire in instances where they did originally. Add a simple transform which recaptures most of these
opportunities: if we have an or-of-constant-or and have failed to fold away the inner or, commute the order
of the two ors, to give the non-constant or a chance for simplification instead.
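Roughly (illustrative constants only):
/* Hedged illustration of the commuted or. */
int h(int a, int b) {
  return (a | 0xFF) | b;   /* if the inner or doesn't fold, try (a | b) | 0xFF */
}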
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113679 91177308-0d34-0410-b5e6-96231b3b80d8
unrolling threshold to the optimize-for-size threshold. Basically, for loops containing calls, unrolling
can still be profitable as long as the loop is REALLY small.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113439 91177308-0d34-0410-b5e6-96231b3b80d8
turning (fptrunc (sqrt (fpext x))) -> (sqrtf x) is great, but we have
to delete the original sqrt as well. Not doing so causes us to do
two sqrt's when building with -fmath-errno (the default on linux).
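The pattern at the source level is roughly (hypothetical function name):
/* Hedged illustration. */
#include <math.h>
float root(float x) {
  /* fptrunc(sqrt(fpext x)) -> sqrtf(x); the old sqrt call must be erased too,
     or -fmath-errno builds end up doing both. */
  return (float)sqrt((double)x);
}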
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113260 91177308-0d34-0410-b5e6-96231b3b80d8
in the duplicated block instead of duplicating them.
Duplicating them into the end of the loop and the preheader
meant that we got a phi node in the header of the loop,
which prevented LICM from hoisting them. GVN would
usually come around later and merge the duplicated
instructions so we'd get reasonable output... except that
anything dependent on the shoulda-been-hoisted value can't
be hoisted. In PR5319 (which this fixes), a memory value
didn't get promoted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113134 91177308-0d34-0410-b5e6-96231b3b80d8
location is being re-stored to the memory location. We would get
a dangling pointer from the SSAUpdate data structure and miss a
use. This fixes PR8068
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113042 91177308-0d34-0410-b5e6-96231b3b80d8
on llvmdev: SRoA is introducing MMX datatypes like <1 x i64>,
which then cause random problems because the X86 backend is
producing mmx stuff without inserting proper emms calls.
In the short term, force off MMX datatypes. In the long term,
the X86 backend should not select generic vector types to MMX
registers. This is being worked on, but won't be done in time
for 2.8. rdar://8380055
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112696 91177308-0d34-0410-b5e6-96231b3b80d8
I have not been able to find a way to test each in isolation, for a few reasons:
1) The ability to look through non-i1 BinaryOperator's requires the ability to look through non-constant
ICmps in order for it to ever trigger.
2) The ability to do LVI-powered PHI value determination only matters in cases that ProcessBranchOnPHI
can't handle. Since it already handles all the cases without other instructions in the def-use chain
between the PHI and the branch, it requires the ability to look through ICmps and/or BinaryOperators
as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112611 91177308-0d34-0410-b5e6-96231b3b80d8
This actually exposed an infinite recursion bug in ComputeValueKnownInPredecessors which theoretically already existed (in JumpThreading's
handling of and/or of i1's), but never manifested before. This patch adds a tracking set to prevent this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112589 91177308-0d34-0410-b5e6-96231b3b80d8
A = shl x, 42
...
B = lshr ..., 38
which can be transformed into:
A = shl x, 4
...
iff we can prove that the would-be-shifted-in bits
are already zero. This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112314 91177308-0d34-0410-b5e6-96231b3b80d8
framework, which is good at ripping through bitfield
operations. This generalizes a bunch of the existing
xforms that instcombine does, such as
(x << c) >> c -> and
to handle intermediate logical nodes. This is useful for
ripping up the "promote to large integer" code produced by
SRoA.
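For instance (hedged example; unsigned, so the right shift is logical):
/* Hedged illustration, assuming 32-bit unsigned. */
unsigned low_byte(unsigned x) {
  return (x << 24) >> 24;   /* becomes x & 0xFF */
}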
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112304 91177308-0d34-0410-b5e6-96231b3b80d8
computation can be truncated if it is fed by a sext/zext that doesn't
have to be exactly equal to the truncation result type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112285 91177308-0d34-0410-b5e6-96231b3b80d8
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:
%94 = zext i16 %93 to i32 ; <i32> [#uses=2]
%96 = lshr i32 %94, 8 ; <i32> [#uses=1]
%101 = trunc i32 %96 to i8 ; <i8> [#uses=1]
This also unblocks other xforms from happening; now clang is able to compile:
struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }
into:
_foo: ## @foo
## BB#0: ## %entry
pshufd $1, %xmm0, %xmm2
addss %xmm0, %xmm2
movdqa %xmm1, %xmm3
addss %xmm2, %xmm3
pshufd $1, %xmm1, %xmm0
addss %xmm3, %xmm0
ret
on x86-64, instead of:
_foo: ## @foo
## BB#0: ## %entry
movd %xmm0, %rax
shrq $32, %rax
movd %eax, %xmm2
addss %xmm0, %xmm2
movapd %xmm1, %xmm3
addss %xmm2, %xmm3
movd %xmm1, %rax
shrq $32, %rax
movd %eax, %xmm0
addss %xmm3, %xmm0
ret
This seems pretty close to optimal to me, at least without
using horizontal adds. This also triggers in lots of other
code, including SPEC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112278 91177308-0d34-0410-b5e6-96231b3b80d8
any load in the default address space that completes implies that the base value that it GEP'd from
was not null.
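For example (illustrative C, not the actual test):
/* Hedged illustration. */
struct S { int field; };
int use(struct S *p) {
  int t = p->field;        /* a load through p that completes */
  if (p == 0) return -1;   /* provably dead: p cannot be null here */
  return t;
}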
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112015 91177308-0d34-0410-b5e6-96231b3b80d8
from the LHS should disable reconsidering that pred on the
RHS. However, knowing something about the pred on the RHS
shouldn't disable subsequent additions on the RHS from
happening.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111349 91177308-0d34-0410-b5e6-96231b3b80d8
loop, making the resulting loop significantly less ugly. Also, zap
its trivial PHI nodes, since it's easy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111255 91177308-0d34-0410-b5e6-96231b3b80d8
- Eliminate redundant successors.
- Convert an indirectbr with one successor into a direct branch.
Also, generalize SimplifyCFG to be able to be run on a function entry block.
It knows quite a few simplifications which are applicable to the entry
block, and it only needs a few checks to avoid trouble with the entry block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111060 91177308-0d34-0410-b5e6-96231b3b80d8
into test/CodeGen/X86, so that they aren't run when the x86 target is
not enabled.
Fix uglygep.ll to not be x86-specific.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110343 91177308-0d34-0410-b5e6-96231b3b80d8
instructions with alignment 0, so that subsequent passes don't
need to bother checking the TargetData ABI size manually.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110128 91177308-0d34-0410-b5e6-96231b3b80d8
it inserted rather than using LoopInfo::getCanonicalInductionVariable to
rediscover it, since that doesn't work on non-canonical loops. This fixes
infinite recursion on such loops; PR7562.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109419 91177308-0d34-0410-b5e6-96231b3b80d8
it doesn't miss an opportunity to form a GEP, regardless of the
relative loop depths of the operands. This fixes rdar://8197217.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108475 91177308-0d34-0410-b5e6-96231b3b80d8
mutated by recursive simplification. This also enhances
ReplaceAndSimplifyAllUses to actually do a real RAUW
at the end of it, which updates any value handles
pointing to "From" to start pointing to "To". This
seems useful for debug info and random other VH users.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108415 91177308-0d34-0410-b5e6-96231b3b80d8
by a return that returns a constant, while elsewhere in the function
another return instruction returns a different constant. This is a
special case of accumulator recursion, so just generalize the existing
logic a bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108241 91177308-0d34-0410-b5e6-96231b3b80d8
the LHS and RHS of an and/or instruction, don't add known
predecessor values multiple times. This fixes the crash on the
testcase from PR7498
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108114 91177308-0d34-0410-b5e6-96231b3b80d8
(X >s -1) ? C1 : C2 and (X <s 0) ? C2 : C1
into ((X >>s 31) & (C2 - C1)) + C1, avoiding the conditional.
This optimization could be extended to take non-const C1 and C2 but we'd better
stay conservative to avoid code size bloat for now.
for
int sel(int n) {
return n >= 0 ? 60 : 100;
}
we now generate
sarl $31, %edi
andl $40, %edi
leal 60(%rdi), %eax
instead of
testl %edi, %edi
movl $60, %ecx
movl $100, %eax
cmovnsl %ecx, %eax
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107866 91177308-0d34-0410-b5e6-96231b3b80d8
such a way that debug info for symbols is preserved even if the symbols are
optimized away by the optimizer.
Add new special pass to remove debug info for such symbols.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107416 91177308-0d34-0410-b5e6-96231b3b80d8
the returned value after the tail call if it differs from other return
values. The optimal thing to do would be to introduce a phi node for
the return value, but for the moment just fix the miscompile.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106947 91177308-0d34-0410-b5e6-96231b3b80d8
The memcmp will be optimized further and even the pathological case
'strstr(x, "x") == x' generates optimal code now.
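For example, with a constant needle (hedged illustration):
/* Hedged illustration. */
#include <string.h>
int starts_with_ab(const char *s) {
  return strstr(s, "ab") == s;   /* conceptually a length-2 prefix compare,
                                    i.e. memcmp(s, "ab", 2) == 0 */
}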
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106097 91177308-0d34-0410-b5e6-96231b3b80d8
the newly created allocas may be used by inlined calls, so these
need to have their tail call flags cleared. Fixes PR7272.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105255 91177308-0d34-0410-b5e6-96231b3b80d8
when it detects undefined behavior. llvm.trap generally codegens into something
really small (e.g. a 2-byte ud2 instruction on x86) and debugging this
sort of thing is "nontrivial". For example, we now compile:
void foo() { *(int*)0 = 42; }
into:
_foo:
pushl %ebp
movl %esp, %ebp
ud2
Some may even claim that this is a security hole, though that seems dubious
to me. This addresses rdar://7958343 - Optimizing away null dereference
potentially allows arbitrary code execution
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103356 91177308-0d34-0410-b5e6-96231b3b80d8
with a vector input and output into a shuffle vector. This sort of
sequence happens when the input code stores with one type and reloads
with another type and then SROA promotes to i96 integers, which make
everyone sad.
This fixes rdar://7896024
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103354 91177308-0d34-0410-b5e6-96231b3b80d8
values passed to llvm.dbg.value were not valid for the intrinsic; it
might have caused trouble one day if the verifier ever started checking
for valid debug info.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103038 91177308-0d34-0410-b5e6-96231b3b80d8
RAUW of a global variable with a local variable in function F,
if function local metadata M in function G was using the global
then M would become function-local to both F and G, which is not
allowed. See the testcase for an example. Fixed by detecting
this situation and zapping the metadata operand when it occurs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103007 91177308-0d34-0410-b5e6-96231b3b80d8
halting analysis, it is illegal to delete a call to a read-only function.
The correct solution is almost certainly to add a "must halt" attribute and
only allow deletions in its presence.
XFAIL the relevant testcase for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102831 91177308-0d34-0410-b5e6-96231b3b80d8
if an indirect call site was removed and a direct one was added, not
just if an indirect call site was modified to be direct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102830 91177308-0d34-0410-b5e6-96231b3b80d8
that can have a big effect :). The first is to enable the
iterative SCC passmanager juice that kicks in when the
scc passmgr detects that a function pass has devirtualized
a call. In this case, it will rerun all the passes it
manages on the SCC, up to the iteration count limit (4). This
is useful because a function pass may devirtualize a call, and
we want the inliner to inline it, or pruneeh to infer stuff
about it, etc.
The second patch is to add *all* call sites to the
DevirtualizedCalls list the inliner uses. This list is
about to get renamed, but the gist of this is that the
inliner now reconsiders *all* inlined call sites as candidates
for further inlining. The intuition is that in cases
like this:
f() { g(1); } g(int x) { h(x); }
We analyze this bottom up, and may decide that it isn't
profitable to inline H into G. Next step, we decide that it is
profitable to inline G into F, and do so, which means that F
now calls H. Even though the call from G -> H may not have been
profitable to inline, the call from F -> H may be (in this case
because a constant allows folding etc).
In my spot checks, this doesn't have a big impact on code. For
example, the LLC output for 252.eon grew by 0.02% (from
317252 to 317308) and 176.gcc actually shrank by 0.3% (from 1525612
to 1520964 bytes). 252.eon never iterated in the SCC Passmgr,
176.gcc iterated at most 1 time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102823 91177308-0d34-0410-b5e6-96231b3b80d8
that appear due to inlining a callee as candidates for
further inlining, but a recent patch made it do this if
those call sites were indirect and became direct.
Unfortunately, in bizarre cases (see testcase) doing this
can cause us to infinitely inline mutually recursive
functions into callers not in the cycle. Fix this by
keeping track of the inline history that callsite inline
candidates got inlined from.
This shouldn't affect any "real world" code, but is required
for a follow on patch that is coming up next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102822 91177308-0d34-0410-b5e6-96231b3b80d8
were still inlining self-recursive functions into other functions.
Inlining a recursive function into itself has the potential to
reduce recursion depth by a factor of 2, while inlining a recursive
function into something else reduces recursion depth by exactly
1. Since inlining a recursive function into something else is a
weird form of loop peeling, turn this off.
The deleted testcase was added by Dale in r62107; since then
we've been leaning towards not inlining recursive stuff ever. In any
case, if we like inlining recursive stuff, it should be done
within the recursive function itself to get the algorithm
recursion depth win.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102798 91177308-0d34-0410-b5e6-96231b3b80d8
that appear in the SCC as a result of inlining as candidates
for inlining. Change this so that it *does* consider call
sites that change from being indirect to being direct as a
result of inlining. This allows it to completely
"devirtualize" the testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102146 91177308-0d34-0410-b5e6-96231b3b80d8
Fix RefreshCallGraph to use CGN->replaceCallEdge instead of hand
rolling its own loop. replaceCallEdge properly maintains the
reference counts of the nodes, fixing a crash exposed by the
iterative callgraph stuff.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102120 91177308-0d34-0410-b5e6-96231b3b80d8