because it won't work after my phi operand changes, since the incoming
blocks will no longer be Uses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133512 91177308-0d34-0410-b5e6-96231b3b80d8
ops.
This is a rewrite of the IV simplification algorithm used by
-disable-iv-rewrite. To avoid perturbing the default mode, I
temporarily split the driver and created SimplifyIVUsersNoRewrite. The
idea is to avoid doing opcode/pattern matching inside
IndVarSimplify. SCEV already does it. We want to optimize with the
full generality of SCEV, but optimize def-use chains top-down and on demand
rather than rewriting the entire expression bottom-up. This was easy to do
for operations that SCEV can prove are identity functions. So we're now
eliminating bitmasks and zero extends this way.
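The heart of the identity elimination is asking SCEV whether a use computes
exactly the same value as the IV operand feeding it. A minimal sketch of that
check (the names eliminateIdentity, UseInst, IVOperand, and DeadInsts are
illustrative, not necessarily the ones in the pass):
static bool eliminateIdentity(Instruction *UseInst, Instruction *IVOperand,
                              ScalarEvolution *SE,
                              SmallVectorImpl<Instruction*> &DeadInsts) {
  // If SCEV proves the use computes the IV operand's value (e.g. an AND
  // whose mask SCEV knows is a no-op), forward the operand and queue the
  // now-dead use for deletion.
  if (SE->getSCEV(UseInst) != SE->getSCEV(IVOperand))
    return false;
  UseInst->replaceAllUsesWith(IVOperand);
  DeadInsts.push_back(UseInst);
  return true;
}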
A result of this rewrite is that indvars -disable-iv-rewrite no longer
requires IVUsers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133502 91177308-0d34-0410-b5e6-96231b3b80d8
Change PHINodes to store simple pointers to their incoming basic blocks,
instead of full-blown Uses.
Note that this loses an optimization in SplitCriticalEdge(), because we
can no longer walk the use list of a BasicBlock to find phi nodes. See
the comment I removed starting "However, the foreach loop is slow for
blocks with lots of predecessors".
Extend replaceAllUsesWith() on a BasicBlock to also update any phi
nodes in the block's successors. This mimics what would have happened
when PHINodes were proper Users of their incoming blocks. (Note that
this only works if OldBB->replaceAllUsesWith(NewBB) is called when
OldBB still has a terminator instruction, so it still has some
successors.)
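In effect, the extra work replaceAllUsesWith() now does for a BasicBlock is
the usual phi retargeting over its successors, roughly like this standalone
sketch (not the exact code added):
static void updateSuccessorPHIs(BasicBlock *OldBB, BasicBlock *NewBB) {
  // Requires OldBB to still have its terminator, so succ_begin/succ_end can
  // reach the successors whose phi nodes mention OldBB.
  for (succ_iterator SI = succ_begin(OldBB), E = succ_end(OldBB); SI != E; ++SI) {
    BasicBlock *Succ = *SI;
    for (BasicBlock::iterator II = Succ->begin(); isa<PHINode>(&*II); ++II) {
      PHINode *PN = cast<PHINode>(&*II);
      int Idx;
      while ((Idx = PN->getBasicBlockIndex(OldBB)) >= 0)
        PN->setIncomingBlock(Idx, NewBB);
    }
  }
}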
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133435 91177308-0d34-0410-b5e6-96231b3b80d8
Change various bits of code to make better use of the existing PHINode
API, to insulate them from forthcoming changes in how PHINodes store
their operands.
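For example, a call site that used to peek at the raw operand list to find an
incoming block can go through the accessors instead and stay oblivious to the
storage layout (an illustrative before/after, not a specific site from this
patch):
// Before: depends on incoming blocks being stored as operands, interleaved
// with the incoming values.
//   BasicBlock *Pred = cast<BasicBlock>(PN->getOperand(2*i + 1));
// After: the PHINode API hides where the block pointers actually live.
BasicBlock *Pred = PN->getIncomingBlock(i);
Value *InVal = PN->getIncomingValue(i);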
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133434 91177308-0d34-0410-b5e6-96231b3b80d8
type's bitwidth matches the (allocated) size of the alloca. This severely
pessimizes vector scalar replacement when the only vector type being used is
something like <3 x float> on x86 or ARM whose allocated size matches a
<4 x float>.
I hope to fix some of the flawed assumptions about allocated size throughout
scalar replacement and reenable this in most cases.
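The mismatch being described, in terms of the (then) TargetData queries, is
roughly the following; the numbers are for <3 x float> under a typical x86
data layout, and the snippet is an illustration rather than code from the
patch:
uint64_t AllocaBits   = TD->getTypeAllocSizeInBits(AI->getAllocatedType()); // 128
uint64_t VecBits      = TD->getTypeSizeInBits(VecTy);       // 96 for <3 x float>
uint64_t VecAllocBits = TD->getTypeAllocSizeInBits(VecTy);  // 128, padded like <4 x float>
// Requiring VecBits == AllocaBits rejects <3 x float> outright; comparing
// allocated sizes (VecAllocBits == AllocaBits) would not.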
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133338 91177308-0d34-0410-b5e6-96231b3b80d8
spartan right now, but I plan to encode more information in this enum to improve
the correctness and reliability of SRoA. At least this first pass makes it
possible to make VectorTy an actual VectorType.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132937 91177308-0d34-0410-b5e6-96231b3b80d8
assuming that all offsets are legal vector accesses, and thus trying to access
the float member of { <2 x float>, float } as the 3rd element of the first
member.
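Concretely, the bad arithmetic for { <2 x float>, float } looks like this
hand-worked illustration of the offsets (not the original code):
// Layout: <2 x float> at offset 0, the scalar float at offset 8.
unsigned EltSizeInBytes = 4;                      // sizeof(float)
uint64_t MemberOffset   = 8;                      // offset of the trailing float
unsigned EltIdx = MemberOffset / EltSizeInBytes;  // == 2
// <2 x float> only has elements 0 and 1, so treating this access as element
// 2 of the first member reads past the vector.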
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132766 91177308-0d34-0410-b5e6-96231b3b80d8
former was using the size of the entire alloca, whereas the latter was correctly using
the allocated size of the immediate type being converted (which may differ from the size
of the alloca). This fixes PR10082.
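The two quantities being conflated are, in TargetData terms (a sketch; the
names AI and CurTy are illustrative):
uint64_t AllocaSize = TD->getTypeAllocSize(AI->getAllocatedType()); // whole alloca
uint64_t ElemSize   = TD->getTypeAllocSize(CurTy);                  // type being converted
// These differ whenever CurTy covers only part of the alloca, so using
// AllocaSize where ElemSize is intended miscomputes the converted access.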
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132759 91177308-0d34-0410-b5e6-96231b3b80d8
which edge to split by pred/succ pair, which means that we can end up splitting
the wrong edge (by case value) in the switch statement entirely. Fixes PR10031!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132535 91177308-0d34-0410-b5e6-96231b3b80d8
This looks like it flagged an actual bug. Devang, please review. I added
the parentheses that change behavior, but make the behavior more closely
match the commit log's intent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132165 91177308-0d34-0410-b5e6-96231b3b80d8
Use a proper worklist for use-def traversal without holding onto an
iterator. Now that we process all IV uses, we need complete logic for
reusing existing derived IV defs. See HoistStep.
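The traversal itself is the standard worklist pattern, so that creating or
erasing instructions never invalidates an iterator we are holding. Roughly
(names like walkIVUses, Worklist, and Visited are generic, and this uses the
use_begin()/use_end() iteration of the time):
static void walkIVUses(Instruction *CurrIV) {
  SmallVector<Instruction*, 8> Worklist;
  SmallPtrSet<Instruction*, 16> Visited;
  Worklist.push_back(CurrIV);              // start from the induction phi
  while (!Worklist.empty()) {
    Instruction *Def = Worklist.pop_back_val();
    for (Value::use_iterator UI = Def->use_begin(), UE = Def->use_end();
         UI != UE; ++UI) {
      Instruction *UseInst = dyn_cast<Instruction>(*UI);
      if (!UseInst || Visited.count(UseInst))
        continue;
      Visited.insert(UseInst);
      // ... simplify or rewrite UseInst here, then keep walking ...
      Worklist.push_back(UseInst);
    }
  }
}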
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132103 91177308-0d34-0410-b5e6-96231b3b80d8
case of a switch instruction. Back off this optimization when this would
eliminate all of the predecessors to the latch.
Sorry, I am unable to reduce a reasonably sized test case.
rdar://9486843
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132022 91177308-0d34-0410-b5e6-96231b3b80d8
aligned.
Teach memcpyopt to not give up all hope when confronted with an underaligned
memcpy feeding an overaligned byval. If the *source* of the memcpy can be
determined to be adequately aligned, or if it can be forced to be, we can
eliminate the memcpy.
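The alignment question reduces to whether the memcpy's source is, or can be
forced to be, at least as aligned as the byval argument requires, along the
lines of this sketch (getOrEnforceKnownAlignment is the existing helper in
Transforms/Utils; the local names and exact signature here are illustrative):
// ByValAlign: the alignment the callee's byval parameter expects.
// MDep: the memcpy that filled the temporary being passed byval.
unsigned SrcAlign = getOrEnforceKnownAlignment(MDep->getSource(), ByValAlign, TD);
if (SrcAlign >= ByValAlign) {
  // The extra copy isn't needed for alignment: pass the memcpy source
  // directly as the byval argument and drop the memcpy.
}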
This addresses PR9794. We now compile the example into:
define i32 @f(%struct.p* nocapture byval align 8 %q) nounwind ssp {
entry:
%call = call i32 @g(%struct.p* byval align 8 %q) nounwind
ret i32 %call
}
in both x86-64 and x86-32 mode. We still don't get a tailcall though,
because tailcalls apparently can't handle byval.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131884 91177308-0d34-0410-b5e6-96231b3b80d8
failing to form a memset, then having to delete it" but my approximation
isn't safe for self-recurrent loops. Instead of doing a hack, just
do it the right way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131858 91177308-0d34-0410-b5e6-96231b3b80d8