Specifically, introduction of XXX::Create methods
for Users that have a potentially variable number of
Uses.
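A minimal standalone sketch of the factory pattern this refers to; the classes
and members below are illustrative stand-ins, not the real llvm::User API.

#include <vector>

struct Value {};

class User : public Value {
  std::vector<Value *> Operands;                // the variable-length Use list
protected:
  explicit User(const std::vector<Value *> &Ops) : Operands(Ops) {}
public:
  // Factory in the style of the new XXX::Create methods: callers never invoke
  // the constructor directly, so each object can size its Use list as needed.
  static User *Create(const std::vector<Value *> &Ops) { return new User(Ops); }
};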
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49277 91177308-0d34-0410-b5e6-96231b3b80d8
- ChangeCompareStride now only reuses a stride that is larger than the current
stride; reuse of a smaller stride is left to the general reuse mechanism.
- Watch out for multiplication overflow in ChangeCompareStride (a sketch of
such a check follows the list).
- Replace std::set with SmallPtrSet.
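A minimal sketch of the kind of overflow check meant above; the helper name
and signature are illustrative assumptions, not the actual ChangeCompareStride
code.

#include <cstdint>

// Hypothetical helper: rescale a compare constant by the (positive) stride
// ratio, refusing instead of silently wrapping when the result would not
// fit in 64 bits.
bool scaleCompareConstant(int64_t C, int64_t Factor, int64_t &Result) {
  if (Factor <= 0)                                // only the larger-stride case
    return false;
  if (C > INT64_MAX / Factor || C < INT64_MIN / Factor)
    return false;                                 // C * Factor would overflow
  Result = C * Factor;
  return true;
}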
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43408 91177308-0d34-0410-b5e6-96231b3b80d8
and the comparison is against a constant value, try to eliminate the stride
by moving the compare instruction to another stride and changing its
constant operand accordingly. e.g.
loop:
...
v1 = v1 + 3
v2 = v2 + 1
if (v2 < 10) goto loop
=>
loop:
...
v1 = v1 + 3
if (v1 < 30) goto loop
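The compare constant is rescaled by the ratio of the two strides (10 * 3/1 = 30
here); in general any difference between the starting values of the two IVs
also has to be folded into the constant, which is assumed to be zero in this
example.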
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43336 91177308-0d34-0410-b5e6-96231b3b80d8
- Avoid attempting stride-reuse in the case that there are users that
aren't addresses. In that case, there will be places where the
multiplications won't be folded away, so it's better to try to
strength-reduce them.
- Several SSE intrinsics have operands that strength-reduction can
treat as addresses. The previous item makes this more visible, as
any non-address use of an IV can inhibit stride-reuse.
- Make ValidStride aware of whether there's likely to be a base
register in the address computation. This prevents it from thinking
that things like stride 9 are valid on x86 when the base register is
already occupied (a sketch of this kind of check follows the list).
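A sketch of the kind of check the last point describes; the helper below is a
hypothetical stand-in, not the actual ValidStride implementation. On x86, a
stride like 9 is only addressable as base + index*8 with the base slot holding
another copy of the IV, so it is unusable once a real base register is needed.

#include <cstdint>

// Hypothetical x86-flavoured validity check, not the real ValidStride.
bool isStrideUsableOnX86(int64_t Stride, bool AddressNeedsBaseReg) {
  switch (Stride) {
  case 1: case 2: case 4: case 8:
    return true;                       // plain scaled-index addressing
  case 3: case 5: case 9:
    // These need base + index*{2,4,8} with the IV occupying the base slot,
    // so they only work when no other base register is required.
    return !AddressNeedsBaseReg;
  default:
    return false;
  }
}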
Also, XFAIL the 2007-08-10-LEA16Use32.ll test; the new logic to avoid
stride-reuse eliminates the LEA in the loop, so the test is no longer
testing what it was intended to test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43231 91177308-0d34-0410-b5e6-96231b3b80d8
directly, because the insert point used by the SCEVExpander may vary
from what LSR originally computes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40641 91177308-0d34-0410-b5e6-96231b3b80d8
deleteValueFromRecords and loosen the types to allow it to accept
Value* instead of just Instruction*, since this is what
ScalarEvolution uses internally anyway. This allows more flexibility
for future uses.
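A minimal standalone sketch of the loosened signature; the types and class
body below are illustrative stand-ins, with only the parameter change taken
from this message.

#include <set>

struct Value {};
struct Instruction : Value {};

class ScalarEvolutionSketch {
  std::set<Value *> Records;            // keyed on Value*, as noted above
public:
  // Previously this took Instruction*; it now accepts any Value*.
  void deleteValueFromRecords(Value *V) { Records.erase(V); }
};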
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37657 91177308-0d34-0410-b5e6-96231b3b80d8
This created an ambiguity for expandInTy to decide when to use
sign-extension or zero-extension, but it turns out that most of its callers
don't actually need a type conversion, now that LLVM types don't have
explicit signedness. Drop expandInTy in favor of plain expand, and change
the few places that actually need a type conversion to do it themselves.
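A standalone sketch of the caller-side pattern: plain expand no longer chooses
between sign- and zero-extension, and the rare caller that needs a wider value
applies the extension it knows is correct. All names and types below are
illustrative assumptions, not the SCEVExpander API.

#include <cstdint>

int32_t expandExpr() { return -7; }     // stands in for plain expand()

// Caller that wants the value sign-extended to 64 bits.
int64_t expandAsSigned() { return static_cast<int64_t>(expandExpr()); }

// Caller that wants the value zero-extended: go through the unsigned 32-bit
// representation first, then widen.
uint64_t expandAsUnsigned() { return static_cast<uint32_t>(expandExpr()); }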
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37591 91177308-0d34-0410-b5e6-96231b3b80d8
Due to a darwin gcc bug, one version of the darwin linker coalesces
static const int, which defeats PassID-based pass identification.
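A standalone illustration (not the actual PassID machinery) of why this
matters: pass identity is the address of a static member, not its value, so
coalescing two identical constants makes two passes indistinguishable.

// Two unrelated passes with identical 'static const int' members; a linker
// that coalesces identical constants may give both members the same address.
struct PassA { static const int ID; };
struct PassB { static const int ID; };
const int PassA::ID = 0;
const int PassB::ID = 0;

// Identity comparison is by address, so after coalescing this can wrongly
// return true for distinct passes; a distinct non-const static object per
// pass (for example a static char) cannot be merged this way.
bool samePass(const void *A, const void *B) { return A == B; }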
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@36652 91177308-0d34-0410-b5e6-96231b3b80d8
constructing ImmediateDominator is now folded into DomTree construction.
This is part of the ongoing work for PR217.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@36063 91177308-0d34-0410-b5e6-96231b3b80d8
out to do! :)
This fixes a problem where LSR would insert a bunch of code into each MBB
that uses a particular subexpression (e.g. IV+base+C). The problem is that
this code cannot be CSE'd back together if inserted into different blocks.
This patch changes LSR to attempt to insert a single copy of this code and
share it, allowing codegenprepare to duplicate the code if it can be sunk
into various addressing modes. On CodeGen/ARM/lsr-code-insertion.ll,
for example, this gives us code like:
add r8, r0, r5
str r6, [r8, #+4]
..
ble LBB1_4 @cond_next
LBB1_3: @cond_true
str r10, [r8, #+4]
LBB1_4: @cond_next
...
LBB1_5: @cond_true55
ldr r6, LCPI1_1
str r6, [r8, #+4]
instead of:
add r10, r0, r6
str r8, [r10, #+4]
...
ble LBB1_4 @cond_next
LBB1_3: @cond_true
add r8, r0, r6
str r10, [r8, #+4]
LBB1_4: @cond_next
...
LBB1_5: @cond_true55
add r8, r0, r6
ldr r10, LCPI1_1
str r10, [r8, #+4]
Besides being smaller and more efficient, this makes it immediately
obvious that it is profitable to predicate LBB1_3 now :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35972 91177308-0d34-0410-b5e6-96231b3b80d8