U test/CodeGen/X86/byval2.ll
U test/CodeGen/X86/byval4.ll
U test/CodeGen/X86/byval.ll
U test/CodeGen/X86/byval3.ll
U test/CodeGen/X86/byval5.ll
--- Merging r127732 into '.':
U test/CodeGen/X86/stdarg.ll
U test/CodeGen/X86/fold-mul-lohi.ll
U test/CodeGen/X86/scalar-min-max-fill-operand.ll
U test/CodeGen/X86/tailcallbyval64.ll
U test/CodeGen/X86/stride-reuse.ll
U test/CodeGen/X86/sse-align-3.ll
U test/CodeGen/X86/sse-commute.ll
U test/CodeGen/X86/stride-nine-with-base-reg.ll
U test/CodeGen/X86/coalescer-commute2.ll
U test/CodeGen/X86/sse-align-7.ll
U test/CodeGen/X86/sse_reload_fold.ll
U test/CodeGen/X86/sse-align-0.ll
--- Merging r127733 into '.':
U test/CodeGen/X86/peep-vector-extract-concat.ll
U test/CodeGen/X86/pmulld.ll
U test/CodeGen/X86/widen_load-0.ll
U test/CodeGen/X86/v2f32.ll
U test/CodeGen/X86/apm.ll
U test/CodeGen/X86/h-register-store.ll
U test/CodeGen/X86/h-registers-0.ll
--- Merging r127734 into '.':
U test/CodeGen/X86/2007-01-08-X86-64-Pointer.ll
U test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll
U test/CodeGen/X86/avoid-lea-scale2.ll
U test/CodeGen/X86/lea-3.ll
U test/CodeGen/X86/vec_set-8.ll
U test/CodeGen/X86/i64-mem-copy.ll
U test/CodeGen/X86/x86-64-malloc.ll
U test/CodeGen/X86/mmx-copy-gprs.ll
U test/CodeGen/X86/vec_shuffle-17.ll
U test/CodeGen/X86/2007-07-18-Vector-Extract.ll
--- Merging r127775 into '.':
U test/CodeGen/X86/constant-pool-remat-0.ll
--- Merging r127872 into '.':
U utils/lit/lit/TestingConfig.py
U lib/Support/raw_ostream.cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/branches/release_29@128258 91177308-0d34-0410-b5e6-96231b3b80d8
have pointer types, though in contrast to C pointer types, SCEV
addition is never implicitly scaled. This not only eliminates the
need for special code like IndVars' EliminatePointerRecurrence
and LSR's own GEP expansion code, but also does a better job,
because it lets the normal optimizations handle pointer expressions
just like integer expressions.
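(As a rough illustration of the scaling difference, here is a small
C sketch; it is invented for this note and is not code from the
change itself. C pointer addition advances by whole elements, while
a SCEV-style add over a pointer is a raw byte offset, so the element
size shows up as an explicit multiply.)

  /* C pointer addition is implicitly scaled by the element size;
   * the unscaled, SCEV-style form spells out the multiply as an
   * explicit byte offset. */
  #include <stdio.h>

  int main(void) {
      int a[4] = {10, 20, 30, 40};
      int *p = a;

      /* C-style: implicitly scaled; advances sizeof(int) bytes. */
      int *q = p + 1;

      /* SCEV-style: the same step as an explicit, unscaled byte add. */
      int *r = (int *)((char *)p + 1 * sizeof(int));

      printf("%d %d\n", *q, *r);  /* both print 20 */
      return 0;
  }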
Also, since LLVM IR GEPs can't directly index into multi-dimensional
VLAs, moving the GEP analysis out of client code and into the SCEV
framework makes it easier for clients to handle multi-dimensional
VLAs the same way as other arrays.
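(To make the VLA point concrete, a hedged C sketch; the function is
invented for this note. With both dimensions unknown at compile
time, the address of a[i][j] is base + (i*m + j)*sizeof(double):
there is no constant array type for a single GEP to step through,
but the multiply-and-add form is exactly what a SCEV can represent.)

  #include <stdio.h>

  /* Multi-dimensional VLA: a[i][j] lowers to explicit arithmetic,
   * base + (i*m + j)*sizeof(double), not a constant-typed GEP. */
  static double sum(int n, int m, double a[n][m]) {
      double s = 0.0;
      for (int i = 0; i < n; i++)
          for (int j = 0; j < m; j++)
              s += a[i][j];
      return s;
  }

  int main(void) {
      double a[2][3] = {{1, 2, 3}, {4, 5, 6}};
      printf("%g\n", sum(2, 3, a));  /* prints 21 */
      return 0;
  }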
Some existing regression tests show improved optimization.
test/CodeGen/ARM/2007-03-13-InstrSched.ll in particular improved to
the point where if-conversion started kicking in; I turned it off
for this test to preserve the intent of the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@69258 91177308-0d34-0410-b5e6-96231b3b80d8
- Avoid attempting stride-reuse when there are users that aren't
  addresses. In that case, there will be places where the
  multiplications won't be folded away, so it's better to try to
  strength-reduce them.
- Several SSE intrinsics have operands that strength-reduction can
treat as addresses. The previous item makes this more visible, as
any non-address use of an IV can inhibit stride-reuse.
- Make ValidStride aware of whether there's likely to be a base
  register in the address computation. This prevents it from thinking
  that things like stride 9 are valid on x86 when the base register is
  already occupied (see the sketch after this list).
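(Sketch for the stride-9 point above; the loop is invented for this
note. An x86 address is base + index*scale + disp with scale limited
to 1, 2, 4, or 8, so for a byte array a stride-9 access can be formed
in one address only as i*8 + i, which spends both the base and index
slots on i. If the array base already needs the base register, the
stride-9 address no longer fits a single addressing mode.)

  /* Stride-9 walk over a byte array: the address a + 9*i can use the
   * i*8 + i trick only while the base-register slot is free. */
  unsigned char sum_stride9(const unsigned char *a, long n) {
      unsigned char s = 0;
      for (long i = 0; i < n; i++)
          s += a[9 * i];  /* byte address: a + 9*i */
      return s;
  }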
Also, XFAIL the 2007-08-10-LEA16Use32.ll test; the new logic to avoid
stride-reuse eliminates the LEA in the loop, so the test is no longer
testing what it was intended to test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43231 91177308-0d34-0410-b5e6-96231b3b80d8