This allows SCEV users to calculate the trip count effectively.
LSR later transforms the integer IVs back into floating point IVs
to avoid int-to-float casts inside the loop.
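As a rough illustration (my example, not from the commit), the kind of loop
this targets looks like:

  // Before: the float induction variable hides the trip count from SCEV.
  void fill(float *A) {
    for (float f = 0.0f; f < 100.0f; f += 1.0f)
      A[(int)f] = f;
  }

  // After conversion the loop has an integer IV and a trip count of 100 that
  // SCEV can compute; the body now needs an int-to-float cast for the stored
  // value, which is the cast LSR can later remove by rewriting the IV back
  // to floating point.
  void fillConverted(float *A) {
    for (int i = 0; i < 100; ++i)
      A[i] = (float)i;
  }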
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58625 91177308-0d34-0410-b5e6-96231b3b80d8
* merge two weak functions by making them both alias a third non-weak fn (see the sketch below)
* don't reimplement CallSite::hasArgument
* whitelist the safe linkage types
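A minimal C++ sketch of the first point, written against the current LLVM API
rather than the 2008-era one (mergeWeakPair and the ".merged" suffix are
illustrative only, and symbol-interposition subtleties are ignored):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/GlobalAlias.h"
  #include "llvm/IR/GlobalValue.h"
  #include <string>

  // F and G have identical bodies but weak linkage, so neither definition can
  // simply be dropped in favor of the other. Keep one body as an internal
  // function and re-expose both original names as weak aliases of it.
  static void mergeWeakPair(llvm::Function *F, llvm::Function *G) {
    using namespace llvm;
    std::string FName = F->getName().str();
    std::string GName = G->getName().str();

    // Demote F into the single internal implementation holding the shared body.
    F->setName(FName + ".merged");
    F->setLinkage(GlobalValue::InternalLinkage);

    // Re-create @f as a weak alias of the shared body under its original name.
    GlobalAlias::create(GlobalValue::WeakAnyLinkage, FName, F);

    // @g likewise; its duplicate body is deleted and in-module uses of it are
    // redirected through the new alias.
    GlobalAlias *GA =
        GlobalAlias::create(GlobalValue::WeakAnyLinkage, GName + ".tmp", F);
    G->replaceAllUsesWith(GA);
    GA->takeName(G);
    G->eraseFromParent();
  }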
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58568 91177308-0d34-0410-b5e6-96231b3b80d8
This triggers only 60 times in llvm-test (look at .llvm.bc, not .linked.rbc)
and so it probably won't be turned on by default. Also, many of those are
likely to go away when PR2973 is fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58557 91177308-0d34-0410-b5e6-96231b3b80d8
function.
- This explicitly models the costs for functions which should
"always" or "never" be inlined. This fixes bugs where such costs
were not previously respected.
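A hypothetical, self-contained sketch of that modeling (these names are
invented for illustration and are not LLVM's inline-cost interface):

  #include <climits>

  // Always-inline and never-inline are explicit modes of the cost result
  // rather than something each caller handles ad hoc.
  struct FnInlineCost {
    int Cost;     // ordinary analyzed cost
    bool Always;  // function must always be inlined
    bool Never;   // function must never be inlined

    static FnInlineCost always() { return {INT_MIN, true, false}; }
    static FnInlineCost never()  { return {INT_MAX, false, true}; }
    static FnInlineCost value(int C) { return {C, false, false}; }
  };

  // Callers check the explicit modes before comparing against any threshold,
  // which is the class of bug the commit describes fixing.
  static bool shouldInline(const FnInlineCost &IC, int Threshold) {
    if (IC.Always) return true;
    if (IC.Never)  return false;
    return IC.Cost < Threshold;
  }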
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58450 91177308-0d34-0410-b5e6-96231b3b80d8
LargeBlockInfo, we can now dramatically simplify their implementation
and speed them up at the same time. Now the code has time proportional
to the number of uses of the alloca, not the size of the block.
This also eliminates code that tried to batch up different allocas which
are used in the same blocks, and eliminates the 'retry list' logic which
was baroque and now unnecessary. In addition to being a speedup for crazy
cases, this is also a nice cleanup:
PromoteMemoryToRegister.cpp | 270 +++++++++++++++-----------------------------
1 file changed, 96 insertions(+), 174 deletions(-)
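A simplified sketch of the use-proportional approach (modern API; the real
code in PromoteMemoryToRegister.cpp does more):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instructions.h"

  // Walk the alloca's users directly instead of scanning every instruction
  // in the enclosing block: cost is O(#uses of AI), independent of block size.
  static void collectAllocaUses(llvm::AllocaInst *AI,
                                llvm::SmallVectorImpl<llvm::LoadInst *> &Loads,
                                llvm::SmallVectorImpl<llvm::StoreInst *> &Stores) {
    for (llvm::User *U : AI->users()) {
      if (auto *LI = llvm::dyn_cast<llvm::LoadInst>(U))
        Loads.push_back(LI);
      else if (auto *SI = llvm::dyn_cast<llvm::StoreInst>(U))
        Stores.push_back(SI);
    }
  }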
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58229 91177308-0d34-0410-b5e6-96231b3b80d8
a trivial dense map. Use this in RewriteSingleStoreAlloca to
avoid aggressively rescanning blocks over and over again. This
fixes PR2925, speeding up mem2reg on the testcase in that bug
from 4.56s to 0.02s in a debug build on my machine.
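A rough sketch of the idea (the BlockNumbering name and the details are
simplified from the actual LargeBlockInfo helper):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"

  // Number the instructions of a block once, then answer "does this store
  // come before that load?" in O(1) instead of rescanning the block each time.
  class BlockNumbering {
    llvm::DenseMap<const llvm::Instruction *, unsigned> Order;

  public:
    unsigned indexOf(const llvm::Instruction *I) {
      auto It = Order.find(I);
      if (It != Order.end())
        return It->second;
      // First query for this block: number everything in it.
      unsigned N = 0;
      for (const llvm::Instruction &BI : *I->getParent())
        Order[&BI] = N++;
      return Order[I];
    }

    // Assumes A and B are in the same block.
    bool comesBefore(const llvm::Instruction *A, const llvm::Instruction *B) {
      return indexOf(A) < indexOf(B);
    }
  };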
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58227 91177308-0d34-0410-b5e6-96231b3b80d8
LoopPass*.
- Although less precise, this means they can be used in clients
without RTTI (who would otherwise need to include LoopPass.h, which
eventually includes things using dynamic_cast). This was the
simplest solution that presented itself, but I am happy to use a
better one if available.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58010 91177308-0d34-0410-b5e6-96231b3b80d8
to find opportunities for store-to-load forwarding or load CSE,
in the same way that visitStore scans back to do DSE. Also, define
a new helper function for testing whether the addresses of two
memory accesses are known to have the same value, and use it in
both visitStore and visitLoad.
These two changes allow instcombine to eliminate loads in code
produced by front-ends that frequently emit obviously redundant
addressing for memory references.
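For example (my illustration, not a testcase from the commit), front-ends
commonly emit this kind of redundancy:

  struct Point { int x, y; };

  int example(Point *P) {
    P->x = P->y;         // store to P->x
    return P->x + P->x;  // both reloads of P->x can be forwarded from the
                         // store once the addresses are known to be equal
  }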
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57608 91177308-0d34-0410-b5e6-96231b3b80d8
- Renumber fcmp predicates to match their icmp counterparts.
- Try swapping operands to expose more optimization opportunities.
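A simplified sketch of the operand-swapping half, using the current C++ API
(the actual InstCombine transform is more involved):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/InstrTypes.h"

  // Canonicalize "cmp C, X" into the equivalent "cmp' X, C" so later pattern
  // matching only has to handle constants on the right-hand side.
  static void canonicalizeCompare(llvm::CmpInst *Cmp) {
    using namespace llvm;
    Value *L = Cmp->getOperand(0), *R = Cmp->getOperand(1);
    if (isa<Constant>(L) && !isa<Constant>(R)) {
      Cmp->setPredicate(CmpInst::getSwappedPredicate(Cmp->getPredicate()));
      Cmp->setOperand(0, R);
      Cmp->setOperand(1, L);
    }
  }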
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57513 91177308-0d34-0410-b5e6-96231b3b80d8
This includes no longer marking a GEP involving a vector as unsafe, provided
the GEP has all-zero indices. This allows scalarrepl to work in a few more cases.
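Roughly, the relaxed check amounts to the following (how "involves a vector"
is detected is elided here; hasAllZeroIndices() is an existing
GetElementPtrInst helper):

  #include "llvm/IR/Instructions.h"

  static bool gepIsSafeForScalarRepl(const llvm::GetElementPtrInst *GEP,
                                     bool TouchesVectorType) {
    if (!TouchesVectorType)
      return true;                // other safety checks elided in this sketch
    // Previously any vector-involving GEP was rejected outright; an
    // all-zero-index GEP is fine because it addresses the same memory as its
    // base pointer.
    return GEP->hasAllZeroIndices();
  }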
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57177 91177308-0d34-0410-b5e6-96231b3b80d8
shifting and masking inside a bswap expr. This allows it to handle
the cases from PR2842, which involve the intermediate 'or'
expressions being shifted, not just the input value.
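For reference (my example, not the PR2842 testcase), the basic pattern the
matcher collapses into a single bswap is the classic open-coded byte swap;
this change lets it also look through shifts and masks applied to the
partially assembled 'or' values rather than only to the original input:

  // Open-coded 32-bit byte swap that instcombine recognizes as llvm.bswap.i32.
  static unsigned bswap32(unsigned x) {
    return ((x & 0x000000ffu) << 24) |
           ((x & 0x0000ff00u) <<  8) |
           ((x & 0x00ff0000u) >>  8) |
           ((x & 0xff000000u) >> 24);
  }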
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57095 91177308-0d34-0410-b5e6-96231b3b80d8