1. Make it fold blocks separated by an unconditional branch. This enables
jump threading to see a broader scope.
2. Make jump threading able to eliminate locally redundant loads when they
feed the branch condition of a block. This frequently occurs due to
reg2mem running.
3. Make jump threading able to eliminate *partially redundant* loads when
they feed the branch condition of a block. This is common in code with
many loads and stores, like C++ code and 255.vortex; see the sketch below.
This implements thread-loads.ll and rdar://6402033.
Per the FIXMEs, several pieces of this should be moved into Transforms/Utils.
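As a rough sketch of the item-3 case (made-up C++, not taken from thread-loads.ll;
the function names are hypothetical): the load of *flag feeding the second branch
is redundant on the path that goes through the store, so jump threading can send
that path straight to the true successor.

  extern void do_true_thing();
  extern void do_false_thing();

  void pattern(int *flag, bool init) {
    if (init)
      *flag = 1;     // the later load's value is known on this path
    if (*flag)       // partially redundant load feeding a branch condition
      do_true_thing();
    else
      do_false_thing();
  }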
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60148 91177308-0d34-0410-b5e6-96231b3b80d8
the conditional for the BRCOND statement. For instance, it will generate:
   addl %eax, %ecx
   jo LOF

instead of:

   addl %eax, %ecx
   ; About 10 instructions to compare the signs of LHS, RHS, and sum.
   jl LOF
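For reference, this is roughly the predicate that the slow expansion computes
(a hedged C++ illustration, not code from this commit; the cast through unsigned
just avoids signed-overflow UB in the source):

  bool adds_overflow(int lhs, int rhs) {
    // Overflow occurred iff lhs and rhs have the same sign and the sum's
    // sign differs from it.
    int sum = static_cast<int>(static_cast<unsigned>(lhs) +
                               static_cast<unsigned>(rhs));
    return ((lhs ^ sum) & (rhs ^ sum)) < 0;
  }

With the new lowering, the branch simply consumes the overflow flag that the
addl already set.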
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60123 91177308-0d34-0410-b5e6-96231b3b80d8
performance in most cases on the Grawp tester, but does speed some
things up (like shootout/hash by 15%). This also doesn't impact
compile time in a noticeable way on the Grawp tester.
It also, of course, gets the testcase it was designed for right :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60120 91177308-0d34-0410-b5e6-96231b3b80d8
and the LiveInterval.h top-level comment accordingly. This fixes blocks
having spurious live-in registers in boundary cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60092 91177308-0d34-0410-b5e6-96231b3b80d8
heuristic: the value is already live at the new memory operation if
it is used by some other instruction in the memop's block. This is
cheap and simple to compute (more so than full liveness).
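A minimal sketch of that cheap test (written against current LLVM headers and
API names rather than the 2008 ones; the helper name is made up for illustration):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Value.h"
  #include "llvm/Support/Casting.h"

  // Conservatively treat V as already live at a new memory operation placed
  // in BB if some existing instruction in BB uses it.
  static bool isAlreadyLiveInBlock(const llvm::Value *V,
                                   const llvm::BasicBlock *BB) {
    for (const llvm::User *U : V->users())
      if (const auto *I = llvm::dyn_cast<llvm::Instruction>(U))
        if (I->getParent() == BB)
          return true;
    return false;
  }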
This improves the new heuristic even more. For example, it cuts two
out of three new instructions out of 255.vortex:DbmFileInGrpHdr,
which is one of the functions that the heuristic regressed. This
overall eliminates another 40 instructions from 403.gcc and visibly
reduces register pressure in 255.vortex (though this only ends up saving
those 2 instructions across the whole program).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60084 91177308-0d34-0410-b5e6-96231b3b80d8
phrased in terms of liveness instead of as a horrible hack. :)
In practice, this doesn't change the generated code for either
255.vortex or 403.gcc, but it could cause minor code changes in
theory. This is framework for coming changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60082 91177308-0d34-0410-b5e6-96231b3b80d8
-enable-smarter-addr-folding to llc) that gives CGP a better
cost model for when to sink computations into addressing modes.
The basic observation is that sinking increases register
pressure when part of the addr computation has to be available
for other reasons, such as having a use that is a non-memory
operation. In cases where it works, it can substantially reduce
register pressure.
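As a hedged, made-up example of the problem (not from the commit): the address
computation p below feeds a load and also a non-memory use, so sinking it into
the load's addressing mode cannot make it dead; base and i stay live down to
the load while p still has to be materialized for the store, so the fold does
not pay for itself.

  int example(int *base, long i, int **out) {
    int *p = base + i;   // address computation
    int v = *p;          // memory use: candidate for folding into the load
    *out = p;            // non-memory use keeps p live regardless
    return v;
  }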
This code is currently an overall win on 403.gcc and 255.vortex
(the two things I've been looking at), but there are several
things I want to do before enabling it by default:
1. This isn't doing any caching of results, so it is much slower
than it could be. It currently slows down release-asserts llc
by 1.7% on 176.gcc: 27.12s -> 27.60s.
2. This doesn't think about inline asm memory operands yet.
3. The cost model botches the case when the needed value is live
across the computation for other reasons.
I'll continue poking at this, and eventually turn it on as llcbeta.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60074 91177308-0d34-0410-b5e6-96231b3b80d8