out, where after the first CombineTo() call, the node the second CombineTo
wishes to replace may no longer exist.
Fix a very real bug with the truncated load optimization on little endian
targets, which do not need a byte offset added to the load.
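As a standalone illustration (a hypothetical example, not the combiner code), the byte offset exists only to reach the low-order bytes on a big-endian target; on little endian they already sit at offset zero:

  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  int main() {
    uint32_t Word = 0x11223344;
    uint16_t Lo;
    // Truncating a 32-bit load to 16 bits: on little endian the low half
    // is at byte offset 0; a big-endian target would need offset 2.
    std::memcpy(&Lo, reinterpret_cast<const char *>(&Word), 2);
    std::printf("%#x\n", Lo); // prints 0x3344 on a little-endian host
    return 0;
  }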
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23704 91177308-0d34-0410-b5e6-96231b3b80d8
like turning:
_foo:
fctiwz f0, f1
stfd f0, -8(r1)
lwz r2, -4(r1)
rlwinm r3, r2, 0, 16, 31
blr
into
_foo:
fctiwz f0,f1
stfd f0,-8(r1)
lhz r3,-2(r1)
blr
Also removed an unnecessary constraint from the sra -> srl conversion, which
should take care of the only reason we would ever need to handle sra in
MaskedValueIsZero, AFAIK.
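For the record, a tiny hypothetical example of why sra can become srl here: the two shifts differ only in the bits shifted in from the sign position, so they agree whenever the sign bit is known zero:

  #include <cstdint>
  #include <cassert>

  int main() {
    uint32_t X = 0x00001234; // sign bit known zero
    // sra shifts in copies of the sign bit, srl shifts in zeros; with a
    // clear sign bit both shift in zeros and the results are identical.
    assert(uint32_t(int32_t(X) >> 4) == (X >> 4));
    return 0;
  }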
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23703 91177308-0d34-0410-b5e6-96231b3b80d8
location, replace them with a new store of the last value. This occurs
in the same neighborhood in 197.parser, speeding it up by about 1.5%.
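At the source level the pattern looks like the following hypothetical example, where the first store can never be observed:

  // The store of A is dead: no load intervenes before it is overwritten,
  // so only a single store of the last value (B) needs to be emitted.
  void set_twice(int *P, int A, int B) {
    *P = A;
    *P = B;
  }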
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23691 91177308-0d34-0410-b5e6-96231b3b80d8
multiple results.
Use this support to implement trivial store->load forwarding, implementing
CodeGen/PowerPC/store-load-fwd.ll. Though this is the most simple case and
can be extended in the future, it is still useful. For example, it speeds
up 197.parser by 6.2% by avoiding an LSU reject in xalloc:
stw r6, lo16(l5_end_of_array)(r2)
addi r2, r5, -4
stwx r5, r4, r2
- lwzx r5, r4, r2
- rlwinm r5, r5, 0, 0, 30
stwx r5, r4, r2
lwz r2, -4(r4)
ori r2, r2, 1
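In source form the forwarded case looks like this hypothetical example, where the reload never has to touch memory:

  // The load reads back exactly what was just stored through the same
  // pointer, so it can be replaced by the stored value V directly.
  int store_then_load(int *P, int V) {
    *P = V;
    return *P; // becomes 'return V;' while the store itself remains
  }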
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23690 91177308-0d34-0410-b5e6-96231b3b80d8
creating a new vreg and inserting a copy: just use the input vreg directly.
This speeds up the compile (e.g. about 5% on mesa with a debug build of llc)
by not adding a bunch of copies and vregs to be coalesced away. On mesa,
for example, this reduces the number of intervals from 168601 to 129040
going into the coalescer.
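A minimal sketch of the change, with hypothetical helper names rather than the actual isel code:

  unsigned SelectValue(SDOperand Op, const TargetRegisterClass *RC) {
    unsigned InReg = getExistingVReg(Op); // hypothetical lookup
    if (InReg)                            // already in a suitable vreg:
      return InReg;                       // use it directly, no copy
    unsigned NewReg = makeVReg(RC);       // otherwise fall back to the
    emitCopy(NewReg, Op);                 // old create-and-copy path
    return NewReg;
  }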
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23671 91177308-0d34-0410-b5e6-96231b3b80d8
the 177.mesa failure from last night, and fixes the
CodeGen/PowerPC/2005-10-08-ArithmeticRotate.ll regression test I added.
If this code cannot be fixed, it should be removed for good, but I'll leave
it to Nate to decide its fate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23670 91177308-0d34-0410-b5e6-96231b3b80d8
is faster and uses less stack space. This reduces our stack requirement
enough to compile sixtrack, and though it's a hack, should be enough until
we switch to iterative isel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23664 91177308-0d34-0410-b5e6-96231b3b80d8
classes on PPC. We were emitting fmr instructions to do fp extensions, which
weren't getting coalesced. This fixes Regression/CodeGen/PowerPC/fpcopy.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23654 91177308-0d34-0410-b5e6-96231b3b80d8
helps but not enough.
Start pulling cases out of PPC32DAGToDAGISel::Select. With GCC 4, this function
required 8512 bytes of stack space for each invocation (GCC 3 required less
than 700 bytes). Pulling this first function out gets us down to 8224. More
to come :(
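A generic, standalone illustration of the trick (hypothetical functions, not the PPC code): once a case body lives in its own function, its locals occupy stack only while that case actually runs, instead of being summed into every Select() frame by the compiler:

  int heavyCase(int X) {
    int Scratch[512] = {0}; // big locals now die with this helper's frame
    Scratch[0] = X;
    return Scratch[0];
  }

  int select(int Opcode, int X) {
    switch (Opcode) {
    case 1:  return heavyCase(X); // was an inline case body before
    default: return X;
    }
  }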
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23647 91177308-0d34-0410-b5e6-96231b3b80d8
previous copy elisions and we discover we need to reload a register, make
sure to use the regclass of the original register for the reload, not the
class of the current register. This avoids using 16-bit loads to reload 32-bit
values.
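A hedged sketch of the rule, with illustrative names rather than the actual spiller code:

  // Type the reload by the class the value was spilled with, not by the
  // class of whichever register last happened to hold it; otherwise a
  // 32-bit stack slot can come back through a 16-bit load.
  const TargetRegisterClass *RC = getOriginalClass(OrigVReg); // not CurReg
  emitReload(PhysReg, StackSlot, RC);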
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23645 91177308-0d34-0410-b5e6-96231b3b80d8
store r12 -> [ss#2]
R3 = load [ss#1]
use R3
R3 = load [ss#2]
R4 = load [ss#1]
and turn it into this code:
store R12 -> [ss#2]
R3 = load [ss#1]
use R3
R3 = R12
R4 = R3 <- oops!
The problem was that promoting R3 = load[ss#2] to a copy missed the fact that
the instruction invalidated R3 at that point.
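A hedged sketch of the fix, using an illustrative availability map rather than the spiller's real data structures:

  // Promoting the reload to a copy still (re)defines R3, so whatever the
  // spiller believed R3 held must be forgotten before R3 is reused.
  SlotHeldByReg.erase(DestReg);        // drop the stale R3 -> ss#1 entry
  SlotHeldByReg[DestReg] = CopiedSlot; // R3 now holds ss#2's value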
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23638 91177308-0d34-0410-b5e6-96231b3b80d8
with the dag combiner. This speeds up espresso by 8%, reaching performance
parity with the dag-combiner-disabled llc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23636 91177308-0d34-0410-b5e6-96231b3b80d8
dead node elim and dag combiner passes where the root is potentially updated.
This fixes a fixme in the dag combiner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23634 91177308-0d34-0410-b5e6-96231b3b80d8
that testcase still does not pass with the dag combiner. This is because
not all forms of br* are folded yet.
Also, when we combine a node into another one, delete the node immediately
instead of waiting for the node to potentially come up in the future.
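A hedged sketch of the eager deletion, with illustrative worklist/RAUW helpers rather than the actual combiner code:

  void combineTo(SDNode *N, SDNode *Replacement) {
    replaceAllUsesWith(N, Replacement); // hypothetical RAUW helper
    Worklist.remove(N);                 // never hand out the dead node
    DAG.DeleteNode(N);                  // delete now, not lazily later
  }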
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23632 91177308-0d34-0410-b5e6-96231b3b80d8
When moving constant entries in 'Map' if the entry is the representative
constant for the abstractypemap, make sure to update it as well. This
fixes the bcreader failures from last night on several C++ apps.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23628 91177308-0d34-0410-b5e6-96231b3b80d8
true dynamically. Finally, pass the Use* that replaceAllUsesWith has into
the method for future use.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23626 91177308-0d34-0410-b5e6-96231b3b80d8
creating the keys and doing comparisons to index into 'Map' takes a lot
of time. For these large constants, keep an inverse map so that 'remove'
and move operations are much faster.
This speeds up a release build of the bc reader on Eric's nasty python
bytecode file from 1:39 to 1:00.
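A minimal sketch of the structure with simplified key and value types; the real map is keyed by the constant's full descriptor, which is exactly what is expensive to rebuild for remove and move:

  #include <map>

  typedef std::map<int, void *> MapTy;          // Key -> Constant* (simplified)
  MapTy Map;
  std::map<void *, MapTy::iterator> InverseMap; // Constant* -> its entry

  void removeConstant(void *C) {
    std::map<void *, MapTy::iterator>::iterator I = InverseMap.find(C);
    if (I == InverseMap.end())
      return;
    Map.erase(I->second); // erase by iterator: no key rebuild, no search
    InverseMap.erase(I);
  }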
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23624 91177308-0d34-0410-b5e6-96231b3b80d8
Since calls return more than one value, don't bail if one of their uses
happens to be a node whose value type isn't MVT::Other when following the chain
from CALLSEQ_START to CALLSEQ_END.
Once we've found a CALLSEQ_START, we can just return; there's no need to
tail-recurse further up the graph.
Most importantly, just because something only has one use doesn't mean we
should use its one use to follow from start to end. This faulty logic
caused us to follow a chain of one-use FP operations back to a much earlier
call, putting a cycle in the graph from a later start to an earlier end.
This is a better fix than reverting to the workaround committed earlier
today.
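A hedged sketch of the corrected walk (illustrative shape, not the legalizer source): follow chain edges only, and stop as soon as the start node appears:

  static SDNode *findCallSeqStart(SDNode *N) {
    if (N->getOpcode() == ISD::CALLSEQ_START)
      return N; // found: return immediately, no further recursion
    for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i)
      // Follow only chain (MVT::Other) operands. Having a single use is
      // no license to follow a value edge: non-chain edges can lead back
      // to an unrelated earlier call and close a cycle in the graph.
      if (N->getOperand(i).getValueType() == MVT::Other)
        return findCallSeqStart(N->getOperand(i).Val);
    return 0;
  }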
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23620 91177308-0d34-0410-b5e6-96231b3b80d8
Neither of us has yet figured out why this code is necessary, but stuff
breaks if it's not there. Still tracking this down...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23617 91177308-0d34-0410-b5e6-96231b3b80d8
constant arrays in place instead of reallocating them and replaceAllUsesOf'ing
the result. This speeds up a release build of the bcreader from:
136.987u 120.866s 4:24.38
to
49.790u 49.890s 1:40.14
... a 2.6x speedup parsing a large python bc file.
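A generic, standalone illustration of why in-place growth wins (not the ConstantArray code): the object everyone points at stays put, so there is no follow-up pass to rewrite references:

  #include <vector>

  void growInPlace(std::vector<int> &Elements, int NewElt) {
    // Users hold a reference to Elements itself, which never moves, so
    // appending requires no replaceAllUsesOf-style fixup afterwards.
    Elements.push_back(NewElt);
  }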
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23614 91177308-0d34-0410-b5e6-96231b3b80d8
lookups instead of linear time lookups. This speeds up bc parsing of a
large file from
137.834u 118.256s 4:27.96
to
132.611u 114.436s 4:08.53
with a release build.
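A minimal sketch of the shape of the change, with simplified types:

  #include <map>
  #include <string>

  std::map<std::string, int> Table; // was: a linear scan over a list

  int lookup(const std::string &Name) {
    std::map<std::string, int>::const_iterator I = Table.find(Name);
    return I == Table.end() ? -1 : I->second; // O(log n) per lookup
  }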
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23611 91177308-0d34-0410-b5e6-96231b3b80d8
check the presplit pred, not the post-split pred. This was causing us
to make the wrong decision in some cases, leaving the critical edge block
in the loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23601 91177308-0d34-0410-b5e6-96231b3b80d8
large basic blocks because it was purely recursive. This switches it to an
iterative/recursive hybrid.
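One classic shape for such a hybrid, sketched over an assumed node type rather than the actual pass: recurse on all operands but the last and loop on the last one, so a long straight-line chain consumes no call-stack depth at all:

  #include <vector>

  struct Node { std::vector<Node *> Operands; bool Done; };

  static void visit(Node *N) {
    while (N && !N->Done) {
      N->Done = true;
      /* process N here */
      for (size_t i = 0; i + 1 < N->Operands.size(); ++i)
        visit(N->Operands[i]); // recursive step: only at branch points
      N = N->Operands.empty() ? 0 : N->Operands.back(); // iterative step
    }
  }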
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23596 91177308-0d34-0410-b5e6-96231b3b80d8