Commit Graph

64 Commits

Brian Gaeke
8311befb69 Give a better message for a common assertion failure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17887 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-16 06:52:35 +00:00
Alkis Evlogimenos
c4d3b91816 Fix includes. Patch contributed by Paolo Invernizzi!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16533 91177308-0d34-0410-b5e6-96231b3b80d8
2004-09-28 02:38:58 +00:00
Reid Spencer
551ccae044 Changes For Bug 352
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16137 91177308-0d34-0410-b5e6-96231b3b80d8
2004-09-01 22:55:40 +00:00
Chris Lattner
c25b55a5b2 Fix the sense of joinable
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15196 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-25 07:47:25 +00:00
Chris Lattner
f542649f1b This patch makes use of the infrastructure implemented earlier to safely and
aggressively coalesce live ranges even when they overlap.  Consider this LLVM
code for example:

int %test(int %X) {
        %Y = mul int %X, 1      ;; Codegens to Y = X
        %Z = add int %X, %Y
        ret int %Z
}

The mul is just there to get a copy into the code stream.  This produces
this machine code:

 (0x869e5a8, LLVM BB @0x869b9a0):
        %reg1024 = mov <fi#-2>, 1, %NOREG, 0    ;; "X"
        %reg1025 = mov %reg1024                 ;; "Y"  (subsumed by X)
        %reg1026 = add %reg1024, %reg1025
        %EAX = mov %reg1026
        ret

Note that the lifetimes of reg1024 and reg1025 overlap, even though they
contain the same value.  This results in this machine code:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        add %EAX, %ECX
        ret

Another, worse case involves loops and PHI nodes.  Consider this trivial loop
testcase:

int %test2(int %X) {
entry:
        br label %Loop
Loop:
        %Y = phi int [%X, %entry], [%Z, %Loop]
        %Z = add int %Y, 1
        %cond = seteq int %Z, 100
        br bool %cond, label %Out, label %Loop
Out:
        ret int %Z
}

Because of interactions between the PHI elimination pass and the register
allocator, this got compiled to this code:

test2:
        mov %ECX, DWORD PTR [%ESP + 4]
.LBBtest2_1:
***     mov %EAX, %ECX
        inc %EAX
        cmp %EAX, 100
***     mov %ECX, %EAX
        jne .LBBtest2_1

        ret

Or on powerpc, this code:

_test2:
        mflr r0
        stw r0, 8(r1)
        stwu r1, -60(r1)
.LBB_test2_1:
        addi r2, r3, 1
        cmpwi cr0, r2, 100
***     or r3, r2, r2
        bne cr0, .LBB_test2_1

***     or r3, r2, r2
        lwz r0, 68(r1)
        mtlr r0
        addi r1, r1, 60
        blr 0



With this improvement in place, we now generate this code for these two
testcases, which is what we want:


test:
        mov %EAX, DWORD PTR [%ESP + 4]
        add %EAX, %EAX
        ret

test2:
        mov %EAX, DWORD PTR [%ESP + 4]
.LBBtest2_1:
        inc %EAX
        cmp %EAX, 100
        jne .LBBtest2_1 # Loop
        ret

Or on PPC:

_test2:
        mflr r0
        stw r0, 8(r1)
        stwu r1, -60(r1)
.LBB_test2_1:
        addi r3, r3, 1
        cmpwi cr0, r3, 100
        bne cr0, .LBB_test2_1

        lwz r0, 68(r1)
        mtlr r0
        addi r1, r1, 60
        blr 0


Static numbers for spill code loads/stores/reg-reg copies (smaller is better):

em3d:       before: 47/25/26         after: 44/22/24
164.gzip:   before: 433/245/310      after: 403/231/278
175.vpr:    before: 3721/2189/1581   after: 4144/2081/1423
176.gcc:    before: 26195/8866/9235  after: 25942/8082/8275
186.crafty: before: 4295/2587/3079   after: 4119/2519/2916
252.eon:    before: 12754/7585/5803  after: 12508/7425/5643
256.bzip2:  before: 463/226/315      after: 482/241/309


Runtime perf number samples on X86:

gzip: before: 41.09 after: 39.86
bzip2: before: 56.71s after: 57.07s
gcc: before: 6.16 after: 6.12
eon: before: 2.03s after: 2.00s


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15194 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-25 07:11:19 +00:00
Chris Lattner
d3a205eab5 Make a method const, no functionality changes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15193 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-25 06:23:01 +00:00
Chris Lattner
6925a9f9cc Fix a bug in the range remover
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15188 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-25 05:43:53 +00:00
Alkis Evlogimenos
a1613db62f Change std::map<unsigned, LiveInterval*> into a std::map<unsigned,
LiveInterval>. This saves some space and removes a level of pointer
indirection on every lookup.
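
A minimal sketch of the difference, with made-up member and variable names
(this is not the actual LiveIntervals code):

    #include <map>

    struct LiveInterval {
        unsigned reg;      // placeholder members for illustration
        float weight;
    };

    // Before: each lookup chases a pointer to a separately allocated interval.
    std::map<unsigned, LiveInterval*> r2iPtr;

    // After: the interval lives inside the map node itself, so a lookup
    // hands back the object directly with no extra indirection.
    std::map<unsigned, LiveInterval> r2iVal;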


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15167 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-24 11:44:15 +00:00
Chris Lattner
deb9971061 In the joiner, merge the small interval into the large interval. This gets
us back to taking about 10.5s on gcc, instead of 15.6s!  The net result is
that my big patches have had no significant effect on compile time or code
quality.  heh.
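
A hedged sketch of the idea, using made-up types and names: when joining two
intervals, fold the one with fewer ranges into the one with more, so the work
tracks the smaller operand.

    #include <vector>

    struct Range    { unsigned start, end; };
    struct Interval { std::vector<Range> ranges; };

    // Join Other into Dest, swapping first so the larger range list is the
    // one kept in place (illustrative; real code merges Other's ranges into
    // Dest's sorted list rather than just appending).
    void join(Interval &Dest, Interval &Other) {
        if (Dest.ranges.size() < Other.ranges.size())
            Dest.ranges.swap(Other.ranges);
        for (const Range &R : Other.ranges)
            Dest.ranges.push_back(R);   // placeholder for a proper sorted merge
        Other.ranges.clear();
    }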


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15156 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-24 03:41:50 +00:00
Chris Lattner
abf295fc6c Little stuff:
* Fix comment typo
* add dump() methods
* add a few new methods like getLiveRangeContaining, removeRange & joinable
  (which is currently the same as overlaps)
* Remove the unused operator==

Bigger change:

* In LiveInterval, instead of using a boolean isDefinedOnce to keep track of
  whether there is more than one definition in a particular interval, keep a
  counter, NumValues, to keep track of exactly how many there are.
* In LiveRange, add a new ValId field to indicate which of the numbered
  values each LiveRange belongs to.  We no longer merge LiveRanges with
  differing value IDs, even if they are neighbors (see the sketch below).
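
A rough sketch of the two data-structure changes above; the member names
follow the commit message, but the bodies are illustrative rather than the
exact code:

    #include <vector>

    struct LiveRange {
        unsigned start, end;   // instruction-number range
        unsigned ValId;        // which of the interval's values this range carries
    };

    struct LiveInterval {
        unsigned reg;
        unsigned NumValues = 0;            // replaces the old isDefinedOnce flag
        std::vector<LiveRange> ranges;

        unsigned getNextValue() { return NumValues++; }

        // Neighboring ranges are merged only when they carry the same value.
        static bool mergeable(const LiveRange &A, const LiveRange &B) {
            return A.ValId == B.ValId && A.end == B.start;
        }
    };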


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15152 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-24 02:52:23 +00:00
Chris Lattner
b26c215c05 Change addRange and join to be a little bit smarter. In particular, we don't
want to insert a new range into the middle of the vector, then delete ranges
one at a time next to the inserted one as they are merged.

Instead, if the inserted interval overlaps, just start merging.  The only time
we insert into the middle of the vector is when we don't overlap at all.  Also
delete blocks of live ranges if we overlap with many of them.

This patch speeds up joining by 0.7 seconds on a large testcase, but more
importantly it gets all of the range-adding code into addRangeFrom.
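
A simplified sketch of the strategy (a hypothetical helper, not the real
addRangeFrom): grow an overlapping range in place and erase the whole run of
ranges it swallows at once, inserting into the vector only when nothing
overlaps.

    #include <algorithm>
    #include <vector>

    struct Range { unsigned start, end; };

    // 'it' points at the first existing range that might overlap NR.
    void addRangeFrom(std::vector<Range> &ranges,
                      std::vector<Range>::iterator it, Range NR) {
        if (it != ranges.end() && it->start <= NR.end && NR.start <= it->end) {
            // Overlap: extend the existing range instead of inserting a new one.
            it->start = std::min(it->start, NR.start);
            it->end   = std::max(it->end, NR.end);
            auto next = it + 1;
            while (next != ranges.end() && next->start <= it->end)
                it->end = std::max(it->end, (next++)->end);
            ranges.erase(it + 1, next);   // drop the swallowed ranges in one block
        } else {
            ranges.insert(it, NR);        // no overlap at all: plain insert
        }
    }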


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15141 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-23 19:38:44 +00:00
Chris Lattner
aa14147cd6 Search by the start point, not by the whole interval. This saves some
comparisons, reducing linear scan by another 0.1 seconds :)
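
A small sketch of the kind of lookup this enables, assuming the ranges are
kept sorted by start point (the function name is made up):

    #include <algorithm>
    #include <vector>

    struct Range { unsigned start, end; };

    // Binary-search on the start point alone: one key comparison per probe
    // instead of comparing whole (start, end) pairs.
    std::vector<Range>::const_iterator
    upperBoundByStart(const std::vector<Range> &ranges, unsigned Pos) {
        return std::upper_bound(ranges.begin(), ranges.end(), Pos,
                                [](unsigned P, const Range &R) { return P < R.start; });
    }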


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15139 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-23 18:40:00 +00:00
Chris Lattner
ebd7e6c54d Instead of searching for a live interval pair, search for a location. This gives
a very modest speedup of 0.3 seconds compiling 176.gcc (out of 20s).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15136 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-23 18:13:24 +00:00
Chris Lattner
fb449b9ea5 Pull the LiveRange and LiveInterval classes out of LiveIntervals.h (which
will soon be renamed) into their own file.  The new file should not emit
DEBUG output or have other side effects.  The LiveInterval class also no
longer knows whether it is working on registers or on something else.

In the future we will want to use the LiveInterval class and friends to do
stack packing.  Besides being a code simplification, this change will make
that easier.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15134 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-23 17:49:16 +00:00