that "machine" classes are used to represent the current state of
the code being compiled. Given this expanded name, we can start
moving other stuff into it. For now, move the UsedPhysRegs and
LiveIn/LiveOut vectors from MachineFunction into it.
Update all the clients to match.
This also removes some needless #includes, such as MachineModuleInfo
from MachineFunction.
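Illustratively, a client update looks something like this (a sketch only; it
assumes the renamed class is MachineRegisterInfo and that MachineFunction
grew a getRegInfo() accessor):

#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
using namespace llvm;

void markLiveIn(MachineFunction &MF, unsigned PReg, unsigned VReg) {
  // Before the move: MF.addLiveIn(PReg, VReg);
  MF.getRegInfo().addLiveIn(PReg, VReg);  // the LiveIns vector lives here now
}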
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45467 91177308-0d34-0410-b5e6-96231b3b80d8
This reduces selectiondag time on kc++ from 5.43s to 4.98s (9%). More
significantly, this speeds up the default ppc scheduler from ~1571ms to 1063ms,
a 33% speedup.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29743 91177308-0d34-0410-b5e6-96231b3b80d8
2. Added argument to instruction scheduler creators so the creators can do
special things.
3. Repaired target hazard code.
4. Misc.
More to follow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29450 91177308-0d34-0410-b5e6-96231b3b80d8
separate file. Added an initial implementation of top-down register pressure
reduction list scheduler.
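For orientation, the core of such a scheduler looks roughly like the toy
below (a sketch under assumed names, not the real SUnit layout; Priority
would encode the register-pressure heuristic):

#include <queue>
#include <vector>

struct SUnit {
  int Priority;                     // register-pressure based rank
  int NumPredsLeft;                 // unscheduled predecessors remaining
  std::vector<SUnit*> Succs;
};

struct LowerPressureFirst {
  bool operator()(const SUnit *A, const SUnit *B) const {
    return A->Priority < B->Priority;       // highest priority on top
  }
};

std::vector<SUnit*> listScheduleTopDown(const std::vector<SUnit*> &Roots) {
  std::priority_queue<SUnit*, std::vector<SUnit*>, LowerPressureFirst> Avail;
  for (size_t i = 0; i != Roots.size(); ++i)
    Avail.push(Roots[i]);
  std::vector<SUnit*> Sequence;
  while (!Avail.empty()) {
    SUnit *SU = Avail.top();
    Avail.pop();
    Sequence.push_back(SU);                 // issue the best ready node
    for (size_t i = 0; i != SU->Succs.size(); ++i)
      if (--SU->Succs[i]->NumPredsLeft == 0)
        Avail.push(SU->Succs[i]);           // all preds issued: now ready
  }
  return Sequence;
}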
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28226 91177308-0d34-0410-b5e6-96231b3b80d8
the distance between the def and another use is much longer). For now this
is guarded by the option "-sched-lower-defnuse".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28201 91177308-0d34-0410-b5e6-96231b3b80d8
scheduler can go into a "vertical mode" (i.e. traversing up the two-address
chain, etc.) when the register pressure is low.
This does seem to reduce the number of spills in the cases I've looked at,
but on x86 that is no guarantee the resulting code runs any faster.
It can be turned on with the -sched-vertically option.
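Schematically, the pick might look like this (a toy sketch; the threshold
and field names are invented for illustration):

struct SUnit {
  SUnit *TwoAddrPred;   // def&use chain predecessor, if any
  bool Ready;
};

// Under low register pressure, keep walking up the two-address chain from
// the last scheduled node ("vertical" pick); otherwise take the normal
// best-available node.
SUnit *pickNext(SUnit *Last, unsigned Pressure, unsigned Threshold,
                SUnit *BestAvailable) {
  if (Pressure < Threshold && Last && Last->TwoAddrPred &&
      Last->TwoAddrPred->Ready)
    return Last->TwoAddrPred;
  return BestAvailable;
}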
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28108 91177308-0d34-0410-b5e6-96231b3b80d8
the heuristic to further reduce spills for several test cases. (Note: this
may not necessarily translate into a runtime win!)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28076 91177308-0d34-0410-b5e6-96231b3b80d8
up the schedule. This helps code that looks like this:
loads ...
computations (first set) ...
stores (first set) ...
loads
computations (second set) ...
stores (second set) ...
Without this change, the stores and computations are more likely to
interleave:
loads ...
loads ...
computations (first set) ...
computations (second set) ...
computations (first set) ...
stores (first set) ...
computations (second set) ...
stores (second set) ...
This can increase the number of spills if we are unlucky.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28033 91177308-0d34-0410-b5e6-96231b3b80d8
to be emitted.
Don't add one to the latency of a completed instruction if the latency of the
op is 0.
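In code terms the fix is roughly this (illustrative, not the exact source):

// Cycle at which a just-issued op's result becomes available. Previously
// the +1 was applied unconditionally, charging zero-latency ops a cycle
// they don't actually take.
unsigned resultReadyCycle(unsigned CurCycle, unsigned Latency) {
  return Latency == 0 ? CurCycle : CurCycle + Latency + 1;
}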
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26718 91177308-0d34-0410-b5e6-96231b3b80d8
operands have all issued, but whose results are not yet available. This
allows us to compile:
int G;
int test(int A, int B, int* P) {
return (G+A)*(B+1);
}
to:
_test:
lis r2, ha16(L_G$non_lazy_ptr)
addi r4, r4, 1
lwz r2, lo16(L_G$non_lazy_ptr)(r2)
lwz r2, 0(r2)
add r2, r2, r3
mullw r3, r2, r4
blr
instead of this, which has a stall between the lis/lwz:
_test:
lis r2, ha16(L_G$non_lazy_ptr)
lwz r2, lo16(L_G$non_lazy_ptr)(r2)
addi r4, r4, 1
lwz r2, 0(r2)
add r2, r2, r3
mullw r3, r2, r4
blr
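A minimal sketch of the pending-queue idea (all names assumed): successors
of an issued instruction don't go straight to the available queue; they sit
in a pending list until the cycle their operands' results are actually ready,
letting the scheduler fill the stall with independent work (the addi above).

#include <vector>

struct SUnit { unsigned ReadyCycle; };   // cycle its operands are ready

void advanceCycle(unsigned &CurCycle,
                  std::vector<SUnit*> &Pending,
                  std::vector<SUnit*> &Available) {
  ++CurCycle;
  // Promote everything whose latency has expired.
  for (size_t i = 0; i != Pending.size(); )
    if (Pending[i]->ReadyCycle <= CurCycle) {
      Available.push_back(Pending[i]);
      Pending[i] = Pending.back();
      Pending.pop_back();
    } else
      ++i;
}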
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26716 91177308-0d34-0410-b5e6-96231b3b80d8
merge succs/chainsuccs -> succs set
This has no functionality change, simplifies the code, and reduces the size
of sunits.
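The representational change, schematically (field names assumed):

#include <utility>
#include <vector>

struct SUnit {
  // Old: std::set<SUnit*> Succs; std::set<SUnit*> ChainSuccs;
  // New: one set, with the chain-vs-data distinction on the edge itself.
  std::vector<std::pair<SUnit*, bool> > Succs;  // bool: is a chain edge
};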
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26711 91177308-0d34-0410-b5e6-96231b3b80d8
keep track of a sense of "mobility", i.e. how many other nodes scheduling one
node will free up. For something like this:
float testadd(float *X, float *Y, float *Z, float *W, float *V) {
return (*X+*Y)*(*Z+*W)+*V;
}
For example, this makes us schedule *X then *Y, not *X then *Z. The former
allows us to issue the add; the latter only lets us issue other loads.
This turns the above code from this:
_testadd:
lfs f0, 0(r3)
lfs f1, 0(r6)
lfs f2, 0(r4)
lfs f3, 0(r5)
fadds f0, f0, f2
fadds f1, f3, f1
lfs f2, 0(r7)
fmadds f1, f0, f1, f2
blr
into this:
_testadd:
lfs f0, 0(r6)
lfs f1, 0(r5)
fadds f0, f1, f0
lfs f1, 0(r4)
lfs f2, 0(r3)
fadds f1, f2, f1
lfs f2, 0(r7)
fmadds f1, f1, f0, f2
blr
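A toy version of the mobility measure (the exact formula is an assumption):
count how many successors would become ready if a node were scheduled now.
Preferring high-mobility loads is what makes *X then *Y beat *X then *Z
above: it frees the fadds sooner instead of just exposing more loads.

#include <vector>

struct SUnit {
  std::vector<SUnit*> Succs;
  int NumPredsLeft;         // unscheduled predecessors remaining
};

int mobility(const SUnit *SU) {
  int Freed = 0;
  for (size_t i = 0; i != SU->Succs.size(); ++i)
    if (SU->Succs[i]->NumPredsLeft == 1)   // we'd be its last pred
      ++Freed;
  return Freed;
}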
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26680 91177308-0d34-0410-b5e6-96231b3b80d8
Only enable this with -use-sched-latencies; I'll enable it by default after
a clean nightly tester run tonight.
PPC is the only target that provides latency info currently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26634 91177308-0d34-0410-b5e6-96231b3b80d8
class, sever its implementation from the interface. Now we can provide new
implementations of the same interface (priority computation) without touching
the scheduler itself.
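The shape of the split, schematically (class and method names here are
placeholders, not necessarily the real interface):

class SUnit;

// The scheduler is written against this abstraction only.
class SchedulingPriorityQueue {
public:
  virtual ~SchedulingPriorityQueue() {}
  virtual void push(SUnit *SU) = 0;
  virtual SUnit *pop() = 0;
  virtual bool empty() const = 0;
};

// A new priority policy (latency-based, pressure-based, ...) is just
// another subclass; the scheduler itself never changes.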
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26630 91177308-0d34-0410-b5e6-96231b3b80d8
targets can implement them. Make the top-down scheduler non-g5-specific.
Remove the old testing hazard recognizer.
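Roughly, the target-independent interface looks like this (a sketch; the
real class may differ in detail). Targets subclass it to model their
pipeline, and the schedulers query it before issuing each node:

class SDNode;

class HazardRecognizer {
public:
  enum HazardType { NoHazard, Hazard, NoopHazard };
  virtual ~HazardRecognizer() {}
  // Would issuing N now stall, or require emitting a noop first?
  virtual HazardType getHazardType(SDNode *N) { return NoHazard; }
  virtual void EmitInstruction(SDNode *N) {}   // N was just issued
  virtual void AdvanceCycle() {}               // one cycle elapsed
};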
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26569 91177308-0d34-0410-b5e6-96231b3b80d8
function of the top-down scheduler are completely bogus currently, and
having (future) PPC-specific code in this file is also wrong, but this is a
small incremental step.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26552 91177308-0d34-0410-b5e6-96231b3b80d8
a predecessor appearing more than once in the operand list was counted as
multiple predecessors (a sketch of this fix follows the example below);
priority1 should be updated during scheduling; CycleBound was updated after
the node was inserted into the priority queue; one of the tie-breaking
conditions was flipped.
- Take two-address opcodes into consideration. If a predecessor is a def&use
operand, it should have a higher priority.
- The scheduler should also favor floaters, i.e. nodes that do not have real
predecessors, such as MOV32ri.
- The scheduling fixes / tweaks fixed bug 478:
.text
.align 4
.globl _f
_f:
movl 4(%esp), %eax
movl 8(%esp), %ecx
movl %eax, %edx
imull %ecx, %edx
imull %eax, %eax
imull %ecx, %ecx
addl %eax, %ecx
leal (%ecx,%edx,2), %eax
ret
It is also a slight performance win (1% - 3%) for most tests.
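As promised above, a sketch of the duplicate-predecessor fix (names
assumed): when a node uses the same value twice, as imull %eax, %eax does,
record the predecessor once rather than once per operand.

#include <set>
#include <vector>

struct SUnit {
  std::vector<SUnit*> Preds;
  int NumPredsLeft;
};

void addPred(SUnit &SU, SUnit *Pred, std::set<SUnit*> &Seen) {
  if (Seen.insert(Pred).second) {   // first occurrence of this pred
    SU.Preds.push_back(Pred);
    ++SU.NumPredsLeft;
  }
}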
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26470 91177308-0d34-0410-b5e6-96231b3b80d8