Teach the local spiller to turn stack slot loads into register-register copies
when possible, avoiding the load (and avoiding the copy if the value is already
in the right register).

This patch came about when I noticed code like the following being generated:

  store R17 -> [SS1]
  ...blah...
  R4 = load [SS1]

This was causing an LSU reject on the G5.  The problem was that the register
allocator had folded spill code into a reg-reg copy (producing the load), which
prevented the spiller from rewriting the load into a copy, even though the
value was already available in a register.  In the case above, we now rip out
the R4 load and replace it with an R4 = R17 copy.
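With the patch, the same sequence (in the notation above) becomes:

  store R17 -> [SS1]
  ...blah...
  R4 = R17

and the copy itself would also disappear if the load's destination were
already R17.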

This speeds up several programs on X86 (which spills a lot :) ), e.g.
smg2k from 22.39->20.60s, povray from 12.93->12.66s, 168.wupwise from
68.54->53.83s (!), 197.parser from 7.33->6.62s (!), etc.  This may have a larger
impact in some cases on the G5 (by avoiding LSU rejects), though it probably
won't trigger as often (less spilling in general).

Targets that implement folding of loads/stores into copies should implement
the isLoadFromStackSlot hook to get this.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23388 91177308-0d34-0410-b5e6-96231b3b80d8
commit cea8688ee4 (parent a92aab74dd)
Chris Lattner, 2005-09-19 06:56:21 +0000

@@ -466,10 +466,38 @@ void LocalSpiller::RewriteMBB(MachineBasicBlock &MBB, const VirtRegMap &VRM) {
               << I->second.second);
         unsigned VirtReg = I->second.first;
         VirtRegMap::ModRef MR = I->second.second;
-        if (VRM.hasStackSlot(VirtReg)) {
-          int SS = VRM.getStackSlot(VirtReg);
-          DEBUG(std::cerr << " - StackSlot: " << SS << "\n");
-
+        if (!VRM.hasStackSlot(VirtReg)) {
+          DEBUG(std::cerr << ": No stack slot!\n");
+          continue;
+        }
+        int SS = VRM.getStackSlot(VirtReg);
+        DEBUG(std::cerr << " - StackSlot: " << SS << "\n");
+
+        // If this folded instruction is just a use, check to see if it's a
+        // straight load from the virt reg slot.
+        if ((MR & VirtRegMap::isRef) && !(MR & VirtRegMap::isMod)) {
+          int FrameIdx;
+          if (unsigned DestReg = MRI->isLoadFromStackSlot(&MI, FrameIdx)) {
+            // If this spill slot is available, insert a copy for it!
+            std::map<int, unsigned>::iterator It = SpillSlotsAvailable.find(SS);
+            if (FrameIdx == SS && It != SpillSlotsAvailable.end()) {
+              DEBUG(std::cerr << "Promoted Load To Copy: " << MI);
+              MachineFunction &MF = *MBB.getParent();
+              if (DestReg != It->second) {
+                MRI->copyRegToReg(MBB, &MI, DestReg, It->second,
+                                  MF.getSSARegMap()->getRegClass(VirtReg));
+                // Revisit the copy if the destination is a vreg.
+                if (MRegisterInfo::isVirtualRegister(DestReg)) {
+                  NextMII = &MI;
+                  --NextMII;  // backtrack to the copy.
+                }
+              }
+              MBB.erase(&MI);
+              goto ProcessNextInst;
+            }
+          }
+        }
+
         // If this reference is not a use, any previous store is now dead.
         // Otherwise, the store to this stack slot is not dead anymore.
         std::map<int, MachineInstr*>::iterator MDSI = MaybeDeadStores.find(SS);
@@ -494,9 +522,6 @@ void LocalSpiller::RewriteMBB(MachineBasicBlock &MBB, const VirtRegMap &VRM) {
             SpillSlotsAvailable.erase(It);
           }
         }
-        } else {
-          DEBUG(std::cerr << ": No stack slot!\n");
-        }
       }

       // Process all of the spilled defs.
@@ -575,6 +600,7 @@ void LocalSpiller::RewriteMBB(MachineBasicBlock &MBB, const VirtRegMap &VRM) {
         }
       }
     }
+  ProcessNextInst:
     MII = NextMII;
   }
 }