mirror of
https://github.com/c64scene-ar/llvm-6502.git
synced 2025-07-29 10:25:12 +00:00
[ShrinkWrap] Add (a simplified version) of shrink-wrapping.
This patch introduces a new pass that computes the safe points at which to insert the prologue and epilogue of the function. The interest is to find safe points that are cheaper than the entry and exit blocks. As an example, and to avoid introducing regressions, this patch also implements the required bits to enable the shrink-wrapping pass for AArch64.

** Context **

Currently we insert the prologue and epilogue of the method/function in the entry and exit blocks. Although this is correct, we can do a better job when those are not immediately required and insert them at less frequently executed places. The job of the shrink-wrapping pass is to identify such places.

** Motivating example **

Let us consider the following function, which performs a call in only one branch of an if:

define i32 @f(i32 %a, i32 %b) {
  %tmp = alloca i32, align 4
  %tmp2 = icmp slt i32 %a, %b
  br i1 %tmp2, label %true, label %false

true:
  store i32 %a, i32* %tmp, align 4
  %tmp4 = call i32 @doSomething(i32 0, i32* %tmp)
  br label %false

false:
  %tmp.0 = phi i32 [ %tmp4, %true ], [ %a, %0 ]
  ret i32 %tmp.0
}

On AArch64 this code generates (removing the cfi directives to ease readability):

_f:                        ; @f
; BB#0:
  stp x29, x30, [sp, #-16]!
  mov x29, sp
  sub sp, sp, #16          ; =16
  cmp w0, w1
  b.ge LBB0_2
; BB#1:                    ; %true
  stur w0, [x29, #-4]
  sub x1, x29, #4          ; =4
  mov w0, wzr
  bl _doSomething
LBB0_2:                    ; %false
  mov sp, x29
  ldp x29, x30, [sp], #16
  ret

With shrink-wrapping we could generate:

_f:                        ; @f
; BB#0:
  cmp w0, w1
  b.ge LBB0_2
; BB#1:                    ; %true
  stp x29, x30, [sp, #-16]!
  mov x29, sp
  sub sp, sp, #16          ; =16
  stur w0, [x29, #-4]
  sub x1, x29, #4          ; =4
  mov w0, wzr
  bl _doSomething
  add sp, x29, #16         ; =16
  ldp x29, x30, [sp], #16
LBB0_2:                    ; %false
  ret

Therefore, we would pay the overhead of setting up/destroying the frame only if we actually do the call.

** Proposed Solution **

This patch introduces a new machine pass that performs the shrink-wrapping analysis (see the comments at the beginning of ShrinkWrap.cpp for more details). It then stores the safe save and restore points into the MachineFrameInfo attached to the MachineFunction. This information is then used by the PrologEpilogInserter (PEI) to place the related code at the right places. This pass runs right before the PEI.

Unlike the original paper of Chow from PLDI'88, this implementation of shrink-wrapping does not use expensive data-flow analysis and does not need hacks to properly avoid frequently executed points. Instead, it relies on dominance and loop properties.

The pass is off by default and each target can opt in by setting the EnableShrinkWrap boolean to true in their derived class of TargetPassConfig. This setting can also be overridden on the command line by using -enable-shrink-wrap.

Before you try out the pass for your target, make sure you properly fix your emitProlog/emitEpilog/adjustForXXX methods to cope with basic blocks that are not necessarily the entry block.

** Design Decisions **

1. ShrinkWrap is its own pass right now. It could frankly be merged into PEI, but for debugging and clarity I thought it was best to have its own file.
2. Right now, we only support one save point and one restore point. At some point we could expand this to several save points and restore points; the impacted components would then be:
   - The pass itself: a new algorithm is needed.
   - MachineFrameInfo: hold a list or set of save/restore points instead of one pointer each.
   - PEI: loop over the save points and restore points.
   Anyhow, at least for this first iteration, I do not believe it is interesting to support the complex cases. We should revisit that when we have motivating examples.

Differential Revision: http://reviews.llvm.org/D9210

<rdar://problem/3201744>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@236507 91177308-0d34-0410-b5e6-96231b3b80d8
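Based purely on the opt-in mechanism described above, a target's pass config might look like the following fragment. This is illustrative and not compilable on its own: the class name is hypothetical and the exact TargetPassConfig constructor signature should be checked against the tree.

```cpp
// Illustrative fragment only: a target opting in to shrink-wrapping,
// per the description above. The constructor shape is an assumption.
class MyTargetPassConfig : public TargetPassConfig {
public:
  MyTargetPassConfig(TargetMachine *TM, PassManagerBase &PM)
      : TargetPassConfig(TM, PM) {
    // Opt in to the new ShrinkWrap pass (off by default); still
    // overridable on the command line via -enable-shrink-wrap.
    EnableShrinkWrap = true;
  }
};
```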
@@ -565,8 +565,9 @@ static uint64_t calculateMaxStackAlign(const MachineFunction &MF) {
      - for 32-bit code, substitute %e?? registers for %r??
 */
 
-void X86FrameLowering::emitPrologue(MachineFunction &MF) const {
-  MachineBasicBlock &MBB = MF.front(); // Prologue goes in entry BB.
+void X86FrameLowering::emitPrologue(MachineFunction &MF,
+                                    MachineBasicBlock &MBB) const {
+  assert(&MF.front() == &MBB && "Shrink-wrapping not yet supported");
   MachineBasicBlock::iterator MBBI = MBB.begin();
   MachineFrameInfo *MFI = MF.getFrameInfo();
   const Function *Fn = MF.getFunction();
@@ -1590,9 +1591,10 @@ GetScratchRegister(bool Is64Bit, bool IsLP64, const MachineFunction &MF, bool Pr
 // limit.
 static const uint64_t kSplitStackAvailable = 256;
 
-void
-X86FrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const {
-  MachineBasicBlock &prologueMBB = MF.front();
+void X86FrameLowering::adjustForSegmentedStacks(
+    MachineFunction &MF, MachineBasicBlock &PrologueMBB) const {
+  assert(&PrologueMBB == &MF.front() &&
+         "Shrink-wrapping is not implemented yet");
   MachineFrameInfo *MFI = MF.getFrameInfo();
   const X86Subtarget &STI = MF.getSubtarget<X86Subtarget>();
   const TargetInstrInfo &TII = *STI.getInstrInfo();
@@ -1634,8 +1636,9 @@ X86FrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const {
   // The MOV R10, RAX needs to be in a different block, since the RET we emit in
   // allocMBB needs to be last (terminating) instruction.
 
-  for (MachineBasicBlock::livein_iterator i = prologueMBB.livein_begin(),
-         e = prologueMBB.livein_end(); i != e; i++) {
+  for (MachineBasicBlock::livein_iterator i = PrologueMBB.livein_begin(),
+                                          e = PrologueMBB.livein_end();
+       i != e; i++) {
     allocMBB->addLiveIn(*i);
     checkMBB->addLiveIn(*i);
   }
@@ -1749,7 +1752,7 @@ X86FrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const {
 
   // This jump is taken if SP >= (Stacklet Limit + Stack Space required).
   // It jumps to normal execution of the function body.
-  BuildMI(checkMBB, DL, TII.get(X86::JA_1)).addMBB(&prologueMBB);
+  BuildMI(checkMBB, DL, TII.get(X86::JA_1)).addMBB(&PrologueMBB);
 
   // On 32 bit we first push the arguments size and then the frame size. On 64
   // bit, we pass the stack frame size in r10 and the argument size in r11.
@@ -1816,10 +1819,10 @@ X86FrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const {
   else
     BuildMI(allocMBB, DL, TII.get(X86::MORESTACK_RET));
 
-  allocMBB->addSuccessor(&prologueMBB);
+  allocMBB->addSuccessor(&PrologueMBB);
 
   checkMBB->addSuccessor(allocMBB);
-  checkMBB->addSuccessor(&prologueMBB);
+  checkMBB->addSuccessor(&PrologueMBB);
 
 #ifdef XDEBUG
   MF.verify();
@@ -1841,7 +1844,8 @@ X86FrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const {
 /// call inc_stack # doubles the stack space
 /// temp0 = sp - MaxStack
 /// if( temp0 < SP_LIMIT(P) ) goto IncStack else goto OldStart
-void X86FrameLowering::adjustForHiPEPrologue(MachineFunction &MF) const {
+void X86FrameLowering::adjustForHiPEPrologue(
+    MachineFunction &MF, MachineBasicBlock &PrologueMBB) const {
   const X86Subtarget &STI = MF.getSubtarget<X86Subtarget>();
   const TargetInstrInfo &TII = *STI.getInstrInfo();
   MachineFrameInfo *MFI = MF.getFrameInfo();
@@ -1910,12 +1914,14 @@ void X86FrameLowering::adjustForHiPEPrologue(MachineFunction &MF) const {
   // If the stack frame needed is larger than the guaranteed then runtime checks
   // and calls to "inc_stack_0" BIF should be inserted in the assembly prologue.
   if (MaxStack > Guaranteed) {
-    MachineBasicBlock &prologueMBB = MF.front();
+    assert(&PrologueMBB == &MF.front() &&
+           "Shrink-wrapping is not implemented yet");
     MachineBasicBlock *stackCheckMBB = MF.CreateMachineBasicBlock();
     MachineBasicBlock *incStackMBB = MF.CreateMachineBasicBlock();
 
-    for (MachineBasicBlock::livein_iterator I = prologueMBB.livein_begin(),
-           E = prologueMBB.livein_end(); I != E; I++) {
+    for (MachineBasicBlock::livein_iterator I = PrologueMBB.livein_begin(),
+                                            E = PrologueMBB.livein_end();
+         I != E; I++) {
       stackCheckMBB->addLiveIn(*I);
       incStackMBB->addLiveIn(*I);
     }
@@ -1951,7 +1957,7 @@ void X86FrameLowering::adjustForHiPEPrologue(MachineFunction &MF) const {
     // SPLimitOffset is in a fixed heap location (pointed by BP).
     addRegOffset(BuildMI(stackCheckMBB, DL, TII.get(CMPop))
                  .addReg(ScratchReg), PReg, false, SPLimitOffset);
-    BuildMI(stackCheckMBB, DL, TII.get(X86::JAE_1)).addMBB(&prologueMBB);
+    BuildMI(stackCheckMBB, DL, TII.get(X86::JAE_1)).addMBB(&PrologueMBB);
 
     // Create new MBB for IncStack:
     BuildMI(incStackMBB, DL, TII.get(CALLop)).
@@ -1962,9 +1968,9 @@ void X86FrameLowering::adjustForHiPEPrologue(MachineFunction &MF) const {
                  .addReg(ScratchReg), PReg, false, SPLimitOffset);
     BuildMI(incStackMBB, DL, TII.get(X86::JLE_1)).addMBB(incStackMBB);
 
-    stackCheckMBB->addSuccessor(&prologueMBB, 99);
+    stackCheckMBB->addSuccessor(&PrologueMBB, 99);
     stackCheckMBB->addSuccessor(incStackMBB, 1);
-    incStackMBB->addSuccessor(&prologueMBB, 99);
+    incStackMBB->addSuccessor(&PrologueMBB, 99);
     incStackMBB->addSuccessor(incStackMBB, 1);
   }
 #ifdef XDEBUG