Fix ARM paired GPR COPY lowering

ARM paired GPR COPYs were being lowered to two MOVr instructions without
the optional CC (cc_out) operand. This patch puts the CC back.
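
For context, copyPhysReg expands a GPRPair COPY into one MOVr per half.
A simplified sketch of that expansion loop (abbreviated from
ARMBaseInstrInfo.cpp; variable names approximate, and the real loop also
handles the VFP/NEON register classes and kill flags):

    // Sketch: expand a GPRPair COPY into two sub-register MOVr instructions.
    unsigned Opc = ARM::MOVr;  // opcode chosen for GPRPair source/dest
    for (unsigned Idx : {(unsigned)ARM::gsub_0, (unsigned)ARM::gsub_1}) {
      unsigned Dst = TRI->getSubReg(DestReg, Idx);
      unsigned Src = TRI->getSubReg(SrcReg, Idx);
      MachineInstrBuilder Mov =
          BuildMI(MBB, I, I->getDebugLoc(), get(Opc), Dst).addReg(Src);
      Mov = AddDefaultPred(Mov);  // predicate operands: ARMCC::AL + %noreg
      Mov = AddDefaultCC(Mov);    // cc_out operand this patch adds: %noreg
    }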

My test is a reduction of the case where I encountered the issue:
64-bit atomics use paired GPRs.

The issue only occurs with SelectionDAG; FastISel doesn't encounter it,
so I didn't bother exercising it in the test.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186226 91177308-0d34-0410-b5e6-96231b3b80d8
JF Bastien 2013-07-12 23:33:03 +00:00
parent bee07bddea
commit 1b6f5a29ab
2 changed files with 20 additions and 0 deletions

lib/Target/ARM/ARMBaseInstrInfo.cpp

@@ -745,6 +745,9 @@ void ARMBaseInstrInfo::copyPhysReg(MachineBasicBlock &MBB,
     if (Opc == ARM::VORRq)
       Mov.addReg(Src);
     Mov = AddDefaultPred(Mov);
+    // MOVr can set CC.
+    if (Opc == ARM::MOVr)
+      Mov = AddDefaultCC(Mov);
   }
   // Add implicit super-register defs and kills to the last instruction.
   Mov->addRegisterDefined(DestReg, TRI);
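
For reference, the two helpers used above are thin MachineInstrBuilder
wrappers; approximately (a sketch of the ARMBaseInstrInfo.h helpers of
that era, not verbatim):

    // Append the default ARM predicate operands: condition AL ("always")
    // plus a null predicate register.
    static inline const MachineInstrBuilder &
    AddDefaultPred(const MachineInstrBuilder &MIB) {
      return MIB.addImm((int64_t)ARMCC::AL).addReg(0);
    }

    // Append the optional cc_out operand as the zero register, meaning the
    // instruction does not set CPSR. Without it, the expanded MOVr has too
    // few operands and -verify-machineinstrs rejects it.
    static inline const MachineInstrBuilder &
    AddDefaultCC(const MachineInstrBuilder &MIB) {
      return MIB.addReg(0);
    }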

test/CodeGen/ARM/copy-paired-reg.ll

@@ -0,0 +1,17 @@
+; RUN: llc < %s -mtriple=armv7-apple-ios -verify-machineinstrs
+; RUN: llc < %s -mtriple=armv7-linux-gnueabi -verify-machineinstrs
+define void @f() {
+  %a = alloca i8, i32 8, align 8
+  %b = alloca i8, i32 8, align 8
+  %c = bitcast i8* %a to i64*
+  %d = bitcast i8* %b to i64*
+  store atomic i64 0, i64* %c seq_cst, align 8
+  store atomic i64 0, i64* %d seq_cst, align 8
+  %e = load atomic i64* %d seq_cst, align 8
+  ret void
+}