This adds support for the "sync $L" instruction with an operand,
and provides aliases for "lwsync" and "ptesync".
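A rough sketch of how this looks in the .td file (the instruction format class
below is a stand-in for the real one):

  // Sketch only: "XFormLike" stands in for the actual X-form class used.
  def SYNC : XFormLike<31, 598, (outs), (ins i32imm:$L), "sync $L", []>;
  // The extended mnemonics map onto specific values of the L operand.
  def : InstAlias<"sync",    (SYNC 0)>;
  def : InstAlias<"lwsync",  (SYNC 1)>;
  def : InstAlias<"ptesync", (SYNC 2)>;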
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185344 91177308-0d34-0410-b5e6-96231b3b80d8
This adds instruction patterns to cover the generic forms of
the conditional branch instructions. This allows the assembler
to support the generic mnemonics.
The compiler will still generate the various specific forms
of the instruction that were already supported.
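For illustration, the generic form looks roughly like this (class and operand
names are approximate), with the existing extended mnemonics (beq, bne, bdnz,
...) kept alongside it:

  // Accepts the raw BO/BI fields directly: "bc BO, BI, target".
  def gBC : BForm_3<16, 0, 0, (outs),
                    (ins u5imm:$bo, crbitrc:$bi, condbrtarget:$dst),
                    "bc $bo, $bi, $dst">;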
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184722 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds a couple of Book II instructions (isync, icbi) to the
PowerPC assembler parser. These are needed when bootstrapping clang
with the integrated assembler forced on, because they are used in
inline asm statements in the code base.
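Roughly speaking (the format classes and exact operand types below are
placeholders, not the real definitions):

  // isync: instruction synchronize (primary opcode 19, extended opcode 150).
  def ISYNC : XLFormLike<19, 150, (outs), (ins), "isync", []>;
  // icbi: instruction cache block invalidate (31/982); takes a memory operand.
  def ICBI  : XFormLike<31, 982, (outs), (ins memrr:$src), "icbi $src", []>;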
The test case adds the full list of Book II storage control instructions,
including associated extended mnemonics. Again, those that are not yet
supported are marked as FIXME.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181052 91177308-0d34-0410-b5e6-96231b3b80d8
When testing the asm parser, I noticed wrong encodings for the
above instructions (wrong operand name in rldimi, wrong form
and sub-opcode for rldcl).
Tests will be added together with the asm parser.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180606 91177308-0d34-0410-b5e6-96231b3b80d8
This is prep. work for the implementation of optimizeCompare. Many PPC
instructions have 'record' forms (in almost all cases, this means that the RC
bit is set) that cause the result of the instruction to be compared with zero,
and the result of that comparison saved in a predefined condition register. In
order to add the record forms of the instructions without too much
copy-and-paste, the relevant instruction definitions have been refactored into
multiclasses which define both the record and normal forms.
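Schematically, each such multiclass looks something like this (class and
parameter names are illustrative, not the exact ones in PPCInstrInfo.td):

  multiclass XFormRC<bits<6> opcode, bits<10> xo, dag OOL, dag IOL,
                     string asmbase, string asmstr, list<dag> pattern> {
    // Normal form, e.g. "and".
    def NAME : XFormLike<opcode, xo, OOL, IOL,
                         !strconcat(asmbase, !strconcat(" ", asmstr)), pattern>;
    // Record form, e.g. "and."; the RC bit is set and CR0 is implicitly defined.
    let Defs = [CR0] in
    def o   : XFormLike<opcode, xo, OOL, IOL,
                        !strconcat(asmbase, !strconcat(". ", asmstr)), []>;
  }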
Also, two TableGen-generated mapping functions have been added which allow
querying the instruction code for the record form given the normal form (and
vice versa).
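These queries are driven by a TableGen relationship map along the following
lines (the map name and field values here are illustrative):

  def getRecordFormOpcode : InstrMapping {
    let FilterClass = "RecFormRel";  // only instructions tagged RecFormRel participate
    let RowFields = ["BaseName"];    // a row groups all forms of one base mnemonic
    let ColFields = ["RC"];          // columns are keyed on the RC bit
    let KeyCol = ["0"];              // key column: the normal (RC = 0) form
    let ValueCols = [["1"]];         // value column: the record (RC = 1) form
  }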
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179356 91177308-0d34-0410-b5e6-96231b3b80d8
PowerPC has a conditional branch to the link register (return) instruction: BCLR.
This should be used any time we'd otherwise have a conditional branch to a
return. This adds a small pass, PPCEarlyReturn, which runs just prior to the
branch selection pass (and, importantly, after block placement) to generate
these conditional returns when possible. It will also eliminate unconditional
branches to returns (these happen rarely; most of the time these have already
been tail duplicated by the time PPCEarlyReturn is invoked). This is a nice
optimization for small functions that do not maintain a stack frame.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179026 91177308-0d34-0410-b5e6-96231b3b80d8
There remain a number of patterns that cannot (and should not)
be handled by the asm parser, in particular all the Pseudo patterns.
This commit marks those patterns as isCodeGenOnly.
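For example (using a made-up pseudo purely for illustration):

  // The asm parser skips isCodeGenOnly definitions entirely.
  let isCodeGenOnly = 1 in
  def SomePseudo : Pseudo<(outs gprc:$rD), (ins gprc:$rS), "#SomePseudo", []>;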
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178008 91177308-0d34-0410-b5e6-96231b3b80d8
As part of the sequence generated to implement long double -> int
conversions, we need to perform an FADD in round-to-zero mode. This is
problematic since the FPSCR is not at all modeled at the SelectionDAG
level, and thus there is a risk of getting floating point instructions
generated out of sequence with the instructions to modify FPSCR.
The current code handles this with somewhat "special" patterns that in part
have dummy operands and/or duplicate existing instructions, making them
awkward to handle in the asm parser.
This commit changes this by leaving the "FADD in round-to-zero mode"
as an atomic operation at the SelectionDAG level, and only splitting it up into
real instructions at the MI level (via a custom inserter). Since at *this*
level the FPSCR *is* modeled (via the "RM" hard register), much of the
"special" stuff can just go away, and the resulting patterns can be used by
the asm parser.
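The resulting definition is roughly of this shape (node and register-class
names are approximate):

  // Expanded at the MI level by the custom inserter into the real
  // save-FPSCR / set-round-to-zero / fadd / restore-FPSCR sequence.
  let usesCustomInserter = 1, Uses = [RM] in
  def FADDrtz : Pseudo<(outs f8rc:$FRT), (ins f8rc:$FRA, f8rc:$FRB), "#FADDrtz",
                       [(set f64:$FRT, (PPCfaddrtz f64:$FRA, f64:$FRB))]>;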
No significant change in generated code expected.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178006 91177308-0d34-0410-b5e6-96231b3b80d8
This commit changes the ISEL patterns to use a CCBITRC operand
instead of a "pred" operand. This matches the actual instruction
text more directly, and simplifies use of ISEL with the asm parser.
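The definition now reads along these lines (the format class name is a
stand-in):

  // $cond is a single CR bit (crbitrc), matching the "isel rT, rA, rB, BC"
  // assembly syntax directly instead of going through a pred operand.
  def ISEL : AFormLike<31, 15, (outs gprc:$rT),
                       (ins gprc_nor0:$rA, gprc:$rB, crbitrc:$cond),
                       "isel $rT, $rA, $rB, $cond", []>;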
In addition, this change allows some simplification of handling
the "pred" operand, as this is now only used by BCC.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178003 91177308-0d34-0410-b5e6-96231b3b80d8
This implements SJLJ lowering on PPC, making the Clang functions
__builtin_{setjmp/longjmp} functional on PPC platforms. The implementation
strategy is similar to that on X86, with the exception that a branch-and-link
variant is used to get the right jump address. Credit goes to Bill Schmidt for
suggesting the use of the unconditional bcl form (instead of the regular bl
instruction) to limit return-address-cache pollution.
Benchmarking the speed at -O3 of:
  #include <setjmp.h>

  static jmp_buf env_sigill;

  void foo() {
    __builtin_longjmp(env_sigill, 1);
  }

  int main() {
    ...
    for (int i = 0; i < c; ++i) {
      if (__builtin_setjmp(env_sigill)) {
        goto done;
      } else {
        foo();
      }
    done:;
    }
    ...
  }
vs. the same code using the libc setjmp/longjmp functions on a P7 shows that
this builtin implementation is ~4x faster with Altivec enabled and ~7.25x
faster with Altivec disabled. This comparison is somewhat unfair because the
libc version must also save/restore the VSX registers which we don't yet
support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177666 91177308-0d34-0410-b5e6-96231b3b80d8
This pass is derived from the Hexagon HardwareLoops pass. The only significant
enhancement over the Hexagon pass is that PPCCTRLoops will also attempt to
delete the replaced add and compare operations if they are no longer otherwise
used. Also, an invalid preheader DebugLoc is not used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158204 91177308-0d34-0410-b5e6-96231b3b80d8
Dynamic linking on PPC64 has had problems since we had to move the top-down
hazard-detection logic post-ra. For dynamic linking to work there needs to be
a nop placed after every call. It turns out that it is really hard to guarantee
that nothing will be placed in between the call (bl) and the nop during post-ra
scheduling. Previous attempts at fixing this by placing logic inside the
hazard detector only partially worked.
This is now fixed in a different way: call+nop codegen-only instructions. As far
as CodeGen is concerned, the pair is now a single instruction and cannot be split.
This solution works much better than previous attempts.
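Schematically (the class and operand names below are stand-ins for the real
ones):

  // A single CodeGen-only definition whose asm string already contains the
  // nop, so post-RA scheduling can never separate the two.
  let isCall = 1, isCodeGenOnly = 1, Defs = [LR8] in
  def BL8_NOP : CallLikeInst<(outs), (ins calltarget:$func),
                             "bl $func\n\tnop">;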
The scoreboard hazard detector is also renamed to be more generic; there is
currently no CPU-specific logic in it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153816 91177308-0d34-0410-b5e6-96231b3b80d8
into the immediate field. This allows us to encode stuff like this:
  lbz r3, lo16(__ZL4init)(r4)   ; globalopt.cpp:5
                                ; encoding: [0x88,0x64,A,A]
                                ;   fixup A - offset: 0, value: lo16(__ZL4init), kind: fixup_ppc_lo16
  stw r3, lo16(__ZL1s)(r5)      ; globalopt.cpp:6
                                ; encoding: [0x90,0x65,A,A]
                                ;   fixup A - offset: 0, value: lo16(__ZL1s), kind: fixup_ppc_lo16
With this, we should have a completely functional MCCodeEmitter for PPC, wewt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119134 91177308-0d34-0410-b5e6-96231b3b80d8
modes. For example, we now get:
  ld r3, lo16(_G)(r3)           ; encoding: [0xe8,0x63,A,0bAAAAAA00]
                                ;   fixup A - offset: 0, value: lo16(_G), kind: fixup_ppc_lo14
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119133 91177308-0d34-0410-b5e6-96231b3b80d8
When a target instruction wants to set target-specific flags, it should simply
set bits in the TSFlags bit vector defined in the Instruction TableGen class.
This works well because TableGen resolves member references late:
  class I : Instruction {
    AddrMode AM = AddrModeNone;
    let TSFlags{3-0} = AM.Value;
  }

  let AM = AddrMode4 in
  def ADD : I;
TSFlags gets the expected bits from AddrMode4 in this example.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100384 91177308-0d34-0410-b5e6-96231b3b80d8
The algorithm it used before wasn't 100% correct; we now use an iterative
expansion model. This fixes assembler errors when compiling 403.gcc with
tail merging enabled.
Change the way the branch selector works overall: now, isel generates
PPC::BCC instructions directly (as it used to), and these BCC instructions
are emitted to the output or JITed directly if branches don't need
expansion. Only if branches need expansion are instructions rewritten
and created. This should make branch selection faster, and it eliminates the
Bxx instructions from the .td file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31837 91177308-0d34-0410-b5e6-96231b3b80d8
Tell the codegen emitter that specific operands are not to be encoded, fixing
JIT regressions w.r.t. pre-inc loads and stores (e.g. lwzu, which we generate
even when general preinc loads are not enabled).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31770 91177308-0d34-0410-b5e6-96231b3b80d8