external symbols and global addresses. Add the missing ones.
One important workaround: PPCISD::CALL is matched by both PPCcall_ELF
and PPCcall_Macho, so disable the _ELF patterns for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@34601 91177308-0d34-0410-b5e6-96231b3b80d8
The algorithm it used before wasn't 100% correct; we now use an iterative
expansion model. This fixes assembler errors when compiling 403.gcc with
tail merging enabled.
Change the way the branch selector works overall: now the isel directly
generates PPC::BCC instructions (as it used to), and these BCC instructions
are emitted to the output or jitted directly when branches don't need
expansion. Only when branches need expansion are instructions rewritten
and created. This should make branch selection faster, and it eliminates the
Bxx instructions from the .td file.
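As a rough sketch of the iterative expansion model (hypothetical code; the
struct, function name, and sizes are illustrative and not taken from the
actual branch selector): a short conditional branch reaches +/-32KB, and one
that cannot reach its target is expanded into an inverted conditional branch
over an unconditional branch. Each expansion grows the function and may push
other branches out of range, so the scan repeats until nothing changes.

/* Illustrative sketch only; not the real branch selector. */
#include <stdbool.h>

struct Branch {
  long offset; /* byte offset of the branch within the function */
  long target; /* byte offset of the branch's destination       */
  int  size;   /* 4 = short form, 8 = expanded form             */
};

static void selectBranches(struct Branch *br, int n) {
  bool changed;
  do {
    changed = false;
    for (int i = 0; i < n; i++) {
      long disp = br[i].target - br[i].offset;
      if (br[i].size == 4 && (disp < -32768 || disp > 32767)) {
        br[i].size = 8; /* expand: inverted BCC over an unconditional B */
        /* everything past the expanded branch slides down by 4 bytes   */
        for (int j = 0; j < n; j++) {
          if (br[j].offset > br[i].offset) br[j].offset += 4;
          if (br[j].target > br[i].offset) br[j].target += 4;
        }
        changed = true; /* re-check all displacements on the next pass */
      }
    }
  } while (changed);
}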
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31837 91177308-0d34-0410-b5e6-96231b3b80d8
value and CR reg #. This requires swapping the order of these operands
everywhere that touches BCC, and requires writing custom matching logic for
PPCcondbranch :(
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31835 91177308-0d34-0410-b5e6-96231b3b80d8
bugs, including making sure that the TOS links back to the previous frame,
not counting the maximum call frame size twice when using frame pointers,
no longer growing the frame on calls, eliminating the double store of SP, and
a cleaner/faster dynamic alloca.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31792 91177308-0d34-0410-b5e6-96231b3b80d8
Tell the codegen emitter that specific operands are not to be encoded, fixing
JIT regressions w.r.t. pre-inc loads and stores (e.g. lwzu, which we generate
even when general preinc loads are not enabled).
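For context, lwzu is a load-with-update: it loads a word and also writes the
incremented address back into the base register. A small C model of that
semantics (illustrative only; the function name is made up, not from the
commit):

#include <stdint.h>
#include <string.h>

/* Semantics of "lwzu rD, d(rA)", modeled in C (illustrative only):
 * EA = rA + d; rD = word at EA; rA = EA (base register update). */
static uint32_t lwzu(uint8_t **rA, int16_t d) {
  uint8_t *ea = *rA + d;        /* effective address              */
  uint32_t rD;
  memcpy(&rD, ea, sizeof rD);   /* load the word at EA            */
  *rA = ea;                     /* write EA back to the base reg  */
  return rD;
}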
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31770 91177308-0d34-0410-b5e6-96231b3b80d8
pair for cleanliness. Add instructions for PPC32 pre-inc stores with
commented-out patterns. More improvement is needed to enable the patterns, but we're
getting close.
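The store form mirrors the load with respect to the base register; a minimal
C model of stwu's semantics (illustrative only, not from the commit):

#include <stdint.h>
#include <string.h>

/* Semantics of "stwu rS, d(rA)" (illustrative only):
 * EA = rA + d; store the low 32 bits of rS at EA; rA = EA. */
static void stwu(uint32_t rS, uint8_t **rA, int16_t d) {
  uint8_t *ea = *rA + d;        /* effective address              */
  memcpy(ea, &rS, sizeof rS);   /* store the word                 */
  *rA = ea;                     /* update the base register       */
}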
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31749 91177308-0d34-0410-b5e6-96231b3b80d8
clobber. This allows LR8 to be saved/restored correctly as a 64-bit quantity,
instead of handling it as a 32-bit quantity. This unbreaks ppc64 codegen when
the code is actually located above the 4G boundary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31734 91177308-0d34-0410-b5e6-96231b3b80d8
(because the 64-bit reg target versions aren't implemented yet), doesn't
support r+r addressing modes, and doesn't handle stores, but it works otherwise. :)
For now, this is disabled unless -enable-ppc-preinc is passed to llc.
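A usage example with the flag mentioned above (the input and output file
names are hypothetical):

llc -enable-ppc-preinc -march=ppc32 test.bc -o test.s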
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31621 91177308-0d34-0410-b5e6-96231b3b80d8
that takes a register and condition code. Print these pieces of BLR the
right way, even though it is currently set to 'always'.
Next up: get the JIT encoding right, then enhance branch folding to produce
predicated blr for simple examples.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31449 91177308-0d34-0410-b5e6-96231b3b80d8
As such, use xoaddr (indexed only), not xaddr, for address selection.
This fixes CodeGen/PowerPC/2006-07-19-stwbrx-crash.ll, a crash compiling lencod.
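stwbrx has only an indexed (register + register) form, which is why the
pattern must use the indexed-only address mode. A C model of its effect
(illustrative only; the function name is made up):

#include <stdint.h>

/* Semantics of "stwbrx rS, rA, rB" (illustrative only): store the word in rS
 * to EA = rA + rB with its bytes reversed, least-significant byte first. */
static void stwbrx(uint32_t rS, uint8_t *rA, long rB) {
  uint8_t *ea = rA + rB;
  ea[0] = (uint8_t)(rS);        /* lowest byte of rS goes to EA     */
  ea[1] = (uint8_t)(rS >> 8);
  ea[2] = (uint8_t)(rS >> 16);
  ea[3] = (uint8_t)(rS >> 24);  /* highest byte of rS goes to EA+3  */
}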
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29208 91177308-0d34-0410-b5e6-96231b3b80d8
Split imm16Shifted into a sext/zext form for 64-bit support.
Add some patterns for immediate formation. For example, we now compile this:
static unsigned long long Y;
void test3() {
  Y = 0xF0F00F00;
}
into:
_test3:
li r2, 3840
lis r3, ha16(_Y)
xoris r2, r2, 61680
std r2, lo16(_Y)(r3)
blr
GCC produces:
_test3:
li r0,0
lis r2,ha16(_Y)
ori r0,r0,61680
sldi r0,r0,16
ori r0,r0,3840
std r0,lo16(_Y)(r2)
blr
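The arithmetic behind both sequences, checked in C (an illustrative aside,
not part of the commit; the register names in the comments refer to the
listings above):

#include <assert.h>
#include <stdint.h>

int main(void) {
  /* LLVM sequence: li r2, 3840 then xoris r2, r2, 61680 */
  uint64_t r2 = 3840;                 /* li:    r2 = 0x0F00              */
  r2 ^= (uint64_t)61680 << 16;        /* xoris: flip bits 16-31 (0xF0F0) */
  assert(r2 == 0xF0F00F00ULL);

  /* GCC sequence: li r0,0 / ori / sldi / ori */
  uint64_t r0 = 0;
  r0 |= 61680;                        /* ori:  r0 = 0xF0F0               */
  r0 <<= 16;                          /* sldi: r0 = 0xF0F00000           */
  r0 |= 3840;                         /* ori:  r0 = 0xF0F00F00           */
  assert(r0 == 0xF0F00F00ULL);
  return 0;
}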
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28883 91177308-0d34-0410-b5e6-96231b3b80d8