Commit Graph

3651 Commits

Author SHA1 Message Date
Bill Wendling
99cb622041 Use pointers to the MCAsmInfo and MCRegInfo.
Someone may want to do something crazy, like replace these objects if they
change or something.

No functionality change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184175 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-18 07:20:20 +00:00
David Blaikie
0187e7a9ba DebugInfo: remove target-specific Frame Index handling for DBG_VALUE MachineInstrs
Frame index handling is now target-agnostic, so delete the target hooks
for creation & asm printing of target-specific addressing in DBG_VALUEs
and any related functions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184067 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-16 20:34:27 +00:00
David Blaikie
7e17024400 Revert r183854 (PPC: Fix switch warnings from r183841)
Now that the PRED_BAD has been removed, this is failing the Clang
-Werror build due to -Wcovered-switch-default.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183863 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-12 20:57:32 +00:00
Bill Schmidt
6289533853 [PowerPC] Remove PRED_BAD from PPC::Predicate enumeration.
I'm taking David Blaikie's suggestion to use an
Optional<PPC::Predicate> return value instead.  That's the right
solution for this problem.  Thanks for pointing out that possibility!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183858 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-12 20:22:24 +00:00
Bill Schmidt
dab366bf25 [PowerPC] Fix switch warnings from r183841.
Introducing PRED_BAD caused some unexpected warnings that are now
suppressed.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183854 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-12 19:20:32 +00:00
Bill Schmidt
d3f7766f23 [PowerPC] Expose some calling convention functions in PPCISelLowering.h.
This is a preparatory patch for fast-isel support.  The instruction
selector will need to access some functions in PPCGenCallingConv.inc,
which in turn requires several helper functions to be defined.  These
are currently defined near the only use of PPCGenCallingConv.inc,
inside PPCISelLowering.cpp.  This patch moves the declaration of the
functions into the associated header file to provide the needed
visibility.

No functional change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183844 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-12 16:39:22 +00:00
Bill Schmidt
f69ead4651 Add artificial PRED_BAD to PPC::Predicate enumeration.
Allows a function that returns a PPC::Predicate to also express a
no-predicate value.  Preparatory patch for fast-isel on PPC64 ELF.  No
behavioral change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183841 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-12 15:14:42 +00:00
Ulrich Weigand
c1f4a4b264 [MC/DWARF] Support .debug_frame / .debug_line code alignment factors
I've been comparing the object file output of LLVM's integrated
assembler against the external assembler on PowerPC, and one
area where differences still remain are in DWARF sections.

In particular, the GNU assembler generates .debug_frame and
.debug_line sections using a code alignment factor of 4, since
all PowerPC instructions have size 4 and must be aligned to a
multiple of 4.  However, current MC code hard-codes a code
alignment factor of 1.

This patch changes this by adding a "minimum instruction alignment"
data element to MCAsmInfo and using this as code alignment factor.

This requires passing a MCContext into MCDwarfLineAddr::Encode
and MCDwarfLineAddr::EncodeAdvanceLoc.  Note that one caller,
MCDwarfLineAddr::Write, didn't actually have that information
available.  However, it turns out that this routine is in fact
never used in the whole code base, so the patch simply removes
it.  If it turns out to be needed again at a later time, it
could be re-added with an updated interface.
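
As a rough, self-contained sketch of what a code alignment factor changes
(the helper below is purely illustrative, not the MC API): with a factor
of 4, a 16-byte address advance in .debug_line is encoded as the operand 4
rather than 16, since every PowerPC instruction is 4 bytes long.

  #include <cassert>
  #include <cstdint>
  #include <vector>

  // Encode an address advance for DW_LNS_advance_pc as ULEB128, scaled by a
  // code alignment factor taken from the (assumed) minimum instruction
  // alignment.
  static void encodeAdvancePC(uint64_t AddrDelta, unsigned CodeAlignFactor,
                              std::vector<uint8_t> &Out) {
    assert(AddrDelta % CodeAlignFactor == 0 && "misaligned address advance");
    uint64_t Operand = AddrDelta / CodeAlignFactor;
    do {                          // plain ULEB128 encoding of the scaled value
      uint8_t Byte = Operand & 0x7f;
      Operand >>= 7;
      if (Operand)
        Byte |= 0x80;
      Out.push_back(Byte);
    } while (Operand);
  }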



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183834 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-12 14:46:54 +00:00
Ulrich Weigand
278916500a [PowerPC] Support extended sc mnemonic
A plain "sc" without argument is supposed to be treated like "sc 0"
by the assembler.  This patch adds a corresponding alias.

Problem reported by Joerg Sonnenberger.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183687 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-10 17:19:43 +00:00
Ulrich Weigand
7c6f90d486 [PowerPC] Support branch mnemonics with implied CR0
The extended branch mnemonics are supposed to use an implied CR0
if there is no explicit condition register specified.  This patch
adds extra variants of the mnemonics to this effect.

Problem reported by Joerg Sonnenberger.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183686 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-10 17:19:15 +00:00
Ulrich Weigand
b838f9fe61 [PowerPC] Use multiclass to generate extended branch mnemonics
This patch removes some redundancy by generating the extended branch
mnemonics via a multiclass.

No change in behaviour expected.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183685 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-10 17:18:29 +00:00
Hal Finkel
40be73bed7 Disallow i64 div/rem in PPC32 counter loops
On PPC32, [su]div,rem on i64 types are transformed into runtime library
function calls. As a result, they are not allowed in counter-based loops (the
counter-loops verification pass caught this error; this change fixes PR16169).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183581 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-07 22:16:19 +00:00
Benjamin Kramer
041399aad5 Fold variable that's only used in assert into the assert.
Avoids unused variable warnings in Release builds.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183512 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-07 11:23:35 +00:00
Bill Wendling
80ada583f3 Don't cache the instruction and register info from the TargetMachine, because
the internals of TargetMachine could change.

No functionality change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183494 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-07 07:55:53 +00:00
Ahmed Bougacha
23ed37a6b7 Make SubRegIndex size mandatory, following r183020.
This also makes TableGen able to compute sizes/offsets of synthesized
indices representing tuples.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183061 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-31 23:45:26 +00:00
Andrew Trick
6e0b2a0cb0 Order CALLSEQ_START and CALLSEQ_END nodes.
Fixes PR16146: gdb.base__call-ar-st.exp fails after
pre-RA-sched=source fixes.

Patch by Xiaoyi Guo!

This also fixes an unsupported dbg.value test case. Codegen was
previously incorrect but the test was passing by luck.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182885 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-29 22:03:55 +00:00
Hal Finkel
119da2eb20 PPC: Add a isConsecutiveLS utility function
isConsecutiveLS is a slightly more general form of
SelectionDAG::isConsecutiveLoad. Aside from also handling stores, it does
not assume that equality of the chain operands is necessary. In the case of the PPC
backend, this chain condition is checked in a more general way by the
surrounding code.

Mostly, this is part of the refactoring in preparation for supporting optimized
unaligned stores.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182723 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-27 02:06:39 +00:00
Hal Finkel
1907cad7c8 Prefer to duplicate PPC Altivec loads when expanding unaligned loads
When expanding unaligned Altivec loads, we use the decremented offset trick to
prevent page faults. Unfortunately, if we have a sequence of consecutive
unaligned loads, this leads to suboptimal code generation because the 'extra'
load from the first unaligned load can be combined with the base load from the
second (but only if the decremented offset trick is not used for the first).
Search up and down the chain, through loads and token factors, looking for
consecutive loads, and if one is found, don't use the offset reduction trick.
These duplicate loads are later combined to yield the desired sequence (in the
future, we might want a more-powerful chain search, but that will require some
changes to allow the combiner routines to access the AA object).

This should complete the initial implementation of the optimized unaligned
Altivec load expansion. There is some refactoring that should be done, but
that will happen when the unaligned store expansion is added.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182719 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-26 18:08:30 +00:00
Hal Finkel
5a0e60425f PPC: Combine duplicate (offset) lvsl Altivec intrinsics
The lvsl permutation control instruction is a function only of the alignment of
the pointer operand (relative to the 16-byte natural alignment of Altivec
vectors). As a result, multiple lvsl intrinsics where the operands differ by a
multiple of 16 can be combined.
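
A minimal sketch of the combine condition (the helper below is illustrative
and is not the in-tree DAG combine code): lvsl's permute vector depends only
on the low four bits of the effective address, so two lvsl results are
identical whenever their addresses differ by a multiple of 16.

  #include <cstdint>

  // True when two lvsl intrinsics must produce the same permute vector:
  // lvsl only looks at EA & 0xF, so a difference that is a multiple of 16
  // cannot change the result.
  static bool lvslResultsMatch(uint64_t AddrA, uint64_t AddrB) {
    return ((AddrA - AddrB) & 0xF) == 0;
  }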

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182708 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-25 04:05:05 +00:00
Andrew Trick
ac6d9bec67 Track IR ordering of SelectionDAG nodes 2/4.
Change SelectionDAG::getXXXNode() interfaces as well as call sites of
these functions to pass in SDLoc instead of DebugLoc.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182703 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-25 02:42:55 +00:00
Hal Finkel
80d10ded8c PPC: Initial support for permutation-based unaligned Altivec loads
Altivec only directly supports aligned loads, but the loads have a strange
property: If given an unaligned address, they truncate the address to the next
lower aligned address, and load from there.  This property, along with an extra
load and some special-purpose permutation-control instructions that generate
the appropriate permutations from the original unaligned address, allows
efficient lowering of unaligned loads. This code uses the trick explained in the
Apple Velocity Engine optimization overview document to prevent the needed
extra load from possibly causing a page fault if the original address happens
to be aligned.

As noted in the FIXMEs, there are several additional optimizations that can be
performed to reduce the cost of these loads even more. These will be
implemented in future commits.
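
For reference, a sketch of the permutation-based sequence this lowering
corresponds to, written with <altivec.h> intrinsics rather than the
back-end code itself (an illustration of the technique, not the
implementation):

  #include <altivec.h>

  __vector unsigned char load_unaligned(const unsigned char *src) {
    __vector unsigned char MSQ  = vec_ld(0, src);   // aligned load at floor(src)
    __vector unsigned char LSQ  = vec_ld(15, src);  // 15, not 16: the decremented
                                                    // offset trick -- if src is
                                                    // already aligned this stays
                                                    // in the same 16-byte block
                                                    // and cannot fault on the
                                                    // next page
    __vector unsigned char mask = vec_lvsl(0, src); // permute control derived
                                                    // from the low 4 bits of src
    return vec_perm(MSQ, LSQ, mask);                // select the 16 wanted bytes
  }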

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182691 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-24 23:00:14 +00:00
Michael J. Spencer
c6af2432c8 Replace Count{Leading,Trailing}Zeros_{32,64} with count{Leading,Trailing}Zeros.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182680 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-24 22:23:49 +00:00
Ulrich Weigand
586f6d009a [PowerPC] Remove symbolLo/symbolHi instruction operand types
Now that there is no longer any distinction between symbolLo
and symbolHi operands in either printing, encoding, or parsing,
the operand types can be removed in favor of simply using
s16imm.

This completes the patch series to decouple lo/hi operand part
processing from the particular instruction whose operand it is.

No change in code generation expected from this patch.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182618 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-23 22:48:06 +00:00
Ulrich Weigand
edaa58ee66 [PowerPC] Clean up generation of ha16() / lo16() markers
When targeting the Darwin assembler, we need to generate markers ha16() and
lo16() to designate the high and low parts of a (symbolic) immediate.  This
is necessary not just for plain symbols, but also for certain symbolic
expressions, typically along the lines of ha16(A - B).  The latter doesn't
work when simply using VariantKind flags on the symbol reference.
This is why the current back-end uses hacks (explicitly called out as such
via multiple FIXMEs) in the symbolLo/symbolHi print methods.

This patch uses target-defined MCExpr codes to represent the Darwin
ha16/lo16 constructs, following along the lines of the equivalent solution
used by the ARM back end to handle their :upper16: / :lower16: markers.
This allows us to get rid of special handling both in the symbolLo/symbolHi
print method and in the common code MCExpr::print routine.  Instead, the
ha16 / lo16 markers are printed simply in a custom print routine for the
target MCExpr types.  (As a result, the symbolLo/symbolHi print methods
can now be replaced by a single printS16ImmOperand routine that also handles
symbolic operands.)

The patch also provides an EvaluateAsRelocatableImpl routine to handle
ha16/lo16 constructs.  This is not actually used at the moment by any
in-tree code, but is provided as it makes merging into David Fang's
out-of-tree Mach-O object writer simpler.

Since there is no longer any need to treat VK_PPC_GAS_HA16 and
VK_PPC_DARWIN_HA16 differently, they are merged into a single
VK_PPC_ADDR16_HA (and likewise for the _LO16 types).



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182616 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-23 22:26:41 +00:00
Bill Schmidt
5cd01f74b1 Change some PowerPC PatLeaf definitions to ImmLeaf for fast-isel.
Using PatLeaf rather than ImmLeaf when defining immediate predicates
prevents simple patterns using those predicates from being recognized
for fast instruction selection.  This patch replaces the immSExt16
PatLeaf predicate with two ImmLeaf predicates, imm32SExt16 and
imm64SExt16, allowing a few more patterns to be recognized (ADDI,
ADDIC, MULLI, ADDI8, and ADDIC8).  Using the new predicates does not
help for LI, LI8, SUBFIC, and SUBFIC8 because these are rejected for
other reasons, but I see no reason to retain the PatLeaf predicate.

No functional change intended, and thus no test cases yet.  This is
preliminary work for enabling fast-isel support for PowerPC.  When
that support is ready, we'll be able to test this function.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182510 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-22 20:09:24 +00:00
Hal Finkel
75e9ee8b7f Fix PPC branch selection for counter-based branches
Although I had added some support for the BDZ/BDNZ branches into the selector
(in r158204), I had not correctly adjusted the condition at the top of the
loop. As a result, these branches were still essentially unsupported.

This fixes PR16086. Unfortunately, any test case would be very large (because
it would need to force the loop backedge to exceed the range of the 16-bit
immediate).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182385 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-21 14:21:09 +00:00
Hal Finkel
4e6b24ffcf Rename LoopSimplify.h to LoopUtils.h
As discussed, LoopUtils.h is a better name.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182314 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-20 20:46:30 +00:00
Hal Finkel
08f92c98ac Remove copied preheader insertion logic from PPCCTRLoops
Now that the preheader insertion logic in LoopSimplify is externally exposed,
use it, and remove the copy-and-pasted version.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182300 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-20 16:47:10 +00:00
Hal Finkel
85c08b059c Rename PPC MTCTRse to MTCTRloop
As the pairing of this instruction form with the bdnz/bdz branches is now
enforced by the verification pass, make it clear from the name that these
are used only for counter-based loops.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182296 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-20 16:08:37 +00:00
Hal Finkel
e50c8c1f81 Add a PPCCTRLoops verification pass
When asserts are enabled, this adds a verification pass for PPC counter-loop
formation. Unfortunately, without sacrificing code quality, there is no better
way of forming counter-based loops except at the (late) IR level. This means
that we need to recognize, at the IR level, anything which might turn into a
function call (or indirect branch). Because this is currently a finite set of
things, and because SelectionDAG lowering is basic-block local, this can be
done. Nevertheless, it is fragile, and failure results in a miscompile. This
verification pass checks that all (reachable) counter-based branches are
dominated by a loop mtctr instruction, and that no instructions in between
clobber the counter register. If these conditions are not satisfied, then an
ICE will be triggered.

In short, this is to help us sleep better at night.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182295 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-20 16:08:17 +00:00
Hal Finkel
bf0bc3b2a2 Check InlineAsm clobbers in PPCCTRLoops
We don't need to reject all inline asm as using the counter register (most does
not). Only those that explicitly clobber the counter register need to prevent
the transformation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182191 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-18 09:20:39 +00:00
Matt Arsenault
225ed7069c Add LLVMContext argument to getSetCCResultType
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182180 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-18 00:21:46 +00:00
Ulrich Weigand
4456a8ec76 [PowerPC] Fix hi/lo encoding in old-style code emitter
This patch implements the equivalent change to r182091/r182092
in the old-style code emitter.  Instead of having two separate
16-bit immediate encoding routines depending on the instruction,
this patch introduces a single encoder that checks the machine
operand flags to decide whether the low or high half of a
symbol address is required.

Since now both encoders make no further distinction between
"symbolLo" and "symbolHi", the .td operand can now use a
single getS16ImmEncoding method.

Tested by running the old-style JIT tests on 32-bit Linux.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182097 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-17 14:14:12 +00:00
Ulrich Weigand
e152eac63e [PowerPC] Merge/rename PPC fixup types
Now that fixup_ppc_ha16 and fixup_ppc_lo16 are being treated exactly
the same everywhere, it no longer makes sense to have two fixup types.

This patch merges them both into a single type fixup_ppc_half16,
and renames fixup_ppc_lo16_ds to fixup_ppc_half16ds for consistency.
(The half16 and half16ds names are taken from the description of
relocation types in the PowerPC ABI.)

No change in code generation expected.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182092 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-17 12:37:21 +00:00
Ulrich Weigand
c299ad32c8 [PowerPC] Fix processing of ha16/lo16 fixups
The current PowerPC MC back end distinguishes between fixup_ppc_ha16
and fixup_ppc_lo16, which are determined by the instruction the fixup
applies to, and uses this distinction to decide whether a fixup ought
to resolve to the high or the low part of a symbol address.

This isn't quite correct, however.  It is valid (if unusual) assembler
to use, e.g.
  li 1, symbol@ha
or
  lis 1, symbol@l
Whether the high or the low part of the address is used depends solely
on the @ suffix, not on the instruction.

In addition, both
  li 1, symbol
and
  lis 1, symbol
are valid, assuming the symbol address fits into 16 bits; again, both
will then refer to the actual symbol value (so li will load the value
itself, while lis will load the value shifted by 16).


To fix this, two places need to be adapted.  If the fixup cannot be
resolved at assembler time, a relocation needs to be emitted via
PPCELFObjectWriter::getRelocType.  This routine already looks at
the VK_ type to determine the relocation.  The only problem is that
it will reject any _LO modifier in a ha16 fixup and vice versa.  This
is simply incorrect; any of those modifiers ought to be accepted
for either fixup type.

If the fixup *can* be resolved at assembler time, adjustFixupValue
currently selects the high bits of the symbol value if the fixup
type is ha16.  Again, this is incorrect; see the above example
  lis 1, symbol

Now, in theory we'd have to respect a VK_ modifier here.  However,
in fact common code never even attempts to resolve symbol references
using any nontrivial VK_ modifier at assembler time; it will always
fall back to emitting a reloc and letting the linker handle it.

If this ever changes, presumably there'd have to be a target callback
to resolve VK_ modifiers.  We'd then have to handle @ha etc. there.
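
A hedged sketch of the value selection being described (the enum and helper
below are invented for illustration; the real code works in terms of the
VK_PPC_* variant kinds): which half of the symbol value goes into the 16-bit
field is chosen by the @ modifier alone.

  #include <cstdint>

  enum class Modifier { None, Lo, Ha };  // stand-ins for the VK_PPC_* kinds

  static uint64_t selectHalf(uint64_t Value, Modifier M) {
    switch (M) {
    case Modifier::None:                      // li/lis r, symbol: the value is
      return Value;                           // used as-is (must fit 16 bits)
    case Modifier::Lo:                        // symbol@l: low 16 bits
      return Value & 0xffff;
    case Modifier::Ha:                        // symbol@ha: high 16 bits, adjusted
      return ((Value + 0x8000) >> 16) & 0xffff;  // for a signed low part
    }
    return 0;
  }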



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182091 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-17 12:36:29 +00:00
Rafael Espindola
6b67ffd68b Remove addFrameMove.
Now that we have good testing, remove addFrameMove and create cfi
instructions directly.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182052 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-16 21:02:15 +00:00
Hal Finkel
c482454e3c Create an new preheader in PPCCTRLoops to avoid counter register clobbers
Some IR-level instructions (such as FP <-> i64 conversions) are not chained
w.r.t. the mtctr intrinsic and yet may become function calls that clobber the
counter register. At the selection-DAG level, these might be reordered with the
mtctr intrinsic causing miscompiles. To avoid this situation, if an existing
preheader has instructions that might use the counter register, create a new
preheader for the mtctr intrinsic. This extra block will be remerged with the
old preheader at the MI level, but will prevent unwanted reordering at the
selection-DAG level.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182045 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-16 19:58:38 +00:00
Ulrich Weigand
347a5079e1 [PowerPC] Use true offset value in "memrix" machine operands
This is the second part of the change to always return "true"
offset values from getPreIndexedAddressParts, tackling the
case of "memrix" type operands.

This is about instructions like LD/STD that only have a 14-bit
field to encode immediate offsets, which are implicitly extended
by two zero bits by the machine, so that in effect we can access
16-bit offsets as long as they are a multiple of 4.

The PowerPC back end currently handles such instructions by
carrying the 14-bit value (as it will get encoded into the
actual machine instructions) in the machine operand fields
for such instructions.  This means that those values are
in fact not the true offset, but rather the offset divided
by 4 (and then truncated to an unsigned 14-bit value).

Like in the case fixed in r182012, this makes common code
operations on such offset values not work as expected.
Furthermore, there doesn't really appear to be any strong
reason why we should encode machine operands this way.

This patch therefore changes the encoding of "memrix" type
machine operands to simply contain the "true" offset value
as a signed immediate value, while enforcing the rules that
it must fit in a 16-bit signed value and must also be a
multiple of 4.

This change must be made simultaneously in all places that
access machine operands of this type.  However, just about
all those changes make the code simpler; in many cases we
can now just share the same code for memri and memrix
operands.
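
A minimal sketch of the new invariant, using hypothetical helper names: the
machine operand holds the true byte offset, validity requires a signed
16-bit value that is a multiple of 4, and only the instruction encoder
divides by four to form the 14-bit DS field.

  #include <cstdint>

  static bool isValidMemRIXOffset(int64_t Offset) {
    return Offset >= -32768 && Offset <= 32767 && (Offset & 3) == 0;
  }

  static uint16_t encodeDSField(int64_t Offset) {
    // 14-bit DS field as it would appear in the instruction word (offset / 4).
    return static_cast<uint16_t>((Offset / 4) & 0x3fff);
  }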



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182032 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-16 17:58:02 +00:00
Hal Finkel
2a5e8c328e PPC32 cannot form counter loops around i64 FP conversions
On PPC32, i64 FP conversions are implemented using runtime calls (which clobber
the counter register). These must be excluded.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182023 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-16 16:52:41 +00:00
Ulrich Weigand
f0ef882828 [PowerPC] Report true displacement value from getPreIndexedAddressParts
DAGCombiner::CombineToPreIndexedLoadStore calls a target routine to
decompose a memory address into a base/offset pair.  It expects the
offset (if constant) to be the true displacement value in order to
perform optional additional optimizations; in particular, to convert
other uses of the original pointer into uses of the new base pointer
after pre-increment.

The PowerPC implementation of getPreIndexedAddressParts, however,
simply calls SelectAddressRegImm, which returns a TargetConstant.
This value is appropriate for encoding into the instruction, but
it is not always usable as the true displacement value:

- Its type is always MVT::i32, even on 64-bit, where addresses
  ought to be i64 ... this causes the optimization to simply
  always fail on 64-bit due to this line in DAGCombiner:

      // FIXME: In some cases, we can be smarter about this.
      if (Op1.getValueType() != Offset.getValueType()) {

- Its value is truncated to an unsigned 16-bit value if negative.
  This causes the above optimization to generate wrong code.

This patch fixes both problems by simply returning the true
displacement value (in its original type).  This doesn't
affect any other user of the displacement.
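
To make the second problem concrete, here is a tiny stand-alone
illustration (not LLVM code) of what truncating a negative displacement to
an unsigned 16-bit value does:

  #include <cstdint>
  #include <iostream>

  int main() {
    int64_t TrueOffset = -8;                             // e.g. a pre-decrement
    auto Truncated = static_cast<uint16_t>(TrueOffset);  // what the old code
                                                         // effectively carried
    std::cout << "true: " << TrueOffset
              << "  truncated: " << Truncated << "\n";   // -8 vs 65528
  }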



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182012 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-16 14:53:05 +00:00
Rafael Espindola
ec7f4231cb Removed dead code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181975 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-16 03:34:58 +00:00
Hal Finkel
f1e7ea43aa undef setjmp in PPCCTRLoops
Trying to unbreak the VS build by copying some undef code from
Utils/LowerInvoke.cpp.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181938 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-15 22:20:24 +00:00
Hal Finkel
b1fd3cd78f Implement PPC counter loops as a late IR-level pass
The old PPCCTRLoops pass, like the Hexagon pass version from which it was
derived, could only handle some simple loops in canonical form. We cannot
directly adapt the new Hexagon hardware loops pass, however, because the
Hexagon pass contains a fundamental assumption that non-constant-trip-count
loops will contain a guard, and this is not always true (the result being that
incorrect negative counts can be generated). With this commit, we replace the
pass with a late IR-level pass which makes use of SE to calculate the
backedge-taken counts and safely generate the loop-count expressions (including
any necessary max() parts). This IR level pass inserts custom intrinsics that
are lowered into the desired decrement-and-branch instructions.

The most fragile part of this new implementation is that interfering uses of
the counter register must be detected on the IR level (and, on PPC, this also
includes any indirect branches in addition to function calls). Also, to make
all of this work, we need a variant of the mtctr instruction that is marked
as having side effects. Without this, machine-code level CSE, DCE, etc.
illegally transform the resulting code. Hopefully, this can be improved
in the future.

This new pass is smaller than the original (and much smaller than the new
Hexagon hardware loops pass), and can handle many additional cases correctly.
In addition, the preheader-creation code has been copied from LoopSimplify, and
after we decide on where it belongs, this code will be refactored so that it
can be explicitly shared (making this implementation even smaller).

The new test-case files ctrloop-{le,lt,ne}.ll have been adapted from tests for
the new Hexagon pass. There are a few classes of loops that this pass does not
transform (noted by FIXMEs in the files), but these deficiencies can be
addressed within the SE infrastructure (thus helping many other passes as well).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181927 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-15 21:37:41 +00:00
Rafael Espindola
4ef61f2ad4 Cleanup relocation sorting for ELF.
We want the order to be deterministic on all platforms. NAKAMURA Takumi
fixed that in r181864. This patch is just two small cleanups:

* Move the function to the cpp file. It is only passed to array_pod_sort.
* Remove the ppc implementation which is now redundant

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181910 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-15 18:22:01 +00:00
NAKAMURA Takumi
9d86f9cc3a PPCISelLowering.h: Escape \@ in comments. [-Wdocumentation]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181907 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-15 18:01:35 +00:00
NAKAMURA Takumi
8108a80677 Whitespace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181906 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-15 18:01:28 +00:00
Ulrich Weigand
9122396a4d [PowerPC] Remove need for adjustFixupOffst hack
Now that applyFixup understands differently-sized fixups, we can define
fixup_ppc_lo16/fixup_ppc_lo16_ds/fixup_ppc_ha16 to properly be 2-byte
fixups, applied at an offset of 2 relative to the start of the 
instruction text.

This has the benefit that if we actually need to generate a real
relocation record, its address will come out correctly automatically,
without having to fiddle with the offset in adjustFixupOffset.

Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181894 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-15 15:07:06 +00:00
Ulrich Weigand
b1cf8de85a [PowerPC] Correctly handle fixups of other than 4 byte size
The PPCAsmBackend::applyFixup routine handles the case where a
fixup can be resolved within the same object file.  However,
this routine is currently hard-coded to assume the size of
any fixup is always exactly 4 bytes.

This is sort of correct for fixups on instruction text, even
though it only works because several of what really would be
2-byte fixups are presented as 4-byte fixups instead (requiring
another hack in PPCELFObjectWriter::adjustFixupOffset to clean
it up).

However, this assumption breaks down completely for fixups
on data, which legitimately can be of any size (1, 2, 4, or 8).

This patch makes applyFixup aware of fixups of varying sizes,
introducing a new helper routine getFixupKindNumBytes (along
the lines of what the ARM back end does).  Note that in order
to handle fixups of size 8, we also need to fix the return type
of adjustFixupValue to uint64_t to avoid truncation.

Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.
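
A sketch of the kind of size table such a helper amounts to; the
enumerators below are local stand-ins loosely named after fixup kinds
mentioned in this log, not the actual PPCFixupKinds definitions.

  enum FixupKind { FK_Data_1, FK_Data_2, FK_Data_4, FK_Data_8,
                   fixup_ppc_half16, fixup_ppc_half16ds, fixup_ppc_brcond14 };

  static unsigned getFixupKindNumBytes(FixupKind Kind) {
    switch (Kind) {
    case FK_Data_1:           return 1;   // data fixups can legitimately be
    case FK_Data_2:           return 2;   // 1, 2, 4, or 8 bytes
    case FK_Data_4:           return 4;
    case FK_Data_8:           return 8;
    case fixup_ppc_half16:                // 16-bit immediate halves
    case fixup_ppc_half16ds:  return 2;
    case fixup_ppc_brcond14:  return 4;   // branch fixups patch a full word
    }
    return 0;
  }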



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181891 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-15 15:01:46 +00:00
Bill Schmidt
5bbdb19041 Implement the PowerPC system call (sc) instruction.
Instruction added at request of Roman Divacky.  Tested via asm-parser.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181821 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-14 19:35:45 +00:00
Bill Schmidt
ded53bf4dd PPC32: Fix stack collision between FP and CR save areas.
The changes to CR spill handling missed a case for 32-bit PowerPC.
The code in PPCFrameLowering::processFunctionBeforeFrameFinalized()
checks whether CR spill has occurred using a flag in the function
info.  This flag is only set by storeRegToStackSlot and
loadRegFromStackSlot.  spillCalleeSavedRegisters does not call
storeRegToStackSlot, but instead produces MI directly.  Thus we don't
see the CR is spilled when assigning frame offsets, and the CR spill
ends up colliding with some other location (generally the FP slot).

This patch sets the flag in spillCalleeSavedRegisters for PPC32 so
that the CR spill is properly detected and gets its own slot in the
stack frame.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181800 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-14 16:08:32 +00:00
Bill Schmidt
59b078fc56 Fix goofy commentary in PPCTargetObjectFile.cpp.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181725 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-13 19:40:36 +00:00
Bill Schmidt
240b9b6078 PPC64: Constant initializers with dynamic relocations go in .data.rel.ro.
This fixes warning messages observed in the oggenc application test in
projects/test-suite.  Special handling is needed for the 64-bit
PowerPC SVR4 ABI when a constant is initialized with a pointer to a
function in a shared library.  Because a function address is
implemented as the address of a function descriptor, the use of copy
relocations can lead to problems with initialization.  GNU ld
therefore replaces copy relocations with dynamic relocations to be
resolved by the dynamic linker.  This means the constant cannot reside
in the read-only data section, but instead belongs in .data.rel.ro,
which is designed for constants containing dynamic relocations.

The implementation creates a class PPC64LinuxTargetObjectFile
inheriting from TargetLoweringObjectFileELF, which behaves like its
parent except to place constants of this sort into .data.rel.ro.

The test case is reduced from the oggenc application.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181723 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-13 19:34:37 +00:00
Rafael Espindola
4a971705bc Remove the MachineMove class.
It was just a less powerful and more confusing version of
MCCFIInstruction. A side effect is that, since MCCFIInstruction uses
dwarf register numbers, calls to getDwarfRegNum are pushed out, which
should allow further simplifications.

I left the MachineModuleInfo::addFrameMove interface unchanged since
this patch was already fairly big.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181680 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-13 01:16:13 +00:00
Rafael Espindola
d84ccfaf50 Change getFrameMoves to return a const reference.
To add a frame move there is now a dedicated addFrameMove which also takes
care of constructing the move itself.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181657 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-11 02:38:11 +00:00
Rafael Espindola
6e53180db1 Remove unused argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181618 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-10 18:16:59 +00:00
Roman Divacky
fd94f0ab35 Remove unused isLegalAddressImmediate() method.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181452 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-08 17:51:39 +00:00
Ulrich Weigand
a3967b6844 [PowerPC] Fix regression in generating @ha/@l relocs
The patch I committed as revision 167864 introduced a regression that
causes LLVM to no longer generate appropriate relocs for @ha/@l symbol
references (but fail an assertion instead).

This is fixed here by re-enabling support for the VK_PPC_GAS_HA16/
VK_PPC_GAS_LO16 variant kinds (and their Darwin variants) in
PPCELFObjectWriter.cpp.

Tested by running projects/test-suite in -m32 mode with the integrated
assembler forced on.  A standalone test case will be committed shortly
as well.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181450 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-08 17:50:07 +00:00
Bill Schmidt
a7f2ce85e5 Fix handling of anonymous aggregate parameters for powerpc*-apple-darwin8.
This fixes bug 15821 similarly to the powerpc64-linux fix for bug 14779.

Patch by David Fang.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181449 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-08 17:22:33 +00:00
Hal Finkel
b45eb9fd27 PPCInstrInfo::optimizeCompareInstr should not optimize FP compares
The floating-point record forms on PPC don't set the condition register bits
based on a comparison with zero (like the integer record forms do), but rather
based on the exception status bits.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181423 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-08 12:16:14 +00:00
Hal Finkel
8a88cadaed Cleanup PPCInstrInfo::optimizeCompareInstr
Implement suggestions by Bill Schmidt in post-commit review. No functionality
change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181338 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-07 17:49:55 +00:00
Ulrich Weigand
7d55b6bb1a [PowerPC] Fix memory corruption in AsmParser
As pointed out by Evgeniy Stepanov, assigning a std::string temporary
to a StringRef is not a good idea.  Rework MatchRegisterName to avoid
using the .lower routine.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181192 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-06 11:16:57 +00:00
Ulrich Weigand
fcdfd5a7ff [PowerPC] Avoid using '$' in generated assembler code
PowerPC assemblers are supposed to support a stand-alone '$' symbol
as an alternative of '.' to refer to the current PC.  This does not
work in the LLVM assembler parser yet.

To avoid bootstrap failures when using the LLVM assembler as system
assembler, this patch modifies the assembler source code generated
by LLVM to avoid using '$' (and simply use '.' instead).



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181054 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-03 19:53:04 +00:00
Ulrich Weigand
8e4ba8f7b1 [PowerPC] Add some Book II instructions to AsmParser
This patch adds a couple of Book II instructions (isync, icbi) to the
PowerPC assembler parser.  These are needed when bootstrapping clang
with the integrated assembler forced on, because they are used in
inline asm statements in the code base.

The test case adds the full list of Book II storage control instructions,
including associated extended mnemonics.  Again, those that are not yet
supported are marked as FIXME.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181052 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-03 19:51:09 +00:00
Ulrich Weigand
16adfdb2e6 [PowerPC] Support extended mnemonics in AsmParser
This patch adds infrastructure to support extended mnemonics in the
PowerPC assembler parser.  It adds support specifically for those
extended mnemonics that LLVM will itself generate.

The test case lists *all* extended mnemonics according to the
PowerPC ISA v2.06 Book I, but marks those not yet supported
as FIXME.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181051 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-03 19:50:27 +00:00
Ulrich Weigand
5e220753ff [PowerPC] Add assembler parser
This adds assembler parser support to the PowerPC back end.

The parser will run for any powerpc-*-* and powerpc64-*-* triples,
but was tested only on 64-bit Linux.  The supported syntax is
intended to be compatible with the GNU assembler.

The parser does not yet support all PowerPC instructions, but
it does support anything that is generated by LLVM itself.
There is no support for testing restricted instruction sets yet,
i.e. the parser will always accept any instructions it knows,
no matter what feature flags are given.

Instruction operands will be checked for validity and errors
generated.  (Error handling in general could still be improved.)

The patch adds a number of test cases to verify instruction
and operand encodings.  The tests currently cover all instructions
from the following PowerPC ISA v2.06 Book I facilities:
Branch, Fixed-Point, Floating-Point, and Vector.
Note that a number of these instructions are not yet supported
by the back end; they are marked with FIXME.

A number of follow-on check-ins will add extra features.  When
they are all included, LLVM passes all tests (including bootstrap)
when using clang -cc1as as the system assembler.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181050 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-03 19:49:39 +00:00
Rafael Espindola
5b0ce3570c Make all darwin ppc stubs local.
This fixes pr15763.
Patch by David Fang.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180657 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-27 00:43:16 +00:00
Ulrich Weigand
a3acc2b6cf PowerPC: Use RegisterOperand instead of RegisterClass operands
In the default PowerPC assembler syntax, registers are specified simply
by number, so they cannot be distinguished from immediate values (without
looking at the opcode).  This means that the default operand matching logic
for the asm parser does not work, and we need to specify custom matchers.
Since those can only be specified with RegisterOperand classes and not
directly on the RegisterClass, all instructions patterns used by the asm
parser need to use a RegisterOperand (instead of a RegisterClass) for
all their register operands.

This patch adds one RegisterOperand for each RegisterClass, using the
same name as the class, just in lower case, and updates all instruction
patterns to use RegisterOperand instead of RegisterClass operands.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180611 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-26 16:53:15 +00:00
Ulrich Weigand
069a4a9583 PowerPC: Fix encoding of vsubcuw and vsum4sbs instructions
When testing the asm parser, I noticed wrong encodings for the
above instructions (wrong sub-opcodes).

Tests will be added together with the asm parser.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180608 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-26 15:39:57 +00:00
Ulrich Weigand
0c0a1be9c5 PowerPC: Fix encoding of stfsu and stfdu instructions
When testing the asm parser, I noticed wrong encodings for the
above instructions (wrong sub-opcodes).  Note that apparently
the compiler currently never generates pre-inc instructions
for floating point types for some reason ...

Tests will be added together with the asm parser.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180607 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-26 15:39:40 +00:00
Ulrich Weigand
1adc97c901 PowerPC: Fix encoding of rldimi and rldcl instructions
When testing the asm parser, I noticed wrong encodings for the
above instructions (wrong operand name in rldimi, wrong form
and sub-opcode for rldcl).

Tests will be added together with the asm parser.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180606 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-26 15:39:12 +00:00
Ulrich Weigand
8ade909308 PowerPC: Support PC-relative fixup_ppc_brcond14.
When testing the asm parser, I ran into an error when using a conditional
branch to an external symbol (this doesn't occur in compiler-generated
code) due to missing support in PPCELFObjectWriter::getRelocTypeInner.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180605 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-26 15:38:30 +00:00
Bill Schmidt
fa799112dd Change commentary for PowerPC Boolean vector contents.
No functional change intended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180131 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-23 18:49:44 +00:00
Owen Anderson
ed5707baf9 DAGCombine should not aggressively fold SEXT(VSETCC(...)) into a wider VSETCC without first checking the target's vector boolean contents.
This exposed an issue with PowerPC AltiVec where it appears it was setting the wrong vector boolean contents.  The included change
fixes the PowerPC tests, and was OK'd by Hal.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180129 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-23 18:09:28 +00:00
Tim Northover
6265d5c91a Remove unused MEMBARRIER DAG node; it's been replaced by ATOMIC_FENCE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179939 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-20 12:32:17 +00:00
Hal Finkel
abe64dc6f7 Move PPC getSwappedPredicate for reuse
The getSwappedPredicate function can be used in other places (such as in
improvements to the PPCCTRLoops pass). Instead of trapping it as a static
function in PPCInstrInfo, move it into PPCPredicates with other
predicate-related things.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179926 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-20 05:16:26 +00:00
Michael Liao
2a8bea7a8e ArrayRefize getMachineNode(). No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179901 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-19 22:22:57 +00:00
Hal Finkel
87c1e42be7 Fix PPC optimizeCompareInstr swapped-sub argument handling
When matching a compare with a subtract where the arguments of the compare are
swapped w.r.t. the arguments of the subtract, we need to negate the predicates
(or CR bit indices) of the users. This, however, is not the same as inverting
the predicate (negating LT -> GT, but inverting LT -> GE, for example). The ARM
backend seems to do this correctly, but when I adapted the code for the PPC
backend, I introduced an error in this logic.

Comparison optimization is now enabled again by default.
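
The distinction can be made concrete with a small stand-alone sketch (the
enumerators mirror the PPC::Predicate names used in this log but are not
the real type): swapping exchanges LT and GT, while inverting negates the
predicate, turning LT into GE.

  enum Pred { LT, LE, EQ, GE, GT, NE };

  // Swapping: cmp(a, b) with predicate P is equivalent to cmp(b, a) with
  // swap(P).  EQ and NE are symmetric.
  static Pred getSwappedPred(Pred P) {
    switch (P) {
    case LT: return GT;  case GT: return LT;
    case LE: return GE;  case GE: return LE;
    case EQ: return EQ;  case NE: return NE;
    }
    return P;
  }

  // Inverting: the logical negation of the predicate.
  static Pred getInvertedPred(Pred P) {
    switch (P) {
    case LT: return GE;  case GE: return LT;
    case GT: return LE;  case LE: return GT;
    case EQ: return NE;  case NE: return EQ;
    }
    return P;
  }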

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179899 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-19 22:08:38 +00:00
Hal Finkel
4029c3feed Disable PPC comparison optimization by default
This seems to cause a stage-2 LLVM compile failure (by crashing TableGen); so
I'm disabling this for now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179807 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-18 22:54:25 +00:00
Hal Finkel
860c08cad5 Implement optimizeCompareInstr for PPC
Many PPC instructions have a so-called 'record form' which stores to a specific
condition register the result of comparing the result of the instruction with
zero (always as a signed comparison). For integer operations on PPC64, this is
always a 64-bit comparison.

This implementation is derived from the implementation in the ARM backend;
there are some differences because PPC condition registers are allocatable
virtual registers (although the record forms always use a specific one), and we
look for a matching subtraction instruction after the compare (but before the
first use) in addition to before it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179802 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-18 22:15:08 +00:00
Peter Collingbourne
df39be6cb4 Add support for subsections to the ELF assembler. Fixes PR8717.
Differential Revision: http://llvm-reviews.chandlerc.com/D598

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179725 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-17 21:18:16 +00:00
Ulrich Weigand
1fb54cfb0a PowerPC: Mark some more patterns as isCodeGenOnly.
A couple of recently introduced conditional branch patterns
also need to be marked as isCodeGenOnly since they cannot
be handled by the asm parser.

No change in generated code.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179690 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-17 17:19:05 +00:00
Hal Finkel
00e86ad167 Mark all PPC comparison instructions as not having side effects
Now that the CR spilling issues have been resolved, we can remove the
unmodeled-side-effect attributes from the comparison instructions (and also
mark them as isCompare). By allowing these, by default, to have unmodeled side
effects, we were hiding problems with CR spilling; but everything seems much
happier now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179502 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-15 02:37:46 +00:00
Hal Finkel
fb6fe0aea2 Fix PPC64 CR spill location for callee-saved registers
This fixes an ABI bug for non-Darwin PPC64. For the callee-saved condition
registers, the spill location is specified relative to the stack pointer (SP +
8). However, this is not relative to the SP after the new stack frame is
established, but instead relative to the caller's stack pointer (it is stored
into the linkage area of the parent's stack frame).

So, like with the link register, we don't directly spill the CRs with other
callee-saved registers, but just mark them to be spilled during prologue
generation.

In practice, this reverts r179457 for PPC64 (but leaves it in place for PPC32).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179500 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-15 02:07:05 +00:00
Hal Finkel
63496f66c5 Mark all PPC CR registers to be spilled as live-in and tag MFCR appropriately
Leaving MFCR as having unmodeled side effects is not enough to prevent
unwanted instruction reordering post-RA. We could probably apply a stronger
barrier attribute, but there is a better way: Add all (not just the first) CR
to be spilled as live-in to the entry block, and add all CRs to the MFCR
instruction as implicitly killed.

Unfortunately, I don't have a small test case.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179465 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-13 23:06:15 +00:00
Hal Finkel
b99c995825 Spill and restore PPC CR registers using the FP when we have one
For functions that need to spill CRs, and have dynamic stack allocations, the
value of the SP during the restore is not what it was during the save, and so
we need to use the FP in these cases (as for all of the other spills and
restores, but the CR restore has a special code path because its reserved slot,
like the link register, is specified directly relative to the adjusted SP).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179457 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-13 08:09:20 +00:00
Hal Finkel
598574695b PPC: Remove (broken) nested implicit definition lists
TableGen will not combine nested list 'let' bindings into a single list, and
instead uses only the inner scope. As a result, several instruction definitions
were missing implicit register defs that were in outer scopes. This de-nests
these scopes and makes all instructions have only one let binding which sets
implicit register definitions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179392 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-12 18:17:57 +00:00
Hal Finkel
81b2fd5819 Add a comment about the PPC Interpretation64Bit bit
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179391 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-12 18:17:38 +00:00
Hal Finkel
171a8adf31 Add PPC instruction record forms and associated query functions
This is prep. work for the implementation of optimizeCompare. Many PPC
instructions have 'record' forms (in almost all cases, this means that the RC
bit is set) that cause the result of the instruction to be compared with zero,
and the result of that comparison saved in a predefined condition register. In
order to add the record forms of the instructions without too much
copy-and-paste, the relevant functions have been refactored into multiclasses
which define both the record and normal forms.

Also, two TableGen-generated mapping functions have been added which allow
querying the instruction code for the record form given the normal form (and
vice versa).

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179356 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-12 02:18:09 +00:00
Hal Finkel
4b04029481 Make PPCInstrInfo::isPredicated always return false
Because of how predication is implemented on PPC (only for branches), I think
that this is the right thing to do.  No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179252 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-11 01:23:34 +00:00
Hal Finkel
da47e17a6f PPC: Don't predicate a diamond with two counter decrements
I've not seen this happen in practice, and probably can't until we start
allowing decrement-counter-based conditional branches to be double predicated,
but just in case, don't allow predication of a diamond in which both sides have
ctr-defining branches. Even though the branching behavior of these can be
predicated, the counter-decrementing behavior cannot be.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179199 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 18:30:16 +00:00
Hal Finkel
4e3172867d Cleanup PPCInstrInfo::DefinesPredicate
Implement suggestions made by Bill Schmidt in post-commit review. Thanks!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179162 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 07:17:47 +00:00
Hal Finkel
90dd7fd167 PPC: Prep for if conversion of bctr[l]
This adds in-principle support for if-converting the bctr[l] instructions.
These instructions are used for indirect branching. It seems, however, that the
current if converter will never actually predicate these. To do so, it would
need the ability to hoist a few setup insts. out of the conditionally-executed
block. For example, code like this:
  void foo(int a, int (*bar)()) { if (a != 0) bar(); }
becomes:
        ...
        beq 0, .LBB0_2
        std 2, 40(1)
        mr 12, 4
        ld 3, 0(4)
        ld 11, 16(4)
        ld 2, 8(4)
        mtctr 3
        bctrl
        ld 2, 40(1)
.LBB0_2:
        ...
and it would be safe to do all of this unconditionally with a predicated
beqctrl instruction.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179156 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 06:42:34 +00:00
Hal Finkel
7eb0d8148e Allow PPC B and BLR to be if-converted into some predicated forms
This enables us to form predicated branches (which are the same conditional
branches we had before) and also a larger set of predicated returns (including
instructions like bdnzlr which is a conditional return and loop-counter
decrement all in one).

At the moment, if conversion does not capture all possible opportunities. A
simple example is provided in early-ret2.ll, where if conversion forms one
predicated return, and then the PPCEarlyReturn pass picks up the other one. So,
at least for now, we'll keep both mechanisms.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179134 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 22:58:37 +00:00
Hal Finkel
13049aef8a Cleanup PPCEarlyReturn
Some general cleanup and only scan the end of a BB for branches (once we're
done with the terminators and debug values, then there should not be any other
branches). These address post-commit review suggestions by Bill Schmidt.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179112 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 18:25:18 +00:00
Hal Finkel
f6f8198d85 Use virtual base registers on PPC
On PowerPC, non-vector loads and stores have r+i forms; however, in functions
with large stack frames these were not being used to access slots far from the
stack pointer because such slots were out of range for the signed 16-bit
immediate offset field. This increases register pressure because we need a
separate register for each offset (when the r+r form is used). By enabling
virtual base registers, we can deal with large stack frames without unduly
increasing register pressure.
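
A back-of-the-envelope expression of the threshold involved (purely
illustrative, not the back-end code):

  #include <cstdint>

  // The r+i (D-form) loads and stores carry a signed 16-bit byte offset, so
  // a slot farther than this from the base register needs either an r+r form
  // or a virtual base register.
  static bool offsetFitsDForm(int64_t Offset) {
    return Offset >= -32768 && Offset <= 32767;
  }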

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179105 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 17:27:09 +00:00
Hal Finkel
5ee67e8e76 Generate PPC early conditional returns
PowerPC has a conditional branch to the link register (return) instruction: BCLR.
This should be used any time when we'd otherwise have a conditional branch to a
return. This adds a small pass, PPCEarlyReturn, which runs just prior to the
branch selection pass (and, importantly, after block placement) to generate
these conditional returns when possible. It will also eliminate unconditional
branches to returns (these happen rarely; most of the time these have already
been tail duplicated by the time PPCEarlyReturn is invoked). This is a nice
optimization for small functions that do not maintain a stack frame.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179026 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-08 16:24:03 +00:00
Hal Finkel
59889f7f49 Cleanup and improve PPC fsel generation
First, we should not cheat: fsel-based lowering of select_cc is a
finite-math-only optimization (the ISA manual, section F.3 of v2.06, makes
this clear, as does a note in our own README).

This also adds fsel-based lowering of EQ and NE condition codes. As it turned
out, fsel generation was covered by a grand total of zero regression test
cases. I've added some test cases to cover the existing behavior (which is now
finite-math only), as well as the new EQ cases.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179000 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-07 22:11:09 +00:00
Hal Finkel
946a811ef1 PPC rotate instructions don't have unmodeled side effects
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178982 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-07 15:06:53 +00:00
Hal Finkel
f0e3ca012b Most PPC M[TF]CR instructions do not have side effects
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178978 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-07 14:33:13 +00:00
Hal Finkel
3aea7cb7b2 PPC pre-increment load instructions do not have side effects
A few were missed in r178972.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178973 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-07 06:30:47 +00:00
Hal Finkel
fa1d102a05 PPC pre-increment load instructions do not have side effects
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178972 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-07 05:46:58 +00:00
Hal Finkel
aecbe24268 PPC MCRF instruction does not have side effects
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178971 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-07 05:16:57 +00:00
Hal Finkel
fa1cac2a1e PPC FMR instruction does not have side effects
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178970 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-07 04:56:16 +00:00
Hal Finkel
839b909653 Implement PPCInstrInfo::FoldImmediate
There are certain PPC instructions into which we can fold a zero immediate
operand. We can detect such cases by looking at the register class required
by the using operand (so long as it is not otherwise constrained).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178961 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-06 19:30:30 +00:00
Hal Finkel
012ffd5605 PPC ISEL is a select and never has side effects
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178960 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-06 19:30:28 +00:00
Hal Finkel
ff56d1a201 Enable early if conversion on PPC
On cores for which we know the misprediction penalty, and which have the
isel instruction, we can profitably perform early if conversion.
This enables us to replace some small branch sequences with selects
and avoid the potential stalls from mispredicting the branches.

Enabling this feature required implementing canInsertSelect and
insertSelect in PPCInstrInfo; isel code in PPCISelLowering was
refactored to use these functions as well.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178926 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 23:29:01 +00:00
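A small illustrative C++ function (not taken from any test case) of the shape
this now optimizes: with isel available, the diamond below can become a
compare plus an isel rather than a conditional branch, avoiding a potential
misprediction stall.

  // Before: compare-and-branch over a tiny block.
  // After early if-conversion + isel: a branch-free select.
  int clamp_to_zero(int x) {
    int r = x;
    if (x < 0)
      r = 0;
    return r;
  }
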
Hal Finkel
de80951ae9 Correct the PPC A2 misprediction penalty
The manual states that there is a minimum of 13 cycles from when the
mispredicted branch is issued to when the correct branch target is
issued.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178925 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 23:28:58 +00:00
Hal Finkel
1abaf907b6 Add a SchedMachineModel for the PPC G5
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178850 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 05:49:18 +00:00
Hal Finkel
575e9229bd Add a SchedMachineModel for the PPC A2
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178848 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 05:34:08 +00:00
Arnold Schwaighofer
6bf4f67641 CostModel: Add parameter to instruction cost to further classify operand values
On certain architectures we can support efficient vectorized versions of
instructions if the operand value is uniform (splat) or a constant scalar.
An example of this is a vector shift on x86.

We can efficiently support

for (i = 0 ; i < ; i += 4)
  w[0:3] = v[0:3] << <2, 2, 2, 2>

but not

for (i = 0; i < ; i += 4)
  w[0:3] = v[0:3] << x[0:3]

This patch adds a parameter to getArithmeticInstrCost to further qualify operand
values as uniform or uniform constant.

Targets can then choose to return a different cost for instructions with such
operand values.

A follow-up commit will test this feature on x86.

radar://13576547

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178807 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-04 23:26:21 +00:00
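The two loops from the message, written out as compilable C++ under the
assumption of an explicit trip count n (elided above): the first shift
amount is a uniform constant, the second varies per element, and the new
operand-kind parameter lets a target report different costs for the two.

  void shift_by_constant(int *w, const int *v, int n) {
    for (int i = 0; i < n; ++i)
      w[i] = v[i] << 2;       // uniform constant shift amount
  }

  void shift_by_vector(int *w, const int *v, const int *x, int n) {
    for (int i = 0; i < n; ++i)
      w[i] = v[i] << x[i];    // per-element (non-uniform) shift amount
  }
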
Hal Finkel
caeeb18650 Rename the current PPC BCL definition to BCLalways
BCL is normally a conditional branch-and-link instruction, but has
an unconditional form (which is used in the SjLj code, for example).
To make clear that this BCL instruction definition is specifically
the special unconditional form (which does not meaningfully take
a condition-register input), rename it to BCLalways.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178803 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-04 22:55:54 +00:00
Hal Finkel
7530a9f7d1 PPC: Improve code generation for mixed-precision reciprocal sqrt
The DAGCombine logic that recognized a/sqrt(b) and transformed it into
a multiplication by the reciprocal sqrt did not handle cases where the
sqrt and the division were separated by an fpext or fptrunc.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178801 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-04 22:44:12 +00:00
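An illustrative C++ example of the mixed-precision case now handled: the
sqrt is computed in double while the division is in float, so an fptrunc
separates them in the DAG; under fast-math this can still take the
reciprocal-sqrt path.

  #include <cmath>

  float scaled_rsqrt(float a, double b) {
    return a / static_cast<float>(std::sqrt(b));
  }
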
Hal Finkel
63c32a7a9f Cleanup PPC reciprocal-estimate functionality
Incorporating review feedback from Bill Schmidt on r178617. No functionality
change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178672 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-03 17:44:56 +00:00
Hal Finkel
110e323b61 PPC: Enable FRES and FRSQRTE on the default PPC64 description
I discussed this with Bill Schmidt on IRC, and it was decided that this is a
safe and reasonable default.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178659 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-03 14:40:18 +00:00
Hal Finkel
7d52a41e32 PPC: Add a FIXME regarding the non-working fma+fneg Altivec pattern
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178658 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-03 14:40:16 +00:00
Hal Finkel
d8f8f58476 Remove some obsolete PowerPC/README entries
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178657 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-03 14:25:55 +00:00
Ulrich Weigand
6b9d52eefd More direct types in PowerPC AltiVec intrinsics.
This patch follows up on work done by Bill Schmidt in r178277,
and replaces most of the remaining uses of VRRC in ISEL DAG patterns.

The resulting .inc files are identical except for comments, so
no change in code generation is expected.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178656 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-03 14:08:13 +00:00
Bill Schmidt
cd7a1558ed Fix PR15632: No support for ppcf128 floating-point remainder on PowerPC.
For this we need to use a libcall.  Previously LLVM didn't implement
libcall support for frem, so I've added it in the usual
straightforward manner.  A test case from the bug report is included.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178639 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-03 13:05:44 +00:00
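A minimal C++ reproducer of the shape involved (not the test case from the
bug report): where long double is ppcf128, the remainder below becomes an
frem node that must be lowered to the fmodl libcall.

  #include <cmath>

  long double rem_ppcf128(long double a, long double b) {
    return std::fmod(a, b);   // frem -> fmodl libcall on ppcf128
  }
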
Hal Finkel
2d8f62017d Remove some unsupported-feature comments from PPC.td
These refer to the reciprocal estimate support recently committed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178618 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-03 04:03:58 +00:00
Hal Finkel
827307b95f Use PPC reciprocal estimates with Newton iteration in fast-math mode
When unsafe FP math operations are enabled, we can use the fre[s] and
frsqrte[s] instructions, which generate reciprocal (sqrt) estimates, together
with some Newton iteration, in order to quickly generate floating-point
division and sqrt results. All of these instructions are separately optional,
and so each has its own feature flag (except for the Altivec instructions,
which are covered under the existing Altivec flag). Doing this is not only
faster than using the IEEE-compliant fdiv/fsqrt instructions, but allows these
computations to be pipelined with other computations in order to hide their
overall latency.

I've also added a couple of fnmsub patterns which turned out to be missing
(but are necessary for good code generation of the Newton iterations).
Altivec needs a similar fix, but that will probably be more complicated because
fneg is expanded for Altivec's v4f32.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178617 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-03 04:01:11 +00:00
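As a sketch of the math (textbook Newton-Raphson form, not the backend's
exact DAG expansion): each refinement step roughly doubles the number of
correct bits in the hardware estimate, so a couple of steps after
fre[s]/frsqrte[s] reach the needed precision.

  // One Newton-Raphson step refining an estimate e of 1/sqrt(a).
  static double refine_rsqrt(double a, double e) {
    return e * (1.5 - 0.5 * a * e * e);
  }

  // One Newton-Raphson step refining an estimate e of 1/a.
  static double refine_recip(double a, double e) {
    return e * (2.0 - a * e);
  }
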
Bill Schmidt
debf7d345a Fix PR15630: Replace faulty stdcx. with stwcx.
When doing a partword atomic operation, a lwarx was being paired with
a stdcx. instead of a stwcx. when compiling for a 64-bit target.  The
target has nothing to do with it in this case; we always need a stwcx.

Thanks to Kai Nacke for reporting the problem.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178559 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-02 18:37:08 +00:00
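For context, a tiny illustrative C++ example (not the reporter's code) of a
partword atomic operation: the 16-bit fetch_add below is implemented with a
lwarx/stwcx. loop on the containing word, which is why the word-sized stwcx.
is required even on a 64-bit target.

  #include <atomic>

  short bump(std::atomic<short> &counter) {
    return counter.fetch_add(1);   // lwarx/stwcx. loop on the enclosing word
  }
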
Hal Finkel
2a401959b9 Fix typo in PPCISelLowering
Thanks to Bill Schmidt for finding this in review of r178480.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178521 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-02 03:29:51 +00:00
Hal Finkel
a1646ceb9a Fix a bad assert in PPCTargetLowering
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178489 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-01 18:42:58 +00:00
Hal Finkel
4647919784 Add more PPC floating-point conversion instructions
The P7 and A2 have additional floating-point conversion instructions which
allow a direct two-instruction sequence (plus load/store) to convert from all
combinations (signed/unsigned i32/i64) <--> (float/double) (on previous cores,
only some combinations were directly available).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178480 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-01 17:52:07 +00:00
Hal Finkel
a1f4290ac9 Use ImmToIdxMap.count in PPCRegisterInfo
Code improvement suggested by Jakob (in review of r178450). No functionality
change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178473 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-01 17:02:06 +00:00
Hal Finkel
1fce88313e Add the PPC popcntw instruction
The popcntw instruction is available whenever the popcntd instruction is
available, and performs a separate popcnt on the lower and upper 32-bits.
Ignoring the high-order count, this can be used for the 32-bit input case
(saving on the explicit zero extension otherwise required to use popcntd).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178470 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-01 15:58:15 +00:00
Hal Finkel
f170cc9b2e Treat PPCISD::STFIWX like the memory opcode that it is
PPCISD::STFIWX is really a memory opcode, and so it should come after
FIRST_TARGET_MEMORY_OPCODE, and we should use DAG.getMemIntrinsicNode to create
nodes using it.

No functionality change intended (although there could be optimization benefits
from preserving the MMO information).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178468 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-01 15:37:53 +00:00
Hal Finkel
4345e8040f Cleanup ImmToIdxMap and noImmForm in PPCRegisterInfo
ImmToIdxMap should be a DenseMap (not a std::map) because there
is no ordering requirement. Also, we don't need a separate list
of instructions for noImmForm in eliminateFrameIndex, because this
list is essentially the complement of the keys in ImmToIdxMap.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178450 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-31 14:43:31 +00:00
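A hedged sketch of the data-structure choice (the opcodes and the mapping
here are purely illustrative, not copied from the backend): a DenseMap keyed
by opcode supports the membership test with count(), and no ordered
iteration is ever needed.

  #include "llvm/ADT/DenseMap.h"

  // Illustrative only: map an r+i opcode to its r+r (indexed) counterpart.
  static llvm::DenseMap<unsigned, unsigned> ImmToIdxMap;

  static void addMapping(unsigned ImmOpc, unsigned IdxOpc) {
    ImmToIdxMap[ImmOpc] = IdxOpc;
  }

  static bool hasIndexedForm(unsigned Opc) {
    return ImmToIdxMap.count(Opc) != 0;
  }
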
Hal Finkel
8049ab15e4 Add the PPC lfiwax instruction
This instruction is available on modern PPC64 CPUs, and is now used
to improve the SINT_TO_FP lowering (by eliminating the need for the
separate sign extension instruction and decreasing the amount of
needed stack space).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178446 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-31 10:12:51 +00:00
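A trivial C++ example of the conversion this improves (illustrative): a
signed 32-bit integer converted to double, which previously required an
explicit sign extension and extra stack traffic before the value could be
loaded into an FPR.

  double int_to_double(int x) {
    return static_cast<double>(x);   // SINT_TO_FP, now fed via lfiwax
  }
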
Hal Finkel
9ad0f4907b Cleanup PPC(64) i32 -> float/double conversion
The existing SINT_TO_FP code for i32 -> float/double conversion was disabled
because it relied on broken EXTSW_32/STD_32 instruction definitions. The
original intent had been to enable these 64-bit instructions to be used on CPUs
that support them even in 32-bit mode.  Unfortunately, this form of lying to
the infrastructure was buggy (as explained in the FIXME comment) and had
therefore been disabled.

This re-enables this functionality, using regular DAG nodes, but only when
compiling in 64-bit mode. The old STD_32/EXTSW_32 definitions (which were dead)
are removed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178438 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-31 01:58:02 +00:00
Hal Finkel
0882fd6c4f Implement FRINT lowering on PPC using frin
Like nearbyint, rint can be implemented on PPC using the frin instruction. The
complication comes from the fact that rint needs to set the FE_INEXACT flag
when the result does not equal the input value (and frin does not do that). As
a result, we use a custom inserter which, after the rounding, compares the
rounded value with the original, and if they differ, explicitly sets the XX bit
in the FPSCR register (which corresponds to FE_INEXACT).

Once LLVM has better modeling of the floating-point environment we should be
able to (often) eliminate this extra complexity.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178362 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-29 19:41:55 +00:00
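A C++ model of the contract being preserved (a sketch of the semantics, not
the custom inserter itself; NaN handling is glossed over): rint must raise
FE_INEXACT exactly when rounding changed the value, which is the
post-rounding comparison described above.

  #include <cfenv>
  #include <cmath>

  double rint_model(double x) {
    double r = std::nearbyint(x);   // rounds without raising FE_INEXACT
    if (r != x)                     // NaN inputs ignored in this sketch
      std::feraiseexcept(FE_INEXACT);
    return r;
  }
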
Benjamin Kramer
74a4533a42 Remove the old CodePlacementOpt pass.
It was superseded by MachineBlockPlacement and has been disabled by default since LLVM 3.1.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178349 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-29 17:14:24 +00:00
Hal Finkel
f5d5c43460 Add PPC FP rounding instructions fri[mnpz]
These instructions are available on the P5x (and later) and on the A2. They
implement the standard floating-point rounding operations (floor, trunc, etc.).
One caveat: frin (round to nearest) does not implement "ties to even", and so
is only enabled in fast-math mode.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178337 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-29 08:57:48 +00:00
Hal Finkel
2544f221c5 Only enable 64-bit bswap DAG combines for PPC64
Compiling in 32-bit mode on a P7 would assert after 64-bit DAG combines were
added for bswap with load/store. This is because these combines are really only
valid in 64-bit mode, regardless of the CPU (and this was not being checked).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178286 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-28 20:23:46 +00:00
Hal Finkel
b52980be07 Fix bad indentation in r178276
Thanks to Bill Schmidt for pointing this out!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178280 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-28 19:43:12 +00:00
Bill Schmidt
53774a821d Use direct types in most PowerPC Altivec instructions and patterns.
This follows up Ulrich Weigand's work in PPCInstrInfo.td and
PPCInstr64Bit.td by doing the corresponding work for most of the
Altivec patterns.  I have not been able to do anything for the
following classes of instructions:

(1) Vector logicals.  These don't have corresponding intrinsics and
don't have a single obvious vector type.  So far as I can tell I need
to leave these as VRRC.  Affected instructions are:  VAND, VANDC,
VNOR, VOR, VXOR, V_SET0.

(2) Instructions that make use of vector shuffle.  The selection code
promotes all shuffles to v16i8, so any pattern that matches on a
shuffle is constrained.  I haven't found any way to make the patterns
match on their natural types, so I plan to leave these as VRRC.
Affected instructions are:  VMRG*, VSPLTB, VSPLTH, VSPLTW, VPKUHUM,
VPKUWUM.

No change in behavior is anticipated.




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178277 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-28 19:27:24 +00:00
Hal Finkel
efdd4673d6 Add the PPC64 ldbrx/stdbrx instructions
These are 64-bit load/store with byte-swap, and available on the P7 and the A2.
Like the similar instructions for 16- and 32-bit words, these are matched in the
target DAG-combine phase against load/store-bswap pairs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178276 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-28 19:25:55 +00:00
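An illustrative C++ example of the pattern the target DAG combine now
matches for 64-bit values: a load feeding a byte swap, selectable as a
single ldbrx (and the mirror-image store as stdbrx).

  #include <cstdint>

  uint64_t load_swapped(const uint64_t *p) {
    return __builtin_bswap64(*p);   // load + bswap -> ldbrx
  }
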
Hal Finkel
c53ab4d77f Add the PPC64 popcntd instruction
PPC ISA 2.06 (P7, A2, etc.) has a popcntd instruction. Add this instruction and
tell TTI about it so that popcount-loop recognition will know about it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178233 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-28 13:29:47 +00:00
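A classic population-count loop in C++ (illustrative) of the kind that
popcount-loop recognition, now informed by TTI, can replace with a single
ctpop selected as popcntd:

  #include <cstdint>

  unsigned popcount_loop(uint64_t x) {
    unsigned n = 0;
    while (x) {
      x &= x - 1;   // clear the lowest set bit
      ++n;
    }
    return n;       // recognizable as ctpop -> popcntd
  }
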
Hal Finkel
d957f957ee Cleanup PPC CR-spill kill flags and 32- vs. 64-bit instructions
There were a few places where kill flags were not being set correctly, and
where 32-bit instruction variants were being used with 64-bit registers. After
r178180, this code was being triggered causing llc to assert.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178220 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-28 03:38:16 +00:00
Hal Finkel
d01efc737a Fix typo in PPCInstr64Bit
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178219 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-28 03:38:08 +00:00
Hal Finkel
f25f93b685 Resynchronize isLoadFromStackSlot with LoadRegFromStackSlot (and stores) in PPCInstrInfo
These functions should have the same list of load/store instructions. Now that
all load/store forms have been normalized (to single instructions or pseudos)
they can be resynchronized.

Found by inspection, although hopefully this will improve optimization.  I've
also added some comments.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178180 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 21:21:15 +00:00
Hal Finkel
e915047fed Fix typo (common to both X86 and PPC)
Thanks to Bill Schmidt for pointing this out during code review!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178170 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 19:10:42 +00:00
Hal Finkel
fc80586968 Remove more dead LR-as-GPR PPC code
I had removed similar code a few days ago, but somehow missed this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178169 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 19:10:40 +00:00
Hal Finkel
e77918c355 Remove "gpr0 allocation" from the PPC README TODO list
As Chris pointed out, post r178123, this is now done!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178165 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 18:39:52 +00:00
Hal Finkel
32e12df253 Print PPC ZERO as 0 (not r0) even on Darwin
It seems that the Darwin PPC assembler requires r0 to be written as 0 when it
means 0 (at least in lwarx/stwcx.). Fixes PR15605.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178142 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 13:20:52 +00:00
Hal Finkel
240b7f3324 Allocate r0 on PPC
The R0 register can now be allocated because instructions
that cannot use R0 as a GPR have been appropriately marked.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178123 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 06:52:27 +00:00
Hal Finkel
6375e1b87b Use the PPC no-r0 class on the TOC LD pseudos
The register parameter in these instructions becomes the base register in an
r+i ld instruction (and, thus, cannot be r0).

This is not yet testable because we don't yet allocate r0 (and even then any
test would be very fragile).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178121 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 06:36:55 +00:00
Hal Finkel
ab42ec2586 Apply the no-r0 register class to the PPC SELECT_CC_I[4|8] pseudos
Either operand of these pseudo instructions can be transformed into the first
operand of an isel instruction (and this operand cannot be r0).

This is not yet testable because we don't yet allocate r0 (and even when we do,
any test would be very fragile).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178119 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 05:57:58 +00:00
Hal Finkel
56d926ac14 Apply the no-r0 class to PPC TOC ADDI[S] pseudo instructions
Like the addi/addis instructions themselves, these pseudo instructions also
cannot have r0 as their register parameter (because it will be interpreted as
the value 0).

This is not yet testable because we don't yet allocate r0 (and even when we do,
any regression test would be very fragile because it would depend on the
register allocator heuristics).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178118 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 05:57:56 +00:00
Bill Schmidt
37ef805818 Remove the link register from the GPR classes on PowerPC.
Some implementation detail in the forgotten past required the link
register to be placed in the GPRC and G8RC register classes.  This is
just wrong on the face of it, and causes several extra intersection
register classes to be generated.  I found this was having evil
effects on instruction scheduling, by causing the wrong register class
to be consulted for register pressure decisions.

No code generation changes are expected, other than some minor changes
in instruction order.  Seven tests in the test bucket required minor
tweaks to adjust to the new normal.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178114 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 02:40:14 +00:00
Hal Finkel
b7e11e400d Don't spill PPC VRSAVE on non-Darwin (even in SjLj)
As Bill Schmidt pointed out to me, only on Darwin do we need to spill/restore
VRSAVE in the SjLj code. For non-Darwin, don't spill/restore VRSAVE (and I've
added some asserts to make sure that we're not).

As it turns out, we're not currently handling the Darwin case correctly (I've
added a FIXME in the test case). I've tried adding various implied register
definitions/uses to force the spill without success, so I'll need to address
this later.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178096 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-27 00:02:20 +00:00
Hal Finkel
1a0034c74a Restore real bit lengths on PPC register numbers
As suggested by Bill Schmidt (in reviewing r178067), use the real register
number bit lengths (which is self-documenting, and prevents using illegal
numbers), and set only the relevant bits in HWEncoding (which defaults to 0).

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178077 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 21:50:26 +00:00
Hal Finkel
aa6047d23d PPC: Use HWEncoding and TRI->getEncodingValue
As pointed out by Jakob, we don't need to maintain a separate
register-numbering table. Instead we should let TableGen generate the table for
us from the information (already present) in PPCRegisterInfo.td.
TRI->getEncodingValue is now used to access register-encoding values.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178067 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 20:08:20 +00:00
Hal Finkel
01f99d29c3 Use multiple virtual registers in PPC CR spilling
Now that the register scavenger can support multiple spill slots, and PEI can
use virtual-register-based scavenging for multiple simultaneous registers, we
can use a virtual register for the transfer register in the CR spilling code.

This should eliminate the last place (outside of the prologue/epilogue) where
we depend on the unconditional availability of the r0 register. We will soon be
able to allocate it (in a somewhat restricted sense) as a GPR.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178060 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 18:57:22 +00:00
Hal Finkel
3b196f20fb Update PPCRegisterInfo's use of virtual registers to be SSA
PPC's use of PEI's virtual-register-based scavenging functionality had
redefined the virtual registers (it was non-SSA). Now that PEI supports
dealing with instructions with multiple virtual registers, this can be
cleaned up to use multiple virtual registers and keep SSA form.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178059 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 18:57:20 +00:00
Benjamin Kramer
d6f5a581ab Remove default case from fully covered switch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178025 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 14:17:42 +00:00
Ulrich Weigand
3d386421e0 PowerPC: Mark patterns as isCodeGenOnly.
There remain a number of patterns that cannot (and should not)
be handled by the asm parser, in particular all the Pseudo patterns.

This commit marks those patterns as isCodeGenOnly.

No change in generated code.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178008 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 10:57:16 +00:00
Ulrich Weigand
65e90c0364 PowerPC: Simplify handling of fixups.
MCTargetDesc/PPCMCCodeEmitter.cpp current has code like:

 if (isSVR4ABI() && is64BitMode())
   Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
                                    (MCFixupKind)PPC::fixup_ppc_toc16));
 else
   Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
                                    (MCFixupKind)PPC::fixup_ppc_lo16));

This is a problem for the asm parser, since it requires knowledge of
the ABI / 64-bit mode to be set up.  However, more fundamentally,
at this point we shouldn't make such distinctions anyway; in an assembler
file, it always ought to be possible to e.g. generate TOC relocations even
when the main ABI is one that doesn't use TOC.

Fortunately, this is actually completely unnecessary; that code was added
to decide whether to generate TOC relocations, but that information is in
fact already encoded in the VariantKind of the underlying symbol.

This commit therefore merges those fixup types into one, and then decides
which relocation to use based on the VariantKind.

No changes in generated code.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178007 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 10:56:47 +00:00
Ulrich Weigand
7d35d3f432 PowerPC: Simplify FADD in round-to-zero mode.
As part of the the sequence generated to implement long double -> int
conversions, we need to perform an FADD in round-to-zero mode.  This is
problematical since the FPSCR is not at all modeled at the SelectionDAG
level, and thus there is a risk of getting floating point instructions
generated out of sequence with the instructions to modify FPSCR.

The current code handles this by somewhat "special" patterns that in part
have dummy operands, and/or duplicate existing instructions, making them
awkward to handle in the asm parser.

This commit changes this by leaving the "FADD in round-to-zero mode"
as an atomic operation on the SelectionDAG level, and only split it up into
real instructions at the MI level (via custom inserter).  Since at *this*
level the FPSCR *is* modeled (via the "RM" hard register), much of the
"special" stuff can just go away, and the resulting patterns can be used by
the asm parser.

No significant change in generated code expected.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178006 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 10:56:22 +00:00
Ulrich Weigand
d67768db80 PowerPC: Remove LDrs pattern.
The LDrs pattern is a duplicate of LD, except that it accepts memory
addresses where the displacement is a symbolLo64.  An operand type
"memrs" is defined for just that purpose.

However, this wouldn't be necessary if the default "memrix" operand
type were to simply accept 64-bit symbolic addresses directly.
The only problem with that is that it uses "symbolLo", which is
hardcoded to 32-bit.

To fix this, this commit changes "memri" and "memrix" to use new
operand types for the memory displacement, which allow iPTR
instead of i32.  This will also make address parsing easier to
implement in the asm parser.

No change in generated code.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178005 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 10:55:45 +00:00
Ulrich Weigand
2b0850b830 PowerPC: Remove ADDIL patterns.
The ADDI/ADDI8 patterns are currently duplicated into ADDIL/ADDI8L,
which describe the same instruction, except that they accept a
symbolLo[64] operand instead of a s16imm[64] operand.

This duplication confuses the asm parser, and it is actually not really
needed, since symbolLo[64] already accepts immediate operands anyway.
So this commit removes the duplicate patterns.

No change in generated code.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178004 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 10:55:20 +00:00
Ulrich Weigand
a01c7dbaab PowerPC: Use CCBITRC operand for ISEL patterns.
This commit changes the ISEL patterns to use a CCBITRC operand
instead of a "pred" operand.  This matches the actual instruction
text more directly, and simplifies use of ISEL with the asm parser.
In addition, this change allows some simplification of handling
the "pred" operand, as this is now only used by BCC.

No change in generated code.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178003 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 10:54:54 +00:00
Ulrich Weigand
3b25529336 PowerPC: Simplify BLR pattern.
The BLR pattern cannot be recognized by the asm parser in its current form.
This complexity is due to an apparent attempt to enable conditional BLR
variants.  However, none of those can ever be generated by current code;
the pattern is only ever created using the default "pred" operand.

To simplify the pattern and allow it to be recognized by the parser,
this commit removes those attempts at conditional BLR support.

When we later come back to actually add real conditional BLR, this
should probably be done via a fully generic conditional branch pattern.

No change in generated code.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178002 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 10:53:27 +00:00
Ulrich Weigand
e8680da874 PowerPC: Move some 64-bit branch patterns.
In PPCInstr64Bit.td, some branch patterns appear in a different sequence
than the corresponding 32-bit patterns in PPCInstrInfo.td.

To simplify future changes that affect both files, this commit moves
those patterns to rearrange them into a similar sequence.

No effect on generated code.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178001 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-26 10:53:03 +00:00
Ulrich Weigand
5b390e4cd8 Use direct types in PowerPC instruction patterns.
This commit updates the PowerPC back-end (PPCInstrInfo.td and
PPCInstr64Bit.td) to use types instead of register classes in
instruction patterns, along the lines of Jakob Stoklund Olesen's
changes in r177835 for Sparc.
 


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177890 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-25 19:05:30 +00:00
Ulrich Weigand
1492a4e518 Use direct types in PowerPC Pat patterns.
This commit updates the PowerPC back-end (PPCInstrInfo.td and
PPCInstr64Bit.td) to use types instead of register classes in
Pat patterns, along the lines of Jakob Stoklund Olesen's
changes in r177829 for Sparc.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177889 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-25 19:04:58 +00:00
Hal Finkel
526d6c451b PPC ZERO register needs a register number of 0.
In order for the new ZERO register to be used with MC, etc. we need to specify
its register number (0).

Thanks to Kai for reporting the problem!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177833 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-23 22:06:07 +00:00
Hal Finkel
3f2c047f32 Note in PPCFunctionInfo VRSAVE spills
In preparation for using the new register scavenger capability for providing
more than one register simultaneously, specifically note functions that have
spilled VRSAVE (currently, this can happen only in functions that use the
setjmp intrinsic). As with CR spilling, such functions will need to provide two
emergency spill slots to the scavenger.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177832 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-23 22:06:03 +00:00
Hal Finkel
7d35f74a5d MCize the bcl instruction in PPCAsmPrinter
I recently added a BCL instruction definition as part of implementing SjLj
support. This can also be used to MCize bcl emission in the asm printer.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177830 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-23 20:53:15 +00:00
Hal Finkel
02327fefd8 Cleanup some unused reg. scavenger parameters in PPCRegisterInfo
These spilling functions will eventually make use of the register scavenger,
however, they'll do so by taking advantage of PEI's virtual-register-based
delayed scavenging mechanism. As a result, these function parameters will not
be used, and can be removed.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177827 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-23 19:36:47 +00:00
Hal Finkel
7257fda1b3 Remove dead PPC LR spilling code
The LR register is unconditionally reserved, and its spilling and restoration
is handled by the prologue/epilogue code. As a result, it is never explicitly
spilled by the register allocator.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177823 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-23 17:14:27 +00:00
Hal Finkel
dc3beb9017 Allow the register scavenger to spill multiple registers
This patch lets the register scavenger make use of multiple spill slots in
order to guarantee that it will be able to provide multiple registers
simultaneously.

To support this, the RS's API has changed slightly: setScavengingFrameIndex /
getScavengingFrameIndex have been replaced by addScavengingFrameIndex /
isScavengingFrameIndex / getScavengingFrameIndices.

In forthcoming commits, the PowerPC backend will use this capability in order
to implement the spilling of condition registers, and some special-purpose
registers, without relying on r0 being reserved. In some cases, spilling these
registers requires two GPRs: one for addressing and one to hold the value being
transferred.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177774 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 23:32:27 +00:00
Ulrich Weigand
86765fbe17 Remove ABI-duplicated call instruction patterns.
We currently have a duplicated set of call instruction patterns depending
on the ABI to be followed (Darwin vs. Linux).  This is a bit odd; while the
different ABIs will result in different instruction sequences, the actual
instructions themselves ought to be independent of the ABI.  And in fact it
turns out that the only nontrivial difference between the two sets of
patterns is that in the PPC64 Linux ABI, the instruction used for indirect
calls is marked to take X11 as an extra input register (which is indeed used
only with that ABI to hold an incoming environment pointer for nested
functions).  However, this does not need to be hard-coded at the .td
pattern level; instead, the C++ code expanding calls can simply add that
use, just like it adds uses for argument registers anyway.

No change in generated code expected.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177735 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 15:24:13 +00:00
Ulrich Weigand
89ec847ec7 Rename memrr ptrreg and offreg components.
Currently, the sub-operand of a memrr address that corresponds to what
hardware considers the base register is called "offreg", while the
sub-operand that corresponds to the offset is called "ptrreg".

To avoid confusion, this patch simply swaps the names of those two
sub-operands and updates all uses.  No functional change is intended.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177734 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 14:59:13 +00:00
Ulrich Weigand
881a7154b9 Fix swapped BasePtr and Offset in pre-inc memory addresses.
PPCTargetLowering::getPreIndexedAddressParts currently provides
the base part of a memory address in the offset result, and the
offset part in the base result.  That swap is then undone again
when an MI instruction is generated (in PPCDAGToDAGISel::Select
for loads, and using .td Pat patterns for stores).

This patch reverts this double swap, to make common code and
back-end be in sync as to which part of the address is base
and which is offset.

To avoid performance regressions in certain cases, target code
now checks whether the choice of base register would be rejected
for pre-inc accesses by common code, and attempts to swap base
and offset again in such cases.  (Overall, this means that now
pre-inc accesses are generated *more* frequently than before.)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177733 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 14:58:48 +00:00
Ulrich Weigand
0301e79a1a Tighten iaddroff ComplexPattern.
The iaddroff ComplexPattern is supposed to recognize displacement
expressions that have been processed by a SelectAddressRegImm,
which means it needs to accept TargetConstant and TargetGlobalAddress
nodes.  Currently, it erroneously also accepts some other nodes,
in particular Constant and PPCISD::Lo.

While this problem is currently latent, it would cause wrong-code
bugs with a follow-on patch I'm about to commit, so this patch
tightens the ComplexPattern.  The equivalent change is made in
PPCDAGToDAGISel::Select, where pre-inc load patterns are handled
(as opposed to store patterns, the loads are handled in C++ code
without making use of the .td ComplexPattern).



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177732 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 14:58:17 +00:00
Ulrich Weigand
cff0faa16a Remove the xaddroff ComplexPattern.
The xaddroff pattern is currently (mistakenly) used to recognize
the *base* register in pre-inc store patterns.  This patch replaces
those uses by ptr_rc_nor0 (as is elsewhere done to match the base
register of an address), and removes the now unused ComplexPattern.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177731 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 14:57:48 +00:00
Hal Finkel
7697370adf Remove the G8RC_NOX0_and_GPRC_NOR0 PPC register class
As Jakob pointed out in his review of r177423, having a shared ZERO
register between the 32- and 64-bit register classes causes this
odd G8RC_NOX0_and_GPRC_NOR0 class to be created. As recommended,
this adds a ZERO8 register which differentiates the 32- and 64-bit
zeros.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177683 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 23:45:03 +00:00
Hal Finkel
3ea1b064a0 Fix a register-class comparison bug in PPCCTRLoops
Thanks to Jakob for isolating the underlying problem from the
test case in r177423. The original commit had introduced
asymmetric copy operations, but these turned out to be a work-around
to the real problem (the use of == instead of hasSubClassEq in PPCCTRLoops).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177679 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 23:23:34 +00:00
Hal Finkel
7ee74a663a Implement builtin_{setjmp/longjmp} on PPC
This implements SJLJ lowering on PPC, making the Clang functions
__builtin_{setjmp/longjmp} functional on PPC platforms. The implementation
strategy is similar to that on X86, with the exception that a branch-and-link
variant is used to get the right jump address. Credit goes to Bill Schmidt for
suggesting the use of the unconditional bcl form (instead of the regular bl
instruction) to limit return-address-cache pollution.

Benchmarking the speed at -O3 of:

static jmp_buf env_sigill;

void foo() {
                __builtin_longjmp(env_sigill,1);
}

int main() {
	...

        for (int i = 0; i < c; ++i) {
                if (__builtin_setjmp(env_sigill)) {
                        goto done;
                } else {
                        foo();
                }

done:;
        }

	...
}

vs. the same code using the libc setjmp/longjmp functions on a P7 shows that
this builtin implementation is ~4x faster with Altivec enabled and ~7.25x
faster with Altivec disabled. This comparison is somewhat unfair because the
libc version must also save/restore the VSX registers which we don't yet
support.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177666 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 21:37:52 +00:00
Hal Finkel
10f7f2a222 Add support for spilling VRSAVE on PPC
Although there is only one Altivec VRSAVE register, it is a member of
a register class, and we need the ability to spill it. Because this
register is normally callee-preserved and handled by special code this
has never before been necessary. However, this capability will be required by
a forthcoming commit adding SjLj support.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177654 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 19:03:21 +00:00
Hal Finkel
e9cc0a09ae Correct PPC FRAMEADDR lowering using a pseudo-register
The old code used to lower FRAMEADDR tried to replicate the logic in the real
frame-lowering code that determines whether or not the frame pointer (r31) will
be used. When it seemed as through the frame pointer would not be used, the
stack pointer (r1) was used instead. Unfortunately, because the stack size is
not yet known, this does not work. Instead, this change introduces new
always-reserved pseudo-registers (FP and FP8) that are replaced during prologue
insertion with the real frame-pointer register (either r1 or r31).

It is important that this intrinsic always return a valid frame address because
it is used by Clang to store the frame address as part of code generation for
__builtin_setjmp.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177653 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 19:03:19 +00:00
Ulrich Weigand
dff4d1522a Add missing mayLoad flag to LHAUX8 and LWAUX.
All pre-increment load patterns need to set the mayLoad flag (since
they don't provide a DAG pattern).

This was missing for LHAUX8 and LWAUX, which is added by this patch.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177431 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:53:27 +00:00
Ulrich Weigand
8353d1e0e5 Rewrite LHAU8 pattern to use standard memory operand.
As opposed to the pre-increment store patterns, the pre-increment
load patterns were already using standard memory operands, with
the sole exception of LHAU8.

As there's no real reason why LHAU8 should be different here,
this patch simply rewrites the pattern to also use a memri
operand, just like all the other patterns.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177430 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:52:30 +00:00
Ulrich Weigand
5882e3d828 Rewrite pre-increment store patterns to use standard memory operands.
Currently, pre-increment store patterns are written to use two separate
operands to represent address base and displacement:

  stwu $rS, $ptroff($ptrreg)

This causes problems when implementing the assembler parser, so this
commit changes the patterns to use standard (complex) memory operands
like in all other memory access instruction patterns:

  stwu $rS, $dst

To still match those instructions against the appropriate pre_store
SelectionDAG nodes, the patch uses the new feature that allows a Pat
to match multiple DAG operands against a single (complex) instruction
operand.

Approved by Hal Finkel.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177429 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:52:04 +00:00
Ulrich Weigand
880d82e3db Fix sub-operand size mismatch in tocentry operands.
The tocentry operand class refers to 64-bit values (it is only used in 64-bit,
where iPTR is a 64-bit type), but its sole suboperand is designated as a 32-bit
type.  This causes a mismatch to be detected at compile-time with the TableGen
patch I'll check in shortly.

To fix this, this commit changes the suboperand to a 64-bit type as well.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177427 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:50:30 +00:00
Hal Finkel
a548afc98f Prepare to make r0 an allocatable register on PPC
Currently the PPC r0 register is unconditionally reserved. There are two reasons
for this:

 1. r0 is treated specially (as the constant 0) by certain instructions, and so
    cannot be used with those instructions as a regular register.

 2. r0 is used as a temporary register in the CR-register spilling process
    (where, under some circumstances, we require two GPRs).

This change addresses the first reason by introducing a restricted register
class (without r0) for use by those instructions that treat r0 specially. These
register classes have a new pseudo-register, ZERO, which represents the r0-as-0
use. This has the side benefit of making the existing target code simpler (and
easier to understand), and will make it clear to the register allocator that
uses of r0 as 0 don't conflict with real uses of the r0 register.

Once the CR spilling code is improved, we'll be able to allocate r0.

Adding these extra register classes, for some reason unclear to me, causes
requests to the target to copy 32-bit registers to 64-bit registers. The
resulting code seems correct (and causes no test-suite failures), and the new
test case covers this new kind of asymmetric copy.

As r0 is still reserved, no functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177423 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 18:51:05 +00:00
Hal Finkel
ec2e968b7a Cleanup PPC64 unaligned i64 load/store
Remove an accidentally-added instruction definition and add a comment in the
test case. This is in response to a post-commit review by Bill Schmidt.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177404 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 15:23:39 +00:00
Hal Finkel
54e57f8cb7 Don't reserve R31 on PPC64 unless the frame pointer is needed
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177379 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 08:09:38 +00:00
Hal Finkel
9f2518cdc6 Fix a sign-extension bug in PPCCTRLoops
Don't sign extend the immediate value from the OR instruction in
an LIS/OR pair.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177361 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 23:58:28 +00:00
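A small C++ model (an assumption-level sketch, not the pass's code) of
reassembling a 32-bit constant materialized by a lis/ori pair: the ori
immediate fills the low halfword and must be treated as unsigned, which is
precisely the sign-extension mistake being fixed.

  #include <cstdint>

  // hi is the lis immediate (upper halfword), lo the following ori immediate.
  static uint32_t rebuild_constant(uint16_t hi, uint16_t lo) {
    return (static_cast<uint32_t>(hi) << 16) | lo;   // lo is zero-extended
  }

  // Sign-extending lo instead would smear 1s into the upper halfword
  // whenever its top bit is set, corrupting the recovered trip count.
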
Hal Finkel
08a215c286 Fix PPC unaligned 64-bit loads and stores
PPC64 supports unaligned loads and stores of 64-bit values, but
in order to use the r+i forms, the offset must be a multiple of 4.
Unfortunately, this cannot always be determined by examining the
immediate itself because it might be available only via a TOC entry.

In order to get around this issue, we additionally predicate the
selection of the r+i form on the alignment of the load or store
(forcing it to be at least 4 in order to select the r+i form).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177338 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 23:00:58 +00:00
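A standalone sketch of the added predicate (helper name invented; the real
check is driven by the load/store's alignment when the offset is hidden
behind a TOC entry): the 64-bit r+i form (DS-form ld/std) encodes its
displacement scaled by 4, so the offset must be a multiple of 4 as well as
fitting the signed 16-bit range.

  #include <cstdint>

  static bool isDSFormOffset(int64_t Offset) {
    return (Offset % 4) == 0 && Offset >= -32768 && Offset <= 32764;
  }
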
Hal Finkel
e39b107c46 Fix 80-col. violations in PPCCTRLoops
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177296 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 17:40:46 +00:00
Hal Finkel
9887ec31e6 Fix large count and negative constant count handling in PPCCTRLoops
This commit fixes an assert that would occur on loops with large constant counts
(like looping for ((uint32_t) -1) iterations on PPC64). The existing code did
not handle counts that it computed to be negative (asserting instead), but
these can be created with valid inputs.

This bug was discovered by bugpoint while I was attempting to isolate a
completely different problem.

Also, in writing test cases for the negative-count problem, I discovered that
the ori/lis handling was broken (there was a typo which caused the logic that
was supposed to detect these pairs and extract the iteration count to always
fail). This has now also been corrected (and is covered by one of the new test
cases).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177295 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 17:40:44 +00:00
Hal Finkel
1448d06156 Cleanup initial-value constants in PPCCTRLoops
Because the initial-value constants had not been added to the list
of instructions considered for DCE the resulting code had redundant
constant-materialization instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177294 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 17:40:27 +00:00
Sylvestre Ledru
53856be683 To avoid symbol clash, undefine PPC here. PPC may be predefined on some hosts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177234 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-17 12:40:42 +00:00
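The technique, as it would appear near the top of the affected file (a
sketch, not the exact lines committed): drop the host-provided macro so
identifiers named PPC later in the file are left alone by the preprocessor.

  // Some hosts' system headers predefine PPC as a macro; remove it so the
  // PPC identifiers defined below are not rewritten by the preprocessor.
  #ifdef PPC
  #undef PPC
  #endif
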
Hal Finkel
3249729043 Improve PPC VR (Altivec) register spilling
This change cleans up two issues with Altivec register spilling:

  1. The spilling code was inefficient (using two instructions, an add and a
     load, when just one would do)

  2. The code assumed that r0 would always be available (true for now, but this
     will change)

The new code handles VR spilling just like GPR spills but forced into r+r mode.
As a result, when any VR spills are present, we must now always allocate the
register-scavenger spill slot.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177231 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-17 04:43:44 +00:00
Hal Finkel
ce638c8248 Remove PPC avoidWriteAfterWrite callback
As a follow-up to r158719, remove PPCRegisterInfo::avoidWriteAfterWrite.
Jakob pointed out in response to r158719 that this callback is currently unused
and so this has no effect (and the speedups that I thought that I had observed
as a result of implementing this function must have been noise).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177228 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-16 22:50:51 +00:00
Hal Finkel
2d37f7b979 Enable unaligned memory access on PPC for scalar types
Unaligned access is supported on PPC for non-vector types, and is generally
more efficient than manually expanding the loads and stores.

A few of the existing test cases were using expanded unaligned loads and stores
to test other features (like load/store with update), and for these test cases,
unaligned access remains disabled.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177160 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-15 15:27:13 +00:00
Hal Finkel
044f841267 Protect PPC Altivec patterns with a predicate
In preparation for the addition of other SIMD ISA extensions (such as QPX) we
need to make sure that all Altivec patterns are properly predicated on having
Altivec support.

No functionality change intended (one test case needed to be updated b/c it
assumed that Altivec intrinsics would be supported without enabling Altivec
support).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177152 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-15 13:21:21 +00:00
Hal Finkel
0cfb42adb5 Allocate the RS spill slot for any PPC function with spills and a large stack frame
For spills into a large stack frame, the FI-elimination code uses the register
scavenger to obtain a free GPR for use with an r+r-addressed load or store.
When there are no available GPRs, the scavenger gets one by using its spill
slot. Previously, we were not always allocating that spill slot and the RS
would assert when the spill slot was needed.

I don't currently have a small test that triggered the assert, but I've
created a small regression test that verifies that the spill slot is now
added when the stack frame is sufficiently large.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177140 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-15 05:06:04 +00:00