821 Commits

Hal Finkel
ee8e48d4c9 [PowerPC] Handle VSX v2i64 SIGN_EXTEND_INREG
sitofp from v2i32 to v2f64 ends up generating a SIGN_EXTEND_INREG v2i64 node
(and similarly for v2i16 and v2i8). Even though there are no sign-extension (or
algebraic shift) instructions for v2i64, we can handle v2i32 sign extensions by
converting to and from v2i64. The small trick necessary here is to shift the
i32 elements into the right lanes before the i32 -> f64 step: because of the
big-endian nature of the system, we need the i32 portion in the high word of
the i64 elements.

For v2i16 and v2i8 we can do the same, but we first use the default Altivec
shift-based expansion from v2i16 or v2i8 to v2i32 (by casting to v4i32) and
then apply the above procedure.
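
A minimal C sketch (using GCC/Clang vector extensions; not part of the
commit) of the kind of conversion that produces this node:

    typedef int v2i32 __attribute__((vector_size(8)));
    typedef double v2f64 __attribute__((vector_size(16)));

    /* sitofp v2i32 -> v2f64: each i32 lane is sign-extended and then
       converted to f64. */
    v2f64 widen(v2i32 x) {
      return (v2f64){ x[0], x[1] };
    }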

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205146 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-30 13:22:59 +00:00
Hal Finkel
7563821402 [PowerPC] Handle v2i64 comparisons
v2i64 is a legal type under VSX; however, we don't have native vector
comparisons. We can handle eq/ne by casting it to an Altivec type, but
everything else must be expanded.
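
For illustration (GCC vector extensions; not from the commit), the two
cases described above:

    typedef long long v2i64 __attribute__((vector_size(16)));

    /* eq/ne can be handled by casting to an Altivec type: */
    v2i64 cmp_eq(v2i64 a, v2i64 b) { return a == b; }

    /* all other predicates must be expanded: */
    v2i64 cmp_lt(v2i64 a, v2i64 b) { return a < b; }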

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205106 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-29 16:04:40 +00:00
Hal Finkel
44b2b9dc1a [PowerPC] Add subregister classes for f64 VSX values
We had stored both f64 values and v2f64, etc. values in the VSX registers. This
worked, but was suboptimal because we would always spill 16-byte values even
though we almost always had scalar 8-byte values. This resulted in an
increase in stack-size use, extra memory bandwidth, etc. To fix this, I've
added 64-bit subregisters of the Altivec registers, and combined those with the
existing scalar floating-point registers to form a class of VSX scalar
floating-point registers. The ABI code has also been enhanced to use this
register class and some other necessary improvements have been made.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205075 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-29 05:29:01 +00:00
Hal Finkel
c9de9e60b9 [PowerPC] v2[fi]64 need to be explicitly passed in VSX registers
v2[fi]64 values need to be explicitly passed in VSX registers. This is because
the code in TRI that finds the minimal register class given a register and a
value type will assert if given an Altivec register and a non-Altivec type.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205041 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-28 19:58:11 +00:00
Hal Finkel
6bdc4ebedd [PowerPC] Fix v2f64 vector extract and related patterns
First, v2f64 vector extract had not been declared legal (and so the existing
patterns were not being used). Second, the patterns for that, and for
scalar_to_vector, should really be regclass copies, not subregister
operations, because the VSX registers directly hold both the vector and scalar data.
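
A small C illustration (GCC vector extensions; not from the commit) of
the extract that is now legal:

    typedef double v2f64 __attribute__((vector_size(16)));

    /* With VSX this should lower to a regclass copy, since the scalar
       f64 already lives in the same physical register as lane 0. */
    double extract0(v2f64 v) { return v[0]; }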

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204971 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 22:22:48 +00:00
Hal Finkel
276d854549 [PowerPC] Expand v2i64 shifts
These operations need to be expanded during legalization so that isel does not
crash. In theory, we might be able to custom lower some of these. That,
however, would need to be follow-up work.
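
A C-level example (GCC vector extensions; illustrative only) of a shift
that must be expanded:

    typedef long long v2i64 __attribute__((vector_size(16)));

    /* No native v2i64 shift is selected here, so legalization expands
       the operation. */
    v2i64 shl(v2i64 v, v2i64 amt) { return v << amt; }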

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204963 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 21:26:33 +00:00
Hal Finkel
ee5f4bb6b3 [PowerPC] Generate VSX permutations for v2[fi]64 vectors
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204873 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 22:58:37 +00:00
Hal Finkel
6da0178737 [PowerPC] VSX loads and stores support unaligned access
I've not yet updated PPCTTI because I'm not sure what the actual relative cost
is compared to the aligned uses.
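
A hedged sketch (GCC vector extensions; the memcpy idiom stands in for
an arbitrary unaligned access) of a load that no longer needs an
aligned-load-plus-permute sequence:

    typedef double v2f64 __attribute__((vector_size(16)));

    /* p need not be 16-byte aligned; VSX loads tolerate this directly. */
    v2f64 load2(const double *p) {
      v2f64 v;
      __builtin_memcpy(&v, p, sizeof v);
      return v;
    }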

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204848 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 19:39:09 +00:00
Hal Finkel
b397453155 [PowerPC] Use v2f64 <-> v2i64 VSX conversion instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204843 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 19:13:54 +00:00
Hal Finkel
c6940d4cb7 [PowerPC] Use VSX vector load/stores for v2[fi]64
These instructions have access to the complete VSX register file. In addition,
they "swap" the order of the elements so that element 0 (the scalar part) comes
first in memory and element 1 follows at a higher address.
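
To illustrate the memory layout (GCC vector extensions; not from the
commit):

    typedef double v2f64 __attribute__((vector_size(16)));

    /* Element 0 (the scalar part) sits at the lower address, so after
       a VSX vector load, lane 0 equals the first double in memory. */
    double first(const double *p) {
      v2f64 v;
      __builtin_memcpy(&v, p, sizeof v);
      return v[0];   /* same value as p[0] */
    }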

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204838 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 18:26:30 +00:00
Hal Finkel
7363d2223e [PowerPC] Add v2i64 as a legal VSX type
v2i64 needs to be a legal VSX type because it is the SetCC result type from
v2f64 comparisons. We need to expand all non-arithmetic v2i64 operations.

This fixes the lowering for v2f64 VSELECT.
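
A C view (GCC vector extensions; illustrative) of why v2i64 must be
legal: it is exactly what a v2f64 comparison produces.

    typedef double v2f64 __attribute__((vector_size(16)));
    typedef long long v2i64 __attribute__((vector_size(16)));

    /* The SetCC result of a v2f64 comparison is v2i64 (all-ones or
       all-zeros per lane). */
    v2i64 less(v2f64 a, v2f64 b) { return a < b; }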

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204828 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 16:12:58 +00:00
Hal Finkel
159e7f4095 [PowerPC] Lower VSELECT using xxsel when VSX is available
With VSX there is a real vector select instruction, and so we should use it.
Note that VSELECT will still scalarize for v2f64 because the corresponding
SetCC result type (v2i64) is not currently a legal type.
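
A sketch of the select itself (GCC vector extensions; the bitwise form
below is the semantic equivalent that xxsel computes in a single
instruction; not from the commit):

    typedef double v2f64 __attribute__((vector_size(16)));
    typedef long long v2i64 __attribute__((vector_size(16)));

    v2f64 vmin(v2f64 a, v2f64 b) {
      v2i64 m = a < b;   /* per-lane mask */
      return (v2f64)((m & (v2i64)a) | (~m & (v2i64)b));
    }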

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204801 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 12:49:28 +00:00
Hal Finkel
ab849adec4 [PowerPC] Initial support for the VSX instruction set
VSX is an ISA extension supported on the POWER7 and later cores that enhances
floating-point vector and scalar capabilities. Among other things, this adds
<2 x double> support and generally helps to reduce register pressure.

The interesting part of this ISA feature is the register configuration: there
are 64 new 128-bit vector registers, the first 32 of which are super-registers of the
existing 32 scalar floating-point registers, and the second 32 of which overlap
with the 32 Altivec vector registers. This makes things like vector insertion
and extraction tricky: this can be free but only if we force a restriction to
the right register subclass when needed. A new "minipass" PPCVSXCopy takes care
of this (although it could do a more-optimal job of it; see the comment about
unnecessary copies below).

Please note that, currently, VSX is not enabled by default when targeting
anything because it is not yet ready for that.  The assembler and disassembler
are fully implemented and tested. However:

 - CodeGen support causes miscompiles; test-suite runtime failures:
      MultiSource/Benchmarks/FreeBench/distray/distray
      MultiSource/Benchmarks/McCat/08-main/main
      MultiSource/Benchmarks/Olden/voronoi/voronoi
      MultiSource/Benchmarks/mafft/pairlocalalign
      MultiSource/Benchmarks/tramp3d-v4/tramp3d-v4
      SingleSource/Benchmarks/CoyoteBench/almabench
      SingleSource/Benchmarks/Misc/matmul_f64_4x4

 - The lowering currently falls back to using Altivec instructions far more
   than it should. Worse, there are some things that are scalarized through the
   stack that shouldn't be.

 - A lot of unnecessary copies make it past the optimizers, and this needs to
   be fixed.

 - Many more regression tests are needed.

Normally, I'd fix these things prior to committing, but there are some
students and other contributors who would like to work on this, and so it makes
sense to move this development process upstream where it can be subject to the
regular code-review procedures.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203768 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-13 07:58:58 +00:00
Hal Finkel
341ea7ddf6 Fixup PPC Darwin i1 argument handling
Like on other targets, we need to zero_extend/truncate i1 args before copying
them to GPRs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203045 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 00:45:19 +00:00
Hal Finkel
025c1cefca When using CR bit registers on PPC32, handle the i1 vaarg case
When copying an i1 value into a GPR for a vaarg call, we need to explicitly
zero-extend the i1 value (otherwise an invalid CRBIT -> GPR copy will be
generated).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203041 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-06 00:23:33 +00:00
Hal Finkel
f698d7775a With PPC CR bit registers, handle int_to_fp on older cores
On cores without fpcvt support, we cannot promote int_to_fp i1 operations,
because there is nothing to promote them to. The most straightforward
implementation of this uses a select to choose between the two possible
resulting floating-point values (and that's what is done here).
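
In C terms, the lowering is roughly equivalent to this (illustrative
only):

    /* Only two results are possible for an i1 input, so select between
       them instead of promoting the integer. */
    double i1_to_fp(_Bool b) { return b ? 1.0 : 0.0; }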

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203015 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-05 22:14:00 +00:00
Hal Finkel
5a49125fec Add a PPC inline asm constraint type for single CR bits
Now that the PowerPC backend can track individual CR bits as first-class
registers, we should also have a way of allocating them for inline asm
statements. Because these registers are only one bit, if an output variable is
implicitly cast to a larger integer size, we'll get an any_extend to that
larger type (this is part of the existing target-independent logic). As a
result, regardless of the size of the output type, only the first bit is
meaningful.

The constraint identifier "wc" has been chosen for this purpose. Although gcc
does not currently support allocating individual CR bits, this identifier
choice has been coordinated with the gcc PowerPC team, and will be marked as
reserved for this purpose in the gcc constraints.md file.
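
A hedged usage sketch, modeled on the kind of test this enables (not
compile-tested; only bit 0 of the output is meaningful, per the
any_extend described above):

    int crand_bits(_Bool a, _Bool b) {
      int r;
      /* each "wc" operand is allocated a single CR bit */
      __asm__("crand %0, %1, %2" : "=wc"(r) : "wc"(a), "wc"(b));
      return r & 1;
    }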

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202657 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-02 18:23:39 +00:00
Benjamin Kramer
d628f19f5d [C++11] Replace llvm::next and llvm::prior with std::next and std::prev.
Remove the old functions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202636 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-02 12:27:27 +00:00
Hal Finkel
92b38a9d1c Remove extra truncs/exts around i32 bit operations on PPC64
This generalizes the code that eliminates extra truncs/exts around i1 bit
operations so that it does the same on PPC64 for i32 bit operations. This eliminates
a fairly prevalent code wart:

int foo(int a) {
  return a == 5 ? 7 : 8;
}

On PPC64, because of the extension implied by the ABI, this would generate:

	cmplwi 0, 3, 5
	li 12, 8
	li 4, 7
	isel 3, 4, 12, 2
	rldicl 3, 3, 0, 32
	blr

where the 'rldicl 3, 3, 0, 32', the extension, is completely unnecessary. At
least for the single-BB case (which is all that the DAG combine mechanism can
handle), this unnecessary extension is no longer generated.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202600 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-01 21:36:57 +00:00
Hal Finkel
b7a2ea7228 Trying to unbreak the darwin11 builder
The CR bit tracking code broke PPC/Darwin; trying to get it working again...

(the darwin11 builder, which defaults to the darwin ABI when running PPC tests,
asserted when running test/CodeGen/PowerPC/inverted-bool-compares.ll)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202459 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-28 01:17:25 +00:00
Hal Finkel
36e1825e68 Add CR-bit tracking to the PowerPC backend for i1 values
This change enables tracking i1 values in the PowerPC backend using the
condition register bits. These bits can be treated on PowerPC as separate
registers; individual bit operations (and, or, xor, etc.) are supported.
Tracking booleans in CR bits has several advantages:

 - Reduction in register pressure (because we no longer need GPRs to store
   boolean values).

 - Logical operations on booleans can be handled more efficiently; we used to
   have to move all results from comparisons into GPRs, perform promoted
   logical operations in GPRs, and then move the result back into condition
   register bits to be used by conditional branches. This can be very
inefficient, because these CR <-> GPR moves have high
   latency and low throughput (especially when other associated instructions
   are accounted for).

 - On the POWER7 and similar cores, we can increase total throughput by using
   the CR bits. CR bit operations have a dedicated functional unit.

Most of this is more-or-less mechanical: Adjustments were needed in the
calling-convention code, support was added for spilling/restoring individual
condition-register bits, and conditional branch instruction definitions taking
specific CR bits were added (plus patterns and code for generating bit-level
operations).

This is enabled by default when running at -O2 and higher. For -O0 and -O1,
where the ability to debug is more important, this feature is disabled by
default. Individual CR bits do not have assigned DWARF register numbers,
and storing values in CR bits makes them invisible to the debugger.

It is critical, however, that we don't move i1 values that have been promoted
to larger values (such as those passed as function arguments) into bit
registers only to quickly turn around and move the values back into GPRs (such
as happens when values are returned by functions). A pair of target-specific
DAG combines are added to remove the trunc/extends in:
  trunc(binary-ops(binary-ops(zext(x), zext(y)), ...))
and:
  zext(binary-ops(binary-ops(trunc(x), trunc(y)), ...))
In short, we only want to use CR bits where some of the i1 values come from
comparisons or are used by conditional branches or selects. To put it another
way, if we can do the entire i1 computation in GPRs, then we probably should
(on the POWER7, the GPR-operation throughput is higher, and for all cores, the
CR <-> GPR moves are expensive).
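
A small example (not from the commit) of code where the i1 values
satisfy this rule:

    /* Both i1 values come from comparisons and feed a branch, so the
       AND can stay in CR bits (a single crand) with no GPR round-trip. */
    int in_range(int x, int lo, int hi) {
      if ((x >= lo) & (x <= hi))   /* deliberately non-short-circuiting */
        return 1;
      return 0;
    }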

POWER7 test-suite performance results (from 10 runs in each configuration):

SingleSource/Benchmarks/Misc/mandel-2: 35% speedup
MultiSource/Benchmarks/Prolangs-C++/city/city: 21% speedup
MultiSource/Benchmarks/MiBench/automotive-susan: 23% speedup
SingleSource/Benchmarks/CoyoteBench/huffbench: 13% speedup
SingleSource/Benchmarks/Misc-C++/Large/sphereflake: 13% speedup
SingleSource/Benchmarks/Misc-C++/mandel-text: 10% speedup

SingleSource/Benchmarks/Misc-C++-EH/spirit: 10% slowdown
MultiSource/Applications/lemon/lemon: 8% slowdown

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202451 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-28 00:27:01 +00:00
Matt Arsenault
bb7bf85f3c Add address space argument to allowsUnalignedMemoryAccess.
On R600, some address spaces have more strict alignment
requirements than others.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200887 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-05 23:15:53 +00:00
Alp Toker
ae43cab6ba Fix known typos
Sweep the codebase for common typos. Includes some changes to visible function
names that were misspelt.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200018 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-24 17:20:08 +00:00
Hal Finkel
da3446099d Fix pointer info on PPC byval stores
For PPC64 SVR4 (and Darwin), the stores that take byval aggregate parameters
from registers into the stack frame had MachinePointerInfo objects with
incorrect offsets. These offsets are relative to the object itself, not to the
stack frame base.

This fixes self hosting on PPC64 when compiling with -enable-aa-sched-mi.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199763 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-21 20:15:58 +00:00
Bill Wendling
b87d142ba1 Remove unnecessary #includes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198585 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-06 06:00:00 +00:00
Bill Wendling
4644d79871 Refactor function that checks that __builtin_returnaddress's argument is constant.
This moves the check up into the parent class so that all targets can use it
without having to copy (and keep in sync) the same error message.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198579 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-06 00:43:20 +00:00
Bill Wendling
4a816471f5 Emit an error message if the value passed to __builtin_returnaddress isn't a constant
__builtin_returnaddress requires that the value passed in be a constant.
However, at -O0 even a constant expression may not be converted to a constant.
Emit an error message instead of crashing.
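
For illustration (not from the commit; using the builtin's documented
spelling, __builtin_return_address):

    /* Accepted: the argument is a literal constant. */
    void *caller_addr(void) { return __builtin_return_address(0); }

    /* A non-constant argument, e.g. __builtin_return_address(n) for a
       runtime 'n', now produces an error instead of a compiler crash. */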

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198531 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-05 01:47:20 +00:00
Roman Divacky
ed4678820b Implement initial-exec TLS for PPC32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197824 91177308-0d34-0410-b5e6-96231b3b80d8
2013-12-20 18:08:54 +00:00
Alp Toker
087ab613f4 Correct word hyphenations
This patch tries to avoid unrelated changes other than fixing a few
hyphen-related ambiguities and contractions in nearby lines.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196471 91177308-0d34-0410-b5e6-96231b3b80d8
2013-12-05 05:44:44 +00:00
Bill Schmidt
daf6b948b9 [PowerPC] Fix PR17354: Generate nop after local calls for PIC code.
When generating code for shared libraries, even local calls may be
intercepted, so we need a nop after the call for the linker to fix up the
TOC.  Test case adapted from the one provided in PR17354.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191440 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-26 17:09:28 +00:00
Bill Schmidt
3789209b79 [PowerPC] Add a FIXME.
Documenting a design choice to generate only medium model sequences for TLS
addresses at this time.  Small and large code models could be supported if
necessary.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190883 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-17 20:22:05 +00:00
Hal Finkel
fabfb5d588 PPC: Don't restrict lvsl generation to after type legalization
This is a re-commit of r190764, with an extra check to make sure that we're not
performing the transformation on illegal types (a small test case has been
added for this as well).

Original commit message:

The PPC backend uses a target-specific DAG combine to turn unaligned Altivec
loads into a permutation-based sequence when possible. Unfortunately, the
target-specific DAG combine is not always called on all loads of interest
(sometimes the routines in DAGCombine call CombineTo such that the new node and
users are not added to the worklist); allowing the combine to trigger early
(before type legalization) mitigates this problem. Because the autovectorizers
only create legal vector types, I don't expect a lot of cases where this
optimization is enabled by type legalization in practice.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190771 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-15 22:09:58 +00:00
Hal Finkel
19b59e66af Revert r190764: PPC: Don't restrict lvsl generation to after type legalization
This is causing test-suite failures.

Original commit message:

The PPC backend uses a target-specific DAG combine to turn unaligned Altivec
loads into a permutation-based sequence when possible. Unfortunately, the
target-specific DAG combine is not always called on all loads of interest
(sometimes the routines in DAGCombine call CombineTo such that the new node and
users are not added to the worklist); allowing the combine to trigger early
(before type legalization) mitigates this problem. Because the autovectorizers
only create legal vector types, I don't expect a lot of cases where this
optimization is enabled by type legalization in practice.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190765 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-15 15:41:11 +00:00
Hal Finkel
55532adc68 PPC: Don't restrict lvsl generation to after type legalization
The PPC backend uses a target-specific DAG combine to turn unaligned Altivec
loads into a permutation-based sequence when possible. Unfortunately, the
target-specific DAG combine is not always called on all loads of interest
(sometimes the routines in DAGCombine call CombineTo such that the new node and
users are not added to the worklist); allowing the combine to trigger early
(before type legalization) mitigates this problem. Because the autovectorizers
only create legal vector types, I don't expect a lot of cases where this
optimization is enabled by type legalization in practice.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190764 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-15 15:20:54 +00:00
Hal Finkel
98bae99266 Add missing break statement in PPCISelLowering
As it turns out, not a problem in practice, but it should be there.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190720 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-13 20:09:02 +00:00
Chandler Carruth
a2c982129e Remove an unused variable, fixing -Werror build with latest Clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190640 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-12 23:30:48 +00:00
Hal Finkel
6671cd4db0 Fix PPC ABI for ByVal structs with vector members
When a structure is passed by value, and that structure contains a vector
member, according to the PPC ABI, the structure will receive enhanced alignment
(so that the vector within the structure will always be aligned).

This should resolve PR16641.
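
A C illustration (GCC vector extensions; names hypothetical) of such a
structure:

    typedef float v4f32 __attribute__((vector_size(16)));

    struct S {
      int tag;
      v4f32 v;   /* vector member: the whole struct gets 16-byte
                    alignment so 'v' stays aligned */
    };

    void callee(struct S s);   /* byval argument now receives the
                                  enhanced alignment on the stack */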

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190636 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-12 23:20:06 +00:00
Hal Finkel
4a1535c038 Make the PPC fast-math sqrt expansion safe at 0
In fast-math mode sqrt(x) is calculated by taking the reciprocal of the
reciprocal-sqrt expansion. The reciprocal and reciprocal
sqrt expansions use the associated estimate instructions along with some Newton
iterations. Unfortunately, as a result, sqrt(0) was being calculated as NaN,
which is not correct. Now we explicitly return a result of zero if the input is
zero.
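
A hedged C model of the expansion (the real lowering uses the frsqrte
estimate instruction; the bit-trick seed below merely stands in for it):

    #include <stdint.h>
    #include <string.h>

    static double rsqrt_est(double x) {       /* stand-in for frsqrte */
      uint64_t i;
      memcpy(&i, &x, sizeof i);
      i = 0x5fe6eb50c7b537a9ULL - (i >> 1);   /* classic rsqrt seed */
      memcpy(&x, &i, sizeof x);
      return x;
    }

    double fast_sqrt(double x) {
      if (x == 0.0)
        return 0.0;     /* the new check: the expansion below yields
                           NaN for x == 0 */
      double r = rsqrt_est(x);
      for (int i = 0; i < 4; ++i)
        r = r * (1.5 - 0.5 * x * r * r);   /* Newton steps for 1/sqrt(x) */
      return x * r;     /* x * (1/sqrt(x)) == sqrt(x) */
    }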

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190624 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-12 19:04:12 +00:00
Hal Finkel
b7fbc5baad Enable MI scheduling (and CodeGen AA) by default for embedded PPC cores
For embedded PPC cores (especially the A2 core), using the MI scheduler with AA
is far superior to the other scheduling options.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190558 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-11 23:05:25 +00:00
Bill Schmidt
11addd2a2f [PowerPC] Call support for fast-isel.
This patch adds fast-isel support for calls (but not intrinsic calls
or varargs calls).  It also removes a badly-formed assert.  There are
some new tests just for calls, and also for folding loads into
arguments on calls to avoid extra extends.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189701 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-30 22:18:55 +00:00
Bill Schmidt
7248968fa5 [PowerPC] Add loads, stores, and related things to fast-isel.
This is the next big chunk of fast-isel code.  The primary purpose is
to implement selection of loads and stores, but there is a lot of
drag-along to support this.  The common code to analyze addresses for
both loads and stores is substantial.  It's also necessary to add the
materialization code for global values.

Related to load-store processing is the code to fold loads into
integer extends, since otherwise we generate lots of redundant
instructions.  We also need to add some overrides to some FastEmit
routines to ensure we don't assign GPR 0 to a virtual register when
this would change the meaning of an instruction.

I added handling selection of a few binary arithmetic instructions, to
enable committing some test cases I wrote a while back.

Finally, a couple of miscellaneous changes:
 * I cleaned up some poor style from a previous patch in
   PPCISelLowering.cpp, pointed out by David Blaikie.
 * I enlarged the Addr.Offset field to avoid sign problems with 32-bit
   offsets. 



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189636 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-30 02:29:45 +00:00
Bill Schmidt
7c42ede045 Dummy code to silence warning from r189266
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189272 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-26 20:11:46 +00:00
Hal Finkel
953a78084b Add the PPC fcpsgn instruction
Modern PPC cores support a floating-point copysign instruction, and we can use
this to lower the FCOPYSIGN node (which is created from calls to the libm
copysign function). A couple of extra patterns are necessary because the
operand types of FCOPYSIGN need not agree.
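
For example (standard C; not from the commit):

    #include <math.h>

    /* copysign() becomes an FCOPYSIGN node, now selected to fcpsgn: */
    double set_sign(double x, double y) { return copysign(x, y); }

The extra patterns mentioned above cover the cases where the magnitude
and sign operands have different floating-point types.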

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188653 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-19 05:01:02 +00:00
Craig Topper
5a0910b349 Replace getValueType().getSimpleVT() with getSimpleValueType(). Also remove one weird cast from MVT->EVT just to call getSimpleVT().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188441 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-15 02:33:50 +00:00
Hal Finkel
341c1a50ad Actually fix PPC64 64-bit GPR inline asm constraint matching
This is a follow-up to r187693, correcting that code to request the correct
register class. The previous version, with the wrong register class, was not
really correcting the constraints, but rather was removing them. Coincidentally,
this fixed the failing test case in r187693, but obviously created other
problems.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188407 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-14 20:05:04 +00:00
Hal Finkel
05a4d2642b PPC: Map frin to round() not nearbyint() and rint()
Making use of the recently-added ISD::FROUND, which allows for custom lowering
of round(), the PPC backend will now map frin to round(). Previously, we had
been using frin to lower nearbyint() (and rint() via some custom lowering to
handle the extra fenv flags requirements), but only in fast-math mode because
frin does not round ties to even. Several users had complained about this behavior,
and this new mapping of frin to round is certainly more appropriate (and does
not require fast-math mode).
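
Illustration (standard C; not from the commit):

    #include <math.h>

    double r1(double x) { return round(x); }      /* now maps to frin */
    double r2(double x) { return nearbyint(x); }  /* no longer uses frin */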

In effect, this reverts r178362 (and part of r178337, replacing the nearbyint
mapping with the round mapping).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187960 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-08 04:31:34 +00:00
Hal Finkel
5cad12d12a Fix PPC64 64-bit GPR inline asm constraint matching
Internally, the PowerPC backend names the 32-bit GPRs R[0-9]+, and names the
64-bit parent GPRs X[0-9]+. When matching inline assembly constraints with
explicit register names, on PPC64 when an i64 MVT has been requested, we need
to follow gcc's convention of using r[0-9]+ to refer to the 64-bit (parent)
registers.
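
A hedged example of the gcc convention in question (GCC register-asm
syntax; illustrative only):

    /* On PPC64, naming "r4" for a 64-bit value refers to the full
       64-bit register (X4 internally), matching gcc. */
    long pass_through(long x) {
      register long v __asm__("r4") = x;
      __asm__("" : "+r"(v));   /* keep v pinned in r4 across the asm */
      return v;
    }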

At some point, we'll probably want to arrange things so that the generic code
in TargetLowering uses the AsmName fields declared in *RegisterInfo.td in order
to match these inline asm register constraints. If we do that, this change can
be reverted.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187693 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-03 12:25:10 +00:00
Bill Schmidt
646cd7933b [PowerPC] Skeletal FastISel support for 64-bit PowerPC ELF.
This is the first of many upcoming patches for PowerPC fast
instruction selection support.  This patch implements the minimum
necessary for a functional (but extremely limited) FastISel pass.  It
allows the table-generated portions of the selector to be created and
used, but in most cases selection will fall back to the DAG selector.
None of the block terminator instructions are implemented yet, and
most interesting instructions require some special handling.
Therefore there aren't any new test cases with this patch.  There will
be quite a few tests coming with future patches.

This patch adds the make/CMake support for the new code (including
tablegen -gen-fast-isel) and creates the FastISel object for PPC64 ELF
only.  It instantiates the necessary virtual functions
(TargetSelectInstruction, TargetMaterializeConstant,
TargetMaterializeAlloca, tryToFoldLoadIntoMI, and FastLowerArguments),
but of these, only TargetMaterializeConstant contains any useful
implementation.  This is present since the table-generated code
requires the ability to materialize integer constants for some
instructions.

This patch has been tested by building and running the
projects/test-suite code with -O0.  All tests passed with the
exception of a couple of long-running tests that time out using -O0
code generation.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187399 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-30 00:50:39 +00:00
Roman Divacky
6ebf55d811 PPC32 va_list is an actual structure, so va_copy needs to copy the whole
structure, not just a pointer. This implements that and thus fixes va_copy
on PPC32. Fixes #15286. Both bug and patch by Florian Zeitz!
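
What this fixes, in standard C (illustrative):

    #include <stdarg.h>

    void use_twice(int n, ...) {
      va_list ap, ap2;
      va_start(ap, n);
      va_copy(ap2, ap);   /* on PPC32 this must copy the whole va_list
                             structure, not just a pointer */
      /* ... walk ap and ap2 independently ... */
      va_end(ap2);
      va_end(ap);
    }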


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187158 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-25 21:36:47 +00:00
Hal Finkel
0541722de4 PPC: Add base-pointer support to builtin setjmp/longjmp
First, this changes the base-pointer implementation to remove an unnecessary
complication (and one that is incompatible with how builtin SjLj is
implemented): instead of using r31 as the base pointer when it is not needed as
a frame pointer, now the base pointer will always be r30 when needed.

Second, we introduce another pseudo register, BP, which is used just like the FP
pseudo register to refer to the base register before we know for certain what
register it will be.

Third, we now save BP into the jmp_buf, and restore r30 from that slot in
longjmp.  If the function that called setjmp did not use a base pointer, then
r30 will be overwritten by the setjmp-calling-function's restore code. FP
restoration (which is restored into r31) works the same way.
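
The builtins affected, for reference (GCC-style builtin setjmp/longjmp;
illustrative, with the void*[5] buffer these builtins require):

    static void *buf[5];

    static void jump_back(void) {
      __builtin_longjmp(buf, 1);   /* restores r30 (BP) from its slot */
    }

    int run(void) {
      if (__builtin_setjmp(buf))   /* BP is saved into the jmp_buf here */
        return 1;
      jump_back();
      return 0;                    /* not reached */
    }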

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186545 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-17 23:50:51 +00:00