635 Commits

Author SHA1 Message Date
Jan Vesely
bb917c2f8a SelectionDAG: Factor FP_TO_SINT lowering code out of DAGLegalizer
Move the code to a helper function to allow calls from TypeLegalizer.

No functionality change intended

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Tom Stellard <tom@stellard.net>
Reviewed-by: Owen Anderson <resistor@mac.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212772 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 22:40:18 +00:00
Daniel Sanders
b0b3161567 Make it possible for ints/floats to return different values from getBooleanContents()
Summary:
On MIPS32r6/MIPS64r6, floating point comparisons return 0 or -1 but integer
comparisons return 0 or 1.

Updated the various uses of getBooleanContents. Two simplifications had to be
disabled when float and int boolean contents differ:
- ScalarizeVecRes_VSELECT except when the kind of boolean contents is trivially
  discoverable (i.e. when the condition of the VSELECT is a SETCC node).
- visitVSELECT (select C, 0, 1) -> (xor C, 1).
  Come to think of it, this one could test for the common case of 'C'
  being a SETCC too.
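
To see why the (select C, 0, 1) -> (xor C, 1) fold is only safe with 0/1
booleans, here is a minimal standalone C++ sketch of the two conventions
(an illustration, not LLVM code or code from the patch):

  #include <cassert>
  #include <cstdint>

  // Model of "select C, 0, 1", i.e. logical negation of the boolean C.
  int64_t selectNot(int64_t C) { return C ? 0 : 1; }

  int main() {
    // ZeroOrOne booleans: C is 0 or 1, so "xor C, 1" is a correct fold.
    for (int64_t C : {int64_t(0), int64_t(1)})
      assert(selectNot(C) == (C ^ 1));

    // ZeroOrNegativeOne booleans (what the FP comparisons described above
    // produce): C is 0 or -1, and "xor C, 1" yields 1 or -2 instead of 1 or 0.
    int64_t C = -1;
    assert(selectNot(C) == 0);
    assert((C ^ 1) == -2); // the folded form would be wrong here
    return 0;
  }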

Preserved existing behaviour for all other targets and updated the affected
MIPS32r6/MIPS64r6 tests. This also fixes the pi benchmark where the 'low'
variable was counting in the wrong direction because it thought it could simply
add the result of the comparison.

Reviewers: hfinkel

Reviewed By: hfinkel

Subscribers: hfinkel, jholewinski, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D4389


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212697 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-10 10:18:12 +00:00
Ulrich Weigand
1ef2cec146 Fix ppcf128 component access on little-endian systems
The PowerPC 128-bit long double data type (ppcf128 in LLVM) is in fact a
pair of two doubles, where one is considered the "high" or
more-significant part, and the other is considered the "low" or
less-significant part.  When a ppcf128 value is stored in memory or a
register pair, the high part always comes first, i.e. at the lower
memory address or in the lower-numbered register, and the low part
always comes second.  This is true both on big-endian and little-endian
PowerPC systems.  (Similar to how with a complex number, the real part
always comes first and the imaginary part second, no matter the byte
order of the system.)
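
As a rough standalone sketch of the layout described above (a conceptual
model only, not code from the patch):

  #include <cstdio>

  // Conceptual model of a ppcf128 value: a pair of doubles whose (approximate)
  // sum is the represented value. The "high" (more-significant) double always
  // comes first, at the lower memory address or in the lower-numbered
  // register, on both big- and little-endian PowerPC.
  struct PPCF128 {
    double Hi; // more-significant part, stored first
    double Lo; // less-significant part, stored second
  };

  double approxValue(const PPCF128 &X) { return X.Hi + X.Lo; }

  int main() {
    // 1.0e16 + 1.0 is not exactly representable in a single double, so the
    // low part carries the residual.
    PPCF128 X = {1.0e16, 1.0};
    std::printf("hi=%g lo=%g value~=%g\n", X.Hi, X.Lo, approxValue(X));
    return 0;
  }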

This was implemented incorrectly for little-endian systems in LLVM.
This commit fixes three related issues:

- When printing an immediate ppcf128 constant to assembler output
  in emitGlobalConstantFP, emit the high part first on both big-
  and little-endian systems.

- When lowering a ppcf128 type to a pair of f64 types in SelectionDAG
  (which is used e.g. when generating code to load an argument into a
  register pair), use correct low/high part ordering on little-endian
  systems.

- In a related issue, because lowering ppcf128 into a pair of f64 must
  operate differently from lowering an int128 into a pair of i64,
  bitcasts between ppcf128 and int128 must not be optimized away by the
  DAG combiner on little-endian systems, but must effect a word-swap.

Reviewed by Hal Finkel.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212274 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-03 15:06:47 +00:00
Chandler Carruth
70968365db [codegen,aarch64] Add a target hook to the code generator to control
vector type legalization strategies in a more fine grained manner, and
change the legalization of several v1iN types and v1f32 to be widening
rather than scalarization on AArch64.

This fixes an assertion failure caused by scalarizing nodes like "v1i32
trunc v1i64": because v1i64 is legal, the legalizer fails when it tries to
scalarize the v1i32 result.

This also provides a foundation for other targets to have more granular
control over how vector types are legalized.

Patch by Hao Liu, reviewed by Tim Northover. I'm committing it to allow
some work to start taking place on top of this patch as it adds some
really important hooks to the backend that I'd like to immediately start
using. =]

http://reviews.llvm.org/D4322

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212242 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-03 00:23:43 +00:00
Juergen Ributzka
75909261f0 [DAG] Pass the argument list to the CallLoweringInfo via move semantics. NFCI.
The argument list vector is never used after it has been passed to the
CallLoweringInfo, and moving it to the CallLoweringInfo is cleaner and
pretty much as cheap as keeping a pointer to it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212135 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-01 22:01:54 +00:00
Saleem Abdulrasool
82b1114fef Target: remove old constructors for CallLoweringInfo
This is mostly a mechanical change, updating all the call sites to the newer
chained-function construction pattern.  It removes the horrible 15-parameter
constructor for the CallLoweringInfo in favour of setting properties of the call
via chained functions.  No functional change beyond the removal of the old
constructors is intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209082 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-17 21:50:17 +00:00
Saleem Abdulrasool
a2b20f975f Target: add support to build CallLoweringInfo
Rather than introducing an auxiliary CallLoweringInfoBuilder, add the methods to
do chained function construction directly to CallLoweringInfo.  This reduces the
monstrous 15-parameter constructor into a series of simpler (for some definition
of simpler) functions that control particular aspects of the call.  The old
interfaces can be completely removed once callers are moved to the new chained
constructor pattern.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209081 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-17 21:50:06 +00:00
Saleem Abdulrasool
d825568843 Target: change member from reference to pointer
This is a preliminary step to help ease the construction of CallLoweringInfo.
Changing the construction to a chained function pattern requires that the
parameter be nullable.  However, rather than copying the vector, save a pointer
instead of the reference to permit late binding of the arguments.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209080 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-17 21:50:01 +00:00
Rafael Espindola
21cfedee05 Revert "Implement global merge optimization for global variables."
This reverts commit r208934.

The patch depends on aliases to GEPs with non-zero offsets. That is not
supported and fairly broken.

The good news is that GlobalAlias is being redesigned and will have support
for offsets, so this patch should be a nice match for it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208978 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 13:02:18 +00:00
Jiangning Liu
d5db8765d6 Implement global merge optimization for global variables.
This commit implements two command line switches, -global-merge-on-external
and -global-merge-aligned. Both are false by default, so this
optimization is disabled by default for all targets.

For ARM64, some back-end behaviors need to be tuned to get this optimization
further enabled.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208934 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-15 23:45:42 +00:00
Jay Foad
6b543713a2 Rename ComputeMaskedBits to computeKnownBits. "Masked" has been
inappropriate since it lost its Mask parameter in r154011.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208811 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-14 21:14:37 +00:00
Hal Finkel
24f554f052 Pass the value type to TLI::getRegisterByName
We must validate the value type in TLI::getRegisterByName, because if we
don't and the wrong type was used with the IR intrinsic, then we'll assert
(because we won't be able to find a valid register class with which to
construct the requested copy operation). For PPC64, additionally, the type
information is necessary to decide between the 64-bit register and the 32-bit
subregister.

No functionality change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208508 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-11 19:29:07 +00:00
Oliver Stannard
e2948385b9 ARM: HFAs must be passed in consecutive registers
When using the ARM AAPCS, HFAs (Homogeneous Floating-point Aggregates) must
be passed in a block of consecutive floating-point registers, or on the stack.
This means that unused floating-point registers cannot be back-filled with
part of an HFA; however, this can currently happen. This patch, along with the
corresponding clang patch (http://reviews.llvm.org/D3083), prevents this.
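
For reference, a minimal C++ sketch of the kind of aggregate involved (an
assumed example; the comments just restate the rule described above):

  // A Homogeneous Floating-point Aggregate: one to four members of a single
  // floating-point type.
  struct HFA {
    float a, b, c, d;
  };

  // Under the ARM AAPCS (VFP variant), an HFA argument must be passed either
  // entirely in a block of consecutive floating-point registers or entirely
  // on the stack; unused floating-point registers must not be back-filled
  // with pieces of it.
  float sum(float first, HFA h) { return first + h.a + h.b + h.c + h.d; }

  int main() {
    HFA h = {1.0f, 2.0f, 3.0f, 4.0f};
    return static_cast<int>(sum(0.5f, h));
  }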



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208413 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 14:01:47 +00:00
Renato Golin
22f779d1fd Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc.).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
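
A possible source-level use, sketched under the assumption of a GNU-style
named register variable for the stack pointer (the variable name is made up
and the register name is target-dependent; this mirrors the kernel-style
current_stack_pointer idiom rather than anything in the patch):

  #include <cstdio>

  // GNU global register variable extension: reads of 'stack_pointer' are
  // expected to lower via the named-register intrinsic rather than through an
  // ordinary load. "sp" is the AArch64/ARM stack pointer name; other targets
  // use different names, and allocatable registers are not supported.
  register unsigned long stack_pointer __asm__("sp");

  int main() {
    std::printf("current stack pointer: %#lx\n", stack_pointer);
    return 0;
  }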

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208104 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 16:51:25 +00:00
Eric Christopher
9572a1c8c9 Fix typo (also tab character).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208001 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-05 21:40:41 +00:00
Weiming Zhao
fa1cf8cd68 [ARM64] Prevent bit extraction to be adjusted by following shift
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it
into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx
more difficult.
For example:
Given
  %shr = lshr i64 %x, 4
  %and = and i64 %shr, 15
  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
  %0 = load i64* %arrayidx
With current shift folding, it takes 3 instrs to compute base address:
  lsr x8, x0, #1
  and x8, x8, #0x78
  add x8, x9, x8

If using ubfx, it only needs 2 instrs:
  ubfx  x8, x0, #4, #4
  add x8, x9, x8, lsl #3

This fixes bug 19589


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207702 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 21:07:24 +00:00
Benjamin Kramer
aab6231cd9 DAGCombiner: Turn divs of vector splats into vectorized multiplications.
Otherwise the legalizer would just scalarize everything. Support for
mulhi in the targets isn't that great yet, so on most targets we get
exactly the same scalarized output. Add a test for x86 vector udiv.

I had to disable the mulhi nodes on ARM because there aren't any patterns
for them. As far as I know, ARM has instructions for getting the high part of
a multiply, so this should be fixed.
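
The scalar form of the trick being vectorized here (replacing a division by a
constant with a multiply-high and a shift) can be sketched in plain C++; the
magic constant below is the standard one for unsigned 32-bit division by 10
(a generic illustration, not code from the patch):

  #include <cassert>
  #include <cstdint>

  // Divide by 10 without a divide instruction: multiply by a precomputed
  // "magic" constant, keep the high bits of the 64-bit product, and shift.
  // The DAG-level rewrite uses MULHU/MULHS nodes in roughly the same way;
  // this commit lets that happen for vectors with splat divisors too.
  uint32_t divBy10(uint32_t x) {
    return static_cast<uint32_t>((static_cast<uint64_t>(x) * 0xCCCCCCCDull) >> 35);
  }

  int main() {
    for (uint32_t x : {0u, 1u, 9u, 10u, 12345u, 4294967295u})
      assert(divBy10(x) == x / 10);
    return 0;
  }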

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207315 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 12:06:28 +00:00
Michael Zolotukhin
abd7ca0706 Revert r206749 till a final decision about the intrinsics is made.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207313 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 09:56:41 +00:00
Yi Jiang
5d473a0831 ARM64: Combine shifts and uses from different basic block to bit-extract instruction
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206774 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-21 19:34:27 +00:00
Michael Zolotukhin
d329c79f16 Reapply r206732. This time without optimization of branches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206749 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-21 12:01:33 +00:00
Chandler Carruth
81549a0a39 Revert r206732 which is causing llc to crash on most of the build bots.
Original commit message:
  Implement builtins for safe division: safe.sdiv.iN, safe.udiv.iN,
  safe.srem.iN, safe.urem.iN (iN = i8, i16, i32, or i64).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206735 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-21 07:11:15 +00:00
Michael Zolotukhin
7d5100d14e Implement builtins for safe division: safe.sdiv.iN, safe.udiv.iN, safe.srem.iN,
safe.urem.iN (iN = i8, i16, i32, or i64).



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206732 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-21 05:33:09 +00:00
Tim Northover
09da6b5540 Atomics: promote ARM's IR-based atomics pass to CodeGen.
Still only 32-bit ARM is using it at this stage, but the promotion allows
direct testing via opt and is a reasonably self-contained patch on the
way to switching ARM64.

At this point, other targets should be able to make use of it without
too much difficulty if they want. (See ARM64 commit coming soon for an
example).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206485 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-17 18:22:47 +00:00
Craig Topper
4ba844388c [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206142 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-14 00:51:57 +00:00
Benjamin Kramer
15c435a367 Retire llvm::array_endof in favor of non-member std::end.
While there make array_lengthof constexpr if we have support for it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206112 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-12 16:15:53 +00:00
Benjamin Kramer
71e50b4d97 Make doxygen comment match the declaration.
Found by -Wdocumentation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206076 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-11 21:58:11 +00:00
Tom Stellard
fdde7d2110 SelectionDAG: Factor ISD::MUL lowering code out of DAGTypeLegalizer
This code has been moved to a new function in the TargetLowering
class called expandMUL().  The purpose of this is to be able
to share lowering code between the SelectionDAGLegalize and
DAGTypeLegalizer classes.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206036 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-11 16:11:58 +00:00
Matt Arsenault
a134cec06b Add DAG parameter to ComputeNumSignBitsForTargetNode
This way, you can check the number of sign bits in the
operands. The depth parameter it already has is pretty useless
without this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205649 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-04 20:13:13 +00:00
Craig Topper
84f7f350c3 Make consistent use of MCPhysReg instead of uint16_t throughout the tree.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205610 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-04 05:16:06 +00:00
Matt Arsenault
1ff5f1a061 Add helpers for checking if a value is a target boolean constant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205335 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-01 18:13:22 +00:00
Matt Arsenault
8d8c507bbf Change shouldSplitVectorElementType to better match the description.
Pass the entire vector type, and not just the element.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205247 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 20:54:58 +00:00
Hal Finkel
adbf9764ae Add a TLI hook to control when BUILD_VECTOR might be expanded using shuffles
There are two general methods for expanding a BUILD_VECTOR node:
  1. Use SCALAR_TO_VECTOR on the defined scalar values and then shuffle
     them together.
  2. Build the vector on the stack and then load it.

Currently, we use a fixed heuristic: If there are only one or two unique
defined values, then we attempt an expansion in terms of SCALAR_TO_VECTOR and
vector shuffles (provided that the required shuffle mask is legal). Otherwise,
always expand via the stack. Even when SCALAR_TO_VECTOR is not legal, this
can still be a good idea depending on what tricks the target can play when
lowering the resulting shuffle. If the target can't do anything special,
however, and if SCALAR_TO_VECTOR is expanded via the stack, this heuristic
leads to sub-optimal code (two stack loads instead of one).

Because only the target knows whether the SCALAR_TO_VECTORs and shuffles for a
build vector of a particular type are likely to be optimal, this adds a new
TLI function, shouldExpandBuildVectorWithShuffles, which takes the vector type
and the count of unique defined values. If this function returns true, then
method (1) will be used, subject to the constraint that all of the necessary
shuffles are legal (as determined by isShuffleMaskLegal). If this function
returns false, then method (2) is always used.

This commit does not enhance the current code to support expanding a
build_vector with more than two unique values using shuffles, but I'll commit
an implementation of the more-general case shortly.
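
A rough model of the decision being made (hypothetical helper names; in LLVM
the real choice goes through the new TLI hook and isShuffleMaskLegal):

  // Standalone sketch of the BUILD_VECTOR expansion choice described above.
  enum class BuildVectorExpansion { ScalarToVectorAndShuffle, BuildOnStack };

  struct TargetPolicy {
    // Stands in for the new per-target hook: is a shuffle-based expansion
    // worth attempting for this many unique defined values?
    bool shouldExpandWithShuffles(unsigned UniqueDefinedValues) const {
      // The old fixed heuristic: only for one or two unique values.
      return UniqueDefinedValues <= 2;
    }
    // Stands in for checking that the required shuffle mask is legal.
    bool requiredShuffleMaskIsLegal() const { return true; }
  };

  BuildVectorExpansion chooseExpansion(const TargetPolicy &TP,
                                       unsigned UniqueDefinedValues) {
    if (TP.shouldExpandWithShuffles(UniqueDefinedValues) &&
        TP.requiredShuffleMaskIsLegal())
      return BuildVectorExpansion::ScalarToVectorAndShuffle;
    return BuildVectorExpansion::BuildOnStack;
  }

  int main() {
    TargetPolicy TP;
    return chooseExpansion(TP, 2) ==
                   BuildVectorExpansion::ScalarToVectorAndShuffle
               ? 0
               : 1;
  }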

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205230 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 17:48:10 +00:00
Tim Northover
1db780ba22 CodeGenPrep: wrangle IR to exploit AArch64 tbz/tbnz inst.
Given IR like:
    %bit = and %val, #imm-with-1-bit-set
    %tst = icmp %bit, 0
    br i1 %tst, label %true, label %false

some targets can emit just a single instruction (tbz/tbnz in the
AArch64 case). However, with ISel acting at the basic-block level, all
three instructions need to be together for this to be possible.

This adds another transformation to CodeGenPrep to expose these
opportunities, if targets opt in via the hook.
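
The source-level shape that produces this IR is just a single-bit test feeding
a branch; for example (an assumed illustration, not a test from the patch):

  #include <cstdio>

  static void onBitSet()   { std::puts("bit 3 set"); }
  static void onBitClear() { std::puts("bit 3 clear"); }

  // A single-bit test feeding a conditional branch. The goal is for the and,
  // the compare, and the branch to stay together until ISel so that AArch64
  // can select a single tbnz/tbz instead of an and + cmp + br sequence.
  void dispatch(unsigned long val) {
    if (val & (1UL << 3))
      onBitSet();
    else
      onBitClear();
  }

  int main() {
    dispatch(8); // bit 3 set
    dispatch(4); // bit 3 clear
    return 0;
  }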

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205086 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-29 08:22:29 +00:00
Renato Golin
58839f43de Change @llvm.clear_cache default to call rt-lib
After some discussion on IRC, emitting a call to the library function seems
like a better default, since it will move from a compiler internal error to
a linker error, which the user can work around until LLVM is fixed.

I'm also adding a note on the responsibility of the user to confirm that
the cache was cleared on platforms where nothing is done.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204806 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 14:01:32 +00:00
Renato Golin
c4b058f9e7 Add @llvm.clear_cache builtin
Implementing the LLVM part of the call to __builtin___clear_cache,
which translates into an intrinsic @llvm.clear_cache and is lowered
by each target, either to a call to __clear_cache or to nothing at all
in case the caches are unified.

Updating LangRef and adding some tests for the implemented architectures.
Other architectures will have to implement the method if this builtin
has to be compiled for them, since the default behaviour is to bail out
as unimplemented.

A Clang patch is required for the builtin to be lowered into the
llvm intrinsic. This will be done next.
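
Typical source-level usage of the builtin being lowered here might look like
the following sketch (a JIT-style scenario with made-up names; the buffer
handling is illustrative only):

  #include <cstddef>
  #include <cstring>

  // After writing freshly generated machine code into a buffer, the
  // instruction cache must be invalidated for that range before executing it.
  // __builtin___clear_cache is what lowers to @llvm.clear_cache and from
  // there to __clear_cache (or to nothing on targets with unified caches).
  void publishCode(char *buf, const void *code, std::size_t size) {
    std::memcpy(buf, code, size);
    __builtin___clear_cache(buf, buf + size);
    // The buffer could now be cast to a function pointer and called,
    // assuming it was allocated with execute permission.
  }

  int main() {
    static char buffer[16];
    const unsigned char dummy[4] = {0xC3, 0x00, 0x00, 0x00}; // placeholder bytes
    publishCode(buffer, dummy, sizeof(dummy));
    return 0;
  }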

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204802 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 12:52:28 +00:00
Benjamin Kramer
dabc5073b2 Remove copy ctors that did the same thing as the default one.
The code added nothing but potentially disabled move semantics and made
types non-trivially copyable.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203563 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-11 11:32:49 +00:00
Chandler Carruth
4bbfbdf7d7 [Modules] Move CallSite into the IR library where it belongs. It is
abstracting between a CallInst and an InvokeInst, both of which are IR
concepts.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202816 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-04 11:01:28 +00:00
Rafael Espindola
b4aaffffd3 move getNameWithPrefix and getSymbol to TargetMachine.
TargetLoweringBase is implemented in CodeGen, so before this patch we had
a dependency from Target to CodeGen. This would show up as a link failure of
llvm-stress when building with -DBUILD_SHARED_LIBS=ON.

This fixes pr18900.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201711 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-19 20:30:41 +00:00
Rafael Espindola
737c9f6005 Add back r201608, r201622, r201624 and r201625
r201608 made LLVM correctly handle private globals with MachO. r201622 fixed
a bug in it, and r201624 and r201625 were changes for using private linkage,
assuming that llvm would do the right thing.

They all got reverted because r201608 introduced a crash in LTO. This patch
includes a fix for that. The issue was that TargetLoweringObjectFile now has
to be initialized before we can mangle names of private globals. This is
trivially true during the normal codegen pipeline (the asm printer does it),
but LTO has to do it manually.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201700 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-19 17:23:20 +00:00
Daniel Jasper
9a92586114 Revert r201622 and r201608.
This causes the LLVMgold plugin to segfault. More information on the
replies to r201608.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201669 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-19 12:26:01 +00:00
Tim Northover
44697f3fc1 X86 CodeGenPrep: sink shufflevectors before shifts
On x86, shifting a vector by a scalar is significantly cheaper than shifting a
vector by another fully general vector. Unfortunately, because SelectionDAG
operates on just one basic block at a time, the shufflevector instruction that
reveals whether the right-hand side of a shift *is* really a scalar is often
not visible to CodeGen when it's needed.

This adds another handler to CodeGenPrepare, to sink any useful shufflevector
instructions down to the basic block where they're used, predicated on a target
hook (since on other architectures, doing so will often just introduce extra
real work).

rdar://problem/16063505

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201655 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-19 10:02:43 +00:00
Rafael Espindola
faaa553274 Avoid an infinite cycle with private linkage and -f{data|function}-sections.
When outputting an object we check its section to find its name, but when
looking for the section with -ffunction-sections, we look for the symbol name.

Break the loop by requesting a name with the private prefix when constructing
the section name. This matches the behavior before r201608.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201622 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-19 01:28:30 +00:00
Rafael Espindola
6880f0e19f Fix PR18743.
The IR
@foo = private constant i32 42

is valid, but before this patch we would produce an invalid MachO from it. It
was invalid because it would use an L label in a section where the linker needs
the labels in order to atomize it.

One way of fixing it would be to just reject this IR in the backend, but that
would not be very front end friendly.

What this patch does is use an 'l' prefix in sections where we know the linker
requires symbols for atomizing them. This allows frontends to just use
private and not worry about which sections they go to or how the linker handles
them.

One small issue with this strategy is that now a symbol name depends on the
section, which is not available before codegen. This is not a problem in
practice. The reason is that it only happens with private linkage, which will
be ignored by the non-codegen users (llvm-nm and llvm-ar).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201608 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-18 22:24:57 +00:00
Rafael Espindola
39d8dcb53b Rename some member variables from TD to DL.
TargetData was renamed DataLayout back in r165242.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201581 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-18 15:33:12 +00:00
Matt Arsenault
bb7bf85f3c Add address space argument to allowsUnalignedMemoryAccess.
On R600, some address spaces have more strict alignment
requirements than others.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200887 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-05 23:15:53 +00:00
Reid Kleckner
8a24e83550 Implement inalloca codegen for x86 with the new inalloca design
Calls with inalloca are lowered by skipping all stores for arguments
passed in memory and the initial stack adjustment to allocate argument
memory.

Now the frontend is responsible for the memory layout, and the backend
doesn't have to do any work.  As a result these changes are pretty
minimal.

Reviewers: echristo

Differential Revision: http://llvm-reviews.chandlerc.com/D2637

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200596 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-31 23:50:57 +00:00
Juergen Ributzka
efbb39740c [TLI] Add a new hook to TargetLowering to query the target if a load of a constant should be converted to simply the constant itself.
Before this patch we used getIntImmCost from TargetTransformInfo to determine if
a load of a constant should be converted to just a constant, but the threshold
for this was set to an arbitrary value. This value works well for the two
targets (X86 and ARM) that implement this target-hook, but it isn't
target-independent at all.

Now targets have the possibility to decide directly if this optimization should
be performed. The default value is set to false to preserve the current
behavior. The target hook has been moved to TargetLowering, which removed the
last use and need of TargetTransformInfo in SelectionDAG.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200271 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-28 01:20:14 +00:00
Bill Wendling
4644d79871 Refactor function that checks that __builtin_returnaddress's argument is constant.
This moves the check up into the parent class so that all targets can use it
without having to copy (and keep in sync) the same error message.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198579 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-06 00:43:20 +00:00
Hal Finkel
ac8ba0c0fd Disable compare sinking in CodeGenPrepare when multiple condition registers are available
As noted in the comment above CodeGenPrepare::OptimizeInst, which aggressively
sinks compares to reduce pressure on the condition register(s), for targets
such as PowerPC with multiple condition registers, this may not be the right
thing to do. This adds an HasMultipleConditionRegisters boolean to TLI, and
CodeGenPrepare::OptimizeInst is skipped when HasMultipleConditionRegisters is
true.

This functionality will be used by the PowerPC backend in an upcoming commit.
Especially when the PowerPC backend starts tracking individual condition
register bits as separate allocatable entities (which will happen in this
upcoming commit), this sinking from CodeGenPrepare::OptimizeInst is
significantly suboptimal.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198354 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-02 21:13:43 +00:00
Richard Sandiford
086791eca2 Add TargetLowering::prepareVolatileOrAtomicLoad
One unusual feature of the z architecture is that the result of a
previous load can be reused indefinitely for subsequent loads, even if
a cache-coherent store to that location is performed by another CPU.
A special serializing instruction must be used if you want to force
a load to be reattempted.

Since volatile loads are not supposed to be omitted in this way,
we should insert a serializing instruction before each such load.
The same goes for atomic loads.

The patch implements this at the IR->DAG boundary, in a similar way
to atomic fences.  It is a no-op for targets other than SystemZ.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196905 91177308-0d34-0410-b5e6-96231b3b80d8
2013-12-10 10:36:34 +00:00