2671 Commits

Author SHA1 Message Date
Andrew Trick
5f22dd7858 Add a subtarget hook: enablePostMachineScheduler.
As requested by AArch64 subtargets.

Note that this will have no effect until the
AArch64 target actually enables the pass like this:
substitutePass(&PostRASchedulerID, &PostMachineSchedulerID);

As soon as armv7 switches over, PostMachineScheduler will become the
default postRA scheduler, so this won't be necessary any more.
Targets using the old postRA scheduler would then do:
substitutePass(&PostMachineSchedulerID, &PostRASchedulerID);
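
For illustration only (not code from this commit): a minimal, self-contained
sketch of the opt-in pattern described above, using invented class and pass
names rather than the real LLVM ones.

  #include <cstdio>

  // Hypothetical stand-ins for LLVM's pass IDs and pass-config interface.
  static char PostRASchedulerID, PostMachineSchedulerID;

  struct MySubtarget {
    // The new hook: a subtarget opts in to the MachineScheduler-based post-RA pass.
    virtual bool enablePostMachineScheduler() const { return true; }
    virtual ~MySubtarget() = default;
  };

  static void substitutePass(char *Old, char *New) {
    std::printf("replacing pass %p with %p\n", (void *)Old, (void *)New);
  }

  int main() {
    MySubtarget ST;
    if (ST.enablePostMachineScheduler())
      substitutePass(&PostRASchedulerID, &PostMachineSchedulerID);
  }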

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210167 91177308-0d34-0410-b5e6-96231b3b80d8
2014-06-04 07:06:27 +00:00
Eric Christopher
189fe78e2f Make early if conversion dependent upon the subtarget and add
a subtarget hook to enable it. Unconditionally add the pass to the pipeline
for targets that might want to use it. No functional change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209340 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-21 23:40:26 +00:00
Eric Christopher
595bdb7e8b Group the scheduling functions together.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209339 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-21 23:40:18 +00:00
Eric Christopher
4f6d26dbe8 Move the verbose asm option to be part of the options struct and
set it appropriately.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209258 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-20 23:59:50 +00:00
Eric Christopher
6a9366c0c6 Move the function and data section flags into the options struct and
make the functions that set them non-static.
Move and rename the LLVM-specific backend options to avoid conflicting
with the clang option.

Paired with a backend commit to update.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209238 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-20 21:25:34 +00:00
Tim Northover
c9ff20e8c6 TableGen: convert InstAlias's Emit bit to an int.
When multiple aliases overlap, the correct string to print can often be
determined purely by considering the InstAlias declarations in some particular
order. This allows the user to specify that order manually when desired,
without resorting to hacking around with the default lexicographical order on
Record instantiation, which is error-prone and ugly.

I was also mistaken about "add w2, w3, w4" being the same as "add w2, w3, w4,
uxtw". That's only true if Rn is the stack pointer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209199 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-20 09:17:16 +00:00
Saleem Abdulrasool
82b1114fef Target: remove old constructors for CallLoweringInfo
This is mostly a mechanical change changing all the call sites to the newer
chained-function construction pattern.  This removes the horrible 15-parameter
constructor for the CallLoweringInfo in favour of setting properties of the call
via chained functions.  No functional change beyond the removal of the old
constructors is intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209082 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-17 21:50:17 +00:00
Saleem Abdulrasool
a2b20f975f Target: add support to build CallLoweringInfo
Rather than introducing an auxiliary CallLoweringInfoBuilder, add the methods to
do chained function construction directly to CallLoweringInfo.  This reduces the
monstrous 15-parameter constructor into a series of simpler (for some definition
of simpler) functions that control particular aspects of the call.  The old
interfaces can be completely removed once callers are moved to the new chained
constructor pattern.
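
As a rough illustration of the chained-construction pattern being described
(invented names, not the actual CallLoweringInfo API):

  #include <string>
  #include <utility>
  #include <vector>

  // Each setter returns *this so calls can be chained, replacing one
  // constructor with many positional parameters.
  struct CallInfo {
    CallInfo &setCallee(std::string Name) { Callee = std::move(Name); return *this; }
    CallInfo &setArgs(std::vector<int> A) { Args = std::move(A); return *this; }
    CallInfo &setTailCall(bool TC = true) { IsTailCall = TC; return *this; }

    std::string Callee;
    std::vector<int> Args;
    bool IsTailCall = false;
  };

  int main() {
    CallInfo CI;
    CI.setCallee("memcpy").setArgs({1, 2, 3}).setTailCall(false);
  }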

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209081 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-17 21:50:06 +00:00
Saleem Abdulrasool
d825568843 Target: change member from reference to pointer
This is a preliminary step to help ease the construction of CallLoweringInfo.
Changing the construction to a chained function pattern requires that the
parameter be nullable.  However, rather than copying the vector, save a pointer
instead of the reference to permit late binding of the arguments.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209080 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-17 21:50:01 +00:00
Reid Kleckner
1ce3088669 Add comdat key field to llvm.global_ctors and llvm.global_dtors
This allows us to put dynamic initializers for weak data into the same
comdat group as the data being initialized.  This is necessary for MSVC
ABI compatibility.  Once we have comdats for guard variables, we can use
the combination to help GlobalOpt fire more often for weak data with
guarded initialization on other platforms.

Reviewers: nlewycky

Differential Revision: http://reviews.llvm.org/D3499

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209015 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 20:39:27 +00:00
Rafael Espindola
21cfedee05 Revert "Implement global merge optimization for global variables."
This reverts commit r208934.

The patch depends on aliases to GEPs with non-zero offsets. That is not
supported and fairly broken.

The good news is that GlobalAlias is being redesigned and will have support
for offsets, so this patch should be a nice match for it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208978 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 13:02:18 +00:00
Eric Christopher
9364e63d79 Remove the Options query functions and just access our Options directly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208937 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-16 00:32:52 +00:00
Jiangning Liu
d5db8765d6 Implement global merge optimization for global variables.
This commit implements two command-line switches, -global-merge-on-external
and -global-merge-aligned; both are false by default, so this
optimization is disabled by default for all targets.

For ARM64, some back-end behaviors need to be tuned before this
optimization can be enabled further.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208934 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-15 23:45:42 +00:00
Eric Christopher
cea72fe763 Remove unused functions setting MCOptions from TargetMachine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208835 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-15 01:25:04 +00:00
Eric Christopher
afc6099348 Move the TargetMachine MC options to MCTargetOptions. No functional
change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208832 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-15 01:08:00 +00:00
Jay Foad
6b543713a2 Rename ComputeMaskedBits to computeKnownBits. "Masked" has been
inappropriate since it lost its Mask parameter in r154011.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208811 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-14 21:14:37 +00:00
Rafael Espindola
7b6f59e8f6 Remove MCUseCFI from TargetMachine.
It was always true.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208547 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-12 13:01:42 +00:00
Hal Finkel
24f554f052 Pass the value type to TLI::getRegisterByName
We must validate the value type in TLI::getRegisterByName, because if we
don't and the wrong type was used with the IR intrinsic, then we'll assert
(because we won't be able to find a valid register class with which to
construct the requested copy operation). For PPC64, additionally, the type
information is necessary to decide between the 64-bit register and the 32-bit
subregister.

No functionality change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208508 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-11 19:29:07 +00:00
Oliver Stannard
e2948385b9 ARM: HFAs must be passed in consecutive registers
When using the ARM AAPCS, HFAs (Homogeneous Floating-point Aggregates) must
be passed in a block of consecutive floating-point registers, or on the stack.
This means that unused floating-point registers cannot be back-filled with
part of an HFA; however, this can currently happen. This patch, along with the
corresponding clang patch (http://reviews.llvm.org/D3083) prevents this.
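
For context, an illustrative example (not from the patch) of a C++ aggregate
that qualifies as an HFA under the AAPCS and must therefore be assigned a
block of consecutive floating-point registers, or go entirely on the stack:

  // Up to four members, all of the same floating-point type: an HFA.
  // Passed in registers, a first argument like this typically occupies s0-s3.
  struct Quad {
    float a, b, c, d;
  };

  float sum(Quad q) { return q.a + q.b + q.c + q.d; }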



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208413 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-09 14:01:47 +00:00
Hal Finkel
f35ce2376c Move late partial-unrolling thresholds into the processor definitions
The old method used by X86TTI to determine partial-unrolling thresholds was
messy (because it worked by testing target features), and also would not
correctly identify the target CPU if certain target features were disabled.
After some discussions on IRC with Chandler et al., it was decided that the
processor scheduling models were the right containers for this information
(because it is often tied to special uop dispatch-buffer sizes).

This does represent a small functionality change:
 - For generic x86-64 (which uses the SB model and, thus, will get some
   unrolling).
 - For AMD cores (because they still currently use the SB scheduling model)
 - For Haswell (based on benchmarking by Louis Gerbarg, it was decided to bump
   the default threshold to 50; we're working on a test case for this).
Otherwise, nothing has changed for any other targets. The logic, however, has
been moved into BasicTTI, so other targets may now also opt-in to this
functionality simply by setting LoopMicroOpBufferSize in their processor
model definitions.
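
A rough sketch (not the actual BasicTTI code) of how a processor's
LoopMicroOpBufferSize might feed a partial-unrolling decision; the heuristic
shown here is an assumption for illustration:

  #include <algorithm>
  #include <cstdio>

  // Pick a partial unroll count so the unrolled body still fits in the
  // processor's uop dispatch buffer. A size of 0 means "no information".
  static unsigned partialUnrollCount(unsigned LoopMicroOpBufferSize,
                                     unsigned LoopBodyUOps) {
    if (LoopMicroOpBufferSize == 0 || LoopBodyUOps == 0)
      return 1;
    return std::max(1u, LoopMicroOpBufferSize / LoopBodyUOps);
  }

  int main() {
    std::printf("%u\n", partialUnrollCount(50, 8)); // a 50-uop buffer, 8-uop body -> 6
  }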

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208289 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-08 09:14:44 +00:00
Renato Golin
22f779d1fd Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
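
For illustration, the kind of source-level construct this infrastructure is
aimed at: a GCC/Clang global named register variable (shown for a target where
"sp" names the stack pointer; support beyond the stack pointer is an
assumption here):

  // Reads of current_sp lower to a read of the named register rather than
  // an ordinary memory load.
  register unsigned long current_sp asm("sp");

  unsigned long stack_pointer() { return current_sp; }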

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208104 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-06 16:51:25 +00:00
Eric Christopher
9572a1c8c9 Fix typo (also tab character).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208001 91177308-0d34-0410-b5e6-96231b3b80d8
2014-05-05 21:40:41 +00:00
Weiming Zhao
fa1cf8cd68 [ARM64] Prevent bit extraction from being adjusted by a following shift
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it
into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx
more difficult.
For example:
Given
  %shr = lshr i64 %x, 4
  %and = and i64 %shr, 15
  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
  %0 = load i64* %arrayidx
With current shift folding, it takes 3 instrs to compute the base address:
  lsr x8, x0, #1
  and x8, x8, #0x78
  add x8, x9, x8

If using ubfx, it only needs 2 instrs:
  ubfx  x8, x0, #4, #4
  add x8, x9, x8, lsl #3

This fixes bug 19589
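
A small self-contained check (illustrative only) of the equivalence being
matched: ((x >> C1) & Mask) with a contiguous Mask is exactly an extract of
Width bits starting at bit C1.

  #include <cassert>
  #include <cstdint>

  // Software model of "ubfx dst, src, #lsb, #width" (width < 64 here).
  static uint64_t ubfx(uint64_t x, unsigned lsb, unsigned width) {
    return (x >> lsb) & ((UINT64_C(1) << width) - 1);
  }

  int main() {
    uint64_t x = 0x123456789abcdef0;
    assert(((x >> 4) & 15) == ubfx(x, 4, 4)); // the pattern from the IR above
  }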


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207702 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-30 21:07:24 +00:00
Joerg Sonnenberger
c430d3ffcd Fix comment
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207429 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-28 18:11:51 +00:00
Benjamin Kramer
aab6231cd9 DAGCombiner: Turn divs of vector splats into vectorized multiplications.
Otherwise the legalizer would just scalarize everything. Support for
mulhi in the targets isn't that great yet, so on most targets we get
exactly the same scalarized output. Add a test for x86 vector udiv.

I had to disable the mulhi nodes on ARM because there aren't any patterns
for them. As far as I know ARM has instructions for getting the high part of
a multiply, so this should be fixed.
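
Not the DAGCombiner code itself, but a minimal scalar sketch of the underlying
trick: an unsigned divide by a constant becomes a multiply by a precomputed
reciprocal plus a shift, where the multiply-high is the part that needs mulhi
support when vectorized. Shown for 16-bit values and divisor 7, with
74899 = ceil(2^19 / 7):

  #include <cassert>
  #include <cstdint>

  // n / 7 == (n * 74899) >> 19 for all 16-bit n.
  static uint16_t udiv7(uint16_t n) {
    return static_cast<uint16_t>((static_cast<uint64_t>(n) * 74899u) >> 19);
  }

  int main() {
    for (uint32_t n = 0; n <= 0xFFFF; ++n)
      assert(udiv7(static_cast<uint16_t>(n)) == n / 7);
  }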

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207315 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 12:06:28 +00:00
Michael Zolotukhin
abd7ca0706 Revert r206749 till a final decision about the intrinsics is made.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207313 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-26 09:56:41 +00:00
Evgeniy Stepanov
d6af41b2eb Create MCTargetOptions.
For now it contains a single flag, SanitizeAddress, which enables
AddressSanitizer instrumentation of inline assembly.

Patch by Yuri Gorshenin.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206971 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-23 11:16:03 +00:00
Yi Jiang
5d473a0831 ARM64: Combine shifts and uses from different basic blocks into a bit-extract instruction
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206774 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-21 19:34:27 +00:00
Michael Zolotukhin
d329c79f16 Reapply r206732. This time without optimization of branches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206749 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-21 12:01:33 +00:00
Chandler Carruth
81549a0a39 Revert r206732 which is causing llc to crash on most of the build bots.
Original commit message:
  Implement builtins for safe division: safe.sdiv.iN, safe.udiv.iN,
  safe.srem.iN, safe.urem.iN (iN = i8, i16, i32, or i64).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206735 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-21 07:11:15 +00:00
Michael Zolotukhin
7d5100d14e Implement builtins for safe division: safe.sdiv.iN, safe.udiv.iN, safe.srem.iN,
safe.urem.iN (iN = i8, i16, i32, or i64).



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206732 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-21 05:33:09 +00:00
Yaron Keren
64b2297786 Patch by Vadim Chugunov
The Win64 stack unwinder gets confused when execution flow "falls through" after
a call to a 'noreturn' function. This fixes the "missing epilogue" problem by
emitting a trap instruction for IR 'unreachable' on x86_64-pc-windows.

A secondary use for it would be for anyone wanting to make double-sure that
'noreturn' functions, indeed, do not return.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206684 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-19 13:47:43 +00:00
Tim Northover
09da6b5540 Atomics: promote ARM's IR-based atomics pass to CodeGen.
Still, only 32-bit ARM is using it at this stage, but the promotion allows
direct testing via opt and is a reasonably self-contained patch on the
way to switching ARM64.

At this point, other targets should be able to make use of it without
too much difficulty if they want. (See ARM64 commit coming soon for an
example).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206485 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-17 18:22:47 +00:00
Craig Topper
0b6cb7104b [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206252 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-15 06:32:26 +00:00
Craig Topper
4ba844388c [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206142 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-14 00:51:57 +00:00
Benjamin Kramer
15c435a367 Retire llvm::array_endof in favor of non-member std::end.
While there, make array_lengthof constexpr if we have support for it.
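
Roughly the before/after shape of the change (illustrative definitions, not
the exact LLVM ones):

  #include <cstddef>
  #include <iterator>

  // constexpr length of a C array.
  template <class T, std::size_t N>
  constexpr std::size_t array_lengthof(T (&)[N]) { return N; }

  int main() {
    static const int Opcodes[] = {1, 2, 3, 4};
    static_assert(array_lengthof(Opcodes) == 4, "known at compile time");
    const int *End = std::end(Opcodes); // replaces llvm::array_endof(Opcodes)
    (void)End;
  }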

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206112 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-12 16:15:53 +00:00
Benjamin Kramer
71e50b4d97 Make doxygen comment match the declaration.
Found by -Wdocumentation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206076 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-11 21:58:11 +00:00
Tom Stellard
fdde7d2110 SelectionDAG: Factor ISD::MUL lowering code out of DAGTypeLegalizer
This code has been moved to a new function in the TargetLowering
class called expandMUL().  The purpose of this is to be able
to share lowering code between the SelectionDAGLegalize and
DAGTypeLegalizer classes.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206036 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-11 16:11:58 +00:00
Reid Kleckner
bc1fd917f0 Move the segmented stack switch to a function attribute
This removes the -segmented-stacks command line flag in favor of a
per-function "split-stack" attribute.

Patch by Luqman Aden and Alex Crichton!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205997 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-10 22:58:43 +00:00
Matt Arsenault
a134cec06b Add DAG parameter to ComputeNumSignBitsForTargetNode
This way, you can check the number of sign bits in the
operands. The depth parameter it already has is pretty useless
without this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205649 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-04 20:13:13 +00:00
Craig Topper
84f7f350c3 Make consistent use of MCPhysReg instead of uint16_t throughout the tree.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205610 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-04 05:16:06 +00:00
Jim Grosbach
bc07242d9b Simplify resolveFrameIndex() signature.
Just pass a MachineInstr reference rather than an MBB iterator.
Creating a MachineInstr& is the first thing every implementation did
anyway.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205453 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-02 19:28:18 +00:00
Matt Arsenault
1ff5f1a061 Add helpers for checking if a value is a target boolean constant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205335 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-01 18:13:22 +00:00
Matt Arsenault
8d8c507bbf Change shouldSplitVectorElementType to better match the description.
Pass the entire vector type, and not just the element.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205247 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 20:54:58 +00:00
Hal Finkel
adbf9764ae Add a TLI hook to control when BUILD_VECTOR might be expanded using shuffles
There are two general methods for expanding a BUILD_VECTOR node:
  1. Use SCALAR_TO_VECTOR on the defined scalar values and then shuffle
     them together.
  2. Build the vector on the stack and then load it.

Currently, we use a fixed heuristic: If there are only one or two unique
defined values, then we attempt an expansion in terms of SCALAR_TO_VECTOR and
vector shuffles (provided that the required shuffle mask is legal). Otherwise,
always expand via the stack. Even when SCALAR_TO_VECTOR is not legal, this
can still be a good idea depending on what tricks the target can play when
lowering the resulting shuffle. If the target can't do anything special,
however, and if SCALAR_TO_VECTOR is expanded via the stack, this heuristic
leads to sub-optimal code (two stack loads instead of one).

Because only the target knows whether the SCALAR_TO_VECTORs and shuffles for a
build vector of a particular type are likely to be optimal, this adds a new
TLI function: shouldExpandBuildVectorWithShuffles which takes the vector type
and the count of unique defined values. If this function returns true, then
method (1) will be used, subject to the constraint that all of the necessary
shuffles are legal (as determined by isShuffleMaskLegal). If this function
returns false, then method (2) is always used.

This commit does not enhance the current code to support expanding a
build_vector with more than two unique values using shuffles, but I'll commit
an implementation of the more-general case shortly.
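
A schematic sketch of the decision described above, using hypothetical types
rather than LLVM's actual TargetLowering interface:

  // Model of the choice between the two BUILD_VECTOR expansion methods.
  struct VecType { unsigned NumElts; };

  struct MyTLI {
    // The hook: given the vector type and the number of unique defined
    // values, decide whether to try SCALAR_TO_VECTOR plus shuffles.
    virtual bool shouldExpandBuildVectorWithShuffles(const VecType &,
                                                     unsigned UniqueDefs) const {
      return UniqueDefs <= 2; // mirrors the old fixed heuristic
    }
    virtual ~MyTLI() = default;
  };

  enum class Expansion { ScalarToVectorAndShuffle, ViaStack };

  Expansion chooseExpansion(const MyTLI &TLI, const VecType &VT,
                            unsigned UniqueDefs, bool ShufflesLegal) {
    if (TLI.shouldExpandBuildVectorWithShuffles(VT, UniqueDefs) && ShufflesLegal)
      return Expansion::ScalarToVectorAndShuffle; // method (1)
    return Expansion::ViaStack;                   // method (2)
  }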

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205230 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-31 17:48:10 +00:00
Tim Northover
7b837d8c75 ARM64: initial backend import
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.

Everything will be easier with the target in-tree though, hence this
commit.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205090 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-29 10:18:08 +00:00
Tim Northover
1db780ba22 CodeGenPrep: wrangle IR to exploit AArch64 tbz/tbnz inst.
Given IR like:
    %bit = and %val, #imm-with-1-bit-set
    %tst = icmp %bit, 0
    br i1 %tst, label %true, label %false

some targets can emit just a single instruction (tbz/tbnz in the
AArch64 case). However, with ISel acting at the basic-block level, all
three instructions need to be together for this to be possible.

This adds another transformation to CodeGenPrep to expose these
opportunities, if targets opt in via the hook.
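
For reference, the kind of source that produces the IR shape above
(illustrative only):

  // Testing one bit and branching on it: on AArch64 the whole condition can
  // become a single tbz/tbnz once the and/icmp/br are kept together.
  void handle(unsigned long val, void (*onSet)(), void (*onClear)()) {
    if (val & (1UL << 5))
      onSet();
    else
      onClear();
  }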

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205086 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-29 08:22:29 +00:00
Manman Ren
d9524d66cd Provide a target override for the cost of using a callee-saved register
for the first time.

Thanks Andy for the discussion.
rdar://16162005


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204979 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 23:10:04 +00:00
David Blaikie
8040d49d85 DebugInfo: TargetOptions/MCAsmInfo support for compressed debug info sections
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204957 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-27 20:45:41 +00:00
Renato Golin
58839f43de Change @llvm.clear_cache default to call rt-lib
After some discussion on IRC, emitting a call to the library function seems
like a better default, since it will move from an internal compiler error to
a linker error that the user can work around until LLVM is fixed.

I'm also adding a note on the responsibility of the user to confirm that
the cache was cleared on platforms where nothing is done.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204806 91177308-0d34-0410-b5e6-96231b3b80d8
2014-03-26 14:01:32 +00:00