Commit Graph

563 Commits

SHA1 Message Date
235c75cc21 Change TargetLowering::TransformToType to contain MVTs, instead of
EVTs.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169847 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 10:05:04 +00:00
78a70f3c60 Change TargetLowering::getRepRegClassCostFor, getIndexedLoadAction,
getIndexedStoreAction, and addRegisterClass to take an MVT, instead
of EVT.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169846 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 10:00:35 +00:00
bade0345d1 Change TargetLowering::findRepresentativeClass to take an MVT, instead
of EVT.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169845 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 09:57:18 +00:00
bb2543bb0e Change TargetLowering::getTypeToPromoteTo to take and return MVTs,
instead of EVTs.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169844 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 09:54:23 +00:00
204301f045 Change TargetLowering::isCondCodeLegal to take an MVT, instead of EVT.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169843 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 09:51:27 +00:00
aff674331e Change TargetLowering::getCondCodeAction to take an MVT, instead of
EVT.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169842 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 09:48:14 +00:00
3166283ac1 Change TargetLowering::getTruncStoreAction to take MVTs, instead of EVTs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169841 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 09:42:24 +00:00
ffa03b7981 Change TargetLowering::getLoadExtAction to take an MVT, instead of EVT.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169840 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 09:39:09 +00:00
968947766b Change TargetLowering::setTypeAction to take an MVT, instead of EVT.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169839 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 09:32:56 +00:00
aa7744d75f Change TargetLowering::getRepRegClassFor to take an MVT, instead of
EVT.

Accordingly, change RegDefIter to contain MVTs instead of EVTs.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169838 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 09:31:43 +00:00
8163ca76f0 Change TargetLowering::getRegClassFor to take an MVT, instead of EVT.
Accordingly, add helper functions getSimpleValueType (in parallel to
getValueType) in SDValue, SDNode, and TargetLowering.

This is the first in a series of patches.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169837 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 09:10:33 +00:00
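
A minimal sketch of the EVT/MVT distinction this series of patches is built on (mock helper, not the actual LLVM sources; the name toSimpleVT is an assumption): MVT is the plain enum-backed machine value type, while EVT can also describe extended types, so code that knows it is dealing with a simple type can assert and unwrap.

    #include "llvm/CodeGen/ValueTypes.h"
    #include <cassert>
    using namespace llvm;

    // Assumed helper: drop from the EVT wrapper to the raw MVT enum once the
    // caller knows the type is "simple" (i.e. has a machine value type).
    static MVT toSimpleVT(EVT VT) {
      assert(VT.isSimple() && "extended value types have no MVT form");
      return VT.getSimpleVT();
    }
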
376642ed62 Some enhancements for memcpy / memset inline expansion.
1. Teach it to use overlapping unaligned load / store to copy / set the trailing
   bytes. e.g. On x86, use two pairs of movups / movaps for 17 - 31 byte copies.
2. Use f64 for memcpy / memset on targets where i64 is not legal but f64 is. e.g.
   x86 and ARM.
3. When memcpy'ing from a constant string, do *not* replace the load with a constant
   if it's not possible to materialize an integer immediate with a single
   instruction (this required a new target hook: TLI.isIntImmLegal()).
4. Use unaligned loads / stores more aggressively if the target hooks indicate they
   are "fast".
5. Update ARM target hooks to use unaligned loads / stores. e.g. vld1.8 / vst1.8.
   Also increase the threshold to something reasonable (8 for memset, 4 pairs
   for memcpy).

This significantly improves Dhrystone, up to 50% on ARM iOS devices.

rdar://12760078


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169791 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-10 23:21:26 +00:00
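
To illustrate point 1 of the entry above, here is a conceptual C++ sketch of the overlapping trailing-bytes trick (it mirrors the idea, not the SelectionDAG expansion code): for a 17 - 31 byte copy, two 16-byte unaligned operations cover the whole buffer, with the second deliberately overlapping the first.

    #include <cstddef>
    #include <cstring>

    // Conceptual only: copy N bytes, 17 <= N <= 31, with two 16-byte chunks.
    // The tail chunk starts at N - 16, so it may overlap the head chunk.
    void copyUpTo31(char *Dst, const char *Src, std::size_t N) {
      std::memcpy(Dst, Src, 16);                    // head: bytes [0, 16)
      std::memcpy(Dst + N - 16, Src + N - 16, 16);  // tail: bytes [N-16, N)
    }
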
9171fb9cfb Fix a coding style nit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169776 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-10 22:00:20 +00:00
2766a47310 Replace r169459 with something safer. Rather than having computeMaskedBits
understand the target implementation of any_extend / extload, just generate
zero_extend in place of any_extend for liveouts when the target knows the
zero_extend will be implicit (e.g. ARM ldrb / ldrh) or folded (e.g. x86 movz).

rdar://12771555


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169536 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 19:13:27 +00:00
8a7186dbc2 Let targets provide hooks that compute known zeros and ones for any_extend
and extloads. If they are implemented as zero-extend, or implicitly
zero-extend, then this can enable more demanded-bits optimizations. e.g.

define void @foo(i16* %ptr, i32 %a) nounwind {
entry:
  %tmp1 = icmp ult i32 %a, 100
  br i1 %tmp1, label %bb1, label %bb2
bb1:
  %tmp2 = load i16* %ptr, align 2
  br label %bb2
bb2:
  %tmp3 = phi i16 [ 0, %entry ], [ %tmp2, %bb1 ]
  %cmp = icmp ult i16 %tmp3, 24
  br i1 %cmp, label %bb3, label %exit
bb3:
  call void @bar() nounwind
  br label %exit
exit:
  ret void
}

This compiles to the following before:
        push    {lr}
        mov     r2, #0
        cmp     r1, #99
        bhi     LBB0_2
@ BB#1:                                 @ %bb1
        ldrh    r2, [r0]
LBB0_2:                                 @ %bb2
        uxth    r0, r2
        cmp     r0, #23
        bhi     LBB0_4
@ BB#3:                                 @ %bb3
        bl      _bar
LBB0_4:                                 @ %exit
        pop     {lr}
        bx      lr

The uxth is not needed since ldrh implicitly zero-extends the high bits. With
this change it is eliminated.

rdar://12771555


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169459 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 01:28:01 +00:00
255f89faee Sort the #include lines for the include/... tree with the script.
AKA: Recompile *ALL* the source code!

This one went much better. No manual edits here. I spot-checked for
silliness and grep-checked for really broken edits and everything seemed
good. It all still compiles. Yell if you see something that looks goofy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169133 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-03 17:02:12 +00:00
3d200255d5 Allow targets to prefer TypeSplitVector over TypePromoteInteger when computing the legalization method for vectors
For some targets, it is desirable to prefer scalarizing <N x i1> instead of promoting to a larger legal type, such as <N x i32>.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168882 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-29 14:26:24 +00:00
87a1af4380 PR14256: SelectionDAGLowering was renamed to SelectionDAGBuilder a long time ago. Fix references to it in documentation and comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167378 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-05 02:59:23 +00:00
f065a84677 1. Fix a bug in getTypeConversion. When a *simple* type is split, we need to return the type of the split result.
2. Change the maximum vectorization width from 4 to 8.
3. A test for both.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166864 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-27 04:11:32 +00:00
a5a3a61c5f Refactor the VectorTargetTransformInfo interface.
Add getCostXXX calls for different families of opcodes, such as casts, arithmetic, cmp, etc.

Port the LoopVectorizer to the new API.

The LoopVectorizer now finds instructions which will remain uniform after vectorization. It uses this information when calculating the cost of these instructions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166836 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-26 23:49:28 +00:00
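
A rough mock of the interface shape described above (method names and parameters are assumptions, not the real VectorTargetTransformInfo API): separate cost queries per opcode family let the vectorizer price casts, arithmetic, and compares independently.

    // Mock only; the real queries take llvm::Type* operands and opcode enums.
    struct VectorTTISketch {
      virtual unsigned getArithmeticInstrCost(unsigned Opcode) const { return 1; }
      virtual unsigned getCastInstrCost(unsigned Opcode) const { return 1; }
      virtual unsigned getCmpSelInstrCost(unsigned Opcode) const { return 1; }
      virtual ~VectorTTISketch() {}
    };
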
2652c50f74 Implement a basic cost model for vector and scalar instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166642 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-24 23:47:38 +00:00
4332bdcb5f Make LegalizeKind public so that we can use it outside of TargetLowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166623 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-24 20:59:17 +00:00
b52ba9f8a8 Issue:
Stack is formed improperly for long structures passed as byval arguments for
EABI mode.

If we consult the AAPCS reference, we find the following statements:

A: "If the argument requires double-word alignment (8-byte), the NCRN (Next
Core Register Number) is rounded up to the next even register number." (5.5
Parameter Passing, Stage C, C.3).

B: "The alignment of an aggregate shall be the alignment of its most-aligned
component." (4.3 Composite Types, 4.3.1 Aggregates).

So if we have a structure with doubles (9 double fields) and 3 unused core
registers (r1, r2, r3): the caller should use registers r2 and r3 only.
Currently the r1, r2, r3 set is used, but that is invalid.

The callee's VA routine should also use only the r2 and r3 regs. All is ok here: this
behaviour is achieved by rounding up the SP address with ADD+BFC operations.

Fix:
The main fix is in ARMTargetLowering::HandleByVal. If we detect AAPCS mode and
8-byte alignment, we then leave the odd register unused.

P.S.:
I also improved the LDRB_POST_IMM regression test, since the ldrb instruction will
no longer be generated by the current regression test after this patch.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166018 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-16 07:16:47 +00:00
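
A sketch of the register-rounding rule the fix implements (assumed helper, not the actual ARMTargetLowering::HandleByVal code): when a byval argument requires 8-byte alignment under AAPCS, the next core register number is rounded up to an even register, so r1 is left unused and the argument starts in r2.

    // NCRN = Next Core Register Number (r0..r3 map to 0..3).
    unsigned roundUpNCRNForByVal(unsigned NCRN, unsigned ArgAlignInBytes) {
      if (ArgAlignInBytes == 8 && (NCRN % 2) != 0)
        ++NCRN;  // skip the odd register (e.g. r1); start in the next even one
      return NCRN;
    }
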
2c39b15073 Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165941 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-15 16:24:29 +00:00
fb384d61c7 Revert 165732 for further review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165747 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-11 21:27:41 +00:00
f3840d2c16 Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165726 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-11 17:21:41 +00:00
3e2d76c946 Use the attribute enums to query if a parameter has an attribute.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165550 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-09 21:38:14 +00:00
7d66146868 Add in the first step of the multiple pointer support. This adds in support to the data layout for specifying a per address space pointer size.
The next step is to update the optimizers to allow them to optimize the different address spaces with this information.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165505 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-09 16:06:12 +00:00
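
A mock of the shape this query takes (not the real DataLayout class; names are assumptions): pointer width becomes a per-address-space lookup, with address space 0 as the default.

    #include <map>

    class DataLayoutSketch {
      std::map<unsigned, unsigned> PointerBits{{0u, 64u}};  // addrspace -> bits
    public:
      void setPointerSizeInBits(unsigned AS, unsigned Bits) { PointerBits[AS] = Bits; }
      unsigned getPointerSizeInBits(unsigned AS = 0) const {
        auto It = PointerBits.find(AS);
        return It != PointerBits.end() ? It->second : PointerBits.at(0);
      }
    };
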
ad6aedc7d9 Refactor the AddrMode class out of TLI to its own header file.
This class is used by LSR and a number of places in the codegen.
This is the first step in de-coupling LSR from TLI, and creating
a new interface in between them.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165455 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-08 23:06:34 +00:00
3574eca1b0 Move TargetData to DataLayout.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165402 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-08 16:38:25 +00:00
8d662b59f0 This patch corrects commit 165126 by using an integer bit width instead of
a pointer to a type, in order to remove the uses of getGlobalContext().

Patch by Tyler Nowicki.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165255 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-04 21:33:40 +00:00
5d0061e025 Use attribute query methods.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165210 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-04 07:08:30 +00:00
1a37d7e807 TargetLowering interface to set/get minimum block entries for jump tables.
Provide an interface in TargetLowering to set or get the minimum number of basic
blocks at which jump tables are generated for switch statements rather than an
if sequence.

    getMinimumJumpTableEntries() defaults to 4.
    setMinimumJumpTableEntries() allows target configuration.

    This patch changes the default for the Hexagon architecture to 5
    as it improves performance on some benchmarks.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164628 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-25 20:35:36 +00:00
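
A minimal mock of the getter/setter pair described above, plus the kind of per-target adjustment mentioned for Hexagon (mock classes, not the real TargetLowering hierarchy):

    class TargetLoweringSketch {
      unsigned MinimumJumpTableEntries = 4;  // default, per the commit
    public:
      unsigned getMinimumJumpTableEntries() const { return MinimumJumpTableEntries; }
      void setMinimumJumpTableEntries(unsigned N) { MinimumJumpTableEntries = N; }
    };

    class HexagonLoweringSketch : public TargetLoweringSketch {
    public:
      HexagonLoweringSketch() { setMinimumJumpTableEntries(5); }  // benchmark-driven
    };
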
001d3dc976 Mark unimplemented copy constructors and copy assignment operators as LLVM_DELETED_FUNCTION.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164016 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-17 06:59:23 +00:00
d15e657690 Add in comments that explain what the indexing and the size of the arrays are about.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163904 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-14 15:36:50 +00:00
af40a5be77 The current implementation does not allow more than 32 types to be properly handled with target lowering. This doubles the size to 64 bits, which easily allows extension to more types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163806 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-13 15:24:43 +00:00
2e2efd9600 Generic Bypass Slow Div
- CodeGenPrepare pass for identifying div/rem ops
- Backend specifies the type mapping using addBypassSlowDivType
- Enabled only for Intel Atom with O2 32-bit -> 8-bit
- Replace IDIV with instructions which test its value and use DIVB if the value
is positive and less than 256.
- In the case when both the quotient and remainder of a divide are used, a DIV
and a REM instruction will be present in the IR. In the non-Atom case
they are both lowered to IDIVs and CSE removes the redundant IDIV instruction,
using the quotient and remainder from the first IDIV. However,
due to this optimization CSE is not able to eliminate redundant
IDIV instructions because they are located in different basic blocks.
This is overcome by calculating both the quotient (DIV) and remainder (REM)
in each basic block that is inserted by the optimization and reusing the result
values when a subsequent DIV or REM instruction uses the same operands.
- Test cases check for the presence of the optimization when calculating
either the quotient, remainder, or both.

Patch by Tyler Nowicki!



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163150 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-04 18:22:17 +00:00
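
A conceptual C++ rendering of the bypass described above (it mirrors the IR the pass inserts around a 32-bit divide; it is not the pass itself): when both operands fit in 8 bits, take the cheap byte divide, otherwise fall back to the full-width divide.

    // Conceptual only: the CodeGenPrepare-based pass emits equivalent IR.
    unsigned bypassSlowDiv(unsigned A, unsigned B) {
      if ((A | B) < 256u)                              // both operands fit in a byte
        return (unsigned char)A / (unsigned char)B;    // maps to DIVB on Atom
      return A / B;                                    // full 32-bit divide
    }
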
9f40cb32ac Not all targets have efficient ISel code generation for select instructions.
For example, the ARM target does not have efficient ISel handling for vector
selects with scalar conditions. This patch adds a TLI hook which allows the
different targets to report which selects are supported well and which selects
should be converted to control flow during CodeGenPrepare.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163093 91177308-0d34-0410-b5e6-96231b3b80d8
2012-09-02 12:10:19 +00:00
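
A mock of the hook's intent (the enum and method names here are assumptions based on the commit message, not the exact LLVM API): the target reports which select shapes it handles well, and CodeGenPrepare converts the rest into control flow.

    enum class SelectKindSketch { Scalar, ScalarCondVectorVal, VectorMask };

    struct ARMLikeLoweringSketch {
      bool isSelectSupported(SelectKindSketch K) const {
        // Per the commit: vector selects with a scalar condition are the
        // poorly supported case on ARM-like targets.
        return K != SelectKindSketch::ScalarCondVectorVal;
      }
    };
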
12864689d1 Allow legalization of target-specific SDNodes, provided that the target itself provides a legalization hook for them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161536 91177308-0d34-0410-b5e6-96231b3b80d8
2012-08-08 23:31:14 +00:00
d49edb7ab0 Fall back to selection DAG isel for calls to builtin functions.
Fast isel doesn't currently have support for translating builtin function
calls to target instructions.  For embedded environments where the library
functions are not available, this is a matter of correctness and not
just optimization.  Most of this patch is just arranging to make the
TargetLibraryInfo available in fast isel.  <rdar://problem/12008746>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161232 91177308-0d34-0410-b5e6-96231b3b80d8
2012-08-03 04:06:28 +00:00
c8e41c5917 Fix a typo (the the => the)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160621 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-23 08:51:15 +00:00
8c51e3995d Remove tabs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160473 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-19 00:01:33 +00:00
769951f6cc Target option DisableJumpTables is a gross hack. Move it to TargetLowering instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159611 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-02 22:39:56 +00:00
5afba6f00c Add a new intrinsic: llvm.fmuladd. This intrinsic represents a multiply-add
expression (a * b + c) that can be implemented as a fused multiply-add (fma)
if the target determines that this will be more efficient. This intrinsic
will be used to implement FP_CONTRACT support and an aggressive FMA formation
mode.

If your target has a fast FMA instruction you should override the
isFMAFasterThanMulAndAdd method in TargetLowering to return true.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158014 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-05 19:07:46 +00:00
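
For reference, the contract the new intrinsic expresses, written as a C++ sketch (illustrative semantics only; the intrinsic itself is IR-level): the backend may lower a * b + c either as a separate multiply and add, or as a single fused operation, whichever the target reports as faster via isFMAFasterThanMulAndAdd.

    #include <cmath>

    // Both lowerings are legal for llvm.fmuladd; the fused form is chosen when
    // the target says an fma is faster than mul + add.
    double fmuladdFused(double A, double B, double C)   { return std::fma(A, B, C); }
    double fmuladdUnfused(double A, double B, double C) { return A * B + C; }
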
fcb2c3cf5e Remove the "-promote-elements" flag. This flag is now enabled by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157925 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-04 11:27:21 +00:00
d2ea0e10cb Change interface for TargetLowering::LowerCallTo and TargetLowering::LowerCall
to pass around a struct instead of a large set of individual values.  This
cleans up the interface and allows more information to be added to the struct
for future targets without requiring changes to each and every target.

NV_CONTRIB

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157479 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-25 16:35:28 +00:00
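
A sketch of the refactor's shape (the struct and field names here are assumptions): the many individual LowerCallTo / LowerCall parameters are bundled into one struct, so new fields can be added later without touching every target's signature.

    struct CallLoweringInfoSketch {
      bool IsVarArg = false;
      bool IsTailCall = false;
      bool IsReturnValueUsed = true;
      unsigned NumFixedArgs = 0;
      // New information goes here instead of growing LowerCall's parameter list.
    };
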
2db0e9ebb6 Simplify code for calling a function where CanLowerReturn fails, fixing a small bug in the process.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157446 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-25 00:09:29 +00:00
aaf723dd2b Add a new target hook "predictableSelectIsExpensive".
This will be used to determine whether it's profitable to turn a select into a
branch when the branch is likely to be predicted.

Currently enabled for everything but Atom on X86 and Cortex-A9 devices on ARM.

I'm not entirely happy with the name of this flag, suggestions welcome ;)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156233 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-05 12:49:14 +00:00
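
A sketch of how the flag is presumably consulted (assumed logic, not the actual optimization code): a select is turned into a branch only when the target says predictable selects are expensive and the branch in question is likely to be well predicted.

    bool shouldTurnSelectIntoBranch(bool PredictableSelectIsExpensive,
                                    bool BranchIsLikelyPredictable) {
      return PredictableSelectIsExpensive && BranchIsLikelyPredictable;
    }
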
e3ee49fb27 Use SuperRegClassIterator for findRepresentativeClass().
The masks returned by SuperRegClassIterator are computed automatically
by TableGen. This is better than depending on the manually specified
SuperRegClasses.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156147 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-04 02:19:22 +00:00
bf010eb911 Fix a long-standing tail call optimization bug. When a libcall is emitted, the
legalizer always uses the DAG entry node as the input chain. This is wrong when the libcall is
emitted as a tail call since it effectively folds the return node. If
the return node's input chain is not the entry (i.e. a call, load, or store),
use that as the tail call's input chain.

PR12419
rdar://9770785
rdar://11195178


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154370 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-10 01:51:00 +00:00
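
A sketch of the chain choice described above (assumed helper shape built on LLVM's SDValue; not the actual legalizer code): for a tail-call libcall, the input chain should come from the return node's input (the preceding call, load, or store) rather than the DAG entry node.

    #include "llvm/CodeGen/SelectionDAGNodes.h"
    using namespace llvm;

    // Assumed helper: pick the chain a libcall should be threaded onto.
    static SDValue pickLibcallChain(bool IsTailCall, SDValue EntryChain,
                                    SDValue ReturnInputChain) {
      // A tail call effectively folds the return, so keep its input chain
      // instead of restarting from the entry node.
      return IsTailCall ? ReturnInputChain : EntryChain;
    }
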