Commit Graph

605 Commits

Author SHA1 Message Date
Sanjoy Das
a0a0b40aa3 Bugfix: SCEVExpander incorrectly marks increment operations as no-wrap
(The change was landed in r230280 and caused the regression PR22674.
This version contains a fix and a test-case for PR22674).
    
When emitting the increment operation, SCEVExpander marks the
operation as nuw or nsw based on the flags on the preincrement SCEV.
This is incorrect because, for instance, it is possible that {-6,+,1}
is <nuw> while {-6,+,1}+1 = {-5,+,1} is not.
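
As an illustrative sketch (hypothetical function, using i8 so the numbers
stay small; not the committed test case):

    define void @count_up() {
    entry:
      br label %loop

    loop:
      ; The preincrement recurrence {-6,+,1} (i.e. {250,+,1} viewed as
      ; unsigned i8) never wraps, but the increment hits 255 + 1 == 0 on
      ; the last iteration, so the emitted add must not be tagged nuw.
      %iv = phi i8 [ -6, %entry ], [ %iv.next, %loop ]
      %iv.next = add i8 %iv, 1
      %cont = icmp ne i8 %iv.next, 0
      br i1 %cont, label %loop, label %exit

    exit:
      ret void
    }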
    
This change teaches SCEV to mark the increment as nuw/nsw only if it
can explicitly prove that the increment operation won't overflow.
    
Apart from the attached test case, another (more realistic)
manifestation of the bug can be seen in
Transforms/IndVarSimplify/pr20680.ll.

Differential Revision: http://reviews.llvm.org/D7778



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230533 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-25 20:02:59 +00:00
Matthias Braun
dd1a6e074d AArch64: Relax assert about large shift sizes.
These large shift sizes happen because OpaqueConstants currently inhibit
a lot of DAG combining; that has to be addressed in another commit
(like the proposal in D6946).

Differential Revision: http://reviews.llvm.org/D6940

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230355 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 18:52:04 +00:00
Hans Wennborg
b499b73e30 Revert r230280: "Bugfix: SCEVExpander incorrectly marks increment operations as no-wrap"
This caused PR22674, failing this assert:

Instructions.h:2281: llvm::Value* llvm::PHINode::getOperand(unsigned int) const: Assertion `i_nocapture < OperandTraits<PHINode>::operands(this) && "getOperand() out of range!"' failed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230341 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 16:19:29 +00:00
Sanjoy Das
8d16a81c33 Bugfix: SCEVExpander incorrectly marks increment operations as no-wrap
When emitting the increment operation, SCEVExpander marks the
operation as nuw or nsw based on the flags on the preincrement SCEV.
This is incorrect because, for instance, it is possible that {-6,+,1}
is <nuw> while {-6,+,1}+1 = {-5,+,1} is not.

This change teaches SCEV to mark the increment as nuw/nsw only if it
can explicitly prove that the increment operation won't overflow.

Apart from the attached test case, another (more realistic) manifestation
of the bug can be seen in Transforms/IndVarSimplify/pr20680.ll.

NOTE: this change was landed with an incorrect commit message in
rL230275 and was reverted for that reason in rL230279.  This commit
message is the correct one.

Differential Revision: http://reviews.llvm.org/D7778



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230280 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 23:22:58 +00:00
Sanjoy Das
69048edf8a Revert 230275.
230275 got committed with an incorrect commit message due to a mixup
on my side.  Will re-land in a few moments with the correct commit
message.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230279 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 23:13:22 +00:00
Sanjoy Das
7ebbc8de2f Fix bug 22641
The bug was a result of getPreStartForExtend interpreting nsw/nuw
flags on an add recurrence more strongly than is legal.  {S,+,X}<nsw>
implies S+X is nsw only if the backedge of the loop is taken at least
once.
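
A minimal IR sketch of that corner case (hypothetical function; the
backedge is never taken):

    define i32 @single_iteration() {
    entry:
      br label %loop

    loop:
      ; {2147483647,+,1}<nsw> holds trivially because the backedge is never
      ; taken, yet S + X = 2147483647 + 1 still overflows, so nsw must not
      ; be inferred for the pre-start expression.
      %iv = phi i32 [ 2147483647, %entry ], [ %iv.next, %loop ]
      %iv.next = add i32 %iv, 1
      br i1 false, label %loop, label %exit

    exit:
      ret i32 %iv
    }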

Differential Revision: http://reviews.llvm.org/D7808



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230275 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 22:55:13 +00:00
Andrew Trick
4f7d60c1ea AArch64: Safely handle the incoming sret call argument.
This adds a safe interface to the machine independent InputArg struct
for accessing the index of the original (IR-level) argument. When a
non-native return type is lowered, we generate the hidden
machine-level sret argument on the fly. Before this fix, we were
representing this argument as OrigArgIndex == 0, which is an outright
lie. In particular this crashed in the AArch64 backend where we
actually try to access the type of the original argument.

Now we use a sentinel value for machine arguments that have no
original argument index. AArch64, ARM, Mips, and PPC now check for this
case before accessing the original argument.
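
As a rough IR-level illustration (hypothetical type and function, not from
the committed test), the kind of return that triggers the hidden sret
demotion:

    %big = type { [16 x i64] }

    ; Returning %big by value cannot be lowered into registers, so the
    ; backend synthesizes a hidden sret pointer argument on the fly; that
    ; machine argument has no corresponding IR-level argument index.
    define %big @make_big() {
      ret %big zeroinitializer
    }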

Fixes <rdar://19792160> Null pointer assertion in AArch64TargetLowering

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229413 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-16 18:10:47 +00:00
Chandler Carruth
72753f87f2 [SDAG] Teach the SelectionDAG to canonicalize vector shuffles of splats
directly into blends of the splats.

These patterns show up even very late in the vector shuffle lowering
where we don't have any chance for DAG combining to kick in, and
blending is a tremendously simpler operation to model. By coercing the
shuffle into a blend we can much more easily match and lower shuffles of
splats.
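
A small IR sketch (hypothetical function) of the kind of pattern this
canonicalization targets:

    define <4 x i32> @shuffle_of_splats(<4 x i32> %a, <4 x i32> %b) {
      %sa = shufflevector <4 x i32> %a, <4 x i32> undef, <4 x i32> zeroinitializer
      %sb = shufflevector <4 x i32> %b, <4 x i32> undef, <4 x i32> zeroinitializer
      ; Every lane of %sa and %sb holds the same value, so this shuffle is
      ; equivalent to a per-lane blend of the two splats.
      %r = shufflevector <4 x i32> %sa, <4 x i32> %sb, <4 x i32> <i32 0, i32 5, i32 2, i32 7>
      ret <4 x i32> %r
    }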

Immediately with this change there are significantly more blends being
matched in the x86 vector shuffle lowering.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229308 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-15 12:18:12 +00:00
James Molloy
c76bff0715 [SimplifyCFG] Be more aggressive
Up the phi node folding threshold from a cheap "1" to a meagre "2".

Update tests for extra added selects and slight code churn.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229099 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-13 10:48:30 +00:00
Ahmed Bougacha
9e9bde9b54 [CodeGen] Don't blindly combine (fp_round (fp_round x)) to (fp_round x).
We used to do this DAG combine, but it's not always correct:
If the first fp_round isn't a value preserving truncation, it might
introduce a tie in the second fp_round, that wouldn't occur in the
single-step fp_round we want to fold to.
In other words, double rounding isn't the same as rounding.
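
A minimal IR sketch (hypothetical function) of the two-step rounding that
must not be collapsed:

    define half @two_step_round(double %x) {
      ; Rounding f64 -> f32 first can create a tie that the second rounding
      ; resolves differently than a single f64 -> f16 rounding would, so the
      ; two fptruncs may only be folded if the first one is value-preserving.
      %t = fptrunc double %x to float
      %r = fptrunc float %t to half
      ret half %r
    }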

Differential Revision: http://reviews.llvm.org/D7571


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228911 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-12 06:15:29 +00:00
James Molloy
4de471dd0a [SimplifyCFG] Swap to using TargetTransformInfo for cost
analysis.

We're already using TTI in SimplifyCFG, so remove the hard-baked "cheapness"
heuristic and use TTI directly. Generally NFC intended, but we're using a slightly
different heuristic now so there is a slight test churn.

Test changes:
  * combine-comparisons-by-cse.ll: Removed unneeded branch check.
  * 2014-08-04-muls-it.ll: Test now doesn't branch but emits muleq.
  * coalesce-subregs.ll: Superfluous block check.
  * 2008-01-02-hoist-fp-add.ll: fadd is safe to speculate. Change to udiv.
  * PhiBlockMerge.ll: Superfluous CFG checking code. Main checks still present.
  * select-gep.ll: A variable GEP is not expensive, just TCC_Basic, according to the TTI.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228826 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-11 12:15:41 +00:00
Tim Northover
848d812931 ARM & AArch64: teach LowerVSETCC that output type size may differ from input.
While various DAG combines try to guarantee that a vector SETCC
operation will have the same output size as input, there's nothing
intrinsic to either creation or LegalizeTypes that actually guarantees
it, so the function needs to be ready to handle a mismatch.

Fortunately this is easy enough, just extend or truncate the naturally
compared result.
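
A rough IR sketch (hypothetical function) of a compare whose result is
consumed at a different width than its operands, which can reach the
lowering with mismatched sizes:

    define <4 x i16> @narrow_setcc_result(<4 x i32> %a, <4 x i32> %b) {
      ; The naturally compared result is v4i32-sized, but the value is
      ; wanted as v4i16, so the lowering truncates (or extends) it.
      %cmp = icmp eq <4 x i32> %a, %b
      %res = sext <4 x i1> %cmp to <4 x i16>
      ret <4 x i16> %res
    }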

I couldn't reproduce the failure in other backends that I know have
SIMD, so it's probably only an issue for these two due to shared
heritage.

Should fix PR21645.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228518 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 00:50:47 +00:00
Matthias Braun
3fd0775f06 AArch64: Make test more robust.
Avoid the creation of select instructions which can result in different
scheduling of the selects.

I also added a bunch of additional volatile stores. Those avoid a
CodeGen problem (bug?) where normalizing and denormalizing the control
flow moves all shift instructions into the first block, where ISel can't
match them together with the cmps.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228362 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 23:52:14 +00:00
Matthias Braun
a602c10686 MachineCSE: Clear dead-def flag on CSE.
In case CSE reuses a previously unused register, the dead-def flag has
to be cleared on the def operand, as exposed by the arm64-cse.ll test.

This fixes PR22439 and the corresponding rdar://19694987

Differential Revision: http://reviews.llvm.org/D7395

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228178 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 19:35:16 +00:00
Renato Golin
0966a4e370 Adding support to LLVM for targeting Cortex-A72
Currently, Cortex-A72 is modelled as a Cortex-A57, except that the FP
load balancing pass isn't enabled for Cortex-A72, as it's not
profitable for this core.

Patch by Ranjeet Singh.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228140 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 13:31:29 +00:00
Ahmed Bougacha
0f1a21bcb8 [AArch64] Prefer DUP/MOV ("CPY") to INS for vector_extract.
This avoids a partial false dependency on the previous content of
the upper lanes of the destination vector register.
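
For reference, a tiny IR sketch (hypothetical function) of the
vector_extract this affects:

    define float @extract_lane_2(<4 x float> %v) {
      ; Extracting a lane into an FP register with DUP/MOV defines the whole
      ; destination register, whereas INS only writes one lane and so keeps
      ; a partial false dependency on the previous upper-lane contents.
      %e = extractelement <4 x float> %v, i32 2
      ret float %e
    }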

Differential Revision: http://reviews.llvm.org/D7307


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227820 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-02 17:55:57 +00:00
Ahmed Bougacha
cfb61b368c [AArch64] Add a few more DUP testcases. NFC.
Also, don't lie about testing index 0.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227642 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-30 23:41:15 +00:00
Ahmed Bougacha
f2045fb1f2 [AArch64] Robustize neon-scalar-copy.ll tests. NFC.
Some of those didn't even have run lines: they were removed
inadvertently during the Great Merge of 2014.

They used to check for DUPs, but now we go through W-regs?
Filed PR22418 for that potential regression.

For now, just make the tests explicit, so we know where we stand.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227635 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-30 23:13:57 +00:00
Hao Liu
7d3a44a692 [AArch64] Fix PR21675, a bug in lowering llvm.ctpop.i32. We should not use "DAG.getUNDEF(MVT::v8i8)" to get an all-zero vector.
Patch by Wei-cheng Wang.
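
A minimal IR sketch of the affected intrinsic (hypothetical function, not
the committed test):

    declare i32 @llvm.ctpop.i32(i32)

    define i32 @popcount(i32 %x) {
      ; The i32 is widened into a v8i8 CNT plus a horizontal add; the
      ; widened lanes must be zero rather than undef, or the sum can
      ; include garbage byte counts.
      %c = call i32 @llvm.ctpop.i32(i32 %x)
      ret i32 %c
    }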


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227550 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-30 02:13:53 +00:00
Quentin Colombet
af1cd03764 [AArch64][LoadStoreOptimizer] Form LDPSW when possible.
This patch adds the missing LD[U]RSW variants to the load store optimizer, so
that we generate LDPSW when possible.

<rdar://problem/19583480>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226978 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-24 01:25:54 +00:00
Tim Northover
f5f8a3e6a6 DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N))
It can help with argument juggling on some targets, and is generally a good
idea.
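
An IR sketch of the fold (hypothetical function and masks):

    define i32 @merge_masks(i32 %x) {
      ; (or (and X, M), (and X, N)) --> (and X, (or M, N)):
      ; (%x & 0xFF00) | (%x & 0x00FF) becomes %x & 0xFFFF.
      %hi = and i32 %x, 65280
      %lo = and i32 %x, 255
      %r = or i32 %hi, %lo
      ret i32 %r
    }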

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226740 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 23:17:19 +00:00
Tim Northover
c49e57ade1 Revert "DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N))"
It hadn't gone through review yet, but was still on my local copy.

This reverts commit r226663

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226665 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 15:48:52 +00:00
Tim Northover
004d725549 AArch64: add backend option to reserve x18 (platform register)
AAPCS64 says that it's up to the platform to specify whether x18 is
reserved, and a first step on that way is to add a flag controlling
it.

From: Andrew Turner <andrew@fubar.geek.nz>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226664 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 15:43:31 +00:00
Tim Northover
47f47f5d2a DAGCombine: fold (or (and X, M), (and X, N)) -> (and X, (or M, N))
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226663 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-21 15:43:28 +00:00
Greg Fitzgerald
4b96323e81 [AArch64] Implement GHC calling convention
Original patch by Luke Iannini.  Minor improvements and test added by
Erik de Castro Lopo.

Differential Revision: http://reviews.llvm.org/D6877

From: Erik de Castro Lopo <erikd@mega-nerd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226473 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-19 17:40:05 +00:00
Duncan P. N. Exon Smith
37ac8d3622 IR: Move MDLocation into place
This commit moves `MDLocation`, finishing off PR21433.  There's an
accompanying clang commit for frontend testcases.  I'll attach the
testcase upgrade script I used to PR21433 to help out-of-tree
frontends/backends.

This changes the schema for `DebugLoc` and `DILocation` from:

    !{i32 3, i32 7, !7, !8}

to:

    !MDLocation(line: 3, column: 7, scope: !7, inlinedAt: !8)

Note that empty fields (line/column: 0 and inlinedAt: null) don't get
printed by the assembly writer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226048 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 22:27:36 +00:00
Sanjoy Das
7f0da20b97 Fix PR22179.
We were incorrectly inferring nsw for certain SCEVs. We can be more
aggressive here (see Richard Smith's comment on
http://llvm.org/bugs/show_bug.cgi?id=22179) but this change just
focuses on correctness.

Differential Revision: http://reviews.llvm.org/D6914



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225591 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-10 23:41:24 +00:00
Karthik Bhat
f2b3638c3d Revert r225165 and r225169
Even though gcc produces similar instructions, as Owen pointed out the
two patterns aren't equivalent in the case where the original
subtraction could have caused an overflow. Reverting the same.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225341 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-07 06:34:34 +00:00
Ahmed Bougacha
3c9fb6e1ad [AArch64] Improve codegen of store lane instructions by avoiding GPR usage.
We used to generate code similar to:

  umov.b        w8, v0[2]
  strb  w8, [x0, x1]

because the STR*ro* patterns were preferred to ST1*.
Instead, we can avoid going through GPRs, and generate:

  add   x8, x0, x1
  st1.b { v0 }[2], [x8]

This patch increases the ST1* AddedComplexity to achieve that.

rdar://16372710
Differential Revision: http://reviews.llvm.org/D6202


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225183 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 17:10:26 +00:00
Ahmed Bougacha
c52cd839b9 [AArch64] Improve codegen of store lane 0 instructions by directly storing the subregister.
For 0-lane stores, we used to generate code similar to:

  fmov w8, s0
  str w8, [x0, x1, lsl #2]

instead of:

  str s0, [x0, x1, lsl #2]

To correct that: for store lane 0 patterns, directly match to STR <subreg>0.

Byte-sized instructions don't have the special case for a 0 index,
because FPR8s are defined to have untyped content.

rdar://16372710
Differential Revision: http://reviews.llvm.org/D6772


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225181 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 17:02:28 +00:00
Karthik Bhat
050064d32c Select lower fsub,fabs pattern to fabd on AArch64
This patch lowers patterns such as:
  fsub   v0.4s, v0.4s, v1.4s
  fabs   v0.4s, v0.4s
to
  fabd  v0.4s, v0.4s, v1.4s
on AArch64.
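
At the IR level the matched pattern is roughly (hypothetical function):

    declare <4 x float> @llvm.fabs.v4f32(<4 x float>)

    define <4 x float> @fabd_pattern(<4 x float> %a, <4 x float> %b) {
      ; |a - b|: an fsub feeding an fabs, now selectable as a single FABD.
      %sub = fsub <4 x float> %a, %b
      %abs = call <4 x float> @llvm.fabs.v4f32(<4 x float> %sub)
      ret <4 x float> %abs
    }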

Review: http://reviews.llvm.org/D6791



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225169 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 13:57:59 +00:00
Karthik Bhat
e239724d12 Select lower sub,abs pattern to sabd on AArch64
This patch lowers patterns such as:
  sub	v0.4s, v0.4s, v1.4s
  abs	v0.4s, v0.4s
to
  sabd	v0.4s, v0.4s, v1.4s
on AArch64.

Review: http://reviews.llvm.org/D6781



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225165 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-05 13:11:07 +00:00
Karthik Bhat
0c2590a266 Lower multiply-negate operation to mneg on AArch64
This patch pattern-matches code such as:
neg	 w8, w8
mul	 w8, w9, w8
to
mneg	 w8, w8, w9
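
At the IR level this corresponds to something like (hypothetical function):

    define i32 @mul_by_neg(i32 %a, i32 %b) {
      ; A multiply of a negated operand, i.e. b * (-a) == -(a * b),
      ; selectable as a single MNEG.
      %neg = sub i32 0, %a
      %mul = mul i32 %b, %neg
      ret i32 %mul
    }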

Review: http://reviews.llvm.org/D6754



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224706 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-22 13:38:58 +00:00
Duncan P. N. Exon Smith
1ef70ff39b IR: Make metadata typeless in assembly
Now that `Metadata` is typeless, reflect that in the assembly.  These
are the matching assembly changes for the metadata/value split in
r223802.

  - Only use the `metadata` type when referencing metadata from a call
    intrinsic -- i.e., only when it's used as a `Value`.

  - Stop pretending that `ValueAsMetadata` is wrapped in an `MDNode`
    when referencing it from call intrinsics.

So, assembly like this:

    define @foo(i32 %v) {
      call void @llvm.foo(metadata !{i32 %v}, metadata !0)
      call void @llvm.foo(metadata !{i32 7}, metadata !0)
      call void @llvm.foo(metadata !1, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{metadata !3}, metadata !0)
      ret void, !bar !2
    }
    !0 = metadata !{metadata !2}
    !1 = metadata !{i32* @global}
    !2 = metadata !{metadata !3}
    !3 = metadata !{}

turns into this:

    define @foo(i32 %v) {
      call void @llvm.foo(metadata i32 %v, metadata !0)
      call void @llvm.foo(metadata i32 7, metadata !0)
      call void @llvm.foo(metadata i32* @global, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{!3}, metadata !0)
      ret void, !bar !2
    }
    !0 = !{!2}
    !1 = !{i32* @global}
    !2 = !{!3}
    !3 = !{}

I wrote an upgrade script that handled almost all of the tests in llvm
and many of the tests in cfe (even handling many `CHECK` lines).  I've
attached it (or will attach it in a moment if you're speedy) to PR21532
to help everyone update their out-of-tree testcases.

This is part of PR21532.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224257 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 19:07:53 +00:00
Juergen Ributzka
446f01b1d5 [AArch64] MachO large code-model: Materialize FP constants in code.
In the large code model we have to first get the address of the GOT entry, load
the address of the constant, and then load the constant itself.

To avoid these loads and the GOT entry altogether, this commit changes the way
FP constants are materialized in the large code model. The constants are now
materialized in a GPR and then bitconverted/moved into the FPR.

Reviewed by Tim Northover

Fixes rdar://problem/16572564.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223941 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-10 19:43:32 +00:00
Juergen Ributzka
772744826c [FastISel][AArch64] Fix a missing nullptr check in 'computeAddress'.
The load/store value type is currently not available when lowering the memcpy
intrinsic. Add the missing nullptr check to support this in 'computeAddress'.

Fixes rdar://problem/19178947.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223818 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 19:44:38 +00:00
Juergen Ributzka
b49ee78320 [CodeGenPrepare] Split branch conditions into multiple conditional branches.
This optimization transforms code like:
bb1:
  %0 = icmp ne i32 %a, 0
  %1 = icmp ne i32 %b, 0
  %or.cond = or i1 %0, %1
  br i1 %or.cond, label %TrueBB, label %FalseBB

into multiple branch instructions like:

bb1:
  %0 = icmp ne i32 %a, 0
  br i1 %0, label %TrueBB, label %bb2
bb2:
  %1 = icmp ne i32 %b, 0
  br i1 %1, label %TrueBB, label %FalseBB

This optimization is already performed by SelectionDAG, but not by FastISel.
FastISel cannot perform this optimization, because it cannot generate new
MachineBasicBlocks.

Performing this optimization at CodeGenPrepare time makes it available to both
SelectionDAG and FastISel, and the implementation in SelectionDAG could be
removed. There are currently a few differences in codegen for X86 and PPC, so
this commit only enables it for FastISel.

Reviewed by Jim Grosbach

This fixes rdar://problem/19034919.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223786 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 16:36:13 +00:00
Tim Northover
811474b929 AArch64: treat HFAs containing "half" types as blocks too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223669 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-08 17:54:58 +00:00
Weiming Zhao
1b77bb5628 [AArch64] Combining Load and IntToFp should check for neon availability
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223382 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 20:25:50 +00:00
Tim Northover
34c1d6673e AArch64: fix wrong-endian parameter passing.
The blocked arguments code didn't take account of the hacks needed to support
it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223247 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 17:49:26 +00:00
Tim Northover
09d2a61cd4 AArch64: strengthen Darwin ABI alignment assumptions
A global variable without an explicit alignment specified should be assumed to
be ABI-aligned according to its type, like on other platforms. This allows us
to use better memory operations when accessing it.

rdar://18533701

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223180 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 23:53:43 +00:00
Tim Northover
1dad6937c5 AArch64: don't be too greedy when folding :lo12: accesses into mem ops.
This frequently leads to cases like:
   ldr xD, [xN, :lo12:var]
   add xA, xN, :lo12:var
   ldr xD, [xA, #8]

where the ADD would have been needed anyway, and the two distinct addressing
modes can prevent the formation of an ldp. Because of how we handle ADRP
(aggressively forming an ADRP/ADD pseudo-inst at ISel time), this pattern also
results in duplicated ADRP instructions (one on its own to cover the ldr, and
one combined with the add).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223172 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 23:13:39 +00:00
Lang Hames
26cc5bea39 [AArch64][Stackmaps] Optimize stackmap shadows on AArch64.
Reduce the number of nops emitted for stackmap shadows on AArch64 by counting
non-stackmap instructions up to the next branch target towards the requested
shadow.

<rdar://problem/14959522>



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223156 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 21:36:24 +00:00
Ahmed Bougacha
88d2b5812a [MachineCSE] Clear kill-flag on registers imp-def'd by the CSE'd instruction.
Go through implicit defs of CSMI and MI, and clear the kill flags on
their uses in all the instructions between CSMI and MI.
We might have made some of the kill flags redundant, consider:
  subs  ... %NZCV<imp-def>        <- CSMI
  csinc ... %NZCV<imp-use,kill>   <- this kill flag isn't valid anymore
  subs  ... %NZCV<imp-def>        <- MI, to be eliminated
  csinc ... %NZCV<imp-use,kill>
Since we eliminated MI, and reused a register imp-def'd by CSMI
(here %NZCV), that register, if it was killed before MI, should have
that kill flag removed, because its lifetime was extended.

Also, add an exhaustive testcase for the motivating example.

Reviewed by: Juergen Ributzka <juergen@apple.com>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223133 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 18:09:51 +00:00
Tim Northover
b588e02c07 AArch64: make register block rules apply to vector types too.
The blocking code originated in ARM, which is more aggressive about casting
types to a canonical representative before doing anything else, so I missed out
most vector HFAs and broke the ABI. This should fix it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223126 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 17:15:22 +00:00
Ahmed Bougacha
14fe2e6948 [AArch64] Don't combine "select (setcc i1 LHS, RHS), vL, vR".
r208210 introduced an optimization that improves the vector select
codegen by doing the setcc on vectors directly.
This is a problem when the setcc operands are i1s, because the
optimization would create vectors of i1, which aren't legal.
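
A small IR sketch (hypothetical function) of the case that must be left
alone:

    define <4 x i32> @select_on_i1_compare(i1 %a, i1 %b, <4 x i32> %t, <4 x i32> %f) {
      ; The compare operands are i1, so rewriting this as a vector setcc
      ; would require <4 x i1> vectors, which aren't legal here.
      %cond = icmp eq i1 %a, %b
      %sel = select i1 %cond, <4 x i32> %t, <4 x i32> %f
      ret <4 x i32> %sel
    }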

Part of PR21549.

Differential Revision: http://reviews.llvm.org/D6308


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223075 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-01 20:59:00 +00:00
Ahmed Bougacha
217a4a87ce [AArch64] Fix v2i8->i16 bitcast legalization.
r213378 improved f16 bitcasts, so that they go directly through subregs,
instead of through the stack.  That code now causes an assertion failure
for bitcasts from other 16-bit types (most importantly v2i8).

Correct that by doing the custom lowering for i16 bitcasts only when the
input is an f16.
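
A minimal IR sketch (hypothetical function) of a non-f16 16-bit bitcast
that previously hit the assertion:

    define i16 @bitcast_v2i8(<2 x i8> %v) {
      ; Only f16 -> i16 bitcasts take the subregister path; other 16-bit
      ; sources such as <2 x i8> go through the generic lowering.
      %r = bitcast <2 x i8> %v to i16
      ret i16 %r
    }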

Part of PR21549.

Differential Revision: http://reviews.llvm.org/D6307


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223074 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-01 20:52:32 +00:00
Ahmed Bougacha
eb1f2e8eb0 [MachineVerifier] Accept a MBB with a single landing pad successor.
The MachineVerifier used to check that there was always exactly one
unconditional branch to a non-landingpad (normal) successor.
If that normal successor to an invoke BB is unreachable, it seems
reasonable to only have one successor, the landing pad.
On targets other than AArch64 (and on AArch64 with a different testcase),
the branch folder turns the branch to the landing pad into a fallthrough.
The MachineVerifier, which relies on AnalyzeBranch, is unable to check
the condition, and doesn't complain. However, it does in this specific
testcase, where the branch to the landing pad remained.
Make the MachineVerifier accept it.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223059 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-01 18:43:53 +00:00
Tim Northover
d7a4f74f15 AArch64: treat [N x Ty] as a block during procedure calls.
The AAPCS treats small structs and homogeneous floating (or vector) aggregates
specially, and guarantees they either get passed as a contiguous block of
registers, or prevent any future use of those registers and get passed on the
stack.

This concept can fit quite neatly into LLVM's own type system, mapping an HFA
to [N x float] and so on, and small structs to [N x i64]. Doing so allows
front-ends to emit AAPCS compliant code without having to duplicate the
register counting logic.
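
For instance, a front-end could emit something like (hypothetical
signature):

    ; An HFA of four floats and a small struct, each guaranteed to be passed
    ; either as a contiguous block of registers or entirely on the stack.
    define void @takes_hfa([4 x float] %hfa, [2 x i64] %small) {
      ret void
    }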

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222903 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-27 21:02:42 +00:00
Will Newton
87a2f3751c Update AArch64 ELF relocations to ABI 1.0
This mostly entails adding relocations; however, there are a couple of
changes to existing relocations:

1. R_AARCH64_NONE is defined to be zero rather than 256

R_AARCH64_NONE has been defined to be zero for a long time elsewhere
(e.g. in binutils and glibc) since the submission of the AArch64 port in
2012, so this is required for compatibility.

2. R_AARCH64_TLSDESC_ADR_PAGE renamed to R_AARCH64_TLSDESC_ADR_PAGE21

I don't think there is any way for relocation names to leak out of LLVM
so this should not break anything.

Tested with check-all with no regressions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222821 91177308-0d34-0410-b5e6-96231b3b80d8
2014-11-26 10:49:18 +00:00