Commit Graph

675 Commits

Ahmed Bougacha
ae618e7873 [AArch64] Also combine vector selects fed by non-i1 SETCCs.
After legalization, scalar SETCC has an i32 result type on AArch64.
The i1 requirement seems too conservative, replace it with an assert.

This also means that we can now run after legalization. That should also
be fine, since the ops legalizer runs again after each combine, and
all types created have the same sizes as the (legal) inputs.

Exposed by r235917; while there, make its tests more robust (bsl also uses
the register it defines).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235922 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-27 21:43:12 +00:00
Ahmed Bougacha
bc92b2ca37 [AArch64] Don't assert when combining (v3f32 select (setcc f64)).
When the setcc has f64 operands, we can't build a vector setcc mask
to feed a vselect, because f64 doesn't divide v3f32 evenly.
Just bail out when that happens.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235917 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-27 21:01:20 +00:00
Yaron Keren
d5df7d3c7b Teach AArch64/lit.local.cfg the new triple names windows-gnu and windows-msvc.
Tests were failing when built with -DLLVM_DEFAULT_TARGET_TRIPLE=i686-pc-windows-gnu.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235733 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-24 17:14:16 +00:00
Pirama Arumuga Nainar
dab5145cb3 [AArch64] Add nvcast patterns for v4f16 and v8f16
Summary:
Constant stores of f16 vectors can create NvCast nodes from various
operand types to v4f16 or v8f16 depending on patterns in the stored
constants.  This patch adds nvcast rules with v4f16 and v8f16 values.

AArch64ISelLowering::LowerBUILD_VECTOR has the details on which constant
patterns generate the nvcast nodes.

Reviewers: jmolloy, srhines, ab

Subscribers: rengolin, aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D9201

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235610 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-23 17:32:25 +00:00
Pirama Arumuga Nainar
b7db5f28c5 [AArch64] Handle vec4, vec8, vec16 *itofp for half
Summary:
Set operation action for SINT_TO_FP and UINT_TO_FP nodes with v4i32,
v8i8, v8i16 inputs to allow promotion of v4f16 results.

Add tests for sitofp and uitofp for vec4, vec8, and vec16 vectors with i8,
i16, i32, and i64 elements.  The only missing tests are for v16i8 and v16i16,
as the shift operations are too complicated to write a proper check sequence
for.

The conversions from v4i64 to v4f16 do not depend on this patch - v4i64
is split and the conversion gets handled while lowering v2i64.  I am
adding a test here for completeness.

Reviewers: aemerson, rengolin, ab, jmolloy, srhines

Subscribers: rengolin, aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D9166

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235609 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-23 17:16:27 +00:00
James Molloy
89cc8dd3b8 [AArch64] Disable complex GEP optimization by default.
Enough concerns were raised that this optimization is pessimising some code patterns.

The obvious fix, to add a Reassociate run afterwards, causes even more pessimisation in some cases due to fewer complex addressing modes being matched. As there isn't a trivial fix for this, backing this out by default until someone gets a chance to fix the addressing mode matcher.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235491 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-22 09:11:38 +00:00
Ahmed Bougacha
35e8055393 [GlobalMerge] Look at uses to create smaller global sets.
Instead of merging everything together, look at the users of
GlobalVariables, and try to group them by function, to create
sets of globals used "together".

Using that information, a less-aggressive alternative is to keep merging
everything together *except* globals that are only ever used alone, that
is, those for which it's clearly non-profitable to merge with others.

In my testing, grouping by Function is too aggressive, but grouping by
BasicBlock is too conservative.  Anything in-between isn't trivially
available, so stick with Function grouping for now.

cl::opts are added for testing; both enabled by default.
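
To illustrate the use-based grouping (a hypothetical C sketch, not part of the
patch; all names are made up):

  static int counter_a;
  static int counter_b;
  static int lone_flag;

  /* counter_a and counter_b are always used together, so one shared base
     address can serve both once they are merged into a set. */
  void bump(void) {
    counter_a++;
    counter_b++;
  }

  /* lone_flag is only ever used alone; merging it with the others would
     only lengthen its addressing sequence. */
  int flagged(void) {
    return lone_flag;
  }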

A few of the testcases aren't testing the merging proper, but just
various edge cases when merging does occur.  Update them to use the
previous grouping behavior. Also, one of the tests is unrelated to
GlobalMerge; change it accordingly.
While there, switch to r234666's flags rather than the brutal -O3.

Differential Revision: http://reviews.llvm.org/D8070


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235249 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-18 01:21:58 +00:00
Ahmed Bougacha
6b96a388ed [AArch64] Don't force MVT::Untyped when selecting LD1LANEpost.
The result is either an Untyped reg sequence, on ldN with N > 1, or
just the type of the input vector, on ld1.  Don't force Untyped.
Instead, just use the type of the reg sequence.

This mirrors the behavior of createTuple, which feeds the LD1*_POST.

The narrow code path wasn't actually covered by tests, because V64
insert_vector_elt is usually widened to V128 before the LD1LANEpost
combine has a chance to run.

The only case where it does run on V64 vectors is if the vector ops
legalizer ran.  So, tickle the code with a ctpop.

Fixes PR23265.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235243 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 23:43:33 +00:00
Ahmed Bougacha
ab017cbc27 Fix another typo in r235224 testcase. NFC.
Third time's the charm!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235242 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 23:38:46 +00:00
Pete Cooper
12a57fe1d3 AArch64: Add test for returning [2 x i64] in registers. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235228 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 21:31:25 +00:00
Ahmed Bougacha
b24498671a Fix typo in r235224 testcase. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235226 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 21:11:58 +00:00
Ahmed Bougacha
7ce2cb4b62 [AArch64] Avoid vector->load dependency cycles when creating LD1*post.
They would break the SelectionDAG.
Note that the opposite load->vector dependency is already obvious in:
  (LD1*post vec, ..)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235224 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 21:02:30 +00:00
James Molloy
10331c7400 Fix TRUNCATE splitting helper logic.
This is a followon to r233681 - I'd misunderstood the semantics of FTRUNC,
and had confused it with (FP_ROUND ..., 0).

Thanks to Ahmed Bougacha for his post-commit review!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235191 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 13:51:40 +00:00
Ahmed Bougacha
7e3c3ae7c1 [AArch64] Don't assert on f16 in DUP PerfectShuffle generator.
Found by code inspection. Breaking i16 at least breaks other tests, but
they aren't checking this in particular, so also add some explicit tests
for the already-working types.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235148 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 23:57:07 +00:00
David Blaikie
32b845d223 [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction
See r230786 and r230794 for similar changes to gep and load
respectively.

Call is a bit different because it often doesn't have a single explicit
type - usually the type is deduced from the arguments, and just the
return type is explicit. In those cases there's no need to change the
IR.

When that's not the case, the IR usually contains the pointer type of
the first operand - but since typed pointers are going away, that
representation is insufficient so I'm just stripping the "pointerness"
of the explicit type away.

This does make the IR a bit weird - it /sort of/ reads like the type of
the first operand: "call void () %x(" but %x is actually of type "void
()*" and will eventually be just of type "ptr". But this seems not too
bad and I don't think it would benefit from repeating the type
("void (), void () * %x(" and then eventually "void (), ptr %x(") as has
been done with gep and load.

This also has a side benefit: since the explicit type is no longer a
pointer, there's no ambiguity between an explicit type and a function
that returns a function pointer. Previously this case needed an explicit
type (eg: a function returning a void() function was written as
"call void () () * @x(" rather than "call void () * @x(" because of the
ambiguity between a function returning a pointer to a void() function
and a function returning void).

No ambiguity means even function pointer return types can just be
written alone, without writing the whole function's type.

This leaves /only/ the varargs case where the explicit type is required.

Given the special type syntax in call instructions, the regex-fu used
for migration was a bit more involved in its own unique way (as every
one of these is) so here it is. Use it in conjunction with the apply.sh
script and associated find/xargs commands I've provided in r230786 to
migrate your out of tree tests. Do let me know if any of this doesn't
cover your cases & we can iterate on a more general script/regexes to
help others with out of tree tests.

About 9 test cases couldn't be automatically migrated - half of those
were functions returning function pointers, where I just had to manually
delete the function argument types now that we didn't need an explicit
function type there. The other half were typedefs of function types used
in calls - just had to manually drop the * from those.

import sys
import re

# Group 1: everything from "call" through the explicit type; group 2: the rest
# of the line, starting at the callee (a FileCheck variable, an @global/%local,
# or undef/inttoptr/bitcast/null/asm).
pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
# Explicit types ending in addrspace(N)* are left untouched.
addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
# Only rewrite explicit types that end in a function-pointer "*".
func_end = re.compile(r"(?:void.*|\)\s*)\*$")

def conv(match, line):
  # Leave the line alone unless the explicit call type is a function pointer.
  if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
    return line
  # Drop the trailing "*" (and any whitespace before it) from the explicit type.
  return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]

for line in sys.stdin:
  sys.stdout.write(conv(re.search(pat, line), line))

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235145 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 23:24:18 +00:00
Pete Cooper
8dd904ce60 Disable AArch64 fast-isel on big-endian call vector returns.
A big-endian vector return needs a byte-swap which we aren't doing right now.

For now just bail on these cases to get correctness back.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235133 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 21:19:36 +00:00
Simon Pilgrim
c7bcb37fd3 TRUNCATE constant folding - minor fix for rL233224
Fix for a test case found by James Molloy - TRUNCATE of constant build vectors can be achieved more simply by replacing it with a new build vector node with the truncated value type - no need to touch the scalar operands at all.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235079 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 08:21:09 +00:00
Ahmed Bougacha
363a2799ff [CodeGen] Re-apply r234809 (concat of scalars), with an x86_mmx fix.
The only type that isn't an integer, isn't floating point, and isn't
a vector; ladies and gentlemen, the gift that keeps on giving: x86_mmx!

Fixes PR23246.

Original message (reverted in r235062):
[CodeGen] Combine concat_vectors of scalars into build_vector.

Combine something like:
  (v8i8 concat_vectors (v2i8 bitcast (i16)) x4)
into:
  (v8i8 (bitcast (v4i16 BUILD_VECTOR (i16) x4)))

If any of the scalars are floating point, use that throughout.

Differential Revision: http://reviews.llvm.org/D8948


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235072 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 02:39:14 +00:00
Nick Lewycky
782028b4bc Revert r234809 because it caused PR23246.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235062 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 00:56:20 +00:00
Daniel Jasper
058309ba87 Re-apply r234898 and fix tests.
This commit makes LLVM not estimate branch probabilities when doing
single-bit bitmask tests.

The code that originally made me discover this is:

  if ((a & 0x1) == 0x1) {
    ..
  }

In this case we don't actually have any branch probability information
and should not assume to have any. LLVM transforms this into:

  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0

So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.

CodeGen/ARM/2013-10-11-select-stalls.ll started failing because the
changed probabilities changed the results of
ARMBaseInstrInfo::isProfitableToIfCvt() and led to an Ifcvt of the
diamond in the test. AFAICT, the test was never meant to test this and
thus changing the test input slightly to not change the probabilities
seems like the best way to preserve the meaning of the test.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234979 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 06:24:07 +00:00
Rafael Espindola
091be7b530 Revert "The code that originally made me discover this is:"
This reverts commit r234898.
CodeGen/ARM/2013-10-11-select-stalls.ll was failing.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234903 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-14 15:56:33 +00:00
Daniel Jasper
7025d248eb The code that originally made me discover this is:
  if ((a & 0x1) == 0x1) {
    ..
  }

In this case we don't actually have any branch probability information and
should not assume to have any. LLVM transforms this into:

  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0

So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234898 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-14 15:20:37 +00:00
Ahmed Bougacha
164cbefb85 [CodeGen] Combine concat_vectors of scalars into build_vector.
Combine something like:
  (v8i8 concat_vectors (v2i8 bitcast (i16)) x4)
into:
  (v8i8 (bitcast (v4i16 BUILD_VECTOR (i16) x4)))

If any of the scalars are floating point, use that throughout.

Differential Revision: http://reviews.llvm.org/D8948


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234809 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-13 22:57:21 +00:00
Krzysztof Parzyszek
2c85db4642 Settle on a specific triple for the aarch64 testcase
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234801 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-13 21:55:21 +00:00
Krzysztof Parzyszek
83ed245532 Also add mtriple to the aarch64 testcase
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234797 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-13 20:49:08 +00:00
Krzysztof Parzyszek
fcc330abfe Allow memory intrinsics to be tail calls
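
A hypothetical C sketch of the case this enables (not from the patch): when a
memcpy lowered to the libcall is the last action in a function and its result
is unused, it can now be emitted as a tail call.

  #include <string.h>

  /* The copy is in tail position and its return value is ignored, so the
     lowered memcpy libcall can become a tail call (a plain branch). */
  void copy_out(void *dst, const void *src, size_t n) {
    memcpy(dst, src, n);
  }
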
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234764 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-13 17:16:45 +00:00
Ahmed Bougacha
d2069333ee [CodeGen] Split -enable-global-merge into ARM and AArch64 options.
Currently, there's a single flag, checked by the pass itself.
It can't force-enable the pass (and is on by default), because it
might not even have been created, as that's the target's decision.
Instead, have separate explicit flags, so that the decision is
consistently made in the target.

Keep the flag as a last-resort "force-disable GlobalMerge" for now,
for backwards compatibility.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234666 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-11 00:06:36 +00:00
Ahmed Bougacha
1810ca3110 [AArch64] Promote f16 operations to f32.
For the most common ones (such as fadd), we already did the promotion.
Do the same thing for all the others.

Currently, we'll just crash/assert on all these operations, as
there's no hardware or libcall support whatsoever.

f16 (half) is specified as an interchange - not arithmetic - format,
and is expected to be promoted to single-precision for arithmetic
operations.

While there, teach the legalizer about promoting some of the (mostly
floating-point) operations that we never needed before.
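
As a rough illustration (hypothetical C using Clang's __fp16 storage-only
type; not part of the patch): the arithmetic below is performed in single
precision and the result is rounded back to half.

  float half_square(float x) {
    __fp16 h = (__fp16)x;
    __fp16 p = h * h;    /* roughly: convert to single, fmul, convert back */
    return (float)p;
  }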

Differential Revision: http://reviews.llvm.org/D8648
See related discussion on the thread for: http://reviews.llvm.org/D8755


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234550 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-10 00:08:48 +00:00
Ahmed Bougacha
66649e00c9 [CodeGen] Combine concat_vector of trunc'd scalar to scalar_to_vector.
We already do:
  concat_vectors(scalar, undef) -> scalar_to_vector(scalar)
when the scalar is legal.
When it's not, but is a truncated legal scalar, we can also do:
  concat_vectors(trunc(scalar), undef) -> scalar_to_vector(scalar)
Which is equivalent, since the upper lanes are undef anyway.
While there, teach the combine to look at more than 2 operands.

Differential Revision: http://reviews.llvm.org/D8883


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234530 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-09 20:04:47 +00:00
Juergen Ributzka
117bf240ef [AArch64][FastISel] Fix integer extend optimization.
The integer extend optimization tries to fold the extend into the load
instruction. This requires us to identify if the extend has already been
emitted or not and act accordingly on it.

The check that was originally performed for this was not sufficient. Besides
checking the ValueMap for a mapped register, we also need to check whether the
virtual register already has an associated machine instruction that defines it.

This fixes rdar://problem/20470788.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234529 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-09 20:00:46 +00:00
Kristof Beyls
2a6ad5bfb2 [AArch64] Add support for dynamic stack alignment
Differential Revision: http://reviews.llvm.org/D8876
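
For illustration, a hypothetical C function that needs this support (not from
the patch): the 64-byte-aligned local exceeds the 16-byte stack alignment
guaranteed by AAPCS64, so the prologue has to realign the stack pointer at run
time.

  void consume(void *p);

  void over_aligned_local(void) {
    _Alignas(64) char buf[256];   /* stricter than the ABI's 16-byte stack alignment */
    consume(buf);
  }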



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234471 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-09 08:49:47 +00:00
Lang Hames
64008ef318 [AArch64] Remove redundant -march option. Also fix a think-o from r234462.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234467 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-09 05:34:57 +00:00
Nick Lewycky
7ca40334f1 Not all triples put _ before function names. Specify a triple to make this test pass on Linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234466 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-09 05:31:32 +00:00
Lang Hames
174f04eefb [AArch64] Teach AArch64TargetLowering::getOptimalMemOpType to consider alignment
restrictions when choosing a type for small-memcpy inlining in
SelectionDAGBuilder.

This ensures that the loads and stores output for the memcpy won't be further
expanded during legalization, which would cause the total number of instructions
for the memcpy to exceed (often significantly) the inlining thresholds.
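
A hypothetical C sketch of the situation (not from the patch): a small
fixed-size copy through pointers with no alignment guarantee. If the inliner
picked a wide (e.g. 128-bit) access type without checking alignment, and the
target cannot use unaligned accesses of that width, legalization would later
split those accesses and the inlined sequence would grow well past what the
threshold assumed.

  #include <string.h>

  /* dst and src are only known to be 1-byte aligned; the inlined copy must
     be built from access types that are legal at that alignment. */
  void copy_small(char *dst, const char *src) {
    memcpy(dst, src, 15);
  }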

<rdar://problem/17829180>



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234462 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-09 03:40:33 +00:00
Akira Hatanaka
2c3c562b03 Use option -march instead of -mtriple to avoid overconditionalizing the test.
This fixes r234439, which was committed to fix the test failures caused by
r234430.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234451 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-08 23:02:45 +00:00
Akira Hatanaka
0400513fd3 Pass -mtriple to llc to appease buildbot.
This fixes the test case I committed in r234430.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234439 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-08 21:30:48 +00:00
Akira Hatanaka
522877813a [DAGCombine] Fix a bug in MergeConsecutiveStores.
The bug manifests when there are two loads and two stores chained as follows in
a DAG,

(ld v3f32) -> (st f32) -> (ld v3f32) -> (st f32)

and the stores' values are extracted from the preceding vector loads.

MergeConsecutiveStores would replace the first store in the chain with the
merged vector store, which would create a cycle between the merged store node
and the last load node that appears in the chain.

This commit fixes the bug by replacing the last store in the chain instead.

rdar://problem/20275084

Differential Revision: http://reviews.llvm.org/D8849


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234430 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-08 20:34:53 +00:00
Simon Pilgrim
4e60da755a [DAGCombiner] Combine shuffles of BUILD_VECTOR and SCALAR_TO_VECTOR
This patch attempts to fold the shuffling of 'scalar source' inputs - BUILD_VECTOR and SCALAR_TO_VECTOR nodes - if the shuffle node is the only user. This folds away a lot of unnecessary shuffle nodes, and allows quite a bit of constant folding that was being missed.

Differential Revision: http://reviews.llvm.org/D8516

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234004 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 10:02:21 +00:00
Jiangning Liu
3ee56c2c67 Fix PR23065. Avoid optimizing bitcast of build_vector with constant input to scalar_to_vector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233778 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-01 01:52:38 +00:00
Quentin Colombet
6aebd393f0 [AArch64] Enable the codegenprepare optimization that promotes operations to form
extended loads.
Implement the related target lowering hook so that the optimization has a better
estimation of the cost of an extension.
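
A hypothetical C sketch (not from the patch): the loaded byte is zero-extended
before the add; promoting the extension lets it fold into the load (ldrb
already zero-extends into a 32-bit register) instead of remaining a separate
instruction.

  int add_byte(const unsigned char *p, int x) {
    return x + *p;   /* i8 load + zext feeding an i32 add */
  }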

rdar://problem/19267165


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233753 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 20:52:32 +00:00
Tim Northover
5b8131701d AArch64: fix v8.1 sqrdmlah tests on Darwin platforms
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233709 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 16:41:38 +00:00
Vladimir Sukharev
e99524cf52 [AArch64] Add v8.1a "Rounding Double Multiply Add/Subtract" extension
Reviewers: t.p.northover, jmolloy

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8502


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233693 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 13:15:48 +00:00
James Molloy
369ee1b1f4 [SDAG] Move TRUNCATE splitting logic into a helper, and use
it more liberally.

SplitVecOp_TRUNCATE has logic for recursively splitting oversize vectors
that need more than one round of splitting to become legal. There are many
other ISD nodes that could benefit from this logic, so factor it out and
use it for FP_TO_UINT, FP_TO_SINT, SINT_TO_FP, UINT_TO_FP, and FTRUNC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233681 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 10:20:58 +00:00
Quentin Colombet
9e5f04d219 [AArch64] Fix poor codegen for add immediate.
We used to match the register variant before the immediate when the register
argument could be implicitly zero-extended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233653 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 00:31:13 +00:00
Juergen Ributzka
1c8595529b Transfer implicit operands when expanding the RET_ReallyLR pseudo instruction.
When we expand the RET_ReallyLR pseudo instruction we also need to transfer the
implicit operands.

The return register is an implicit operand and without it the liveness
calculation generates an incorrect live-out set for the patchpoint.

This fixes rdar://problem/19068476.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233635 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-30 22:45:56 +00:00
Akira Hatanaka
f2f2ef70a0 [AArch64InstPrinter] Use the feature bits of the subtarget passed to the print
method.

This enables the instprinter to print a different system register name based on
the feature bits of the per-function subtarget. 

Differential Revision: http://reviews.llvm.org/D8668 


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233412 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-27 20:37:20 +00:00
Ahmed Bougacha
19e2fce680 [CodeGen] Don't attempt a tail-call with a non-forwarded explicit sret.
Tailcalls are only OK with forwarded sret pointers. With explicit sret,
one approximation is to check that the pointer isn't an Instruction, as
in that case it might point into some local memory (alloca). That's not
OK with tailcalls.
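
A hypothetical C sketch of the two cases (not from the patch):

  struct big { long v[8]; };
  struct big make(void);

  /* The incoming sret pointer is forwarded straight to make(), so turning
     this call into a tail call is safe. */
  struct big forward(void) {
    return make();
  }

  /* The ignored result still needs an sret slot in drop()'s own frame; a
     tail call would let make() write into a dead frame, so the call has to
     stay a normal call. */
  void drop(void) {
    make();
  }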

Explicit sret counterpart to r233409.
Differential Revison: http://reviews.llvm.org/D8510


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233410 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-27 20:35:49 +00:00
Ahmed Bougacha
2615b686d3 [CodeGen] Don't attempt a tail-call with implicit sret.
Tailcalls are only OK with forwarded sret pointers. With sret demotion,
they're not, as we'd have a pointer into a soon-to-be-dead stack frame.

Differential Revison: http://reviews.llvm.org/D8510


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233409 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-27 20:28:30 +00:00
Andrew Trick
9217916725 Complete the MachineScheduler fix made way back in r210390.
"Fix the MachineScheduler's logic for updating ready times for in-order.
 Now the scheduler updates a node's ready time as soon as it is
 scheduled, before releasing dependent nodes."

This fix was only made in one variant of the ScheduleDAGMI driver.
Francois de Ferriere reported the issue in the other bit of code where
it was also needed.
I never got around to coming up with a test case, but it's an
obvious fix that shouldn't be delayed any longer.
I'll try to refactor this code a little better.

I did verify performance on a wide variety of targets and saw no
negative impact with this fix.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233366 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-27 06:10:13 +00:00
Ahmed Bougacha
c9ad3ab624 [AArch64, ARM] Enable GlobalMerge with -O3 rather than -O1.
The pass used to be enabled by default with CodeGenOpt::Less (-O1).
This is too aggressive, considering the pass indiscriminately merges
all globals together.

Currently, performance doesn't always improve, and on code that uses
few globals (e.g., the odd file- or function-static), it is more often
than not degraded by the optimization.  Lengthy discussion can be found
on llvmdev (AArch64-focused;  ARM has similar problems):
  http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-February/082800.html
Also, it makes tooling and debuggers less useful when dealing with
globals and data sections.

GlobalMerge needs to better identify those cases that benefit, and this
will be done separately.  In the meantime, move the pass to run with
-O3 rather than -O1, on both ARM and AArch64.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233024 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-23 21:17:36 +00:00