Commit Graph

25462 Commits

Ulrich Weigand
c568629589 [PowerPC] Swap arguments to vpkuhum/vpkuwum on little-endian
In commit r213915, Bill fixed little-endian usage of vmrgh* and vmrgl*
by swapping the input arguments.  As it turns out, the exact same fix
is also required for the vpkuhum/vpkuwum patterns.

This fixes another regression in llvmpipe when vector support is
enabled.

Reviewed by Bill Schmidt.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214718 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-04 13:53:40 +00:00
Ulrich Weigand
d5e9497c88 [PowerPC] MULHU/MULHS are not legal for vector types
I ran into some test failures where common code changed vector division
by constant into a multiply-high operation (MULHU).  But these are not
implemented by the back-end, so we failed to recognize the insn.

Fixed by marking MULHU/MULHS as Expand for vector types.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214716 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-04 13:27:12 +00:00
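As an aside, a minimal standalone sketch of the transform that produced the MULHU nodes in the first place: unsigned division by a constant rewritten as a multiply-high plus shift. The divisor 3 and its magic constant are chosen purely for illustration; this is not the back-end code itself.

  #include <cassert>
  #include <cstdint>

  // Division by the constant 3 rewritten as a multiply-high (MULHU-style)
  // operation: take the upper half of a 32x32 -> 64-bit multiply by a magic
  // constant, then shift. This is the kind of node the back-end must either
  // select or mark as Expand.
  static uint32_t udiv3_via_mulhu(uint32_t n) {
    const uint64_t magic = 0xAAAAAAABu;                      // ceil(2^33 / 3)
    uint32_t hi = static_cast<uint32_t>((n * magic) >> 32);  // MULHU n, magic
    return hi >> 1;                                          // == n / 3
  }

  int main() {
    for (uint32_t n = 0; n < 100000; ++n)
      assert(udiv3_via_mulhu(n) == n / 3);
    return 0;
  }
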
Ulrich Weigand
3b7a193521 [PowerPC] Fix and improve vector comparisons
This patch refactors code generation of vector comparisons.

This fixes a wrong code-gen bug for ISD::SETGE for floating-point types,
and improves generated code for vector comparisons in general.

Specifically, the patch moves all logic deciding how to implement vector
comparisons into getVCmpInst, which gets two extra boolean outputs
indicating to its caller whether it needs to swap the input operands
and/or negate the result of the comparison.  Apart from implementing
these two modifications as directed by getVCmpInst, there is no need
to ever implement vector comparisons in any other manner; in particular,
there is never a need to perform two separate comparisons (e.g. one for
equal and one for greater-than, as code used to do before this patch).

Reviewed by Bill Schmidt.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214714 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-04 13:13:57 +00:00
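To illustrate the swap/negate scheme (a sketch with made-up helper names, not the actual getVCmpInst code): for integer elements, every ordering comparison reduces to a single greater-than compare, optionally swapping the operands and/or negating the result.

  #include <cassert>
  #include <cstdint>
  #include <utility>

  // "Greater-than" is the only primitive compare; everything else is obtained
  // by optionally swapping the operands and/or negating the result, mirroring
  // the two boolean outputs described in the commit message.
  static bool cmpGT(int32_t a, int32_t b) { return a > b; }

  static bool emulateSetCC(char cc, int32_t a, int32_t b) {
    bool swap = false, negate = false;
    switch (cc) {
    case '>': break;                              // a >  b :  GT(a, b)
    case '<': swap = true; break;                 // a <  b :  GT(b, a)
    case 'G': swap = true; negate = true; break;  // a >= b : !GT(b, a)
    case 'L': negate = true; break;               // a <= b : !GT(a, b)
    }
    if (swap) std::swap(a, b);
    bool r = cmpGT(a, b);
    return negate ? !r : r;
  }

  int main() {
    for (int32_t a = -3; a <= 3; ++a)
      for (int32_t b = -3; b <= 3; ++b) {
        assert(emulateSetCC('>', a, b) == (a > b));
        assert(emulateSetCC('<', a, b) == (a < b));
        assert(emulateSetCC('G', a, b) == (a >= b));
        assert(emulateSetCC('L', a, b) == (a <= b));
      }
    return 0;
  }
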
Daniel Sanders
98b419bff7 [mips] Add assembler support for '.set mipsX'.
Summary:
This patch also fixes an issue with the way the Mips assembler enables/disables architecture
features. Before this patch, the assembler never disabled feature bits. For example,
.set mips64
.set mips32r2

would result in the 'OR' of the mips64 and mips32r2 feature bits, which isn't right.
Unfortunately this isn't trivial to fix because there's no easy way to clear
feature bits: the algorithm in MCSubtargetInfo (ToggleFeature) only clears the bits
that imply the feature being cleared, not the bits implied by that feature (there's a
better explanation in the code I added).

Patch by Matheus Almeida and updated by Toma Tabacu

Reviewers: vmedic, matheusalmeida, dsanders

Reviewed By: dsanders

Subscribers: tomatabacu, llvm-commits

Differential Revision: http://reviews.llvm.org/D4123


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214709 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-04 12:20:00 +00:00
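A standalone sketch of the accumulation bug described above, using invented feature masks rather than the real Mips feature bits or the MCSubtargetInfo API:

  #include <cstdint>
  #include <cstdio>

  // Simplified, made-up feature masks for illustration only.
  enum : uint64_t {
    FeatureMips32   = 1u << 0,
    FeatureMips32r2 = (1u << 1) | FeatureMips32,   // mips32r2 implies mips32
    FeatureMips64   = (1u << 2) | FeatureMips32r2, // mips64 implies mips32r2
  };

  int main() {
    // Buggy behaviour: feature bits are only ever OR'ed together.
    uint64_t bits = 0;
    bits |= FeatureMips64;    // .set mips64
    bits |= FeatureMips32r2;  // .set mips32r2 -- should drop the mips64 bit
    std::printf("or-only:  %#llx (mips64 bit still set)\n",
                (unsigned long long)bits);

    // Intended meaning of ".set mips32r2": select exactly that architecture
    // (plus whatever it implies), clearing the previously selected one.
    bits = FeatureMips32r2;
    std::printf("replaced: %#llx\n", (unsigned long long)bits);
    return 0;
  }
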
Chandler Carruth
48593d7934 [x86] Just unilaterally prefer SSSE3-style PSHUFB lowerings over clever
use of PACKUS. It's cleaner that way.

I looked at implementing clever combine-based folding of PACKUS chains
into PSHUFB but it is quite hard and doesn't seem likely to be worth it.
The most annoying part would be detecting that the correct masking had
been done to use PACKUS-style instructions as a blend operation rather
than for the saturation its name implies. For the few test cases I've
come up with that aren't completely contrived for this, we generate
really nice code by just directly preferring PSHUFB, so let's go with
that strategy for now. =]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214707 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-04 10:17:35 +00:00
Chandler Carruth
93f5d9f093 [x86] Implement more aggressive use of PACKUS chains for lowering common
patterns of v16i8 shuffles.

This implements one of the more important FIXMEs for the SSE2 support in
the new shuffle lowering. We now generate the optimal shuffle sequence
for truncate-derived shuffles which show up essentially everywhere.

Unfortunately, this exposes a weakness in other parts of the shuffle
logic -- we can no longer form PSHUFB here. I'll add the necessary
support for that and other things in a subsequent commit.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214702 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-04 09:40:02 +00:00
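For readers unfamiliar with the trick: once each 16-bit lane has been masked down to its low byte, a PACKUS-style pack performs the truncate shuffle exactly, because the unsigned saturation becomes a no-op. A scalar model of that identity (not SSE2 intrinsics or the lowering code itself):

  #include <cassert>
  #include <cstdint>

  // Scalar model of PACKUSWB on one lane: signed i16 -> u8 with unsigned
  // saturation.
  static uint8_t packus1(int16_t v) {
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return static_cast<uint8_t>(v);
  }

  int main() {
    // A truncate-derived v16i8 shuffle keeps the low byte of every i16 lane,
    // i.e. the mask <0,2,4,...,30>. Masking each lane to 0x00FF first makes
    // the saturation a no-op, so packing the masked halves is exactly that
    // shuffle.
    int16_t lanes[16];
    for (int i = 0; i < 16; ++i)
      lanes[i] = static_cast<int16_t>(i * 1000 - 8000);

    uint8_t packed[16];
    for (int i = 0; i < 16; ++i)
      packed[i] = packus1(static_cast<int16_t>(lanes[i] & 0x00FF));

    for (int i = 0; i < 16; ++i)
      assert(packed[i] == static_cast<uint8_t>(lanes[i] & 0xFF));
    return 0;
  }
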
Kevin Qin
534100b31e Revert "r214669 - MachineCombiner Pass for selecting faster instruction"
This commit has had "make check" broken for several hours, so revert it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214697 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-04 05:10:33 +00:00
Chandler Carruth
73100d8f33 [x86] Handle single input shuffles in the SSSE3 case more intelligently.
I spent some time looking into a better or more principled way to handle
this. For example, by detecting arbitrary "unneeded" ORs... But really,
there wasn't any point. We just shouldn't build blatantly wrong code so
late in the pipeline and then add more stages and logic later on to fix
it; avoiding it in the first place is just too simple.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214680 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-04 01:14:24 +00:00
Chandler Carruth
f3ef961a94 [x86] Fix the test case added in r214670 and tweaked in r214674 further.
Fundamentally, there isn't a really portable way to test the constant
pool contents. Instead, pin this test to the bare-metal triple. This
also makes it a 64-bit triple which allows us to only match a single
constant pool rather than two. It can also just hard code the '.' prefix
as the format should be stable now that it has a fixed triple. Finally,
I've switched it to use CHECK-NEXT to be more precise in the instruction
sequence expected and to use variables rather than hard coding decisions
by the register allocator.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214679 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-04 00:54:28 +00:00
Peter Zotov
0980da373c [OCaml] Add Llvm.{string_of_const,const_element}.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214677 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-03 23:54:22 +00:00
Sanjay Patel
7bf958fe15 Account for possible leading '.' in label string.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214674 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-03 23:20:16 +00:00
Sanjay Patel
0f2bc49126 fix for PR20354 - Miscompile of fabs due to vectorization
This is intended to be the minimal change needed to fix PR20354 ( http://llvm.org/bugs/show_bug.cgi?id=20354 ). The check for a vector operation was wrong; we need to check that the fabs itself is not a vector operation.

This patch will not generate the optimal code. A constant pool load and 'and' op will be generated instead of just returning a value that we can calculate in advance (as we do for the scalar case). I've put a 'TODO' comment for that here and expect to have that patch ready soon.

There is a very similar optimization that we can do in visitFNEG, so I've put another 'TODO' there and expect to have another patch for that too.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214670 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-03 22:48:23 +00:00
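The "constant pool load and 'and' op" mentioned above is the standard sign-bit-clearing form of fabs. A scalar sketch of that lowering (illustrative only, not the DAG combiner code):

  #include <cassert>
  #include <cmath>
  #include <cstdint>
  #include <cstring>

  // fabs lowered as a bitwise AND that clears the sign bit -- the same form
  // the vector path uses via a constant-pool mask.
  static float fabs_via_and(float x) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);   // bitcast float -> i32
    bits &= 0x7FFFFFFFu;                   // clear the sign bit
    float r;
    std::memcpy(&r, &bits, sizeof r);      // bitcast i32 -> float
    return r;
  }

  int main() {
    const float samples[] = {0.0f, -0.0f, 1.5f, -2.25f, -1e30f};
    for (float s : samples)
      assert(fabs_via_and(s) == std::fabs(s));
    return 0;
  }
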
Gerolf Hoflehner
48e1bd7287 MachineCombiner Pass for selecting faster instruction
sequence -  AArch64 target support

 This patch turns off madd/msub generation in the DAGCombiner and generates
 them in the MachineCombiner instead. It replaces the original code sequence
 with the combined sequence when it is beneficial to do so.

 When there is no machine model support, the madd/msub instruction is always
 generated. This is also true when the objective is to optimize for code
 size: if the combined sequence is shorter, it is always chosen and is not
 evaluated further.

 When there is a machine model the combined instruction sequence
 is evaluated for critical path and resource length using machine
 trace metrics and the original code sequence is replaced when it is
 determined to be faster.

 rdar://16319955



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214669 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-03 22:03:40 +00:00
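For context, the pattern in question is a multiply whose result feeds an add, which AArch64 can fuse into a single madd (Rd = Ra + Rn * Rm). A trivial illustration; the register assignment in the comment assumes the AAPCS and is only indicative:

  #include <cstdint>

  // Candidate for madd/msub formation: either mul + add, or a single
  // madd x0, x0, x1, x2 (i.e. x2 + x0 * x1), depending on what the machine
  // model says is faster for the surrounding code.
  int64_t multiply_accumulate(int64_t a, int64_t b, int64_t acc) {
    return acc + a * b;
  }

  int main() { return multiply_accumulate(3, 4, 5) == 17 ? 0 : 1; }
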
Matt Arsenault
fc65cf649c R600/SI: Fix extra whitespace in asm str
This slipped in in r214467, so something like

V_MOV_B32_e32  v0, ... is now printed with 2 spaces
between the instruction name and first operand.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214660 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-03 05:27:14 +00:00
Manman Ren
3d5463d81f [SimplifyCFG] fix accessing deleted PHINodes in switch-to-table conversion.
When we have a covered lookup table, make sure we don't delete PHINodes that
are cached in PHIs.

rdar://17887153


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214642 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 23:41:54 +00:00
Joerg Sonnenberger
a0e1d10f75 tlbia support
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214640 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 20:16:29 +00:00
Joerg Sonnenberger
55f925abc5 mfdcr / mtdcr support
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214639 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 20:00:26 +00:00
Erik Eckstein
8624519c0c fix bug 20513 - Crash in SLP Vectorizer
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214638 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 19:39:42 +00:00
James Molloy
14d596a382 Update test to use a more modern AArch64 triple, as requested by Renato.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214637 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 17:15:11 +00:00
Joerg Sonnenberger
419f3804f0 Don't use additional arguments for dss and friends to satisfy DSS_Form,
when let can do the same thing. Keep the 64bit variants as codegen-only.
While they have a different register class, the encoding is the same for
32bit and 64bit mode. Having both present would otherwise confuse the
disassembler.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214636 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 15:09:41 +00:00
James Molloy
e411c38de9 [AArch64] Teach DAGCombiner that converting two consecutive loads into a vector load is not a good transform when paired loads are available.
The combiner was creating Q-register loads and stores, which then had to be spilled because there are no callee-save Q registers!



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214634 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 14:51:24 +00:00
Chandler Carruth
49f7f89c16 [x86] Give this test a bare metal triple so it doesn't use the weird
Darwin x86 asm comment prefix designed to work around GAS on that
platform. That makes the comment-matching of the test much more stable.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214629 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 11:17:41 +00:00
Chandler Carruth
1029c7003f [x86] Largely complete the use of PSHUFB in the new vector shuffle
lowering with a small addition to it and adding PSHUFB combining.

There is one obvious place in the new vector shuffle lowering where we
should form PSHUFBs directly: when without them we will unpack a vector
of i8s across two different registers and do a potentially 4-way blend
as i16s only to re-pack them into i8s afterward. This is the crazy
expensive fallback path for i8 shuffles and we can just directly use
pshufb here as it will always be cheaper (the unpack and pack are
two instructions so even a single shuffle between them hits our
three instruction limit for forming PSHUFB).

However, this doesn't generate very good code in many cases, and it
leaves a bunch of common patterns not using PSHUFB. So this patch also
adds support for extracting a shuffle mask from PSHUFB in the X86
lowering code, and uses it to handle PSHUFBs in the recursive shuffle
combining. This allows us to combine through them, combine multiple ones
together, and generally produce sufficiently high quality code.

Extracting the PSHUFB mask is annoyingly complex because it could be
either pre-legalization or post-legalization. At least this doesn't have
to deal with re-materialized constants. =] I've added decode routines to
handle the different patterns that show up at this level and we dispatch
through them as appropriate.

The two primary test cases are updated. For the v16 test case there is
still a lot of room for improvement. Since I was going through it
systematically I left behind a bunch of FIXME lines that I'm hoping to
turn into ALL lines by the end of this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214628 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 10:39:15 +00:00
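A scalar model of why the mask can be extracted at all: each PSHUFB mask byte fully encodes the behaviour of its lane (the low four bits select a source byte, a set high bit zeroes the lane), so a generic shuffle mask falls straight out of the constant. The helper name below is illustrative, not the LLVM decode routine:

  #include <cassert>
  #include <cstdint>
  #include <vector>

  // Decode a 128-bit PSHUFB control vector into a generic shuffle mask:
  // -1 means "this byte is zeroed", otherwise the value is the index of the
  // source byte. This is what lets the recursive shuffle combine see through
  // PSHUFB nodes and merge several of them together.
  static std::vector<int> decodePshufbMask(const uint8_t mask[16]) {
    std::vector<int> shuffle(16);
    for (int i = 0; i < 16; ++i)
      shuffle[i] = (mask[i] & 0x80) ? -1 : (mask[i] & 0x0F);
    return shuffle;
  }

  int main() {
    // Reverse the first four bytes, zero the rest.
    uint8_t mask[16] = {3, 2, 1, 0};
    for (int i = 4; i < 16; ++i)
      mask[i] = 0x80;

    std::vector<int> m = decodePshufbMask(mask);
    assert(m[0] == 3 && m[1] == 2 && m[2] == 1 && m[3] == 0);
    for (int i = 4; i < 16; ++i)
      assert(m[i] == -1);
    return 0;
  }
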
Chandler Carruth
fb1293fd4c [x86] Teach the target shuffle mask extraction to recognize unary forms
of normally binary shuffle instructions like PUNPCKL and MOVLHPS.

This detects cases where a single register is used for both operands
making the shuffle behave in a unary way. We detect this and adjust the
mask to use the unary form which allows the existing DAG combine for
shuffle instructions to actually work at all.

As a consequence, this uncovered a number of obvious bugs in the
existing DAG combine which are fixed. It also now canonicalizes several
shuffles even with the existing lowering. These typically are trying to
match the shuffle to the domain of the input where before we only really
modeled them with the floating point variants. All of the cases which
change to an integer shuffle here have something in the integer domain, so
there are no more or fewer domain crosses here AFAICT. Technically, it
might be better to go from a GPR directly to the floating point domain,
but detecting floating point *outputs* despite integer inputs is a lot
more code and seems unlikely to be worthwhile in practice. If folks are
seeing domain-crossing regressions here though, let me know and I can
hack something up to fix it.

Also as a consequence, a bunch of missed opportunities to form pshufb
now can be formed. Notably, splats of i8s now form pshufb.
Interestingly, this improves the existing splat lowering too. We go from
3 instructions to 1. Yes, we may tie up a register, but it seems very
likely to be worth it, especially when splatting the 0th byte (the
common case), since then we can use a zeroed register as the mask.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214625 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 10:27:38 +00:00
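The adjustment described above amounts to folding second-operand indices back into the first operand's range once both operands are known to be the same register. A sketch with an invented helper name:

  #include <cassert>
  #include <vector>

  // A two-input shuffle whose inputs are the same register behaves as a
  // one-input shuffle, so indices referring to the "second" operand
  // (>= NumElts) can be remapped into the first operand's range. Later
  // single-input shuffle combines can then match the node.
  static void canonicalizeUnaryMask(std::vector<int> &mask, int numElts) {
    for (int &idx : mask)
      if (idx >= numElts)
        idx -= numElts;
  }

  int main() {
    // punpcklbw-style interleave of V with itself: <0,16,1,17,...,7,23>.
    std::vector<int> mask;
    for (int i = 0; i < 8; ++i) {
      mask.push_back(i);
      mask.push_back(i + 16);
    }
    canonicalizeUnaryMask(mask, 16);
    // The mask now references only the single input: <0,0,1,1,...,7,7>.
    for (int i = 0; i < 16; ++i)
      assert(mask[i] == i / 2);
    return 0;
  }
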
Akira Hatanaka
7e55ac7ce7 [ARM] In dynamic-no-pic mode, ARM's post-RA pseudo expansion was incorrectly
expanding the pseudo LOAD_STACK_GUARD using instructions that are normally used
in pic mode. This patch fixes the bug.

<rdar://problem/17886592>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214614 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 05:40:40 +00:00
Matt Arsenault
507c5af818 R600: Cleanup fneg tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214612 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 02:26:51 +00:00
Chandler Carruth
a272803ec5 [x86] Make some questionable tests not spew assembly to stdout, which
makes a mess of the lit output when they ultimately fail.

The 2012-10-02-DAGCycle test is really frustrating because the *only*
explanation for what it is testing is a rdar link. I would really rather
that rdar links (which are not public or part of the open source
project) were not committed to the source code. Regardless, the actual
problem *must* be described as the rdar link is completely opaque. The
fact that this test didn't check for any particular output further
exacerbates the inability of any other developer to debug failures.

The mem-promote-integers test has nice comments and *seems* to be
a great test for our lowering... except that we don't actually check
that any of the generated code is correct or matches some pattern. We
just avoid crashing. It would be great to go back and populate this test
with the actual expectations.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214605 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 00:50:10 +00:00
Alexey Samsonov
cbd84586ef [ASan] Use metadata to pass source-level information from Clang to ASan.
Instead of creating global variables for source locations and global names,
just create metadata nodes and strings. They will be transformed into actual
globals in the instrumentation pass (if necessary). This approach is more
flexible:
1) we don't have to ensure that our custom globals survive all the optimizations
2) if globals are discarded for some reason, we will simply ignore metadata for them
   and won't have to erase corresponding globals
3) metadata for source locations can be reused for other purposes: e.g. we may
   attach source location metadata to alloca instructions and provide better descriptions
   for stack variables in ASan error reports.

No functionality change.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214604 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 00:35:50 +00:00
Tyler Nowicki
842a06e8dd Add diagnostics to the vectorizer cost model.
When the cost model determines that vectorization is not possible or not profitable, these remarks print an analysis of that decision.

Note that in selectVectorizationFactor() we can assume that OptForSize and ForceVectorization are mutually exclusive.

Reviewed by Arnold Schwaighofer


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214599 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-02 00:14:03 +00:00
Peter Collingbourne
f425efdbc2 PartiallyInlineLibCalls: Check sqrt result type before transforming it.
Some configure scripts declare this with the wrong prototype, which can lead
to an assertion failure.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214593 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 23:21:21 +00:00
Adrian Prantl
65225b64bb Cleanup this test some more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214591 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 23:01:32 +00:00
Adrian Prantl
22c6e043d1 Add the missing target triple to this testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214590 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 23:01:30 +00:00
Justin Bogner
4a6fa1390a InstrProf: Allow multiple functions with the same name
This updates the instrumentation based profiling format so that when
we have multiple functions with the same name (but different function
hashes) we keep all of them instead of rejecting the later ones.

There are a number of scenarios where this can come up where it's more
useful to keep multiple function profiles:

* Name collisions in unrelated libraries that are profiled together.
* Multiple "main" functions from multiple tools built against a common
  library.
* Combining profiles from different build configurations (i.e., asserts
  and no-asserts).

The profile format now stores the number of counters between the hash
and the counts themselves, so that multiple sets of counts can be
stored. Since this is backwards incompatible, I've bumped the format
version and added some trivial logic to skip this when reading the old
format.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214585 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 22:50:07 +00:00
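A simplified in-memory model of the idea (made-up types, not the on-disk format): keying records by (name, hash) instead of name alone, and recording how many counters each record owns, keeps every copy of a function whose body differs between builds.

  #include <cassert>
  #include <cstdint>
  #include <map>
  #include <string>
  #include <utility>
  #include <vector>

  // Illustrative record type; the real format stores the counter count
  // between the hash and the counters so multiple sets can be read back.
  struct FunctionProfile {
    uint64_t Hash;
    std::vector<uint64_t> Counters;
  };

  int main() {
    std::map<std::pair<std::string, uint64_t>, FunctionProfile> Profiles;

    // Two unrelated "main" functions: same name, different hashes.
    Profiles[{"main", 0x1111}] = {0x1111, {10, 2}};
    Profiles[{"main", 0x2222}] = {0x2222, {7, 7, 7}};

    assert(Profiles.size() == 2);  // neither record is rejected
    assert(Profiles[{"main", 0x1111}].Counters.size() == 2);
    assert(Profiles[{"main", 0x2222}].Counters.size() == 3);
    return 0;
  }
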
Duncan P. N. Exon Smith
1526278443 UseListOrder: Fix blockaddress use-list order
`parseBitcodeFile()` uses the generic `getLazyBitcodeFile()` function as
a helper.  Since `parseBitcodeFile()` isn't actually lazy -- it calls
`MaterializeAllPermanently()` -- bypass the unnecessary call to
`materializeForwardReferencedFunctions()` by extracting out a common
helper function.  This removes the last of the use-list churn caused by
blockaddresses.

This highlights that we can't reproduce use-list order of globals and
constants when parsing lazily -- but that's necessarily out of scope.
When we're parsing lazily, we never have all the functions in memory, so
the use-lists of globals (and constants that reference globals) are
always incomplete.

This is part of PR5680.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214581 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 22:27:19 +00:00
Akira Hatanaka
306030f8aa [X86] Simplify X87 stackifier pass.
Stop using ST registers for function returns and inline-asm instructions and use
FP registers instead. This allows removing a large amount of code in the
stackifier pass that was needed to track register liveness and handle copies
between ST and FP registers and function calls returning floating point values.

It also fixes a bug which manifests when an ST register defined by an
inline-asm instruction was live across another inline-asm instruction, as shown
in the following sequence of machine instructions:

1. INLINEASM <es:frndint> $0:[regdef], %ST0<imp-def,tied5>
2. INLINEASM <es:fldcw $0>
3. %FP0<def> = COPY %ST0

<rdar://problem/16952634>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214580 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 22:19:41 +00:00
NAKAMURA Takumi
c36cba412a llvm/test/CodeGen/Mips/cconv/arguments-varargs.ll: Add explicit -mtriple=(mips|mipsel)-linux on 4 lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214578 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 22:15:38 +00:00
Adrian Prantl
2a39c993eb Debug info: Infrastructure to support debug locations for fragmented
variables (for example, by-value struct arguments passed in registers, or
large integer values split across several smaller registers).
On the IR level, this adds a new type of complex address operation, OpPiece,
to DIVariable; it describes the size and offset of a variable fragment.
On the DWARF emitter level, all pieces describing the same variable are
collected, sorted and emitted as DWARF expressions using the DW_OP_piece
and DW_OP_bit_piece operators.

http://reviews.llvm.org/D3373
rdar://problem/15928306

What this patch doesn't do / Future work:
- This patch only adds the backend machinery to make this work, patches
  that change SROA and SelectionDAG's type legalizer to actually create
  such debug info will follow. (http://reviews.llvm.org/D2680)
- Making the DIVariable complex expressions into an argument of dbg.value
  will reduce the memory footprint of the debug metadata.
- The sorting/uniquing of pieces should be moved into DebugLocEntry,
  to facilitate the merging of multi-piece entries.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214576 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 22:11:58 +00:00
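As a concrete, deliberately simplified example of a fragmented variable: a by-value struct that the ABI passes in two registers ends up described piecewise, roughly as DW_OP_reg<n> followed by DW_OP_piece 8 for each half. The register names in the comment are illustrative, not actual emitter output.

  #include <cstdint>

  // 's' may be passed in two 8-byte registers; its location would then be a
  // two-piece DWARF expression along the lines of
  //   DW_OP_reg0  DW_OP_piece 8   // bytes 0..7  (field 'lo')
  //   DW_OP_reg1  DW_OP_piece 8   // bytes 8..15 (field 'hi')
  struct Pair {
    int64_t lo;
    int64_t hi;
  };

  int64_t sum(Pair s) { return s.lo + s.hi; }

  int main() { return sum({1, 2}) == 3 ? 0 : 1; }
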
Tom Stellard
b2df20d015 Revert "R600: Move code for generating REGISTER_LOAD into R600ISelLowering.cpp"
This reverts commit r214566.

I did not mean to commit this yet.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214572 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 21:55:50 +00:00
Reid Kleckner
b125730db8 MS inline asm: Hide symbol to attempt to fix test failure on darwin
If the symbol comes from an external DSO, it apparently requires
indirection through a register.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214571 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 21:54:37 +00:00
Tom Stellard
7f288b455e R600: Move code for generating REGISTER_LOAD into R600ISelLowering.cpp
SI doesn't use REGISTER_LOAD anymore, but it was still hitting this code
path for 8-bit and 16-bit private loads.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214566 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 21:50:47 +00:00
Peter Collingbourne
f1499548d0 [dfsan] Correctly handle loads and stores of zero size.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214561 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 21:18:18 +00:00
Reid Kleckner
ab418066a2 MS inline asm: Use memory constraints for functions instead of registers
This is consistent with how we parse them in a standalone .s file, and
inline assembly shouldn't differ.

This fixes errors about requiring more registers than available in
cases like this:
  void f();
  void __declspec(naked) g() {
    __asm pusha
    __asm call f
    __asm popa
    __asm ret
  }

There are no registers available to pass the address of 'f' into the asm
blob.  The asm should now directly call 'f'.

Tests will land in Clang shortly.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214550 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 20:21:24 +00:00
Justin Bogner
a2a42a6a4e llvm-profdata: Replace redundant tests with more targeted ones
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214548 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 19:59:48 +00:00
Juergen Ributzka
8d3bf10dd3 [FastISel][AArch64] Fold offset into the memory operation.
Fold simple offsets into the memory operation:
  add x0, x0, #8
  ldr x0, [x0]
-->
  ldr x0, [x0, #8]

Fixes <rdar://problem/17887945>.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214545 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 19:40:16 +00:00
Juergen Ributzka
3d253a3f80 [FastISel][AArch64] Add branch weights.
Add branch weights to branch instructions, so that the following passes can
optimize based on it (i.e. basic block ordering).

Fixes <rdar://problem/17887137>.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214537 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 18:39:24 +00:00
Philip Reames
5a96b41c70 Explicitly report runtime stack realignment in StackMap section
This change adds code to explicitly mark a function which requires runtime stack realignment as not having a fixed frame size in the StackMap section. As it happens, this is not actually a functional change. The size that would be reported without the check is also "-1", but as far as I can tell, that's an accident. The code change makes this explicit.

Note: There's a separate bug in handling of stackmaps and patchpoints in functions which need dynamic frame realignment. The current code assumes that offsets can be calculated from RBP, but realigned frames must use RSP. (There's a variable gap between RBP and the spill slots.) This change set does not address that issue.

Reviewers: atrick, ributzka

Differential Revision: http://reviews.llvm.org/D4572




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214534 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 18:26:27 +00:00
Juergen Ributzka
b36af0785a [FastISel][ARM] Do not emit stores for undef arguments.
This is a followup patch for r214366, which added the same behavior to the
AArch64 and X86 FastISel code. This fix reproduces the already existing
behavior of SelectionDAG in FastISel.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214531 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 18:04:14 +00:00
Matt Arsenault
f8230857c0 R600: Cleanup test
Remove -CHECKs, use multiple prefixes, name values,
also test the @llvm.fabs version

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214525 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 17:00:29 +00:00
Chad Rosier
b4cded4926 [AArch64] Fix test from r214518 in an attempt to appease buildbots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214521 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 15:30:41 +00:00
Chad Rosier
4175e2f597 [AArch64] Generate tbz/tbnz when comparing against zero.
The tbz/tbnz instructions test the sign bit, which lets us convert

op w1, w1, w10
cmp w1, #0
b.lt .LBB0_0

to

op w1, w1, w10
tbnz w1, #31, .LBB0_0

Differential Revision: http://reviews.llvm.org/D4440

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214518 91177308-0d34-0410-b5e6-96231b3b80d8
2014-08-01 14:48:56 +00:00
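The equivalence being exploited: for a signed 32-bit value, "less than zero" is exactly "bit 31 is set", so the compare-and-branch pair collapses into a single test-bit-and-branch. A quick check of the identity:

  #include <cassert>
  #include <cstdint>

  // cmp w1, #0 ; b.lt L   is equivalent to   tbnz w1, #31, L
  // because the sign bit of a two's-complement 32-bit value is bit 31.
  static bool signBitSet(int32_t w) {
    return (static_cast<uint32_t>(w) >> 31) != 0;
  }

  int main() {
    const int32_t samples[] = {0, 1, -1, INT32_MIN, INT32_MAX, -1234};
    for (int32_t w : samples)
      assert(signBitSet(w) == (w < 0));
    return 0;
  }
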