Commit Graph

12432 Commits

Simon Pilgrim
3717f7c80c [X86] Refactored stack memory folding tests to explicitly force register spilling
The current 'big vectors' stack folded reload testing pattern is very bulky and makes it difficult to test all instructions, as big vectors tend to use only the ymm instruction implementations.

This patch changes the tests to use a nop call that lists explicit xmm registers as sideeffects; with this we can force a partial register spill of the relevant registers and then check that the reload is correctly folded. The generated asm adds only the forced spill, a nop instruction, and a couple of extra labels (a fraction of the current approach).

More exhaustive tests will follow shortly; I've added some extra tests (the xmm versions of some of the existing folding tests) as a starting point.
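
A minimal sketch of the new pattern (function name and exact clobber list are
hypothetical, modeled on the description above):

  define <2 x double> @stack_fold_addpd(<2 x double> %a0, <2 x double> %a1) {
    ; Clobbering the xmm registers forces a partial spill of %a1; a CHECK
    ; line then verifies the reload is folded into the addpd.
    %1 = tail call <2 x i64> asm sideeffect "nop", "=x,~{xmm1},~{xmm2},~{xmm3},~{xmm4},~{xmm5},~{xmm6},~{xmm7},~{xmm8},~{xmm9},~{xmm10},~{xmm11},~{xmm12},~{xmm13},~{xmm14},~{xmm15},~{flags}"()
    %2 = fadd <2 x double> %a0, %a1
    ret <2 x double> %2
  }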

Differential Revision: http://reviews.llvm.org/D6932



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226264 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 09:32:54 +00:00
Timur Iskhodzhanov
6a7c74de33 Revert r226242 - Revert Revert Don't create new comdats in CodeGen
This breaks AddressSanitizer (ninja check-asan) on Windows

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226251 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 08:38:45 +00:00
Hal Finkel
92cd0ca3b2 [PowerPC] Adjust PatchPoints for ppc64le
Bill Schmidt pointed out that some adjustments would be needed to properly
support powerpc64le (using the ELF V2 ABI). For one thing, R11 is not available
as a scratch register, so we need to use R12. R12 is also available under ELF
V1, so to maintain consistency, I flipped the order to make R12 the first
scratch register in the array under both ABIs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226247 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 04:40:58 +00:00
Rafael Espindola
dfe88a08c7 Revert "Revert Don't create new comdats in CodeGen"
This reverts commit r226173, adding r226038 back.

No change in this commit, but clang was changed to also produce trivial comdats for
constructors, destructors and vtables when needed.

Original message:

Don't create new comdats in CodeGen.

This patch stops the implicit creation of comdats during codegen.

Clang now sets the comdat explicitly when it is required. With this patch clang and gcc
now produce the same result in pr19848.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226242 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-16 02:22:55 +00:00
Matt Arsenault
ab2315014e R600/SI: Add patterns for v_cvt_{flr|rpi}_i32_f32
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226230 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 23:58:35 +00:00
Matt Arsenault
c204f47feb R600/SI: Fix trailing comma with modifiers
Instructions with 1 operand can still use source modifiers,
so make sure we don't print an extra comma afterwards.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226226 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 23:17:03 +00:00
Hal Finkel
94dc061e85 [PowerPC] Loosen ELFv1 PPC64 func descriptor loads for indirect calls
Function pointers under PPC64 ELFv1 (which is used on PPC64/Linux on the
POWER7, A2 and earlier cores) are really pointers to a function descriptor, a
structure with three pointers: the actual pointer to the code to which to jump,
the pointer to the TOC needed by the callee, and an environment pointer. We
used to chain these loads, and make them opaque to the rest of the optimizer,
so that they'd always occur directly before the call. This is not necessary,
and in fact, highly suboptimal on embedded cores. Once the function pointer is
known, the loads can be performed ahead of time; in fact, they can be hoisted
out of loops.

Now these function descriptors are almost always generated by the linker, and
thus the contents of the descriptors are invariant. As a result, by default,
we'll mark the associated loads as invariant (allowing them to be hoisted out
of loops). I've added a target feature to turn this off, however, just in case
someone needs that option (constructing an on-stack descriptor, casting it to a
function pointer, and then calling it cannot be well-defined C/C++ code, but I
can imagine some JIT-compilation system doing so).

Consider this simple test:
  $ cat call.c

  typedef void (*fp)();
  void bar(fp x) {
    for (int i = 0; i < 1600000000; ++i)
      x();
  }

  $ cat main.c

  typedef void (*fp)();
  void bar(fp x);
  void foo() {}
  int main() {
    bar(foo);
  }

On the PPC A2 (the BG/Q supercomputer), marking the function-descriptor loads
as invariant brings the execution time down to ~8 seconds from ~32 seconds with
the loads in the loop.

The difference on the POWER7 is smaller. Compiling with:

  gcc -std=c99 -O3 -mcpu=native call.c main.c : ~6 seconds [this is 4.8.2]

  clang -O3 -mcpu=native call.c main.c : ~5.3 seconds

  clang -O3 -mcpu=native call.c main.c -mno-invariant-function-descriptors : ~4 seconds
  (looks like we'd benefit from additional loop unrolling here, as a first
   guess, because this is faster with the extra loads)

The -mno-invariant-function-descriptors option will be added to Clang shortly.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226207 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 21:17:34 +00:00
Colin LeMahieu
02b677594c [Hexagon] Updating indexed load-extend patterns and changing test to new expected output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226206 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 21:07:52 +00:00
Hal Finkel
39b09ae788 Revert "r226086 - Revert "r226071 - [RegisterCoalescer] Remove copies to reserved registers""
Reapply r226071 with fixes. Two fixes:

 1. We need to manually remove the old and create the new 'dead defs'
    associated with physical register definitions when we move the definition of
    the physical register from the copy point to the point of the original vreg def.

    This problem was picked up by the machine instruction verifier, and could trigger a
    verification failure on test/CodeGen/X86/2009-02-12-DebugInfoVLA.ll, so I've
    turned on the verifier in the tests.

 2. When moving the def point of the phys reg up, we need to make sure that it
    is neither defined nor read in between the two instructions. We don't, however,
    extend the live ranges of phys reg defs to cover uses, so just checking for
    live-range overlap between the pair interval and the phys reg aliases won't
    pick up reads. As a result, we manually iterate over the range and check for
    reads.

    A test soon to be committed to the PowerPC backend will test this change.

Original commit message:

[RegisterCoalescer] Remove copies to reserved registers

This allows the RegisterCoalescer to join "non-flipped" range pairs with a
physical destination register -- which allows the RegisterCoalescer to remove
copies like this:

<vreg> = something (maybe a load, for example)
... (things that don't use PHYSREG)
PHYSREG = COPY <vreg>

(with all of the restrictions normally applied by the RegisterCoalescer: having
compatible register classes, etc.)

Previously, the RegisterCoalescer handled only the opposite case (copying
*from* a physical register). I don't handle the problem fully here, but try to
get the common case where there is only one use of <vreg> (the COPY).

An upcoming commit to the PowerPC backend will make this pattern much more
common on PPC64/ELF systems.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226200 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 20:32:09 +00:00
Matt Arsenault
ecbec418bd R600/SI: Improve fpext / fptrunc test coverage
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226197 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 19:39:42 +00:00
Marek Olsak
232d5fa02c R600/SI: Use 64-bit encoding by default for opcodes that are VOP3-only on VI
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226190 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 18:43:01 +00:00
Ramkumar Ramachandra
4f158a708b statepoint tests: use statepoint-example gc
Mechanical conversion of statepoint tests to use the statepoint-example
gc.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226183 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 18:10:44 +00:00
Colin LeMahieu
044438aff5 [Hexagon] Deleting old float comparison instruction and updating references to new ones.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226179 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 17:28:14 +00:00
Colin LeMahieu
4ce3b1e4ce [Hexagon] Replacing old fadd/fsub instructions and updating references.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226176 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 16:30:07 +00:00
Timur Iskhodzhanov
d048b3be70 Revert Don't create new comdats in CodeGen
It breaks AddressSanitizer on Windows.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226173 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 16:14:34 +00:00
Daniel Sanders
cb71ef1b46 [mips] Fix a typo in the compare patterns for MIPS32r6/MIPS64r6.
Summary: The patterns intended for the SETLE node were actually matching the SETLT node.

Reviewers: atanasyan, sstankovic, vmedic

Reviewed By: vmedic

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D6997

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226171 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 15:41:03 +00:00
Hal Finkel
f908e37144 Revert "r226071 - [RegisterCoalescer] Remove copies to reserved registers"
Reverting this while I investigate some bad behavior this is causing. As a
possibly-related issue, adding -verify-machineinstrs to one of the test cases
now fails because of this change:

  llc test/CodeGen/X86/2009-02-12-DebugInfoVLA.ll -march=x86-64 -o - -verify-machineinstrs

*** Bad machine code: No instruction at def index ***
- function:    foo
- basic block: BB#0 return (0x10007e21f10) [0B;736B)
- liverange:   [128r,128d:9)[160r,160d:8)[176r,176d:7)[336r,336d:6)[464r,464d:5)[480r,480d:4)[624r,624d:3)[752r,752d:2)[768r,768d:1)[784r,784d:0)  0@784r 1@768r 2@752r 3@624r 4@480r 5@464r 6@336r 7@176r 8@160r 9@128r
- register:    %DS
Valno #3 is defined at 624r

*** Bad machine code: Live segment doesn't end at a valid instruction ***
- function:    foo
- basic block: BB#0 return (0x10007e21f10) [0B;736B)
- liverange:   [128r,128d:9)[160r,160d:8)[176r,176d:7)[336r,336d:6)[464r,464d:5)[480r,480d:4)[624r,624d:3)[752r,752d:2)[768r,768d:1)[784r,784d:0)  0@784r 1@768r 2@752r 3@624r 4@480r 5@464r 6@336r 7@176r 8@160r 9@128r
- register:    %DS
[624r,624d:3)
LLVM ERROR: Found 2 machine code errors.

where 624r corresponds exactly to the interval combining change:

624B    %RSP<def> = COPY %vreg16; GR64:%vreg16
        Considering merging %vreg16 with %RSP
                RHS = %vreg16 [608r,624r:0)  0@608r
                updated: 608B   %RSP<def> = MOV64rm <fi#3>, 1, %noreg, 0, %noreg; mem:LD8[%saved_stack.1]
        Success: %vreg16 -> %RSP
        Result = %RSP

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226086 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 03:08:59 +00:00
Hal Finkel
47ab8c106f [RegisterCoalescer] Remove copies to reserved registers
This allows the RegisterCoalescer to join "non-flipped" range pairs with a
physical destination register -- which allows the RegisterCoalescer to remove
copies like this:

<vreg> = something (maybe a load, for example)
... (things that don't use PHYSREG)
PHYSREG = COPY <vreg>

(with all of the restrictions normally applied by the RegisterCoalescer: having
compatible register classes, etc.)

Previously, the RegisterCoalescer handled only the opposite case (copying
*from* a physical register). I don't handle the problem fully here, but try to
get the common case where there is only one use of <vreg> (the COPY).

An upcoming commit to the PowerPC backend will make this pattern much more
common on PPC64/ELF systems.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226071 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-15 01:25:28 +00:00
Philip Reames
8f9d11309a getMangledTypeStr: clarify how it mangles types, and add tests
"Write a set of tests that show how name mangling is done for overloaded intrinsics."  These happen to use gc.relocates to exercise the codepath in question, but is not a GC specific test.

Patch by: artagnon@gmail.com
Differential Revision: http://reviews.llvm.org/D6915



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226056 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 23:05:17 +00:00
Duncan P. N. Exon Smith
37ac8d3622 IR: Move MDLocation into place
This commit moves `MDLocation`, finishing off PR21433.  There's an
accompanying clang commit for frontend testcases.  I'll attach the
testcase upgrade script I used to PR21433 to help out-of-tree
frontends/backends.

This changes the schema for `DebugLoc` and `DILocation` from:

    !{i32 3, i32 7, !7, !8}

to:

    !MDLocation(line: 3, column: 7, scope: !7, inlinedAt: !8)

Note that empty fields (line/column: 0 and inlinedAt: null) don't get
printed by the assembly writer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226048 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 22:27:36 +00:00
Rafael Espindola
33f5127540 Don't create new comdats in CodeGen.
This patch stops the implicit creation of comdats during codegen.

Clang now sets the comdat explicitly when it is required. With this patch clang and gcc
now produce the same result in pr19848.
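
A sketch of the explicit form the front end now emits (hypothetical symbol; IR
syntax of this period):

  $foo = comdat any

  define linkonce_odr void @foo() comdat($foo) {
    ret void
  }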

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226038 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 20:55:48 +00:00
Chandler Carruth
8af8091ef5 [MBP] Add flags to disable the BadCFGConflict check in MachineBlockPlacement.
Some benchmarks have shown that this could lead to a potential
performance benefit, so this adds some flags to help measure the
difference.

A possible explanation: in diamond-shaped CFGs (A followed by either
B or C, both followed by D), putting both B and C between A and
D makes the code less dense than it could be. Either B or C always
has to be skipped, increasing the chance of cache misses etc.
Moving either B or C to after D might be beneficial on average.

In the long run, we should probably do a better job of analyzing the
basic block and branch probabilities to move the correct one of B or
C to after D. But even if we don't use this in the long run, it is
a good baseline for benchmarking.

Original patch authored by Daniel Jasper with test tweaks and a second
flag added by me.

Differential Revision: http://reviews.llvm.org/D6969

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226034 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 20:19:29 +00:00
Bill Schmidt
11abe69e98 [PPC64] Add support for the ICBT instruction on POWER8.
Patch by Kit Barton.

Support for the ICBT instruction is currently present, but limited to
embedded processors. This change adds a new FeatureICBT that can be used
to identify whether the ICBT instruction is available on a specific processor.

Two new tests are added:
 * Positive test to ensure the icbt instruction is present when using
-mcpu=pwr8
 * Negative test to ensure the icbt instruction is not generated when
using -mcpu=pwr7

Both test cases use the Prefetch opcode in LLVM. They are based on the
ppc64-prefetch.ll test case.
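
A sketch of the kind of input those tests exercise (modeled on
ppc64-prefetch.ll; the exact arguments are assumptions):

  define void @test(i8* %addr) {
  entry:
    ; rw = 0 (read), locality = 3, cache type = 0 (instruction cache)
    call void @llvm.prefetch(i8* %addr, i32 0, i32 3, i32 0)
    ret void
  }

  declare void @llvm.prefetch(i8*, i32, i32, i32)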


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226033 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 20:17:10 +00:00
Olivier Sallenave
735aa71398 Check that the TLI callback enableAggressiveFMAFusion has the desired effect on FMA folding.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225987 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 15:36:28 +00:00
Kai Nacke
92e28620d3 [mips] Refine octeon instructions seq/seqi/sne/snei
This commit refines the pattern for the octeon seq/seqi/sne/snei instructions.
The target register is set to 0 or 1 according to the result of the comparison.
In C, this is something like

rd = (unsigned long)(rs == rt)

This commit adds a zext to bring the result to i64. With this change the
instruction is selected for this type of code. (gcc produces the same code for
the above C code.)
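
In IR, the pattern that now selects seq looks roughly like this (a sketch):

  define i64 @eq(i64 %rs, i64 %rt) {
  entry:
    %cmp = icmp eq i64 %rs, %rt
    ; the added zext brings the i1 comparison result to i64
    %res = zext i1 %cmp to i64
    ret i64 %res
  }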


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225968 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 10:19:09 +00:00
Brad Smith
f449c53c89 Use the integrated assembler by default on SPARC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225957 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 07:53:39 +00:00
JF Bastien
7f0cbb5703 Revert "Insert random noops to increase security against ROP attacks (llvm)"
This reverts commit:
http://reviews.llvm.org/D3392

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225948 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 05:24:33 +00:00
NAKAMURA Takumi
2a38522280 Disable a couple of tests, CodeGen/X86/noop-insert.ll and CodeGen/X86/noop-insert-percentage.ll, in r225908, to unbreak tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225940 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 04:21:33 +00:00
Tim Northover
09bec94c16 ARM: add test for crc32 instructions in CodeGen.
Somehow we seem to have ended up without any actual tests of the
CodeGen side. Easy enough to fix.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225930 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 01:43:33 +00:00
Hal Finkel
037c21f82c [PowerPC] Fix the noop-insert test
The form of nops used is CPU-specific (some CPUs, such as the POWER7, have
special group-terminating nops). We probably want a different callback for this
kind of nop insertion (something more like MCAsmBackend::writeNopData), or for
PPC to use a different mechanism for scheduling nops, but this will stop the
test from failing for now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225928 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 01:37:21 +00:00
Matt Arsenault
140c2ece1e R600/SI: Remove some redundant load testcases.
This reduces coverage for Evergreen, since the more
complete tests have those run lines disabled.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225927 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 01:35:26 +00:00
Matt Arsenault
781f7ee502 R600/SI: Fix bad code with unaligned byte vector loads
Don't do the v4i8 -> v4f32 combine if the load will need to
be expanded due to alignment. This stops us from adding instructions
to repack the bytes into the single register that the v_cvt_ubyteN_f32
instructions read.
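
A sketch of the problematic input shape (assumed; IR syntax of this period):

  define <4 x float> @load_v4i8_to_v4f32(<4 x i8> addrspace(1)* %in) {
    ; align 1 means the load must be expanded, so repacking the bytes just
    ; to feed v_cvt_ubyteN_f32 would add instructions for no benefit
    %v = load <4 x i8> addrspace(1)* %in, align 1
    %f = uitofp <4 x i8> %v to <4 x float>
    ret <4 x float> %f
  }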

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225926 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 01:35:22 +00:00
Matt Arsenault
8b6a26ca85 Implement new way of expanding extloads.
Now that the source and destination types can be specified,
allow doing an expansion that doesn't use an EXTLOAD of the
result type. Try to do a legal extload to an intermediate type
and extend that if possible.

This generalizes the special case custom lowering of extloads
R600 has been using to work around this problem.

This also happens to fix a bug that would incorrectly use more
aligned loads than should be used.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225925 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 01:35:17 +00:00
Hal Finkel
ade705c6e5 Revert "r225811 - Revert "r225808 - [PowerPC] Add StackMap/PatchPoint support""
This re-applies r225808, fixed to avoid problems with SDAG dependencies along
with the preceding fix to ScheduleDAGSDNodes::RegDefIter::InitNodeNumDefs.
These problems caused the original regression tests to assert/segfault on many
(but not all) systems.

Original commit message:

This commit does two things:

 1. Refactors PPCFastISel to use more of the common infrastructure for call
    lowering (this lets us take advantage of this common code for lowering some
    common intrinsics, stackmap/patchpoint among them).

 2. Adds support for stackmap/patchpoint lowering. For the most part, this is
    very similar to the support in the AArch64 target, with the obvious differences
    (different registers, NOP instructions, etc.). The test cases are adapted
    from the AArch64 test cases.

One difference of note is that the patchpoint call sequence takes 24 bytes, so
you can't use less than that (on AArch64 you can go down to 16). Also, as noted
in the docs, we take the patchpoint address to be the actual code address
(assuming the call is local in the TOC-sharing sense), which should yield
higher performance than generating the full cross-DSO indirect-call sequence
and is likely just as useful for JITed code (if not, we'll change it).

StackMaps and Patchpoints are still marked as experimental, and so this support
is doubly experimental. So go ahead and experiment!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225909 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 01:07:51 +00:00
JF Bastien
21befa7761 Insert random noops to increase security against ROP attacks (llvm)
A pass that adds random noops to X86 binaries to introduce diversity with the goal of increasing security against most return-oriented programming attacks.

Command line options:
  -noop-insertion // Enable noop insertion.
  -noop-insertion-percentage=X // X% of assembly instructions will have a noop prepended (default: 50%, requires -noop-insertion)
  -max-noops-per-instruction=X // Randomly generate up to X noops per instruction, i.e. roll the dice X times with the probability set above (default: 1). This doesn't guarantee X noop instructions.

In addition, the following 'quick switch' in clang enables basic diversity using default settings (currently: noop insertion and schedule randomization; it is intended to be extended in the future).
  -fdiversify

This is the llvm part of the patch.
clang part: D3393

http://reviews.llvm.org/D3392
Patch by Stephen Crane (@rinon)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225908 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 01:07:26 +00:00
Reid Kleckner
504fa89c8e CodeGen support for x86_64 SEH catch handlers in LLVM
This adds handling for ExceptionHandling::MSVC, used by the
x86_64-pc-windows-msvc triple. It assumes that filter functions have
already been outlined in either the frontend or the backend. Filter
functions are used in place of the landingpad catch clause type info
operands. In catch clause order, the first filter to return true will
catch the exception.

The C specific handler table expects the landing pad to be split into
one block per handler, but LLVM IR uses a single landing pad for all
possible unwind actions. This patch papers over the mismatch by
synthesizing single instruction BBs for every catch clause to fill in
the EH selector that the landing pad block expects.

Missing functionality:
- Accessing data in the parent frame from outlined filters
- Cleanups (from __finally) are unsupported, as they will require
  outlining and parent frame access
- Filter clauses are unsupported, as there's no clear analogue in SEH

In other words, this is the minimal set of changes needed to write IR to
catch arbitrary exceptions and resume normal execution.

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D6300

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225904 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-14 01:05:27 +00:00
Adam Nemet
656da67bc0 [AVX512] Add 16x32 unpck tests as well
Forgot this from r225838.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225850 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 23:27:55 +00:00
Adam Nemet
f38f71d8a0 Fix function names in tests from r225838.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225840 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 22:40:15 +00:00
Adam Nemet
293f71ddd2 [AVX512] Unpack support in new shuffle lowering
This now handles both 32 and 64-bit element sizes.

In this version, the test are in vector-shuffle-512-v8.ll, canonicalized by
Chandler's update_llc_test_checks.py.

Part of <rdar://problem/17688758>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225838 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 22:20:18 +00:00
Matt Arsenault
8603a3d1c5 R600: Implement getRsqrtEstimate
Only do this for f32, since I'm unclear both on what this is expecting
for the refinement steps in terms of accuracy, and on what the
f64 instruction actually provides.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225827 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 20:53:18 +00:00
Matt Arsenault
9e495c518c R600: Make cttz / ctlz cheap to speculate
Speculating things is generally good. SI+ has instructions for these
for 32-bit values. This is still probably better even with the expansion
for 64-bit values, although it is odd that this callback doesn't have
the size as a parameter.
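
A sketch of the kind of guarded count that can now be speculated into a select:

  define i32 @count(i32 %x) {
  entry:
    %is0 = icmp eq i32 %x, 0
    br i1 %is0, label %done, label %nonzero

  nonzero:
    ; the zero-is-undef flag is safe here because the branch guards it
    %c = call i32 @llvm.ctlz.i32(i32 %x, i1 true)
    br label %done

  done:
    %r = phi i32 [ 32, %entry ], [ %c, %nonzero ]
    ret i32 %r
  }

  declare i32 @llvm.ctlz.i32(i32, i1)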

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225822 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 19:46:48 +00:00
Ulrich Weigand
81d2500685 Use the integrated assembler as default on SystemZ
This was already done in clang; this commit now uses the integrated
assembler as the default when using LLVM tools directly.

A number of test cases deliberately using an invalid instruction in
inline asm now have to use -no-integrated-as.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225820 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 19:45:16 +00:00
Ulrich Weigand
5a4c26e7bc Use the integrated assembler as default on PowerPC
This was already done in clang; this commit now uses the integrated
assembler as the default when using LLVM tools directly.

A number of test cases using inline asm had to be adapted, either by
updating the expected output, or by using -no-integrated-as (for such
tests that deliberately use an invalid instruction in inline asm).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225819 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 19:43:45 +00:00
Hal Finkel
ea55eceaed Revert "r225808 - [PowerPC] Add StackMap/PatchPoint support"
Reverting this while I investigate buildbot failures (segfaulting in
GetCostForDef at ScheduleDAGRRList.cpp:314).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225811 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 18:25:05 +00:00
Hal Finkel
232f393466 [PowerPC] Add StackMap/PatchPoint support
This commit does two things:

 1. Refactors PPCFastISel to use more of the common infrastructure for call
    lowering (this lets us take advantage of this common code for lowering some
    common intrinsics, stackmap/patchpoint among them).

 2. Adds support for stackmap/patchpoint lowering. For the most part, this is
    very similar to the support in the AArch64 target, with the obvious differences
    (different registers, NOP instructions, etc.). The test cases are adapted
    from the AArch64 test cases.

One difference of note is that the patchpoint call sequence takes 24 bytes, so
you can't use less than that (on AArch64 you can go down to 16). Also, as noted
in the docs, we take the patchpoint address to be the actual code address
(assuming the call is local in the TOC-sharing sense), which should yield
higher performance than generating the full cross-DSO indirect-call sequence
and is likely just as useful for JITed code (if not, we'll change it).

StackMaps and Patchpoints are still marked as experimental, and so this support
is doubly experimental. So go ahead and experiment!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225808 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 17:48:12 +00:00
Jozef Kolek
abdc0284ff [mips][microMIPS] Fix issue with 16b instructions in jr instruction delay slot
16-bit instructions are not allowed in the jr delay slot. The same holds for
PseudoIndirectBranch and PseudoReturn.

Differential Revision: http://reviews.llvm.org/D6815


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225798 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 15:59:17 +00:00
Reid Kleckner
d8f69a7201 Rename llvm.recoverframeallocation to llvm.framerecover
This name is less descriptive, but it sort of puts things in the
'llvm.frame...' namespace, relating it to frameallocate and
frameaddress. It also avoids using "allocate" and "allocation" together.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225752 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 01:51:34 +00:00
Reid Kleckner
221a7075cf Add the llvm.frameallocate and llvm.recoverframeallocation intrinsics
These intrinsics allow multiple functions to share a single stack
allocation from one function's call frame. The function with the
allocation may only perform one allocation, and it must be in the entry
block.

Functions accessing the allocation call llvm.recoverframeallocation with
the function whose frame they are accessing and a frame pointer from an
active call frame of that function.

These intrinsics are very difficult to inline correctly, so the
intention is that they be introduced rarely, or at least very late
during EH preparation.
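
A sketch of how the pair composes (signatures as I read them from this change;
treat the details as assumptions):

  declare i8* @llvm.frameallocate(i32)
  declare i8* @llvm.recoverframeallocation(i8*, i8*)

  define void @parent() {
  entry:
    ; the single shared allocation, made in the entry block as required
    %buf = call i8* @llvm.frameallocate(i32 8)
    ret void
  }

  define void @child(i8* %parent_fp) {
    ; recover the same slot given @parent and one of its live frame pointers
    %buf = call i8* @llvm.recoverframeallocation(i8* bitcast (void ()* @parent to i8*), i8* %parent_fp)
    ret void
  }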

Reviewers: echristo, andrew.w.kaylor

Differential Revision: http://reviews.llvm.org/D6493

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225746 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 00:48:10 +00:00
Matt Arsenault
29ad7506e1 Combine fcmp + select to fminnum / fmaxnum if no nans and legal
Also require unsafe FP math for now, since there isn't a way to
test for signed zeros.
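
The shape of the pattern, as a sketch:

  ; with no NaNs (and, per the note above, unsafe FP math) and a legal
  ; fminnum, this fcmp + select pair becomes a single fminnum
  define float @min(float %a, float %b) {
    %cmp = fcmp olt float %a, %b
    %r = select i1 %cmp, float %a, float %b
    ret float %r
  }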

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225744 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-13 00:43:00 +00:00
Reid Kleckner
1ec250a32f musttail: Only set the inreg flag for fastcall and vectorcall
Otherwise we'll attempt to forward ECX, EDX, and EAX for cdecl and
stdcall thunks, leaving us with no scratch registers for indirect call
targets.

Fixes PR22052.
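
A sketch of the kind of thunk involved (hypothetical names; IR syntax of this
period):

  @impl = external global void (i32, i32)*

  define x86_stdcallcc void @thunk(i32 %a, i32 %b) {
    ; with inreg no longer forced for stdcall, ECX/EDX stay free to hold %fptr
    %fptr = load void (i32, i32)** @impl
    musttail call x86_stdcallcc void %fptr(i32 %a, i32 %b)
    ret void
  }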

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225729 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-12 23:28:23 +00:00