Commit Graph

29205 Commits

Ahmed Bougacha
995f4f8fd1 [CodeGen][IfCvt] Don't re-ifcvt blocks with unanalyzable terminators.
If we couldn't analyze a predicated block's terminator (e.g., it's an
indirectbr, or some other weirdness), we can't safely re-if-convert the
block, because we can't tell whether the predicated terminator can
fall through (it does).

Previously, we would completely ignore the fallthrough successor. In
the added testcase, this means we used to generate:

    ...
  @ %entry:
    cmp   r5, #21
    ittt  ne
  @ %cc1f:
    cmpne r7, #42
  @ %cc2t:
    strne.w       r5, [r8]
    movne pc, r10
  @ %cc1t:
    ...

Whereas the successor of %cc1f was originally %bb1.
With the fix, we get the correct:

    ...
  @ %entry:
    cmp   r5, #21
    itt   eq
  @ %cc1t:
    streq.w       r5, [r11]
    moveq pc, r0
  @ %cc1f:
    cmp   r7, #42
    itt   ne
  @ %cc2t:
    strne.w       r5, [r8]
    movne pc, r10
  @ %bb1:
    ...

rdar://20192768
Differential Revision: http://reviews.llvm.org/D8509


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232872 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-21 01:23:15 +00:00
Ahmed Bougacha
165bd1733b [AArch64] Prefer UZP for concat_vector of illegal truncs.
Follow-up to r232459: prefer a UZP shuffle to the intermediate truncs.
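
(Illustrative aside, not part of the patch: on a little-endian target,
truncating each 32-bit lane and concatenating the results is the same as a
uzp1-style "take the even-indexed 16-bit elements" shuffle of the two inputs.
A small C++ sketch of that equivalence:)

  #include <cassert>
  #include <cstdint>
  #include <cstring>

  int main() {
    uint32_t a[4] = {0x11112222, 0x33334444, 0x55556666, 0x77778888};
    uint32_t b[4] = {0xAAAABBBB, 0xCCCCDDDD, 0xEEEEFFFF, 0x01020304};

    // concat_vector of the two truncs: keep the low 16 bits of every lane.
    uint16_t expected[8];
    for (int i = 0; i < 4; ++i) expected[i] = static_cast<uint16_t>(a[i]);
    for (int i = 0; i < 4; ++i) expected[4 + i] = static_cast<uint16_t>(b[i]);

    // uzp1-style shuffle: view both inputs as 16-bit lanes (little endian)
    // and keep only the even-indexed elements of each.
    uint16_t a16[8], b16[8], uzp1[8];
    std::memcpy(a16, a, sizeof a);
    std::memcpy(b16, b, sizeof b);
    for (int i = 0; i < 4; ++i) uzp1[i] = a16[2 * i];
    for (int i = 0; i < 4; ++i) uzp1[4 + i] = b16[2 * i];

    assert(std::memcmp(expected, uzp1, sizeof expected) == 0);
    return 0;
  }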


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232871 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-21 01:08:39 +00:00
Yunzhong Gao
2c11db2e64 Tell lit.cfg about more Windows triples.
For example, the host triple on my 64-bit PC is x86_64-pc-windows-msvc.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232854 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 22:08:40 +00:00
Sanjay Patel
be9ee96926 [X86, AVX] instcombine common cases of vperm2* intrinsics into shuffles
vperm2* intrinsics are just shuffles. 
In a few special cases, they're not even shuffles.

Optimizing intrinsics in InstCombine is better than
handling this in the front-end for at least two reasons:

1. Optimizing custom-written SSE intrinsic code at -O0 makes vector coders
   really angry (and so I have regrets about some patches from last week).

2. Mask conversion logic in header files is hard to write and
   subsequently hard to read.

There are a couple of TODOs in this patch to complete this optimization.
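
(Illustrative aside, not part of the patch: for the 256-bit
vperm2f128/vperm2i128 form, the imm8-to-shuffle-mask translation can be
sketched roughly as below, treating the two sources as one concatenated
vector of four 128-bit lanes; a set bit 3 or 7 zeroes the corresponding half.
This is a hedged model of the encoding, not the InstCombine code itself.)

  #include <array>
  #include <cstdint>
  #include <cstdio>

  // Build a 4-element shuffle mask (for a <4 x double> result) from a
  // vperm2* imm8.  Elements are indexed into {src1[0..3], src2[0..3]};
  // -1 marks an element zeroed by the immediate.
  static std::array<int, 4> vperm2Mask(uint8_t imm) {
    std::array<int, 4> mask{};
    for (int half = 0; half < 2; ++half) {
      uint8_t ctrl = (imm >> (4 * half)) & 0xF;
      for (int i = 0; i < 2; ++i) {
        if (ctrl & 0x8)
          mask[2 * half + i] = -1;                   // zeroed half
        else
          mask[2 * half + i] = (ctrl & 0x3) * 2 + i; // pick a 128-bit lane
      }
    }
    return mask;
  }

  int main() {
    // imm = 0x31: low half <- high lane of src1, high half <- high lane of src2.
    for (int m : vperm2Mask(0x31))
      std::printf("%d ", m);  // prints: 2 3 6 7
    std::printf("\n");
    return 0;
  }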

Differential Revision: http://reviews.llvm.org/D8486



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232852 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 21:47:56 +00:00
Andrew Kaylor
e0e1c1d94d Fixing a bug with WinEH PHI handling
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232851 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 21:42:54 +00:00
Sanjay Patel
39110ecd35 [X86] Prefer blendps over insertps codegen for one special case
With this patch, for this one exact case, we'll generate:

  blendps %xmm0, %xmm1, $1

instead of:

  insertps %xmm0, %xmm1, $0

If there's a memory operand available for load folding and we're
optimizing for size, we'll still generate the insertps.

The detailed performance data motivation for this may be found in D7866; 
in summary, blendps has 2-3x throughput vs. insertps on widely used chips.
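
(Illustrative aside, not part of the patch: at the intrinsic level, the
equivalence for this exact case, copying element 0 of the second source into
element 0 of the first, can be checked with a small SSE4.1 sketch.)

  #include <smmintrin.h>  // SSE4.1: _mm_blend_ps, _mm_insert_ps
  #include <cassert>
  #include <cstring>

  int main() {
    const __m128 a = _mm_setr_ps(1.0f, 2.0f, 3.0f, 4.0f);
    const __m128 b = _mm_setr_ps(5.0f, 6.0f, 7.0f, 8.0f);

    // blendps imm 1: element 0 from b, elements 1-3 from a.
    __m128 blend = _mm_blend_ps(a, b, 0x1);
    // insertps imm 0: insert element 0 of b into element 0 of a, no zeroing.
    __m128 insert = _mm_insert_ps(a, b, 0x00);

    float x[4], y[4];
    _mm_storeu_ps(x, blend);
    _mm_storeu_ps(y, insert);
    assert(std::memcmp(x, y, sizeof x) == 0);  // identical results
    return 0;
  }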

Differential Revision: http://reviews.llvm.org/D8332



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232850 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 21:19:52 +00:00
Rafael Espindola
d80979b25d Don't declare all text sections at the start of the .s
The code this patch removes was there to make sure the text sections went
before the dwarf sections. That is necessary because MachO uses offsets
relative to the start of the file, so adding a section can change relaxations.

The dwarf sections were being printed at the start just to produce symbols
pointing at the start of those sections.

The underlying issue was fixed in r231898. The dwarf sections are now printed
when they are about to be used, which is after we printed the text sections.

To make sure we don't regress, the patch makes the MachO streamer assert
if CodeGen puts anything unexpected after the DWARF sections.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232842 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 20:00:01 +00:00
Duncan P. N. Exon Smith
e9994ab8a7 Bugpoint: Fix invalid 'inlinedAt:' references in testcase
These are causing crashes in `DebugInfoFinder` after a WIP patch to
increase strictness of `DIDescriptor` accessors.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232839 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 19:51:34 +00:00
Rafael Espindola
82759c6cac Reorganize the x86 ELF relocation selection logic.
The main differences are:

* Split into 32 and 64 bit functions.
* First switch on the Modifier so that we have only one switch that is not
  fully covered.
* Map the fixup kind first to an x86_64 (or i386) specific enum, to make
  it easy to handle cases like X86::reloc_riprel_4byte_movq_load.
* Switch on IsPCRel last, which reduces code duplication.

Fixes pr22308.
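
(Illustrative aside, not part of the patch: the overall shape of the new
selection logic is roughly the schematic below, written with stand-in enums
rather than the real MC types.)

  #include <cstdio>

  // Stand-in types for illustration only; the real code uses MC fixup kinds,
  // MCSymbolRefExpr modifiers and the ELF relocation enums.
  enum class Modifier { None, GotPCRel, Plt };
  enum class FixupKind { Data4, Data8, RipRel4Byte, RipRel4ByteMovqLoad };
  enum class RelType64 { Rel32, Rel64 };

  // Step 1: map the fixup kind to a target-specific type, so special fixups
  // like reloc_riprel_4byte_movq_load collapse onto the common cases.
  static RelType64 getType64(FixupKind Kind) {
    switch (Kind) {
    case FixupKind::Data8:
      return RelType64::Rel64;
    case FixupKind::Data4:
    case FixupKind::RipRel4Byte:
    case FixupKind::RipRel4ByteMovqLoad:
      return RelType64::Rel32;
    }
    return RelType64::Rel32;
  }

  // Step 2: switch on the modifier (the only switch that is not fully
  // covered), and switch on IsPCRel last.
  static const char *getRelocType64(Modifier Mod, FixupKind Kind, bool IsPCRel) {
    RelType64 Type = getType64(Kind);
    switch (Mod) {
    case Modifier::None:
      if (IsPCRel)
        return Type == RelType64::Rel64 ? "R_X86_64_PC64" : "R_X86_64_PC32";
      return Type == RelType64::Rel64 ? "R_X86_64_64" : "R_X86_64_32";
    case Modifier::GotPCRel:
      return "R_X86_64_GOTPCREL";
    case Modifier::Plt:
      return "R_X86_64_PLT32";
    }
    return "R_X86_64_NONE";
  }

  int main() {
    std::printf("%s\n",
                getRelocType64(Modifier::None, FixupKind::RipRel4Byte, true));
    return 0;  // prints: R_X86_64_PC32
  }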

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232837 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 19:48:54 +00:00
Duncan P. N. Exon Smith
d94d5bbbf9 Verifier: Check that !dbg attachments have the right type
A WIP patch makes `DIDescriptor` accessors more strict, which in turn
causes the `DebugInfoFinder` to crash on wrongly typed `!dbg`
attachments.  Catch that error up front in
`Verifier::visitInstruction()`.

Also remove a test that we "handle" invalid `!dbg` attachments, added
back in r99938.  We don't want to handle those anymore.

Note: I'm *not* recursing and verifying the debug info graph reachable
from this node; that work is already done by `verifyDebugInfo()`.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232834 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 19:26:58 +00:00
Duncan P. N. Exon Smith
1188480049 Rewrite test/Feature/md_on_instruction.ll
This test is supposed to be testing whether metadata attachments to
instructions work, but it was using invalid debug info to do so.  (This
was causing assertion failures in the `DebugInfoFinder` with a WIP patch
to be more strict about `DIDescriptor` accessors.)

Rather than fix the debug info -- which is better tested elsewhere --
just test the IR feature directly.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232828 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 18:34:53 +00:00
Wei Mi
8979e3f69b Correctly estimate SROA savings for store operands in inline cost analysis.
When estimating SROA savings, we want to see if an address is derived
off an alloca in the caller. For store instructions, operand 1 is the
address operand, but the current code uses operand 0.  Use
getPointerOperand for loads and stores to fix this.
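
(Illustrative aside, not part of the patch: a minimal sketch of the
operand-index pitfall against the LLVM C++ API, not the actual inline-cost
code.)

  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // For a store, operand 0 is the stored value and operand 1 is the address;
  // getPointerOperand() hides that asymmetry for both loads and stores.
  static Value *getAddressOperand(Instruction *I) {
    if (auto *LI = dyn_cast<LoadInst>(I))
      return LI->getPointerOperand();  // same as LI->getOperand(0)
    if (auto *SI = dyn_cast<StoreInst>(I))
      return SI->getPointerOperand();  // same as SI->getOperand(1), not 0
    return nullptr;
  }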

Patch by Easwaran Raman.
http://reviews.llvm.org/D8425


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232827 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 18:33:12 +00:00
John Brawn
151a5da534 [ARM] Fix handling of thumb1 out-of-range frame offsets
LocalStackSlotPass assumes that isFrameOffsetLegal doesn't change its
answer when the base register changes. Unfortunately this isn't true
in thumb1, where SP-based loads allow a larger offset than
non-SP-based loads, and this causes the base register reuse code to
generate instructions that are unencodable, causing an assertion
failure. 

Solve this by adding a BaseReg parameter to isFrameOffsetLegal, which
ARMBaseRegisterInfo can then make use of to give the correct answer. 

Differential Revision: http://reviews.llvm.org/D8419


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232825 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 17:20:07 +00:00
Daniel Jasper
70b146b25e [MBP] Don't outline short optional branches
With the option -outline-optional-branches, LLVM will place optional
branches out of line (more details in r231230).

With this patch, this is not done for short optional branches. A short
optional branch is a branch containing a single block with an
instruction count below a certain threshold (defaulting to 3).
Everything is still guarded by -outline-optional-branches.

Outlining a short branch can't significantly improve code locality. It
can, however, decrease performance because of the additional jmp,
especially in cases where the optional branch is hot. This fixes a
compile-time regression I observed in a benchmark.

Review: http://reviews.llvm.org/D8108

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232802 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 10:00:37 +00:00
Tom Stellard
4aee931a46 R600/SI: Add missing CHECK-LABEL lines to a test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232797 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 03:12:42 +00:00
Nick Lewycky
bea9b06e84 When simplifying a SCEV truncate by distributing, consider it a simplification to replace a cast, even if we end up with a trunc around the term. Fixes PR22960!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232794 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-20 02:25:00 +00:00
Peter Collingbourne
aa01400663 test: Make a start on a test suite for libLTO.
This works in a similar way to the gold plugin tests. We search for a compatible
linker on $PATH and use it to run tests against our just-built libLTO. To start
with, test the just added opt level functionality.

Differential Revision: http://reviews.llvm.org/D8472

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232785 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 23:55:38 +00:00
Owen Anderson
8154ef7589 Fix a nasty bug in DAGCombine of STORE nodes.
This is very related to the bug fixed in r174431.  The problem is that
SelectionDAG does not include alignment in the uniquing of loads and
stores.  When an otherwise no-op DAGCombine would increase the alignment
of a load or store, the original node would be returned (with the
alignment increased), which would cause the node not to be processed by
any further DAGCombines.

I don't have a direct testcase for this that manifests on an in-tree
target, but I did see some noise in the tests for other targets and have
updated them for it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232780 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 22:48:57 +00:00
Reid Kleckner
c39212a2fc WinEH: Make llvm.eh.actions emission match the EH docs
This switches the sense of the i32 values and updates the test cases.

We can also use CHECK-SAME to clean up some tests, and reduce the visual
noise from bitcasts.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232774 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 22:31:02 +00:00
Sanjay Patel
11d77223a5 [X86, AVX] use blends instead of insert128 with index 0
Another case of x86-specific shuffle strength reduction:
avoid generating insert*128 instructions with index 0 because
they are slower than their non-lane-changing blend equivalents.

Shuffle lowering already catches most of these cases, but
the zero vector case and some other paths such as in the
modified test in vector-shuffle-256-v32.ll were getting
through.
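
(Illustrative aside, not part of the patch: a rough AVX intrinsics-level
sketch of the zero-index equivalence.)

  #include <immintrin.h>  // AVX
  #include <cassert>
  #include <cstring>

  int main() {
    const __m256 a = _mm256_setr_ps(0, 1, 2, 3, 4, 5, 6, 7);
    const __m128 b = _mm_setr_ps(10, 11, 12, 13);

    // vinsertf128 with index 0: replace the low 128-bit lane of a with b.
    __m256 ins = _mm256_insertf128_ps(a, b, 0);

    // Equivalent non-lane-crossing blend: widen b (upper half never read)
    // and take elements 0-3 from it, elements 4-7 from a.
    __m256 blend = _mm256_blend_ps(a, _mm256_castps128_ps256(b), 0x0F);

    float x[8], y[8];
    _mm256_storeu_ps(x, ins);
    _mm256_storeu_ps(y, blend);
    assert(std::memcmp(x, y, sizeof x) == 0);
    return 0;
  }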

Differential Revision: http://reviews.llvm.org/D8366


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232773 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 22:29:40 +00:00
Peter Collingbourne
58e8e3505d LowerBitSets: Avoid reusing byte set addresses.
Each use of the byte array uses a different alias. This makes the
backend less likely to reuse previously computed byte array addresses,
improving the security of the CFI mechanism based on this pass.

Differential Revision: http://reviews.llvm.org/D8455

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232770 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 22:02:10 +00:00
Peter Collingbourne
416d8ecf80 libLTO, llvm-lto, gold: Introduce flag for controlling optimization level.
This change also introduces a link-time optimization level of 1. This
optimization level runs only the globaldce pass, as well as cleanup passes for
passes that run at -O0, specifically simplifycfg, which cleans up lowerbitsets.

http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150316/266951.html

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232769 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 22:01:00 +00:00
Krzysztof Parzyszek
8962c01fbf Unxfail test/CodeGen/Generic/vector.ll now passing on Hexagon
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232758 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 20:22:17 +00:00
Peter Collingbourne
5e86804089 gold: Make powerpc support optional for the tests.
Differential Revision: http://reviews.llvm.org/D8400

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232744 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 18:23:31 +00:00
Artem Belevich
97f4d01ee1 Add support for __nvvm_reflect changes in libdevice in CUDA-7.0
Summary:
CUDA 7.0's libdevice uses slightly different IR to call __nvvm_reflect,
and that triggers an assertion in the nvvm_reflect optimization pass. This
change allows the nvvm_reflect pass to deal with both the old and new ways to
pass an argument to __nvvm_reflect.

Test Plan: ninja check-all

Reviewers: eliben, echristo

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D8399

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232732 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 17:05:35 +00:00
Krzysztof Parzyszek
07121ea974 [Hexagon] Add support for vector instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232728 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 16:33:08 +00:00
Daniel Jasper
3baea2951d [InstCombine] Don't fold a GEP into itself through a PHI node
This can only occur (I think) through the back-edge of the loop.

However, folding a GEP into itself means that the value of the previous
iteration needs to be stored in the meantime, thus requiring an
additional register to be live, without actually achieving
anything (the gep still needs to be executed once per loop iteration).

The attached test case is derived from:
  typedef unsigned uint32;
  typedef unsigned char uint8;
  inline uint8 *f(uint32 value, uint8 *target) {
    while (value >= 0x80) {
      value >>= 7;
      ++target;
    }
    ++target;
    return target;
  }
  uint8 *g(uint32 b, uint8 *target) {
    target = f(b, f(42, target));
    return target;
  }

What happens is that the GEP stored in incptr2 is folded into itself
through the loop's back-edge and the phi-node stored in loopptr,
effectively incrementing the ptr by "2" in each iteration instead of "1".

In this case, it is actually increasing the number of GEPs required as
the GEP before the loop can't be folded away anymore. For comparison:

With this patch:
  define i8* @test4(i32 %value, i8* %buffer) {
  entry:
    %cmp = icmp ugt i32 %value, 127
    br i1 %cmp, label %loop.header, label %exit

  loop.header:                                      ; preds = %entry
    br label %loop.body

  loop.body:                                        ; preds = %loop.body, %loop.header
    %buffer.pn = phi i8* [ %buffer, %loop.header ], [ %loopptr, %loop.body ]
    %newval = phi i32 [ %value, %loop.header ], [ %shr, %loop.body ]
    %loopptr = getelementptr inbounds i8, i8* %buffer.pn, i64 1
    %shr = lshr i32 %newval, 7
    %cmp2 = icmp ugt i32 %newval, 16383
    br i1 %cmp2, label %loop.body, label %loop.exit

  loop.exit:                                        ; preds = %loop.body
    br label %exit

  exit:                                             ; preds = %loop.exit, %entry
    %0 = phi i8* [ %loopptr, %loop.exit ], [ %buffer, %entry ]
    %incptr3 = getelementptr inbounds i8, i8* %0, i64 2
    ret i8* %incptr3
  }

Without this patch:
  define i8* @test4(i32 %value, i8* %buffer) {
  entry:
    %incptr = getelementptr inbounds i8, i8* %buffer, i64 1
    %cmp = icmp ugt i32 %value, 127
    br i1 %cmp, label %loop.header, label %exit

  loop.header:                                      ; preds = %entry
    br label %loop.body

  loop.body:                                        ; preds = %loop.body, %loop.header
    %0 = phi i8* [ %buffer, %loop.header ], [ %loopptr, %loop.body ]
    %loopptr = phi i8* [ %incptr, %loop.header ], [ %incptr2, %loop.body ]
    %newval = phi i32 [ %value, %loop.header ], [ %shr, %loop.body ]
    %shr = lshr i32 %newval, 7
    %incptr2 = getelementptr inbounds i8, i8* %0, i64 2
    %cmp2 = icmp ugt i32 %newval, 16383
    br i1 %cmp2, label %loop.body, label %loop.exit

  loop.exit:                                        ; preds = %loop.body
    br label %exit

  exit:                                             ; preds = %loop.exit, %entry
    %ptr2 = phi i8* [ %incptr2, %loop.exit ], [ %incptr, %entry ]
    %incptr3 = getelementptr inbounds i8, i8* %ptr2, i64 1
    ret i8* %incptr3
  }

Review: http://reviews.llvm.org/D8245

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232718 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 11:05:08 +00:00
Rafael Espindola
2c275b1f80 Note that we don't support COFF on PPC.
Should bring back the windows bots.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232701 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 02:40:56 +00:00
Justin Bogner
cc690d62e3 llvm-cov: Only emit colour by default if the output is a tty
This replaces the -no-color flag with a -color={auto|always|never}
option, with auto as the default, which is much saner.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232693 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-19 00:02:23 +00:00
Simon Pilgrim
4c38456ead Fixed failing test due to missing target triple causing different results on different buildbots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232685 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 22:51:45 +00:00
Rafael Espindola
b354ef31cf Teach getDefaultFormat that we only support ELF on some architectures.
This should bring the windows bots back.

It is a bit ugly, but it is better than what we had before: The triple would
say that the object format was COFF, but llc/llvm-mc would produce an ELF.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232683 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 22:19:16 +00:00
Simon Pilgrim
ab18d0e7cb [X86][SSE] Avoid scalarization of v2i64 vector shifts (REAPPLIED)
Fixed broken tests.

Differential Revision: http://reviews.llvm.org/D8416

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232682 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 22:18:51 +00:00
Eric Christopher
3932b367d7 Revert "[X86][SSE] Avoid scalarization of v2i64 vector shifts" as it
appears to have broken tests/bots.

This reverts commit r232660.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232670 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 21:01:00 +00:00
Reid Kleckner
01a1af4fe4 Use WinEHPrepare to outline SEH finally blocks
No outlining is necessary for SEH catch blocks. Use the blockaddr of the
handler in place of the usual outlined function.

Reviewers: majnemer, andrew.w.kaylor

Differential Revision: http://reviews.llvm.org/D8370

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232664 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 20:26:53 +00:00
Simon Pilgrim
0ee70a1554 [X86][SSE] Avoid scalarization of v2i64 vector shifts
Currently v2i64 vector shifts (non-equal shift amounts) are scalarized, costing 4 x extract, 2 x x86 shifts and 2 x insert instructions - and it gets even more awkward on 32-bit targets.

This patch separately shifts the vector by both shift amounts and then shuffles the partial results back together, costing 2 x shuffles and 2 x sse-shifts instructions (+ 2 movs on pre-AVX hardware).

Note - this patch only improves the SHL / LSHR logical shifts as only these are supported in SSE hardware.
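
(Illustrative aside, not part of the patch: a minimal SSE2 intrinsics sketch of the "shift by each amount, then shuffle the halves back together" idea for SHL; the patch itself does this on SelectionDAG nodes.)

  #include <emmintrin.h>  // SSE2
  #include <cstdint>
  #include <cstdio>

  // Shift v's two 64-bit lanes left by amt[0] and amt[1] respectively.
  static __m128i shl_v2i64(__m128i v, __m128i amt) {
    // Shift *both* lanes by amt[0], then both lanes by amt[1]...
    __m128i lo = _mm_sll_epi64(v, amt);
    __m128i hi = _mm_sll_epi64(v, _mm_unpackhi_epi64(amt, amt));
    // ...then recombine: lane 0 from the first result, lane 1 from the second.
    return _mm_unpacklo_epi64(lo, _mm_unpackhi_epi64(hi, hi));
  }

  int main() {
    __m128i v = _mm_set_epi64x(0x10, 0x1);  // lanes: [0x1, 0x10]
    __m128i amt = _mm_set_epi64x(4, 8);     // shift lane 0 by 8, lane 1 by 4
    uint64_t out[2];
    _mm_storeu_si128(reinterpret_cast<__m128i *>(out), shl_v2i64(v, amt));
    std::printf("%llx %llx\n", (unsigned long long)out[0],
                (unsigned long long)out[1]);  // prints: 100 100
    return 0;
  }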

Differential Revision: http://reviews.llvm.org/D8416

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232660 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 19:35:31 +00:00
Matthias Braun
8b41add6ca TableGen: Fix register class lane masks being too conservative.
When calculating the lanemask of a register class we have to include the
masks of subregisters supported by any of the class members, not just
the ones supported by all class members.
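
(Illustrative aside, not part of the patch: in other words, the class's lane
mask is the union of the members' sub-register lane masks, not the
intersection.)

  #include <cassert>
  #include <cstdint>

  int main() {
    // Hypothetical lane masks of the subregisters supported by two members
    // of the same register class.
    const uint32_t MemberA = 0b0011;
    const uint32_t MemberB = 0b0101;

    // The class mask must cover lanes supported by *any* member...
    const uint32_t ClassMask = MemberA | MemberB;  // 0b0111
    // ...not only the lanes supported by *all* members (too conservative).
    assert(ClassMask != (MemberA & MemberB));      // 0b0001
    return 0;
  }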

This fixes problems when coalescing towards a subclass with additional
subregisters available.

The attached testcase works fine as is, but does crash if you enable
subregister liveness on x86 without this change applied.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232652 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 17:56:09 +00:00
Rafael Espindola
df600f8049 Handle X86::reloc_riprel_4byte in 32 bits mode.
We can get there with .code64.

Fixes pr22349.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232651 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 17:33:40 +00:00
Sanjay Patel
22a94d59d9 Use utils/update_llc_test_checks.py to update all CHECKs
The checks here were so vague that we could nuke intrinsics
from existence and still pass the test because we'd match
the function name.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232647 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 16:38:44 +00:00
Krzysztof Parzyszek
f795de029a [Hexagon] Intrinsics for circular and bit-reversed loads and stores
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232645 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 16:23:44 +00:00
Sanjay Patel
4795cb202c fixed to test features, not CPU model
The 'vmovntdq' check was only passing due to a fluke in
SandyBridge codegen that splits 32-byte stores in half,
but that meant that the test was not correctly checking
for the 32-byte store that we thought we were generating.

The lax checking in this file will be addressed in
another commit. There are bigger problems here.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232644 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 16:07:10 +00:00
Krzysztof Parzyszek
d5cb4a90e5 [Hexagon] Handle ENDLOOP0 in InsertBranch and RemoveBranch
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232643 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 15:56:43 +00:00
Sid Manning
b243a3a556 Add support for .ifnes pseudo-op.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232636 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 14:20:54 +00:00
Daniel Jasper
bf2e6a6be2 Change test to accept an additional critical edge split.
The two hot blocks are right next to each other and I verified that
there is no performance regression by compressing/uncompressing some
files with a minigzip built with the different options.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232629 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 12:45:45 +00:00
John Brawn
0328ca6cd7 [ARM] Align stack objects passed to memory intrinsics
Memcpy and other memory intrinsics typically try to use LDM/STM if
the source and target addresses are 4-byte aligned. In CodeGenPrepare,
look for calls to memory intrinsics and, if the object is on the
stack, 4-byte align it if it's large enough that we expect memcpy
would want to use LDM/STM to copy it.

Differential Revision: http://reviews.llvm.org/D7908


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232627 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 12:01:59 +00:00
John Brawn
bf60cd0751 Add missing newline to end of test file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232626 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 10:45:12 +00:00
Josh Magee
cbaefea0c0 Add testcases for BEXTR.
These BEXTR cases are a check for the 64-bit load form and two negative cases where the bit range is non-contiguous. From a private patch equivalent to r189742/PR17028.
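
(Illustrative aside, not part of the patch: BEXTR extracts one contiguous bit
field, roughly (src >> start) & ((1 << length) - 1), which is why a
non-contiguous mask has no single-BEXTR equivalent.)

  #include <cassert>
  #include <cstdint>

  // Software model of BEXTR: extract `len` bits of `src` starting at `start`.
  static uint64_t bextr(uint64_t src, unsigned start, unsigned len) {
    if (len == 0 || start >= 64)
      return 0;
    uint64_t field = src >> start;
    return len >= 64 ? field : field & ((uint64_t{1} << len) - 1);
  }

  int main() {
    const uint64_t x = 0x123456789ABCDEF0;
    // A contiguous bit range maps onto a single BEXTR.
    assert(((x >> 8) & 0xFFFF) == bextr(x, 8, 16));
    // A mask like 0x0F0F selects non-contiguous bits, so no single
    // (start, length) pair can reproduce it.
    return 0;
  }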


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232580 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 01:34:06 +00:00
Krzysztof Parzyszek
dbe964d3a6 Missed testcase for r232577
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232578 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 00:44:46 +00:00
Sanjoy Das
e027d74733 [SCEV] Make isImpliedCond smarter.
Summary:
This change teaches isImpliedCond to infer things like "X sgt 0" => "X -
1 sgt -1".  The `ConstantRange` class has the logic to do the heavy
lifting, this change simply gets ScalarEvolution to exploit that when
reasonable.
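
(Illustrative aside, not part of the patch: an exhaustive 8-bit sanity check
of the kind of fact being inferred; the real reasoning happens on
ConstantRanges inside SCEV.)

  #include <cassert>
  #include <cstdint>

  int main() {
    // For every signed 8-bit X: "X sgt 0" implies "X - 1 sgt -1".
    // The subtraction cannot wrap because X <= 127, which is exactly the
    // sort of range fact ConstantRange-based reasoning provides.
    for (int x = INT8_MIN; x <= INT8_MAX; ++x) {
      if (static_cast<int8_t>(x) > 0)
        assert(static_cast<int8_t>(x - 1) > -1);
    }
    return 0;
  }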

Depends on D8345

Reviewers: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8346

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232576 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 00:41:29 +00:00
David Majnemer
8f01b96d93 DAGCombiner: fold (xor (shl 1, x), -1) -> (rotl ~1, x)
Targets which provide a rotate make it possible to replace a sequence of
(XOR (SHL 1, x), -1) with (ROTL ~1, x).  This saves an instruction on
architectures like X86 and POWER(64).
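
(Illustrative aside, not part of the patch: the identity can be checked
exhaustively for 32-bit shift amounts.)

  #include <cassert>
  #include <cstdint>

  static uint32_t rotl32(uint32_t v, unsigned s) {
    return (v << (s & 31)) | (v >> ((32 - s) & 31));
  }

  int main() {
    // ~(1 << x) has every bit set except bit x; rotating ~1 left by x moves
    // its single zero bit to position x, so the two are always equal.
    for (unsigned x = 0; x < 32; ++x)
      assert(~(uint32_t{1} << x) == rotl32(~uint32_t{1}, x));
    return 0;
  }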

Differential Revision: http://reviews.llvm.org/D8350

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232572 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-18 00:03:36 +00:00
David Majnemer
7605cdd6e4 COFF: Let globals with private linkage reside in their own section
COFF COMDATs (for selection kinds other than 'select any') require at
least one non-section symbol in the symbol table.
Satisfy this by morally enhancing the linkage from private to internal.

Differential Revision: http://reviews.llvm.org/D8394

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232570 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-17 23:54:51 +00:00