Commit Graph

6136 Commits

David Majnemer
73059bd1f1 ConstantFold: Shifting undef by zero results in undef
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224553 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-18 23:54:43 +00:00
Suyog Sarda
4bfc4f2e8c Revert 224119 "This patch recognizes (+ (+ v0, v1) (+ v2, v3)), reorders them for bundling into vector of loads,
and vectorizes it." 

This was re-ordering floating-point data types, resulting in a mismatch in the output.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224424 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 10:34:27 +00:00
Elena Demikhovsky
982a8b3aeb Added 5 more tests related to sink store revision 224247
- by Ella Bolshinsky

http://reviews.llvm.org/D6420



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224418 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 08:12:59 +00:00
Erik Eckstein
96bd465d6c Strength reduce intrinsics with overflow into regular arithmetic operations if possible.
Some intrinsics, like s/uadd.with.overflow and umul.with.overflow, are already strength reduced.
This change adds other arithmetic intrinsics: s/usub.with.overflow, smul.with.overflow.
It completes the work on PR20194.




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224417 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 07:29:19 +00:00
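
As an illustration only (the function and value ranges are hypothetical, not from the commit), a minimal C++ sketch of source code whose llvm.ssub.with.overflow call this change can strength-reduce, using the GCC/Clang __builtin_ssub_overflow builtin:

    #include <cstdint>

    // Both operands are confined to [0, 255], so the signed subtraction can
    // never overflow a 32-bit int; the ssub.with.overflow emitted for the
    // builtin can be strength-reduced to a plain sub with a constant-false
    // overflow flag.
    int sub_small(uint8_t a, uint8_t b) {
      int result;
      bool overflow = __builtin_ssub_overflow((int)a, (int)b, &result);
      return overflow ? 0 : result;
    }
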
David Majnemer
891ec6d69f InstSimplify: shl nsw/nuw undef, %V -> undef
We can always choose a value for undef which might cause %V to shift
out an important bit, except in one case: when %V is zero.

However, shl behaves like an identity function when the right-hand side
is zero.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224405 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 01:54:33 +00:00
Elena Demikhovsky
14fb445715 Masked Load and Store Intrinsics in loop vectorizer.
The loop vectorizer optimizes loops containing conditional memory
accesses by generating masked load and store intrinsics.
This decision is target-dependent.

http://reviews.llvm.org/D6527



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224334 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 11:50:42 +00:00
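
For illustration (the function and parameter names are made up, not from the patch), a C++ loop of the shape this change targets: the accesses are guarded by a per-element condition, so vectorizing them requires masked load/store intrinsics on targets that support them.

    // The loads of a[i]/b[i] and the store to a[i] execute only when cond[i]
    // is true, so the vectorized loop uses the condition as a lane mask for
    // masked load and masked store intrinsics instead of executing the
    // accesses speculatively.
    void add_where(float *a, const float *b, const int *cond, int n) {
      for (int i = 0; i < n; ++i)
        if (cond[i])
          a[i] += b[i];
    }
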
Sanjoy Das
574e01c32e Teach ScalarEvolution to exploit min and max expressions when proving
isKnownPredicate.

The motivation for this change is to optimize away checks in loops
like this:

    limit = min(t, len)
    for (i = 0 to limit)
      if (i >= len || i < 0) throw_array_out_of_bounds();
      a[i] = ...

Differential Revision: http://reviews.llvm.org/D6635



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224285 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 22:50:15 +00:00
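
A C++ rendering of the pseudocode above (the names are hypothetical), showing why the check becomes provably dead: i < limit <= len and i >= 0 on every iteration, so the range check can never fire once the min is understood.

    #include <algorithm>
    #include <stdexcept>

    void fill(int *a, int t, int len) {
      int limit = std::min(t, len);
      for (int i = 0; i < limit; ++i) {
        // Always false: i >= 0 and i < limit <= len.  With this change,
        // isKnownPredicate can prove it and the check is optimized away.
        if (i >= len || i < 0)
          throw std::out_of_range("array index out of bounds");
        a[i] = i;
      }
    }
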
Duncan P. N. Exon Smith
1ef70ff39b IR: Make metadata typeless in assembly
Now that `Metadata` is typeless, reflect that in the assembly.  These
are the matching assembly changes for the metadata/value split in
r223802.

  - Only use the `metadata` type when referencing metadata from a call
    intrinsic -- i.e., only when it's used as a `Value`.

  - Stop pretending that `ValueAsMetadata` is wrapped in an `MDNode`
    when referencing it from call intrinsics.

So, assembly like this:

    define void @foo(i32 %v) {
      call void @llvm.foo(metadata !{i32 %v}, metadata !0)
      call void @llvm.foo(metadata !{i32 7}, metadata !0)
      call void @llvm.foo(metadata !1, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{metadata !3}, metadata !0)
      ret void, !bar !2
    }
    !0 = metadata !{metadata !2}
    !1 = metadata !{i32* @global}
    !2 = metadata !{metadata !3}
    !3 = metadata !{}

turns into this:

    define void @foo(i32 %v) {
      call void @llvm.foo(metadata i32 %v, metadata !0)
      call void @llvm.foo(metadata i32 7, metadata !0)
      call void @llvm.foo(metadata i32* @global, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{!3}, metadata !0)
      ret void, !bar !2
    }
    !0 = !{!2}
    !1 = !{i32* @global}
    !2 = !{!3}
    !3 = !{}

I wrote an upgrade script that handled almost all of the tests in llvm
and many of the tests in cfe (even handling many `CHECK` lines).  I've
attached it (or will attach it in a moment if you're speedy) to PR21532
to help everyone update their out-of-tree testcases.

This is part of PR21532.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224257 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 19:07:53 +00:00
Elena Demikhovsky
a8a374135b Added a test related to revision 224247
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224248 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 14:14:10 +00:00
Suyog Sarda
4dcffed444 Typo Correction in Test Case. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224244 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 12:19:46 +00:00
Ahmed Bougacha
780a093afb Reapply "[ARM] Combine base-updating/post-incrementing vector load/stores."
r223862 tried to also combine base-updating load/stores.
r224198 reverted it, as "it created a regression on the test-suite
on test MultiSource/Benchmarks/Ptrdist/anagram by scrambling the order
in which the words are shown."
Reapply, with a fix to ignore non-normal load/stores.
Truncstores are handled elsewhere (you can actually write a pattern for
those, whereas for postinc loads you can't, since they return two values),
but it should be possible to also combine extload base updates, by checking
that the memory (rather than result) type is of the same size as the addend.

Original commit message:
We used to only combine intrinsics, and turn them into VLD1_UPD/VST1_UPD
when the base pointer is incremented after the load/store.

We can do the same thing for generic load/stores.

Note that we can only combine the first load/store+adds pair in
a sequence (as might be generated for a v16f32 load for instance),
because other combines turn the base pointer addition chain (each
computing the address of the next load, from the address of the last
load) into independent additions (common base pointer + this load's
offset).

Differential Revision: http://reviews.llvm.org/D6585


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224203 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-13 23:22:12 +00:00
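
A hypothetical C++ snippet (mine, not from the patch) of the generic load/store sequence the combine now handles; the v16f32-sized copy is the case mentioned in the message.

    #include <cstring>

    struct Vec16 { float v[16]; };   // 64 bytes, i.e. a v16f32-sized object

    // The copy is lowered to a short run of 128-bit NEON loads/stores through
    // an incrementing base pointer; the first load/store + add pair can now
    // be folded into the post-incrementing VLD1_UPD/VST1_UPD form.
    void copyVec16(Vec16 *dst, const Vec16 *src) {
      std::memcpy(dst, src, sizeof(Vec16));
    }
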
Renato Golin
1e173b7139 Revert "[ARM] Combine base-updating/post-incrementing vector load/stores."
This reverts commit r223862, as it created a regression on the test-suite
on test MultiSource/Benchmarks/Ptrdist/anagram by scrambling the order
in which the words are shown. We'll investigate the issue and re-apply
when safe.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224198 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-13 20:23:18 +00:00
David Majnemer
3b7e6d27d2 ValueTracking: Don't recurse too deeply in computeKnownBitsFromAssume
Respect the MaxDepth recursion limit, doing otherwise will trigger an
assert in computeKnownBits.

This fixes PR21891.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224168 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 23:59:29 +00:00
Suyog Sarda
1dea0dc279 This patch recognizes (+ (+ v0, v1) (+ v2, v3)), reorders them for bundling into vector of loads,
and vectorizes it. 
 
Test case:

    float hadd(float* a) {
        return (a[0] + a[1]) + (a[2] + a[3]);
    }

AArch64 assembly before patch:

    ldp     s0, s1, [x0]
    ldp     s2, s3, [x0, #8]
    fadd    s0, s0, s1
    fadd    s1, s2, s3
    fadd    s0, s0, s1
    ret

AArch64 assembly after patch:

    ldp     d0, d1, [x0]
    fadd    v0.2s, v0.2s, v1.2s
    faddp   s0, v0.2s
    ret

Review link: http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20141208/248531.html



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224119 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 12:53:44 +00:00
Steven Wu
a511846bdf Fix another infinite loop in InstCombine
Summary:
InstCombine infinite-loops on the testcase added here.
This happens because InstCombine generates instructions that it can
itself optimize, re-triggering the same transform. Fix by not optimizing
frem if the optimized type is the same as the original type.
rdar://problem/19150820

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D6634

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224097 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 04:34:07 +00:00
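
A generic C++ sketch of the fixed-point pitfall described above (my own model, not the InstCombine code; the shrinkType rule is invented): a rewrite whose output it can itself rewrite must refuse to fire when nothing actually changes.

    #include <string>

    // Hypothetical shrink rule: only "double" is narrowed.
    std::string shrinkType(const std::string &type) {
      return type == "double" ? std::string("float") : type;
    }

    // One simplification step.  Without the equality guard, a transform whose
    // "optimized" result equals its input (the frem case here) would be
    // re-queued and re-applied forever.
    bool simplifyOnce(std::string &type) {
      std::string shrunk = shrinkType(type);
      if (shrunk == type)
        return false;            // no change: stop instead of looping
      type = shrunk;
      return true;
    }

    void runToFixedPoint(std::string &type) {
      while (simplifyOnce(type)) {
      }
    }
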
Andrea Di Biagio
f27500040b [InstCombine][X86] Improved folding of calls to Intrinsic::x86_sse4a_insertqi.
This patch teaches the instruction combiner how to fold a call to 'insertqi' when
the 'length' field (3rd operand) is set to zero, and when the sum of the
'length' field and the 'bit index' (4th operand) is greater than 64.

From the AMD64 Architecture Programmer's Manual:
1. If the sum of the bit index + length field is greater than 64, then the
   results are undefined;
2. A value of zero in the field length is defined as a length of 64.

This patch improves the existing combining logic for the 'insertqi' intrinsic,
adding extra checks to address both point 1 and point 2.

Differential Revision: http://reviews.llvm.org/D6583


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224054 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 20:44:59 +00:00
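
A plain-C++ model of the two quoted rules (my reading of the manual text above, not LLVM code; how bits outside the inserted field behave is not covered by the commit, so preserving them here is only a simplifying assumption):

    #include <cstdint>
    #include <optional>

    // Inserts the low `length` bits of `src` into `dst` at bit `index`.
    // Rule 2: a length field of zero means a length of 64.
    // Rule 1: if index + length > 64 the result is undefined (nullopt here).
    std::optional<uint64_t> insertqiModel(uint64_t dst, uint64_t src,
                                          unsigned length, unsigned index) {
      if (length == 0)
        length = 64;
      if (index + length > 64)
        return std::nullopt;
      uint64_t mask = (length == 64) ? ~0ULL : ((1ULL << length) - 1);
      return (dst & ~(mask << index)) | ((src & mask) << index);
    }
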
David Majnemer
c57bee5399 InstSimplify: Remove useless %a parameter from tests
No functional change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224016 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 12:56:17 +00:00
Michael Kuperstein
1696b35ff1 The inliner needs to fix up debug information for llvm.dbg.declare, not only for llvm.dbg.value.
Patch by Amjad Aboud

Differential Revision: http://reviews.llvm.org/D6525


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224015 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-11 12:41:10 +00:00
David Majnemer
72c6bdbf70 ConstantFold, InstSimplify: undef >>a x can be either -1 or 0, choose 0
Zero is usually a nicer constant to have than -1.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223969 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-10 21:58:15 +00:00
David Majnemer
ea9bcfc707 ConstantFold: an undef shift amount results in undef
X shifted by undef results in undef because the undef value can
represent shift amounts greater than or equal to the width of the operands.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223968 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-10 21:38:05 +00:00
David Majnemer
895316336e ConstantFold: div undef, 0 should fold to undef, not zero
Dividing by zero yields an undefined value.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223924 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-10 09:14:55 +00:00
David Majnemer
6578f1beb1 InstSimplify: [al]shr exact undef, %X -> undef
Exact shifts always keep the non-zero bits of their input.  This means
they keep their undef bits.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223923 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-10 09:14:52 +00:00
David Majnemer
1297775557 InstSimplify: div %X, 0 -> undef
We already optimized rem %X, 0 to undef; we should do the same for div.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223919 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-10 07:52:18 +00:00
Ahmed Bougacha
605c40341b [ARM] Combine base-updating/post-incrementing vector load/stores.
We used to only combine intrinsics, and turn them into VLD1_UPD/VST1_UPD
when the base pointer is incremented after the load/store.

We can do the same thing for generic load/stores.

Note that we can only combine the first load/store+adds pair in
a sequence (as might be generated for a v16f32 load for instance),
because other combines turn the base pointer addition chain (each
computing the address of the next load, from the address of the last
load) into independent additions (common base pointer + this load's
offset).

Differential Revision: http://reviews.llvm.org/D6585


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223862 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-10 00:07:37 +00:00
Chandler Carruth
3508e27903 Revert r223764 which taught instcombine about integer-based element extraction
patterns.

This is causing Clang to miscompile itself for 32-bit x86 somehow, and likely
also on ARM and PPC. I really don't know how, but reverting now that I've
confirmed this is actually the culprit. I have a reproduction as well and so
should be able to restore this shortly.

This reverts commit r223764.

Original commit log follows:
Teach instcombine to canonicalize "element extraction" from a load of an
integer and "element insertion" into a store of an integer into actual
element extraction, element insertion, and vector loads and stores.

Previously various parts of LLVM (including instcombine itself) would
introduce integer loads and stores into the code as a way of opaquely
loading and storing "bits". In some cases (such as a memcpy of
std::complex<float> object) we will eventually end up using those bits
in non-integer types. In order for SROA to effectively promote the
allocas involved, it splits these "store a bag of bits" integer loads
and stores up into the constituent parts. However, for non-alloca loads
and stores which remain, it uses integer math to recombine the values
into a large integer to load or store.

All of this would be "fine", except that it forces LLVM to go through
integer math to combine and split up values. While this makes perfect
sense for integers (and in fact is critical for bitfields to end up
lowering efficiently) it is *terrible* for non-integer types, especially
floating point types. We have a much more canonical way of representing
the act of concatenating the bits of two SSA values in LLVM: a vector
and insertelement. This patch teaches InstCombine to use this
representation.

With this patch applied, LLVM will no longer introduce integer math into
the critical path of every loop over std::complex<float> operations such
as those that make up the hot path of ... oh, most HPC code, Eigen, and
any other heavy linear algebra library.

For the record, I looked *extensively* at fixing this in other parts of
the compiler, but it just doesn't work:
- We really do want to canonicalize memcpy and other bit-motion to
  integer loads and stores. SSA values are tremendously more powerful
  than "copy" intrinsics. Not doing this regresses massive amounts of
  LLVM's scalar optimizer.
- We really do need to split up integer loads and stores of this form in
  SROA or every memcpy of a trivially copyable struct will prevent SSA
  formation of the members of that struct. It essentially turns off
  SROA.
- The closest alternative is to actually split the loads and stores when
  partitioning with SROA, but this has all of the downsides historically
  discussed of splitting up loads and stores -- the wide-store
  information is fundamentally lost. We would also see performance
  regressions for bitfield-heavy code and other places where the
  integers aren't really intended to be split without seemingly
  arbitrary logic to treat integers totally differently.
- We *can* effectively fix this in instcombine, so it isn't that hard of
  a choice to make IMO.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223813 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 19:21:16 +00:00
Duncan P. N. Exon Smith
dad20b2ae2 IR: Split Metadata from Value
Split `Metadata` away from the `Value` class hierarchy, as part of
PR21532.  Assembly and bitcode changes are in the wings, but this is the
bulk of the change for the IR C++ API.

I have a follow-up patch prepared for `clang`.  If this breaks other
sub-projects, I apologize in advance :(.  Help me compile it on Darwin and
I'll try to fix it.  FWIW, the errors should be easy to fix, so it may
be simpler to just fix it yourself.

This breaks the build for all metadata-related code that's out-of-tree.
Rest assured the transition is mechanical and the compiler should catch
almost all of the problems.

Here's a quick guide for updating your code:

  - `Metadata` is the root of a class hierarchy with three main classes:
    `MDNode`, `MDString`, and `ValueAsMetadata`.  It is distinct from
    the `Value` class hierarchy.  It is typeless -- i.e., instances do
    *not* have a `Type`.

  - `MDNode`'s operands are all `Metadata *` (instead of `Value *`).

  - `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be
    replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively.

    If you're referring solely to resolved `MDNode`s -- post graph
    construction -- just use `MDNode*`.

  - `MDNode` (and the rest of `Metadata`) have only limited support for
    `replaceAllUsesWith()`.

    As long as an `MDNode` is pointing at a forward declaration -- the
    result of `MDNode::getTemporary()` -- it maintains a side map of its
    uses and can RAUW itself.  Once the forward declarations are fully
    resolved RAUW support is dropped on the ground.  This means that
    uniquing collisions on changing operands cause nodes to become
    "distinct".  (This already happened fairly commonly, whenever an
    operand went to null.)

    If you're constructing complex (non self-reference) `MDNode` cycles,
    you need to call `MDNode::resolveCycles()` on each node (or on a
    top-level node that somehow references all of the nodes).  Also,
    don't do that.  Metadata cycles (and the RAUW machinery needed to
    construct them) are expensive.

  - An `MDNode` can only refer to a `Constant` through a bridge called
    `ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`).

    As a side effect, accessing an operand of an `MDNode` that is known
    to be, e.g., `ConstantInt`, takes three steps: first, cast from
    `Metadata` to `ConstantAsMetadata`; second, extract the `Constant`;
    third, cast down to `ConstantInt`.

    The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have
    metadata schema owners transition away from using `Constant`s when
    the type isn't important (and they don't care about referring to
    `GlobalValue`s).

    In the meantime, I've added transitional API to the `mdconst`
    namespace that matches semantics with the old code, in order to
    avoid adding the error-prone three-step equivalent to every call
    site.  If your old code was:

        MDNode *N = foo();
        bar(isa             <ConstantInt>(N->getOperand(0)));
        baz(cast            <ConstantInt>(N->getOperand(1)));
        bak(cast_or_null    <ConstantInt>(N->getOperand(2)));
        bat(dyn_cast        <ConstantInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));

    you can trivially match its semantics with:

        MDNode *N = foo();
        bar(mdconst::hasa               <ConstantInt>(N->getOperand(0)));
        baz(mdconst::extract            <ConstantInt>(N->getOperand(1)));
        bak(mdconst::extract_or_null    <ConstantInt>(N->getOperand(2)));
        bat(mdconst::dyn_extract        <ConstantInt>(N->getOperand(3)));
        bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));

    and when you transition your metadata schema to `MDInt`:

        MDNode *N = foo();
        bar(isa             <MDInt>(N->getOperand(0)));
        baz(cast            <MDInt>(N->getOperand(1)));
        bak(cast_or_null    <MDInt>(N->getOperand(2)));
        bat(dyn_cast        <MDInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));

  - A `CallInst` -- specifically, intrinsic instructions -- can refer to
    metadata through a bridge called `MetadataAsValue`.  This is a
    subclass of `Value` where `getType()->isMetadataTy()`.

    `MetadataAsValue` is the *only* class that can legally refer to a
    `LocalAsMetadata`, which is a bridged form of non-`Constant` values
    like `Argument` and `Instruction`.  It can also refer to any other
    `Metadata` subclass.

(I'll break all your testcases in a follow-up commit, when I propagate
this change to assembly.)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223802 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 18:38:53 +00:00
Sonam Kumari
05a824843d Removal Of Duplicate Test Cases and Addition Of Missing Check Statements
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223768 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 10:46:38 +00:00
Ankur Garg
df35082f20 [test/Transforms/InstCombine/shift.ll] Removed duplicate test cases. NFC.
Removed some duplicate test cases from the file /test/Transforms/InstCombine/shift.ll.

test54 and test57 were duplicates of each other.
test55 and test58 were duplicates of each other.

(Removed test57 and test58)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223767 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 10:35:19 +00:00
Chandler Carruth
e78a87b633 Teach instcombine to canonicalize "element extraction" from a load of an
integer and "element insertion" into a store of an integer into actual
element extraction, element insertion, and vector loads and stores.

Previously various parts of LLVM (including instcombine itself) would
introduce integer loads and stores into the code as a way of opaquely
loading and storing "bits". In some cases (such as a memcpy of
std::complex<float> object) we will eventually end up using those bits
in non-integer types. In order for SROA to effectively promote the
allocas involved, it splits these "store a bag of bits" integer loads
and stores up into the constituent parts. However, for non-alloca loads
and stores which remain, it uses integer math to recombine the values
into a large integer to load or store.

All of this would be "fine", except that it forces LLVM to go through
integer math to combine and split up values. While this makes perfect
sense for integers (and in fact is critical for bitfields to end up
lowering efficiently) it is *terrible* for non-integer types, especially
floating point types. We have a much more canonical way of representing
the act of concatenating the bits of two SSA values in LLVM: a vector
and insertelement. This patch teaches InstCombine to use this
representation.

With this patch applied, LLVM will no longer introduce integer math into
the critical path of every loop over std::complex<float> operations such
as those that make up the hot path of ... oh, most HPC code, Eigen, and
any other heavy linear algebra library.

For the record, I looked *extensively* at fixing this in other parts of
the compiler, but it just doesn't work:
- We really do want to canonicalize memcpy and other bit-motion to
  integer loads and stores. SSA values are tremendously more powerful
  than "copy" intrinsics. Not doing this regresses massive amounts of
  LLVM's scalar optimizer.
- We really do need to split up integer loads and stores of this form in
  SROA or every memcpy of a trivially copyable struct will prevent SSA
  formation of the members of that struct. It essentially turns off
  SROA.
- The closest alternative is to actually split the loads and stores when
  partitioning with SROA, but this has all of the downsides historically
  discussed of splitting up loads and stores -- the wide-store
  information is fundamentally lost. We would also see performance
  regressions for bitfield-heavy code and other places where the
  integers aren't really intended to be split without seemingly
  arbitrary logic to treat integers totally differently.
- We *can* effectively fix this in instcombine, so it isn't that hard of
  a choice to make IMO.

Differential Revision: http://reviews.llvm.org/D6548

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223764 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-09 08:55:32 +00:00
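
A minimal C++ reproducer of the pattern discussed above (the function is made up; the std::complex<float> memcpy case is taken from the message): the copy is lowered to an i64 load/store, and float users of the parts then go through the integer math this patch replaces with a vector load plus extractelement.

    #include <complex>
    #include <cstring>

    // The memcpy of a trivially copyable std::complex<float> becomes an i64
    // ("bag of bits") load and store; extracting the real part afterwards
    // forces integer carving of the bits unless instcombine rewrites the
    // access as a <2 x float> load and an extractelement.
    float first_real(const std::complex<float> *src) {
      std::complex<float> tmp;
      std::memcpy(&tmp, src, sizeof(tmp));
      return tmp.real();
    }
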
David Majnemer
fca9c7b21c InstSimplify: Try to bring back the rest of r223583
This reverts r223624 with a small tweak; hopefully this will make stage3
equivalent.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223679 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-08 18:30:43 +00:00
Sonam Kumari
a585ec7f7f Removal Of Duplicate Test Case from shift.ll file
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223648 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-08 09:40:43 +00:00
NAKAMURA Takumi
e4a5390406 Revert a part of r223583, for now. It seems to cause different emission between stage2 (gcc-clang) and stage3 (clang). Investigating.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223624 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-08 02:07:22 +00:00
David Majnemer
620e8763ec InstSimplify: Optimize away useless unsigned comparisons
Code like X < Y && Y == 0 should always be folded away to false.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223583 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-06 10:51:40 +00:00
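
The identity is easy to sanity-check in standalone C++ (not the InstSimplify code): for unsigned values, x < y already implies y >= 1, so the conjunction can never hold.

    #include <cassert>

    bool range_and_zero(unsigned x, unsigned y) {
      bool original = (x < y) && (y == 0);  // what the source computes
      assert(original == false);            // what InstSimplify folds it to
      return original;
    }
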
Duncan P. N. Exon Smith
30596886ed IR: Disallow complicated function-local metadata
Disallow complex types of function-local metadata.  The only valid
function-local metadata is an `MDNode` whose sole argument is a
non-metadata function-local value.

Part of PR21532.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223564 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-06 01:26:49 +00:00
Hans Wennborg
76b1313e75 Add some tests for SimplifyCFG's TurnSwitchRangeIntoICmp(). NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223396 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 22:19:28 +00:00
Hans Wennborg
8de6d3e510 Add some tests for SimplifyCFG's ConstantFoldTerminator(). NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223395 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 22:19:25 +00:00
Philip Reames
ef7c2ff0f3 Add a test case for argument type coercion in an invoke of a vararg function
This would have caught the bug I fixed in 223370.  



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223378 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 19:13:45 +00:00
Hal Finkel
efbb95a1be Revert "r223364 - Revert r223347 which has caused crashes on bootstrap bots."
Reapply r223347, with a fix to not crash on uninserted instructions (or more
precisely, instructions in uninserted blocks). bugpoint was able to reduce the
test case somewhat, but it is still somewhat large (and relies on setting
things up to be simplified during inlining), so I've not included it here.
Nevertheless, it is clear what is going on and why.

Original commit message:

Restrict somewhat the memory-allocation pointer cmp opt from r223093

Based on review comments from Richard Smith, restrict this optimization from
applying to globals that might resolve lazily to other dynamically-loaded
modules, and also from dynamic allocas (which might be transformed into malloc
calls). In short, take extra care that the compared-to pointer is really
simultaneously live with the memory allocation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223371 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 17:45:19 +00:00
Alexander Potapenko
182d9aaccb Revert r223347 which has caused crashes on bootstrap bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223364 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 14:22:27 +00:00
Simon Pilgrim
94590ca4cf [InstCombine] Minor optimization for bswap with binary ops
Added instcombine optimizations for BSWAP with AND/OR/XOR ops:

OP( BSWAP(x), BSWAP(y) ) -> BSWAP( OP(x, y) )
OP( BSWAP(x), CONSTANT ) -> BSWAP( OP(x, BSWAP(CONSTANT) ) )

Since it's just a one-liner, I've also added BSWAP to the DAGCombiner equivalent:

fold (OP (bswap x), (bswap y)) -> (bswap (OP x, y))

Refactored bswap-fold tests to use FileCheck instead of just checking that the bswaps had gone.

Differential Revision: http://reviews.llvm.org/D6407



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223349 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 09:44:01 +00:00
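
A quick standalone C++ check of the first identity (using the GCC/Clang __builtin_bswap32 builtin, not anything from the patch): byte swapping commutes with bitwise AND/OR/XOR because each bit is combined independently of its position.

    #include <cassert>
    #include <cstdint>

    void check_bswap_fold(uint32_t x, uint32_t y) {
      // OP(BSWAP(x), BSWAP(y)) == BSWAP(OP(x, y)) for the bitwise ops.
      assert((__builtin_bswap32(x) & __builtin_bswap32(y)) == __builtin_bswap32(x & y));
      assert((__builtin_bswap32(x) | __builtin_bswap32(y)) == __builtin_bswap32(x | y));
      assert((__builtin_bswap32(x) ^ __builtin_bswap32(y)) == __builtin_bswap32(x ^ y));
    }
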
Hal Finkel
d70d5148a6 Restrict somewhat the memory-allocation pointer cmp opt from r223093
Based on review comments from Richard Smith, restrict this optimization from
applying to globals that might resolve lazily to other dynamically-loaded
modules, and also from dynamic allocas (which might be transformed into malloc
calls). In short, take extra care that the compared-to pointer is really
simultaneously live with the memory allocation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223347 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-04 09:22:28 +00:00
Matthias Braun
b0ec6c21b7 [SimplifyLibCalls] Improve double->float shrinking to consider constants
This allows cases like float x; fmin(1.0, x); to be optimized to fminf(1.0f, x);

rdar://19049359

Differential Revision: http://reviews.llvm.org/D6496

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223270 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 21:46:33 +00:00
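
A tiny C++ version of the case being enabled (the surrounding function is made up; the fmin(1.0, x) expression is the commit's own example): the result is immediately truncated back to float and 1.0 is exactly representable as a float, so the double-precision libcall can become its float variant.

    #include <cmath>

    float clamp_to_one(float x) {
      // Written against double-precision fmin; with this change the call can
      // be shrunk to the fminf(1.0f, x) form, avoiding the float -> double ->
      // float round trip.
      return std::fmin(1.0, x);
    }
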
Matthias Braun
9d362ec2a4 [SimplifyLibCalls] Enable double to float shrinking for copysign
rdar://19049359

Differential Revision: http://reviews.llvm.org/D6495

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223269 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 21:46:29 +00:00
Nick Lewycky
0f87df2033 Fix test to use the right metadata node (reapply r223239 plus a fix) and also to use the correct path to the GCNO file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223244 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 17:32:44 +00:00
Alexander Potapenko
22b6c01fc8 Revert r223239, which broke some bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223240 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 16:03:08 +00:00
Alexander Potapenko
2afd191abd Fix the metadata number used by llvm.gcov to match the number of the inserted metadata node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223239 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 15:15:58 +00:00
Erik Eckstein
10e28ca6b1 InstCombine: simplify signed range checks
Try to convert two compares of a signed range check into a single unsigned compare.
Examples:
(icmp sge x, 0) & (icmp slt x, n) --> icmp ult x, n
(icmp slt x, 0) | (icmp sgt x, n) --> icmp ugt x, n




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223224 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 10:39:15 +00:00
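
A standalone C++ check of the first rule (not from the patch), valid when n is non-negative: a negative x wraps to a huge unsigned value, so it fails the unsigned compare exactly when it fails the signed pair.

    #include <cassert>

    void check_range_fold(int x, int n) {
      assert(n >= 0 && "precondition of this illustration");
      bool signed_pair  = (x >= 0) && (x < n);       // icmp sge + icmp slt
      bool unsigned_cmp = (unsigned)x < (unsigned)n; // single icmp ult
      assert(signed_pair == unsigned_cmp);
    }
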
Tom Stellard
857550322c StructurizeCFG: Use LoopInfo analysis for better loop detection
We were assuming that each back-edge in a region represented a unique
loop, which is not always the case.  We need to use LoopInfo to
correctly determine which back-edges are loops.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223199 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 04:28:32 +00:00
Nick Lewycky
92d7d4dcd7 Emit the entry block first and the exit block second, then all the blocks in between. This is what gcc always does, and some out-of-tree tools depend on that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223193 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-03 02:45:01 +00:00
Michael Zolotukhin
97be10d98f PR21302. Vectorize only bottom-tested loops.
rdar://problem/18886083

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223171 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-02 22:59:06 +00:00
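
For context, a hedged C++ sketch of what "bottom-tested" means here (my example, not the PR's test case): the exit condition sits on the loop latch, as in a do-while or a rotated for loop; loops that exit from anywhere else are now skipped by the vectorizer.

    // Bottom-tested loop: the only exit test is at the bottom, on the
    // backedge.  (Assumes n >= 1.)
    void scale(float *a, int n) {
      int i = 0;
      do {
        a[i] *= 2.0f;
      } while (++i < n);
    }
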