Commit Graph

12238 Commits

Chandler Carruth
888ee76367 [SROA] Make the computation of adjusted pointers not leak GEP
instructions.

I noticed this when working on dialing up how aggressively we can
pre-split loads and stores. My test case wasn't passing because dead
GEPs into the allocas persisted when they were built by this routine.
This isn't terribly harmful; we still rewrote and promoted the alloca,
and I can't conceive of how to cause this to happen in a case where we
would keep the exact same alloca but rewrite and promote the uses of it.
If that ever happened, we'd get an assert out of mem2reg.

So I don't have a direct test case yet, but the subsequent commit's test
case wouldn't pass without this. There are other problems fixed by this
patch that I spotted purely by inspection, such as the fact that
getAdjustedPtr could have actually deleted dead base pointers. I don't
know how to get a base pointer to go into getAdjustedPtr today, so
I think this bug could never have manifested (and I certainly can't
write a test case for it), but it wasn't the intent of the code. The
code really just wanted to GC the new instructions it built. That can be
done more directly by comparing against the base pointer, which is the
only non-new instruction that this code can return.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225073 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-02 02:47:38 +00:00
Chandler Carruth
987c1f8ee7 [SROA] Fix the loop exit placement to be prior to indexing the splits
array. This prevents it from walking out of bounds on the splits array.

Bug found with the existing tests by ASan and by the MSVC debug build.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225069 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-02 00:10:22 +00:00
Chandler Carruth
ed3f2c6761 [SROA] Fix two total think-os in r225061 that should have been caught on
a +asserts bootstrap, but my bootstrap had asserts off. Oops.

Anyways, in some places it is reasonable to cast (as a sanity check) the
pointer operand to a load or store to an instruction within SROA --
namely when the pointer operand is expected to be derived from an
alloca, and thus always an instruction. However, the pre-splitting code
also deals with loads and stores to non-alloca pointers and there we
need to just use the Value*. Nothing about the code relied on the
instruction cast; it was only there essentially as an invariant
assertion. Remove the two that don't actually hold.

This should fix the proximate issue in PR22080, but I'm also doing an
asserts bootstrap myself to see if there are other issues lurking.

I'll craft a reduced test case in a moment, but I wanted to get the tree
healthy as quickly as possible.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225068 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-01 23:26:16 +00:00
Chandler Carruth
2f1e3d88b7 [SROA] Switch to using a more direct debug logging technique in one part
of my new load and store splitting, and fix a bug where it logged
a totally irrelevant slice rather than the actual slice in question.

The logging here previously worked because we used to place new slices
onto the back of the core sequence, but that caused other problems.
I updated the actual code to store new slices in their own vector but
didn't update the logging. There isn't a good way to reuse the logging
any more, and frankly it wasn't needed. We can directly log this bit
more easily.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225063 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-01 12:56:47 +00:00
Chandler Carruth
8785c31033 [SROA] Fix formatting with clang-format which I managed to fail to do
prior to committing r225061. Sorry for that.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225062 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-01 12:01:03 +00:00
Chandler Carruth
450b39e971 [SROA] Teach SROA how to much more intelligently handle split loads and
stores.

When there are accesses to an entire alloca with an integer
load or store as well as accesses to small pieces of the alloca, SROA
splits up the large integer accesses. In order to do that, it uses bit
math to merge the small accesses into large integers. While this is
effective, it produces insane IR that can cause significant problems in
the rest of the optimizer:

- It can cause load and store mismatches with GVN on the non-alloca side
  where we end up loading an i64 (or some such) rather than loading
  specific elements that are stored.
- We can't always get rid of the integer bit math, which is why we can't
  always fix the loads and stores to work well with GVN.
- This is especially bad when we have operations that mix poorly with
  integer bit math such as floating point operations.
- It will block things like the vectorizer which might be able to handle
  the scalar stores that underlie the aggregate.

At the same time, we can't just directly split up these loads and stores
in all cases. If there is actual integer arithmetic involved on the
values, then using integer bit math is actually the perfect lowering
because we can often combine it heavily with the surrounding math.

The solution this patch provides is to find places where SROA is
partitioning aggregates into small elements, and look for splittable
loads and stores that it can split all the way to some other adjacent
load and store. These are uniformly the cases I have seen where failing to
split the loads and stores hurts the optimizer, and I've looked
extensively at the code produced both from more and less aggressive
approaches to this problem.

However, it is quite tricky to actually do this in SROA. We may have
loads and stores to the same alloca, or other complex patterns that are
hard to handle. This complexity leads to the somewhat subtle algorithm
implemented here. We have to do this entire process as a separate pass
over the partitioning of the alloca, and split up all of the loads prior
to splitting the stores so that we can safely handle the cases of
overlapping, including partially overlapping, loads and stores to the
same alloca. We also have to reconstitute the post-split slice
configuration so we can avoid iterating again over all the alloca uses
(the slow part of SROA). But we also have to ensure that when we split
up loads and stores to *other* allocas, we *do* re-iterate over them in
SROA to adapt to the more refined partitioning now required.

With this, I actually think we can fix a long-standing TODO in SROA
where I avoided splitting as many loads and stores as probably should be
splittable. This limitation historically mitigated the fallout of all
the bad things mentioned above. Now that we have more intelligent
handling, I plan to remove the FIXME and more aggressively mark integer
loads and stores as splittable. I'll do that in a follow-up patch to
help with bisecting any fallout.

The net result of this change should be more fine-grained and accurate
scalars being formed out of aggregates. At the very least, Clang now
generates perfect code for this high-level test case using
std::complex<float>:

  #include <complex>

  void g1(std::complex<float> &x, float a, float b) {
    x += std::complex<float>(a, b);
  }
  void g2(std::complex<float> &x, float a, float b) {
    x -= std::complex<float>(a, b);
  }

  void foo(const std::complex<float> &x, float a, float b,
           std::complex<float> &x1, std::complex<float> &x2) {
    std::complex<float> l1 = x;
    g1(l1, a, b);
    std::complex<float> l2 = x;
    g2(l2, a, b);
    x1 = l1;
    x2 = l2;
  }

This code isn't just hypothetical either. It was reduced out of the hot
inner loops of essentially every part of the Eigen math library when
using std::complex<float>. Those loops would consistently and
pervasively hop between the floating point unit and the integer unit due
to bit math extraction and insertion of floating point values that were
"stored" in a 64-bit integer register around the loop backedge.

So far, this change has passed a bootstrap and I have done some other
testing and so far, no issues. That doesn't mean there won't be though,
so I'll be prepared to help with any fallout. If you see performance swings
in particular, please let me know. I'm very curious what the impact
of this change will be. Stay tuned for the follow-up to also split more
integer loads and stores.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225061 91177308-0d34-0410-b5e6-96231b3b80d8
2015-01-01 11:54:38 +00:00
Sanjay Patel
28650b8ec2 InstCombine: fsub nsz 0, X ==> fsub nsz -0.0, X
Some day the backend may handle instruction-level fast math flags and make
this transform unnecessary, but it's still better practice to use the canonical
representation of fneg when possible (use a -0.0).
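As a rough illustration (a hypothetical snippet with made-up names, not the
patch's test case), the canonicalized form looks like this in IR:

  define float @fneg_nsz(float %x) {
    ; with nsz, "fsub float 0.0, %x" and "fsub float -0.0, %x" are equivalent;
    ; the latter is the canonical fneg pattern that backends recognize
    %r = fsub nsz float -0.0, %x
    ret float %r
  }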

This is a partial fix for PR20870 ( http://llvm.org/bugs/show_bug.cgi?id=20870 ).
See also http://reviews.llvm.org/D6723.

Differential Revision: http://reviews.llvm.org/D6731



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225050 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-31 22:14:05 +00:00
David Majnemer
0f77ccd6bb InstCombine: try to transform A-B < 0 into A < B
We are allowed to move the 'B' to the right hand side if we can prove
there is no signed overflow and if the comparison itself is signed.
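For illustration, a minimal IR sketch (hypothetical names; the fold needs a
signed compare and provable absence of signed overflow, here via the nsw flag):

  define i1 @slt_fold(i32 %a, i32 %b) {
    %sub = sub nsw i32 %a, %b
    ; A - B < 0 can be rewritten as:  %cmp = icmp slt i32 %a, %b
    %cmp = icmp slt i32 %sub, 0
    ret i1 %cmp
  }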

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225034 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-31 04:21:41 +00:00
Kostya Serebryany
dd890d5c5e [asan] change _sanitizer_cov_module_init to accept int* instead of int**
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224999 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-30 19:29:28 +00:00
Elena Demikhovsky
cc794daa67 Some code improvements in Masked Load/Store.
No functional changes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224986 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-30 14:28:14 +00:00
Philip Reames
91a083c57f Carry facts about nullness and undef across GC relocation
This change implements four basic optimizations:

- If a relocated value isn't used, it doesn't need to be relocated.
- If the value being relocated is null, relocation doesn't change that. (Technically, this might be collector specific; I don't know of one for which it doesn't work, though.)
- If the value being relocated is undef, the relocation is meaningless.
- If the value being relocated was known nonnull, the relocated pointer also isn't null (since it points to the same source language object).

I outlined other planned work in comments.

Differential Revision: http://reviews.llvm.org/D6600



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224968 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-29 23:27:30 +00:00
Philip Reames
1714ad67bd Refine the notion of MayThrow in LICM to include a header specific version
In LICM, we have a check for an instruction which is guaranteed to execute and thus can't introduce any new faults if moved to the preheader. To handle a function which might unconditionally throw when first called, we check for any potentially throwing call in the loop and give up.

This is unfortunate when the potentially throwing condition is down a rare path. It prevents essentially all LICM of potentially faulting instructions where the faulting condition is checked outside the loop. It also greatly diminishes the utility of loop unswitching since control-dependent instructions - which are now likely in the loop's header block - will not be lifted by subsequent LICM runs.

define void @nothrow_header(i64 %x, i64 %y, i1 %cond) {
; CHECK-LABEL: nothrow_header
; CHECK-LABEL: entry
; CHECK: %div = udiv i64 %x, %y
; CHECK-LABEL: loop
; CHECK: call void @use(i64 %div)
entry:
  br label %loop
loop: ; preds = %entry, %for.inc
  %div = udiv i64 %x, %y
  br i1 %cond, label %loop-if, label %exit
loop-if:
  call void @use(i64 %div)
  br label %loop
exit:
  ret void
}

The current patch really only helps with non-memory instructions (e.g. divs, etc.) since the maythrow call down the rare path will be considered to alias an otherwise hoistable load. The one exception is that it does kick in for loads which are known to be invariant without regard to other possible stores, i.e. those marked with either !invariant.load metadata or TBAA 'is constant memory' metadata.

Differential Revision: http://reviews.llvm.org/D6725



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224965 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-29 23:00:57 +00:00
Philip Reames
456b7b602c Loading from null is valid outside of addrspace 0
This patch fixes a miscompile where we were assuming that loading from null is undefined and thus we could assume it doesn't happen. This transform is perfectly legal in address space 0, but is not necessarily legal in other address spaces.

We really should introduce a hook to control this property on a per-target, per-address-space basis. We may be losing valuable optimizations in some address spaces by being too conservative.
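A small sketch of the kind of load that must now be preserved (hypothetical
function, using the IR syntax of this era):

  define i32 @load_as1(i32 addrspace(1)* %p) {
    ; null can be a valid address in addrspace(1), so this load must not be
    ; treated as undefined behavior and deleted even if %p may be null
    %v = load i32 addrspace(1)* %p
    ret i32 %v
  }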

Original patch by Thomas P Raoux (submitted to llvm-commits), tests and formatting fixes by me.




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224961 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-29 22:46:21 +00:00
David Majnemer
7627d9c229 InstCombine: Infer nuw for multiplies
A multiply cannot unsigned wrap if the two operands together have at least
bitwidth leading zero bits.
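For example (a hypothetical sketch, not from the patch's tests), two operands
produced by zext from i16 each have at least 16 known leading zero bits, so
their i32 product cannot wrap:

  define i32 @mul_nuw_example(i16 %a, i16 %b) {
    %x = zext i16 %a to i32    ; >= 16 known leading zero bits
    %y = zext i16 %b to i32    ; >= 16 known leading zero bits
    %m = mul i32 %x, %y        ; 16 + 16 >= 32, so this can be marked 'mul nuw'
    ret i32 %m
  }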

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224849 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-26 09:50:35 +00:00
David Majnemer
998ae69abe InstCombine: Infer nsw for multiplies
We already utilize this logic for reducing overflow intrinsics; it makes
sense to reuse it for normal multiplies as well.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224847 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-26 09:10:14 +00:00
Elena Demikhovsky
b31322328a Masked Load/Store - Changed the order of parameters in intrinsics.
No functional changes.
The documentation is coming.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224829 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-25 07:49:20 +00:00
Chandler Carruth
43e17cfe85 [SROA] Update the documentation and names for accessing the slices
within a partition of an alloca in SROA.

This reflects the fact that the organization of the slices isn't really
ideal for analysis, but is the naive way in which the slices are
available while we're processing them in the core partitioning
algorithm.

It is possible we could improve matters, and I've left a FIXME with
one of my ideas for how to do this, but it is a lot of work, the benefit
is somewhat minor, and it isn't clear that it would be strictly better.
=/ Not really satisfying, but I'm out of really good ideas.

This also improves one place where the debug logging failed to mark some
split partitions. Now we log in one place, slightly later, and with
accurate information about whether the slice is split by the partition
being rewritten.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224800 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-24 01:48:09 +00:00
Chandler Carruth
c807870534 [SROA] Refactor the integer and vector promotion testing logic to
operate in terms of the new Partition class, and generally have a more
clear set of arguments. No functionality changed.

The most notable improvements here are consistently using the
terminology of 'partition' for a collection of slices that will be
rewritten together and 'slice' for a region of an alloca that is used by
a particular instruction.

This also makes it more clear that the split things are actually slices
as well, just ones that will be split by the proposed partition.

This doesn't yet address the confusing aspects of the partition's
interface where slices that will be split by the partition and start
prior to the partition are accessed via Partition::splitSlices() while
the core range of slices exposed by a Partition includes both unsplit
slices and slices which will be split by the end, but started within the
offset range of the partition. This is particularly hard to address
because the algorithm which computes partitions quite literally doesn't
know which slices these will end up being until too late. I'm looking at
whether I can fix that or not, but I'm not optimistic. I'll update the
comments and/or names to further explain this either way. I've also
added one FIXME in this patch relating to this confusion so that I don't
forget about it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224798 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-24 01:05:14 +00:00
Kostya Serebryany
b69d796590 [asan] change the coverage collection scheme so that we can easily emit coverage for the entire process as a single bit set, and if coverage_bitset=1 actually emit that bitset
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224789 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-23 22:32:17 +00:00
Michael Liao
b9e302f3ca [SimplifyCFG] Revise common code sinking
- Fix the case where more than 1 common instruction derived from the same
  operand cannot be sunk. When a pair of values has more than 1 derived value
  in both branches, only 1 derived value could be sunk.
- Replace the BB1 -> (BB2, PN) map with a joint value map, i.e. a
  map of (BB1, BB2) -> PN, which tracks common ops more accurately.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224757 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-23 08:26:55 +00:00
Michael Kuperstein
fc86f5fc9f Remove a bad cast in CloneModule()
A cast that was introduced in r209007 was accidentally left in after the changes made to GlobalAlias rules in r210062. This crashes if the aliasee is a now-legal ConstantExpr.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224756 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-23 08:23:45 +00:00
Chandler Carruth
d4510005df Revert r224739: Debug info: Teach SROA how to update debug info for
fragmented variables.

This caused codegen to start crashing when we built somewhat large
programs with debug info and optimizations. 'check-msan' hit in, and
I suspect a bootstrap would as well. I mailed a test case to the
review thread.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224750 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-23 02:58:14 +00:00
David Blaikie
b39244dca3 Remove dynamic allocation/indirection from GCOVBlocks owned by GCOVFunction
Since these are all created in the DenseMap before they are referenced,
there's no problem with pointer validity by the time it's required. This
removes another use of DeleteContainerSeconds/manual memory management
which I'm cleaning up from time to time.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224744 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-22 23:12:42 +00:00
Chandler Carruth
67924e9af8 [SROA] Lift the logic for traversing the alloca slices one partition at
a time into a partition iterator and a Partition class.

There is a lot of knock-on simplification that this enables, largely
stemming from having a Partition object to refer to in lots of helpers.
I've only done a minimal amount of that because enough stuff is changing
as-is in this commit.

This shouldn't change any observable behavior. I've worked hard to
preserve the *exact* traversal semantics which were originally present
even though some of them make no sense. I'll be changing some of this in
subsequent commits now that the logic is carefully factored into
a reusable place.

The primary motivation for this change is to break the rewriting into
phases in order to support more intelligent rewriting. For example, I'm
planning to change how split loads and stores are rewritten to remove
the significant overuse of integer bit packing in the resulting code and
allow more effective secondary splitting of aggregates. For any of this
to work, they have to share the exact traversal logic.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224742 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-22 22:46:00 +00:00
Bruno Cardoso Lopes
a559a2317c [LCSSA] Handle PHI insertion in disjoint loops
Take two disjoint Loops L1 and L2.

LoopSimplify fails to simplify some loops (e.g. when indirect branches
are involved). In such situations, it can happen that an exit for L1 is
the header of L2. Thus, when we create PHIs in one of such exits we are
also inserting PHIs in L2 header.

This could break LCSSA form for L2 because these inserted PHIs can also
have uses in L2 exits, which are never handled in the current
implementation. Provide a fix for this corner case and test that we
don't assert/crash on that.

Differential Revision: http://reviews.llvm.org/D6624

rdar://problem/19166231

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224740 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-22 22:35:46 +00:00
Adrian Prantl
e5ca21a2df Debug info: Teach SROA how to update debug info for fragmented variables.
This allows us to generate debug info for extremely advanced code such as

  typedef struct { long int a; int b;} S;

  int foo(S s) {
    return s.b;
  }

which at -O1 on x86_64 is codegen'd into

  define i32 @foo(i64 %s.coerce0, i32 %s.coerce1) #0 {
    ret i32 %s.coerce1, !dbg !24
  }

with this patch we emit the following debug info for this

  TAG_formal_parameter [3]
    AT_location( 0x00000000
                 0x0000000000000000 - 0x0000000000000006: rdi, piece 0x00000008, rsi, piece 0x00000004
                 0x0000000000000006 - 0x0000000000000008: rdi, piece 0x00000008, rax, piece 0x00000004 )
                 AT_name( "s" )
                 AT_decl_file( "/Volumes/Data/llvm/_build.ninja.release/test.c" )

Thanks to chandlerc, dblaikie, and echristo for their feedback on all
previous iterations of this patch!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224739 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-22 22:26:00 +00:00
David Majnemer
854a37649a InstCombine: Squash an icmp+select into bitwise arithmetic
(X & INT_MIN) == 0 ? X ^ INT_MIN : X  into  X | INT_MIN
(X & INT_MIN) != 0 ? X ^ INT_MIN : X  into  X & INT_MAX
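For the first pattern, a minimal IR sketch (hypothetical function name):

  define i32 @select_to_or(i32 %x) {
    %and = and i32 %x, -2147483648            ; X & INT_MIN
    %cmp = icmp eq i32 %and, 0
    %xor = xor i32 %x, -2147483648            ; X ^ INT_MIN
    ; the whole select folds to:  or i32 %x, -2147483648
    %sel = select i1 %cmp, i32 %xor, i32 %x
    ret i32 %sel
  }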

This fixes PR21993.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224676 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-20 04:45:35 +00:00
Chandler Carruth
93e03df3cf [SROA] Run clang-format over the entire SROA pass as I wrote it before
much of the glory of clang-format, and now any time I touch it I risk
introducing formatting changes as part of a functional commit.

Also, clang-format is *way* better at formatting my code than I am.
Most of this is a huge improvement although I reverted a couple of
places where I hit a clang-format bug with lambdas that has been filed
but not (fully) fixed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224666 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-20 02:39:18 +00:00
Tilmann Scheller
409585877a [BBVectorize] Remove two more redundant assignments.
Found by the Clang static analyzer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224590 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 17:21:38 +00:00
Tilmann Scheller
14ef1a43c2 [BBVectorize] Remove redundant assignment.
Found by the Clang static analyzer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224589 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 17:13:12 +00:00
Bruno Cardoso Lopes
06833ca7c1 Reapply: [InstCombine] Fix visitSwitchInst to use right operand types for sub cstexpr
The visitSwitchInst generates SUB constant expressions to recompute the
switch condition. When truncating the condition to a smaller type, SUB
expressions should use the previous type (before trunc) for both
operands. Also, fix the code to return the modified switch even when only
the truncation is performed.

This fixes an assertion crash.

Differential Revision: http://reviews.llvm.org/D6644

rdar://problem/19191835

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224588 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 17:12:35 +00:00
Tilmann Scheller
b8b0b8f0a8 [LoopVectorize] Remove redundant assignment.
Found by the Clang static analyzer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224587 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 17:02:31 +00:00
Sanjay Patel
7c5fa50875 use -0.0 when creating an fneg instruction
Backends recognize (-0.0 - X) as the canonical form for fneg
and produce better code. Eg, ppc64 with 0.0:

   lis r2, ha16(LCPI0_0)
   lfs f0, lo16(LCPI0_0)(r2)
   fsubs f1, f0, f1
   blr

vs. -0.0:

   fneg f1, f1
   blr

Differential Revision: http://reviews.llvm.org/D6723



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224583 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 16:44:08 +00:00
Bruno Cardoso Lopes
01b07d541b Revert "[InstCombine] Fix visitSwitchInst to use right operand types for sub cstexpr"
Reverts commit r224574 to appease buildbots:

The visitSwitchInst generates SUB constant expressions to recompute the
switch condition. When truncating the condition to a smaller type, SUB
expressions should use the previous type (before trunc) for both
operands. This fixes an assertion crash.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224576 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 14:36:24 +00:00
Bruno Cardoso Lopes
cba407d019 [InstCombine] Fix visitSwitchInst to use right operand types for sub cstexpr
The visitSwitchInst generates SUB constant expressions to recompute the
switch condition. When truncating the condition to a smaller type, SUB
expressions should use the previous type (before trunc) for both
operands. This fixes an assertion crash.

Differential Revision: http://reviews.llvm.org/D6644

rdar://problem/19191835

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224574 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 14:23:15 +00:00
Duncan P. N. Exon Smith
33ed2ef4ff Rename MapValue(Metadata*) to MapMetadata()
Instead of reusing the name `MapValue()` when mapping `Metadata`, use
`MapMetadata()`.  The old name doesn't make much sense after the
`Metadata`/`Value` split.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224566 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-19 06:06:18 +00:00
Sanjay Patel
82a4ce31b2 fix formatting; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224542 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-18 21:11:09 +00:00
Viktor Kutuzov
e22e2b8798 [Msan] Generalize instrumentation code to support FreeBSD mapping
Differential Revision: http://reviews.llvm.org/D6666


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224514 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-18 12:12:59 +00:00
Chandler Carruth
9ac193609f [SROA] Cleanup - remove the use of std::mem_fun_ref nonsense and use
a lambda now that we have them.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224500 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-18 05:19:47 +00:00
Kostya Serebryany
1c97c5e8bd [sanitizer] allow -fsanitize-coverage=N w/ -fsanitize=leak, llvm part
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224463 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 21:50:04 +00:00
Suyog Sarda
4bfc4f2e8c Revert 224119 "This patch recognizes (+ (+ v0, v1) (+ v2, v3)), reorders them for bundling into vector of loads,
and vectorizes it." 

This was re-ordering floating point data types, resulting in a mismatch in the output.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224424 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 10:34:27 +00:00
Erik Eckstein
96bd465d6c Strength reduce intrinsics with overflow into regular arithmetic operations if possible.
Some intrinsics, like s/uadd.with.overflow and umul.with.overflow, are already strength reduced.
This change adds other arithmetic intrinsics: s/usub.with.overflow, smul.with.overflow.
It completes the work on PR20194.
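A sketch of the kind of reduction involved (hypothetical values, with operand
ranges chosen so overflow is provably impossible):

  declare { i32, i1 } @llvm.smul.with.overflow.i32(i32, i32)

  define i32 @smul_no_overflow(i32 %a, i32 %b) {
    %x = and i32 %a, 32767     ; known range [0, 32767]
    %y = and i32 %b, 32767     ; known range [0, 32767]
    ; the product is at most 32767 * 32767 < 2^31, so the call can be replaced
    ; by a plain 'mul nsw' and the overflow bit folds to false
    %r = call { i32, i1 } @llvm.smul.with.overflow.i32(i32 %x, i32 %y)
    %v = extractvalue { i32, i1 } %r, 0
    ret i32 %v
  }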




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224417 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-17 07:29:19 +00:00
Kostya Serebryany
95aa8cab27 [sanitizer] prevent function call merging for sanitizer-coverage callbacks
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224372 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 21:24:15 +00:00
Elena Demikhovsky
14fb445715 Masked Load and Store Intrinsics in loop vectorizer.
The loop vectorizer optimizes loops containing conditional memory
accesses by generating masked load and store intrinsics.
This decision is target dependent.
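A scalar loop of the kind involved (a hypothetical sketch in the IR syntax of
this era), where the store is guarded by a per-iteration condition and can be
widened into a masked store on targets that support it:

  define void @cond_store(i32* %a, i32* %trigger, i64 %n) {
  entry:
    br label %loop
  loop:
    %i = phi i64 [ 0, %entry ], [ %i.next, %latch ]
    %tp = getelementptr i32* %trigger, i64 %i
    %t = load i32* %tp
    %c = icmp sgt i32 %t, 0
    br i1 %c, label %then, label %latch
  then:
    %ap = getelementptr i32* %a, i64 %i
    store i32 %t, i32* %ap   ; conditional access -> masked store after vectorization
    br label %latch
  latch:
    %i.next = add i64 %i, 1
    %done = icmp eq i64 %i.next, %n
    br i1 %done, label %exit, label %loop
  exit:
    ret void
  }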

http://reviews.llvm.org/D6527



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224334 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-16 11:50:42 +00:00
Elena Demikhovsky
2f6d42351a Sink store based on alias analysis
- by Ella Bolshinsky
Alias analysis is used to determine whether the given instruction
is a barrier for store sinking. For two identical stores, the following
instructions are checked in both basic blocks to determine
whether they are sinking barriers.
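For illustration (a hypothetical sketch, not the patch's test case), two
identical stores in both branches can be sunk into the common successor when
nothing after them in either block may write the same memory:

  define void @sink_store(i1 %c, i32* %p, i32 %v) {
  entry:
    br i1 %c, label %then, label %else
  then:
    store i32 %v, i32* %p    ; identical store in both branches...
    br label %merge
  else:
    store i32 %v, i32* %p
    br label %merge
  merge:                     ; ...can be sunk here if no sinking barrier intervenes
    ret void
  }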

http://reviews.llvm.org/D6420



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224247 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-15 14:09:53 +00:00
Elena Demikhovsky
1c3a1516f8 Loop Vectorizer minor changes in the code -
some comments, function names, indentation.

Reviewed here: http://reviews.llvm.org/D6527


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224218 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-14 09:43:50 +00:00
Steven Wu
ac42de8ef0 More code format fix from r224133, NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224140 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 18:48:37 +00:00
Steven Wu
00b3170e70 Restructure code from r224097. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224133 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 17:21:54 +00:00
Chad Rosier
5400fd6d79 [Reassociate] Use dbgs() instead of errs().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224125 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 14:44:12 +00:00
Suyog Sarda
1dea0dc279 This patch recognizes (+ (+ v0, v1) (+ v2, v3)), reorders them for bundling into vector of loads,
and vectorizes it. 
 
Test case:

  float hadd(float* a) {
    return (a[0] + a[1]) + (a[2] + a[3]);
  }

AArch64 assembly before patch:

  ldp   s0, s1, [x0]
  ldp   s2, s3, [x0, #8]
  fadd  s0, s0, s1
  fadd  s1, s2, s3
  fadd  s0, s0, s1
  ret

AArch64 assembly after patch:

  ldp   d0, d1, [x0]
  fadd  v0.2s, v0.2s, v1.2s
  faddp s0, v0.2s
  ret

Reviewed Link : http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20141208/248531.html



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224119 91177308-0d34-0410-b5e6-96231b3b80d8
2014-12-12 12:53:44 +00:00