Commit Graph

6459 Commits

Pawel Bylica
59764b94a7 Constfold insertelement to undef when index is out-of-bounds
Summary:
This patch adds constant folding of the insertelement instruction to undef when the index operand is constant and is either undef or not less than the vector size.

InstCombine does not support this case, but I'm happy to add it there also if this change is accepted.
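
For illustration (not from the patch's own tests), a minimal IR sketch of
the fold with hypothetical constants:

  %v = insertelement <4 x i32> <i32 1, i32 2, i32 3, i32 4>, i32 9, i32 7
  ; the index 7 is not less than the vector size 4, so constant folding
  ; now replaces %v with undef (the same happens for an undef index)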

Test Plan: Unittests and regression tests for ConstProp pass.

Reviewers: majnemer

Reviewed By: majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9287

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235854 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-27 09:30:49 +00:00
Philip Reames
a404b6f421 [RewriteStatepointsForGC] Exclude constant values from being considered live at a safepoint
There can be various constant pointers in the IR which do not get relocated at a safepoint. One example is the address of a global variable. Another example is a pointer created via inttoptr. Note that the optimizer itself likes to create such inttoptrs when locally propagating constants through dynamically dead code.

To deal with this, we need to exclude uses of constants from contributing to the liveness of a safepoint which might reach that use. At some later date, it might be worth exploring what could be done to support the relocation of various special types of "constants", but that's future work.
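
A hedged sketch of the kind of value that is now excluded (hypothetical IR,
not taken from the patch):

  ; a constant pointer used after a statepoint, e.g. a ConstantExpr inttoptr
  call void @use(i8 addrspace(1)* inttoptr (i64 64 to i8 addrspace(1)*))
  ; such constants no longer count as live across the preceding safepoint,
  ; so no gc.relocate is emitted for them (likewise for global addresses)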

Differential Revision: http://reviews.llvm.org/D9236



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235821 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-26 19:48:03 +00:00
Philip Reames
83a049f2a6 Don't Place Entry Safepoints Before the llvm.frameescape() Intrinsic
The llvm.frameescape() intrinsic is not a real call. The intrinsic can only exist in the entry block. Inserting a gc.statepoint() before llvm.frameescape() may split the entry block and push the intrinsic out of the entry block.

Patch by: Swaroop.Sridhar@microsoft.com
Differential Revision: http://reviews.llvm.org/D8910




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235820 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-26 19:41:23 +00:00
Sanjay Patel
1111a216ee [x86] instcombine more cases of insertps into a shufflevector
This is a follow-on to D8833 (insertps optimization when the zero mask is not used).

In this patch, we check for the case where the zmask is used, but both input vectors
to the insertps intrinsic are the same operand or the zmask overrides the destination
lane. This lets us replace the 2nd shuffle input operand with the zero vector.

Differential Revision: http://reviews.llvm.org/D9257



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235810 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-25 20:55:25 +00:00
Hans Wennborg
7a301c1b8c SimplifyCFG: Correctly handle switch lookup tables which fully cover the input type and use bit tests to check for holes
When using bit tests for hole checks, we call AddPredecessorToBlock to give the
phi node a value from the bit test block. This would break if we've
previously called removePredecessor on the default destination because the
switch is fully covered.

Test case by Mark Lacey.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235771 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-24 20:57:56 +00:00
David Blaikie
e41f3849bc [opaque pointer type] Add textual IR support for explicit type parameter to the invoke instruction
Same as r235145 for the call instruction - the justification, tradeoffs,
etc are all the same. The conversion script worked the same without any
false negatives (after replacing 'call' with 'invoke').

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235755 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-24 19:32:54 +00:00
David Blaikie
8b6356c73e Skip extra LLVM IR assemble/disassemble steps in some tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235736 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-24 18:06:09 +00:00
Jingyue Wu
728ad0157c Resurrect r235688
We should skip vector types which are not SCEVable.

test/CodeGen/NVPTX/sched2.ll passes


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235695 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-24 04:22:39 +00:00
Jingyue Wu
f42450abb6 Revert r235688
Seems to be breaking builds


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235690 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-24 03:26:11 +00:00
Jingyue Wu
d83b3b1a8d [NVPTX] enable NaryReassociate in NVPTX
Summary:
We run NaryReassociate right after SLSR because SLSR enables many
opportunities for NaryReassociate. For example, in nary-slsr.ll

  foo((a + b) + c);
  foo((a + b * 2) + c);
  foo((a + b * 3) + c);   // 2 muls and 6 adds

after SLSR:

  ab = a + b;
  foo(ab + c);
  ab2 = ab + b;
  foo(ab2 + c);
  ab3 = ab2 + b;
  foo(ab3 + c);           // 6 adds

after NaryReassociate:

  abc = (a + b) + c;
  foo(abc);
  ab2c = abc + b;
  foo(ab2c);
  ab3c = ab2c + b;
  foo(ab3c);              // 4 adds

Test Plan: nary-slsr.ll

Reviewers: jholewinski, eliben

Reviewed By: eliben

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D9066

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235688 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-24 02:54:06 +00:00
Jingyue Wu
12f341611a [NVPTX] run SeparateConstOffsetFromGEP before SLSR
Summary:
We pick this order because SeparateConstOffsetFromGEP may create more
opportunities for SLSR.

Test Plan:
reassociate-geps-and-slsr.ll
no performance regression on internal benchmarks

Reviewers: meheff

Subscribers: llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D9230

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235632 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-23 20:00:04 +00:00
Karthik Bhat
7ab8b5573e Add support to interchange loops with reductions.
This patch enables interchanging of tightly nested loops with reductions.
Differential Revision: http://reviews.llvm.org/D8314


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235571 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-23 04:51:44 +00:00
David Majnemer
f9c92b069a [InstCombine] Use a more targeted fix instead of r235544
Only clear out the NSW/NUW flags if we are optimizing 'add'/'sub' while
taking advantage of the fact that the sign bit is not set.  We do this
optimization to further shrink the mask, but shrinking the mask isn't
NSW/NUW preserving in this case.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235558 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-22 22:42:05 +00:00
David Majnemer
3bd87826e5 [InstCombine] Clear out nsw/nuw if we modify computation in the chain
An nsw/nuw operation relies on the values feeding into it to not
overflow if 'poison' is not to be produced.  This means that
optimizations which make modifications to the bottom of a chain (like
SimplifyDemandedBits) must strip out nsw/nuw if they cannot ensure that
they will be preserved.

This fixes PR23309.
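
A hedged illustration of the hazard (hypothetical values, not the PR23309
test case):

  %a = and i32 %x, 255      ; %a is known to be in [0, 255]
  %b = add nsw i32 %a, 1    ; nsw is justified only by that range
  ; if SimplifyDemandedBits rewrites the computation feeding %b so that the
  ; no-overflow guarantee can no longer be shown, the nsw must be dropped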

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235544 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-22 20:59:28 +00:00
NAKAMURA Takumi
bc4233f437 Remove a zero-length file of llvm/test/Transforms/InstCombine/descale-zero.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235457 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-21 23:14:33 +00:00
Wei Mi
ef67950b62 Limiting gep merging to fix the performance problem described in
https://llvm.org/bugs/show_bug.cgi?id=23163.

Gep merging sometimes behaves like a reverse CSE/LICM optimization,
which has negative impact on performance. In this patch we restrict
gep merging to happen only when the indexes to be merged are both consts,
which ensures such merge is always beneficial.
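
For illustration, a hedged sketch (hypothetical IR) of a merge that is still
permitted because both indexes are constants:

  %g1 = getelementptr i32* %p, i64 4
  %g2 = getelementptr i32* %g1, i64 8
  ; constant indexes can simply be added, so the pair may still be merged:
  %g  = getelementptr i32* %p, i64 12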

The patch makes gep merging only happen in very restrictive cases.
It is possible that some analysis/optimization passes rely on the merged
geps to get better results, and we haven't noticed them yet. We will be ready
to further improve it once we see such cases.

Differential Revision: http://reviews.llvm.org/D8911


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235455 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-21 23:02:15 +00:00
Wei Mi
480fc70c43 Revert r235451 since it is attached to a wrong Differential Revision. Sorry.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235453 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-21 22:56:09 +00:00
Wei Mi
73a5fa9ad6 Limiting gep merging to fix the performance problem described in
https://llvm.org/bugs/show_bug.cgi?id=23163.

Gep merging sometimes behaves like a reverse CSE/LICM optimization,
which has negative impact on performance. In this patch we restrict
gep merging to happen only when the indexes to be merged are both consts,
which ensures such merge is always beneficial.

The patch makes gep merging only happen in very restrictive cases.
It is possible that some analysis/optimization passes rely on the merged
geps to get better results, and we haven't noticed them yet. We will be ready
to further improve it once we see such cases.

Differential Revision: http://reviews.llvm.org/D9007


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235451 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-21 22:37:09 +00:00
Ahmed Bougacha
0f32a037ef [MemCpyOpt] Use the raw i8* dest when optimizing memset+memcpy.
MemIntrinsic::getDest() looks through pointer casts, and using it
directly when building the new GEP+memset results in stuff like:

  %0 = getelementptr i64* %p, i32 16
  %1 = bitcast i64* %0 to i8*
  call ..memset(i8* %1, ...)

instead of the correct:

  %0 = bitcast i64* %p to i8*
  %1 = getelementptr i8* %0, i32 16
  call ..memset(i8* %1, ...)

Instead, use getRawDest, which just gives you the i8* value.
While there, use the memcpy's dest, as it's live anyway.

In most cases, when the optimization triggers, the memset and memcpy
sizes are the same, so the built memset is 0-sized and eliminated.
The problem occurs when they're different.

Fixes a regression caused by r235232: PR23300.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235419 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-21 21:28:33 +00:00
Jingyue Wu
423476d899 [SLSR] garbage-collect unused instructions
Summary:
After we rewrite a candidate, the instructions used by the old form may
become unused. This patch cleans up these unused instructions so that we
needn't run DCE after SLSR.

Test Plan: removed -dce in all the SLSR tests

Reviewers: broune, meheff

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9101

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235410 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-21 19:56:18 +00:00
Jingyue Wu
412b6fc7b9 [SeparateConstOffsetFromGEP] garbage-collect intermediate instructions
Summary: so that we needn't run DCE after this pass.

Test Plan: removed -dce from the commandline in split-gep.ll and split-gep-and-gvn.ll

Reviewers: meheff

Subscribers: llvm-commits, HaoLiu, hfinkel, jholewinski

Differential Revision: http://reviews.llvm.org/D9096

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235409 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-21 19:53:18 +00:00
Fiona Glaser
b5750565de InstCombine: fold (sitofp (zext x)) to (uitofp x)
This is okay because the zext guarantees the high bit is zero,
and so the value is non-negative even when interpreted as signed.
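
A minimal sketch of the equivalence (hypothetical types and values):

  %z = zext i16 %x to i32
  %f = sitofp i32 %z to float
  ; the sign bit of %z is known zero, so the signed and unsigned
  ; interpretations agree and the conversion can instead be emitted as:
  %f2 = uitofp i32 %z to float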


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235364 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-21 00:05:41 +00:00
Akira Hatanaka
ca9313e65e [InlineFunction] Don't add lifetime markers for zero-sized allocas.
This commit fixes the code which adds lifetime markers in InlineFunction to skip
zero-sized allocas instead of asserting on them.
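
A hedged sketch of the case being handled (hypothetical IR):

  %buf = alloca i8, i32 0    ; zero-sized alloca in the callee
  ; InlineFunction now skips emitting llvm.lifetime.start/end markers for
  ; %buf when inlining, instead of hitting an assertion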

rdar://problem/20531155


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235312 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-20 16:11:05 +00:00
Ahmed Bougacha
3a65b111b5 [MemCpyOpt] Don't force i64 when promoting memset/memcpy sizes.
Harden r235258 to support any integer bitwidth.  A quick glance at
the reference made me think only i32 and i64 were valid types, but
they're not special, so any overload is legal.

Thanks to David Majnemer for noticing!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235261 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-18 23:06:04 +00:00
Ahmed Bougacha
ebb4371478 [MemCpyOpt] Promote both memset/memcpy sizes if differently typed.
Followup to r235232, which caused PR23278.

We can't assume the memset and memcpy sizes have the same type, as
nothing in the language reference prevents that.
Instead, zext both to i64 if they disagree.

While there, robustify tests by using i8 %c rather than i8 0 for the
memset character.
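
A hedged sketch of the differently-typed case (hypothetical IR, assuming the
memset/memcpy intrinsic signatures of this era with an explicit alignment
argument):

  call void @llvm.memset.p0i8.i32(i8* %dst, i8 %c, i32 %n1, i32 1, i1 false)
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %n2, i32 1, i1 false)
  ; %n1 is i32 and %n2 is i64; both sizes are now zext'ed to i64 before the
  ; trailing memset size is computed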


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235258 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-18 17:57:41 +00:00
David Majnemer
c5edbea4e7 [InstCombine] (mul nsw 1, INT_MIN) != (shl nsw 1, 31)
Multiplying INT_MIN by 1 doesn't trigger nsw.  However, shifting 1 into
the sign bit *does* trigger nsw.
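
A short sketch of the distinction (hypothetical IR):

  %m = mul nsw i32 1, -2147483648   ; fine: 1 * INT_MIN does not overflow
  %s = shl nsw i32 1, 31            ; violates nsw: 1 is shifted into the sign bit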

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235250 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-18 04:41:30 +00:00
Ahmed Bougacha
7349765ddb [MemCpyOpt] Optimize double-storing by memset+memcpy.
A common idiom in some code is to do the following:

  memset(dst, 0, dst_size);
  memcpy(dst, src, src_size);

Some of the memset is redundant; instead, we can do:

  memcpy(dst, src, src_size);
  memset(dst + src_size, 0,
         dst_size <= src_size ? 0 : dst_size - src_size);

Original patch by: Joel Jones
Differential Revision: http://reviews.llvm.org/D498


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235232 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 22:20:57 +00:00
Jingyue Wu
f182c81444 [NaryReassociate] run NaryReassociate iteratively
Summary:
An alternative is to use a worklist approach. However, that approach
would break the traversal order, so we couldn't look up SeenExprs
efficiently. I don't see a clear winner here, so I picked the easier approach.

Along with two minor improvements:
1. preserves ScalarEvolution by forgetting the instructions it replaces
2. removes dead code locally, avoiding the need to run DCE afterwards

Test Plan: add to slsr-add.ll a test that requires multiple iterations

Reviewers: broune, dberlin, atrick, meheff

Reviewed By: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9058

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235151 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 00:25:10 +00:00
David Blaikie
32b845d223 [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction
See r230786 and r230794 for similar changes to gep and load
respectively.

Call is a bit different because it often doesn't have a single explicit
type - usually the type is deduced from the arguments, and just the
return type is explicit. In those cases there's no need to change the
IR.

When that's not the case, the IR usually contains the pointer type of
the first operand - but since typed pointers are going away, that
representation is insufficient so I'm just stripping the "pointerness"
of the explicit type away.

This does make the IR a bit weird - it /sort of/ reads like the type of
the first operand: "call void () %x(" but %x is actually of type "void
()*" and will eventually be just of type "ptr". But this seems not too
bad and I don't think it would benefit from repeating the type
("void (), void () * %x(" and then eventually "void (), ptr %x(") as has
been done with gep and load.

This also has a side benefit: since the explicit type is no longer a
pointer, there's no ambiguity between an explicit type and a function
that returns a function pointer. Previously this case needed an explicit
type (eg: a function returning a void() function was written as
"call void () () * @x(" rather than "call void () * @x(" because of the
ambiguity between a function returning a pointer to a void() function
and a function returning void).

No ambiguity means even function pointer return types can just be
written alone, without writing the whole function's type.

This leaves /only/ the varargs case where the explicit type is required.

Given the special type syntax in call instructions, the regex-fu used
for migration was a bit more involved in its own unique way (as every
one of these is) so here it is. Use it in conjunction with the apply.sh
script and associated find/xargs commands I've provided in r230786 to
migrate your out of tree tests. Do let me know if any of this doesn't
cover your cases & we can iterate on a more general script/regexes to
help others with out of tree tests.

About 9 test cases couldn't be automatically migrated - half of those
were functions returning function pointers, where I just had to manually
delete the function argument types now that we didn't need an explicit
function type there. The other half were typedefs of function types used
in calls - just had to manually drop the * from those.

import fileinput
import sys
import re

pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
func_end = re.compile("(?:void.*|\)\s*)\*$")

def conv(match, line):
  if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
    return line
  return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]

for line in sys.stdin:
  sys.stdout.write(conv(re.search(pat, line), line))

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235145 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 23:24:18 +00:00
Jingyue Wu
feecc904c4 [NaryReassociate] speeds up candidate searching
Summary:
This fixes a left-over efficiency issue in D8950.

As Andrew and Daniel suggested, we can store the candidates in a stack
and pop the top element when it does not dominate the current
instruction. This reduces the worst-case time complexity to O(n).

Test Plan: a new test in nary-add.ll that exercises this optimization.

Reviewers: broune, dberlin, meheff, atrick

Reviewed By: atrick

Subscribers: llvm-commits, sanjoy

Differential Revision: http://reviews.llvm.org/D9055

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235129 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 18:42:31 +00:00
Sanjay Patel
81b61c0e50 [X86, SSE] instcombine common cases of insertps intrinsics into shuffles
This is very similar to D8486 / r232852 (vperm2). If we treat insertps intrinsics
as shufflevectors, we can optimize them better.

I've left all but the full zero case of the zero mask variants out of this patch. 
I don't think those can be converted into a single shuffle in all cases, but I'd
be happy to be proven wrong as I was for vperm2f128.

Either way, we'd need to support whatever sequence we come up with for those cases
in the backend before converting them here.
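
As a hedged illustration of the full-zero zmask case (assuming the
<4 x float>, <4 x float>, i8 form of the intrinsic):

  %r = call <4 x float> @llvm.x86.sse41.insertps(<4 x float> %a, <4 x float> %b, i8 15)
  ; a zmask of 0b1111 zeroes every destination lane, so %r can be replaced
  ; with a zero vector regardless of %a and %b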

Differential Revision: http://reviews.llvm.org/D8833



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235124 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 17:52:13 +00:00
Duncan P. N. Exon Smith
88e419d66e DebugInfo: Remove 'inlinedAt:' field from MDLocalVariable
Remove 'inlinedAt:' from MDLocalVariable.  Besides saving some memory
(variables with it seem to be single largest `Metadata` contributer to
memory usage right now in -g -flto builds), this stops optimization and
backend passes from having to change local variables.

The 'inlinedAt:' field was used by the backend in two ways:

 1. To tell the backend whether and into what a variable was inlined.
 2. To create a unique id for each inlined variable.

Instead, rely on the 'inlinedAt:' field of the intrinsic's `!dbg`
attachment, and change the DWARF backend to use a typedef called
`InlinedVariable` which is `std::pair<MDLocalVariable*, MDLocation*>`.
This `DebugLoc` is already passed reliably through the backend (as
verified by r234021).

This commit removes the check from r234021, but I added a new check
(that will survive) in r235048, and changed the `DIBuilder` API in
r235041 to require a `!dbg` attachment whose 'scope:' is in the same
`MDSubprogram` as the variable's.

If this breaks your out-of-tree testcases, perhaps the script I used
(mdlocalvariable-drop-inlinedat.sh) will help; I'll attach it to PR22778
in a moment.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235050 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 22:29:27 +00:00
Duncan P. N. Exon Smith
666ef776b3 DebugInfo: Add missing !dbg attachments to intrinsics
Add missing `!dbg` attachments to `@llvm.dbg.*` intrinsics.  I updated
these using a script (add-dbg-to-intrinsics.sh) that I'll attach to
PR22778 for posterity.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235040 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 21:04:10 +00:00
Jingyue Wu
9b8a9f1a9c [NFC] [SLSR] clean up some tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235021 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 17:14:03 +00:00
Jingyue Wu
d4ceea3837 [SLSR] handle candidate form (B + i * S)
Summary:
With this patch, SLSR may rewrite

S1: X = B + i * S
S2: Y = B + i' * S

to

S2: Y = X + (i' - i) * S

A secondary improvement: if (i' - i) is a power of 2, emit Y as X + (S << log(i' - i)). (S << log(i' - i)) is in a canonical form and thus more likely to be GVN'ed than (i' - i) * S.

Test Plan: slsr-add.ll

Reviewers: hfinkel, sanjoy, meheff, broune, eliben

Reviewed By: eliben

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8983

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235019 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 16:46:13 +00:00
Reid Kleckner
ecc4595ce4 [Inliner] Don't inline functions with frameescape calls
Inlining such intrinsics is very difficult, since you need to
simultaneously transform many calls to llvm.framerecover and potentially
duplicate the functions containing them.  Normally this intrinsic isn't
added until EH preparation, which is part of the backend pass pipeline
after inlining.  However, if it does get fed through the inliner, this
change ensures that it doesn't break the code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234937 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-14 20:38:14 +00:00
Jingyue Wu
9cecacd16a Simplify n-ary adds by reassociation
Summary:
This transformation reassociates an n-ary add so that the add can partially reuse
existing instructions. For example, this pass can simplify

  void foo(int a, int b) {
    bar(a + b);
    bar((a + 2) + b);
  }

to

  void foo(int a, int b) {
    int t = a + b;
    bar(t);
    bar(t + 2);
  }

saving one add instruction.

Fixes PR22357 (https://llvm.org/bugs/show_bug.cgi?id=22357).

Test Plan: nary-add.ll

Reviewers: broune, dberlin, hfinkel, meheff, sanjoy, atrick

Reviewed By: sanjoy, atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8950

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234855 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-14 04:59:22 +00:00
Sanjoy Das
888e8e3a66 [LoopUnrollRuntime] Avoid high-cost trip count computation.
Summary:
Runtime unrolling of loops needs to emit an expression to compute the
loop's runtime trip-count.  Avoid runtime unrolling if this computation
will be expensive.

Depends on D8993.

Reviewers: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8994

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234846 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-14 03:20:38 +00:00
Sanjoy Das
17e08f50b9 [SCEV] Strengthen SCEVExpander::isHighCostExpansion.
Summary:

Teach `isHighCostExpansion` to consider divisions by power-of-two
constants as cheap and add a test case.  This change is needed for a new
user of `isHighCostExpansion` that will be added in a subsequent change.
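
A minimal sketch of the newly-cheap case (hypothetical IR):

  %tc = udiv i64 %n, 16
  ; a udiv by a power-of-two constant lowers to a shift, so an expansion
  ; containing it is no longer considered high cost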

Depends on D8995.

Reviewers: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8993

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234845 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-14 03:20:32 +00:00
Nick Lewycky
d4b4e3e20f Subtraction is not commutative. Fixes PR23212!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234780 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-13 19:17:37 +00:00
Akira Hatanaka
a50a335767 [inliner] Don't inline a function if it doesn't have exactly the same
target-cpu and target-features attribute strings as the caller.
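
A hedged sketch of the situation now rejected (hypothetical attribute
values):

  define void @callee() #0 {
    ret void
  }
  define void @caller() #1 {
    call void @callee()      ; not inlined: the attribute strings differ
    ret void
  }
  attributes #0 = { "target-cpu"="x86-64" "target-features"="+sse2" }
  attributes #1 = { "target-cpu"="x86-64" "target-features"="+avx2" }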

Differential Revision: http://reviews.llvm.org/D8984


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234773 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-13 18:43:38 +00:00
Sanjoy Das
e0f4a11a89 [LoopUnrollRuntime] Clean up a predicate.
Clean up a predicate I added in r229731, fix the relevant comment and
add a test case.  The earlier version is confusing to read and was also
buggy (probably not a coincidence) till Alexey fixed it in r233881.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234701 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-12 01:24:01 +00:00
Philip Reames
f894c7ecb2 [RewriteStatepointsForGC] test case missing from 234657
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234658 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-10 22:58:39 +00:00
Philip Reames
d92e9ef170 [RewriteStatepointsForGC] Use an actual liveness algorithm
When rewriting statepoints to make relocations explicit, we need to have a conservative but consistent notion of where a particular pointer is live at a particular site. The old code just used dominance, which is correct, but decidedly more conservative than it needed to be. This patch implements a simple dataflow algorithm that's run once per function (well, twice counting fixup after base pointer insertion). There's still lots of room to make this faster, but it's fast enough for all practical purposes today.

Differential Revision: http://reviews.llvm.org/D8674



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234657 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-10 22:53:14 +00:00
Philip Reames
82374f6b8a [RewriteStatepointsForGC] Preprocess the IR to remove unreachable blocks and single entry phis
Two related small changes:

    Various dominance based queries about liveness can get confused if we're talking about unreachable blocks. To avoid reasoning about such cases, just remove them before rewriting statepoints.
    Remove single entry phis (likely left behind by LCSSA) to reduce the number of live values.

Both of these are motivated by http://reviews.llvm.org/D8674 which will be submitted shortly.

Differential Revision: http://reviews.llvm.org/D8675



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234651 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-10 22:07:04 +00:00
Philip Reames
cb148c8541 [RewriteStatepointsForGC] Limited support for vectors of pointers
This patch adds limited support for inserting explicit relocations when there's a vector of pointers live over the statepoint. This doesn't handle the case where the vector contains a mix of base and non-base pointers; that's future work.

The current implementation just scalarizes the vector over the gc.statepoint before doing the explicit rewrite. An alternate approach would be to plumb the vector all the way through the backend lowering, but doing that appears challenging. In particular, the size of the indirect spill slot is currently assumed to be sizeof(pointer) throughout the backend.

In practice, this is enough to allow running the SLP and Loop vectorizers before RewriteStatepointsForGC.

Differential Revision: http://reviews.llvm.org/D8671



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234647 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-10 21:48:25 +00:00
Sanjoy Das
8aca90e5b6 [InstCombine][CodeGenPrep] Create llvm.uadd.with.overflow in CGP.
Summary:
This change moves creating calls to `llvm.uadd.with.overflow` from
InstCombine to CodeGenPrep.  Combining overflow check patterns into
calls to the said intrinsic in InstCombine inhibits optimization because
it introduces an intrinsic call that not all other transforms and
analyses understand.
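
A hedged sketch of the kind of pattern involved (hypothetical IR):

  %sum = add i32 %a, %b
  %ovf = icmp ult i32 %sum, %a     ; classic unsigned overflow check
  ; CodeGenPrep, rather than InstCombine, now turns such patterns into:
  %res = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %a, i32 %b)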

Depends on D8888.

Reviewers: majnemer, atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8889

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234638 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-10 21:07:09 +00:00
Jingyue Wu
10483b93ea [SLSR] consider &B[S << i] as &B[(1 << i) * S]
Summary: This reduces handling &B[(1 << i) * S] to handling &B[i * S].
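
A hedged sketch (hypothetical IR) of the address form now recognized:

  %idx = shl i64 %S, %i
  %p   = getelementptr double* %B, i64 %idx
  ; &B[S << i] is rewritten internally as &B[(1 << i) * S], the
  ; multiplication form that SLSR already knows how to handle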

Test Plan: slsr-gep.ll

Reviewers: meheff

Subscribers: sanjoy, llvm-commits

Differential Revision: http://reviews.llvm.org/D8837

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234180 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-06 17:15:48 +00:00
David Majnemer
33b9ae320a Fix a typo
CHECK-LABEL had the wrong function name.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234051 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 20:56:24 +00:00
David Majnemer
701c2fca7e [InstCombine] Use DataLayout to determine vector element width
InstCombine didn't realize that it needs to use DataLayout to determine
how wide pointers are.  This led to assertion failures.

This fixes PR23113.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234046 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 20:18:40 +00:00