Commit Graph

5399 Commits

Author SHA1 Message Date
Eric Christopher
0181303087 Add a debug info code generation level to the compile unit metadata
and update everything accordingly. This can be used to conditionalize
the amount of output in the backend based on the amount of debug info
requested and the metadata emission scheme used by a front end (e.g. clang).

Paired with a commit to clang.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202332 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-27 01:24:56 +00:00
Nico Rieck
06eab82006 Fix broken FileCheck prefixes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202308 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 22:29:11 +00:00
Reid Kleckner
726bae9a66 GlobalOpt: Apply fastcc to internal x86_thiscallcc functions
We should apply fastcc whenever profitable.  We can expand this list,
but there are lots of conventions with performance implications that we
don't want to change.

Differential Revision: http://llvm-reviews.chandlerc.com/D2705

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202293 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 19:57:30 +00:00
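An illustrative C++ shape of the case this targets (hypothetical, not from the commit): on 32-bit Windows a non-inlined method of an internal class lowers to an internal x86_thiscallcc function, and with every call site visible in the module GlobalOpt may retarget it to fastcc.

    namespace {                          // internal linkage: all callers are visible
    struct Counter {
      int n = 0;
      void bump(int d);                  // x86_thiscallcc when built for 32-bit Windows
    };
    void Counter::bump(int d) { n += d; }
    }

    int run() {
      Counter c;
      c.bump(3);                         // direct call; eligible to be switched to fastcc
      return c.n;
    }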
Nico Rieck
5732fbd6e4 Fix broken FileCheck prefix
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202291 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 19:51:08 +00:00
Andrew Trick
401d35bedb Fix PR18165: LSR must avoid scaling factors that exceed the limit on truncated use.
Patch by Michael Zolotukhin!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202273 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 16:31:56 +00:00
Chandler Carruth
2b442bffcb [SROA] Use the correct index integer size in GEPs through non-default
address spaces.

This isn't really a correctness issue (the values are truncated) but it's
much cleaner.

Patch by Matt Arsenault!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202252 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 10:08:16 +00:00
Chandler Carruth
5b95cec37c [SROA] Teach SROA how to handle pointers from address spaces other than
the default.

Based on the patch by Matt Arsenault, D1764!

I switched one place to use the more direct pointer type to compute the
desired address space, and I reworked the memcpy rewriting section to
reflect significant refactorings that this patch helped inspire.

Thanks to several of the folks who helped review and improve the patch
as well.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202247 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 08:25:02 +00:00
Chandler Carruth
38e90e3de1 [SROA] Split the alignment computation completely for the memcpy rewriting
to work independently for the slice side and the other side.

This allows us to only compute the minimum of the two when we actually
rewrite to a memcpy that needs to take the minimum, and preserve higher
alignment for one side or the other when rewriting to loads and stores.

This fix was inspired by seeing the result of some refactoring that
makes addrspace handling better.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202242 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 07:29:54 +00:00
Chandler Carruth
50bc165c54 [SROA] Fix PR18615 with some long overdue simplifications to the bounds
checking in SROA.

The primary change is to just rely on uge for checking that the offset
is within the allocation size. This removes the explicit checks against
isNegative which were terribly error prone (including the reversed logic
that led to PR18615) and prevented us from supporting stack allocations
larger than half the address space.... Ok, so maybe the latter isn't
*common* but it's a silly restriction to have.

Also, we used to try to support a PHI node which loaded from before the
start of the allocation if any of the loaded bytes were within the
allocation. This doesn't make any sense; we have never really supported
loading or storing *before* the allocation starts. The simplified logic
just doesn't care.

We continue to allow loading past the end of the allocation in part to
support cases where there is a PHI and some loads are larger than others
and the larger ones reach past the end of the allocation. We could solve
this a different and more conservative way, but I'm still somewhat
paranoid about this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202224 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-26 03:14:14 +00:00
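A minimal sketch of the unsigned-comparison idea (not SROA's actual code; names are hypothetical): an offset that would be negative wraps to a huge unsigned value, so one uge-style check rejects both negative and too-large offsets without a separate isNegative test.

    #include <cstdint>

    // Is the slice [Offset, Offset + Size) outside the allocation [0, AllocSize)?
    bool sliceOutOfBounds(int64_t Offset, uint64_t Size, uint64_t AllocSize) {
      uint64_t Begin = static_cast<uint64_t>(Offset);   // negative wraps to huge
      return Begin >= AllocSize || Size > AllocSize - Begin;
    }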
Chandler Carruth
c537759a7f [SROA] Fix another instability in SROA with respect to the slice
ordering.

The fundamental problem that we're hitting here is that the use-def
chain ordering is *itself* not a stable thing to be relying on in the
rewriting for SROA. Further, we use a non-stable sort over the slices to
arrange them based on the section of the alloca they're operating on.
With a debugging STL implementation (or different implementations in
stage2 and stage3) this can cause stage2 != stage3.

The specific aspect of this problem fixed in this commit deals with the
rewriting and load-speculation around PHIs and Selects. This, like many
other aspects of the use-rewriting in SROA, is really part of the
"strong SSA-formation" that is doen by SROA where it works very hard to
canonicalize loads and stores in *just* the right way to satisfy the
needs of mem2reg[1]. When we have a select (or a PHI) with 2 uses of the
same alloca, we test twice whether loads downstream of the select are
speculatable around it. If only one of the operands to the select
needs to be rewritten, then if we get lucky we rewrite that one first
and the select is immediately speculatable. This can cause the order of
operand visitation, and thus the order of slices to be rewritten, to
change an alloca from promotable to non-promotable and vice versa.

The fix is to defer all of the speculation until *after* the rewrite
phase is done. Once we've rewritten everything, we can accurately test
for whether speculation will work (once, instead of twice!) and the
order ceases to matter.

This also happens to simplify the other subtlety of speculation -- we
need to *not* speculate anything unless the result of speculating will
make the alloca fully promotable by mem2reg. I had a previous attempt at
simplifying this, but it was still pretty horrible.

There is actually already a *really* nice test case for this in
basictest.ll, but on multiple STL implementations and inputs, we just
got "lucky". Fortunately, the test case is very small and we can
essentially build it in exactly the opposite way to get reasonable
coverage in both directions even from normal STL implementations.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202092 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-25 00:07:09 +00:00
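For illustration only (the commit's actual fix is the deferred speculation described above): the hazard with a non-stable sort is that the relative order of slices that compare equal is unspecified and can differ between STL implementations, which is exactly how stage2 and stage3 can diverge. A stable sort, or a comparator that is a total order, removes that freedom.

    #include <algorithm>
    #include <vector>

    struct Slice { unsigned Begin = 0, End = 0; };   // hypothetical slice record

    void sortSlices(std::vector<Slice> &Slices) {
      // std::sort may order equal-Begin slices differently on different STLs;
      // std::stable_sort keeps their original relative order.
      std::stable_sort(Slices.begin(), Slices.end(),
                       [](const Slice &A, const Slice &B) { return A.Begin < B.Begin; });
    }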
Arnold Schwaighofer
68085c7bef SLPVectorizer: Try vectorizing 'splat' stores
Vectorize sequential stores of a broadcasted value.
5% on eon.

radar://16124699

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202067 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-24 19:52:29 +00:00
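The kind of source pattern this targets, sketched in C++ (illustrative only): sequential stores of one broadcast value that SLP can fold into a single vector store of a splat.

    // Four adjacent stores of the same value -> one splat vector store.
    void fill4(int *p, int v) {
      p[0] = v;
      p[1] = v;
      p[2] = v;
      p[3] = v;
    }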
Nick Lewycky
0aabe661a4 Make sure that value handle users see the transformation of an indirect call to a direct call. This is important for the CallGraph iteration. Patch by Björn Steinbrink!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201822 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-20 23:00:15 +00:00
Tim Northover
1f55e40aa5 X86: move test requiring X86TargetLowering info into its own directory
If LLVM is built without X86 as a supported target then the test would
mysteriously fail.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201668 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-19 12:24:19 +00:00
Tim Northover
a5d63e5a30 Try adding a datalayout in case that's what Hexagon doesn't like.
Just a wild stab in the dark really, but in the absence of any ability to
reproduce the problem...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201658 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-19 10:32:40 +00:00
Tim Northover
44697f3fc1 X86 CodeGenPrep: sink shufflevectors before shifts
On x86, shifting a vector by a scalar is significantly cheaper than shifting a
vector by another fully general vector. Unfortunately, because SelectionDAG
operates on just one basic block at a time, the shufflevector instruction that
reveals whether the right-hand side of a shift *is* really a scalar is often
not visible to CodeGen when it's needed.

This adds another handler to CodeGenPrepare, to sink any useful shufflevector
instructions down to the basic block where they're used, predicated on a target
hook (since on other architectures, doing so will often just introduce extra
real work).

rdar://problem/16063505

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201655 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-19 10:02:43 +00:00
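A hedged sketch of the shape involved, using Clang vector extensions: the splat of the scalar shift amount becomes an insertelement plus shufflevector in IR, and if it sits in a different basic block than the shift, SelectionDAG cannot see that the shift amount is really a scalar unless CodeGenPrepare sinks the shufflevector first.

    typedef int v4si __attribute__((vector_size(16)));

    v4si shiftIfNeeded(v4si v, int s, bool doShift) {
      v4si amt = {s, s, s, s};    // splat: lowered to a shufflevector
      if (doShift)
        return v << amt;          // shift lives in a different basic block
      return v;
    }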
Gerolf Hoflehner
3bc859466b fix for null VectorizedValue assertion in the SLP Vectorizer (in function vectorizeTree()). radar://16064178
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201501 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-17 03:06:16 +00:00
Arnold Schwaighofer
2ced33808e SCEVExpander: Try hard not to create derived induction variables in other loops
During LSR of one loop we can run into a situation where we have to expand the
start of a recurrence of a loop induction variable in this loop. This start
value is a value derived from the induction variable of a preceding loop. SCEV
has canonicalized this value to a different recurrence than the recurrence of
the preceding loop's induction variable (the type and/or step direction has
changed). When we come to instantiate this SCEV we would create a second
induction variable in this preceding loop. This patch tries to base such
derived induction variables on the preceding loop's induction variable.

This helps twolf on arm and seems to help scimark2 on x86.

Reapply with a fix for the case of a value derived from a pointer.

radar://15970709

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201496 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-16 15:49:50 +00:00
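A hypothetical source-level shape of the situation (illustrative only): the second loop's start value is derived from the first loop's induction variable, and when LSR expands that start via SCEV it should reuse the first loop's induction variable rather than create a second one in that loop.

    void touch(int *a, int *b, int n, int m) {
      int i = 0;
      for (; i < n; ++i)
        a[i] = 0;
      // The start value 4 * i is derived from the preceding loop's IV.
      for (int j = 4 * i; j < m; j += 4)
        b[j] = 1;
    }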
Nico Rieck
268e96a8a6 Fix broken CHECK lines
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201479 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-16 07:31:05 +00:00
Arnold Schwaighofer
a9db46bf3e Revert "SCEVExpander: Try hard not to create derived induction variables in other loops"
This reverts commit r201465. It broke an arm bot.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201466 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-15 18:16:56 +00:00
Arnold Schwaighofer
e672548602 SCEVExpander: Try hard not to create derived induction variables in other loops
During LSR of one loop we can run into a situation where we have to expand the
start of a recurrence of a loop induction variable in this loop. This start
value is a value derived from the induction variable of a preceding loop. SCEV
has canonicalized this value to a different recurrence than the recurrence of
the preceding loop's induction variable (the type and/or step direction has
changed). When we come to instantiate this SCEV we would create a second
induction variable in this preceding loop. This patch tries to base such
derived induction variables on the preceding loop's induction variable.

This helps twolf on arm and seems to help scimark2 on x86.

radar://15970709

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201465 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-15 17:11:56 +00:00
Matt Arsenault
f222ebe86c Do more addrspacecast transforms that happen for bitcast.
Makes addrspacecast (gep) do addrspacecast (gep) instead.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201376 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-14 00:49:12 +00:00
Daniel Sanders
38c6b58eec Re-commit: Demote EmitRawText call in AsmPrinter::EmitInlineAsm() and remove hasRawTextSupport() call
Summary:
AsmPrinter::EmitInlineAsm() will no longer use the EmitRawText() call for
targets with mature MC support. Such targets will always parse the inline
assembly (even when emitting assembly). Targets without mature MC support
continue to use EmitRawText() for assembly output.

The hasRawTextSupport() check in AsmPrinter::EmitInlineAsm() has been replaced
with MCAsmInfo::UseIntegratedAs which, when true, causes the integrated assembler
to parse inline assembly (even when emitting assembly output). UseIntegratedAs
is set to true for targets that consider any failure to parse valid assembly
to be a bug. Target specific subclasses generally enable the integrated
assembler in their constructor. The default value can be overridden with
-no-integrated-as.

All tests that rely on inline assembly supporting invalid assembly (for example,
those that use mnemonics such as 'foo' or 'hello world') have been updated to
disable the integrated assembler.

Changes since review (and last commit attempt):
- Fixed test failures that were missed due to configuration of local build.
  (fixes crash.ll and a couple others).
- Fixed tests that happened to pass because the local build was on X86
  (should fix 2007-12-17-InvokeAsm.ll)
- The mature-mc-support.ll tests should no longer require all targets to be compiled.
  (should fix ARM and PPC buildbots)
- Object output (-filetype=obj and similar) now forces the integrated assembler
  to be enabled regardless of default setting or -no-integrated-as.
  (should fix SystemZ buildbots)

Reviewers: rafael

Reviewed By: rafael

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2686



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201333 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-13 14:44:26 +00:00
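As the commit notes, tests whose inline assembly is deliberately invalid (mnemonics like 'foo') now fail to parse on mature-MC targets even when only emitting assembly. An illustrative example of the kind of inline asm that now needs the -no-integrated-as flag:

    // 'foo' is not a real mnemonic; with the integrated assembler enabled this
    // fails to parse unless the compile uses -no-integrated-as.
    void emitBogusAsm() {
      __asm__("foo %eax");
    }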
Reid Kleckner
2798b77586 GlobalOpt: Aliases don't have sections, don't copy them when replacing
As defined in LangRef, aliases do not have sections.  However, LLVM's
GlobalAlias class inherits from GlobalValue, which means we can read and
set its section.  We should probably ban that as a separate change,
since it doesn't make much sense for an alias to have a section that
differs from its aliasee.

Fixes PR18757, where the section was being lost on the global in code
from Clang like:

extern "C" {
__attribute__((used, section("CUSTOM"))) static int in_custom_section;
}

Reviewers: rafael.espindola

Differential Revision: http://llvm-reviews.chandlerc.com/D2758

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201286 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-13 02:18:36 +00:00
Owen Anderson
3042a65e5f Remove a very old instcombine where we would turn sequences of selects into
logical operations on the i1's driving them.  This is a bad idea for every
target I can think of (confirmed with micro tests on all of: x86-64, ARM,
AArch64, Mips, and PowerPC) because it forces the i1 to be materialized into
a general purpose register, whereas consuming it directly into a select generally
allows it to exist only transiently in a predicate or flags register.

Chandler ran a set of performance tests with this change, and reported no
measurable change on x86-64.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201275 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-12 23:54:07 +00:00
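A source-level analogue of the removed canonicalization (the actual fold operated on IR selects over i1 values): a select chain over booleans was being rewritten as a logical operation, which forces the i1 into a general purpose register.

    // Kept as a select chain now that the fold is removed.
    bool selectForm(bool c1, bool c2) { return c1 ? true : c2; }

    // The logical-op form the old fold produced; same value, worse codegen here.
    bool logicalForm(bool c1, bool c2) { return c1 | c2; }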
Daniel Sanders
7580df334e Revert r201237+r201238: Demote EmitRawText call in AsmPrinter::EmitInlineAsm() and remove hasRawTextSupport() call
It introduced multiple test failures in the buildbots.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201241 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-12 15:39:20 +00:00
Daniel Sanders
57edb9588b Demote EmitRawText call in AsmPrinter::EmitInlineAsm() and remove hasRawTextSupport() call
Summary:
AsmPrinter::EmitInlineAsm() will no longer use the EmitRawText() call for targets with mature MC support. Such targets will always parse the inline assembly (even when emitting assembly). Targets without mature MC support continue to use EmitRawText() for assembly output.

The hasRawTextSupport() check in AsmPrinter::EmitInlineAsm() has been replaced with MCAsmInfo::UseIntegratedAs which, when true, causes the integrated assembler to parse inline assembly (even when emitting assembly output). UseIntegratedAs is set to true for targets that consider any failure to parse valid assembly to be a bug. Target specific subclasses generally enable the integrated assembler in their constructor. The default value can be overridden with -no-integrated-as.

All tests that rely on inline assembly supporting invalid assembly (for example, those that use mnemonics such as 'foo' or 'hello world') have been updated to disable the integrated assembler.

Reviewers: rafael

Reviewed By: rafael

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2686

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201237 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-12 14:44:54 +00:00
Benjamin Kramer
1e6240a85d InstCombine: Teach icmp merging about the equivalence of bit tests and UGE/ULT with a power of 2.
This happens in bitfield code. While there, reorganize the existing code
a bit.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201176 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-11 21:09:03 +00:00
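The equivalence being taught, written out for a concrete power of two (P = 8): an unsigned less-than against a power of two is the same predicate as a bit test of the bits at and above that power.

    // For any unsigned x: (x < 8) is the same as ((x & ~7u) == 0).
    bool asCompare(unsigned x) { return x < 8u; }
    bool asBitTest(unsigned x) { return (x & ~7u) == 0; }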
Chandler Carruth
8615ab4a4a [LPM] Switch LICM to actively use LCSSA in addition to preserving it.
Fixes PR18753 and PR18782.

This is necessary for LICM to preserve LCSSA correctly and efficiently.
There is still some active discussion about whether we should be using
LCSSA, but we can't just immediately stop using it and we *need* LICM to
preserve it while we are using it. We can restore the old SSAUpdater
driven code if and when there is a serious effort to remove the reliance
on LCSSA from all of the loop passes.

However, this also serves as a great example of why LCSSA is very nice
to have. This change significantly simplifies the process of sinking
instructions for LICM, and makes it quite a bit less expensive.

It wouldn't even be as complex as it is except that I had to start the
process of removing the big recursive LCSSA formation hammer in order to
switch even this much of the re-forming code to asserting that LCSSA was
preserved. I'll fully remove that next just to tidy things up until the
LCSSA debate settles one way or the other.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201148 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-11 12:52:27 +00:00
Arnold Schwaighofer
846acbeef1 LoopVectorizer: Keep track of conditional store basic blocks
Before conditional store vectorization/unrolling we had only one
vectorized/unrolled basic block. After adding support for conditional store
vectorization this will no longer be one block but multiple basic blocks. The
last block would have the back-edge. I updated the code to use a vector of basic
blocks instead of a single basic block and fixed the users to use the last entry
in this vector. But, I forgot to add the basic blocks to this vector!

Fixes PR18724.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201028 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-08 20:41:13 +00:00
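For context, an illustrative loop with a conditional store: once conditional stores are vectorized or unrolled, each such store introduces extra basic blocks in the emitted loop, which is why the vectorizer has to record all of them rather than a single block.

    void copyPositives(int *dst, const int *src, int n) {
      for (int i = 0; i < n; ++i)
        if (src[i] > 0)
          dst[i] = src[i];   // store executes only on some iterations
    }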
Juergen Ributzka
6f1819f2e6 [Constant Hoisting] Fix insertion point for constant materialization.
The bitcast instruction during constant materialization was not placed correctly
in the presence of phi nodes. This commit fixes the insertion point to be in the
idom instead.

This fixes PR18768

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201009 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-08 00:20:49 +00:00
Nick Lewycky
44e40408ee A memcpy out of a fresh alloca is a no-op; delete it. Patch by Patrick Walton!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200907 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-06 06:29:19 +00:00
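An illustrative C++ shape of the pattern (assumed, not taken from the patch): the source buffer is a fresh local that is never written, so the copied bytes are undefined and the memcpy can be dropped.

    #include <cstring>

    void copyFromFresh(char *dst) {
      char tmp[32];                        // fresh alloca, never stored to
      std::memcpy(dst, tmp, sizeof(tmp));  // copies undefined bytes; removable
    }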
Manman Ren
c7ac256d52 Set default of inlinecold-threshold to 225.
225 is the default value of inline-threshold. This change will make sure
we have the same inlining behavior as prior to r200886.

As Chandler points out, even though we don't have code in our testing
suite that uses the cold attribute, there are larger applications that do
use it.

r200886 + this commit intend to keep the same behavior as prior to r200886.
We can later on tune the inlinecold-threshold.

The main purpose of r200886 is to help performance of instrumentation based
PGO before we actually hook up inliner with analysis passes such as BPI and BFI.
For instrumentation based PGO, we try to increase inlining of hot functions and
reduce inlining of cold functions by setting inlinecold-threshold.

Another option suggested by Chandler is to use a boolean flag that controls
if we should use OptSizeThreshold for cold functions. The default value
of the boolean flag should not change the current behavior. But it gives us
less freedom in controlling inlining of cold functions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200898 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-06 01:59:22 +00:00
Manman Ren
df7da79db6 Inliner uses a smaller inline threshold for callees with cold attribute.
Added the command line option inlinecold-threshold to set the threshold for
inlining functions with the cold attribute. The inliner listens to the cold
attribute when it would decrease the inline threshold.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200886 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-05 22:53:44 +00:00
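An illustrative use of the attribute the new option affects (function names are hypothetical): a callee marked cold is judged against the lower -inlinecold-threshold instead of the regular -inline-threshold.

    __attribute__((cold)) void reportError(const char *msg);

    void check(bool ok) {
      if (!ok)
        reportError("check failed");   // inlining decided with the cold threshold
    }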
Benjamin Kramer
fb0ad6bd15 SimplifyLibCalls: Push TLI through the exp2->ldexp transform.
For the odd case of platforms with exp2 available but not ldexp.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200795 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-04 20:27:23 +00:00
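For reference, the identity behind the transform: for an integer n, exp2(n) equals ldexp(1.0, n). Threading TargetLibraryInfo through lets the simplifier skip targets that have exp2 but no ldexp.

    #include <cmath>

    double viaExp2(int n)  { return std::exp2(static_cast<double>(n)); }
    double viaLdexp(int n) { return std::ldexp(1.0, n); }   // same value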
Tim Northover
8f0354c973 OS X: the correct function is __sincospif_stret, not __sincospi_stretf
rdar://problem/13729466

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200771 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-04 16:28:20 +00:00
Kai Nacke
6840e895c1 Add strchr(p, 0) -> p + strlen(p) to SimplifyLibCalls
Add the missing transformation strchr(p, 0) -> p + strlen(p) to SimplifyLibCalls
and remove the ToDo comment.

Reviewer: Duncan P. N. Exon Smith


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200736 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-04 05:55:16 +00:00
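The added equivalence, written out for reference: searching a string for its terminating NUL is the same as computing its end.

    #include <cstring>

    const char *viaStrchr(const char *p) { return std::strchr(p, 0); }
    const char *viaStrlen(const char *p) { return p + std::strlen(p); }   // equal pointers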
Reid Kleckner
81558937d7 inalloca: Don't remove dead arguments in the presence of inalloca args
It disturbs the layout of the parameters in memory and registers,
leading to problems in the backend.

The plan for optimizing internal inalloca functions going forward is to
essentially SROA the argument memory and demote any captured arguments
(things that aren't trivially written by a load or store) to an indirect
pointer to a static alloca.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200717 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-03 20:42:49 +00:00
Duncan P. N. Exon Smith
e6562c5088 Lower llvm.expect intrinsic correctly for i1
LowerExpectIntrinsic previously only understood the idiom of an expect
intrinsic followed by a comparison with zero. For llvm.expect.i1, the
comparison would be stripped by the early-cse pass.

Patch by Daniel Micay.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200664 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-02 22:43:55 +00:00
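A hedged source-level illustration of the expect idiom; the i1 form this commit handles can arrive as llvm.expect.i1 used directly by a branch, without the compare-with-zero that the old lowering pattern-matched (and which early-cse would strip).

    void handleRare(bool flag);

    void run(bool flag) {
      if (__builtin_expect(flag, 0))   // hint: this branch is unlikely
        handleRare(flag);
    }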
Arnold Schwaighofer
a16c1b55e2 LoopVectorizer: Enable unrolling of conditional stores and the load/store
unrolling heuristic by default

Benchmarking on x86_64 (thanks Chandler!) and ARM has shown those options speed
up some benchmarks while not causing any interesting regressions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200621 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-02 03:12:34 +00:00
Arnold Schwaighofer
991dd3bb92 ARMTTI: We don't have 16 allocatable scalar registers
This caused a regression on libquantum after enabling the new loop vectorizer
unroll heuristics.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200616 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-01 18:00:25 +00:00
Chandler Carruth
115fd30b24 [LPM] Apply a really big hammer to fix PR18688 by recursively reforming
LCSSA when we promote to SSA registers inside of LICM.

Currently, this is actually necessary. The promotion logic in LICM uses
SSAUpdater which doesn't understand how to place LCSSA PHI nodes.
Teaching it to do so would be a very significant undertaking. It may be
worthwhile and I've left a FIXME about this in the code as well as
starting a thread on llvmdev to try to figure out the right long-term
solution.

For now, the PR needs to be fixed. Short of using the promotion
SSAUpdater to place both the LCSSA PHI nodes and the promoted PHI nodes,
I don't see a cleaner or cheaper way of achieving this. Fortunately,
LCSSA is relatively lazy and sparse -- it should only update
instructions which need it. We can also skip the recursive variant when
we don't promote to SSA values.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200612 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-01 13:35:14 +00:00
Chandler Carruth
d383b8eec3 [inliner] Skip debug intrinsics even earlier in computing the inline
cost so that they don't impact the vector bonus. Fundamentally, counting
unsimplified instructions is just *wrong*; it will continue to introduce
instability as things which do not generate code bizarrely impact
inlining. For example, sufficiently nested inlined functions could turn
off the vector bonus with lifetime markers just like the debug
intrinsics do. =/

This is a short-term tactical fix. Long term, I think we need to remove
the vector bonus entirely. That's a separate patch and discussion
though.

The patch to fix this was provided by Dario Domizioli. I've added some
comments about the planned direction and used a heavily pruned form of
debug info intrinsics for the test case. While this debug info doesn't
work or "do" anything useful, it lets us easily test all manner of
interference, and I suspect this will not be the last time we
want to craft a pattern where debug info interferes with the inliner in
a problematic way.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200609 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-01 10:38:17 +00:00
Reid Kleckner
86cb795388 Revert "[SLPV] Recognize vectorizable intrinsics during SLP vectorization ..."
This reverts commit r200576.  It broke 32-bit self-host builds by
vectorizing two calls to @llvm.bswap.i64, which we then fail to expand.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200602 91177308-0d34-0410-b5e6-96231b3b80d8
2014-02-01 01:37:30 +00:00
Chandler Carruth
093b0413fe [SLPV] Recognize vectorizable intrinsics during SLP vectorization and
transform accordingly. Based on similar code from Loop vectorization.
Subsequent commits will include vectorization of function calls to
vector intrinsics and form function calls to vector library calls.

Patch by Raul Silvera! (Much delayed due to my not running dcommit)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200576 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-31 21:14:40 +00:00
Chandler Carruth
93228f6199 [vectorizer] Tweak the way we do small loop runtime unrolling in the
loop vectorizer to not do so when runtime pointer checks are needed and
share code with the new (not yet enabled) load/store saturation runtime
unrolling. Also ensure that we only consider the runtime checks when the
loop hasn't already been vectorized. If it has, the runtime check cost
has already been paid.

I've fleshed out a test case to cover the scalar unrolling as well as
the vector unrolling and comment clearly why we are or aren't following
the pattern.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200530 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-31 10:51:08 +00:00
Matt Arsenault
e932091eb5 Allow speculating llvm.sqrt, fma and fmuladd
This doesn't set errno, so this should be OK.
Also update the documentation to explicitly state
that errno is not set.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200501 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-31 00:09:00 +00:00
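An illustrative shape (assuming the call lowers to the llvm.sqrt intrinsic, e.g. under -fno-math-errno): because the intrinsic does not set errno, the call can be hoisted out of the branch and the result fed into a select.

    #include <cmath>

    double maybeSqrt(double x, bool take) {
      return take ? std::sqrt(x) : 0.0;   // speculatable: hoist sqrt, then select
    }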
Arnold Schwaighofer
fb70e11bbc LoopVectorizer: Add a test case for unrolling of small loops that need a runtime
check.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200408 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-29 18:55:44 +00:00
Chandler Carruth
a403ceb205 [LPM] Fix PR18643, another scary place where loop transforms failed to
preserve loop simplify of enclosing loops.

The problem here starts with LoopRotation which ends up cloning code out
of the latch into the new preheader it is building. This can create
a new edge from the preheader into the exit block of the loop which
breaks LoopSimplify form. The code tries to fix this by splitting the
critical edge between the latch and the exit block to get a new exit
block that only the latch dominates. This sadly isn't sufficient.

The exit block may be an exit block for multiple nested loops. When we
clone an edge from the latch of the inner loop to the new preheader
being built in the outer loop, we create an exiting edge from the outer
loop to this exit block. Despite breaking the LoopSimplify form for the
inner loop, this is fine for the outer loop. However, when we split the
edge from the inner loop to the exit block, we create a new block which
is in neither the inner nor outer loop as the new exit block. This is
a predecessor to the old exit block, and so the split itself takes the
outer loop out of LoopSimplify form. We need to split every edge
entering the exit block from inside a loop nested more deeply than the
exit block in order to preserve all of the loop simplify constraints.

Once we try to do that, a problem with splitting critical edges
surfaces. Previously, we tried a very brute-force approach to update LoopSimplify
form by re-computing it for all exit blocks. We don't need to do this,
and doing this much will sometimes but not always overlap with the
LoopRotate bug fix. Instead, the code needs to specifically handle the
cases which can start to violate LoopSimplify -- they aren't that
common. We need to see if the destination of the split edge was a loop
exit block in simplified form for the loop of the source of the edge.
For this to be true, all the predecessors need to be in the exact same
loop as the source of the edge being split. If the dest block was
originally in this form, we have to split all of the edges back into
this loop to recover it. The old mechanism of doing this was
conservatively correct because at least *one* of the exiting blocks it
rewrote was the DestBB and so the DestBB's predecessors were fixed. But
this is a much more targeted way of doing it. Making it targeted is
important, because ballooning the set of edges touched prevents
LoopRotate from being able to split edges *it* needs to split to
preserve loop simplify in a coherent way -- the critical edge splitting
would sometimes find the other edges in need of splitting but not
others.

Many, *many* thanks to Nick for his help reducing these test cases
mightily, and for helping a lot with the analysis, as this one was quite
tricky to track down.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200393 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-29 13:16:53 +00:00
Chandler Carruth
6a67a3f3ec [LPM] Fix PR18642, a pretty nasty bug in IndVars that "never mattered"
because of the inside-out run of LoopSimplify in the LoopPassManager and
the fact that LoopSimplify couldn't be "preserved" across two
independent LoopPassManagers.

Anyways, in that case, IndVars wasn't correctly preserving an LCSSA PHI
node because it thought it was rewriting (via SCEV) the incoming value
to a loop invariant value. While it may well be invariant for the
current loop, it may be rewritten in terms of an enclosing loop's
values. This in and of itself is fine, as the LCSSA PHI node in the
enclosing loop for the inner loop value we're rewriting will have its
own LCSSA PHI node if used outside of the enclosing loop. With me so
far?

Well, the current loop and the enclosing loop may share an exiting
block and exit block, and when they do they also share LCSSA PHI nodes.
In this case, it's not valid to RAUW through the LCSSA PHI node.

Expected crazy test included.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200372 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-29 04:40:19 +00:00
Rafael Espindola
f611ae40fd Fix pr14893.
When simplifycfg moves an instruction, it must drop metadata it doesn't know
is still valid when the preconditions change. In particular, it must drop
the range and tbaa metadata.

The patch implements this with a utility function to drop all metadata not
in a whitelist.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200322 91177308-0d34-0410-b5e6-96231b3b80d8
2014-01-28 16:56:46 +00:00