Commit Graph

879 Commits

Author SHA1 Message Date
Igor Laevsky
899ad49863 NFC. Explicitly specify attributes in BasicAA/cs-cs.ll test.
This will simplify verifying correctness for changes which modify attributes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243016 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-23 14:31:18 +00:00
Jingyue Wu
0a6b8b7be0 [MDA] change BlockScanLimit into a command line option.
Summary:
In the benchmark (https://github.com/vetter/shoc) we are researching,
the duplicated load is not eliminated because MemoryDependenceAnalysis
hit the BlockScanLimit. This patch changes it into a command line option
instead of a hardcoded value.
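
For reference, a minimal sketch of what such an option looks like (the default
value here is illustrative, not necessarily the one chosen by the patch):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  static cl::opt<unsigned> BlockScanLimit(
      "memdep-block-scan-limit", cl::Hidden, cl::init(100),
      cl::desc("The number of instructions to scan in a block in memory "
               "dependency analysis (default = 100)"));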

Patch by Xuetian Weng.

Test Plan: test/Analysis/MemoryDependenceAnalysis/memdep-block-scan-limit.ll

Reviewers: jingyue, reames

Subscribers: reames, llvm-commits

Differential Revision: http://reviews.llvm.org/D11366

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242842 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-21 21:50:39 +00:00
Simon Pilgrim
9549a0c0bc [X86][SSE] Updated SHL/LSHR i64 vectorization costs.
This was missed in D8416.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242621 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-18 20:06:30 +00:00
Chandler Carruth
81f4bf79a1 [PM/AA] Disable the core unsafe aspect of GlobalsModRef in the face of
basic changes to the IR such as folding pointers through PHIs, Selects,
integer casts, store/load pairs, or outlining.

This leaves the feature available behind a flag. This flag's default
could be flipped if necessary, but the real-world performance impact of
this particular feature of GMR may not be sufficiently significant for
many folks to want to run the risk.

Currently, the risk here is somewhat mitigated by half-hearted attempts
to update GlobalsModRef when the rest of the optimizer changes
something. However, I am currently trying to remove that update
mechanism as it makes migrating the AA infrastructure to a form that can
be readily shared between new and old pass managers very challenging.
Without this update mechanism, it is possible that this still-unlikely
failure mode will start to trip people, and so I wanted to try to
proactively avoid that.

There is a lengthy discussion on the mailing list about why the core
approach here is flawed, and likely would need to look totally different
to be both reasonably effective and resilient to basic IR changes
occurring. This patch is essentially the first of two which will enact
the result of that discussion. The next patch will remove the current
update mechanism.

Thanks to lots of folks that helped look at this from different angles.
Special thanks to Michael Zolotukhin for doing some very preliminary
benchmarking of LTO without GlobalsModRef to get a rough idea of the
impact we could be facing here. So far, it looks very small, but there
are some concerns lingering from other benchmarking. The default here
may get flipped if performance results end up pointing at this as a more
significant issue.

Also thanks to Pete and Gerolf for reviewing!

Differential Revision: http://reviews.llvm.org/D11213

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242512 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-17 06:58:24 +00:00
Silviu Baranga
e4877f25bd Fix memcheck interval ends for pointers with negative strides
Summary:
The checking pointer grouping algorithm assumes that the
starts/ends of the pointers are well formed (start <= end).

The runtime memory checking algorithm also assumes this by doing:

 start0 < end1 && start1 < end0

to detect conflicts. This check only works if start0 <= end0 and
start1 <= end1.

This change correctly orders the interval ends by either checking
the stride (if it is constant) or by using min/max SCEV expressions.
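
As a hypothetical illustration (numbers made up): for a pointer that starts at
address 100 and steps by -4 for 10 iterations,

  naive bounds:   start = 100, end = 100 + 10*(-4) = 60    ; start > end
  ordered bounds: start = min(100, 60) = 60, end = max(100, 60) = 100

With the naive bounds the conflict test above can miss real overlaps; ordering
the interval ends restores the start <= end invariant it relies on.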

Reviewers: anemet, rengolin

Subscribers: rengolin, llvm-commits

Differential Revision: http://reviews.llvm.org/D11149

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242400 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 14:02:58 +00:00
Sean Silva
155c5e75fb Add a test for r242281 from an old patch of mine.
This isn't thorough, but should serve as a sanity check.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242356 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 23:23:02 +00:00
Tobias Edler von Koch
3659b8adc9 Analyze recursive PHI nodes in BasicAA
Summary:
This patch allows phi nodes like
  %x = phi [ %incptr, ... ] [ %var, ... ]
  %incptr = getelementptr %x, 1
to be analyzed by BasicAliasAnalysis.

In aliasPHI, we can detect incoming values that are recursive GEPs with a
constant offset. Instead of trying to analyze a recursive GEP (and failing), 
we now ignore it and instead set the size of the memory referenced by
the PHINode to UnknownSize. This represents all the possible memory
locations the pointer represented by the PHINode could be advanced to
by the GEP.

For now, this new behavior is turned off by default to allow debugging of
performance degradations seen with SPEC/x86 and Hexagon benchmarks.
The flag -basicaa-recphi turns it on.


Reviewers: hfinkel, sanjoy

Subscribers: tobiasvk_caf, sanjoy, llvm-commits

Differential Revision: http://reviews.llvm.org/D10368

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242320 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 19:32:22 +00:00
Silviu Baranga
5b50110192 Cleanup after r241809 - remove unnecessary call to std::sort
Summary:
The iteration order within a member of DepCands is deterministic
and therefore we don't have to sort the accesses within a member.
We also don't have to copy the indices of the pointers into a
vector, since we can iterate over the members of the class.

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11145

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242033 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 14:48:24 +00:00
Manuel Klimek
c376473bb4 Revert r241981 "Revert "Revert r236894 "[BasicAA] Fix zext & sext handling"""
The repros from PR23626 still fail.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242025 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 13:50:55 +00:00
Simon Pilgrim
f9df477221 [X86][SSE] Vectorized v4i32 non-uniform shifts.
While the v4i32 shl operation is already vectorized using a cvttps2dq/pmulld pattern, the lshr/ashr operations are still scalarized.

This patch adds vectorization support for non-uniform v4i32 shift operations - it splats constant shift amounts to allow them to use the immediate sse shift instructions, or extracts/zero-extends non-constant shift amounts. The individual results are then blended together.

Differential Revision: http://reviews.llvm.org/D11063

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241989 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-12 11:15:19 +00:00
Hal Finkel
1e3fa768c0 Revert "Revert r236894 "[BasicAA] Fix zext & sext handling""
r236894 caused PR23626 (Clang miscompiles webkit's base64 decoder), and was
reverted in r237984. This reapplies the patch with an additional test case for
PR23626 and the associated fix (both scales and offsets in the
BasicAliasAnalysis::constantOffsetHeuristic should initially be zero).

Patch by Nick White, thanks!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241981 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-11 11:04:54 +00:00
Igor Laevsky
6690dbffe0 Add argmemonly attribute.
This change adds a new attribute called "argmemonly". A function marked with this attribute can only access memory through its argument pointers. This attribute directly corresponds to the "OnlyAccessesArgumentPointees" ModRef behaviour in alias analysis.

Differential Revision: http://reviews.llvm.org/D10398



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241979 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-11 10:30:36 +00:00
Silviu Baranga
c0970bdc63 Add a test of a regression discovered during testing of r241673
Summary:
We were missing a corner case where DepCands was not available,
but we were using DepCands to compute the checking pointer
groups.

This adds a test for that regression.

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11068

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241818 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-09 16:40:25 +00:00
Silviu Baranga
f283cd9acf Don't rely on the DepCands iteration order when constructing checking pointer groups
Summary:
The checking pointer group construction algorithm relied on the iteration on DepCands.
We would need the same leaders across runs and the same iteration order over the underlying std::set for determinism.

This changes the algorithm to process the pointers in the order in which they were added to the runtime check, which is deterministic.
We need to update the tests, since the order in which pointers appear has changed.

No new tests were added, since it is impossible to test for non-determinism.

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11064

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241809 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-09 15:18:25 +00:00
Adam Nemet
7a6f54545f [LAA] Revert a small part of r239295
This commit ([LAA] Fix estimation of number of memchecks) regressed the
logic a bit.  We shouldn't quit the analysis if we encounter a pointer
without known bounds *unless* we actually need to emit a memcheck for
it.

The original code was using NumComparisons which is now computed
differently.  Instead I compute NeedRTCheck from NumReadPtrChecks and
NumWritePtrChecks.
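
One plausible formulation of that computation (a sketch, not the exact LAA
code): a runtime check is only needed when some pair of pointers contains at
least one write.

  bool NeedRTCheck = NumWritePtrChecks >= 2 ||
                     (NumWritePtrChecks == 1 && NumReadPtrChecks >= 1);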

As a side note, I find the separation of NeedRTCheck and CanDoRT
confusing, so I will try to merge them in a follow-up patch.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241756 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-08 22:58:48 +00:00
Silviu Baranga
8bde857088 [LAA] Merge memchecks for accesses separated by a constant offset
Summary:
Often filter-like loops will do memory accesses that are
separated by constant offsets. In these cases it is
common that we will exceed the threshold for the
allowable number of checks.

However, it should be possible to merge such checks,
since a check of any interval against two other intervals separated
by a constant offset (a,b), (a+c, b+c) will be equivalent to
a check against (a, b+c), as long as (a,b) and (a+c, b+c) overlap.
Assuming the loop will be executed for a sufficient number of
iterations, this will be true. If not true, checking against
(a, b+c) is still safe (although not equivalent).

As long as there are no dependencies between two accesses,
we can merge their checks into a single one. We use this
technique to construct groups of accesses, and then check
the intervals associated with the groups instead of
checking the accesses directly.
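
As a hypothetical illustration with made-up offsets (c = 16):

  check A: (a, b)     = (0, 100)
  check B: (a+c, b+c) = (16, 116)
  merged : (a, b+c)   = (0, 116)    ; valid since (0,100) and (16,116) overlap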

Reviewers: anemet

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10386

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241673 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-08 09:16:33 +00:00
Simon Pilgrim
6970be03d1 [X86][SSE] Vectorized i64 uniform constant SRA shifts
This patch adds vectorization support for uniform constant i64 arithmetic shift right operators.

Differential Revision: http://reviews.llvm.org/D9645

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241514 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-06 22:35:19 +00:00
Sanjoy Das
c6f1b8a4ba [LazyCallGraph] Port test case from r240039 to LCG.
Summary:
r240039 adds a test case to check that CallGraph does the right thing
with respect to non-leaf intrinsics like statepoint and patchpoint.
This ports the same test case to LazyCallGraph.  LazyCallGraph already
does the right thing with respect to escaping function pointers so there
is no need to change any code.

Reviewers: chandlerc

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10582

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@241226 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-02 02:03:58 +00:00
Adam Nemet
e11d1d2c31 [LAA] Try to prove non-wrapping of pointers if SCEV cannot
Summary:
Scalar evolution does not propagate the non-wrapping flags to values
that are derived from a non-wrapping induction variable because
the non-wrapping property could be flow-sensitive.

This change is a first attempt to establish the non-wrapping property in
some simple cases.  The main idea is to look through the operations
defining the pointer.  As long as we arrive at a non-wrapping AddRec via
a small chain of non-wrapping instructions, the pointer should not wrap
either.

I believe that this essentially is what Andy described in
http://article.gmane.org/gmane.comp.compilers.llvm.cvs/220731 as the way
forward.

Reviewers: aschwaighofer, nadav, sanjoy, atrick

Reviewed By: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10472

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240798 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-26 17:25:43 +00:00
Simon Pilgrim
1e71d7421a [X86][SSE][CostModel] Added full set of sitofp/uitofp costings for SSE2/AVX/AVX2/AVX512F.
Merged separate (but equivalent) SSE2/AVX512F tests.

Removed codegen tests since these are already done better in test/CodeGen/X86.

The actual cost values still need to be updated to match recent codegen improvements.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240219 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-20 14:58:01 +00:00
Sanjoy Das
152f3b5997 [CallGraph] Give -print-callgraph a stable printing order.
Summary:
Since FunctionMap has llvm::Function pointers as keys, the order in
which the traversal happens can differ from run to run, causing spurious
FileCheck failures.  Have CallGraph::print sort the CallGraphNodes by
name before printing them.
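
A rough sketch of the approach (simplified; the real CallGraph::print code
differs in details such as how the external node, whose getFunction() is null,
is handled):

  std::vector<CallGraphNode *> Nodes;
  for (auto &Entry : FunctionMap)           // Function* -> CallGraphNode
    Nodes.push_back(Entry.second.get());
  std::sort(Nodes.begin(), Nodes.end(),
            [](CallGraphNode *L, CallGraphNode *R) {
              StringRef LN = L->getFunction() ? L->getFunction()->getName() : "";
              StringRef RN = R->getFunction() ? R->getFunction()->getName() : "";
              return LN < RN;
            });
  for (CallGraphNode *N : Nodes)
    N->print(OS);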

Reviewers: bogner, chandlerc

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10575

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240191 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-19 23:20:31 +00:00
Simon Pilgrim
4ac9a2e70f [X86][SSE][CostModel] Fixed uitofp/sitofp cost target tests to specify sse2/avx2/avx512f directly instead of via a cpu model.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240062 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-18 21:26:01 +00:00
Sanjoy Das
aabacb67c3 [CallGraph] Teach the CallGraph about non-leaf intrinsics.
Summary:
Currently intrinsics don't affect the creation of the call graph.
This is not accurate with respect to statepoint and patchpoint
intrinsics -- these do call (or invoke) LLVM level functions.

This change fixes this inconsistency by adding a call to the external
node for call sites that call these non-leaf intrinsics.  This coupled
with the fact that these intrinsics also escape the function pointer
they call gives us a conservatively correct call graph.
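
A hypothetical sketch of the idea (isNonLeafIntrinsic is an assumed helper
covering statepoint/patchpoint, not an existing API; the actual code differs):

  if (Function *Callee = CS.getCalledFunction()) {
    if (Callee->isIntrinsic() && isNonLeafIntrinsic(Callee->getIntrinsicID()))
      Node->addCalledFunction(CS, CallsExternalNode);   // conservative edge
    else if (!Callee->isIntrinsic())
      Node->addCalledFunction(CS, getOrInsertFunction(Callee));
  }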

Reviewers: reames, chandlerc, atrick, pgavlin

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10526

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240039 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-18 19:28:26 +00:00
David Majnemer
cc714e2142 Move the personality function from LandingPadInst to Function
The personality routine currently lives in the LandingPadInst.

This isn't desirable because:
- All LandingPadInsts in the same function must have the same
  personality routine.  This means that each LandingPadInst beyond the
  first has an operand which produces no additional information.

- There is ongoing work to introduce EH IR constructs other than
  LandingPadInst.  Moving the personality routine off of any one
  particular Instruction and onto the parent function seems a lot better
  than having N different places a personality function can sneak onto an
  exceptional function.

Differential Revision: http://reviews.llvm.org/D10429

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239940 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-17 20:52:32 +00:00
Diego Novillo
5057ee8357 Add documentation for new backedge mass propagation in irregular loops.
Tweak test cases and rename headerIndexFor -> getHeaderIndex.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239915 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-17 16:28:22 +00:00
Diego Novillo
3f53fc8f5f Fix PR 23525 - Separate header mass propagation in irregular loops.
Summary:
When propagating mass through irregular loops, the mass flowing through
each loop header may not be equal. This was causing wrong frequencies
to be computed for irregular loop headers.

Fixed by keeping track of masses flowing through each of the headers in
an irregular loop. To do this, we now keep track of per-header backedge
weights. After the loop mass is distributed through the loop, the
backedge weights are used to re-distribute the loop mass to the loop
headers.

Since each backedge will have a mass proportional to the different
branch weights, the loop headers will end up with a more accurate
weight distribution (as opposed to the current distribution that assumes
that every loop header is the same).

Reviewers: dexonsmith

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10348

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239843 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-16 19:10:58 +00:00
Jingyue Wu
7a1e93493d [ValueTracking] do not overwrite analysis results already computed
Summary:
ValueTracking used to overwrite the analysis results computed from
assumes and dominating conditions. This patch fixes this issue.

Test Plan: test/Analysis/ValueTracking/assume.ll

Reviewers: hfinkel, majnemer

Reviewed By: majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10283

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239718 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-15 05:46:29 +00:00
Simon Pilgrim
44226ffc19 [X86][SSE] Vectorized i8 and i16 shift operators
This patch ensures that SHL/SRL/SRA shifts for i8 and i16 vectors avoid scalarization. It builds on the existing i8 SHL vectorized implementation of moving the shift bits up to the sign bit position and separating the 4, 2 & 1 bit shifts with several improvements:

1 - SSE41 targets can use (v)pblendvb directly with the sign bit instead of performing a comparison to feed into a VSELECT node.
2 - pre-SSE41 targets were masking + comparing with an 0x80 constant - we avoid this by using the fact that a set sign bit means a negative integer which can be compared against zero to then feed into VSELECT, avoiding the need for a constant mask (zero generation is much cheaper).
3 - SRA i8 needs to be unpacked to the upper byte of an i16 so that the i16 psraw instruction can be correctly used for sign extension - we have to do more work than for SHL/SRL but perf tests indicate that this is still beneficial.

The i16 implementation is similar but simpler than for i8 - we have to do 8, 4, 2 & 1 bit shifts but less shift masking is involved. SSE41's use of (v)pblendvb, however, requires that the i16 shift amount is splatted to both bytes.

Tested on SSE2, SSE41 and AVX machines.

Differential Revision: http://reviews.llvm.org/D9474

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239509 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-11 07:46:37 +00:00
Artur Pilipenko
1328b67dd1 Minor refactoring of GEP handling in isDereferenceablePointer
For GEP instructions, isDereferenceablePointer checks that all indices are constant and within bounds. Replace this index calculation logic with a call to accumulateConstantOffset. Separated from http://reviews.llvm.org/D9791.

Reviewed By: sanjoy

Differential Revision: http://reviews.llvm.org/D9874


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239299 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-08 11:58:13 +00:00
Silviu Baranga
a420a14276 [LAA] Fix estimation of number of memchecks
Summary:
We need to add a runtime memcheck for pair of accesses (x,y) where at least one of x and y
are writes.
 
Assuming we have w writes and r reads, currently this number is estimated as being
w * (w+r-1). This estimation will count (write,write) pairs twice and will overestimate
the number of checks required.
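
A worked example with hypothetical counts: for w = 2 writes and r = 3 reads,

  old estimate:  w * (w + r - 1)   = 2 * 4  = 8
  actual checks: w*r + w*(w - 1)/2 = 6 + 1  = 7

The difference is exactly the (write,write) pair that the old formula counts twice.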

This change adds a getNumberOfChecks method to RuntimePointerCheck, which
will count the number of runtime checks needed (similar in implementation to
needsAnyChecking) and uses it to produce the correct number of runtime checks.

Test Plan:
llvm test suite
spec2k
spec2k6

Performance results: no changes observed (not surprising since the formula for 1 writer is basically the same, which would cover most cases - at least with the current check limit).

Reviewers: anemet

Reviewed By: anemet

Subscribers: mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D10217

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239295 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-08 10:27:06 +00:00
Hao Liu
f60ff6bdf6 [LoopAccessAnalysis] Teach LAA to check the memory dependence between strided accesses.
Differential Revision: http://reviews.llvm.org/D9368


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239285 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-08 04:48:37 +00:00
Jingyue Wu
ed0d841f59 [DependenceAnalysis] Extend unifySubscriptType for handling coupled subscript groups.
Summary:
In continuation of an earlier commit to DependenceAnalysis.cpp by jingyue (r222100), the types of all subscripts in a coupled group need to be the same since constraints from one subscript may be propagated to another during testing. During testing, new SCEVs may be created and the operands for these need to be the same.
This patch extends unifySubscriptType() to work on lists of subscript pairs, ensuring a common extended type for all of them.

Test Plan:
Added a test case to NonCanonicalizedSubscript.ll which causes dependence analysis to crash without this fix.

All regression tests pass.

Reviewers: spop, sebpop, jingyue

Reviewed By: jingyue

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9698

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238573 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 16:58:08 +00:00
Hans Wennborg
253c092e56 Revert r236894 "[BasicAA] Fix zext & sext handling"
This seems to have caused PR23626: Clang miscompiles webkit's base64 decoder

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237984 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-22 01:27:37 +00:00
Artur Pilipenko
5ac9604df7 Fix memory-dereferenceable.ll test
One of the testcases introduced by D9365 had incorrect !dereferenceable metadata on a load. It should fail but doesn't, due to the incorrect order of CHECK/CHECK-NOT directives in the test. Fixed both.

Reviewed By: sanjoy

Differential Revision: http://reviews.llvm.org/D9877


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237897 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-21 12:51:38 +00:00
Sanjoy Das
4d88c3ebad Dereferenceable, dereferenceable_or_null metadata for loads
Summary:
Introduce dereferenceable, dereferenceable_or_null metadata for loads
with the same semantic as corresponding attributes.

This patch depends on http://reviews.llvm.org/D9253

Patch by Artur Pilipenko!

Reviewers: hfinkel, sanjoy, reames

Reviewed By: sanjoy, reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9365

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237720 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-19 20:10:19 +00:00
Adam Nemet
a4c8c9292b [LoopAccesses] If shouldRetryWithRuntimeCheck, reset InterestingDependences
When dependence analysis encounters a non-constant distance between
memory accesses it aborts the analysis and falls back to run-time checks
only.  In this case we weren't resetting the array of dependences.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237574 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-18 15:37:03 +00:00
Adam Nemet
2f2bbe4ced [LoopAccesses] Rearrange printed lines in -analyze
"Store to invariant address..." is moved as the last line.  This is not
the prime result of the analysis.  Plus it simplifies some of the tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237573 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-18 15:36:57 +00:00
James Molloy
39a7d6e91d [DependenceAnalysis] Fix for PR21585: collectUpperBound triggers asserts
collectUpperBound hits an assertion when the backedge count is wider than the desired type.

If that happens, truncate the backedge count.
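
A rough sketch of the fix (not the exact DependenceAnalysis code), assuming SE
is the ScalarEvolution instance and T is the desired integer type:

  const SCEV *UpperBound = SE->getBackedgeTakenCount(L);
  if (SE->getTypeSizeInBits(UpperBound->getType()) > SE->getTypeSizeInBits(T))
    UpperBound = SE->getTruncateExpr(UpperBound, T);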

Patch by Philip Pfaffe!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237439 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-15 12:17:22 +00:00
Sanjoy Das
ead2d1fbe0 [Statepoints] Support for "patchable" statepoints.
Summary:
This change adds two new parameters to the statepoint intrinsic, `i64 id`
and `i32 num_patch_bytes`.  `id` gets propagated to the ID field
in the generated StackMap section.  If the `num_patch_bytes` is
non-zero then the statepoint is lowered to `num_patch_bytes` bytes of
nops instead of a call (the spill and reload code remains unchanged).
A non-zero `num_patch_bytes` is useful in situations where a language
runtime requires complete control over how a call is lowered.

This change brings statepoints one step closer to patchpoints.  With
some additional work (that is not part of this patch) it should be
possible to get rid of `TargetOpcode::STATEPOINT` altogether.

PlaceSafepoints generates `statepoint` wrappers with `id` set to
`0xABCDEF00` (the old default value for the ID reported in the stackmap)
and `num_patch_bytes` set to `0`.  This can be made more sophisticated
later.

Reviewers: reames, pgavlin, swaroop.sridhar, AndyAyers

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9546

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@237214 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-12 23:52:24 +00:00
Adam Nemet
beb74d3cf7 [Testsuite] Renumber metadata in ScopedNoAliasAA test to match CHECK lines
Summary:
Now it's much easier to follow what's happening in this test.

Also removed some unused metadata entries.

Reviewers: hfinkel

Reviewed By: hfinkel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9601

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@236981 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-11 09:10:14 +00:00
Sanjoy Das
5de9960136 [BasicAA] Fix zext & sext handling
Summary:

There are several unhandled edge cases in BasicAA's GetLinearExpression
method. This change fixes outstanding issues, including zext / sext of
a constant with the sign bit set, and the refusal to decompose zexts or
sexts of wrapping arithmetic.

Test Plan: Unit tests added in //q.ext.ll//.

Patch by Nick White.

Reviewers: hfinkel, sanjoy

Reviewed By: hfinkel, sanjoy

Subscribers: sanjoy, llvm-commits, hfinkel

Differential Revision: http://reviews.llvm.org/D6682

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@236894 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-08 18:58:55 +00:00
Pat Gavlin
5c7f7462e4 Extend the statepoint intrinsic to allow statepoints to be marked as transitions from GC-aware code to code that is not GC-aware.
This changes the shape of the statepoint intrinsic from:

  @llvm.experimental.gc.statepoint(anyptr target, i32 # call args, i32 unused, ...call args, i32 # deopt args, ...deopt args, ...gc args)

to:

  @llvm.experimental.gc.statepoint(anyptr target, i32 # call args, i32 flags, ...call args, i32 # transition args, ...transition args, i32 # deopt args, ...deopt args, ...gc args)

This extension offers the backend the opportunity to insert (somewhat) arbitrary code to manage the transition from GC-aware code to code that is not GC-aware and back.

In order to support the injection of transition code, this extension wraps the STATEPOINT ISD node generated by the usual lowering with two additional nodes: GC_TRANSITION_START and GC_TRANSITION_END. The transition arguments that were passed to the intrinsic (if any) are lowered and provided as operands to these nodes and may be used by the backend during code generation.

Eventually, the lowering of the GC_TRANSITION_{START,END} nodes should be informed by the GC strategy in use for the function containing the intrinsic call; for now, these nodes are instead replaced with no-ops.

Differential Revision: http://reviews.llvm.org/D9501

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@236888 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-08 18:07:42 +00:00
Diego Novillo
aa46024ea3 Fix information loss in branch probability computation.
Summary:
This addresses PR 22718. When branch weights are too large, they were
being clamped to the range [1, MaxWeightForBB]. But this clamping is
only applied to edges that go outside the range, so it distorts the
relative branch probabilities.

This patch changes the weight calculation to scale every branch so the
relative probabilities are preserved. The scaling is done differently
now. First, all the branch weights are added up, and if the sum exceeds
32 bits, it computes an integer scale to bring all the weights within
the range.
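
A self-contained sketch of the scaling idea (illustrative only, not the actual
BranchProbabilityInfo code):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  void scaleWeights(std::vector<uint32_t> &Weights) {
    uint64_t Sum = 0;
    for (uint32_t W : Weights)
      Sum += W;
    if (Sum <= UINT32_MAX)
      return;                                               // already fits in 32 bits
    uint64_t Scale = (Sum + UINT32_MAX - 1) / UINT32_MAX;   // ceiling divide
    for (uint32_t &W : Weights)
      W = std::max<uint64_t>(1, W / Scale);                 // keep every weight >= 1
  }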

The patch fixes an existing test that had slightly wrong branch
probabilities due to the previous clamping. It now gets branch weights
scaled accordingly.

Reviewers: dexonsmith

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9442

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@236750 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-07 17:22:06 +00:00
Adam Nemet
50b9e7f7d4 [getUnderlyingObjects] Analyze loop PHIs further to remove false positives
Specifically, if a pointer accesses different underlying objects in each
iteration, don't look through the phi node defining the pointer.

The motivating case is the underlyling-objects-2.ll testcase.  Consider
the loop nest:

  int **A;
  for (i)
    for (j)
       A[i][j] = A[i-1][j] * B[j]

This loop is transformed by Load-PRE to stash away A[i] for the next
iteration of the outer loop:

  Curr = A[0];          // Prev_0
  for (i: 1..N) {
    Prev = Curr;        // Prev = PHI (Prev_0, Curr)
    Curr = A[i];
    for (j: 0..N)
       Curr[j] = Prev[j] * B[j]
  }

Since A[i] and A[i-1] are likely to be independent pointers,
getUnderlyingObjects should not assume that Curr and Prev share the same
underlying object in the inner loop.

If it did we would try to dependence-analyze Curr and Prev and the
analysis of the corresponding SCEVs would fail with non-constant
distance.

To fix this, the getUnderlyingObjects API is extended with an optional
LoopInfo parameter.  This is effectively what controls whether we want
the above behavior or the original.  Currently, I only changed to use
this approach for LoopAccessAnalysis.

The other testcase is to guard the opposite case where we do want to
look through the loop PHI.  If we step through an array by incrementing
a pointer, the underlying object is the incoming value of the phi as the
loop is entered.

Fixes rdar://problem/19566729

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235634 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-23 20:09:20 +00:00
Brendon Cahoon
8b94db17a4 Fix a type mismatch assert in SCEV division
An assert was triggered when attempting to create a new SCEV
with operands of different types in the visitAddRecExpr. In this
test case, the operand types of the numerator and denominator
are different. The SCEV division code should generate a
conservative answer when this happens.

Differential Revision: http://reviews.llvm.org/D9021


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235511 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-22 15:06:40 +00:00
Brendon Cahoon
d9b36e1007 Recognize n/1 in the SCEV divide function
n/1 generates a quotient equal to n and a remainder of 0.
If this case is not recognized, then the SCEV divide() function
can return a remainder that is greater than or equal to the
denominator, which means the delinearized subscripts for the
test case will be incorrect.
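
A simplified sketch of the kind of early-out added (member names here are
illustrative, not the actual divide() code):

  // Dividing by one: the quotient is the numerator and the remainder is zero.
  if (Denominator->isOne()) {
    Quotient = Numerator;
    Remainder = Zero;
    return;
  }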

Differential Revision: http://reviews.llvm.org/D9003


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235311 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-20 16:03:28 +00:00
David Blaikie
32b845d223 [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction
See r230786 and r230794 for similar changes to gep and load
respectively.

Call is a bit different because it often doesn't have a single explicit
type - usually the type is deduced from the arguments, and just the
return type is explicit. In those cases there's no need to change the
IR.

When that's not the case, the IR usually contains the pointer type of
the first operand - but since typed pointers are going away, that
representation is insufficient so I'm just stripping the "pointerness"
of the explicit type away.

This does make the IR a bit weird - it /sort of/ reads like the type of
the first operand: "call void () %x(" but %x is actually of type "void
()*" and will eventually be just of type "ptr". But this seems not too
bad and I don't think it would benefit from repeating the type
("void (), void () * %x(" and then eventually "void (), ptr %x(") as has
been done with gep and load.

This also has a side benefit: since the explicit type is no longer a
pointer, there's no ambiguity between an explicit type and a function
that returns a function pointer. Previously this case needed an explicit
type (eg: a function returning a void() function was written as
"call void () () * @x(" rather than "call void () * @x(" because of the
ambiguity between a function returning a pointer to a void() function
and a function returning void).

No ambiguity means even function pointer return types can just be
written alone, without writing the whole function's type.

This leaves /only/ the varargs case where the explicit type is required.

Given the special type syntax in call instructions, the regex-fu used
for migration was a bit more involved in its own unique way (as every
one of these is) so here it is. Use it in conjunction with the apply.sh
script and associated find/xargs commands I've provided in r230786 to
migrate your out of tree tests. Do let me know if any of this doesn't
cover your cases & we can iterate on a more general script/regexes to
help others with out of tree tests.

About 9 test cases couldn't be automatically migrated - half of those
were functions returning function pointers, where I just had to manually
delete the function argument types now that we didn't need an explicit
function type there. The other half were typedefs of function types used
in calls - just had to manually drop the * from those.

import fileinput
import sys
import re

pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
func_end = re.compile("(?:void.*|\)\s*)\*$")

def conv(match, line):
  if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
    return line
  return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]

for line in sys.stdin:
  sys.stdout.write(conv(re.search(pat, line), line))

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235145 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 23:24:18 +00:00
Daniel Jasper
058309ba87 Re-apply r234898 and fix tests.
This commit makes LLVM not estimate branch probabilities when doing a
single-bit bitmask test.

The code that originally made me discover this is:

  if ((a & 0x1) == 0x1) {
    ..
  }

In this case we don't actually have any branch probability information
and should not assume to have any. LLVM transforms this into:

  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0

So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.

CodeGen/ARM/2013-10-11-select-stalls.ll started failing because the
changed probabilities changed the results of
ARMBaseInstrInfo::isProfitableToIfCvt() and led to an Ifcvt of the
diamond in the test. AFAICT, the test was never meant to test this and
thus changing the test input slightly to not change the probabilities
seems like the best way to preserve the meaning of the test.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234979 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 06:24:07 +00:00
Rafael Espindola
091be7b530 Revert "The code that originally made me discover this is:"
This reverts commit r234898.
CodeGen/ARM/2013-10-11-select-stalls.ll was faling.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234903 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-14 15:56:33 +00:00
Daniel Jasper
7025d248eb The code that originally made me discover this is:
if ((a & 0x1) == 0x1) {
    ..
  }

In this case we don't actually have any branch probability information and
should not assume to have any. LLVM transforms this into:

  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0

So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234898 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-14 15:20:37 +00:00