Although this makes the `cast<>` assert more often, the
`assert(Node->isResolved())` on the following line would assert in all
those cases. So, no functionality change here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225903 91177308-0d34-0410-b5e6-96231b3b80d8
It turns out that all callsites of the simplifier are guarded by a check for
CallInst::getCalledFunction (i.e., to make sure the callee is direct).
This check wasn't done when trying to further optimize a simplified fortified
libcall, introduced by a refactoring in r225640.
Fix that, add a testcase, and document the requirement.
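For illustration, a minimal sketch of the guard pattern the callers rely on (`CI` is the call being simplified; the optimize helper is a hypothetical stand-in, not the actual simplifier entry point):
{code}
// getCalledFunction() returns null for indirect calls (or calls through a
// bitcast of the callee), so this guard restricts the simplifier to direct
// calls.
if (Function *Callee = CI->getCalledFunction()) {
  if (Value *Simplified = optimizeFortifiedCall(CI, Callee)) // hypothetical
    CI->replaceAllUsesWith(Simplified);
}
{code}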
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225895 91177308-0d34-0410-b5e6-96231b3b80d8
The issue was introduced in r214638:
+ for (auto &BSIter : BlocksSchedules) {
+ scheduleBlock(BSIter.second.get());
+ }
Because BlocksSchedules is a DenseMap with BasicBlock* keys, blocks are
scheduled in non-deterministic order, resulting in unpredictable IR.
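One way to make the order deterministic (a hedged sketch, not necessarily the committed fix; `F` is the function being vectorized):
{code}
// Walk the function's blocks in IR order and look each schedule up, so the
// visitation order no longer depends on DenseMap bucket layout (i.e., on
// the BasicBlock pointer values).
for (BasicBlock &BB : F) {
  auto It = BlocksSchedules.find(&BB);
  if (It != BlocksSchedules.end())
    scheduleBlock(It->second.get());
}
{code}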
Patch by Daniel Reynaud!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225821 91177308-0d34-0410-b5e6-96231b3b80d8
The alias cache could produce incorrect collisions when a new instruction is allocated at the same address as a previously deleted instruction.
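A hedged illustration of the hazard (key type simplified from the actual SLP vectorizer cache):
{code}
// A cache keyed on instruction pointers can hand back a stale result if an
// instruction is deleted and a new, unrelated instruction is later
// allocated at the same address.
DenseMap<std::pair<Instruction *, Instruction *>, bool> AliasCache;
// ... Instruction *I is erased, a new instruction J happens to be allocated
// at the same address; a lookup for J can now hit the entry recorded for I.
{code}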
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225790 91177308-0d34-0410-b5e6-96231b3b80d8
This speeds up the dependency calculations for blocks with many load/store/call instructions.
Besides the improved runtime, there is no functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225786 91177308-0d34-0410-b5e6-96231b3b80d8
The functions {pred,succ,use,user}_{begin,end} exist, but many users
have to compare *_begin() with *_end() by hand to determine whether the
BasicBlock or User is empty. Fix this with a standard *_empty(),
and demonstrate a few use cases.
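A small before/after sketch (`BB` is a `BasicBlock *`; the surrounding caller is hypothetical):
{code}
// Before: compare the iterators by hand.
if (pred_begin(BB) == pred_end(BB))
  handleEntryBlock(BB); // hypothetical caller

// After: the new helper states the intent directly.
if (pred_empty(BB))
  handleEntryBlock(BB);
{code}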
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225760 91177308-0d34-0410-b5e6-96231b3b80d8
Split `GenericMDNode` into two classes (with more descriptive names).
- `UniquableMDNode` will be a common subclass for `MDNode`s that are
sometimes uniqued like constants, and sometimes 'distinct'.
This class gets the (short-lived) RAUW support and related API.
- `MDTuple` is the basic tuple that has always been returned by
`MDNode::get()`. This is as opposed to more specific nodes to be
added soon, which have additional fields, custom assembly syntax,
and extra semantics.
This class gets the hash-related logic, since other subclasses of
`UniquableMDNode` may need to hash based on other fields.
To keep this diff from getting too big, I've added casts to `MDTuple`
that won't really scale as new subclasses of `UniquableMDNode` are
added, but I'll clean those up incrementally.
(No functionality change intended.)
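For orientation, a rough sketch of the resulting hierarchy (members paraphrased, not the actual headers):
{code}
class MDNode { /* operand storage, isResolved(), ... */ };

class UniquableMDNode : public MDNode {
  // Common to nodes that are sometimes uniqued and sometimes 'distinct';
  // carries the (short-lived) RAUW support used while forward references
  // resolve.
};

class MDTuple : public UniquableMDNode {
  // The plain tuple returned by MDNode::get(); owns the hash-related
  // logic, since future subclasses may hash on different fields.
};
{code}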
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225682 91177308-0d34-0410-b5e6-96231b3b80d8
When we compute the size of a loop, we include the branch on the backedge and
the comparison feeding the conditional branch. Under normal circumstances,
these don't get replicated with the rest of the loop body when we unroll. This
led to the somewhat surprising behavior that really small loops would not get
unrolled enough -- they could be unrolled more and the resulting loop would be
below the threshold, because we were assuming they'd take
(LoopSize * UnrollingFactor) instructions after unrolling, instead of
(((LoopSize-2) * UnrollingFactor)+2) instructions. This fixes that computation.
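A hedged sketch of the new estimate (illustrative names, not the actual unroller code):
{code}
// The backedge branch and the compare feeding it (2 instructions) are not
// replicated per iteration, so subtract them before scaling and add them
// back once.
unsigned estimateUnrolledSize(unsigned LoopSize, unsigned Count) {
  return (LoopSize - 2) * Count + 2; // was: LoopSize * Count
}
{code}
For example, a 5-instruction loop unrolled 8 times is now estimated at 26 instructions rather than 40.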
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225565 91177308-0d34-0410-b5e6-96231b3b80d8
The previous code assumed that such instructions could not have any uses
outside CaseDest, with the motivation that the instruction could not
dominate CommonDest because CommonDest has phi nodes in it. That simply
isn't true; e.g., CommonDest could have an edge back to itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225552 91177308-0d34-0410-b5e6-96231b3b80d8
doing Load PRE"
It's not really expected to stick around; last time it provoked a weird LTO
build failure that I can't reproduce now, and the bot logs are long gone. I'll
re-revert it if the failures recur.
Original description: Perform Scalar PRE on gep indices that feed loads before
doing Load PRE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225536 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, MemoryDependenceAnalysis::getNonLocalPointerDependency was taking a list of properties about the instruction being queried. Since I'm about to need one more property passed down through the infrastructure (I need to know that a query instruction is non-volatile in an inner helper), fix the interface once and for all.
I also added some assertions and behaviour clarifications around volatile and ordered field accesses. At the moment, this is mostly to document expected behaviour. The only non-standard instructions which can currently reach this are atomic, but unordered, loads and stores. Neither ordered nor volatile accesses can reach here.
The call in GVN is protected by an isSimple check when it first considers the load. The calls in MemDepPrinter are protected by isUnordered checks. Both utilities also check isVolatile for loads and stores.
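Paraphrased shape of the interface after this change (not verbatim from the header):
{code}
// The query instruction itself is handed down; the analysis derives the
// location, load/store-ness, volatility, and ordering internally instead
// of the caller passing them as separate properties.
void getNonLocalPointerDependency(Instruction *QueryInst,
                                  SmallVectorImpl<NonLocalDepResult> &Result);
{code}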
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225481 91177308-0d34-0410-b5e6-96231b3b80d8
Create new copies of distinct `MDNode`s instead of following the
uniquing `MDNode` logic.
Just like self-references (or other cycles), `MapMetadata()` creates a
new node. In practice most calls use `RF_NoModuleLevelChanges`, in
which case nothing is duplicated anyway.
Part of PR22111.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225476 91177308-0d34-0410-b5e6-96231b3b80d8
This was only handling the libcall. This is another example
of why only the intrinsic should ever be used when it exists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225465 91177308-0d34-0410-b5e6-96231b3b80d8
The two buildbot failures were addressed in LLVM r225378 and CFE r225359.
This reapplies commit r225272 without modifications.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225379 91177308-0d34-0410-b5e6-96231b3b80d8
WillNotOverflowUnsignedAdd's smarts will live in ValueTracking as
computeOverflowForUnsignedAdd. It now returns a tri-state result:
never overflows, always overflows and sometimes overflows.
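The tri-state result, roughly as it lands in ValueTracking (the exact spelling in the header may differ slightly):
{code}
enum class OverflowResult {
  AlwaysOverflows, // an unsigned overflow check on this add folds to true
  MayOverflow,     // no conclusion either way
  NeverOverflows   // safe to mark the add `nuw` / fold the check to false
};
{code}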
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225329 91177308-0d34-0410-b5e6-96231b3b80d8
This is already handled in general when it is known the
conversion can't lose bits, i.e. when a smaller integer type is
cast into a wider floating-point type.
This pattern happens somewhat often in GPU programs that cast
workitem intrinsics to float, which are often compared with 0.
Specifically handle the special case of compares with zero which
should also be known to not lose information. I had a more general
version of this which allows equality compares if the casted float is
exactly representable in the integer, but I'm not 100% confident that
is always correct.
Also fold cases that aren't integers to true / false.
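A source-level illustration of the zero-compare case (hedged; the transform itself matches the fcmp/uitofp pair in the IR):
{code}
// A 32-bit unsigned id converted to float compares equal to 0.0f only when
// the id itself is zero, so the float compare can become an integer compare.
bool isFirstWorkItem(unsigned Id) {
  return (float)Id == 0.0f; // foldable to: return Id == 0;
}
{code}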
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225265 91177308-0d34-0410-b5e6-96231b3b80d8
Try harder to get rid of bitcast'd calls by ptrtoint/inttoptr'ing
arguments and return values when DataLayout says it is safe to do so.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225254 91177308-0d34-0410-b5e6-96231b3b80d8
The swap implementation for iplist is currently unsupported. Simply splice the
old list into place, which achieves the same purpose. This is needed in order
to thread the -frewrite-map-file frontend option correctly. NFC.
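Rough shape of the workaround (container names are illustrative):
{code}
// Instead of OldList.swap(NewList), move the old contents wholesale into
// the new list; OldList is left empty.
NewList.splice(NewList.end(), OldList);
{code}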
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225186 91177308-0d34-0410-b5e6-96231b3b80d8
{code}
// loop body
... = a[i] (1)
... = a[i+1] (2)
.......
a[i+1] = .... (3)
a[i] = ... (4)
{code}
The algorithm tries to collect memory access candidates from AliasSetTracker, and then checks memory dependences against one another. The memory accesses are unique in AliasSetTracker, and a single memory access in AliasSetTracker may map to multiple entries in AccessAnalysis, which could cover both 'read' and 'write'. Originally the algorithm checked only the 'write' entry in Accesses whenever a 'write' entry exists. This is incorrect; the consequence is that all read accesses were ignored, and some RAW and WAR dependences were missed.
For the case given above, if we ignore the two reads, the dependence between (1) and (3) cannot be captured, and this loop will be incorrectly vectorized.
The fix simply inserts a new loop to find all entries in Accesses. Since it skips most other memory accesses by checking the Value pointer at the very beginning of the loop, it should not increase compile time noticeably.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225159 91177308-0d34-0410-b5e6-96231b3b80d8
assert out of the new pre-splitting in SROA.
This fix makes the code do what was originally intended -- when we have
a store of a load both dealing in the same alloca, we force them to both
be pre-split with identical offsets. This is really quite hard to do
because we can keep discovering problems as we go along. We have to
track every load over the current alloca which for any reason becomes
invalid for pre-splitting, and go back to remove all stores of those
loads. I've included a couple of test cases derived from PR22093 that
cover the different ways this can happen. While that PR only really
triggered the first of these two, it's the same fundamental issue.
The other challenge here is documented in a FIXME now. We end up being
quite a bit more aggressive for pre-splitting when loads and stores
don't refer to the same alloca. This aggressiveness comes at the cost of
introducing potentially redundant loads. It isn't clear that this is the
right balance. It might be considerably better to require that we only
do pre-splitting when we can presplit every load and store involved in
the entire operation. That would give more consistent if conservative
results. Unfortunately, it requires a non-trivial change to the actual
pre-splitting operation in order to correctly handle cases where we end
up pre-splitting stores out-of-order. And it isn't 100% clear that this
is the right direction, although I'm starting to suspect that it is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225149 91177308-0d34-0410-b5e6-96231b3b80d8
a cache of assumptions for a single function, and an immutable pass that
manages those caches.
The motivation for this change is twofold. Immutable analyses are
really hacks around the current pass manager design and don't exist in
the new design. This is usually OK, but it requires that the core logic
of an immutable pass be reasonably partitioned off from the pass logic.
This change does precisely that. As a consequence it also paves the way
for the *many* utility functions that deal in the assumptions to live in
both pass manager worlds by creating a separate non-pass object with
its own independent API that they all rely on. Now, the only bits of the
system that deal with the actual pass mechanics are those that actually
need to deal with the pass mechanics.
Once this separation is made, several simplifications become pretty
obvious in the assumption cache itself. Rather than using a set and
callback value handles, it can just be a vector of weak value handles.
The callers can easily skip the handles that are null, and eventually we
can wrap all of this up behind a filter iterator.
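A minimal sketch of that shape (the real cache also handles registration from the IR, per-function construction, and invalidation, which this omits):
{code}
class AssumptionCacheSketch {
  // A WeakVH nulls itself out when the underlying @llvm.assume call is
  // deleted, so no callback bookkeeping is needed; callers just skip the
  // null handles.
  SmallVector<WeakVH, 4> Assumptions;

public:
  void registerAssumption(CallInst *CI) { Assumptions.push_back(CI); }

  void forEachAssumption(function_ref<void(CallInst *)> Fn) {
    for (WeakVH &VH : Assumptions)
      if (Value *V = VH)
        Fn(cast<CallInst>(V));
  }
};
{code}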
For now, this adds boilerplate to the various passes, but this kind of
boilerplate will end up making it possible to port these passes to the
new pass manager, and so it will end up factored away pretty reasonably.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225131 91177308-0d34-0410-b5e6-96231b3b80d8
We assumed the output of a match was a Value; this would cause us to
assert because we would fail a cast<>. Instead, use a helper in the
Operator family to hide the distinction between Value and Constant.
This fixes PR22087.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225127 91177308-0d34-0410-b5e6-96231b3b80d8
We know overflow always occurs if both ~LHSKnownZero * ~RHSKnownZero
and LHSKnownOne * RHSKnownOne overflow.
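A worked example in i8, using APInt just to make the arithmetic concrete (an illustration of the statement above, not the patch's code): if both operands have bit 6 known to be one, then LHSKnownOne = RHSKnownOne = 64, every possible operand value is at least its known-one bits, and 64 * 64 = 4096 does not fit in 8 bits, so every product overflows; the maximum product (~LHSKnownZero * ~RHSKnownZero) then overflows as well.
{code}
bool Overflow = false;
APInt LHSKnownOne(8, 64), RHSKnownOne(8, 64);
(void)LHSKnownOne.umul_ov(RHSKnownOne, Overflow); // Overflow == true
{code}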
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225077 91177308-0d34-0410-b5e6-96231b3b80d8
WillNotOverflowUnsignedMul's smarts will live in ValueTracking as
computeOverflowForUnsignedMul. It now returns a tri-state result:
never overflows, always overflows and sometimes overflows.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225076 91177308-0d34-0410-b5e6-96231b3b80d8
a pre-splitting pass over loads and stores.
Historically, splitting could cause enough problems that I hamstrung the
entire process with a requirement that splittable integer loads and
stores must cover the entire alloca. All smaller loads and stores were
unsplittable to prevent chaos from ensuing. With the new pre-splitting
logic that does load/store pair splitting I introduced in r225061, we
can now very nicely handle arbitrarily splittable loads and stores. In
order to fully benefit from these smarts, we need to mark all of the
integer loads and stores as splittable.
However, we don't actually want to rewrite partitions with all integer
loads and stores marked as splittable. This will fail to extract scalar
integers from aggregates, which is kind of the point of SROA. =] In
order to resolve this, what we really want to do is only do
pre-splitting on the alloca slices with integer loads and stores fully
splittable. This allows us to uncover all non-integer uses of the alloca
that would benefit from a split in an integer load or store (and where
introducing the split is safe because it is just memory transfer from
a load to a store). Once done, we make all the non-whole-alloca integer
loads and stores unsplittable just as they have historically been,
repartition and rewrite.
The result is that when there are integer loads and stores anywhere
within an alloca (such as from a memcpy of a sub-object of a larger
object), we can split them up if there are non-integer components to the
aggregate hiding beneath. I've added the challenging test cases to
demonstrate how this is able to promote to scalars even a case where we
have even *partially* overlapping loads and stores.
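A hedged source-level illustration of the kind of code that produces this pattern (the pass itself works on the lowered IR):
{code}
// Copying the sub-object is often emitted as a single wide integer
// load/store of all 8 bytes. Pre-splitting lets SROA break that integer
// copy apart so the float component underneath can still be promoted.
struct Inner { int I; float F; };
struct Outer { Inner In; char Tail[8]; };

void copyIn(Outer &Dst, const Inner &Src) {
  Dst.In = Src; // may become one 64-bit integer load + store
}
{code}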
This restores the single-store behavior for small arrays of i8s which is
really nice. I've restored both the little endian testing and big endian
testing for these exactly as they were prior to r225061. It also forced
me to be more aggressive in an alignment test to actually defeat SROA.
=] Without the added volatiles there, we actually split up the weird i16
loads and produce nice double allocas with better alignment.
This also uncovered a number of bugs where we failed to handle
splittable load and store slices which didn't have a beginning offset of
zero. Those fixes are included, and without them the existing test cases
explode in glorious fireworks. =]
I've kept support for leaving whole-alloca integer loads and stores as
splittable even for the purpose of rewriting, but I think that's likely
no longer needed. With the new pre-splitting, we might be able to remove
all the splitting support for loads and stores from the rewriter. Not
doing that in this patch to try to isolate any performance regressions
that it causes in an easy to find and revert chunk.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225074 91177308-0d34-0410-b5e6-96231b3b80d8