This just lifts the logic into a static helper function, sinks the
legacy pass to be a trivial wrapper of that helper function, and adds
a trivial wrapper for the new PM as well. Not much to see here.
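The resulting shape is the usual one for passes that need to live in both managers; roughly (the names below are illustrative, not the actual ones from this commit):

  // All of the real logic lives in a static helper.
  static bool runImpl(Function &F);

  // Legacy pass manager: a trivial wrapper around the helper.
  bool LegacyPassName::runOnFunction(Function &F) { return runImpl(F); }

  // New pass manager: the same trivial wrapper.
  PreservedAnalyses NewPassName::run(Function &F, FunctionAnalysisManager &AM) {
    return runImpl(F) ? PreservedAnalyses::none() : PreservedAnalyses::all();
  }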
I switched a test case to run in both modes, but we have to strip the
dead prototypes separately as that pass isn't in the new pass manager
(yet).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226999 91177308-0d34-0410-b5e6-96231b3b80d8
changed the IR. This is particularly easy as we can just look for the
existence of any expect intrinsic at all to know whether we've changed
the IR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226998 91177308-0d34-0410-b5e6-96231b3b80d8
for small switches, and avoid using a complex loop to set up the
weights.
We know what the baseline weights will be so we can just resize the
vector to contain all that value and clobber the one slot that is
likely. This seems much more direct than the previous code that tested
at every iteration, and started off by zeroing the vector.
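Schematically, the new setup is just this (Weights, UnlikelyWeight, and LikelyIdx are illustrative names rather than the actual variables in the pass):

  // Fill every slot with the baseline weight, then clobber the single
  // successor that the expect intrinsic told us is likely.
  SmallVector<uint32_t, 8> Weights(NumSuccessors, UnlikelyWeight);
  Weights[LikelyIdx] = LikelyWeight;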
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226995 91177308-0d34-0410-b5e6-96231b3b80d8
no members for them to use.
Also, make them accept references as there is no possibility of a null
pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226994 91177308-0d34-0410-b5e6-96231b3b80d8
It was already in the Scalar header and referenced extensively as being
in this library; the source file was just in the utils directory for
some reason. No actual functionality changed. I noticed as it didn't
make sense to add a pass header to the utils headers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226991 91177308-0d34-0410-b5e6-96231b3b80d8
Use the struct instead of a std::pair<Value *, Value *>. This makes a
Range an obviously immutable object, and we can now assert that a
range is well-typed (Begin->getType() == End->getType()) on its
construction.
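The struct is roughly of this shape (a sketch, not necessarily the exact definition in the pass):

  struct Range {
    const Value *Begin;
    const Value *End;

    Range(const Value *Begin, const Value *End) : Begin(Begin), End(End) {
      assert(Begin->getType() == End->getType() && "ill-typed range!");
    }
  };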
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226804 91177308-0d34-0410-b5e6-96231b3b80d8
There are places where the inductive range check elimination pass
depends on two llvm::Values or llvm::SCEVs to be of the same
llvm::Type when they do not need to be. This patch relaxes those
restrictions (by bailing out of the optimization if the types
mismatch), and adds test cases to trigger those paths.
These issues were found by bootstrapping clang with IRCE running in
the -O3 pass ordering.
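The bail-outs are of the usual shape, e.g. (IndVar and Length are illustrative names, not the actual variables):

  if (IndVar->getType() != Length->getType())
    return false; // mixed types; give up on this range check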
Differential Revision: http://reviews.llvm.org/D7082
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226793 91177308-0d34-0410-b5e6-96231b3b80d8
This reapplies r225379.
ChangeLog:
- The assertion that this commit previously ran into, about the inability
to handle indirect variables, has since been removed; the backend
can handle this now.
- Testcases were upgraded to the new MDLocation format.
- Instead of keeping a DebugDeclares map, we now use
llvm::FindAllocaDbgDeclare().
Original commit message follows.
Debug info: Teach SROA how to update debug info for fragmented variables.
This allows us to generate debug info for extremely advanced code such as
  typedef struct { long int a; int b; } S;
  int foo(S s) {
    return s.b;
  }
which at -O1 on x86_64 is codegen'd into
  define i32 @foo(i64 %s.coerce0, i32 %s.coerce1) #0 {
    ret i32 %s.coerce1, !dbg !24
  }
With this patch, we emit the following debug info for it:
  TAG_formal_parameter [3]
    AT_location( 0x00000000
      0x0000000000000000 - 0x0000000000000006: rdi, piece 0x00000008, rsi, piece 0x00000004
      0x0000000000000006 - 0x0000000000000008: rdi, piece 0x00000008, rax, piece 0x00000004 )
    AT_name( "s" )
    AT_decl_file( "/Volumes/Data/llvm/_build.ninja.release/test.c" )
Thanks to chandlerc, dblaikie, and echristo for their feedback on all
previous iterations of this patch!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226598 91177308-0d34-0410-b5e6-96231b3b80d8
and updated.
This may appear to remove handling for things like alias analysis when
splitting critical edges here, but in fact no callers of SplitEdge
relied on this. Similarly, all of them wanted to preserve LCSSA if there
was any update of the loop info. That makes the interface much simpler.
With this, all of BasicBlockUtils.h is free of Pass arguments and
prepared for the new pass manager. This is the majority of utilities
that relied on pass arguments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226459 91177308-0d34-0410-b5e6-96231b3b80d8
while refactoring this API for the new pass manager.
No functionality changed here, the code didn't actually support this
option.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226457 91177308-0d34-0410-b5e6-96231b3b80d8
APIs and replace it and numerous booleans with an option struct.
The critical edge splitting API has a really large surface of flags and
so it seems worth burning a small option struct / builder. This struct
can be constructed with the various preserved analyses and then flags
can be flipped in a builder style.
The various users are now responsible for directly passing along their
analysis information. This should be enough for the critical edge
splitting to work cleanly with the new pass manager as well.
This API is still pretty crufty and could be cleaned up a lot, but I've
focused on this change just threading an option struct rather than
a pass through the API.
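Call sites now read roughly like this (a sketch; the exact struct and setter names may differ):

  BasicBlock *NewBB = SplitCriticalEdge(
      TI, SuccNum,
      CriticalEdgeSplittingOptions(DT, LI).setMergeIdenticalEdges());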
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226456 91177308-0d34-0410-b5e6-96231b3b80d8
SplitLandingPadPredecessors and remove the Pass argument from its
interface.
Another step to the utilities being usable with both old and new pass
managers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226426 91177308-0d34-0410-b5e6-96231b3b80d8
rather than relying on the pass object.
This one is a bit annoying, but will pay off. First, supporting this one
will make the next one much easier, and for utilities like LoopSimplify,
this is moving them (slowly) closer to not having to pass the pass
object around throughout their APIs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226396 91177308-0d34-0410-b5e6-96231b3b80d8
interface, removing Pass from its interface.
This also makes those analyses optional so that passes which don't even
preserve these (or use them) can skip the logic entirely.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226394 91177308-0d34-0410-b5e6-96231b3b80d8
optionally updated by MergeBlockIntoPredecessors.
No functionality changed, just refactoring to clear the way for the new
pass manager.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226392 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of querying the pass everywhere we need to, do that once and
cache a pointer in the pass object. This is both simpler and I'm about
to add yet another place where we need to dig out that pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226391 91177308-0d34-0410-b5e6-96231b3b80d8
cleaner to derive from the generic base.
This removes a ton of boilerplate code and somewhat strange and
pointless indirections. It also removes a bunch of the previously needed
friend declarations. To fully remove these, I also lifted the verify
logic into the generic LoopInfoBase, which seems good anyways -- it is
generic and useful logic even for the machine side.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226385 91177308-0d34-0410-b5e6-96231b3b80d8
a LoopInfoWrapperPass to wire the object up to the legacy pass manager.
This switches all the clients of LoopInfo over and paves the way to port
LoopInfo to the new pass manager. No functionality change is intended
with this iteration.
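Legacy pass manager clients now go through the wrapper, roughly:

  AU.addRequired<LoopInfoWrapperPass>();
  ...
  LoopInfo &LI = getAnalysis<LoopInfoWrapperPass>().getLoopInfo();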
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226373 91177308-0d34-0410-b5e6-96231b3b80d8
IRCE eliminates range checks of the form
0 <= A * I + B < Length
by splitting a loop's iteration space into three segments in a way
that the check is completely redundant in the middle segment. As an
example, IRCE will convert
  len = < known positive >
  for (i = 0; i < n; i++) {
    if (0 <= i && i < len) {
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }

to

  len = < known positive >
  limit = smin(n, len)
  // no first segment
  for (i = 0; i < limit; i++) {
    if (0 <= i && i < len) { // this check is fully redundant
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }
  for (i = limit; i < n; i++) {
    if (0 <= i && i < len) {
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }
IRCE can deal with multiple range checks in the same loop (it takes
the intersection of the ranges that will make each of them redundant
individually).
Currently IRCE does not do any profitability analysis. That is a
TODO.
Please note that the status of this pass is *experimental*, and it is
not part of any default pass pipeline. Having said that, I would love
to get feedback and general input from people interested in trying
this out.
This pass was originally r226201. It was reverted because it used C++
features not supported by MSVC 2012.
Differential Revision: http://reviews.llvm.org/D6693
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226238 91177308-0d34-0410-b5e6-96231b3b80d8
The change used C++11 features not supported by MSVC 2012. I will fix
the change to use only features supported by MSVC 2012 and recommit shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226216 91177308-0d34-0410-b5e6-96231b3b80d8
IRCE eliminates range checks of the form
0 <= A * I + B < Length
by splitting a loop's iteration space into three segments in a way
that the check is completely redundant in the middle segment. As an
example, IRCE will convert
  len = < known positive >
  for (i = 0; i < n; i++) {
    if (0 <= i && i < len) {
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }

to

  len = < known positive >
  limit = smin(n, len)
  // no first segment
  for (i = 0; i < limit; i++) {
    if (0 <= i && i < len) { // this check is fully redundant
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }
  for (i = limit; i < n; i++) {
    if (0 <= i && i < len) {
      do_something();
    } else {
      throw_out_of_bounds();
    }
  }
IRCE can deal with multiple range checks in the same loop (it takes
the intersection of the ranges that will make each of them redundant
individually).
Currently IRCE does not do any profitability analysis. That is a
TODO.
Please note that the status of this pass is *experimental*, and it is
not part of any default pass pipeline. Having said that, I would love
to get feedback and general input from people interested in trying
this out.
Differential Revision: http://reviews.llvm.org/D6693
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226201 91177308-0d34-0410-b5e6-96231b3b80d8
The pass is really just a means of accessing a cached instance of the
TargetLibraryInfo object, and this way we can re-use that object for the
new pass manager as its result.
Lots of delta, but nothing interesting happening here. This is the
common pattern that is developing to allow analyses to live in both the
old and new pass manager -- a wrapper pass in the old pass manager
emulates the separation intrinsic to the new pass manager between the
result and pass for analyses.
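Concretely, the two worlds end up looking roughly like this (accessor names are from memory and may differ slightly):

  // Legacy PM: the immutable wrapper pass hands out the cached result.
  const TargetLibraryInfo &TLI =
      getAnalysis<TargetLibraryInfoWrapperPass>().getTLI();

  // New PM: the analysis produces the same TargetLibraryInfo object.
  const TargetLibraryInfo &TLI2 = AM.getResult<TargetLibraryAnalysis>(F);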
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226157 91177308-0d34-0410-b5e6-96231b3b80d8
While the term "Target" is in the name, it doesn't really have to do
with the LLVM Target library -- this isn't an abstraction which LLVM
targets generally need to implement or extend. It has much more to do
with modeling the various runtime libraries on different OSes and with
different runtime environments. The "target" in this sense is the more
general sense of a target of cross compilation.
This is in preparation for porting this analysis to the new pass
manager.
No functionality changed, and updates inbound for Clang and Polly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226078 91177308-0d34-0410-b5e6-96231b3b80d8
The functions {pred,succ,use,user}_{begin,end} exist, but many users
have to compare *_begin() with *_end() by hand to determine if the
BasicBlock or User is empty. Fix this with a standard *_empty(),
demonstrating a few use cases.
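For example, for a BasicBlock *BB the old and new spellings are:

  if (pred_begin(BB) == pred_end(BB))   // before
    ...
  if (pred_empty(BB))                   // after
    ...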
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225760 91177308-0d34-0410-b5e6-96231b3b80d8
When we compute the size of a loop, we include the branch on the backedge and
the comparison feeding the conditional branch. Under normal circumstances,
these don't get replicated with the rest of the loop body when we unroll. This
led to the somewhat surprising behavior that really small loops would not get
unrolled enough -- they could be unrolled more and the resulting loop would be
below the threshold, because we were assuming they'd take
(LoopSize * UnrollingFactor) instructions after unrolling, instead of
(((LoopSize-2) * UnrollingFactor)+2) instructions. This fixes that computation.
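For example, a loop whose body is 3 instructions plus the 2-instruction
compare-and-branch has LoopSize 5; unrolling by a factor of 8 was previously
estimated at 5 * 8 = 40 instructions, whereas the corrected estimate is
(5 - 2) * 8 + 2 = 26.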
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225565 91177308-0d34-0410-b5e6-96231b3b80d8
doing Load PRE"
It's not really expected to stick around; last time it provoked a weird LTO
build failure that I can't reproduce now, and the bot logs are long gone. I'll
re-revert it if the failures recur.
Original description: Perform Scalar PRE on gep indices that feed loads before
doing Load PRE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225536 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, MemoryDependenceAnalysis::getNonLocalPointerDependency was taking a list of properties about the instruction being queried. Since I'm about to need one more property to be passed down through the infrastructure - I need to know a query instruction is non-volatile in an inner helper - fix the interface once and for all.
I also added some assertions and behaviour clarifications around volatile and ordered field accesses. At the moment, this is mostly to document expected behaviour. The only non-standard instructions which can currently reach this are atomic, but unordered, loads and stores. Neither ordered nor volatile accesses can reach here.
The call in GVN is protected by an isSimple check when it first considers the load. The calls in MemDepPrinter are protected by isUnordered checks. Both utilities also check isVolatile for loads and stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225481 91177308-0d34-0410-b5e6-96231b3b80d8
The two buildbot failures were addressed in LLVM r225378 and CFE r225359.
This reapplies commit 225272 without modifications.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225379 91177308-0d34-0410-b5e6-96231b3b80d8
assert out of the new pre-splitting in SROA.
This fix makes the code do what was originally intended -- when we have
a store of a load both dealing in the same alloca, we force them to both
be pre-split with identical offsets. This is really quite hard to do
because we can keep discovering problems as we go along. We have to
track every load over the current alloca which for any reason becomes
invalid for pre-splitting, and go back to remove all stores of those
loads. I've included a couple of test cases derived from PR22093 that
cover the different ways this can happen. While that PR only really
triggered the first of these two, it's the same fundamental issue.
The other challenge here is documented in a FIXME now. We end up being
quite a bit more aggressive for pre-splitting when loads and stores
don't refer to the same alloca. This aggressiveness comes at the cost of
introducing potentially redundant loads. It isn't clear that this is the
right balance. It might be considerably better to require that we only
do pre-splitting when we can presplit every load and store involved in
the entire operation. That would give more consistent if conservative
results. Unfortunately, it requires a non-trivial change to the actual
pre-splitting operation in order to correctly handle cases where we end
up pre-splitting stores out-of-order. And it isn't 100% clear that this
is the right direction, although I'm starting to suspect that it is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225149 91177308-0d34-0410-b5e6-96231b3b80d8
a cache of assumptions for a single function, and an immutable pass that
manages those caches.
The motivation for this change is two fold. Immutable analyses are
really hacks around the current pass manager design and don't exist in
the new design. This is usually OK, but it requires that the core logic
of an immutable pass be reasonably partitioned off from the pass logic.
This change does precisely that. As a consequence it also paves the way
for the *many* utility functions that deal in the assumptions to live in
both pass manager worlds by creating a separate non-pass object with
its own independent API that they all rely on. Now, the only bits of the
system that deal with the actual pass mechanics are those that actually
need to deal with the pass mechanics.
Once this separation is made, several simplifications become pretty
obvious in the assumption cache itself. Rather than using a set and
callback value handles, it can just be a vector of weak value handles.
The callers can easily skip the handles that are null, and eventually we
can wrap all of this up behind a filter iterator.
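A rough sketch of the simplified shape (the exact types and member names in the cache may differ):

  // Inside the cache: just a vector of weak handles to llvm.assume calls.
  SmallVector<WeakVH, 4> AssumeHandles;

  // Callers iterate and skip any handle whose assume has been deleted.
  for (WeakVH &VH : AssumeHandles) {
    if (!VH)
      continue;
    CallInst *Assume = cast<CallInst>(VH);
    // ... consult the assumption ...
  }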
For now, this adds boilerplate to the various passes, but this kind of
boilerplate will end up making it possible to port these passes to the
new pass manager, and so it will end up factored away pretty reasonably.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225131 91177308-0d34-0410-b5e6-96231b3b80d8
a pre-splitting pass over loads and stores.
Historically, splitting could cause enough problems that I hamstrung the
entire process with a requirement that splittable integer loads and
stores must cover the entire alloca. All smaller loads and stores were
unsplittable to prevent chaos from ensuing. With the new pre-splitting
logic that does load/store pair splitting I introduced in r225061, we
can now very nicely handle arbitrarily splittable loads and stores. In
order to fully benefit from these smarts, we need to mark all of the
integer loads and stores as splittable.
However, we don't actually want to rewrite partitions with all integer
loads and stores marked as splittable. This will fail to extract scalar
integers from aggregates, which is kind of the point of SROA. =] In
order to resolve this, what we really want to do is only do
pre-splitting on the alloca slices with integer loads and stores fully
splittable. This allows us to uncover all non-integer uses of the alloca
that would benefit from a split in an integer load or store (and where
introducing the split is safe because it is just memory transfer from
a load to a store). Once done, we make all the non-whole-alloca integer
loads and stores unsplittable just as they have historically been,
repartition and rewrite.
The result is that when there are integer loads and stores anywhere
within an alloca (such as from a memcpy of a sub-object of a larger
object), we can split them up if there are non-integer components to the
aggregate hiding beneath. I've added the challenging test cases to
demonstrate how this is able to promote to scalars even a case where we
have even *partially* overlapping loads and stores.
This restores the single-store behavior for small arrays of i8s which is
really nice. I've restored both the little endian testing and big endian
testing for these exactly as they were prior to r225061. It also forced
me to be more aggressive in an alignment test to actually defeat SROA.
=] Without the added volatiles there, we actually split up the weird i16
loads and produce nice double allocas with better alignment.
This also uncovered a number of bugs where we failed to handle
splittable load and store slices which didn't have a beginning offset of
zero. Those fixes are included, and without them the existing test cases
explode in glorious fireworks. =]
I've kept support for leaving whole-alloca integer loads and stores as
splittable even for the purpose of rewriting, but I think that's likely
no longer needed. With the new pre-splitting, we might be able to remove
all the splitting support for loads and stores from the rewriter. Not
doing that in this patch to try to isolate any performance regressions
that causes in an easy to find and revert chunk.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225074 91177308-0d34-0410-b5e6-96231b3b80d8
instructions.
I noticed this when working on dialing up how aggressively we can
pre-split loads and stores. My test case wasn't passing because dead
GEPs into the allocas persisted when they were built by this routine.
This isn't terribly harmful, we still rewrote and promoted the alloca
and I can't conceive of how to cause this to happen in a case where we
will keep the exact same alloca but rewrite and promote the uses of it.
If that ever happened, we'd get an assert out of mem2reg.
So I don't have a direct test case yet, but the subsequent commit's test
case wouldn't pass without this. There are other problems fixed by this
patch that I spotted purely by inspection such as the fact that
getAdjustedPtr could have actually deleted dead base pointers. I don't
know how to get a base pointer to go into getAdjustedPtr today, so
I think this bug could never have manifested (and I certainly can't
write a test case for it), but it wasn't the intent of the code. The
code really just wanted to GC the new instructions built. That can be
done more directly by comparing with the base pointer which is the only
non-new instruction that this code can return.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225073 91177308-0d34-0410-b5e6-96231b3b80d8
array. This prevents it from walking out of bounds on the splits array.
Bug found with the existing tests by ASan and by the MSVC debug build.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225069 91177308-0d34-0410-b5e6-96231b3b80d8
a +asserts bootstrap, but my bootstrap had asserts off. Oops.
Anyways, in some places it is reasonable to cast (as a sanity check) the
pointer operand to a load or store to an instruction within SROA --
namely when the pointer operand is expected to be derived from an
alloca, and thus always an instruction. However, the pre-splitting code
also deals with loads and stores to non-alloca pointers and there we
need to just use the Value*. Nothing about the code relied on the
instruction cast, it was only there essentially as an invariant
assertion. Remove the two that don't actually hold.
This should fix the proximate issue in PR22080, but I'm also doing an
asserts bootstrap myself to see if there are other issues lurking.
I'll craft a reduced test case in a moment, but I wanted to get the tree
healthy as quickly as possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225068 91177308-0d34-0410-b5e6-96231b3b80d8
of my new load and store splitting, and fix a bug where it logged
a totally irrelevant slice rather than the actual slice in question.
The logging here previously worked because we used to place new slices
onto the back of the core sequence, but that caused other problems.
I updated the actual code to store new slices in their own vector but
didn't update the logging. There isn't a good way to reuse the logging
any more, and frankly it wasn't needed. We can directly log this bit
more easily.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225063 91177308-0d34-0410-b5e6-96231b3b80d8
stores.
When there are accesses to an entire alloca with an integer
load or store as well as accesses to small pieces of the alloca, SROA
splits up the large integer accesses. In order to do that, it uses bit
math to merge the small accesses into large integers. While this is
effective, it produces insane IR that can cause significant problems in
the rest of the optimizer:
- It can cause load and store mismatches with GVN on the non-alloca side
where we end up loading an i64 (or some such) rather than loading
specific elements that are stored.
- We can't always get rid of the integer bit math, which is why we can't
always fix the loads and stores to work well with GVN.
- This is especially bad when we have operations that mix poorly with
integer bit math such as floating point operations.
- It will block things like the vectorizer which might be able to handle
the scalar stores that underlie the aggregate.
At the same time, we can't just directly split up these loads and stores
in all cases. If there is actual integer arithmetic involved on the
values, then using integer bit math is actually the perfect lowering
because we can often combine it heavily with the surrounding math.
The solution this patch provides is to find places where SROA is
partitioning aggregates into small elements, and look for splittable
loads and stores that it can split all the way to some other adjacent
load and store. These are uniformly the cases where failing to split the
loads and stores hurts the optimizer that I have seen, and I've looked
extensively at the code produced both from more and less aggressive
approaches to this problem.
However, it is quite tricky to actually do this in SROA. We may have
loads and stores to the same alloca, or other complex patterns that are
hard to handle. This complexity leads to the somewhat subtle algorithm
implemented here. We have to do this entire process as a separate pass
over the partitioning of the alloca, and split up all of the loads prior
to splitting the stores so that we can handle safely the cases of
overlapping, including partially overlapping, loads and stores to the
same alloca. We also have to reconstitute the post-split slice
configuration so we can avoid iterating again over all the alloca uses
(the slow part of SROA). But we also have to ensure that when we split
up loads and stores to *other* allocas, we *do* re-iterate over them in
SROA to adapt to the more refined partitioning now required.
With this, I actually think we can fix a long-standing TODO in SROA
where I avoided splitting as many loads and stores as probably should be
splittable. This limitation historically mitigated the fallout of all
the bad things mentioned above. Now that we have more intelligent
handling, I plan to remove the FIXME and more aggressively mark integer
loads and stores as splittable. I'll do that in a follow-up patch to
help with bisecting any fallout.
The net result of this change should be more fine-grained and accurate
scalars being formed out of aggregates. At the very least, Clang now
generates perfect code for this high-level test case using
std::complex<float>:
  #include <complex>

  void g1(std::complex<float> &x, float a, float b) {
    x += std::complex<float>(a, b);
  }
  void g2(std::complex<float> &x, float a, float b) {
    x -= std::complex<float>(a, b);
  }
  void foo(const std::complex<float> &x, float a, float b,
           std::complex<float> &x1, std::complex<float> &x2) {
    std::complex<float> l1 = x;
    g1(l1, a, b);
    std::complex<float> l2 = x;
    g2(l2, a, b);
    x1 = l1;
    x2 = l2;
  }
This code isn't just hypothetical either. It was reduced out of the hot
inner loops of essentially every part of the Eigen math library when
using std::complex<float>. Those loops would consistently and
pervasively hop between the floating point unit and the integer unit due
to bit math extraction and insertion of floating point values that were
"stored" in a 64-bit integer register around the loop backedge.
So far, this change has passed a bootstrap and I have done some other
testing without issues. That doesn't mean there won't be any, though,
so I'll be prepared to help with any fallout. If you see performance swings
in particular, please let me know. I'm very curious what all the impact
of this change will be. Stay tuned for the follow-up to also split more
integer loads and stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225061 91177308-0d34-0410-b5e6-96231b3b80d8