After submitting 234651, I noticed I hadn't responded to a review comment by mjacob. This patch addresses that comment and fixes a Release-only build problem due to an unused variable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234653 91177308-0d34-0410-b5e6-96231b3b80d8
Two related small changes:
Various dominance-based queries about liveness can get confused if we're talking about unreachable blocks. To avoid reasoning about such cases, just remove them before rewriting statepoints.
Remove single-entry phis (likely left behind by LCSSA) to reduce the number of live values.
Both of these are motivated by http://reviews.llvm.org/D8674 which will be submitted shortly.
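As a rough sketch of the second change (illustrative only; foldSingleEntryPHIs is a hypothetical helper, not the code in this commit, and the unreachable-block cleanup could reuse the existing removeUnreachableBlocks() utility):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Fold phis that have exactly one incoming value (e.g. left behind by
  // LCSSA); all phis in a block share the predecessor count, so repeatedly
  // checking the block's first instruction is enough.
  static void foldSingleEntryPHIs(Function &F) {
    for (BasicBlock &BB : F)
      while (auto *PN = dyn_cast<PHINode>(&BB.front())) {
        if (PN->getNumIncomingValues() != 1)
          break;
        PN->replaceAllUsesWith(PN->getIncomingValue(0));
        PN->eraseFromParent();
      }
  }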
Differential Revision: http://reviews.llvm.org/D8675
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234651 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds limited support for inserting explicit relocations when there's a vector of pointers live over the statepoint. This doesn't handle the case where the vector contains a mix of base and non-base pointers; that's future work.
The current implementation just scalarizes the vector over the gc.statepoint before doing the explicit rewrite. An alternate approach would be to plumb the vector all the way through the backend lowering, but doing that appears challenging. In particular, the size of the indirect spill slot is currently assumed to be sizeof(pointer) throughout the backend.
In practice, this is enough to allow running the SLP and Loop vectorizers before RewriteStatepointsForGC.
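To give a rough picture of the scalarization step (a sketch under assumptions, not the committed code: the helper name, the two builders, and the lane count are placeholders, and the per-lane relocation itself is elided):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Split a live vector of pointers into scalar lanes before the statepoint
  // and rebuild the vector afterwards, so each lane can be relocated
  // individually.
  static Value *scalarizeAcrossStatepoint(IRBuilder<> &Before,
                                          IRBuilder<> &After, Value *VecPtr,
                                          unsigned NumLanes) {
    SmallVector<Value *, 8> Lanes;
    for (unsigned i = 0; i != NumLanes; ++i)
      Lanes.push_back(Before.CreateExtractElement(VecPtr, Before.getInt32(i)));
    // ...each Lanes[i] would be replaced by its relocated value here...
    Value *Rebuilt = UndefValue::get(VecPtr->getType());
    for (unsigned i = 0; i != NumLanes; ++i)
      Rebuilt = After.CreateInsertElement(Rebuilt, Lanes[i], After.getInt32(i));
    return Rebuilt;
  }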
Differential Revision: http://reviews.llvm.org/D8671
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234647 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change moves creating calls to `llvm.uadd.with.overflow` from
InstCombine to CodeGenPrep. Combining overflow check patterns into
calls to that intrinsic in InstCombine inhibits optimization because
it introduces an intrinsic call that not all other transforms and
analyses understand.
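As a hedged sketch of what the late transform emits (emitUAddWithOverflow is an illustrative helper, not the CodeGenPrep code), an unsigned overflow check such as "a + b < a" becomes a single intrinsic call producing the sum and the overflow bit:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"
  #include <utility>
  using namespace llvm;

  // Emit {sum, overflow} for an unsigned add via the intrinsic.
  static std::pair<Value *, Value *>
  emitUAddWithOverflow(IRBuilder<> &B, Value *LHS, Value *RHS) {
    Module *M = B.GetInsertBlock()->getModule();
    Function *Fn = Intrinsic::getDeclaration(
        M, Intrinsic::uadd_with_overflow, LHS->getType());
    Value *Res = B.CreateCall(Fn, {LHS, RHS});
    return {B.CreateExtractValue(Res, 0),   // the sum
            B.CreateExtractValue(Res, 1)};  // the overflow flag (i1)
  }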
Depends on D8888.
Reviewers: majnemer, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8889
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234638 91177308-0d34-0410-b5e6-96231b3b80d8
WinEH currently turns invokes into calls. Long term, we will reconsider
this, but for now, make sure we remap the operands and clone the
successors of the new terminator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234608 91177308-0d34-0410-b5e6-96231b3b80d8
CallSite roughly behaves as a common base of CallInst and InvokeInst. Bring
the behavior closer to that model by making upcasts explicit. Downcasts
remain implicit and work as before.
With dyn_cast as the mental model, checking whether a Value *V is a
CallSite now looks like this:
if (auto CS = CallSite(V)) // think dyn_cast
instead of:
if (CallSite CS = V)
This is an extra token but I think it is slightly clearer. Making the
ctor explicit has the advantage of not accidentally creating nullptr
CallSites, e.g. when you pass a Value * to a function taking a CallSite
argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234601 91177308-0d34-0410-b5e6-96231b3b80d8
The code uses a priority queue and a worklist, which share the same
visited set, but the visited set is only updated when inserting into
the priority queue. Instead, switch to using separate visited sets
for the priority queue and worklist.
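A generic sketch of the shape of the fix (the struct, names, and element type below are illustrative, not the pass's own code):

  #include <queue>
  #include <set>
  #include <vector>

  struct Traversal {
    std::priority_queue<int> PrioQueue;
    std::vector<int> Worklist;
    std::set<int> VisitedPrio;  // guards only priority-queue insertions
    std::set<int> VisitedWork;  // guards only worklist insertions

    // Pushing onto one container no longer marks a node as seen for the
    // other, so neither path can starve the other of work.
    void pushPrio(int N) {
      if (VisitedPrio.insert(N).second)
        PrioQueue.push(N);
    }
    void pushWork(int N) {
      if (VisitedWork.insert(N).second)
        Worklist.push_back(N);
    }
  };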
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234425 91177308-0d34-0410-b5e6-96231b3b80d8
(Re-apply r234361 with a fix and a testcase for PR23157)
Both run-time pointer checking and the dependence analysis are capable
of dealing with uniform addresses; that is, uniformity is really just an
orthogonal property of the loop that the analysis computes.
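For a concrete picture (an illustrative C fragment, not taken from the patch), a uniform address is one that does not vary with the loop's induction variable:

  // *p is read at the same address on every iteration (uniform), while q[i]
  // strides with the induction variable.
  void scaleByUniform(int *p, int *q, int n) {
    for (int i = 0; i < n; ++i)
      q[i] = q[i] * *p;
  }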
Run-time pointer checking only tries to reason about SCEVAddRec
pointers and otherwise gives up. If the uniform pointer turns out to be a
SCEVAddRec in an outer loop, the run-time checks generated will be
correct (start and end bounds would be equal).
In the case of the dependence analysis, we work again with SCEVs. When
compared against a loop-dependent address of the same underlying object,
the difference of the two SCEVs won't be constant. This will result in
returning an Unknown dependence for the pair.
When compared against another uniform access, the difference would be
constant and we should return the right type of dependence
(forward/backward/etc).
The change also adds support for querying this property of the loop and
modifies the vectorizer to use it.
Patch by Ashutosh Nema!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234424 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch adds an enum `OverflowCheckFlavor` and a function
`OptimizeOverflowCheck`. This will allow InstCombine to optimize
overflow checks without directly introducing intermediate calls to the
`llvm.$op.with.overflow` intrinsics.
This specific change is a refactoring and is not intended to change
behavior.
Reviewers: majnemer, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8888
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234388 91177308-0d34-0410-b5e6-96231b3b80d8
Both run-time pointer checking and the dependence analysis are capable
of dealing with uniform addresses; that is, uniformity is really just an
orthogonal property of the loop that the analysis computes.
Run-time pointer checking only tries to reason about SCEVAddRec
pointers and otherwise gives up. If the uniform pointer turns out to be a
SCEVAddRec in an outer loop, the run-time checks generated will be
correct (start and end bounds would be equal).
In the case of the dependence analysis, we work again with SCEVs. When
compared against a loop-dependent address of the same underlying object,
the difference of the two SCEVs won't be constant. This will result in
returning an Unknown dependence for the pair.
When compared against another uniform access, the difference would be
constant and we should return the right type of dependence
(forward/backward/etc).
The change also adds support for querying this property of the loop and
modifies the vectorizer to use it.
Patch by Ashutosh Nema!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234361 91177308-0d34-0410-b5e6-96231b3b80d8
Replace all uses of `DITypedArray<>` with `MDTupleTypedArrayWrapper<>`
and `MDTypeRefArray`. The APIs are completely different, but the
provided functionality is the same: treat an `MDTuple` as if it's an
array of a particular element type.
To simplify this patch a bit, I've temporarily typedef'ed
`DebugNodeArray` to `DIArray` and `MDTypeRefArray` to `DITypeArray`.
I've also temporarily conditionalized the accessors to check for null --
eventually these should be changed to asserts and the callers should
check for null themselves.
There's a tiny accompanying patch to clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234290 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Instead of making a local copy of `checkInterfaceFunction` for each
sanitizer, move the function to a common place.
Reviewers: kcc, samsonov
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8775
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234220 91177308-0d34-0410-b5e6-96231b3b80d8
Remove `DIDescriptor::Verify()` and the `Verify()`s from subclasses.
They had already been gutted, and just did an `isa<>` check.
In a couple of cases I've temporarily dropped the check entirely, but
subsequent commits are going to disallow conversions to the
`DIDescriptor`s directly from `MDNode`, so the checks will come back in
another form soon enough.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234201 91177308-0d34-0410-b5e6-96231b3b80d8
There are still lots of callers passing nullptr, of course - some because
they'll never be migrated (InstCombines for bitcasts, for example - they
don't make any sense when the pointer type is opaque anyway) and others
that will need more engineering to pass Types around.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234126 91177308-0d34-0410-b5e6-96231b3b80d8
InstCombine didn't realize that it needs to use DataLayout to determine
how wide pointers are. This led to assertion failures.
This fixes PR23113.
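A minimal sketch of the kind of query involved (the helper is illustrative, not the committed code):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Value.h"
  using namespace llvm;

  // Ask DataLayout how wide a pointer value is instead of assuming a width.
  static unsigned pointerWidthInBits(const DataLayout &DL, const Value *Ptr) {
    return DL.getPointerTypeSizeInBits(Ptr->getType());
  }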
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234046 91177308-0d34-0410-b5e6-96231b3b80d8
The plan here is to push the API changes out from the common components
(like Constant::getGetElementPtr and IRBuilder::CreateGEP related
functions) and just update callers to either pass the type if it's
obvious, or pass null.
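A small, hypothetical example of the two caller styles during this migration (names are placeholders):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  static Value *emitElementAddr(IRBuilder<> &B, Type *EltTy, Value *BasePtr,
                                Value *Idx) {
    Value *Idxs[] = {Idx};
    // Migrated caller: the pointee type is passed explicitly.
    return B.CreateGEP(EltTy, BasePtr, Idxs);
    // Not-yet-migrated callers pass null and the type is still derived from
    // the pointer:  B.CreateGEP(nullptr, BasePtr, Idxs);
  }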
Do this with LoadInst as well and anything else that comes up, then
start porting specific uses to not pass null anymore - this may require
some refactoring in each case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234042 91177308-0d34-0410-b5e6-96231b3b80d8
This prevents us from running out of registers in the backend.
Introducing stack malloc calls prevents the backend from recognizing the
inline asm operands as stack objects. When the backend recognizes a
stack object, it doesn't need to materialize the address of the memory
in a physical register. Instead it generates a simple SP-based memory
operand. Introducing a stack malloc forces the backend to find a free
register for every memory operand. 32-bit x86 simply doesn't have enough
registers for this to succeed in most cases.
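For illustration only (x86, not code from the patch): the "m" operand below can be folded as an SP-relative memory reference when buf is a real stack object, whereas an address on ASan's malloc'ed fake stack must first be materialized in a register.

  void pokeBuffer(void) {
    int buf[16];
    __asm__ volatile("movl $1, %0" : "=m"(buf[0]));
  }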
Reviewers: kcc, samsonov
Differential Revision: http://reviews.llvm.org/D8790
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233979 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The old requirement on GEP candidates being in bounds is unnecessary.
For off-bound GEPs, we still have
&B[i * S] = B + (i * S) * e = B + (i * e) * S
where e is the element size.
Test Plan: slsr_offbound_gep in slsr-gep.ll
Reviewers: meheff
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8809
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233949 91177308-0d34-0410-b5e6-96231b3b80d8
Require the pointee type to be passed explicitly and assert that it is
correct. For now it's possible to pass nullptr here (and I've done so in
a few places in this patch) but eventually that will be disallowed once
all clients have been updated or removed. It'll be a long road to get
all the way there... but if you have the chance to update your callers
to pass the type explicitly without depending on a pointer's element
type, that would be a good thing to do soon and a necessary thing to do
eventually.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233938 91177308-0d34-0410-b5e6-96231b3b80d8
We used to do this before refactorings around r225640.
Some clang users checked for _chk libcall availability using:
__has_builtin(__builtin___memcpy_chk)
When compiling with -fno-builtin, this is always true.
When passing -ffreestanding/-mkernel, which both imply -fno-builtin, we
end up with fortified libcalls, which isn't acceptable in a freestanding
environment that only provides their non-fortified counterparts.
Until we change clang and/or teach external users to check for availability
differently, disregard the "nobuiltin" attribute and TLI::has.
Workaround for PR23093.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233776 91177308-0d34-0410-b5e6-96231b3b80d8
This pushes the use of PointerType::getElementType up into several
callers - I'll essentially just have to keep pushing that up the stack
until I can eliminate every call to it...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233604 91177308-0d34-0410-b5e6-96231b3b80d8
This just didn't need to be here at all, but the assertion I tried to
add wasn't appropriate either: the circumstance isn't impossible, it's
just not important to deal with here. The gep-rooted version of this
instcombine will handle this case, so we don't need to duplicate it for
the case where the gep happens to be used in a bitcast.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233404 91177308-0d34-0410-b5e6-96231b3b80d8
We make many redundant calls to isInterestingAlloca in the AddressSanitizer
pass. This is especially inefficient for allocas that have many uses. Let's
cache the results to speed up compilation.
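A sketch of the memoization shape (all names here are illustrative; computeIsInterestingAlloca is a placeholder standing in for the existing uncached check):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Placeholder for the existing uncached predicate.
  static bool computeIsInterestingAlloca(const AllocaInst &) { return true; }

  static DenseMap<const AllocaInst *, bool> InterestingCache;

  static bool isInterestingAllocaCached(const AllocaInst &AI) {
    auto It = InterestingCache.find(&AI);
    if (It != InterestingCache.end())
      return It->second;                        // reuse the earlier answer
    bool Result = computeIsInterestingAlloca(AI);
    InterestingCache[&AI] = Result;
    return Result;
  }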
The compile time improvements depend on the input. I did not see much
difference on benchmarks; however, I have a test case where compile time
goes from minutes to under a second.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233397 91177308-0d34-0410-b5e6-96231b3b80d8
This re-adds float2int to the tree, after fixing PR23038. It turns
out the argument to APSInt() is true-if-unsigned, rather than
true-if-signed :(. Added testcase and explanatory comment.
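Concretely (variable names are just for illustration; the flag's sense is the point of the fix):

  #include "llvm/ADT/APSInt.h"
  using namespace llvm;

  // The second constructor argument means "is unsigned", not "is signed".
  APSInt SignedVal(/*BitWidth=*/32, /*isUnsigned=*/false);   // a signed value
  APSInt UnsignedVal(/*BitWidth=*/32, /*isUnsigned=*/true);  // an unsigned value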
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233370 91177308-0d34-0410-b5e6-96231b3b80d8
The assertion here was more expensive than it needed to be. We're only inserting allocas in the entry block, so we only need to consider ones in the entry block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233362 91177308-0d34-0410-b5e6-96231b3b80d8
All the removed assertions are either implied locally by the assert at the top of the function or properties of the verifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233358 91177308-0d34-0410-b5e6-96231b3b80d8
ANDing and comparing with zero can be done in a single instruction on
most archs, so this is a bit cheaper.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233291 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch enhances SLSR to handle another candidate form, &B[i * S]. If
we find two candidates
S1: X = &B[i * S]
S2: Y = &B[i' * S]
and S1 dominates S2, we can replace S2 with
Y = &X[(i' - i) * S]
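A C-level picture of the rebasing (writing j for i'; names are placeholders, and the real transform of course operates on GEPs):

  int *rebaseOnDominatingCandidate(int *B, long i, long j, long S) {
    int *X = &B[i * S];         // S1: already available and dominating
    // S2 before the rewrite:   int *Y = &B[j * S];
    int *Y = &X[(j - i) * S];   // S2 after the rewrite; same address
    return Y;
  }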
Test Plan:
slsr-gep.ll
X86/no-slsr.ll: verify that we do not run SLSR on GEPs that already fit into
an addressing mode
Reviewers: eliben, atrick, meheff, hfinkel
Reviewed By: hfinkel
Subscribers: sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D7459
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233286 91177308-0d34-0410-b5e6-96231b3b80d8
Added test Float2Int/float2int-optnone.ll to verify that the Float2Int
pass is not run on optnone functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233183 91177308-0d34-0410-b5e6-96231b3b80d8