Currently, there's a single flag, checked by the pass itself.
It can't force-enable the pass (which is on by default), because the
pass might not even have been created, as that's the target's decision.
Instead, have separate explicit flags, so that the decision is
consistently made in the target.
Keep the flag as a last-resort "force-disable GlobalMerge" for now,
for backwards compatibility.
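A hedged sketch of the new shape; the flag and function names below are illustrative stand-ins for the real per-target options, and the exact createGlobalMergePass signature is sketched, not quoted:

  #include "llvm/CodeGen/Passes.h"
  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // Illustrative per-target flag; the real option names live in the backends.
  static cl::opt<bool> EnableGlobalMergeOpt(
      "mytarget-global-merge", cl::Hidden, cl::init(true),
      cl::desc("Enable the GlobalMerge pass"));

  // Called from the target's pass setup: the target decides whether the
  // pass is created at all; -global-merge remains only as a kill switch.
  static void maybeAddGlobalMerge(legacy::PassManagerBase &PM,
                                  const TargetMachine *TM) {
    if (EnableGlobalMergeOpt)
      PM.add(createGlobalMergePass(TM));
  }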
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234666 91177308-0d34-0410-b5e6-96231b3b80d8
The spilled registers are pristine and thus correctly handled by
the register scavenger and so on, but the liveness information is,
strictly speaking, wrong at this point.
Fix that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234664 91177308-0d34-0410-b5e6-96231b3b80d8
This allows WinEHPrepare to build sensible llvm.eh.actions calls for SEH
finally blocks. The pattern matching in this change is brittle and
should be replaced with something more robust soon. In the meantime,
this will let us write the code that produces __C_specific_handler xdata
tables, which we need regardless of how we decide to get finally blocks
through EH preparation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234663 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, isMask_N returned false for 0, but isShiftedMask_N returned true.
Almost all uses are for pattern matching bitfield operations in the backends,
and expect false (this was discovered because of AArch64's copy of this logic).
Unfortunately, I couldn't put together a small non-fragile test for this. The
nature of the bitfield operations means that this edge case is only really
triggered for nodes like "(and N, 0)", which the DAG combiner is usually very
good at folding away before they get to this stage.
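For reference, the intended semantics, sketched as self-contained code (modeled on the MathExtras helpers, not the exact in-tree implementation):

  #include <cstdint>

  // True iff Value is a non-empty run of set bits starting at bit 0
  // (0b0001, 0b0111, ...); note that 0 is rejected.
  static bool isMask_64(uint64_t Value) {
    return Value && ((Value + 1) & Value) == 0;
  }

  // True iff Value is a non-empty run of set bits anywhere in the word.
  // Without the explicit Value check, (0 - 1) | 0 is all-ones and would
  // wrongly count 0 as a shifted mask, which is the inconsistency fixed here.
  static bool isShiftedMask_64(uint64_t Value) {
    return Value && isMask_64((Value - 1) | Value);
  }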
rdar://20501377
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234659 91177308-0d34-0410-b5e6-96231b3b80d8
When rewriting statepoints to make relocations explicit, we need a conservative
but consistent notion of where a particular pointer is live at a particular
site. The old code just used dominance, which is correct but decidedly more
conservative than it needed to be. This patch implements a simple dataflow
algorithm that's run once per function (well, twice, counting the fixup after
base pointer insertion). There's still lots of room to make this faster, but
it's fast enough for all practical purposes today.
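The fixed point being computed is the usual backward liveness one; a rough self-contained sketch with illustrative data structures (not the pass's actual code):

  #include <map>
  #include <set>
  #include <vector>

  struct Block {
    std::vector<Block *> Succs;
    std::set<int> Uses, Defs; // value IDs used before defined / defined here
  };

  // Iterate LiveIn(B) = Uses(B) U (LiveOut(B) \ Defs(B)) to a fixed point,
  // where LiveOut(B) is the union of LiveIn over B's successors.
  void computeLiveness(const std::vector<Block *> &Blocks,
                       std::map<Block *, std::set<int>> &LiveIn) {
    bool Changed = true;
    while (Changed) {
      Changed = false;
      for (Block *B : Blocks) {
        std::set<int> In = B->Uses;
        for (Block *S : B->Succs)
          for (int V : LiveIn[S])
            if (!B->Defs.count(V))
              In.insert(V);
        if (In != LiveIn[B]) {
          LiveIn[B] = std::move(In);
          Changed = true;
        }
      }
    }
  }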
Differential Revision: http://reviews.llvm.org/D8674
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234657 91177308-0d34-0410-b5e6-96231b3b80d8
r234638 chained another transform below it, which was tripping over the
deleted instruction. The resulting use-after-free was found by ASan in
many regression tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234654 91177308-0d34-0410-b5e6-96231b3b80d8
After submitting r234651, I noticed I hadn't responded to a review comment by mjacob. This patch addresses that comment and fixes a Release-only build problem due to an unused variable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234653 91177308-0d34-0410-b5e6-96231b3b80d8
Two related small changes:
Various dominance-based queries about liveness can get confused when unreachable blocks are involved. To avoid reasoning about such cases, just remove them before rewriting statepoints.
Remove single-entry phis (likely left behind by LCSSA) to reduce the number of live values; a sketch of this cleanup follows below.
Both of these are motivated by http://reviews.llvm.org/D8674, which will be submitted shortly.
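A hedged sketch of the second cleanup in LLVM IR terms (illustrative, in the spirit of FoldSingleEntryPHINodes rather than the patch's exact code):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static void removeSingleEntryPHIs(BasicBlock &BB) {
    while (auto *PN = dyn_cast<PHINode>(&BB.front())) {
      if (PN->getNumIncomingValues() != 1)
        break;
      // A phi with a single predecessor just forwards its incoming value,
      // but still shows up as a distinct live value to the rewriter.
      PN->replaceAllUsesWith(PN->getIncomingValue(0));
      PN->eraseFromParent();
    }
  }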
Differential Revision: http://reviews.llvm.org/D8675
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234651 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds limited support for inserting explicit relocations when there's a vector of pointers live over the statepoint. This doesn't handle the case where the vector contains a mix of base and non-base pointers; that's future work.
The current implementation just scalarizes the vector over the gc.statepoint before doing the explicit rewrite (sketched below). An alternate approach would be to plumb the vector all the way through the backend lowering, but doing that appears challenging. In particular, the size of the indirect spill slot is currently assumed to be sizeof(pointer) throughout the backend.
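Roughly, the scalarization step looks like this (an illustrative sketch; Builder is assumed to be positioned before the gc.statepoint):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Split a vector-of-pointers live value into scalar lanes so each
  // pointer can get its own explicit relocation.
  static SmallVector<Value *, 8> scalarizeLiveVector(Value *Vec,
                                                     IRBuilder<> &Builder) {
    auto *VTy = cast<VectorType>(Vec->getType());
    SmallVector<Value *, 8> Lanes;
    for (unsigned i = 0, e = VTy->getNumElements(); i != e; ++i)
      Lanes.push_back(Builder.CreateExtractElement(Vec, Builder.getInt32(i)));
    return Lanes;
  }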
In practice, this is enough to allow running the SLP and Loop vectorizers before RewriteStatepointsForGC.
Differential Revision: http://reviews.llvm.org/D8671
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234647 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change moves creating calls to `llvm.uadd.with.overflow` from
InstCombine to CodeGenPrep. Combining overflow check patterns into
calls to said intrinsic in InstCombine inhibits optimization, because
it introduces an intrinsic call that not all other transforms and
analyses understand.
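For context, the pattern in question looks roughly like this at the source level (plain C++ illustration); matching it late keeps the ordinary add/icmp visible to mid-level passes:

  #include <cstdint>

  uint32_t add_checked(uint32_t A, uint32_t B, bool &Overflow) {
    uint32_t Sum = A + B;
    Overflow = Sum < A; // (a + b) u< a holds exactly when the add wrapped
    return Sum;         // CodeGenPrep can fold this into uadd.with.overflow
  }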
Depends on D8888.
Reviewers: majnemer, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8889
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234638 91177308-0d34-0410-b5e6-96231b3b80d8
Stop leaking temporary nodes from `DIBuilder::createCompileUnit()`.
`replaceAllUsesWith()` doesn't delete the nodes, so we need to delete
them "manually" (well, `TempMDTuple` does that for us).
Similarly, stop leaking the temporary nodes used for variables of
subprograms.
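A hedged sketch of the ownership pattern involved (not DIBuilder's exact code):

  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  void example(LLVMContext &Ctx, MDNode *RealNode) {
    // TempMDTuple is an owning handle around a temporary node.
    TempMDTuple Temp = MDTuple::getTemporary(Ctx, None);
    // ... forward references to Temp get created here ...
    Temp->replaceAllUsesWith(RealNode); // redirects users, doesn't free
  } // Temp's destructor deletes the temporary, plugging the leak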
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234617 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes some odd behavior differences between the two. In particular,
the version that takes an FD no longer unconditionally sets stdout to binary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234615 91177308-0d34-0410-b5e6-96231b3b80d8
Previously we would always report success, which is pretty bogus.
I'm too lazy to write a test where rename will portably fail on all
platforms. I'm just trying to fix breakage introduced by r234597, which
happened to tickle this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234611 91177308-0d34-0410-b5e6-96231b3b80d8
WinEH currently turns invokes into calls. Long term, we will reconsider
this, but for now, make sure we remap the operands and clone the
successors of the new terminator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234608 91177308-0d34-0410-b5e6-96231b3b80d8
Iterating over loops from the LoopInfo instance only provides top-level loops.
We need to search the whole tree of loops to find the inner ones.
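A minimal sketch of the fix's shape (illustrative, not the patch itself):

  #include "llvm/Analysis/LoopInfo.h"
  using namespace llvm;

  static void visitLoop(Loop *L) {
    // ... process L ...
    for (Loop *Sub : *L) // recurse into the loop tree's children
      visitLoop(Sub);
  }

  static void visitAllLoops(LoopInfo &LI) {
    for (Loop *TopLevel : LI) // LoopInfo iteration is top-level only
      visitLoop(TopLevel);
  }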
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234603 91177308-0d34-0410-b5e6-96231b3b80d8
CallSite roughly behaves as a common base CallInst and InvokeInst. Bring
the behavior closer to that model by making upcasts explicit. Downcasts
remain implicit and work as before.
Following dyn_cast as a mental model, checking whether a Value *V is a
CallSite now looks like this:
  if (auto CS = CallSite(V)) // think dyn_cast
instead of:
  if (CallSite CS = V)
This is an extra token but I think it is slightly clearer. Making the
ctor explicit has the advantage of not accidentally creating nullptr
CallSites, e.g. when you pass a Value * to a function taking a CallSite
argument.
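In practice (a hedged sketch; analyzeCall is a made-up helper taking a CallSite):

  #include "llvm/IR/CallSite.h"
  using namespace llvm;

  void analyzeCall(CallSite CS);

  void visit(Value *V) {
    // Explicit construction, dyn_cast-style: CS is null (and tests false)
    // when V is neither a CallInst nor an InvokeInst.
    if (auto CS = CallSite(V))
      analyzeCall(CS);
  }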
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234601 91177308-0d34-0410-b5e6-96231b3b80d8
Using SchedAliases is convenient and works well for latency and resource
lookup for instructions. However, this creates an entry in
AArch64WriteLatencyTable with a WriteResourceID of 0, breaking any
SchedReadAdvance since the lookup will fail.
http://reviews.llvm.org/D8043
Patch by Dave Estes <cestes@codeaurora.org>!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234594 91177308-0d34-0410-b5e6-96231b3b80d8
Cache NumEntries locally; it's only used in an assert, and using the member
variable prevents the compiler from eliminating the tombstone check for types
with trivial destructors. No functionality change intended.
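A simplified illustration of the idea (invented bucket API, not the actual DenseMap source):

  #include <cassert>

  template <typename BucketT>
  void destroyAll(BucketT *Begin, BucketT *End, unsigned NumEntries) {
    unsigned Remaining = NumEntries; // local copy; only the assert reads it
    for (BucketT *B = Begin; B != End; ++B) {
      // With the member variable here instead of Remaining, the compiler
      // couldn't drop these checks even for trivially destructible types.
      if (!B->isEmpty() && !B->isTombstone()) {
        B->destroyValue();
        --Remaining;
      }
    }
    assert(Remaining == 0 && "entry count out of sync");
  }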
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234589 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Some optimizations such as jump threading and loop unswitching can negatively
affect performance when applied to divergent branches. The divergence analysis
added in this patch conservatively estimates which branches in a GPU program
can diverge. This information can then help LLVM to run certain optimizations
selectively.
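To make "divergent" concrete, a CUDA-flavored illustration (not from the patch or its tests):

  __global__ void kernel(int *Out) {
    int Tid = threadIdx.x; // differs across threads: a divergent value
    if (Tid % 2)           // condition depends on Tid: a divergent branch;
      Out[Tid] = 1;        // threads of one warp take different paths
    else
      Out[Tid] = 2;
  }
  // Branches the analysis proves uniform across threads remain safe
  // targets for jump threading and loop unswitching.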
Test Plan: test/Analysis/DivergenceAnalysis/NVPTX/diverge.ll
Reviewers: resistor, hfinkel, eliben, meheff, jholewinski
Subscribers: broune, bjarke.roune, madhur13490, tstellarAMD, dberlin, echristo, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D8576
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234567 91177308-0d34-0410-b5e6-96231b3b80d8
The IPToState table must be emitted after we have generated labels for
all functions in the table. Don't rely on the order of the list of
globals. Instead, utilize WinEHFuncInfo to tell us how many catch
handlers we expect to outline. Once we know we've visited all the catch
handlers, emit the cppxdata.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234566 91177308-0d34-0410-b5e6-96231b3b80d8
When we have an instruction for this (and thus don't generate a runtime
call), we need to custom type legalize it (in a trivial way, just as we do
for fp_to_sint).
Fixes PR23173.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234561 91177308-0d34-0410-b5e6-96231b3b80d8
For the most common ones (such as fadd), we already did the promotion.
Do the same thing for all the others.
Currently, we'll just crash/assert on all these operations, as
there's no hardware or libcall support whatsoever.
f16 (half) is specified as an interchange format, not an arithmetic one,
and is expected to be promoted to single-precision for arithmetic
operations.
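A scalar C++ analogue of what promotion means (the legalizer does the equivalent on SelectionDAG nodes; the conversion helpers are assumed):

  #include <cstdint>

  float half_to_float(uint16_t H);  // assumed f16 -> f32 conversion
  uint16_t float_to_half(float F);  // assumed f32 -> f16 conversion

  // f16 subtraction, performed in single precision and narrowed back.
  uint16_t fsub_f16(uint16_t A, uint16_t B) {
    return float_to_half(half_to_float(A) - half_to_float(B));
  }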
While there, teach the legalizer about promoting some of the (mostly
floating-point) operations that we never needed before.
Differential Revision: http://reviews.llvm.org/D8648
See related discussion on the thread for: http://reviews.llvm.org/D8755
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234550 91177308-0d34-0410-b5e6-96231b3b80d8
formatted_raw_ostream is a wrapper over another stream to add column and line
number tracking.
It is used only for asm printing.
This patch moves its creation down to where we know we are printing
assembly. This has the following advantages:
* Simpler lifetime management: std::unique_ptr
* We don't compute column and line numbers for object files :-)
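A hedged sketch of the resulting lifetime (illustrative, not the patch's exact code):

  #include "llvm/Support/FormattedStream.h"
  #include <memory>
  using namespace llvm;

  // Wrap the output stream only on the assembly path; object emission
  // writes to OS directly and never pays for column/line tracking.
  std::unique_ptr<formatted_raw_ostream> wrapForAsm(raw_ostream &OS,
                                                    bool EmitAssembly) {
    if (!EmitAssembly)
      return nullptr;
    return std::unique_ptr<formatted_raw_ostream>(
        new formatted_raw_ostream(OS));
  }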
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234535 91177308-0d34-0410-b5e6-96231b3b80d8
We already do:
  concat_vectors(scalar, undef) -> scalar_to_vector(scalar)
when the scalar is legal.
When it's not, but is a truncated legal scalar, we can also do:
  concat_vectors(trunc(scalar), undef) -> scalar_to_vector(scalar)
which is equivalent, since the upper lanes are undef anyway.
While there, teach the combine to look at more than 2 operands.
Differential Revision: http://reviews.llvm.org/D8883
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234530 91177308-0d34-0410-b5e6-96231b3b80d8
The integer extend optimization tries to fold the extend into the load
instruction. This requires us to identify whether the extend has already
been emitted, and to act accordingly.
The check that was originally performed for this was not sufficient: besides
checking the ValueMap for a mapped register, we also need to check whether
the virtual register already has an associated machine instruction that
defines it.
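A hedged sketch of the strengthened check (Reg stands in for the result of the FastISel value-map lookup; illustrative, not the exact patch):

  #include "llvm/CodeGen/MachineRegisterInfo.h"
  using namespace llvm;

  static bool isExtendAlreadyEmitted(unsigned Reg,
                                     const MachineRegisterInfo &MRI) {
    // A value-map entry alone isn't enough; the virtual register must
    // also already have a defining machine instruction.
    return Reg != 0 && MRI.getVRegDef(Reg) != nullptr;
  }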
This fixes rdar://problem/20470788.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234529 91177308-0d34-0410-b5e6-96231b3b80d8
Using this instead of:
  namespace llvm {
  func...
  }
has the advantage that the build fails with a compiler error if it gets out
of sync with the .h file.
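An illustration of the difference (foo is a made-up example):

  // In the header:
  namespace llvm {
  int foo(int X);
  }

  // In the .cpp file, the qualified form must match an existing
  // declaration, so drifting from the header is a hard error:
  int llvm::foo(int X) { return X + 1; }

  // The namespace-block form would instead silently declare and define
  // a brand-new overload if the header changed, e.g. to foo(long):
  // namespace llvm {
  // int foo(int X) { return X + 1; }
  // }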
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234515 91177308-0d34-0410-b5e6-96231b3b80d8
Pull the `-preserve-*-use-list-order` flags out of "experimental" mode,
and preserve use-list order by default when serializing to bitcode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234510 91177308-0d34-0410-b5e6-96231b3b80d8