of templates in the new pass manager.
The analysis manager is now itself just a template predicated on the IR
unit. This makes lots of the templates really trivial and much clearer:
they are all parameterized on a single type, the IR unit's type.
Everything else is a function of that. To me, this is a really nice
cleanup of the APIs and removes a layer of 'magic' and 'indirection'
that really wasn't there and just got in the way of understanding what
is going on here.
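Roughly, this is the shape things take now (a simplified sketch, not the
exact declarations):

  template <typename IRUnitT> class AnalysisManager {
  public:
    // Get the result of an analysis for a unit of IR, running it and caching
    // the result if it hasn't been computed yet.
    template <typename PassT> typename PassT::Result &getResult(IRUnitT &IR);

    // Drop any cached analysis results for a unit of IR.
    void invalidate(IRUnitT &IR);
  };

  // The convenient names are just the obvious instantiations.
  typedef AnalysisManager<Module> ModuleAnalysisManager;
  typedef AnalysisManager<Function> FunctionAnalysisManager;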
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225784 91177308-0d34-0410-b5e6-96231b3b80d8
the generic functionality of the pass managers themselves.
In the new infrastructure, the pass "manager" isn't actually interesting
at all. It just pipelines a single chunk of IR through N passes. We
don't really need to know anything about the IR or the passes to do this,
and we can replace the three implementations of the exact same functionality
with a single generic PassManager template, complementing the single
generic AnalysisManager template.
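Concretely, the core of the manager is now morally just the following
(sketch; the real code also handles debug logging, and the exact signatures
differ slightly):

  template <typename IRUnitT> class PassManager {
  public:
    // Pipeline the single unit of IR through each pass in sequence,
    // intersecting the sets of preserved analyses along the way.
    PreservedAnalyses run(IRUnitT &IR, AnalysisManager<IRUnitT> *AM = nullptr) {
      PreservedAnalyses PA = PreservedAnalyses::all();
      for (auto &Pass : Passes)
        PA.intersect(Pass->run(IR, AM));
      return PA;
    }

  private:
    // Type-erased passes over this IR unit type, in pipeline order
    // (the concept type name here is approximate).
    std::vector<std::unique_ptr<detail::PassConcept<IRUnitT>>> Passes;
  };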
I've left typedefs in place to give convenient names to the various
obvious instantiations of the template.
With this, I think I've nuked almost all of the redundant logic in the
managers, and I think the overall design is actually simpler for having
single templates that clearly indicate there is no special logic here.
The logging is made somewhat more annoying by this change, but I don't
think the difference is worth having heavy-weight traits to help log
things.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225783 91177308-0d34-0410-b5e6-96231b3b80d8
The peephole optimizer scans a basic block forward. At some point it
needs to answer the question "given a pointer to an MI in the current
BB, is it located before or after the current instruction?"
To answer this, it keeps a set of the MIs already seen during the scan;
if an MI is not in the set, it is assumed to come after the current
instruction.
This means that newly created MIs have to be inserted into the set as well.
This commit passes the set as an argument to the target-dependent
optimizeSelect() so that it can properly update the set with the
(potentially) newly created MIs.
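As a sketch of what a target implementation now looks like (the names and
exact signature here are approximate):

  bool MyTargetInstrInfo::optimizeSelect(
      MachineInstr *MI, SmallPtrSetImpl<MachineInstr *> &LocalMIs) const {
    // Any instruction we create must be recorded in LocalMIs so the
    // peephole scan keeps an accurate before/after picture.
    MachineInstr *NewMI = buildReplacementFor(MI); // hypothetical helper
    LocalMIs.insert(NewMI);
    MI->eraseFromParent();
    return true;
  }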
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225772 91177308-0d34-0410-b5e6-96231b3b80d8
The functions {pred,succ,use,user}_{begin,end} exist, but many users
have to compare *_begin() with *_end() by hand to determine whether the
BasicBlock or User is empty. Fix this with a standard *_empty(),
demonstrating a few use cases.
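For example, taking pred_* as the representative case (DeadBlocks is just an
illustrative container):

  // Before: compare begin() with end() by hand.
  if (pred_begin(BB) == pred_end(BB))
    DeadBlocks.push_back(BB);

  // After: the intent is explicit.
  if (pred_empty(BB))
    DeadBlocks.push_back(BB);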
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225760 91177308-0d34-0410-b5e6-96231b3b80d8
template.
This consolidates three copies of nearly the same core logic. It adds
"complexity" to the ModuleAnalysisManager in that it makes it possible
to share a ModuleAnalysisManager across multiple modules... But it does
so by deleting *all of the code*, so I'm OK with that. This will
naturally make fixing bugs in this code much simpler, etc.
The only downside here is that we have to use 'typename' and 'this->'
in various places, and the implementation is lifted into the header.
I'll take that for the code size reduction.
The convenient names are still typedef-ed and used throughout so that
users can largely ignore this aspect of the implementation.
The follow-up change to this will do the exact same refactoring for the
PassManagers. =D
It turns out that the interesting code that actually differs is almost
entirely in the adaptors. In the end, that should be essentially all that
is left.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225757 91177308-0d34-0410-b5e6-96231b3b80d8
This name is less descriptive, but it sort of puts things in the
'llvm.frame...' namespace, relating it to frameallocate and
frameaddress. It also avoids using "allocate" and "allocation" together.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225752 91177308-0d34-0410-b5e6-96231b3b80d8
These intrinsics allow multiple functions to share a single stack
allocation from one function's call frame. The function with the
allocation may only perform one allocation, and it must be in the entry
block.
Functions accessing the allocation call llvm.recoverframeallocation with
the function whose frame they are accessing and a frame pointer from an
active call frame of that function.
These intrinsics are very difficult to inline correctly, so the
intention is that they be introduced rarely, or at least very late
during EH preparation.
Reviewers: echristo, andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D6493
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225746 91177308-0d34-0410-b5e6-96231b3b80d8
so has clang-format. Notably, this fixes a bunch of formatting in the
CGSCC pass manager side of things that has been improved in clang-format
recently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225743 91177308-0d34-0410-b5e6-96231b3b80d8
templated interface.
So far, every single IR unit I can come up with has address-identity.
That is, when two units of IR are both active in LLVM, their addresses
will be distinct if the IR is distinct. This is clearly true for
Modules, Functions, BasicBlocks, and Instructions. It turns out that the
only practical way to make the CGSCC stuff work the way we want is to
make it true for SCCs as well. I expect this pattern to continue.
When first designing the pass manager code, I kept this dimension of
freedom in the type parameters, essentially allowing for a wrapper-type
whose address did not form identity. But that really no longer makes
sense and is making the code more complex or subtle for no gain. If we
ever have an actual use case for this, we can figure out what makes
sense then and there. It will be better because then we will have the
actual example in hand.
While the simplifications afforded in this patch are fairly small
(mostly sinking the '&' out of many type parameters onto a few
interfaces), it would have become much more pronounced with subsequent
changes. I have a sequence of changes that will completely remove the
code duplication that currently exists between all of the pass managers
and analysis managers. =] Should make things much cleaner and avoid bug
fixing N times for the N pass managers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225723 91177308-0d34-0410-b5e6-96231b3b80d8
Add generic dispatch for the parts of `UniquableMDNode` that cast to
`MDTuple`. This makes adding other subclasses (like PR21433's
`MDLocation`) easier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225697 91177308-0d34-0410-b5e6-96231b3b80d8
Stop erasing `MDNode`s from the uniquing sets in `LLVMContextImpl`
during teardown (in particular, during
`UniquableMDNode::~UniquableMDNode()`). Although it's currently
feasible, there isn't any clear benefit and it may not be feasible for
other subclasses (which don't explicitly store the lookup hash).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225696 91177308-0d34-0410-b5e6-96231b3b80d8
Same as with `MDTuple`, factor out a `friend MDNode` by moving creation
logic to the concrete subclass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225690 91177308-0d34-0410-b5e6-96231b3b80d8
Move creation logic for `MDTuple`s down where it belongs. Once there
are a few more subclasses, these functions really won't make much sense
here (the `friend` relationship was already awkward). For now, leave
the `MDNode` versions around, but have them forward down.
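That is, the surviving `MDNode` entry points now roughly just forward,
e.g. (sketch):

  MDNode *MDNode::get(LLVMContext &Context, ArrayRef<Metadata *> MDs) {
    // Kept for convenience; simply forwards to the concrete subclass.
    return MDTuple::get(Context, MDs);
  }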
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225685 91177308-0d34-0410-b5e6-96231b3b80d8
Split `GenericMDNode` into two classes (with more descriptive names).
- `UniquableMDNode` will be a common subclass for `MDNode`s that are
sometimes uniqued like constants, and sometimes 'distinct'.
This class gets the (short-lived) RAUW support and related API.
- `MDTuple` is the basic tuple that has always been returned by
`MDNode::get()`. This is as opposed to more specific nodes to be
added soon, which have additional fields, custom assembly syntax,
and extra semantics.
This class gets the hash-related logic, since other subclasses of
`UniquableMDNode` may need to hash based on other fields.
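In rough outline (not the exact declarations):

  class UniquableMDNode : public MDNode {
    // Sometimes uniqued like constants, sometimes 'distinct'; carries the
    // (short-lived) RAUW support and related API.
  };

  class MDTuple : public UniquableMDNode {
    // The basic tuple that MDNode::get() has always returned; owns the
    // hash-based uniquing logic.
  };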
To keep this diff from getting too big, I've added casts to `MDTuple`
that won't really scale as new subclasses of `UniquableMDNode` are
added, but I'll clean those up incrementally.
(No functionality change intended.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225682 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of returning early on `handleChangedOperand()` recursion
(finally identified (and test added) in r225657), prevent it upfront by
releasing operands before RAUW.
Aside from massively different program flow, there should be no
functionality change ;).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225665 91177308-0d34-0410-b5e6-96231b3b80d8
This adds two new fields to the RegisterOperand TableGen class:
  string OperandNamespace = "MCOI";
  string OperandType = "OPERAND_REGISTER";
These fields can be used to specify a target specific operand type,
which will be stored in the OperandType member of the MCOperandInfo
object.
This can be useful for targets that need to store some extra information
about operands that cannot be expressed using the target independent
types. For example, in the R600 backend, there are operands which
can take either registers or immediates and it is convenient to be able
to specify this in the TableGen definitions.
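For instance, a backend could then key behaviour off of the operand type at
the MC layer, along these lines (illustrative only; the target-specific
enumerator here is made up):

  #include "llvm/MC/MCInstrDesc.h"
  using namespace llvm;

  // Returns true if this operand accepts either a register or an immediate.
  static bool acceptsRegOrImm(const MCInstrDesc &Desc, unsigned OpNo) {
    const MCOperandInfo &Info = Desc.OpInfo[OpNo];
    // OPERAND_REG_OR_IMM is a hypothetical value living in the target's own
    // OperandNamespace rather than the default MCOI namespace.
    return Info.OperandType == MyTarget::OPERAND_REG_OR_IMM;
  }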
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225661 91177308-0d34-0410-b5e6-96231b3b80d8
Change the return of `MDNode::isDistinct()` for `MDNode::getTemporary()`
to `true`. They aren't uniqued.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225646 91177308-0d34-0410-b5e6-96231b3b80d8
One is that AArch64 has additional restrictions on when local relocations can
be used. We have to take those into consideration when deciding to put a L
symbol in the symbol table or not.
The other is that ld64 requires the relocations to cstring to use linker
visible symbols on AArch64.
Thanks to Michael Zolotukhin for testing this!
Remove doesSectionRequireSymbols.
In an assembly expression like
  bar:
  .long L0 + 1
the intended semantics is that bar will contain a pointer one byte past L0.
In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell that a relocation must point to the
end of a string, since that would look just like the start of the next one.
The solution used in ELF is to use relocations with symbols if there is a
non-zero addend.
In MachO before this patch we would just keep all symbols in some sections.
This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.
This patch implements the non-zero addend logic for MachO too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225644 91177308-0d34-0410-b5e6-96231b3b80d8
This adds support for parsing and emitting the SBREL relocation variant for the
ARM target. Handling this relocation variant is necessary for supporting the
full ARM ELF specification. Addresses PR22128.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225595 91177308-0d34-0410-b5e6-96231b3b80d8
This default constructor was a bit weird: it left the range in an invalid
state. That might be reasonable so that you can construct a local
iterator range and assign to it based on some logic to compute the range
you want. If folks would like to support that use case, I can add it
back, but in 238-odd usages none have actually wanted to do this. ;]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225592 91177308-0d34-0410-b5e6-96231b3b80d8
The bitcode reading interface used std::error_code to report an error to the
callers and it is the callers job to print diagnostics.
This is not ideal for error handling or diagnostic reporting:
* For error handling, all that the callers care about is 3 possibilities:
* It worked
* The bitcode file is corrupted/invalid.
* The file is not bitcode at all.
* For diagnostics, it is user-friendly to include far more information
about the invalid case so the user can find out what is wrong with the
bitcode file. This comes up, for example, when a developer introduces a
bug while extending the format.
The compromise we had was to have a lot of error codes.
With this patch we use the DiagnosticHandler to communicate with the
human and std::error_code to communicate with the caller.
This allows us to have far fewer error codes and adds the infrastructure to
print better diagnostics. This is because the diagnostics are printed when
the issue is found: the code that detected the problem is still alive on the
stack and can pass down as much context as needed. As an example, the patch updates
test/Bitcode/invalid.ll.
Using a DiagnosticHandler also moves the fatal/non-fatal error decision to the
caller. A simple one like llvm-dis can just use fatal errors. The gold plugin
needs somewhat more complex treatment because it is also passed non-bitcode
files. A hypothetical interactive tool would make all bitcode errors non-fatal.
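For example, a simple tool can now do roughly the following (sketch; the
handler and parse entry point signatures are approximate and have changed
over time):

  #include "llvm/Bitcode/ReaderWriter.h"   // parseBitcodeFile (header at the time)
  #include "llvm/IR/DiagnosticInfo.h"
  #include "llvm/IR/DiagnosticPrinter.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/ErrorHandling.h"
  #include "llvm/Support/raw_ostream.h"
  #include <memory>
  using namespace llvm;

  // Print the rich diagnostic for the human; the error_code only tells the
  // caller that something went wrong.
  static void diagHandler(const DiagnosticInfo &DI, void * /*Context*/) {
    DiagnosticPrinterRawOStream DP(errs());
    DI.print(DP);
    errs() << '\n';
  }

  static std::unique_ptr<Module> loadBitcode(MemoryBufferRef Buf,
                                             LLVMContext &Ctx) {
    Ctx.setDiagnosticHandler(diagHandler);
    ErrorOr<Module *> M = parseBitcodeFile(Buf, Ctx);
    if (std::error_code EC = M.getError())
      report_fatal_error("invalid bitcode: " + EC.message()); // fatal, like llvm-dis
    return std::unique_ptr<Module>(M.get());
  }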
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225562 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
One more attempt to fix UBSan reports: make sure DenseMapInfo::getEmptyKey()
and DenseMapInfo::getTombstoneKey() don't do any upcasts/downcasts to/from Value*.
Test Plan: check-llvm test suite with/without UBSan bootstrap
Reviewers: chandlerc, dexonsmith
Subscribers: llvm-commits, majnemer
Differential Revision: http://reviews.llvm.org/D6903
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225558 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r225498 (but leaves r225499, which was a worthy
cleanup).
My plan was to change `DEBUG_LOC` to store the `MDNode` directly rather
than its operands (patch was to go out this morning), but on reflection
it's not clear that it's strictly better. (I had missed that the
current code is unlikely to emit the `MDNode` at all.)
Conflicts:
lib/Bitcode/Reader/BitcodeReader.cpp (due to r225499)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225531 91177308-0d34-0410-b5e6-96231b3b80d8
This was used previously for metadata but is no longer needed there. Not
doing this simplifies ValueHandle and will make it easier to fix things
like AssertingVH's DenseMapInfo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225487 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, MemDepPrinter handled volatile and unordered accesses without involving MemoryDependencyAnalysis. By making a slight tweak to the documented interface - which is respected by both callers - we can move this responsibility to MDA for the benefit of any future callers. This is basically just cleanup.
In the future, we may decide to extend MDA's non-local dependency analysis to return useful results for ordered or volatile loads. I believe (but have not really checked in detail) that local dependency analysis does get useful results for ordered, but not volatile, loads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225483 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, MemoryDependenceAnalysis::getNonLocalPointerDependency was taking a list of properties about the instruction being queried. Since I'm about to need one more property to be passed down through the infrastructure - I need to know a query instruction is non-volatile in an inner helper - fix the interface once and for all.
I also added some assertions and behaviour clarifications around volatile and ordered field accesses. At the moment, this is mostly to document expected behaviour. The only non-standard instructions which can currently reach this are atomic, but unordered, loads and stores. Neither ordered nor volatile accesses can reach here.
The call in GVN is protected by an isSimple check when it first considers the load. The calls in MemDepPrinter are protected by isUnordered checks. Both utilities also check isVolatile for loads and stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225481 91177308-0d34-0410-b5e6-96231b3b80d8
Propagate whether `MDNode`s are 'distinct' through the other types of IR
(assembly and bitcode). This adds the `distinct` keyword to assembly.
Currently, no one actually calls `MDNode::getDistinct()`, so these nodes
only get created for:
- self-references, which are never uniqued, and
- nodes whose operands are replaced that hit a uniquing collision.
The concept of distinct nodes is still not quite first-class, since
distinct-ness doesn't yet survive across `MapMetadata()`.
Part of PR22111.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225474 91177308-0d34-0410-b5e6-96231b3b80d8
PEI tries to keep track of how much starting or ending a call sequence adjusts the stack pointer by, so that it can resolve frame-index references. Currently, it takes a very simplistic view of how SP adjustments are done - both FrameStartOpcode and FrameDestroyOpcode adjust it exactly by the amount written in their first argument.
This view is in fact incorrect for some targets (e.g. due to stack re-alignment, or because it may want to adjust the stack pointer in multiple steps). However, that doesn't cause breakage, because most targets (the only in-tree exception appears to be 32-bit ARM) rely on being able to simplify the call frame pseudo-instructions earlier, so this code is never hit.
Moving the computation into TargetInstrInfo allows targets to override the way the adjustment is computed if they need to have a non-zero SPAdj.
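As a hedged sketch of what a target override might look like (the hook name
and signature here are assumptions based on the description above):

  int MyTargetInstrInfo::getSPAdjust(const MachineInstr *MI) const {
    // A call-frame pseudo whose first operand doesn't reflect the real
    // adjustment (e.g. because of stack re-alignment) reports it here.
    if (MI->getOpcode() == MyTarget::ADJCALLSTACKDOWN)
      return computeRealignedAdjustment(*MI); // hypothetical helper
    // Otherwise fall back to the generic "read the first operand" behavior.
    return TargetInstrInfo::getSPAdjust(MI);
  }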
Differential Revision: http://reviews.llvm.org/D6863
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225437 91177308-0d34-0410-b5e6-96231b3b80d8
type (in addition to the memory type).
The *LoadExt* legalization handling used to only have one type, the
memory type. This forced users to assume that as long as the extload
for the memory type was declared legal, and the result type was legal,
the whole extload was legal.
However, this isn't always the case. For instance, on X86, with AVX,
this is legal:
  v4i32 load, zext from v4i8
but this isn't:
  v4i64 load, zext from v4i8
Whereas v4i64 is (arguably) legal, even without AVX2.
Note that the same thing was done a while ago for truncstores (r46140),
but I assume no one needed it yet for extloads, so here we go.
Calls to getLoadExtAction were changed to add the value type, found
manually in the surrounding code.
Calls to setLoadExtAction were mechanically changed, by wrapping the
call in a loop, to match previous behavior. The loop iterates over
the MVT subrange corresponding to the memory type (FP vectors, etc...).
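For example, a mechanical update looks roughly like this (sketch; the
value-type iterator helper name is approximate):

  // Old: the memory type alone implied legality for every result type.
  //   setLoadExtAction(ISD::ZEXTLOAD, MVT::v4i8, Expand);
  // New: the value type is explicit, so the old behavior becomes a loop over
  // the matching subrange of value types.
  for (MVT VT : MVT::integer_vector_valuetypes())
    setLoadExtAction(ISD::ZEXTLOAD, VT, MVT::v4i8, Expand);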
I also pulled neighboring setTruncStoreActions into some of the loops;
those shouldn't make a difference, as the additional types are illegal.
(e.g., i128->i1 truncstores on PPC.)
No functional change intended.
Differential Revision: http://reviews.llvm.org/D6532
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225421 91177308-0d34-0410-b5e6-96231b3b80d8
Now that we have MVT::FIRST_VALUETYPE (r225362), we can provide a method
checking that the MVT is valid, that is, it's in
[FIRST_VALUETYPE, LAST_VALUETYPE).
This commit also uses it in a few asserts that would previously accept
invalid MVTs, such as the default constructed -1. In that case,
the code following those asserts would do an out-of-bounds array access.
Using MVT::isValid, those assertions fail as expected when passed
invalid MVTs.
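The check itself is tiny (sketch of the shape):

  bool isValid() const {
    // SimpleTy is -1 for a default-constructed MVT, which is caught here.
    return SimpleTy >= MVT::FIRST_VALUETYPE && SimpleTy < MVT::LAST_VALUETYPE;
  }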
It feels clunky to have such a validity checking function, but it's
at least better than the alternative of broken manual checks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225411 91177308-0d34-0410-b5e6-96231b3b80d8
Allow distinct `MDNode`s to be explicitly created. There's no way (yet)
of representing their distinctness in assembly/bitcode, however, so this
still isn't first-class.
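Usage is symmetric with `MDNode::get()` (sketch; `Ops` is an
ArrayRef<Metadata *> of operands):

  // Uniqued: repeated calls with the same operands return the same node.
  MDNode *Uniqued = MDNode::get(Context, Ops);
  // Distinct: every call creates a fresh node that is never put in the
  // uniquing tables, so isDistinct() is true.
  MDNode *Distinct = MDNode::getDistinct(Context, Ops);
  assert(Distinct->isDistinct() && Distinct != Uniqued);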
Part of PR22111.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225406 91177308-0d34-0410-b5e6-96231b3b80d8
Add API to indicate whether an `MDNode` is distinct. A distinct node is
not stored in the MDNode uniquing tables, and will never be returned by
`MDNode::get()`.
Although distinct nodes are only currently created by uniquing
collisions (when operands change), PR22111 will allow these nodes to be
explicitly created.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225401 91177308-0d34-0410-b5e6-96231b3b80d8