IRCE eliminates range checks of the form
0 <= A * I + B < Length
by splitting a loop's iteration space into three segments in a way
that the check is completely redundant in the middle segment. As an
example, IRCE will convert
len = < known positive >
for (i = 0; i < n; i++) {
  if (0 <= i && i < len) {
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
to
len = < known positive >
limit = smin(n, len)
// no first segment
for (i = 0; i < limit; i++) {
  if (0 <= i && i < len) { // this check is fully redundant
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
for (i = limit; i < n; i++) {
  if (0 <= i && i < len) {
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
IRCE can deal with multiple range checks in the same loop (it takes
the intersection of the ranges that will make each of them redundant
individually).
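For example (a sketch in the same pseudocode style as above, with two
hypothetical checks against len1 and len2), the middle segment simply
iterates over the intersection of the individually safe ranges:

  limit = smin(n, smin(len1, len2))
  for (i = 0; i < limit; i++) {
    // both "0 <= i && i < len1" and "0 <= i && i < len2" are redundant here
    do_something();
  }
  for (i = limit; i < n; i++) {
    // original body with both checks intact
  }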
Currently IRCE does not do any profitability analysis. That is a
TODO.
Please note that the status of this pass is *experimental*, and it is
not part of any default pass pipeline. Having said that, I would love
to get feedback and general input from people interested in trying
this out.
This pass was originally r226201. It was reverted because it used C++
features not supported by MSVC 2012.
Differential Revision: http://reviews.llvm.org/D6693
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226238 91177308-0d34-0410-b5e6-96231b3b80d8
be exported from a dylib if their containing object file were linked into one.
No test case: No command line tools query this flag, and there are no Object
unit tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226217 91177308-0d34-0410-b5e6-96231b3b80d8
The change used C++11 features not supported by MSVC 2012. I will fix
the change to use things supported by MSVC 2012 and recommit shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226216 91177308-0d34-0410-b5e6-96231b3b80d8
IRCE eliminates range checks of the form
0 <= A * I + B < Length
by splitting a loop's iteration space into three segments in a way
that the check is completely redundant in the middle segment. As an
example, IRCE will convert
len = < known positive >
for (i = 0; i < n; i++) {
  if (0 <= i && i < len) {
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
to
len = < known positive >
limit = smin(n, len)
// no first segment
for (i = 0; i < limit; i++) {
  if (0 <= i && i < len) { // this check is fully redundant
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
for (i = limit; i < n; i++) {
  if (0 <= i && i < len) {
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
IRCE can deal with multiple range checks in the same loop (it takes
the intersection of the ranges that will make each of them redundant
individually).
Currently IRCE does not do any profitability analysis. That is a
TODO.
Please note that the status of this pass is *experimental*, and it is
not part of any default pass pipeline. Having said that, I would love
to get feedback and general input from people interested in trying
this out.
Differential Revision: http://reviews.llvm.org/D6693
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226201 91177308-0d34-0410-b5e6-96231b3b80d8
TargetLibraryAnalysis pass.
There are actually no direct tests of this already in the tree. I've
added the most basic test that the pass manager bits themselves work,
and the TLI object produced will be tested by upcoming patches as
they port passes which rely on TLI.
This is starting to point out the awkwardness of the invalidate API --
it seems poorly fitting on the *result* object. I suspect I will change
it to live on the analysis instead, but that's not for this change, and
I'd rather have a few more passes ported in order to have more
experience with how this plays out.
I believe there is only one more analysis required in order to start
porting instcombine. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226160 91177308-0d34-0410-b5e6-96231b3b80d8
The pass is really just a means of accessing a cached instance of the
TargetLibraryInfo object, and this way we can re-use that object for the
new pass manager as its result.
Lots of delta, but nothing interesting happening here. This is the
common pattern that is developing to allow analyses to live in both the
old and new pass manager -- a wrapper pass in the old pass manager
emulates the separation intrinsic to the new pass manager between the
result and pass for analyses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226157 91177308-0d34-0410-b5e6-96231b3b80d8
While the term "Target" is in the name, it doesn't really have to do
with the LLVM Target library -- this isn't an abstraction which LLVM
targets generally need to implement or extend. It has much more to do
with modeling the various runtime libraries on different OSes and with
different runtime environments. The "target" in this sense is the more
general sense of a target of cross compilation.
This is in preparation for porting this analysis to the new pass
manager.
No functionality changed, and updates inbound for Clang and Polly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226078 91177308-0d34-0410-b5e6-96231b3b80d8
The transform is somewhat involved, but the basic idea is simple: find
derived pointers that have been offset from the base pointer using gep
and replace the relocate of the derived pointer with a gep to the
relocated base pointer (with the same offset).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226060 91177308-0d34-0410-b5e6-96231b3b80d8
This commit moves `MDLocation`, finishing off PR21433. There's an
accompanying clang commit for frontend testcases. I'll attach the
testcase upgrade script I used to PR21433 to help out-of-tree
frontends/backends.
This changes the schema for `DebugLoc` and `DILocation` from:
!{i32 3, i32 7, !7, !8}
to:
!MDLocation(line: 3, column: 7, scope: !7, inlinedAt: !8)
Note that empty fields (line/column: 0 and inlinedAt: null) don't get
printed by the assembly writer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226048 91177308-0d34-0410-b5e6-96231b3b80d8
Sometimes teardown happens before the debug info graph is complete
(e.g., when clang throws an error). In that case, `MDNode`s will still
have RAUW, so deleting constants that the `MDNode`s point at will be
relatively expensive -- it'll cause re-uniquing all up the chain (what
I've been referring to as "teardown madness").
So, drop references *before* deleting constants. We need to drop a few
more references now: the metadata side of the metadata/value bridges
needs to be dropped off the cliff along with the rest of it (previously,
the bridges were cleaned before we did anything with the `MDNode`s).
There's no real functionality change here -- state before and after
`LLVMContextImpl::~LLVMContextImpl()` is unchanged -- so no testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226044 91177308-0d34-0410-b5e6-96231b3b80d8
I haven't looked closely at exactly why the side effect is required, but
this seems better than not mentioning it at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226030 91177308-0d34-0410-b5e6-96231b3b80d8
utils/sort_includes.py.
I clearly haven't done this in a while, so more changed than usual. This
even uncovered a missing include from the InstrProf library that I've
added. No functionality changed here, just mechanical cleanup of the
include order.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225974 91177308-0d34-0410-b5e6-96231b3b80d8
Now that the passes are wrappers around this, we no longer need
a vtable, virtual destructor, and other associated mess. This is
particularly nice to me as this is a class template. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225970 91177308-0d34-0410-b5e6-96231b3b80d8
This adds the domtree analysis to the new pass manager. The analysis
returns the same DominatorTree result entity used by the old pass
manager and essentially all of the code is shared. We just have
different boilerplate for running and printing the analysis.
I've converted one test to run in both modes just to make sure this is
exercised while both are live in the tree.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225969 91177308-0d34-0410-b5e6-96231b3b80d8
into the new pass manager's analysis cache which stores results
by-value.
Technically speaking, the dom trees were originally not movable but
copyable! This, unsurprisingly, didn't work at all -- the copy was
shallow and just resulted in rampant memory corruption. This change
explicitly forbids copying (as it would need to be a deep copy) and
makes them explicitly movable with the unsurprising boiler plate to
member-wise move them because we can't rely on MSVC to generate this
code for us. =/
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225966 91177308-0d34-0410-b5e6-96231b3b80d8
When processing an array, every element (Elt) has the same layout, so it
is useless to call ComputeLinearIndex recursively on each one.
Just do it once and multiply by the number of elements.
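A minimal sketch of that fast path (assuming the usual
ComputeLinearIndex(Ty, Indices, IndicesEnd, CurIndex) signature; the
actual patch also handles the remaining-index cases):

  if (ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
    // Every element is laid out identically, so compute one element's
    // contribution once instead of recursing per element.
    unsigned EltLinearOffset =
        ComputeLinearIndex(ATy->getElementType(), nullptr, nullptr, 0);
    return CurIndex + ATy->getNumElements() * EltLinearOffset;
  }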
Differential Revision: http://reviews.llvm.org/D6832
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225949 91177308-0d34-0410-b5e6-96231b3b80d8
class members are implicitly "inline"; no keyword needed.
Naturally, this could change how LLVM inlines these functions because
<GRR>, but that's not an excuse to use the keyword. ;]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225939 91177308-0d34-0410-b5e6-96231b3b80d8
significantly. Clean it up with the help of clang-format.
I've touched this up by hand in a couple of places that weren't quite
right (IMO). I think most of these actually have bugs open about them
already.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225938 91177308-0d34-0410-b5e6-96231b3b80d8
Now that the source and destination types can be specified,
allow doing an expansion that doesn't use an EXTLOAD of the
result type. Try to do a legal extload to an intermediate type
and extend that if possible.
This generalizes the special case custom lowering of extloads
R600 has been using to work around this problem.
This also happens to fix a bug that would incorrectly use more
aligned loads than should be used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225925 91177308-0d34-0410-b5e6-96231b3b80d8
A pass that adds random noops to X86 binaries to introduce diversity with the goal of increasing security against most return-oriented programming attacks.
Command line options:
-noop-insertion // Enable noop insertion.
-noop-insertion-percentage=X // X% of assembly instructions will have a noop prepended (default: 50%, requires -noop-insertion)
-max-noops-per-instruction=X // Randomly generate up to X noops per instruction, i.e., roll the dice X times with the probability set above (default: 1). This doesn't guarantee X noop instructions.
In addition, the following 'quick switch' in clang enables basic diversity using default settings (currently: noop insertion and schedule randomization; it is intended to be extended in the future).
-fdiversify
This is the llvm part of the patch.
clang part: D3393
http://reviews.llvm.org/D3392
Patch by Stephen Crane (@rinon)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225908 91177308-0d34-0410-b5e6-96231b3b80d8
This adds handling for ExceptionHandling::MSVC, used by the
x86_64-pc-windows-msvc triple. It assumes that filter functions have
already been outlined in either the frontend or the backend. Filter
functions are used in place of the landingpad catch clause type info
operands. In catch clause order, the first filter to return true will
catch the exception.
The C specific handler table expects the landing pad to be split into
one block per handler, but LLVM IR uses a single landing pad for all
possible unwind actions. This patch papers over the mismatch by
synthesizing single instruction BBs for every catch clause to fill in
the EH selector that the landing pad block expects.
Missing functionality:
- Accessing data in the parent frame from outlined filters
- Cleanups (from __finally) are unsupported, as they will require
outlining and parent frame access
- Filter clauses are unsupported, as there's no clear analogue in SEH
In other words, this is the minimal set of changes needed to write IR to
catch arbitrary exceptions and resume normal execution.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D6300
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225904 91177308-0d34-0410-b5e6-96231b3b80d8
It turns out, all callsites of the simplifier are guarded by a check for
CallInst::getCalledFunction (i.e., to make sure the callee is direct).
This check wasn't done when trying to further optimize a simplified fortified
libcall, introduced by a refactoring in r225640.
Fix that, add a testcase, and document the requirement.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225895 91177308-0d34-0410-b5e6-96231b3b80d8
a print method.
This was formulated on a bad idea, but sadly I didn't uncover how bad
this was until I got further down the path. I had hoped that we could
provide a low boilerplate way of printing analyses, but it just doesn't
seem like this really fits the needs of the analyses. Not all analyses
really want to do printing, and those that do don't all use the same
interface. Instead, with the new pass manager let's just take advantage
of the fact that creating an explicit printer pass like the LCG has is
pretty low boilerplate already and rely on that for testing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225861 91177308-0d34-0410-b5e6-96231b3b80d8
I'm adding generic analysis printing utility pass support which will
require such a method (or a specialization) so this will let the
existing printing logic satisfy that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225854 91177308-0d34-0410-b5e6-96231b3b80d8
and expose the necessary hooks in the API directly.
This makes it much cleaner for example to log the usage of a pass
manager from a library. It also makes it more obvious that this
functionality isn't "optional" or "asserts-only" for the pass manager.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225841 91177308-0d34-0410-b5e6-96231b3b80d8
referring to and give them nice comments.
Previously, these were used, but now things use the generic form of the
AnalysisManager.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225833 91177308-0d34-0410-b5e6-96231b3b80d8
This adds assembly and bitcode support for `MDLocation`. The assembly
side is rather big, since this is the first `MDNode` subclass (that
isn't `MDTuple`). Part of PR21433.
(If you're wondering where the mountains of testcase updates are, we
don't need them until I update `DILocation` and `DebugLoc` to actually
use this class.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225830 91177308-0d34-0410-b5e6-96231b3b80d8
This requires a new hook to prevent expanding sqrt in terms
of rsqrt and reciprocal. v_rcp_f32, v_rsq_f32, and v_sqrt_f32 are
all the same rate, so this expansion would just double the number
of instructions and cycles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225828 91177308-0d34-0410-b5e6-96231b3b80d8
Add a new subclass of `UniquableMDNode`, `MDLocation`. This will be the
IR version of `DebugLoc` and `DILocation`. The goal is to rename this
to `DILocation` once the IR classes supersede the `DI`-prefixed
wrappers.
This isn't used anywhere yet. Part of PR21433.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225824 91177308-0d34-0410-b5e6-96231b3b80d8
No functional changes, I'm just going to be doing a lot of work in these files and it would be helpful if they had more current LLVM style.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225817 91177308-0d34-0410-b5e6-96231b3b80d8
While, generally speaking, the process of lowering arguments for a patchpoint
is the same as lowering a regular indirect call, on some targets it may not be
exactly the same. Targets may not, for example, want to add additional register
dependencies that apply only to making cross-DSO calls through linker stubs,
may not want to load additional registers out of function descriptors, and may
not want to add additional side-effect-causing instructions that cannot be
removed later with the call itself being generated.
The PowerPC target will use this in a future commit (for all of the reasons
stated above).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225806 91177308-0d34-0410-b5e6-96231b3b80d8
Some targets, PowerPC for example, have pseudo-registers (such as that used to
represent the rounding mode), that don't have DWARF register numbers or a
register class. These are used only for internal dependency tracking, and
should not appear in the recorded live-outs. This adds a callback allowing the
target to pre-process the live-out mask in order to remove these kinds of
registers so that the StackMaps code does not complain about them and/or
attempt to include them in the output.
This will be used by the PowerPC target in a future commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225805 91177308-0d34-0410-b5e6-96231b3b80d8
a nested class template for the PassModel, and use the T-suffix for the
two typedefs to match the code in the AnalysisManager.
This is the last of the fairly fundamental code cleanups here. Will be
focusing on the printing of analyses next to finish that aspect off.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225785 91177308-0d34-0410-b5e6-96231b3b80d8
of templates in the new pass manager.
The analysis manager is now itself just a template predicated on the IR
unit. This makes lots of the templates really trivial and more clear:
they are all parameterized on a single type, the IR unit's type.
Everything else is a function of that. To me, this is a really nice
cleanup of the APIs and removes a layer of 'magic' and 'indirection'
that really wasn't there and just got in the way of understanding what
is going on here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225784 91177308-0d34-0410-b5e6-96231b3b80d8
the generic functionality of the pass managers themselves.
In the new infrastructure, the pass "manager" isn't actually interesting
at all. It just pipelines a single chunk of IR through N passes. We
don't need to know anything about the IR or the passes to do this really
and we can replace the 3 implementations of the exact same functionality
with a single generic PassManager template, complementing the single
generic AnalysisManager template.
I've left typedefs in place to give convenient names to the various
obvious instantiations of the template.
With this, I think I've nuked almost all of the redundant logic in the
managers, and I think the overall design is actually simpler for having
single templates that clearly indicate there is no special logic here.
The logging is made somewhat more annoying by this change, but I don't
think the difference is worth having heavy-weight traits to help log
things.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225783 91177308-0d34-0410-b5e6-96231b3b80d8
The peephole optimizer scans a basic block forward. At some point it
needs to answer the question "given a pointer to an MI in the current
BB, is it located before or after the current instruction?".
To answer this, it keeps a set of the MIs already seen during the scan;
if an MI is not in the set, it is assumed to come after.
This means that newly created MIs have to be inserted in the set as well.
This commit passes the set as an argument to the target-dependent
optimizeSelect() so that it can properly update the set with the
(potentially) newly created MIs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225772 91177308-0d34-0410-b5e6-96231b3b80d8
The functions {pred,succ,use,user}_{begin,end} exist, but many users
have to compare *_begin() with *_end() by hand to determine whether the
BasicBlock or User is empty. Fix this with a standard *_empty(),
demonstrating a few use cases.
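For example (helper names as described above):

  if (pred_empty(BB))    // previously: if (pred_begin(BB) == pred_end(BB))
    MarkBlockDead(BB);   // MarkBlockDead(): hypothetical caller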
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225760 91177308-0d34-0410-b5e6-96231b3b80d8
template.
This consolidates three copies of nearly the same core logic. It adds
"complexity" to the ModuleAnalysisManager in that it makes it possible
to share a ModuleAnalysisManager across multiple modules... But it does
so by deleting *all of the code*, so I'm OK with that. This will
naturally make fixing bugs in this code much simpler, etc.
The only down side here is that we have to use 'typename' and 'this->'
in various places, and the implementation is lifted into the header.
I'll take that for the code size reduction.
The convenient names are still typedef-ed and used throughout so that
users can largely ignore this aspect of the implementation.
The follow-up change to this will do the exact same refactoring for the
PassManagers. =D
It turns out that the interesting different code is almost entirely in
the adaptors. At the end, that should be essentially all that is left.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225757 91177308-0d34-0410-b5e6-96231b3b80d8
This name is less descriptive, but it sort of puts things in the
'llvm.frame...' namespace, relating it to frameallocate and
frameaddress. It also avoids using "allocate" and "allocation" together.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225752 91177308-0d34-0410-b5e6-96231b3b80d8
These intrinsics allow multiple functions to share a single stack
allocation from one function's call frame. The function with the
allocation may only perform one allocation, and it must be in the entry
block.
Functions accessing the allocation call llvm.recoverframeallocation with
the function whose frame they are accessing and a frame pointer from an
active call frame of that function.
These intrinsics are very difficult to inline correctly, so the
intention is that they be introduced rarely, or at least very late
during EH preparation.
Reviewers: echristo, andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D6493
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225746 91177308-0d34-0410-b5e6-96231b3b80d8
so has clang-format. Notably, this fixes a bunch of formatting in the
CGSCC pass manager side of things that has been improved in clang-format
recently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225743 91177308-0d34-0410-b5e6-96231b3b80d8
templated interface.
So far, every single IR unit I can come up with has address-identity.
That is, when two units of IR are both active in LLVM, their addresses
will be distinct if the IR is distinct. This is clearly true for
Modules, Functions, BasicBlocks, and Instructions. It turns out that the
only practical way to make the CGSCC stuff work the way we want is to
make it true for SCCs as well. I expect this pattern to continue.
When first designing the pass manager code, I kept this dimension of
freedom in the type parameters, essentially allowing for a wrapper-type
whose address did not form identity. But that really no longer makes
sense and is making the code more complex or subtle for no gain. If we
ever have an actual use case for this, we can figure out what makes
sense then and there. It will be better because then we will have the
actual example in hand.
While the simplifications afforded in this patch are fairly small
(mostly sinking the '&' out of many type parameters onto a few
interfaces), it would have become much more pronounced with subsequent
changes. I have a sequence of changes that will completely remove the
code duplication that currently exists between all of the pass managers
and analysis managers. =] Should make things much cleaner and avoid bug
fixing N times for the N pass managers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225723 91177308-0d34-0410-b5e6-96231b3b80d8
Add generic dispatch for the parts of `UniquableMDNode` that cast to
`MDTuple`. This makes adding other subclasses (like PR21433's
`MDLocation`) easier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225697 91177308-0d34-0410-b5e6-96231b3b80d8
Stop erasing `MDNode`s from the uniquing sets in `LLVMContextImpl`
during teardown (in particular, during
`UniquableMDNode::~UniquableMDNode()`). Although it's currently
feasible, there isn't any clear benefit and it may not be feasible for
other subclasses (which don't explicitly store the lookup hash).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225696 91177308-0d34-0410-b5e6-96231b3b80d8
Same as with `MDTuple`, factor out a `friend MDNode` by moving creation
logic to the concrete subclass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225690 91177308-0d34-0410-b5e6-96231b3b80d8
Move creation logic for `MDTuple`s down where it belongs. Once there
are a few more subclasses, these functions really won't make much sense
here (the `friend` relationship was already awkward). For now, leave
the `MDNode` versions around, but have them forward down.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225685 91177308-0d34-0410-b5e6-96231b3b80d8
Split `GenericMDNode` into two classes (with more descriptive names).
- `UniquableMDNode` will be a common subclass for `MDNode`s that are
sometimes uniqued like constants, and sometimes 'distinct'.
This class gets the (short-lived) RAUW support and related API.
- `MDTuple` is the basic tuple that has always been returned by
`MDNode::get()`. This is as opposed to more specific nodes to be
added soon, which have additional fields, custom assembly syntax,
and extra semantics.
This class gets the hash-related logic, since other subclasses of
`UniquableMDNode` may need to hash based on other fields.
To keep this diff from getting too big, I've added casts to `MDTuple`
that won't really scale as new subclasses of `UniquableMDNode` are
added, but I'll clean those up incrementally.
(No functionality change intended.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225682 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of returning early on `handleChangedOperand()` recursion
(finally identified (and test added) in r225657), prevent it upfront by
releasing operands before RAUW.
Aside from massively different program flow, there should be no
functionality change ;).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225665 91177308-0d34-0410-b5e6-96231b3b80d8
This adds two new fields to the RegisterOperand TableGen class:
string OperandNamespace = "MCOI";
string OperandType = "OPERAND_REGISTER";
These fields can be used to specify a target specific operand type,
which will be stored in the OperandType member of the MCOperandInfo
object.
This can be useful for targets that need to store some extra information
about operands that cannot be expressed using the target independent
types. For example, in the R600 backend, there are operands which
can take either registers or immediates and it is convenient to be able
to specify this in the TableGen definitions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225661 91177308-0d34-0410-b5e6-96231b3b80d8
Change the return of `MDNode::isDistinct()` for `MDNode::getTemporary()`
to `true`. They aren't uniqued.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225646 91177308-0d34-0410-b5e6-96231b3b80d8
One is that AArch64 has additional restrictions on when local relocations can
be used. We have to take those into consideration when deciding to put an L
symbol in the symbol table or not.
The other is that ld64 requires relocations to cstrings to use
linker-visible symbols on AArch64.
Thanks to Michael Zolotukhin for testing this!
Remove doesSectionRequireSymbols.
In an assembly expression like
bar:
.long L0 + 1
the intended semantics is that bar will contain a pointer one byte past L0.
In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell a relocation must point to the
end of a string, since that would look just like the start of the next.
The solution used in ELF is to use relocations with symbols if there is
a non-zero addend.
In MachO before this patch we would just keep all symbols in some sections.
This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.
This patch implements the non-zero addend logic for MachO too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225644 91177308-0d34-0410-b5e6-96231b3b80d8
This adds support for parsing and emitting the SBREL relocation variant for the
ARM target. Handling this relocation variant is necessary for supporting the
full ARM ELF specification. Addresses PR22128.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225595 91177308-0d34-0410-b5e6-96231b3b80d8
This default constructor is a bit weird. It left the range in an invalid
state. That might be reasonable so that you can construct a local
iterator range and assign to it based on some logic to compute the range
you want. If folks would like to support that use case, I can add it
back, but in 238-odd usages none have actually wanted to do this. ;]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225592 91177308-0d34-0410-b5e6-96231b3b80d8
The bitcode reading interface used std::error_code to report an error to the
callers and it is the callers job to print diagnostics.
This is not ideal for error handling or diagnostic reporting:
* For error handling, all that the callers care about is 3 possibilities:
  * It worked
  * The bitcode file is corrupted/invalid.
  * The file is not bitcode at all.
* For diagnostics, it is user friendly to include far more information
  about the invalid case so the user can find out what is wrong with the
  bitcode file. This comes up, for example, when a developer introduces a
  bug while extending the format.
The compromise we had was to have a lot of error codes.
With this patch we use the DiagnosticHandler to communicate with the
human and std::error_code to communicate with the caller.
This allows us to have far fewer error codes and adds the infrastructure to
print better diagnostics. This is so because the diagnostics are printed when
the issue is found. The code that detected the problem is alive on the stack and
can pass down as much context as needed. As an example, the patch updates
test/Bitcode/invalid.ll.
Using a DiagnosticHandler also moves the fatal/non-fatal error decision to the
caller. A simple one like llvm-dis can just use fatal errors. The gold plugin
needs a bit more complex treatment because of being passed non-bitcode files. A
hypothetical interactive tool would make all bitcode errors non-fatal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225562 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
One more attempt to fix UBSan reports: make sure DenseMapInfo::getEmptyKey()
and DenseMapInfo::getTombstoneKey() don't do any upcasts/downcasts to/from Value*.
Test Plan: check-llvm test suite with/without UBSan bootstrap
Reviewers: chandlerc, dexonsmith
Subscribers: llvm-commits, majnemer
Differential Revision: http://reviews.llvm.org/D6903
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225558 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r225498 (but leaves r225499, which was a worthy
cleanup).
My plan was to change `DEBUG_LOC` to store the `MDNode` directly rather
than its operands (patch was to go out this morning), but on reflection
it's not clear that it's strictly better. (I had missed that the
current code is unlikely to emit the `MDNode` at all.)
Conflicts:
lib/Bitcode/Reader/BitcodeReader.cpp (due to r225499)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225531 91177308-0d34-0410-b5e6-96231b3b80d8
This was used previously for metadata but is no longer needed there. Not
doing this simplifies ValueHandle and will make it easier to fix things
like AssertingVH's DenseMapInfo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225487 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, MemDepPrinter handled volatile and unordered accesses without involving MemoryDependenceAnalysis. By making a slight tweak to the documented interface - which is respected by both callers - we can move this responsibility to MDA for the benefit of any future callers. This is basically just cleanup.
In the future, we may decide to extend MDA's non-local dependency analysis to return useful results for ordered or volatile loads. I believe (but have not really checked in detail) that local dependency analysis does get useful results for ordered, but not volatile, loads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225483 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, MemoryDependenceAnalysis::getNonLocalPointerDependency was taking a list of properties about the instruction being queried. Since I'm about to need one more property to be passed down through the infrastructure - I need to know a query instruction is non-volatile in an inner helper - fix the interface once and for all.
I also added some assertions and behaviour clarifications around volatile and ordered field accesses. At the moment, this is mostly to document expected behaviour. The only non-standard instructions which can currently reach this are atomic, but unordered, loads and stores. Neither ordered nor volatile accesses can reach here.
The call in GVN is protected by an isSimple check when it first considers the load. The calls in MemDepPrinter are protected by isUnordered checks. Both utilities also check isVolatile for loads and stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225481 91177308-0d34-0410-b5e6-96231b3b80d8
Propagate whether `MDNode`s are 'distinct' through the other types of IR
(assembly and bitcode). This adds the `distinct` keyword to assembly.
Currently, no one actually calls `MDNode::getDistinct()`, so these nodes
only get created for:
- self-references, which are never uniqued, and
- nodes whose operands are replaced that hit a uniquing collision.
The concept of distinct nodes is still not quite first-class, since
distinct-ness doesn't yet survive across `MapMetadata()`.
Part of PR22111.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225474 91177308-0d34-0410-b5e6-96231b3b80d8
PEI tries to keep track of how much starting or ending a call sequence adjusts the stack pointer by, so that it can resolve frame-index references. Currently, it takes a very simplistic view of how SP adjustments are done - both FrameStartOpcode and FrameDestroyOpcode adjust it exactly by the amount written in its first argument.
This view is in fact incorrect for some targets (e.g. due to stack re-alignment, or because it may want to adjust the stack pointer in multiple steps). However, that doesn't cause breakage, because most targets (the only in-tree exception appears to be 32-bit ARM) rely on being able to simplify the call frame pseudo-instructions earlier, so this code is never hit.
Moving the computation into TargetInstrInfo allows targets to override the way the adjustment is computed if they need to have a non-zero SPAdj.
Differential Revision: http://reviews.llvm.org/D6863
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225437 91177308-0d34-0410-b5e6-96231b3b80d8
type (in addition to the memory type).
The *LoadExt* legalization handling used to only have one type, the
memory type. This forced users to assume that as long as the extload
for the memory type was declared legal, and the result type was legal,
the whole extload was legal.
However, this isn't always the case. For instance, on X86, with AVX,
this is legal:
v4i32 load, zext from v4i8
but this isn't:
v4i64 load, zext from v4i8
Whereas v4i64 is (arguably) legal, even without AVX2.
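With the extra type, a target can now state that distinction directly;
roughly (a sketch, not copied from the patch):

  // v4i8 -> v4i32 zextload: fine
  setLoadExtAction(ISD::ZEXTLOAD, MVT::v4i32, MVT::v4i8, Legal);
  // v4i8 -> v4i64 zextload: not directly legal, expand it
  setLoadExtAction(ISD::ZEXTLOAD, MVT::v4i64, MVT::v4i8, Expand);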
Note that the same thing was done a while ago for truncstores (r46140),
but I assume no one needed it yet for extloads, so here we go.
Calls to getLoadExtAction were changed to add the value type, found
manually in the surrounding code.
Calls to setLoadExtAction were mechanically changed, by wrapping the
call in a loop, to match previous behavior. The loop iterates over
the MVT subrange corresponding to the memory type (FP vectors, etc...).
I also pulled neighboring setTruncStoreActions into some of the loops;
those shouldn't make a difference, as the additional types are illegal.
(e.g., i128->i1 truncstores on PPC.)
No functional change intended.
Differential Revision: http://reviews.llvm.org/D6532
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225421 91177308-0d34-0410-b5e6-96231b3b80d8
Now that we have MVT::FIRST_VALUETYPE (r225362), we can provide a method
checking that the MVT is valid, that is, it's in
[FIRST_VALUETYPE, LAST_VALUETYPE).
This commit also uses it in a few asserts, that would previously accept
invalid MVTs, such as the default constructed -1. In that case,
the code following those asserts would do an out-of-bounds array access.
Using MVT::isValid, those assertions fail as expected when passed
invalid MVTs.
It feels clunky to have such a validity checking function, but it's
at least better than the alternative of broken manual checks.
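Reconstructed from the description above (a sketch, not the exact patch):

  bool isValid() const {
    return SimpleTy >= MVT::FIRST_VALUETYPE &&
           SimpleTy < MVT::LAST_VALUETYPE;
  }

  // typical use:
  assert(VT.isValid() && "expected a valid MVT");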
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225411 91177308-0d34-0410-b5e6-96231b3b80d8
Allow distinct `MDNode`s to be explicitly created. There's no way (yet)
of representing their distinctness in assembly/bitcode, however, so this
still isn't first-class.
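A minimal example of what this enables (API names per this and the
preceding commit; treat as a sketch):

  Metadata *Ops[] = {MDString::get(Context, "example")};
  MDNode *D = MDNode::getDistinct(Context, Ops);
  assert(D->isDistinct() && !MDNode::get(Context, Ops)->isDistinct());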
Part of PR22111.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225406 91177308-0d34-0410-b5e6-96231b3b80d8
Add API to indicate whether an `MDNode` is distinct. A distinct node is
not stored in the MDNode uniquing tables, and will never be returned by
`MDNode::get()`.
Although distinct nodes are only currently created by uniquing
collisions (when operands change), PR22111 will allow these nodes to be
explicitly created.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225401 91177308-0d34-0410-b5e6-96231b3b80d8
`MDNode::replaceOperandWith()` changes all instances of metadata. Stop
using it when linking module flags, since (due to uniquing) the flag
values could be used by other metadata.
Instead, use new API `NamedMDNode::setOperand()` to update the reference
directly.
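Roughly (a sketch; the module, flag index, and replacement node are
placeholders):

  NamedMDNode *ModFlags = DstM->getOrInsertModuleFlagsMetadata();
  // Update only this reference instead of RAUW'ing a possibly-shared value.
  ModFlags->setOperand(FlagIdx, NewFlagNode);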
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225397 91177308-0d34-0410-b5e6-96231b3b80d8
This change includes the most basic possible GCStrategy for a GC which is using the statepoint lowering code. At the moment, this GCStrategy doesn't really do much - aside from actually generating correct stackmaps, that is - but I went ahead and added a few extra correctness checks as proof of concept. It's mostly here to provide documentation on how to do one, and to provide a point for various optimization legality hooks I'd like to add going forward. (For context, see the TODOs in InstCombine around gc.relocate.)
Most of the validation logic added here as proof of concept will soon move into the Verifier. That move is dependent on http://reviews.llvm.org/D6811
There was discussion in the review thread about addrspace(1) being reserved for something. I'm going to follow up on a separate llvmdev thread. If needed, I'll update all the code at once.
Note that I am deliberately not making a GCStrategy required to use gc.statepoints with this change. I want to give folks out of tree - including myself - a chance to migrate. In a week or two, I'll make having a GCStrategy be required for gc.statepoints. To this end, I added the gc tag to one of the test cases but not others.
Differential Revision: http://reviews.llvm.org/D6808
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225365 91177308-0d34-0410-b5e6-96231b3b80d8
Many places reference MVT::LAST_VALUETYPE when iterating over all
valid MVTs, but they usually start with 0.
With FIRST_VALUETYPE, we can avoid explicit constants when we really
should be using MVT::SimpleValueType.
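The iteration pattern this enables looks roughly like:

  for (unsigned i = MVT::FIRST_VALUETYPE; i < MVT::LAST_VALUETYPE; ++i) {
    MVT VT = (MVT::SimpleValueType)i;  // rather than starting from a bare 0
    // ... per-type work ...
  }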
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225362 91177308-0d34-0410-b5e6-96231b3b80d8
Used to iterate over previously added memory dependencies in
adjustChainDeps() and iterateChainSucc().
SDep::isCtrl() was previously used in these places, which also gave
anti and output edges. The code may be worse if these are followed,
because MisNeedChainEdge() will conservatively return true since a
non-memory instruction has no memory operands, and a false chain dep
will be added. It is also unnecessary since all memory accesses of
interest will be reached by memory dependencies, and there is a budget
limit for the number of edges traversed.
This problem was found on an out-of-tree target with enabled alias
analysis. No test case for an in-tree target has been found.
Reviewed by Hal Finkel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225351 91177308-0d34-0410-b5e6-96231b3b80d8
requiring and invalidating specific analyses. Also make their printed
names match their class names. Writing these out as prose really doesn't
make sense to me any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225346 91177308-0d34-0410-b5e6-96231b3b80d8
r221973 changed SmallVector::operator[] to use size_t instead of unsigned.
Before that, on 64bit platforms, when a large index (say -1) was passed,
truncating it to unsigned avoided an overflow when computing 'begin() + idx',
and failed the range checking assertion, as expected.
With r221973, idx isn't truncated, so the addition wraps to
'(char*)begin() - 1', and doesn't fire anymore when it should have done so.
This commit changes the comparison to instead compute 'end() - begin()'
(i.e., 'size()'), which avoids potentially overflowing additions, and
correctly triggers the assertion when values such as -1 are passed.
Note that the problem already existed before that revision, on platforms
where sizeof(size_t) == sizeof(unsigned).
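The fixed check is essentially (a sketch; assertion text approximate):

  reference operator[](size_type idx) {
    // Compare against end() - begin() (i.e. size()) instead of computing
    // the potentially overflowing begin() + idx.
    assert(idx < size_type(end() - begin()));
    return begin()[idx];
  }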
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225338 91177308-0d34-0410-b5e6-96231b3b80d8
We can't drop support for RAUW entirely in `MDNode`s, since it's
required for graph construction. This comment was from before I'd done
the math on that (out-of-tree), and never should have been committed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225334 91177308-0d34-0410-b5e6-96231b3b80d8
passes too many times.
I think this is actually the issue that someone raised with me at the
developer's meeting and in an email, but that we never really got to the
bottom of. Having all the testing utilities made it much easier to dig
down and uncover the core issue.
When a pass manager is running many passes over a single function, we
need it to invalidate the analyses between each run so that they can be
re-computed as needed. We also need to track the intersection of
preserved higher-level analyses across all the passes that we run (for
example, if there is one module analysis which all the function analyses
preserve, we want to track that and propagate it). Unfortunately, this
interacted poorly with any enclosing pass adaptor between two IR units.
It would see the intersection of preserved analyses, and need to
invalidate any other analyses, but some of the un-preserved analyses
might have already been invalidated *and recomputed*! We would fail to
propagate the fact that the analysis had already been invalidated.
The solution to this struck me as really strange at first, but the more
I thought about it, the more natural it seemed. After a nice discussion
with Duncan about it on IRC, it seemed even nicer. The idea is that
invalidating an analysis *causes* it to be preserved! Preserving the
lack of result is trivial. If it is recomputed, great. Until something
*else* invalidates it again, we're good.
The consequence of this is that the invalidate methods on the analysis
manager which operate over many passes now consume their
PreservedAnalyses object, update it to "preserve" every analysis pass to
which it delivers an invalidation (regardless of whether the pass
chooses to be removed, or handles the invalidation itself by updating
itself). Then we return this augmented set from the invalidate routine,
letting the pass manager take the result and use the intersection of
*that* across each pass run to compute the final preserved set. This
accounts for all the places where the early invalidation of an analysis
has already "preserved" it for a future run.
I've beefed up the testing and adjusted the assertions to show that we
no longer repeatedly invalidate or compute the analyses across nested
pass managers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225333 91177308-0d34-0410-b5e6-96231b3b80d8
WillNotOverflowUnsignedAdd's smarts will live in ValueTracking as
computeOverflowForUnsignedAdd. It now returns a tri-state result:
never overflows, always overflows and sometimes overflows.
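A caller then looks roughly like this (a sketch; the enumerator names and
the trailing query arguments are assumed, not quoted from the patch):

  OverflowResult OR = computeOverflowForUnsignedAdd(LHS, RHS, DL, AC, CxtI, DT);
  if (OR == OverflowResult::NeverOverflows)
    return BinaryOperator::CreateNUWAdd(LHS, RHS);  // provably never wraps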
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225329 91177308-0d34-0410-b5e6-96231b3b80d8
This is affecting the behavior of some ObjC++ / AArch64 test cases on Darwin.
Reverting to get the bots green while I track down the source of the changed
behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225311 91177308-0d34-0410-b5e6-96231b3b80d8
The goal is to allow MachineFunctionInfo to override this create
function to customize the creation.
No change intended for existing backends in this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225292 91177308-0d34-0410-b5e6-96231b3b80d8
This will be used for AMD GPUs with the Graphics Core Next architecture,
which currently use the r600 triple.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225276 91177308-0d34-0410-b5e6-96231b3b80d8
Use this to test that path of invalidation. This test actually shows
redundant invalidation here that is really bad. I'm going to work on
fixing that next, but wanted to commit the test harness now that it's all
working.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225257 91177308-0d34-0410-b5e6-96231b3b80d8
a specific analysis result.
This is quite handy to test things, and will also likely be very useful
for debugging issues. You could narrow down pass validation failures by
walking these invalidate pass runs up and down the pass pipeline, etc.
I've added support to the pass pipeline parsing to be able to create one
of these for any analysis pass desired.
Just adding this class uncovered one latent bug where the
AnalysisManager CRTP base class had a hard-coded Module type rather than
using IRUnitT.
I've also added tests for invalidation and caching of analyses in
a basic way across all the pass managers. These in turn uncovered two
more bugs where we failed to correctly invalidate an analysis -- its
results were invalidated but the key for re-running the pass was never
cleared and so it was never re-run. Quite nasty. I'm very glad to debug
this here rather than with a full system.
Also, yes, the naming here is horrid. I'm going to update some of the
names to be slightly less awful shortly. But really, I've no "good"
ideas for naming. I'll be satisfied if I can get it to "not bad".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225246 91177308-0d34-0410-b5e6-96231b3b80d8
is a no-op other than requiring some analysis results be available.
This can be used in real pass pipelines to force the usually lazy
analysis running to eagerly compute something at a specific point, and
it can be used to test the pass manager infrastructure (my primary use
at the moment).
I've also added a bit of pipeline parsing magic to support generating
these directly from the opt command so that you can directly use these
when debugging your analysis. The syntax is:
require<analysis-name>
This can be used at any level of the pass manager. For example:
cgscc(function(require<my-analysis>,no-op-function))
This would produce a no-op function pass requiring my-analysis, followed
by a fully no-op function pass, both of these in a function pass manager
which is nested inside of a bottom-up CGSCC pass manager which is in the
top-level (implicit) module pass manager.
I have zero attachment to the particular syntax I'm using here. Consider
it a straw man for use while I'm testing and fleshing things out.
Suggestions for better syntax welcome, and I'll update everything based
on any consensus that develops.
I've used this new functionality to more directly test the analysis
printing rather than relying on the cgscc pass manager running an
analysis for me. This is still minimally tested because I need to have
analyses to run first! ;] That patch is next, but wanted to keep this
one separate for easier review and discussion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225236 91177308-0d34-0410-b5e6-96231b3b80d8
dsymutil would like to use all the AsmPrinter/MCStreamer infrastructure
to stream out the DWARF. In order to do so, it will reuse the DIE object
and so this header needs to be public.
The interface exposed here has some corners that cannot be used without a
DwarfDebug object, but clients that want to stream Dwarf can just avoid
these.
Differential Revision: http://reviews.llvm.org/D6695
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225208 91177308-0d34-0410-b5e6-96231b3b80d8
when all are being preserved.
We want to short-circuit this for a couple of reasons. One, I don't
really want passes to grow a dependency on actually receiving their
invalidate call when they've been preserved. I'm thinking about removing
this entirely. But more importantly, preserving everything is likely to
be the common case in a lot of scenarios, and it would be really good to
bypass all of the invalidation and preservation machinery there.
Avoiding calling N opaque functions to try to invalidate things that are
by definition still valid seems important. =]
This wasn't really inspired by much other than seeing the spam in the
logging for analyses, but it seems better to get it checked in rather
than forgetting about it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225163 91177308-0d34-0410-b5e6-96231b3b80d8
manager.
This starts to allow us to test analyses more easily, but it's really
only the beginning. Some of the code here is still untestable without
manual changes to create analysis passes, but I wanted to factor it into
as small of chunks as possible.
Next up in order to be able to test things are, in no particular order:
- No-op analyses passes so we don't have to use real ones to exercise
the pass manager itself.
- Automatic way of generating dummy passes that require an analysis be
run, including a variant that calls a 'print' method on a pass to make
it even easier to print out the results of an analysis.
- Dummy passes that invalidate all analyses for their IR unit so we can
test invalidation and re-runs.
- Automatic way to print each analysis pass as it is re-run.
- Automatic but optional verification of analysis passes everywhere
possible.
I'm not claiming I'll get to all of these immediately, but that's what
is in the pipeline at some stage. I'm fleshing out exactly what I need
and what to prioritize by working on converting analyses and then trying
to test the conversion. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225162 91177308-0d34-0410-b5e6-96231b3b80d8
renaming a file from AssumptionTracker.h to AssumptionCache.h.
Thanks to Philip Reames for noticing and pointing it out in code review!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225146 91177308-0d34-0410-b5e6-96231b3b80d8
units.
This was debated back and forth a bunch, but using references is now
clearly cleaner. Of all the code written using pointers thus far, in
only one place did it really make more sense to have a pointer. In most
cases, this just removes immediate dereferencing from the code. I think
it is much better to get errors on null IR units earlier, potentially
at compile time, than to delay it.
Most notably, the legacy pass manager uses references for its routines
and so as more and more code works with both, the use of pointers was
likely to become really annoying. I noticed this when I ported the
domtree analysis over and wrote the entire thing with references only to
have it fail to compile. =/ It seemed better to switch now than to
delay. We can, of course, revisit this if we learn that references are
really problematic in the API.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225145 91177308-0d34-0410-b5e6-96231b3b80d8
a cache of assumptions for a single function, and an immutable pass that
manages those caches.
The motivation for this change is two fold. Immutable analyses are
really hacks around the current pass manager design and don't exist in
the new design. This is usually OK, but it requires that the core logic
of an immutable pass be reasonably partitioned off from the pass logic.
This change does precisely that. As a consequence it also paves the way
for the *many* utility functions that deal in the assumptions to live in
both pass manager worlds by creating a separate non-pass object with
its own independent API that they all rely on. Now, the only bits of the
system that deal with the actual pass mechanics are those that actually
need to deal with the pass mechanics.
Once this separation is made, several simplifications become pretty
obvious in the assumption cache itself. Rather than using a set and
callback value handles, it can just be a vector of weak value handles.
The callers can easily skip the handles that are null, and eventually we
can wrap all of this up behind a filter iterator.
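The resulting object is roughly shaped like this (a sketch, not the exact
in-tree class):

  class AssumptionCache {
    SmallVector<WeakVH, 4> AssumeHandles; // weak handles; entries may go null
  public:
    void registerAssumption(CallInst *CI) { AssumeHandles.push_back(CI); }
    MutableArrayRef<WeakVH> assumptions() { return AssumeHandles; } // callers skip nulls
  };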
For now, this adds boiler plate to the various passes, but this kind of
boiler plate will end up making it possible to port these passes to the
new pass manager, and so it will end up factored away pretty reasonably.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225131 91177308-0d34-0410-b5e6-96231b3b80d8
The existing code provided for specifying a global loop alignment preference.
However, the preferred loop alignment might depend on the loop itself. For
recent POWER cores, loops between 5 and 8 instructions should have 32-byte
alignment (while the others are better with 16-byte alignment) so that the
entire loop will fit in one i-cache line.
To support this, getPrefLoopAlignment has been made virtual, and can be
provided with an optional MachineLoop* so the target can inspect the loop
before answering the query. The default behavior, as before, is to return the
value set with setPrefLoopAlignment. MachineBlockPlacement now queries the
target for each loop instead of only once per function. There should be no
functional change for other targets.
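A target override then looks something like this (illustrative only; the
predicate and constant are hypothetical):

  unsigned MyTargetLowering::getPrefLoopAlignment(MachineLoop *ML) const {
    // Give small loops the larger alignment so they fit in one i-cache line.
    if (ML && loopFitsInOneICacheLine(ML))  // hypothetical predicate
      return SmallLoopAlignment;            // hypothetical constant
    return TargetLowering::getPrefLoopAlignment(ML); // value from setPrefLoopAlignment
  }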
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225117 91177308-0d34-0410-b5e6-96231b3b80d8
FunctionPassManager. These never got documented, likely due to the
clutter of this header file. This fixes another problem people noticed
when they started trying to use the new pass manager.
I've also used this to document the aspirational constraints I would
like to hold passes to. I don't really have a better place to document
such things at this point, but eventually will probably create a proper
.rst file and page for the LLVM pass infrastructure that carries such
high-level concerns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225097 91177308-0d34-0410-b5e6-96231b3b80d8
concept-based polymorphism in the pass manager to a separate header.
I got feedback from someone reading the code and trying to use it that
this was really making it hard to dive in and start using these APIs and
that makes a lot of sense.
This only requires a moderate amount of gymnastics to separate in this
way, namely rinsing the PreservedAnalysis object through a template
argument in a few places so that it is dependent and we only examine it
on instantiation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225094 91177308-0d34-0410-b5e6-96231b3b80d8
WillNotOverflowUnsignedMul's smarts will live in ValueTracking as
computeOverflowForUnsignedMul. It now returns a tri-state result:
never overflows, always overflows and sometimes overflows.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225076 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r225059. I think MSVC 2012 has a problem with this. This is
an attempt to fix one of the MSVC 2012 bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225065 91177308-0d34-0410-b5e6-96231b3b80d8
This appears to have broken at least the windows build bots due to
compile errors in the predicate that didn't simply suppress the overload.
I'm not sure what the fix is, and the bots have been broken for a long
time now so I'm just reverting until Michael can figure out a fix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225064 91177308-0d34-0410-b5e6-96231b3b80d8
The issue was that AArch64 has additional restrictions on when local
relocations can be used. We have to take those into consideration when
deciding to put an L symbol in the symbol table or not.
Original message:
Remove doesSectionRequireSymbols.
In an assembly expression like
bar:
.long L0 + 1
the intended semantics is that bar will contain a pointer one byte past L0.
In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell a relocation must point to the
end of a string, since that would look just like the start of the next.
The solution used in ELF is to use relocations with symbols if there is a
non-zero addend.
In MachO before this patch we would just keep all symbols in some sections.
This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.
This patch implements the non-zero addend logic for MachO too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225048 91177308-0d34-0410-b5e6-96231b3b80d8
Under the large code model, we cannot assume that __morestack lives within
2^31 bytes of the call site, so we cannot use pc-relative addressing. We
cannot perform the call via a temporary register, as the rax register may
be used to store the static chain, and all other suitable registers may be
either callee-save or used for parameter passing. We cannot use the stack
at this point either because __morestack manipulates the stack directly.
To avoid these issues, perform an indirect call via a read-only memory
location containing the address.
This solution is not perfect, as it assumes that the .rodata section
is laid out within 2^31 bytes of each function body, but this seems to
be sufficient for JIT.
Differential Revision: http://reviews.llvm.org/D6787
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225003 91177308-0d34-0410-b5e6-96231b3b80d8
In an assembly expression like
bar:
.long L0 + 1
the intended semantics is that bar will contain a pointer one byte past L0.
In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell a relocation must point to the
end of a string, since that would look just like the start of the next.
The solution used in ELF is to use relocations with symbols if there is a
non-zero addend.
In MachO before this patch we would just keep all symbols in some sections.
This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.
This patch implements the non-zero addend logic for MachO too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224985 91177308-0d34-0410-b5e6-96231b3b80d8
Nothing particularly interesting, just adding infrastructure for use by in-tree and out-of-tree users.
Note: These were extracted out of a working frontend, but they have not been well tested in isolation.
Differential Revision: http://reviews.llvm.org/D6807
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224981 91177308-0d34-0410-b5e6-96231b3b80d8
This change implements four basic optimizations:
If a relocated value isn't used, it doesn't need to be relocated.
If the value being relocated is null, relocation doesn't change that. (Technically, this might be collector specific. I don't know of one which it doesn't work for though.)
If the value being relocated is undef, the relocation is meaningless.
If the value being relocated was known nonnull, the relocated pointer also isn't null. (Since it points to the same source language object.)
I outlined other planned work in comments.
Differential Revision: http://reviews.llvm.org/D6600
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224968 91177308-0d34-0410-b5e6-96231b3b80d8
a CLANG_LIBDIR_SUFFIX variable. This is necessary before I can add
support for using that variable to CMake and the C++ code in Clang, and
the autoconf build system does all substitutions in the LLVM tree.
As mentioned before, I'm not planning to add actual multilib support to
the autoconf build, just enough stubs for it to keep playing nicely with
the CMake build once that one has support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224922 91177308-0d34-0410-b5e6-96231b3b80d8
If the control flow is modelling an if-statement where the only instruction in
the 'then' basic block (excluding the terminator) is a call to cttz/ctlz,
CodeGenPrepare can try to speculate the cttz/ctlz call and simplify the control
flow graph.
Example:
\code
entry:
%cmp = icmp eq i64 %val, 0
br i1 %cmp, label %end.bb, label %then.bb
then.bb:
%c = tail call i64 @llvm.cttz.i64(i64 %val, i1 true)
br label %end.bb
end.bb:
%cond = phi i64 [ %c, %then.bb ], [ 64, %entry]
\endcode
In this example, basic block %then.bb is taken if value %val is not zero.
Also, the phi node in %end.bb would propagate the size in bits of %val (64)
only if %val is equal to zero.
With this patch, CodeGenPrepare will try to hoist the call to cttz from %then.bb
into basic block %entry only if cttz is cheap to speculate for the target.
Added two new hooks in TargetLowering.h to let targets customize the behavior
(i.e. decide whether it is cheap or not to speculate calls to cttz/ctlz). The
two new methods are 'isCheapToSpeculateCtlz' and 'isCheapToSpeculateCttz'.
By default, both methods return 'false'.
On X86, method 'isCheapToSpeculateCtlz' returns true only if the target has
LZCNT. Method 'isCheapToSpeculateCttz' only returns true if the target has BMI.
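A hedged sketch of what those X86 overrides amount to (the Subtarget member and its hasBMI()/hasLZCNT() accessors are assumed from the X86 backend; this is not a verbatim copy of the patch):

bool X86TargetLowering::isCheapToSpeculateCttz() const {
  // TZCNT (BMI) produces a defined result for a zero input, so the speculated
  // call is cheap.
  return Subtarget->hasBMI();
}

bool X86TargetLowering::isCheapToSpeculateCtlz() const {
  // Likewise LZCNT is well defined for zero, unlike a bare BSR.
  return Subtarget->hasLZCNT();
}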
Differential Revision: http://reviews.llvm.org/D6728
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224899 91177308-0d34-0410-b5e6-96231b3b80d8
.set directives may be overridden by other .set directives as well as
label definitions.
This fixes PR22019.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224811 91177308-0d34-0410-b5e6-96231b3b80d8
This function constructs the main liverange by merging all subranges if
subregister liveness tracking is available. This should be slightly
faster to compute instead of performing the liveness calculation again
for the main range. More importantly it avoids cases where the main
liverange would cover positions where no subrange was live. These cases
happened for partial definitions where the actual defined part was dead
and only the undefined parts were used later.
The register coalescing requires that every part covered by the main
live range has at least one subrange live.
I also expect this function to become useful later for places where the
subranges are modified in a way that makes it hard to correctly fix the
main liverange in the machine scheduler; we can then simply reconstruct it
from the subranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224806 91177308-0d34-0410-b5e6-96231b3b80d8
a size and alignment. Several assertions in DwarfDebug rely on all variable
types to report back a size, or to be derived from a type with a size.
Tested in CFE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224780 91177308-0d34-0410-b5e6-96231b3b80d8
Previously I tried to plug musttail into the existing vararg lowering
code. That turned out to be a mistake, because non-vararg calls use
significantly different register lowering, even on x86. For example, AVX
vectors are usually passed in registers to normal functions and memory
to vararg functions. Now musttail uses a completely separate lowering.
Hopefully this can be used as the basis for non-x86 perfect forwarding.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D6156
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224745 91177308-0d34-0410-b5e6-96231b3b80d8
Take two disjoint Loops L1 and L2.
LoopSimplify fails to simplify some loops (e.g. when indirect branches
are involved). In such situations, it can happen that an exit for L1 is
the header of L2. Thus, when we create PHIs in one of such exits we are
also inserting PHIs in L2 header.
This could break LCSSA form for L2 because these inserted PHIs can also
have uses in L2 exits, which are never handled in the current
implementation. Provide a fix for this corner case and test that we
don't assert/crash on that.
Differential Revision: http://reviews.llvm.org/D6624
rdar://problem/19166231
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224740 91177308-0d34-0410-b5e6-96231b3b80d8
In recent times asan and valgrind have found way more memory management bugs
in llvm than the special purpose leak detector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224703 91177308-0d34-0410-b5e6-96231b3b80d8
(X & INT_MIN) ? X & INT_MAX : X into X & INT_MAX
(X & INT_MIN) ? X : X & INT_MAX into X
(X & INT_MIN) ? X | INT_MIN : X into X
(X & INT_MIN) ? X : X | INT_MIN into X | INT_MIN
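A quick self-contained check of the first two folds (plain C++ using uint32_t bit patterns in place of INT_MIN/INT_MAX so the arithmetic stays well defined; an illustration only, not the InstCombine code):

#include <cassert>
#include <cstdint>

int main() {
  const uint32_t SignBit = 0x80000000u; // the INT_MIN bit pattern
  const uint32_t Rest    = 0x7fffffffu; // the INT_MAX bit pattern
  const uint32_t Tests[] = {0u, 1u, Rest, SignBit, SignBit | 5u, 0xffffffffu};
  for (uint32_t X : Tests) {
    // (X & INT_MIN) ? X & INT_MAX : X  ==>  X & INT_MAX
    assert(((X & SignBit) ? (X & Rest) : X) == (X & Rest));
    // (X & INT_MIN) ? X : X & INT_MAX  ==>  X
    assert(((X & SignBit) ? X : (X & Rest)) == X);
  }
  return 0;
}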
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224669 91177308-0d34-0410-b5e6-96231b3b80d8
It is intended to be used for a family of personality functions that
have similar IR preparation requirements. Typically when interoperating
with MSVC personality functions, bits of functionality need to be
outlined from the main function into helper functions. There is also
usually more than one landing pad per invoke, which does not match the
LLVM IR landingpad representation.
None of this is implemented yet. This change just adds a new enum that
is active for *-windows-msvc and delegates to the EH removal preparation
pass. No functionality change for other targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224625 91177308-0d34-0410-b5e6-96231b3b80d8
dsymutil needs access to DWARF-specific information; the small DIContext
wrapper isn't sufficient. Other DWARF consumers might want to use it too
(I'm looking at you lldb).
Differential Revision: http://reviews.llvm.org/D6694
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224594 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of reusing the name `MapValue()` when mapping `Metadata`, use
`MapMetadata()`. The old name doesn't make much sense after the
`Metadata`/`Value` split.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224566 91177308-0d34-0410-b5e6-96231b3b80d8
- This also fixes a bug introduced in r223880 where values were not
correctly marked as Dead anymore.
- Cleanup computeDeadValues(): split up SubRange code variant, simplify
arguments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224538 91177308-0d34-0410-b5e6-96231b3b80d8
of the abi we should be using. For targets that don't use the
option there's no change, otherwise this allows external users
to set the ABI via string and avoid some of the -backend-option
pain in clang.
Use this option to move the ABI for the ARM port from the
Subtarget to the TargetMachine and update the testcases
accordingly since it's no longer valid to set via -mattr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224492 91177308-0d34-0410-b5e6-96231b3b80d8
Make `DICompositeType` mutators private to prevent misuse. All calls to
`setArrays()` and `setContainingType()` should go through
`DIBuilder::replaceArrays()` and `DIBuilder::replaceVTableHolder()`.
This is a follow-up to r224482 (now that clang has been updated in
r224483).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224486 91177308-0d34-0410-b5e6-96231b3b80d8
Also corrected the name of the load command to not end in an 'S', as well as
the name of the MachO::linker_option_command struct and other places that had
the word option as plural, which did not match the Mac OS X headers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224485 91177308-0d34-0410-b5e6-96231b3b80d8
Add API to DIBuilder to handle self-referencing `DICompositeType`s.
Self-references aren't expected in the debug info graph, and we take
advantage of that by only calling `resolveCycles()` on nodes that were
once forward declarations (otherwise, DIBuilder needs an expensive
tracking reference to every unresolved node it creates, which in cyclic
graphs is *all of them*).
However, clang seems to create self-referencing `DICompositeType`s. Add
API to manage this safely. The paired commit to clang will include the
regression test.
I'll make the `DICompositeType` API `private` in a follow-up to prevent
misuse (I've separated that to prevent build failures from missing the
clang commit).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224482 91177308-0d34-0410-b5e6-96231b3b80d8
This handles the case of a BUILD_VECTOR being constructed out of elements extracted from a vector twice the size of the result vector. Previously this was always scalarized. Now, we try to construct a shuffle node that feeds on extract_subvectors.
This fixes PR15872 and provides a partial fix for PR21711.
Differential Revision: http://reviews.llvm.org/D6678
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224429 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When generating MIPS assembly, LLVM always overrides the default assembler options by emitting the '.set noreorder', '.set nomacro' and '.set noat' directives,
while GCC uses the default options if an assembly-level function contains inline assembly code.
This becomes a problem when the code generated by LLVM is interleaved with inline assembly which assumes GCC-like assembler options (from Linux, for example).
This patch fixes these conflicts by setting the appropriate assembler options at the beginning of an inline asm block and popping them at the end.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6637
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224425 91177308-0d34-0410-b5e6-96231b3b80d8
The type promotion helper does not support vector types, so it does not
kick in in such cases.
Original commit message:
[CodeGenPrepare] Move sign/zero extensions near loads using type promotion.
This patch extends the optimization in CodeGenPrepare that moves a sign/zero
extension near a load when the target can combine them. The optimization may
promote any operations between the extension and the load to make that possible.
Although this optimization may be beneficial for all targets, in particular
AArch64, this is enabled for X86 only as I have not benchmarked it for other
targets yet.
** Context **
Most targets feature extended loads, i.e., loads that perform a zero or sign
extension for free. In that context it is interesting to expose such pattern in
CodeGenPrepare so that the instruction selection pass can form such loads.
Sometimes, this pattern is blocked because of instructions between the load and
the extension. When those instructions are promotable to the extended type, we
can expose this pattern.
** Motivating Example **
Let us consider an example:
define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
%ld = load i8* %addr1
%zextld = zext i8 %ld to i32
%ld2 = load i32* %addr2
%add = add nsw i32 %ld2, %zextld
%sextadd = sext i32 %add to i64
%zexta = zext i8 %a to i32
%addza = add nsw i32 %zexta, %zextld
%sextaddza = sext i32 %addza to i64
%addb = add nsw i32 %b, %zextld
%sextaddb = sext i32 %addb to i64
call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
ret void
}
As it is, this IR generates the following assembly on x86_64:
[...]
movzbl (%rdi), %eax # zero-extended load
movl (%rsi), %esi # plain load
addl %eax, %esi # 32-bit add
movslq %esi, %rdi # sign extend the result of add
movzbl %dl, %edx # zero extend the first argument
addl %eax, %edx # 32-bit add
movslq %edx, %rsi # sign extend the result of add
addl %eax, %ecx # 32-bit add
movslq %ecx, %rdx # sign extend the result of add
[...]
The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.
Now, by promoting the additions to form more extended loads we would generate:
[...]
movzbl (%rdi), %eax # zero-extended load
movslq (%rsi), %rdi # sign-extended load
addq %rax, %rdi # 64-bit add
movzbl %dl, %esi # zero extend the first argument
addq %rax, %rsi # 64-bit add
movslq %ecx, %rdx # sign extend the second argument
addq %rax, %rdx # 64-bit add
[...]
The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.
This kind of sequence happens a lot in code using 32-bit indexes on 64-bit
architectures.
Note: The throughput numbers are similar on Sandy Bridge and Haswell.
** Proposed Solution **
To avoid the penalty of all these sign/zero extensions, we merge them in the
loads at the beginning of the chain of computation by promoting all the chain of
computation on the extended type. The promotion is done if and only if we do not
introduce new extensions, i.e., if we do not degrade the code quality.
To achieve this, we extend the existing “move ext to load” optimization with the
promotion mechanism introduced to match larger patterns for addressing mode
(r200947).
The idea of this extension is to perform the following transformation:
ext(promotableInst1(...(promotableInstN(load))))
=>
promotedInst1(...(promotedInstN(ext(load))))
The promotion mechanism in that optimization is enabled by a new TargetLowering
switch, which is off by default. In other words, by default, the optimization
performs the “move ext to load” optimization as it was before this patch.
** Performance **
Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
Tested Optimization Levels: O3/Os
Tests: llvm-testsuite + externals.
Results:
- No regression beside noise.
- Improvements:
CINT2006/473.astar: ~2%
Benchmarks/PAQ8p: ~2%
Misc/perlin: ~3%
The results are consistent for both O3 and Os.
<rdar://problem/18310086>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224402 91177308-0d34-0410-b5e6-96231b3b80d8
This patch extends the optimization in CodeGenPrepare that moves a sign/zero
extension near a load when the target can combine them. The optimization may
promote any operations between the extension and the load to make that possible.
Although this optimization may be beneficial for all targets, in particular
AArch64, this is enabled for X86 only as I have not benchmarked it for other
targets yet.
** Context **
Most targets feature extended loads, i.e., loads that perform a zero or sign
extension for free. In that context it is interesting to expose such pattern in
CodeGenPrepare so that the instruction selection pass can form such loads.
Sometimes, this pattern is blocked because of instructions between the load and
the extension. When those instructions are promotable to the extended type, we
can expose this pattern.
** Motivating Example **
Let us consider an example:
define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
%ld = load i8* %addr1
%zextld = zext i8 %ld to i32
%ld2 = load i32* %addr2
%add = add nsw i32 %ld2, %zextld
%sextadd = sext i32 %add to i64
%zexta = zext i8 %a to i32
%addza = add nsw i32 %zexta, %zextld
%sextaddza = sext i32 %addza to i64
%addb = add nsw i32 %b, %zextld
%sextaddb = sext i32 %addb to i64
call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
ret void
}
As it is, this IR generates the following assembly on x86_64:
[...]
movzbl (%rdi), %eax # zero-extended load
movl (%rsi), %esi # plain load
addl %eax, %esi # 32-bit add
movslq %esi, %rdi # sign extend the result of add
movzbl %dl, %edx # zero extend the first argument
addl %eax, %edx # 32-bit add
movslq %edx, %rsi # sign extend the result of add
addl %eax, %ecx # 32-bit add
movslq %ecx, %rdx # sign extend the result of add
[...]
The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.
Now, by promoting the additions to form more extended loads we would generate:
[...]
movzbl (%rdi), %eax # zero-extended load
movslq (%rsi), %rdi # sign-extended load
addq %rax, %rdi # 64-bit add
movzbl %dl, %esi # zero extend the first argument
addq %rax, %rsi # 64-bit add
movslq %ecx, %rdx # sign extend the second argument
addq %rax, %rdx # 64-bit add
[...]
The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.
This kind of sequence happens a lot in code using 32-bit indexes on 64-bit
architectures.
Note: The throughput numbers are similar on Sandy Bridge and Haswell.
** Proposed Solution **
To avoid the penalty of all these sign/zero extensions, we merge them in the
loads at the beginning of the chain of computation by promoting all the chain of
computation on the extended type. The promotion is done if and only if we do not
introduce new extensions, i.e., if we do not degrade the code quality.
To achieve this, we extend the existing “move ext to load” optimization with the
promotion mechanism introduced to match larger patterns for addressing mode
(r200947).
The idea of this extension is to perform the following transformation:
ext(promotableInst1(...(promotableInstN(load))))
=>
promotedInst1(...(promotedInstN(ext(load))))
The promotion mechanism in that optimization is enabled by a new TargetLowering
switch, which is off by default. In other words, by default, the optimization
performs the “move ext to load” optimization as it was before this patch.
** Performance **
Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
Tested Optimization Levels: O3/Os
Tests: llvm-testsuite + externals.
Results:
- No regression beside noise.
- Improvements:
CINT2006/473.astar: ~2%
Benchmarks/PAQ8p: ~2%
Misc/perlin: ~3%
The results are consistent for both O3 and Os.
<rdar://problem/18310086>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224351 91177308-0d34-0410-b5e6-96231b3b80d8
Add in definedness checks for shift operators, null checks when
pointers are assumed by the code to be non-null, and explicit
unreachables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224255 91177308-0d34-0410-b5e6-96231b3b80d8
- by Ella Bolshinsky
The alias analysis is used to decide whether the given instruction
is a barrier for store sinking. For two identical stores, the following
instructions are checked in both basic blocks to determine whether
they are sinking barriers.
http://reviews.llvm.org/D6420
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224247 91177308-0d34-0410-b5e6-96231b3b80d8
It makes more sense for ThreadLocal<const T>::get to return a const T*
and ThreadLocal<T>::get to return a T*.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224225 91177308-0d34-0410-b5e6-96231b3b80d8
Clang's static analyzer found several potential cases of undefined
behavior, use of un-initialized values, and potentially null pointer
dereferences in tablegen, Support, MC, and ADT. This cleans them up
with specific assertions on the assumptions of the code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224154 91177308-0d34-0410-b5e6-96231b3b80d8
The __fp16 type is unconditionally exposed. Since -mfp16-format is not yet
supported, there is no user switch to change this behaviour. This build
attribute should capture the default behaviour of the compiler, which is to
expose the IEEE 754 version of __fp16.
When -mfp16-format is supported, that will be the way to control the value of
this build attribute.
Change-Id: I8a46641ff0fd2ef8ad0af5f482a6d1af2ac3f6b0
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224115 91177308-0d34-0410-b5e6-96231b3b80d8
Also remove redundant documentation:
- doxygen will copy documentation to overridden methods.
- Use \copydoc on PIMPL classes instead of replicating the text.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224089 91177308-0d34-0410-b5e6-96231b3b80d8
Updating comments to reflect the current state of the world after my recent changes to ownership structure and generally better describe what a GCStrategy is and how it works.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224086 91177308-0d34-0410-b5e6-96231b3b80d8
Add an option to disable optimization to shrink truncated larger type
loads to smaller type loads. On SI this prevents using scalar load
instructions in some cases, since there are no scalar extloads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224084 91177308-0d34-0410-b5e6-96231b3b80d8
`MDString`s can have arbitrary characters in them. Prevent an assertion
that fired in `BitcodeWriter` because of sign extension by copying the
characters into the record as `unsigned char`s.
Based on a patch by Keno Fischer; fixes PR21882.
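The underlying C++ pitfall, as a self-contained illustration (the record is modeled as a plain vector of uint64_t; this is not the BitcodeWriter code itself):

#include <cassert>
#include <cstdint>
#include <vector>

int main() {
  signed char C = -1; // e.g. the byte 0xFF inside an MDString
  std::vector<uint64_t> Record;
  Record.push_back(C);                             // sign-extends to 0xffffffffffffffff
  Record.push_back(static_cast<unsigned char>(C)); // keeps the raw byte value 0xff
  assert(Record[0] == ~uint64_t(0));
  assert(Record[1] == 0xff);
  return 0;
}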
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224077 91177308-0d34-0410-b5e6-96231b3b80d8
This reflects the typelessness of `Metadata` in the bitcode format,
removing types from all metadata operands.
`METADATA_VALUE` represents a `ValueAsMetadata`, and always has two
fields: the type and the value.
`METADATA_NODE` represents an `MDNode`, and unlike `METADATA_OLD_NODE`,
doesn't store types. It stores operands at their ID+1 so that `0` can
reference `nullptr` operands.
Part of PR21532.
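The ID+1 convention in miniature (the encode/decode helpers are illustrative, not the actual reader/writer code):

#include <cassert>
#include <cstdint>

// A null operand is written as 0; a real operand with metadata ID N is written as N + 1.
uint64_t encodeOperand(bool IsNull, uint64_t ID) { return IsNull ? 0 : ID + 1; }

// Returns false for a null operand, otherwise recovers the ID.
bool decodeOperand(uint64_t Encoded, uint64_t &ID) {
  if (Encoded == 0)
    return false;
  ID = Encoded - 1;
  return true;
}

int main() {
  uint64_t ID = 0;
  assert(!decodeOperand(encodeOperand(true, 0), ID));
  assert(decodeOperand(encodeOperand(false, 7), ID) && ID == 7);
  return 0;
}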
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224073 91177308-0d34-0410-b5e6-96231b3b80d8
This gives us better leak detection messages, like `Value` has.
This also has the side effect of papering over a problem where
`MachineInstr`s are added as garbage to the leak detector and then
deleted without being removed. If `MDNode::getTemporary()` allocates an
`MDNodeFwdDecl` in the same spot, the leak detector asserts. By
separating `MDNode`s into their own container we lose that assertion.
Since `MachineInstr` is required to have a trivial destructor, its usage
of `LeakDetector` at all is pretty suspect. I'll be sending a patch
soon to strip that out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224060 91177308-0d34-0410-b5e6-96231b3b80d8
Previously print+verify passes were added in a very unsystematic way, which is
annoying when debugging as you miss intermediate steps and allows bugs to stay
unnoticed when no verification is performed.
To make this change practical I added the possibility to explicitly disable
verification. I used this option in all places where no verification was
performed previously (because a lot of places actually don't pass the
MachineVerifier).
In the long term these problems should be fixed properly and verification
enabled after each pass. I'll enable some more verification in subsequent
commits.
This is the 2nd attempt at this after realizing that PassManager::add() may
actually delete the pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224059 91177308-0d34-0410-b5e6-96231b3b80d8
Rather than requiring overloads in the wrapper and the impl, just
overload the impl and use templates in the wrapper. This makes it less
error prone to add more overloads (`void *` defeats any chance the
compiler has at noticing bugs, so the easier the better).
At the same time, correct the comment that was lying about not changing
functionality for `Value`.
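The wrapper/impl pattern being described, as a self-contained sketch (the names are made up; this is not the LeakDetector code):

#include <cstdio>

namespace impl {
// Only the implementation needs per-type overloads.
void addGarbageObjectImpl(const int *)    { std::puts("tracking an int"); }
void addGarbageObjectImpl(const double *) { std::puts("tracking a double"); }
} // namespace impl

// The public wrapper is a single template: the static type survives all the way
// to the impl overload set, so a bad argument fails to compile instead of
// silently decaying to void*.
template <typename T> void addGarbageObject(const T *Ptr) {
  impl::addGarbageObjectImpl(Ptr);
}

int main() {
  int I = 0;
  double D = 0;
  addGarbageObject(&I);
  addGarbageObject(&D);
  return 0;
}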
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224058 91177308-0d34-0410-b5e6-96231b3b80d8
Previously print+verify passes were added in a very unsystematic way, which is
annoying when debugging as you miss intermediate steps and allows bugs to stay
unnoticed when no verification is performed.
To make this change practical I added the possibility to explicitly disable
verification. I used this option in all places where no verification was
performed previously (because a lot of places actually don't pass the
MachineVerifier).
In the long term these problems should be fixed properly and verification
enabled after each pass. I'll enable some more verification in subsequent
commits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224042 91177308-0d34-0410-b5e6-96231b3b80d8
This change moves the ownership and access of GCFunctionInfo (the object which describes the safepoints associated with a function under GCRoot) to GCModuleInfo. Previously, this was owned by GCStrategy which was in turn owned by GCModuleInfo. This made GCStrategy module specific, which is 'surprising' given its name and other purposes.
There's a few more changes needed, but we're getting towards the point we can reuse GCStrategy for gc.statepoint as well.
p.s. The style of this code ends up being a mess. I was trying to move code around without otherwise changing much. Once I get the ownership structure rearranged, I will go through and fixup spacing, naming, comments etc.
Differential Revision: http://reviews.llvm.org/D6587
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223994 91177308-0d34-0410-b5e6-96231b3b80d8
These methods are only used by MCJIT and are very specific to it. In fact, they
are also fairly specific to the fact that we have a dynamic linker of
relocatable objects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223964 91177308-0d34-0410-b5e6-96231b3b80d8
Don't call `dropAllReferences()` from `MDNode::~MDNode()`, call it
directly from `~MDNodeFwdDecl()` and `~GenericMDNode()`.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223904 91177308-0d34-0410-b5e6-96231b3b80d8
This works like the composeSubRegisterIndices() function but transforms
a subregister lane mask instead of a subregister index.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223874 91177308-0d34-0410-b5e6-96231b3b80d8
Let tablegen compute the combination of subregister lanemasks for all
subregisters in a register/register class. This is preparation for further
work on subregister allocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223873 91177308-0d34-0410-b5e6-96231b3b80d8
Nothing particularly interesting here, just documenting the way the code currently works before I start changing it...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223866 91177308-0d34-0410-b5e6-96231b3b80d8
In the current implementation, GCStrategy is a part of the ownership structure for the gc metadata which describes a Module. It also contains a reference to the module in question. As a result, GCStrategy instances are essentially Module specific.
I plan to transition away from this design. Instead, a GCStrategy will be owned by the LLVMContext. It will be a lightweight policy object which contains no information about the Modules or Functions involved, but can be easily reached given a Function.
The first step in this transition is to remove the direct Module reference from GCStrategy. This also requires removing the single user of this reference, the GCMetadataPrinter hierarchy. In theory, this will allow the lifetime of the printers to be scoped to the LLVMContext as well, but in practice, I'm not actually changing that. (Yet?)
An alternate design would have been to move the direct Module reference into the GCMetadataPrinter and change the keying of the owning maps to explicitly key off both GCStrategy and Module. I'm open to doing it that way instead, but didn't see much value in preserving the per Module association for GCMetadataPrinters.
The next change in this sequence will be to start unwinding the intertwined ownership between GCStrategy, GCModuleInfo, and GCFunctionInfo.
Differential Revision: http://reviews.llvm.org/D6566
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223859 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM_EXPLICIT is only supported by recent versions of MSVC, and it seems
the not-so-recent versions get confused about the operator bool() when
trying to resolve operator== calls.
This removes the operator bool()s since they don't seem to be used
anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223824 91177308-0d34-0410-b5e6-96231b3b80d8
It is a static method of IRObjectFile, so having to use
IRObjectFile::createIRObjectFile was redundant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223822 91177308-0d34-0410-b5e6-96231b3b80d8
Tidy up the code a little by using 'auto' when the type is obvious, doxify the
comments, and clang-format the file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223807 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This is desirable for WebKit and other clients of the llvm-shlib because C++ exit time destructors have a tendency to crash when invoked from multi-threaded applications.
Ideally this option will be temporary, because the ideal fix is to just not have exit time destructors.
Reviewers: chapuni, ributzka
Reviewed By: ributzka
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6572
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223805 91177308-0d34-0410-b5e6-96231b3b80d8
Split `Metadata` away from the `Value` class hierarchy, as part of
PR21532. Assembly and bitcode changes are in the wings, but this is the
bulk of the change for the IR C++ API.
I have a follow-up patch prepared for `clang`. If this breaks other
sub-projects, I apologize in advance :(. Help me compile it on Darwin
I'll try to fix it. FWIW, the errors should be easy to fix, so it may
be simpler to just fix it yourself.
This breaks the build for all metadata-related code that's out-of-tree.
Rest assured the transition is mechanical and the compiler should catch
almost all of the problems.
Here's a quick guide for updating your code:
- `Metadata` is the root of a class hierarchy with three main classes:
`MDNode`, `MDString`, and `ValueAsMetadata`. It is distinct from
the `Value` class hierarchy. It is typeless -- i.e., instances do
*not* have a `Type`.
- `MDNode`'s operands are all `Metadata *` (instead of `Value *`).
- `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be
replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively.
If you're referring solely to resolved `MDNode`s -- post graph
construction -- just use `MDNode*`.
- `MDNode` (and the rest of `Metadata`) have only limited support for
`replaceAllUsesWith()`.
As long as an `MDNode` is pointing at a forward declaration -- the
result of `MDNode::getTemporary()` -- it maintains a side map of its
uses and can RAUW itself. Once the forward declarations are fully
resolved RAUW support is dropped on the ground. This means that
uniquing collisions on changing operands cause nodes to become
"distinct". (This already happened fairly commonly, whenever an
operand went to null.)
If you're constructing complex (non self-reference) `MDNode` cycles,
you need to call `MDNode::resolveCycles()` on each node (or on a
top-level node that somehow references all of the nodes). Also,
don't do that. Metadata cycles (and the RAUW machinery needed to
construct them) are expensive.
- An `MDNode` can only refer to a `Constant` through a bridge called
`ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`).
As a side effect, accessing an operand of an `MDNode` that is known
to be, e.g., `ConstantInt`, takes three steps: first, cast from
`Metadata` to `ConstantAsMetadata`; second, extract the `Constant`;
third, cast down to `ConstantInt`.
The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have
metadata schema owners transition away from using `Constant`s when
the type isn't important (and they don't care about referring to
`GlobalValue`s).
In the meantime, I've added transitional API to the `mdconst`
namespace that matches semantics with the old code, in order to
avoid adding the error-prone three-step equivalent to every call
site. If your old code was:
MDNode *N = foo();
bar(isa <ConstantInt>(N->getOperand(0)));
baz(cast <ConstantInt>(N->getOperand(1)));
bak(cast_or_null <ConstantInt>(N->getOperand(2)));
bat(dyn_cast <ConstantInt>(N->getOperand(3)));
bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));
you can trivially match its semantics with:
MDNode *N = foo();
bar(mdconst::hasa <ConstantInt>(N->getOperand(0)));
baz(mdconst::extract <ConstantInt>(N->getOperand(1)));
bak(mdconst::extract_or_null <ConstantInt>(N->getOperand(2)));
bat(mdconst::dyn_extract <ConstantInt>(N->getOperand(3)));
bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));
and when you transition your metadata schema to `MDInt`:
MDNode *N = foo();
bar(isa <MDInt>(N->getOperand(0)));
baz(cast <MDInt>(N->getOperand(1)));
bak(cast_or_null <MDInt>(N->getOperand(2)));
bat(dyn_cast <MDInt>(N->getOperand(3)));
bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));
- A `CallInst` -- specifically, intrinsic instructions -- can refer to
metadata through a bridge called `MetadataAsValue`. This is a
subclass of `Value` where `getType()->isMetadataTy()`.
`MetadataAsValue` is the *only* class that can legally refer to a
`LocalAsMetadata`, which is a bridged form of non-`Constant` values
like `Argument` and `Instruction`. It can also refer to any other
`Metadata` subclass.
(I'll break all your testcases in a follow-up commit, when I propagate
this change to assembly.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223802 91177308-0d34-0410-b5e6-96231b3b80d8
Rewrite the pattern match code to also work with Values instead of only with
Instructions. Also remove the no longer needed matcher (m_Instruction).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223797 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, walk the obj symbol list in parallel to find the GV. This shouldn't
change anything on ELF where global symbols are not mangled, but it is a step
toward supporting other object formats.
Gold itself is ELF only, but bfd ld supports COFF and the logic in the gold
plugin could be reused on lld.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223780 91177308-0d34-0410-b5e6-96231b3b80d8
Introduce the ``llvm.instrprof_increment`` intrinsic and the
``-instrprof`` pass. These provide the infrastructure for writing
counters for profiling, as in clang's ``-fprofile-instr-generate``.
The implementation of the instrprof pass is ported directly out of the
CodeGenPGO classes in clang, and with the followup in clang that rips
that code out to use these new intrinsics this ends up being NFC.
Doing the instrumentation this way opens some doors in terms of
improving the counter performance. For example, this will make it
simple to experiment with alternate lowering strategies, and allows us
to try handling profiling specially in some optimizations if we want
to.
Finally, this drastically simplifies the frontend and puts all of the
lowering logic in one place.
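Conceptually (and only conceptually; the counter name and layout below are made up, not what the pass emits), the lowered form amounts to bumping slots in a per-function counter array:

#include <cstdint>
#include <cstdio>

static uint64_t CountersForMain[2]; // one slot per instrumented region (illustrative)

int main(int argc, char **) {
  ++CountersForMain[0];   // counter for the function entry region
  if (argc > 1)
    ++CountersForMain[1]; // counter for the conditionally executed region
  std::printf("entry=%llu branch=%llu\n",
              (unsigned long long)CountersForMain[0],
              (unsigned long long)CountersForMain[1]);
  return 0;
}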
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223672 91177308-0d34-0410-b5e6-96231b3b80d8
DenseSet used to be implemented as DenseMap<Key, char>, which usually doubled
the memory footprint of the map. Now we use a compressed set so the second
element uses no memory at all. This required some surgery on DenseMap as
all accesses to the bucket now have to go through methods; this should
have no impact on the behavior of DenseMap though. The new default bucket
type for DenseMap is a slightly extended std::pair as we expose it through
DenseMap's iterator and don't want to break any existing users.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223588 91177308-0d34-0410-b5e6-96231b3b80d8
Required some APInt massaging to get proper empty/tombstone values. Apart
from making the code a bit simpler this also reduces the bucket size of
the ConstantInt map from 32 to 24 bytes.
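For context, "proper empty/tombstone values" refers to the DenseMapInfo traits every key type must provide; a hedged sketch for a toy key (not the APInt specialization itself, assuming the usual llvm/ADT/DenseMap.h interface):

#include <climits>
#include "llvm/ADT/DenseMap.h"

struct ToyKey { int V; };

namespace llvm {
template <> struct DenseMapInfo<ToyKey> {
  static ToyKey getEmptyKey()     { return {INT_MIN};     } // reserved, never a real key
  static ToyKey getTombstoneKey() { return {INT_MIN + 1}; } // marks erased buckets
  static unsigned getHashValue(const ToyKey &K) { return unsigned(K.V) * 37u; }
  static bool isEqual(const ToyKey &A, const ToyKey &B) { return A.V == B.V; }
};
} // namespace llvm

With that in place a DenseMap<ToyKey, int> works as usual; the APInt change is about picking sentinels that cannot collide with real constants.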
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223478 91177308-0d34-0410-b5e6-96231b3b80d8
It relies on undefined behaviour, since `StringMapEntry<>` is not
a standard layout type. There are no users anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223439 91177308-0d34-0410-b5e6-96231b3b80d8
This matches std::vector and should avoid unnecessary masking to 32 bits
when calling them on a 64-bit system.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223365 91177308-0d34-0410-b5e6-96231b3b80d8
I'm recommiting the codegen part of the patch.
The vectorizer part will be send to review again.
Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223348 91177308-0d34-0410-b5e6-96231b3b80d8
Use the MCAsmInfo instead of the DataLayout, and allow
specifying a custom prefix for labels specifically. HSAIL
requires that labels begin with @, but global symbols with &.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223323 91177308-0d34-0410-b5e6-96231b3b80d8
The non-opaque part can be structurally uniqued. To keep this to just
a hash lookup, we don't try to unique cyclic types.
Also change the type mapping algorithm to be optimistic about a type
not being recursive and only create a new type when proven to be wrong.
This is not as strong as trying to speculate that we can keep the source
type, but is simpler (no speculation to revert) and more powerful
than what we had before (we don't copy non-recursive types at least).
I initially wrote this to try to replace the name based type merging.
It is not strong enough to replace it, but it is a useful addition.
With this patch the number of named struct types in a clang LTO bootstrap goes
from 49674 to 15986.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223278 91177308-0d34-0410-b5e6-96231b3b80d8
When lazy reading a module, the types used in a function will not be visible to
a TypeFinder until the body is read.
This patch fixes that by asking the module for its identified struct types.
If a materializer is present, the module asks it. If not, it uses a TypeFinder.
This fixes pr21374.
I will be the first to say that this is ugly, but it was the best I could find.
Some of the options I looked at:
* Asking the LLVMContext. This could be made to work for gold, but not currently
for ld64. ld64 will load multiple modules into a single context before merging
them. This causes us to see types from future merges. Unfortunately,
MappedTypes is not just a cache when it comes to opaque types. Once the
mapping has been made, we have to remember it for as long as the key may
be used. This would mean moving MappedTypes to the Linker class and having
to drop the Linker::LinkModules static methods, which are visible from C.
* Adding an option to ignore function bodies in the TypeFinder. This would
fix the PR by picking the worst result. It would work, but unfortunately
we are currently quite dependent on the upfront type merging. I will
try to reduce our dependency, but it is not clear that we will be able
to get rid of it for now.
The only clean solution I could think of is making the Module own the types.
This would have other advantages, but it is a much bigger change. I will
propose it, but it is nice to have this fixed while that is discussed.
With the gold plugin, this patch takes the number of types in the LTO clang
binary from 52817 to 49669.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223215 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Ben Gamari!
This redefines the `prefix` attribute introduced previously and
introduces a `prologue` attribute. There are three primary use cases
that these attributes aim to serve:
1. Function prologue sigils
2. Function hot-patching: Enable the user to insert `nop` operations
at the beginning of the function which can later be safely replaced
with a call to some instrumentation facility
3. Runtime metadata: Allow a compiler to insert data for use by the
runtime during execution. GHC is one example of a compiler that
needs this functionality for its tables-next-to-code functionality.
Previously `prefix` served cases (1) and (2) quite well by allowing the user
to introduce arbitrary data at the entrypoint but before the function
body. Case (3), however, was poorly handled by this approach as it
required that prefix data was valid executable code.
Here we redefine the notion of prefix data to instead be data which
occurs immediately before the function entrypoint (i.e. the symbol
address). Since prefix data now occurs before the function entrypoint,
there is no need for the data to be valid code.
The previous notion of prefix data now goes under the name "prologue
data" to emphasize its duality with the function epilogue.
The intention here is to handle cases (1) and (2) with prologue data and
case (3) with prefix data.
References
----------
This idea arose out of discussions[1] with Reid Kleckner in response to a
proposal to introduce the notion of symbol offsets to enable handling of
case (3).
[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-May/073235.html
Test Plan: testsuite
Differential Revision: http://reviews.llvm.org/D6454
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223189 91177308-0d34-0410-b5e6-96231b3b80d8
This makes it easier to debug Twine as the 'Kind' fields now show their enum values in lldb and not escaped characters.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223178 91177308-0d34-0410-b5e6-96231b3b80d8
Previously the .cpu directive in the ARM assembler didn't switch to the new CPU
and therefore acted as a nop. This implements a real action for .cpu and e.g.
allows assembling the FreeBSD kernel with -integrated-as.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223147 91177308-0d34-0410-b5e6-96231b3b80d8
Presumably it was added to the CMake system when MAXPATHLEN was still
used by code built for Windows. Currently only lib/Support/Path.inc uses
MAXPATHLEN, and it should be available on all Unices.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223139 91177308-0d34-0410-b5e6-96231b3b80d8
This is the third patch in a small series. It contains the CodeGen support for lowering the gc.statepoint intrinsic sequences (223078) to the STATEPOINT pseudo machine instruction (223085). The change also includes the set of helper routines and classes for working with gc.statepoints, gc.relocates, and gc.results since the lowering code uses them.
With this change, gc.statepoints should be functionally complete. The documentation will follow in the fourth change, and there will likely be some cleanup changes, but interested parties can start experimenting now.
I'm not particularly happy with the amount of code or complexity involved with the lowering step, but at least it's fairly well isolated. The statepoint lowering code is split into its own files and anyone not working on the statepoint support itself should be able to ignore it.
During the lowering process, we currently spill aggressively to stack. This is not entirely ideal (and we have plans to do better), but it's functional, relatively straightforward, and matches closely the implementations of the patchpoint intrinsics. Most of the complexity comes from trying to keep relocated copies of values in the same stack slots across statepoints. Doing so avoids the insertion of pointless load and store instructions to reshuffle the stack. The current implementation isn't as effective as I'd like, but it is functional and 'good enough' for many common use cases.
In the long term, I'd like to figure out how to integrate the statepoint lowering with the register allocator. In principal, we shouldn't need to eagerly spill at all. The register allocator should do any spilling required and the statepoint should simply record that fact. Depending on how challenging that turns out to be, we may invest in a smarter global stack slot assignment mechanism as a stop gap measure.
Reviewed by: atrick, ributzka
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223137 91177308-0d34-0410-b5e6-96231b3b80d8
This operating system type represents the AMD HSA runtime,
and will be required by the R600 backend in order to generate
correct code for this runtime.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223124 91177308-0d34-0410-b5e6-96231b3b80d8
The default ARM floating-point mode does not support IEEE 754 mode exactly. Of
relevance to this patch is that input denormals are flushed to zero. The way in
which they're flushed to zero depends on the architecture,
* For VFPv2, it is implementation defined as to whether the sign of zero is
preserved.
* For VFPv3 and above, the sign of zero is always preserved when a denormal
is flushed to zero.
When FP support has been disabled, the strategy taken by this patch is to
assume the software support will mirror the behaviour of the hardware support
for the target *if it existed*. That is, for architectures which can only have
VFPv2, it is assumed the software will flush to positive zero. For later
architectures it is assumed the software will flush to zero preserving sign.
Change-Id: Icc5928633ba222a4ba3ca8c0df44a440445865fd
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223110 91177308-0d34-0410-b5e6-96231b3b80d8
This is the second patch in a small series. This patch contains the MachineInstruction and x86-64 backend pieces required to lower Statepoints. It does not include the code to actually generate the STATEPOINT machine instruction and as a result, the entire patch is currently dead code. I will be submitting the SelectionDAG parts within the next 24-48 hours. Since those pieces are by far the most complicated, I wanted to minimize the size of that patch. That patch will include the tests which exercise the functionality in this patch. The entire series can be seen as one combined whole in http://reviews.llvm.org/D5683.
The STATEPOINT pseudo node is generated after all gc values are explicitly spilled to stack slots. The purpose of this node is to wrap an actual call instruction while recording the spill locations of the meta arguments used for garbage collection and other purposes. The STATEPOINT is modeled as modifying all of those locations to prevent backend optimizations from forwarding the value from before the STATEPOINT to after the STATEPOINT. (Doing so would break relocation semantics for collectors which wish to relocate roots.)
The implementation of STATEPOINT is closely modeled on PATCHPOINT. Eventually, much of the code in this patch will be removed. The long term plan is to merge the functionality provided by statepoints and patchpoints. Merging their implementations in the backend is likely to be a good starting point.
Reviewed by: atrick, ributzka
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223085 91177308-0d34-0410-b5e6-96231b3b80d8
The statepoint intrinsics are intended to enable precise root tracking through the compiler so as to support garbage collectors of all types. The addition of the statepoint intrinsics to LLVM should have no impact on the compilation of any program which does not contain them. There are no side tables created, no extra metadata, and no inhibited optimizations.
A statepoint works by transforming a call site (or safepoint poll site) into an explicit relocation operation. It is the frontend's responsibility (or eventually the safepoint insertion pass we've developed, but that's not part of this patch series) to ensure that any live pointer to a GC object is correctly added to the statepoint and explicitly relocated. The relocated value is just a normal SSA value (as seen by the optimizer), so merges of relocated and unrelocated values are just normal phis. The explicit relocation operation, the fact the statepoint is assumed to clobber all memory, and the optimizer's standard semantics ensure that the relocations flow through IR optimizations correctly.
This is the first patch in a small series. This patch contains only the IR parts; the documentation and backend support will be following separately. The entire series can be seen as one combined whole in http://reviews.llvm.org/D5683.
Reviewed by: atrick, ributzka
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223078 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
".weak" symbols cannot be consumed by ptxas (PR21685). This patch makes the
weak directive in MCAsmPrinter customizable, and disables emitting ".weak"
symbols for NVPTX.
Test Plan: weak-linkage.ll
Reviewers: jholewinski
Reviewed By: jholewinski
Subscribers: majnemer, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D6455
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223077 91177308-0d34-0410-b5e6-96231b3b80d8
The explicit set of destination types is not fully redundant when lazy loading
since the TypeFinder will not find types used only in function bodies.
This keeps the logic to drop the name of mapped types since it still helps
with avoiding further renaming.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223043 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of keeping an explicit set, just drop the names of types we choose
to map to some other type.
This has the advantage that the name of the unused type will not cause the
context to rename types on module read.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222986 91177308-0d34-0410-b5e6-96231b3b80d8
If built with -Wunused-variable, clang objects to the declarations due to the
unused variable; drop the names. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222944 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r222632 (and follow-up r222636), which caused a host
of LNT failures on an internal bot. I'll respond to the commit on the
list with a reproduction of one of the failures.
Conflicts:
lib/Target/X86/X86TargetTransformInfo.cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222936 91177308-0d34-0410-b5e6-96231b3b80d8
The AAPCS treats small structs and homogeneous floating (or vector) aggregates
specially, and guarantees they either get passed as a contiguous block of
registers, or prevent any future use of those registers and get passed on the
stack.
This concept can fit quite neatly into LLVM's own type system, mapping an HFA
to [N x float] and so on, and small structs to [N x i64]. Doing so allows
front-ends to emit AAPCS compliant code without having to duplicate the
register counting logic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222903 91177308-0d34-0410-b5e6-96231b3b80d8
The current 8 bits is sufficient for ELF32 targets but ELF64 requires
32 bits. Add a test for AArch64 that exposes the issue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222898 91177308-0d34-0410-b5e6-96231b3b80d8
This mostly entails adding relocations, however there are a couple of
changes to existing relocations:
1. R_AARCH64_NONE is defined to be zero rather than 256
R_AARCH64_NONE has been defined to be zero for a long time elsewhere
e.g. binutils and glibc since the submission of the AArch64 port in
2012 so this is required for compatibility.
2. R_AARCH64_TLSDESC_ADR_PAGE renamed to R_AARCH64_TLSDESC_ADR_PAGE21
I don't think there is any way for relocation names to leak out of LLVM
so this should not break anything.
Tested with check-all with no regressions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222821 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, when loading an object file, RuntimeDyld (1) took ownership of the
ObjectFile instance (and associated MemoryBuffer), (2) potentially modified the
object in-place, and (3) returned an ObjectImage that managed ownership of the
now-modified object and provided some convenience methods. This scheme accreted
over several years as features were tacked on to RuntimeDyld, and was both
unintuitive and unsafe (See e.g. http://llvm.org/PR20722).
This patch fixes the issue by removing all ownership and in-place modification
of object files from RuntimeDyld. Existing behavior, including debugger
registration, is preserved.
Noteworthy changes include:
(1) ObjectFile instances are now passed to RuntimeDyld by const-ref.
(2) The ObjectImage and ObjectBuffer classes have been removed entirely, they
existed to model ownership within RuntimeDyld, and so are no longer needed.
(3) RuntimeDyld::loadObject now returns an instance of a new class,
RuntimeDyld::LoadedObjectInfo, which can be used to construct a modified
object suitable for registration with the debugger, following the existing
debugger registration scheme.
(4) The JITRegistrar class has been removed, and the GDBRegistrar class has been
re-written as a JITEventListener.
This should fix http://llvm.org/PR20722 .
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222810 91177308-0d34-0410-b5e6-96231b3b80d8
clearly only exactly equal width ptrtoint and inttoptr casts are no-op
casts; it says so right there in the langref. Make the code agree.
Original log from r220277:
Teach the load analysis to allow finding available values which require
inttoptr or ptrtoint cast provided there is datalayout available.
Eventually, the datalayout can just be required but in practice it will
always be there today.
To go with the ability to expose available values requiring a ptrtoint
or inttoptr cast, helpers are added to perform one of these three casts.
These smarts are necessary to finish canonicalizing loads and stores to
the operational type requirements without regressing fundamental
combines.
I've added some test cases. These should actually improve as the load
combining and store combining improves, but they may fundamentally be
highlighting some missing combines for select in addition to exercising
the specific added logic to load analysis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222739 91177308-0d34-0410-b5e6-96231b3b80d8
Fill in omission of `cast_or_null<>` and `dyn_cast_or_null<>` for types
that wrap pointers (e.g., smart pointers).
Type traits need to be slightly stricter than for `cast<>` and
`dyn_cast<>` to resolve ambiguities with simple types.
There didn't seem to be any unit tests for pointer wrappers, so I tested
`isa<>`, `cast<>`, and `dyn_cast<>` while I was in there.
This only supports pointer wrappers with a conversion to `bool` to check
for null. If in the future it's useful to support wrappers without such
a conversion, it should be a straightforward incremental step to use the
`simplify_type` machinery for the null check. In that case, the unit
tests should be updated to remove the `operator bool()` from the
`pointer_wrappers::PTy`.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222644 91177308-0d34-0410-b5e6-96231b3b80d8
Introduced new target-independent intrinsics in order to support masked vector
loads and stores. The loop vectorizer optimizes loops containing conditional
memory accesses by generating these intrinsics for targets that support them
(currently AVX2 and AVX-512). The vectorizer asks the target about the
availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222632 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes the self-host fail. Note that this commit activates dominator
analysis in the combiner by default (like the original commit did).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222590 91177308-0d34-0410-b5e6-96231b3b80d8
This should allow the list of relocations for a particular
architecture to be kept in a single header rather than duplicated
whenever we need to enumerate all the relocations.
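The pattern is roughly the following (names and values shown for two AArch64
relocations; in-tree the list lives in a .def header that is #included at each
expansion site, a wrapper macro is used here only to keep the sketch
self-contained):
// The single list: one ELF_RELOC(name, value) entry per relocation.
#define AARCH64_RELOCS(X) \
  X(R_AARCH64_NONE, 0) \
  X(R_AARCH64_ABS64, 0x101)
// One expansion site: defining the enum. Other sites (e.g. name printing)
// re-expand the same list with a different ELF_RELOC definition.
#define ELF_RELOC(name, value) name = value,
enum AArch64RelocationTypes { AARCH64_RELOCS(ELF_RELOC) };
#undef ELF_RELOC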
Patch by Will Newton.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222565 91177308-0d34-0410-b5e6-96231b3b80d8
The logic for detecting EOF was wrong and would fail if we ever requested
more than 16k past the last read position.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222505 91177308-0d34-0410-b5e6-96231b3b80d8
po_iterator_storage's insertEdge was updated to reflect the API
changes from many of our insert methods in r222334, however the
template specialization for external storage was not updated. This
updates the specialization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222446 91177308-0d34-0410-b5e6-96231b3b80d8
AliasSetTracker::addUnknown may create an AliasSet devoid of pointers
just to contain an instruction if no suitable AliasSet already exists.
It will then call AliasSet::addUnknownInst and we will be done.
However, it's possible for addUnknown to choose an existing AliasSet to
call addUnknownInst on.
If this were to occur, we are in a bit of a pickle: removing pointers
from the AliasSet can cause the entire AliasSet to become destroyed,
taking our unknown instructions out with it.
Instead, keep track of whether or not our AliasSet has any unknown
instructions.
This fixes PR21582.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222338 91177308-0d34-0410-b5e6-96231b3b80d8
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.
This led to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...
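For example, under the new convention (a small sketch, not code from this
patch):
#include "llvm/ADT/SmallPtrSet.h"
// Returns true the first time a pointer is seen; .second reports whether the
// insertion actually took place, matching std::set::insert.
static bool markVisited(llvm::SmallPtrSet<const void *, 8> &Visited,
                        const void *P) {
  return Visited.insert(P).second;
}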
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222334 91177308-0d34-0410-b5e6-96231b3b80d8
If LowerGEP is enabled, it can lower a GEP with multiple indices into GEPs with a single index
or arithmetic operations. Lowering GEPs can always extract structure indices. Lowering GEPs can
also give us more optimization opportunities. It can benefit passes like CSE, LICM and CGP.
Reviewed in http://reviews.llvm.org/D5864
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222328 91177308-0d34-0410-b5e6-96231b3b80d8
Having two ways to do this doesn't seem terribly helpful and
consistently using the insert version (which we already have) seems like
it'll make the code easier to understand for anyone working with standard
data structures. (I also updated many references to the Entry's
key and value to use first() and second instead of getKey{Data,Length,}
and get/setValue, for similar consistency.)
Also removes the GetOrCreateValue functions so there's less surface area
to StringMap to fix/improve/change/accommodate move semantics, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222319 91177308-0d34-0410-b5e6-96231b3b80d8
StringSet is still a bit dodgy in that it exposes the raw iterator of
the StringMap parent, which exposes the weird detail that StringSet
actually has a 'value'... but anyway, this is useful for a handful of
clients that want to reference the newly inserted/persistent string data
in the StringSet/Map/Entry/thing.
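A small sketch of that use case (assuming the usual StringSet<> from
llvm/ADT):
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/StringSet.h"
// Returns a StringRef backed by the set-owned copy of S, which stays valid
// for the lifetime of the pool.
static llvm::StringRef internString(llvm::StringSet<> &Pool,
                                    llvm::StringRef S) {
  return Pool.insert(S).first->getKey();
}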
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222302 91177308-0d34-0410-b5e6-96231b3b80d8
The other option would be to do something like
if (that.isSingleWord())
  VAL = that.VAL;
else
  pVal = that.pVal;
This bug was causing X86TTI::getIntImmCost to be miscompiled in an LTO
bootstrap in stage2, causing the build of stage3 to fail.
LLVM is getting quite good at exploiting this. Not sure if there is anything
a sanitizer could do to help.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222294 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
move the code from BreakCriticalEdges::runOnFunction()
into a separate utility function llvm::SplitAllCriticalEdges()
so that it can be used independently.
No functionality change intended.
Test Plan: check-llvm
Reviewers: nlewycky
Reviewed By: nlewycky
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6313
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222288 91177308-0d34-0410-b5e6-96231b3b80d8
Having the operands at the back prevents subclasses from safely adding
fields. Move them to the front.
Instead of replicating the custom `malloc()`, `free()` and `DestroyFlag`
logic that was there before, overload `new` and `delete`.
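The general trick looks something like this (a generic sketch, not the
GenericMDNode code itself): operator new reserves the operand slots in front
of the object, and operator delete undoes the offset.
#include <cstddef>
#include <cstdlib>
class PrefixOperandNode {
  unsigned NumOperands;
public:
  explicit PrefixOperandNode(unsigned N) : NumOperands(N) {}
  // Allocate the operand slots *before* the object, so subclasses are free to
  // append their own fields after it.
  void *operator new(std::size_t Size, unsigned NumOps) {
    void *Mem = std::malloc(NumOps * sizeof(void *) + Size);
    return static_cast<char *>(Mem) + NumOps * sizeof(void *);
  }
  // NumOperands is a trivial field the (trivial) destructor leaves untouched,
  // so this sketch reads it here to undo the offset before freeing.
  void operator delete(void *Ptr) {
    unsigned NumOps = static_cast<PrefixOperandNode *>(Ptr)->NumOperands;
    std::free(static_cast<char *>(Ptr) - NumOps * sizeof(void *));
  }
  void **op_begin() { return reinterpret_cast<void **>(this) - NumOperands; }
  void **op_end() { return reinterpret_cast<void **>(this); }
};
// Usage: auto *N = new (NumOps) PrefixOperandNode(NumOps); ... delete N;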
I added calls to a new `GenericMDNode::dropAllReferences()` in
`LLVMContextImpl::~LLVMContextImpl()`. There's a maze of callbacks
happening during teardown, and this resolves them before we enter
the destructors.
Part of PR21532.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222211 91177308-0d34-0410-b5e6-96231b3b80d8
Split `MDNode` into two classes:
- `GenericMDNode`, which is uniquable (and for now, always starts
uniqued). Once `Metadata` is split from the `Value` hierarchy, this
class will lose the ability to RAUW itself.
- `MDNodeFwdDecl`, which is used for the "temporary" interface, is
never uniqued, and isn't managed by `LLVMContext` at all.
I've left most of the guts in `MDNode` for now, but I'll incrementally
move things to the right places (or delete the functionality, as
appropriate).
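In rough outline (a sketch of the shape, not the real declarations):
class MDNode { /* most of the existing guts stay here for now */ };
// Uniquable nodes; will lose the ability to RAUW themselves once Metadata is
// split from the Value hierarchy.
class GenericMDNode : public MDNode {};
// The "temporary" interface: never uniqued, not managed by LLVMContext.
class MDNodeFwdDecl : public MDNode {};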
Part of PR21532.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222205 91177308-0d34-0410-b5e6-96231b3b80d8
Use DIScopeRef.
A paired commit in clang will follow to show cases where we will use an
identifier for the context of a global variable.
rdar://18958417
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222195 91177308-0d34-0410-b5e6-96231b3b80d8
Change uniquing from a `FoldingSet` to a `DenseSet` with custom
`DenseMapInfo`. Unfortunately, this doesn't save any memory, since
`DenseSet<T>` is a simple wrapper for `DenseMap<T, char>`, but I'll come
back to fix that later.
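For reference, the custom info has the following general shape (illustrative
key type and names; the real one hashes the node's operands):
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/Hashing.h"
struct NodeKey {
  unsigned A;
  unsigned B;
  bool operator==(const NodeKey &RHS) const {
    return A == RHS.A && B == RHS.B;
  }
};
struct NodeKeyInfo {
  // Two reserved keys that are assumed never to collide with real data.
  static NodeKey getEmptyKey() { return NodeKey{~0u, 0}; }
  static NodeKey getTombstoneKey() { return NodeKey{~0u, 1}; }
  static unsigned getHashValue(const NodeKey &K) {
    return static_cast<unsigned>(llvm::hash_combine(K.A, K.B));
  }
  static bool isEqual(const NodeKey &LHS, const NodeKey &RHS) {
    return LHS == RHS;
  }
};
// The uniquing set that replaces the FoldingSet.
using NodeSet = llvm::DenseSet<NodeKey, NodeKeyInfo>;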
I gave the name `GenericDenseMapInfo` to the custom `DenseMapInfo` since
I'll be splitting `MDNode` into two classes soon: `MDNodeFwdDecl` for
temporaries, and `GenericMDNode` for everything else.
I also added a non-debug-info reduced version of a type-uniquing test
that started failing on an earlier draft of this patch.
Part of PR21532.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222191 91177308-0d34-0410-b5e6-96231b3b80d8
They were producing the wrong result if NumBits == BitsInWord. The old mask
produced -1, the new mask 0.
This should fix the 32 bit bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222166 91177308-0d34-0410-b5e6-96231b3b80d8
The specializations were broken. For example,
void foo(const CallGraph *G) {
auto I = GraphTraits<const CallGraph *>::nodes_begin(G);
auto K = I++;
...
}
or
void bar(const CallGraphNode *N) {
auto I = GraphTraits<const CallGraphNode *>::nodes_begin(N);
auto K = I++;
....
}
would not compile.
Patch by Speziale Ettore!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222149 91177308-0d34-0410-b5e6-96231b3b80d8
This was motivated by a bug which caused code like this to be
miscompiled:
declare void @take_ptr(i8*)
define void @test() {
  %addr1 = alloca i8
  %addr2 = alloca i32, i32 1028
  call void @take_ptr(i8* %addr1)
  ret void
}
This was emitting the following assembly to get the value of %addr1:
add r0, sp, #1020
add r0, r0, #8
However, "add r0, r0, #8" is not a valid Thumb1 instruction, and this
could not be assembled. The generated object file contained this,
resulting in r0 holding SP+8 rather than SP+1028:
add r0, sp, #1020
add r0, sp, #8
This function looked like it could have caused miscompilations for
other combinations of registers and offsets (though I don't think it is
currently called with these), and the heuristic it used did not match
the emitted code in all cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222125 91177308-0d34-0410-b5e6-96231b3b80d8
We were a little lax in a few areas:
- We pretended that import libraries were like any old COFF file; they
are not. In fact, they aren't really COFF files at all, and we should
probably grow some specialized functionality to handle them smarter.
- Our symbol iterators were more than happy to attempt to go past the
end of the symbol table if you had a symbol with a bad list of
auxiliary symbols.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222124 91177308-0d34-0410-b5e6-96231b3b80d8
Indices into the table are stored in each MCRegisterClass instead of a pointer. A new method, getRegClassName, is added to MCRegisterInfo and TargetRegisterInfo to lookup the string in the table.
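Usage is then just an index lookup behind a small accessor, e.g. (sketch;
header path as in current trees):
#include "llvm/CodeGen/TargetRegisterInfo.h"
// Returns the human-readable name of a register class via the shared table.
static const char *regClassNameOf(const llvm::TargetRegisterInfo &TRI,
                                  const llvm::TargetRegisterClass *RC) {
  return TRI.getRegClassName(RC);
}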
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222118 91177308-0d34-0410-b5e6-96231b3b80d8
This adds back r222061, but now calls initializePAEvalPass from the correct
library to avoid link problems.
Original message:
Don't make assumptions about the name of private global variables.
Private variables can be renamed, so it is not reliable to make
decisions on the name.
The name is also dropped by the assembler before getting to the
linker, so using the name causes a disconnect between how llvm makes a
decision (var name) and how the linker makes a decision (section it is
in).
This patch changes one case where we were looking at the variable name to use
the section instead.
Test tuning by Michael Gottesman.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222117 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Several places in DependenceAnalysis assume both SCEVs in a subscript pair
share the same integer type. For instance, isKnownPredicate calls
SE->getMinusSCEV(X, Y) which asserts X and Y share the same type. However,
DependenceAnalysis fails to ensure this assumption when producing a subscript
pair, causing tests such as NonCanonicalizedSubscript to crash. With this
patch, DependenceAnalysis runs unifySubscriptType before producing any
subscript pair, ensuring the assumption.
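A hedged sketch of the kind of unification this implies (the helper is
hypothetical; the real unifySubscriptType operates on whole subscript pairs):
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/IR/Type.h"
// Sign-extend X to WideTy if needed, so both sides of a subscript pair share
// one integer type before calling things like getMinusSCEV.
static const llvm::SCEV *widenTo(llvm::ScalarEvolution &SE,
                                 const llvm::SCEV *X, llvm::Type *WideTy) {
  if (X->getType() == WideTy)
    return X;
  return SE.getSignExtendExpr(X, WideTy);
}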
Test Plan:
Added NonCanonicalizedSubscript.ll, on which DependenceAnalysis crashed before
the fix because the subscripts have different types.
Reviewers: spop, sebpop, jingyue
Reviewed By: jingyue
Subscribers: eliben, meheff, llvm-commits
Differential Revision: http://reviews.llvm.org/D6289
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222100 91177308-0d34-0410-b5e6-96231b3b80d8
Make explicit the requirement that most IR values in `DIBuilder` are
`Constant`. This requires a follow-up change in clang.
Part of PR21532.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222070 91177308-0d34-0410-b5e6-96231b3b80d8
Now that `MDString` and `MDNode` have a common base class, use it. Note
that it's not useful to assume subclasses of `Metadata` must be one or
the other since we'll be adding more subclasses soon enough.
Part of PR21532.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222064 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The current "WinEH" exception handling type is more about Itanium-style
LSDA tables layered on top of the Windows native unwind info format
instead of .eh_frame tables or EHABI unwind info. Use the name
"ItaniumWinEH" to better reflect the hybrid nature of the design.
Also rename isExceptionHandlingDWARF to usesItaniumLSDAForExceptions,
since the LSDA is part of the Itanium C++ ABI document, and not the
DWARF standard.
Reviewers: echristo
Subscribers: llvm-commits, compnerd
Differential Revision: http://reviews.llvm.org/D6279
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222062 91177308-0d34-0410-b5e6-96231b3b80d8
Private variables can be renamed, so it is not reliable to make
decisions on the name.
The name is also dropped by the assembler before getting to the
linker, so using the name causes a disconnect between how llvm makes a
decision (var name) and how the linker makes a decision (section it is
in).
This patch changes one case where we were looking at the variable name to use
the section instead.
Test tuning by Michael Gottesman.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222061 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r221842 which was a revert of r221836 and of the
test parts of r221837.
This new version fixes a UB bug pointed out by David (along with
addressing some other review comments), makes some dumping more
resilient to broken input data and forces the accelerator tables
to be dumped in the tests where we use them (this decision is
platform specific otherwise).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@222003 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds builtin support for xvdivdp and xvdivsp, along with a
test case. Straightforward stuff.
There's a companion patch for Clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221983 91177308-0d34-0410-b5e6-96231b3b80d8
In support of serializing executables, obj2yaml now records the virtual address
and size of sections. It also serializes whatever we strictly need from
the PE header; it expects to reconstitute everything else via
inference.
yaml2obj can reconstitute a fully linked executable.
In order to get executables correctly serialized/deserialized, other
bugs were fixed along the way. We now properly respect file and
section alignments. We also avoid writing out string tables unless they
are strictly necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221975 91177308-0d34-0410-b5e6-96231b3b80d8
This matches std::vector and is more efficient as it avoids
truncations.
With this the text segment of opt goes from 19705442 bytes
to 19703930 bytes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221973 91177308-0d34-0410-b5e6-96231b3b80d8
This teaches CoverageMapping::getCoveredFunctions to filter to a
particular file and uses that to replace most of the logic found in
llvm-cov report.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221962 91177308-0d34-0410-b5e6-96231b3b80d8
Stop using `Value::getName()` to get the string behind an `MDString`.
Switch to `StringMapEntry<MDString>` so that we can find the string via
its co-allocated map entry.
This is part of PR21532.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221960 91177308-0d34-0410-b5e6-96231b3b80d8
When "MBB->Insert(It, ...)" is called, we want It to be pointing inside the
correct basic block. No actual failures at the moment, but it's caused problems
before.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221953 91177308-0d34-0410-b5e6-96231b3b80d8
Hide the fact that `MDString`'s string is stored in `Value::Name` --
that's going to change soon. Update the only in-tree client that was
using it instead of `Value::getString()`.
Part of PR21532.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221951 91177308-0d34-0410-b5e6-96231b3b80d8
Creating tests for the ConstantIslands pass is very difficult, since it depends
on precise layout details. Having the ability to precisely inject a number of
bytes into the stream helps greatly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221903 91177308-0d34-0410-b5e6-96231b3b80d8
This will become the root of a new class hierarchy separate from
`Value`. As a first step, stick it between `Value` and `MDNode`.
This is part of PR21532.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221886 91177308-0d34-0410-b5e6-96231b3b80d8
The reading of 64 bit values could still be optimized, but at least this cuts
down on the number of virtual calls to fetch more data.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221865 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r221836.
The tests are asserting on some buildbots. This also reverts the
test part of r221837 as it relies on dwarfdump dumping the
accelerator tables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221842 91177308-0d34-0410-b5e6-96231b3b80d8
This avoids an issue where AtEndOfStream mistakenly returns true at the /start/ of
a stream.
(In the rare case that the size is known and actually 0, the slow path will still
handle it correctly.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221840 91177308-0d34-0410-b5e6-96231b3b80d8
The class used for the dump only supports dumping for the moment, but
it can (and will) be easily extended to support searching as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221836 91177308-0d34-0410-b5e6-96231b3b80d8
Currently FormValues are only used for attributes of DIEs and thus
users always have a CU lying around when calling into the FormValue
API.
Accelerator tables encode their information using the same Forms
as the attributes, thus it is natural to use DWARFFormValue to
extract/dump them. There is no CU in that case though. Allow the
API to be called with a null CU argument by making the RelocMap
lookup conditional on the CU pointer's validity, and document this
new behavior in the header. (Test coverage for this use of the API
comes in the DwarfAccelTable support patch)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221835 91177308-0d34-0410-b5e6-96231b3b80d8
One of them (__memcpy_chk) was already there; the others were checked
by comparing function names.
Note that the fortified libfuncs are now part of TLI, but are always
available, because they aren't generated, only optimized into the
non-checking versions.
Differential Revision: http://reviews.llvm.org/D6179
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221817 91177308-0d34-0410-b5e6-96231b3b80d8
Returning more information will allow BitstreamReader to be simplified a bit
and changed to read 64 bits at a time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221794 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Large-model was added first. With the addition of support for multiple PIC
models in LLVM, now add small-model PIC for 32-bit PowerPC, SysV4 ABI. This
generates better code for shared libraries with fewer than about 16380
data objects.
Test Plan: Test cases added or updated
Reviewers: joerg, hfinkel
Reviewed By: hfinkel
Subscribers: jholewinski, mcrosier, emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D5399
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221791 91177308-0d34-0410-b5e6-96231b3b80d8
This patch enables the vec_vsx_ld and vec_vsx_st intrinsics for
PowerPC, which provide programmer access to the lxvd2x, lxvw4x,
stxvd2x, and stxvw4x instructions.
New LLVM intrinsics are provided to represent these four instructions
in IntrinsicsPowerPC.td. These are patterned after the similar
intrinsics for lvx and stvx (Altivec). In PPCInstrVSX.td, these
intrinsics are tied to the code gen patterns, with additional patterns
to allow plain vanilla loads and stores to still generate these
instructions.
At -O1 and higher the intrinsics are immediately converted to loads
and stores in InstCombineCalls.cpp. This will open up more
optimization opportunities while still allowing the correct
instructions to be generated. (Similar code exists for aligned
Altivec loads and stores.)
The new intrinsics are added to the code that checks for consecutive
loads and stores in PPCISelLowering.cpp, as well as to
PPCTargetLowering::getTgtMemIntrinsic().
There's a new test to verify the correct instructions are generated.
The loads and stores tend to be reordered, so the test just counts
their number. It runs at -O2, as it's not very effective to test this
at -O0, when many unnecessary loads and stores are generated.
I ended up having to modify vsx-fma-m.ll. It turns out this test case
is slightly unreliable, but I don't know a good way to prevent
problems with it. The xvmaddmdp instructions read and write the same
register, which is one of the multiplicands. Commutativity allows
either to be chosen. If the FMAs are reordered differently than
expected by the test, the register assignment can be different as a
result. Hopefully this doesn't change often.
There is a companion patch for Clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221767 91177308-0d34-0410-b5e6-96231b3b80d8
Every MemoryObject is a StreamableMemoryObject since the removal of
StringRefMemoryObject, so just merge the two.
I will clean up the MemoryObject interface in the upcoming commits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221766 91177308-0d34-0410-b5e6-96231b3b80d8
A subtle bug was found where attempting to copy a non-const function_ref
lvalue would actually invoke the generic forwarding constructor (as it
was a closer match - being T& rather than the const T& of the implicit
copy constructor). In this particular case it led to a dangling
function_ref member (since it had referenced the function_ref passed by
value to its ctor, rather than the outer function_ref that was still
alive).
SFINAE the converting constructor to not be considered if the copy
constructor is available and demonstrate that this causes the copy to
refer to the original functor, not to the function_ref it was copied
from. (Without the code change, the test would fail as Y would be
referencing X and Y() would see the result of the mutation to X, i.e. 2.)
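For reference, the guarded constructor has roughly this shape (simplified;
not the exact llvm::function_ref code):
#include <cstdint>
#include <type_traits>
#include <utility>
template <typename Fn> class function_ref;
template <typename Ret, typename... Params>
class function_ref<Ret(Params...)> {
  Ret (*callback)(std::intptr_t, Params...) = nullptr;
  std::intptr_t callable = 0;
  template <typename Callable>
  static Ret callback_fn(std::intptr_t C, Params... params) {
    return (*reinterpret_cast<Callable *>(C))(std::forward<Params>(params)...);
  }
public:
  template <typename Callable,
            // The fix, in essence: SFINAE this constructor away whenever the
            // argument is itself a function_ref, so the copy constructor wins
            // overload resolution.
            typename = typename std::enable_if<!std::is_same<
                typename std::remove_reference<Callable>::type,
                function_ref>::value>::type>
  function_ref(Callable &&C)
      : callback(callback_fn<typename std::remove_reference<Callable>::type>),
        callable(reinterpret_cast<std::intptr_t>(&C)) {}
  Ret operator()(Params... params) const {
    return callback(callable, std::forward<Params>(params)...);
  }
};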
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221753 91177308-0d34-0410-b5e6-96231b3b80d8
With this patch MCDisassembler::getInstruction takes an ArrayRef<uint8_t>
instead of a MemoryObject.
Even on X86 there is a maximum size an instruction can have. Given
that, it seems way simpler and more efficient to just pass an ArrayRef
to the disassembler instead of a MemoryObject and have it do a virtual
call every time it wants some extra bytes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221751 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change moves asan-coverage instrumentation
into a separate Module pass.
The other part of the change in clang introduces a new flag
-fsanitize-coverage=N.
Another small patch will update tests in compiler-rt.
With this patch no functionality change is expected except for the flag name.
The following changes will make the coverage instrumentation work with tsan/msan.
Test Plan: Run regression tests, chromium.
Reviewers: nlewycky, samsonov
Reviewed By: nlewycky, samsonov
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6152
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221718 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, we're going to separate metadata from the Value hierarchy. See
PR21532.
This reverts commit r221375.
This reverts commit r221373.
This reverts commit r221359.
This reverts commit r221167.
This reverts commit r221027.
This reverts commit r221024.
This reverts commit r221023.
This reverts commit r220995.
This reverts commit r220994.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221711 91177308-0d34-0410-b5e6-96231b3b80d8
What would happen before that commit is that the SDDbgValues associated with
a deallocated SDNode would be marked Invalidated, but SDDbgInfo would keep
a map entry keyed by the SDNode pointer pointing to this list of invalidated
SDDbgValues. As the memory gets reused, the list might get wrongly associated
with another new SDNode. As the SDDbgValues are cloned when they are transferred,
this can lead to an exponential number of SDDbgValues being produced during
DAGCombine like in http://llvm.org/bugs/show_bug.cgi?id=20893
Note that the previous behavior wasn't really buggy as the invalidation made
sure that the SDDbgValues won't be used. This commit can be considered a
memory optimization and as such is really hard to validate in a unit-test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221709 91177308-0d34-0410-b5e6-96231b3b80d8
This commit adds a new pass that can inject checks before indirect calls to
make sure that these calls target known locations. It supports three types of
checks and, at compile time, it can take the name of a custom function to call
when an indirect call check fails. The default failure function ignores the
error and continues.
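Conceptually, an injected check has this shape (pseudo-code for illustration;
the pass works on IR and the helper names here are placeholders):
typedef void (*Callee)();
// Placeholder hooks: the real check is one of the three supported kinds, and
// the failure handler can be a user-named function chosen at compile time.
extern bool cfi_target_is_known(Callee F);
extern void cfi_check_failed();
static void guarded_indirect_call(Callee F) {
  if (!cfi_target_is_known(F))
    cfi_check_failed(); // the default handler ignores the error...
  F();                  // ...and execution continues with the call.
}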
This pass incidentally moves the function JumpInstrTables::transformType from
private to public and makes it static (with a new argument that specifies the
table type to use); this is so that the CFI code can transform function types
at call sites to determine which jump-instruction table to use for the check at
that site.
Also, this removes support for jumptables in ARM, pending further performance
analysis and discussion.
Review: http://reviews.llvm.org/D4167
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221708 91177308-0d34-0410-b5e6-96231b3b80d8