Propagate whether `MDNode`s are 'distinct' through the other types of IR
(assembly and bitcode). This adds the `distinct` keyword to assembly.
Currently, no one actually calls `MDNode::getDistinct()`, so these nodes
only get created for:
- self-references, which are never uniqued, and
- nodes whose operands are replaced that hit a uniquing collision.
The concept of distinct nodes is still not quite first-class, since
distinct-ness doesn't yet survive across `MapMetadata()`.
Part of PR22111.
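As a small illustrative sketch (variable names hypothetical), the C++
entry point mirrors `MDNode::get()` but bypasses the uniquing tables:
LLVMContext Ctx;
Metadata *Ops[] = {MDString::get(Ctx, "example")};
MDNode *Uniqued  = MDNode::get(Ctx, Ops);         // stored in the uniquing tables
MDNode *Distinct = MDNode::getDistinct(Ctx, Ops); // never uniqued
assert(Uniqued != Distinct && "distinct nodes never collapse into uniqued ones");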
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225474 91177308-0d34-0410-b5e6-96231b3b80d8
PEI tries to keep track of how much starting or ending a call sequence adjusts the stack pointer by, so that it can resolve frame-index references. Currently, it takes a very simplistic view of how SP adjustments are done - both FrameSetupOpcode and FrameDestroyOpcode adjust it exactly by the amount written in their first argument.
This view is in fact incorrect for some targets (e.g. due to stack re-alignment, or because it may want to adjust the stack pointer in multiple steps). However, that doesn't cause breakage, because most targets (the only in-tree exception appears to be 32-bit ARM) rely on being able to simplify the call frame pseudo-instructions earlier, so this code is never hit.
Moving the computation into TargetInstrInfo allows targets to override the way the adjustment is computed if they need to have a non-zero SPAdj.
Differential Revision: http://reviews.llvm.org/D6863
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225437 91177308-0d34-0410-b5e6-96231b3b80d8
type (in addition to the memory type).
The *LoadExt* legalization handling used to only have one type, the
memory type. This forced users to assume that as long as the extload
for the memory type was declared legal, and the result type was legal,
the whole extload was legal.
However, this isn't always the case. For instance, on X86, with AVX,
this is legal:
v4i32 load, zext from v4i8
but this isn't:
v4i64 load, zext from v4i8
Whereas v4i64 is (arguably) legal, even without AVX2.
Note that the same thing was done a while ago for truncstores (r46140),
but I assume no one needed it yet for extloads, so here we go.
Calls to getLoadExtAction were changed to add the value type, found
manually in the surrounding code.
Calls to setLoadExtAction were mechanically changed, by wrapping the
call in a loop, to match previous behavior. The loop iterates over
the MVT subrange corresponding to the memory type (FP vectors, etc...).
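As a hedged sketch of the mechanical rewrite (enum bound names assumed
from MVT), a single call becomes one action per (result, memory) pair:
// Before: setLoadExtAction(ISD::ZEXTLOAD, MVT::v4i8, Expand);
for (unsigned VT = MVT::FIRST_INTEGER_VECTOR_VALUETYPE;
     VT <= MVT::LAST_INTEGER_VECTOR_VALUETYPE; ++VT)
  setLoadExtAction(ISD::ZEXTLOAD, (MVT::SimpleValueType)VT, MVT::v4i8, Expand);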
I also pulled neighboring setTruncStoreActions into some of the loops;
those shouldn't make a difference, as the additional types are illegal.
(e.g., i128->i1 truncstores on PPC.)
No functional change intended.
Differential Revision: http://reviews.llvm.org/D6532
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225421 91177308-0d34-0410-b5e6-96231b3b80d8
Now that we have MVT::FIRST_VALUETYPE (r225362), we can provide a method
checking that the MVT is valid, that is, it's in
[FIRST_VALUETYPE, LAST_VALUETYPE).
This commit also uses it in a few asserts that would previously accept
invalid MVTs, such as the default-constructed -1. In that case,
the code following those asserts would do an out-of-bounds array access.
Using MVT::isValid, those assertions fail as expected when passed
invalid MVTs.
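For reference, a sketch of the shape the check takes (the exact
implementation may differ slightly):
bool MVT::isValid() const {
  return SimpleTy >= MVT::FIRST_VALUETYPE && SimpleTy < MVT::LAST_VALUETYPE;
}
assert(VT.isValid() && "Invalid MVT!"); // typical use in the touched asserts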
It feels clunky to have such a validity checking function, but it's
at least better than the alternative of broken manual checks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225411 91177308-0d34-0410-b5e6-96231b3b80d8
Allow distinct `MDNode`s to be explicitly created. There's no way (yet)
of representing their distinctness in assembly/bitcode, however, so this
still isn't first-class.
Part of PR22111.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225406 91177308-0d34-0410-b5e6-96231b3b80d8
Add API to indicate whether an `MDNode` is distinct. A distinct node is
not stored in the MDNode uniquing tables, and will never be returned by
`MDNode::get()`.
Although distinct nodes are only currently created by uniquing
collisions (when operands change), PR22111 will allow these nodes to be
explicitly created.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225401 91177308-0d34-0410-b5e6-96231b3b80d8
`MDNode::replaceOperandWith()` changes all instances of metadata. Stop
using it when linking module flags, since (due to uniquing) the flag
values could be used by other metadata.
Instead, use new API `NamedMDNode::setOperand()` to update the reference
directly.
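A minimal sketch of the new usage (module, index, and node names
hypothetical):
NamedMDNode *ModFlags = DstM->getNamedMetadata("llvm.module.flags");
// Update the I-th flag in place; other users of the uniqued flag
// value are left untouched.
ModFlags->setOperand(I, NewFlag);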
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225397 91177308-0d34-0410-b5e6-96231b3b80d8
This change includes the most basic possible GCStrategy for a GC which is using the statepoint lowering code. At the moment, this GCStrategy doesn't really do much - aside from actually generating correct stackmaps, that is - but I went ahead and added a few extra correctness checks as proof of concept. It's mostly here to provide documentation on how to write one, and to provide a point for various optimization legality hooks I'd like to add going forward. (For context, see the TODOs in InstCombine around gc.relocate.)
Most of the validation logic added here as proof of concept will soon move into the Verifier. That move is dependent on http://reviews.llvm.org/D6811
There was discussion in the review thread about addrspace(1) being reserved for something. I'm going to follow up on a separate llvmdev thread. If needed, I'll update all the code at once.
Note that I am deliberately not making a GCStrategy required to use gc.statepoints with this change. I want to give folks out of tree - including myself - a chance to migrate. In a week or two, I'll make having a GCStrategy be required for gc.statepoints. To this end, I added the gc tag to one of the test cases but not others.
Differential Revision: http://reviews.llvm.org/D6808
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225365 91177308-0d34-0410-b5e6-96231b3b80d8
Many places reference MVT::LAST_VALUETYPE when iterating over all
valid MVTs, but they usually start with 0.
With FIRST_VALUETYPE, we can avoid explicit constants when we really
should be using MVT::SimpleValueType.
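A sketch of the resulting idiom:
for (unsigned i = MVT::FIRST_VALUETYPE; i < MVT::LAST_VALUETYPE; ++i) {
  MVT VT = (MVT::SimpleValueType)i;
  (void)VT; // ... per-type work, instead of starting the loop at a literal 0
}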
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225362 91177308-0d34-0410-b5e6-96231b3b80d8
Used to iterate over previously added memory dependencies in
adjustChainDeps() and iterateChainSucc().
SDep::isCtrl() was previously used in these places, that also gave
anti and output edges. The code may be worse if these are followed,
because MIsNeedChainEdge() will conservatively return true since a
non-memory instruction has no memory operands, and a false chain dep
will be added. It is also unnecessary since all memory accesses of
interest will be reached by memory dependencies, and there is a budget
limit for the number of edges traversed.
This problem was found on an out-of-tree target with enabled alias
analysis. No test case for an in-tree target has been found.
Reviewed by Hal Finkel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225351 91177308-0d34-0410-b5e6-96231b3b80d8
requiring and invalidating specific analyses. Also make their printed
names match their class names. Writing these out as prose really doesn't
make sense to me any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225346 91177308-0d34-0410-b5e6-96231b3b80d8
r221973 changed SmallVector::operator[] to use size_t instead of unsigned.
Before that, on 64bit platforms, when a large index (say -1) was passed,
truncating it to unsigned avoided an overflow when computing 'begin() + idx',
and failed the range checking assertion, as expected.
With r221973, idx isn't truncated, so the addition wraps to
'(char*)begin() - 1', and doesn't fire anymore when it should have done so.
This commit changes the comparison to instead compute 'end() - begin()'
(i.e., 'size()'), which avoids potentially overflowing additions, and
correctly triggers the assertion when values such as -1 are passed.
Note that the problem already existed before that revision, on platforms
where sizeof(size_t) == sizeof(unsigned).
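A sketch of the fixed check (member spellings approximate):
reference operator[](size_type idx) {
  // 'end() - begin()' cannot overflow, unlike 'begin() + idx' with idx == -1.
  assert(idx < size_type(end() - begin()) && "invalid index!");
  return begin()[idx];
}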
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225338 91177308-0d34-0410-b5e6-96231b3b80d8
We can't drop support for RAUW entirely in `MDNode`s, since it's
required for graph construction. This comment was from before I'd done
the math on that (out-of-tree), and never should have been committed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225334 91177308-0d34-0410-b5e6-96231b3b80d8
passes too many times.
I think this is actually the issue that someone raised with me at the
developer's meeting and in an email, but that we never really got to the
bottom of. Having all the testing utilities made it much easier to dig
down and uncover the core issue.
When a pass manager is running many passes over a single function, we
need it to invalidate the analyses between each run so that they can be
re-computed as needed. We also need to track the intersection of
preserved higher-level analyses across all the passes that we run (for
example, if there is one module analysis which all the function analyses
preserve, we want to track that and propagate it). Unfortunately, this
interacted poorly with any enclosing pass adaptor between two IR units.
It would see the intersection of preserved analyses, and need to
invalidate any other analyses, but some of the un-preserved analyses
might have already been invalidated *and recomputed*! We would fail to
propagate the fact that the analysis had already been invalidated.
The solution to this struck me as really strange at first, but the more
I thought about it, the more natural it seemed. After a nice discussion
with Duncan about it on IRC, it seemed even nicer. The idea is that
invalidating an analysis *causes* it to be preserved! Preserving the
lack of result is trivial. If it is recomputed, great. Until something
*else* invalidates it again, we're good.
The consequence of this is that the invalidate methods on the analysis
manager which operate over many passes now consume their
PreservedAnalyses object, update it to "preserve" every analysis pass to
which it delivers an invalidation (regardless of whether the pass
chooses to be removed, or handles the invalidation itself by updating
itself). Then we return this augmented set from the invalidate routine,
letting the pass manager take the result and use the intersection of
*that* across each pass run to compute the final preserved set. This
accounts for all the places where the early invalidation of an analysis
has already "preserved" it for a future run.
I've beefed up the testing and adjusted the assertions to show that we
no longer repeatedly invalidate or compute the analyses across nested
pass managers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225333 91177308-0d34-0410-b5e6-96231b3b80d8
WillNotOverflowUnsignedAdd's smarts will live in ValueTracking as
computeOverflowForUnsignedAdd. It now returns a tri-state result:
never overflows, always overflows and sometimes overflows.
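A sketch of how a caller consumes the tri-state (surrounding analysis
arguments assumed):
switch (computeOverflowForUnsignedAdd(LHS, RHS, DL, AC, CxtI, DT)) {
case OverflowResult::NeverOverflows:  /* safe to add nuw */           break;
case OverflowResult::AlwaysOverflows: /* fold to the wrapped value */ break;
case OverflowResult::MayOverflow:     /* no information */            break;
}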
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225329 91177308-0d34-0410-b5e6-96231b3b80d8
This is affecting the behavior of some ObjC++ / AArch64 test cases on Darwin.
Reverting to get the bots green while I track down the source of the changed
behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225311 91177308-0d34-0410-b5e6-96231b3b80d8
The goal is to allow MachineFunctionInfo to override this create
function to customize the creation.
No change is intended for existing backends in this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225292 91177308-0d34-0410-b5e6-96231b3b80d8
This will be used for AMD GPUs with the Graphics Core Next architecture,
which currently use the r600 triple.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225276 91177308-0d34-0410-b5e6-96231b3b80d8
Use this to test that path of invalidation. This test actually shows
redundant invalidation here that is really bad. I'm going to work on
fixing that next, but wanted to commit the test harness now that it's all
working.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225257 91177308-0d34-0410-b5e6-96231b3b80d8
a specific analysis result.
This is quite handy to test things, and will also likely be very useful
for debugging issues. You could narrow down pass validation failures by
walking these invalidate pass runs up and down the pass pipeline, etc.
I've added support to the pass pipeline parsing to be able to create one
of these for any analysis pass desired.
Just adding this class uncovered one latent bug where the
AnalysisManager CRTP base class had a hard-coded Module type rather than
using IRUnitT.
I've also added tests for invalidation and caching of analyses in
a basic way across all the pass managers. These in turn uncovered two
more bugs where we failed to correctly invalidate an analysis -- its
results were invalidated but the key for re-running the pass was never
cleared and so it was never re-run. Quite nasty. I'm very glad to debug
this here rather than with a full system.
Also, yes, the naming here is horrid. I'm going to update some of the
names to be slightly less awful shortly. But really, I've no "good"
ideas for naming. I'll be satisfied if I can get it to "not bad".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225246 91177308-0d34-0410-b5e6-96231b3b80d8
is a no-op other than requiring some analysis results be available.
This can be used in real pass pipelines to force the usually lazy
analysis running to eagerly compute something at a specific point, and
it can be used to test the pass manager infrastructure (my primary use
at the moment).
I've also added a bit of pipeline parsing magic to support generating
these directly from the opt command so that you can directly use these
when debugging your analysis. The syntax is:
require<analysis-name>
This can be used at any level of the pass manager. For example:
cgscc(function(require<my-analysis>,no-op-function))
This would produce a no-op function pass requiring my-analysis, followed
by a fully no-op function pass, both of these in a function pass manager
which is nested inside of a bottom-up CGSCC pass manager which is in the
top-level (implicit) module pass manager.
I have zero attachment to the particular syntax I'm using here. Consider
it a straw man for use while I'm testing and fleshing things out.
Suggestions for better syntax welcome, and I'll update everything based
on any consensus that develops.
I've used this new functionality to more directly test the analysis
printing rather than relying on the cgscc pass manager running an
analysis for me. This is still minimally tested because I need to have
analyses to run first! ;] That patch is next, but wanted to keep this
one separate for easier review and discussion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225236 91177308-0d34-0410-b5e6-96231b3b80d8
dsymutil would like to use all the AsmPrinter/MCStreamer infrastructure
to stream out the DWARF. In order to do so, it will reuse the DIE object
and so this header needs to be public.
The interface exposed here has some corners that cannot be used without a
DwarfDebug object, but clients that want to stream Dwarf can just avoid
these.
Differential Revision: http://reviews.llvm.org/D6695
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225208 91177308-0d34-0410-b5e6-96231b3b80d8
when all are being preserved.
We want to short-circuit this for a couple of reasons. One, I don't
really want passes to grow a dependency on actually receiving their
invalidate call when they've been preserved. I'm thinking about removing
this entirely. But more importantly, preserving everything is likely to
be the common case in a lot of scenarios, and it would be really good to
bypass all of the invalidation and preservation machinery there.
Avoiding calling N opaque functions to try to invalidate things that are
by definition still valid seems important. =]
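The short-circuit itself is tiny; roughly (method name per
PreservedAnalyses, surrounding code assumed):
if (PA.areAllPreserved())
  return; // nothing can have been invalidated; skip the whole walk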
This wasn't really inspired by much other than seeing the spam in the
logging for analyses, but it seems better to get it checked in rather
than forgetting about it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225163 91177308-0d34-0410-b5e6-96231b3b80d8
manager.
This starts to allow us to test analyses more easily, but it's really
only the beginning. Some of the code here is still untestable without
a small of chunks as possible.
manual changes to create analysis passes, but I wanted to factor it into
as small a set of chunks as possible.
Next up in order to be able to test things are, in no particular order:
- No-op analysis passes so we don't have to use real ones to exercise
the pass manager itself.
- Automatic way of generating dummy passes that require an analysis be
run, including a variant that calls a 'print' method on a pass to make
it even easier to print out the results of an analysis.
- Dummy passes that invalidate all analyses for their IR unit so we can
test invalidation and re-runs.
- Automatic way to print each analysis pass as it is re-run.
- Automatic but optional verification of analysis passes everywhere
possible.
I'm not claiming I'll get to all of these immediately, but that's what
is in the pipeline at some stage. I'm fleshing out exactly what I need
and what to prioritize by working on converting analyses and then trying
to test the conversion. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225162 91177308-0d34-0410-b5e6-96231b3b80d8
renaming a file from AssumptionTracker.h to AssumptionCache.h.
Thanks to Philip Reames for noticing and pointing it out in code review!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225146 91177308-0d34-0410-b5e6-96231b3b80d8
units.
This was debated back and forth a bunch, but using references is now
clearly cleaner. Of all the code written using pointers thus far, in
only one place did it really make more sense to have a pointer. In most
cases, this just removes immediate dereferencing from the code. I think
it is much better to get errors on null IR units earlier, potentially
at compile time, than to delay it.
Most notably, the legacy pass manager uses references for its routines
and so as more and more code works with both, the use of pointers was
likely to become really annoying. I noticed this when I ported the
domtree analysis over and wrote the entire thing with references only to
have it fail to compile. =/ It seemed better to switch now than to
delay. We can, of course, revisit this if we learn that references are
really problematic in the API.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225145 91177308-0d34-0410-b5e6-96231b3b80d8
a cache of assumptions for a single function, and an immutable pass that
manages those caches.
The motivation for this change is two fold. Immutable analyses are
really hacks around the current pass manager design and don't exist in
the new design. This is usually OK, but it requires that the core logic
of an immutable pass be reasonably partitioned off from the pass logic.
This change does precisely that. As a consequence it also paves the way
for the *many* utility functions that deal in the assumptions to live in
both pass manager worlds by creating a separate non-pass object with
its own independent API that they all rely on. Now, the only bits of the
system that deal with the actual pass mechanics are those that actually
need to deal with the pass mechanics.
Once this separation is made, several simplifications become pretty
obvious in the assumption cache itself. Rather than using a set and
callback value handles, it can just be a vector of weak value handles.
The callers can easily skip the handles that are null, and eventually we
can wrap all of this up behind a filter iterator.
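A sketch of the consumer-side pattern this enables (accessor name
assumed):
for (auto &VH : AC->assumptions()) {
  if (!VH)
    continue; // the underlying call was deleted; skip the dead handle
  CallInst *Assume = cast<CallInst>(VH);
  (void)Assume; // ... inspect the @llvm.assume call ...
}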
For now, this adds boilerplate to the various passes, but this kind of
boilerplate will end up making it possible to port these passes to the
new pass manager, and so it will end up factored away pretty reasonably.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225131 91177308-0d34-0410-b5e6-96231b3b80d8
The existing code provided for specifying a global loop alignment preference.
However, the preferred loop alignment might depend on the loop itself. For
recent POWER cores, loops between 5 and 8 instructions should have 32-byte
alignment (while the others are better with 16-byte alignment) so that the
entire loop will fit in one i-cache line.
To support this, getPrefLoopAlignment has been made virtual, and can be
provided with an optional MachineLoop* so the target can inspect the loop
before answering the query. The default behavior, as before, is to return the
value set with setPrefLoopAlignment. MachineBlockPlacement now queries the
target for each loop instead of only once per function. There should be no
functional change for other targets.
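A hedged sketch of what a target override might look like for the POWER
heuristic above (alignment encoding and exact API details assumed):
unsigned PPCTargetLowering::getPrefLoopAlignment(MachineLoop *ML) const {
  if (ML) {
    unsigned NumInstrs = 0;
    for (MachineBasicBlock *MBB : ML->getBlocks())
      NumInstrs += MBB->size();
    if (NumInstrs >= 5 && NumInstrs <= 8)
      return 5; // log2(32), i.e. 32-byte alignment
  }
  return TargetLowering::getPrefLoopAlignment(ML); // setPrefLoopAlignment default
}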
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225117 91177308-0d34-0410-b5e6-96231b3b80d8
FunctionPassManager. These never got documented, likely due to the
clutter of this header file. This fixes another problem people noticed
when they started trying to use the new pass manager.
I've also used this to document the aspirational constraints I would
like to hold passes to. I don't really have a better place to document
such things at this point, but eventually will probably create a proper
.rst file and page for the LLVM pass infrastructure that carries such
high-level concerns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225097 91177308-0d34-0410-b5e6-96231b3b80d8
concept-based polymorphism in the pass manager to a separate header.
I got feedback from someone reading the code and trying to use it that
this was really making it hard to dive in and start using these APIs and
that makes a lot of sense.
This only requires a moderate amount of gymnastics to separate in this
way, namely rinsing the PreservedAnalysis object through a template
argument in a few places so that it is dependent and we only examine it
on instantiation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225094 91177308-0d34-0410-b5e6-96231b3b80d8
WillNotOverflowUnsignedMul's smarts will live in ValueTracking as
computeOverflowForUnsignedMul. It now returns a tri-state result:
never overflows, always overflows and sometimes overflows.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225076 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r225059. I think MSVC 2012 has a problem with this. This is
an attempt to fix one of the MSVC 2012 bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225065 91177308-0d34-0410-b5e6-96231b3b80d8
This appears to have broken at least the windows build bots due to
compile errors in the predicate that didn't simply suppress the overload.
I'm not sure what the fix is, and the bots have been broken for a long
time now so I'm just reverting until Michael can figure out a fix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225064 91177308-0d34-0410-b5e6-96231b3b80d8
The issue was that AArch64 has additional restrictions on when local
relocations can be used. We have to take those into consideration when
deciding whether to put an L symbol in the symbol table or not.
Original message:
Remove doesSectionRequireSymbols.
In an assembly expression like
bar:
.long L0 + 1
the intended semantics is that bar will contain a pointer one byte past L0.
In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell a relocation must point to the
end of a string, since that would look just like the start of the next.
The solution used in ELF is to use relocations with symbols if there is
a non-zero addend.
In MachO before this patch we would just keep all symbols in some sections.
This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.
This patch implements the non-zero addend logic for MachO too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225048 91177308-0d34-0410-b5e6-96231b3b80d8
Under the large code model, we cannot assume that __morestack lives within
2^31 bytes of the call site, so we cannot use pc-relative addressing. We
cannot perform the call via a temporary register, as the rax register may
be used to store the static chain, and all other suitable registers may be
either callee-save or used for parameter passing. We cannot use the stack
at this point either because __morestack manipulates the stack directly.
To avoid these issues, perform an indirect call via a read-only memory
location containing the address.
This solution is not perfect, as it assumes that the .rodata section
is laid out within 2^31 bytes of each function body, but this seems to
be sufficient for JIT.
Differential Revision: http://reviews.llvm.org/D6787
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225003 91177308-0d34-0410-b5e6-96231b3b80d8
In an assembly expression like
bar:
.long L0 + 1
the intended semantics is that bar will contain a pointer one byte past L0.
In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell a relocation must point to the
end of a string, since that would look just like the start of the next.
The solution used in ELF is to use relocations with symbols if there is
a non-zero addend.
In MachO before this patch we would just keep all symbols in some sections.
This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.
This patch implements the non-zero addend logic for MachO too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224985 91177308-0d34-0410-b5e6-96231b3b80d8
Nothing particularly interesting, just adding infrastructure for use by in tree users and out of tree users.
Note: These were extracted out of a working frontend, but they have not been well tested in isolation.
Differential Revision: http://reviews.llvm.org/D6807
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224981 91177308-0d34-0410-b5e6-96231b3b80d8
This change implements four basic optimizations:
If a relocated value isn't used, it doesn't need to be relocated.
If the value being relocated is null, relocation doesn't change that. (Technically, this might be collector specific. I don't know of one for which it doesn't work, though.)
If the value being relocated is undef, the relocation is meaningless.
If the value being relocated was known nonnull, the relocated pointer also isn't null. (Since it points to the same source language object.)
I outlined other planned work in comments.
Differential Revision: http://reviews.llvm.org/D6600
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224968 91177308-0d34-0410-b5e6-96231b3b80d8
a CLANG_LIBDIR_SUFFIX variable. This is necessary before I can add
support for using that variable to CMake and the C++ code in Clang, and
the autoconf build system does all substitutions in the LLVM tree.
As mentioned before, I'm not planning to add actual multilib support to
the autoconf build, just enough stubs for it to keep playing nicely with
the CMake build once that one has support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224922 91177308-0d34-0410-b5e6-96231b3b80d8
If the control flow is modelling an if-statement where the only instruction in
the 'then' basic block (excluding the terminator) is a call to cttz/ctlz,
CodeGenPrepare can try to speculate the cttz/ctlz call and simplify the control
flow graph.
Example:
\code
entry:
%cmp = icmp eq i64 %val, 0
br i1 %cmp, label %end.bb, label %then.bb
then.bb:
%c = tail call i64 @llvm.cttz.i64(i64 %val, i1 true)
br label %end.bb
end.bb:
%cond = phi i64 [ %c, %then.bb ], [ 64, %entry]
\endcode
In this example, basic block %then.bb is taken if value %val is not zero.
Also, the phi node in %end.bb would propagate the size in bits of %val
only if %val is equal to zero.
With this patch, CodeGenPrepare will try to hoist the call to cttz from %then.bb
into basic block %entry only if cttz is cheap to speculate for the target.
Added two new hooks in TargetLowering.h to let targets customize the behavior
(i.e. decide whether it is cheap or not to speculate calls to cttz/ctlz). The
two new methods are 'isCheapToSpeculateCtlz' and 'isCheapToSpeculateCttz'.
By default, both methods return 'false'.
On X86, method 'isCheapToSpeculateCtlz' returns true only if the target has
LZCNT. Method 'isCheapToSpeculateCttz' only returns true if the target has BMI.
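Sketched out, the X86 overrides amount to (subtarget predicate names
assumed):
bool X86TargetLowering::isCheapToSpeculateCttz() const {
  return Subtarget->hasBMI();   // TZCNT gives a defined result for a zero input
}
bool X86TargetLowering::isCheapToSpeculateCtlz() const {
  return Subtarget->hasLZCNT(); // LZCNT gives a defined result for a zero input
}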
Differential Revision: http://reviews.llvm.org/D6728
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224899 91177308-0d34-0410-b5e6-96231b3b80d8
.set directives may be overridden by other .set directives as well as
label definitions.
This fixes PR22019.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224811 91177308-0d34-0410-b5e6-96231b3b80d8
This function constructs the main liverange by merging all subranges if
subregister liveness tracking is available. This should be slightly
faster to compute instead of performing the liveness calculation again
for the main range. More importantly it avoids cases where the main
liverange would cover positions where no subrange was live. These cases
happened for partial definitions where the actual defined part was dead
and only the undefined parts were used later.
Register coalescing requires that every part covered by the main
live range has at least one subrange live.
I also expect this function to become useful later for places where the
subranges are modified in a way that makes it hard to correctly fix the
main liverange (in the machine scheduler, for example); we can simply
reconstruct it from the subranges then.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224806 91177308-0d34-0410-b5e6-96231b3b80d8
a size and alignment. Several assertions in DwarfDebug rely on all variable
types to report back a size, or to be derived from a type with a size.
Tested in CFE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224780 91177308-0d34-0410-b5e6-96231b3b80d8
Previously I tried to plug musttail into the existing vararg lowering
code. That turned out to be a mistake, because non-vararg calls use
significantly different register lowering, even on x86. For example, AVX
vectors are usually passed in registers to normal functions and memory
to vararg functions. Now musttail uses a completely separate lowering.
Hopefully this can be used as the basis for non-x86 perfect forwarding.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D6156
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224745 91177308-0d34-0410-b5e6-96231b3b80d8
Take two disjoint Loops L1 and L2.
LoopSimplify fails to simplify some loops (e.g. when indirect branches
are involved). In such situations, it can happen that an exit for L1 is
the header of L2. Thus, when we create PHIs in one of such exits we are
also inserting PHIs in L2 header.
This could break LCSSA form for L2 because these inserted PHIs can also
have uses in L2 exits, which are never handled in the current
implementation. Provide a fix for this corner case and test that we
don't assert/crash on that.
Differential Revision: http://reviews.llvm.org/D6624
rdar://problem/19166231
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224740 91177308-0d34-0410-b5e6-96231b3b80d8
In recent times asan and valgrind have found far more memory management
bugs in llvm than the special-purpose leak detector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224703 91177308-0d34-0410-b5e6-96231b3b80d8
(X & INT_MIN) ? X & INT_MAX : X into X & INT_MAX
(X & INT_MIN) ? X : X & INT_MAX into X
(X & INT_MIN) ? X | INT_MIN : X into X
(X & INT_MIN) ? X : X | INT_MIN into X | INT_MIN
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224669 91177308-0d34-0410-b5e6-96231b3b80d8
It is intended to be used for a family of personality functions that
have similar IR preparation requirements. Typically when interoperating
with MSVC personality functions, bits of functionality need to be
outlined from the main function into helper functions. There is also
usually more than one landing pad per invoke, which does not match the
LLVM IR landingpad representation.
None of this is implemented yet. This change just adds a new enum that
is active for *-windows-msvc and delegates to the EH removal preparation
pass. No functionality change for other targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224625 91177308-0d34-0410-b5e6-96231b3b80d8
dsymutil needs access to DWARF-specific information; the small DIContext
wrapper isn't sufficient. Other DWARF consumers might want to use it too
(I'm looking at you, lldb).
Differential Revision: http://reviews.llvm.org/D6694
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224594 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of reusing the name `MapValue()` when mapping `Metadata`, use
`MapMetadata()`. The old name doesn't make much sense after the
`Metadata`/`Value` split.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224566 91177308-0d34-0410-b5e6-96231b3b80d8
- This also fixes a bug introduced in r223880 where values were not
correctly marked as Dead anymore.
- Cleanup computeDeadValues(): split up SubRange code variant, simplify
arguments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224538 91177308-0d34-0410-b5e6-96231b3b80d8
of the abi we should be using. For targets that don't use the
option there's no change; otherwise, this allows external users
to set the ABI via string and avoids some of the -backend-option
pain in clang.
Use this option to move the ABI for the ARM port from the
Subtarget to the TargetMachine and update the testcases
accordingly since it's no longer valid to set via -mattr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224492 91177308-0d34-0410-b5e6-96231b3b80d8
Make `DICompositeType` mutators private to prevent misuse. All calls to
`setArrays()` and `setContainingType()` should go through
`DIBuilder::replaceArrays()` and `DIBuilder::replaceVTableHolder()`.
This is a follow-up to r224482 (now that clang has been updated in
r224483).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224486 91177308-0d34-0410-b5e6-96231b3b80d8
Also corrected the name of the load command to not end in an 'S', as well
as the name of the MachO::linker_option_command struct and other places
that had the word option as plural, which did not match the Mac OS X headers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224485 91177308-0d34-0410-b5e6-96231b3b80d8
Add API to DIBuilder to handle self-referencing `DICompositeType`s.
Self-references aren't expected in the debug info graph, and we take
advantage of that by only calling `resolveCycles()` on nodes that were
once forward declarations (otherwise, DIBuilder needs an expensive
tracking reference to every unresolved node it creates, which in cyclic
graphs is *all of them*).
However, clang seems to create self-referencing `DICompositeType`s. Add
API to manage this safely. The paired commit to clang will include the
regression test.
I'll make the `DICompositeType` API `private` in a follow-up to prevent
misuse (I've separated that to prevent build failures from missing the
clang commit).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224482 91177308-0d34-0410-b5e6-96231b3b80d8
This handles the case of a BUILD_VECTOR being constructed out of elements extracted from a vector twice the size of the result vector. Previously this was always scalarized. Now, we try to construct a shuffle node that feeds on extract_subvectors.
This fixes PR15872 and provides a partial fix for PR21711.
Differential Revision: http://reviews.llvm.org/D6678
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224429 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When generating MIPS assembly, LLVM always overrides the default assembler options by emitting the '.set noreorder', '.set nomacro' and '.set noat' directives,
while GCC uses the default options if an assembly-level function contains inline assembly code.
This becomes a problem when the code generated by LLVM is interleaved with inline assembly which assumes GCC-like assembler options (from Linux, for example).
This patch fixes these conflicts by setting the appropriate assembler options at the beginning of an inline asm block and popping them at the end.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6637
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224425 91177308-0d34-0410-b5e6-96231b3b80d8
The type promotion helper does not support vector types, so it does not
kick in in such cases.
Original commit message:
[CodeGenPrepare] Move sign/zero extensions near loads using type promotion.
This patch extends the optimization in CodeGenPrepare that moves a sign/zero
extension near a load when the target can combine them. The optimization may
promote any operations between the extension and the load to make that possible.
Although this optimization may be beneficial for all targets, in particular
AArch64, this is enabled for X86 only as I have not benchmarked it for other
targets yet.
** Context **
Most targets feature extended loads, i.e., loads that perform a zero or sign
extension for free. In that context it is interesting to expose such pattern in
CodeGenPrepare so that the instruction selection pass can form such loads.
Sometimes, this pattern is blocked because of instructions between the load and
the extension. When those instructions are promotable to the extended type, we
can expose this pattern.
** Motivating Example **
Let us consider an example:
define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
%ld = load i8* %addr1
%zextld = zext i8 %ld to i32
%ld2 = load i32* %addr2
%add = add nsw i32 %ld2, %zextld
%sextadd = sext i32 %add to i64
%zexta = zext i8 %a to i32
%addza = add nsw i32 %zexta, %zextld
%sextaddza = sext i32 %addza to i64
%addb = add nsw i32 %b, %zextld
%sextaddb = sext i32 %addb to i64
call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
ret void
}
As it is, this IR generates the following assembly on x86_64:
[...]
movzbl (%rdi), %eax # zero-extended load
movl (%rsi), %esi # plain load
addl %eax, %esi # 32-bit add
movslq %esi, %rdi # sign extend the result of add
movzbl %dl, %edx # zero extend the first argument
addl %eax, %edx # 32-bit add
movslq %edx, %rsi # sign extend the result of add
addl %eax, %ecx # 32-bit add
movslq %ecx, %rdx # sign extend the result of add
[...]
The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.
Now, by promoting the additions to form more extended loads we would generate:
[...]
movzbl (%rdi), %eax # zero-extended load
movslq (%rsi), %rdi # sign-extended load
addq %rax, %rdi # 64-bit add
movzbl %dl, %esi # zero extend the first argument
addq %rax, %rsi # 64-bit add
movslq %ecx, %rdx # sign extend the second argument
addq %rax, %rdx # 64-bit add
[...]
The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.
This kind of sequence happens a lot in code using 32-bit indexes on
64-bit architectures.
Note: The throughput numbers are similar on Sandy Bridge and Haswell.
** Proposed Solution **
To avoid the penalty of all these sign/zero extensions, we merge them in the
loads at the beginning of the chain of computation by promoting all the chain of
computation on the extended type. The promotion is done if and only if we do not
introduce new extensions, i.e., if we do not degrade the code quality.
To achieve this, we extend the existing “move ext to load” optimization with the
promotion mechanism introduced to match larger patterns for addressing mode
(r200947).
The idea of this extension is to perform the following transformation:
ext(promotableInst1(...(promotableInstN(load))))
=>
promotedInst1(...(promotedInstN(ext(load))))
The promotion mechanism in that optimization is enabled by a new TargetLowering
switch, which is off by default. In other words, by default, the optimization
performs the “move ext to load” optimization as it was before this patch.
** Performance **
Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
Tested Optimization Levels: O3/Os
Tests: llvm-testsuite + externals.
Results:
- No regression beside noise.
- Improvements:
CINT2006/473.astar: ~2%
Benchmarks/PAQ8p: ~2%
Misc/perlin: ~3%
The results are consistent for both O3 and Os.
<rdar://problem/18310086>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224402 91177308-0d34-0410-b5e6-96231b3b80d8
This patch extends the optimization in CodeGenPrepare that moves a sign/zero
extension near a load when the target can combine them. The optimization may
promote any operations between the extension and the load to make that possible.
Although this optimization may be beneficial for all targets, in particular
AArch64, this is enabled for X86 only as I have not benchmarked it for other
targets yet.
** Context **
Most targets feature extended loads, i.e., loads that perform a zero or sign
extension for free. In that context it is interesting to expose such pattern in
CodeGenPrepare so that the instruction selection pass can form such loads.
Sometimes, this pattern is blocked because of instructions between the load and
the extension. When those instructions are promotable to the extended type, we
can expose this pattern.
** Motivating Example **
Let us consider an example:
define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
%ld = load i8* %addr1
%zextld = zext i8 %ld to i32
%ld2 = load i32* %addr2
%add = add nsw i32 %ld2, %zextld
%sextadd = sext i32 %add to i64
%zexta = zext i8 %a to i32
%addza = add nsw i32 %zexta, %zextld
%sextaddza = sext i32 %addza to i64
%addb = add nsw i32 %b, %zextld
%sextaddb = sext i32 %addb to i64
call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
ret void
}
As it is, this IR generates the following assembly on x86_64:
[...]
movzbl (%rdi), %eax # zero-extended load
movl (%rsi), %esi # plain load
addl %eax, %esi # 32-bit add
movslq %esi, %rdi # sign extend the result of add
movzbl %dl, %edx # zero extend the first argument
addl %eax, %edx # 32-bit add
movslq %edx, %rsi # sign extend the result of add
addl %eax, %ecx # 32-bit add
movslq %ecx, %rdx # sign extend the result of add
[...]
The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.
Now, by promoting the additions to form more extended loads we would generate:
[...]
movzbl (%rdi), %eax # zero-extended load
movslq (%rsi), %rdi # sign-extended load
addq %rax, %rdi # 64-bit add
movzbl %dl, %esi # zero extend the first argument
addq %rax, %rsi # 64-bit add
movslq %ecx, %rdx # sign extend the second argument
addq %rax, %rdx # 64-bit add
[...]
The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.
This kind of sequence happens a lot in code using 32-bit indexes on
64-bit architectures.
Note: The throughput numbers are similar on Sandy Bridge and Haswell.
** Proposed Solution **
To avoid the penalty of all these sign/zero extensions, we merge them in the
loads at the beginning of the chain of computation by promoting all the chain of
computation on the extended type. The promotion is done if and only if we do not
introduce new extensions, i.e., if we do not degrade the code quality.
To achieve this, we extend the existing “move ext to load” optimization with the
promotion mechanism introduced to match larger patterns for addressing mode
(r200947).
The idea of this extension is to perform the following transformation:
ext(promotableInst1(...(promotableInstN(load))))
=>
promotedInst1(...(promotedInstN(ext(load))))
The promotion mechanism in that optimization is enabled by a new TargetLowering
switch, which is off by default. In other words, by default, the optimization
performs the “move ext to load” optimization as it was before this patch.
** Performance **
Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
Tested Optimization Levels: O3/Os
Tests: llvm-testsuite + externals.
Results:
- No regression beside noise.
- Improvements:
CINT2006/473.astar: ~2%
Benchmarks/PAQ8p: ~2%
Misc/perlin: ~3%
The results are consistent for both O3 and Os.
<rdar://problem/18310086>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224351 91177308-0d34-0410-b5e6-96231b3b80d8
Add in definedness checks for shift operators, null checks when
pointers are assumed by the code to be non-null, and explicit
unreachables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224255 91177308-0d34-0410-b5e6-96231b3b80d8
- by Ella Bolshinsky
The alias analysis is used define whether the given instruction
is a barrier for store sinking. For 2 identical stores, following
instructions are checked in the both basic blocks, to determine
whether they are sinking barriers.
http://reviews.llvm.org/D6420
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224247 91177308-0d34-0410-b5e6-96231b3b80d8
It makes more sense for ThreadLocal<const T>::get to return a const T*
and ThreadLocal<T>::get to return a T*.
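In other words (illustrative only):
llvm::sys::ThreadLocal<const int> ConstTL;
const int *CP = ConstTL.get(); // const T now yields const T*
llvm::sys::ThreadLocal<int> MutTL;
int *MP = MutTL.get();         // non-const T still yields T*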
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224225 91177308-0d34-0410-b5e6-96231b3b80d8
Clang's static analyzer found several potential cases of undefined
behavior, use of un-initialized values, and potentially null pointer
dereferences in tablegen, Support, MC, and ADT. This cleans them up
with specific assertions on the assumptions of the code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224154 91177308-0d34-0410-b5e6-96231b3b80d8
The __fp16 type is unconditionally exposed. Since -mfp16-format is not yet
supported, there is not a user switch to change this behaviour. This build
attribute should capture the default behaviour of the compiler, which is to
expose the IEEE 754 version of __fp16.
When -mfp16-format is supported, that will be the way to control the value of
this build attribute.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224115 91177308-0d34-0410-b5e6-96231b3b80d8
Also remove redundant documentation:
- doxygen will copy documentation to overridden methods.
- Use \copydoc on PIMPL classes instead of replicating the text.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224089 91177308-0d34-0410-b5e6-96231b3b80d8
Updating comments to reflect the current state of the world after my recent changes to ownership structure and generally better describe what a GCStrategy is and how it works.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224086 91177308-0d34-0410-b5e6-96231b3b80d8
Add an option to disable the optimization that shrinks truncated larger
type loads to smaller type loads. On SI this prevents using scalar load
instructions in some cases, since there are no scalar extloads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224084 91177308-0d34-0410-b5e6-96231b3b80d8
`MDString`s can have arbitrary characters in them. Prevent an assertion
that fired in `BitcodeWriter` because of sign extension by copying the
characters into the record as `unsigned char`s.
Based on a patch by Keno Fischer; fixes PR21882.
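A sketch of the writer-side fix (the record buffer name is assumed):
for (char C : MDS->getString())
  Record.push_back((unsigned char)C); // widen via unsigned char: no sign extension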
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224077 91177308-0d34-0410-b5e6-96231b3b80d8
This reflects the typelessness of `Metadata` in the bitcode format,
removing types from all metadata operands.
`METADATA_VALUE` represents a `ValueAsMetadata`, and always has two
fields: the type and the value.
`METADATA_NODE` represents an `MDNode`, and unlike `METADATA_OLD_NODE`,
doesn't store types. It stores operands at their ID+1 so that `0` can
reference `nullptr` operands.
Part of PR21532.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224073 91177308-0d34-0410-b5e6-96231b3b80d8
This gives us better leak detection messages, like `Value` has.
This also has the side effect of papering over a problem where
`MachineInstr`s are added as garbage to the leak detector and then
deleted without being removed. If `MDNode::getTemporary()` allocates an
`MDNodeFwdDecl` in the same spot, the leak detector asserts. By
separating `MDNode`s into their own container we lose that assertion.
Since `MachineInstr` is required to have a trivial destructor, its usage
of `LeakDetector` at all is pretty suspect. I'll be sending a patch
soon to strip that out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224060 91177308-0d34-0410-b5e6-96231b3b80d8
Previously print+verify passes were added in a very unsystematic way, which is
annoying when debugging as you miss intermediate steps and allows bugs to stay
unnoticed when no verification is performed.
To make this change practical I added the possibility to explicitly disable
verification. I used this option in all places where no verification was
performed previously (because a lot of places actually don't pass the
MachineVerifier).
In the long term these problems should be fixed properly and verification
enabled after each pass. I'll enable some more verification in subsequent
commits.
This is the 2nd attempt at this after realizing that PassManager::add() may
actually delete the pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224059 91177308-0d34-0410-b5e6-96231b3b80d8
Rather than requiring overloads in the wrapper and the impl, just
overload the impl and use templates in the wrapper. This makes it less
error prone to add more overloads (`void *` defeats any chance the
compiler has at noticing bugs, so the easier the better).
At the same time, correct the comment that was lying about not changing
functionality for `Value`.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224058 91177308-0d34-0410-b5e6-96231b3b80d8
Previously print+verify passes were added in a very unsystematic way, which is
annoying when debugging as you miss intermediate steps and allows bugs to stay
unnoticed when no verification is performed.
To make this change practical I added the possibility to explicitly disable
verification. I used this option in all places where no verification was
performed previously (because a lot of places actually don't pass the
MachineVerifier).
In the long term these problems should be fixed properly and verification
enabled after each pass. I'll enable some more verification in subsequent
commits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@224042 91177308-0d34-0410-b5e6-96231b3b80d8
This change moves the ownership and access of GCFunctionInfo (the object which describes the safepoints associated with a function under GCRoot) to GCModuleInfo. Previously, this was owned by GCStrategy, which was in turn owned by GCModuleInfo. This made GCStrategy module specific, which is 'surprising' given its name and other purposes.
There are a few more changes needed, but we're getting towards the point we can reuse GCStrategy for gc.statepoint as well.
p.s. The style of this code ends up being a mess. I was trying to move code around without otherwise changing much. Once I get the ownership structure rearranged, I will go through and fixup spacing, naming, comments etc.
Differential Revision: http://reviews.llvm.org/D6587
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223994 91177308-0d34-0410-b5e6-96231b3b80d8
These methods are only used by MCJIT and are very specific to it. In fact, they
are also fairly specific to the fact that we have a dynamic linker of
relocatable objects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223964 91177308-0d34-0410-b5e6-96231b3b80d8
Don't call `dropAllReferences()` from `MDNode::~MDNode()`, call it
directly from `~MDNodeFwdDecl()` and `~GenericMDNode()`.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223904 91177308-0d34-0410-b5e6-96231b3b80d8
This works like the composeSubRegIndices() function but transforms
a subregister lane mask instead of a subregister index.
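Intended use, roughly (TargetRegisterInfo spellings assumed):
unsigned Mask = TRI->getSubRegIndexLaneMask(InnerIdx);
unsigned Composed = TRI->composeSubRegIndexLaneMask(OuterIdx, Mask);
// Composed now describes the same lanes as seen through OuterIdx.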
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223874 91177308-0d34-0410-b5e6-96231b3b80d8
Let tablegen compute the combination of subregister lanemasks for all
subregisters in a register/register class. This is preparation for further
work subregister allocation
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223873 91177308-0d34-0410-b5e6-96231b3b80d8
Nothing particularly interesting here, just documenting the way the code currently works before I start changing it...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223866 91177308-0d34-0410-b5e6-96231b3b80d8
In the current implementation, GCStrategy is a part of the ownership structure for the gc metadata which describes a Module. It also contains a reference to the module in question. As a result, GCStrategy instances are essentially Module specific.
I plan to transition away from this design. Instead, a GCStrategy will be owned by the LLVMContext. It will be a lightweight policy object which contains no information about the Modules or Functions involved, but can be easily reached given a Function.
The first step in this transition is to remove the direct Module reference from GCStrategy. This also requires removing the single user of this reference, the GCMetadataPrinter hierarchy. In theory, this will allow the lifetime of the printers to be scoped to the LLVMContext as well, but in practice, I'm not actually changing that. (Yet?)
An alternate design would have been to move the direct Module reference into the GCMetadataPrinter and change the keying of the owning maps to explicitly key off both GCStrategy and Module. I'm open to doing it that way instead, but didn't see much value in preserving the per Module association for GCMetadataPrinters.
The next change in this sequence will be to start unwinding the intertwined ownership between GCStrategy, GCModuleInfo, and GCFunctionInfo.
Differential Revision: http://reviews.llvm.org/D6566
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223859 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM_EXPLICIT is only supported by recent versions of MSVC, and it seems
the not-so-recent versions get confused about the operator bool() when
trying to resolve operator== calls.
This removes the operator bool()s, since they don't seem to be used
anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223824 91177308-0d34-0410-b5e6-96231b3b80d8
It is a static method of IRObjectFile, so having to use
IRObjectFile::createIRObjectFile was redundant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223822 91177308-0d34-0410-b5e6-96231b3b80d8
Tidy up the code a little by using 'auto' when the type is obvious, doxify the
comments, and clang-format the file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223807 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This is desirable for WebKit and other clients of the llvm-shlib because C++ exit time destructors have a tendency to crash when invoked from multi-threaded applications.
Ideally this option will be temporary, because the ideal fix is to just not have exit time destructors.
Reviewers: chapuni, ributzka
Reviewed By: ributzka
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6572
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223805 91177308-0d34-0410-b5e6-96231b3b80d8
Split `Metadata` away from the `Value` class hierarchy, as part of
PR21532. Assembly and bitcode changes are in the wings, but this is the
bulk of the change for the IR C++ API.
I have a follow-up patch prepared for `clang`. If this breaks other
sub-projects, I apologize in advance :(. Help me compile it on Darwin
and I'll try to fix it. FWIW, the errors should be easy to fix, so it may
be simpler to just fix it yourself.
This breaks the build for all metadata-related code that's out-of-tree.
Rest assured the transition is mechanical and the compiler should catch
almost all of the problems.
Here's a quick guide for updating your code:
- `Metadata` is the root of a class hierarchy with three main classes:
`MDNode`, `MDString`, and `ValueAsMetadata`. It is distinct from
the `Value` class hierarchy. It is typeless -- i.e., instances do
*not* have a `Type`.
- `MDNode`'s operands are all `Metadata *` (instead of `Value *`).
- `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be
replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively.
If you're referring solely to resolved `MDNode`s -- post graph
construction -- just use `MDNode*`.
- `MDNode` (and the rest of `Metadata`) have only limited support for
`replaceAllUsesWith()`.
As long as an `MDNode` is pointing at a forward declaration -- the
result of `MDNode::getTemporary()` -- it maintains a side map of its
uses and can RAUW itself. Once the forward declarations are fully
resolved, RAUW support is dropped on the ground. This means that
uniquing collisions on changing operands cause nodes to become
"distinct". (This already happened fairly commonly, whenever an
operand went to null.)
If you're constructing complex (non self-reference) `MDNode` cycles,
you need to call `MDNode::resolveCycles()` on each node (or on a
top-level node that somehow references all of the nodes). Also,
don't do that. Metadata cycles (and the RAUW machinery needed to
construct them) are expensive.
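For example, a rough sketch of building a two-node cycle with a
temporary node, assuming the `getTemporary`/`resolveCycles` API
described above (`Context` is a placeholder):
MDNode *Temp = MDNode::getTemporary(Context, None);
Metadata *Ops1[] = {Temp};
MDNode *A = MDNode::get(Context, Ops1); // A -> forward declaration
Metadata *Ops2[] = {A};
MDNode *B = MDNode::get(Context, Ops2); // B -> A
Temp->replaceAllUsesWith(B);            // now A -> B -> A
MDNode::deleteTemporary(Temp);
A->resolveCycles();                     // drop the expensive RAUW support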
- An `MDNode` can only refer to a `Constant` through a bridge called
`ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`).
As a side effect, accessing an operand of an `MDNode` that is known
to be, e.g., `ConstantInt`, takes three steps: first, cast from
`Metadata` to `ConstantAsMetadata`; second, extract the `Constant`;
third, cast down to `ConstantInt`.
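Spelled out, the three steps are (sketch):
auto *CAM = cast<ConstantAsMetadata>(N->getOperand(0));
Constant *C = CAM->getValue();
auto *CI = cast<ConstantInt>(C);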
The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have
metadata schema owners transition away from using `Constant`s when
the type isn't important (and they don't care about referring to
`GlobalValue`s).
In the meantime, I've added transitional API to the `mdconst`
namespace that matches semantics with the old code, in order to
avoid adding the error-prone three-step equivalent to every call
site. If your old code was:
MDNode *N = foo();
bar(isa <ConstantInt>(N->getOperand(0)));
baz(cast <ConstantInt>(N->getOperand(1)));
bak(cast_or_null <ConstantInt>(N->getOperand(2)));
bat(dyn_cast <ConstantInt>(N->getOperand(3)));
bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));
you can trivially match its semantics with:
MDNode *N = foo();
bar(mdconst::hasa <ConstantInt>(N->getOperand(0)));
baz(mdconst::extract <ConstantInt>(N->getOperand(1)));
bak(mdconst::extract_or_null <ConstantInt>(N->getOperand(2)));
bat(mdconst::dyn_extract <ConstantInt>(N->getOperand(3)));
bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));
and when you transition your metadata schema to `MDInt`:
MDNode *N = foo();
bar(isa <MDInt>(N->getOperand(0)));
baz(cast <MDInt>(N->getOperand(1)));
bak(cast_or_null <MDInt>(N->getOperand(2)));
bat(dyn_cast <MDInt>(N->getOperand(3)));
bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));
- A `CallInst` -- specifically, intrinsic instructions -- can refer to
metadata through a bridge called `MetadataAsValue`. This is a
subclass of `Value` where `getType()->isMetadataTy()`.
`MetadataAsValue` is the *only* class that can legally refer to a
`LocalAsMetadata`, which is a bridged form of non-`Constant` values
like `Argument` and `Instruction`. It can also refer to any other
`Metadata` subclass.
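For example, wrapping and unwrapping an `MDNode` around an intrinsic
call boundary (a sketch; `Context` and `N` are placeholders):
Value *Wrapped = MetadataAsValue::get(Context, N);
// ... pass Wrapped as a call operand to the intrinsic ...
if (auto *MAV = dyn_cast<MetadataAsValue>(Wrapped))
  Metadata *MD = MAV->getMetadata();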
(I'll break all your testcases in a follow-up commit, when I propagate
this change to assembly.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223802 91177308-0d34-0410-b5e6-96231b3b80d8
Rewrite the pattern-match code to also work with Values, not just with
Instructions. Also remove the no-longer-needed matcher (m_Instruction).
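A sketch of what this enables -- matching a `Value` that may be either
an instruction or a constant expression:
using namespace llvm::PatternMatch;
Value *V = foo();
Value *X, *Y;
if (match(V, m_Add(m_Value(X), m_Value(Y)))) {
  // V can be an `add` instruction or an add ConstantExpr; X and Y
  // are bound to its operands either way.
}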
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223797 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, walk the obj symbol list in parallel to find the GV. This shouldn't
change anything on ELF where global symbols are not mangled, but it is a step
toward supporting other object formats.
Gold itself is ELF-only, but bfd ld supports COFF, and the logic in the gold
plugin could be reused in lld.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223780 91177308-0d34-0410-b5e6-96231b3b80d8
Introduce the ``llvm.instrprof_increment`` intrinsic and the
``-instrprof`` pass. These provide the infrastructure for writing
counters for profiling, as in clang's ``-fprofile-instr-generate``.
The implementation of the instrprof pass is ported directly out of the
CodeGenPGO classes in clang; with the follow-up in clang that rips that
code out to use these new intrinsics, this ends up being NFC.
Doing the instrumentation this way opens some doors in terms of
improving the counter performance. For example, this will make it
simple to experiment with alternate lowering strategies, and allows us
to try handling profiling specially in some optimizations if we want
to.
Finally, this drastically simplifies the frontend and puts all of the
lowering logic in one place.
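As a rough sketch of the frontend side (all of InsertPt, M, FuncName,
FuncHash, NumCounters, and CounterIdx are placeholders; the operands
are the function name, a hash, the number of counters, and the counter
index):
IRBuilder<> Builder(InsertPt);
Value *Name = Builder.CreateGlobalStringPtr(FuncName);
Builder.CreateCall(
    Intrinsic::getDeclaration(&M, Intrinsic::instrprof_increment),
    {Name, Builder.getInt64(FuncHash), Builder.getInt32(NumCounters),
     Builder.getInt32(CounterIdx)});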
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223672 91177308-0d34-0410-b5e6-96231b3b80d8
DenseSet used to be implemented as DenseMap<Key, char>, which usually doubled
the memory footprint of the map. Now we use a compressed set so the second
element uses no memory at all. This required some surgery on DenseMap as
all accesses to the bucket now have to go through methods; this should
have no impact on the behavior of DenseMap though. The new default bucket
type for DenseMap is a slightly extended std::pair as we expose it through
DenseMap's iterator and don't want to break any existing users.
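The trick is the usual empty-member compression; a self-contained
sketch of the idea (illustrative, not the actual DenseMap code):
struct Empty {};
struct Bucket : Empty { // empty-base optimization: Empty adds no bytes
  int Key;
};
static_assert(sizeof(Bucket) == sizeof(int), "value uses no memory");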
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223588 91177308-0d34-0410-b5e6-96231b3b80d8
Required some APInt massaging to get proper empty/tombstone values. Apart
from making the code a bit simpler this also reduces the bucket size of
the ConstantInt map from 32 to 24 bytes.
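For reference, this is the interface a key type provides to DenseMap
(a sketch; `MyKey` and its `sentinel()`/`hash()` helpers are
hypothetical):
namespace llvm {
template <> struct DenseMapInfo<MyKey> {
  static MyKey getEmptyKey() { return MyKey::sentinel(0); }
  static MyKey getTombstoneKey() { return MyKey::sentinel(1); }
  static unsigned getHashValue(const MyKey &K) { return K.hash(); }
  static bool isEqual(const MyKey &L, const MyKey &R) { return L == R; }
};
}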
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223478 91177308-0d34-0410-b5e6-96231b3b80d8
It relies on undefined behaviour, since `StringMapEntry<>` is not
a standard layout type. There are no users anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223439 91177308-0d34-0410-b5e6-96231b3b80d8
This matches std::vector and should avoid unnecessary masking to 32 bits
when calling them on a 64-bit system.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223365 91177308-0d34-0410-b5e6-96231b3b80d8
I'm recommitting the codegen part of the patch.
The vectorizer part will be sent for review again.
Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for targets that support them (currently AVX2 and AVX-512). The vectorizer asks the target about the availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223348 91177308-0d34-0410-b5e6-96231b3b80d8
Use the MCAsmInfo instead of the DataLayout, and allow
specifying a custom prefix for labels specifically. HSAIL
requires that labels begin with @, but global symbols with &.
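A sketch of how a target could use this (the member name is my
assumption about the MCAsmInfo knob involved; the '@' value is the
HSAIL requirement above, with global symbols keeping their own '&'
prefix through separate machinery):
class HSAILMCAsmInfo : public MCAsmInfo {
public:
  HSAILMCAsmInfo() {
    PrivateLabelPrefix = "@"; // assumed member for the label prefix
  }
};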
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223323 91177308-0d34-0410-b5e6-96231b3b80d8
The non-opaque part can be structurally uniqued. To keep this to just
a hash lookup, we don't try to unique cyclic types.
Also change the type mapping algorithm to be optimistic about a type
not being recursive and only create a new type when proven to be wrong.
This is not as strong as trying to speculate that we can keep the source
type, but it is simpler (no speculation to revert) and more powerful
than what we had before (at least we no longer copy non-recursive types).
I initially wrote this to try to replace the name-based type merging.
It is not strong enough to replace it, but it is a useful addition.
With this patch, the number of named struct types in a clang LTO
bootstrap goes from 49674 to 15986.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223278 91177308-0d34-0410-b5e6-96231b3b80d8
When lazy reading a module, the types used in a function will not be visible to
a TypeFinder until the body is read.
This patch fixes that by asking the module for its identified struct types.
If a materializer is present, the module asks it. If not, it uses a TypeFinder.
This fixes pr21374.
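At a call site the new query looks like this (a sketch; `M` is a lazily
materialized Module):
std::vector<StructType *> Types = M.getIdentifiedStructTypes();
// With a materializer attached, this also returns struct types used
// only in not-yet-read function bodies; without one, it falls back to
// a TypeFinder.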
I will be the first to say that this is ugly, but it was the best I could find.
Some of the options I looked at:
* Asking the LLVMContext. This could be made to work for gold, but not currently
for ld64. ld64 will load multiple modules into a single context before merging
them. This causes us to see types from future merges. Unfortunately,
MappedTypes is not just a cache when it comes to opaque types. Once the
mapping has been made, we have to remember it for as long as the key may
be used. This would mean moving MappedTypes to the Linker class and having
to drop the Linker::LinkModules static methods, which are visible from C.
* Adding an option to ignore function bodies in the TypeFinder. This would
fix the PR by picking the worst result. It would work, but unfortunately
we are currently quite dependent on the upfront type merging. I will
try to reduce our dependency, but it is not clear that we will be able
to get rid of it for now.
The only clean solution I could think of is making the Module own the types.
This would have other advantages, but it is a much bigger change. I will
propose it, but it is nice to have this fixed while that is discussed.
With the gold plugin, this patch takes the number of types in the LTO clang
binary from 52817 to 49669.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223215 91177308-0d34-0410-b5e6-96231b3b80d8