Replumb the `AsmWriter` so that `Metadata::print()` is generally useful.
(Similarly change `Metadata::printAsOperand()`.)
- `SlotTracker` now has a mode where all metadata will be correctly
numbered when initializing a `Module`. Normally, `Metadata` only
referenced from within `Function`s gets numbered when the `Function`
is incorporated.
- `Metadata::print()` and `Metadata::printAsOperand()` (and
`Metadata::dump()`) now take an optional `Module` argument. When
provided, `SlotTracker` is initialized with the new mode, and the
numbering will be complete and consistent for all calls to `print()`.
- `Value::print()` uses the new `SlotTracker` mode when printing
intrinsics with `MDNode` operands, `MetadataAsValue` operands, or the
bodies of functions. Thus, metadata numbering will be consistent
between calls to `Metadata::print()` and `Value::print()`.
- `Metadata::print()` (and `Metadata::dump()`) now print the full
definition of `MDNode`s:
!5 = !{!6, !"abc", !7}
This matches the behaviour of `Value::print()`, which includes the
names of instructions.
- Updated call sites in `Verifier` to call `print()` instead of
`printAsOperand()`.
All this, so that `Verifier` can print out useful failure messages that
involve `Metadata` for PR22777.
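As a usage sketch (hedged: the helper below is hypothetical, but the
optional `Module` argument matches the API described above):

  #include "llvm/IR/Metadata.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/raw_ostream.h"
  void printBothForms(const llvm::Metadata *MD, const llvm::Module *M) {
    // With M provided, SlotTracker numbers all metadata in the module
    // up front, so the !N numbers below are complete and consistent.
    MD->print(llvm::errs(), M);          // e.g. !5 = !{!6, !"abc", !7}
    MD->printAsOperand(llvm::errs(), M); // e.g. !5
  }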
Note that `Metadata::printAsOperand()` previously took an optional
`bool` and an optional `Module` argument. The former was cargo-culted
from `Value::printAsOperand()` and wasn't doing anything useful. The
latter didn't give consistent results (without the new `SlotTracker`
mode).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232275 91177308-0d34-0410-b5e6-96231b3b80d8
Adding nullptr to all the IRBuilder stuff because it's the first thing
that fails to build when testing without the back-compat functions, so
I'll keep having to re-add these locally for each chunk of migration I
do. Might as well check them in to save me the churn. Eventually I'll
have to migrate these too, but I'm going breadth-first.
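For illustration, a hedged sketch of the interim call shape (assumes
the transitional CreateGEP overload that takes an explicit type first):

  #include "llvm/IR/IRBuilder.h"
  llvm::Value *emitGEP(llvm::IRBuilder<> &Builder, llvm::Value *Ptr,
                       llvm::Value *Idx) {
    // nullptr keeps CreateGEP deriving the pointee type from Ptr, which
    // is what keeps these call sites building until they are migrated.
    return Builder.CreateGEP(nullptr, Ptr, Idx, "idx");
  }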
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232270 91177308-0d34-0410-b5e6-96231b3b80d8
Sorting them is obviously a noop and we can skip the libc call. This is
surprisingly common in clang. NFC.
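A hedged sketch of the early-out this describes (the real helper is
llvm::array_pod_sort; the names here are illustrative):

  #include <cstdlib>
  static int cmpInt(const void *A, const void *B) {
    int L = *static_cast<const int *>(A), R = *static_cast<const int *>(B);
    return L < R ? -1 : (L > R ? 1 : 0);
  }
  static void podSort(int *Start, int *End) {
    // Zero or one element is already sorted: skip the qsort libc call.
    if (End - Start < 2)
      return;
    std::qsort(Start, End - Start, sizeof(int), cmpInt);
  }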
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232265 91177308-0d34-0410-b5e6-96231b3b80d8
I'm just going to migrate these in a pretty ad-hoc & incremental way -
providing the backwards compatible API for now, then locally removing
it, fixing a few callers, adding it back in and committing those callers.
Rinse, repeat.
The assertions should ensure that if I get this wrong we'll find out
about it and not just have one giant patch to revert, recommit, revert,
recommit, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232240 91177308-0d34-0410-b5e6-96231b3b80d8
This speeds up llvm-ar building lib64/libclangSema.a with debug IR files
from 8.658015807 seconds to just 0.351036519 seconds :-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232221 91177308-0d34-0410-b5e6-96231b3b80d8
Backwards compatibility happened to be fairly easy to support based on
the number of operands (the old format had an even number; the new
format has one more operand, so an odd number).
test/Bitcode/old-aliases.ll already appears to test old gep operators
(if I remove the backwards compatibility in the BitcodeReader, this and
another test fail) so I'm not adding extra test coverage here.
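A hedged sketch of the parity test (illustrative; the real logic lives
in the BitcodeReader's GEP record handling):

  #include <cstddef>
  // New-format GEP records carry one extra operand (the explicit
  // source element type), flipping the operand-count parity to odd.
  static bool hasExplicitSourceType(size_t NumOperands) {
    return (NumOperands & 1) != 0;
  }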
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232216 91177308-0d34-0410-b5e6-96231b3b80d8
We only defer loading metadata inside ParseModule when ShouldLazyLoadMetadata
is true and we have not loaded any Metadata block yet.
This commit implements all-or-nothing loading of Metadata. If there is a
request to load any metadata block, we will load all deferred metadata blocks.
We make sure the deferred metadata blocks are loaded before we
materialize any function or the module.
The default value of the added parameter ShouldLazyLoadMetadata for
getLazyBitcodeModule is false, so the default behavior stays the same.
We only set the parameter to true when creating an LTOModule in a local
context. Such modules are only really used for parsing symbols, so it's
unnecessary to ever load the metadata blocks.
If we are going to enable lazy-loading of Metadata for other usages of
getLazyBitcodeModule, where deferred metadata blocks need to be loaded, we can
expose BitcodeReader::materializeMetadata to Module, similar to
Module::materialize.
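A hedged usage sketch (parameter as described above; surrounding names
and header placement are assumptions from this era of the tree):

  #include "llvm/Bitcode/ReaderWriter.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/Support/MemoryBuffer.h"
  #include <memory>
  llvm::ErrorOr<llvm::Module *>
  openLazily(std::unique_ptr<llvm::MemoryBuffer> Buffer,
             llvm::LLVMContext &Context) {
    // true defers metadata; a later request to read any metadata block
    // then loads all deferred metadata blocks at once (all-or-nothing).
    return llvm::getLazyBitcodeModule(std::move(Buffer), Context, nullptr,
                                      /*ShouldLazyLoadMetadata=*/true);
  }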
rdar://19804575
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232198 91177308-0d34-0410-b5e6-96231b3b80d8
The operand flag word for ISD::INLINEASM nodes now contains a 15-bit
memory constraint ID when the operand kind is Kind_Mem. This constraint
ID is a numeric equivalent to the constraint code string and is converted
with a target specific hook in TargetLowering.
This patch maps all memory constraints to InlineAsm::Constraint_m so there
is no functional change at this point. It just proves that using these
previously unused bits in the encoding of the flag word doesn't break
anything.
The next patch will make each target preserve the current mapping of
everything to Constraint_m for itself while changing the target independent
implementation of the hook to return Constraint_Unknown appropriately. Each
target will then be adapted in separate patches to use appropriate
Constraint_* values.
PR22883 was caused by the matching operands copying the whole of the
operand flags for the matched operand. This included the constraint ID,
which needed to be replaced with the operand number. This has been
fixed with a conversion function. Following on from this, matching
operands also used the operand number as the constraint ID. This has
been fixed by looking up the matched operand and taking the constraint
ID from there.
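A hedged sketch of the encoding (bit positions inferred from the 15-bit
figure above; the real helpers live on InlineAsm):

  // For Kind_Mem operands the constraint ID occupies a 15-bit field of
  // the flag word. Matched operands must not carry it, which is what
  // the conversion function mentioned above strips out.
  static unsigned encodeMemConstraint(unsigned Flag, unsigned ConstraintID) {
    return Flag | (ConstraintID << 16); // ID in bits 16-30
  }
  static unsigned decodeMemConstraint(unsigned Flag) {
    return (Flag >> 16) & 0x7fff;       // 15-bit mask
  }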
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232165 91177308-0d34-0410-b5e6-96231b3b80d8
This should complete the job started in r231794 and continued in r232045:
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.
AVX2 introduced proper integer variants of the hacked integer insert/extract
C intrinsics that were created for this same functionality with AVX1.
This should complete the removal of insert/extract128 intrinsics.
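For illustration, a hedged IRBuilder sketch of the generic form
(inserting a 128-bit vector into the low half of a 256-bit integer
vector as two shuffles; assumes <8 x i32>/<4 x i32> operands):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;
  Value *insertLow128(IRBuilder<> &B, Value *V256, Value *V128) {
    // Widen the 128-bit operand to 256 bits with undef upper lanes.
    SmallVector<Constant *, 8> Widen;
    for (unsigned i = 0; i != 8; ++i)
      Widen.push_back(i < 4 ? (Constant *)B.getInt32(i)
                            : UndefValue::get(B.getInt32Ty()));
    Value *Wide = B.CreateShuffleVector(
        V128, UndefValue::get(V128->getType()), ConstantVector::get(Widen));
    // Blend: lanes 8-11 take Wide's low half, lanes 4-7 keep V256's top.
    unsigned Idx[] = {8, 9, 10, 11, 4, 5, 6, 7};
    SmallVector<Constant *, 8> Blend;
    for (unsigned i = 0; i != 8; ++i)
      Blend.push_back(B.getInt32(Idx[i]));
    return B.CreateShuffleVector(V256, Wide, ConstantVector::get(Blend));
  }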
The Clang precursor patch for this change was checked in at r232109.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232120 91177308-0d34-0410-b5e6-96231b3b80d8
This (r232027) caused PR22883, so it seems those bits might be used by
something else after all. Reverting until we can figure out what else to do.
Original commit message:
The operand flag word for ISD::INLINEASM nodes now contains a 15-bit
memory constraint ID when the operand kind is Kind_Mem. This constraint
ID is a numeric equivalent to the constraint code string and is converted
with a target specific hook in TargetLowering.
This patch maps all memory constraints to InlineAsm::Constraint_m so there
is no functional change at this point. It just proves that using these
previously unused bits in the encoding of the flag word doesn't break anything.
The next patch will make each target preserve the current mapping of
everything to Constraint_m for itself while changing the target independent
implementation of the hook to return Constraint_Unknown appropriately. Each
target will then be adapted in separate patches to use appropriate Constraint_*
values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232093 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, IntervalMap asserts when used with keys bigger than host
pointers. This patch uses the AlignedCharArrayUnion functionality to
overcome that limitation.
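A hedged usage sketch of what now works (for example, 64-bit keys on a
32-bit host):

  #include "llvm/ADT/IntervalMap.h"
  #include <cstdint>
  void demo() {
    typedef llvm::IntervalMap<uint64_t, unsigned> Map64;
    Map64::Allocator Alloc;
    Map64 M(Alloc);
    // A key wider than a host pointer previously tripped the assert.
    M.insert(0x100000000ULL, 0x1ffffffffULL, 7);
  }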
Differential Revision: http://reviews.llvm.org/D8268
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232079 91177308-0d34-0410-b5e6-96231b3b80d8
Now that we've replaced the vinsertf128 intrinsics,
do the same for their extract twins.
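For example, extracting the high 128 bits of a 256-bit vector becomes a
plain shuffle (a hedged IRBuilder sketch; assumes an <8 x float> input):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;
  Value *extractHi128(IRBuilder<> &B, Value *V256) {
    // Lanes 4-7 select the upper half; the generic shuffle path can
    // then optimize this like any other shuffle.
    SmallVector<Constant *, 4> Mask;
    for (unsigned i = 4; i != 8; ++i)
      Mask.push_back(B.getInt32(i));
    return B.CreateShuffleVector(
        V256, UndefValue::get(V256->getType()), ConstantVector::get(Mask));
  }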
This is very much like D8086 (checked in at r231794):
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.
This is also the LLVM sibling to the cfe D8275 patch.
Differential Revision: http://reviews.llvm.org/D8276
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232045 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The operand flag word for ISD::INLINEASM nodes now contains a 15-bit
memory constraint ID when the operand kind is Kind_Mem. This constraint
ID is a numeric equivalent to the constraint code string and is converted
with a target specific hook in TargetLowering.
This patch maps all memory constraints to InlineAsm::Constraint_m so there
is no functional change at this point. It just proves that using these
previously unused bits in the encoding of the flag word doesn't break anything.
The next patch will make each target preserve the current mapping of
everything to Constraint_m for itself while changing the target independent
implementation of the hook to return Constraint_Unknown appropriately. Each
target will then be adapted in separate patches to use appropriate Constraint_*
values.
Reviewers: hfinkel
Reviewed By: hfinkel
Subscribers: hfinkel, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D8171
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232027 91177308-0d34-0410-b5e6-96231b3b80d8
These docs *don't* match the way WinEHPrepare uses them yet, and
verifier support isn't implemented either. The implementation will come
after the documentation text is reviewed and agreed upon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232003 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, run both EH preparation passes, and have them both ignore
functions with unrecognized EH personalities. Pass delegation involved
some hacky code for creating an AnalysisResolver that we don't need now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231995 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
I don't know why every single backend had to redeclare its own DataLayout.
There was a virtual getDataLayout() on the common base TargetMachine;
the default implementation returned nullptr. It was not clear from this
that call sites could assume a DataLayout would be available for each
Target.
Now getDataLayout() is no longer virtual and returns a pointer to the
DataLayout member of the common base TargetMachine. I plan to turn it
into a reference in a future patch.
The only backend that didn't have a DataLayout previously was the
CPPBackend. It now initializes the default DataLayout. This commit is
NFC for all the other backends.
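A hedged sketch of the resulting shape (simplified stand-in class, not
the real TargetMachine):

  #include "llvm/IR/DataLayout.h"
  class SketchTargetMachine {
    llvm::DataLayout DL; // every target now initializes this member
  public:
    explicit SketchTargetMachine(llvm::StringRef Desc) : DL(Desc) {}
    // Non-virtual: callers can rely on a DataLayout being present.
    const llvm::DataLayout *getDataLayout() const { return &DL; }
  };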
Test Plan: clang+llvm ninja check-all
Reviewers: echristo
Subscribers: jfb, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D8243
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231987 91177308-0d34-0410-b5e6-96231b3b80d8
that control, individually, all of the disparate things it was
controlling.
At the same time move a FIXME in the Hexagon port to a new
subtarget function that will enable a user of the machine
scheduler to avoid using the source scheduler for pre-RA-scheduling.
The FIXME calls for removing this, but doing so involves either
testcase changes or adding -pre-RA-sched=source to a few testcases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231980 91177308-0d34-0410-b5e6-96231b3b80d8
time. The target independent code was passing in one all the
time and targets weren't checking validity before using it. Update
a few calls to pass in a MachineFunction where necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231970 91177308-0d34-0410-b5e6-96231b3b80d8
If a function is going in a unique section (because of -ffunction-sections
for example), putting a jump table in .rodata will keep .rodata alive and
that will keep alive any other function that also has a jump table.
Instead, put the jump table in a unique section that is associated with the
function.
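A hedged, self-contained sketch of the section choice (helper and
naming scheme illustrative, not the actual TargetLoweringObjectFile
logic):

  #include <string>
  std::string jumpTableSection(const std::string &FnName,
                               bool FunctionHasUniqueSection) {
    if (!FunctionHasUniqueSection)
      return ".rodata"; // shared read-only data, as before
    // A unique section associated with the function: if the linker
    // garbage-collects the function, its jump table dies with it.
    return ".rodata." + FnName;
  }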
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231961 91177308-0d34-0410-b5e6-96231b3b80d8
MachineFunction argument so that it can look up the subtarget
rather than using a cached one in some Targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231888 91177308-0d34-0410-b5e6-96231b3b80d8
update all ports accordingly. Required a couple of small rewrites
in handling subtarget features during creation in PPC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231861 91177308-0d34-0410-b5e6-96231b3b80d8
This lets us pass the symbol to the constructor and avoid the mutable field.
This also opens the way for outputting the symbol only when needed,
instead of outputting it at the start of the file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231859 91177308-0d34-0410-b5e6-96231b3b80d8
- Use TargetLowering to check for the actual cost of each extension.
- Provide a factorized method to check for the cost of an extension:
TargetLowering::isExtFree.
- Provide a virtual method TargetLowering::isExtFreeImpl for targets to be able
to tune the cost of non-free extensions.
This refactoring offers a better granularity to model what really happens on
different targets.
No performance changes and very few code differences.
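A hedged sketch of the hook pair's shape (stand-in class; the real
methods live on TargetLowering as described above):

  #include "llvm/IR/Instruction.h"
  class SketchTargetLowering {
  public:
    bool isExtFree(const llvm::Instruction *I) const {
      if (isExtFreeImpl(I))
        return true; // the target says this extension is free
      // ...factorized target-independent checks would go here...
      return false;
    }
  protected:
    // Targets override this to tune the cost of non-free extensions.
    virtual bool isExtFreeImpl(const llvm::Instruction *) const {
      return false;
    }
  };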
Part of <rdar://problem/19267165>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231855 91177308-0d34-0410-b5e6-96231b3b80d8
Follow up from r231505.
Fix the non-determinism by using a MapVector and reintroduce the AArch64
testcase. Defer deleting the GOT candidates until the end and remove
them in bulk, avoiding linear-time removal of each element.
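A hedged sketch of the two ingredients (illustrative types; remove_if
is the bulk-removal entry point on llvm::MapVector):

  #include "llvm/ADT/MapVector.h"
  #include <string>
  #include <utility>
  void demo() {
    llvm::MapVector<std::string, int> Candidates;
    Candidates.insert(std::make_pair("f", 1));
    Candidates.insert(std::make_pair("g", 2));
    // Iteration follows insertion order, so output is deterministic.
    // One bulk pass instead of a linear-time erase per element:
    Candidates.remove_if(
        [](const std::pair<std::string, int> &E) { return E.second > 1; });
  }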
Thanks to Renato Golin for trying it out on other platforms.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231830 91177308-0d34-0410-b5e6-96231b3b80d8
This is the final patch that actually introduces the new parameter of
partition mapping to RuntimePointerCheck::needsChecking.
Another API (LAI::getInstructionsForAccess) is also exposed that helps
to map pointers to instructions because ultimately we partition
instructions.
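A hedged, self-contained sketch of how a partition mapping filters the
checks (signature simplified; the real API is
RuntimePointerCheck::needsChecking):

  #include <vector>
  bool needsChecking(unsigned I, unsigned J,
                     const std::vector<int> *PtrPartition) {
    if (!PtrPartition)
      return true; // no mapping: conservatively check every pair
    // Pointers in the same partition stay in the same distributed
    // loop, so no runtime check is needed between them.
    return (*PtrPartition)[I] != (*PtrPartition)[J];
  }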
The WIP version of the Loop Distribution pass in D6930 has been adapted
to use all this. See, for example, how
InstrPartitionContainer::computePartitionSetForPointers sets up the
partitions using the above API and then calls LAI::addRuntimeCheck
with the pointer partitions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231818 91177308-0d34-0410-b5e6-96231b3b80d8
Now the analysis won't "fail" if the memchecks exceed the threshold. It
is the transform pass' responsibility to perform the check.
This allows the transform pass to further analyze/eliminate the
memchecks. E.g. in Loop distribution we only need to check pointers
that end up in different partitions.
Note that there is a slight change of functionality here. The logic in
analyzeLoop is that if dependence checking fails due to non-constant
distance between the pointers, another attempt is made to prove safety
of the dependences purely using run-time checks.
Before this patch we could fail the loop due to exceeding the memcheck
threshold after the first step; now we only check the threshold in the
client after the full analysis. There is no measurable compile-time
effect but I wanted to record this here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231817 91177308-0d34-0410-b5e6-96231b3b80d8
The dependences are now exposed through the new getInterestingDependences
API, so we can use that with -analyze too and fix the FIXME.
This lets us remove the test that relied on -debug to check the
dependences.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231807 91177308-0d34-0410-b5e6-96231b3b80d8
Gather an array of interesting dependences rather than failing after
the first unsafe one and deeming the loop unsafe. Loop Distribution
needs to be able to collect all dependences in order to isolate the
dependence cycles into their own partition.
Since the dependence checking algorithm is quadratic in the number of
accesses sharing the same underlying pointer, I am applying a cut-off
threshold (MaxInterestingDependence). Beyond that, the logic reverts
to the original approach, deeming the loop unsafe upon encountering the
first unsafe dependence.
The main idea of the patch is to change isDependent: rather than
directly answering whether the dependence is safe for vectorization, it
now returns a dependence type, which is then mapped to the old boolean
result using Dependence::isSafeForVectorization.
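A hedged sketch of that split (enumerators illustrative; the real type
lives on the dependence checker):

  enum DepType { NoDep, Forward, BackwardVectorizable, Unsafe };
  // isDependent now returns a DepType; the old boolean answer is
  // recovered with a mapping along these lines:
  static bool isSafeForVectorization(DepType T) {
    return T != Unsafe;
  }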
Tested that this was compile-time neutral on SpecINT2006 LTO bitcode
inputs. No assembly change on the testsuite including external.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231806 91177308-0d34-0410-b5e6-96231b3b80d8
LoopDistribution needs to query various results of the dependence
analysis. This series will expose some more APIs and state of the
dependence checker.
This patch is a simple one that just exposes the DepChecker instance.
The patch set is compile-time neutral, measured with LTO bitcode files
of SpecINT2006. There is also no assembly change on the testsuite.
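A hedged shape sketch (placeholder types, names per the description
above):

  struct MemoryDepCheckerSketch { /* dependence-checker state */ };
  class LoopAccessInfoSketch {
    MemoryDepCheckerSketch DepChecker;
  public:
    // Expose the checker so Loop Distribution can query its results.
    const MemoryDepCheckerSketch &getDepChecker() const {
      return DepChecker;
    }
  };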
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231805 91177308-0d34-0410-b5e6-96231b3b80d8
This makes code that uses section-relative expressions (debug info)
simpler and less brittle.
This is still a bit awkward as the symbol is created late and has to be
stored in a mutable field.
I will move the symbol creation earlier in the next patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231802 91177308-0d34-0410-b5e6-96231b3b80d8
When tail merging, it may be necessary to remove MMOs from memory
operations to ensure later passes (e.g., MI sched) conservatively
compute dependencies. Currently, we only remove the MMO from the common
tail if the MMO doesn't match the corresponding instruction in the
non-common tail(s).
A more robust solution would be to add multiple MMOs from the duplicate MIs to
the new MI. Currently ScheduleDAGInstrs.cpp ignores all MMOs on instructions
with multiple MMOs, so this solution is equivalent for the time being.
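A hedged, self-contained sketch of the conservative rule (types
simplified; the real code works on MachineInstr memory operands):

  #include <vector>
  struct MemRef { unsigned Addr, Size; };
  inline bool operator==(const MemRef &A, const MemRef &B) {
    return A.Addr == B.Addr && A.Size == B.Size;
  }
  struct Inst { std::vector<MemRef> MMOs; };
  // Keep the common tail's MMOs only if the duplicate tail agrees; an
  // instruction with no MMOs is treated conservatively by later passes.
  void mergeMMOs(Inst &Common, const Inst &Dup) {
    if (!(Common.MMOs == Dup.MMOs))
      Common.MMOs.clear();
  }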
No test case included as this is incredibly difficult to reproduce.
Patch was a collaborative effort between Ana Pazos and myself.
Phabricator: http://reviews.llvm.org/D7769
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231799 91177308-0d34-0410-b5e6-96231b3b80d8
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.
This is the sibling patch for the Clang half of this change:
http://reviews.llvm.org/D8088
Differential Revision: http://reviews.llvm.org/D8086
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231794 91177308-0d34-0410-b5e6-96231b3b80d8