Summary:
When getConstant() is called for an expanded vector type, the constant is
split into multiple scalar constants, which are then combined using the
appropriate build_vector and bitcast operations.
In addition to the usual big/little endian differences, the case where the
element-order of the vector does not have the same endianness as the elements
themselves is also accounted for. For example, for v4i32 on big-endian MIPS,
the byte-order of the vector is <3210,7654,BA98,FEDC>. For little-endian, it is
<0123,4567,89AB,CDEF>.
Handling this case turns out to be a nop since getConstant() returns a
splatted vector (so reversing the element order doesn't change the value).
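As an illustration of the split (a minimal standalone sketch; the helper name
is hypothetical and this is not the actual SelectionDAG code), expanding a
v2i64 splat into v4i32 BUILD_VECTOR operands looks like this, with the piece
order depending on endianness:

  #include <cstdint>
  #include <vector>

  // Hypothetical helper, not LLVM code: produce the 32-bit BUILD_VECTOR
  // operands for a v2i64 splat of Elt when i64 is illegal but i32 is legal.
  std::vector<std::uint32_t> splitSplat64To32(std::uint64_t Elt,
                                              bool LittleEndian) {
    std::uint32_t Lo = std::uint32_t(Elt), Hi = std::uint32_t(Elt >> 32);
    std::vector<std::uint32_t> Pieces;
    for (int i = 0; i != 2; ++i) {   // two i64 elements in the vector
      Pieces.push_back(LittleEndian ? Lo : Hi);
      Pieces.push_back(LittleEndian ? Hi : Lo);
    }
    return Pieces;                   // the BUILD_VECTOR is bitcast back to v2i64
  }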
This fixes a number of cases in MIPS MSA where calling getConstant() during
operation legalization introduces illegal types (e.g. to legalize v2i64 UNDEF
into a v2i64 BUILD_VECTOR of illegal i64 zeros). It should also handle bigger
differences between illegal and legal types such as legalizing v2i64 into v8i16.
lowerMSASplatImm() in the MIPS backend no longer needs to avoid calling
getConstant(), so this function has been updated in the same patch.
For the sake of transparency, the steps I've taken since the review are:
* Added 'virtual' to isVectorEltOrderLittleEndian() as requested. This revealed
that the MIPS tests were falsely passing because a polymorphic function was
not actually polymorphic in the reviewed patch.
* Fixed the tests that were now failing. This involved deleting the code to
handle the MIPS MSA element-order (which was previously doing a byte-order
swap instead of an element-order swap). This left
isVectorEltOrderLittleEndian() unused, so it was deleted.
* Fixed build failures caused by rebasing beyond r194467-r194472. These build
failures arose because the bset, bneg, and bclr instructions added in those
commits used lowerMSASplatImm() in a way that was no longer valid after this
patch. Some of these were fixed by calling SelectionDAG::getConstant() instead;
others were fixed by a new function, getBuildVectorSplat(), that provides the
removed functionality of lowerMSASplatImm() in a more sensible way.
Reviewers: bkramer
Reviewed By: bkramer
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1973
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194811 91177308-0d34-0410-b5e6-96231b3b80d8
This is to avoid this transformation in some cases:
fold (conv (load x)) -> (load (conv*)x)
On architectures that don't natively support some vector
loads efficiently, casting the load to a smaller vector of
larger types and loading that instead is more efficient.
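As a scalar C++ analogy of what the fold means (illustration only; the real
combine operates on SelectionDAG nodes, not C++):

  #include <cstdint>
  #include <cstring>

  // conv (load x): load in the original type, then bitcast the value.
  float convOfLoad(const std::uint32_t *x) {
    std::uint32_t v = *x;            // load x
    float f;
    std::memcpy(&f, &v, sizeof f);   // conv of the loaded value
    return f;
  }

  // load (conv*)x: the same bytes are read directly in the destination type;
  // at the DAG level the pointer is simply reused with the new value type.
  float loadOfConv(const std::uint32_t *x) {
    float f;
    std::memcpy(&f, x, sizeof f);
    return f;
  }

Avoiding the fold keeps the load in its original, efficiently supported type
and leaves the conversion as a separate operation.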
Patch by Micah Villmow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194783 91177308-0d34-0410-b5e6-96231b3b80d8
Including only Debug.h did not cause a compilation error, but you couldn't
do anything (like writing something with <<) with the raw_ostreams returned
by llvm::dbgs() or llvm::errs() without also including raw_ostream.h. So
including it from Debug.h should make sense.
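A minimal usage sketch of the pattern this enables (nothing beyond Debug.h is
included):

  #include "llvm/Support/Debug.h"  // now provides raw_ostream's operator<< too

  void dumpCount(unsigned N) {
    // Before this change, these stream insertions needed an explicit
    // #include "llvm/Support/raw_ostream.h" to compile.
    llvm::dbgs() << "count = " << N << "\n";
  }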
Differential Revision: http://llvm-reviews.chandlerc.com/D2183
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194759 91177308-0d34-0410-b5e6-96231b3b80d8
This is useful for debugging issues in the BlockFrequency implementation since
one can easily visualize where probability mass and other errors occur in the
propagation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194654 91177308-0d34-0410-b5e6-96231b3b80d8
- readInt() should check that all 4 bytes can be read, not just 1 (see the
sketch below).
- In the event of false data in the gcno file, it was possible to index
into a non-existent element of a SmallVector, causing an assertion error.
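A sketch of the intended check (the layout and names here are illustrative,
not the actual GCOVBuffer code):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  // Reading a 32-bit word must verify that all 4 bytes are in bounds,
  // not just the first one. (Host-endian read shown for simplicity.)
  bool readInt(const std::uint8_t *Buf, std::size_t Size, std::size_t &Cursor,
               std::uint32_t &Val) {
    if (Cursor + 4 > Size)
      return false;
    std::memcpy(&Val, Buf + Cursor, 4);
    Cursor += 4;
    return true;
  }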
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194639 91177308-0d34-0410-b5e6-96231b3b80d8
According to the hazy gcov documentation, it appeared to be technically
possible for lines within a block to belong to different source files.
However, upon further investigation, gcov does not actually support
multiple source files for a single block.
This change removes a level of separation between blocks and lines by
replacing the StringMap of GCOVLines with a SmallVector of ints
representing line numbers. This also means that the GCOVLines class is
no longer needed.
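Roughly, the per-block storage changes along these lines (a simplified sketch
with std::vector standing in for SmallVector; names are illustrative):

  #include <cstdint>
  #include <vector>

  struct Block {
    // was: a StringMap of GCOVLines keyed by source filename
    std::vector<std::uint32_t> Lines;   // bare line numbers, one source file
    void addLine(std::uint32_t LineNum) { Lines.push_back(LineNum); }
  };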
This paves the way for supporting the "-a" option, which will output
block information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194637 91177308-0d34-0410-b5e6-96231b3b80d8
Unified the interface for read functions. They all return a boolean
indicating whether the read from the file succeeded. Functions that previously
returned the read value now store it into a variable that is passed in
by reference instead. Callers will need to check the return value to
detect if an error occurred.
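A caller-side sketch of the new convention (templated only to keep the
example self-contained; in-tree the reader is GCOVBuffer):

  #include <cstdint>

  template <typename Reader>
  bool readTwoWords(Reader &Buf, std::uint32_t &A, std::uint32_t &B) {
    if (!Buf.readInt(A))    // previously: A = Buf.readInt();
      return false;
    return Buf.readInt(B);  // every read must be checked for failure
  }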
Also added a new test which ensures that no assertions occur when the file
contains invalid data. llvm-cov should return with error code 1 upon
failure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194635 91177308-0d34-0410-b5e6-96231b3b80d8
instructions. This patch does not include the shift right and accumulate
instructions. A number of non-overloaded intrinsics have been removed in favor
of their overloaded counterparts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194598 91177308-0d34-0410-b5e6-96231b3b80d8
Accepting quotes is a property of an assembler, not of an object file. For
example, ELF can support any names for sections and symbols, but the gnu
assembler only accepts quotes in some contexts and llvm-mc in a few more.
LLVM should not produce different symbols based on a guess about which assembler
will be reading the code it is printing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194575 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a new scalar pass that reads a file with samples generated
by 'perf' during runtime. The samples read from the profile are
incorporated and emitted as IR metadata reflecting that profile.
The profile file is assumed to have been generated by an external
profile source. The profile information is converted into IR metadata,
which is later used by the analysis routines to estimate block
frequencies, edge weights and other related data.
External profile information files have no fixed format; each profiler
is free to define its own. This includes both the on-disk representation
of the profile and the kind of profile information stored in the file.
A common kind of profile is based on sampling (e.g., perf), which
essentially counts how many times each line of the program has been
executed during the run.
The SampleProfileLoader pass is organized as a scalar transformation.
On startup, it reads the file given in -sample-profile-file to
determine what kind of profile it contains. This file is assumed to
contain profile information for the whole application. The profile
data in the file is read and incorporated into the internal state of
the corresponding profiler.
To facilitate testing, I've organized the profilers to support two file
formats: text and native. The native format is whatever on-disk
representation the profiler wants to support; I think this will mostly
be bitcode files, but it could be anything. To support a native format,
every profiler must implement the SampleProfile::loadNative() function.
The text format is mostly meant for debugging. Records are separated by
newlines, but each profiler is free to interpret records as it sees fit.
Profilers must implement the SampleProfile::loadText() function.
Finally, the pass will call SampleProfile::emitAnnotations() for each
function in the current translation unit. This function needs to
translate the loaded profile into IR metadata, which the analyzer will
later be able to use.
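Putting the pieces above together, the interface looks roughly like this
(only the method names come from the description; the signatures and return
types shown are assumptions):

  class Function; // LLVM IR function, forward-declared for the sketch

  class ExternalProfile {
  public:
    virtual ~ExternalProfile() {}
    virtual bool loadNative() = 0;                 // profiler-defined format
    virtual bool loadText() = 0;                   // newline-separated records
    virtual void emitAnnotations(Function &F) = 0; // profile -> IR metadata
  };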
This patch implements the first steps towards the above design. I've
implemented a sample-based flat profiler. The format of the profile is
fairly simplistic. Each sampled function contains a list of relative
line locations (from the start of the function) together with a count
representing how many samples were collected at that line during
execution. I generate this profile using perf and a separate converter
tool.
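In other words, the in-memory shape of such a flat profile is roughly the
following (names are illustrative, not the classes in this patch):

  #include <cstdint>
  #include <map>
  #include <string>

  // line offset from the start of the function -> samples seen at that line
  typedef std::map<std::uint32_t, std::uint64_t> BodySamples;
  // function name -> its sampled lines
  typedef std::map<std::string, BodySamples> FlatSampleProfile;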
Currently, I have only implemented a text format for these profiles. I
am interested in initial feedback to the whole approach before I send
the other parts of the implementation for review.
This patch implements:
- The SampleProfileLoader pass.
- The base ExternalProfile class with the core interface.
- A SampleProfile sub-class using the above interface. The profiler
generates branch weight metadata on every branch instruction that
matches the profile.
- A text loader class to assist the implementation of
SampleProfile::loadText().
- Basic unit tests for the pass.
Additionally, the patch uses profile information to compute branch
weights based on instruction samples.
This patch converts instruction samples into branch weights. It
does a fairly simplistic conversion:
Given a multi-way branch instruction, it calculates the weight of
each branch based on the maximum sample count gathered from each
target basic block.
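A worked sketch of that conversion (an illustrative helper, not the pass's
actual code):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  // The weight contributed by one branch target is the largest per-line
  // sample count observed inside that target block.
  std::uint64_t targetWeight(const std::vector<std::uint64_t> &LineSamples) {
    if (LineSamples.empty())
      return 0;
    return *std::max_element(LineSamples.begin(), LineSamples.end());
  }
  // e.g. a conditional branch whose targets saw line samples {120, 97}
  // and {3} gets branch weights 120 and 3.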
Note that this assignment of branch weights is somewhat lossy and can be
misleading. If a basic block has more than one incoming branch, all the
incoming branches will get the same weight. In reality, it may be that
only one of them is the most heavily taken branch.
I will adjust this assignment in subsequent patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194566 91177308-0d34-0410-b5e6-96231b3b80d8
This bug only bit the C++98 build bots because all of the actual uses
really do move. ;] But not *quite* ready to do the whole C++11 switch
yet, so clean it up. Also add a unit test that catches this immediately.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194548 91177308-0d34-0410-b5e6-96231b3b80d8
more smarts in it. This is where most of the interesting logic that used
to live in the implicit-scheduling-hackery of the old pass manager will
live.
Like the previous commits, note that this is a very early prototype!
I expect substantial changes before this is ready to use.
The core of the design is the following:
- We have an AnalysisManager which can be used across a series of
passes over a module.
- The code setting up a pass pipeline registers the analyses available
with the manager.
- Individual transform passes can check that an analysis manager
provides the analyses they require in order to fail fast.
- There is *no* implicit registration or scheduling.
- Analysis passes are different from other passes: they produce an
analysis result that is cached and made available via the analysis
manager.
- Cached results are invalidated automatically by the pass managers.
- When a transform pass requests an analysis result, either the analysis
is run to produce the result or a cached result is provided (a rough
sketch of this follows the list).
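As a toy, self-contained sketch of that run-or-return-cached behaviour (not
the prototype's code; all names here are illustrative):

  #include <memory>
  #include <typeindex>
  #include <typeinfo>
  #include <unordered_map>

  class ToyAnalysisManager {
    std::unordered_map<std::type_index, std::shared_ptr<void>> Cache;

  public:
    // Either return the cached result of AnalysisT or run it now and cache it.
    template <typename AnalysisT, typename IRUnitT>
    typename AnalysisT::Result &getResult(IRUnitT &IR) {
      auto Key = std::type_index(typeid(AnalysisT));
      auto It = Cache.find(Key);
      if (It == Cache.end()) {
        auto Result =
            std::make_shared<typename AnalysisT::Result>(AnalysisT().run(IR));
        It = Cache.emplace(Key, std::move(Result)).first;
      }
      return *static_cast<typename AnalysisT::Result *>(It->second.get());
    }

    // What a transform pass triggers after mutating the IR.
    void invalidateAll() { Cache.clear(); }
  };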
There are a few aspects of this design that I *know* will change in
subsequent commits:
- Currently there is no "preservation" system; that needs to be added.
- All of the analysis management should move up to the analysis library.
- The analysis management needs to support at least SCC passes. Maybe
loop passes. Living in the analysis library will facilitate this.
- Need support for analyses which are *both* module and function passes.
- Need support for pro-actively running module analyses to have cached
results within a function pass manager.
- Need a clear design for "immutable" passes.
- Need support for requesting cached results when available, without
re-running the pass even when a re-run would be needed to produce a result.
- Need more thorough testing of all of this infrastructure.
There are other aspects that I view as open questions I'm hoping to
resolve as I iterate a bit on the infrastructure, and especially as
I start writing actual passes against this.
- Should we have separate management layers for function, module, and
SCC analyses? I think "yes", but I'm not yet ready to switch the code.
Adding SCC support will likely resolve this definitively.
- How should the 'require' functionality work? Should *that* be the only
way to request results to ensure that passes always require things?
- How should preservation work?
- Probably some other things I'm forgetting. =]
Look forward to more patches in short order now that this is in place.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194538 91177308-0d34-0410-b5e6-96231b3b80d8
Add user-supplied C runtime and compiler-rt library functions to
llvm.compiler.used to protect them from premature optimization by
passes like -globalopt and -ipsccp. Calls to (seemingly unused)
runtime library functions can be added by -instcombine and instruction
lowering.
Patch by Duncan Exon Smith, thanks!
Fixes <rdar://problem/14740087>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194514 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r194485.
The variable is unused in some macro instantiations, but not others. We should
probably fix clang to not warn on this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194486 91177308-0d34-0410-b5e6-96231b3b80d8
This will enable the PBQP register allocator to provide its own normalizing function.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194417 91177308-0d34-0410-b5e6-96231b3b80d8
Besides, this relates it more obviously to VirtRegAuxInfo::calculateSpillWeightAndHint.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194404 91177308-0d34-0410-b5e6-96231b3b80d8
Based on discussions with Lang Hames and Jakob Stoklund Olesen at the hackers' lab, and in the light of upcoming work on the PBQP register allocator, it was thought that CalcSpillWeights does not need to be a pass. This change will make it possible to customize / tune the spill weight computation depending on the allocator.
Update the documentation style while there.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194356 91177308-0d34-0410-b5e6-96231b3b80d8
This is still just a skeleton. I'm trying to pull together the
experimentation I've done into committable chunks, and this is the first
coherent one. Others will follow in hopefully short order that move this
more toward a useful initial implementation. I still expect the design
to continue evolving in small ways as I work through the different
requirements and features needed here though.
Keep in mind, all of this is off by default.
Currently, this mostly exercises the use of a polymorphic smart pointer
and templates to hide the polymorphism for the pass manager from the
pass implementation. The next step will be more significant, adding the
first framework of analysis support.
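Concretely, the pattern being exercised looks roughly like this (a
self-contained sketch, not the actual code; the names are hypothetical):

  #include <memory>
  #include <utility>

  // Passes are plain structs with a run() method; the wrapper supplies the
  // virtual dispatch so pass authors never see it.
  template <typename IRUnitT> class ToyPassWrapper {
    struct Concept {
      virtual ~Concept() {}
      virtual bool run(IRUnitT &IR) = 0;
    };
    template <typename PassT> struct Model : Concept {
      PassT Pass;
      explicit Model(PassT P) : Pass(std::move(P)) {}
      bool run(IRUnitT &IR) override { return Pass.run(IR); }
    };
    std::unique_ptr<Concept> Impl;

  public:
    template <typename PassT>
    ToyPassWrapper(PassT P) : Impl(new Model<PassT>(std::move(P))) {}
    bool run(IRUnitT &IR) { return Impl->run(IR); }
  };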
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194325 91177308-0d34-0410-b5e6-96231b3b80d8
give the files a legacy prefix in the right directory. Use forwarding
headers in the old locations to paper over the name change for most
clients during the transitional period.
No functionality changed here! This is just clearing some space to
reduce renaming churn later on with a new system.
Even when the new stuff starts to go in, it is going to be hidden behind
a flag and off-by-default as it is still WIP and under development.
This patch is specifically designed so that very little out-of-tree code
has to change. I'm going to work as hard as I can to keep that the case.
Only direct forward declarations of the PassManager class are impacted
by this change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194324 91177308-0d34-0410-b5e6-96231b3b80d8
unique ownership smart pointer which is *deep* copyable by assuming it
can call a T::clone() method to allocate a copy of the owned data.
This is mostly useful with containers or other collections of uniquely
owned data in C++98 where they *might* copy. With C++11 we can likely
remove this in favor of move-only types and containers wrapped around
those types.
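A minimal sketch of the idea (illustrative only, and written in current C++
rather than the C++98 the real header targets):

  #include <cassert>
  #include <utility>

  template <typename T> class clone_ptr {
    T *Ptr;

  public:
    explicit clone_ptr(T *P = nullptr) : Ptr(P) {}
    // Deep copy: ask the owned object to clone() itself.
    clone_ptr(const clone_ptr &Other)
        : Ptr(Other.Ptr ? Other.Ptr->clone() : nullptr) {}
    clone_ptr &operator=(clone_ptr Other) { // copy-and-swap
      std::swap(Ptr, Other.Ptr);
      return *this;
    }
    ~clone_ptr() { delete Ptr; }

    T &operator*() const { assert(Ptr); return *Ptr; }
    T *operator->() const { return Ptr; }
    explicit operator bool() const { return Ptr != nullptr; }
  };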
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194315 91177308-0d34-0410-b5e6-96231b3b80d8