Add a flag to defer vectorization into a phase after the inlining and its
CGSCC pass manager. This should insulate the inlining decisions from the
vectorization decisions; however, it may have both compile-time and code-size
problems, so it is just an experimental option right now.
Adding this based on a discussion with Arnold; it seems at least worth having
this flag so that we can run some experiments to see whether this strategy is
workable. It may solve some of the regressions seen with the loop vectorizer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184698 91177308-0d34-0410-b5e6-96231b3b80d8
There is some hope of eventually supporting a unified build with it, but
until then this lets me (and others) check it out in this location
without things breaking.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184697 91177308-0d34-0410-b5e6-96231b3b80d8
exponent_t is only used internally in APFloat, and no exponent_t values are
exposed via the APFloat API. Given that, there is no reason to gum up the llvm
namespace with the type, and moving it also makes it clearer that exponent_t
is associated with APFloat.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184686 91177308-0d34-0410-b5e6-96231b3b80d8
We no longer need alias analysis: the cases that alias analysis would handle
are now handled as accesses with a large dependence distance.
We can now vectorize loops with simple constant dependence distances, for
example:
  for (i = 8; i < 256; ++i) {
    a[i] = a[i+4] * a[i+8];
  }

  for (i = 8; i < 256; ++i) {
    a[i] = a[i-4] * a[i-8];
  }
We would now be able to vectorize about 200 more loops in the test suite,
although in many cases the cost model instructs us not to. Results on x86-64
are a wash.
I have seen one degradation in ammp. Interestingly, the function in which we
now vectorize a loop is never executed, so we are probably seeing instruction
cache effects. There is a 2% improvement in h264ref, and one or another TSVC
loop kernel speeds up as well.
radar://13681598
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184685 91177308-0d34-0410-b5e6-96231b3b80d8
This class checks dependences by subtracting two Scalar Evolution access
functions, allowing us to catch very simple linear dependences.
The checker assumes source order when determining whether vectorization is
safe; we currently don't reorder accesses.
Positive true dependences need to have a distance that is a multiple of VF,
otherwise we impede store-to-load forwarding.
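As an illustration of the last point, here is a minimal sketch of the
store-to-load forwarding constraint (a hypothetical helper, not the actual
checker interface; 'Distance' is the constant dependence distance in
iterations between a store and a later load):

  #include <cstdint>

  // A positive (true) dependence is only acceptable if its distance is a
  // multiple of the vectorization factor; otherwise vectorizing the loop
  // would impede store-to-load forwarding.
  static bool satisfiesStoreLoadForwarding(int64_t Distance, unsigned VF) {
    if (Distance <= 0)
      return true; // not a positive true dependence; covered by other rules
    return (Distance % VF) == 0;
  }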
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184684 91177308-0d34-0410-b5e6-96231b3b80d8
Sets of dependent accesses are built by unioning sets based on underlying
objects. This class will be used by the upcoming dependence checker.
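A rough sketch of the idea, using plain containers and hypothetical names
rather than the actual class:

  #include <map>
  #include <vector>

  // Accesses whose pointers trace back to the same underlying object are
  // unioned into one set; only accesses within the same set need to be
  // checked against each other for dependences.
  struct Access { const void *Ptr; const void *UnderlyingObject; bool IsWrite; };

  static std::map<const void *, std::vector<Access>>
  groupByUnderlyingObject(const std::vector<Access> &Accesses) {
    std::map<const void *, std::vector<Access>> Sets;
    for (const Access &A : Accesses)
      Sets[A.UnderlyingObject].push_back(A);
    return Sets;
  }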
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184683 91177308-0d34-0410-b5e6-96231b3b80d8
Until now we detected the vectorizable tree and evaluated the cost of the
entire tree. With this patch we can decide to trim off branches of the tree
that are not profitable to vectorize.
Also, increase the max depth from 6 to 12. In the worst possible case, where
all of the code forms diamond-shaped graphs, this can bring the cost to 2**10,
but diamonds are not very common.
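To illustrate the worst case above (a purely hypothetical counting function,
not the vectorizer's code): when every level of the graph is a diamond, a
tree-shaped walk re-explores the shared operand along both paths, so the
number of nodes visited roughly doubles per level, and the depth limit is
what bounds the cost.

  // Visits performed by a tree-shaped walk over a chain of 'Levels' diamonds.
  static unsigned long long visitsForDiamondChain(unsigned Levels) {
    if (Levels == 0)
      return 1;
    return 2 * visitsForDiamondChain(Levels - 1); // both paths re-explore
  }
  // visitsForDiamondChain(10) is 2**10, matching the bound mentioned above.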
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184681 91177308-0d34-0410-b5e6-96231b3b80d8
This makes it possible to write unit tests that are less susceptible
to minor code motion, particularly copy placement. block-placement.ll
covers this case with -pre-RA-sched=source which will soon be
default. One incorrectly named block is already fixed, but without
this fix, enabling new coalescing and scheduling would cause more
failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184680 91177308-0d34-0410-b5e6-96231b3b80d8
The RAII builder location guard is saving a reference to instructions, so we can't erase instructions during vectorization.
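A minimal sketch of the hazard, assuming the guard in question is IRBuilder's
InsertPointGuard (the function and variable names here are made up for
illustration):

  #include "llvm/IR/IRBuilder.h"

  static void emitReplacement(llvm::IRBuilder<> &Builder,
                              llvm::Instruction *Where,
                              llvm::Instruction *Dead) {
    Builder.SetInsertPoint(Dead);
    {
      // The guard captures the current insertion point, i.e. a reference
      // into the instruction list at 'Dead'.
      llvm::IRBuilder<>::InsertPointGuard Guard(Builder);
      Builder.SetInsertPoint(Where);
      // ... emit the vectorized code at 'Where' ...
      // Dead->eraseFromParent(); // would leave the guard's saved point dangling
    } // the guard restores the saved insertion point here
  }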
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184671 91177308-0d34-0410-b5e6-96231b3b80d8
This is an awful implementation of the target hook. But we don't have
abstractions yet for common machine ops, and I don't see any quick way
to make it table-driven.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184664 91177308-0d34-0410-b5e6-96231b3b80d8
Rewrote the SLP vectorizer as a whole-function vectorization pass. It is now able to vectorize chains across multiple basic blocks.
It still does not vectorize PHIs, but this should be easy to do now that we scan the entire function.
I removed the support for extracting values from trees.
We are now able to vectorize more programs, but there are some serious regressions in many workloads (such as flops-6 and mandel-2).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184647 91177308-0d34-0410-b5e6-96231b3b80d8
Although in reality the symbol table in ELF resides in a section, the
standard requires that there be no more than one SHT_SYMTAB. To enforce
this constraint, it is cleaner to group all the symbols under a
top-level `Symbols` key on the object file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184627 91177308-0d34-0410-b5e6-96231b3b80d8
The improperly aligned section content in the output was causing
buildbot failures. This should fix them.
Incidentally, this results in a simpler and more robust API for
ContiguousBlobAccumulator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184621 91177308-0d34-0410-b5e6-96231b3b80d8
We have no targets on trunk that bundle before regalloc. However, we
have been advertising regalloc as bundle safe for use with out-of-tree
targets. We need to at least contain the parts of the code that are
still unsafe.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184620 91177308-0d34-0410-b5e6-96231b3b80d8
It wouldn't really test anything that doesn't already have a more
targeted test:
`yaml2obj-elf-section-basic.yaml`:
Already tests that section content is correctly passed through.
`yaml2obj-elf-symbol-basic.yaml` (this file):
Tests that the st_value and st_size attributes of `main` are set
correctly.
Between those two tests, disassembling the file doesn't really add
anything, so just remove mention of disassembling the file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184607 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r184602. In an upcoming commit, I will just remove
the disassembler part of the test; it was mostly just a "nifty" thing
marking a milestone but it doesn't test anything that isn't tested
elsewhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184606 91177308-0d34-0410-b5e6-96231b3b80d8
A FastISel optimization was causing us to emit no information for such
parameters & when they go missing we end up emitting a different
function type. By avoiding that shortcut we not only get types correct
(very important) but also location information (handy) - even if it's
only live at the start of a function & may be clobbered later.
Reviewed/discussion by Evan Cheng & Dan Gohman.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184604 91177308-0d34-0410-b5e6-96231b3b80d8
This was causing buildbot failures when build without X86 support.
Is there a way to conditionalize the test on the X86 target being
present?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184597 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes very slow compile times when generating dwarf for assembly source
files that have been run through the 'C' pre-processor.
The implementation of SrcMgr.FindLineNumber() is slow but OK as long as it can
use its cache, i.e. when it is called multiple times with an SMLoc that is
forward of the previous call.
When generating dwarf for assembly source files that have been run through the
'C' pre-processor, we need to calculate the logical line number based on the
last parsed cpp hash file line comment. The current code calls
SrcMgr.FindLineNumber() twice to do this, which defeats the cache and results
in very slow compile times:
% time /Volumes/SandBox/build-llvm/Debug+Asserts/bin/llvm-mc -triple thumbv7-apple-ios -filetype=obj -o /tmp/x.o mscorlib.dll.E -g
672.542u 0.299s 11:13.15 99.9% 0+0k 0+2io 2106pf+0w
So we save the info from the last parsed cpp hash file line comment to avoid
making the second call to SrcMgr.FindLineNumber() most of the time, and end up
with compile times like:
% time /Volumes/SandBox/build-llvm/Debug+Asserts/bin/llvm-mc -triple thumbv7-apple-ios -filetype=obj -o /tmp/x.o mscorlib.dll.E -g
3.404u 0.104s 0:03.80 92.1% 0+0k 0+3io 2105pf+0w
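A rough sketch of the caching idea (the struct and function names here are
hypothetical, not the actual AsmParser code): remember the SMLoc and the
SrcMgr line number computed for the last cpp hash file line comment, so a
later query only needs one forward-moving, cache-friendly FindLineNumber()
call.

  #include "llvm/Support/SourceMgr.h"

  // State saved from the last parsed cpp hash file line comment.
  struct CppHashInfo {
    llvm::SMLoc Loc;          // location of the hash comment in the buffer
    unsigned FoundLineNumber; // cached SrcMgr line number of that location
    unsigned LineNumber;      // logical line number given by the comment
  };

  static unsigned getLogicalLine(const llvm::SourceMgr &SrcMgr,
                                 const CppHashInfo &Cache,
                                 llvm::SMLoc QueryLoc, unsigned BufNum) {
    // One FindLineNumber() call per query; the hash comment's own line
    // number comes from the cache instead of a second call. (The exact
    // offset arithmetic is illustrative.)
    unsigned CurLine = SrcMgr.FindLineNumber(QueryLoc, BufNum);
    return Cache.LineNumber + (CurLine - Cache.FoundLineNumber);
  }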
rdar://14156934
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184592 91177308-0d34-0410-b5e6-96231b3b80d8
Zero is used by BlockFrequencyInfo as a special "don't know" value. It also
acts as a sink for frequencies: once a frequency is zero, you can never get
off zero with more multiplies.
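Illustrative arithmetic only (not the BlockFrequencyInfo code): scaling a
frequency by a branch probability is a multiply and a divide, so a zero
frequency stays zero for every probability, and every block downstream
inherits it.

  // (0 * Num) / Den is 0 for any probability Num/Den, so zero propagates
  // to everything reachable from that block.
  static unsigned long long scaleByBranchProb(unsigned long long Freq,
                                              unsigned Num, unsigned Den) {
    return (Freq * Num) / Den;
  }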
This recovers a 10% regression on MultiSource/Benchmarks/7zip. A zero
frequency was propagated into an inner loop, causing excessive spilling.
PR16402.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184584 91177308-0d34-0410-b5e6-96231b3b80d8
IR for CUDA should use "nvptx[64]-nvidia-cuda", and IR for NV OpenCL should use "nvptx[64]-nvidia-nvcl".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184579 91177308-0d34-0410-b5e6-96231b3b80d8