In gcov, there's a -n/--no-output option, which disables the writing
of any .gcov files, so that it emits only the summary info on stdout.
This implements the same behaviour in llvm-cov.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208148 91177308-0d34-0410-b5e6-96231b3b80d8
This is similar to the getAlignment patch, but is done just for
completeness. It looks like we never call getSection on an alias. All the
tests still pass if the if is replaced with an assert.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208139 91177308-0d34-0410-b5e6-96231b3b80d8
remove it from the list of unspilled registers. Otherwise the subsequent
attempt to keep the stack aligned by picking an extra GPR register to
spill will not work, as it picks r11.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208129 91177308-0d34-0410-b5e6-96231b3b80d8
This removes arguments passed everywhere and allows the use of
standard iteration over lists.
Should be no functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208127 91177308-0d34-0410-b5e6-96231b3b80d8
Split from the musttail inliner change. This will be covered by an opt
test when the inliner change lands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208126 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When I initially introduced -pass-remarks, I thought it would be a
neat idea to make it additive. So, if one used it as:
$ llc -pass-remarks=inliner --pass-remarks=loop.*
the compiler would build the regular expression '(inliner)|(loop.*)'.
The more I think about it, the more I regret it. This is not how
other flags work. The standard semantics are right-to-left overrides.
This is how clang interprets -Rpass. And I think the two should be
compatible in this respect.
Reviewers: qcolombet
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3614
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208122 91177308-0d34-0410-b5e6-96231b3b80d8
Before this patch, for an ISD::BITCAST dag node performing a bitconvert from
type MVT::f64 to type MVT::v2i32, the backend always emitted a store+load
sequence to bitconvert the f64 input operand to i64. The resulting
i64 node was then used to build a v2i32 vector.
With this patch, the backend now produces a cheaper SCALAR_TO_VECTOR from
MVT::f64 to MVT::v2f64. That SCALAR_TO_VECTOR is then followed by a "free"
bitcast to type MVT::v4i32. The elements of the resulting
v4i32 are then extracted to build a v2i32 vector (which is illegal and
therefore promoted to MVT::v2i64).
This is in general cheaper than emitting a stack store+load sequence
to bitconvert the operand from type f64 to type i64.
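For illustration only (not taken from the patch), one way such an f64-to-v2i32
bitconvert can arise at the source level is a bit-preserving reinterpretation
of a double as a two-element i32 vector, e.g. with the GCC/Clang vector
extension:

  /* Hypothetical example: reinterpret the bits of a double as a
   * two-element i32 vector without changing the value. The memcpy is
   * the usual way to spell a bitcast in C and is normally folded into
   * a plain bitconvert by the optimizer. */
  typedef int v2i32 __attribute__((vector_size(8)));

  static v2i32 f64_bits_as_v2i32(double d) {
    v2i32 v;
    __builtin_memcpy(&v, &d, sizeof v);
    return v;
  }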
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208107 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc.).
So far, only the stack pointer is supported as a technology preview, but as
written, the intrinsic can already support all non-allocatable registers on
any architecture.
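As a sketch of the kind of source construct this enables (assuming an ARM or
AArch64 target, where the stack pointer register is named "sp", and Clang's
support for GNU-style global register variables):

  /* Hedged sketch: a named register variable bound to the stack
   * pointer. Reads of current_sp are meant to go through the new
   * read_register intrinsic rather than through memory. */
  register unsigned long current_sp __asm__("sp");

  unsigned long get_stack_pointer(void) {
    return current_sp;
  }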
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208104 91177308-0d34-0410-b5e6-96231b3b80d8
An alias has the address of what it points to, so it also has the same
alignment.
This allows a few optimizations to see past aliases for free.
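A small made-up example of the situation this covers:

  /* Illustration only: alias_var is just another name for real_var, so
   * it shares real_var's address and therefore its 16-byte alignment;
   * accesses through the alias can assume align 16. */
  int real_var __attribute__((aligned(16))) = 0;
  extern int alias_var __attribute__((alias("real_var")));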
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208103 91177308-0d34-0410-b5e6-96231b3b80d8
fall back to the normal path without a cpu. While doing this, fix
llc to just exit when we don't have a module to process, instead of
asserting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208102 91177308-0d34-0410-b5e6-96231b3b80d8
The fact that GlobalAlias::setAlignment exists at all is a side effect of
how the classes are organized; it should never be used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208094 91177308-0d34-0410-b5e6-96231b3b80d8
Obviously we can't expect the two backends to produce identical diagnostics,
since what's possible depends quite a bit on how the .td files are structured.
I think the ARM64 diagnostics are basically of the same quality in all the
changed cases, so I've split the CHECK lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208084 91177308-0d34-0410-b5e6-96231b3b80d8
I have found it useful in the past, and again now, to have a version of the .td
file where all the records are expanded. This adds a makefile rule to generate
it on demand.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208056 91177308-0d34-0410-b5e6-96231b3b80d8
Also, provide the ability to create temporary and non-temporary
declarations, as not all declarations may be replaced by definitions
later on.
This provides the necessary infrastructure for Clang to fix PR19598,
leaking temporary MDNodes in Clang's debug info generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208054 91177308-0d34-0410-b5e6-96231b3b80d8
The Win64 docs are very clear that anything larger than 8 bytes is
passed by reference, and GCC MinGW64 honors that for __modti3 and
friends.
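For example (illustration only, not from the patch), a 128-bit remainder
compiled for Win64:

  /* Sketch: __int128 is 16 bytes, so under the Win64 ABI it is passed
   * by reference, and the __modti3 call emitted for the % operator
   * must take its operands by pointer as well. */
  __int128 remainder128(__int128 a, __int128 b) {
    return a % b;
  }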
Patch by Jameson Nash!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208029 91177308-0d34-0410-b5e6-96231b3b80d8
On x64, windows.h doesn't include intrin.h for intrinsics. It just
declares them in the global namespace and uses them, expecting the
compiler to lower them as builtins. We basically need to do this in
clang, eventually.
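The pattern in question looks roughly like this (a hedged sketch; the real
declarations in the Windows headers differ in spelling):

  /* Sketch: the intrinsic is declared as an ordinary function in the
   * global namespace, with no intrin.h included, and then called; the
   * compiler is expected to recognize the name and expand it as a
   * builtin rather than emit a call. */
  unsigned long long __readgsqword(unsigned long Offset);

  unsigned long long read_gs_qword_at_0x30(void) {
    return __readgsqword(0x30);
  }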
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208023 91177308-0d34-0410-b5e6-96231b3b80d8
The number of tail-call-to-loop conversions remains the same (1618 by my count).
The new algorithm does a local scan over the use-def chains to identify local "alloca-derived" values, as well as points where the alloca could escape. Then, a visit over the CFG marks blocks as being before or after the allocas have escaped, and annotates the calls accordingly.
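As a made-up illustration of what the analysis has to establish:

  /* Hedged sketch: buf is an alloca-derived value whose address never
   * escapes before the recursive call, so that call sits in a
   * "before escape" block and can be marked tail and turned into a
   * loop. If buf were first handed to an external function such as
   * consume() below, everything past that point would be "after
   * escape" and the call could not be marked. */
  void consume(char *p);          /* hypothetical escape point */

  int count_down(int n) {
    char buf[8];
    buf[0] = (char)n;             /* purely local use of the alloca */
    if (n == 0)
      return 0;
    return count_down(n - 1);     /* candidate for tail-call-to-loop */
  }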
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208017 91177308-0d34-0410-b5e6-96231b3b80d8