This went somewhat unnoticed in the original implementation of discriminators,
but using DILexicalBlock to track discriminator changes could cause
instructions to end up in new, small DW_TAG_lexical_blocks.
Instead, use DILexicalBlockFile, which we already use to track file
changes without introducing new scopes; it works just as well for tracking
discriminator changes.
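A minimal sketch of the idea with the (modern) DIBuilder API, assuming
DIB, Scope, and File are already in hand:

  // The discriminator is attached to a DILexicalBlockFile, which reuses
  // its parent scope rather than opening a new DW_TAG_lexical_block.
  llvm::DILexicalBlockFile *WithDisc =
      DIB.createLexicalBlockFile(Scope, File, /*Discriminator=*/1);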
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216239 91177308-0d34-0410-b5e6-96231b3b80d8
isPow2DivCheap
That name doesn't specify signed or unsigned.
Lazy as I am, I eventually read the function and variable comments. It turns out that this is strictly about signed div. But I discovered that the comments are wrong:
srl/add/sra
is not the general sequence for signed integer division by power-of-2. We need one more 'sra':
sra/srl/add/sra
That's the sequence produced in DAGCombiner. The first 'sra' may be removed when dividing by exactly '2', but that's a special case.
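To make that concrete, here is the expansion written out in C++ for a
32-bit signed x / 8, i.e. k = 3 (a sketch, assuming two's complement and
an arithmetic right shift of signed values; the function name is made up):

  int divideBy8(int x) {
    int Sign = x >> 31;                         // sra: 0 or -1
    unsigned Bias = (unsigned)Sign >> (32 - 3); // srl: 0 or 7 (2^k - 1)
    return (x + (int)Bias) >> 3;                // add, then the final sra
  }

For x = -9 this computes (-9 + 7) >> 3 = -1, matching C++'s truncating
division; a bare -9 >> 3 would yield -2.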
This patch corrects the comments, changes the name of the flag bit, and changes the name of the accessor methods.
No functional change intended.
Differential Revision: http://reviews.llvm.org/D5010
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216237 91177308-0d34-0410-b5e6-96231b3b80d8
The advanced copy optimization does not yield any difference on the whole LLVM
test-suite + SPECs, either in compile time or runtime (binaries are identical),
but it has big potential when data go back and forth between register files, as
demonstrated with test/CodeGen/ARM/adv-copy-opt.ll.
Note: This was measured for both Os and O3 for armv7s, arm64, and x86_64.
<rdar://problem/12702965>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216236 91177308-0d34-0410-b5e6-96231b3b80d8
This is mostly achieved by providing the correct register class manually,
because getRegClassFor always returns the GPR*AllRegClass for MVT::i32 and
MVT::i64.
Also clean up the code to use the FastEmitInst_* method whenever possible. This
makes sure that the operands' register class is properly constrained. For all
the remaining cases this adds the missing constrainOperandRegClass calls for
each operand.
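For illustration, the shape of that pattern as a fragment (it would live
inside AArch64FastISel; the opcode and operands here are made up):

  // FastEmitInst_rr constrains both source registers to the classes the
  // ADDWrr descriptor expects, so no manual constrainOperandRegClass
  // calls are needed on this path.
  unsigned ResultReg =
      FastEmitInst_rr(AArch64::ADDWrr, &AArch64::GPR32RegClass,
                      LHSReg, /*Op0IsKill=*/true,
                      RHSReg, /*Op1IsKill=*/true);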
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216225 91177308-0d34-0410-b5e6-96231b3b80d8
This will simplify the SGPR spilling and also allow us to use
MachineFrameInfo for calculating offsets, which should be more
reliable than our custom code.
This fixes a crash in some cases where a register would be spilled
in a branch such that the VGPR defined for spilling did not dominate
all the uses when restoring.
This fixes a crash in an OpenCL conformance test. The test requires
register spilling and is too big to include.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216217 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a crash in an OpenCL conformance test. The test requires
register spilling and is too big to include.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216216 91177308-0d34-0410-b5e6-96231b3b80d8
There is a fundamental difference between how the gold API and lib/LTO view
the LTO process.
The gold API talks about a particular symbol in a particular file. The lib/LTO
API talks about a symbol in the merged module.
The merged module is then defined in terms of the IR semantics. In particular,
a linkonce_odr GV is only copied if it is used, since it is valid to drop
unused linkonce_odr GVs.
In the testcase in PR19901 both properties collide. What happens is that gold
asks us to keep a particular linkonce_odr symbol, but the IR linker doesn't
copy it to the merged module and we never have a chance to ask lib/LTO to keep
it.
This patch fixes it by having a more direct implementation of the gold API. If
it asks us to keep a symbol, we change the linkage so it is not linkonce. If it
says we can drop a symbol, we do so. All of this before we even send the module
to lib/Linker.
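A minimal sketch of the "keep" case (illustrative, not the actual gold
plugin code):

  #include "llvm/IR/GlobalValue.h"

  static void keepSymbol(llvm::GlobalValue &GV) {
    // Promote linkonce(_odr) to weak(_odr) so the IR linker can no
    // longer drop the GV as unused.
    if (GV.getLinkage() == llvm::GlobalValue::LinkOnceODRLinkage)
      GV.setLinkage(llvm::GlobalValue::WeakODRLinkage);
    else if (GV.getLinkage() == llvm::GlobalValue::LinkOnceAnyLinkage)
      GV.setLinkage(llvm::GlobalValue::WeakAnyLinkage);
  }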
Since we no longer have to produce LTO_SYMBOL_SCOPE_DEFAULT_CAN_BE_HIDDEN,
during symbol resolution we can use a temporary LLVMContext and do lazy
module loading. This allows us to keep the minimum possible amount of
allocated memory around. This should also allow as much parallelism as
we want, since there is no shared context.
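Sketched with today's IRReader convenience API (the plugin's real code
differs; the function name is illustrative):

  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/IRReader/IRReader.h"
  #include "llvm/Support/SourceMgr.h"
  #include <memory>

  void inspectSymbols(llvm::StringRef Path) {
    llvm::LLVMContext Ctx; // temporary context, shared with nothing
    llvm::SMDiagnostic Err;
    std::unique_ptr<llvm::Module> M =
        llvm::getLazyIRFileModule(Path, Err, Ctx); // bodies stay on disk
    if (!M)
      return;
    // Walk M->global_values() for symbol resolution; when M and Ctx go
    // out of scope, all memory for this file is released.
  }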
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216215 91177308-0d34-0410-b5e6-96231b3b80d8
We discussed the issue of generality vs. readability of the AVX512 classes
recently. I proposed this approach to try to hide and centralize the mappings
we commonly perform based on the vector type. A new class X86VectorVTInfo
captures these.
The idea is to pass an instance of this class to classes/multiclasses instead
of the corresponding ValueType. Then the class/multiclass can use its field
for things that derive from the type rather than passing all those as separate
arguments.
I modified avx512_valign to demonstrate this new approach. As you can see,
instead of 7 related template parameters we now have one. The downside is
that we have to refer to fields for the derived values. I named the argument
'_' in order to make this as invisible as possible. Please let me know if you
absolutely hate this. (Also once we allow local initializations in
multiclasses we can recover the original version by assigning the fields to
local variables.)
Another possible use-case for this class is to directly map things, e.g.:
RegisterClass KRC = X86VectorVTInfo<32, i16>.KRC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216209 91177308-0d34-0410-b5e6-96231b3b80d8
The profile data format was recently updated, and the new indexing API
requires the code coverage tool to know the function's hash as well
as the function's name to get the execution counts for a function.
Differential Revision: http://reviews.llvm.org/D4994
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216207 91177308-0d34-0410-b5e6-96231b3b80d8
The AdvSIMD pass may produce copies that are not coalescer-friendly. The
peephole optimizer knows how to fix that as demonstrated in the test case.
<rdar://problem/12702965>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216200 91177308-0d34-0410-b5e6-96231b3b80d8
There are two add-immediate instructions in Thumb1: tADDi8 and tADDi3. Only
the latter supports using different source and destination registers, so
whenever we materialize a new base register (at a certain offset) we'd do
so by moving the base register value to the new register and then adding in
place. This patch changes the code to use a single tADDi3 if the offset is
small enough to fit in 3 bits.
Differential Revision: http://reviews.llvm.org/D5006
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216193 91177308-0d34-0410-b5e6-96231b3b80d8
In both Clang and LLVM, this is a common pattern:
Size = sizeof(DeclRefExpr) + SomeExtraStuff;
void *Mem = Context.Allocate(Size, llvm::alignOf<DeclRefExpr>());
return new (Mem) DeclRefExpr(...);
The annoying thing is that because the default placement-new operator has a
nothrow specification, the compiler will insert a null check of Mem before
calling the DeclRefExpr constructor. This null check is redundant for us,
because we expect the allocation functions to never return null.
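For illustration, the rough shape of the annotation (a sketch; the actual
patch wraps the GCC/Clang attribute in a portability macro in
llvm/Support/Compiler.h):

  #ifndef __has_attribute
  # define __has_attribute(x) 0
  #endif
  #if __has_attribute(returns_nonnull)
  # define LLVM_ATTRIBUTE_RETURNS_NONNULL __attribute__((returns_nonnull))
  #else
  # define LLVM_ATTRIBUTE_RETURNS_NONNULL
  #endif

  #include <cstddef>

  // The optimizer may now assume the result is never null, so the check
  // guarding the placement-new constructor call can be deleted.
  LLVM_ATTRIBUTE_RETURNS_NONNULL void *Allocate(std::size_t Size,
                                                std::size_t Align);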
By annotating the allocator functions with returns_nonnull, we can optimize
away these checks. Compiling clang with a recent version of Clang and measuring
with:
$ perf stat -r20 bin/clang.patch -fsyntax-only -w gcc.c && perf stat -r20 bin/clang.orig -fsyntax-only -w gcc.c
Shows a 2.4% speed-up (+- 0.8%).
The pattern occurs in LLVM too. Measuring with -O3 (and now using bzip2.c
instead, because it's smaller):
$ perf stat -r20 bin/clang.patch -O3 -w bzip2.c && perf stat -r20 bin/clang.orig -O3 -w bzip2.c
Shows a 4.4% speed-up (+- 1%).
If anyone knows of a similar attribute we can use for MSVC, or some other
technique to get rid of the null check there, please let me know.
Differential Revision: http://reviews.llvm.org/D4989
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216192 91177308-0d34-0410-b5e6-96231b3b80d8
The FPv4-SP floating-point unit is generally referred to as
single-precision only, but it does have double-precision registers and
load, store and GPR<->DPR move instructions which operate on them.
This patch enables the use of these registers, the main advantage of
which is that we now comply with the AAPCS-VFP calling convention.
This partially reverts r209650, which added some AAPCS-VFP support,
but did not handle return values or alignment of double arguments in
registers.
This patch also adds tests for Thumb2 code generation for
floating-point instructions and intrinsics, which previously only
existed for ARM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216172 91177308-0d34-0410-b5e6-96231b3b80d8
It's not meant to be used with operator delete, and this avoids emitting virtual
dtors for every derived format object.
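The usual C++ pattern at work here, as a sketch (not the exact
llvm/Support/Format.h code):

  class format_object_base {
  public:
    // ... formatting interface ...
  protected:
    ~format_object_base() {} // protected and non-virtual: nobody can
                             // delete through a base pointer, and derived
                             // format objects carry no virtual dtor
  };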
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216170 91177308-0d34-0410-b5e6-96231b3b80d8
Currently only "add nsw" instructions are widened. This patch eliminates tons of "sext" instructions for 64-bit code (and the corresponding target code) in cases like:
int N = 100;
float **A;

void foo(int x0, int x1)
{
  float *A_cur = &A[0][0];
  float *A_next = &A[1][0];
  for (int x = x0; x < x1; ++x)
  {
    // Currently only the [x + N] case is widened; the other 2 cases
    // lead to sext. This patch fixes it, so none of the 3 cases needs
    // a sext.
    const float div = A_cur[x + N] + A_cur[x - N] + A_cur[x * N];
    A_next[x] = div;
  }
}
...
> clang++ test.cpp -march=core-avx2 -Ofast -fno-unroll-loops -fno-tree-vectorize -S -o -
Differential Revision: http://reviews.llvm.org/D4695
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216160 91177308-0d34-0410-b5e6-96231b3b80d8