128-bit lane crossings, not 'half' crossings. This came up in code
review ages ago, but I hadn't really addressed it. Also added some
documentation for the helper.
No functionality changed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218203 91177308-0d34-0410-b5e6-96231b3b80d8
actual support for complex AVX shuffling tricks. We can do independent
blends of the low and high 128-bit lanes of an AVX vector, so shuffle
the inputs into place and then do the blend at 256 bits. This will in
many cases remove one blend instruction.
The next step is to permute the low and high halves in-place rather than
extracting them and re-inserting them.
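For illustration, here is a rough intrinsics sketch of the trick
(user-level code for the picture only; the real work happens inside the
x86 shuffle lowering, not in source like this):

  // Shuffle within each 128-bit lane of each input independently, then
  // select whole lanes with one 256-bit blend instead of two 128-bit
  // blends plus an extract/insert pair. Compile with -mavx.
  #include <immintrin.h>

  static __m256 shuffle_then_blend(__m256 A, __m256 B) {
    __m256 a = _mm256_permute_ps(A, _MM_SHUFFLE(0, 1, 2, 3));
    __m256 b = _mm256_permute_ps(B, _MM_SHUFFLE(2, 3, 0, 1));
    // Immediate 0xF0 takes elements 0-3 from a and 4-7 from b.
    return _mm256_blend_ps(a, b, 0xF0);
  }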
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218202 91177308-0d34-0410-b5e6-96231b3b80d8
link.exe:
Fuzz testing has shown that COMMON symbols with size > 32 will always
have an alignment of at least 32 and all symbols with size < 32 will
have an alignment of at least the largest power of 2 less than the size
of the symbol.
binutils:
The BFD linker essentially works like link.exe but with alignment 4
instead of 32. The BFD linker also supports an extension to COFF which
adds an -aligncomm argument to the .drectve section, permitting a
precise alignment to be specified for a variable, but MC currently
doesn't support editing .drectve in this way.
With all of this in mind, we decide to play a little trick: we can
ensure that the alignment will be respected by bumping the size of the
global to its alignment.
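A standalone sketch of the trick (illustrative names, not the actual MC
code):

  #include <algorithm>
  #include <cstdint>

  // Per the fuzzing results above: the implied alignment of a COMMON
  // symbol is the largest power of two below its size, capped at 32
  // for link.exe (4 for BFD).
  static uint64_t impliedCommonAlignment(uint64_t Size,
                                         uint64_t Cap = 32) {
    uint64_t Align = 1;
    while (Align * 2 < Size && Align * 2 <= Cap)
      Align *= 2;
    return Align;
  }

  // The trick: bump the global's size up to its requested alignment so
  // the implied alignment is large enough (up to the linker's cap).
  static uint64_t bumpedCommonSize(uint64_t Size, uint64_t Align) {
    return std::max(Size, Align);
  }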
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218201 91177308-0d34-0410-b5e6-96231b3b80d8
under AVX.
This really just documents the current state of the world. I'm going to
try to flesh it out to cover any test cases I plan to improve prior to
improving them so that the delta made by changes is actually visible to
code reviewers.
This is made easier by the fact that I now have a script to automate the
process of producing test cases including the check lines. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218199 91177308-0d34-0410-b5e6-96231b3b80d8
single-input shuffles with doubles. This allows them to fold memory
operands into the shuffle, etc. This is just the analog to the v4f32
case in my prior commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218193 91177308-0d34-0410-b5e6-96231b3b80d8
instruction for single-vector floating point shuffles. This in turn
allows the shuffles to fold a load into the instruction which is one of
the common regressions hit with the new shuffle lowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218190 91177308-0d34-0410-b5e6-96231b3b80d8
We had a few bugs:
- We were considering the GVKind instead of just looking at the section
characteristics
- We would never print out 'y' when a section was meant to be unreadable
- We would never print out 's' when a section was meant to be shared
- We translated IMAGE_SCN_MEM_DISCARDABLE to 'n' when it should've meant
IMAGE_SCN_LNK_REMOVE
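Roughly, the corrected mapping looks like this (a sketch only, with
characteristics values from the PE/COFF spec, not the actual MC code):

  #include <cstdint>
  #include <string>

  constexpr uint32_t IMAGE_SCN_LNK_REMOVE = 0x00000800;
  constexpr uint32_t IMAGE_SCN_MEM_SHARED = 0x10000000;
  constexpr uint32_t IMAGE_SCN_MEM_READ   = 0x40000000;

  // Derive the directive letters from the section characteristics
  // alone, rather than from the GVKind.
  static std::string sectionFlagLetters(uint32_t Characteristics) {
    std::string Flags;
    if (!(Characteristics & IMAGE_SCN_MEM_READ))
      Flags += 'y'; // unreadable
    if (Characteristics & IMAGE_SCN_MEM_SHARED)
      Flags += 's'; // shared
    if (Characteristics & IMAGE_SCN_LNK_REMOVE)
      Flags += 'n'; // excluded from the final image
    return Flags;
  }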
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218189 91177308-0d34-0410-b5e6-96231b3b80d8
duplication of check lines. The idea is to have broad sets of
compilation modes that will frequently diverge without having to always
and immediately explode to the precise ISA feature set.
While this already helps due to VEX encoded differences, it will help
much more as I teach the new shuffle lowering about more of the new VEX
encoded instructions which can still be used to implement 128-bit
shuffles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218188 91177308-0d34-0410-b5e6-96231b3b80d8
This patch modifies RTDyldMemoryManager::getSymbolAddress(Name)'s behavior to
make it consistent with how clients are using it: Name should be mangled, and
getSymbolAddress should demangle it on the caller's behalf before looking the
name up in the process. This patch also fixes the one client
(MCJIT::getPointerToFunction) that had been passing unmangled names (by having
it pass mangled names instead).
Background:
RTDyldMemoryManager::getSymbolAddress(Name) has always used a re-try mechanism
when looking up symbol names in the current process. Prior to this patch
getSymbolAddress first tried to look up 'Name' exactly as the user passed it in
and then, if that failed, tried to demangle 'Name' and re-try the look up. The
implication of this behavior is that getSymbolAddress expected to be called with
unmangled names, and that handling mangled names was a fallback for convenience.
This is inconsistent with how clients (particularly the RuntimeDyldImpl
subclasses, but also MCJIT) usually use this API. Most clients pass in mangled
names, and succeed only because of the fallback case. For clients passing in
mangled names, getSymbolAddress's old behavior was actually dangerous, as it
could cause unmangled names in the process to shadow mangled names being looked
up.
For example, consider:
foo.c:
int _x = 7;
int x() { return _x; }
foo.o:
000000000000000c D __x
0000000000000000 T _x
If foo.c becomes part of the process (e.g. via dlopen("libfoo.dylib"))
it will add symbols 'x' (the function) and '_x' (the variable) to the
process. However, JIT clients looking for the function 'x' will be
using the mangled function name '_x' (note how function 'x' appears in
foo.o). When getSymbolAddress goes looking for '_x' it will find the
variable instead and return its address in place of the function's,
leading to JIT'd code calling the variable and crashing (if we're
lucky).
By requiring that getSymbolAddress be called with mangled names, and demangling
only when we're about to do a lookup in the process, the new behavior
implemented in this patch should eliminate any chance of names being shadowed
during lookup.
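A minimal sketch of the new lookup order (illustrative, not the actual
RTDyldMemoryManager code; assumes a Mach-O-style leading-underscore
global prefix and the common RTLD_DEFAULT extension):

  #include <dlfcn.h>
  #include <string>

  // The caller always passes the *mangled* name; strip the '_' prefix
  // only at the last moment, immediately before the process lookup, so
  // mangled names can no longer be shadowed as in the example above.
  static void *lookupInProcess(const std::string &MangledName) {
    std::string Name = MangledName;
    if (!Name.empty() && Name.front() == '_')
      Name.erase(0, 1); // dlsym expects the unprefixed C name
    return dlsym(RTLD_DEFAULT, Name.c_str());
  }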
There's no good way to test this at the moment: this issue only arises when
looking up process symbols (not JIT'd symbols). Any test case would have to
generate a platform-appropriate dylib to pass to llvm-rtdyld, and I'm not
aware of any in-tree tool for doing this in a portable way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218187 91177308-0d34-0410-b5e6-96231b3b80d8
This splits the logic for actually looking up coverage information
from the logic that displays it. These were tangled rather thoroughly
so this change is a bit large, but it mostly consists of moving things
around. The coverage lookup logic itself now lives in the library,
rather than being spread between the library and the tool.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218184 91177308-0d34-0410-b5e6-96231b3b80d8
This debug output is really for testing CoverageMappingReader, not the
llvm-cov tool. Move it to where it can be more useful.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218183 91177308-0d34-0410-b5e6-96231b3b80d8
A problem with our old behavior becomes observable under x86-64 COFF
when we need a read-only GV whose initializer refers to another symbol
via a relocation: we would mark the section as writable. Marking the
section as writable interferes with section merging.
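For instance (a minimal C++ illustration of the scenario, not taken
from the test suite):

  // 'p' is read-only at the language level, but its initializer is the
  // address of another symbol, so the object file needs a relocation
  // to fill it in. The old behavior put 'p' in a writable section.
  extern int x;
  const int *const p = &x;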
This fixes PR21009.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218179 91177308-0d34-0410-b5e6-96231b3b80d8
tricky case of single-element insertion into the zero lane of a zero
vector.
We can't just use the same pattern here as we do in every other vector
type because the general insertion logic can handle insertion into the
non-zero lane of the vector. However, in SSE4.1 with v4f32 vectors we
have INSERTPS that is a much better choice than the generic one for such
lowerings. But INSERTPS can do lots of other lowerings as well so
factoring its logic into the general insertion logic doesn't work very
well. We also can't just extract the core common part of the general
insertion logic that is faster (forming VZEXT_MOVL synthetic nodes that
lower to MOVSS when they can) because VZEXT_MOVL is often *faster* than
a blend while INSERTPS is slower! So instead we place a restrictive
condition on attempting to use the generic insertion logic, narrowing it
to those cases where VZEXT_MOVL won't need a shuffle afterward and thus
will do better than INSERTPS. Then we try blending. Then we go back to
INSERTPS.
This still doesn't generate perfect code for some silly reasons that can
be fixed by tweaking the td files for lowering VZEXT_MOVL to use
XORPS+BLENDPS when available rather than XORPS+MOVSS when the input ends
up in a register rather than a load from memory -- BLENDPSrr has twice
the reciprocal throughput of MOVSSrr. Don't you love this ISA?
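To make the trade-off concrete, here is an intrinsics sketch of the two
ways to build <x, 0, 0, 0> that are being weighed (user-level
illustration only; requires SSE4.1 for INSERTPS):

  #include <smmintrin.h>

  static __m128 viaMovss(__m128 x) {
    // XORPS + MOVSS: zero a register, then move the low element in.
    return _mm_move_ss(_mm_setzero_ps(), x);
  }

  static __m128 viaInsertps(__m128 x) {
    // One INSERTPS: keep lane 0 of x, zero lanes 1-3 (zmask 0b1110).
    return _mm_insert_ps(x, x, 0x0E);
  }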
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218177 91177308-0d34-0410-b5e6-96231b3b80d8
analysis used elsewhere. This removes the last duplicate of this logic.
Also simplify the code here quite a bit. No functionality changed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218176 91177308-0d34-0410-b5e6-96231b3b80d8
floating point types and use it for both v2f64 and v2i64 single-element
insertion lowering.
This fixes the last non-AVX performance regression test case I've gotten
for the new vector shuffle lowering. There is an obvious analogous
lowering for v4f32 that I'll add in a follow-up patch (because with
INSERTPS, v4f32 requires special treatment). After that, it's AVX stuff.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218175 91177308-0d34-0410-b5e6-96231b3b80d8
vector lanes can be modeled as zero with a call to the new function that
computes a bit-vector representing that information.
No functionality changed here, but will allow doing more clever things
with the zero-test.
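Roughly, the idea is the following (a standalone sketch, not the
in-tree helper's actual signature):

  #include <bitset>
  #include <vector>

  // Given a shuffle mask and per-lane known-zero bits for the two
  // inputs, compute which output lanes are guaranteed to be zero.
  static std::bitset<16>
  computeZeroableLanes(const std::vector<int> &Mask,
                       const std::bitset<16> &ZeroV1,
                       const std::bitset<16> &ZeroV2) {
    std::bitset<16> Zeroable;
    for (unsigned i = 0, e = Mask.size(); i != e; ++i) {
      int M = Mask[i];
      if (M < 0)
        Zeroable.set(i); // undef lanes may be treated as zero
      else if (M < (int)e)
        Zeroable[i] = ZeroV1[M]; // lane comes from the first input
      else
        Zeroable[i] = ZeroV2[M - e]; // lane from the second input
    }
    return Zeroable;
  }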
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218174 91177308-0d34-0410-b5e6-96231b3b80d8
I just tried reproducing some of the optimization failures in README.txt in the
X86 backend, and many of them could not be reproduced. In general the entire
file appears quite bit-rotted; whatever interesting parts remain should
be moved to Bugzilla and the rest deleted. I did not spend the time to
do that, so I just deleted the few obsolete ones I tried to reproduce,
to save some time for whoever finds the courage to do it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218170 91177308-0d34-0410-b5e6-96231b3b80d8
When looking through sign/zero-extensions the code would always assume there is
such an extension instruction and use the wrong operand for the address.
There was also a minor issue in the handling of 'AND' instructions. I
accidentally used a 'cast' instead of a 'dyn_cast'.
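The distinction, for reference (an illustrative snippet against the
LLVM headers, not the actual FastISel code):

  #include "llvm/IR/Instructions.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  void lookThroughAnd(Value *V) {
    // cast<> asserts that V really is a BinaryOperator; dyn_cast<>
    // instead returns null, which is what you want when V may be
    // any Value at all.
    if (auto *BO = dyn_cast<BinaryOperator>(V)) {
      (void)BO; // safe: non-null only when V is a BinaryOperator
    }
    // cast<BinaryOperator>(V) here would assert on any other Value.
  }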
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218161 91177308-0d34-0410-b5e6-96231b3b80d8
it from the shuffle pattern matching logic.
Also cleaned up variable names, comments, etc. No functionality changed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218152 91177308-0d34-0410-b5e6-96231b3b80d8
In r217636, the value stored in KernelInfo.Num[VS]GPRs was changed from
the highest GPR index used to the number of gprs in order to be
consistent with the name of the variable.
The code writing the config values still assumed that the value in this
variable was the highest GPR index used, which caused the compiler to
over-report the number of GPRs being used.
https://bugs.freedesktop.org/show_bug.cgi?id=84089
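In other words (a toy illustration of the off-by-one, not the actual SI
code):

  // After r217636 the variable holds a count, not an index, so the
  // config writer must not add 1 any more.
  unsigned reportNumVGPRs(unsigned HighestVGPRIndexUsed) {
    unsigned NumVGPRs = HighestVGPRIndexUsed + 1; // a count, e.g. 24
    return NumVGPRs; // old code effectively returned NumVGPRs + 1
  }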
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218150 91177308-0d34-0410-b5e6-96231b3b80d8
lowering to support both anyext and zext and to custom lower for many
different microarchitectures.
Using this allows us to get *exactly* the right code for zext and anyext
shuffles in all the vector sizes. For v16i8, the improvement is *huge*.
I refused to add the new SSE2 test case before this change because it
was sooooo many instructions.
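To make the zext/anyext distinction concrete, here is a rough SSE2
intrinsics sketch (user-level illustration, not the lowering itself):

  #include <emmintrin.h>

  // Zero-extension must interleave with a zero vector; any-extension
  // can interleave with anything, since the high bytes are don't-care.
  static __m128i zext_lo_8_to_16(__m128i v) {
    return _mm_unpacklo_epi8(v, _mm_setzero_si128());
  }

  static __m128i anyext_lo_8_to_16(__m128i v) {
    return _mm_unpacklo_epi8(v, v);
  }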
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218143 91177308-0d34-0410-b5e6-96231b3b80d8
Having create* functions return the object they create is more
readable than using an in-out parameter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218139 91177308-0d34-0410-b5e6-96231b3b80d8
Since llvm-cov shows the source file in its output, be careful about
potentially matching the check lines themselves.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218138 91177308-0d34-0410-b5e6-96231b3b80d8
To reduce the size of -gmlt data, skip subprograms without any inlined
subroutines, since we've now got the ability to make that determination
in the backend. (Funnily enough, we added the flag so we wouldn't
produce ranges under -gmlt, but with this change we use the flag yet go
back to producing ranges under -gmlt.)
Instead, just produce CU ranges to inform the consumer which parts of
the code are described by this CU's line table. Tools could inspect the
line table directly to compute the range, but the CU ranges only seem to
be about 0.5% of object/executable size, so I'm not too worried about
teaching llvm-symbolizer that trick just yet - it's certainly a possible
piece of future work.
Update an llvm-symbolizer test just to demonstrate that this schema is
acceptable there. (If it weren't, the compiler-rt tests would catch it,
but it's good to have an in-llvm-tree test for llvm-symbolizer's
behavior here.)
Building the clang binary with -gmlt with this patch reduces the total
size of object files by 5.1% (5.56% without ranges) without compression
and the executable by 4.37% (4.75% without ranges).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218129 91177308-0d34-0410-b5e6-96231b3b80d8