This adds FK_SecRel_2 relocation support to ARM. This enables building object
files for armv7-windows-msvc, which uses CodeView line tables for debugging,
as opposed to armv7-windows-itanium, which currently uses DWARF.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208273 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Vectors built with zeros and elements in the same order as another
(source) vector are optimized to be built using a single insertps
instruction.
Also optimize when we move one element in a vector to a different place
in that vector while zeroing out some of the other elements.
Further optimizations are possible, described in TODO comments.
I will be implementing at least some of them in the near future.
Added some tests for different cases where this optimization triggers.
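As a hedged illustration (not one of the added tests; the function name and
intrinsic usage are mine), here is the kind of result this combine targets,
written in C++ with SSE4.1 intrinsics: one source element is moved to a
different lane and the remaining lanes are zeroed, all in a single insertps.

#include <smmintrin.h>

// Take element 2 of 'src', place it in lane 0 of the result, and zero the
// remaining lanes. imm8 encodes: source lane in bits [7:6], destination lane
// in bits [5:4], zero mask in bits [3:0].
__m128 pick_and_zero(__m128 src) {
  return _mm_insert_ps(_mm_setzero_ps(), src, (2 << 6) | (0 << 4) | 0xE);
}

With this change, IR that builds such a vector from zeros plus elements of
another vector can be matched to that single instruction instead of a longer
shuffle sequence.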
Reviewers: nadav, delena, craig.topper
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3521
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208271 91177308-0d34-0410-b5e6-96231b3b80d8
The change to ExtractGV.cpp has no functionality change except to avoid
the asserts. Existing testcases already cover this, so I didn't add a
new one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208264 91177308-0d34-0410-b5e6-96231b3b80d8
Visibilities of `hidden` and `protected` are meaningless for symbols
with local linkage.
- Change the assembler to reject non-default visibility on symbols
with local linkage.
- Change the bitcode reader to auto-upgrade `hidden` and `protected`
to `default` when the linkage is local (see the sketch below).
- Update LangRef.
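A minimal sketch of the auto-upgrade direction, assuming the usual GlobalValue
API (this is not the exact bitcode reader code):

#include "llvm/IR/GlobalValue.h"

// Visibility is meaningless with local linkage, so anything other than
// `default` is dropped when reading old bitcode.
static void upgradeLocalVisibility(llvm::GlobalValue &GV) {
  if (GV.hasLocalLinkage())
    GV.setVisibility(llvm::GlobalValue::DefaultVisibility);
}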
<rdar://problem/16141113>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208263 91177308-0d34-0410-b5e6-96231b3b80d8
`ModuleLinker::getLinkageResult()` shouldn't create symbols with local
linkage and non-default visibility -- in fact, symbols with local
linkage shouldn't be merged at all. Assert to that effect.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208262 91177308-0d34-0410-b5e6-96231b3b80d8
Since visibility is meaningless for symbols with local linkage, check
local linkage before visibility when setting symbol attributes.
When linkage is `internal` and the visibility is `hidden`, the exposed
attribute is now `LTO_SYMBOL_SCOPE_INTERNAL` instead of
`LTO_SYMBOL_SCOPE_HIDDEN`. Although the bitfield allows *both* to be
specified, the combination is nonsense anyway.
Given changes (in progress) to drop visibility when a symbol has local
linkage, this has almost no functionality change: it's mostly a cleanup
to clarify the logic.
The exception is when something has `appending` linkage. Before this
change, such symbols would be advertised as `LTO_SYMBOL_SCOPE_INTERNAL`;
now, they'll be given `LTO_SYMBOL_SCOPE_COMMON`.
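A hedged sketch of the check order described above, using the scope values
from llvm-c/lto.h (the function is illustrative, not the actual LTOModule
code):

#include "llvm-c/lto.h"
#include "llvm/IR/GlobalValue.h"

// Local linkage is checked before visibility, so `internal hidden` is
// advertised as LTO_SYMBOL_SCOPE_INTERNAL rather than ..._HIDDEN.
static unsigned scopeFor(const llvm::GlobalValue &GV) {
  if (GV.hasLocalLinkage())
    return LTO_SYMBOL_SCOPE_INTERNAL;
  switch (GV.getVisibility()) {
  case llvm::GlobalValue::HiddenVisibility:
    return LTO_SYMBOL_SCOPE_HIDDEN;
  case llvm::GlobalValue::ProtectedVisibility:
    return LTO_SYMBOL_SCOPE_PROTECTED;
  default:
    return LTO_SYMBOL_SCOPE_DEFAULT;
  }
}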
Unfortunately this is really awkward to test. This only changes what we
advertise to linkers (before running LTO), not what the final object
looks like. In theory I could add `DEBUG` output to `llvm-lto` (and
test with "REQUIRES: asserts"), but follow-up commits to disallow
`internal hidden` simplify this anyway.
<rdar://problem/16141113>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208261 91177308-0d34-0410-b5e6-96231b3b80d8
Prior to r208252, the FMA 231 family was marked as isCommutable. However, the
memory variants of this family are not commutable. Therefore, we did not
implement findCommutedOpIndices for those variants and missed that the default
implementation (more or less: commute indices 1 and 2) was firing behind our
back.
As a result, as demonstrated in the test case before the fix, we were
transforming a = b * c + a into a = a * c + b.
I.e., before r208252 we were generating for this test case:
vmovaps %xmm0, %xmm1
vmovss (%rsi), %xmm0
vfmadd231ss (%rdi), %xmm1, %xmm0
Instead of:
vmovss (%rsi), %xmm1
vfmadd231ss (%rdi), %xmm1, %xmm0
<rdar://problem/16800495>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208260 91177308-0d34-0410-b5e6-96231b3b80d8
Have RuntimeDyldImpl::resolveExternalSymbols keep the relocation entries it
applies.
Prior to this patch, RuntimeDyldImpl::resolveExternalSymbols discarded
relocations for external symbols once they had been applied. This causes issues
if the client calls MCJIT::finalizeLoadedModules more than once and updates the
location of any symbols in between (e.g. by calling MCJIT::mapSectionAddress).
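A hedged sketch of the client pattern this fixes, written against the public
ExecutionEngine interface (finalizeObject forwards to MCJIT's finalization;
the addresses are hypothetical):

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include <cstdint>

// Finalize, move a section, and finalize again; the second finalization must
// re-apply the external relocations against the updated addresses, which only
// works if the relocation entries were kept around.
void remapAndRefinalize(llvm::ExecutionEngine &EE, const void *LocalAddr,
                        uint64_t NewTargetAddr) {
  EE.finalizeObject();
  EE.mapSectionAddress(LocalAddr, NewTargetAddr);
  EE.finalizeObject();
}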
No test case yet: None of our in-tree memory managers support moving sections
around. I'll have to hack up a dummy memory manager before I can write a unit
test.
Fixes <rdar://problem/16764378>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208257 91177308-0d34-0410-b5e6-96231b3b80d8
The loop stream detector (LSD) on modern Intel cores, which optimizes the
execution of small loops, has limits on the number of taken branches in
addition to uop-count limits (modern AMD cores have similar limits).
Unfortunately, at the IR level, estimating the number of branches that will be
taken is difficult. For one thing, it strongly depends on later passes (block
placement, etc.). The original implementation took a conservative approach and
limited the maximal BB DFS depth of the loop. However, fairly extensive
benchmarking by several of us has revealed that this is the wrong approach. In
fact, there are zero known cases where the branch limit prevents a detrimental
unrolling (but plenty of cases where it does prevent beneficial unrolling).
While we could improve the current branch counting logic by incorporating
branch probabilities, this further complication seems unjustified without a
motivating regression. Instead, unless and until a regression appears, the
branch counting will be removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208255 91177308-0d34-0410-b5e6-96231b3b80d8
Given an FMA family (e.g., 213, 231), not all the variants (i.e., register or
memory) are commutable.
E.g., for the 213 family (with the syntax src1, src2, src3):
fmaXXX213 A, B, reg3/mem3 == fmaXXX213 B, A, reg3/mem3
Now consider the 231 family:
fmaXXX231 A, B, reg3 == fmaXXX231 A, reg3, B
But
fmaXXX231 A, B, mem3 != fmaXXX231 A, mem3, B
Indeed, mem3 cannot be the second argument of the memory variant of fmaXXX231.
Working on a reduced test case!
<rdar://problem/16800495>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208252 91177308-0d34-0410-b5e6-96231b3b80d8
OnDiskHashTable::insert() calls the Item constructor via placement new, but
nothing called the destructor. This matters when the Info template parameter
has key_type or data_type typedefs with a non-trivial destructor, for example
IdentifierIndexWriterTrait in clang's GlobalModuleIndex.cpp.
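A hedged illustration of the general pattern (not the OnDiskHashTable code
itself): anything constructed with placement new has to be destroyed
explicitly, otherwise members such as a std::string key leak.

#include <new>
#include <string>

struct Item {
  std::string Key; // non-trivial destructor owns heap memory
  explicit Item(std::string K) : Key(std::move(K)) {}
};

void constructAndDestroy(void *Buffer) { // assumed suitably sized and aligned
  Item *I = new (Buffer) Item("example"); // placement new: no automatic cleanup
  I->~Item();                             // destructor must be called explicitly
}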
This fixes a 5-year-old bug that has been around since the OnDiskHashTable code
was added in r64192. Bug found by LSan!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208243 91177308-0d34-0410-b5e6-96231b3b80d8
When reducing the bitwidth of a comparison against a constant, the
original setcc's result type was used, which was incorrect.
No test since I don't think any other in-tree targets change the
bitwidth of the setcc type depending on the bitwidth of the compared
type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208236 91177308-0d34-0410-b5e6-96231b3b80d8
To compute the dimensions of the array in a unique way, we split the
delinearization analysis into three steps:
- find parametric terms in all memory access functions
- compute the array dimensions from the set of terms
- compute the delinearized access functions for each dimension
The first step is executed on all the memory access functions so that we
gather all the patterns in which an array is accessed. The second step reduces
all this information into a unique description of the sizes of the array. The
third step delinearizes each memory access function following the common
description of the shape of the array computed in step 2.
This rewrite of the delinearization pass also solves a problem we had with the
previous implementation: because the previous algorithm was by induction on the
structure of the SCEV, it would not correctly recognize the shape of the array
when the memory access was not following the nesting of the loops: for example,
see polly/test/ScopInfo/multidim_only_ivs_3d_reverse.ll
; void foo(long n, long m, long o, double A[n][m][o]) {
;
; for (long i = 0; i < n; i++)
; for (long j = 0; j < m; j++)
; for (long k = 0; k < o; k++)
; A[i][k][j] = 1.0;
Starting with this patch we no longer delinearize access functions that do not
contain parameters, for example in test/Analysis/DependenceAnalysis/GCD.ll
;; for (long int i = 0; i < 100; i++)
;; for (long int j = 0; j < 100; j++) {
;; A[2*i - 4*j] = i;
;; *B++ = A[6*i + 8*j];
These accesses will not be delinearized, as the upper bounds of the loops are
constants and their access functions do not contain SCEVUnknown parameters.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208232 91177308-0d34-0410-b5e6-96231b3b80d8
Make the default architecture for reasonable modern x86 processors actually be
modern. This processor model should essentially be "tuned" for modern x86 chips
as much as possible without undue penalties on any specific architecture.
Previously we weren't even using the nice scheduling models. There are a few
other tweaks needed here, but at least I have benchmarked this change across a
decent swath of chips (Intel's Clovertown, Westmere, and Sandy Bridge; AMD's
Istanbul) and seen no significant regressions.
If anyone has suggestions for ways to test this, just let me know. Somewhat
alarmingly, no existing tests failed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208230 91177308-0d34-0410-b5e6-96231b3b80d8
This patch disables the dead register elimination pass and the load/store pair
optimization pass at -O0. The ILP optimizations don't require the optimization
level to be checked because the call to addILPOpts is predicated with the
necessary check. The AdvSIMDScalar pass is disabled by default at all
optimization levels; this patch leaves that pass disabled by default.
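A minimal sketch of the gating idea, with a hypothetical helper name (the real
change predicates the pass additions inside the target's pass configuration):

#include "llvm/Support/CodeGen.h"

// Target-specific optimization passes such as dead register elimination and
// load/store pair optimization are only worth adding when optimizing.
static bool shouldAddTargetOptPasses(llvm::CodeGenOpt::Level OptLevel) {
  return OptLevel != llvm::CodeGenOpt::None;
}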
Also, move command-line options into ARM64TargetMachine.cpp and add a few
additional flags to aid in debugging. This fixes an issue with the
-debug-pass=Structure flag where passes were printed, but not actually run
(i.e., AdvSIMDScalar pass).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208223 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
These processors will only be available for the integrated assembler at
first (CodeGen will emit a fatal error saying they are not implemented).
The intention is to work through the existing instructions and correctly
annotate the ISA they were added in so that we have a sufficiently good
base to start MIPS64r6 development. MIPS64r6 removes/re-encodes certain
instructions and I believe it is best to define ISAs using set unions as far
as possible rather than set subtraction.
Reviewers: vmedic
Subscribers: emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D3569
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208221 91177308-0d34-0410-b5e6-96231b3b80d8
This is a follow-up to r208171, where a call to make_unique was
disambiguated for MSVC. Disambiguate two more calls, and remove the
comment about it since this is what we do everywhere.
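A hedged illustration of the ambiguity (the call site below is made up): when
both llvm::make_unique and MSVC's std::make_unique are visible through
using-directives, an unqualified call does not compile, so the calls are
qualified explicitly.

#include "llvm/ADT/STLExtras.h" // llvm::make_unique
#include <memory>               // std::make_unique in MSVC's library

using namespace llvm;
using namespace std;

void example() {
  // An unqualified make_unique<int>(42) would be ambiguous here.
  auto P = llvm::make_unique<int>(42);
  (void)P;
}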
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208219 91177308-0d34-0410-b5e6-96231b3b80d8
When performing a scalar comparison that feeds into a vector select,
it's actually better to do the comparison on the vector side: the
scalar route would be "CMP -> CSEL -> DUP", while the vector route is
"CM -> DUP", since the vector comparisons are all mask based.
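A hedged C++ example of the pattern in question (not a committed test case): a
scalar comparison whose result selects between two vectors, which is better
lowered as a vector compare plus a mask-based select.

#include <arm_neon.h>

// The scalar compare 'a < b' feeds a select between two v4f32 values.
float32x4_t selectByScalar(float a, float b, float32x4_t x, float32x4_t y) {
  return a < b ? x : y;
}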
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208210 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
One small functional change: the recently added PAUSE instruction now has
the HasStdEnc predicate, which had been accidentally removed by a Requires<>.
Depends on D3640
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3641
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208209 91177308-0d34-0410-b5e6-96231b3b80d8
The AAPCS states that values passed in registers must have the same value as
though they had been loaded with "LDR". LDR is equivalent to "LD1.64 vX.1D" -
that is, loading scalars into vector registers and loading 1-element vectors
are equivalent.
The logic implemented here is to ensure that at all call boundaries and during
formal argument lowering all vectors are treated as their bitwidth-based floating
point scalar counterpart, which is always one of f64 or f128 (v2i32 -> f64,
v4i32 -> f128 etc). A BITCAST is inserted so that the appropriate REV will be
generated during code generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208198 91177308-0d34-0410-b5e6-96231b3b80d8
Because we've canonicalised on using LD1/ST1, every time we do a bitcast
between vector types we must do an equivalent lane reversal.
Consider a simple memory load followed by a bitconvert then a store.
v0 = load v2i32
v1 = BITCAST v2i32 v0 to v4i16
store v4i16 v1
In big endian mode every memory access has an implicit byte swap. LDR and
STR do a 64-bit byte swap, whereas LD1/ST1 do a byte swap per lane - that
is, they treat the vector as a sequence of elements to be byte-swapped.
The two pairs of instructions are fundamentally incompatible. We've decided
to use LD1/ST1 only to simplify compiler implementation.
LD1/ST1 perform the equivalent of a sequence of LDR/STR + REV. This makes
the original code sequence:
v0 = load v2i32
v1 = REV v2i32 v0 (implicit)
v2 = BITCAST v2i32 v1 to v4i16
v3 = REV v4i16 v2 (implicit)
store v4i16 v3
But this is now broken - the value stored is different to the value loaded
due to lane reordering. To fix this, on every BITCAST we must perform two
other REVs:
v0 = load v2i32
v1 = REV v2i32 v0 (implicit)
v2 = REV v2i32 v1
v3 = BITCAST v2i32 v2 to v4i16
v4 = REV v4i16 v3
v5 = REV v4i16 v4 (implicit)
store v4i16 v5
This means an extra two instructions, but actually in most cases the two REV
instructions can be combined into one. For example:
(REV64_2s (REV64_4h X)) === (REV32_4h X)
There is also no 128-bit REV instruction. This must be synthesized with an
EXT instruction.
Most bitconverts require some sort of conversion. The only exceptions are:
a) Identity conversions - vNfX <-> vNiX
b) Single-lane-to-scalar - v1fX <-> fX or v1iX <-> iX
Even though there are hundreds of changed lines, I have a fairly high confidence
that they are somewhat correct. The changes to add two REV instructions per
bitcast were pretty mechanical, and once I'd done that I threw the resulting
.td at a script I wrote which combined the two REVs together (and added
an EXT instruction, for f128) based on an instruction description I gave it.
This was much less prone to error than doing it all manually, plus my brain
would not just have melted but would have vapourised.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208194 91177308-0d34-0410-b5e6-96231b3b80d8