Summary:
In PNaCl, most atomic instructions have their own @llvm.nacl.atomic.* function; each one, with a few exceptions, represents consistent behaviour across all NaCl-supported targets. Unfortunately, the atomic RMW operations nand, [u]min, and [u]max aren't directly represented by any such @llvm.nacl.atomic.* function. This patch refines shouldExpandAtomicRMWInIR in TargetLowering so that a future `Le32TargetLowering` class can selectively inform the caller how the target wants the atomic RMW instruction expanded, if at all (i.e. via load-linked/store-conditional for ARM/AArch64, via cmpxchg for X86/others?, or not at all for Mips).
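To illustrate, a minimal sketch of how such a target could use the refined hook (assuming the AtomicRMWExpansionKind enum this patch introduces; `Le32TargetLowering` itself doesn't exist yet, and later LLVM revisions rename this API):

  #include "llvm/IR/Instructions.h"
  #include "llvm/Target/TargetLowering.h"
  using namespace llvm;

  // Hypothetical lowering class: tells the caller how to expand atomic RMW.
  class Le32TargetLowering : public TargetLowering {
  public:
    AtomicRMWExpansionKind
    shouldExpandAtomicRMWInIR(AtomicRMWInst *AI) const override {
      switch (AI->getOperation()) {
      case AtomicRMWInst::Nand:
      case AtomicRMWInst::Min:
      case AtomicRMWInst::Max:
      case AtomicRMWInst::UMin:
      case AtomicRMWInst::UMax:
        // No direct @llvm.nacl.atomic.* representation: expand via cmpxchg.
        return AtomicRMWExpansionKind::CmpXChg;
      default:
        return AtomicRMWExpansionKind::None; // leave the native form alone
      }
    }
  };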
This does not represent a behavioural change and as such no tests were added.
Patch by: Richard Diamond.
Reviewers: jfb
Reviewed By: jfb
Subscribers: jfb, aemerson, t.p.northover, llvm-commits
Differential Revision: http://reviews.llvm.org/D7713
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231250 91177308-0d34-0410-b5e6-96231b3b80d8
This "itinerary class map" in PPCSchedule.td is incomplete and
redundant with the actual code. As it provides no value, we've
decided to remove it.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231246 91177308-0d34-0410-b5e6-96231b3b80d8
The target-independent selection algorithm in FastISel already knows how
to select a SINT_TO_FP if the target is SSE but not AVX.
On targets that have SSE but not AVX, the tablegen'd 'fastEmit' functions
for ISD::SINT_TO_FP know how to select instruction X86::CVTSI2SSrr
(for an i32 to f32 conversion) and X86::CVTSI2SDrr (for an i32 to f64
conversion).
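For reference, the source-level conversions in question look like this (a sketch; the function names are illustrative):

  // On SSE-only targets, FastISel's tablegen'd tables select
  // CVTSI2SSrr / CVTSI2SDrr for these conversions.
  float toF32(int x) { return static_cast<float>(x); }   // ISD::SINT_TO_FP, i32 -> f32
  double toF64(int x) { return static_cast<double>(x); } // ISD::SINT_TO_FP, i32 -> f64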
This patch simplifies the logic in method X86SelectSIToFP knowing that
the code would not be reachable if the subtarget doesn't have AVX.
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231243 91177308-0d34-0410-b5e6-96231b3b80d8
Do not instrument direct accesses to stack variables that can be
proven to be inbounds, e.g. accesses to fields of structs on stack.
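For illustration, the kind of access this skips (a sketch, not a test from the patch):

  struct S { int a; int b; };
  int f() {
    S s;      // stack variable
    s.b = 42; // direct, provably in-bounds field access: no ASan check needed
    return s.b;
  }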
The optimization eliminates 33% of the instrumentation on webrtc/modules_unittests
(the number of instrumented memory accesses goes down from 290152 to 193998),
reduces binary size by 15% (from 74M to 64M), and improves compilation time by 6-12%.
It is guarded by the asan-opt-stack flag, which is off by default.
http://reviews.llvm.org/D7583
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231241 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Use more reasonable names for these pseudo-instructions.
As there's only one definition tied to each of these classes, I named them with abbreviated versions of their respective classes' names.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7831
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231240 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Move the "Filler" parameter to the end of the parameter list as it is,
conceptually, the only output parameter of that function.
Reviewers: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7726
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231239 91177308-0d34-0410-b5e6-96231b3b80d8
Fix a long-standing layout bug in MachineBlockPlacement, but guard the fix behind
a flag for now.
First off, thanks to Daniel Jasper for really pointing out the issue
here. It's been here forever (at least, I think it was there when
I first wrote this code) without getting really noticed or fixed.
The key problem is what happens when two reasonably common patterns
happen at the same time: we outline multiple cold regions of code, and
those regions in turn have diamonds or other CFGs for which we can't
just topologically lay them out. Consider some C code that looks like:
if (a1()) { if (b1()) c1(); else d1(); f1(); }
if (a2()) { if (b2()) c2(); else d2(); f2(); }
done();
Now consider the case where a1() and a2() are unlikely to be true. In
that case, we might lay out the first part of the function like:
a1, a2, done;
And then we will be out of successors in which to build the chain. We go
to find the best block to continue the chain with, which is perfectly
reasonable here, and find "b1" let's say. Laying out successors gets us
to:
a1, a2, done; b1, c1;
At this point, we will refuse to lay out the successor to c1 (f1)
because there are still un-placed predecessors of f1 and we want to try
to preserve the CFG structure. So we go get the next best block, d1.
... wait for it ...
Except that the next best block *isn't* d1. It is b2! d1 is waaay down
inside these conditionals. It is much less important than b2. Except
that this is exactly what we didn't want. If we keep going we get the
entire set of the rest of the CFG *interleaved*!!!
a1, a2, done; b1, c1; b2, c2; d1, f1; d2, f2;
So we clearly need a better strategy here. =] My current favorite
strategy is to actually try to place the block whose predecessor is
closest. This very simply ensures that we unwind these kinds of CFGs the
way that is natural and fitting, and should minimize the number of cache
lines instructions are spread across.
It also happens to be *dead simple*. It's like the data structure was
specifically set up for this use case or something. We only push blocks
onto the work list when the last predecessor for them is placed into the
chain. So the back of the worklist *is* the nearest next block.
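A toy sketch of that property (illustrative only, not MachineBlockPlacement itself): treating the worklist as a stack makes the back entry the block whose last predecessor was placed most recently.

  #include <vector>

  struct Block {
    std::vector<Block *> Succs;
    int UnplacedPreds; // how many predecessors are still unplaced
  };

  // Toy layout over an acyclic CFG: each block follows its most recently
  // placed predecessor.
  void layout(Block *Entry, std::vector<Block *> &Chain) {
    std::vector<Block *> Worklist{Entry};
    while (!Worklist.empty()) {
      Block *BB = Worklist.back(); // the nearest next block is simply the back
      Worklist.pop_back();
      Chain.push_back(BB);
      for (Block *Succ : BB->Succs)
        if (--Succ->UnplacedPreds == 0) // push only once its last pred is placed
          Worklist.push_back(Succ);
    }
  }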
Unfortunately, a change like this is going to cause *soooo* many
benchmarks to swing wildly. So for now I'm adding this under a flag so
that we and others can validate that this is fixing the problems
described, that it seems possible to enable, and hopefully that it fixes
more of our problems long term.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231238 91177308-0d34-0410-b5e6-96231b3b80d8
This commit fixes a bug introduced in r230956 where we were creating
CMovFP_{T,F} nodes with multiple return value types (one for each operand).
With this change the return value type of the new node is the same as the
value type of the True/False operands of the original node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231237 91177308-0d34-0410-b5e6-96231b3b80d8
In a CFG with the edges A->B->C and A->C, B is an optional branch.
LLVM's default behavior is to lay the blocks out naturally, i.e. A, B,
C, in order to improve code locality and fallthroughs. However, if a
function contains many of those optional branches only a few of which
are taken, this leads to a lot of unnecessary icache misses. Moving B
out of line can work around this.
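In source form, the pattern looks something like this (illustrative):

  void rare_work();
  void common_tail();
  void f(bool cond) { // block A ends with the conditional branch
    if (cond)
      rare_work();    // block B: the optional block, rarely executed
    common_tail();    // block C: reached from both A and B
  }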
Review: http://reviews.llvm.org/D7719
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231230 91177308-0d34-0410-b5e6-96231b3b80d8
As is described at http://llvm.org/bugs/show_bug.cgi?id=22408, the GNU linkers
ld.bfd and ld.gold currently only support a subset of the whole range of AArch64
ELF TLS relocations. Furthermore, they assume that some of the code sequences to
access thread-local variables are produced in a very specific form.
When the sequence is not what the linker expects, it can silently mis-relax/mis-optimize
the instructions.
Even if that were not the case, it's good to produce the exact sequence,
as that ensures that linkers can perform optimizing relaxations.
This patch:
* implements support for a 16MiB TLS area size instead of a 4GiB TLS area size. Ideally clang
would grow an -mtls-size option to allow support for both, but that's not part of this patch.
* by default doesn't produce local dynamic access patterns, as even modern ld.bfd and ld.gold
linkers do not support the associated relocations. An option (-aarch64-elf-ldtls-generation)
is added to enable generation of local dynamic code sequence, but is off by default.
* makes sure that the exact expected code sequence for local dynamic and general dynamic
accesses is produced, by making use of a new pseudo instruction. The patch also removes
two (AArch64ISD::TLSDESC_BLR, AArch64ISD::TLSDESC_CALL) pre-existing AArch64-specific pseudo
SDNode instructions that are superseded by the new one (TLSDESC_CALLSEQ).
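For illustration, a source-level access that must lower to the exact sequences described above (a sketch, not from the patch):

  // In a shared object, this access typically uses the general-dynamic/TLSDESC
  // model; the compiler must emit the precise instruction sequence (now via
  // TLSDESC_CALLSEQ) so the linker can recognize and relax it.
  thread_local int tls_counter;
  int bump() { return ++tls_counter; }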
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231227 91177308-0d34-0410-b5e6-96231b3b80d8
Using the implicit default copy assignment operator in the presence of a
user-declared copy ctor is deprecated in C++11.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231225 91177308-0d34-0410-b5e6-96231b3b80d8
Asserting that the source and destination iterators are from the same
region is unnecessary - there's no reason to disallow reassignment across
regions, so long as iterators from different regions aren't compared.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231224 91177308-0d34-0410-b5e6-96231b3b80d8
When trying to convert a BUILD_VECTOR into a shuffle, we try to split a single source vector that is twice as wide as the destination vector.
We cannot do this when we also need the zero vector to create a blend.
This fixes PR22774.
Differential Revision: http://reviews.llvm.org/D8040
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231219 91177308-0d34-0410-b5e6-96231b3b80d8
Since OptionValue (& its base classes) have user-declared dtors, use of
the implicit copy ctor/assignment operator is deprecated in C++11.
Provide them explicitly (defaulted) to avoid depending on this
deprecated feature.
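The shape of the fix (illustrative names, not the actual OptionValue code):

  struct BaseV {
    virtual ~BaseV() = default;     // user-declared dtor deprecates the
    BaseV() = default;              // implicit copy operations in C++11 ...
    BaseV(const BaseV &) = default; // ... so spell them out as defaulted
    BaseV &operator=(const BaseV &) = default;
  };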
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231218 91177308-0d34-0410-b5e6-96231b3b80d8
These objects are never polymorphically owned, so there's no need for
virtual dtors - just make the dtor protected in the base classes, and
make the derived classes final.
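The pattern in question (a generic sketch):

  class BaseN {
  protected:
    ~BaseN() = default; // non-virtual: deletion through a BaseN* is disallowed
  };

  class DerivedN final : public BaseN {
    // 'final' guarantees no further derivation; objects are always owned
    // (and destroyed) as DerivedN, so no virtual dtor is needed.
  };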
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231217 91177308-0d34-0410-b5e6-96231b3b80d8
This will now display enum definitions both at the global scope
and nested inside of classes. Additionally, it will no longer
display an enum at the global scope if the enum is nested: the
definition is omitted globally and instead emitted in the
corresponding class definition.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231215 91177308-0d34-0410-b5e6-96231b3b80d8
(They are called emitDwarfDIE and emitDwarfAbbrevs in their new home)
llvm-dsymutil wants to reuse that code, but it doesn't have a DwarfUnit or
a DwarfDebug object to call those. It has access to an AsmPrinter though.
Having emitDIE in the AsmPrinter also removes the DwarfFile dependency
on DwarfDebug, and thus the patch drops that field.
Differential Revision: http://reviews.llvm.org/D8024
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231210 91177308-0d34-0410-b5e6-96231b3b80d8
GCC 4.7's libstdc++ doesn't have std::map::emplace, but it does have
std::unordered_map::emplace, and the use case here doesn't appear to
need ordering. The container has been changed in a separate/precursor
patch, and now this patch should hopefully build cleanly even with
GCC 4.7.
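A minimal illustration of the constraint (generic types, not the LLVM code):

  #include <string>
  #include <tuple>
  #include <unordered_map>
  #include <utility>

  struct Pinned {
    explicit Pinned(int V) : V(V) {}
    Pinned(const Pinned &) = delete;            // non-copyable (and, with a
    Pinned &operator=(const Pinned &) = delete; // declared copy ctor, non-movable)
    int V;
  };

  int main() {
    std::unordered_map<std::string, Pinned> M;
    // GCC 4.7's libstdc++ has unordered_map::emplace but not map::emplace,
    // and emplace is the only way to insert a non-copyable, non-movable value.
    M.emplace(std::piecewise_construct, std::forward_as_tuple("li"),
              std::forward_as_tuple(42));
  }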
& then I realized the order of the container did matter, so extra
handling of ordering was added in r231189.
Original commit message:
This makes LiveRange non-copyable, and LiveInterval is already
non-movable (due to the explicit dtor), so now it's non-copyable and
non-movable.
Fix the one case where we were relying on the (deprecated in C++11)
implicit copy ctor of LiveInterval (which happened to work because the
ctor created an object with a null segmentSet, so double-deleting the
null pointer was fine).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231192 91177308-0d34-0410-b5e6-96231b3b80d8
test - we only care that there are two moves in the loop and not
which part is relative to which register anyhow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231191 91177308-0d34-0410-b5e6-96231b3b80d8
The order of this container is only needed in one place - so, at that point,
create a temporary array of pointers, sort those, then iterate over them.
This keeps lookup efficient (&, as a lesser issue, allows the use of
emplace... ), preserves object identity, and provides ordered iteration in
the one place that requires it.
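Roughly, the shape of it (illustrative types, not the actual LLVM containers):

  #include <algorithm>
  #include <unordered_map>
  #include <utility>
  #include <vector>

  int sumInKeyOrder(std::unordered_map<unsigned, int> &M) {
    // Keep the map unordered for fast lookup; impose an order only here.
    std::vector<std::pair<const unsigned, int> *> Ptrs;
    Ptrs.reserve(M.size());
    for (auto &KV : M)
      Ptrs.push_back(&KV);
    std::sort(Ptrs.begin(), Ptrs.end(),
              [](auto *A, auto *B) { return A->first < B->first; });
    int Sum = 0;
    for (auto *KV : Ptrs)
      Sum += KV->second; // visited in key order; object identity preserved
    return Sum;
  }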
While this has no functional change, I realize it does mean allocating
an extra data structure and performing a sort - so if this looks suspect
to anyone regarding perf characteristics, I'm all ears.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231189 91177308-0d34-0410-b5e6-96231b3b80d8
There is a known bug where the register coalescer fails to merge
subranges when multiple ranges end up in the "overflow" bit 32 of the
lanemasks. A proper fix for this is complicated, so for now this is a
workaround that lets the register coalescer drop the subregister
liveness information (we just lose some precision) and continue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231186 91177308-0d34-0410-b5e6-96231b3b80d8
Apparently something does care about ordering of LiveIntervals... so
revert all that stuff (r231175, r231176, r231177) & take some time to
re-evaluate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231184 91177308-0d34-0410-b5e6-96231b3b80d8