The copies have already diverged; don't let them get any worse. Reduce
redundancy in code with a little macro metaprogramming.
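A minimal sketch of the idea (hypothetical names; the actual macros in
this commit differ):

  #include <cstdio>

  // Hypothetical X-macro list; each use site expands it differently, so
  // the copies can no longer drift apart.
  #define MY_NODE_LIST(X) \
    X(Add)                \
    X(Sub)                \
    X(Mul)

  // Generate the enum from the list...
  enum NodeKind {
  #define HANDLE_NODE(Name) NK_##Name,
    MY_NODE_LIST(HANDLE_NODE)
  #undef HANDLE_NODE
  };

  // ...and a matching name table from the same list.
  static const char *const NodeNames[] = {
  #define HANDLE_NODE(Name) #Name,
    MY_NODE_LIST(HANDLE_NODE)
  #undef HANDLE_NODE
  };

  int main() { printf("%s\n", NodeNames[NK_Mul]); return 0; }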
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231401 91177308-0d34-0410-b5e6-96231b3b80d8
Currently shuffles may only be combined if they are of the same type, despite the fact that bitcasts are often introduced in between shuffle nodes (e.g. x86 shuffle type widening).
This patch allows a single input shuffle to peek through bitcasts and if the input is another shuffle will merge them, shuffling using the smallest sized type, and re-applying the bitcasts at the inputs and output instead.
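Roughly, the merge composes the two masks at the narrow element size. A
self-contained model of that composition (not the actual DAGCombiner
code):

  #include <cstdio>
  #include <vector>

  // Compose OuterMask (indices over wide elements) with InnerMask
  // (indices over narrow elements), where each wide element covers
  // Scale narrow elements. -1 means undef.
  std::vector<int> composeMasks(const std::vector<int> &OuterMask,
                                const std::vector<int> &InnerMask,
                                int Scale) {
    std::vector<int> Composed;
    for (int WideIdx : OuterMask)
      for (int Sub = 0; Sub < Scale; ++Sub)
        Composed.push_back(WideIdx < 0 ? -1
                                       : InnerMask[WideIdx * Scale + Sub]);
    return Composed;
  }

  int main() {
    // Inner: a v8i16 shuffle; Outer: a v4i32 shuffle of a bitcast of
    // the inner result (Scale = 2 narrow lanes per wide lane).
    std::vector<int> Inner = {1, 0, 3, 2, 5, 4, 7, 6};
    std::vector<int> Outer = {3, 2, 1, 0};
    for (int M : composeMasks(Outer, Inner, 2))
      printf("%d ", M); // one v8i16 shuffle equivalent to the pair
    printf("\n");
    return 0;
  }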
Dropped the old ShuffleToZext test - this patch removes the use of the zext, and vector-zext.ll covers these cases anyhow.
Differential Revision: http://reviews.llvm.org/D7939
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231380 91177308-0d34-0410-b5e6-96231b3b80d8
Added lowering for ISD::CONCAT_VECTORS and ISD::INSERT_SUBVECTOR for i1 vectors;
it is needed to pass all masked_memop.ll tests for SKX.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231371 91177308-0d34-0410-b5e6-96231b3b80d8
Also, it extracts a getCopyFromRegs helper function in SelectionDAGBuilder, as we need to be able to customize the type of the register exported from the basic block during lowering of the gc.result.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231366 91177308-0d34-0410-b5e6-96231b3b80d8
Build time (user time) for building llvm+clang+lldb in release mode:
- default allocator: 9086 seconds
- with PBQP: 9126 seconds
- with PBQP + local bit matrix cache: 9097 seconds
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231360 91177308-0d34-0410-b5e6-96231b3b80d8
already been added and the inconsistency made choosing names and
changing code more annoying. Plus, wow are they better for this code!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231347 91177308-0d34-0410-b5e6-96231b3b80d8
result reasonable.
This code predated clang-format and so there was a reasonable amount of
crufty formatting that had accumulated. This should ensure that neither
I nor others end up with formatting-only changes sneaking into
other fixes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231341 91177308-0d34-0410-b5e6-96231b3b80d8
just arbitrarily interleaving unrelated control flows once they get
moved "out-of-line" (both outside of natural CFG ordering and with
diamonds that cannot be fully laid out by chaining fallthrough edges).
This easy solution doesn't work in practice, and it isn't just a small
bug. It looks like a very different strategy will be required. I'm
working on that now, and it'll again go behind some flag so that
everyone can experiment and make sure it is working well for them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231332 91177308-0d34-0410-b5e6-96231b3b80d8
To be used/tested by llvm-dsymutil. (llvm-dsymutil does a 'static' link,
no need for relocations for most things, so it'll just emit raw integers
for most attributes)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231298 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
DataLayout keeps the string used for its creation.
As a side effect it is no longer needed in the Module.
This is "almost" NFC, the string is no longer
canonicalized, you can't rely on two "equals" DataLayout
having the same string returned by getStringRepresentation().
Get rid of DataLayoutPass: the DataLayout is in the Module
The DataLayout is "per-module"; let's enforce this by not
duplicating it more than necessary.
One more step toward non-optionality of the DataLayout in the
module.
Make DataLayout Non-Optional in the Module
Module->getDataLayout() never returns nullptr anymore.
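For example (a sketch against the LLVM C++ API; the particular layout
strings are illustrative):

  #include "llvm/IR/DataLayout.h"
  #include <cassert>
  using namespace llvm;

  int main() {
    // Two spellings of the same layout: identical specs, different
    // component order.
    DataLayout A("e-m:e-i64:64");
    DataLayout B("e-i64:64-m:e");
    assert(A == B); // semantically equal layouts...
    // ...but the strings are now kept verbatim, so they differ:
    assert(A.getStringRepresentation() != B.getStringRepresentation());
    return 0;
  }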
Reviewers: echristo
Subscribers: resistor, llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D7992
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231270 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
In PNaCl, most atomic instructions have their own @llvm.nacl.atomic.*
function; with a few exceptions, each one represents consistent
behaviour across all NaCl-supported targets. Unfortunately, the atomic
RMW operations nand, [u]min, and [u]max aren't directly represented by
any such @llvm.nacl.atomic.* function. This patch refines
shouldExpandAtomicRMWInIR in TargetLowering so that a future
`Le32TargetLowering` class can selectively inform the caller how the
target wants the atomic RMW instruction expanded (i.e. via
load-linked/store-conditional for ARM/AArch64, via cmpxchg for
X86/others?, or not at all for Mips), if at all.
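A self-contained model of the refined hook's shape (the enum and names
here are hypothetical, mirroring the patch's intent rather than its
exact signatures):

  #include <cstdio>

  enum class AtomicRMWOp { Add, Nand, Min, Max, UMin, UMax };

  // How a target wants an atomic RMW expanded.
  enum class RMWExpansionKind {
    None,    // lower natively (e.g. Mips in the PNaCl scheme)
    LLSC,    // load-linked/store-conditional loop (ARM/AArch64)
    CmpXChg, // compare-exchange loop (X86 and others)
  };

  // Hypothetical per-target policy, as a Le32-like target might
  // implement it: the ops with no direct @llvm.nacl.atomic.* form get
  // a cmpxchg expansion, everything else is lowered natively.
  RMWExpansionKind shouldExpandAtomicRMW(AtomicRMWOp Op) {
    switch (Op) {
    case AtomicRMWOp::Nand:
    case AtomicRMWOp::Min:
    case AtomicRMWOp::Max:
    case AtomicRMWOp::UMin:
    case AtomicRMWOp::UMax:
      return RMWExpansionKind::CmpXChg;
    default:
      return RMWExpansionKind::None;
    }
  }

  int main() {
    printf("%d\n",
           static_cast<int>(shouldExpandAtomicRMW(AtomicRMWOp::Nand)));
    return 0;
  }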
This does not represent a behavioural change and as such no tests were added.
Patch by: Richard Diamond.
Reviewers: jfb
Reviewed By: jfb
Subscribers: jfb, aemerson, t.p.northover, llvm-commits
Differential Revision: http://reviews.llvm.org/D7713
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231250 91177308-0d34-0410-b5e6-96231b3b80d8
a flag for now.
First off, thanks to Daniel Jasper for really pointing out the issue
here. It's been here forever (at least, I think it was there when
I first wrote this code) without getting really noticed or fixed.
The key problem is what happens when two reasonably common patterns
happen at the same time: we outline multiple cold regions of code, and
those regions in turn have diamonds or other CFGs for which we can't
just topologically lay them out. Consider some C code that looks like:
if (a1()) { if (b1()) c1(); else d1(); f1(); }
if (a2()) { if (b2()) c2(); else d2(); f2(); }
done();
Now consider the case where a1() and a2() are unlikely to be true. In
that case, we might lay out the first part of the function like:
a1, a2, done;
And then we will be out of successors in which to build the chain. We go
to find the best block to continue the chain with, which is perfectly
reasonable here, and find "b1" let's say. Laying out successors gets us
to:
a1, a2, done; b1, c1;
At this point, we will refuse to lay out the successor to c1 (f1)
because there are still un-placed predecessors of f1 and we want to try
to preserve the CFG structure. So we go get the next best block, d1.
... wait for it ...
Except that the next best block *isn't* d1. It is b2! d1 is waaay down
inside these conditionals. It is much less important than b2. Except
that this is exactly what we didn't want. If we keep going we get the
entire set of the rest of the CFG *interleaved*!!!
a1, a2, done; b1, c1; b2, c2; d1, f1; d2, f2;
So we clearly need a better strategy here. =] My current favorite
strategy is to actually try to place the block whose predecessor is
closest. This very simply ensures that we unwind these kinds of CFGs the
way that is natural and fitting, and should minimize the number of cache
lines instructions are spread across.
It also happens to be *dead simple*. It's like the datastructure was
specifically set up for this use case or something. We only push blocks
onto the work list when the last predecessor for them is placed into the
chain. So the back of the worklist *is* the nearest next block.
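A self-contained model of that behaviour (a sketch, not the actual
MachineBlockPlacement code):

  #include <cstdio>
  #include <string>
  #include <vector>

  struct Block {
    std::string Name;
    std::vector<int> Succs; // successor block indices
    int UnplacedPreds;      // predecessors not yet placed in the chain
  };

  int main() {
    // Toy CFG: a -> {b, c}, b -> {d}, c -> {d}.
    std::vector<Block> CFG = {
        {"a", {1, 2}, 0}, {"b", {3}, 0}, {"c", {3}, 0}, {"d", {}, 0}};
    for (const Block &B : CFG)
      for (int S : B.Succs)
        ++CFG[S].UnplacedPreds;

    std::vector<int> Worklist = {0}; // the entry has no predecessors
    while (!Worklist.empty()) {
      // The back of the worklist is the block whose *last* predecessor
      // was placed most recently -- the nearest candidate to continue
      // the chain with.
      int Idx = Worklist.back();
      Worklist.pop_back();
      printf("place %s\n", CFG[Idx].Name.c_str());
      for (int S : CFG[Idx].Succs)
        if (--CFG[S].UnplacedPreds == 0)
          Worklist.push_back(S); // pushed only once all preds are placed
    }
    return 0;
  }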
Unfortunately, a change like this is going to cause *soooo* many
benchmarks to swing wildly. So for now I'm adding this under a flag so
that we and others can validate that this is fixing the problems
described, that it seems possible to enable, and hopefully that it fixes
more of our problems long term.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231238 91177308-0d34-0410-b5e6-96231b3b80d8
In a CFG with the edges A->B->C and A->C, B is an optional branch.
LLVM's default behavior is to lay the blocks out naturally, i.e. A, B,
C, in order to improve code locality and fallthroughs. However, if a
function contains many of those optional branches only a few of which
are taken, this leads to a lot of unnecessary icache misses. Moving B
out of line can work around this.
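For instance (a minimal illustration, not from the patch):

  #include <cstdio>

  void handleRareCase() { puts("rare"); }   // B: rarely executed
  void doCommonWork() { puts("common"); }   // C: the hot path

  void f(bool rareCondition) {              // A
    // If B is laid out inline between A and C, the hot A -> C path has
    // to jump over B's instructions, wasting icache; moving B out of
    // line keeps the common-path code contiguous.
    if (rareCondition)
      handleRareCase();                     // B
    doCommonWork();                         // C
  }

  int main() { f(false); return 0; }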
Review: http://reviews.llvm.org/D7719
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231230 91177308-0d34-0410-b5e6-96231b3b80d8
When trying to convert a BUILD_VECTOR into a shuffle, we try to split a single source vector that is twice as wide as the destination vector.
We cannot do this when we also need the zero vector to create a blend.
This fixes PR22774.
Differential Revision: http://reviews.llvm.org/D8040
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231219 91177308-0d34-0410-b5e6-96231b3b80d8
(They are called emitDwarfDIE and emitDwarfAbbrevs in their new home)
llvm-dsymutil wants to reuse that code, but it doesn't have a DwarfUnit or
a DwarfDebug object to call those. It has access to an AsmPrinter though.
Having emitDIE in the AsmPrinter also removes the DwarfFile dependency
on DwarfDebug, and thus the patch drops that field.
Differential Revision: http://reviews.llvm.org/D8024
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231210 91177308-0d34-0410-b5e6-96231b3b80d8
GCC 4.7's libstdc++ doesn't have std::map::emplace, but it does have
std::unordered_map::emplace, and the use case here doesn't appear to
need ordering. The container has been changed in a separate/precursor
patch, and now this patch should hopefully build cleanly even with
GCC 4.7.
& then I realized the order of the container did matter, so extra
handling of ordering was added in r231189.
Original commit message:
This makes LiveRange non-copyable, and LiveInterval is already
non-movable (due to the explicit dtor), so now it's non-copyable and
non-movable.
Fix the one case where we were relying on the (deprecated in C++11)
implicit copy ctor of LiveInterval (which happened to work because the
ctor created an object with a null segmentSet, so double-deleting the
null pointer was fine).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231192 91177308-0d34-0410-b5e6-96231b3b80d8
The order of this container is only needed at one point - so, at that
point, create a temporary array of pointers, sort those, then iterate
them. This keeps lookup efficient (&, the lesser issue, allows the use
of emplace... ), preserves object identity, and provides ordered
iteration in the one place that requires it.
While this has no functional change, I realize it does mean allocating
an extra data structure and performing a sort - so if this looks suspect
to anyone regarding perf characteristics, I'm all ears.
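Roughly the following shape (a generic sketch, not the LLVM code
itself):

  #include <algorithm>
  #include <cstdio>
  #include <unordered_map>
  #include <vector>

  int main() {
    // Fast lookup container; iteration order is unspecified.
    std::unordered_map<int, const char *> Map = {
        {3, "three"}, {1, "one"}, {2, "two"}};

    // The one place that needs ordering: build a temporary array of
    // pointers into the map, sort it, and iterate that instead. Object
    // identity is preserved because we sort pointers, not copies.
    std::vector<const std::pair<const int, const char *> *> Ptrs;
    for (const auto &KV : Map)
      Ptrs.push_back(&KV);
    std::sort(Ptrs.begin(), Ptrs.end(),
              [](const auto *L, const auto *R) {
                return L->first < R->first;
              });
    for (const auto *KV : Ptrs)
      printf("%d: %s\n", KV->first, KV->second);
    return 0;
  }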
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231189 91177308-0d34-0410-b5e6-96231b3b80d8
There is a known bug where the register coalescer fails to merge
subranges when multiple ranges end up in the "overflow" bit 32 of the
lanemasks. A proper fix for this is complicated so for now this is a
workaround which lets the register coalescer drop the subregister
liveness information (we just lose some precision that way) and
continue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231186 91177308-0d34-0410-b5e6-96231b3b80d8
Apparently something does care about ordering of LiveIntervals... so
revert all that stuff (r231175, r231176, r231177) & take some time to
re-evaluate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231184 91177308-0d34-0410-b5e6-96231b3b80d8
GCC 4.7's libstdc++ doesn't have std::map::emplace, but it does have
std::unordered_map::emplace, and the use case here doesn't appear to
need ordering. The container has been changed in a separate/precursor
patch, and now this patch should hopefully build cleanly even with
GCC 4.7.
Original commit message:
This makes LiveRange non-copyable, and LiveInterval is already
non-movable (due to the explicit dtor), so now it's non-copyable and
non-movable.
Fix the one case where we were relying on the (deprecated in C++11)
implicit copy ctor of LiveInterval (which happened to work because the
ctor created an object with a null segmentSet, so double-deleting the
null pointer was fine).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231176 91177308-0d34-0410-b5e6-96231b3b80d8
This makes LiveRange non-copyable, and LiveInterval is already
non-movable (due to the explicit dtor), so now it's non-copyable and
non-movable.
Fix the one case where we were relying on the (deprecated in C++11)
implicit copy ctor of LiveInterval (which happened to work because the
ctor created an object with a null segmentSet, so double-deleting the
null pointer was fine).
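A self-contained sketch of the same C++ mechanics (the class bodies
here are hypothetical stand-ins for the real ones):

  struct Segments {}; // stand-in for the owned resource

  class LiveRange {
  public:
    LiveRange() = default;
    LiveRange(const LiveRange &) = delete; // the change: non-copyable
    LiveRange &operator=(const LiveRange &) = delete;
  };

  class LiveInterval : public LiveRange {
  public:
    LiveInterval() = default;
    // A user-declared destructor suppresses the implicit moves, so the
    // type was already non-movable; with the deleted copies in the base
    // it is now non-copyable and non-movable.
    ~LiveInterval() { delete SegmentSet; }

  private:
    // The deprecated implicit copy ctor used to copy this pointer; that
    // only "worked" because it was always null, so the double delete
    // deleted nullptr, a no-op.
    Segments *SegmentSet = nullptr;
  };

  int main() {
    LiveInterval LI; // fine: default-constructible
    // LiveInterval Copy = LI; // would now fail to compile
    return 0;
  }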
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231168 91177308-0d34-0410-b5e6-96231b3b80d8
Ultimately, we'll need to leave something behind to indicate which
alloca will hold the exception, but we can figure that out when it comes
time to emit the __CxxFrameHandler3 catch handler table.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231164 91177308-0d34-0410-b5e6-96231b3b80d8
From:
int M, total;
void foo() {
int i;
for (i = 0; i < M; i++) {
total = total + i / 2;
}
}
This is the kernel loop:
.LBB0_2: # %for.body
=>This Inner Loop Header: Depth=1
movl %edx, %esi
movl %ecx, %edx
shrl $31, %edx
addl %ecx, %edx
sarl %edx
addl %esi, %edx
incl %ecx
cmpl %eax, %ecx
jl .LBB0_2
--------------------------
The first mov insn "movl %edx, %esi" could be removed if we change "addl %esi, %edx" to "addl %edx, %esi".
The IR before TwoAddressInstructionPass is:
BB#2: derived from LLVM BB %for.body
Predecessors according to CFG: BB#1 BB#2
%vreg3<def> = COPY %vreg12<kill>; GR32:%vreg3,%vreg12
%vreg2<def> = COPY %vreg11<kill>; GR32:%vreg2,%vreg11
%vreg7<def,tied1> = SHR32ri %vreg3<tied0>, 31, %EFLAGS<imp-def,dead>; GR32:%vreg7,%vreg3
%vreg8<def,tied1> = ADD32rr %vreg3<tied0>, %vreg7<kill>, %EFLAGS<imp-def,dead>; GR32:%vreg8,%vreg3,%vreg7
%vreg9<def,tied1> = SAR32r1 %vreg8<kill,tied0>, %EFLAGS<imp-def,dead>; GR32:%vreg9,%vreg8
%vreg4<def,tied1> = ADD32rr %vreg9<kill,tied0>, %vreg2<kill>, %EFLAGS<imp-def,dead>; GR32:%vreg4,%vreg9,%vreg2
%vreg5<def,tied1> = INC64_32r %vreg3<kill,tied0>, %EFLAGS<imp-def,dead>; GR32:%vreg5,%vreg3
CMP32rr %vreg5, %vreg0, %EFLAGS<imp-def>; GR32:%vreg5,%vreg0
%vreg11<def> = COPY %vreg4; GR32:%vreg11,%vreg4
%vreg12<def> = COPY %vreg5<kill>; GR32:%vreg12,%vreg5
JL_4 <BB#2>, %EFLAGS<imp-use,kill>
Now TwoAddressInstructionPass will choose vreg9 to be tied with vreg4. However, it doesn't see that there is a copy from vreg4 to vreg11 and another copy from vreg11 to vreg2 inside the loop body. To remove those copies, it is necessary to choose vreg2 to be tied with vreg4 instead of vreg9. This code pattern commonly appears when there is a reduction operation in a loop.
So we check for a reversed copy chain, and if we encounter one, we commute the add instruction to avoid the copy.
Patch by Wei Mi.
http://reviews.llvm.org/D7806
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231148 91177308-0d34-0410-b5e6-96231b3b80d8
Accidentally committed a few more of these cleanup changes than
intended. Still breaking these out & tidying them up.
This reverts commit r231135.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231136 91177308-0d34-0410-b5e6-96231b3b80d8
There doesn't seem to be any need to assert that iterator assignment is
between iterators over the same node - if you want to reuse an iterator
variable to iterate another node, that's perfectly acceptable. Just
don't mix comparisons between iterators into disjoint sequences, as
usual.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231135 91177308-0d34-0410-b5e6-96231b3b80d8
This type could be made copyable (= default a protected copy ctor in the
base class, and preferably make the derived class final to avoid risks
of providing a slicing copy operation to further derived classes) but it
seemed easier to avoid that complexity for a dump function that I assume
(by symmetry with ResourcePriorityQueue's dump, which was actively
buggy) is not often used.
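The copyable variant would have looked roughly like this (hypothetical
names):

  class QueueBase {
  protected:
    QueueBase() = default;
    // Protected and defaulted: derived classes can still copy their
    // base subobject, but outside code can't copy through a QueueBase
    // reference, which is where slicing copies come from.
    QueueBase(const QueueBase &) = default;
    QueueBase &operator=(const QueueBase &) = default;
    ~QueueBase() = default;
  };

  // 'final' ensures no further-derived class can be handed a slicing
  // copy operation down to this type.
  class PriorityQueue final : public QueueBase {
  public:
    PriorityQueue() = default;
  };

  int main() {
    PriorityQueue A;
    PriorityQueue B = A; // fine: copying the most-derived, final type
    return 0;
  }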
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@231133 91177308-0d34-0410-b5e6-96231b3b80d8