We used to erroneously match:
(v4i64 shuffle (v2i64 load), <0,0,0,0>)
Whereas vbroadcasti128 is more like:
(v4i64 shuffle (v2i64 load), <0,1,0,1>)
This problem doesn't exist for vbroadcastf128, which kept matching
the intrinsic after r231182. We should perhaps re-introduce the
intrinsic here as well, but that's a separate issue still being
discussed.
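For illustration, the distinction at the source level looks like this (an example of my own using AVX2 intrinsics, compiled with -mavx2; not code from this patch):

  #include <immintrin.h>

  // vbroadcasti128 repeats the whole 128-bit load, i.e. lanes <0,1,0,1>
  // of the v2i64 source, which is what this intrinsic expresses.
  __m256i broadcast128(const __m128i *p) {
    return _mm256_broadcastsi128_si256(_mm_loadu_si128(p)); // <0,1,0,1>
  }

  // A splat of the low 64-bit element is a different shuffle and must
  // not select vbroadcasti128.
  __m256i splatLow64(const __m128i *p) {
    __m256i v = _mm256_castsi128_si256(_mm_loadu_si128(p));
    return _mm256_permute4x64_epi64(v, 0x00);               // <0,0,0,0>
  }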
While we're at it, add some proper vbroadcastf128 tests. We don't currently
match those either, just as we don't match vbroadcastsd/ss from loads on
AVX (the reg-reg broadcasts were only added in AVX2).
Fixes PR23886.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240488 91177308-0d34-0410-b5e6-96231b3b80d8
When UpdateBaseRegUses sees an instruction that defines the base
register it must stop, as the base register value it is updating is no
longer live. Ideally we would already have seen the register be killed
(which is already checked for), but the kill flags may be inaccurate
and we have to account for this.
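A rough sketch of the rule (illustrative names only, not the actual ARMLoadStoreOptimizer code):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  static void updateBaseRegUsesSketch(MachineBasicBlock &MBB,
                                      MachineBasicBlock::iterator I,
                                      unsigned Base, int Offset) {
    for (MachineBasicBlock::iterator E = MBB.end(); I != E; ++I) {
      MachineInstr &MI = *I;
      if (MI.readsRegister(Base)) {
        // Adjust MI's immediate offset by Offset (details omitted).
      }
      if (MI.killsRegister(Base))
        break; // last use seen: nothing further to update
      if (MI.definesRegister(Base))
        break; // base redefined: the value we were tracking is dead
    }
  }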
Differential Revision: http://reviews.llvm.org/D10566
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240424 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This only adds support for ULHU of an immediate address with/without a source register.
It does not include support for ULHU of the address of a symbol.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9671
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240410 91177308-0d34-0410-b5e6-96231b3b80d8
Until now, LLVM has not emitted the correct addend for the N64 and N32 ABIs.
This patch fixes that. It also removes the fixup for the R_MIPS_PC16
relocation from MCJIT.
Patch by Vladimir Radosavljevic.
Differential Revision: http://reviews.llvm.org/D10565
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240404 91177308-0d34-0410-b5e6-96231b3b80d8
In summary, this moves the mangling earlier and replaces a few
calls to addExternalSymbol with addSym.
I originally wanted to replace all the uses of addExternalSymbol with
addSym, but noticed it was a lot of work and doesn't need to be done
all at once.
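A hedged before/after sketch of the API change (the surrounding instruction and symbol are placeholders):

  #include "llvm/CodeGen/MachineInstrBuilder.h"
  #include "llvm/MC/MCSymbol.h"
  using namespace llvm;

  static void referenceSymbol(MachineInstrBuilder &MIB, MCSymbol *Sym) {
    // Previously a call site would pass a bare name, e.g.
    //   MIB.addExternalSymbol(Name);
    // and later stages had to look the symbol up again.
    MIB.addSym(Sym); // now the already-mangled MCSymbol travels as-is
  }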
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240395 91177308-0d34-0410-b5e6-96231b3b80d8
Avoid shifting a negative value by sign-extending after the shift.
Fixes a couple of tests that were failing under ubsan.
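The general pattern looks like this (a minimal sketch with made-up names and widths, not the exact code that was changed):

  #include <cstdint>

  // Left-shifting a negative signed value is undefined behavior (which is
  // what ubsan flagged), so do the shift on the unsigned representation
  // and sign-extend the shifted result afterwards. Assumes Shift <= 32.
  int64_t shiftAndExtend(int32_t Imm, unsigned Shift) {
    uint64_t Wide = static_cast<uint64_t>(static_cast<uint32_t>(Imm)) << Shift;
    unsigned Bits = 32 + Shift;                   // width of the shifted value
    uint64_t SignBit = uint64_t(1) << (Bits - 1);
    return static_cast<int64_t>((Wide ^ SignBit) - SignBit); // sign-extend
  }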
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240381 91177308-0d34-0410-b5e6-96231b3b80d8
Currently (D10321, http://reviews.llvm.org/rL239486), we can use the machine combiner pass
to reassociate the following sequence to reduce the critical path:
A = ? op ?
B = A op X
C = B op Y
-->
A = ? op ?
B = X op Y
C = A op B
'op' is currently limited to x86 AVX scalar FP adds (with fast-math on), but in theory, it could
be any associative math/logic op (see TODO in code comment).
This patch generalizes the pattern match to ignore the instruction that defines 'A'. So instead of
a sequence of 3 adds, we now only need to find 2 dependent adds and decide if it's worth
reassociating them.
This generalization has a compile-time cost because we can now match more instruction sequences
and we rely more heavily on the machine combiner to discard sequences where reassociation doesn't
improve the critical path.
For example, in the new test case:
A = M div N
B = A add X
C = B add Y
We'll match 2 reassociation patterns, but this transform doesn't reduce the critical path:
A = M div N
B = A add Y
C = B add X
We need the combiner to reject that pattern but select this:
A = M div N
B = X add Y
C = B add A
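At the source level, the shape being targeted is something like this (function and flags are mine; compile with -O2 -ffast-math and an AVX target):

  // Reassociating the two dependent adds lets x + y execute in parallel
  // with the long-latency divide that defines 'a'.
  float reassocExample(float m, float n, float x, float y) {
    float a = m / n; // A = M div N
    float b = a + x; // B = A add X
    return b + y;    // C = B add Y   -> better as  (x + y) + a
  }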
Differential Revision: http://reviews.llvm.org/D10460
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240361 91177308-0d34-0410-b5e6-96231b3b80d8
The _Int instructions are special in that they operate on the full
VR128 instead of FR32. The load folding then looks at the MOVSS and at its
user, and bails out when it sees a size mismatch between them.
What we really know is that the rm_Int instructions don't load the
higher lanes, so folding is fine.
This happens for the straightforward intrinsic code, e.g.:
_mm_add_ss(a, _mm_load_ss(p));
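As a complete function, with the hoped-for (not guaranteed) codegen noted:

  #include <xmmintrin.h>

  // With folding allowed for the rm_Int patterns, this should select addss
  // with a memory operand instead of a separate movss load.
  __m128 addLowFromMemory(__m128 a, const float *p) {
    return _mm_add_ss(a, _mm_load_ss(p)); // ideally: addss (%rdi), %xmm0
  }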
Fixes PR23349.
Differential Revision: http://reviews.llvm.org/D10554
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240326 91177308-0d34-0410-b5e6-96231b3b80d8
According to the documentation, .thumb_set is 'the equivalent of a .set directive'.
We didn't have equivalent behaviour in terms of all the errors we could throw, for
example, when a symbol is redefined.
This change refactors parseAssignment so that it can be used by both .set and .thumb_set,
and adds tests for .thumb_set covering all the errors thrown by that method.
Reviewed by Rafael Espíndola.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240318 91177308-0d34-0410-b5e6-96231b3b80d8
D8982 (checked in at http://reviews.llvm.org/rL239001) added command-line
options to allow reciprocal estimate instructions to be used in place of
divisions and square roots.
This patch changes the default settings for x86 targets to allow that recip
codegen (except for scalar division because that breaks too much code) when
using -ffast-math or its equivalent.
This matches GCC behavior for this kind of codegen.
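For illustration (my own example; exact codegen depends on the CPU and flags):

  #include <cmath>

  // With -ffast-math, patterns like these may now lower to rcpps/rsqrtss
  // estimates plus a Newton-Raphson refinement step instead of
  // full-precision divides and square roots; plain scalar division still
  // gets a real divss.
  void vecDiv(float *out, const float *a, const float *b, int n) {
    for (int i = 0; i < n; ++i)
      out[i] = a[i] / b[i];
  }

  float invSqrt(float x) { return 1.0f / std::sqrt(x); }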
Differential Revision: http://reviews.llvm.org/D10396
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240310 91177308-0d34-0410-b5e6-96231b3b80d8
Before this we were producing a TargetExternalSymbol from an MCSymbol.
That meant extracting the symbol name and fetching the symbol again
down the pipeline.
This patch adds DAG.getMCSymbol, which lets the MCSymbol pass through the DAG
unchanged.
Doing so removes the need for MO_NOPREFIX and fixes the root cause of PR23900,
allowing r240130 to be committed again.
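Minimal usage sketch of the new hook (the surrounding lowering code is omitted and the wrapper function is mine):

  #include "llvm/CodeGen/SelectionDAG.h"
  #include "llvm/MC/MCSymbol.h"
  using namespace llvm;

  // The MCSymbol is wrapped directly instead of being flattened to a name
  // and re-created further down the pipeline.
  static SDValue lowerToMCSymbolNode(SelectionDAG &DAG, MCSymbol *Sym,
                                     EVT PtrVT) {
    return DAG.getMCSymbol(Sym, PtrVT);
  }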
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240300 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: In this case, we're supposed to load the immediate into AT, ADDu it with the source register, and put the result in the destination register.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9367
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240278 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
In this case, we're supposed to load the address of the symbol into AT, ADDu it with the source register,
and put the result in the destination register.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9366
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240273 91177308-0d34-0410-b5e6-96231b3b80d8
This allows more call sequences to use pushes instead of movs when optimizing for size.
In particular, calling conventions that pass some parameters in registers (e.g. thiscall) are now supported.
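For example (illustrative only; 32-bit MSVC-style target, optimizing for size):

  // thiscall passes 'this' in ecx and the remaining arguments on the stack,
  // so the stack arguments can now be emitted as pushes rather than an esp
  // adjustment followed by mov stores.
  struct Widget {
    void resize(int w, int h); // thiscall by default on 32-bit MSVC targets
  };

  void shrink(Widget *W) {
    W->resize(1, 2); // hoped-for: push 2; push 1; mov ecx, W; call
  }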
Differential Revision: http://reviews.llvm.org/D10500
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240257 91177308-0d34-0410-b5e6-96231b3b80d8
This patch changes getRelocationAddend to use ErrorOr and considers it an error
to try to get the addend of a REL section.
If, for example, an x86_64 file has a REL section, that file is corrupted and
we should reject it.
Using ErrorOr is not ideal since we check the section type once per relocation
instead of once per section.
Checking once per section would involve getRelocationAddend just asserting and
callers checking the section before iterating over the relocations.
In any case, this is an improvement and includes a test.
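A hedged caller-side sketch of the new interface (the wrapper name is mine):

  #include "llvm/Object/ELFObjectFile.h"
  #include "llvm/Support/ErrorOr.h"
  using namespace llvm;
  using namespace llvm::object;

  // A REL section in a file that should only carry RELA relocations
  // (e.g. x86_64) now surfaces as an error instead of a bogus addend.
  static std::error_code readAddend(const ELFObjectFileBase &Obj,
                                    const RelocationRef &Rel,
                                    int64_t &Addend) {
    ErrorOr<int64_t> AddendOrErr =
        Obj.getRelocationAddend(Rel.getRawDataRefImpl());
    if (std::error_code EC = AddendOrErr.getError())
      return EC; // reject the corrupted object file
    Addend = *AddendOrErr;
    return std::error_code();
  }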
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240176 91177308-0d34-0410-b5e6-96231b3b80d8
The patch is generated using this command:
tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
llvm/lib/
Thanks to Eugene Kosov for the original patch!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240137 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, we canonicalize shuffles that produce a result larger than
their operands with:
shuffle(concat(v1, undef), concat(v2, undef))
->
shuffle(concat(v1, v2), undef)
because we can access quad vectors (see PerformVECTOR_SHUFFLECombine).
This is useful in the general case, but there are special cases where
native shuffles produce larger results: the two-result ops.
We can look through the concat when lowering them:
shuffle(concat(v1, v2), undef)
->
concat(VZIP(v1, v2):0, :1)
This lets us generate the native shuffles instead of scalarizing to
dozens of VMOVs.
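At the source level, the kind of shuffle involved looks like this (clang vector extensions; names are mine):

  // A shuffle whose result is wider than its operands; the interleave mask
  // maps onto the two-result VZIP.
  typedef int v2i32 __attribute__((vector_size(8)));
  typedef int v4i32 __attribute__((vector_size(16)));

  v4i32 zip(v2i32 a, v2i32 b) {
    return __builtin_shufflevector(a, b, 0, 2, 1, 3); // -> vzip.32
  }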
Differential Revision: http://reviews.llvm.org/D10424
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240118 91177308-0d34-0410-b5e6-96231b3b80d8
In preparation for a future patch: this makes it easier to reuse the same
matching to generate different nodes, without duplication.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240116 91177308-0d34-0410-b5e6-96231b3b80d8
Deduplicates some code and lets us use LEA on Atom when adjusting the
stack around callee-cleanup calls. This is the only intended
functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240044 91177308-0d34-0410-b5e6-96231b3b80d8
They had been getting emitted as a section + offset reference, which
is bogus since the value needs to be the offset within the GOT, not
the actual address of the symbol's object.
Differential Revision: http://reviews.llvm.org/D10441
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240020 91177308-0d34-0410-b5e6-96231b3b80d8