This is a follow-up patch to r214366, which added the same behavior to the
AArch64 and X86 FastISel code. This fix replicates the existing behavior of
SelectionDAG in FastISel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214531 91177308-0d34-0410-b5e6-96231b3b80d8
Found by inspection while looking at PR20280: the code would mark slots
in the parameter save area in which a byval parameter is passed as
"immutable". This is not correct, since code is allowed to modify
byval parameters in place in the parameter save area.
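A minimal sketch of the distinction (argument names illustrative, assuming
the usual MachineFrameInfo API):

    // A byval argument's slot may be modified in place by the callee, so
    // it must not be created as an immutable fixed object.
    int FI = MFI->CreateFixedObject(ArgSize, ArgOffset,
                                    /*Immutable=*/!Flags.isByVal());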
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214517 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of moving the data out of an ErrorOr<std::unique_ptr<Foo>>, get
a reference to it.
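A minimal sketch of the suggested idiom (Foo, createFoo(), and use() are
illustrative):

    ErrorOr<std::unique_ptr<Foo>> FooOrErr = createFoo();
    if (std::error_code EC = FooOrErr.getError())
      return EC;
    // Bind a reference rather than moving the unique_ptr out; the
    // ErrorOr retains ownership while we use the object.
    std::unique_ptr<Foo> &F = *FooOrErr;
    use(*F);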
Thanks to David Blaikie for the suggestion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214516 91177308-0d34-0410-b5e6-96231b3b80d8
Note: The current code in DecodeMSRMask() rejects the unpredictable A/R-class MSR mask '0000' with Fail. The code in this patch follows that style and also rejects unpredictable M-class MSR masks with Fail (instead of SoftFail). If SoftFail is preferred in this case, additional changes to ARMInstPrinter (to print non-symbolic masks) and ARMAsmParser (to parse non-symbolic masks) will be needed.
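For reference, a minimal sketch of that Fail-on-unpredictable style
(simplified; the real decoder distinguishes the mask classes and assumes
the usual MC disassembler headers):

    static DecodeStatus DecodeMSRMask(MCInst &Inst, unsigned Val,
                                      uint64_t Address, const void *Decoder) {
      if (Val == 0)                  // unpredictable mask '0000'
        return MCDisassembler::Fail; // hard reject, not SoftFail
      Inst.addOperand(MCOperand::CreateImm(Val));
      return MCDisassembler::Success;
    }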
Patch by Petr Pavlu!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214505 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM ARM prohibits LDRB/LDRSB instructions with writeback into the destination register. With this commit, this constraint is now enforced and we stop assembling LDRB/LDRSB instructions with unpredictable behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214500 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM ARM prohibits LDRH/LDRSH instructions with writeback into the destination register. With this commit, this constraint is now enforced and we stop assembling LDRH/LDRSH instructions with unpredictable behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214499 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM ARM prohibits LDR instructions with writeback into the destination register. With this commit, this constraint is now enforced and we stop assembling LDR instructions with unpredictable behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214498 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Big-endian mode was not correctly adjusting the offset for types smaller
than an ABI slot.
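A hedged sketch of the adjustment (names illustrative):

    // On big-endian targets a value narrower than the ABI slot occupies
    // the high-order bytes of the slot, so point the offset at those bytes.
    if (IsBigEndian && TypeSize < SlotSize)
      Offset += SlotSize - TypeSize;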
Fixes PR19612
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: sstankovic, llvm-commits
Differential Revision: http://reviews.llvm.org/D4556
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214493 91177308-0d34-0410-b5e6-96231b3b80d8
"Create a default symver on Linux like ELF OSes."
Fails the build under Debian with ld.gold:
/usr/bin/ld.gold: --default-symver: unknown option
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214482 91177308-0d34-0410-b5e6-96231b3b80d8
Altivec vector loads on PowerPC have an interesting property: They always load
from an aligned address (by rounding down the address actually provided if
necessary). In order to generate an actual unaligned load, you can generate two
load instructions, one with the original address, one offset by one vector
length, and use a special permutation to extract the bytes desired.
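As an illustration of that idiom, a hedged sketch using the AltiVec C/C++
builtins (this is the classic lvx/lvsl/vperm pattern, not the actual DAG
lowering):

    #include <altivec.h>

    vector unsigned char load_unaligned(const unsigned char *p) {
      vector unsigned char lo   = vec_ld(0, p);   // aligned load at rounded-down address
      vector unsigned char hi   = vec_ld(16, p);  // aligned load one vector length later
      vector unsigned char mask = vec_lvsl(0, p); // permute control from the misalignment
      return vec_perm(lo, hi, mask);              // extract the desired 16 bytes
    }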
When this was originally implemented, I generated these two loads using regular
ISD::LOAD nodes, now marked as aligned. Unfortunately, there is a problem with
this:
The alignment of a load does not contribute to its identity, and SDNodes
are uniqued. So, imagine that we have some load, L1, that is not aligned.
The routine will create two loads, L1(aligned) and (L1+16)(aligned).
Further imagine that there had already existed a load (L1+16)(unaligned) with
the same chain operand as the load L1. When (L1+16)(aligned) is created as part
of the lowering of L1, this load *is* also the (L1+16)(unaligned) node, just
now marked as aligned (because the new alignment overwrites the old). But the
original users of (L1+16)(unaligned) now get the data intended for the
permutation yielding the data for L1, and (L1+16)(unaligned) no longer exists
to get its own permutation-based expansion. This was PR19991.
A second potential problem has to do with the MMOs on these loads, which can be
used by AA during instruction scheduling to break chain-based dependencies. If
the new "aligned" loads get the MMO from the original unaligned load, this does
not represent the fact that it will load data from below the original address.
Normally, this would not matter, but this load might be combined with another
load pair for a previous vector, and then the dependency on the otherwise-
ignored lower bytes can matter.
To fix both problems, the necessary loads are now generated using
ppc_altivec_lvx intrinsics instead of regular ISD::LOAD nodes. These are
provided with MMOs with a conservative address range.
Unfortunately, I no longer have a failing test case (since PR19991 was
reported, other changes in CodeGen have forced this bug back into hiding).
Nevertheless, this should fix the underlying problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214481 91177308-0d34-0410-b5e6-96231b3b80d8
ADDS and SUBS cannot encode negative immediates or immediates larger than
12 bits. This fix checks whether the immediate version can be used under
these constraints and whether we can convert ADDS to SUBS, or vice versa,
to support negative immediates.
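A hedged sketch of the check (simplified; isUInt<> is from MathExtras.h,
other names illustrative):

    // Fold a negative immediate by negating it and flipping ADDS <-> SUBS;
    // whatever remains must still fit the 12-bit unsigned immediate field.
    if (Imm < 0) {
      Imm = -Imm;
      IsAdd = !IsAdd;
    }
    if (!isUInt<12>(Imm))
      return false; // fall back to materializing the immediate in a register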
Also update the test cases to test the immediate versions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214470 91177308-0d34-0410-b5e6-96231b3b80d8
When generating unaligned vector loads, we need to search for other loads or
stores nearby offset by one vector width. If we find one, then we know that we
can safely generate another aligned load at that address. Otherwise, we must
generate the next load using an offset of the vector width minus one byte (so
we don't read off the end of the allocation if the base unaligned address
happened to be aligned at runtime). We had previously done this using only
other vector loads and stores, but did not consider the PowerPC-specific vector
load/store intrinsics. Now we'll also consider vector intrinsics. By itself,
this change is a feature enhancement, but is a necessary step toward fixing the
underlying problem behind PR19991.
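A hedged sketch of the broadened check (the function name and intrinsic
selection are illustrative; the real code also reasons about the offsets
involved):

    // Besides plain loads and stores, treat PPC vector load/store
    // intrinsics as evidence that the neighboring vector-sized slot is
    // safely dereferenceable.
    static bool isVectorMem(SDNode *N) {
      if (N->getOpcode() == ISD::LOAD || N->getOpcode() == ISD::STORE)
        return true;
      if (N->getOpcode() == ISD::INTRINSIC_W_CHAIN) {
        unsigned IID = cast<ConstantSDNode>(N->getOperand(1))->getZExtValue();
        return IID == Intrinsic::ppc_altivec_lvx ||
               IID == Intrinsic::ppc_altivec_lvxl;
      }
      return false;
    }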
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214469 91177308-0d34-0410-b5e6-96231b3b80d8
This improves the diagnostics from the regular assembler, but more
importantly it fixes an assertion when parsing inline assembly. Test
landing in Clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214468 91177308-0d34-0410-b5e6-96231b3b80d8
Abs/neg folding has moved out of foldOperands and into the instruction
selection phase using complex patterns. As a consequence of this
change, we now prefer to select the 64-bit encoding for most
instructions and the modifier operands have been dropped from
integer VOP3 instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214467 91177308-0d34-0410-b5e6-96231b3b80d8
This is useful for cases in which stand-alone patterns are preferred to the
patterns included in the instruction definitions. Instead of requiring that
stand-alone patterns set a larger AddedComplexity value, which can be
confusing to new developers, this allows us to reduce the complexity of the
included patterns to achieve the same result.
There will be test cases for this added to the R600 backend in a
future commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214466 91177308-0d34-0410-b5e6-96231b3b80d8
We were incorrectly assuming that all VOP2 instructions can read SGPRs
in Src0, but this is not true for instructions that read carry-in from
VCC.
The old logic has been replaced with new logic which checks the defined
register classes of the VOP2 instruction to determine whether or not to
legalize the operands.
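A hedged sketch of the class-based check (simplified; legalizeSrc0() is a
hypothetical stand-in for the actual legalization):

    // Ask the instruction's operand definition whether Src0 may hold this
    // register, instead of assuming all VOP2 instructions accept an SGPR.
    const MCInstrDesc &Desc = MI->getDesc();
    int RCID = Desc.OpInfo[Src0Idx].RegClass;
    const TargetRegisterClass *RC = TRI->getRegClass(RCID);
    if (!RC->contains(Src0Reg))
      legalizeSrc0(MI); // e.g. copy the value into a VGPR first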
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214465 91177308-0d34-0410-b5e6-96231b3b80d8
We were commuting the instruction but still shrinking it using the
original opcode.
NOTE: This is a candidate for the 3.5 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214463 91177308-0d34-0410-b5e6-96231b3b80d8
This allows assembling the two new instructions, encls and enclu, for the
SKX processor model.
Note the diffs are bigger than one might expect: to fit the new MRM_CF and
MRM_D7 forms in the right places, things had to be renumbered and shuffled
down, causing a bit more churn.
rdar://16228228
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214460 91177308-0d34-0410-b5e6-96231b3b80d8
If INTRINSIC_W_CHAIN and INTRINSIC_VOID are MemIntrinsicSDNodes, and a
MemIntrinsicSDNode is a MemSDNode, then INTRINSIC_W_CHAIN and INTRINSIC_VOID
must be MemSDNodes too.
Noticed by inspection.
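A hedged illustration of the practical consequence (N is an arbitrary
SDNode*):

    // Memory intrinsics built via getMemIntrinsicNode() carry a
    // MachineMemOperand; since MemIntrinsicSDNode derives from MemSDNode,
    // a MemSDNode cast should now succeed for them as well.
    if (const MemSDNode *M = dyn_cast<MemSDNode>(N))
      (void)M->getMemOperand();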
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214452 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, when DAGCombine converts loads feeding a select into a select of
addresses feeding a load, the new load inherits the isInvariant flag of the
left side. This is incorrect, since invariant loads can be reordered in
cases where it is illegal to reorder normal loads.
This patch adds an isInvariant parameter to getExtLoad() and updates all call
sites to pass in the data if they have it or false if they don't. It also
changes the DAGCombine to use that data to make the right decision when
creating the new load.
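A hedged sketch of how the new parameter is meant to be used in the combine
(argument list abbreviated; LLD/RLD stand for the left and right loads):

    // The merged load may be marked invariant only if *both* originals were.
    bool Invariant = LLD->isInvariant() && RLD->isInvariant();
    SDValue Load = DAG.getExtLoad(LLD->getExtensionType(), DL, VT, Chain,
                                  Addr, MachinePointerInfo(),
                                  LLD->getMemoryVT(), LLD->isVolatile(),
                                  LLD->isNonTemporal(), Invariant,
                                  LLD->getAlignment());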
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214449 91177308-0d34-0410-b5e6-96231b3b80d8
The current remark is ambiguous and makes it sound like explicitly specifying vectorization will allow the loop to be vectorized. This is not the case. The improved remark directs the user to -Rpass-analysis=loop-vectorize to determine the cause of the missed vectorization.
Reviewed by Arnold Schwaighofer
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@214445 91177308-0d34-0410-b5e6-96231b3b80d8