This also fixes a bug in the predication of LR to LOCR: I'd forgotten
that with these in-place instruction builds, the implicit operands need
to be added manually. I think the bug was latent until now, but it is
tested by int-cmp-45.c. The patch also adds a CC valid mask to STOC,
again tested by int-cmp-45.c.
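A minimal sketch of the pattern being fixed (names and operand layout are
illustrative, not the exact committed code): setDesc() swaps the opcode in
place but does not add the implied operands of the new description, so the
CC use has to be appended by hand.

  #include "llvm/CodeGen/MachineInstrBuilder.h"

  void predicateLRToLOCR(MachineInstr &MI, const SystemZInstrInfo &TII,
                         unsigned CCValid, unsigned CCMask) {
    MI.setDesc(TII.get(SystemZ::LOCR));           // rebuild LR as LOCR in place
    MachineInstrBuilder(*MI.getParent()->getParent(), &MI)
        .addImm(CCValid)                          // CC values the producer can set
        .addImm(CCMask)                           // condition for the move
        .addReg(SystemZ::CC, RegState::Implicit); // the manually-added CC use
  }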
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187573 91177308-0d34-0410-b5e6-96231b3b80d8
Convert >= 1 to > 0, etc. Using a comparison with zero isn't a win on its own,
but it exposes more opportunities for CC reuse (see the next patch).
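At the source level the transformation is simply the following (both forms
are equivalent for signed integers; the zero form lets the branch reuse a CC
value that an earlier instruction already set):

  bool before(int x) { return x >= 1; } // compare against 1
  bool after(int x)  { return x > 0; }  // compare against 0, same result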
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187571 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Ana Pazos.
- Completed implementation of instruction formats:
    AdvSIMD three same
    AdvSIMD modified immediate
    AdvSIMD scalar pairwise
- Completed implementation of instruction classes
  (some of the instructions in these classes
  belong to yet-unfinished instruction formats):
    Vector Arithmetic
    Vector Immediate
    Vector Pairwise Arithmetic
- Initial implementation of instruction formats:
    AdvSIMD scalar two-reg misc
    AdvSIMD scalar three same
- Initial implementation of instruction class:
    Scalar Arithmetic
- Initial clang changes to support ARMv8 intrinsics
  (an illustrative use is sketched after this list).
  Note: no clang changes for scalar intrinsic function-name mangling yet.
- Comprehensive test cases for the added instructions,
  verifying code generation, encoding, decoding, diagnostics and intrinsics.
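For illustration, one of the vector-arithmetic intrinsics this enables, used
from C or C++ (vaddq_f64 is an ARMv8-only AdvSIMD intrinsic; the exact
intrinsic coverage of this patch may differ):

  #include <arm_neon.h>

  // AdvSIMD "three same" format: FADD Vd.2D, Vn.2D, Vm.2D
  float64x2_t add2(float64x2_t a, float64x2_t b) {
    return vaddq_f64(a, b);
  }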
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187567 91177308-0d34-0410-b5e6-96231b3b80d8
The following are made available by clang in the XCore ABI
(a usage sketch follows the list):
__builtin_bitrev
__builtin_getid
__builtin_getps
__builtin_setps
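A hedged sketch of how these look in source (the builtin names come from the
list above; the exact prototypes and the processor-state register number used
here are assumptions):

  // clang --target=xcore
  unsigned demo(unsigned x) {
    unsigned rev = __builtin_bitrev(x);  // reverse the bit order of x
    unsigned id  = __builtin_getid();    // id of the executing logical core
    unsigned ps  = __builtin_getps(0x1); // read a processor-state register
    __builtin_setps(0x1, ps);            // write a processor-state register
    return rev ^ id;
  }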
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187566 91177308-0d34-0410-b5e6-96231b3b80d8
1) They should never be inlined.
2) Their naming was inconsistent with gcc mips16.
3) Stubs should not have the global attribute
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187555 91177308-0d34-0410-b5e6-96231b3b80d8
The clients of this code have all been updated to support AliasArgs.
This depends on Clang r187538 and lld r187541.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187546 91177308-0d34-0410-b5e6-96231b3b80d8
This makes option aliases more powerful by enabling them to
pass along arguments to the option they're aliasing.
For example, if we have a joined option "-foo=", we can now
specify a flag option "-bar" to be an alias of that, with the
argument "baz".
This is especially useful for the cl.exe-compatible clang driver,
where many options are aliases. For example, this patch enables
us to alias "/Ox" to "-O3" (-O is a joined option), and "/WX" to
"-Werror" (again, -W is a joined option).
Differential Revision: http://llvm-reviews.chandlerc.com/D1245
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187537 91177308-0d34-0410-b5e6-96231b3b80d8
While the .td entry is nice and all, getting it to match takes a pretty
gross hack in ARMAsmParser::ParseInstruction() because of the handling of
other "subs" instructions. I ran it by Jim Grosbach and he said it was
about what he expected it would take to make this work given the existing code.
rdar://14214063
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187530 91177308-0d34-0410-b5e6-96231b3b80d8
If we merge a vector while that vector is in use, we generate an
artificial antidependency that can prevent two tex/vtx instructions from
using the same clause, and thus produce extra clauses that reduce performance.
There is no test case, as such a situation is really hard to reproduce.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187516 91177308-0d34-0410-b5e6-96231b3b80d8
There are a lot of restrictions on instruction groups that contain
LDS instructions, so for now we will be conservative and not packetize
anything else with them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187513 91177308-0d34-0410-b5e6-96231b3b80d8
We were using two instructions for a similar purpose: break and
predicated break. Only predicated_break was emitted, and it was
lowered in R600ControlFlowFinalizer to JUMP;CF_BREAK;POP.
This commit simplifies the situation by making AMDILCFGStructurizer
emit IF_PREDICATE;BREAK;ENDIF instead of predicated_break (which
is now removed).
There is no functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187510 91177308-0d34-0410-b5e6-96231b3b80d8
The loop optimizers were assuming that scales > 1 were OK. I think this
is actually a bug in TargetLoweringBase::isLegalAddressingMode(),
since it seems to be trying to reject anything that isn't r+i or r+r,
but it has no default case for scales other than 0, 1 or 2. Implementing
the hook for z means that z can no longer be used to test any change there, though.
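Roughly what the SystemZ override looks like (simplified sketch; the real
hook also rejects global bases, and details may differ):

  bool SystemZTargetLowering::isLegalAddressingMode(const AddrMode &AM,
                                                    Type *Ty) const {
    // Require a signed 20-bit displacement.
    if (!isInt<20>(AM.BaseOffs))
      return false;
    // Indexing is OK, but no scale factor can be applied.
    return AM.Scale == 0 || AM.Scale == 1;
  }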
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187497 91177308-0d34-0410-b5e6-96231b3b80d8
Extend r187495 to conditional loads. I split this out because the
easiest way seemed to be to force a particular operand order in
SystemZISelDAGToDAG.cpp.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187496 91177308-0d34-0410-b5e6-96231b3b80d8
System z branches have a mask to select which of the 4 CC values should
cause the branch to be taken. We can invert a branch by inverting the mask.
However, not all instructions can produce all 4 CC values, so inverting
the branch like this can lead to some oddities. For example, integer
comparisons only produce a CC of 0 (equal), 1 (less) or 2 (greater).
If an integer EQ is reversed to NE before instruction selection,
the branch will test for 1 or 2. If instead the branch is reversed
after instruction selection (by inverting the mask), it will test for
1, 2 or 3. Both are correct, but the second isn't really canonical.
This patch therefore keeps track of which CC values are possible
and uses this when inverting a mask.
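The idea in miniature (the helper name is invented for illustration): with
mask bits 8/4/2/1 standing for CC 0/1/2/3, inverting against the valid set
rather than against all four bits keeps the mask canonical.

  unsigned invertCCMask(unsigned CCMask, unsigned CCValid) {
    return CCMask ^ CCValid;
  }

For an integer compare, CCValid is 14 (CC 0, 1 and 2), so inverting the EQ
mask 8 gives 6 ("less or greater") rather than the naive ~8 & 15 = 7, which
would also test the impossible CC 3.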
Although this is mostly cosmetic, it fixes undefined behavior
for the CIJNLH in branch-08.ll. Another fix would have been
to mask out bit 0 when generating the fused compare and branch,
but the point of this patch is that we shouldn't need to do that
in the first place.
The patch also makes it easier to reuse CC results from other instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187495 91177308-0d34-0410-b5e6-96231b3b80d8
r187116 moved compare-and-branch generation from the instruction-selection
pass to the peephole optimizer (via optimizeCompare). It turns out that even
this is a bit too early. Fused compare-and-branch instructions don't
interact well with predication, where a CC result is needed. They also
make it harder to reuse the CC side-effects of earlier instructions
(not yet implemented, but the subject of a later patch).
Another problem was that the AnalyzeBranch family of routines wasn't
handling compares and branches, so we weren't able to reverse the fused
form in cases where we would reverse a separate branch. This could have
been fixed by extending AnalyzeBranch, but given the other problems,
I've instead moved the fusing to the long-branch pass, which is also
responsible for the opposite transformation: splitting out-of-range
compares and branches into separate compares and long branches.
I've added a test for the AnalyzeBranch problem. A test for the
predication problem is included in the next patch, which fixes a bug
in the choice of CC mask.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187494 91177308-0d34-0410-b5e6-96231b3b80d8
r186399 aggressively used the RISBG instruction for immediate ANDs,
both because it can handle some values that AND IMMEDIATE can't,
and because it allows the destination register to be different from
the source. I realized later while implementing the distinct-ops
support that it would be better to leave the choice up to
convertToThreeAddress() instead. The AND IMMEDIATE form is shorter
and is less likely to be cracked.
This is a problem for 32-bit ANDs because we assume that all 32-bit
operations will leave the high word untouched, whereas RISBG used in
this way will either clear the high word or copy it from the source
register. The patch uses the z196 instruction RISBLG for this instead.
This means that z10 will be restricted to NILL, NILH and NILF for
32-bit ANDs, but I think that should be OK for now. Although we're
using z10 as the base architecture, the optimization work is going
to be focused more on z196 and zEC12.
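Sketching the 32-bit decision that is now left to convertToThreeAddress()
(the helper below is illustrative, not the committed code):

  static unsigned chooseAndImmOpcode(uint32_t Mask) {
    if ((Mask & 0xffff0000) == 0xffff0000)
      return SystemZ::NILL; // only the low halfword clears bits
    if ((Mask & 0x0000ffff) == 0x0000ffff)
      return SystemZ::NILH; // only the high halfword clears bits
    return SystemZ::NILF;   // general 32-bit immediate AND
  }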
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187492 91177308-0d34-0410-b5e6-96231b3b80d8
All insertf*/extractf* functions were replaced with insert*/extract*, since we have both insertf and inserti forms.
Added lowering for INSERT_VECTOR_ELT / EXTRACT_VECTOR_ELT for 512-bit vectors.
Added lowering for EXTRACT/INSERT subvector for 512-bit vectors.
Added a test.
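A source-level way to reach the new lowering, using vector extensions rather
than any particular intrinsic (illustrative; the committed test may differ):

  typedef float v16f32 __attribute__((vector_size(64)));

  float extractLane(v16f32 V, int I) {
    return V[I];   // EXTRACT_VECTOR_ELT on a 512-bit vector
  }

  v16f32 insertLane(v16f32 V, float F, int I) {
    V[I] = F;      // INSERT_VECTOR_ELT on a 512-bit vector
    return V;
  }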
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187491 91177308-0d34-0410-b5e6-96231b3b80d8
This fix is very lightweight. The same fix already existed for AddRec
but was missing for NAry expressions.
This is obviously an improvement, but I'm unsure how to write a test for
the compile-time problems it avoids.
Patch by Xiaoyi Guo!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187475 91177308-0d34-0410-b5e6-96231b3b80d8
For a testcase like the following:
  typedef unsigned long uint64_t;

  typedef struct {
    uint64_t lo;
    uint64_t hi;
  } blob128_t;

  void add_128_to_128(const blob128_t *in, blob128_t *res) {
    asm ("PAND %1, %0" : "+Q"(*res) : "Q"(*in));
  }
where we'll fail to allocate the register for the output constraint, the
matching input constraint will not find a register to match, and we could
try to search past the end of the current operands array.
On the idea that we'd like to attempt to keep compilation going
to find more errors in the module, change the error cases when
we're visiting inline asm IR to return immediately and avoid
trying to create a node in the DAG. This leaves us with only
a single error message per inline asm instruction, but allows us
to safely keep going in the general case.
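Schematically, the error paths now look something like this (all identifiers
here are invented for the sketch):

  if (!foundMatchingOperand) {
    // Report the problem against the call so the user still sees it...
    DAG.getContext()->emitError(&Call,
        "inline asm error: matching constraint has no register");
    // ...then bail out rather than building a DAG node that later
    // stages would choke on; the rest of the module keeps compiling.
    return;
  }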
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187470 91177308-0d34-0410-b5e6-96231b3b80d8