Making use of the recently-added ISD::FROUND, which allows for custom lowering
of round(), the PPC backend will now lower round() to frin. Previously, we had
been using frin to lower nearbyint() (and rint(), via some custom lowering to
handle the extra fenv-flags requirements), but only in fast-math mode because
frin does not round ties to even. Several users had complained about this
behavior, and this new mapping of frin to round() is certainly more appropriate
(and does not require fast-math mode).
In effect, this reverts r178362 (and part of r178337, replacing the nearbyint
mapping with the round mapping).
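To make the semantic difference concrete (this is plain libm behavior, not
code from the patch): round() rounds halfway cases away from zero, which is
what frin implements, while nearbyint() honors the current rounding mode
(round-to-nearest, ties-to-even by default):

#include <math.h>
#include <stdio.h>

int main(void) {
  /* round() rounds ties away from zero, matching frin; nearbyint()
     uses the current rounding mode, which defaults to ties-to-even. */
  printf("round(2.5)     = %.1f\n", round(2.5));     /* 3.0 */
  printf("nearbyint(2.5) = %.1f\n", nearbyint(2.5)); /* 2.0 */
  return 0;
}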
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187960 91177308-0d34-0410-b5e6-96231b3b80d8
All libm floating-point rounding functions, except for round(), had their own
ISD nodes. Recent PowerPC cores have an instruction for round(), and so here I'm
adding ISD::FROUND so that round() can be custom lowered as well.
For the most part, this is straightforward. I've added an intrinsic
and a matching ISD node just like those for nearbyint() and friends. I've
named the SelectionDAG pattern frnd, because ISD::FP_ROUND has already
claimed fround.
This will be used by the PowerPC backend in a follow-up commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187926 91177308-0d34-0410-b5e6-96231b3b80d8
This follows the same lines as the integer code. In the end it seemed
easier to have a second 4-bit mask in TSFlags to specify the compare-like
CC values. That eats one more TSFlags bit than adding a CCHasUnordered
would have done, but it feels more concise.
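As a rough sketch of what reading the new field back looks like (the names
and bit positions here are made up for illustration, not the actual SystemZ
TSFlags layout):

#include <stdint.h>

/* Hypothetical layout: one 4-bit field for the CC values an instruction
   can produce, and a second 4-bit field for the compare-like CC values. */
enum {
  CCValuesShift        = 0,
  CCValuesMask         = 0xf,
  CompareCCValuesShift = 4,
  CompareCCValuesMask  = 0xf
};

static unsigned getCompareCCValues(uint64_t TSFlags) {
  return (unsigned)(TSFlags >> CompareCCValuesShift) & CompareCCValuesMask;
}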
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187883 91177308-0d34-0410-b5e6-96231b3b80d8
Since the VSrc_* register classes contain both VGPRs and SGPRs, copies
that used to be emitted by isel like this:
SGPR = COPY VGPR
will now be emitted like this:
VSrc = COPY VGPR
This patch also adds a pass that tries to identify and fix situations where
a VGPR to SGPR copy may occur. Hopefully, these changes will make it
impossible for the compiler to generate illegal VGPR to SGPR copies.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187831 91177308-0d34-0410-b5e6-96231b3b80d8
Delete unnecessary jalr InstAliases in Mips64InstrInfo.td and add the code to print
jalr InstAliases in MipsInstPrinter::printAlias.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187821 91177308-0d34-0410-b5e6-96231b3b80d8
The PPC backend had been missing a pattern to generate mulli for 64-bit
multiplies. We had been generating it only for 32-bit multiplies. Unfortunately,
generating li + mulld unnecessarily increases register pressure.
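For example (hypothetical source; the assembly in the comment is approximate):

/* A 64-bit multiply by a constant that fits in mulli's signed 16-bit
   immediate. Previously this became roughly "li 4, 100; mulld 3, 3, 4";
   with the new pattern a single "mulli 3, 3, 100" suffices, freeing a
   register. */
long long scale(long long x) {
  return x * 100;
}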
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187807 91177308-0d34-0410-b5e6-96231b3b80d8
We do use a very small set of physical registers, so account for
them in the virtual register encoding between MachineInstr and MC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187799 91177308-0d34-0410-b5e6-96231b3b80d8
This change converts the NVPTX target to use the MC infrastructure
instead of directly emitting MachineInstr instances. This brings
the target more up-to-date with LLVM TOT, and should fix PR15175
and PR15958 (libNVPTXInstPrinter is empty) as a side-effect.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187798 91177308-0d34-0410-b5e6-96231b3b80d8
This change came about primarily because of two issues in the existing code.
Neither of:

; common declaration for both tests
declare i32 @ret32(i32 returned)

define i64 @test1(i64 %val) {
  %in = trunc i64 %val to i32
  tail call i32 @ret32(i32 returned %in)
  ret i64 %val
}

define i32 @test2(i64 %val) {
  tail call i32 @ret32(i32 returned undef)
  ret i32 42
}

should be a tail call, and the function sameNoopInput is responsible. The main
problem is that it is completely symmetric in the "tail call" and "ret" value,
but in reality different things are allowed on each side.
For these cases:
1. Any truncation should lead to a larger value being generated by "tail call"
than needed by "ret".
2. Undef should only be allowed as a source for ret, not as a result of the
call.
Along the way I noticed that a mismatch between what this function treats as a
valid truncation and what the backends see can lead to invalid calls as well
(see x86-32 test case).
This patch refactors the code so that instead of being based primarily on
values which it recurses into when necessary, it starts by inspecting the type
and considers each fundamental slot that the backend will see in turn. For
example, given a pathological function that returned {{}, {{}, i32, {}}, i32},
we would consider each "real" i32 in turn, and ask if it passes through
unchanged. This is much closer to what the backend sees as a result of
ComputeValueVTs.
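As a toy illustration of the "fundamental slot" idea (a made-up type model,
not the actual ComputeValueVTs code):

#include <stdio.h>

/* A struct type flattens to its scalar leaves; {{}, {{}, i32, {}}, i32}
   contains exactly two i32 slots. */
enum Kind { I32, STRUCT };
struct Type {
  enum Kind kind;
  int nmembers;                /* valid when kind == STRUCT */
  const struct Type *members;  /* valid when kind == STRUCT */
};

static int countSlots(const struct Type *root) {
  /* Walk with an explicit worklist rather than recursion; a fixed-size
     stack is plenty for this toy example. */
  const struct Type *stack[32];
  int top = 0, n = 0;
  stack[top++] = root;
  while (top > 0) {
    const struct Type *t = stack[--top];
    if (t->kind == I32) { n++; continue; }
    for (int i = t->nmembers - 1; i >= 0; --i)
      stack[top++] = &t->members[i];
  }
  return n;
}

int main(void) {
  const struct Type i32   = { I32, 0, 0 };
  const struct Type empty = { STRUCT, 0, 0 };
  const struct Type innerMembers[] = { empty, i32, empty };
  const struct Type inner = { STRUCT, 3, innerMembers };
  const struct Type outerMembers[] = { empty, inner, i32 };
  const struct Type outer = { STRUCT, 3, outerMembers };
  printf("%d real i32 slots\n", countSlots(&outer)); /* prints 2 */
  return 0;
}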
Aside from the bug fixes, this eliminates the recursion that's going on and, I
believe, makes the bulk of the code significantly easier to understand. The
trade-off is the nasty iterators needed to find the real types inside a
returned value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187787 91177308-0d34-0410-b5e6-96231b3b80d8
Without explicit dependencies, the per-file action and the corresponding
action in CommonTableGen could run in parallel, racing to emit the same
*.inc files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187780 91177308-0d34-0410-b5e6-96231b3b80d8
We use MVT::i32 for the vector index type, because we use 32-bit
operations to calculate offsets when dynamically indexing vectors.
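For illustration (a sketch of the address arithmetic, not the backend code):

#include <stdint.h>

/* Dynamically indexing a vector reduces to computing a byte offset from
   the element index; a 32-bit index type keeps the whole computation in
   32-bit operations. */
float extract_elt(const float *vec, uint32_t idx) {
  uint32_t offset = idx * (uint32_t)sizeof(float); /* 32-bit multiply */
  return *(const float *)((const char *)vec + offset);
}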
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187749 91177308-0d34-0410-b5e6-96231b3b80d8
This patch just uses a peephole test for "add; compare; branch" sequences
within a single block. The IR optimizers already convert loops to
decrement-and-branch-on-nonzero form in some cases, so even this
simplistic test triggers many times during a clang bootstrap and
projects/test-suite run. It looks like there are still cases where we
need to more strongly prefer branches on nonzero, though. E.g., I saw a
case where a loop that started out with a check for 0 ended up with a
check for -1. I'll try to look at that sometime.
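For reference, the shape of loop this matches (a sketch, not one of the new
tests):

/* In the loop latch, the decrement, the compare with zero, and the
   conditional branch form the single-block "add; compare; branch"
   sequence the peephole looks for. Assumes n > 0 on entry. */
void scale(double *a, unsigned n) {
  do {
    a[n - 1] *= 2.0;
    n -= 1;          /* add n, -1 */
  } while (n != 0);  /* compare with 0; branch on nonzero */
}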
I ended up adding the Reference class because MachineInstr::readsRegister()
doesn't check for subregisters (by design, as far as I could tell).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187723 91177308-0d34-0410-b5e6-96231b3b80d8
Perhaps predictably, doing comparison elimination on the fly during
SystemZLongBranch turned out to be a bad idea. The next patches make
use of LOAD AND TEST and BRANCH ON COUNT, both of which require
changes to earlier instructions.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187718 91177308-0d34-0410-b5e6-96231b3b80d8
helper functions. This can be optimized out later when the remaining
parts of the helper function work is moved into the Mips16HardFloat pass.
For now it forces us to use the 32-bit save/restore instructions instead
of the 16-bit ones.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187712 91177308-0d34-0410-b5e6-96231b3b80d8
Due to the weird and wonderful usual arithmetic conversions, some
calculations involving negative values were getting performed in
uint32_t and then promoted to int64_t, which is really not a good
idea.
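A minimal standalone demonstration of the hazard (not code from the patch):

#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t u = 1;
  /* 1 - 2 is computed in uint32_t and wraps to 4294967295 before the
     widening to int64_t; converting first gives the expected -1. */
  int64_t bad  = u - 2;
  int64_t good = (int64_t)u - 2;
  printf("bad = %lld, good = %lld\n", (long long)bad, (long long)good);
  return 0;
}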
Patch by Katsuhiro Ueno.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187703 91177308-0d34-0410-b5e6-96231b3b80d8
Using an object to do the cleanup may look like overkill, but it's safer and nicer than putting deletes everywhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187696 91177308-0d34-0410-b5e6-96231b3b80d8
Internally, the PowerPC backend names the 32-bit GPRs R[0-9]+, and names the
64-bit parent GPRs X[0-9]+. When matching inline assembly constraints with
explicit register names, on PPC64 when an i64 MVT has been requested, we need
to follow gcc's convention of using r[0-9]+ to refer to the 64-bit (parent)
registers.
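For example (hypothetical user code using gcc-style inline asm; it only
compiles when targeting PowerPC), the name "r4" below must select the
64-bit register the backend calls X4, not the 32-bit R4:

long long add64(long long a, long long b) {
  /* Explicit register name in the gcc convention: "r4" refers to the
     full 64-bit GPR when the value is 64 bits wide. */
  register long long rb asm("r4") = b;
  long long out;
  asm("add %0, %1, %2" : "=r"(out) : "r"(a), "r"(rb));
  return out;
}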
At some point, we'll probably want to arrange things so that the generic code
in TargetLowering uses the AsmName fields declared in *RegisterInfo.td in order
to match these inline asm register constraints. If we do that, this change can
be reverted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187693 91177308-0d34-0410-b5e6-96231b3b80d8