Make more use of the new mask helpers in valuetypes.h
Combine (sra (shl x, c1), c1) -> sext_inreg if legal
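For illustration, the C-level shape of this pattern (a hedged sketch; the function name is made up, not from the commit):

    /* Shifting left then arithmetically right by the same amount
       sign-extends the surviving low bits, which is exactly what a
       single sext_inreg node expresses. */
    int sign_extend_low8(int x) {
      return (int)((unsigned)x << 24) >> 24;  /* sext_inreg x, i8 */
    }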
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26263 91177308-0d34-0410-b5e6-96231b3b80d8
and SUBE nodes that actually expose what's going on and allow for
significant simplifications in the targets.
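As a hedged illustration of why carry-chained nodes simplify targets (a C sketch of the usual ADDC/ADDE expansion, not the actual lowering code; all names here are made up):

    /* On a 32-bit target, a 64-bit add splits into an add of the low
       words that produces a carry (ADDC) and an add of the high words
       that consumes it (ADDE); SUBC/SUBE play the same roles for
       subtraction. */
    typedef struct { unsigned lo, hi; } u64parts;

    u64parts add64(u64parts a, u64parts b) {
      u64parts r;
      r.lo = a.lo + b.lo;            /* ADDC: low-word add, sets carry */
      unsigned carry = r.lo < a.lo;  /* carry out of the low word */
      r.hi = a.hi + b.hi + carry;    /* ADDE: high-word add, uses carry */
      return r;
    }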
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26255 91177308-0d34-0410-b5e6-96231b3b80d8
conversions to __floatdidf instead of __floatdisf on targets that support
f32 but not i64 (e.g. sparc).
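A hedged sketch of the resulting lowering (the wrapper function is illustrative; __floatdidf is the usual i64 -> f64 soft-float helper):

    /* With no native i64, an i64 -> f32 conversion goes through
       __floatdidf and a round to f32, instead of __floatdisf. */
    double __floatdidf(long long);   /* runtime library helper */

    float i64_to_f32(long long x) {
      return (float)__floatdidf(x);  /* fp_round f64 -> f32 */
    }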
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26254 91177308-0d34-0410-b5e6-96231b3b80d8
other small targets that do that can be learned from. They also have
the added advantage of being tested :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26243 91177308-0d34-0410-b5e6-96231b3b80d8
proves to be worth 20% on Ptrdist/ks. Might be related to dependency
breaking support.
2. Added FsMOVAPSrr and FsMOVAPDrr as aliases to MOVAPSrr and MOVAPDrr. These
are used for FR32 / FR64 reg-to-reg copies.
3. Tell the register allocator to generate MOVSSrm / MOVSDrm and MOVSSmr / MOVSDmr to
spill / restore FsMOVAPSrr and FsMOVAPDrr values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26241 91177308-0d34-0410-b5e6-96231b3b80d8
and ComputeMaskedBits to match the new improved versions in instcombine.
Tested against all of multisource/benchmarks on ppc.
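For flavor, a minimal C sketch of the known-bits bookkeeping this kind of analysis performs (illustrative names, not the LLVM API):

    /* Track which bits are known zero / known one, and propagate that
       knowledge through an AND: a zero on either side forces a zero,
       a one survives only if both sides are known one. */
    struct KnownBits { unsigned known_zero, known_one; };

    struct KnownBits known_and(struct KnownBits a, struct KnownBits b) {
      struct KnownBits r;
      r.known_zero = a.known_zero | b.known_zero;
      r.known_one  = a.known_one  & b.known_one;
      return r;
    }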
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26238 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a testcase that Nate reduced from spass.
Also included are a couple of minor code changes that don't affect the generated
code at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26235 91177308-0d34-0410-b5e6-96231b3b80d8
We do not want to emit "Loop: ... brcond Out; br Loop", as it adds an extra
instruction in the loop. Instead, invert the condition and emit
"Loop: ... br!cond Loop; br Out.
Generalize the fix by moving it from PPCDAGToDAGISel to SelectionDAGLowering.
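Rendered as C with explicit gotos (a sketch of the two shapes, not the emitted code):

    /* Before: the loop ends with a conditional exit plus an
       unconditional branch back, i.e. two branches per iteration. */
    void before(int n) {
    loop:
      /* ...loop body... */
      if (--n == 0) goto out;  /* brcond Out */
      goto loop;               /* br Loop */
    out:;
    }

    /* After: the condition is inverted so the backwards branch is the
       conditional one and the exit is a fall-through. */
    void after(int n) {
    loop:
      /* ...loop body... */
      if (--n != 0) goto loop; /* br!cond Loop; fall through to Out */
    }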
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26231 91177308-0d34-0410-b5e6-96231b3b80d8
want to copy the files when the .cpp file changes, we want to copy them
to the .cvs versions when the .l/.y files change (like the comments even say).
This avoids having bogus changes show up in diffs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26229 91177308-0d34-0410-b5e6-96231b3b80d8
transfer.
According to the Intel P4 Optimization Manual:
Moves that write a portion of a register can introduce unwanted
dependences. The movsd reg, reg instruction writes only the bottom
64 bits of a register, not all 128 bits. This introduces a dependence on
the preceding instruction that produces the upper 64 bits (even if those
bits are no longer wanted). The dependence inhibits register renaming,
and thereby reduces parallelism.
Not to mention movaps is shorter than movss.
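The difference is visible with SSE intrinsics (a hedged illustration; compilers may of course schedule these differently):

    #include <xmmintrin.h>

    /* A full-register copy can be renamed freely (movaps), while a
       low-element move merges into the destination and therefore
       depends on whatever last wrote its upper bits (movss). */
    __m128 full_copy(__m128 a) { return a; }                           /* movaps */
    __m128 low_merge(__m128 a, __m128 b) { return _mm_move_ss(a, b); } /* movss */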
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26226 91177308-0d34-0410-b5e6-96231b3b80d8
Turns them into calls to memset / memcpy if 1) the buffer(s) are not DWORD aligned,
or 2) the size is not known to be greater than or equal to some minimum value (currently 128).
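A hedged sketch of that decision (the constant and names are illustrative, not the actual code):

    #include <stdbool.h>
    #include <stddef.h>

    #define MIN_INLINE_SIZE 128  /* "some minimum value (currently 128)" */

    /* Fall back to the memset / memcpy libcall when a buffer is not
       DWORD aligned, or when the size is not known to be at least the
       minimum profitable value. */
    bool use_libcall(size_t dst, size_t src, bool size_known, size_t size) {
      bool dword_aligned = (dst % 4 == 0) && (src % 4 == 0);
      return !dword_aligned || !(size_known && size >= MIN_INLINE_SIZE);
    }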
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26224 91177308-0d34-0410-b5e6-96231b3b80d8
unswitch this loop on 2 before sweating to unswitch on 1/3.
void test4(int N, int i, int C, int *P, int *Q) {
  int j;
  for (j = 0; j < N; ++j) {
    switch (C) {                       // general unswitching.
    default: P[i+j] = 0; break;
    case 1:  Q[i+j] = 0; break;
    case 3:  P[i+j] = Q[i+j]; break;
    case 2:  break;                    // TRIVIAL UNSWITCH on C==2
    }
  }
}
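For reference, a hedged sketch of what unswitching test4 on C == 2 first might produce (hand-written, not compiler output):

    void test4_unswitched(int N, int i, int C, int *P, int *Q) {
      int j;
      if (C == 2)
        return;  /* the case 2 body is empty, so the loop does nothing */
      for (j = 0; j < N; ++j) {
        switch (C) {  /* general unswitching still applies to 1 / 3 */
        default: P[i+j] = 0; break;
        case 1:  Q[i+j] = 0; break;
        case 3:  P[i+j] = Q[i+j]; break;
        }
      }
    }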
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26223 91177308-0d34-0410-b5e6-96231b3b80d8
this for example:
  for (j = 0; j < N; ++j) {   // trivial unswitch
    if (C)
      P[i+j] = 0;
  }
turning it into the obvious code without bothering to duplicate an empty loop.
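I.e., something like this hand-written sketch (illustrative, not compiler output):

    void example(int N, int i, int C, int *P) {
      int j;
      if (C) {
        for (j = 0; j < N; ++j)
          P[i+j] = 0;
      }
      /* no else branch: the C == 0 copy of the loop has an empty body */
    }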
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26220 91177308-0d34-0410-b5e6-96231b3b80d8