Commit Graph

1372 Commits

Author / SHA1 / Message / Date
Chris Lattner
25b8b8cb2c Fix CodeGen/Generic/2006-04-28-Sign-extend-bool.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28017 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-28 21:56:10 +00:00
Chris Lattner
e481e8bc8f Add a note
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27999 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-28 00:04:05 +00:00
Nate Begeman
b3f70d7d55 No functionality changes, but cleaner code with correct comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27966 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-25 04:45:59 +00:00
Nate Begeman
37efe67645 JumpTable support! This adds working asm and JIT support on x86 and PPC for
100% dense switch statements when relocations are non-PIC. This support will be
extended and enhanced in the coming days to support PIC and less dense forms of
jump tables.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27947 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-22 18:53:45 +00:00
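As a hedged aside on the entry above: "100% dense" means the switch's case values
are consecutive with no gaps, so the selector can index a jump table directly. The
function below is a hypothetical illustration, not taken from the commit or its tests.

/* A hypothetical fully dense switch: case values 0..3 with no gaps, so a
   non-PIC jump table can be indexed by the switch selector directly. */
int classify(int x) {
  switch (x) {
  case 0: return 10;
  case 1: return 20;
  case 2: return 30;
  case 3: return 40;
  default: return -1;
  }
}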
Chris Lattner
ea4a9c575f Teach the JIT how to relocate LI; this fixes the JIT on Prolangs-C/TimberWolfMC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27943 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-22 06:17:56 +00:00
Nate Begeman
d9993b0b2d Fix the comment
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27938 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-21 22:11:27 +00:00
Nate Begeman
6fcbd6961d Change the PPC JIT to use a Static relocation model
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27937 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-21 22:04:15 +00:00
Chris Lattner
ba2194ae84 Fix the CodeGen/PowerPC/buildvec_canonicalize.ll regression last night.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27908 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-20 19:01:30 +00:00
Chris Lattner
0231007269 Make sure that the new instructions selected have the right type. This fixes
CodeGen/PowerPC/2006-04-19-vmaddfp-crash.ll


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27868 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-20 05:58:10 +00:00
Chris Lattner
8cc5ffb27f add a note
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27832 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-19 16:22:38 +00:00
Chris Lattner
bf9b3716ed add a note
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27828 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-19 05:55:06 +00:00
Chris Lattner
80f362a48f These are correctly encoded by the JIT. I checked :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27810 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 19:03:38 +00:00
Chris Lattner
87140126e8 add a note
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27809 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 18:30:19 +00:00
Chris Lattner
0090120c2b Fix a crash on:
void foo2(vector float *A, vector float *B) {
  vector float C = (vector float)vec_cmpeq(*A, *B);
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
  *A = C;
}


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27808 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 18:28:22 +00:00
Chris Lattner
f70f8d91a7 pretty print node name
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27806 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 18:05:58 +00:00
Chris Lattner
90564f26d1 Implement an important entry from README_ALTIVEC:
If an altivec predicate compare is used immediately by a branch, don't
use a (serializing) MFCR instruction to read the CR6 register, which then requires
a compare to get the value back into a CR.  Instead, just branch on CR6 directly. :)

For example, for:
void foo2(vector float *A, vector float *B) {
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
}

We now generate:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        bne cr6, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

instead of:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        cmpwi cr0, r3, 0
        beq cr0, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

This implements CodeGen/PowerPC/vec_br_cmp.ll.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27804 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 17:59:36 +00:00
Chris Lattner
3be29059ab move some stuff around, clean things up
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27802 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 17:52:36 +00:00
Chris Lattner
cea2aa77eb Use vmladduhm to do v8i16 multiplies, which is faster and simpler than doing
even/odd halves.  Thanks to Nate for telling me what's what.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27793 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 04:28:57 +00:00
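A hedged sketch of the entry above (assumed source form, not from the commit): the
AltiVec intrinsic vec_mladd maps to vmladduhm, which computes a*b+c modulo 2^16 per
halfword element, so passing a zero addend yields a one-instruction v8i16 multiply.

#include <altivec.h>

/* Hypothetical v8i16 multiply via vec_mladd (vmladduhm) with a zero addend. */
vector unsigned short mul_v8i16(vector unsigned short a, vector unsigned short b) {
  return vec_mladd(a, b, (vector unsigned short){0, 0, 0, 0, 0, 0, 0, 0});
}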
Chris Lattner
19a815238e Implement v16i8 multiply with this code:
        vmuloub v5, v3, v2
        vmuleub v2, v3, v2
        vperm v2, v2, v5, v4

This implements CodeGen/PowerPC/vec_mul.ll.  With this, v16i8 multiplies are
6.79x faster than before.

Overall, UnitTests/Vector/multiplies.c is now 2.45x faster with LLVM than with
GCC.

Remove the 'integer multiplies' todo from the README file.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27792 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 03:57:35 +00:00
Chris Lattner
72dd9bdcc5 Lower v8i16 multiply into this code:
        li r5, lo16(LCPI1_0)
        lis r6, ha16(LCPI1_0)
        lvx v4, r6, r5
        vmulouh v5, v3, v2
        vmuleuh v2, v3, v2
        vperm v2, v2, v5, v4

where v4 is:
LCPI1_0:                                        ;  <16 x ubyte>
        .byte   2
        .byte   3
        .byte   18
        .byte   19
        .byte   6
        .byte   7
        .byte   22
        .byte   23
        .byte   10
        .byte   11
        .byte   26
        .byte   27
        .byte   14
        .byte   15
        .byte   30
        .byte   31

This is 5.07x faster on the G5 (measured) than lowering to scalar code +
loads/stores.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27789 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 03:43:48 +00:00
Chris Lattner
e7c768ea24 Custom lower v4i32 multiplies into a cute sequence, instead of having legalize
scalarize the sequence into 4 mullw's and a bunch of load/store traffic.

This speeds up v4i32 multiplies 4.1x (measured) on a G5.  This implements
PowerPC/vec_mul.ll


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27788 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18 03:24:30 +00:00
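A hedged illustration of the entry above (hypothetical, and assuming the compiler's
generic vector extension accepts element-wise '*'): a v4i32 multiply that legalize
would otherwise scalarize into four mullw's plus load/store traffic.

typedef int vi4 __attribute__ ((vector_size (16)));

/* Element-wise v4i32 multiply; now custom lowered to a short AltiVec sequence. */
void mul_v4i32(vi4 *A, vi4 *B, vi4 *C) {
  *A = *B * *C;
}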
Chris Lattner
22fcbb1320 remove done item
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27778 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 21:52:03 +00:00
Chris Lattner
f9568d8700 Don't diddle VRSAVE if no registers need to be added/removed from it. This
allows us to codegen functions as:

_test_rol:
        vspltisw v2, -12
        vrlw v2, v2, v2
        blr

instead of:

_test_rol:
        mfvrsave r2, 256
        mr r3, r2
        mtvrsave r3
        vspltisw v2, -12
        vrlw v2, v2, v2
        mtvrsave r2
        blr

Testcase here: CodeGen/PowerPC/vec_vrsave.ll


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27777 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 21:48:13 +00:00
Chris Lattner
402504b1ba Vectors that are known live-in and live-out are clearly already marked in
the vrsave register for the caller.  This allows us to codegen a function as:

_test_rol:
        mfspr r2, 256
        mr r3, r2
        mtspr 256, r3
        vspltisw v2, -12
        vrlw v2, v2, v2
        mtspr 256, r2
        blr

instead of:

_test_rol:
        mfspr r2, 256
        oris r3, r2, 40960
        mtspr 256, r3
        vspltisw v0, -12
        vrlw v2, v0, v0
        mtspr 256, r2
        blr


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27772 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 21:22:06 +00:00
Chris Lattner
939274fcfd Prefer to allocate V2-V5 before V0, V1. This lets us generate code like this:
        vspltisw v2, -12
        vrlw v2, v2, v2

instead of:

        vspltisw v0, -12
        vrlw v2, v0, v0

when a function is returning a value.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27771 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 21:19:12 +00:00
Chris Lattner
369503f841 Move some knowledge about registers out of the code emitter into the register info.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27770 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 21:07:20 +00:00
Chris Lattner
f7d2372b74 Use a small table instead of macros to do this conversion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27769 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 20:59:25 +00:00
Chris Lattner
dbce85dedf Make sure to check splats of every constant we can, handle splat(31) by
being a bit more clever, add support for odd splats from -31 to -17.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27764 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 18:09:22 +00:00
Chris Lattner
bdd558cd94 Teach the ppc backend to use rol and vsldoi to generate splatted constants.
This implements vec_constants.ll:test_vsldoi and test_rol


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27760 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 17:55:10 +00:00
Chris Lattner
966083fd1a add a note
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27758 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 17:29:41 +00:00
Chris Lattner
6876e66e5d Make some code more general, adding support for constant formation of several
new patterns.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27754 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 06:58:41 +00:00
Chris Lattner
c408382eca Learn how to make odd splatted constants in range [17,29]. This implements
PowerPC/vec_constants.ll:test_29.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27752 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 06:07:44 +00:00
Chris Lattner
4a998b9ca8 Pull some code out into a helper function.
Efficiently codegen even splats in the range [-32,30].

This allows us to codegen <30,30,30,30> as:

        vspltisw v0, 15
        vadduwm v2, v0, v0

instead of as a constant pool load.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27750 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 06:00:21 +00:00
Chris Lattner
5913810b82 Implement a TODO: for any shuffle that can be viewed as a v4[if]32 shuffle,
if it can be implemented in 3 or fewer discrete altivec instructions, codegen
it as such.  This implements Regression/CodeGen/PowerPC/vec_perf_shuffle.ll


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27748 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 05:28:54 +00:00
Chris Lattner
cffeb86169 Regenerate with adjusted costs
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27746 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 05:26:20 +00:00
Chris Lattner
586d6a808d Regenerate with correct offset
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27744 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 05:08:46 +00:00
Chris Lattner
c74e710000 Increase the opcodes by one each to disambiguate COPY from VMRGHW.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27742 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 00:47:48 +00:00
Chris Lattner
6703461f04 Check in a table, generated by llvm-PerfectShuffle, of optimal shuffles
of various 4-element vectors.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27739 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17 00:37:02 +00:00
Chris Lattner
f3f69decca Implement a TODO: have the legalizer canonicalize a bunch of operations to
one type (v4i32) so that we don't have to write patterns for each type, and
so that more CSE opportunities are exposed.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27731 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-16 01:37:57 +00:00
Chris Lattner
b17f1679e3 Make the BUILD_VECTOR lowering code much more aggressive w.r.t. constant vectors.
Remove some done items from the todo list.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27729 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-16 01:01:29 +00:00
Chris Lattner
730b45694b Fix a crash when faced with a shuffle vector that has an undef in its mask.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27726 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-15 23:48:05 +00:00
Chris Lattner
6e94af75de Add patterns for matching vnots with bit-converted inputs. Most of these will
go away when I start using Evan's binop type canonicalizer.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27725 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-15 23:45:24 +00:00
Chris Lattner
b097aa9353 Allow undef in a shuffle mask
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27714 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-14 23:19:08 +00:00
Chris Lattner
1a635d617a Move the rest of the PPCTargetLowering::LowerOperation cases out into
separate functions, for simplicity and code clarity.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27693 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-14 06:01:58 +00:00
Chris Lattner
f1b4708950 Pull the VECTOR_SHUFFLE and BUILD_VECTOR lowering code out into separate
functions, which makes the code much cleaner :)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27692 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-14 05:19:18 +00:00
Chris Lattner
a39d798e0a Force non-Darwin targets to use a static relocation model. This fixes PR734,
tested by CodeGen/Generic/vector.ll


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27657 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-13 17:10:48 +00:00
Chris Lattner
ed93790517 add a note, move an altivec todo to the altivec list.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27654 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-13 16:48:00 +00:00
Reid Spencer
3758552428 Add the README files to the distribution.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27651 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-13 06:39:24 +00:00
Chris Lattner
ac225ca051 Add a new way to match vector constants, which makes it easier to bang bits of
different types.

Codegen spltw(0x7FFFFFFF) and spltw(0x80000000) without a constant pool load,
implementing PowerPC/vec_constants.ll:test1.  This compiles:

typedef float vf __attribute__ ((vector_size (16)));
typedef int vi __attribute__ ((vector_size (16)));
void test(vi *P1, vi *P2, vf *P3) {
  *P1 &= (vi){0x80000000,0x80000000,0x80000000,0x80000000};
  *P2 &= (vi){0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF};
  *P3 = vec_abs((vector float)*P3);
}

to:

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        vspltisw v0, -1
        vslw v0, v0, v0
        lvx v1, 0, r3
        vand v1, v1, v0
        stvx v1, 0, r3
        lvx v1, 0, r4
        vandc v1, v1, v0
        stvx v1, 0, r4
        lvx v1, 0, r5
        vandc v0, v1, v0
        stvx v0, 0, r5
        mtspr 256, r2
        blr

instead of (with two constant pool entries):

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        li r6, lo16(LCPI1_0)
        lis r7, ha16(LCPI1_0)
        li r8, lo16(LCPI1_1)
        lis r9, ha16(LCPI1_1)
        lvx v0, r7, r6
        lvx v1, 0, r3
        vand v0, v1, v0
        stvx v0, 0, r3
        lvx v0, r9, r8
        lvx v1, 0, r4
        vand v1, v1, v0
        stvx v1, 0, r4
        lvx v1, 0, r5
        vand v0, v1, v0
        stvx v0, 0, r5
        mtspr 256, r2
        blr

GCC produces (with 2 cp entries):

_test:
        mfspr r0,256
        stw r0,-4(r1)
        oris r0,r0,0xc00c
        mtspr 256,r0
        lis r2,ha16(LC0)
        lis r9,ha16(LC1)
        la r2,lo16(LC0)(r2)
        lvx v0,0,r3
        lvx v1,0,r5
        la r9,lo16(LC1)(r9)
        lwz r12,-4(r1)
        lvx v12,0,r2
        lvx v13,0,r9
        vand v0,v0,v12
        stvx v0,0,r3
        vspltisw v0,-1
        vslw v12,v0,v0
        vandc v1,v1,v12
        stvx v1,0,r5
        lvx v0,0,r4
        vand v0,v0,v13
        stvx v0,0,r4
        mtspr 256,r12
        blr


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27624 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-12 19:07:14 +00:00
Chris Lattner
e87192a854 Rename get_VSPLI_elt -> get_VSPLTI_elt
Canonicalize BUILD_VECTOR's that match VSPLTI's into a single type for each
form, eliminating a bunch of Pat patterns in the .td file and allowing us to
CSE stuff more aggressively.  This implements
PowerPC/buildvec_canonicalize.ll:VSPLTI


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27614 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-12 17:37:20 +00:00