Commit Graph

137 Commits

Nate Begeman
acc398c195 First part of bug 680:
Remove TLI.LowerVA* and replace them with SDNodes that are lowered the same
way as everything else.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25606 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-25 18:21:52 +00:00
Evan Cheng
3f23952404 If the scheduler choice is the default (-sched=default), use the target's scheduling
preference to determine which scheduler to use. SchedulingForLatency ==
breadth-first; SchedulingForRegPressure == bottom-up register-reduction list
scheduler.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25599 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-25 09:12:57 +00:00
Jim Laskey
17d52f7234 Typo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25545 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-23 13:34:04 +00:00
Evan Cheng
f0f9c90204 Skeleton of the list scheduler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25544 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-23 08:26:10 +00:00
Evan Cheng
4ef1086749 Factor out more instruction scheduler code to the base class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25532 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-23 07:01:07 +00:00
Chris Lattner
39a17dd31d Fix bugs lowering stackrestore, fixing 2004-08-12-InlinerAndAllocas.c on
PPC.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25522 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-23 05:22:07 +00:00
Chris Lattner
a3818e6f9a Fix a bug in a recent refactor that caused a bunch of programs to miscompile
or the compiler to crash.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25503 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-21 19:12:11 +00:00
Evan Cheng
a9c2091cd3 Do some code refactoring on Jim's scheduler in preparation for the new list
scheduler.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25493 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-21 02:32:06 +00:00
Chris Lattner
4eebb60f84 If the target doesn't support f32 natively, insert the FP_EXTEND in target-independent
code, so that the LowerReturn code doesn't have to handle it.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25482 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-20 18:38:32 +00:00
Chris Lattner
d12b2d7b5a Temporary workaround for a libcall insertion bug: If a target doesn't
support FSIN/FCOS nodes, do not lower sin/cos to them.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25425 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-18 21:50:14 +00:00
Robert Bocchino
4eb2e3a6f4 Support for the insertelement operation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25405 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-17 20:06:42 +00:00
Reid Spencer
0b118206bf For PR411:
This patch is an incremental step towards supporting a flat symbol table.
It de-overloads the intrinsic functions by providing type-specific intrinsics
and arranging for automatic upgrading from the old overloaded names to
the new non-overloaded names. Specifically:
  llvm.isunordered -> llvm.isunordered.f32, llvm.isunordered.f64
  llvm.sqrt -> llvm.sqrt.f32, llvm.sqrt.f64
  llvm.ctpop -> llvm.ctpop.i8, llvm.ctpop.i16, llvm.ctpop.i32, llvm.ctpop.i64
  llvm.ctlz -> llvm.ctlz.i8, llvm.ctlz.i16, llvm.ctlz.i32, llvm.ctlz.i64
  llvm.cttz -> llvm.cttz.i8, llvm.cttz.i16, llvm.cttz.i32, llvm.cttz.i64
New code should not use the overloaded intrinsic names. Warnings will be
emitted if they are used.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25366 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-16 21:12:35 +00:00
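For illustration only, a minimal sketch of the upgrade path in the LLVM assembly syntax of the period (the declarations and the %root function here are hypothetical, inferred from the mapping listed above rather than taken from the patch):

; Old overloaded name -- still accepted, but auto-upgraded with a warning:
;   declare double %llvm.sqrt(double)
; New type-specific name that fresh code should use:
declare double %llvm.sqrt.f64(double)

double %root(double %x) {
        %r = call double %llvm.sqrt.f64( double %x )
        ret double %r
}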
Nate Begeman
3a04ffbf6d Remove some duplicated code
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25313 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-14 03:18:27 +00:00
Nate Begeman
d88fc03602 bswap implementation
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25312 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-14 03:14:10 +00:00
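As an illustrative sketch of what this enables at the IR level (the declaration, its uint signature, and the %swap function are assumptions, not taken from the commit):

; Assumed signature for the 32-bit variant:
declare uint %llvm.bswap.i32(uint)

uint %swap(uint %x) {
        %r = call uint %llvm.bswap.i32( uint %x )
        ret uint %r
}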
Chris Lattner
140d53c99c Compile llvm.stacksave/restore into STACKSAVE/STACKRESTORE nodes, and allow
targets to custom expand them as they desire.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25273 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-13 02:50:02 +00:00
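A small usage sketch in the old syntax (the declarations and the %use_scratch function are illustrative assumptions; the point is only that the two intrinsic calls now become STACKSAVE/STACKRESTORE nodes that a target may custom expand):

declare sbyte* %llvm.stacksave()
declare void %llvm.stackrestore(sbyte*)

void %use_scratch(uint %n) {
entry:
        %sp = call sbyte* %llvm.stacksave()         ; becomes a STACKSAVE node
        %buf = alloca sbyte, uint %n                ; dynamically sized alloca
        store sbyte 0, sbyte* %buf
        call void %llvm.stackrestore( sbyte* %sp )  ; becomes a STACKRESTORE node
        ret void
}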
Chris Lattner
e8f7a4bbee Add "support" for stacksave/stackrestore to the dag isel
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25268 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-13 02:24:42 +00:00
Robert Bocchino
c0f4cd9931 Added selection DAG support for the extractelement operation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25179 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-10 19:04:57 +00:00
Jim Laskey
b2efb853f0 Applied some recommended changes from sabre. The dominant one being "let the
pass manager do its thing."  Fixes a crash when compiling -g files and suppresses
dwarf statements if no debug info is present.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25100 91177308-0d34-0410-b5e6-96231b3b80d8
2006-01-04 22:28:25 +00:00
Chris Lattner
9797c5cc3e enable the gep isel opt
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24910 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-21 19:36:36 +00:00
Chris Lattner
3b841e9f52 Lower ConstantAggregateZero into zeros
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24890 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-21 02:43:26 +00:00
Jim Laskey
f5395cee6a Added source file/line correspondence for dwarf (PowerPC only at this point).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24748 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-16 22:45:29 +00:00
Chris Lattner
86cb643801 Don't lump the filename and working dir together
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24697 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-13 17:40:33 +00:00
Chris Lattner
ac22c83e68 Accept and ignore prefetches for now
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24678 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-12 22:51:16 +00:00
Chris Lattner
3802c2552f Minor tweak to the gep isel opt
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24663 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-11 09:05:13 +00:00
Chris Lattner
c78b0b740b improve code insertion in two ways:
1. Only forward subst offsets into loads and stores, not into arbitrary
   things, where it will likely become a load.
2. If the source is a cast from pointer, forward subst the cast as well,
   allowing us to fold the cast away (improving cases when the cast is
   from an alloca or global).

This hasn't been fully tested, but does appear to further reduce register
pressure and improve code.  Let's let the testers grind on it a bit. :)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24640 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-08 08:00:12 +00:00
Nate Begeman
cc827e60b6 Fix a crash where ConstantVec nodes were being generated with the wrong
type when the target did not support them.  Also teach Legalize how to
expand ConstantVecs.

This allows us to generate

_test:
        lwz r2, 12(r3)
        lwz r4, 8(r3)
        lwz r5, 4(r3)
        lwz r6, 0(r3)
        addi r2, r2, 4
        addi r4, r4, 3
        addi r5, r5, 2
        addi r6, r6, 1
        stw r2, 12(r3)
        stw r4, 8(r3)
        stw r5, 4(r3)
        stw r6, 0(r3)
        blr

For:

void %test(%v4i *%P) {
        %T = load %v4i* %P
        %S = add %v4i %T, <int 1, int 2, int 3, int 4>
        store %v4i %S, %v4i * %P
        ret void
}

On PowerPC.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24633 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-07 19:48:11 +00:00
Nate Begeman
8cfa57b1b4 Teach the SelectionDAG ISel how to turn ConstantPacked values into
constant nodes with vector types.  Also teach the asm printer how to print
ConstantPacked constant pool entries.  This allows us to generate altivec
code such as the following, which adds a vector constant to a packed float.

LCPI1_0:  <4 x float> < float 0.0e+0, float 0.0e+0, float 0.0e+0, float 1.0e+0 >
        .space  4
        .space  4
        .space  4
        .long   1065353216      ; float 1
        .text
        .align  4
        .globl  _foo
_foo:
        lis r2, ha16(LCPI1_0)
        la r2, lo16(LCPI1_0)(r2)
        li r4, 0
        lvx v0, r4, r2
        lvx v1, r4, r3
        vaddfp v0, v1, v0
        stvx v0, r4, r3
        blr

For the llvm code:

void %foo(<4 x float> * %a) {
entry:
  %tmp1 = load <4 x float> * %a;
  %tmp2 = add <4 x float> %tmp1, < float 0.0, float 0.0, float 0.0, float 1.0 >
  store <4 x float> %tmp2, <4 x float> *%a
  ret void
}


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24616 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-06 06:18:55 +00:00
Chris Lattner
c88d8e944d Fix the #1 code quality problem that I have seen on X86 (and it also affects
PPC and other targets).  In particular, consider code like this:

struct Vector3 { double x, y, z; };
struct Matrix3 { Vector3 a, b, c; };
double dot(Vector3 &a, Vector3 &b) {
   return a.x * b.x  +  a.y * b.y  +  a.z * b.z;
}
Vector3 mul(Vector3 &a, Matrix3 &b) {
   Vector3 r;
   r.x = dot( a, b.a );
   r.y = dot( a, b.b );
   r.z = dot( a, b.c );
   return r;
}
void transform(Matrix3 &m, Vector3 *x, int n) {
   for (int i = 0; i < n; i++)
      x[i] = mul( x[i], m );
}

we compile transform to a loop with all of the GEP instructions for indexing
into 'm' pulled out of the loop (9 of them).  Because isel occurs one basic block at a time,
we are unable to fold the constant indices into the loads in the loop, leading to
PPC code that looks like this:

LBB3_1: ; no_exit.preheader
        li r2, 0
        addi r6, r3, 64        ;; 9 values live across the loop body!
        addi r7, r3, 56
        addi r8, r3, 48
        addi r9, r3, 40
        addi r10, r3, 32
        addi r11, r3, 24
        addi r12, r3, 16
        addi r30, r3, 8
LBB3_2: ; no_exit
        lfd f0, 0(r30)
        lfd f1, 8(r4)
        fmul f0, f1, f0
        lfd f2, 0(r3)        ;; no constant indices folded into the loads!
        lfd f3, 0(r4)
        lfd f4, 0(r10)
        lfd f5, 0(r6)
        lfd f6, 0(r7)
        lfd f7, 0(r8)
        lfd f8, 0(r9)
        lfd f9, 0(r11)
        lfd f10, 0(r12)
        lfd f11, 16(r4)
        fmadd f0, f3, f2, f0
        fmul f2, f1, f4
        fmadd f0, f11, f10, f0
        fmadd f2, f3, f9, f2
        fmul f1, f1, f6
        stfd f0, 0(r4)
        fmadd f0, f11, f8, f2
        fmadd f1, f3, f7, f1
        stfd f0, 8(r4)
        fmadd f0, f11, f5, f1
        addi r29, r4, 24
        stfd f0, 16(r4)
        addi r2, r2, 1
        cmpw cr0, r2, r5
        or r4, r29, r29
        bne cr0, LBB3_2 ; no_exit

uh, yuck.  With this patch, we now sink the constant offsets into the loop, producing
this code:

LBB3_1: ; no_exit.preheader
        li r2, 0
LBB3_2: ; no_exit
        lfd f0, 8(r3)
        lfd f1, 8(r4)
        fmul f0, f1, f0
        lfd f2, 0(r3)
        lfd f3, 0(r4)
        lfd f4, 32(r3)       ;; much nicer.
        lfd f5, 64(r3)
        lfd f6, 56(r3)
        lfd f7, 48(r3)
        lfd f8, 40(r3)
        lfd f9, 24(r3)
        lfd f10, 16(r3)
        lfd f11, 16(r4)
        fmadd f0, f3, f2, f0
        fmul f2, f1, f4
        fmadd f0, f11, f10, f0
        fmadd f2, f3, f9, f2
        fmul f1, f1, f6
        stfd f0, 0(r4)
        fmadd f0, f11, f8, f2
        fmadd f1, f3, f7, f1
        stfd f0, 8(r4)
        fmadd f0, f11, f5, f1
        addi r6, r4, 24
        stfd f0, 16(r4)
        addi r2, r2, 1
        cmpw cr0, r2, r5
        or r4, r6, r6
        bne cr0, LBB3_2 ; no_exit

This is much nicer as it reduces register pressure in the loop a lot.  On X86,
this takes the function from having 9 spilled registers to 2.  This should help
some spec programs on X86 (gzip?)

This is currently only enabled with -enable-gep-isel-opt to allow perf testing
tonight.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24606 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-05 07:10:48 +00:00
Chris Lattner
d67b3a8bf7 dbg.stoppoint returns a value, don't forget to init it
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24583 91177308-0d34-0410-b5e6-96231b3b80d8
2005-12-03 18:50:48 +00:00
Nate Begeman
f43a3ca26d First chunk of actually generating vector code for packed types. These
changes allow us to generate the following code:

_foo:
        li r2, 0
        lvx v0, r2, r3
        vaddfp v0, v0, v0
        stvx v0, r2, r3
        blr

for this llvm:

void %foo(<4 x float>* %a) {
entry:
        %tmp1 = load <4 x float>* %a
        %tmp2 = add <4 x float> %tmp1, %tmp1
        store <4 x float> %tmp2, <4 x float>* %a
        ret void
}


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24534 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-30 08:22:07 +00:00
Reid Spencer
6ff7240a5c Fix a problem with llvm-ranlib that (on some platforms) caused the archive
file to become corrupted due to interactions between mmap'd memory segments
and file descriptors closing. The problem is completely avoided by using
a third temporary file.

Patch provided by Evan Jones


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24527 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-30 05:21:10 +00:00
Chris Lattner
36ce69195e Add support for new STRING and LOCATION nodes for line number support, patch
contributed by Daniel Berlin, with a few cleanups here and there by me.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24515 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-29 06:21:05 +00:00
Nate Begeman
ab48be3772 Check in code to scalarize arbitrarily wide packed types for some simple
vector operations (load, add, sub, mul).

This allows us to codegen:
void %foo(<4 x float> * %a) {
entry:
  %tmp1 = load <4 x float> * %a;
  %tmp2 = add <4 x float> %tmp1, %tmp1
  store <4 x float> %tmp2, <4 x float> *%a
  ret void
}

on ppc as:
_foo:
        lfs f0, 12(r3)
        lfs f1, 8(r3)
        lfs f2, 4(r3)
        lfs f3, 0(r3)
        fadds f0, f0, f0
        fadds f1, f1, f1
        fadds f2, f2, f2
        fadds f3, f3, f3
        stfs f0, 12(r3)
        stfs f1, 8(r3)
        stfs f2, 4(r3)
        stfs f3, 0(r3)
        blr


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24484 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-22 18:16:00 +00:00
Nate Begeman
4ef3b817fe Rather than attempting to legalize 1 x float, make sure the SD ISel never
generates it.  Make MVT::Vector expand-only, and remove the code in
Legalize that attempts to legalize it.

The plan for supporting N x Type is to continually expand it in ExpandOp
until it gets down to 2 x Type, where it will be scalarized into a pair of
scalars.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24482 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-22 01:29:36 +00:00
Chris Lattner
b67eb9131c Unbreak codegen of bools. This should fix the llc/jit/llc-beta failures
from last night.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24427 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-19 18:40:42 +00:00
Nate Begeman
5fbb5d2459 Teach LLVM how to scalarize packed types. Currently, this only works on
packed types with an element count of 1, although more generic support is
coming.  This allows LLVM to turn the following code:

void %foo(<1 x float> * %a) {
entry:
  %tmp1 = load <1 x float> * %a;
  %tmp2 = add <1 x float> %tmp1, %tmp1
  store <1 x float> %tmp2, <1 x float> *%a
  ret void
}

Into:

_foo:
        lfs f0, 0(r3)
        fadds f0, f0, f0
        stfs f0, 0(r3)
        blr


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24416 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-19 00:36:38 +00:00
Nate Begeman
e21ea61588 Split out the shift code from visitBinary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24412 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-18 07:42:56 +00:00
Chris Lattner
b1a5a5c4c0 When debugging, lower dbg intrinsics to calls
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24377 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-16 07:22:30 +00:00
Andrew Lenharth
8b91c77385 added a chain output
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24306 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-11 22:48:54 +00:00
Andrew Lenharth
51b8d54922 continued readcyclecounter support
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24300 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-11 16:47:30 +00:00
Chris Lattner
c9ea6fde30 Refactor intrinsic lowering stuff out of visitCall
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24261 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-09 19:44:01 +00:00
Chris Lattner
6b2d69655a Fix CodeGen/X86/shift-folding.ll:test3 on X86
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24256 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-09 16:50:40 +00:00
Chris Lattner
7436b57de3 Avoid creating a token factor node in trivially redundant cases. This
eliminates almost one node per block in common cases.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24254 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-09 05:03:03 +00:00
Chris Lattner
7c0104b525 Handle GEPs a bit more intelligently. Fold constant indices early and
turn power-of-two multiplies into shifts early to improve compile time.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24253 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-09 04:45:33 +00:00
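To make the two folds concrete, a hypothetical example in the old syntax (the %get function and the worked offsets are an illustration, not from the patch):

int %get(int* %P, int %i) {
        ; constant index: folded early into a fixed byte offset (5 x 4 = 20 bytes)
        %a = getelementptr int* %P, int 5
        %x = load int* %a
        ; variable index: the element size 4 is a power of two, so the scaling
        ; multiply is turned into a left shift by 2
        %b = getelementptr int* %P, int %i
        %y = load int* %b
        %r = add int %x, %y
        ret int %r
}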
Nate Begeman
ae232e7a10 Add the necessary support to the ISel to allow targets to codegen the new
alignment information appropriately.  Includes code for PowerPC to support
fixed-size allocas with alignment larger than the stack alignment.  Support for
arbitrarily aligned dynamic allocas coming soon.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24224 91177308-0d34-0410-b5e6-96231b3b80d8
2005-11-06 09:00:38 +00:00
Chris Lattner
bf209489ad Significantly simplify this code and make it more aggressive. Instead of having
a special case hack for X86, make the hack more general: if an incoming argument
register is not used in any block other than the entry block, don't copy it to
a vreg.  This helps us compile code like this:

%struct.foo = type { int, int, [0 x ubyte] }
int %test(%struct.foo* %X) {
        %tmp1 = getelementptr %struct.foo* %X, int 0, uint 2, int 100
        %tmp = load ubyte* %tmp1                ; <ubyte> [#uses=1]
        %tmp2 = cast ubyte %tmp to int          ; <int> [#uses=1]
        ret int %tmp2
}

to:

_test:
        lbz r3, 108(r3)
        blr

instead of:

_test:
        lbz r2, 108(r3)
        or r3, r2, r2
        blr

The (dead) copy emitted to copy r3 into a vreg for extra-block uses was
increasing the live range of r3 past the load, preventing the coalescing.

This implements CodeGen/PowerPC/reg-coallesce-simple.ll


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@24115 91177308-0d34-0410-b5e6-96231b3b80d8
2005-10-30 19:42:35 +00:00
Nate Begeman
4a95945fa5 Add the ability to lower return instructions to TargetLowering. This
allows us to lower legal return types to something else, to meet ABI
requirements (such as that i64 be returned in two i32 regs on Darwin/ppc).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23802 91177308-0d34-0410-b5e6-96231b3b80d8
2005-10-18 23:23:37 +00:00
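A concrete instance of the ABI case mentioned above (the %pass_through function is illustrative; the register split follows the usual 32-bit Darwin/PPC convention):

long %pass_through(long %x) {
        ret long %x     ; on 32-bit Darwin/PPC this i64 (long) result is returned
                        ; split across the two 32-bit registers r3 and r4
}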
Chris Lattner
d222f6ab67 Fix Generic/2005-10-18-ZeroSizeStackObject.ll by not requesting a zero-sized
stack object if either the array size or the type size is zero.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23801 91177308-0d34-0410-b5e6-96231b3b80d8
2005-10-18 22:14:06 +00:00
Chris Lattner
2dfa8192ab remove hack
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23797 91177308-0d34-0410-b5e6-96231b3b80d8
2005-10-18 22:11:42 +00:00
Chris Lattner
af21d55aee Enable Nate's excellent DAG combiner work by default. This allows the
removal of a bunch of ad-hoc and crufty code from SelectionDAG.cpp.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@23682 91177308-0d34-0410-b5e6-96231b3b80d8
2005-10-10 16:47:10 +00:00