Commit Graph

296 Commits

Author SHA1 Message Date
Chris Lattner
39a83dc37c Fix a major bug in the signed shr code, which apparently only breaks 134.perl!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17902 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-16 18:40:52 +00:00
Chris Lattner
36c625d3a5 Simplify and rearrange long shift code
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17861 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-15 23:16:34 +00:00
Chris Lattner
ce7cafa960 shld is a very high-latency operation. Instead of emitting it for shifts of
two or three, open code the equivalent operation, which is faster on Athlon
and P4 (by a substantial margin).

For example, instead of compiling this:

long long X2(long long Y) { return Y << 2; }

to:

X2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        shldl $2, %eax, %edx
        shll $2, %eax
        ret

Compile it to:

X2:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        shrl $30, %edx
        leal (%edx,%ecx,4), %edx
        shll $2, %eax
        ret

Likewise, for << 3, compile to:

X3:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        shrl $29, %edx
        leal (%edx,%ecx,8), %edx
        shll $3, %eax
        ret

This matches icc, except that icc open codes the shifts as adds on the P4.
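
As a sanity check, here is a hedged C model of the decomposition the open-coded
sequence relies on (not from the patch; it assumes a 32-bit unsigned type, and
the function and parameter names are illustrative): the new high word is
(low >> (32-k)) plus high scaled by 2^k, which is exactly what the shrl/leal
pair computes.

/* 64-bit left shift by k (0 < k < 32) done on 32-bit halves. */
unsigned long long shl64(unsigned lo, unsigned hi, unsigned k) {
    unsigned new_hi = (lo >> (32 - k)) + (hi << k);  /* shrl; leal */
    unsigned new_lo = lo << k;                       /* shll       */
    return ((unsigned long long)new_hi << 32) | new_lo;
}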


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17707 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-13 20:48:57 +00:00
Chris Lattner
62f5a9402c Add missing check
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17706 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-13 20:04:38 +00:00
Chris Lattner
44205cadba Compile:
long long X3_2(long long Y) { return Y+Y; }
int X(int Y) { return Y+Y; }

into:

X3_2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        addl %eax, %eax
        adcl %edx, %edx
        ret
X:
        movl 4(%esp), %eax
        addl %eax, %eax
        ret

instead of:

X3_2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        shldl $1, %eax, %edx
        shll $1, %eax
        ret

X:
        movl 4(%esp), %eax
        shll $1, %eax
        ret
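
The addl/adcl pair is the usual double-word add. A hedged C model of what the
two instructions compute together (illustrative names; assumes 32-bit unsigned;
not code from the patch):

/* Double a 64-bit value split into 32-bit halves, in place. */
void double64(unsigned *lo, unsigned *hi) {
    unsigned old_lo = *lo;
    *lo = old_lo + old_lo;            /* addl %eax, %eax */
    unsigned carry = (*lo < old_lo);  /* carry out of the low half */
    *hi = *hi + *hi + carry;          /* adcl %edx, %edx */
}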


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17705 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-13 20:03:48 +00:00
Chris Lattner
611fb259ba Don't print stuff out from the code generator. This broke the JIT horribly
last night. :)  bork!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17093 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-17 17:40:50 +00:00
Chris Lattner
56a31c69c8 Rewrite support for cast uint -> FP. In particular, we used to compile this:
double %test(uint %X) {
        %tmp.1 = cast uint %X to double         ; <double> [#uses=1]
        ret double %tmp.1
}

into:

test:
        sub %ESP, 8
        mov %EAX, DWORD PTR [%ESP + 12]
        mov %ECX, 0
        mov DWORD PTR [%ESP], %EAX
        mov DWORD PTR [%ESP + 4], %ECX
        fild QWORD PTR [%ESP]
        add %ESP, 8
        ret

... which basically zero-extends to 8 bytes, then does an fild of an
8-byte signed int.

Now we generate this:


test:
        sub %ESP, 4
        mov %EAX, DWORD PTR [%ESP + 8]
        mov DWORD PTR [%ESP], %EAX
        fild DWORD PTR [%ESP]
        shr %EAX, 31
        fadd DWORD PTR [.CPItest_0 + 4*%EAX]
        add %ESP, 4
        ret

        .section .rodata
        .align  4
.CPItest_0:
        .quad   5728578726015270912

This does a 32-bit signed integer load, then adds in an offset if the sign
bit of the integer was set.
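
In C terms, the new sequence behaves like the sketch below (illustrative only;
it assumes 32-bit int/unsigned, and 4294967296.0 is 2^32, the nonzero float in
the .CPItest_0 constant pool):

double uint_to_double(unsigned x) {
    double d = (double)(int)x;   /* fild of the 32-bit value, signed */
    if ((int)x < 0)              /* shr %EAX, 31 selects the constant */
        d += 4294967296.0;       /* add back 2^32 for "negative" inputs */
    return d;
}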

It turns out that this is substantially faster than the preceding sequence.
Consider this testcase:

unsigned a[2]={1,2};
volatile double G;

int main() {
    int i;
    for (i = 0; i < 100000000; ++i)
        G += a[i&1];
    return 0;
}

On zion (a P4 Xeon, 3GHz), this patch speeds up the testcase from 2.140s
to 0.94s.

On apoc, an Athlon MP 2100+, this patch speeds up the testcase from 1.72s
to 1.34s.

Note that the program takes 2.5s/1.97s on zion/apoc with GCC 3.3 -O3
-fomit-frame-pointer.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17083 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-17 08:01:28 +00:00
Chris Lattner
de95c9e0bb fold:
        %X = and Y, constantint
        %Z = setcc %X, 0

instead of emitting:

        and %EAX, 3
        test %EAX, %EAX
        je .LBBfoo2_2   # UnifiedReturnBlock

We now emit:

        test %EAX, 3
        je .LBBfoo2_2   # UnifiedReturnBlock

This triggers 581 times on 176.gcc for example.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17080 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-17 06:10:40 +00:00
Chris Lattner
30483b0c84 Teach the X86 backend about unreachable and undef. Among other things, we
now compile:

'foo() {}' into "ret" instead of "mov EAX, 0; ret"


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17049 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-16 18:13:05 +00:00
Chris Lattner
358a9027a8 Instruction select globals with offsets better. For example, on this test
case:

int C[100];
int foo() {
  return C[4];
}

We now codegen:

foo:
        mov %EAX, DWORD PTR [C + 16]
        ret

instead of:

foo:
        mov %EAX, OFFSET C
        mov %EAX, DWORD PTR [%EAX + 16]
        ret

Other impressive features may be coming later.

This patch is contributed by Jeff Cohen!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17011 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-15 05:05:29 +00:00
Chris Lattner
b0f4e389db Fix a major regression from the bugfix for 2004-10-08-SelectSetCCFold.llx,
which prevented setcc's from being folded into branches.  It appears that
conditional branchinst's CC operand is actually operand(2), not operand(0)
as we might expect. :(


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16859 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-08 22:24:31 +00:00
Chris Lattner
d04cd55796 Fix bug: 2004-10-08-SelectSetCCFold.llx. Normally this is hidden by the
instcombine xform, which is why we didn't notice it before.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16840 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-08 16:34:13 +00:00
Chris Lattner
09c750f73d Remove debugging code, fix encoding problem. This fixes the problems
the JIT had last night.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16766 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-06 14:31:50 +00:00
Chris Lattner
2483f67914 Codegen signed mod by 2 or -2 more efficiently. Instead of generating:
t:
        mov %EDX, DWORD PTR [%ESP + 4]
        mov %ECX, 2
        mov %EAX, %EDX
        sar %EDX, 31
        idiv %ECX
        mov %EAX, %EDX
        ret

Generate:
t:
        mov %ECX, DWORD PTR [%ESP + 4]
***     mov %EAX, %ECX
        cdq
        and %ECX, 1
        xor %ECX, %EDX
        sub %ECX, %EDX
***     mov %EAX, %ECX
        ret

Note that the two marked moves are redundant, and should be eliminated by the
register allocator, but aren't.
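
For reference, a hedged C sketch of the branchless identity that the
cdq/and/xor/sub sequence implements (assumes a 32-bit int with arithmetic
right shift; the function name is illustrative):

int srem2(int x) {
    int s = x >> 31;           /* cdq: 0 for x >= 0, -1 for x < 0 */
    return ((x & 1) ^ s) - s;  /* (x & 1), given the sign of x    */
}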

Compare this to GCC, which generates:

t:
        mov     %eax, DWORD PTR [%esp+4]
        mov     %edx, %eax
        shr     %edx, 31
        lea     %ecx, [%edx+%eax]
        and     %ecx, -2
        sub     %eax, %ecx
        ret

or ICC 8.0, which generates:

t:
        movl      4(%esp), %ecx                                 #3.5
        movl      $-2147483647, %eax                            #3.25
        imull     %ecx                                          #3.25
        movl      %ecx, %eax                                    #3.25
        sarl      $31, %eax                                     #3.25
        addl      %ecx, %edx                                    #3.25
        subl      %edx, %eax                                    #3.25
        addl      %eax, %eax                                    #3.25
        negl      %eax                                          #3.25
        subl      %eax, %ecx                                    #3.25
        movl      %ecx, %eax                                    #3.25
        ret                                                     #3.25

We would be in great shape if not for the moves.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16763 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-06 05:01:07 +00:00
Chris Lattner
3ffdff6448 Fix a scary bug with signed division by a power of two. We used to generate:
s:   ;; X / 4
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        sar %ECX, 1
        shr %ECX, 30
        mov %EDX, %EAX
        add %EDX, %ECX
        sar %EAX, 2
        ret

When we really meant:

s:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        sar %ECX, 1
        shr %ECX, 30
        add %EAX, %ECX
        sar %EAX, 2
        ret

Hey, this also reduces register pressure :)
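
In C, the corrected sequence is the standard round-toward-zero bias for a
signed divide by a power of two (hedged sketch; assumes arithmetic right shift
on 32-bit int, illustrative name):

int sdiv4(int x) {
    unsigned bias = (unsigned)(x >> 1) >> 30;  /* sar 1; shr 30: 3 if x < 0, else 0 */
    return (x + (int)bias) >> 2;               /* add; sar 2 */
}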


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16761 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-06 04:19:43 +00:00
Chris Lattner
610f1e2785 Codegen signed divides by 2 and -2 more efficiently. In particular
instead of:

s:   ;; X / 2
        movl 4(%esp), %eax
        movl %eax, %ecx
        shrl $31, %ecx
        movl %eax, %edx
        addl %ecx, %edx
        sarl $1, %eax
        ret

t:   ;; X / -2
        movl 4(%esp), %eax
        movl %eax, %ecx
        shrl $31, %ecx
        movl %eax, %edx
        addl %ecx, %edx
        sarl $1, %eax
        negl %eax
        ret

Emit:

s:
        movl 4(%esp), %eax
        cmpl $-2147483648, %eax
        sbbl $-1, %eax
        sarl $1, %eax
        ret

t:
        movl 4(%esp), %eax
        cmpl $-2147483648, %eax
        sbbl $-1, %eax
        sarl $1, %eax
        negl %eax
        ret
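
The cmpl/sbbl pair is a branchless increment-if-negative: the compare sets the
carry flag exactly when the value, viewed as unsigned, is below 2^31 (i.e. is
nonnegative as a signed int), and sbbl $-1 then computes EAX + 1 - CF. A
hedged C equivalent (assumes arithmetic right shift; illustrative name):

int sdiv2(int x) {
    return (x + (x < 0)) >> 1;  /* bias negative inputs by 1, then sar */
}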


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16760 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-06 04:02:39 +00:00
Misha Brukman
eae1bf10ea s/ISel/X86ISel/ to have unique class names for debugging via gdb because the C++
front-end in gcc does not mangle classes in anonymous namespaces correctly.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16469 91177308-0d34-0410-b5e6-96231b3b80d8
2004-09-21 18:21:21 +00:00
Reid Spencer
2da5c3dda6 Convert code to compile with vc7.1.
Patch contributed by Paolo Invernizzi. Thanks Paolo!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16368 91177308-0d34-0410-b5e6-96231b3b80d8
2004-09-15 17:06:42 +00:00
Reid Spencer
551ccae044 Changes For Bug 352
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16137 91177308-0d34-0410-b5e6-96231b3b80d8
2004-09-01 22:55:40 +00:00
Reid Spencer
fc989e1ee0 Reduce the number of arguments in the instruction builder and make some
improvements on instruction selection that account for register and frame
index bases.

Patch contributed by Jeff Cohen. Thanks Jeff!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16110 91177308-0d34-0410-b5e6-96231b3b80d8
2004-08-30 00:13:26 +00:00
Misha Brukman
91b5ca838a Fix file header as it has been renamed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15239 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-26 18:45:48 +00:00
Chris Lattner
3dbb504081 Fix cases where we generated horrible code like this:
        mov %EDI, 12
        add %EDI, %ECX
        mov %ECX, 12
        add %ECX, %EDX
        mov %EDX, 12
        add %EDX, %ESI

instead (really!) generate this:

        add %ECX, 12
        add %EDX, 12
        add %ESI, 12


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15090 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-21 21:28:26 +00:00
Chris Lattner
4771288fe3 While I'm at it, don't break codegen of mul by 3, 5, or 9.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15013 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-19 23:50:57 +00:00
Chris Lattner
596b97f1ab Generate better code for multiplies by negative constants like -4, -1, -9, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15012 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-19 23:47:21 +00:00
Reid Spencer
8863f1814b bug 122:
- Replace ConstantPointerRef usage with GlobalValue usage
- Minimize redundant isa<GlobalValue> usage
- Correct isa<Constant> for GlobalValue subclass


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14950 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-18 00:38:32 +00:00
Chris Lattner
76e2df2645 Patches towards fixing PR341
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14841 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-15 02:14:30 +00:00
Chris Lattner
d2995df5b7 Improve codegen for the LLVM offsetof/sizeof "operator". Before we compiled
this LLVM function:

int %foo() {
        ret int cast (int** getelementptr (int** null, int 1) to int)
}

into:

foo:
        mov %EAX, 0
        lea %EAX, DWORD PTR [%EAX + 4]
        ret

now we compile it into:

foo:
        mov %EAX, 4
        ret

This sequence is frequently generated by the MSIL front-end, and soon will be
generated by the malloc lowering pass and the Java front-end as well.
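
The constant expression is the getelementptr form of the classic
sizeof-via-null-pointer idiom; a rough C analogue (illustration only, since
arithmetic on a null pointer is not strictly conforming C):

#include <stdint.h>

int foo(void) {
    /* Element 1 from a null int** base sits at offset sizeof(int *),
       so this folds to 4 on 32-bit x86. */
    return (int)(intptr_t)((int **)0 + 1);
}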

-Chris


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14834 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-15 00:58:53 +00:00
Chris Lattner
8b486a114e Fix a regression from r1.224. In particular, codegen a cast from double ->
float as a truncation by going through memory.  This truncation was being
skipped, which caused 175.vpr to fail after aggressive register promotion.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14473 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-29 00:14:38 +00:00
Chris Lattner
3048373748 Move the IntrinsicLowering header into the CodeGen directory, as per PR346
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14266 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-20 07:49:54 +00:00
Chris Lattner
667ea024b5 Codegen sub C, X a little bit better for register pressure. Instead of:

        mov REG, C
        sub REG, X

generate:

        neg X
        add X, C

which uses one fewer register.
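
The transformation relies on the identity C - X == -X + C; a minimal C
illustration (hedged, with an arbitrary constant):

int sub_from_const(int x) {
    return -x + 12;  /* same value as 12 - x, but no temporary needed */
}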


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14213 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-18 00:50:37 +00:00
Chris Lattner
a6f9fe6dbc Fold setcc instructions into select and branches that are not in the same BB as
the setcc.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14212 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-18 00:29:22 +00:00
Chris Lattner
ccd9796a46 Do not fold a load into an instruction if the load is used more than once. In
particular, we do not want to fold the load in cases like this:

  X = load
    = add A, X
    = add B, X


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14204 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-17 22:15:25 +00:00
Chris Lattner
f70c22b019 Rename Type::PrimitiveID to TypeId and ::getPrimitiveID() to ::getTypeID()
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14201 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-17 18:19:28 +00:00
Chris Lattner
4adf066f99 Remove support for llvm.isnan. Alkis wins :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14189 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-15 21:48:07 +00:00
Chris Lattner
dc5724478e Add basic support for the isunordered intrinsic. The isnan stuff still needs to go.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14185 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-15 21:36:44 +00:00
Chris Lattner
01cdb1b367 By far the most common use of isnan is to make 'isunordered'
comparisons.  In an 'isunordered' predicate, which looks like this at
the LLVM level:

        %a = call bool %llvm.isnan(double %X)
        %b = call bool %llvm.isnan(double %Y)
        %COM = or bool %a, %b

We used to generate this code:

        fxch %ST(1)
        fucomip %ST(0), %ST(0)
        setp %AL
        fucomip %ST(0), %ST(0)
        setp %AH
        or %AL, %AH

With this patch, we generate this code:

        fucomip %ST(0), %ST(1)
        fstp %ST(0)
        setp %AL

Which should make Alkis happy.  Tested as X86/compare_folding.llx:test1.
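
At the source level the folded pattern is exactly C99's isunordered(); a
hedged sketch of the equivalence (relies on NaN being the only value that
compares unequal to itself):

int is_unordered(double x, double y) {
    return (x != x) || (y != y);  /* isnan(x) || isnan(y) */
}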


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14148 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-11 05:33:49 +00:00
Chris Lattner
0ca2c8e02c Now that compare instructions aren't lumped in with the other TwoArgFP instructions,
we can get rid of the FpUCOM/FpUCOMi pseudo-instructions, which makes things simpler
and faster.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14144 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-11 04:49:02 +00:00
Chris Lattner
b4fe76cbb5 Add direct support for the isnan intrinsic, implementing the
test/Regression/CodeGen/X86/isnan.llx testcase.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14141 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-11 04:31:10 +00:00
John Criswell
6b5bd5857d Fix for PR#366. We use getClassB() so that we can handle cast instructions
that cast to bool.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14096 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-09 15:18:51 +00:00
Chris Lattner
d029cd2d5a Convert to the new TargetMachine interface.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13952 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-02 05:55:25 +00:00
Chris Lattner
df04097f87 Add some notes to myself, no functional changes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13695 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-23 21:23:12 +00:00
Brian Gaeke
9f088e481c Generate branch machine instructions with MachineBasicBlock operands instead of
LLVM BasicBlock operands.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13566 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-14 06:54:56 +00:00
Chris Lattner
b7cb9ffd34 Two more improvements for null pointer handling: storing a null pointer
and passing a null pointer into a function.

For this testcase:

void %test(int** %X) {
  store int* null, int** %X
  call void %test(int** null)
  ret void
}

we now generate this:

test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov DWORD PTR [%EAX], 0
        mov DWORD PTR [%ESP], 0
        call test
        add %ESP, 12
        ret

instead of this:

test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %ECX, 0
        mov DWORD PTR [%EAX], %ECX
        mov %EAX, 0
        mov DWORD PTR [%ESP], %EAX
        call test
        add %ESP, 12
        ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13558 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-13 15:26:48 +00:00
Chris Lattner
9f1b531090 Second half of my fixed-sized-alloca patch. This folds the LEA that computes
the alloca address into common operations like loads/stores.

In a simple testcase like this (which is just designed to exercise the
alloca A, nothing more):

int %test(int %X, bool %C) {
        %A = alloca int
        store int %X, int* %A
        store int* %A, int** %G
        br bool %C, label %T, label %F
T:
        call int %test(int 1, bool false)
        %V = load int* %A
        ret int %V
F:
        call int %test(int 123, bool true)
        %V2 = load int* %A
        ret int %V2
}

We now generate:

test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %CL, BYTE PTR [%ESP + 20]
***     mov DWORD PTR [%ESP + 8], %EAX
        mov %EAX, OFFSET G
        lea %EDX, DWORD PTR [%ESP + 8]
        mov DWORD PTR [%EAX], %EDX
        test %CL, %CL
        je .LBB2 # PC rel: F
.LBB1:  # T
        mov DWORD PTR [%ESP], 1
        mov DWORD PTR [%ESP + 4], 0
        call test
***     mov %EAX, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret
.LBB2:  # F
        mov DWORD PTR [%ESP], 123
        mov DWORD PTR [%ESP + 4], 1
        call test
***     mov %EAX, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret

Instead of:

test:
        sub %ESP, 20
        mov %EAX, DWORD PTR [%ESP + 24]
        mov %CL, BYTE PTR [%ESP + 28]
***     lea %EDX, DWORD PTR [%ESP + 16]
***     mov DWORD PTR [%EDX], %EAX
        mov %EAX, OFFSET G
        mov DWORD PTR [%EAX], %EDX
        test %CL, %CL
***     mov DWORD PTR [%ESP + 12], %EDX
        je .LBB2 # PC rel: F
.LBB1:  # T
        mov DWORD PTR [%ESP], 1
        mov %EAX, 0
        mov DWORD PTR [%ESP + 4], %EAX
        call test
***     mov %EAX, DWORD PTR [%ESP + 12]
***     mov %EAX, DWORD PTR [%EAX]
        add %ESP, 20
        ret
.LBB2:  # F
        mov DWORD PTR [%ESP], 123
        mov %EAX, 1
        mov DWORD PTR [%ESP + 4], %EAX
        call test
***     mov %EAX, DWORD PTR [%ESP + 12]
***     mov %EAX, DWORD PTR [%EAX]
        add %ESP, 20
        ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13557 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-13 15:12:43 +00:00
Chris Lattner
cb2fd557ee Substantially improve code generation for address-exposed locals (i.e., fixed-sized
allocas in the entry block).  Instead of generating code like this:

entry:
  reg1024 = ESP+1234
... (much later)
  *reg1024 = 17


Generate code that looks like this:
entry:
  (no code generated)
... (much later)
  t = ESP+1234
  *t = 17

The advantage being that we DRAMATICALLY reduce the register pressure for these
silly temporaries (they were all being spilled to the stack, resulting in very
silly code).  This is actually a manual implementation of rematerialization :)

I have a patch to fold the alloca address computation into loads & stores, which
will make this much better still, but just getting this right took way too much time
and I'm sleepy.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13554 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-13 07:40:27 +00:00
Chris Lattner
2b10b08ad6 Pass boolean constants into function calls more efficiently, generating:
        mov DWORD PTR [%ESP + 4], 1

instead of:

        mov %EAX, 1
        mov DWORD PTR [%ESP + 4], %EAX


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13494 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-12 16:35:04 +00:00
Chris Lattner
c81e6bae88 Fix a fairly serious pessimization that was preventing us from efficiently
compiling things like 'add long %X, 1'.  The problem is that we were switching
the order of the operands for longs even though we can't fold them yet.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13451 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-10 15:15:55 +00:00
Chris Lattner
9984fd0df9 Fix some comments; avoid sign-extending booleans when zero-extension works fine
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13440 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-09 23:16:33 +00:00
Chris Lattner
96e3b426d5 Generate more efficient code for casting booleans to integers (no sign extension required)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13439 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-09 22:28:45 +00:00
Chris Lattner
e7a31c98db Codegen floating point stores of constants into integer instructions. This
allows us to compile:

store float 10.0, float* %P

into:
        mov DWORD PTR [%EAX], 1092616192

instead of:

.CPItest_0:                                     # float 0x4024000000000000
.long   1092616192      # float 10
...
        fld DWORD PTR [.CPItest_0]
        fstp DWORD PTR [%EAX]
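
1092616192 is 0x41200000, the IEEE-754 single-precision bit pattern of 10.0f.
A hedged standalone C check of the equivalence (not code from the patch):

#include <assert.h>
#include <string.h>

int main(void) {
    float f = 10.0f;
    unsigned bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's bytes */
    assert(bits == 1092616192u);     /* 0x41200000 */
    return 0;
}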


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13409 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-07 21:18:15 +00:00