Commit Graph

319 Commits

Author SHA1 Message Date
Chris Lattner
8b486a114e Fix a regression from r1.224. In particular, codegen a cast from double ->
float as a truncation by going through memory.  This truncation was being
skipped, which caused 175.vpr to fail after aggressive register promotion.
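
For illustration only (a sketch of the idea, not the exact code this revision emits, and assuming a
scratch stack slot at [%ESP]), truncating through memory looks roughly like:

        fstp DWORD PTR [%ESP]   # store as a 32-bit float, rounding the value
        fld DWORD PTR [%ESP]    # reload the now-truncated value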


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14473 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-29 00:14:38 +00:00
Chris Lattner
3048373748 Move the IntrinsicLowering header into the CodeGen directory, as per PR346
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14266 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-20 07:49:54 +00:00
Chris Lattner
667ea024b5 Codegen sub C, X a little bit better for register pressure. Instead of
mov REG, C
sub REG, X

generate:

neg X
add X, C

which uses one less reg


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14213 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-18 00:50:37 +00:00
Chris Lattner
a6f9fe6dbc Fold setcc instructions into select and branches that are not in the same BB as
the setcc.
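
A made-up example of the shape now handled (old IR syntax; names are invented):

bool %test(int %X, int %Y) {
        %C = setlt int %X, %Y
        br label %Other
Other:
        br bool %C, label %T, label %F
T:
        ret bool true
F:
        ret bool false
}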


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14212 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-18 00:29:22 +00:00
Chris Lattner
ccd9796a46 Do not fold a load into an instruction if the load is used more than once. In particular
we do not want to fold the load in cases like this:

  X = load
    = add A, X
    = add B, X


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14204 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-17 22:15:25 +00:00
Chris Lattner
f70c22b019 Rename Type::PrimitiveID to TypeId and ::getPrimitiveID() to ::getTypeID()
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14201 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-17 18:19:28 +00:00
Chris Lattner
4adf066f99 Remove support for llvm.isnan. Alkis wins :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14189 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-15 21:48:07 +00:00
Chris Lattner
dc5724478e Add basic support for the isunordered intrinsic. The isnan stuff still needs to go
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14185 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-15 21:36:44 +00:00
Chris Lattner
01cdb1b367 By far, one of the most common uses of isnan is to make 'isunordered'
comparisons.  In an 'isunordered' predicate, which looks like this at
the LLVM level:

        %a = call bool %llvm.isnan(double %X)
        %b = call bool %llvm.isnan(double %Y)
        %COM = or bool %a, %b

We used to generate this code:

        fxch %ST(1)
        fucomip %ST(0), %ST(0)
        setp %AL
        fucomip %ST(0), %ST(0)
        setp %AH
        or %AL, %AH

With this patch, we generate this code:

        fucomip %ST(0), %ST(1)
        fstp %ST(0)
        setp %AL

Which should make alkis happy.  Tested as X86/compare_folding.llx:test1


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14148 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-11 05:33:49 +00:00
Chris Lattner
0ca2c8e02c Now that compare instructions aren't lumped in with the other twoargfp instructions,
we can get rid of the FpUCOM/FpUCOMi pseudo instructions, which makes stuff simpler
and faster.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14144 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-11 04:49:02 +00:00
Chris Lattner
b4fe76cbb5 Add direct support for the isnan intrinsic, implementing test/Regression/CodeGen/X86/isnan.llx
testcase


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14141 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-11 04:31:10 +00:00
John Criswell
6b5bd5857d Fix for PR#366. We use getClassB() so that we can handle cast instructions
that cast to bool.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14096 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-09 15:18:51 +00:00
Chris Lattner
d029cd2d5a Convert to the new TargetMachine interface.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13952 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-02 05:55:25 +00:00
Chris Lattner
df04097f87 Add some notes to myself, no functional changes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13695 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-23 21:23:12 +00:00
Brian Gaeke
9f088e481c Generate branch machine instructions with MachineBasicBlock operands instead of
LLVM BasicBlock operands.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13566 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-14 06:54:56 +00:00
Chris Lattner
b7cb9ffd34 Two more improvements for null pointer handling: storing a null pointer
and passing a null pointer into a function.

For this testcase:

void %test(int** %X) {
  store int* null, int** %X
  call void %test(int** null)
  ret void
}

we now generate this:

test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov DWORD PTR [%EAX], 0
        mov DWORD PTR [%ESP], 0
        call test
        add %ESP, 12
        ret

instead of this:

test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %ECX, 0
        mov DWORD PTR [%EAX], %ECX
        mov %EAX, 0
        mov DWORD PTR [%ESP], %EAX
        call test
        add %ESP, 12
        ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13558 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-13 15:26:48 +00:00
Chris Lattner
9f1b531090 Second half of my fixed-sized-alloca patch. This folds the LEA to compute
the alloca address into common operations like loads/stores.

In a simple testcase like this (which is just designed to exercise the
alloca A, nothing more):

int %test(int %X, bool %C) {
        %A = alloca int
        store int %X, int* %A
        store int* %A, int** %G
        br bool %C, label %T, label %F
T:
        call int %test(int 1, bool false)
        %V = load int* %A
        ret int %V
F:
        call int %test(int 123, bool true)
        %V2 = load int* %A
        ret int %V2
}

We now generate:

test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %CL, BYTE PTR [%ESP + 20]
***     mov DWORD PTR [%ESP + 8], %EAX
        mov %EAX, OFFSET G
        lea %EDX, DWORD PTR [%ESP + 8]
        mov DWORD PTR [%EAX], %EDX
        test %CL, %CL
        je .LBB2 # PC rel: F
.LBB1:  # T
        mov DWORD PTR [%ESP], 1
        mov DWORD PTR [%ESP + 4], 0
        call test
***     mov %EAX, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret
.LBB2:  # F
        mov DWORD PTR [%ESP], 123
        mov DWORD PTR [%ESP + 4], 1
        call test
***     mov %EAX, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret

Instead of:

test:
        sub %ESP, 20
        mov %EAX, DWORD PTR [%ESP + 24]
        mov %CL, BYTE PTR [%ESP + 28]
***     lea %EDX, DWORD PTR [%ESP + 16]
***     mov DWORD PTR [%EDX], %EAX
        mov %EAX, OFFSET G
        mov DWORD PTR [%EAX], %EDX
        test %CL, %CL
***     mov DWORD PTR [%ESP + 12], %EDX
        je .LBB2 # PC rel: F
.LBB1:  # T
        mov DWORD PTR [%ESP], 1
        mov %EAX, 0
        mov DWORD PTR [%ESP + 4], %EAX
        call test
***     mov %EAX, DWORD PTR [%ESP + 12]
***     mov %EAX, DWORD PTR [%EAX]
        add %ESP, 20
        ret
.LBB2:  # F
        mov DWORD PTR [%ESP], 123
        mov %EAX, 1
        mov DWORD PTR [%ESP + 4], %EAX
        call test
***     mov %EAX, DWORD PTR [%ESP + 12]
***     mov %EAX, DWORD PTR [%EAX]
        add %ESP, 20
        ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13557 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-13 15:12:43 +00:00
Chris Lattner
cb2fd557ee Substantially improve code generation for address exposed locals (aka fixed
sized allocas in the entry block).  Instead of generating code like this:

entry:
  reg1024 = ESP+1234
... (much later)
  *reg1024 = 17


Generate code that looks like this:
entry:
  (no code generated)
... (much later)
  t = ESP+1234
  *t = 17

The advantage being that we DRAMATICALLY reduce the register pressure for these
silly temporaries (they were all being spilled to the stack, resulting in very
silly code).  This is actually a manual implementation of rematerialization :)

I have a patch to fold the alloca address computation into loads & stores, which
will make this much better still, but just getting this right took way too much time
and I'm sleepy.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13554 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-13 07:40:27 +00:00
Chris Lattner
2b10b08ad6 Pass boolean constants into function calls more efficiently, generating:
        mov DWORD PTR [%ESP + 4], 1

instead of:

        mov %EAX, 1
        mov DWORD PTR [%ESP + 4], %EAX


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13494 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-12 16:35:04 +00:00
Chris Lattner
c81e6bae88 Fix a fairly serious pessimization that was preventing us from efficiently
compiling things like 'add long %X, 1'.  The problem is that we were switching
the order of the operands for longs even though we can't fold them yet.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13451 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-10 15:15:55 +00:00
Chris Lattner
9984fd0df9 Fix some comments, avoid sign extending booleans when zero extend works fine
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13440 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-09 23:16:33 +00:00
Chris Lattner
96e3b426d5 Generate more efficient code for casting booleans to integers (no sign extension required)
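
A sketch of the difference (illustrative, not the literal diff): a bool is always 0 or 1, so zero
extension yields the same value:

        movzx %EAX, %AL        # instead of movsx %EAX, %AL
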
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13439 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-09 22:28:45 +00:00
Chris Lattner
e7a31c98db Codegen floating point stores of constants into integer instructions. This
allows us to compile:

store float 10.0, float* %P

into:
        mov DWORD PTR [%EAX], 1092616192

instead of:

.CPItest_0:                                     # float 0x4024000000000000
.long   1092616192      # float 10
...
        fld DWORD PTR [.CPItest_0]
        fstp DWORD PTR [%EAX]


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13409 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-07 21:18:15 +00:00
Chris Lattner
260195d2b1 Make comparisons against the null pointer as efficient as integer comparisons
against zero.  In particular, don't emit:

        mov %ESI, 0
        cmp %ECX, %ESI

instead, emit:

       test %ECX, %ECX


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13407 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-07 19:55:55 +00:00
Chris Lattner
bbc130d110 Remove unneeded check
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13355 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-04 19:35:11 +00:00
Chris Lattner
c8af02c403 Improve signed division by power of 2 *dramatically* from this:
div:
        mov %EDX, DWORD PTR [%ESP + 4]
        mov %ECX, 64
        mov %EAX, %EDX
        sar %EDX, 31
        idiv %ECX
        ret

to this:

div:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        sar %ECX, 5
        shr %ECX, 26
        mov %EDX, %EAX
        add %EDX, %ECX
        sar %EAX, 6
        ret

Note that the Intel compiler is currently making this:

div:
        movl      4(%esp), %edx                                 #3.5
        movl      %edx, %eax                                    #4.14
        sarl      $5, %eax                                      #4.14
        shrl      $26, %eax                                     #4.14
        addl      %edx, %eax                                    #4.14
        sarl      $6, %eax                                      #4.14
        ret                                                     #4.14

Which has one less register->register copy.  (hint hint alkis :)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13354 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-04 19:33:58 +00:00
Chris Lattner
9eb9b8ecb9 Improve code generated for integer multiplications by 2,3,5,9
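
For illustration (not the exact code this patch emits), these multiplies map onto an add or a
single LEA with a scaled index:

        add %EAX, %EAX                          # X*2
        lea %EAX, DWORD PTR [%EAX + 2*%EAX]     # X*3
        lea %EAX, DWORD PTR [%EAX + 4*%EAX]     # X*5
        lea %EAX, DWORD PTR [%EAX + 8*%EAX]     # X*9
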
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13342 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-04 15:47:14 +00:00
Chris Lattner
77993632a1 Remove unused #include
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13304 91177308-0d34-0410-b5e6-96231b3b80d8
2004-05-01 21:29:16 +00:00
Brian Gaeke
1afe7736ff Make RequiresFPRegKill() take a MachineBasicBlock arg.
In InsertFPRegKills(), just check the MachineBasicBlock for successors
instead of its corresponding BasicBlock.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13213 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-28 04:45:55 +00:00
Brian Gaeke
235aa5eba7 In InsertFPRegKills(), use the machine-CFG itself rather than the
LLVM CFG when trying to find the successors of BB.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13212 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-28 04:34:16 +00:00
Brian Gaeke
ea9ca67304 Update the machine-CFG edges whenever we see a branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@13211 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-28 04:19:37 +00:00
John Criswell
53b54be5fc Remove code to adjust the iterator for llvm.readio and llvm.writeio.
The iterator is pointing at the next instruction, which should not disappear
when doing the load/store replacement.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12954 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-14 21:27:56 +00:00
John Criswell
e5a4c15da6 Added support for the llvm.readio and llvm.writeio intrinsics.
On x86, memory operations occur in-order, so these are just lowered into
volatile loads and stores.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12936 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-13 22:13:14 +00:00
Chris Lattner
82c5a9990f Implement a small optimization, which papers over the problem in
X86/2004-04-13-FPCMOV-Crash.llx

A more robust fix is to follow.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12935 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-13 21:56:09 +00:00
Chris Lattner
87e18deabc Emit the immediate form of in/out when possible.
Fix several bugs in the intrinsics:
  1. Make sure to copy the input registers before the instructions that use them
  2. Make sure to copy the value returned by 'in' out of EAX into the register
     it is supposed to be in.

This fixes assertions when using in/out and linear scan.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12896 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-13 17:20:37 +00:00
Chris Lattner
133dbb1285 Fix issues that the local allocator has dealing with instructions that implicitly use ST(0)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12855 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-12 03:02:48 +00:00
Chris Lattner
8d2822e7f1 Use the fucomi[p] instructions to perform floating point comparisons instead
of the fucom[p][p] instructions.  This allows us to code generate this function

bool %test(double %X, double %Y) {
        %C = setlt double %Y, %X
        ret bool %C
}

... into:

test:
        fld QWORD PTR [%ESP + 4]
        fld QWORD PTR [%ESP + 12]
        fucomip %ST(1)
        fstp %ST(0)
        setb %AL
        movsx %EAX, %AL
        ret

where before we generated:

test:
        fld QWORD PTR [%ESP + 4]
        fld QWORD PTR [%ESP + 12]
        fucompp
**      fnstsw
**      sahf
        setb %AL
        movsx %EAX, %AL
        ret

The two marked instructions (which are the ones eliminated) are very bad,
because they serialize execution of the processor.  These instructions are
available on the PPRO and later, but since we already use cmov's we aren't
losing any portability.

I retained the old code for the day when we decide we want to support back
to the 386.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12852 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-12 01:43:36 +00:00
Chris Lattner
9938286325 Fix a bug in my load/cast folding patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12849 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-12 00:23:04 +00:00
Chris Lattner
13c07feb20 Adjust some comments, fix a bug in my previous patch
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12848 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-12 00:12:04 +00:00
Chris Lattner
feac3e18aa On X86, casting an integer to floating point requires going through memory.
If the source of the cast is a load, we can just use the source memory location,
without having to create a temporary stack slot entry.

Before we code generated this:

double %int(int* %P) {
        %V = load int* %P
        %V2 = cast int %V to double
        ret double %V2
}

into:

int:
        sub %ESP, 4
        mov %EAX, DWORD PTR [%ESP + 8]
        mov %EAX, DWORD PTR [%EAX]
        mov DWORD PTR [%ESP], %EAX
        fild DWORD PTR [%ESP]
        add %ESP, 4
        ret

Now we produce this:

int:
        mov %EAX, DWORD PTR [%ESP + 4]
        fild DWORD PTR [%EAX]
        ret

... which is nicer.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12846 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-11 23:21:26 +00:00
Chris Lattner
95157f7638 Implement folding of loads into floating point operations. This implements:
test/Regression/CodeGen/X86/fp_load_fold.llx
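
For illustration (a rough sketch, not the test's exact output), the folded form reads the operand
directly in the FP instruction:

        fadd QWORD PTR [%EAX]

instead of:

        fld QWORD PTR [%EAX]
        faddp %ST(1)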


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12844 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-11 22:05:45 +00:00
Chris Lattner
6621ed94cc Unify all of the code for floating point +,-,*,/ into one function
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12842 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-11 21:23:56 +00:00
Chris Lattner
8ebf1c35a1 This implements folding of constant operands into floating point operations
for mul and div.

Instead of generating this:

test_divr:
        fld QWORD PTR [%ESP + 4]
        fld QWORD PTR [.CPItest_divr_0]
        fdivrp %ST(1)
        ret

We now generate this:

test_divr:
        fld QWORD PTR [%ESP + 4]
        fdivr QWORD PTR [.CPItest_divr_0]
        ret

This code desperately needs refactoring, which will come in the next
patch.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12841 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-11 21:09:14 +00:00
Chris Lattner
462fa82270 Restructure the mul/div/rem handling code to follow the pattern the other
instructions use.  This doesn't change any functionality except that long
constant expressions of these operations will now magically start working.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12840 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-11 20:56:28 +00:00
Chris Lattner
48b0c97e20 Codegen FP adds and subtracts with a constant more efficiently, generating:
        fld QWORD PTR [%ESP + 4]
        fadd QWORD PTR [.CPItest_add_0]

instead of:

        fld QWORD PTR [%ESP + 4]
        fld QWORD PTR [.CPItest_add_0]
        faddp %ST(1)

I also intend to do this for mul & div, but it appears that I have to
refactor a bit of code before I can do so.

This is tested by: test/Regression/CodeGen/X86/fp_constant_op.llx


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12839 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-11 20:26:20 +00:00
Chris Lattner
427aeb476f Two changes:
  1. If an incoming argument is dead, don't load it from the stack.
  2. Do not codegen noop copies at all (i.e., cast int -> uint), not even to
     a move.  This should reduce register pressure for allocators that are
     unable to coalesce away these copies in some cases.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12835 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-11 19:21:59 +00:00
Chris Lattner
85aa7097c2 Silence a spurious warning
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12815 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-10 18:32:01 +00:00
John Criswell
6d804f408a Reversed the order of the llvm.writeport() operands so that the value
is listed first and the address is listed second.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12795 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-09 19:09:14 +00:00
John Criswell
aee0cf3fca Changed assertions to error messages.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12787 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-09 15:10:15 +00:00
John Criswell
ca6ea0f137 Changes recommended by Chris:
InstSelectSimple.cpp:
  Change the checks for proper I/O port address size into an exit() instead
  of an assertion.  Assertions aren't used in Release builds, and handling
  this error should be graceful (not that this counts as graceful, but it's
  more graceful).

  Modified the generation of the IN/OUT instructions to have 0 arguments.
X86InstrInfo.td:
  Added the OpSize attribute to the 16 bit IN and OUT instructions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12786 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-08 22:39:13 +00:00
John Criswell
4ffff9e2fa Added the llvm.readport and llvm.writeport intrinsics for x86.  These correspond to
I/O port instructions on x86.  The specific code sequence is tailored to
the parameters and return value of the intrinsic call.
Added the ability for implicit definitions to be printed in the Instruction
Printer.
Added the ability for RawFrm instructions to print implicit uses and
definitions with correct comma output.  This required adjustment to some
methods so that a leading comma would or would not be printed.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12782 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-08 20:31:47 +00:00
Chris Lattner
7b92de1e7d Fix PR313: [x86] JIT miscompiles unsigned short to floating point
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12711 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 19:29:36 +00:00
Chris Lattner
48c937e5c9 Fix a minor bug in the previous checkin.
Enable folding of long seteq/setne comparisons into branches and select instructions.
Implement unfolded long relational comparisons against a constant a bit more efficiently.

Folding comparisons changes code that looks like this:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, %EAX
        or %ECX, %EDX
        sete %CL
        test %CL, %CL
        je .LBB2 # PC rel: F

into code that looks like this:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, %EAX
        or %ECX, %EDX
        jne .LBB2 # PC rel: F

This speeds up 186.crafty by 6% with llc-ls.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12702 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 17:34:50 +00:00
Chris Lattner
e80e637793 Improve codegen of long == and != comparisons against constants. Before,
comparing a long against zero got us this:

        sub %ESP, 8
        mov DWORD PTR [%ESP + 4], %ESI
        mov DWORD PTR [%ESP], %EDI
        mov %EAX, DWORD PTR [%ESP + 12]
        mov %EDX, DWORD PTR [%ESP + 16]
        mov %ECX, 0
        mov %ESI, 0
        mov %EDI, %EAX
        xor %EDI, %ECX
        mov %ECX, %EDX
        xor %ECX, %ESI
        or %EDI, %ECX
        sete %CL
        test %CL, %CL
        je .LBB2 # PC rel: F

Now it gets us this:

        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, %EAX
        or %ECX, %EDX
        sete %CL
        test %CL, %CL
        je .LBB2 # PC rel: F


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12696 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 16:02:27 +00:00
Chris Lattner
6ab06d5d19 Handle various other important cases of multiplying by a long constant immediate. For
example, multiplying X*(1 + (1LL << 32)) now produces:

test:
        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %EAX, %ECX
        add %EDX, %ECX
        ret

[[[Note to Alkis: why isn't linear scan generating this code??  This might be a
 problem with your intervals being too conservative:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        add %EDX, %EAX
        ret

end note]]]

Whereas GCC produces this:

T:
        sub     %esp, 12
        mov     %edx, DWORD PTR [%esp+16]
        mov     DWORD PTR [%esp+8], %edi
        mov     %ecx, DWORD PTR [%esp+20]
        xor     %edi, %edi
        mov     DWORD PTR [%esp], %ebx
        mov     %ebx, %edi
        mov     %eax, %edx
        mov     DWORD PTR [%esp+4], %esi
        add     %ebx, %edx
        mov     %edi, DWORD PTR [%esp+8]
        lea     %edx, [%ecx+%ebx]
        mov     %esi, DWORD PTR [%esp+4]
        mov     %ebx, DWORD PTR [%esp]
        add     %esp, 12
        ret

I'm not sure exactly what GCC is smoking here, but it looks like it has just
confused itself with a bunch of stack slots or something.  The Intel compiler
is better, but still not good:

T:
        movl      4(%esp), %edx                                 #2.11
        movl      8(%esp), %eax                                 #2.11
        lea       (%eax,%edx), %ecx                             #3.12
        movl      $1, %eax                                      #3.12
        mull      %edx                                          #3.12
        addl      %ecx, %edx                                    #3.12
        ret                                                     #3.12


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12693 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 04:55:43 +00:00
Chris Lattner
028adc422d Efficiently handle a long multiplication by a constant. For this testcase:
long %test(long %X) {
        %Y = mul long %X, 123
        ret long %Y
}

we used to generate:

test:
        sub %ESP, 12
        mov DWORD PTR [%ESP + 8], %ESI
        mov DWORD PTR [%ESP + 4], %EDI
        mov DWORD PTR [%ESP], %EBX
        mov %ECX, DWORD PTR [%ESP + 16]
        mov %ESI, DWORD PTR [%ESP + 20]
        mov %EDI, 123
        mov %EBX, 0
        mov %EAX, %ECX
        mul %EDI
        imul %ESI, %EDI
        add %ESI, %EDX
        imul %ECX, %EBX
        add %ESI, %ECX
        mov %EDX, %ESI
        mov %EBX, DWORD PTR [%ESP]
        mov %EDI, DWORD PTR [%ESP + 4]
        mov %ESI, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret

Now we emit:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %EDX, 123
        mul %EDX
        imul %ECX, %ECX, 123
        add %ECX, %EDX
        mov %EDX, %ECX
        ret

Which, incidentally, is substantially nicer than what GCC manages:
T:
        sub     %esp, 8
        mov     %eax, 123
        mov     DWORD PTR [%esp], %ebx
        mov     %ebx, DWORD PTR [%esp+16]
        mov     DWORD PTR [%esp+4], %esi
        mov     %esi, DWORD PTR [%esp+12]
        imul    %ecx, %ebx, 123
        mov     %ebx, DWORD PTR [%esp]
        mul     %esi
        mov     %esi, DWORD PTR [%esp+4]
        add     %esp, 8
        lea     %edx, [%ecx+%edx]
        ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12692 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 04:29:36 +00:00
Chris Lattner
722070e0ba Improve code generation of long shifts by 32.
On this testcase:

long %test(long %X) {
        %Y = shr long %X, ubyte 32
        ret long %Y
}

instead of:
t:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        sar %EAX, 0
        mov %EDX, 0
        ret


we now emit:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        mov %EDX, 0
        ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12688 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 03:42:38 +00:00
Chris Lattner
0652167bea Bugfixes: inc/dec don't set the carry flag!
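
This matters, for example, in two-word (long) arithmetic, where the low-word operation must set the
carry flag for a following adc; an illustrative sketch:

        add %EAX, 1        # sets CF on low-word overflow
        adc %EDX, 0        # inc %EAX would have left CF unchanged and broken this
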
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12687 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 03:36:57 +00:00
Chris Lattner
92900a65a3 Improve code for passing constant longs as arguments to function calls.
For example, on this instruction:

        call void %test(long 1234)

Instead of this:
        mov %EAX, 1234
        mov %ECX, 0
        mov DWORD PTR [%ESP], %EAX
        mov DWORD PTR [%ESP + 4], %ECX
        call test

We now emit this:
        mov DWORD PTR [%ESP], 1234
        mov DWORD PTR [%ESP + 4], 0
        call test


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12686 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 03:23:00 +00:00
Chris Lattner
33f7fa317b Emit more efficient 64-bit operations when the RHS is a constant, and one
of the words of the constant is zero.  For example:
  Y = and long X, 1234

now generates:
  Yl = and Xl, 1234
  Yh = 0

instead of:
  Yl = and Xl, 1234
  Yh = and Xh, 0


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12685 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 03:15:53 +00:00
Chris Lattner
7ba92306db Fix typo
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12684 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 02:13:25 +00:00
Chris Lattner
ab1d0e0963 Add support for simple immediate handling to long instruction selection.
This allows us to handle code like 'add long %X, 123456789012' more efficiently.
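
An illustrative sketch of the resulting pattern (the operand split is mine, not the verbatim
output): 123456789012 is 28*2^32 + 3197704724, so the add becomes roughly:

        add %EAX, 3197704724
        adc %EDX, 28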


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12683 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 02:11:49 +00:00
Chris Lattner
edd5e4957a Implement negation of longs efficiently. For this testcase:
long %test(long %X) {
        %Y = sub long 0, %X
        ret long %Y
}

We used to generate:

test:
        sub %ESP, 4
        mov DWORD PTR [%ESP], %ESI
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %ESI, DWORD PTR [%ESP + 12]
        mov %EAX, 0
        mov %EDX, 0
        sub %EAX, %ECX
        sbb %EDX, %ESI
        mov %ESI, DWORD PTR [%ESP]
        add %ESP, 4
        ret

Now we generate:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        neg %EAX
        adc %EDX, 0
        neg %EDX
        ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12681 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 01:48:06 +00:00
Chris Lattner
502e36c3c9 Minor tweak to avoid an extra reg-reg copy that the register allocator has to eliminate
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12680 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 01:25:33 +00:00
Chris Lattner
29bf0623e5 Two changes:
  * In promote32, if we can just promote a constant value, do so instead of
    promoting a constant dynamically.
  * In visitReturn inst, actually USE the promote32 argument that takes a
    Value*

The end result of this is that we now generate this:

test:
        mov %EAX, 0
        ret

instead of...

test:
        mov %AX, 0
        movzx %EAX, %AX
        ret

for:

ushort %test() {
        ret ushort 0
}


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12679 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-06 01:21:00 +00:00
Chris Lattner
28977af72a Support getelementptr instructions which use uint's to index into structure
types and can have arbitrary 32- and 64-bit integer types indexing into
sequential types.
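
A made-up example of what is now accepted (old IR syntax; the types and names are illustrative):

int* %test({ int, [10 x int] }* %S, uint %i) {
        %P = getelementptr { int, [10 x int] }* %S, int 0, uint 1, uint %i
        ret int* %P
}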


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12653 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-05 01:30:19 +00:00
Alkis Evlogimenos
bee8a094af Clean up code a bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12615 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-02 18:11:32 +00:00
Alkis Evlogimenos
13ce339442 Fix type in instruction builder instantiation
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12610 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-02 15:51:03 +00:00
Chris Lattner
68626c2b30 Generate slightly smaller code, "test R, R" instead of "cmp R, 0"
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12579 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-31 22:22:36 +00:00
Chris Lattner
352eb48f8e Codegen FP select instructions into X86 conditional moves. Annoyingly enough
the X86 does not support a full set of fp cmove instructions, so we can't always
fold the condition into the select.  :(  Yuck.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12577 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-31 22:03:35 +00:00
Chris Lattner
307ecbaddb Fold comparisons into select instructions, making much better code and
using our broad selection of movcc instructions.  :)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12560 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-30 22:39:09 +00:00
Chris Lattner
12d96a0b4d Add direct support for integer select instructions, though we don't yet support
folding compares into the select.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12553 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-30 21:22:00 +00:00
Chris Lattner
6f2ab04e91 Fix a fairly major performance problem. If a PHI node had a constant as
an incoming value from a block, the selector would evaluate the constant
at the TOP of the block instead of at the end of the block.  This made the
live range for the constant span the entire block, increasing register
pressure needlessly.
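
A made-up illustration (old IR syntax): given

Loop:
        %i = phi int [ 0, %Entry ], [ %i2, %Loop ]

the constant 0 incoming from %Entry was materialized at the top of %Entry, so its live range
covered that entire block.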


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12542 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-30 19:10:12 +00:00
Chris Lattner
ab18020cbd Malloc doesn't kill a load. This patch need not go into 1.2 though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12500 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-18 17:01:26 +00:00
Chris Lattner
85c84e759e Fix a really nasty bug that was breaking ijpeg in LLC mode. We were incorrectly
folding load instructions into other instructions across free instruction
boundaries.  Perhaps this will also fix the other strange failures?
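
Roughly the shape of the problem (an invented example, old IR syntax):

        %X = load int* %P
        free int* %P
        %Y = add int %A, %X

Folding the load into the add would re-read %P after the free.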


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12494 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-18 06:29:54 +00:00
Chris Lattner
5634b9f5e7 It helps if I save the file. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12357 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-13 00:24:52 +00:00
Chris Lattner
317201d773 Rename the intrinsic enum values for llvm.va_* from Intrinsic::va_* to
Intrinsic::va*.  This avoids conflicting with macros in the stdlib.h file.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12356 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-13 00:24:00 +00:00
Chris Lattner
7dee5daf85 Implement folding explicit load instructions into binary operations. For a
testcase like this:

int %test(int* %P, int %A) {
        %Pv = load int* %P
        %B = add int %A, %Pv
        ret int %B
}

We now generate:
test:
        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        add %EAX, DWORD PTR [%ECX]
        ret

Instead of:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %EAX, DWORD PTR [%EAX]
        add %EAX, %ECX
        ret

... saving one instruction, and often a register.  Note that there are a lot
of other instructions that could use this, but they aren't handled.  I'm not
really interested in adding them, but mul/div and all of the FP instructions
could be supported as well if someone wanted to add them.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12204 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-08 01:58:35 +00:00
Chris Lattner
721d2d4a6e Rearrange and refactor some code. No functionality changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12203 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-08 01:18:36 +00:00
Misha Brukman
538607fe45 Doxygenify some comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12064 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-01 23:53:11 +00:00
Chris Lattner
21585221b6 Handle passing constant integers to functions much more efficiently. Instead
of generating this code:

        mov %EAX, 4
        mov DWORD PTR [%ESP], %EAX
        mov %AX, 123
        movsx %EAX, %AX
        mov DWORD PTR [%ESP + 4], %EAX
        call Y

we now generate:
        mov DWORD PTR [%ESP], 4
        mov DWORD PTR [%ESP + 4], 123
        call Y

Which hurts the eyes less.  :)

Considering that register pressure around call sites is already high (with all
of the callee-clobbered registers and such), this may help a lot.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12028 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-01 02:42:43 +00:00
Chris Lattner
ce6096f49b Fix a minor code-quality issue. When passing 8- and 16-bit integer constants
to function calls, we would emit dead code, like this:

int Y(int, short, double);
int X() {
  Y(4, 123, 4);
}

--- Old
X:
        sub %ESP, 20
        mov %EAX, 4
        mov DWORD PTR [%ESP], %EAX
***     mov %AX, 123
        mov %AX, 123
        movsx %EAX, %AX
        mov DWORD PTR [%ESP + 4], %EAX
        fld QWORD PTR [.CPIX_0]
        fstp QWORD PTR [%ESP + 8]
        call Y
        mov %EAX, 0
        # IMPLICIT_USE %EAX %ESP
        add %ESP, 20
        ret

Now we emit:
X:
        sub %ESP, 20
        mov %EAX, 4
        mov DWORD PTR [%ESP], %EAX
        mov %AX, 123
        movsx %EAX, %AX
        mov DWORD PTR [%ESP + 4], %EAX
        fld QWORD PTR [.CPIX_0]
        fstp QWORD PTR [%ESP + 8]
        call Y
        mov %EAX, 0
        # IMPLICIT_USE %EAX %ESP
        add %ESP, 20
        ret

Next up, eliminate the mov AX and movsx entirely!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12026 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-01 02:34:08 +00:00
Alkis Evlogimenos
8295f202d9 A big X86 instruction rename. The instructions are renamed to make
their names more descriptive. A name consists of the base name, a
default operand size, followed by a character per operand with an
optional special size. For example:

ADD8rr -> add, 8-bit register, 8-bit register

IMUL16rmi -> imul, 16-bit register, 16-bit memory, 16-bit immediate

IMUL16rmi8 -> imul, 16-bit register, 16-bit memory, 8-bit immediate

MOVSX32rm16 -> movsx, 32-bit register, 16-bit memory


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11995 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-29 08:50:03 +00:00
Chris Lattner
ee352852e7 Eliminate the X86-specific BMI functions, using BuildMI instead.
Replace uses of addZImm with addImm.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11992 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-29 07:22:16 +00:00
Chris Lattner
168aa90bf6 Fix a miscompilation of 197.parser that occurs when you have single basic
block loops.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11990 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-29 07:10:16 +00:00
Chris Lattner
1ddf475b6a These two virtual methods are never called.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11984 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-29 05:59:33 +00:00
Alkis Evlogimenos
da474adb21 SHLD and SHRD take 32-bit operands but an 8-bit immediate. Rename them
to denote this fact.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11972 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-28 23:46:44 +00:00
Alkis Evlogimenos
8e475b8cfd Floating point loads/stores act on memory operands. Rename them to
denote this fact.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11971 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-28 23:42:35 +00:00
Alkis Evlogimenos
e35ba65b02 Rename SHL, SHR, SAR, SHLD and SHLR instructions to make them
consistent with the rest and also prepare for the addition of their
memory operand variants.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11902 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-27 06:57:05 +00:00
Alkis Evlogimenos
71e353ed35 Uncomment assertions that register# != 0 on calls to
MRegisterInfo::is{Physical,Virtual}Register. Apply appropriate fixes
to relevant files.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11882 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-26 22:00:20 +00:00
Chris Lattner
8dd8d261a4 Fix some warnings, some of which were spurious, and some of which were real
bugs.  Thanks Brian!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11859 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-26 01:20:02 +00:00
Chris Lattner
5f2c7b1975 Teach the instruction selector how to transform 'array' GEP computations into X86
scaled indexes.  This allows us to compile GEP's like this:

int* %test([10 x { int, { int } }]* %X, int %Idx) {
        %Idx = cast int %Idx to long
        %X = getelementptr [10 x { int, { int } }]* %X, long 0, long %Idx, ubyte 1, ubyte 0
        ret int* %X
}

Into a single address computation:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        lea %EAX, DWORD PTR [%EAX + 8*%ECX + 4]
        ret

Before it generated:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        shl %ECX, 3
        add %EAX, %ECX
        lea %EAX, DWORD PTR [%EAX + 4]
        ret

This is useful for things like int/float/double arrays, as the indexing can be folded into
the loads&stores, reducing register pressure and decreasing the pressure on the decode unit.
With these changes, I expect our performance on 256.bzip2 and gzip to improve a lot.  On
bzip2 for example, we go from this:

10665 asm-printer           - Number of machine instrs printed
   40 ra-local              - Number of loads/stores folded into instructions
 1708 ra-local              - Number of loads added
 1532 ra-local              - Number of stores added
 1354 twoaddressinstruction - Number of instructions added
 1354 twoaddressinstruction - Number of two-address instructions
 2794 x86-peephole          - Number of peephole optimization performed

to this:
9873 asm-printer           - Number of machine instrs printed
  41 ra-local              - Number of loads/stores folded into instructions
1710 ra-local              - Number of loads added
1521 ra-local              - Number of stores added
 789 twoaddressinstruction - Number of instructions added
 789 twoaddressinstruction - Number of two-address instructions
2142 x86-peephole          - Number of peephole optimization performed

... and these types of instructions are often in tight loops.

Linear scan is also helped, but not as much.  It goes from:

8787 asm-printer           - Number of machine instrs printed
2389 liveintervals         - Number of identity moves eliminated after coalescing
2288 liveintervals         - Number of interval joins performed
3522 liveintervals         - Number of intervals after coalescing
5810 liveintervals         - Number of original intervals
 700 spiller               - Number of loads added
 487 spiller               - Number of stores added
 303 spiller               - Number of register spills
1354 twoaddressinstruction - Number of instructions added
1354 twoaddressinstruction - Number of two-address instructions
 363 x86-peephole          - Number of peephole optimization performed

to:

7982 asm-printer           - Number of machine instrs printed
1759 liveintervals         - Number of identity moves eliminated after coalescing
1658 liveintervals         - Number of interval joins performed
3282 liveintervals         - Number of intervals after coalescing
4940 liveintervals         - Number of original intervals
 635 spiller               - Number of loads added
 452 spiller               - Number of stores added
 288 spiller               - Number of register spills
 789 twoaddressinstruction - Number of instructions added
 789 twoaddressinstruction - Number of two-address instructions
 258 x86-peephole          - Number of peephole optimization performed

Though I'm not complaining about the drop in the number of intervals.  :)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11820 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-25 07:00:55 +00:00
Chris Lattner
b6bac51351 * Make the previous patch more efficient by not allocating a temporary MachineInstr
to do analysis.

*** FOLD getelementptr instructions into loads and stores when possible,
    making use of some of the crazy X86 addressing modes.

For example, the following C++ program fragment:

struct complex {
    double re, im;
    complex(double r, double i) : re(r), im(i) {}
};
inline complex operator+(const complex& a, const complex& b) {
    return complex(a.re+b.re, a.im+b.im);
}
complex addone(const complex& arg) {
    return arg + complex(1,0);
}

Used to be compiled to:
_Z6addoneRK7complex:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
***     mov %EDX, %ECX
        fld QWORD PTR [%EDX]
        fld1
        faddp %ST(1)
***     add %ECX, 8
        fld QWORD PTR [%ECX]
        fldz
        faddp %ST(1)
***     mov %ECX, %EAX
        fxch %ST(1)
        fstp QWORD PTR [%ECX]
***     add %EAX, 8
        fstp QWORD PTR [%EAX]
        ret

Now it is compiled to:
_Z6addoneRK7complex:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        fld QWORD PTR [%ECX]
        fld1
        faddp %ST(1)
        fld QWORD PTR [%ECX + 8]
        fldz
        faddp %ST(1)
        fxch %ST(1)
        fstp QWORD PTR [%EAX]
        fstp QWORD PTR [%EAX + 8]
        ret

Other programs should see similar improvements, across the board.  Note that
in addition to reducing instruction count, this also reduces register pressure
a lot, always a good thing on X86.  :)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11819 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-25 06:13:04 +00:00
Chris Lattner
985fe3df6f add an inefficient way of folding structure and constant array indexes together
into a single LEA instruction.  This should improve the code generated for
things like X->A.B.C[12].D.

The bigger benefit is still coming though.  Note that this uses an LEA instruction
instead of an add, giving the register allocator more freedom.  We should probably
never generate ADDri32's.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11817 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-25 03:45:50 +00:00
Chris Lattner
5a83096d6a Implement special case for storing an immediate into memory so that we don't need
an intermediate register.
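
For example (an illustrative sketch), a constant store can now be emitted as:

        mov DWORD PTR [%EAX], 42

instead of:

        mov %ECX, 42
        mov DWORD PTR [%EAX], %ECX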


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11816 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-25 02:56:58 +00:00
Alkis Evlogimenos
743d0a1f83 Refactor rewinding code for finding the first terminator of a basic
block into MachineBasicBlock::getFirstTerminator().

This also fixes a bug in the implementation of the above in both
RegAllocLocal and InstrSched, where instructions were added after the
terminator if the basic block's only instruction was a terminator (it
shouldn't matter for RegAllocLocal since this case never occurs in
practice).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11748 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-23 18:14:48 +00:00
Chris Lattner
fbc39d5045 Simplify code a bit, don't go off the end of the block, now that the current
block we are in might be empty


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11744 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-23 07:42:19 +00:00
Chris Lattner
65cf42d32f We were forgetting to add FP_REG_KILL instructions to basic blocks which will
eventually get an assignment due to elimination of PHIs.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11743 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-23 07:29:45 +00:00
Chris Lattner
311ca2e51f Implement cast fp -> bool
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11728 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-23 03:21:41 +00:00
Chris Lattner
baa58a5691 Stop passing iterators around by reference now that we have ilists!
Implement cast Type::ULongTy -> double


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11726 91177308-0d34-0410-b5e6-96231b3b80d8
2004-02-23 03:10:10 +00:00