The iterator points at the next instruction, which must not disappear when
we do the load/store replacement.
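To make the hazard concrete, here is a minimal, self-contained C++ sketch of the
same situation using std::list as a stand-in for the instruction list (the names
and values are made up for illustration; this is not the actual selector code):

#include <cassert>
#include <iterator>
#include <list>

int main() {
  std::list<int> Insts = {10, 20, 30, 40};

  // The visit loop has already advanced the iterator to the *next*
  // instruction (the 30); the instruction being replaced is the one before it.
  std::list<int>::iterator I = std::next(Insts.begin(), 2);

  // The replacement may only erase instructions other than the one the
  // iterator refers to; erasing *I itself would invalidate the iterator.
  Insts.erase(std::prev(I));   // remove the old load/store (the 20)

  assert(*I == 30);            // the iterator is still valid and still points at 30
  return 0;
}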
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12954 91177308-0d34-0410-b5e6-96231b3b80d8
at the bottom of the loop instead of the top. This greatly reduces the number
of overlapping live ranges; for example, it eliminates a spill in an important
loop in 183.equake with linear scan.
I still need to make the exit comparison of the loop use the post-incremented
version of this variable, but this is an easy first step.
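To illustrate the live-range effect at the source level, here is a made-up
counted loop in C++ (an analogy only, not the actual 183.equake code or the
compiler's IR), comparing the increment at the top of the body with the
increment at the bottom:

#include <cstdio>

static void use(int v) { std::printf("%d\n", v); }

int main() {
  const int n = 4;

  // Increment at the top of the body: 'next' is computed early, so 'i' and
  // 'next' are both live across the whole body (two overlapping live ranges).
  for (int i = 0; i != n; ) {
    int next = i + 1;   // live from here...
    use(i);             // ...all the way down to the bottom of the loop
    i = next;
  }

  // Increment at the bottom of the body: 'next' is born right before the
  // back edge, so its live range barely overlaps with 'i'.
  for (int i = 0; i != n; ) {
    use(i);
    int next = i + 1;   // born just before it replaces 'i'
    i = next;
  }
  return 0;
}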
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12952 91177308-0d34-0410-b5e6-96231b3b80d8
even when the "optimization" I added before is turned off. It generates this
extremely pointless code:
test:
fld QWORD PTR [%ESP + 4]
mov %AL, 0
test %AL, %AL
fcmove %ST(0), %ST(0)
ret
Good thing the optimizer will have removed this before code generation
anyway. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12939 91177308-0d34-0410-b5e6-96231b3b80d8
Fix several bugs in the intrinsics:
1. Make sure to copy the input registers before the instructions that use them.
2. Make sure to copy the value returned by 'in' out of EAX into the register
it is supposed to be in.
This fixes assertions when using in/out and linear scan.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12896 91177308-0d34-0410-b5e6-96231b3b80d8
SCC passes much more useful. In particular, this should fix the incredibly
stupid missed inlining opportunities that the inliner suffered from.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12860 91177308-0d34-0410-b5e6-96231b3b80d8
of the fucom[p][p] instructions. This allows us to code generate this function
bool %test(double %X, double %Y) {
%C = setlt double %Y, %X
ret bool %C
}
... into:
test:
fld QWORD PTR [%ESP + 4]
fld QWORD PTR [%ESP + 12]
fucomip %ST(1)
fstp %ST(0)
setb %AL
movsx %EAX, %AL
ret
where before we generated:
test:
fld QWORD PTR [%ESP + 4]
fld QWORD PTR [%ESP + 12]
fucompp
** fnstsw
** sahf
setb %AL
movsx %EAX, %AL
ret
The two marked instructions (which are the ones eliminated) are very bad
because they serialize execution of the processor. The new fucomi forms are
only available on the PPro and later, but since we already use cmovs we
aren't losing any portability.
I retained the old code for the day we decide we want to support CPUs all the
way back to the 386.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12852 91177308-0d34-0410-b5e6-96231b3b80d8
If the source of the cast is a load, we can just use the source memory location,
without having to create a temporary stack slot entry.
Before, we code generated this:
double %int(int* %P) {
%V = load int* %P
%V2 = cast int %V to double
ret double %V2
}
into:
int:
sub %ESP, 4
mov %EAX, DWORD PTR [%ESP + 8]
mov %EAX, DWORD PTR [%EAX]
mov DWORD PTR [%ESP], %EAX
fild DWORD PTR [%ESP]
add %ESP, 4
ret
Now we produce this:
int:
mov %EAX, DWORD PTR [%ESP + 4]
fild DWORD PTR [%EAX]
ret
... which is nicer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12846 91177308-0d34-0410-b5e6-96231b3b80d8
for mul and div.
Instead of generating this:
test_divr:
fld QWORD PTR [%ESP + 4]
fld QWORD PTR [.CPItest_divr_0]
fdivrp %ST(1)
ret
We now generate this:
test_divr:
fld QWORD PTR [%ESP + 4]
fdivr QWORD PTR [.CPItest_divr_0]
ret
This code desperately needs refactoring, which will come in the next
patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12841 91177308-0d34-0410-b5e6-96231b3b80d8
instructions use. This doesn't change any functionality except that long
constant expressions of these operations will now magically start working.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12840 91177308-0d34-0410-b5e6-96231b3b80d8