Mirror of https://github.com/c64scene-ar/llvm-6502.git (synced 2024-11-01 00:11:00 +00:00)

Commit 56a31c69c8
double %test(uint %X) {
        %tmp.1 = cast uint %X to double         ; <double> [#uses=1]
        ret double %tmp.1
}

into:

test:
        sub %ESP, 8
        mov %EAX, DWORD PTR [%ESP + 12]
        mov %ECX, 0
        mov DWORD PTR [%ESP], %EAX
        mov DWORD PTR [%ESP + 4], %ECX
        fild QWORD PTR [%ESP]
        add %ESP, 8
        ret

... which basically zero-extends to 8 bytes, then does an fild for an 8-byte signed int.

Now we generate this:

test:
        sub %ESP, 4
        mov %EAX, DWORD PTR [%ESP + 8]
        mov DWORD PTR [%ESP], %EAX
        fild DWORD PTR [%ESP]
        shr %EAX, 31
        fadd DWORD PTR [.CPItest_0 + 4*%EAX]
        add %ESP, 4
        ret

        .section .rodata
        .align 4
.CPItest_0:
        .quad 5728578726015270912

This does a 32-bit signed integer load, then adds in an offset if the sign bit of the integer was set. It turns out that this is substantially faster than the preceding sequence.

Consider this testcase:

unsigned a[2] = {1, 2};
volatile double G;

void main() {
    int i;
    for (i = 0; i < 100000000; ++i)
        G += a[i & 1];
}

On zion (a P4 Xeon, 3GHz), this patch speeds up the testcase from 2.140s to 0.94s. On apoc, an Athlon MP 2100+, this patch speeds up the testcase from 1.72s to 1.34s. Note that the program takes 2.5s/1.97s on zion/apoc with GCC 3.3 -O3 -fomit-frame-pointer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17083 91177308-0d34-0410-b5e6-96231b3b80d8
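For reference, the same trick can be written out in plain C. The sketch below is illustrative only and is not code from the patch; the function name and the correction table are made up here. It mirrors the generated assembly: convert the value as a signed 32-bit integer (the fild), then add back 2^32 when the original value had its top bit set, using the sign bit to index a two-entry table just as the constant pool .CPItest_0 holds 0.0 and 2^32.

    #include <stdio.h>

    /* Illustrative sketch of the codegen trick (not the compiler's code):
     * do a signed 32-bit conversion, then add a correction of 0.0 or 2^32
     * selected by the sign bit of the original unsigned value. */
    static double uint_to_double(unsigned x) {
        static const double correction[2] = { 0.0, 4294967296.0 };  /* 2^32 */
        double d = (double)(int)x;        /* signed 32-bit conversion (fild) */
        return d + correction[x >> 31];   /* fix up values >= 2^31 */
    }

    int main(void) {
        printf("%f\n", uint_to_double(7u));           /* 7.000000 */
        printf("%f\n", uint_to_double(0x80000000u));  /* 2147483648.000000 */
        return 0;
    }

The generated code actually stores the correction as a single-precision constant and adds it with a 4-byte-stride fadd; the sketch uses doubles to stay simple, and 2^32 is exactly representable either way.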
Tree at this commit:

Analysis
Archive
AsmParser
Bytecode
CodeGen
Debugger
ExecutionEngine
Linker
Support
System
Target
Transforms
VMCore
Makefile
Makefile.am