Mirror of https://github.com/c64scene-ar/llvm-6502.git (synced 2025-01-16 14:31:59 +00:00)
Commit 56a31c69c8 by Chris Lattner
Rewrite support for cast uint -> FP. In particular, we used to compile this:
double %test(uint %X) {
        %tmp.1 = cast uint %X to double         ; <double> [#uses=1]
        ret double %tmp.1
}

into:

test:
        sub %ESP, 8
        mov %EAX, DWORD PTR [%ESP + 12]
        mov %ECX, 0
        mov DWORD PTR [%ESP], %EAX
        mov DWORD PTR [%ESP + 4], %ECX
        fild QWORD PTR [%ESP]
        add %ESP, 8
        ret

... which basically zero extends to 8 bytes, then does an fild for an 8-byte
signed int.

Now we generate this:

test:
        sub %ESP, 4
        mov %EAX, DWORD PTR [%ESP + 8]
        mov DWORD PTR [%ESP], %EAX
        fild DWORD PTR [%ESP]
        shr %EAX, 31
        fadd DWORD PTR [.CPItest_0 + 4*%EAX]
        add %ESP, 4
        ret

        .section .rodata
        .align 4
.CPItest_0:
        .quad 5728578726015270912

This does a 32-bit signed integer load, then adds in an offset if the sign bit
of the integer was set. It turns out that this is substantially faster than
the preceding sequence. Consider this testcase:

unsigned a[2] = {1, 2};
volatile double G;

void main() {
        int i;
        for (i = 0; i < 100000000; ++i)
                G += a[i & 1];
}

On zion (a P4 Xeon, 3 GHz), this patch speeds up the testcase from 2.140s to
0.94s. On apoc, an Athlon MP 2100+, this patch speeds up the testcase from
1.72s to 1.34s. Note that the program takes 2.5s/1.97s on zion/apoc with
GCC 3.3 -O3 -fomit-frame-pointer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17083 91177308-0d34-0410-b5e6-96231b3b80d8
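For reference, the following standalone C sketch (my own illustration, not part
of the commit; the names uint_to_double and bias_table are made up) models the
trick the new code sequence uses: load the 32-bit value as a signed integer
(which is all fild can do), then add 2^32 back in when the sign bit was set,
indexing a two-entry float table that mirrors .CPItest_0.

#include <assert.h>
#include <stdint.h>

/* Two single-precision entries, mirroring .CPItest_0: offset 0 holds 0.0f,
   offset 4 holds 2^32 (bit pattern 0x4F800000), which a float represents
   exactly. */
static const float bias_table[2] = { 0.0f, 4294967296.0f };

static double uint_to_double(uint32_t x) {
        /* The int32_t cast models the 32-bit signed fild; on the
           two's-complement targets this codegen is for, it simply
           reinterprets the bits. */
        double d = (double)(int32_t)x;     /* fild DWORD PTR [%ESP]              */
        return d + bias_table[x >> 31];    /* shr %EAX, 31; fadd [table + 4*EAX] */
}

int main(void) {
        assert(uint_to_double(0u) == 0.0);
        assert(uint_to_double(7u) == 7.0);
        assert(uint_to_double(0x80000000u) == 2147483648.0);
        assert(uint_to_double(0xFFFFFFFFu) == 4294967295.0);
        return 0;
}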
Low Level Virtual Machine (LLVM)
================================

This directory and its subdirectories contain source code for the Low Level
Virtual Machine, a toolkit for the construction of highly optimized compilers,
optimizers, and runtime environments.

LLVM is open source software. You may freely distribute it under the terms of
the license agreement found in LICENSE.txt.

Please see the HTML documentation provided in docs/index.html for further
assistance with LLVM.
Languages: C++ 48.7%, LLVM 38.5%, Assembly 10.2%, C 0.9%, Python 0.4%, Other 1.2%