Mirror of https://github.com/c64scene-ar/llvm-6502.git, synced 2024-12-21 00:32:23 +00:00
50fad70279
For code like this:

    void foo(float *a, float *b, int n, int stride_a, int stride_b) {
      int i;
      for (i = 0; i < n; i++)
        a[i*stride_a] = b[i*stride_b];
    }

we now emit:

    .LBB_foo2_2:    ; no_exit
            lfs f0, 0(r4)
            stfs f0, 0(r3)
            addi r7, r7, 1
            add r4, r2, r4
            add r3, r6, r3
            cmpw cr0, r7, r5
            blt .LBB_foo2_2 ; no_exit

instead of:

    .LBB_foo_2:     ; no_exit
            mullw r8, r2, r7     ;; multiply!
            slwi r8, r8, 2
            lfsx f0, r4, r8
            mullw r8, r2, r6     ;; multiply!
            slwi r8, r8, 2
            stfsx f0, r3, r8
            addi r2, r2, 1
            cmpw cr0, r2, r5
            blt .LBB_foo_2  ; no_exit

Loops with variable strides occur pretty often. For example, in SPECFP2K there are 317 variable strides in 177.mesa, 3 in 179.art, 14 in 188.ammp, 56 in 168.wupwise, and 36 in 172.mgrid.

Now we can allow indvars to turn functions written like this:

    void foo2(float *a, float *b, int n, int stride_a, int stride_b) {
      int i, ai = 0, bi = 0;
      for (i = 0; i < n; i++) {
        a[ai] = b[bi];
        ai += stride_a;
        bi += stride_b;
      }
    }

into code like the above for better analysis. With this patch, they generate identical code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22740 91177308-0d34-0410-b5e6-96231b3b80d8
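As a rough illustration of what the strength-reduced loop corresponds to at the source level, here is a minimal C sketch. It is not LLVM output, and the name foo_reduced is made up for this example; the actual transformation happens on LLVM IR during code generation. The idea matches the add-only inner loop shown above: the per-iteration i*stride multiplications are replaced by pointers that are bumped by the stride each time around.

    /* Hypothetical C-level sketch of the strength-reduced form; the real
     * work is done by the compiler, not written by hand like this. */
    void foo_reduced(float *a, float *b, int n, int stride_a, int stride_b) {
      int i;
      float *ap = a, *bp = b;
      for (i = 0; i < n; i++) {
        *ap = *bp;          /* same effect as a[i*stride_a] = b[i*stride_b]; */
        ap += stride_a;     /* pointer bump replaces the i*stride_a multiply */
        bp += stride_b;
      }
    }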
Analysis
Archive
AsmParser
Bytecode
CodeGen
Debugger
ExecutionEngine
Linker
Support
System
Target
Transforms
VMCore
Makefile