llvm-6502/test/CodeGen
Chris Lattner beac75da37 implement rdar://6653118 - fastisel should fold loads where possible.
Since mem2reg isn't run at -O0, we get a ton of reloads from the stack.
For example, this code:

int foo(int x, int y, int z) {
  return x+y+z;
}

used to compile into:

_foo:                                   ## @foo
	subq	$12, %rsp
	movl	%edi, 8(%rsp)
	movl	%esi, 4(%rsp)
	movl	%edx, (%rsp)
	movl	8(%rsp), %edx
	movl	4(%rsp), %esi
	addl	%edx, %esi
	movl	(%rsp), %edx
	addl	%esi, %edx
	movl	%edx, %eax
	addq	$12, %rsp
	ret

Now we produce:

_foo:                                   ## @foo
	subq	$12, %rsp
	movl	%edi, 8(%rsp)
	movl	%esi, 4(%rsp)
	movl	%edx, (%rsp)
	movl	8(%rsp), %edx
	addl	4(%rsp), %edx    ## Folded load
	addl	(%rsp), %edx     ## Folded load
	movl	%edx, %eax
	addq	$12, %rsp
	ret

Fewer instructions and less register use = faster compiles.
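
The folding itself is a small peephole: when the value produced by a stack
load is consumed only by the instruction that immediately follows it, the
instruction selector can pick the reg/mem form of that instruction
(addl 4(%rsp), %edx) instead of emitting a separate load. Below is a
minimal standalone sketch of that idea; the types and names (Inst,
foldLoads) are invented for illustration and are not the actual FastISel
interfaces.

#include <cstdio>
#include <vector>

// Hypothetical, simplified instruction representation for the sketch.
struct Inst {
  enum Kind { Load, Add } kind;
  int dst;   // destination virtual register
  int src;   // register operand (Add only), -1 if unused
  int slot;  // stack slot offset (Load, or folded Add), -1 if none
};

// Fold each stack load whose result feeds the immediately following Add:
// the pair (Load slot -> rX ; Add rX, rY) becomes a single Add that reads
// the slot directly.
static std::vector<Inst> foldLoads(const std::vector<Inst> &in) {
  std::vector<Inst> out;
  for (size_t i = 0; i < in.size(); ++i) {
    const Inst &I = in[i];
    if (I.kind == Inst::Load && i + 1 < in.size()) {
      const Inst &Next = in[i + 1];
      if (Next.kind == Inst::Add && Next.src == I.dst) {
        // Emit the reg/mem form instead of load + add.
        out.push_back({Inst::Add, Next.dst, /*src=*/-1, I.slot});
        ++i;  // skip the Add we just folded into
        continue;
      }
    }
    out.push_back(I);
  }
  return out;
}

int main() {
  // Before: movl 4(%rsp), %r1 ; addl %r1, %r0
  // After : addl 4(%rsp), %r0            (one instruction, no temp reg)
  std::vector<Inst> prog = {{Inst::Load, /*dst=*/1, /*src=*/-1, /*slot=*/4},
                            {Inst::Add,  /*dst=*/0, /*src=*/ 1, /*slot=*/-1}};
  for (const Inst &I : foldLoads(prog)) {
    if (I.kind == Inst::Load)
      std::printf("movl %d(%%rsp), %%r%d\n", I.slot, I.dst);
    else if (I.slot >= 0)
      std::printf("addl %d(%%rsp), %%r%d   ## Folded load\n", I.slot, I.dst);
    else
      std::printf("addl %%r%d, %%r%d\n", I.src, I.dst);
  }
  return 0;
}

A real implementation also has to prove that the load has exactly one use
and that nothing between the load and its user can write the stack slot;
this sketch only checks that the two instructions are adjacent and that
the operands match.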



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113102 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-05 02:18:34 +00:00
Alpha
ARM Replace NEON vabdl, vaba, and vabal intrinsics with combinations of the 2010-09-03 01:35:08 +00:00
Blackfin
CBackend
CellSPU Fix lowering of INSERT_VECTOR_ELT in SPU. 2010-08-29 12:41:50 +00:00
CPP
Generic
MBlaze
Mips Correct bogus module triple specifications. 2010-08-30 10:48:29 +00:00
MSP430
PIC16
PowerPC
SPARC
SystemZ Correct bogus module triple specifications. 2010-08-30 10:48:29 +00:00
Thumb Re-apply r112883: 2010-09-03 18:37:12 +00:00
Thumb2 Re-apply r112883: 2010-09-03 18:37:12 +00:00
X86 implement rdar://6653118 - fastisel should fold loads where possible. 2010-09-05 02:18:34 +00:00
XCore