Mirror of https://github.com/c64scene-ar/llvm-6502.git (synced 2024-12-23 15:29:51 +00:00)

Commit fd3f635103
The basic issue here is that bottom-up isel matches the branch and compare but was failing to fold the load into the branch/compare combo. Fixing this (by allowing folding into any instruction of a sequence that is selected) allows us to produce things like:

    cmpb $0, 52(%rax)
    je LBB4_2

instead of:

    movb 52(%rax), %cl
    cmpb $0, %cl
    je LBB4_2

This makes the generated -O0 code run a bit faster, and it also speeds up compile time by putting less pressure on the register allocator and generating less code. This was one of the biggest classes of missing load folding. Implementing this shrinks 176.gcc's c-decl.s (as a random example) by about 4% in (verbose-asm) line count.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129656 91177308-0d34-0410-b5e6-96231b3b80d8
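For context, here is a minimal C sketch of the kind of source that exercises this path. The struct layout, field names, and the `taken()` helper are illustrative assumptions, chosen only so the loaded byte sits at offset 52 as in the assembly above:

```c
/* Hypothetical source pattern (names and layout are illustrative):
 * a byte field at offset 52, compared against zero and branched on. */
struct S {
    char pad[52];  /* filler so `flag` lands at offset 52 */
    char flag;
};

extern void taken(void);  /* assumed external function */

void check(struct S *s) {
    /* Load + compare-with-zero + branch: before the change, -O0
     * fast-isel emitted a separate movb to materialize s->flag in a
     * register; with folding, the load is absorbed into a single
     * memory-operand cmpb. */
    if (s->flag == 0)
        taken();
}
```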
Directories:

- Alpha
- ARM
- Blackfin
- CBackend
- CellSPU
- CPP
- Generic
- MBlaze
- Mips
- MSP430
- PowerPC
- PTX
- SPARC
- SystemZ
- Thumb
- Thumb2
- X86
- XCore