Mirror of https://github.com/c64scene-ar/llvm-6502.git (synced 2024-12-17)

Commit 80d10ded8c
Altivec only directly supports aligned loads, but these loads have a strange property: if given an unaligned address, they truncate the address to the next lower aligned address and load from there. This property, along with an extra load and some special-purpose permutation-control instructions that generate the appropriate permutation from the original unaligned address, allows efficient lowering of unaligned loads.

This code uses the trick explained in the Apple Velocity Engine optimization overview document to prevent the extra load from causing a page fault when the original address happens to be aligned. As noted in the FIXMEs, several additional optimizations can be performed to reduce the cost of these loads even further; these will be implemented in future commits.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182691 91177308-0d34-0410-b5e6-96231b3b80d8
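To make the scheme concrete, here is a minimal sketch using the standard Altivec C intrinsics from <altivec.h> (vec_ld, vec_lvsl, vec_perm). The helper name and exact sequence are illustrative assumptions, not the code emitted by this commit's lowering:

```c
#include <altivec.h>

/* Illustrative helper: permutation-based unaligned vector load.
 * Not the actual LLVM lowering; a sketch of the technique described above. */
static vector unsigned char load_unaligned(const unsigned char *p) {
  /* lvx truncates the address to the previous 16-byte boundary, so these
   * two loads fetch the aligned blocks that bracket the unaligned data. */
  vector unsigned char lo = vec_ld(0, p);

  /* Offsetting by 15 (not 16) keeps the second load inside the same
   * 16-byte block when p is already aligned, so it cannot fault on the
   * following page -- the Apple Velocity Engine trick cited above. */
  vector unsigned char hi = vec_ld(15, p);

  /* lvsl builds the permute-control vector from the low bits of the
   * original unaligned address. */
  vector unsigned char perm = vec_lvsl(0, p);

  /* vperm stitches the requested 16 bytes out of the two aligned loads. */
  return vec_perm(lo, hi, perm);
}
```

When p is already 16-byte aligned, both loads read the same block and the permutation is the identity, so the sequence degrades gracefully to a plain aligned load plus redundant work.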
Directories:

AArch64
ARM
CPP
Generic
Hexagon
Inputs
MBlaze
Mips
MSP430
NVPTX
PowerPC
R600
SI
SPARC
SystemZ
Thumb
Thumb2
X86
XCore