mirror of
https://github.com/c64scene-ar/llvm-6502.git
synced 2025-02-07 14:33:15 +00:00
add some late optimizations that GCC does. It thinks these are a win
even on Core2, not just AMD processors, which was a surprise to me.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72396 91177308-0d34-0410-b5e6-96231b3b80d8
parent 0c85aabfdc
commit d9b77159d6
@@ -1883,3 +1883,17 @@ On Nehalem, it may even be cheaper to just use movups when unaligned than to
fall back to lower-granularity chunks.
//===---------------------------------------------------------------------===//
Implement processor-specific optimizations for parity with GCC on these
processors. GCC does two optimizations:
1. ix86_pad_returns inserts a noop before ret instructions if immediately
preceded by a conditional branch or if it is the target of a jump.
2. ix86_avoid_jump_mispredicts inserts noops in cases where a 16-byte block of
code contains more than 3 branches.
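As an illustration only, the first heuristic might be sketched like this (hypothetical Python, not GCC's or LLVM's actual code; the mnemonic set is a made-up subset, and the "target of a jump" case is omitted for brevity):

```python
# Hypothetical sketch of heuristic 1: insert a noop before a ret that
# immediately follows a conditional branch, so the ret does not share a
# predictor slot with the branch. Instructions are modeled as strings.

COND_BRANCHES = {"je", "jne", "jl", "jle", "jg", "jge"}  # illustrative subset

def pad_returns(insns):
    out = []
    for insn in insns:
        op = insn.split()[0]
        # Pad only when the previous emitted instruction is a conditional branch.
        if op == "ret" and out and out[-1].split()[0] in COND_BRANCHES:
            out.append("nop")
        out.append(insn)
    return out
```

For example, `pad_returns(["jne .L2", "ret"])` yields `["jne .L2", "nop", "ret"]`, while a ret preceded by a plain mov is left alone.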
The first one is done for all AMDs, Core 2, and "Generic".
The second one is done for: Atom, Pentium Pro, all AMDs, Pentium 4, Nocona,
Core 2, and "Generic".
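The second heuristic's trigger condition can likewise be sketched (again a hypothetical simplification, not GCC's implementation): count branches whose first byte lands in each aligned 16-byte block, and flag a block once it would hold more than 3.

```python
# Hypothetical sketch of heuristic 2's detection step. Each instruction
# is modeled as (size_in_bytes, is_branch); sizes here are made up.
# The real pass would then pad with noops to push branches into the
# next 16-byte block, which is not shown.

def needs_padding(insns):
    """Return True if any aligned 16-byte block holds more than 3 branches."""
    counts = {}
    addr = 0
    for size, is_branch in insns:
        if is_branch:
            block = addr // 16          # aligned block containing the branch start
            counts[block] = counts.get(block, 0) + 1
        addr += size
    return any(c > 3 for c in counts.values())
```

Four 2-byte branches packed at offsets 0..7 trip the check; spreading the fourth branch past offset 16 clears it.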
//===---------------------------------------------------------------------===//