both source operands. In the canonical form, the 2nd operand is changed to an
undef and the shuffle mask is adjusted to only reference elements from the 1st
operand. Radar 7434842.
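A minimal sketch of the remapping (plain C++ with stand-in values; hypothetical,
not the LLVM code): mask indices 0..N-1 select from the first source, N..2N-1
from the second, and -1 means undef.

  #include <vector>

  // Hypothetical sketch: if the mask only references one source, make that
  // source the first operand, rebase the indices, and undef the second.
  void canonicalizeShuffle(std::vector<int> &Mask, int N, int &Src0, int &Src1,
                           const int Undef /* stand-in for an undef value */) {
    bool UsesSrc0 = false, UsesSrc1 = false;
    for (int Idx : Mask) {
      if (Idx >= N) UsesSrc1 = true;
      else if (Idx >= 0) UsesSrc0 = true;
    }
    if (UsesSrc0 && UsesSrc1)
      return;                    // both sources live: nothing to canonicalize
    if (UsesSrc1) {
      for (int &Idx : Mask)
        if (Idx >= 0) Idx -= N;  // rebase onto the first operand's elements
      Src0 = Src1;               // the used source becomes the 1st operand
    }
    Src1 = Undef;                // 2nd operand is no longer referenced
  }

With the mask rebased, later pattern matching only has to consider
single-source shuffles.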
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90417 91177308-0d34-0410-b5e6-96231b3b80d8
For VMOVv*i[16,32], the op bit is a don't-care, and some cmode bits vary
depending on the immediate value.
Ref: Table A7-15 Modified immediate values for Advanced SIMD instructions.
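For the 32-bit forms, a hedged sketch of how the value picks the cmode bits
(hypothetical helper, not the decoder; follows Table A7-15):

  #include <cstdint>

  // For a 32-bit "single byte at some position" VMOV immediate, Table A7-15
  // encodes the byte position in cmode<2:1> (cmode = 000x/001x/010x/011x),
  // so these bits depend on the value. Returns the 4-bit cmode with
  // cmode<0> left clear, or ~0u if the value is not of this form.
  unsigned vmovI32Cmode(uint32_t Imm) {
    for (unsigned Byte = 0; Byte != 4; ++Byte)
      if ((Imm & ~(0xFFu << (8 * Byte))) == 0)
        return Byte << 1;   // 0b0000, 0b0010, 0b0100, 0b0110
    return ~0u;             // needs one of the other cmode forms
  }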
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90173 91177308-0d34-0410-b5e6-96231b3b80d8
for all the processors where I have tried it, and even when it might not help
performance, the cost is quite low. The opportunities for duplicating
indirect branches are limited by other factors, so code size does not change
much even when indirect branches are tail duplicated aggressively.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90144 91177308-0d34-0410-b5e6-96231b3b80d8
Make tail duplication of indirect branches much more aggressive (for targets
that indicate it is profitable), based on further experience with
this transformation. I compiled three large applications with and without
this more aggressive tail duplication and measured minimal changes in code
size. ("size" on Darwin seems to round the text size up to the nearest
page boundary, so I can only say that any code size increase was less than
one 4k page.) Radar 7421267.
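For flavor, a tiny computed-goto sketch (GNU labels-as-values extension;
illustrative only, not from the patch) of the classic case where this wins:
after duplication each handler ends in its own indirect branch, so the
predictor sees per-site target histories instead of one shared,
hard-to-predict dispatch site.

  #include <cstdio>

  // Illustrative threaded interpreter. The indirect branch has been tail
  // duplicated into each handler rather than shared at one dispatch point.
  static int run(const int *Ops, int N) {
    static void *Targets[] = { &&op_add, &&op_sub, &&op_halt };
    int Acc = 0, I = 0;
    goto *Targets[Ops[I]];
  op_add:
    Acc += 1;
    if (++I < N) goto *Targets[Ops[I]];   // duplicated indirect branch
    return Acc;
  op_sub:
    Acc -= 1;
    if (++I < N) goto *Targets[Ops[I]];   // duplicated indirect branch
    return Acc;
  op_halt:
    return Acc;
  }

  int main() {
    const int Prog[] = {0, 0, 1, 2};
    std::printf("%d\n", run(Prog, 4));    // prints 1
    return 0;
  }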
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89814 91177308-0d34-0410-b5e6-96231b3b80d8
than doing the same via the constant pool:
1. A load from the constant pool costs 3 cycles on the A9; a movt/movw pair
   costs just 2.
2. A load from the constant pool might stall for up to 300 cycles on a cache
   miss.
3. Movt/movw does not use the load/store unit.
4. Fewer constant-pool entries => better compiler performance.
This is only enabled on ELF systems, since Darwin does not have the needed
relocations (yet). A sketch of the movt/movw materialization follows.
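A minimal sketch (hypothetical helper emitting asm text, plain C++; not the
actual lowering): movw writes the low 16 bits and zeroes the upper half, then
movt fills bits 31:16.

  #include <cstdint>
  #include <cstdio>

  // Hypothetical illustration: print the two-instruction sequence for Imm.
  void emitMovwMovt(uint32_t Imm) {
    uint16_t Lo = Imm & 0xFFFF;   // movw rd, #Lo also zeroes bits 31:16
    uint16_t Hi = Imm >> 16;      // movt rd, #Hi writes bits 31:16
    std::printf("movw r0, #%u\n", (unsigned)Lo);
    if (Hi)                       // movt is unneeded for 16-bit values
      std::printf("movt r0, #%u\n", (unsigned)Hi);
  }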
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89720 91177308-0d34-0410-b5e6-96231b3b80d8
way for each TargetJITInfo subclass to allocate its own stubs. This
means stubs are no longer exactly sized, but it lets us get rid of
TargetJITInfo::emitFunctionStubAtAddr(), which in turn lets ARM and PPC
support the eager JIT, fixing http://llvm.org/PR4816.
* Rename the JITEmitter's stub creation functions to describe the kind
of stub they create. So far, all of them create lazy-compilation
stubs, but they sometimes get used when far-call stubs are needed.
Fixing http://llvm.org/PR5201 will involve fixing this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89715 91177308-0d34-0410-b5e6-96231b3b80d8
Note that "hasDotLocAndDotFile"-style debug info was already broken;
people wanting this functionality should implement it in the
AsmPrinter/DwarfWriter code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89711 91177308-0d34-0410-b5e6-96231b3b80d8
It's probably better in the long run to replace the
indirect-GlobalVariable system. That will be done in a subsequent
patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89708 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes the NEON asm printing so the "predicate" field is printed between the opcode and the data type suffix.
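For example (operands chosen for illustration), a predicated move now prints
as "vmoveq.f32 s0, s1", with the "eq" condition sitting between the opcode
and the ".f32" data type suffix.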
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89706 91177308-0d34-0410-b5e6-96231b3b80d8
VDUPLND and VDUPLNQ to derive from N2V instead of N2VDup. They now expect
op19_18 and op17_16 as the first two arguments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89699 91177308-0d34-0410-b5e6-96231b3b80d8
constant pool ranges, as CPEIsInRange() makes conservative assumptions about
the potential alignment changes from branch adjustments. The verification,
on the other hand, runs after those branch adjustments are made, so their
effects on alignment are known and already taken into account. The sanity
check in verify() should check the range directly instead.
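A minimal sketch of such a direct check (hypothetical helper name; at
verification time the offsets below are exact):

  // With branch adjustments already applied, offsets are final, so just
  // compare the user-to-entry distance against the reach of the load,
  // instead of reusing the conservative CPEIsInRange() logic.
  bool entryInRange(unsigned UserOffset, unsigned CPEOffset, unsigned MaxDisp) {
    unsigned Dist = UserOffset > CPEOffset ? UserOffset - CPEOffset
                                           : CPEOffset - UserOffset;
    return Dist <= MaxDisp;
  }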
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89473 91177308-0d34-0410-b5e6-96231b3b80d8
assembly can confuse things utterly, as it's assumed that instructions in
inline assembly are 4 bytes wide. For Thumb mode, that's often not true,
so the calculations for when alignment padding will be present get thrown off,
ultimately leading to out-of-range constant-pool entry references. Making
the more conservative assumption that padding may be necessary when inline asm
is present avoids this situation.
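A hedged sketch of the conservative rule (hypothetical types and names, not
the actual pass):

  // A block's byte size is only exact when it contains no inline asm
  // (Thumb instructions may be 2 or 4 bytes), so inline asm means
  // alignment padding must be assumed possible.
  struct BlockInfo {
    unsigned Size;        // exact only when HasInlineAsm is false
    bool HasInlineAsm;
  };

  bool mayNeedAlignmentPadding(const BlockInfo &BI, unsigned Align) {
    if (BI.HasInlineAsm)
      return true;                   // unknown sizes: assume the worst
    return (BI.Size % Align) != 0;   // exact size: padding iff misaligned
  }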
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89403 91177308-0d34-0410-b5e6-96231b3b80d8
fully specified at this level. Subclasses of NLdStLN can specify selective
bit(s) for Inst{7-4}, as is done for VLD[234]LN* and VST[234]LN* inside
ARMInstrNEON.td.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89377 91177308-0d34-0410-b5e6-96231b3b80d8