It's always good to prune early, but formulae that are unsatisfactory
in their own right need to be removed before running any other pruning
heuristics. We can't easily avoid generating such formulae in the first
place, because we need them as an intermediate basis for forming other
good formulae.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145906 91177308-0d34-0410-b5e6-96231b3b80d8
The new register allocator is much better at splitting up live ranges that are too constrained by register classes.
Fixes <rdar://problem/10466609>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145899 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, all ARM::CONSTPOOL_ENTRY instructions had a hardwired
alignment of 4 bytes emitted by ARMAsmPrinter. Now the same alignment
is set on the basic block.
This is in preparation for supporting ARM constant pool islands with
different alignments.
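A minimal sketch of the new arrangement (illustrative, not the actual
diff; it assumes MachineBasicBlock::setAlignment, which after the
logarithmic-units change below takes log2(bytes)):

    #include "llvm/CodeGen/MachineBasicBlock.h"

    // Record a constant pool island's alignment on the block itself instead
    // of hardwiring 4 bytes in ARMAsmPrinter. AlignLog2 is log2(bytes), so
    // the old fixed behavior corresponds to AlignLog2 == 2.
    static void alignConstPoolIsland(llvm::MachineBasicBlock &Island,
                                     unsigned AlignLog2) {
      Island.setAlignment(AlignLog2);
    }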
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145890 91177308-0d34-0410-b5e6-96231b3b80d8
This was actually a bit of a mess. TLI.setPrefLoopAlignment was clearly
documented as taking log2(bytes) units, but the x86 target would still
set a preferred loop alignment of '16'.
CodePlacementOpt passed this number on to the basic block, and
AsmPrinter interpreted it as bytes.
Now both MachineFunction and MachineBasicBlock use logarithmic
alignments.
Obviously, MachineConstantPool still measures alignments in bytes, so we
can emulate the thrill of using 'as', where the units of an alignment
directive vary by target.
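A tiny sketch of the unit discipline (illustrative, not part of the
patch), assuming the isPowerOf2_32/Log2_32 helpers from
llvm/Support/MathExtras.h:

    #include "llvm/Support/MathExtras.h"
    #include <cassert>

    // Convert a byte alignment such as x86's preferred loop alignment of 16
    // into the log2(bytes) form the hooks are documented to take: 16 -> 4.
    static unsigned bytesToLogAlign(unsigned Bytes) {
      assert(llvm::isPowerOf2_32(Bytes) && "alignment must be a power of two");
      return llvm::Log2_32(Bytes);
    }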
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145889 91177308-0d34-0410-b5e6-96231b3b80d8
Whether a fixup needs relaxation for the associated instruction is a
target-specific function, as the FIXME indicated. Create a hook for that
and use it.
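A hedged sketch of the hook's shape; the class name and parameter list
are illustrative stand-ins, not the exact declaration added by this
commit:

    #include <cstdint>

    class AsmBackendSketch {
    public:
      virtual ~AsmBackendSketch() {}
      // The generic assembler resolves the fixup to Value, then asks the
      // target whether the unrelaxed instruction encoding can still hold it.
      virtual bool fixupNeedsRelaxation(unsigned FixupKind,
                                        uint64_t Value) const = 0;
    };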
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145881 91177308-0d34-0410-b5e6-96231b3b80d8
Don't do this now, but add a test case to prevent it from happening in the
future.
Additional test for rdar://9892684
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145879 91177308-0d34-0410-b5e6-96231b3b80d8
Not right yet, as the MCAssembler's rules for when to relax aren't yet
correct for ARM. This is a step in the right direction, though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145871 91177308-0d34-0410-b5e6-96231b3b80d8
memory fences) in statistics registration, which works the same way that
ManagedStatic registration does.
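For illustration only (not the actual patch): ManagedStatic-style
registration amounts to a lock-free push onto a global list, with release
ordering playing the role of the memory fence:

    #include <atomic>

    struct StatNode { StatNode *Next; };
    static std::atomic<StatNode *> StatList(nullptr);

    // Lock-free registration: link the node, then publish it with a CAS
    // whose release ordering makes the node visible before the head moves.
    void registerStat(StatNode *N) {
      StatNode *Head = StatList.load(std::memory_order_relaxed);
      do {
        N->Next = Head;
      } while (!StatList.compare_exchange_weak(Head, N,
                                               std::memory_order_release,
                                               std::memory_order_relaxed));
    }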
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145869 91177308-0d34-0410-b5e6-96231b3b80d8
where this would be bad as the backend shouldn't have a problem inlining small
memcpys.
rdar://10510150
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145865 91177308-0d34-0410-b5e6-96231b3b80d8
Combined destination and first source operand for the f32 variant of the
VMUL (by scalar) instruction.
rdar://10522016
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145842 91177308-0d34-0410-b5e6-96231b3b80d8
- Calling getUser() for every use in a loop is much more expensive than iterating over a block's few instructions.
- Use the new helper instead of the open-coded loop in AddrModeMatcher; a sketch of the idea follows this list.
- 5% speedup on ARMDisassembler.cpp Release builds.
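A sketch of the trade-off (illustrative; not the helper's actual
implementation): when the block is short, scanning its instructions beats
walking a long use list whose every step goes through getUser():

    #include "llvm/BasicBlock.h"   // llvm/IR/BasicBlock.h in later trees
    #include "llvm/Instruction.h"  // llvm/IR/Instruction.h in later trees
    using namespace llvm;

    // Cheap when BB is short: scan its instructions' operands for V rather
    // than walking V's possibly huge use list, calling getUser() each step.
    static bool usedInBlock(const Value *V, const BasicBlock *BB) {
      for (BasicBlock::const_iterator I = BB->begin(), E = BB->end();
           I != E; ++I)
        for (unsigned Op = 0, N = I->getNumOperands(); Op != N; ++Op)
          if (I->getOperand(Op) == V)
            return true;
      return false;
    }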
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145810 91177308-0d34-0410-b5e6-96231b3b80d8
-15% compile time on ARMDisassembler.cpp (Release build). It's not great
to add another layer of caching to the already caching-heavy LVI, but I
don't see a better way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145770 91177308-0d34-0410-b5e6-96231b3b80d8
libgcc sets the stack limit field in the TCB to 256 bytes above the actual
allocated stack limit. This means that if the function's stack frame needs
less than 256 bytes, we can simply compare the stack pointer with the
stack limit. This should result in fewer calls to __morestack.
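A self-contained sketch of the arithmetic (type and function names are
made up for illustration): because the recorded limit is biased 256 bytes
above the real one, frames under 256 bytes can compare the stack pointer
against it directly:

    #include <cstdint>

    struct TCB { std::uintptr_t StackLimit; }; // biased: real limit + 256

    // True if the prologue must call __morestack. For FrameSize < 256 the
    // 256-byte bias already covers the frame, so SP alone is compared;
    // larger frames subtract their size first.
    bool needsMoreStack(std::uintptr_t SP, std::uintptr_t FrameSize,
                        const TCB &T) {
      if (FrameSize < 256)
        return SP < T.StackLimit;
      return SP - FrameSize < T.StackLimit;
    }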
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145766 91177308-0d34-0410-b5e6-96231b3b80d8