For now the allocator still uses the old (internal) construction mechanism by default. This will be phased out soon, assuming
no issues with the builder system come up.
To invoke the new construction mechanism, pass '-regalloc=pbqp -pbqp-builder' to llc. To provide custom constraints, a
Target just needs to extend PBQPBuilder and pass an instance of its derived builder to the RegAllocPBQP constructor.
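As a rough sketch of that extension point (illustrative only: the build() signature and the factory hookup below are
assumptions, not copied from the tree):

#include "llvm/CodeGen/RegAllocPBQP.h" // assumed location of PBQPBuilder

// Hypothetical target-specific builder that layers extra constraints
// on top of the base PBQP problem construction.
class MyTargetPBQPBuilder : public PBQPBuilder {
public:
  virtual std::auto_ptr<PBQPRAProblem> build(MachineFunction *MF,
                                             const LiveIntervals *LIS,
                                             const MachineLoopInfo *Loops,
                                             const RegSet &VRegs) {
    // Let the base builder construct the plain interference problem,
    // then add target-specific costs to its graph.
    std::auto_ptr<PBQPRAProblem> P = PBQPBuilder::build(MF, LIS, Loops, VRegs);
    // ... insert extra nodes / edge costs on P here ...
    return P;
  }
};

An instance of the derived builder would then be handed to the RegAllocPBQP constructor as described above.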
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114272 91177308-0d34-0410-b5e6-96231b3b80d8
NO path to the destination containing side effects, not that SOME path contains no side effects.
In practice, this only manifests with CombinerAA enabled, because otherwise the chain has little
to no branching, so "any" is effectively equivalent to "all".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114268 91177308-0d34-0410-b5e6-96231b3b80d8
1) Do forward copy propagation. This makes it easier to estimate the cost of the
instruction being sunk.
2) Break critical edges on demand, including cases where the value is used by
PHI nodes (see the sketch below).
Critical edge splitting is not yet enabled by default.
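For reference, a critical edge runs from a block with more than one successor to a block with more than one
predecessor; splitting it inserts a fresh block on the edge so sunk instructions have a safe home. A minimal
self-contained sketch of the idea, using toy CFG types rather than the real MachineBasicBlock API:

#include <algorithm>
#include <vector>

struct Block {
  std::vector<Block*> Succs, Preds;
};

// Split the edge From->To by inserting a new block between them. Only
// a critical edge (From has >1 successor, To has >1 predecessor)
// actually needs splitting.
Block *splitCriticalEdge(Block *From, Block *To) {
  if (From->Succs.size() < 2 || To->Preds.size() < 2)
    return 0; // not critical, nothing to do
  Block *Mid = new Block;
  std::replace(From->Succs.begin(), From->Succs.end(), To, Mid);
  std::replace(To->Preds.begin(), To->Preds.end(), From, Mid);
  Mid->Preds.push_back(From);
  Mid->Succs.push_back(To);
  return Mid;
}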
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114227 91177308-0d34-0410-b5e6-96231b3b80d8
great deal because we don't have to worry about maintaining SSA form.
Unconditionally copy back to dupli when the register is live out of the split
range, even if the live-out value was defined outside the range. Skipping the
back-copy only makes sense when the live range is going to spill outside the
split range, and we don't know that it will. Besides, this was a hack to avoid
SSA update issues.
Clear up some confusion about the end point of a half-open LiveRange. Methinks
LiveRanges need to be closed so both start and end are included in the range.
The low bits of a SlotIndex are symbolic, so a half-open range doesn't really
make sense. This would be a pervasive change, though.
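To illustrate the two conventions with toy types (not the real LiveRange/SlotIndex classes):

// Half-open: [Start, End). The end point is excluded, which is the
// current convention being questioned above.
struct HalfOpenRange {
  unsigned Start, End;
  bool contains(unsigned I) const { return Start <= I && I < End; }
};

// Closed: [Start, End]. Both endpoints are included, which fits better
// when the low bits of an index are symbolic rather than numeric.
struct ClosedRange {
  unsigned Start, End;
  bool contains(unsigned I) const { return Start <= I && I <= End; }
};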
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114043 91177308-0d34-0410-b5e6-96231b3b80d8
the 'zero' bit down into the back-end. There are other cases where this logic
isn't sufficient, so they should be handled separately.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113665 91177308-0d34-0410-b5e6-96231b3b80d8
iterator when an optimization took place. This allows us to do more insane
things with the code than just remove an instruction or two.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113640 91177308-0d34-0410-b5e6-96231b3b80d8
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should not treat them as non-micro-coded
simple instructions.
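A toy version of the intended cost model (illustrative names, not the real if-converter hooks):

// An instruction predicated on false is not free: it still occupies
// the decoder. A micro-coded instruction (e.g. ARM LDM / STM) should
// be charged roughly its micro-op count instead of a single cycle.
unsigned predicatedFalseDecodeCycles(bool IsMicroCoded,
                                     unsigned NumMicroOps) {
  if (IsMicroCoded)
    return NumMicroOps > 1 ? NumMicroOps : 1;
  return 1; // simple instruction: one decode cycle
}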
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113570 91177308-0d34-0410-b5e6-96231b3b80d8
LiveIntervals already adds <imp-def> operands for super-registers when a subreg
def defines the whole register. Thus, it is not necessary to do it again when
rewriting.
In fact, the super-register imp-defs caused miscompilations because the late
scheduler couldn't see that the super-register was read.
We still add super-reg <imp-use,kill> operands when rewriting virtuals to
physicals.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113299 91177308-0d34-0410-b5e6-96231b3b80d8
Since mem2reg isn't run at -O0, we get a ton of reloads from the stack.
For example, this code:
int foo(int x, int y, int z) {
  return x+y+z;
}
used to compile into:
_foo:                           ## @foo
        subq    $12, %rsp
        movl    %edi, 8(%rsp)
        movl    %esi, 4(%rsp)
        movl    %edx, (%rsp)
        movl    8(%rsp), %edx
        movl    4(%rsp), %esi
        addl    %edx, %esi
        movl    (%rsp), %edx
        addl    %esi, %edx
        movl    %edx, %eax
        addq    $12, %rsp
        ret
Now we produce:
_foo:                           ## @foo
        subq    $12, %rsp
        movl    %edi, 8(%rsp)
        movl    %esi, 4(%rsp)
        movl    %edx, (%rsp)
        movl    8(%rsp), %edx
        addl    4(%rsp), %edx   ## Folded load
        addl    (%rsp), %edx    ## Folded load
        movl    %edx, %eax
        addq    $12, %rsp
        ret
Fewer instructions and less register use = faster compiles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113102 91177308-0d34-0410-b5e6-96231b3b80d8
Clobber ranges are no longer used when joining physical registers.
Instead, all aliases are checked for interference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113084 91177308-0d34-0410-b5e6-96231b3b80d8
overload UserInInstr. Explicitly check Allocatable. The early exit in the
condition means the performance impact of the extra test should be
minimal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113016 91177308-0d34-0410-b5e6-96231b3b80d8
solve the root problem, but it corrects the bug in the code I added to
support legalizing in the case where the non-extended type is also legal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112997 91177308-0d34-0410-b5e6-96231b3b80d8
slot.
Teach it to also check for early clobbered aliases, and early clobber operands
following the current operand.
This fixes the miscompilation in PR8044 where EC registers eax and ecx were
being used for inputs.
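The failure mode involves inline asm along these lines (an illustrative example, not the actual PR8044 test case):
with an early-clobber output, the allocator must keep the output register distinct from all input registers,
because the output may be written before the inputs are read.

int f(int a, int b) {
  int out;
  // "=&r" marks an early-clobber output; it must not share a register
  // with the "r" inputs a and b.
  asm("movl %1, %0\n\t"
      "addl %2, %0"
      : "=&r"(out)
      : "r"(a), "r"(b));
  return out;
}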
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112988 91177308-0d34-0410-b5e6-96231b3b80d8
Original commit message:
Use the SSAUpdater to turn calls to eh.exception that are not in a
landing pad into uses of registers rather than loads from a stack
slot. Doesn't touch the 'orrible hack code - Bill needs to persuade
me harder :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112952 91177308-0d34-0410-b5e6-96231b3b80d8
there are clearly no stores between the load and the store. This fixes
the miscompile reported as PR7833.
This breaks the test/CodeGen/X86/narrow_op-2.ll optimization, which is
safe, but awkward to prove safe. Move it to X86's README.txt.
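The shape of code involved looks roughly like this (illustrative, not the PR7833 test case): a wide load, a bitwise
update of part of the value, and a wide store back. Narrowing the store to just the modified byte is only sound when
no other store can touch the word in between.

void set_low_byte(unsigned *p, unsigned char b) {
  unsigned v = *p;        // wide load
  v = (v & ~0xFFu) | b;   // only the low byte changes
  *p = v;                 // candidate for narrowing to a 1-byte store
}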
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112861 91177308-0d34-0410-b5e6-96231b3b80d8