MachinePointerInfo, propagating the type out a level of API. Remove
the old MachineFunction::getMachineMemOperand impl.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114393 91177308-0d34-0410-b5e6-96231b3b80d8
MachinePointerInfo struct, no functionality change.
This also adds an assert to MachineMemOperand::MachineMemOperand
that verifies that the Value* is either null or an IR pointer type.
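A minimal sketch of that check, assuming 2010-era LLVM conventions (the
exact condition and message in the real constructor may differ):

// Sketch only: the Value attached to a MachineMemOperand must be absent
// or have IR pointer type.
assert((V == 0 || isa<PointerType>(V->getType())) &&
       "Value attached to MachineMemOperand is not a pointer!");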
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114389 91177308-0d34-0410-b5e6-96231b3b80d8
For now, the allocator still uses the old (internal) construction mechanism by default. This will be phased
out soon, assuming no issues with the builder system come up.
To invoke the new construction mechanism, just pass '-regalloc=pbqp -pbqp-builder' to llc. To provide custom
constraints, a target just needs to extend PBQPBuilder and pass an instance of its derived builder to the
RegAllocPBQP constructor, along the lines of the sketch below.
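For illustration, a sketch of such a derived builder; the build() name and
signature here are assumptions for the example, not copied from the actual
header:

// Hypothetical target builder: construct the base PBQP problem, then
// layer target-specific constraints on top of it.
class MyTargetPBQPBuilder : public PBQPBuilder {
public:
  virtual std::auto_ptr<PBQPRAProblem> build(MachineFunction *MF,
                                             const LiveIntervals *LIS) {
    std::auto_ptr<PBQPRAProblem> P = PBQPBuilder::build(MF, LIS);
    // ... adjust P's cost matrices to encode the extra constraints ...
    return P;
  }
};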
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114272 91177308-0d34-0410-b5e6-96231b3b80d8
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should not treat them as non-micro-coded
simple instructions.
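A rough sketch of the distinction being drawn; isMicroCoded and
MultiCycleDecodeCost are hypothetical names, and the real code would
consult target-specific decode information:

// Hypothetical cost guard: an instruction predicated on false still has
// to be decoded, and micro-coded ones (LDM / STM) may take several cycles.
unsigned decodeCostWhenPredicatedFalse(const MachineInstr *MI) {
  if (isMicroCoded(MI))          // hypothetical: multi-cycle decode?
    return MultiCycleDecodeCost; // hypothetical constant > 1
  return 1;                      // simple instruction: one decode cycle
}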
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113570 91177308-0d34-0410-b5e6-96231b3b80d8
Since mem2reg isn't run at -O0, we get a ton of reloads from the stack.
For example, this code:
int foo(int x, int y, int z) {
  return x + y + z;
}
used to compile into:
_foo:                       ## @foo
        subq    $12, %rsp
        movl    %edi, 8(%rsp)
        movl    %esi, 4(%rsp)
        movl    %edx, (%rsp)
        movl    8(%rsp), %edx
        movl    4(%rsp), %esi
        addl    %edx, %esi
        movl    (%rsp), %edx
        addl    %esi, %edx
        movl    %edx, %eax
        addq    $12, %rsp
        ret
Now we produce:
_foo:                       ## @foo
        subq    $12, %rsp
        movl    %edi, 8(%rsp)
        movl    %esi, 4(%rsp)
        movl    %edx, (%rsp)
        movl    8(%rsp), %edx
        addl    4(%rsp), %edx   ## Folded load
        addl    (%rsp), %edx    ## Folded load
        movl    %edx, %eax
        addq    $12, %rsp
        ret
Fewer instructions and less register use = faster compiles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113102 91177308-0d34-0410-b5e6-96231b3b80d8
Clobber ranges are no longer used when joining physical registers.
Instead, all aliases are checked for interference.
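A sketch of the new check, assuming the usual LiveIntervals /
TargetRegisterInfo interfaces of the time (not the literal coalescer code):

// When joining virtual interval VirtLI into PhysReg, test every alias of
// PhysReg for overlap instead of consulting precomputed clobber ranges.
for (const unsigned *AS = TRI->getAliasSet(PhysReg); *AS; ++AS)
  if (LIS->hasInterval(*AS) && VirtLI.overlaps(LIS->getInterval(*AS)))
    return false; // interference found: give up on the join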
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113084 91177308-0d34-0410-b5e6-96231b3b80d8
any more. I plan to reimplement alloca promotion using SSAUpdater later.
It looks like Bill's URoR logic really always needs domtree, so the pass
now always asks for domtree info.
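In pass terms that is just the standard analysis requirement, roughly:

// The pass now unconditionally requires dominator tree information.
virtual void getAnalysisUsage(AnalysisUsage &AU) const {
  AU.addRequired<DominatorTree>();
  // ... other required / preserved analyses ...
}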
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112597 91177308-0d34-0410-b5e6-96231b3b80d8
general idea here is to have a group of x86 target specific nodes which are
going to be selected during lowering and then directly matched in isel.
The commit includes the addition of those specific nodes and a *bunch* of
patterns, and incrementally we're going to switch between them and what we
have right now. Both the patterns and target specific nodes can change as
we move forward with this work.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111691 91177308-0d34-0410-b5e6-96231b3b80d8
extending vector load should extend each element in the same way as the
corresponding scalar extending load.
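As a standalone illustration of the required per-element semantics (plain
C++, not SelectionDAG code):

#include <cstdint>
// A sign-extending load of <4 x i8> to <4 x i32> must extend each lane
// exactly as the corresponding scalar i8 -> i32 extending load would.
void sextload_v4i8_to_v4i32(const int8_t *Mem, int32_t Out[4]) {
  for (int i = 0; i != 4; ++i)
    Out[i] = static_cast<int32_t>(Mem[i]); // per-lane scalar sign extend
}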
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111577 91177308-0d34-0410-b5e6-96231b3b80d8
base registers were required. This will allow slightly better packing
of the locals when alignment padding is necessary after callee-saved registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111508 91177308-0d34-0410-b5e6-96231b3b80d8
mapping. Have the local block track its alignment requirement, and then
apply that when the block itself is allocated. Previously, PEI could
adjust offsets so that locals ended up at different positions relative to
one another than the block allocation assumed, which defeats the point of
doing the allocation this way (the round-up itself is sketched below).
Continuing rdar://8277890
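The alignment step is the usual power-of-two round-up; a sketch with
illustrative names:

// Illustrative only: round the block's start offset up to its alignment.
// Assumes Align is a power of two.
int64_t alignOffset(int64_t Offset, unsigned Align) {
  return (Offset + Align - 1) & ~int64_t(Align - 1);
}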
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111197 91177308-0d34-0410-b5e6-96231b3b80d8
experimental pass that allocates locals relative to one another before
register allocation and then assigns them to actual stack slots as a block
later in PEI. This will eventually allow targets with limited index offset
range to allocate additional base registers (not just FP and SP) to
more efficiently reference locals, as well as handle situations where
locals cannot be referenced via SP or FP at all (dynamic stack realignment
together with variable-sized objects, for example). It's currently
incomplete and almost certainly buggy. Work in progress.
Disabled by default and gated via the -enable-local-stack-alloc command-line
option.
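For example, to try it on a single file (foo.ll here is just a placeholder
input):

llc -enable-local-stack-alloc foo.ll -o foo.s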
rdar://8277890
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111059 91177308-0d34-0410-b5e6-96231b3b80d8
When splitting a live range, the new registers have fewer uses and the
permissible register class may be less constrained. Recompute the register class
constraint from the uses of new registers created for a split. This may let them
be allocated from a larger set, possibly avoiding a spill.
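A rough sketch of the recomputation, with the starting class and the
per-use constraint query left as labeled assumptions:

// Hypothetical sketch: intersect the register-class constraints imposed
// by every use of the new register, starting from the broadest class.
const TargetRegisterClass *RC = LargestLegalClass; // assumed starting point
for (MachineRegisterInfo::use_iterator I = MRI.use_begin(NewReg),
       E = MRI.use_end(); I != E; ++I) {
  if (const TargetRegisterClass *OpRC = constraintForUse(I)) // hypothetical
    RC = getCommonSubClass(RC, OpRC); // intersect the constraints
}
MRI.setRegClass(NewReg, RC);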
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110703 91177308-0d34-0410-b5e6-96231b3b80d8