- Update the maximum stack alignment when stack arguments are prepared before a
call (sketched below).
- Test cases are extended to show this is not a Win32-specific issue but a
generic one.
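A minimal sketch of the idea, assuming the MachineFrameInfo::ensureMaxAlignment()
API of this era; the helper itself is hypothetical:

#include "llvm/CodeGen/MachineFrameInfo.h"
#include "llvm/CodeGen/MachineFunction.h"

using namespace llvm;

// Hypothetical helper, called while lowering a call for each outgoing
// argument written to the stack. Recording the argument's alignment keeps
// the frame's maximum alignment in sync, so the prologue aligns the stack
// even when no local variable requires that alignment.
static void noteOutgoingArgAlignment(MachineFunction &MF, unsigned ArgAlign) {
  // ensureMaxAlignment() only ever raises the recorded maximum.
  MF.getFrameInfo()->ensureMaxAlignment(ArgAlign);
}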
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164946 91177308-0d34-0410-b5e6-96231b3b80d8
Add LIS::pruneValue() and extendToIndices(). These two functions are
used by the register coalescer when merging two live ranges requires
more than a trivial value mapping as supported by LiveInterval::join().
The pruneValue() function can remove the part of a value number that is
going to conflict in join(). Afterwards, extendToIndices() can restore the
live range, using any new dominating value numbers and updating the SSA
form.
Use this complex value mapping to support merging a register into a
vector lane that has a conflicting value, provided the clobbered lane is
undef.
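A sketch of the intended call pattern (signatures approximate for this
revision of the API):

#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/LiveIntervalAnalysis.h"

using namespace llvm;

static void mergeWithConflict(LiveIntervals &LIS, LiveInterval &LI,
                              SlotIndex ConflictPoint) {
  SmallVector<SlotIndex, 8> EndPoints;
  // Remove the part of the value that is live at ConflictPoint, recording
  // where the live range used to end.
  LIS.pruneValue(&LI, ConflictPoint, &EndPoints);
  // ... perform the join that would otherwise conflict ...
  // Re-extend the live range to the old end points. This finds the new
  // dominating value numbers and inserts PHI-defs as needed to restore
  // SSA form.
  LIS.extendToIndices(&LI, EndPoints);
}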
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164074 91177308-0d34-0410-b5e6-96231b3b80d8
A value that is live in to a basic block should be returned by valueIn()
in LiveRangeQuery(getMBBStartIdx(MBB)), unless it is a PHI-def which
should be returned by valueDefined() instead.
Current code isn't using this functionality. Future code will.
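A sketch of the distinction, using the LiveRangeQuery interface described
above (details approximate):

#include "llvm/CodeGen/LiveInterval.h"
#include "llvm/CodeGen/LiveIntervalAnalysis.h"

using namespace llvm;

static void classifyAtBlockEntry(LiveIntervals &LIS, const LiveInterval &LI,
                                 const MachineBasicBlock *MBB) {
  LiveRangeQuery LRQ(LI, LIS.getMBBStartIdx(MBB));
  if (VNInfo *VNI = LRQ.valueIn()) {
    // Live-in: the value is defined in a predecessor and flows into MBB.
    (void)VNI;
  } else if (VNInfo *VNI = LRQ.valueDefined()) {
    // PHI-def: the value is born at the block entry, not live in.
    (void)VNI;
  }
}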
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163990 91177308-0d34-0410-b5e6-96231b3b80d8
If a PHI value happens to be live out from the layout predecessor of its
def block, the def slot index will be in the middle of the segment:
%vreg11 = [192r,240B:0)[352r,416B:2)[416B,496r:1) 0@192r 1@480B-phi 2@352r
A LiveRangeQuery for 480 should return NULL from valueIn() since the
PHI value is defined at the block entry, not live in to the block.
No test case; future code depends on this functionality.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163971 91177308-0d34-0410-b5e6-96231b3b80d8
- BlockAddress has no support for the BA + offset form, and there is no way to
propagate that offset into the machine operand;
- Add BA + offset support and a new interface, 'getTargetBlockAddress', to
simplify forming target block addresses;
- All targets are modified to use the new interface, and the X86 backend is
enhanced to support BA + offset addressing (see the sketch below).
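A sketch of the new interface in a hypothetical lowering routine:

#include "llvm/CodeGen/SelectionDAG.h"

using namespace llvm;

static SDValue lowerBlockAddress(SelectionDAG &DAG, const BlockAddress *BA,
                                 int64_t Offset, EVT PtrVT) {
  // getTargetBlockAddress() carries BA + Offset through to the machine
  // operand in a single node instead of dropping the offset.
  return DAG.getTargetBlockAddress(BA, PtrVT, Offset);
}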
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163743 91177308-0d34-0410-b5e6-96231b3b80d8
The search for liveness is clipped to a fixed number of instructions around the target MachineInstr to avoid degenerating into an O(N^2) algorithm. It uses various clues about how nearby instructions (both before and after the given MachineInstr) use that register to determine its state at the MachineInstr.
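A sketch of such a clipped query, assuming this refers to
MachineBasicBlock::computeRegisterLiveness():

#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/Target/TargetRegisterInfo.h"

using namespace llvm;

static bool isRegDeadAt(MachineBasicBlock &MBB, const TargetRegisterInfo *TRI,
                        unsigned Reg, MachineInstr *MI) {
  // Inspect at most 10 instructions on either side of MI; if no clue is
  // found inside the window, the query answers conservatively rather than
  // scanning the whole block.
  return MBB.computeRegisterLiveness(TRI, Reg, MI, 10) ==
         MachineBasicBlock::LQR_Dead;
}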
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163695 91177308-0d34-0410-b5e6-96231b3b80d8
The Hexagon target decided to use a lot of functionality from the
target-independent scheduler. That's fine, and other targets should be
able to do the same. This reorg and API update makes that easy.
For the record, ScheduleDAGMI was not meant to be subclassed. Instead,
new scheduling algorithms should be able to implement
MachineSchedStrategy and be done. But if need be, it's nice to be
able to extend ScheduleDAGMI, so I also made that easier. The target
scheduler is somewhat more apt to break that way though.
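A minimal sketch of plugging in an algorithm through MachineSchedStrategy
rather than subclassing ScheduleDAGMI (method set approximate for this
revision):

#include "llvm/CodeGen/MachineScheduler.h"
#include <vector>

using namespace llvm;

namespace {
// Trivial strategy: schedule top-down, always picking the most recently
// released available node.
class SimpleSchedStrategy : public MachineSchedStrategy {
  std::vector<SUnit*> Available;
public:
  virtual void initialize(ScheduleDAGMI *DAG) { Available.clear(); }
  virtual SUnit *pickNode(bool &IsTopNode) {
    if (Available.empty())
      return 0;
    IsTopNode = true;
    SUnit *SU = Available.back();
    Available.pop_back();
    return SU;
  }
  virtual void schedNode(SUnit *SU, bool IsTopNode) {}
  virtual void releaseTopNode(SUnit *SU) { Available.push_back(SU); }
  virtual void releaseBottomNode(SUnit *SU) {}
};
} // end anonymous namespace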
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163580 91177308-0d34-0410-b5e6-96231b3b80d8
The RegisterCoalescer understands overlapping live ranges where one
register is defined as a copy of the other. With this change, register
allocators using LiveRegMatrix can do the same, at least for copies
between physical and virtual registers.
When a physreg is defined by a copy from a virtreg, allow those live
ranges to overlap:
%CL<def> = COPY %vreg11:sub_8bit; GR32_ABCD:%vreg11
%vreg13<def,tied1> = SAR32rCL %vreg13<tied0>, %CL<imp-use,kill>
We can assign %vreg11 to %ECX, overlapping the live range of %CL.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163336 91177308-0d34-0410-b5e6-96231b3b80d8
We will soon allow virtual register live ranges to overlap regunit live
ranges when the physreg is defined as a copy of the virtreg:
%EAX = COPY %vreg5
FOO %vreg5
BAR %EAX<kill>
There is no real interference since %vreg5 and %EAX have the same value
where they overlap.
This patch prevents addKillFlags from adding virtreg kill flags to FOO
when the assigned physreg overlaps the virtual register's live
range.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163335 91177308-0d34-0410-b5e6-96231b3b80d8
The MachineOperand::TiedTo field was maintained, but not used.
This patch enables it in isRegTiedToDefOperand() and
isRegTiedToUseOperand(), which are the actual functions used by the
register allocator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163153 91177308-0d34-0410-b5e6-96231b3b80d8
After much agonizing, use a full 4 bits of precious MachineOperand space
to encode this. This uses existing padding, and doesn't grow
MachineOperand beyond its current 32 bytes.
This allows tied defs among the first 15 operands on a normal
instruction, just like the current MCInstrDesc constraint encoding.
Inline assembly needs to be able to tie more than the first 15 operands,
and gets special treatment.
Tied uses can appear beyond 15 operands, as long as they are tied to a
def that's in range.
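A sketch of the encoding; the field and helper below are illustrative,
not the exact MachineOperand members:

// Four bits store "tied operand index + 1", with 0 meaning "not tied".
// Values 1-15 therefore cover ties to the first 15 operands.
struct TiedToEncoding {
  unsigned char TiedTo : 4;
};

// Decode: returns -1 when untied, otherwise the tied operand's index.
static int getTiedIndex(const TiedToEncoding &E) {
  return E.TiedTo == 0 ? -1 : int(E.TiedTo) - 1;
}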
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163151 91177308-0d34-0410-b5e6-96231b3b80d8
Manage tied operands entirely internally to MachineInstr. This makes it
possible to change the representation of tied operands, as I will do
shortly.
The constraint that tied uses and defs must be in the same order was too
restrictive.
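A sketch of the new usage, assuming the tieOperands() entry point from
this series:

#include "llvm/CodeGen/MachineInstr.h"

using namespace llvm;

static void tieDefToUse(MachineInstr *MI, unsigned DefIdx, unsigned UseIdx) {
  // Declare the tie once; MachineInstr records the pairing internally, so
  // queries like isRegTiedToUseOperand() no longer depend on the operands
  // appearing in matching order.
  MI->tieOperands(DefIdx, UseIdx);
}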
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163021 91177308-0d34-0410-b5e6-96231b3b80d8
Ordered memory operations are more constrained than volatile loads and
stores because they must be ordered with respect to all other memory
operations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162861 91177308-0d34-0410-b5e6-96231b3b80d8
This means the same as LoadInst/StoreInst::isUnordered(), and implies
!isVolatile().
Atomic loads and stores are also ordered, and this is the right method
to check whether it is safe to reorder memory operations. Ordered atomics
can't be reordered with respect to normal loads and stores, which is a
stronger constraint than volatile.
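A sketch of the reordering check, assuming this refers to
MachineMemOperand::isUnordered():

#include "llvm/CodeGen/MachineMemOperand.h"

using namespace llvm;

static bool mayReorder(const MachineMemOperand *A,
                       const MachineMemOperand *B) {
  // isUnordered() implies !isVolatile(), so no separate volatile check is
  // needed. Real code would still have to prove the two accesses don't
  // alias.
  return A->isUnordered() && B->isUnordered();
}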
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162859 91177308-0d34-0410-b5e6-96231b3b80d8
The isTied bit is set automatically when a tied use is added and
MCInstrDesc indicates a tied operand. The tie is broken when one of the
tied operands is removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162814 91177308-0d34-0410-b5e6-96231b3b80d8
While in SSA form, a MachineInstr can have pairs of tied defs and uses.
The tied operands are used to represent read-modify-write operands that
must be assigned the same physical register.
Previously, tied operand pairs were computed from fixed MCInstrDesc
fields, or by using black magic on inline assembly instructions.
The isTied flag makes it possible to add tied operands to any
instruction while getting rid of (some of) the inlineasm magic.
Tied operands on normal instructions are needed to represent predicated
instructions in SSA form. An extra <tied,imp-use> operand is
required to represent the output value when the instruction predicate is
false.
Adding a predicate to:
%vreg0<def> = ADD %vreg1, %vreg2
will look like:
%vreg0<tied,def> = ADD %vreg1, %vreg2, pred:3, %vreg7<tied,imp-use>
The virtual register %vreg7 is the value given to %vreg0 when the
predicate is false. It will be assigned the same physreg as %vreg0.
This commit adds the isTied flag and sets it based on MCInstrDesc when
building an instruction. The flag is not used for anything yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162774 91177308-0d34-0410-b5e6-96231b3b80d8
Register operands are manipulated by a lot of target-independent code,
and it is not always possible to preserve target flags. That means it is
not safe to use target flags on register operands.
None of the targets in the tree are using register operand target flags.
External targets should be using immediate operands to annotate
instructions with operand modifiers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162770 91177308-0d34-0410-b5e6-96231b3b80d8
These extra flags are not required to properly order the atomic
load/store instructions. SelectionDAGBuilder chains atomics as if they
were volatile, and SelectionDAG::getAtomic() sets the isVolatile bit on
the memory operands of all atomic operations.
The volatile bit is enough to order atomic loads and stores during and
after SelectionDAG.
This means we set mayLoad on atomic_load, mayStore on atomic_store, and
mayLoad+mayStore on the remaining atomic read-modify-write operations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162733 91177308-0d34-0410-b5e6-96231b3b80d8
The logic for recomputing latency based on a ScheduleDAG edge was
shady. This bypasses the problem by requiring the client to provide
operand indices. This ensures consistent use of the machine model's
API.
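A sketch of the operand-indexed query, illustrated here with
TargetSchedModel::computeOperandLatency(), a later incarnation of this
API (the exact entry point in this revision may differ):

#include "llvm/CodeGen/TargetSchedule.h"

using namespace llvm;

static unsigned edgeLatency(const TargetSchedModel &SchedModel,
                            const MachineInstr *DefMI, unsigned DefOperIdx,
                            const MachineInstr *UseMI, unsigned UseOperIdx) {
  // The client names the def and use operands explicitly instead of having
  // the scheduler reverse-engineer them from a DAG edge, keeping the
  // lookup consistent with the machine model.
  return SchedModel.computeOperandLatency(DefMI, DefOperIdx,
                                          UseMI, UseOperIdx);
}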
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162420 91177308-0d34-0410-b5e6-96231b3b80d8