This more accurately reflects what is actually being stored in the
field.
No functionality change intended.
Author: Matthew Curtis <mcurtis@codeaurora.org>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166215 91177308-0d34-0410-b5e6-96231b3b80d8
The TargetTransform changes are breaking LTO bootstraps of clang. I am
working with Nadav to figure out the problem, but I am reverting them for
now to get our buildbots working.
This reverts svn commits 165665, 165669, 165670, 165786, 165787, and
165997. I have also reverted clang svn 165741.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166168 91177308-0d34-0410-b5e6-96231b3b80d8
This is a more compact, less redundant representation, and it avoids
scanning long lists of aliases for ARM D-registers, for example.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166124 91177308-0d34-0410-b5e6-96231b3b80d8
any scheduling heuristics, nor does it build up any scheduling data
structure that other heuristics use. It essentially linearizes by doing a
DFS walk, but it does handle glue nodes correctly.
IMPORTANT: it probably can't handle all the physical register
dependencies, so it's not suitable for x86. It also doesn't deal with
dbg_value nodes right now, so it is definitely still a WIP.
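For intuition, the linearization is essentially a post-order DFS, so each
node is emitted after its operands. A generic sketch with placeholder
types, not the actual SelectionDAG code:

    // Generic post-order DFS linearization; Node and Operands are
    // illustrative placeholders, not SelectionDAG types.
    #include <unordered_set>
    #include <vector>

    struct Node { std::vector<Node*> Operands; };

    static void linearize(Node *N, std::unordered_set<Node*> &Visited,
                          std::vector<Node*> &Order) {
      if (!Visited.insert(N).second)
        return;                        // already emitted
      for (Node *Op : N->Operands)     // visit operands first...
        linearize(Op, Visited, Order);
      Order.push_back(N);              // ...then the node itself
    }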
rdar://12474515
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166122 91177308-0d34-0410-b5e6-96231b3b80d8
All callers of these functions really want the isPhysRegOrOverlapUsed()
functionality which also checks aliases. For historical reasons, targets
without register aliases were calling isPhysRegUsed() instead.
Change isPhysRegUsed() to also check aliases, and switch all
isPhysRegOrOverlapUsed() callers to isPhysRegUsed().
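Schematically, the alias-checking query amounts to the sketch below;
UsedRegs stands in for MRI's internal bookkeeping, and the helper name is
illustrative, not the verbatim implementation:

    // Sketch only: an alias-aware used-register query.
    #include "llvm/ADT/BitVector.h"
    #include "llvm/MC/MCRegisterInfo.h"
    using namespace llvm;

    static bool isPhysRegUsedSketch(unsigned Reg, const BitVector &UsedRegs,
                                    const MCRegisterInfo *MCRI) {
      // Check Reg itself and every register aliasing it.
      for (MCRegAliasIterator AI(Reg, MCRI, /*IncludeSelf=*/true);
           AI.isValid(); ++AI)
        if (UsedRegs.test(*AI))
          return true;
      return false;
    }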
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166117 91177308-0d34-0410-b5e6-96231b3b80d8
This is just as fast, and it makes it possible to avoid leaking the
UsedPhysRegs BitVector implementation through
MachineRegisterInfo::addPhysRegsUsed().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166083 91177308-0d34-0410-b5e6-96231b3b80d8
This is a medium-term workaround until we have a more robust solution
in the form of a register liveness utility for postRA passes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166001 91177308-0d34-0410-b5e6-96231b3b80d8
Using the cached bit vector in MRI avoids constantly allocating and
recomputing the reserved register bit vector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165983 91177308-0d34-0410-b5e6-96231b3b80d8
Also provide an MRI::getReservedRegs() function to access the frozen
register set, and isReserved() and isAllocatable() methods to test
individual registers.
The various implementations of TRI::getReservedRegs() are quite
complicated, and many passes need to look at the reserved register set.
This patch makes it possible for these passes to use the cached copy in
MRI, avoiding a lot of malloc traffic and repeated calculations.
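For example, a pass can now test registers against the frozen set through
MRI instead of materializing its own copy. A minimal sketch, with an
illustrative scan loop and helper name:

    // Sketch: query the cached reserved/allocatable info through MRI.
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"
    #include "llvm/Target/TargetRegisterInfo.h"
    using namespace llvm;

    static void scanPhysRegs(const MachineFunction &MF,
                             const TargetRegisterInfo *TRI) {
      const MachineRegisterInfo &MRI = MF.getRegInfo();
      for (unsigned Reg = 1, E = TRI->getNumRegs(); Reg != E; ++Reg) {
        if (MRI.isReserved(Reg))
          continue;                 // e.g. the stack pointer
        if (MRI.isAllocatable(Reg)) {
          // Reg is a candidate for allocation or scavenging.
        }
      }
    }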
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165982 91177308-0d34-0410-b5e6-96231b3b80d8
isa<> et al. automatically infer when the cast is an upcast (including a
self-cast), so these are no longer necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165767 91177308-0d34-0410-b5e6-96231b3b80d8
Allows the new machine model to be used for NumMicroOps and OutputLatency.
Allows the HazardRecognizer to be disabled along with itineraries.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165603 91177308-0d34-0410-b5e6-96231b3b80d8
By my measurements, this wasn't contributing anything significant to
postRA heuristics except compile time, and it will be replaced by a more
general heuristic for cross-region dependencies within the scheduler
itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165563 91177308-0d34-0410-b5e6-96231b3b80d8
- Update maximal stack alignment when stack arguments are prepared before a
call.
- Test cases are enhanced to show it's not a Win32-specific issue but a
  generic one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164946 91177308-0d34-0410-b5e6-96231b3b80d8
Add LIS::pruneValue() and extendToIndices(). These two functions are
used by the register coalescer when merging two live ranges requires
more than a trivial value mapping as supported by LiveInterval::join().
The pruneValue() function can remove the part of a value number that is
going to conflict in join(). Afterwards, extendToIndices() can restore
the live range, using any new dominating value numbers and updating the
SSA form.
Use this complex value mapping to support merging a register into a
vector lane that has a conflicting value, but the clobbered lane is
undef.
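Schematically, the coalescer-side pattern looks roughly like this; a
sketch based on the description above, with signatures approximated
rather than quoted from the header:

    // Sketch of the prune-then-extend pattern: LIS is the LiveIntervals
    // analysis, LI the interval being merged, and Kill the index at
    // which the conflicting part of the value must be removed.
    #include "llvm/ADT/SmallVector.h"
    #include "llvm/CodeGen/LiveIntervalAnalysis.h"
    using namespace llvm;

    static void pruneAndRestore(LiveIntervals *LIS, LiveInterval *LI,
                                SlotIndex Kill) {
      SmallVector<SlotIndex, 8> EndPoints;
      // Truncate the conflicting value at Kill, recording where the
      // live range used to end.
      LIS->pruneValue(LI, Kill, &EndPoints);
      // ... join() performs the value mapping here ...
      // Re-extend liveness to the recorded end points; dominating value
      // numbers are recomputed and SSA form is updated.
      LIS->extendToIndices(LI, EndPoints);
    }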
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164074 91177308-0d34-0410-b5e6-96231b3b80d8
A value that is live in to a basic block should be returned by valueIn()
in LiveRangeQuery(getMBBStartIdx(MBB)), unless it is a PHI-def, which
should be returned by valueDefined() instead.
Current code isn't using this functionality. Future code will.
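A small sketch of that contract; apart from the LiveRangeQuery API named
above, the helper and its parameters are placeholders:

    // Illustrative only: distinguishing live-in values from PHI-defs at
    // a block boundary.
    #include "llvm/CodeGen/LiveInterval.h"
    #include "llvm/CodeGen/LiveIntervalAnalysis.h"
    using namespace llvm;

    static void classifyAtEntry(const LiveInterval &LI, LiveIntervals *LIS,
                                const MachineBasicBlock *MBB) {
      LiveRangeQuery LRQ(LI, LIS->getMBBStartIdx(MBB));
      if (VNInfo *VNI = LRQ.valueIn()) {
        (void)VNI;  // live in to MBB: defined in some predecessor
      } else if (VNInfo *VNI = LRQ.valueDefined()) {
        (void)VNI;  // PHI-def: defined at the block entry itself
      }
    }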
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163990 91177308-0d34-0410-b5e6-96231b3b80d8
If a PHI value happens to be live out from the layout predecessor of its
def block, the def slot index will be in the middle of the segment:
%vreg11 = [192r,240B:0)[352r,416B:2)[416B,496r:1) 0@192r 1@480B-phi 2@352r
A LiveRangeQuery for 480 should return NULL from valueIn() since the
PHI value is defined at the block entry, not live in to the block.
No test case; future code depends on this functionality.
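In terms of the dump above, the expectation at the PHI def slot (480B
here) can be phrased as the following sketch; the helper is illustrative,
not actual test code:

    // Illustrative check only: at a PHI-def slot, valueIn() is NULL and
    // valueDefined() returns the PHI value. Idx stands for 480B and is
    // obtained elsewhere.
    #include <cassert>
    #include "llvm/CodeGen/LiveInterval.h"
    using namespace llvm;

    static void checkPHIDefSlot(const LiveInterval &LI, SlotIndex Idx) {
      LiveRangeQuery LRQ(LI, Idx);
      assert(LRQ.valueIn() == 0 && "PHI value is defined here, not live in");
      assert(LRQ.valueDefined() && LRQ.valueDefined()->isPHIDef());
    }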
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163971 91177308-0d34-0410-b5e6-96231b3b80d8