A value that is live-in to a basic block should be returned by valueIn()
in LiveRangeQuery(getMBBStartIdx(MBB)), unless it is a PHI-def, which
should be returned by valueDefined() instead.
Current code isn't using this functionality. Future code will.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163990 91177308-0d34-0410-b5e6-96231b3b80d8
If a PHI value happens to be live out from the layout predecessor of its
def block, the def slot index will be in the middle of the segment:
%vreg11 = [192r,240B:0)[352r,416B:2)[416B,496r:1) 0@192r 1@480B-phi %2@352r
A LiveRangeQuery for 480 should return NULL from valueIn(), since the
PHI value is defined at the block entry rather than live-in to the block.
No test case, future code depends on this functionality.
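A minimal sketch of the distinction these two commits pin down; the helper
function is made up, but valueIn()/valueDefined() are the queries in
question:

    // Classify how a live interval relates to the start of MBB.
    // (Hypothetical helper; LiveRangeQuery is the real class.)
    void classifyAtBlockEntry(const LiveInterval &LI, SlotIndexes &Indexes,
                              MachineBasicBlock &MBB) {
      LiveRangeQuery Q(LI, Indexes.getMBBStartIdx(&MBB));
      if (const VNInfo *VNI = Q.valueIn()) {
        // Ordinary live-in value: it flows in from a predecessor.
        (void)VNI;
      } else if (const VNInfo *VNI = Q.valueDefined()) {
        // PHI-def: the value is born at the block entry, so valueIn()
        // returns NULL even when the register happens to be live out
        // of the layout predecessor.
        (void)VNI;
      }
    }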
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163971 91177308-0d34-0410-b5e6-96231b3b80d8
Port the SSAUpdater-based promotion logic from the old SROA pass to the
new one, and add support for running the new pass in that mode and in
that slot of the pass manager. With this, the new pass can completely
replace the old one within the pipeline.
The strategy for enabling or disabling the SSAUpdater logic is to do it
by making the requirement of the domtree analysis optional. By default,
it is required and we get the standard mem2reg approach. This is usually
the desired strategy when run in stand-alone situations. Within the
CGSCC pass manager, we disable requiring of the domtree analysis and
consequently trigger the fallback to SSAUpdater-based promotion.
In theory this would allow the pass to re-use a domtree if one happened
to be available even when run in a mode that doesn't require it. In
practice, it lets us have a single pass rather than two, which was
simpler for me to wrap my head around.
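A minimal sketch of that strategy for a legacy-PM function pass; the pass
name and the two promote* helpers are hypothetical, but
getAnalysisIfAvailable<> is the real mechanism:

    #include "llvm/Pass.h"
    #include "llvm/Analysis/Dominators.h"
    using namespace llvm;

    bool promoteWithMem2Reg(Function &F, DominatorTree &DT); // hypothetical
    bool promoteWithSSAUpdater(Function &F);                 // hypothetical

    struct PromotePass : public FunctionPass {   // hypothetical pass
      static char ID;
      bool RequiresDomTree; // false when scheduled in the CGSCC manager

      explicit PromotePass(bool RequiresDomTree = true)
          : FunctionPass(ID), RequiresDomTree(RequiresDomTree) {}

      virtual void getAnalysisUsage(AnalysisUsage &AU) const {
        if (RequiresDomTree)
          AU.addRequired<DominatorTree>(); // the standard mem2reg path
      }

      virtual bool runOnFunction(Function &F) {
        // Re-use a domtree if one happens to be available; otherwise
        // fall back to SSAUpdater-based promotion.
        if (DominatorTree *DT = getAnalysisIfAvailable<DominatorTree>())
          return promoteWithMem2Reg(F, *DT);
        return promoteWithSSAUpdater(F);
      }
    };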
There is a hidden flag to force the use of the SSAUpdater code path for
the purpose of testing. The primary testing strategy is just to run the
existing tests through that path. One notable difference is that it has
custom code to handle lifetime markers, and one of the tests has been
enhanced to exercise that code.
This has survived a bootstrap and the test suite without serious
correctness issues, however my run of the test suite produced *very*
alarming performance numbers. I don't entirely understand or trust them
though, so more investigation is on-going.
To aid my understanding of the performance impact of the new SROA now
that it runs throughout the optimization pipeline, I'm enabling it by
default in this commit, and will disable it again once the LNT bots have
picked up one iteration with it. I want to get those bots (which are
much more stable) to evaluate the impact of the change before I jump to
any conclusions.
NOTE: Several Clang tests will fail because they run -O3 and check the
result's order of output. They'll go back to passing once I disable it
again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163965 91177308-0d34-0410-b5e6-96231b3b80d8
- The current_pos function is supposed to return all the written bytes, not the
current position of the underlying stream.
- This caused tell() to be broken whenever the underlying stream had buffered
content.
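A sketch of the contract this fixes, assuming a simple wrapper stream (the
class is made up; write_impl/current_pos are the real raw_ostream hooks):

    #include "llvm/Support/raw_ostream.h"

    class counted_ostream : public llvm::raw_ostream {
      llvm::raw_ostream &Underlying;
      uint64_t BytesWritten; // every byte flushed through write_impl

      virtual void write_impl(const char *Ptr, size_t Size) {
        Underlying.write(Ptr, Size);
        BytesWritten += Size;
      }

      // Correct: report all bytes written to *this* stream. tell() is
      // current_pos() + GetNumBytesInBuffer(), so returning the
      // underlying stream's position instead breaks tell() as soon as
      // the underlying stream holds buffered content.
      virtual uint64_t current_pos() const { return BytesWritten; }

    public:
      explicit counted_ostream(llvm::raw_ostream &OS)
          : Underlying(OS), BytesWritten(0) {}
      ~counted_ostream() { flush(); }
    };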
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163948 91177308-0d34-0410-b5e6-96231b3b80d8
* wrap code blocks in \code ... \endcode;
* refer to parameter names in paragraphs correctly (\arg is not what most
people want -- it starts a new paragraph);
* use \param instead of \arg to document parameters in order to be consistent
with the rest of the codebase.
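For example (the function itself is made up; only the markup matters):

    /// Copies \p Count bytes from \p Src to \p Dst.
    ///
    /// Typical usage:
    /// \code
    ///   copyBytes(Dst, Src, N);
    /// \endcode
    ///
    /// \param Dst destination buffer.
    /// \param Src source buffer.
    /// \param Count number of bytes to copy.
    void copyBytes(char *Dst, const char *Src, unsigned Count);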
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163902 91177308-0d34-0410-b5e6-96231b3b80d8
This is essentially a ground up re-think of the SROA pass in LLVM. It
was initially inspired by a few problems with the existing pass:
- It is subject to the bane of my existence in optimizations: arbitrary
thresholds.
- It is overly conservative about which constructs can be split and
promoted.
- The vector value replacement aspect is separated from the splitting
logic, missing many opportunities where splitting and vector value
formation can work together.
- The splitting is entirely based around the underlying type of the
alloca, despite this type often having little to do with the reality
of how that memory is used. This is especially prevalent with unions
and base classes where we tail-pack derived members; see the made-up
example after this list.
- When splitting fails (often due to the thresholds), the vector value
replacement (again because it is separate) can kick in for
preposterous cases where we simply should have split the value. This
results in forming i1024 and i2048 integer "bit vectors" that
tremendously slow down subsequent IR optimizations (due to large
APInts) and impede the backend's lowering.
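As a made-up C++ illustration of the type-vs-usage problem above: the
alloca for 'Storage' has union type, but every access goes through one
member or the other, so splitting by the declared type alone misses what
actually happens.

    union U {
      double D;
      struct { int A; int B; } Pair;
    };

    int f(bool UseDouble) {
      U Storage;                 // one alloca covering both layouts
      if (UseDouble) {
        Storage.D = 3.14;        // 8-byte FP store at offset 0
        return (int)Storage.D;
      }
      Storage.Pair.A = 1;        // 4-byte int stores at offsets 0 and 4
      Storage.Pair.B = 2;
      return Storage.Pair.A + Storage.Pair.B;
    }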
The new design takes an approach that fundamentally is not susceptible
to many of these problems. It is the result of a discussion between
myself and Duncan Sands over IRC about how to preemptively avoid these
types of problems and how to do SROA in a more principled way. Since
then, it has evolved and grown, but this remains an important aspect: it
fixes real world problems with the SROA process today.
First, the transform of SROA actually has little to do with replacement.
It has more to do with splitting. The goal is to take an aggregate
alloca and form a composition of scalar allocas which can replace it and
will be most suitable to the eventual replacement by scalar SSA values.
The actual replacement is performed by mem2reg (and in the future
SSAUpdater).
The splitting is divided into four phases. The first phase is an
analysis of the uses of the alloca. This phase recursively walks uses,
building up a dense data structure representing the ranges of the
alloca's memory actually used and checking for uses which inhibit any
aspects of the transform such as the escape of a pointer.
Once we have a mapping of the ranges of the alloca used by individual
operations, we compute a partitioning of the used ranges. Some uses are
inherently splittable (such as memcpy and memset), while scalar uses are
not splittable. The goal is to build a partitioning that has the minimum
number of splits while placing each unsplittable use in its own
partition. Overlapping unsplittable uses belong to the same partition.
This is the target split of the aggregate alloca, and it maximizes the
number of scalar accesses which become accesses to their own alloca and
candidates for promotion.
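A minimal sketch of that partitioning rule, using made-up data structures
rather than the pass's actual ones:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct UseRange {
      uint64_t Begin, End; // byte offsets into the alloca, [Begin, End)
      bool Splittable;     // memcpy/memset: true; scalar access: false
    };

    // Merge overlapping unsplittable ranges into maximal partitions.
    std::vector<UseRange> partition(std::vector<UseRange> Uses) {
      std::sort(Uses.begin(), Uses.end(),
                [](const UseRange &L, const UseRange &R) {
                  return L.Begin < R.Begin;
                });
      std::vector<UseRange> Parts;
      for (const UseRange &U : Uses) {
        if (U.Splittable)
          continue; // splittable uses are carved to fit the partitions
        if (!Parts.empty() && U.Begin < Parts.back().End)
          Parts.back().End = std::max(Parts.back().End, U.End); // merge
        else
          Parts.push_back(U);
      }
      return Parts; // each partition becomes its own new alloca
    }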
Third, we re-walk the uses of the alloca and assign each specific memory
access to all the partitions touched so that we have dense use-lists for
each partition.
Finally, we build a new, smaller alloca for each partition and rewrite
each use of that partition to use the new alloca. During this phase the
pass will also work very hard to transform uses of an alloca into a form
suitable for promotion, including forming vector operations, speculating
loads through PHI nodes and selects, etc.
After splitting is complete, each newly refined alloca that is
a candidate for promotion to a scalar SSA value is run through mem2reg.
There are lots of reasonably detailed comments in the source code about
the design and algorithms, and I'm going to be trying to improve them in
subsequent commits to ensure this is well documented, as the new pass is
in many ways more complex than the old one.
Some of this is still a WIP, but the current state is reasonably stable.
It has passed bootstrap, the nightly test suite, and Duncan has run it
successfully through the ACATS and DragonEgg test suites. That said, it
remains behind a default-off flag until the last few pieces are in
place, and full testing can be done.
Specific areas I'm looking at next:
- Improved comments and some code cleanup from reviews.
- SSAUpdater and enabling this pass inside the CGSCC pass manager.
- Some data structure tuning and compile-time measurements.
- More aggressive FCA splitting and vector formation.
Many thanks to Duncan Sands for the thorough final review, as well as
Benjamin Kramer for lots of review during the process of writing this
pass, and Daniel Berlin for reviewing the data structures and algorithms
and general theory of the pass. Also, several other people on IRC, over
lunch tables, etc for lots of feedback and advice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163883 91177308-0d34-0410-b5e6-96231b3b80d8
This is mostly documentation for the new machine model. It is designed
to be flexible, easy to incrementally refine for a subtarget, and
provide all the information that MachineScheduler will need.
If all goes well, I will follow up with an example of the new model in
use for ARM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163877 91177308-0d34-0410-b5e6-96231b3b80d8
.set a, b - c + CONSTANT
d = b - c + CONSTANT
Both 'a' and 'd' should be marked as absolute symbols (N_ABS).
rdar://12219394
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163853 91177308-0d34-0410-b5e6-96231b3b80d8
* wrap code blocks in \code ... \endcode;
* refer to parameter names in paragraphs correctly (\arg is not what most
people want -- it starts a new paragraph).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163790 91177308-0d34-0410-b5e6-96231b3b80d8
Add some support for dealing with an object pointer on arguments;
this now supports adding the object pointer attribute to the
subprogram as it should.
Part of rdar://9797999
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163754 91177308-0d34-0410-b5e6-96231b3b80d8
- BlockAddress has no support for the BA + offset form, and there is no way
to propagate that offset into a machine operand;
- Add BA + offset support and a new interface 'getTargetBlockAddress' to
simplify target block address formation;
- All targets are modified to use the new interface, and the X86 backend is
enhanced to support BA + offset addressing.
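A hedged sketch of a target using the new interface ('Foo' and its
wrapper node are stand-ins for a real backend):

    SDValue FooTargetLowering::LowerBlockAddress(SDValue Op,
                                                 SelectionDAG &DAG) const {
      const BlockAddressSDNode *Node = cast<BlockAddressSDNode>(Op);
      // getTargetBlockAddress carries the offset along, so it is not
      // dropped when the node becomes a machine operand.
      SDValue TBA = DAG.getTargetBlockAddress(Node->getBlockAddress(),
                                              MVT::i32, Node->getOffset());
      return DAG.getNode(FooISD::Wrapper, Op.getDebugLoc(), MVT::i32, TBA);
    }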
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163743 91177308-0d34-0410-b5e6-96231b3b80d8
The search for liveness is clipped to a specific number of instructions
around the target MachineInstr in order to avoid degenerating into an
O(N^2) algorithm. It uses various clues about how the instructions around
a given MachineInstr (both before and after it) use that register to
determine the register's state at that MachineInstr.
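A simplified sketch of the clipped search; the names and result states
are illustrative, not the actual API:

    enum Liveness { Live, Dead, Unknown };

    Liveness checkLiveness(MachineBasicBlock::iterator MI,
                           MachineBasicBlock &MBB, unsigned Reg,
                           const TargetRegisterInfo *TRI,
                           unsigned Neighborhood = 10) {
      // Scan backwards at most Neighborhood instructions for a clue.
      MachineBasicBlock::iterator I = MI;
      for (unsigned N = 0; I != MBB.begin() && N != Neighborhood; ++N) {
        --I;
        if (I->definesRegister(Reg, TRI))
          return Live;  // defined just above: live at MI
        if (I->killsRegister(Reg, TRI))
          return Dead;  // killed just above: dead at MI
      }
      return Unknown;   // clue budget exhausted: stay conservative
    }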
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163695 91177308-0d34-0410-b5e6-96231b3b80d8
Sub-register lane masks are bitmasks that can be used to determine if
two sub-registers of a virtual register will overlap. For example, ARM's
ssub0 and ssub1 sub-register indices don't overlap each other, but both
overlap dsub0 and qsub0.
The lane masks will be accurate on most targets, but on targets that use
sub-register indexes in an irregular way, the masks may conservatively
report that two sub-register indices overlap when the eventually
allocated physregs don't.
Irregular register banks also mean that the bits in a lane mask can't be
mapped onto register units, but the concept is similar.
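The overlap test the masks enable is a single AND; a minimal sketch
(getSubRegIndexLaneMask is the accessor, the wrapper is illustrative):

    bool subRegsOverlap(const TargetRegisterInfo *TRI,
                        unsigned SubIdxA, unsigned SubIdxB) {
      // Two sub-register indices can overlap iff their lane masks
      // share at least one bit.
      return (TRI->getSubRegIndexLaneMask(SubIdxA) &
              TRI->getSubRegIndexLaneMask(SubIdxB)) != 0;
    }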
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163630 91177308-0d34-0410-b5e6-96231b3b80d8
Apparently, NumSubRegIndices was completely unused before. Adjust it by
one to include the null subreg index, just like getNumRegs() includes
the null register.
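With the null index counted, loops over sub-register indices follow the
same shape as loops over registers (a small illustrative loop):

    // Index 0 is the null sub-register index, so iteration starts at 1,
    // mirroring how register loops skip the null register.
    for (unsigned Idx = 1, E = TRI->getNumSubRegIndices(); Idx != E; ++Idx)
      visitSubRegIndex(Idx); // hypothetical visitor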
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163628 91177308-0d34-0410-b5e6-96231b3b80d8
The Hexagon target decided to use a lot of functionality from the
target-independent scheduler. That's fine, and other targets should be
able to do the same. This reorg and API update makes that easy.
For the record, ScheduleDAGMI was not meant to be subclassed. Instead,
new scheduling algorithms should be able to implement
MachineSchedStrategy and be done. But if need be, it's nice to be
able to extend ScheduleDAGMI, so I also made that easier. The target
scheduler is somewhat more apt to break that way though.
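A hedged sketch of the intended extension point; the hook names follow
MachineSchedStrategy, though exact signatures may differ:

    class TrivialSchedStrategy : public MachineSchedStrategy {
      std::vector<SUnit*> Ready;
    public:
      virtual void initialize(ScheduleDAGMI *DAG) { Ready.clear(); }
      virtual SUnit *pickNode(bool &IsTopNode) {
        if (Ready.empty()) return 0;
        IsTopNode = true;            // always schedule top-down
        SUnit *SU = Ready.back();
        Ready.pop_back();
        return SU;
      }
      virtual void schedNode(SUnit *SU, bool IsTopNode) {}
      virtual void releaseTopNode(SUnit *SU) { Ready.push_back(SU); }
      virtual void releaseBottomNode(SUnit *SU) {}
    };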
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163580 91177308-0d34-0410-b5e6-96231b3b80d8
For some reason .lcomm uses byte alignment while .comm uses log2
alignment, so we can't use the same setting for both. Fix this by
reintroducing the LCOMM enum.
I verified this against mingw's gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163420 91177308-0d34-0410-b5e6-96231b3b80d8
- Darwin lied about not supporting .lcomm and turned it into zerofill in the
asm parser. Push the zerofill-conversion down into macho-specific code.
- This makes the tri-state LCOMMType enum superfluous; there are no targets
without .lcomm.
- Do proper error reporting when trying to use .lcomm with alignment on a target
that doesn't support it.
- .comm and .lcomm alignment was parsed in bytes on COFF, should be power of 2.
- Fixes PR13755 (.lcomm crashes on ELF).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163395 91177308-0d34-0410-b5e6-96231b3b80d8
- This patch is inspired by the failure of the following code snippet,
which is used to convert enumerable values into encoding bits to
improve the readability of td files:

    class S<int s> {
      bits<2> V = !if(!eq(s, 8),  {0, 0},
                  !if(!eq(s, 16), {0, 1},
                  !if(!eq(s, 32), {1, 0},
                  !if(!eq(s, 64), {1, 1}, {?, ?}))));
    }
PR8330 was later found to report a closely related, though not
identical, issue with bit/bits values.
- Instead of resolving bit/bits values separately through
resolveBitReference(), this patch adds getBit() for all Inits and
resolves a bit value by resolving the enclosing value and then getting
the specified bit. This unifies the resolving of bits with that of
other values and removes the redundant bit-only resolving logic. In
addition, BitsInit::resolveReferences() is optimized to take advantage
of this organization by resolving VarBitInit's variable reference first
and then getting bits from it.
- The type inference in the '!if' operator is revised to support possible
combinations of int and bits/bit in the MHS and RHS.
- There may be illegal assignments from an integer value to a bit (say,
assigning 2 to a bit), but we only check this during instantiation in
some cases, e.g.

    bit V = !if(!eq(x, 17), 0, 2);

A verbose diagnostic message is now generated when an invalid value is
resolved, to help locate the error.
- PR8330 is fixed as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163360 91177308-0d34-0410-b5e6-96231b3b80d8
The RegisterCoalescer understands overlapping live ranges where one
register is defined as a copy of the other. With this change, register
allocators using LiveRegMatrix can do the same, at least for copies
between physical and virtual registers.
When a physreg is defined by a copy from a virtreg, allow those live
ranges to overlap:
%CL<def> = COPY %vreg11:sub_8bit; GR32_ABCD:%vreg11
%vreg13<def,tied1> = SAR32rCL %vreg13<tied0>, %CL<imp-use,kill>
We can assign %vreg11 to %ECX, overlapping the live range of %CL.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163336 91177308-0d34-0410-b5e6-96231b3b80d8
We will soon allow virtual register live ranges to overlap regunit live
ranges when the physreg is defined as a copy of the virtreg:
%EAX = COPY %vreg5
FOO %vreg5
BAR %EAX<kill>
There is no real interference since %vreg5 and %EAX have the same value
where they overlap.
This patch prevents addKillFlags from adding virtreg kill flags to FOO
where the assigned physreg is overlapping the virtual register live
range.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163335 91177308-0d34-0410-b5e6-96231b3b80d8
This Operand type takes a default argument and is initialized to that
value if it does not appear in a pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163315 91177308-0d34-0410-b5e6-96231b3b80d8
Make sure to return a pointer into the target memory, not the local memory.
Often they are the same, but we can't assume that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163217 91177308-0d34-0410-b5e6-96231b3b80d8
pointers-to-strong-pointers may be in play. These can lead to retains and
releases happening in unstructured ways, foiling the optimizer. This fixes
rdar://12150909.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163180 91177308-0d34-0410-b5e6-96231b3b80d8
implementation does not co-exist well with how the sideeffect and alignstack
attributes are handled. This reverts r161641.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163174 91177308-0d34-0410-b5e6-96231b3b80d8
The MachineOperand::TiedTo field was maintained, but not used.
This patch enables it in isRegTiedToDefOperand() and
isRegTiedToUseOperand(), which are the functions actually used by the
register allocator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163153 91177308-0d34-0410-b5e6-96231b3b80d8
After much agonizing, use a full 4 bits of precious MachineOperand space
to encode this. This uses existing padding, and doesn't grow
MachineOperand beyond its current 32 bytes.
This allows tied defs among the first 15 operands on a normal
instruction, just like the current MCInstrDesc constraint encoding.
Inline assembly needs to be able to tie more than the first 15 operands,
and gets special treatment.
Tied uses can appear beyond 15 operands, as long as they are tied to a
def that's in range.
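An illustrative sketch of the encoding idea (not MachineOperand's real
layout): a 4-bit field in existing padding, where 0 means untied and
values 1-15 tie to operands 0-14:

    struct OperandBits {          // made-up stand-in for the padding
      unsigned SubRegIdx  : 12;
      unsigned TiedTo     : 4;    // 0 = not tied; else tied to TiedTo-1
      unsigned OtherFlags : 16;   // the rest of the existing flag space
    };
    // The whole thing still fits the padding: no growth past 32 bytes.

Inline asm, which may tie operands beyond index 14, needs the special
treatment mentioned above.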
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163151 91177308-0d34-0410-b5e6-96231b3b80d8
- CodeGenPrepare pass for identifying div/rem ops
- Backend specifies the type mapping using addBypassSlowDivType
- Enabled only for Intel Atom at O2 (bypassing 32-bit divides with 8-bit ones)
- Replace IDIV with instructions which test its value and use DIVB if the value
is positive and less than 256.
- When both the quotient and the remainder of a divide are used, a DIV
and a REM instruction will be present in the IR. In the non-Atom case
they are both lowered to IDIVs, and CSE removes the redundant IDIV
instruction, reusing the quotient and remainder from the first IDIV.
With this optimization, however, CSE cannot eliminate the redundant IDIV
instructions because they end up in different basic blocks. This is
overcome by calculating both the quotient (DIV) and remainder (REM) in
each basic block inserted by the optimization and reusing the result
values when a subsequent DIV or REM instruction uses the same operands
(see the source-level sketch after this list).
- Test cases check for the presence of the optimization when calculating
either the quotient, the remainder, or both.
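A source-level sketch of the bypass (the real pass rewrites IR, and the
type mapping comes from addBypassSlowDivType):

    unsigned divide(unsigned A, unsigned B) {
      // If both operands fit in 8 bits, a cheap 8-bit divide (DIVB on
      // x86) replaces the slow 32-bit IDIV.
      if (((A | B) & ~0xFFu) == 0)
        return (unsigned char)A / (unsigned char)B;
      return A / B; // slow path: full-width divide
    }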
Patch by Tyler Nowicki!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163150 91177308-0d34-0410-b5e6-96231b3b80d8
Rationale: For each preprocessor macro, either the definedness is what's
meaningful, or the value is what's meaningful, or both. If definedness is
meaningful, we should use #ifdef. If the value is meaningful, we should
use #if. Using #if and #ifdef interchangeably for the same macro seems
ugly to me, even if undefined macros are zero if used.
This also has the benefit that including an LLVM header doesn't prevent
you from compiling with -Wundef -Werror.
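A small example of the rule (macro names are made up):

    #ifdef HAVE_FEATURE_X        // definedness is the signal
    void useFeatureX();
    #endif

    #if ENABLE_THREADS           // the 0/1 value is the signal; under
    void spawnWorkers();         // -Wundef an undefined macro here warns
    #endif                       // instead of silently reading as zero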
Patch by John Garvin!
<rdar://problem/12189979>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163148 91177308-0d34-0410-b5e6-96231b3b80d8
by instruction address from DWARF.
Add --inlining flag to llvm-dwarfdump to demonstrate and test this functionality,
so that "llvm-dwarfdump --inlining --address=0x..." now works much like
"addr2line -i 0x...", provided that the binary has debug info
(Clang's -gline-tables-only *is* enough).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163128 91177308-0d34-0410-b5e6-96231b3b80d8
the NumMCOperands argument to the GetMCInstOperandNum() function, which is
set to the number of MCOperands this asm operand mapped to.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163124 91177308-0d34-0410-b5e6-96231b3b80d8
MatchInstructionImpl() function.
These values are used by the ConvertToMCInst() function to index into the
ConversionTable. The values are also needed to call the GetMCInstOperandNum()
function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163101 91177308-0d34-0410-b5e6-96231b3b80d8