function is successfully handled by fast-isel. That's because function
arguments are *always* handled by SDISel. Introduce FastLowerArguments to
allow each target to provide a hook to handle formal argument lowering.
As a proof-of-concept, add ARMFastIsel::FastLowerArguments to handle
functions with 4 or fewer scalar integer (i8, i16, or i32) arguments. It
completely eliminates the need for SDISel for trivial functions.
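A sketch of the shape such a hook takes; the details below are assumed for
illustration rather than taken from the committed ARM code:

  // Illustrative only: claim the arguments we can handle; returning false
  // falls back to the usual SDISel argument lowering.
  bool ARMFastISel::FastLowerArguments() {
    const Function *F = FuncInfo.Fn;
    if (F->isVarArg() || F->arg_size() > 4)
      return false;
    for (const Argument &Arg : F->args()) {
      Type *Ty = Arg.getType();
      if (!Ty->isIntegerTy(8) && !Ty->isIntegerTy(16) && !Ty->isIntegerTy(32))
        return false;               // only small scalar integers are handled
    }
    // ... copy R0-R3 into vregs and record them in FuncInfo.ValueMap ...
    return true;
  }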
rdar://13163905
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174855 91177308-0d34-0410-b5e6-96231b3b80d8
support for updating SlotIndexes to MachineBasicBlock::SplitCriticalEdge(). This
calls renumberIndexes() every time; it should be improved to only renumber
locally.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174851 91177308-0d34-0410-b5e6-96231b3b80d8
present, it currently verifies them with the MachineVerifier, and this passed
all of the test cases in 'make check' (when accounting for existing verifier
errors). There were some assertion failures in the two-address pass, but they
also happened on code without phis and look like they are caused by different
kill flags from LiveIntervals.
The only part that doesn't work is the critical edge splitting heuristic,
because there isn't currently an efficient way to update LiveIntervals after
splitting an edge. I'll probably start by implementing the slow fallback and
test that it works before tackling the fast path for single-block ranges. The
existing code that updates LiveVariables is fairly slow as it is.
There isn't a command-line option for enabling this; instead, just edit
PHIElimination.cpp to require LiveIntervals.
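The edit in question is roughly of this flavor (a hypothetical two-line
change, not a committed option):

  // In PHIElimination::getAnalysisUsage (illustrative):
  AU.addRequired<LiveIntervals>();
  AU.addPreserved<LiveIntervals>();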
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174831 91177308-0d34-0410-b5e6-96231b3b80d8
This uses a liveness algorithm that does not depend on data from the
LiveVariables analysis; it is the first step towards removing
LiveVariables completely.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174774 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts r171041. This was a nice idea that didn't work out well.
Clang warnings need to be associated with warning groups so that they can
be selectively disabled, promoted to errors, etc. This simplistic patch didn't
allow for that. Enhancing it to provide some way for the backend to specify
a front-end warning type seems like overkill for the few uses of this, at
least for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174748 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, even when a pre-increment load or store was generated,
we often needed to keep a copy of the original base register for use
with other offsets. If all of these offsets are constants (including
the offset which was combined into the addressing mode), then this is
clearly unnecessary. This change adjusts these other offsets to use the
new incremented address.
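As a hedged illustration (hypothetical C++ source, not from the patch):

  // Before: 'a' becomes a pre-increment load (base += 16), while 'b' is
  // still addressed off a saved copy of the original base register.
  // After: 'b' is rewritten as offset 8 from the updated base, so the
  // copy of the original base is no longer needed.
  long f(long *p) {
    long a = p[2];
    long b = p[3];
    return a + b;
  }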
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174746 91177308-0d34-0410-b5e6-96231b3b80d8
Aside from the question of whether we report a warning or an error when we
can't satisfy a requested stack object alignment, the current implementation
of this is not good. We're not providing any source location in the diagnostics
and the current warning is not connected to any warning group so you can't
control it. We could improve the source location somewhat, but we can do a
much better job if this check is implemented in the front-end, so let's do that
instead. <rdar://problem/13127907>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174741 91177308-0d34-0410-b5e6-96231b3b80d8
Adds a function to target transform info to query for the cost of address
computation. The cost model analysis pass now also queries this interface.
The code in LoopVectorize adds the cost of address computation as part of the
memory instruction cost calculation. Only there do we know whether the
instruction will be scalarized or not.
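A minimal sketch of the query, assuming the interface shape added here:

  // Illustrative: the memory cost now includes the address computation.
  unsigned AddrCost = TTI->getAddressComputationCost(ValTy);
  unsigned MemCost  = TTI->getMemoryOpCost(Instruction::Load, ValTy,
                                           Alignment, AddressSpace);
  unsigned Cost = AddrCost + MemCost;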
Increase the penalty for inserting into D registers on Swift. This becomes
necessary because we now always assume that address computation has a cost,
and three is closer to the architecture's actual cost.
radar://13097204
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174713 91177308-0d34-0410-b5e6-96231b3b80d8
syms before constructing the compile units so we're not emitting
section references to sections that aren't present yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174663 91177308-0d34-0410-b5e6-96231b3b80d8
Failure: undefined symbol 'Lline_table_start0'.
Root cause: we use a symbol subtraction to calculate at_stmt_list, but
the line table entries are not dumped in the assembly.
Fix: use zero instead of a symbol subtraction for Compile Unit 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174479 91177308-0d34-0410-b5e6-96231b3b80d8
We generate one line table for each compilation unit in the object file.
Reviewed by Eric and Kevin.
rdar://problem/13067005
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174445 91177308-0d34-0410-b5e6-96231b3b80d8
base point of a load, and the overall alignment of the load. This caused infinite loops in DAG combine with the
original application of this patch.
ORIGINAL COMMIT LOG:
When the target-independent DAGCombiner inferred a higher alignment for a load,
it would replace the load with one with the higher alignment. However, it did
not place the new load in the worklist, which prevented later DAG combines in
the same phase (for example, target-specific combines) from ever seeing it.
This patch corrects that oversight, and updates some tests whose output changed
due to slightly different DAGCombine outputs.
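The gist of the fix, sketched in DAGCombiner style (names assumed):

  // After rebuilding the load with the inferred alignment (NewLoad),
  // make sure later combines get to see it:
  AddToWorkList(NewLoad.getNode());   // this call was missing
  return CombineTo(N, NewLoad);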
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174431 91177308-0d34-0410-b5e6-96231b3b80d8
All targets are now adding return value registers as implicit uses on
return instructions, and there is no longer a need for the live out
lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174417 91177308-0d34-0410-b5e6-96231b3b80d8
Now that return value registers are return instruction uses, there is no
need for special treatment of return blocks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174416 91177308-0d34-0410-b5e6-96231b3b80d8
It caused hangs when compiling clang/lib/Parse/ParseDecl.cpp and clang/lib/Driver/Tools.cpp in stage2 on some hosts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174374 91177308-0d34-0410-b5e6-96231b3b80d8
it would replace the load with one with the higher alignment. However, it did
not place the new load in the worklist, which prevented later DAG combines in
the same phase (for example, target-specific combines) from ever seeing it.
This patch corrects that oversight, and updates some tests whose output changed
due to slightly different DAGCombine outputs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174343 91177308-0d34-0410-b5e6-96231b3b80d8
Per discussion in rdar://13127907, we should emit a hard error only if
people write code where the requested alignment is larger than achievable
and the code assumes the low bits are zero. A warning should be good enough
when we are not sure whether the source code assumes the low bits are zero.
rdar://13127907
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174336 91177308-0d34-0410-b5e6-96231b3b80d8
This required disabling a PowerPC optimization that did the following:
input:
x = BUILD_VECTOR <i32 16, i32 16, i32 16, i32 16>
lowered to:
tmp = BUILD_VECTOR <i32 8, i32 8, i32 8, i32 8>
x = ADD tmp, tmp
The add now gets folded immediately and we're back at the BUILD_VECTOR we
started from. I don't see a way to fix this currently, so I left it disabled
for now.
Fix some trivially foldable X86 tests too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174325 91177308-0d34-0410-b5e6-96231b3b80d8
We used to create children DIEs for a scope, then check whether ScopeDIE is
null. If ScopeDIE is null, the children DIEs will be dangling. Other DIEs can
link to those dangling DIEs, which are not emitted at all, causing a DWARF error.
The current test case is 4k lines, from MultiSource/BenchMark/McCat/09-vor.
rdar://problem/13071959
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174084 91177308-0d34-0410-b5e6-96231b3b80d8
conditions are met:
1. They share the same operand and are in the same BB.
2. Both outputs are used.
3. The target has a native instruction that maps to ISD::FSINCOS node or
the target provides a sincos library call.
Implemented the generic optimization in SDISel and enabled it for
Mac OS X. Also added an additional optimization for x86_64 Mac OS X by
using an alternative entry point, __sincos_stret, which returns the two
results in xmm0 / xmm1.
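For instance, a pair like this (hypothetical user code) satisfies all three
conditions:

  #include <cmath>
  void polar(double a, double &s, double &c) {
    s = std::sin(a);  // same operand, same basic block,
    c = std::cos(a);  // and both results are used
  }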
rdar://13087969
PR13204
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173755 91177308-0d34-0410-b5e6-96231b3b80d8
The common code in the post-RA scheduler to break anti-dependencies on the
critical path contained a flaw. In the reported case, an anti-dependency
between the overlapping registers %X4 and %R4 exists:
%X29<def> = OR8 %X4, %X4
%R4<def>, %X3<def,dead,tied3> = LBZU 1, %X3<kill,tied1>
The unpatched code breaks the dependency by replacing %R4 and its uses
with %R3, the first register on the available list. However, %R3 and
%X3 overlap, so this creates two overlapping definitions on the same
instruction.
The fix is straightforward, preventing selection of a register that
overlaps any other defined register on the same instruction.
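Sketched, with TargetRegisterInfo::regsOverlap as the existing helper and
the surrounding names assumed:

  // Reject a candidate that overlaps any register defined by the same
  // instruction; otherwise we'd create two overlapping definitions.
  bool Overlaps = false;
  for (unsigned DefReg : InstrDefs)       // defs collected from the MI
    if (TRI->regsOverlap(NewReg, DefReg))
      Overlaps = true;
  if (Overlaps)
    continue;                             // try the next register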
The test case is reduced from the bug report, and verifies that we no
longer produce "lbzu 3, 1(3)" when breaking this anti-dependency.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173706 91177308-0d34-0410-b5e6-96231b3b80d8
Fix that by adding a cast to the shift expander. This came up with vector shifts
on SSE-less X86 CPUs.
<2 x i64> = shl <2 x i64> <2 x i64>
-> i64,i64 = shl i64 i64; shl i64 i64
-> i32,i32,i32,i32 = shl_parts i32 i32 i64; shl_parts i32 i32 i64
Now we cast the last two i64s to the right type. Fixes the crash in PR14668.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173615 91177308-0d34-0410-b5e6-96231b3b80d8
with an initial number of elements, instead of DenseMap, which has
zero initial elements, in order to avoid the copying of elements
when the size changes and to avoid allocating space every time
LegalizeTypes is run. This patch will not affect the memory footprint,
because DenseMap already grows to 64 buckets as soon as the first
element is added.
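The flavor of the change, with illustrative types (not the actual patch):

  // Before: starts with zero buckets and reallocates as it grows.
  llvm::DenseMap<SDValue, SDValue> Old;
  // After: inline storage sized up front for the common case.
  llvm::SmallDenseMap<SDValue, SDValue, 16> New;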
Patch by Wan Xiaofei.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173448 91177308-0d34-0410-b5e6-96231b3b80d8
Maintain separate per-node and per-tree book-keeping.
Track all instructions above a DAG node including nested subtrees.
Separately track instructions within a subtree.
Record subtree parents.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173426 91177308-0d34-0410-b5e6-96231b3b80d8
Allow the strategy to select SchedDFS. Allow the results of SchedDFS
to affect initialization of the scheduler state.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173425 91177308-0d34-0410-b5e6-96231b3b80d8
This is mostly refactoring, along with adding an instruction count
within the subtrees and ensuring we only look at data edges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173420 91177308-0d34-0410-b5e6-96231b3b80d8
For sanity, create a root when NumDataSuccs >= 4. Splitting large
subtrees will no longer be detrimental after my next checkin to handle
nested trees. A magic number of 4 is fine because single subtrees
seldom rejoin more than this. It makes subtrees easier to visualize
and heuristics more sane.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173399 91177308-0d34-0410-b5e6-96231b3b80d8
Allow schedulers to order DAG edges by critical path. This makes
DFS-based heuristics more stable and effective.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173317 91177308-0d34-0410-b5e6-96231b3b80d8
The requirements of the strong heuristic are (illustrated by the sketch
after this list):
* A Protector is required for functions which contain an array, regardless of
type or length.
* A Protector is required for functions which contain a structure/union which
contains an array, regardless of type or length. Note, there is no limit to
the depth of nesting.
* A protector is required when the address of a local variable (i.e., stack-
based variable) is exposed, e.g., through a local whose address is taken as
part of the RHS of an assignment or as part of a function argument.
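A hedged illustration of each trigger (hypothetical examples, not from the
test suite):

  extern void use(void *);
  void f() { char buf[4]; use(buf); }   // local array => protector
  struct S { int n; char a[2]; };
  void g() { S s; use(&s); }            // struct containing an array => protector
  void h() { int x; use(&x); }          // address of a local escapes => protector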
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173231 91177308-0d34-0410-b5e6-96231b3b80d8
SSPStrong applies a heuristic to insert stack protectors in these situations:
* A Protector is required for functions which contain an array, regardless of
type or length.
* A Protector is required for functions which contain a structure/union which
contains an array, regardless of type or length. Note, there is no limit to
the depth of nesting.
* A protector is required when the address of a local variable (i.e., stack-
based variable) is exposed, e.g., through a local whose address is taken as
part of the RHS of an assignment or as part of a function argument.
This patch implements the SSPStrong attribute to be equivalent to
SSPRequired. This will change in a subsequent patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173230 91177308-0d34-0410-b5e6-96231b3b80d8
Previously we tried to infer it from the bit width, with an added
IsIEEE argument for the PPC/IEEE 128-bit case, which had a default
value. This default value allowed bugs to creep in, where it was
inappropriate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173138 91177308-0d34-0410-b5e6-96231b3b80d8
A SparseMultiSet adds multiset behavior to SparseSet, while retaining
SparseSet's desirable properties. Essentially, SparseMultiSet provides
multiset behavior by storing its dense data in doubly linked lists that are
inlined into the dense vector. This allows it to provide good data locality
as well as vector-like constant-time clear() and fast constant-time find(),
insert(), and erase(). It also allows SparseMultiSet to have a builtin
recycler rather than keeping SparseSet's behavior of always swapping upon
removal, which allows it to preserve more iterators. It's often a better
alternative to a SparseSet of a growable container or vector-of-vectors.
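A minimal usage sketch (method names as in llvm/ADT/SparseMultiSet.h;
details assumed):

  #include "llvm/ADT/SparseMultiSet.h"
  llvm::SparseMultiSet<unsigned> Set;
  Set.setUniverse(128);     // keys must come from a known, bounded universe
  Set.insert(5);
  Set.insert(5);            // duplicate keys allowed, unlike SparseSet
  if (Set.contains(5))
    Set.erase(Set.find(5)); // erases one occurrence, preserving the other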
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173064 91177308-0d34-0410-b5e6-96231b3b80d8
The optimization handles esoteric cases but adds a lot of complexity both to the X86 backend and to other backends.
This optimization disables an important canonicalization of chains of SEXT nodes and makes SEXT and ZEXT asymmetrical.
Disabling the canonicalization of consecutive SEXT nodes into a single node disables other DAG optimizations that assume
that there is only one SEXT node. The AVX mask optimization is one example. Additionally, this optimization does not update the cost model.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172968 91177308-0d34-0410-b5e6-96231b3b80d8
Further encapsulation of the Attribute object. Don't allow direct access to the
Attribute object as an aggregate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172853 91177308-0d34-0410-b5e6-96231b3b80d8
changing both the dwo_name string to be correct and the type of
the statement list.
Testcases all around.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172699 91177308-0d34-0410-b5e6-96231b3b80d8
Move the early if-conversion pass into this group.
ILP optimizations usually need to find the right balance between
register pressure and ILP using the MachineTraceMetrics analysis to
identify critical paths and estimate other costs. Such passes should run
together so they can share dominator tree and loop info analyses.
Besides if-conversion, future passes to run here could include
expression height reduction and ARM's MLxExpansion pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172687 91177308-0d34-0410-b5e6-96231b3b80d8
of a class. Emit static data member declarations and definitions
correctly.
Part of PR14471.
Patch by Paul Robinson!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172590 91177308-0d34-0410-b5e6-96231b3b80d8
using the DW_FORM_GNU_addr_index and a separate .debug_addr section which
stays in the executable and is fully linked.
Sneak in two other small changes:
a) Print out the debug_str_offsets.dwo section.
b) Change the form we're expecting the entries in the debug_str_offsets.dwo
section to take from ULEB128 to U32.
Add tests for all of this in the fission-cu.ll test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172578 91177308-0d34-0410-b5e6-96231b3b80d8
The included test case is derived from one of the GCC compatibility tests.
The problem arises after the selection DAG has been converted to type-legalized
form. The combiner first sees a 64-bit load that can be converted into a
pre-increment form. The original load feeds into a SRL that isolates the
upper 32 bits of the loaded doubleword. This looks like an opportunity for
DAGCombiner::ReduceLoadWidth() to replace the 64-bit load with a 32-bit load.
However, this transformation is not valid, as the replacement load is not
a pre-increment load. The pre-increment load produces an extra result,
which feeds a subsequent add instruction. The replacement load only has
one result value, and this value is propagated to all uses of the pre-
increment load, including the add. Because the add is looking for the
second result value as its operand, it ends up attempting to add a constant
to a token chain, resulting in a crash.
So the patch simply disables this transformation for any load with more than
two result values.
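The guard is presumably of this shape (a sketch; the actual check may
differ):

  // A plain load has two results (value, chain); a pre/post-increment
  // load has a third (the updated base), which we cannot replicate here.
  LoadSDNode *LN0 = cast<LoadSDNode>(N0);
  if (LN0->getNumValues() > 2)
    return SDValue();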
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172480 91177308-0d34-0410-b5e6-96231b3b80d8
When tryEvict() is looking for a cheaper register in the allocation
order, skip the tail of too expensive registers when possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172281 91177308-0d34-0410-b5e6-96231b3b80d8
Remember the minimum cost of the registers in an allocation order and
the number of registers at the end of the allocation order that have the
same cost per use.
This information can be used to limit the search space for
RAGreedy::tryEvict() when looking for a cheaper register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172280 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes some of the cycles between libCodeGen and libSelectionDAG. It's still
a complete mess, but as long as the edges consist of virtual calls it doesn't
cause breakage. BasicTTI did static calls and thus broke some build
configurations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172246 91177308-0d34-0410-b5e6-96231b3b80d8
the target if it supports the different CAST types. We didn't do this
on X86 because of the different register sizes and types, but on ARM
this makes sense.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172245 91177308-0d34-0410-b5e6-96231b3b80d8
- Recognize the string "{memory}" during MI generation.
- Mark the instruction as mayLoad/mayStore when there is a memory clobber
  constraint.
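User code that produces such a constraint, for reference:

  // The "memory" clobber tells the compiler the asm may read or write
  // arbitrary memory; the resulting MI is flagged mayLoad/mayStore.
  void barrier() {
    asm volatile("" ::: "memory");
  }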
PR14859.
Patch by Krzysztof Parzyszek
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172228 91177308-0d34-0410-b5e6-96231b3b80d8
When calling hasProperty() on an instruction inside a bundle, it should
always behave as if IgnoreBundle was passed, and just return properties
for the current instruction.
Only attempt to aggregate bundle properties when asked about the bundle
header.
The assertion fires on existing ARM test cases without this fix.
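A sketch of the intended check (member names assumed from the tree):

  // An instruction inside a bundle answers for itself, exactly as if
  // IgnoreBundle had been passed; only the bundle header falls through
  // to the loop that aggregates over the whole bundle.
  if (Type == IgnoreBundle || !isBundled() || isBundledWithPred())
    return getDesc().getFlags() & (1ULL << MCFlag);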
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172082 91177308-0d34-0410-b5e6-96231b3b80d8
requirement when creating stack objects in MachineFrameInfo.
Add CreateStackObjectWithMinAlign to throw an error when the minimal alignment
can't be achieved and to clamp the alignment when the preferred alignment
can't be achieved. The same is true for CreateVariableSizedObject.
We will not emit an error in CreateSpillStackObject or CreateStackObject.
As long as callers of CreateStackObject do not assume the object will be
aligned at the requested alignment, we should not miscompile, since
later optimizations that look at the object's alignment will have the correct
information.
rdar://12713765
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172027 91177308-0d34-0410-b5e6-96231b3b80d8